PySpark 2 - Overriding the Default "Record" Delimiter


This question is about Spark 2.x.

Could you please let me know how to override the default record delimiter of "\n" when reading text files? I understand that a similar topic exists with a resolution, but it is in Scala. Could you please help with PySpark?
