Pyspark2 - Overriding Default "Record" Delimiter

Hi,

This question is on Spark 2.x.

Could you please let me know how to override the default record delimiter of "\n" when using spark.read.csv? I understand that a similar topic exists with a resolution, but it is in Scala. Could you please help with PySpark?
