Read rows from a Hive table and write them to a file in Spark-Scala

apache-spark
scala

#1

Hi Team, my requirement is to:

  • Read all the rows from a Hive table in Scala-Spark code, then
  • Write all the data row by row to create a file.

Please share pointers on what I need to do here.


#2

You can use HiveContext to initialise a DataFrame from the Hive table and then write the DataFrame to a CSV or JSON file using the DataFrame write API.
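A minimal sketch of that approach, assuming Spark 2.x where SparkSession with Hive support enabled takes the place of HiveContext; the database/table name and output path below are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object HiveTableToFile {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("HiveTableToFile")
      .enableHiveSupport()   // lets spark.sql see Hive tables
      .getOrCreate()

    // Read all rows from the Hive table into a DataFrame
    val df = spark.sql("SELECT * FROM my_db.my_table")   // placeholder table

    // Write the DataFrame out as CSV (a directory of part files)
    df.write
      .mode("overwrite")
      .option("header", "true")
      .csv("/tmp/my_table_export")                        // placeholder path

    spark.stop()
  }
}
```

If you need a single output file rather than multiple part files, you can call `df.coalesce(1)` before `.write`, at the cost of funnelling the write through one task. On Spark 1.x you would build a `HiveContext` from the SparkContext and call `hiveContext.table(...)` or `hiveContext.sql(...)` instead.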

You can find out more about DataFrames here: