Spark-shell Hive context


In Durga sir’s video, I see that he uses sqlContext.sql to run Hive queries directly, whereas in some other videos I have seen people import org.apache.spark.sql.hive.HiveContext and create an instance of HiveContext to run Hive queries.

Can someone please explain the difference between these two approaches, and which one is preferred?
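
For context, here is roughly what I mean by the two approaches (a minimal sketch from spark-shell on Spark 1.6.x, where sc and sqlContext are pre-created; the hiveContext variable name is just my own):

```scala
// Approach 1: run a Hive query through the pre-created sqlContext
sqlContext.sql("SHOW DATABASES").show()

// Approach 2: import HiveContext and create an instance explicitly
// from the pre-created SparkContext (sc), then query through it
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)
hiveContext.sql("SHOW DATABASES").show()
```

Both seem to produce the same result on my setup, which is what prompted the question.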

