Unable to access hive tables through spark from jupyter note book

Below code used:

from pyspark.sql import SparkSession
spark = SparkSession.builder. \
    master('local'). \
    config('hive.metastore.uris', 'thrift://ms.itversity.com:9083'). \
    config('spark.sql.warehouse.dir', 'hdfs://nn01.itversity.com:8020/apps/hive/warehouse'). \
    enableHiveSupport(). \
    appName('Hive-Spark-Integration'). \
    getOrCreate()
Please let me know if the configuration is incorrect. (Note: the property name is `hive.metastore.uris`, not `hive.metastore.urls`, and the statement needs line continuations or parentheses to span multiple lines.)

Hi @Anil_Reddy,

In our lab, Spark is already integrated with Hive, so there is no need to pass those config parameters.
To read data from a Hive table, the `spark` object has an API called `table`; using it, we can load the table's data into a DataFrame.

Use the code below for reference:

from pyspark.sql import SparkSession
spark = SparkSession.\
    builder.\
    master("local").\
    appName("Getting Started with hive tables").\
    config("spark.ui.port", "0").\
    getOrCreate()

orders = spark.read.table('database_name.table_name')

There is another approach using `spark.sql`:

orders = spark.sql('select * from database_name.table_name')

Both `spark.read.table` and `spark.sql` return a DataFrame.