Error running HiveContext in PySpark

Cloudera Quickstart VM 5.10
Spark version 1.6.0
Copied hive-site.xml to the Spark directory

>>> from pyspark.sql import HiveContext
>>> sqlContext = HiveContext(sc)
>>> cnt = sqlContext.sql("select count(1) from customers")

When I try to read Hive data through the PySpark HiveContext, I get the warnings below and the query fails.

17/05/05 15:05:01 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.1.0
17/05/05 15:05:01 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
17/05/05 15:05:03 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
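
The first two warnings usually mean the HiveContext could not find an existing metastore schema and recorded a new one, i.e. Spark is most likely talking to a freshly created local metastore instead of the Quickstart VM's Hive metastore, which is why the default database looks empty. As a quick check from the same shell (a sketch, reusing the sqlContext above; on CDH the value should normally be a thrift:// URI on port 9083):

>>> # An empty or undefined value suggests an embedded local metastore,
>>> # i.e. Spark is not reading the copied hive-site.xml.
>>> sqlContext.sql("SET hive.metastore.uris").show(truncate=False)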

I am following this video from ITVERSITY: https://www.youtube.com/watch?v=XxRh0X80E78&index=40&list=PLf0swTFhTI8rJvGpOp-LujOcpk-Rlz-yE

Please help

Not sure what's happening, but did you try launching Hive and running the same SQL there? Also check whether the default DB has the table you are looking for.

Hi Rahul - I am able to access the tables from Hive, and the tables are present in the Hive default database.
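
If the Hive CLI sees the tables but PySpark does not, the HiveContext is almost certainly connected to a different (freshly created) metastore, which matches the warnings above. A quick way to compare what PySpark actually sees (a sketch, reusing the sqlContext from the session above):

>>> sqlContext.sql("show databases").show()  # should list the same databases as the Hive CLI
>>> sqlContext.tableNames("default")         # should include 'customers'

If 'customers' is missing here, Spark is not picking up hive-site.xml; on the Quickstart VM it generally needs to be in Spark's conf directory (e.g. /etc/spark/conf) rather than just somewhere under the Spark installation.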