Hive table display issue in Scala

I am getting an error while querying a Hive table from the Spark shell in Scala. Please advise what the issue is.

scala> import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.HiveContext

scala> val sqlContext = new HiveContext(sc)
sqlContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.Hi

scala> val dep = sqlContext.sql("select * from order_items")
dep: org.apache.spark.sql.DataFrame = [order_item_id: int, order_item_order_id: int, order_item_product_id: int, order_item_quantity: tinyint, order_item_subtotal: double, order_item_product_price: double]
scala> dep.collect().foreach(println)
java.io.IOException: Not a file: hdfs://nn01.itversity.com:8020/apps/hive/warehouse/order_items/order_items
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:322)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
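
The `Not a file` exception is telling you that the table's HDFS location contains a subdirectory (note the doubled path `.../order_items/order_items`), which the old MapReduce `FileInputFormat` refuses to treat as input. This typically happens when a sqoop import drops its output directory inside an existing warehouse directory. One workaround, sketched here under the assumption of a Spark 1.x `HiveContext` session like the one above, is to make the input format descend into subdirectories:

```scala
// Spark 1.x shell: sc and the HiveContext (sqlContext) already exist.
// These are standard Hive/Hadoop properties that let table scans
// recurse into subdirectories under the table location instead of
// failing with "Not a file".
sqlContext.setConf("hive.mapred.supports.subdirectories", "true")
sqlContext.setConf("mapred.input.dir.recursive", "true")

val dep = sqlContext.sql("select * from order_items")
dep.show()
```

The cleaner long-term fix is to remove or re-import the nested directory so the data files sit directly under the table's location.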

Try doing a sqoop import-all-tables into your own database (don't use the default one).
Here I have imported the table order_items (with the default MySQL delimiters) and order_items_imported (with '|' as the field delimiter and '\n' as the line delimiter).
Always select the database before running a Spark SQL query, or qualify the table with its database name in the query itself.
Here are my results.
scala> val df = sqlContext.sql("select * from vish.order_items")
df: org.apache.spark.sql.DataFrame = [order_item_id: int, order_item_order_id: int, order_item_product_id: int, order_item_quantity: tinyint, order_item_subtotal: double, order_item_product_price: double]

scala> df.show()
+-------------+-------------------+---------------------+-------------------+-------------------+------------------------+
|order_item_id|order_item_order_id|order_item_product_id|order_item_quantity|order_item_subtotal|order_item_product_price|
+-------------+-------------------+---------------------+-------------------+-------------------+------------------------+
|            1|                  1|                  957|                  1|             299.98|                  299.98|
|            2|                  2|                 1073|                  1|             199.99|                  199.99|
|            3|                  2|                  502|                  5|              250.0|                    50.0|
|            4|                  2|                  403|                  1|             129.99|                  129.99|
|            5|                  4|                  897|                  2|              49.98|                   24.99|
|            6|                  4|                  365|                  5|             299.95|                   59.99|
|            7|                  4|                  502|                  3|              150.0|                    50.0|
|            8|                  4|                 1014|                  4|             199.92|                   49.98|
|            9|                  5|                  957|                  1|             299.98|                  299.98|
|           10|                  5|                  365|                  5|             299.95|                   59.99|
|           11|                  5|                 1014|                  2|              99.96|                   49.98|
|           12|                  5|                  957|                  1|             299.98|                  299.98|
|           13|                  5|                  403|                  1|             129.99|                  129.99|
|           14|                  7|                 1073|                  1|             199.99|                  199.99|
|           15|                  7|                  957|                  1|             299.98|                  299.98|
|           16|                  7|                  926|                  5|              79.95|                   15.99|
|           17|                  8|                  365|                  3|             179.97|                   59.99|
|           18|                  8|                  365|                  5|             299.95|                   59.99|
|           19|                  8|                 1014|                  4|             199.92|                   49.98|
|           20|                  8|                  502|                  1|               50.0|                    50.0|
+-------------+-------------------+---------------------+-------------------+-------------------+------------------------+
only showing top 20 rows

scala> val df = sqlContext.sql("select * from vish.order_items_imported")
df: org.apache.spark.sql.DataFrame = [order_item_id: int, order_item_order_id: int, order_item_product_id: int, order_item_quantity: tinyint, order_item_subtotal: double, order_item_product_price: double]

scala> df.show()
+-------------+-------------------+---------------------+-------------------+-------------------+------------------------+
|order_item_id|order_item_order_id|order_item_product_id|order_item_quantity|order_item_subtotal|order_item_product_price|
+-------------+-------------------+---------------------+-------------------+-------------------+------------------------+
|            1|                  1|                  957|                  1|             299.98|                  299.98|
|            2|                  2|                 1073|                  1|             199.99|                  199.99|
|            3|                  2|                  502|                  5|              250.0|                    50.0|
|            4|                  2|                  403|                  1|             129.99|                  129.99|
|            5|                  4|                  897|                  2|              49.98|                   24.99|
|            6|                  4|                  365|                  5|             299.95|                   59.99|
|            7|                  4|                  502|                  3|              150.0|                    50.0|
|            8|                  4|                 1014|                  4|             199.92|                   49.98|
|            9|                  5|                  957|                  1|             299.98|                  299.98|
|           10|                  5|                  365|                  5|             299.95|                   59.99|
|           11|                  5|                 1014|                  2|              99.96|                   49.98|
|           12|                  5|                  957|                  1|             299.98|                  299.98|
|           13|                  5|                  403|                  1|             129.99|                  129.99|
|           14|                  7|                 1073|                  1|             199.99|                  199.99|
|           15|                  7|                  957|                  1|             299.98|                  299.98|
|           16|                  7|                  926|                  5|              79.95|                   15.99|
|           17|                  8|                  365|                  3|             179.97|                   59.99|
|           18|                  8|                  365|                  5|             299.95|                   59.99|
|           19|                  8|                 1014|                  4|             199.92|                   49.98|
|           20|                  8|                  502|                  1|               50.0|                    50.0|
+-------------+-------------------+---------------------+-------------------+-------------------+------------------------+
only showing top 20 rows
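
The other option mentioned above, selecting the database once instead of qualifying every table name, would look like this in the same Spark 1.x shell session (using the example database `vish` from the transcripts):

```scala
// Switch the session's current database once...
sqlContext.sql("use vish")
// ...after which unqualified table names resolve against vish.
val df = sqlContext.sql("select * from order_items_imported")
df.show()
```

Both approaches query the same table; qualifying with the database name is just safer when a script touches tables from several databases.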