Question on Hive Metastore



Data is available at an HDFS path.
The requirement is to store this data in Parquet format with Snappy compression so that users can query it in the Hive `test` database. Table name: result.

The solution I can think of is:
In spark-shell,
create a DataFrame for the data available in HDFS, then:
sqlContext.sql("create database test")
sqlContext.sql("use test")
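For reference, the full sequence I have in mind would be something like the sketch below (Spark 1.x HiveContext-style API in spark-shell; the HDFS path and the input format are placeholders, not the actual ones):

```scala
// Read the source data into a DataFrame.
// The path and format are assumptions -- adjust to the real input.
val df = sqlContext.read.text("hdfs:///path/to/input")

// Create the target Hive database and switch to it.
sqlContext.sql("create database if not exists test")
sqlContext.sql("use test")

// Ask Spark to write Parquet files with Snappy compression.
sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")

// Persist the DataFrame as the Hive-managed table test.result in Parquet format.
df.write.format("parquet").saveAsTable("test.result")
```

This assumes spark-shell was started with Hive support, so that sqlContext is a HiveContext and saveAsTable registers the table in the Hive Metastore.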

Is this the correct solution?
Could someone help me?
