Sqoop hive-import as-parquetfile error


#1

I tried importing a table as a Parquet file but got the error below. Also, is a Parquet file always stored with Snappy compression in Hive, or can it be stored with a different compression codec?

Below are the command I used and the resulting error. Can you help me find the mistake I made?

sqoop import \
  --connect jdbc:mysql://ms.itversity.com/retail_db \
  --username retail_user \
  --password itversity \
  --table order_items \
  --as-parquetfile \
  --where 'order_item_product_price > 500' \
  -m 1 \
  --warehouse-dir '/user/subrudata/sum/try1/p1q3' \
  --hive-import \
  --hive-database subrudata1 \
  --create-hive-table \
  --hive-overwrite
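On the compression part of the question: a sketch of how a codec is normally requested with Sqoop's generic compression flags. Note this is an illustration, not a confirmed fix for the error below; with `--as-parquetfile` the Parquet files are written through the Kite SDK, which defaults to Snappy, and whether an alternative codec is actually honored depends on the Sqoop/Kite versions on the cluster.

```shell
# Sketch: requesting an explicit codec with Sqoop's generic compression
# flags (-z / --compression-codec). For text or Avro imports this codec
# is applied directly; for --as-parquetfile the Kite SDK writer defaults
# to Snappy, so verify the resulting files on your Sqoop/Kite versions.
sqoop import \
  --connect jdbc:mysql://ms.itversity.com/retail_db \
  --username retail_user \
  --password itversity \
  --table order_items \
  --as-parquetfile \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.GzipCodec \
  --where 'order_item_product_price > 500' \
  -m 1 \
  --warehouse-dir '/user/subrudata/sum/try1/p1q3' \
  --hive-import \
  --hive-database subrudata1 \
  --create-hive-table \
  --hive-overwrite
```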

Error:

The url to track the job: http://rm01.itversity.com:19288/proxy/application_1528589352821_31862/
18/07/25 13:41:05 INFO mapreduce.Job: Running job: job_1528589352821_31862
18/07/25 13:41:14 INFO mapreduce.Job: Job job_1528589352821_31862 running in uber mode : false
18/07/25 13:41:15 INFO mapreduce.Job: map 0% reduce 0%
18/07/25 13:41:21 INFO mapreduce.Job: Task Id : attempt_1528589352821_31862_m_000000_0, Status : FAILED
Error: java.lang.ClassNotFoundException: org.kitesdk.compat.Hadoop$Configuration
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.kitesdk.data.spi.hive.HiveUtils.addResource(HiveUtils.java:482)
at org.kitesdk.data.spi.hive.Loader.newHiveConf(Loader.java:161)
at org.kitesdk.data.spi.hive.Loader.access$100(Loader.java:41)
at org.kitesdk.data.spi.hive.Loader$ManagedBuilder.getFromOptions(Loader.java:104)
at org.kitesdk.data.spi.hive.Loader$ManagedBuilder.getFromOptions(Loader.java:99)
at org.kitesdk.data.spi.Registration.lookupDatasetUri(Registration.java:111)
at org.kitesdk.data.Datasets.load(Datasets.java:103)
at org.kitesdk.data.Datasets.load(Datasets.java:165)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.load(DatasetKeyOutputFormat.java:510)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.getOutputCommitter(DatasetKeyOutputFormat.java:473)
at org.apache.hadoop.mapred.Task.initialize(Task.java:593)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:324)