Sqoop issue with --map-column-java for Timestamp


sqoop import --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username cloudera --password-file /user/cloudera/sqoop_mysql.pass --as-parquetfile --table orders --target-dir=/user/cloudera/import/retail_stage/parq/orders_tst --map-column-java order_id=Integer,order_date=TIMESTAMP,order_customer_id=Integer,order_status=String --fields-terminated-by '|' --lines-terminated-by '\n' -m 1 --outdir parq_conf

It throws the below error:

Warning: /usr/lib/sqoop/…/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/01/22 03:36:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.12.0
18/01/22 03:36:04 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/01/22 03:36:04 INFO tool.CodeGenTool: Beginning code generation
18/01/22 03:36:04 INFO tool.CodeGenTool: Will generate java class as codegen_orders
18/01/22 03:36:04 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM orders AS t LIMIT 1
18/01/22 03:36:04 ERROR orm.ClassWriter: No ResultSet method for Java type TIMESTAMP
18/01/22 03:36:04 ERROR tool.ImportTool: Import failed: No ResultSet method for Java type TIMESTAMP



Is the order_date column in your MySQL table 'orders' of type 'TIMESTAMP'? Only then is it supported; otherwise use the respective type.
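To check the actual column type before adjusting the mapping, you can describe the table from the MySQL client. This is a sketch only; the host and credentials are assumed from the import command above, and it needs a live database to run:

```shell
# Show the column names and types of the orders table in retail_db.
# Host, user, and database are taken from the sqoop import command above.
mysql -h quickstart.cloudera -u cloudera -p retail_db \
  -e "DESCRIBE orders;"
```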


I don't feel this is the correct analysis, as I tried that option as well and I am still getting the same error. Instead, it seems to be something to do with the generated code that converts the data types while importing the data from the database to HDFS, like orm.ClassWriter.



You have used the wrong datatype 'TIMESTAMP'; instead of this, try 'java.sql.Timestamp'.
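For example, the original command would change like this (a sketch, not verified against a cluster; the delimiter options are dropped because they do not apply to Parquet output):

```shell
# Use the fully qualified Java class name, not the SQL type name:
# orm.ClassWriter resolves the mapped name to a ResultSet getter method,
# and java.sql.Timestamp has one while the literal "TIMESTAMP" does not.
sqoop import \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username cloudera \
  --password-file /user/cloudera/sqoop_mysql.pass \
  --as-parquetfile \
  --table orders \
  --target-dir /user/cloudera/import/retail_stage/parq/orders_tst \
  --map-column-java order_id=Integer,order_date=java.sql.Timestamp,order_customer_id=Integer,order_status=String \
  -m 1 \
  --outdir parq_conf
```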

If you read the errors carefully, there will be no wrong analysis.


Ravi - I am still getting the error; the documentation looks to be for Oracle import rather than MySQL. Could you please let me know if it works for you? If it worked while you were practicing, can you share the table details and the sqoop import statement you used?


OK, I will let you know, with the full import command.



You're importing data from a table to HDFS in Parquet format, so why is '--map-column-java' explicitly needed?
Any reason?

Because it doesn't make any difference here, you can just remove that parameter and try; it will work.

The main purpose of this parameter is to override the default datatype mapping in exceptional cases.


I am currently practicing, and even when I tried to import the table with timestamp as the datatype, unfortunately the .avsc schema for the Parquet file import shows the datatype as long instead of timestamp.

When I try to create a Hive table on top of it, it is not able to show the data.
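That long value is the timestamp encoded as epoch milliseconds, which is how Sqoop's Avro-based Parquet import represents date/timestamp columns, so the data is still there; it just needs converting. A quick sanity check from the shell (the sample value is hypothetical; GNU date assumed):

```shell
# Sqoop's Parquet import stores timestamps as epoch milliseconds in a long.
# Convert a sample value back to a readable UTC timestamp to verify the data.
epoch_ms=1374710400000   # hypothetical order_date value read from the file
date -u -d @"$((epoch_ms / 1000))" '+%F %T'
# prints 2013-07-25 00:00:00
```

In Hive, the same conversion can be done on the bigint column when querying, rather than declaring it as timestamp in the table definition.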