Error while exporting data from Hive to MySQL


#1

I am not able to export data from Hive to MySQL. Below is the export command that I used:
sqoop export \
  --connect jdbc:mysql://ms.itversity.com:3306/retail_export \
  --username retail_user \
  --password itversity \
  --export-dir /apps/hive/warehouse/sangurmath33_sqoop_import.db/daily_revenue \
  --table watson_123 \
  --input-fields-terminated-by "\001"
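(Note: \001 is Hive's default field delimiter.) As a quick sanity check before looking at the job itself, the contents of the export directory can be inspected; this is just an illustrative command using the path from above, assuming read access to it:

hdfs dfs -cat /apps/hive/warehouse/sangurmath33_sqoop_import.db/daily_revenue/* | head -5

Since \001 is non-printing, fields may appear run together in the terminal output.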

Below is the error message:
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/09/19 07:54:01 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
18/09/19 07:54:01 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/09/19 07:54:01 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/09/19 07:54:01 INFO tool.CodeGenTool: Beginning code generation
18/09/19 07:54:02 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM watson_123 AS t LIMIT 1
18/09/19 07:54:02 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM watson_123 AS t LIMIT 1
18/09/19 07:54:02 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.5.0-292/hadoop-mapreduce
Note: /tmp/sqoop-sangurmath33/compile/206d9ab394a645538273faa900de25f1/watson_123.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/09/19 07:54:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-sangurmath33/compile/206d9ab394a645538273faa900de25f1/watson_123.jar
18/09/19 07:54:04 INFO mapreduce.ExportJobBase: Data will be staged in the table: watson_stage
18/09/19 07:54:04 INFO mapreduce.ExportJobBase: Beginning export of watson_123
18/09/19 07:54:07 INFO client.RMProxy: Connecting to ResourceManager at rm01.itversity.com/172.16.1.106:8050
18/09/19 07:54:07 INFO client.AHSProxy: Connecting to Application History server at rm01.itversity.com/172.16.1.106:10200
18/09/19 07:54:15 INFO input.FileInputFormat: Total input paths to process : 1
18/09/19 07:54:15 INFO input.FileInputFormat: Total input paths to process : 1
18/09/19 07:54:16 INFO mapreduce.JobSubmitter: number of splits:4
18/09/19 07:54:16 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1533622723243_22687
18/09/19 07:54:17 INFO impl.YarnClientImpl: Submitted application application_1533622723243_22687
18/09/19 07:54:17 INFO mapreduce.Job: The url to track the job: http://rm01.itversity.com:19288/proxy/application_1533622723243_22687/
18/09/19 07:54:17 INFO mapreduce.Job: Running job: job_1533622723243_22687
18/09/19 07:54:25 INFO mapreduce.Job: Job job_1533622723243_22687 running in uber mode : false
18/09/19 07:54:25 INFO mapreduce.Job: map 0% reduce 0%
18/09/19 07:54:33 INFO mapreduce.Job: map 100% reduce 0%
18/09/19 07:54:34 INFO mapreduce.Job: Job job_1533622723243_22687 failed with state FAILED due to: Task failed task_1533622723243_22687_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

18/09/19 07:54:34 INFO mapreduce.Job: Counters: 12
    Job Counters
        Failed map tasks=1
        Killed map tasks=3
        Launched map tasks=4
        Data-local map tasks=4
        Total time spent by all maps in occupied slots (ms)=46948
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=23474
        Total vcore-milliseconds taken by all map tasks=23474
        Total megabyte-milliseconds taken by all map tasks=48074752
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
18/09/19 07:54:34 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/09/19 07:54:34 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 27.5659 seconds (0 bytes/sec)
18/09/19 07:54:34 INFO mapreduce.ExportJobBase: Exported 0 records.
18/09/19 07:54:34 ERROR mapreduce.ExportJobBase: Export job failed!
18/09/19 07:54:34 ERROR tool.ExportTool: Error during export: Export job failed!
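The console output above only shows the generic "Export job failed!"; the underlying exception is in the failed map task's log. Assuming log aggregation is enabled on the cluster, it can be pulled from YARN with the application id printed earlier:

yarn logs -applicationId application_1533622723243_22687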

Thank you


#2

Hi @RITESH_PRABHAKAR_SAN, create the table with the query below, then run the sqoop export command again.

create table watson_123(date varchar(50), daily_revenue float);

You should create the MySQL table with the same schema (column names and types) as the Hive table.
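A quick way to verify the two schemas match, assuming the Hive database and table implied by --export-dir above, is to describe both tables and compare:

hive -e "DESCRIBE sangurmath33_sqoop_import.daily_revenue;"
mysql -h ms.itversity.com -u retail_user -p -e "DESCRIBE retail_export.watson_123;"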



#3

I have used the same query to create the table in MySQL, and now I am getting a different error message:
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000eab00000, 350748672, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 350748672 bytes for committing reserved memory.
An error report file with more information is saved as:
/home/sangurmath33/hs_err_pid15199.log
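This second error is not from the export job itself; the Sqoop client JVM on the gateway node failed to allocate its heap (~350 MB). A common workaround, an assumption here rather than something confirmed in this thread, is to cap the client heap and re-run the same export command (this works when Sqoop is launched through the stock hadoop wrapper script, which honors HADOOP_CLIENT_OPTS):

export HADOOP_CLIENT_OPTS="-Xmx512m"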


#4

@RITESH_PRABHAKAR_SAN The issue is resolved. You can now launch Hive and run your queries.


#5

Thank you. The export is also working fine.

