Sqoop import-all-tables connection issues

Hi, I have executed the command below:

sqoop import-all-tables \
  -m 12 \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username=retail_dba \
  --password=cloudera \
  --warehouse-dir=/user/cloudera/sqoop_import
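
As an aside, Sqoop itself warns (in the log below) that passing --password on the command line is insecure. A sketch of two safer variants; the password-file path here is only an example:

```shell
# Variant 1: prompt interactively for the password instead of
# putting it in the command line (and in your shell history)
sqoop import-all-tables -m 12 \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username retail_dba \
  -P \
  --warehouse-dir /user/cloudera/sqoop_import

# Variant 2: read the password from a file on HDFS (example path);
# keep the file's permissions restricted to your user
echo -n "cloudera" > sqoop.password
hdfs dfs -put sqoop.password /user/cloudera/sqoop.password
sqoop import-all-tables -m 12 \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username retail_dba \
  --password-file /user/cloudera/sqoop.password \
  --warehouse-dir /user/cloudera/sqoop_import
```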

It is giving me the error below:

Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/03/07 11:07:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.0
17/03/07 11:07:08 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/07 11:07:08 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/03/07 11:07:08 INFO tool.CodeGenTool: Beginning code generation
17/03/07 11:07:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM categories AS t LIMIT 1
17/03/07 11:07:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM categories AS t LIMIT 1
17/03/07 11:07:08 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/1f51e76e7d56fd515943378947ff77ae/categories.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/03/07 11:07:12 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/1f51e76e7d56fd515943378947ff77ae/categories.jar
17/03/07 11:07:12 WARN manager.MySQLManager: It looks like you are importing from mysql.
17/03/07 11:07:12 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
17/03/07 11:07:12 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
17/03/07 11:07:12 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
17/03/07 11:07:12 INFO mapreduce.ImportJobBase: Beginning import of categories
17/03/07 11:07:12 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
17/03/07 11:07:12 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/03/07 11:07:14 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/03/07 11:07:14 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/03/07 11:07:15 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
[... the same retry message repeats for attempts 1 through 9, then the full 10-retry cycle repeats once more, ending at 17/03/07 11:08:04 ...]

Looks like the YARN Resource Manager process is not running. Make sure all the services are running.
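
On the Cloudera QuickStart VM the Hadoop daemons run as system services, so one way to check and start the ResourceManager is via the init scripts. This is a sketch; the service names assume the CDH packaged install:

```shell
# Check whether the ResourceManager is up
# (it listens on port 8032 for job submission, which is what the log is retrying)
sudo service hadoop-yarn-resourcemanager status

# Start the ResourceManager and NodeManager if they are stopped
sudo service hadoop-yarn-resourcemanager start
sudo service hadoop-yarn-nodemanager start
```

If you manage the cluster through Cloudera Manager instead, start or restart the YARN service from there.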

Thanks for your support. I have already started the YARN ResourceManager. After that I executed the command below again:
sqoop import-all-tables -m 12 --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username=retail_dba --password=cloudera --warehouse-dir=/user/cloudera/sqoop_import
but I got the messages below and it has been stuck for several minutes. I am not able to see any data in my sqoop_import directory.

Warning: /usr/lib/sqoop/…/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/03/09 08:59:45 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.0
17/03/09 08:59:45 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/09 08:59:45 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/03/09 08:59:46 INFO tool.CodeGenTool: Beginning code generation
17/03/09 08:59:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM categories AS t LIMIT 1
17/03/09 08:59:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM categories AS t LIMIT 1
17/03/09 08:59:46 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/cf63b00fe0d24945b9ee675512c85735/categories.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/03/09 08:59:49 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/cf63b00fe0d24945b9ee675512c85735/categories.jar
17/03/09 08:59:49 WARN manager.MySQLManager: It looks like you are importing from mysql.
17/03/09 08:59:49 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
17/03/09 08:59:49 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
17/03/09 08:59:49 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
17/03/09 08:59:49 INFO mapreduce.ImportJobBase: Beginning import of categories
17/03/09 08:59:49 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
17/03/09 08:59:50 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/03/09 08:59:51 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/03/09 08:59:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/03/09 08:59:53 WARN hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1281)
at java.lang.Thread.join(Thread.java:1355)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:862)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:600)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:789)
[... the same InterruptedException warning and stack trace repeat six more times between 08:59:53 and 08:59:55 ...]
17/03/09 08:59:55 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/09 08:59:55 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(category_id), MAX(category_id) FROM categories
17/03/09 08:59:55 INFO db.IntegerSplitter: Split size: 4; Num splits: 12 from: 1 to: 58
17/03/09 08:59:55 INFO mapreduce.JobSubmitter: number of splits:12
17/03/09 08:59:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489077960426_0004
17/03/09 08:59:56 INFO impl.YarnClientImpl: Submitted application application_1489077960426_0004
17/03/09 08:59:56 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1489077960426_0004/
17/03/09 08:59:56 INFO mapreduce.Job: Running job: job_1489077960426_0004
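
A job that submits and then hangs at this point is often sitting in the ACCEPTED state because YARN cannot allocate containers, which is common on the single-node QuickStart VM, especially with 12 mappers. A few commands to inspect it (a sketch, assuming the standard YARN and HDFS CLIs):

```shell
# List submitted applications and their state; look for
# application_1489077960426_0004 stuck in ACCEPTED rather than RUNNING
yarn application -list -appStates ALL

# Check whether any files actually landed in the target directory
hdfs dfs -ls /user/cloudera/sqoop_import
```

You can also open the tracking URL from the log (http://quickstart.cloudera:8088/proxy/application_1489077960426_0004/) to see whether the job ever left ACCEPTED.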

Try this:

sqoop import-all-tables -m 1 --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username=retail_dba --password=cloudera --warehouse-dir=/user/cloudera/sqoop_import

I have the same issue and the solution that you mentioned is not working for me either. Everything is good in Cloudera Manager.

Try this:
sqoop import-all-tables -m 1 --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username retail_dba --password cloudera --warehouse-dir /user/cloudera/sqoop_import

Thanks Sri. I sorted out the problem. In my case, I had updated the JDK version to 1.8; I reverted this change to 1.7 and then tried again. It worked :).
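
For anyone hitting the same JDK mismatch, here is a sketch of how to check and switch the active JVM on a CentOS-based QuickStart VM. The alternatives entries and the JDK path below depend on which JDKs are installed, so treat them as examples:

```shell
# See which Java version is currently active
java -version

# Switch the system default among the installed JDKs (interactive menu)
sudo alternatives --config java

# Hadoop and Sqoop also honor JAVA_HOME; point it at a 1.7 JDK (example path)
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
```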