Unable to insert new data into Hive warehouse table

Hello Durga Sir,
I was watching the 04 Sqoop import video and was trying to experiment on my virtual machine, which has a database called loudacre. Under it I created a database called sqoop_import, and there is a directory called customerservicerep. However, after inserting 2 additional rows of data, I am able to access them only from MySQL; the data is not visible when I do hdfs dfs -ls, or even when I log into Hive and look into the databases/tables. All I see are NULL values in the fields.
I am also attaching a picture of the output of sqoop import with --append and a where clause of acct_num > 2000 (which is what I created).

Can you please let me know how to resolve this?

I am attaching the mysql output for the above table that was created.

I would really appreciate it if I could get some help with this. I have been working on this for more than 2 hrs and getting nowhere!

Please paste the sqoop import command as well as the output of describe formatted on the Hive table. Most likely the delimiter in your table is not a comma.
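To see why comma-delimited data can show up as NULLs in a Hive table, here is a quick local check (a sketch; the file names are just for illustration). Hive's default field delimiter is \001 (Ctrl-A), so a comma-separated row split on \001 collapses into a single field and Hive fills the remaining columns with NULL:

```shell
# Hive's default field delimiter is \001 (Ctrl-A), not a comma.
printf '203\001Willis Natal\n' > hive_style.txt   # delimiter matches the table
printf '2600,Yamuna River\n'  > comma_style.txt   # delimiter does not match

# Count how many fields each line splits into on \001:
awk -F$'\001' '{print NF}' hive_style.txt    # prints 2 -> both columns parse
awk -F$'\001' '{print NF}' comma_style.txt   # prints 1 -> second column becomes NULL
```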

Could you also confirm whether, when I run import commands into Hive, I can specify the database within Hive where I would like to store the data? By default all the data goes into the default DB, but I would like to know if there is something like --hive-database to point the data at a specific DB. Thanks @itversity

I tried that. My computer shut down completely and I am not able to access the queries anymore.

From what I remember, the table had acct_num int, name string in the “describe formatted tablename”.
Also, the values, when I checked in Hue, were of the form:
203Willis Natal
204Linda Kimmell
205Jerome Rucker.

Not sure if it doesn’t accept ‘,’-delimited values. If that is the case, how do we insert new values?
I used something like this in my case:
--query "insert into customerservicerep values (2600, 'Yamuna River')"

And then another line as:
--query "insert into customerservicerep values (2500, 'Ganga River')"
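For reference, if those INSERT statements are meant to run against MySQL from the Sqoop side, sqoop eval is the tool that executes a SQL statement directly on the source database (a sketch only; the connection string and credentials here are assumptions, not from the thread):

```shell
# Sketch: run the INSERT on the MySQL side, then re-import/append.
# sqoop eval executes the given statement against the source database.
sqoop eval \
  --connect jdbc:mysql://localhost/loudacre \
  --username training --password training \
  --query "insert into customerservicerep values (2600, 'Yamuna River')"
```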

Thank you.

About my question, guys, I just want to let you know how I fixed it. The right command to point all your imports from MySQL to a desired Hive DB is:

sqoop import --connect jdbc:mysql://master/poc --username root --direct --table dept --hive-import --hive-database poc -m 1

What do you want to do, buddy: perform this query from MySQL and put the data in a file within your HDFS, or in Hive?

I want to see that the Hive warehouse table has been appended with the latest values instead of showing NULL NULL.

@mygen I’m not sure, but you mean something like:
$ sqoop import --connect jdbc:mysql://db.foo.com/somedb --table sometable \
--where "id > 100000" --target-dir /incremental_dataset --append

have you checked the official documentation?

Yes, that’s right. I was following the video from Itversity; the link is in my first post. My suspicion is that the table has a different format that is not comma-separated. Hue showed the previous values in this format:
203Willis Natal
204Linda Kimmell
205Jerome Rucker.

I am not sure how to add values if this is the case. Any pointers? Thx

Hi Durga Sir,
Finally I was able to access this table after so many issues with my VMware.
describe formatted customerservicerep;

col_name data_type comment

acct_num int
cs_name string

Detailed Table Information

Database: sqoop_import
Owner: training
CreateTime: Tue May 16 13:42:24 PDT 2017
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://localhost:8020/user/hive/warehouse/sqoop_import.db/customerservicerep
Table Parameters:
comment Imported by sqoop on 2017/05/16 13:42:23
numFiles 1
numRows 0
rawDataSize 0
totalSize 3635
transient_lastDdlTime 1494967344

Storage Information

SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.TextInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim \u0001
line.delim \n
serialization.format \u0001
Time taken: 0.193 seconds, Fetched: 35 row(s)

Sample output from the table below from Hive command:
199 Bradley Meacham
200 Daniel Maclean
201 Paul Garza
202 Harold Vinson
203 Willis Natal
204 Linda Kimmell
205 Jerome Rucker
206 Juan Francois
207 Kristy Brewer
208 Stephanie Meador
Time taken: 0.141 seconds, Fetched: 212 row(s)

Hi @mygen, I have replicated your scenario and ran the statement below:

sqoop import --connect jdbc:mysql://master/poc --username root --table dept --where "id_dept > 3" --target-dir /user/hdfs/dept --mysql-delimiters --append -m 1

For me the output includes the new row coming from the MySQL table.
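Applied to the thread’s original table, whose field.delim is \u0001 per the describe formatted output above, the same append idea would need the output delimiter to match Hive’s default ^A rather than using --mysql-delimiters. A sketch only (the host, username/password, and warehouse path are taken or guessed from earlier posts and may differ on your VM):

```shell
# Sketch, not verified on the VM: append the rows with acct_num > 2000
# using Hive's default \001 field delimiter so the existing table parses
# them instead of showing NULLs.
sqoop import \
  --connect jdbc:mysql://localhost/loudacre \
  --username training --password training \
  --table customerservicerep \
  --where "acct_num > 2000" \
  --fields-terminated-by '\001' \
  --target-dir /user/hive/warehouse/sqoop_import.db/customerservicerep \
  --append -m 1
```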