CCA 175 Study Group


We have started a new study group for CCA 175. Join the group to discuss ideas, challenges, and training resources.


Great initiative, Durga.

Q1. Under what circumstances would we need to use HiveContext during the exam?
Q2. How do we handle the small screen size during the exam?


When I specify a custom field delimiter for a Sqoop Hive import, queries in Hive against the imported table do not work correctly.

sqoop import --connect jdbc:mysql:// --username retail_user --password itversity --table orders --columns order_id,order_date,order_customer_id --as-textfile --hive-import --hive-table sadiqueahsan_sqoop_import.orders_imported --hive-overwrite --fields-terminated-by '\0x24' --lines-terminated-by '\n'

The data is populated in the table, but the query below returns zero:

hive> SELECT COUNT(*) FROM ORDERS_IMPORTED WHERE order_id = 68883;


With the default Hive delimiters, the same query did return the record:

SELECT * FROM ORDERS_IMPORTED where order_id = 68883 ;
68883 2014-07-23 00:00:00.0 5533
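One thing worth checking (a guess, not a verified diagnosis): the delimiter Hive recorded for the table may not match the delimiter Sqoop actually wrote into the files, so every row parses into mis-aligned columns and the `WHERE` clause matches nothing. You can compare the two from the Hive shell:

```sql
-- See what field delimiter Hive actually recorded for this table
SHOW CREATE TABLE orders_imported;

-- If field.delim does not match the data on disk, you can re-point the
-- SerDe at the real delimiter ($ is hex 0x24, octal 044) -- or simply
-- re-run the import with --fields-terminated-by '$', since $ is a
-- single printable character that needs no escape syntax.
ALTER TABLE orders_imported
SET SERDEPROPERTIES ('field.delim' = '\044');
```

If `SELECT *` then shows the three columns split correctly, the delimiter mismatch was the problem.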

Dear All,

I need help. I have the products data as a parquet file, and I want to find rows with empty strings in the products DataFrame using Spark with Scala. I have done this with an RDD as shown in Durga sir's video, but I want to do the same filtering on a DataFrame. Please help me filter out the rows with empty values.
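A minimal sketch of one way to do this, assuming a Spark 1.6-style spark-shell session (as used in most CCA 175 prep material) and a parquet file with a string column named `product_description` at a hypothetical path — adjust the path and column name to your schema:

```scala
import org.apache.spark.sql.SQLContext

// sc is the SparkContext already available in spark-shell
val sqlContext = new SQLContext(sc)

// Path and column name below are placeholders for illustration
val products = sqlContext.read.parquet("/user/yourid/products_parquet")

// Rows where product_description is an empty string
val emptyDesc = products.filter(products("product_description") === "")

// Rows where it is non-empty (nulls are also excluded by this comparison)
val nonEmpty = products.filter(products("product_description") !== "")

emptyDesc.show()
```

The `===` / `!==` operators are the Column comparison operators in the Spark 1.x DataFrame API; in Spark 2.x and later, `=!=` is the preferred spelling of the inequality operator.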