To avoid these kinds of issues, at ITVersity we are coming up with a certification simulator. For now, send an email to firstname.lastname@example.org. It costs $14.45 per week.
Not really a success story - I failed the exam today.
For some reason the mysql command would not connect to the Cloudera MySQL instance, so I could not answer 2 of the 9 questions involving MySQL. I also made a mistake on another question, so I failed.
I tried the same kind of MySQL connectivity as in the labs to check table/column names, but could not connect:
mysql -u username -h hostname
I got an "access denied" error repeatedly. Even through the sqoop command I got the same error. I tried opening a fresh terminal, etc.; still the same error.
No idea what I did wrong.
If any of you had similar hiccups and managed to make it work, please let me know. Thanks a ton!
I lost 25 minutes to this MySQL drama.
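For anyone hitting the same wall, a quick sanity check of the MySQL/JDBC path might look like this (the hostname, database, and user below are placeholders, not actual exam values):

```shell
# Check credentials with the mysql client directly (-p prompts for the password)
mysql -u retail_user -h gateway.example.com -p retail_db

# Or test the full JDBC path that sqoop will use
sqoop eval \
  --connect jdbc:mysql://gateway.example.com:3306/retail_db \
  --username retail_user -P \
  --query "SHOW TABLES"
```

If sqoop eval fails the same way, the problem is the credentials or the connect string, not your sqoop import syntax.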
The third question was incorrect because I deleted the input HDFS directory for that question by mistake,
using hadoop fs -rm xxxx
(While deleting some other junk output, I deleted the input data directory for one of the problems. Once you lose the source HDFS data directory, you can't do anything - be careful. It's a pity they gave the source data RWX permissions; it should be read-only.)
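One habit that would have saved me: copy the input directory before deleting anything, and never pass -skipTrash. A sketch (paths are made up):

```shell
# Back up the problem's input before you start cleaning up
hadoop fs -cp /user/me/problem3/input /user/me/problem3/input_backup

# List first, then delete -- without -skipTrash, deleted files go to .Trash
hadoop fs -ls /user/me/problem3/junk_output
hadoop fs -rm -r /user/me/problem3/junk_output
```

If trash is enabled on the cluster, files deleted without -skipTrash can still be recovered from /user/&lt;you&gt;/.Trash.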
I am still wondering what went wrong with the MySQL connection - an environment issue or my own mistake. Not sure.
But the exam itself is very easy, with no complications. If you practice Durga's and Arun's exercises, you will be very comfortable. Unfortunately I goofed up.
Practice the various input/output formats along with compression.
Make sure you open a fresh spark-shell if one question involves compressed output and another uncompressed output for the same format. If you reuse the same shell and forget to reset the compression setting, the output will be wrong.
sqlContext.setConf("spark.sql.avro.compression.codec", "uncompressed") --> use this to reset the compression in the same spark shell, or exit and launch a new spark-shell.
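To make the reset concrete, here is a sketch of a single spark-shell session juggling two codecs (Spark 1.6 with the databricks avro package; the paths and df/df2 variables are hypothetical):

```scala
// Question 1 wants snappy-compressed avro output
sqlContext.setConf("spark.sql.avro.compression.codec", "snappy")
df.write.format("com.databricks.spark.avro").save("/user/me/q1_avro_snappy")

// Question 2 wants uncompressed avro -- reset the codec first!
sqlContext.setConf("spark.sql.avro.compression.codec", "uncompressed")
df2.write.format("com.databricks.spark.avro").save("/user/me/q2_avro_plain")
```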
Based on my experience, this is what you need to pass:
#Once you are done with the Udemy sessions of reading/reviewing/memorizing commands
#Practice Arun's blog and the problem scenarios listed there
(especially the various input/output formats and compression techniques)
#Learn how to import the Avro package by passing it at spark-shell launch
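The launch-time import looks like this; the exact spark-avro coordinates depend on the cluster's Spark/Scala versions, so treat the version below as an example for a Spark 1.6 / Scala 2.10 setup:

```shell
spark-shell --master yarn \
  --packages com.databricks:spark-avro_2.10:2.0.1
```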
#Learn how to handle sequence files of (K,V) - both writing and reading - and how to process the values and write them out in a new format
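A sketch of that sequence-file pattern in spark-shell (paths are placeholders; assumes Text keys and values):

```scala
import org.apache.hadoop.io.Text

// Read (K,V) pairs from a sequence file, keep only the values
val seq = sc.sequenceFile("/user/me/input_seq", classOf[Text], classOf[Text])
val values = seq.map { case (_, v) => v.toString }

// Process the values and write them out in a new format (plain text here)
values.map(_.toUpperCase).saveAsTextFile("/user/me/output_text")

// Writing a sequence file back requires a pair RDD
values.map(v => (v.length, v)).saveAsSequenceFile("/user/me/output_seq")
```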
#Learn the technique for sorting one key ascending and another key descending using (K1, -K2)
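The (K1, -K2) trick works because tuples sort lexicographically, so negating the second (numeric) key flips its direction. A plain Scala illustration (the same sortBy call works on an RDD):

```scala
val pairs = Seq(("a", 3), ("a", 1), ("b", 2), ("a", 2))

// First key ascending, second key descending
val sorted = pairs.sortBy { case (k1, k2) => (k1, -k2) }
// sorted == Seq(("a", 3), ("a", 2), ("a", 1), ("b", 2))
```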
#Practice Durga's exercises
#Memorize package names etc. - it saves time (the avro package, compression packages, the databricks package for avro, etc.)
#Also memorize all the sqoop commands; they are not that hard to memorize
#Sqoop - master this and nail the questions you may receive in the exam
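For reference, the shape of the import/export commands worth having at your fingertips (connect strings, tables, and paths below are placeholders):

```shell
# Import a table to HDFS as snappy-compressed avro
sqoop import \
  --connect jdbc:mysql://gateway.example.com:3306/retail_db \
  --username retail_user -P \
  --table orders \
  --target-dir /user/me/orders_avro \
  --as-avrodatafile \
  --compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec

# Export an HDFS directory back into a table
sqoop export \
  --connect jdbc:mysql://gateway.example.com:3306/retail_export \
  --username retail_user -P \
  --table order_totals \
  --export-dir /user/me/order_totals \
  --input-fields-terminated-by ','
```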
#HDFS reads / RDDs: transformations, actions, simple joins, and saving as text or another output format, with or without compression
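A sketch of that whole pipeline in spark-shell (the CSV layouts and paths are made up for illustration):

```scala
// (order_id, status) from one file, (order_id, subtotal) from another
val orders = sc.textFile("/user/me/orders")
  .map(_.split(",")).map(a => (a(0).toInt, a(3)))
val items = sc.textFile("/user/me/order_items")
  .map(_.split(",")).map(a => (a(1).toInt, a(4).toFloat))

// Simple join on order_id, then save as text
// (pass a codec class to saveAsTextFile for compressed output)
orders.join(items)
  .map { case (id, (status, subtotal)) => s"$id,$status,$subtotal" }
  .saveAsTextFile("/user/me/joined_out")
```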
#Hive DB reads through HiveContext
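The HiveContext pattern, sketched for Spark 1.6 (database and table names are placeholders):

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
val df = hiveContext.sql(
  "SELECT order_status, count(*) AS cnt FROM retail_db.orders GROUP BY order_status")
df.write.format("parquet").save("/user/me/status_counts")
```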
#Don't break your head over complicated Flume/Kafka setups. A simple Flume exec-source config is good enough, just in case you are asked about it in the exam
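A minimal exec-source agent of the kind worth memorizing (the agent/source/sink names and paths are placeholders):

```
# flume agent: exec source -> memory channel -> HDFS sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /user/me/flume_out
a1.sinks.k1.hdfs.fileType = DataStream
```

Start it with flume-ng agent --name a1 --conf-file agent.conf (plus --conf pointing at the Flume conf directory).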
Thank you very much to Durga and his team for the Udemy courses and the awesome labs setup.
Here are the coupons for the CCA exams, which cover all important aspects of the examination.
- Click here for a $35 coupon for CCA 175 Spark and Hadoop Developer using Python.
- Click here for a $35 coupon for CCA 175 Spark and Hadoop Developer using Scala.
- Click here to sign up for our state-of-the-art labs for hands-on practice
Thank you very much to Arun for his blog. The problem scenarios are very good.