Earlier I faced a few issues with the Flume .conf file and the .py files, and with the ITVersity team's support I overcame them. When I triggered the job again, I could see directories being created in HDFS. However, when I use the -cat command to view the files, there is no streamed/transformed data in them.
Can anyone kindly look into this?
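Two common causes of "empty" HDFS output from a Flume HDFS sink are worth checking: data stays in an in-progress `.tmp` file until the sink rolls the file, and the default `hdfs.fileType` (SequenceFile) is binary, so `-cat` does not show readable text. This is not the original poster's configuration, just a minimal sketch with hypothetical agent and component names (`agent1`, `ch1`, `sink1`) and an assumed HDFS path:

```properties
# Hypothetical names (agent1, ch1, sink1) and path -- adjust to match your own .conf
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.channel = ch1
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/user/flume/events/%Y-%m-%d
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
# Roll (close) files every 60 seconds; disable size- and count-based rolling
agent1.sinks.sink1.hdfs.rollInterval = 60
agent1.sinks.sink1.hdfs.rollSize = 0
agent1.sinks.sink1.hdfs.rollCount = 0
```

Running `hdfs dfs -ls` on the target directory shows whether the files are still suffixed `.tmp` (i.e., not yet rolled); stopping the Flume agent cleanly also forces open files to close.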