fsmp.conf file for Flume-Spark integration

I am trying the same example from
Big Data Workshop - 17 - Streaming Analytics - Flume, Kafka and Spark Streaming - Spark Streaming

But I am getting this error:

Could not configure sink sparksink due to: Channel memoryChannel not in active set.

Can you please provide a link to the Flume-Spark agent configuration file?

You need to paste the configuration file here.

Here is my code.
I don’t know why the sentences starting with the # symbol get rendered as bold black headings.

```
# Flume spark sink.conf: A single-node Flume configuration

# Name the components on this agent
fsmp.sources = logsource
fsmp.sinks = sparksink hdfssink
fsmp.channels = sparkchannel hdfschannel

# Describe/configure the source
fsmp.sources.logsource.type = exec
fsmp.sources.logsource.command = tail -f /opt/gen_logs/logs/access.log

# Describe the Spark sink
fsmp.sinks.sparksink.type = org.apache.spark.streaming.flume.sink.SparkSink
fsmp.sinks.sparksink.hostname = gw01.itversity.com
fsmp.sinks.sparksink.port = 19999

# Use a channel which buffers events in memory
fsmp.channels.sparkchannel.type = memory
fsmp.channels.sparkchannel.capacity = 1000
fsmp.channels.sparkchannel.transactionCapacity = 100

# Bind the source and sink to the channel
fsmp.sources.logsource.channels = sparkchannel hdfschannel

# Describe the HDFS sink
fsmp.sinks.hdfssink.type = hdfs
fsmp.sinks.hdfssink.hdfs.path = hdfs://nn01.itversity.com:8020/user/digambarmishra/flume/sparkhdfs_%Y-%m-%d
fsmp.sinks.hdfssink.hdfs.fileType = DataStream

# Describe the hdfssink channel
fsmp.channels.hdfschannel.type = memory
fsmp.channels.hdfschannel.capacity = 1000
fsmp.channels.hdfschannel.transactionCapacity = 100

fsmp.sinks.sparksink.channel = memoryChannel
```

You have to use “preformatted text” which is available at the top of the editor.

You might have to change `fsmp.sinks.sparksink.channel = memoryChannel` to `fsmp.sinks.sparksink.channel = sparkchannel`, since no channel named memoryChannel is declared in this agent.
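The error ("Channel memoryChannel not in active set") means a sink references a channel name that is not declared in `fsmp.channels`. Assuming the intent is to fan the source out to both sinks, the bindings would look like the sketch below. Note that the `hdfssink` binding is my assumption: the posted config never binds hdfssink to any channel, which would trigger the same kind of error for that sink.

```
# Bind each sink to one of the declared channels
fsmp.sinks.sparksink.channel = sparkchannel
# Assumed, not present in the posted config:
fsmp.sinks.hdfssink.channel = hdfschannel

# The source writes to both channels (Flume replicates to each by default)
fsmp.sources.logsource.channels = sparkchannel hdfschannel
```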

Hello, this example has become quite confusing. I have copied the code into StreamingDepartmentAnalysis.scala and built a jar.

Is there a complete Spark-Flume config file somewhere?

In the video you changed these files very fast.

I also did not see where you started Kafka. I know these videos are quite long, so I have been watching in sections, sorry.

It is working now. I could not see in the video where you changed the memory channel to sparkchannel.
It would have been helpful if the file had been posted on GitHub.
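Since the thread asks for a complete file, here is one assembled purely from the snippets posted above, with the fix discussed in this thread applied (sparksink bound to sparkchannel). Two lines are my assumptions and are marked as such: the hdfssink channel binding, and `useLocalTimeStamp`, which the HDFS sink generally needs for the `%Y-%m-%d` escapes when the source does not put a timestamp header on events.

```
# fsmp.conf: single-node Flume agent with a Spark sink and an HDFS sink

# Name the components on this agent
fsmp.sources = logsource
fsmp.sinks = sparksink hdfssink
fsmp.channels = sparkchannel hdfschannel

# Describe/configure the source
fsmp.sources.logsource.type = exec
fsmp.sources.logsource.command = tail -f /opt/gen_logs/logs/access.log

# Describe the Spark sink
fsmp.sinks.sparksink.type = org.apache.spark.streaming.flume.sink.SparkSink
fsmp.sinks.sparksink.hostname = gw01.itversity.com
fsmp.sinks.sparksink.port = 19999

# Describe the HDFS sink
fsmp.sinks.hdfssink.type = hdfs
fsmp.sinks.hdfssink.hdfs.path = hdfs://nn01.itversity.com:8020/user/digambarmishra/flume/sparkhdfs_%Y-%m-%d
fsmp.sinks.hdfssink.hdfs.fileType = DataStream
# Assumed: lets the date escapes in hdfs.path resolve without a timestamp interceptor
fsmp.sinks.hdfssink.hdfs.useLocalTimeStamp = true

# Channels which buffer events in memory
fsmp.channels.sparkchannel.type = memory
fsmp.channels.sparkchannel.capacity = 1000
fsmp.channels.sparkchannel.transactionCapacity = 100
fsmp.channels.hdfschannel.type = memory
fsmp.channels.hdfschannel.capacity = 1000
fsmp.channels.hdfschannel.transactionCapacity = 100

# Bind the source and sinks to the channels
fsmp.sources.logsource.channels = sparkchannel hdfschannel
fsmp.sinks.sparksink.channel = sparkchannel
# Assumed: this binding was missing from the posted config
fsmp.sinks.hdfssink.channel = hdfschannel
```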