It says "The specified configuration file does not exist"


#1

i type this command:

flume-ng agent -name a1 --conf /user/katukurivamshikrishna/flume --conf-file /user/katukurivamshikrishna/flume/example.conf

and i get an error:
18/06/27 14:34:05 ERROR node.Application: A fatal error occurred while running. Exception follows.
org.apache.commons.cli.ParseException: The specified configuration file does not exist: /user/katukurivamshikrishna/flume/example.conf
at org.apache.flume.node.Application.main(Application.java:275)

but when i check for the file, it exists:
[katukurivamshikrishna@gw02 conf]$ hadoop fs -cat /user/katukurivamshikrishna/flume/example.conf
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = hdfs

# customizing sink for hdfs
a1.sinks.k1.hdfs.path = /user/katukurivamshikrishna/flume
a1.sinks.k1.hdfs.fileprefix = netcat

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
ources.r1.channels = c1
a1.sinks.k1.channel = c1


#2

@mayank2711, can you help with this, bro?
@dgadiraju, any suggestions, sir?


#3

@vamosvamshi

I can see there is a missing component in binding the source

ources.r1.channels = c1

replace the line as below and let us know

a1.sources.r1.channels = c1
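
With that fix, the binding section at the end of example.conf should read:

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1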

Regards,
Sunil Abhishek


#4

@Sunil_Itversity, I tried the above fix. The agent now starts and listens on port 44444, but when I connect with “telnet localhost 44444” and send data, I get this error:

hdp/2.5.0.0-1245/hadoop/lib/native org.apache.flume.node.Application --name a1 --conf-file /home/katukurivamshikrishna/flume/conf/example.conf
18/06/28 12:18:48 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
18/06/28 12:18:48 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/katukurivamshikrishna/flume/conf/example.conf
18/06/28 12:18:48 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
18/06/28 12:18:48 INFO conf.FlumeConfiguration: Processing:k1
18/06/28 12:18:48 INFO conf.FlumeConfiguration: Processing:k1
18/06/28 12:18:48 INFO conf.FlumeConfiguration: Processing:k1
18/06/28 12:18:48 INFO conf.FlumeConfiguration: Processing:k1
18/06/28 12:18:48 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
18/06/28 12:18:48 INFO node.AbstractConfigurationProvider: Creating channels
18/06/28 12:18:48 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type memory
18/06/28 12:18:48 INFO node.AbstractConfigurationProvider: Created channel c1
18/06/28 12:18:48 INFO source.DefaultSourceFactory: Creating instance of source r1, type netcat
18/06/28 12:18:48 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: hdfs
18/06/28 12:18:48 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
18/06/28 12:18:48 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r1, k1]
18/06/28 12:18:48 INFO node.Application: Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7201ac48 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
18/06/28 12:18:48 INFO node.Application: Starting Channel c1
18/06/28 12:18:49 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
18/06/28 12:18:49 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
18/06/28 12:18:49 INFO node.Application: Starting Sink k1
18/06/28 12:18:49 INFO node.Application: Starting Source r1
18/06/28 12:18:49 INFO source.NetcatSource: Source starting
18/06/28 12:18:49 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
18/06/28 12:18:49 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
18/06/28 12:18:49 INFO source.NetcatSource: Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
18/06/28 12:19:11 INFO hdfs.HDFSSequenceFile: writeFormat = Writable, UseRawLocalFileSystem = false
18/06/28 12:19:11 INFO hdfs.BucketWriter: Creating hdfs://gw02.itversity.com/user/katukurivamshikrishna/flume/FlumeData.1530202751127.tmp
18/06/28 12:19:12 WARN ipc.Client: Failed to connect to server: gw02.itversity.com/172.16.1.109:8020: try once and fail.
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
    at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
    at org.apache.hadoop.ipc.Client.call(Client.java:1449)
    at org.apache.hadoop.ipc.Client.call(Client.java:1396)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy13.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:311)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
    at com.sun.proxy.$Proxy14.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1719)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1699)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1634)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:479)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:475)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:475)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:416)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:926)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:907)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:803)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:792)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:96)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:78)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:69)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:277)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:266)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:706)
    at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:187)
    at org.apache.flume.sink.hdfs.BucketWriter.access$1400(BucketWriter.java:59)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:703)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
18/06/28 12:19:12 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.create over null. Not retrying because try once and fail.
java.net.ConnectException: Call From gw02.itversity.com/172.16.1.109 to gw02.itversity.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556)
    at org.apache.hadoop.ipc.Client.call(Client.java:1496)
    at org.apache.hadoop.ipc.Client.call(Client.java:1396)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy13.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:311)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
    at com.sun.proxy.$Proxy14.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1719)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1699)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1634)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:479)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:475)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:475)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:416)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:926)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:907)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:803)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:792)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:96)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:78)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:69)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:277)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:266)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:706)
    at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:187)


#5

@vamosvamshi

May I know which command you used to start the Flume agent?
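
Also, the "Connection refused" from gw02.itversity.com:8020 means the HDFS client is trying to reach a NameNode on the gateway host itself, where nothing is listening on that port. As a quick check (a sketch; this assumes the Hadoop client tools are on your PATH and nc is installed), you can run:

hdfs getconf -confKey fs.defaultFS     # which NameNode URI the client config resolves to
nc -vz gw02.itversity.com 8020         # whether anything is listening on that host/port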

Regards,
Sunil Abhishek


#6

flume-ng agent --name a1 --conf /user/katukurivamshikrishna/flume/ --conf-file /home/katukurivamshikrishna/flume/conf/example.conf
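
Note: --conf-file now points at a local path under /home. flume-ng reads --conf and --conf-file from the local filesystem, not HDFS, which is likely why the original command (pointing at /user/katukurivamshikrishna/flume/example.conf, a path that exists only in HDFS) failed with "The specified configuration file does not exist". A quick way to tell the two apart, assuming shell access on the gateway:

ls -l /home/katukurivamshikrishna/flume/conf/example.conf   # local file, read by flume-ng
hadoop fs -ls /user/katukurivamshikrishna/flume             # HDFS directory, written to by the sink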


#7

@vamosvamshi

Issue resolved: there was a mistake in the HDFS path set in your conf file (you had used hdfs://gw02.itversity.com). I edited the path to /user/katukurivamshikrishna/flume and it is working now. Please check back and let us know.
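
For reference, the corrected sink line in example.conf looks like this; with a bare path (no hdfs://host prefix), the client resolves the NameNode from fs.defaultFS in the cluster configuration instead of the hard-coded gateway host:

a1.sinks.k1.hdfs.path = /user/katukurivamshikrishna/flume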

Regards,
Sunil Abhishek