Issue while running pyspark: Name node is in safe mode

Unfortunately, I am getting the error below when I run the following command on the Cloudera QuickStart VM:

[cloudera@quickstart spark-1.4.0-bin-hadoop2.4]$ bin/pyspark
Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
17/06/20 05:20:38 WARN util.Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 192.168.86.132 instead (on interface eth1)
17/06/20 05:20:38 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/06/20 05:22:15 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/cloudera/.sparkStaging/application_1497916678555_0001. Name node is in safe mode.
The reported blocks 0 needs additional 992 blocks to reach the threshold 0.9990 of total blocks 992.
The number of live datanodes 0 needs an additional 1 live datanodes to reach the minimum number 1.
Safe mode will be turned off automatically once the thresholds have been reached.
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1463)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4352)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4327)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:873)
            at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:323)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:618)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)

            at org.apache.hadoop.ipc.Client.call(Client.java:1472)
            at org.apache.hadoop.ipc.Client.call(Client.java:1409)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
            at com.sun.proxy.$Proxy21.mkdirs(Unknown Source)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
            at com.sun.proxy.$Proxy22.mkdirs(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3110)
            at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3077)
            at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:992)
            at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:988)
            at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
            at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:988)
            at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:980)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1954)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:614)
            at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:357)
            at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:724)
            at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:143)
            at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
            at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:158)
            at org.apache.spark.SparkContext.<init>(SparkContext.scala:538)
            at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
            at py4j.Gateway.invoke(Gateway.java:214)
            at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
            at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
            at py4j.GatewayConnection.run(GatewayConnection.java:209)
            at java.lang.Thread.run(Thread.java:745)

17/06/20 05:22:15 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
17/06/20 05:22:15 ERROR util.Utils: Uncaught exception in thread Thread-2
java.lang.NullPointerException
            at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)
            at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1320)
            at org.apache.spark.SparkEnv.stop(SparkEnv.scala:97)
            at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1764)
            at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1220)
            at org.apache.spark.SparkContext.stop(SparkContext.scala:1763)
            at org.apache.spark.SparkContext.<init>(SparkContext.scala:610)
            at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
            at py4j.Gateway.invoke(Gateway.java:214)
            at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
            at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
            at py4j.GatewayConnection.run(GatewayConnection.java:209)
            at java.lang.Thread.run(Thread.java:745)
Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/shell.py", line 43, in <module>
    sc = SparkContext(pyFiles=add_files)
  File "/usr/lib/spark/python/pyspark/context.py", line 115, in __init__
    conf, jsc, profiler_cls)
  File "/usr/lib/spark/python/pyspark/context.py", line 172, in _do_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "/usr/lib/spark/python/pyspark/context.py", line 235, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "/usr/lib/python2.6/site-packages/py4j-0.10.5-py2.6.egg/py4j/java_gateway.py", line 1422, in __call__
    answer, self._gateway_client, None, self._fqn)
  File "/usr/lib/python2.6/site-packages/py4j-0.10.5-py2.6.egg/py4j/protocol.py", line 320, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/cloudera/.sparkStaging/application_1497916678555_0001. Name node is in safe mode.
The reported blocks 0 needs additional 992 blocks to reach the threshold 0.9990 of total blocks 992.
The number of live datanodes 0 needs an additional 1 live datanodes to reach the minimum number 1.
Safe mode will be turned off automatically once the thresholds have been reached.
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1463)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4352)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4327)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:873)
            at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:323)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:618)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)

            at org.apache.hadoop.ipc.Client.call(Client.java:1472)
            at org.apache.hadoop.ipc.Client.call(Client.java:1409)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
            at com.sun.proxy.$Proxy21.mkdirs(Unknown Source)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
            at com.sun.proxy.$Proxy22.mkdirs(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3110)
            at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3077)
            at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:992)
            at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:988)
            at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
            at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:988)
            at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:980)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1954)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:614)
            at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:357)
            at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:724)
            at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:143)
            at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
            at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:158)
            at org.apache.spark.SparkContext.<init>(SparkContext.scala:538)
            at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
            at py4j.Gateway.invoke(Gateway.java:214)
            at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
            at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
            at py4j.GatewayConnection.run(GatewayConnection.java:209)
            at java.lang.Thread.run(Thread.java:745)
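
From the message itself, the root cause seems to be that the NameNode is in safe mode with zero live datanodes, so PySpark cannot create its .sparkStaging directory on HDFS. Below is a sketch of the commands I am planning to run to investigate, assuming the standard hdfs dfsadmin CLI and the CDH service scripts that ship with the QuickStart VM:

[cloudera@quickstart ~]$ sudo service hadoop-hdfs-datanode status    # is the datanode even running?
[cloudera@quickstart ~]$ hdfs dfsadmin -report                       # how many live datanodes does the NameNode see?
[cloudera@quickstart ~]$ hdfs dfsadmin -safemode get                 # current safe mode status
[cloudera@quickstart ~]$ sudo -u hdfs hdfs dfsadmin -safemode leave  # force the NameNode out of safe mode

Is it safe to force-leave safe mode here, or does the "0 live datanodes" line mean I should restart the datanode (or the whole HDFS service) first and let safe mode exit on its own?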
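
Separately, I assume the earlier warning about quickstart.cloudera resolving to a loopback address is unrelated to the SafeModeException, and that it can be silenced by exporting SPARK_LOCAL_IP before launching the shell, as the log message itself suggests:

[cloudera@quickstart spark-1.4.0-bin-hadoop2.4]$ export SPARK_LOCAL_IP=192.168.86.132   # the eth1 address from the warning
[cloudera@quickstart spark-1.4.0-bin-hadoop2.4]$ bin/pyspark

Please correct me if that warning could also be contributing to the failure.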