Spark Shell is not starting up

#1

Hi,

When trying to launch the Spark shell, I am getting the below message continuously. Could you please help resolve this?

I have tried all combinations of executor and memory settings.

19/06/25 17:05:21 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:22 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:23 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:24 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:25 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:26 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:27 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:28 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:29 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:30 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:31 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:32 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
19/06/25 17:05:33 INFO Client: Application report for application_1540458187951_118212 (state: ACCEPTED)
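
(For reference: an application looping in the ACCEPTED state usually means YARN has accepted the submission but cannot allocate a container for the ApplicationMaster, typically because the queue is full. Assuming you have access to the YARN CLI on the lab gateway, a rough way to check and clean up is below; the application id is just the one from the log above.)

# See how backed up the queue is with other stuck applications
yarn application -list -appStates ACCEPTED

# If your own submission is wedged, kill it before retrying
yarn application -kill application_1540458187951_118212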

0 Likes

#2

I got the same error … Support team, could you please look into it?

0 Likes

#3

@itversity, @itversity1, @Itversity_Training, @Ramesh1, @dgadiraju I have also been facing the same problem for a whole day now! Kindly look into this ASAP!

0 Likes

#4

I am trying to launch the Spark shell using spark-shell --master yarn --conf spark.ui.port=12456

The issue is still not resolved @itversity, @itversity1, @Itversity_Training, @dgadiraju; the downtime is now more than 28 hours. Please look into this ASAP or provide an update on this issue.
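
(If the cluster is simply short on capacity, a minimal-footprint launch sometimes gets scheduled where the default one does not. This is only a sketch using standard spark-shell options; the resource values are illustrative, not a recommendation:)

# Request the smallest possible footprint so YARN can schedule the ApplicationMaster
spark-shell --master yarn \
  --conf spark.ui.port=12456 \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 1 \
  --executor-memory 512M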

0 Likes

#5

@Itversity_Training, @dgadiraju
I’m also getting the same issue. Support team, kindly resolve it ASAP.

0 Likes

#6

I am also getting the same issue. pyspark2 is working but pyspark is not. It seems the service is down.

0 Likes

#7

@itversity This is really not acceptable: we haven’t even received a reply saying whether you are working on the issue or whether there is some problem with the labs.

Could someone please reply?

0 Likes

#8

@itversity, @Itversity_Training, @itversity1, @dgadiraju, @hemanthvarma There has been no update about this issue, and it is a showstopper for the Spark shell. We are not able to use any of the services, and I have a certification exam pending next week; due to this issue I might have to reschedule my exam to a later date.

I would request that my subscription be extended by at least the length of the downtime, which is now more than 34 hours. Please also keep working on the issue and provide an update on it.

Thanks

1 Like

#9

As per his LinkedIn account, it seems there was a virus attack on the ITVersity lab.

1 Like

#10

The issue was fixed yesterday itself, but as we deleted and re-added Spark, we missed one property, due to which spark-shell and PySpark were not coming up.

Now it is back to normal. Thank you for your patience.

0 Likes

#11

While connecting Spark with Hive, I am getting the below issue. Can you please look into this?

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
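
(This "Unable to instantiate SessionHiveMetaStoreClient" error usually means the shell could not reach the Hive metastore: the metastore service may be down, hive-site.xml may be missing from the classpath, or a previous session may have left a Derby lock behind after Spark fell back to a local embedded metastore. A rough sketch of checks, assuming you are on the lab gateway; the metastore_db path is only where an embedded Derby metastore would land by default:)

# Confirm Hive itself can reach the metastore
hive -e "show databases;"

# If Spark fell back to embedded Derby, a stale lock in the launch directory can block new sessions
ls metastore_db/*.lck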

0 Likes

#12

What is the code you are running? I have run all these commands successfully.

spark.sql("use training_retail")
spark.sql("show tables")
spark.sql("show tables").show
spark.sql("select count(1) from orders").show
spark.sql("select count(1) from order_items").show

0 Likes

#13

@itversity

I am still getting issues

19/06/27 23:56:52 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
19/06/27 23:56:52 ERROR Utils: Uncaught exception in thread Yarn application state monitor
org.apache.spark.SparkException: Error sending message [message = RequestExecutors(0,0,Map())]
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:118)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.doRequestTotalExecutors(YarnSchedulerBackend.scala:123)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:467)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:88)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:188)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:448)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1584)
at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1739)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1738)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:145)
Caused by: org.apache.spark.SparkException: Error sending message [message = RequestExecutors(0,0,Map())]
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:118)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$3.apply$mcV$sp(YarnSchedulerBackend.scala:268)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$3.apply(YarnSchedulerBackend.scala:268)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$3.apply(YarnSchedulerBackend.scala:268)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to send RPC 6853968660228119095 to wn01.itversity.com/172.16.1.102:46820: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
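
(A ClosedChannelException while sending RPCs to a worker node generally means the YARN containers were killed out from under the driver. To see why, pull the aggregated application logs; the application id below is a placeholder for the one from your own session:)

# Fetch aggregated YARN logs for the failed session (replace the placeholder id)
yarn logs -applicationId <application_id> | less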

0 Likes

#14

@Shubham_Gupta

Could you please reply with the command you are trying?

0 Likes