Cluster is very busy right now; not able to start pyspark


#1

Not able to start pyspark; it has been waiting for a long time…

Looking at YARN, there is only 2 GB of memory available, which is not enough to start an executor.

Could you please check what is using the most memory and kill it?
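
For what it is worth, here is a minimal sketch of how the heaviest applications could be found and killed through the YARN ResourceManager REST API. The ResourceManager address (`rm-host:8088`) is a placeholder for this cluster, and it assumes the REST API is reachable without Kerberos; an admin would normally do the same thing with `yarn application -list` and `yarn application -kill <appId>`.

```python
import requests

RM = "http://rm-host:8088"  # placeholder: ResourceManager web address for this cluster

# List RUNNING applications together with their current resource usage.
resp = requests.get(f"{RM}/ws/v1/cluster/apps", params={"states": "RUNNING"})
resp.raise_for_status()
apps = (resp.json().get("apps") or {}).get("app", [])

# Sort by allocated memory so the biggest consumer comes first.
for app in sorted(apps, key=lambda a: a.get("allocatedMB", 0), reverse=True):
    print(app["id"], app["user"], app.get("allocatedMB"), "MB,",
          app.get("allocatedVCores"), "vcores,", app["name"])

# Kill the top consumer (needs admin/owner rights on the application).
if apps:
    worst = max(apps, key=lambda a: a.get("allocatedMB", 0))
    requests.put(f"{RM}/ws/v1/cluster/apps/{worst['id']}/state",
                 json={"state": "KILLED"})
```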


#2

Maybe this is related: I am unable to execute my Hive query either.


#3

When I checked a few hours back, someone had been running a MapReduce job for hours, taking all the cores on the system. Now I am not able to do anything on the cluster, and it is unusable.

I think you also need to restrict the number of cores per user or cores per job; right now a single user can take all the cores/memory. Are you only restricting the number of users/logins?
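
Until something like that is enforced on the cluster side (typically per-queue capacity and user limits in the YARN scheduler), each job can at least cap its own footprint. Below is a minimal sketch of a capped pyspark session; the executor counts and sizes are illustrative numbers, not recommendations for this cluster.

```python
from pyspark.sql import SparkSession

# A capped pyspark session: a fixed number of small executors instead of
# letting one job grab whatever YARN has free.
spark = (
    SparkSession.builder
    .appName("capped-session")                # illustrative app name
    .master("yarn")
    .config("spark.executor.instances", "4")  # at most 4 executors
    .config("spark.executor.cores", "2")      # 2 cores per executor
    .config("spark.executor.memory", "2g")    # 2 GB per executor
    .getOrCreate()
)
```

This only keeps an individual job polite, though; the per-user limits still have to come from your side.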

Please resolve these issues ASAP. The cluster is of no use if we keep running into these kinds of problems.