PySpark session interrupted every few seconds with the logs below:

03:48:39 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 2
20/10/13 03:48:39 INFO ExecutorAllocationManager: Removing executor 2 because it has been idle for 60 seconds (new desired total will be 0)
20/10/13 03:48:45 INFO YarnClientSchedulerBackend: Disabling executor 2.
20/10/13 03:48:45 INFO DAGScheduler: Executor lost: 2 (epoch 0)
20/10/13 03:48:45 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
20/10/13 03:48:45 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2,, 39965)
20/10/13 03:48:45 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
20/10/13 03:48:45 INFO YarnScheduler: Executor 2 on killed by driver.
20/10/13 03:48:45 INFO ExecutorAllocationManager: Existing executor 2 has been removed (new total is 0)

(The log lines above were printing over my shell input, e.g. `.first()` and `map = products.filter(lambda p: p.spl...`, so the original paste had them interleaved.)
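For what it's worth, these are INFO-level messages from Spark's dynamic allocation removing executors that have been idle for 60 seconds; they are noisy but not a crash. A minimal sketch of two ways to quiet them (the 600s timeout value here is just an illustration, not a recommended setting):

```shell
# Keep idle executors alive longer so they are not torn down mid-session
# (spark.dynamicAllocation.executorIdleTimeout defaults to 60s)
pyspark --conf spark.dynamicAllocation.executorIdleTimeout=600s
```

Alternatively, inside an already-running PySpark shell, `sc.setLogLevel("WARN")` hides INFO-level chatter without changing the allocation behavior.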

Hi @Rashmi_Nayak,

Please share the code you tried.

This happens to me as well, even with no code run: just after starting PySpark, the logs keep printing every so often.

@Amr_Kamal The issue is fixed; please try launching PySpark again now.