Spark issue - admin log messages are printed automatically every 10-12 seconds

After launching a Spark session in the BigData lab, the set of log messages below is printed automatically every 10-12 seconds, making it extremely difficult to practise Spark commands. I'm launching Spark with the command "pyspark --master yarn --conf spark.ui.port=12667", 12667 being an arbitrary port number. Please provide a solution.

INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 3
20/11/16 00:03:50 INFO ExecutorAllocationManager: Removing executor 3 because it has been idle for 60 seconds (new desired total will be 0)
20/11/16 00:03:51 INFO YarnClientSchedulerBackend: Disabling executor 3.
20/11/16 00:03:51 INFO DAGScheduler: Executor lost: 3 (epoch 0)
20/11/16 00:03:51 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
20/11/16 00:03:51 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(3, wn02.itversity.com, 56885)
20/11/16 00:03:51 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor
20/11/16 00:03:51 INFO YarnScheduler: Executor 3 on wn02.itversity.com killed by driver.
20/11/16 00:03:51 INFO ExecutorAllocationManager: Existing executor 3 has been removed (new total is 0)

Hi @INBH,

please do your practice on pyspark2.

Below is the command to launch pyspark2:

pyspark2 --master yarn --conf spark.ui.port=0
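For context: the repeating messages are harmless INFO-level logs from Spark's dynamic allocation. With dynamic allocation enabled, the ExecutorAllocationManager kills any executor that has been idle for the timeout (60 seconds by default, per spark.dynamicAllocation.executorIdleTimeout) and requests a new one when work arrives, which produces the kill/remove log lines you see. If the lab cluster allows these conf overrides, a sketch of a quieter launch would be:

```shell
# Sketch only, assuming the cluster permits overriding dynamic allocation.
# Disabling it keeps a fixed set of executors alive, so the idle-timeout
# kill messages stop. spark.executor.instances=2 is an arbitrary example value.
pyspark2 --master yarn \
  --conf spark.ui.port=0 \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.executor.instances=2
```

Alternatively, once the shell is up you can run sc.setLogLevel("WARN") to hide INFO messages without changing the allocation behaviour. Note that spark.ui.port=0 simply tells Spark to bind the UI to a random free port, which avoids port conflicts with other lab users.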