Hortonworks Spark does not read from HDFS when pyspark started with yarn

While preparing for the Hortonworks Spark developer exam, I ran into a puzzling problem: if I start pyspark with YARN as the master, it does not read data from HDFS. The snippet below fails, saying the directory does not exist.

pyspark --master yarn
rdd = sc.textFile("HDFS path")
rdd.first()

But after wasting many minutes, I found that the code below works.
pyspark
rdd = sc.textFile("HDFS path")
rdd.first()

This one works fine. Why does it not work with YARN?
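One guess: maybe pyspark under YARN resolves a bare path against a different default filesystem, in which case a fully qualified hdfs:// URI passed to sc.textFile might sidestep the problem. A minimal stdlib sketch of qualifying a bare path (the NameNode address here is hypothetical; the real one comes from fs.defaultFS in the cluster's core-site.xml):

```python
# Sketch only: "namenode:8020" is a hypothetical NameNode address, not
# the actual address of the lab cluster.
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2.7, as in the lab

DEFAULT_FS = "hdfs://namenode:8020"

def qualify(path):
    """Prefix a bare path with the default filesystem URI; leave
    already-qualified URIs (hdfs://, file://) untouched."""
    if urlparse(path).scheme:
        return path
    return DEFAULT_FS + path

# Usage in pyspark would then look like:
#   rdd = sc.textFile(qualify("/user/data.txt"))
# which resolves against HDFS regardless of how pyspark was started.
```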

@Fisseha_Berhane,

Which version of Spark is being launched when you start pyspark with YARN?
You can get this information from the shell's verbose startup logs or from

'sc.version' (a property in PySpark, so no parentheses)

It might be that this specific version of Spark is not configured for YARN in the itversity labs.
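It could also be a configuration mismatch rather than a version issue: if the driver picks up the wrong Hadoop configuration, bare paths stop resolving against HDFS. Assuming a standard HDP layout, check fs.defaultFS in /etc/hadoop/conf/core-site.xml; it should look something like this (the host name here is a guess, not the lab's actual NameNode):

```xml
<!-- /etc/hadoop/conf/core-site.xml (typical HDP location; host is hypothetical) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://sandbox.hortonworks.com:8020</value>
</property>
```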

• HDP 2.4.0
• Spark 1.6
• Scala 2.10.5
• Python 2.7.6 (pyspark)