RDD not created due to stage failure

Step 1) Imported EmployeeName.csv to HDFS at /spark1/EmployeeName.csv using the put command.
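For reference, the put command would look something like this (assuming the CSV sits in the current local directory):
hdfs dfs -put EmployeeName.csv /spark1/EmployeeName.csv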
Step 2) Executed the command below:
val name = sc.textFile("spark1/EmployeeName.csv")

The RDD got created; I verified it using:
name.collect().foreach(println)

17/02/20 15:34:03 INFO DAGScheduler: Job 1 finished: collect at :30, took 0.024327 s
E01,Lokesh
E02,Bhupesh
E03,Amit
E04,Ratan
E05,Dinesh
E06,Pavan
E07,Tejas
E08,Sheela
E09,Kumar
E10,Venkat

Step 3) Now when I try to execute the command below and run the same collect action on the result, the following error is encountered and no output is produced:
val namePairRDD = name.map(x => (x.split(",")(0), x.split(",")(1)))
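followed by the action (the log below reports the failure at a collect call, so presumably the same action as in Step 2):
namePairRDD.collect().foreach(println)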

17/02/20 15:29:46 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/02/20 15:29:46 INFO TaskSchedulerImpl: Cancelling stage 0
17/02/20 15:29:46 INFO DAGScheduler: ResultStage 0 (collect at :32) failed in 0.142 s
17/02/20 15:29:46 INFO DAGScheduler: Job 0 failed: collect at :32, took 0.192356 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent failure: Lost task 1.0 in stage 0.0 (TID 1, localhost): java.lang.ArrayIndexOutOfBoundsException: 1

There is a discrepancy between the timings you provided in Step 2 and Step 3:
the Step 2 log shows 15:34:03 hours, but
the Step 3 log shows the earlier time 15:29:46 hours.

I reproduced it again just now. Do you want me to paste the logs again?

I reproduced this again after renaming the file from /spark1/EmployeeName.csv to /spark1/EmployeeName.txt.
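Presumably this was the same sequence as before, re-run against the renamed file:
val name = sc.textFile("spark1/EmployeeName.txt")
val namePairRDD = name.map(x => (x.split(",")(0), x.split(",")(1)))
namePairRDD.collect().foreach(println)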

17/02/20 18:45:27 ERROR TaskSetManager: Task 1 in stage 7.0 failed 1 times; aborting job
17/02/20 18:45:27 INFO TaskSchedulerImpl: Removed TaskSet 7.0, whose tasks have all completed, from pool
17/02/20 18:45:27 INFO TaskSchedulerImpl: Cancelling stage 7
17/02/20 18:45:27 INFO DAGScheduler: ResultStage 7 (collect at :32) failed in 0.019 s
17/02/20 18:45:27 INFO DAGScheduler: Job 7 failed: collect at :32, took 0.024472 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 7.0 failed 1 times, most recent failure: Lost task 1.0 in stage 7.0 (TID 15, localhost): java.lang.ArrayIndexOutOfBoundsException: 1
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(:29)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(:29)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)

Here is the data in the EmployeeName.csv file:

EmployeeName.csv
E01,Lokesh
E02,Bhupesh
E03,Amit
E04,Ratan
E05,Dinesh
E06,Pavan
E07,Tejas
E08,Sheela
E09,Kumar
E10,Venkat

:neutral_face: I executed the same instructions and was able to see the action results of namePairRDD.

I am still not able to resolve this. I am also facing issues while starting spark-shell, so I have to manually specify another port:
spark-shell --conf "spark.ui.port=10101"

I have no idea whether this is the root cause of my issue…

Durga sir, your help is needed.

Issue resolved. The problem was a newline at the end of the file; after removing it, namePairRDD got generated. Thank you, everyone!
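For anyone hitting the same thing: a trailing blank line comes through sc.textFile as an empty string, and "".split(",") returns an array of length 1, so indexing element (1) throws ArrayIndexOutOfBoundsException: 1. A defensive variant that skips blank or malformed lines instead of crashing (a minimal sketch, not part of the original exercise):

val namePairRDD = name
  .map(_.split(","))
  .filter(_.length >= 2)                  // drop blank/malformed lines instead of failing the task
  .map(fields => (fields(0), fields(1)))  // safe now: at least two fields guaranteed

Filtering on the field count keeps the job alive even if other malformed rows show up later in the data.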