Unable to access local file using PySpark


#1

Hi team,

This is the first time I have configured Python and Spark on my system (Windows 10, 64-bit, 16 GB RAM). I am trying to access a file on my local desktop through PySpark code.
I am getting the following error:
"
py4j.protocol.Py4JJavaError: An error occurred while calling o18.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/Admin/Desktop/sparkRDD.txt "

The command executed is:

sc.textFile("C:\Users\Admin\Desktop\sparkRDD.txt").first()

I have also attached the screenshot of the error.

Please let me know if there is something that I have missed in my process.

Thanks and Regards,
Siddhant Chowdhury


#2

I am facing the same issue. Can someone please answer this? I am trying Exercise 02 of Durga Sir's list, where it is specifically mentioned to load from local.

orders=sc.textFile("/home/shalinisarathykc/retail_db/orders") ----> not working (data is available)
orders=sc.textFile("/public/retail_db/orders") -----> works

PS: I am able to load from HDFS, but I want to know what the issue is with loading data from local.
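
A quick sanity check (just a sketch, using the path above) is to confirm the path exists on the node where you launched pyspark. Note that this only checks the gateway node where the driver runs, not the worker nodes that execute the tasks:

import os
# True means the path exists on the gateway/driver node only;
# executors on other nodes may still not see it.
print(os.path.exists("/home/shalinisarathykc/retail_db/orders"))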


#3

@siddhant @shalinisarathykc

To read a file that is on the local file system, use the command below:

sc.textFile("file:///path/filename").first()
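
For example, with the Windows path from post #1, the call would look something like this (a sketch; note the forward slashes after the file:// scheme, and adjust the path to your own file):

# file:// URI with forward slashes; this form works on Windows as well
sc.textFile("file:///C:/Users/Admin/Desktop/sparkRDD.txt").first()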

Thanks & Regards,
Sunil Abhishek


#4

orders=sc.textFile("file:///home/shalinisarathykc/retail_db/orders").first()
I used this command, but I still get the same problem.

Caused by: java.io.FileNotFoundException: File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:348)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more


#6

@shalinisarathykc

orders=sc.textFile("file:///home/shalinisarathykc/retail_db/orders").first()

This code works for reading the file from the local system.

If the file exists in HDFS, try the code below:

orders=sc.textFile("/user/username/path")

In my case, you can see the screenshot below.


#7

The file is in my local file system, but when I try to read it I get the error below, as already mentioned in this thread.


orders=sc.textFile("File:///home/shalinisarathykc/retail_db/orders")
18/05/29 11:59:30 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 336.7 KB, free 709.6 KB)
18/05/29 11:59:30 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 28.4 KB, free 737.9 KB)
18/05/29 11:59:30 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.16.1.113:38813 (size: 28.4 KB, free: 511.1 MB)
18/05/29 11:59:30 INFO SparkContext: Created broadcast 2 from textFile at NativeMethodAccessorImpl.java:-2

orders.first()
18/05/29 11:59:33 INFO FileInputFormat: Total input paths to process : 1
18/05/29 11:59:33 INFO SparkContext: Starting job: runJob at PythonRDD.scala:393
18/05/29 11:59:33 INFO DAGScheduler: Got job 1 (runJob at PythonRDD.scala:393) with 1 output partitions
18/05/29 11:59:33 INFO DAGScheduler: Final stage: ResultStage 1 (runJob at PythonRDD.scala:393)
18/05/29 11:59:33 INFO DAGScheduler: Parents of final stage: List()
18/05/29 11:59:33 INFO DAGScheduler: Missing parents: List()
18/05/29 11:59:33 INFO DAGScheduler: Submitting ResultStage 1 (PythonRDD[5] at RDD at PythonRDD.scala:43), which has no missing parents
18/05/29 11:59:33 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 4.9 KB, free 742.8 KB)
18/05/29 11:59:33 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 3.1 KB, free 745.9 KB)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 172.16.1.113:38813 (size: 3.1 KB, free: 511.1 MB)
18/05/29 11:59:33 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1008
18/05/29 11:59:33 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (PythonRDD[5] at RDD at PythonRDD.scala:43)
18/05/29 11:59:33 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
18/05/29 11:59:33 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 4, wn04.itversity.com, partition 0,PROCESS_LOCAL, 2157 bytes)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on wn04.itversity.com:53299 (size: 3.1 KB, free: 511.1 MB)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on wn04.itversity.com:53299 (size: 28.4 KB, free: 511.1 MB)
18/05/29 11:59:33 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 4, wn04.itversity.com): java.io.FileNotFoundException: File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:348)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

18/05/29 11:59:33 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 5, wn01.itversity.com, partition 0,PROCESS_LOCAL, 2157 bytes)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on wn01.itversity.com:37876 (size: 3.1 KB, free: 511.1 MB)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on wn01.itversity.com:37876 (size: 28.4 KB, free: 511.1 MB)
18/05/29 11:59:33 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 5) on executor wn01.itversity.com: java.io.FileNotFoundException (File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist) [duplicate 1]
18/05/29 11:59:33 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 6, wn01.itversity.com, partition 0,PROCESS_LOCAL, 2157 bytes)
18/05/29 11:59:33 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 6) on executor wn01.itversity.com: java.io.FileNotFoundException (File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist) [duplicate 2]
18/05/29 11:59:33 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 7, wn02.itversity.com, partition 0,PROCESS_LOCAL, 2157 bytes)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on wn02.itversity.com:41636 (size: 3.1 KB, free: 511.1 MB)
18/05/29 11:59:33 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on wn02.itversity.com:41636 (size: 28.4 KB, free: 511.1 MB)
18/05/29 11:59:33 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 7) on executor wn02.itversity.com: java.io.FileNotFoundException (File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist) [duplicate 3]
18/05/29 11:59:33 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job
18/05/29 11:59:33 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
18/05/29 11:59:33 INFO YarnScheduler: Cancelling stage 1
18/05/29 11:59:33 INFO DAGScheduler: ResultStage 1 (runJob at PythonRDD.scala:393) failed in 0.216 s
18/05/29 11:59:33 INFO DAGScheduler: Job 1 failed: runJob at PythonRDD.scala:393, took 0.221073 s
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 1315, in first
rs = self.take(1)
File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 1297, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/context.py", line 939, in runJob
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "/usr/hdp/2.5.0.0-1245/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/usr/hdp/2.5.0.0-1245/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 7, wn02.itversity.com): java.io.FileNotFoundException: File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:348)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1433)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1421)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1420)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1420)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:801)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:622)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1856)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1869)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1882)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: File file:/home/shalinisarathykc/retail_db/orders/part-00000 does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:422)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:146)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:348)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more


#8

@shalinisarathykc

On this cluster Spark runs on YARN, so a file:// path must exist on every worker node that executes a task; your file exists only on the gateway node, which is why the tasks on wn01/wn02/wn04 fail with FileNotFoundException. First, copy the file from your gateway to HDFS using hadoop fs -copyFromLocal filename /user/username/path (in your case, hadoop fs -copyFromLocal orders /user/shalinisarathykc/retail_db/orders).

Then launch pyspark and try reading the file using the code below:

orders=sc.textFile("/user/username/path")

Thanks & Regards,
Sunil Abhishek