Not able to read deckofcards.txt using pyspark

pyspark

#1

I am following the CCA 175 certification playlist and am currently at video 14, Setup Development Environment - Install WinUtils - integrate Windows and HDFS.

I am trying to read the file deckofcards.txt using pyspark, but I am getting the error below. Can you please help?

sc.textFile("C:\deckofcards.txt").first()
18/10/02 09:44:24 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.4 KB, free 511.0 MB)
18/10/02 09:44:24 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 511.0 MB)
18/10/02 09:44:24 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50089 (size: 13.9 KB, free: 511.1 MB)
18/10/02 09:44:24 INFO SparkContext: Created broadcast 0 from textFile at NativeMethodAccessorImpl.java:-2
18/10/02 09:44:24 INFO FileInputFormat: Total input paths to process : 1
18/10/02 09:44:24 INFO SparkContext: Starting job: runJob at PythonRDD.scala:393
18/10/02 09:44:24 INFO DAGScheduler: Got job 0 (runJob at PythonRDD.scala:393) with 1 output partitions
18/10/02 09:44:24 INFO DAGScheduler: Final stage: ResultStage 0 (runJob at PythonRDD.scala:393)
18/10/02 09:44:24 INFO DAGScheduler: Parents of final stage: List()
18/10/02 09:44:24 INFO DAGScheduler: Missing parents: List()
18/10/02 09:44:24 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[2] at RDD at PythonRDD.scala:43), which has no missing parents
18/10/02 09:44:24 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.8 KB, free 511.0 MB)
18/10/02 09:44:24 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.0 KB, free 511.0 MB)
18/10/02 09:44:24 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:50089 (size: 3.0 KB, free: 511.1 MB)
18/10/02 09:44:24 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
18/10/02 09:44:24 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (PythonRDD[2] at RDD at PythonRDD.scala:43)
18/10/02 09:44:24 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
18/10/02 09:44:24 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2126 bytes)
18/10/02 09:44:24 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/10/02 09:44:24 INFO HadoopRDD: Input split: file:/C:/deckofcards.txt:0+346
18/10/02 09:44:25 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
18/10/02 09:44:25 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
18/10/02 09:44:25 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
18/10/02 09:44:25 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
18/10/02 09:44:25 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
18/10/02 09:44:25 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
at org.apache.spark.api.python.PythonRunner$$anon$1.&lt;init&gt;(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/10/02 09:44:25 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
at org.apache.spark.api.python.PythonRunner$$anon$1.&lt;init&gt;(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

18/10/02 09:44:25 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
18/10/02 09:44:25 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
18/10/02 09:44:25 INFO TaskSchedulerImpl: Cancelling stage 0
18/10/02 09:44:25 INFO DAGScheduler: ResultStage 0 (runJob at PythonRDD.scala:393) failed in 0.532 s
18/10/02 09:44:25 INFO DAGScheduler: Job 0 failed: runJob at PythonRDD.scala:393, took 0.632723 s
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\spark-1.6.3-bin-hadoop2.6\python\pyspark\rdd.py", line 1315, in first
rs = self.take(1)
File "C:\spark-1.6.3-bin-hadoop2.6\python\pyspark\rdd.py", line 1297, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
File "C:\spark-1.6.3-bin-hadoop2.6\python\pyspark\context.py", line 939, in runJob
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
File "C:\spark-1.6.3-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
File "C:\spark-1.6.3-bin-hadoop2.6\python\pyspark\sql\utils.py", line 45, in deco
return f(*a, **kw)
File "C:\spark-1.6.3-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
at org.apache.spark.api.python.PythonRunner$$anon$1.&lt;init&gt;(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
at org.apache.spark.api.python.PythonRunner$$anon$1.&lt;init&gt;(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
… 1 more


#2

Give a double backslash in the path and try the query again.
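For example, any of these forms of the path should be accepted on Windows (the file name below is just the one from your original command):

sc.textFile("C:\\deckofcards.txt").first()        # escape the backslash
sc.textFile(r"C:\deckofcards.txt").first()        # or use a raw string
sc.textFile("file:///C:/deckofcards.txt").first() # or a file:// URI with forward slashes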


#3

Tried that. It didn't work; I am still getting the same connection reset error.


#4

@Komal_Raulkar Can you paste a screenshot of your environment variables and PATH settings?
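If it is easier, they can also be printed from inside the pyspark shell; a minimal sketch, assuming the usual Spark/Hadoop variable names:

import os
# print the variables most relevant to a Windows Spark setup
for name in ["SPARK_HOME", "HADOOP_HOME", "JAVA_HOME", "PYSPARK_PYTHON", "PATH"]:
    print("%s=%s" % (name, os.environ.get(name)))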