PythonRDD error

pyspark
apache-spark

Hi Durga,

First, I can't seem to add all the details; the forum says "Sorry, new users can only add 2 links" and I don't know what that means.

Please help. I think PySpark is not working properly: whenever I run an action on a PythonRDD I get an error. Transformations work, but an action like count() results in an error (there is a minimal sketch of what I mean after the stack trace below):

distance
PythonRDD[5] at collect at <stdin>:1

distance.take(10)
[u'2475.00', u'2475.00', u'2475.00', u'2475.00', u'3784.00', u'3711.00', u'3711.00', u'3784.00', u'2475.00', u'2475.00']

distance.count()
17/12/31 14:38:44 INFO SparkContext: Starting job: count at <stdin>:1

org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/worker.py", line 111, in main
    process()
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 2346, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 2346, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 2346, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 317, in func
    return f(iterator)
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 1004, in
IndexError: list index out of range

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
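
To show what I mean by transformations working while a full scan fails, here is a minimal, hypothetical sketch (not my actual code; the sample data, field index, and app name are made up for illustration) of how a single malformed input record can cause exactly this IndexError only when an action has to read every record:

from pyspark import SparkContext

sc = SparkContext(appName="index-error-sketch")  # in the pyspark shell, sc already exists

# Hypothetical data: most records have 4 comma-separated fields,
# but the last record is malformed and has fewer.
lines = sc.parallelize([
    "2015,ORD,LAX,2475.00",
    "2015,JFK,SFO,3784.00",
    "badrecord",                     # split(",")[3] raises IndexError here
], 3)

# map() is a lazy transformation, so this line never fails by itself,
# even though the lambda will blow up on the malformed record.
distance = lines.map(lambda line: line.split(",")[3])

# take(1) only evaluates as many records as it needs, starting from the
# first partition, so it can succeed without ever touching the bad record.
print(distance.take(1))

# count() evaluates every partition, so a worker reaches the malformed
# record and fails with IndexError: list index out of range.
print(distance.count())

If that is what is happening in my data, filtering out short records before indexing, e.g. lines.filter(lambda l: len(l.split(",")) > 3), should avoid the error, but I would like to confirm.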