Getting an error while executing a function in PySpark

Hi,

I was trying to execute the following function in PySpark (the code is from GitHub) and I am getting the error attached below. Please help.

def getTopN(rec, topN):
    import itertools
    x = sorted(rec[1], key=lambda k: float(k.split(",")[4]), reverse=True)
    # Keep only the first topN of the sorted records
    return list(itertools.islice(x, 0, topN))

for i in productsMap.groupByKey().flatMap(lambda x: getTopN(x, 2)).collect():
    print(i)

Error traceback:

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/worker.py", line 111, in main
    process()
  File "/usr/lib/spark/python/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/lib/spark/python/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "<stdin>", line 1, in <lambda>
  File "<stdin>", line 3, in getTopN
  File "<stdin>", line 3, in <lambda>
ValueError: empty string for float()
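
The traceback shows float() being handed an empty string, so at least one grouped record has a blank fifth field (index 4) and the sort key fails on the worker. Below is a minimal sketch of a defensive version, assuming the records are comma-separated strings as above; skipping records with a blank or malformed price is my assumption, not something from the original GitHub code:

def getTopN(rec, topN):
    import itertools
    # Parse the fifth field as a float; return None for blank or
    # malformed prices instead of raising (assumption: such records
    # should simply be skipped)
    def price(record):
        fields = record.split(",")
        try:
            return float(fields[4])
        except (IndexError, ValueError):
            return None
    valid = [r for r in rec[1] if price(r) is not None]
    x = sorted(valid, key=price, reverse=True)
    return list(itertools.islice(x, 0, topN))

To confirm the bad input first, you could count the offending rows (assuming productsMap holds (key, record-string) pairs): productsMap.filter(lambda x: x[1].split(",")[4] == "").count()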