Regarding list index out of range error

pyspark
apache-spark

#1

Hi,
I am getting a "list index out of range" error (screenshot attached below) when I try to execute this code:

crime = sc.textFile("/public/crime/json")
crimeMap = crime.map(lambda rec: (rec.split(',(?=(?:[^"]*"[^"]*")*[^"]*$)', -1)[0], rec.split(',(?=(?:[^"]*"[^"]*")*[^"]*$)', -1)[5]))
crimecount = crimeMap.aggregateByKey(0, lambda acc, value: acc + 1, lambda acc, value: acc + value)
for i in crimecount.take(100):
    print(i)
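
To narrow it down, I also tried the same split on a single sample line in a plain Python shell (the record below is made up, just to imitate a quoted field that contains a comma):

# made-up sample record with a comma inside a quoted field
rec = '10000092,HY189866,03/18/2015 07:44:00 PM,"THEFT, RETAIL",THEFT'
parts = rec.split(',(?=(?:[^"]*"[^"]*")*[^"]*$)', -1)
print(len(parts))   # prints 1, so the separator is never matched
print(parts[5])     # IndexError: list index out of range

So the split seems to return only one element per record. Is str.split the wrong method to use with a regex pattern here?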

Can anyone tell me what is causing this error?