Spark JSON load issue

Command used:

import json

ab = sc.textFile("file:///home/sathish274592/people.json")
data = ab.map(lambda x: json.loads(x))
data.collect()

Input file:

{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}

Error:

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "<stdin>", line 1, in <lambda>
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 365, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 383, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
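The ValueError means json.loads was handed at least one line that is not valid JSON; blank lines, or typographic quotes pasted in from a document, are common culprits. A quick way to find the offending lines (a sketch; is_bad is a hypothetical helper, and the path is the one from the question):

import json

def is_bad(line):
    # True for lines json.loads cannot decode (blank lines, smart quotes, ...)
    try:
        json.loads(line)
        return False
    except ValueError:
        return True

# Print the raw lines that trigger the ValueError
bad = sc.textFile("file:///home/sathish274592/people.json").filter(is_bad)
print(bad.collect())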

You have to use the SQLContext for it: sqlContext.read.json reads exactly this one-JSON-object-per-line format and returns a DataFrame, so nothing has to be parsed by hand. In Scala:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read.json("examples/src/main/resources/people.json")
df.show()
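Since the question uses PySpark, the equivalent in Python would be (a sketch against the Spark 1.x API that ships with HDP 2.3.4, using the path from the question):

from pyspark.sql import SQLContext

# Build a SQLContext on top of the existing SparkContext
sqlContext = SQLContext(sc)

# read.json expects one JSON object per line, matching the input file above
df = sqlContext.read.json("file:///home/sathish274592/people.json")
df.show()
df.printSchema()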