import cx_Oracle
import pandas as pd

conn = cx_Oracle.connect(user='usr', password='pwd', dsn='host:1521/servicename')
sql = "SELECT * FROM emp"
curs = conn.cursor()
curs.execute(sql)
for row in curs:
    print(row)
id name sal
1 abc 1000
2 def 2000
3 ghi 3000
I would like to convert the above output into a JSON file, including the table's column names. How can I do that? Please help with some sample code.
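One approach, sketched below under the assumption that the query runs through a `cx_Oracle` cursor: take the column names from `curs.description`, zip each row into a dict, and dump the list of records with the standard `json` module. The `rows_to_json` helper and the `emp.json` path are illustrative names, not part of the original code.

```python
import json

def rows_to_json(columns, rows, path=None):
    """Turn query rows into a list of dicts keyed by column name,
    optionally writing the result to a JSON file."""
    records = [dict(zip(columns, row)) for row in rows]
    if path:
        with open(path, "w") as f:
            json.dump(records, f, indent=2)
    return records

# With a live cx_Oracle cursor, the column names come from curs.description:
#   columns = [d[0] for d in curs.description]
#   records = rows_to_json(columns, curs.fetchall(), "emp.json")

# Demonstration with the sample data from the question:
columns = ["ID", "NAME", "SAL"]
rows = [(1, "abc", 1000), (2, "def", 2000), (3, "ghi", 3000)]
print(json.dumps(rows_to_json(columns, rows), indent=2))
```

Since `pandas` is already imported, another route is to load the result into a DataFrame and export it directly, e.g. `pd.read_sql(sql, conn).to_json("emp.json", orient="records")`, which likewise keeps the column names as JSON keys.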