Unable to run PySpark in PyCharm, getting an error

I have set up Spark on my Windows machine, and from the command prompt I am able to launch both pyspark and spark-shell.
But when I try to configure PySpark in PyCharm, it throws the error below:
C:\Users\SKusumanchi\PycharmProjects\Test\venv\Scripts\python.exe C:/Users/SKusumanchi/PycharmProjects/Test/prac/SparkTest.py
'cmd' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
  File "C:/Users/SKusumanchi/PycharmProjects/Test/prac/SparkTest.py", line 2, in <module>
    spark = SparkSession.builder.appName("first").master("local").getOrCreate()
  File "C:\spark-2.2.0-bin-hadoop2.6\python\pyspark\sql\session.py", line 169, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "C:\spark-2.2.0-bin-hadoop2.6\python\pyspark\context.py", line 334, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "C:\spark-2.2.0-bin-hadoop2.6\python\pyspark\context.py", line 115, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
  File "C:\spark-2.2.0-bin-hadoop2.6\python\pyspark\context.py", line 283, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway(conf)
  File "C:\spark-2.2.0-bin-hadoop2.6\python\pyspark\java_gateway.py", line 95, in launch_gateway
    raise Exception("Java gateway process exited before sending the driver its port number")
Exception: Java gateway process exited before sending the driver its port number

Process finished with exit code 1

The sample code that triggers the error:

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("first").master("local").getOrCreate()

Please help, guys; I have been stuck on this issue for two days. The Python version is 2.7 and the Spark version is 2.2.0. I believe these versions are compatible, since the PySpark shell launches successfully from the command prompt and can read data there.
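Not a definitive fix, but a starting point: the "'cmd' is not recognized" line in the traceback means the interpreter PyCharm launched cannot even find cmd.exe, which usually indicates the run configuration's PATH is missing C:\Windows\System32, and the Java gateway error often follows from JAVA_HOME or SPARK_HOME not being visible to PyCharm. Below is a stdlib-only diagnostic sketch you could drop at the top of the script; the function name and messages are my own, not part of PySpark:

```python
import os
import shutil

def diagnose_spark_env(env=None):
    """Return a list of likely causes of the 'Java gateway process
    exited before sending the driver its port number' error.

    Checks the environment the script actually runs with (which, under
    PyCharm, can differ from the one your command prompt sees).
    """
    env = os.environ if env is None else env
    problems = []
    # Spark launches a JVM, so it needs to locate Java.
    if not env.get("JAVA_HOME"):
        problems.append("JAVA_HOME is not set")
    # pyspark needs SPARK_HOME to find Spark's launcher scripts.
    if not env.get("SPARK_HOME"):
        problems.append("SPARK_HOME is not set")
    # On Windows, "'cmd' is not recognized" means cmd.exe itself is not
    # on PATH, i.e. C:\Windows\System32 is missing from it.
    if os.name == "nt" and shutil.which("cmd", path=env.get("PATH", "")) is None:
        problems.append("cmd.exe not found on PATH; add C:\\Windows\\System32")
    return problems

if __name__ == "__main__":
    for problem in diagnose_spark_env():
        print("possible cause:", problem)
```

If it reports missing variables, you can set JAVA_HOME, SPARK_HOME, and PATH under Run > Edit Configurations > Environment variables in PyCharm and retry.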


Can someone help with the above error? I'm facing the same issue.

Hi, were you able to fix this?
I am also getting the same error.