Not able to execute first() on an RDD

from pyspark import SparkConf, SparkContext

sc = SparkContext(master="local", appName="spark-demo")

# Use a raw string so the backslashes in the Windows path
# are not interpreted as escape sequences
print(sc.textFile(r"C:\Users\M1046255\Desktop\myfold\test\test12.txt").first())

C:\Users\M1046255\PycharmProjects\mytest\venv\Scripts\python.exe C:/Users/M1046255/PycharmProjects/mytest/Sparkdemo.py
2019-02-28 16:59:51 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
[Stage 0:> (0 + 1) / 1]2019-02-28 16:59:59 ERROR Executor:91 - Exception in task 0.0 in stage 0.0 (TID 0)
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:428)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:421)