UnicodeEncodeError in PySpark

Hi Team,

I am getting the below error while writing data to a text file in PySpark:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xc9' in position 10: ordinal not in range(128)
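From what I understand, u'\xc9' is the character 'É'. Assuming this runs under Python 2, the same error can be reproduced in a plain interpreter by calling str() on that value, which makes me suspect the str() calls in my map() below rather than Spark itself:

>>> str(u'\xc9')
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc9' in position 0: ordinal not in range(128)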

This is the code I used:

h1b = spark.read.jdbc('jdbc:mysql://ms.itversity.com/h1b_db', 'h1b_data', properties={'user': 'h1b_user', 'password': 'itversity'})

from pyspark.sql.functions import count, col
h1b.groupBy('EMPLOYER_NAME', 'CASE_STATUS').agg(count('ID').alias('COUNT')).select('EMPLOYER_NAME', 'CASE_STATUS', col('COUNT')).orderBy(['EMPLOYER_NAME', 'COUNT'], ascending=[1, 0]).show()

h1b.groupBy('EMPLOYER_NAME', 'CASE_STATUS').agg(count('ID').alias('COUNT')).select('EMPLOYER_NAME', 'CASE_STATUS', col('COUNT')).orderBy(['EMPLOYER_NAME', 'COUNT'], ascending=[1, 0]).rdd.map(lambda a: str(a[0]).encode() + '\t' + a[1] + '\t' + str(a[2])).saveAsTextFile('/user/arun1990/problem20/solution22/')
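In case it helps, below is a minimal sketch of the fix I am considering, again assuming Python 2: build each output line as a unicode string and encode it to UTF-8 once at the end, so str() is never called on a value containing non-ASCII characters. I am not sure this is the idiomatic approach, so corrections are welcome.

from pyspark.sql.functions import count

# Assumption: Python 2, where str() on a unicode value implicitly encodes
# with the ASCII codec and fails on characters like u'\xc9'.
# Join the row's values (EMPLOYER_NAME, CASE_STATUS, COUNT) as unicode,
# then encode the whole line to UTF-8 once.
h1b.groupBy('EMPLOYER_NAME', 'CASE_STATUS') \
    .agg(count('ID').alias('COUNT')) \
    .orderBy(['EMPLOYER_NAME', 'COUNT'], ascending=[1, 0]) \
    .rdd \
    .map(lambda a: u'\t'.join(unicode(c) for c in a).encode('utf-8')) \
    .saveAsTextFile('/user/arun1990/problem20/solution22/')  # target directory must not already exist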

Could anyone please help me resolve this issue?