compressionCodecClass causing crash for saveAsTextFile() method

pyspark
apache-spark

#1

crimeDataSortedMap.saveAsTextFile("/user/adil/solutions/solution01/crimes_by_type_by_month", compressionCodecClass="org.apache.hadoop.io.compress.GZipCodec");
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/rdd.py", line 1503, in saveAsTextFile
compressionCodec = self.ctx._jvm.java.lang.Class.forName(compressionCodecClass)
File "/usr/hdp/2.5.0.0-1245/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/usr/hdp/2.5.0.0-1245/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:java.lang.Class.forName.
: java.lang.ClassNotFoundException: org.apache.hadoop.io.compress.GZipCodec
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)


#3

You can save compressed output from PySpark like this:

Launch the PySpark shell with the spark-csv package:

pyspark --packages com.databricks:spark-csv_2.10:1.4.0

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('path')
df.write.format('com.databricks.spark.csv').options(codec="org.apache.hadoop.io.compress.GzipCodec").save('path')
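Note the codec spelling in the options above: GzipCodec, with a lowercase "zip". As a follow-up sketch (assuming the same sqlContext and reusing the illustrative 'path' from above), the gzip-compressed part files can be read back with the same reader, since Spark decompresses .gz files transparently:

# Gzip part files are decompressed automatically on read, so the same
# reader options work on the compressed output ('path' is illustrative)
dfBack = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('path')
dfBack.show(5)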


#2

Typo correction: the codec class is GzipCodec, not GZipCodec:

crimeDataSortedMap.saveAsTextFile(path="/user/adil/solutions/solution01/crimes_by_type_by_month", compressionCodecClass="org.apache.hadoop.io.compress.GzipCodec")
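With the corrected class name, Class.forName resolves the codec and the save goes through. As a quick sanity check (a sketch assuming the same sc from the original shell session and reusing the same output path), the gzipped part files can be read back with sc.textFile, which decompresses .gz files transparently:

# Hadoop/Spark decompress .gz part files automatically on read, so the
# compressed output can be verified by loading it back as a plain text RDD
readBack = sc.textFile("/user/adil/solutions/solution01/crimes_by_type_by_month")
print(readBack.take(5))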