Cannot read Parquet file into a DataFrame



I have a compressed Parquet file in HDFS.
I need to read the file for aggregation, but it throws the following error:

org.apache.spark.sql.AnalysisException: Parquet type not yet supported: INT64 (TIMESTAMP_MILLIS)

Code used (the reader call got cut off when pasting; in Spark 1.6 it is presumably sqlContext.read.parquet):
val n = sqlContext.read.parquet("administration_set.parquet.gzip")

I found that Spark supports the INT64 (TIMESTAMP_MILLIS) Parquet type, but only from version 2.3 onward.

Is there any workaround for Spark 1.6?

Please suggest.
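For context, one workaround I have been considering is a sketch only, and it assumes access to a Spark 2.3+ shell and a timestamp column hypothetically named event_time (the real column name is not shown in my error): rewrite the file once with Spark 2.x, casting the TIMESTAMP_MILLIS column to a plain 64-bit long (epoch milliseconds), so the rewritten file contains only types Spark 1.6 understands.

```scala
// One-off rewrite job, run in a Spark 2.3+ spark-shell (NOT on the 1.6 cluster).
// The column name "event_time" and the output path are hypothetical placeholders.
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.LongType

val df = spark.read.parquet("administration_set.parquet.gzip")

// Casting a timestamp to long yields epoch seconds in Spark SQL; multiplying
// by 1000 gives milliseconds (sub-second precision is truncated). The result
// is written as a plain INT64 column with no TIMESTAMP_MILLIS logical type.
val fixed = df.withColumn("event_time", col("event_time").cast(LongType) * 1000L)

fixed.write.parquet("administration_set_long.parquet")
```

Spark 1.6 should then be able to read administration_set_long.parquet, and the long values can be turned back into timestamps on the 1.6 side with new java.sql.Timestamp(millis) if needed. Would something like this be the recommended approach?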
