Data Engineering Spark SQL - Tables - DML & Partitioning - Creating Tables using Parquet

Let us create the order_items table using the Parquet file format. Parquet files written by Spark are compressed by default with the Snappy algorithm. Before executing the code below, make sure a Spark context is running in your Notebook environment.
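Because Snappy is the default Parquet codec, no extra configuration is required. If you want to make the codec explicit, or switch to another one, Spark exposes a session-level property for it. A minimal sketch (the codec values shown are standard Spark options):

```sql
-- Snappy is already the default; setting it explicitly documents the intent.
SET spark.sql.parquet.compression.codec=snappy;

-- Other supported values include gzip and uncompressed, e.g.:
-- SET spark.sql.parquet.compression.codec=gzip;
```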

Key Concepts Explanation

Parquet File Format

Parquet is a columnar storage format that offers compression and efficient processing of data. Here’s an example of creating a table with Parquet file format:

CREATE TABLE order_items (
  order_item_id INT,
  order_item_order_id INT,
  order_item_product_id INT,
  order_item_quantity INT,
  order_item_subtotal FLOAT,
  order_item_product_price FLOAT
) STORED AS parquet;

Managing Tables with Parquet

Use the SHOW TABLES command to list the existing tables, and DESCRIBE FORMATTED order_items to view detailed information about the order_items table, including its storage format and location.
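For example, after creating the table you might run the following (a sketch; the exact output layout varies by Spark version):

```sql
-- List tables in the current database
SHOW TABLES;

-- Inspect columns, location, and storage details; for this table the
-- output should report Parquet input/output formats and SerDe
DESCRIBE FORMATTED order_items;
```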

Hands-On Tasks

  1. Start a Spark context in the Notebook environment.
  2. Create the order_items table with the specified columns using the Parquet file format.
  3. Check the details of the order_items table using the DESCRIBE FORMATTED order_items command.
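Since this series covers DML, a quick way to sanity-check the new table after completing the tasks above is to insert a sample row and read it back. The values below are made up for illustration:

```sql
-- Insert one sample row (illustrative values only)
INSERT INTO order_items VALUES (1, 1, 957, 1, 299.98, 299.98);

-- Verify the row was written to the Parquet-backed table
SELECT * FROM order_items;
```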


In this article, we learned how to create tables using the Parquet file format in Spark SQL. By following the provided steps and code examples, you can create efficient and compressed tables for data storage and processing. Remember to practice these concepts and engage with the community for further learning.

Click here to watch the video tutorial on creating tables using Parquet in Spark SQL.

Remember to sign up for our interactive labs to enhance your Spark SQL skills.
