Data Engineering Spark SQL - Managing Tables - DDL & DML - Overview of File Formats

In this article, we will delve into the basics of managing tables using Spark SQL focusing on Data Definition Language (DDL) and Data Manipulation Language (DML) commands. We will cover key concepts such as creating databases, tables, inserting data, and querying tables in Spark SQL.

Key Concepts Explanation

File Formats

Let’s discuss the various file formats supported by the STORED AS clause in Spark SQL:

  • TEXTFILE
  • ORC
  • PARQUET
  • AVRO
  • SEQUENCEFILE
  • JSONFILE (supported in recent versions of Hive)
  • Custom file formats (out of scope)

More details about these file formats are covered in the Overview of File Formats section later in this article.
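
As a quick illustration, here is a minimal sketch of how the STORED AS clause plugs into CREATE TABLE. The demo_* table names are placeholders, and STORED AS AVRO assumes your Spark build includes Hive's Avro SerDe:

-- placeholder tables; only the STORED AS clause differs between them
CREATE TABLE demo_text (id INT, name STRING) STORED AS TEXTFILE;
CREATE TABLE demo_orc (id INT, name STRING) STORED AS ORC;
CREATE TABLE demo_parquet (id INT, name STRING) STORED AS PARQUET;
CREATE TABLE demo_avro (id INT, name STRING) STORED AS AVRO;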

Managing Tables

We will cover the following key concepts and commands while managing tables:

Create Database

CREATE DATABASE IF NOT EXISTS itversity_sms
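
To inspect the database we just created, including its location under the warehouse directory, we can use DESCRIBE DATABASE:

DESCRIBE DATABASE itversity_sms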

Use Database

USE itversity_sms
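
We can confirm the switch took effect by checking the current database:

SELECT current_database()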

Create Table

CREATE TABLE students (
    student_id INT,
    student_first_name STRING,
    student_last_name STRING,
    student_phone_numbers ARRAY<STRING>,
    student_address STRUCT<street:STRING, city:STRING, state:STRING, zip:STRING>
) STORED AS parquet
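
To double-check that the table picked up the Parquet format, DESCRIBE FORMATTED lists the storage details (SerDe as well as input and output formats) along with the columns:

DESCRIBE FORMATTED students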

Insert Data

INSERT INTO students VALUES (1, 'Scott', 'Tiger', NULL, NULL)
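
The complex columns can be populated using the ARRAY and NAMED_STRUCT functions. In this sketch, the phone numbers come from a later example in this article, while the address values are made up for illustration:

-- illustrative values; the NAMED_STRUCT field names must match the column definition
INSERT INTO students VALUES (2, 'Donald', 'Duck',
    ARRAY('1234567890', '2345678901'),
    NAMED_STRUCT('street', 'A Street', 'city', 'One City', 'state', 'Some State', 'zip', '12345'))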

Query Table

SELECT * FROM students
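
Array elements can be accessed by index and struct fields with dot notation, which is handy when querying tables with complex columns:

SELECT student_id,
    student_phone_numbers[0] AS primary_phone,
    student_address.city AS city
FROM students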

Hands-On Tasks

Here are some hands-on tasks for you to practice and implement the concepts discussed (a sketch of one possible solution follows the list):

  1. Create a new database named ‘university_sms’.
  2. Create a table named ‘professors’ with appropriate column definitions.
  3. Insert data into the ‘professors’ table with relevant values.
  4. Query the ‘professors’ table to view the inserted data.
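
If you want to check your approach, here is one possible shape of the solution; the professors columns and sample values are assumptions, not a prescribed schema:

-- column names and sample values are assumptions
CREATE DATABASE IF NOT EXISTS university_sms;
USE university_sms;

CREATE TABLE professors (
    professor_id INT,
    professor_first_name STRING,
    professor_last_name STRING
) STORED AS parquet;

INSERT INTO professors VALUES (1, 'Ada', 'Lovelace');
SELECT * FROM professors;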

Conclusion

In this article, we have covered the basics of managing tables using Spark SQL, focusing on DDL and DML commands. It is essential to practice these concepts to gain a better understanding. Feel free to engage with the community for further learning and support.

Overview of File Formats

Let us go through the details of the different file formats supported by the STORED AS clause.

Let us start the Spark context for this Notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.
import org.apache.spark.sql.SparkSession

val username = System.getProperty("user.name")

val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    enableHiveSupport.
    appName(s"${username} | Spark SQL - Managing Tables - Basic DDL and DML").
    master("yarn").
    getOrCreate
If you are going to use CLIs, you can run Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using PySpark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
  • Review the supported file formats summarized below.

  • Supported File Formats

    • TEXTFILE
    • ORC
    • PARQUET
    • AVRO
    • SEQUENCEFILE (not that important these days)
    • JSONFILE (only available in recent versions of Hive)
    • and more

  • We can even specify custom file formats (out of scope)
%%sql

DROP DATABASE IF EXISTS itversity_sms CASCADE

%%sql

CREATE DATABASE IF NOT EXISTS itversity_sms

%%sql

USE itversity_sms

%%sql

CREATE TABLE students (
    student_id INT,
    student_first_name STRING,
    student_last_name STRING,
    student_phone_numbers ARRAY<STRING>,
    student_address STRUCT<street:STRING, city:STRING, state:STRING, zip:STRING>
) STORED AS parquet

%%sql

INSERT INTO students VALUES (1, 'Scott', 'Tiger', NULL, NULL)

%%sql

INSERT INTO students VALUES (2, 'Donald', 'Duck', ARRAY('1234567890', '2345678901'), NULL)

%%sql

-- the source cell is truncated after INSERT; an assumed completion that populates the STRUCT address column
INSERT INTO students VALUES (3, 'Mickey', 'Mouse', NULL,
    NAMED_STRUCT('street', 'A Street', 'city', 'One City', 'state', 'Some State', 'zip', '12345'))
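
To verify the inserts, we can query the table the same way as in the earlier section:

%%sql

SELECT * FROM students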