Apache Spark Python - Transformations - Prepare Datasets for Joins

Let us prepare the datasets required to perform joins. We will be using airtraffic data as well as retail data while going through the tasks.

  • Make sure the airport-codes dataset is available in HDFS.

  • We will also use airtraffic data for the month of January 2008. We have used that dataset in the past as well.

  • We will be using retail data in JSON format. Make sure the retail data in JSON format is also available in the appropriate location.

Let us start a Spark session for this Notebook so that we can execute the code provided.

from pyspark.sql import SparkSession
import getpass

username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Joining Data Sets'). \
    master('yarn'). \
    getOrCreate()
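Once the session is created, a quick sanity check confirms that it is usable. This is a minimal check; any lightweight action will do.

# Verify the session is up by printing the Spark version
# and running a trivial SQL statement
print(spark.version)
spark.sql('SELECT current_date').show()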

If you are going to use CLIs, you can launch Spark using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using PySpark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Let us first confirm that the airtraffic data, including airport codes, is available in HDFS.

!hdfs dfs -ls /public/airtraffic_all
!hdfs dfs -ls /public/airtraffic_all/airport-codes

airportCodesPath = "/public/airtraffic_all/airport-codes"
airportCodes = spark. \
    read. \
    option("sep", "\t"). \
    option("header", True). \
    option("inferSchema", True). \
    csv(airportCodesPath)

airportCodes.printSchema()
airportCodes.show()
airportCodes.count()
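If you prefer the SQL approach, you can also expose the DataFrame as a temporary view. A minimal sketch; the view name airport_codes is our choice, not part of the original dataset.

# Register a temporary view so the data can be queried with Spark SQL
airportCodes.createOrReplaceTempView('airport_codes')
spark.sql('SELECT * FROM airport_codes LIMIT 10').show()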

Let us also check the airtraffic data for the month of January 2008.

!hdfs dfs -ls /public/airtraffic_all/airtraffic-part/flightmonth=200801
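We can load this partition directly into a DataFrame. A minimal sketch, assuming the files under the partitioned folder are stored as Parquet (which the flightmonth=200801 layout suggests):

# Read the January 2008 partition; assumes Parquet files under the folder
airtrafficPath = '/public/airtraffic_all/airtraffic-part/flightmonth=200801'
airtraffic = spark.read.parquet(airtrafficPath)

airtraffic.printSchema()
airtraffic.count()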

Now let us review the retail data in JSON format and read the orders dataset.

!hdfs dfs -find /public/retail_db_json

orders = spark.read.json('/public/retail_db_json/orders')
orders.printSchema()
orders.show()
orders.count()
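Since the goal is to prepare datasets for joins, we can also load order_items, which joins to orders on the order id. A minimal sketch, assuming the retail_db_json location follows the standard retail_db layout with an order_items subfolder:

# order_items relates to orders via
# order_items.order_item_order_id = orders.order_id
# (assumes the standard retail_db schema; adjust names if yours differ)
order_items = spark.read.json('/public/retail_db_json/order_items')

order_items.printSchema()
order_items.count()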


Conclusion

In this blog post, we prepared the datasets needed for joins: airtraffic data, airport codes, and retail data in JSON format. We verified that each dataset is available in HDFS, set up the Spark session, and loaded the data into DataFrames. With the datasets in place, we are ready to join and analyze them in the upcoming tasks. Happy data analyzing!