Apache Spark Python - Transformations - Solution - Get Origins without master data

This article provides a concise tutorial on using Spark SQL to analyze air traffic data, focusing on identifying origins that have no matching records in the airport-codes dataset. Through step-by-step instructions and code examples, readers learn how to combine distinct and outer join operations to extract insights from large-scale datasets.

Check if there are any origins in the air traffic data which do not have corresponding records in airport-codes.

  • This is an example of an outer join.
  • We need to get those airports which appear in the Origin field of the January 2008 air traffic dataset but not in airport-codes. We need to consider all the valid records from airport-codes.
  • Whether we call the join left or right depends on which side the air traffic dataset is on. Since we will be invoking join on the air traffic dataset, it is the left side, and hence we will use a left outer join.
  • We will also apply distinct on Origin before performing the left outer join, as shown in the sketch after this list.
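As a preview, here is a minimal sketch of the pattern on hypothetical toy DataFrames (flights and airports are made-up names; the columns Origin and IATA mirror the real datasets used below). It becomes runnable once the Spark session below has been created.

# Toy data: XYZ is an origin with no matching airport code.
flights = spark.createDataFrame([("ORD",), ("XYZ",)], ["Origin"])
airports = spark.createDataFrame([("ORD",)], ["IATA"])

flights. \
    select("Origin"). \
    distinct(). \
    join(airports, flights["Origin"] == airports["IATA"], "left"). \
    filter("IATA IS NULL"). \
    show()
# Expect a single row: Origin XYZ with a null IATA.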

Let us start the Spark context for this Notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.


from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Joining Data Sets'). \
    master('yarn'). \
    getOrCreate()

If you are going to use CLIs, you can access Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Let us reduce the number of shuffle partitions, as the default of 200 is excessive for a dataset of this size.

spark.conf.set("spark.sql.shuffle.partitions", "2")
airtrafficPath = "/public/airtraffic_all/airtraffic-part/flightmonth=200801"

airtraffic = spark. \
    read. \
    parquet(airtrafficPath)

airtraffic. \
    select(
        "Year", "Month", "DayOfMonth",
        "Origin", "Dest", "CRSDepTime"
    ). \
    show()
airtraffic.count()
airportCodesPath = "/public/airtraffic_all/airport-codes"

def getValidAirportCodes(airportCodesPath):
    # Read the tab-separated airport codes with a header and inferred types,
    # then drop the known invalid record (State 'Hawaii' with IATA 'Big').
    airportCodes = spark. \
        read. \
        option("sep", "\t"). \
        option("header", True). \
        option("inferSchema", True). \
        csv(airportCodesPath). \
        filter("!(State = 'Hawaii' AND IATA = 'Big')")
    return airportCodes
airportCodes = getValidAirportCodes(airportCodesPath)
airportCodes.count()
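Before joining, it can help to confirm that the tab-separated file parsed as expected. This optional check is not part of the original walkthrough:

# Optional sanity check: confirm the columns and inferred types before joining.
airportCodes.printSchema()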
airtraffic. \
    select("Origin"). \
    distinct(). \
    show()

airtraffic. \
    select("Origin"). \
    distinct(). \
    count()
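Equivalently, the number of distinct origins can be computed in a single aggregation. This variant is not part of the original walkthrough, but countDistinct is a standard function from pyspark.sql.functions:

from pyspark.sql.functions import countDistinct

# Same result as select("Origin").distinct().count(), in a single aggregation.
airtraffic.select(countDistinct("Origin").alias("distinct_origins")).show()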
airtraffic. \
    select("Origin"). \
    distinct(). \
    join(airportCodes, airtraffic["Origin"] == airportCodes["IATA"], "left"). \
    show()

airtraffic. \
    select("Origin"). \
    distinct(). \
    join(airportCodes, airtraffic["Origin"] == airportCodes["IATA"], "left"). \
    filter("IATA IS NULL"). \
    show()

airtraffic. \
    select("Origin"). \
    distinct(). \
    join(airportCodes, airtraffic["Origin"] == airportCodes["IATA"], "left"). \
    filter("IATA IS NULL"). \
    count()
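As an aside, a left anti join expresses the same question more directly: it keeps only the left-side rows that have no match on the right, so the null filter becomes unnecessary. This is an alternative to, not part of, the solution above:

# Alternative: a left anti join returns only origins with no matching IATA.
airtraffic. \
    select("Origin"). \
    distinct(). \
    join(airportCodes, airtraffic["Origin"] == airportCodes["IATA"], "left_anti"). \
    count()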


Conclusion

This tutorial showcased a practical approach to identifying origins in the January 2008 air traffic dataset that lack corresponding records in the airport-codes dataset. By applying distinct on Origin followed by a left outer join and a filter on null IATA values, we efficiently isolated the origin airports missing from the master data.