Data Engineering Spark SQL - Windowing Functions - Overview of Windowing Functions

Let us get an overview of Analytics or Windowing Functions in Spark SQL.
Let us start spark context for this Notebook so that we can execute the code provided. You can sign up for our 10 node state of the art cluster/labs to learn Spark SQL using our unique integrated LMS.
import org.apache.spark.sql.SparkSession

val username = System.getProperty("user.name")

val spark = SparkSession.
    builder.
    config("spark.ui.port", "0").
    config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
    appName(s"${username} | Spark SQL - Windowing Functions").
    master("yarn").
    getOrCreate

If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \

    --master yarn \

    --conf spark.ui.port=0 \

    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \

    --master yarn \

    --conf spark.ui.port=0 \

    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \

    --master yarn \

    --conf spark.ui.port=0 \

    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Here is an overview of the categories of windowing (also known as analytics) functions in Spark SQL:

  • Aggregate Functions (sum, min, max, avg)
  • Window Functions (lead, lag, first_value, last_value)
  • Rank Functions (rank, dense_rank, row_number, etc.)
  • All of these functions are used with the OVER clause.
  • For aggregate functions we typically use PARTITION BY.
  • For global ranking and windowing functions we can use ORDER BY sorting_column, and for ranking and windowing within a partition or group we can use PARTITION BY partition_column ORDER BY sorting_column.
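
The categories above can be sketched with a few Spark SQL queries. These are illustrative only and assume a hypothetical table daily_revenue(order_date, order_item_product_id, revenue); the table and column names are not from this article.

-- Aggregate function with OVER (PARTITION BY ...):
-- total revenue per date alongside each row, without collapsing rows
SELECT t.*,
    sum(t.revenue) OVER (PARTITION BY t.order_date) AS daily_total
FROM daily_revenue AS t;

-- Window function with PARTITION BY ... ORDER BY ...:
-- previous day's revenue for each product
SELECT t.*,
    lag(t.revenue) OVER (
        PARTITION BY t.order_item_product_id
        ORDER BY t.order_date
    ) AS prior_revenue
FROM daily_revenue AS t;

-- Rank functions within a partition:
-- rank products by revenue for each date
SELECT t.*,
    rank() OVER (PARTITION BY t.order_date ORDER BY t.revenue DESC) AS rnk,
    dense_rank() OVER (PARTITION BY t.order_date ORDER BY t.revenue DESC) AS drnk,
    row_number() OVER (PARTITION BY t.order_date ORDER BY t.revenue DESC) AS rn
FROM daily_revenue AS t;

Note how the same OVER clause syntax drives all three categories; only the function in front of it changes. Dropping PARTITION BY from a rank function gives a global ranking over the whole result set.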

Hands-On Tasks

Go through the tasks below to apply the concepts discussed in this article.

  1. Start the Spark context for this Notebook using the code provided above.
  2. Launch Spark SQL using one of the three CLI approaches (spark2-sql, spark2-shell, or pyspark2) and confirm the session comes up against YARN.


In this article we got an overview of windowing functions in Spark SQL: aggregate functions such as sum, min, max, and avg; window functions such as lead, lag, first_value, and last_value; and rank functions such as rank, dense_rank, and row_number. All of these are used with the OVER clause, typically combined with PARTITION BY and ORDER BY. Practice these functions hands-on and engage with the community for further learning.

Watch the video tutorial here