Apache Spark Python - Transformations - Overview of Joins


Our data is often stored across multiple tables, each containing related information. In transactional systems, the relationships between these tables are typically defined by Normalization Principles, while data warehousing applications follow Dimensional Modeling. In either case, tables are commonly related through one-to-one, one-to-many, or many-to-many relationships.

Types of Joins

When we have two datasets that share a common key, we typically perform a join operation to combine them based on this key. There are different types of joins that can be performed:

  • INNER JOIN: Returns only the rows where there is a match in both DataFrames.
  • OUTER JOIN (LEFT or RIGHT): Returns all rows from one DataFrame (the left or right one, respectively) along with the matched rows from the other; columns from the other DataFrame are null where no match exists.
  • FULL OUTER JOIN: Combines the results of both LEFT OUTER JOIN and RIGHT OUTER JOIN, returning all rows when there is a match in either DataFrame.

Watch the video tutorial here


In this blog post, we explored the fundamentals of joining datasets, a critical operation when working with related tables in both transactional systems and data warehousing applications. We covered the main types of joins, including INNER JOIN, OUTER JOIN (LEFT or RIGHT), and FULL OUTER JOIN, along with their characteristics and use cases. By mastering these join operations, you can efficiently combine data from multiple sources to extract comprehensive insights. For a detailed walkthrough and hands-on practice, be sure to watch the full video tutorial here. Happy joining!