Apache Spark Python - Data Processing Overview - Overview of Functions

This article provides an overview of different functions available to process data in columns using PySpark. The video embedded in this article complements the text by providing visual demonstrations of the concepts discussed.

Overview of Functions

Let us get an overview of different functions that are available to process data in columns.

  • While Data Frame APIs operate on the Data Frame as a whole, at times we might want to apply functions to individual column values.
  • Functions to process column values are available under pyspark.sql.functions. They are typically used within select or withColumn on top of a Data Frame.
  • There are approximately 300 pre-defined functions available to us.
  • Some of the important functions can be broadly categorized into String Manipulation, Date Manipulation, Numeric Functions, and Aggregate Functions.
  • We will explore most of these functions as we get into data processing later.

String Manipulation Functions

  • Concatenating Strings: concat
  • Getting Length: length
  • Trimming Strings: trim, rtrim, ltrim

Date Manipulation Functions

  • Date Arithmetic: date_add, date_sub, datediff, add_months
  • Date Extraction: dayofmonth, month, year

Numeric Functions

  • abs, greatest

Aggregate Functions

  • sum, min, max


Hands-On Task

Project the full name by concatenating first name and last name, along with all the other fields excluding first name and last name.

Conclusion

In summary, this article discussed the various functions available in PySpark for column-level data processing. By practicing the hands-on task provided, readers can gain practical experience applying these functions. For further learning and engagement, consider joining the community to explore more advanced concepts.