May 9, 2024

Azure Data Factory Transform and Enrich Activity with Databricks and Pyspark

In #azuredatafactory, the #transform and #enrich step can be done automatically or written manually in #pyspark. Two examples follow: one with a #csv data source, the other with #sqlserver using #incrementalloading. Below is a simple end-to-end PySpark code example for a transform and enrich process in Azure Databricks. This example assumes you have a dataset stored in Azure Blob Storage and are using Azure Databricks for processing.

```python
# Import necessary libraries
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit, concat

# Initialize SparkSession
spark = SparkSession.builder \
    .appName("Transform and Enrich Process") \
    .getOrCreate()

# Read data from Azure Blob Storage
df = spark.read.csv(
    "wasbs://<container_name>@<storage_account>.blob.core.windows.net/<file_path>",
    header=True,
)

# Perform transformations
transformed_df = df.withColumn("new_column", col("old_column") * 2)

# Enrich data: for example, derive a label column with the concat/lit helpers
enriched_df = transformed_df.withColumn(
    "enriched_column", concat(col("new_column"), lit("_enriched"))
)

# Write the enriched result back to Azure Blob Storage
enriched_df.write.mode("overwrite").parquet(
    "wasbs://<container_name>@<storage_account>.blob.core.windows.net/<output_path>"
)
```
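For the #sqlserver example with #incrementalloading, the usual pattern is a watermark: keep the highest value of a modified-date column seen so far, and on each run read only rows newer than it. A minimal sketch of building such a pushdown query, assuming a hypothetical `dbo.Orders` table with a `ModifiedDate` watermark column (all names here are illustrative):

```python
# Sketch of watermark-based incremental loading for a SQL Server source.
# Table and column names are hypothetical, not from the original post.

def build_incremental_query(table: str, watermark_column: str, last_watermark: str) -> str:
    """Build a pushdown subquery that reads only rows newer than the last watermark."""
    return (
        f"(SELECT * FROM {table} "
        f"WHERE {watermark_column} > '{last_watermark}') AS incremental_load"
    )

query = build_incremental_query("dbo.Orders", "ModifiedDate", "2024-05-01 00:00:00")

# In Databricks this subquery would be passed as the dbtable option of a JDBC read:
# df = (spark.read.format("jdbc")
#       .option("url", "jdbc:sqlserver://<server>;databaseName=<db>")
#       .option("dbtable", query)
#       .option("user", "<user>")
#       .option("password", "<password>")
#       .load())
```

After a successful run, the new maximum of the watermark column is persisted (e.g. in a control table or ADF pipeline variable) and used as `last_watermark` on the next run.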