
Tuesday

Data Masking When Ingesting Into Databricks

 

Photo by Alba Leader

Data masking is a data security technique that involves hiding data by changing its original numbers and letters. It's a way to create a fake version of data that's similar enough to the actual data, while still protecting it. This fake data can then be used as a functional alternative when the real data isn't needed. 
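
As a toy illustration of the idea (the mask_value helper below is purely hypothetical), a masked record keeps the shape of the original while hiding the real letters and digits:

    import re

    def mask_value(value: str) -> str:
        # Replace every digit with 9 and every letter with x,
        # preserving the length and punctuation of the original.
        value = re.sub(r"[0-9]", "9", value)
        return re.sub(r"[A-Za-z]", "x", value)

    print(mask_value("John Doe, card 4111-2222-3333-4444"))
    # -> xxxx xxx, xxxx 9999-9999-9999-9999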



Unity Catalog is Databricks' built-in data governance layer. Applied on top of Delta Lake tables, it lets you attach governance policies such as row filters and column masks to sensitive data. Let’s break it down:

  1. Row Filters:

    • Row filters enable you to apply a filter to a table so that subsequent queries only return rows for which the filter predicate evaluates to true.
    • To create a row filter, follow these steps (a runnable sketch follows this list):
      1. Write a SQL user-defined function (UDF) to define the filter policy:
        CREATE FUNCTION <function_name> (<parameter_name> <parameter_type>, ...) RETURN {filter clause that evaluates to a boolean};
      2. Apply the row filter to an existing table using the following syntax:
        ALTER TABLE <table_name> SET ROW FILTER <function_name> ON (<column_name>, ...);
      3. Alternatively, specify the row filter during the initial table creation.
    • Each table can have only one row filter, and it accepts input parameters that bind to specific columns of the table.
  2. Column Masks:

    • Column masks allow you to transform or mask specific column values before returning them in query results.
    • To apply column masks:
      1. Create a function that defines the masking logic.
      2. Apply the masking function to a table column using an ALTER TABLE statement.
      3. Alternatively, apply the masking function during table creation.
  3. Unity Catalog Best Practices:

    • When setting up Unity Catalog, consider assigning a storage location at the catalog level. For example:
      CREATE CATALOG hr_prod
      LOCATION 'abfss://mycompany-hr-prod@storage-account.dfs.core.windows.net/unity-catalog';
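
To make the row filter steps above concrete, here is a minimal sketch that could be run from a Databricks notebook. The table name sales, its region column, and the account group admins are hypothetical placeholders, and spark is the SparkSession that Databricks notebooks provide:

    # 1. Define the filter policy as a SQL UDF that returns a boolean.
    #    Members of the (hypothetical) 'admins' group see every row;
    #    everyone else only sees rows whose region is 'US'.
    spark.sql("""
        CREATE OR REPLACE FUNCTION region_filter(region STRING)
        RETURN IS_ACCOUNT_GROUP_MEMBER('admins') OR region = 'US'
    """)

    # 2. Bind the filter to the table; subsequent queries only return rows
    #    for which region_filter(region) evaluates to true.
    spark.sql("ALTER TABLE sales SET ROW FILTER region_filter ON (region)")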

You can apply column masks to transform or conceal specific column values before returning them in query results. Here’s how you can achieve this:

  1. Create a Masking Function:

    • Define a function that specifies the masking logic. This function will be used to transform the column values.
    • For example, let’s say you want to mask a credit card number so that only its last four digits remain visible. You can create a masking function that replaces everything except the last four digits with asterisks.
  2. Apply the Masking Function to a Column:

    • Use an ALTER TABLE statement to apply the masking function to a specific column.
    • For instance, if you have a column named credit_card_number, you can apply the masking function to it:
      ALTER TABLE my_table ALTER COLUMN credit_card_number SET MASK my_masking_function;
      
  3. Example Masking Function:

    • Suppose you want to mask everything except the last four digits of a credit card number with asterisks. You can create a masking function like this:
      CREATE FUNCTION my_masking_function(credit_card_number STRING)
      RETURNS STRING
      RETURN CONCAT('************', RIGHT(credit_card_number, 4));
      
  4. Query the Table:

    • When querying the table, the masked values will be returned instead of the original values.
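
As a rough sketch of step 4, assuming the my_table and my_masking_function defined in the steps above and a Databricks notebook where spark is available:

    # Query the table as usual; Unity Catalog applies the mask transparently.
    spark.sql("SELECT credit_card_number FROM my_table LIMIT 5").show()
    # Every query now returns values such as '************1234' instead of the
    # raw numbers; the data stored in the underlying table is unchanged.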

Now let’s focus on how you can achieve column masking at the DataFrame level with PySpark, for example while ingesting data into Delta Lake:

  1. Column Masking:

    • When ingesting data into Delta Lake, you can apply column-level transformations or masks to sensitive data before it is written.
    • You can create custom masking functions to modify specific column values before returning them in query results.
  2. Creating a Masking Function:

    • Define a user-defined function (UDF) that specifies the masking logic. For example, you can create a function that masks all but the last four digits of a credit card number.
    • Here’s an example of a masking function that replaces everything except the last four digits with asterisks:
      def mask_credit_card(card_number):
          # Keep only the last four digits; replace the rest with asterisks.
          return "************" + card_number[-4:]
      
  3. Applying the Masking Function:

    • Use the withColumn method to apply the masking function to a specific column in your DataFrame.
    • For instance, if you have a DataFrame named my_table with a column named credit_card_number, you can apply the masking function as follows:
      from pyspark.sql.functions import udf
      from pyspark.sql.types import StringType

      # (Optional) register the UDF so it can also be called from SQL
      spark.udf.register("mask_credit_card", mask_credit_card, StringType())

      # Wrap the Python function as a column-level UDF
      mask_credit_card_udf = udf(mask_credit_card, StringType())

      # Apply the masking function to the column
      masked_df = my_table.withColumn("masked_credit_card", mask_credit_card_udf("credit_card_number"))
      
  4. Querying the Masked Data:

    • When querying the masked_df, the transformed (masked) values will be returned for the masked_credit_card column.
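
Continuing the sketch above, a typical follow-up is to drop the raw column before writing or sharing the DataFrame, so downstream consumers only ever see the masked value (safe_df is just an illustrative name):

    # Keep only the masked value; the original column never leaves the pipeline.
    safe_df = masked_df.drop("credit_card_number")
    safe_df.select("masked_credit_card").show(5, truncate=False)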

You can find other related articles on this blog; kindly use the search.


Friday

Database Sharding and Federation

 

                                    Photo by Azamat Esenaliev

Sharding and federation are both techniques for managing large datasets across multiple systems, but they differ in key aspects:

Data Location:

  • Sharding: Data is physically divided and distributed across different shards (databases or servers). Each shard holds a specific subset of the data, usually based on a key or range. Accessing data requires routing to the appropriate shard based on the key (see the routing sketch after this list).
  • Federation: Data remains physically separate in its individual databases. Each database holds the complete data for its domain, and federated systems provide a mechanism to access and integrate data across these separate databases.
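
To make the sharding bullet concrete, here is a minimal, hypothetical sketch of key-based routing in Python. The shard connection strings and the hash-modulo scheme are illustrative only; production systems often use consistent hashing or range maps instead:

    import hashlib

    # Hypothetical connection strings, one per shard.
    SHARD_DSNS = [
        "postgresql://shard0.example.com/app",
        "postgresql://shard1.example.com/app",
        "postgresql://shard2.example.com/app",
        "postgresql://shard3.example.com/app",
    ]

    def shard_for(key: str) -> str:
        # Hash the sharding key and map it onto one shard, so every read and
        # write for the same key lands on the same database.
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        return SHARD_DSNS[int(digest, 16) % len(SHARD_DSNS)]

    # All operations for customer 42 are routed to the same shard.
    print(shard_for("customer:42"))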

Data Ownership:

  • Sharding: Data ownership is centralized. The overall system manages the distribution and access to data across shards.
  • Federation: Data ownership is often decentralized. Individual databases retain ownership and control over their data, and the federated system acts as a mediator for data access and integration.

Data Schema:

  • Sharding: Requires consistent schema across all shards for efficient data manipulation and querying.
  • Federation: Allows for heterogeneous schema across different databases in the federation, as each database may manage its own data structure.

Complexity:

  • Sharding: Generally more complex to implement and maintain due to data distribution and routing requirements. Requires centralized management and scaling considerations.
  • Federation: Can be less complex initially, as existing databases are leveraged without major changes. However, managing data access and consistency across heterogeneous systems can be challenging.

Use Cases:

  • Sharding: Suitable for scaling horizontally to handle large volumes of data for a single domain. Useful for applications requiring high performance and consistent data access across the entire dataset.
  • Federation: Useful for integrating data from diverse sources with potentially different owners and schemas. Applicable for scenarios where accessing data across multiple domains without centralized control is necessary.

Additional Points:

  • Security: Both sharding and federation require robust security measures to protect data access and privacy across distributed systems.
  • Performance: Performance implications vary depending on implementation and use case. Sharding can offer high performance for specific queries, while federation may incur overhead for data integration and querying.

Choosing the right approach depends on your specific needs, data distribution, ownership requirements, and desired level of complexity.

Azure Data Factory Transform and Enrich Activity with Databricks and Pyspark

In #azuredatafactory, the #transform and #enrich part can be done automatically or written manually in #pyspark; two examples below, one data so...