Showing posts with label python. Show all posts

Thursday

Python Parallel Processing and Threading Comparison

If you want to maximize the performance of your CPU-bound #python processing tasks, think about it the following way.


Given that your Python process is CPU-bound and you have almost unlimited CPU capacity, using `concurrent.futures.ProcessPoolExecutor` is likely to provide better performance than `concurrent.futures.ThreadPoolExecutor`. Here's why:


1. Parallelism: `ProcessPoolExecutor` utilizes separate processes, each running in its own Python interpreter, which allows them to run truly concurrently across multiple CPU cores. On the other hand, `ThreadPoolExecutor` uses #threads, which are subject to the Global Interpreter Lock (GIL) in Python, limiting true parallelism when it comes to CPU-bound tasks.


2. GIL Limitation: The GIL restricts the execution of Python bytecode to a single thread at a time, even in multi-threaded applications. While threads can be useful for I/O-bound tasks or tasks that release the GIL, they are less effective for CPU-bound tasks because they cannot run simultaneously due to the GIL.


3. Isolation: Processes have their own memory space, providing better isolation compared to threads. This can be beneficial for tasks that involve shared state or resources, as processes don't share memory by default and thus avoid many concurrency issues.


4. CPU Utilization: Since processes run independently and can utilize multiple CPU cores without contention, `ProcessPoolExecutor` can fully utilize the available CPU capacity, leading to better performance for CPU-bound tasks.


Therefore, if you want to maximize the performance of your CPU-bound Python process with unlimited CPU capacity, using `concurrent.futures.ProcessPoolExecutor` is generally the preferred choice. It allows for true #parallelism across multiple CPU cores and avoids the limitations imposed by the GIL.
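As a minimal sketch of the comparison (the task function and workload sizes are illustrative, not from a real benchmark), both executors share the same API, so swapping one for the other is a one-line change:

```python
import math
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor


def cpu_task(n):
    """A deliberately CPU-bound task: sum of integer square roots."""
    return sum(math.isqrt(i) for i in range(n))


def run_with(executor_cls, workloads):
    # Both executors expose the same map-based interface.
    with executor_cls(max_workers=4) as pool:
        return list(pool.map(cpu_task, workloads))


if __name__ == "__main__":
    workloads = [200_000] * 8
    # ProcessPoolExecutor sidesteps the GIL by using separate interpreters;
    # ThreadPoolExecutor runs the same bytecode serialized by the GIL.
    results_procs = run_with(ProcessPoolExecutor, workloads)
    results_threads = run_with(ThreadPoolExecutor, workloads)
    assert results_procs == results_threads  # same answers, different speed
```

Timing the two calls (e.g. with `time.perf_counter()`) on a multi-core machine typically shows the process pool finishing first for CPU-bound work, while for I/O-bound work the thread pool is usually the cheaper choice.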

Cloud Resources for Python Application Development

  • AWS:

- AWS Lambda:

  - Serverless computing for executing backend code in response to events.

- Amazon RDS:

  - Managed relational database service for handling SQL databases.

- Amazon S3:

  - Object storage for scalable and secure storage of data.

- AWS API Gateway:

  - Service to create, publish, and manage APIs, facilitating API integration.

- AWS Step Functions:

  - Coordination of multiple AWS services into serverless workflows.

- Amazon DynamoDB:

  - NoSQL database for building high-performance applications.

- AWS CloudFormation:

  - Infrastructure as Code (IaC) service for defining and deploying AWS infrastructure.

- AWS Elastic Beanstalk:

  - Platform-as-a-Service (PaaS) for deploying and managing applications.

- AWS SDK for Python (Boto3):

  - Official AWS SDK for Python to interact with AWS services programmatically.
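As a hedged sketch of how these services are reached from Python (the bucket, file, and function names are placeholders, and running it requires configured AWS credentials):

```python
import boto3

# Assumes credentials are configured via environment variables,
# ~/.aws/credentials, or an IAM role.
s3 = boto3.client("s3")

# Upload a local file to S3 (bucket and key are placeholders).
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")

# Invoke a Lambda function by name (also a placeholder).
lambda_client = boto3.client("lambda")
response = lambda_client.invoke(FunctionName="my-function", Payload=b"{}")
```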


  • Azure:

- Azure Functions:

  - Serverless computing for building and deploying event-driven functions.

- Azure SQL Database:

  - Fully managed relational database service for SQL databases.

- Azure Blob Storage:

  - Object storage service for scalable and secure storage.

- Azure API Management:

  - Full lifecycle API management to create, publish, and consume APIs.

- Azure Logic Apps:

  - Visual workflow automation to integrate with various services.

- Azure Cosmos DB:

  - Globally distributed, multi-model database service for highly responsive applications.

- Azure Resource Manager (ARM):

  - IaC service for defining and deploying Azure infrastructure.

- Azure App Service:

  - PaaS offering for building, deploying, and scaling web apps.

- Azure SDK for Python (azure-sdk-for-python):

  - Official Azure SDK for Python to interact with Azure services programmatically.
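The Azure side looks similar; a hedged sketch with Blob Storage (the account URL, container, and file names are placeholders, and running it requires an Azure account and credentials, e.g. via `az login`):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential picks up whatever credential is available
# (environment variables, managed identity, Azure CLI login, ...).
credential = DefaultAzureCredential()
service = BlobServiceClient(
    account_url="https://myaccount.blob.core.windows.net",
    credential=credential,
)

# Upload a local file as a blob (names are placeholders).
container = service.get_container_client("my-container")
with open("report.csv", "rb") as data:
    container.upload_blob(name="reports/report.csv", data=data)
```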


  • Cloud Networking, API Gateway, Load Balancer, and Security for AWS and Azure:


AWS:

- Amazon VPC (Virtual Private Cloud):

  - Enables you to launch AWS resources into a virtual network, providing control over the network configuration.

- AWS Direct Connect:

  - Dedicated network connection from on-premises to AWS, ensuring reliable and secure data transfer.

- Amazon API Gateway:

  - Fully managed service for creating, publishing, and securing APIs.

- AWS Elastic Load Balancer (ELB):

  - Distributes incoming application traffic across multiple targets to ensure high availability.

- AWS WAF (Web Application Firewall):

  - Protects web applications from common web exploits by filtering and monitoring HTTP traffic.

- AWS Shield:

  - Managed Distributed Denial of Service (DDoS) protection service for safeguarding applications running on AWS.

- Amazon Inspector:

  - Automated security assessment service for applications running on AWS.


Azure:


- Azure Virtual Network:

  - Connects Azure resources to each other and to on-premises networks, providing isolation and customization.

- Azure ExpressRoute:

  - Dedicated private connection from on-premises to Azure, ensuring predictable and secure data transfer.

- Azure API Management:

  - Full lifecycle API management with features for security, scalability, and analytics.

- Azure Load Balancer:

  - Distributes network traffic across multiple servers to ensure application availability.

- Azure Application Gateway:

  - Web traffic load balancer that enables you to manage traffic to your web applications.

- Azure Firewall:

  - Managed, cloud-based network security service to protect your Azure Virtual Network resources.

- Azure Security Center:

  - Unified security management system that strengthens the security posture of your data centers.

- Azure DDoS Protection:

  - Safeguards against DDoS attacks on Azure applications.

 

Sunday

SQLAlchemy and Alembic

 

SQLAlchemy and Alembic: Explained with Example

SQLAlchemy:

  • A powerful Python library for interacting with relational databases.
  • Provides an object-relational mapper (ORM) that lets you define your data model as Python classes and map them to tables in a database.
  • Simplifies writing SQL queries and manipulating data through its object-oriented interface.

Alembic:

  • A migration tool built on top of SQLAlchemy.
  • Allows you to track changes to your database schema over time and manage upgrades and downgrades.
  • Generates migration scripts as your data model evolves, providing version control for your database schema.

Example:

Let's consider a model that defines a User table with two attributes: id (primary key) and username.

Python code (SQLAlchemy):

Python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    username = Column(String(50), nullable=False)

engine = create_engine("sqlite:///database.db")
Base.metadata.create_all(engine)

# Creating new User objects and writing to the database
user1 = User(username="alice")
user2 = User(username="bob")

session = Session(engine)
session.add_all([user1, user2])
session.commit()

# Reading users from the database
users = session.query(User).all()
print(users)

Creating a Migration with Alembic:

  1. Initialize Alembic:
alembic init alembic
  2. Generate a migration script for the initial schema:
alembic revision --autogenerate -m "create users table"

This creates a migration script containing the necessary SQL statements to create the users table.

  3. Upgrade the database schema:
alembic upgrade head

This executes the migration script and creates the users table in the database.

Adding a new attribute to the model:

We can add a new attribute email to the User model:

Python
class User(Base):
    # ... existing code
    email = Column(String(100))

Note that editing the model class does not change the existing users table by itself. Alembic's autogenerate support compares the model against the current database schema and produces the migration that adds the email column.

  1. Generate a new migration script:
alembic revision --autogenerate -m "add email column"
  2. Upgrade the database schema:
alembic upgrade head
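Autogenerate writes a script along these lines into the versions directory (the file name and revision identifiers below are placeholders; Alembic generates its own):

```python
# versions/abcd1234_add_email_column.py -- sketch of an autogenerated migration
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic (placeholders here).
revision = "abcd1234"
down_revision = "ef567890"


def upgrade():
    # Adds the new email column to the existing users table.
    op.add_column("users", sa.Column("email", sa.String(length=100), nullable=True))


def downgrade():
    # Reverses the upgrade, restoring the previous schema.
    op.drop_column("users", "email")
```

The paired upgrade/downgrade functions are what make migrations reversible: `alembic upgrade head` applies them in order, and `alembic downgrade -1` steps back one revision.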

Benefits of using SQLAlchemy and Alembic:

  • Code readability: Focuses on the data model structure rather than writing raw SQL queries.
  • Maintainability: Easier to evolve the database schema and track changes.
  • Version control: Migration scripts act as version control for the database schema.
  • Portability: Code can be ported to different databases with minimal changes.

Remember: This is a simplified example. For real-world scenarios, you can define much more complex models with relationships, constraints, and other features.

You can find more examples on the internet; you can also check my GitHub repo here https://github.com/dhirajpatra
