
MLOps in Azure


Setting up MLOps (Machine Learning Operations) in Azure involves creating a continuous integration and continuous deployment (CI/CD) pipeline to manage machine learning models efficiently. Below, I'll provide a step-by-step guide to creating an MLOps pipeline in Azure using Azure Machine Learning, Azure DevOps, and Azure Kubernetes Service (AKS) as an example. This guide assumes you already have an Azure subscription and some familiarity with Azure services. You can find free learning resources at https://learn.microsoft.com/en-us/training/azure/


Step 1: Prepare Your Environment

Before you start, make sure you have the following:

- An Azure subscription.

- An Azure DevOps organization.

- Azure Machine Learning Workspace set up.
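If you don't yet have a workspace, you can create one from the Azure portal or script it with the Azure CLI. A minimal sketch using the CLI v2 `ml` extension (assumes the extension is installed; the resource names and location are placeholders):

```bash
# Create a resource group and an Azure Machine Learning workspace
az group create --name <your-resource-group> --location eastus
az ml workspace create --name <your-workspace-name> --resource-group <your-resource-group>
```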


Step 2: Create an Azure DevOps Project

1. Go to Azure DevOps (https://dev.azure.com/) and sign in.

2. Create a new project that will host your MLOps pipeline.
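Project creation can also be scripted with the Azure DevOps CLI extension. A hedged sketch (assumes the `azure-devops` extension; the project name and organization URL are placeholders):

```bash
# Install the Azure DevOps extension and create a project
az extension add --name azure-devops
az devops project create --name my-mlops-project --organization https://dev.azure.com/<your-organization>
```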


Step 3: Set Up Your Azure DevOps Repository

1. In your Azure DevOps project, create a Git repository to store your machine learning project code.


Step 4: Create an Azure Machine Learning Experiment

1. Go to Azure Machine Learning Studio (https://ml.azure.com/) and sign in.

2. Create a new experiment or use an existing one to develop and train your machine learning model. This experiment will be the core of your MLOps pipeline.
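If you prefer the Python SDK over the Studio UI, here is a minimal sketch of creating an experiment and logging a metric. It assumes the v1 `azureml-core` package and a `config.json` downloaded from your workspace; the experiment name and metric value are placeholders:

```python
from azureml.core import Workspace, Experiment

# Connect to the workspace described by a local config.json
ws = Workspace.from_config()

# Create (or reuse) an experiment to group training runs
experiment = Experiment(workspace=ws, name="my-mlops-experiment")

# Start an interactive run, log a metric, and mark the run complete
run = experiment.start_logging()
run.log("accuracy", 0.91)  # placeholder metric value
run.complete()
```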


Step 5: Create an Azure DevOps Pipeline

1. In your Azure DevOps project, go to Pipelines > New Pipeline.

2. Select Azure Repos Git as your source repository.

3. Configure your pipeline to build and package your machine learning code. You may use a YAML pipeline script to define build and packaging steps.


Example YAML pipeline script (`azure-pipelines.yml`):

```yaml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo "Your build and package commands here"
```

4. Commit this YAML file to your Azure DevOps repository.


Step 6: Create an Azure Kubernetes Service (AKS) Cluster

1. In the Azure portal, create an AKS cluster where you'll deploy your machine learning model. Note down the AKS cluster's connection details.
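For reference, a hedged CLI sketch of creating a small cluster (the names, node count, and defaults are placeholders; size the cluster to your workload):

```bash
# Create a two-node AKS cluster in an existing resource group
az aks create --resource-group <your-resource-group> --name <your-aks-cluster-name> \
  --node-count 2 --generate-ssh-keys
```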


Step 7: Configure Azure DevOps for CD

1. In your Azure DevOps project, go to Pipelines > Releases.

2. Create a new release pipeline to define your CD process.


Step 8: Deploy to AKS

1. In your release pipeline, add a stage to deploy your machine learning model to AKS.

2. Use Azure CLI or kubectl commands in your release pipeline to deploy the model to your AKS cluster.


Example PowerShell Script to Deploy Model (`deploy-model.ps1`):

```powershell
# Set the Azure context and fetch AKS credentials
az login --service-principal -u <your-service-principal-id> -p <your-service-principal-secret> --tenant <your-azure-tenant-id>
az aks get-credentials --resource-group <your-resource-group> --name <your-aks-cluster-name>

# Deploy the model using kubectl
kubectl apply -f deployment.yaml
```


3. Add this PowerShell script to your Azure DevOps release pipeline stage.
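The `deployment.yaml` manifest referenced above is not shown in this guide; a minimal sketch of what it might contain, assuming your model is packaged as a container image (the image name, labels, and port 5000 are placeholders matching a Flask-style server):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-model-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ml-model
  template:
    metadata:
      labels:
        app: ml-model
    spec:
      containers:
      - name: ml-model
        image: <your-registry>/ml-model-api:latest  # placeholder image
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: ml-model-service
spec:
  type: LoadBalancer
  selector:
    app: ml-model
  ports:
  - port: 80
    targetPort: 5000
```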


Step 9: Trigger CI/CD

1. Whenever you make changes to your machine learning code, commit and push the changes to your Azure DevOps Git repository (see the example after this list).

2. The CI/CD pipeline will automatically trigger a build and deployment process.
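For example, a typical commit-and-push that kicks off the pipeline (assuming your default branch is `main`):

```bash
git add .
git commit -m "Update model training code"
git push origin main
```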


Step 10: Monitor and Manage Your MLOps Pipeline

1. Monitor the CI/CD pipeline in Azure DevOps to track build and deployment status.

2. Use Azure Machine Learning Studio to manage your models, experiment versions, and performance.
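For instance, registering a trained model makes it versioned and traceable in the workspace. A hedged sketch assuming the v1 `azureml-core` SDK; the path, model name, and description are placeholders:

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Connect to the workspace described by a local config.json
ws = Workspace.from_config()

# Register the trained artifact so each pipeline run produces a new version
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",  # placeholder path to the trained artifact
    model_name="my-ml-model",        # placeholder model name
    description="Model registered by the CI/CD pipeline",
)
print(model.name, model.version)
```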


This is a simplified example of setting up MLOps in Azure. In a real-world scenario, you may need to integrate additional tools and services, such as Azure DevTest Labs for testing, Azure Databricks for data processing, and Azure Monitor for tracking model performance. The exact steps and configurations can vary depending on your specific requirements and organization's needs.


However, if you are using, say, a Python Flask REST API application as the interface through which users interact with your models, you can make the following changes.

To integrate your Flask application, which serves the machine learning models, into the same CI/CD pipeline, follow these steps. Combining the two in one pipeline helps ensure that your entire application, including the Flask API and the ML models, stays consistent and is updated together.


Step 1: Organize Your Repository

In your Git repository, organize your project structure so that your machine learning code and Flask application code are in separate directories, like this:


```
my-ml-project/
  ml-model/
    model.py
    requirements.txt
  ml-api/
    app.py
    requirements.txt
  azure-pipelines.yml
```
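For context, a minimal sketch of what `ml-api/app.py` might look like, assuming a scikit-learn model serialized with joblib as `model.pkl` (the route, payload shape, and file name are placeholders):

```python
from flask import Flask, jsonify, request
import joblib  # assumes the model was saved with joblib

app = Flask(__name__)

# Load the trained model once at startup (placeholder path)
model = joblib.load("model.pkl")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [[1.0, 2.0, 3.0]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```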


Step 2: Configure Your CI/CD Pipeline

Modify your `azure-pipelines.yml` file to include build and deploy steps for both your machine learning code and Flask application.

```yaml
trigger:
- main

pr:
- '*'

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: Build
  jobs:
  - job: Build_ML_Model
    steps:
    - script: |
        cd my-ml-project/ml-model
        pip install -r requirements.txt
        # Add any build steps for your ML model code here
      displayName: 'Build ML Model'
  - job: Build_Flask_App
    steps:
    - script: |
        cd my-ml-project/ml-api
        pip install -r requirements.txt
        # Add any build steps for your Flask app here
      displayName: 'Build Flask App'
- stage: Deploy
  jobs:
  - job: Deploy_ML_Model
    steps:
    - script: |
        echo "Add deployment steps for your ML model here"
      displayName: 'Deploy ML Model'
  - job: Deploy_Flask_App
    steps:
    - script: |
        echo "Add deployment steps for your Flask app here"
      displayName: 'Deploy Flask App'
```


Step 3: Update Your Flask Application

Whenever you need to update your Flask application or machine learning models, make changes to the respective code in your Git repository.


Step 4: Commit and Push Changes

Commit and push your changes to the Git repository. This will trigger the CI/CD pipeline.


Step 5: Monitor and Manage Your CI/CD Pipeline

Monitor the CI/CD pipeline in Azure DevOps to track the build and deployment status of both your machine learning code and Flask application.


By integrating your Flask application into the same CI/CD pipeline, you ensure that both components are updated and deployed together. This approach simplifies management and maintains consistency between your ML models and the API serving them.


