End-to-End Deployment Steps for Azure Machine Learning with MLflow

 


Below are the structured steps to deploy an ML model with Azure Machine Learning (Azure ML) and MLflow, from script creation to full deployment:


1. Setup and Prerequisites

a. Create an Azure ML Workspace

  1. Log in to the Azure Portal

  2. Navigate to Azure Machine Learning

  3. Click Create → Azure Machine Learning workspace

  4. Fill in details (Subscription, Resource Group, Workspace Name, Region)

  5. Click Review + Create

b. Install Required Libraries

Install required Python packages:

pip install azureml-sdk mlflow azureml-mlflow

2. Develop and Train Model

a. Create a Training Script

Example: train.py

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Enable autologging of parameters, metrics, and the fitted model
mlflow.sklearn.autolog()

# Train a simple model and log its training accuracy
X, y = load_iris(return_X_y=True)
model = LogisticRegression(solver="liblinear")
with mlflow.start_run():
    model.fit(X, y)
    mlflow.log_metric("training_accuracy", model.score(X, y))

b. Run Training Locally

python train.py

3. Track Experiments with MLflow

a. Connect to Azure ML with MLflow

from azureml.core import Workspace
import mlflow

ws = Workspace.from_config()  # reads the config.json downloaded from the workspace
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("my_experiment")

b. Log Model to MLflow

with mlflow.start_run():
    mlflow.log_param("param1", 5)
    mlflow.log_metric("rmse", 0.03)
    mlflow.sklearn.log_model(model, "model")

4. Register Model in Azure ML

a. Register MLflow Model

from azureml.core.model import Model

# <run_id> is the MLflow run ID from the experiment tracked above
model_uri = "runs:/<run_id>/model"
model_details = mlflow.register_model(model_uri=model_uri, name="my_model")

b. Verify Registered Model

models = Model.list(ws)
for model in models:
    print(model.name, model.version)

5. Deploy Model as an Azure ML Endpoint

a. Create a Scoring Script (score.py)

import json
import mlflow.sklearn
from azureml.core.model import Model

def init():
    global model
    model_path = Model.get_model_path("my_model")
    model = mlflow.sklearn.load_model(model_path)

def run(data):
    input_data = json.loads(data)["data"]
    prediction = model.predict(input_data)
    return json.dumps({"prediction": prediction.tolist()})
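
The run() contract above (JSON string in, JSON string out) can be sanity-checked locally without any Azure resources. Here is a minimal sketch with a hypothetical stub standing in for the loaded sklearn model:

```python
import json

# Hypothetical stand-in for the real model: "predicts" the sum of each row
class StubModel:
    def predict(self, rows):
        return [sum(row) for row in rows]

model = StubModel()

# Same shape as the score.py run() function: parse the "data" key,
# predict, and return a JSON-serializable response
def run(data):
    input_data = json.loads(data)["data"]
    prediction = model.predict(input_data)
    return json.dumps({"prediction": list(prediction)})

payload = json.dumps({"data": [[1, 2, 3], [4, 5, 6]]})
print(run(payload))  # {"prediction": [6, 15]}
```

Exercising the contract this way catches payload-format mistakes before they surface as opaque 500 errors from the deployed container.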

b. Create an Inference Environment

from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig

env = Environment("inference-env")
env.python.conda_dependencies.add_pip_package("scikit-learn")
env.python.conda_dependencies.add_pip_package("mlflow")
env.python.conda_dependencies.add_pip_package("azureml-defaults")

inference_config = InferenceConfig(entry_script="score.py", environment=env)
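
Equivalently, the environment can be described in a conda specification file and loaded with Environment.from_conda_specification. This is a sketch; the file name and pinned Python version are illustrative:

```yaml
# conda_inference.yml (hypothetical file name)
name: inference-env
dependencies:
  - python=3.9
  - pip
  - pip:
      - scikit-learn
      - mlflow
      - azureml-defaults
```

Load it with `env = Environment.from_conda_specification("inference-env", "conda_inference.yml")` in place of the programmatic definition above.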

c. Deploy as an Azure ML Web Service

from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Model.deploy expects Azure ML Model objects, so fetch the registered model
# (latest version) from the workspace rather than reusing the MLflow ModelVersion
azureml_model = Model(ws, name="my_model")

service = Model.deploy(ws, "my-model-service", [azureml_model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)

6. Test the Deployed Model

import json
import requests

input_data = json.dumps({"data": [[0.1, 0.2, 0.3, 0.4]]})
headers = {"Content-Type": "application/json"}

response = requests.post(service.scoring_uri, data=input_data, headers=headers)
print(response.json())
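
To see the full request/response round trip without waiting on an ACI deployment, the same client logic can be exercised against a throwaway local HTTP server standing in for the endpoint. The summing "model" inside the handler is hypothetical; only the JSON contract matters here:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal local stand-in for the deployed scoring endpoint
class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        rows = json.loads(self.rfile.read(int(self.headers["Content-Length"])))["data"]
        body = json.dumps({"prediction": [round(sum(r), 6) for r in rows]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# POST rows to a scoring URI and return the parsed JSON response
def score(uri, rows):
    req = urllib.request.Request(
        uri,
        data=json.dumps({"data": rows}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

server = HTTPServer(("127.0.0.1", 0), ScoringHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = score(f"http://127.0.0.1:{server.server_port}/score", [[0.1, 0.2, 0.3, 0.4]])
server.shutdown()
print(result)  # {'prediction': [1.0]}
```

Against the real deployment, only the URI changes: pass `service.scoring_uri` to score() instead of the local address.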

7. Monitor and Manage Deployment

a. Get Logs

service.get_logs()

b. Delete the Service

service.delete()

Final Thoughts

By following these steps, you have successfully:
✅ Created a training script
✅ Logged experiments with MLflow
✅ Registered the model in Azure ML
✅ Deployed it as a REST API
✅ Tested and monitored the deployment

