

Google Cloud VertexAI AutoML Vision Identifying Damaged Cars

Vertex AI brings together the Google Cloud services for building ML under one, unified UI and API. In Vertex AI, you can now easily train and compare models using AutoML or custom code training and all your models are stored in one central model repository. These models can now be deployed to the same endpoints on Vertex AI.

AutoML Vision helps anyone with limited Machine Learning (ML) expertise train high-quality image classification models. In this hands-on lab, you will learn how to produce a custom ML model that automatically recognizes damaged car parts. Because training the model takes longer than the lab's time limit, you will instead request predictions from a hosted model, trained on the same dataset, in a different project. You will then tweak the values of the data in the prediction request and examine how that changes the resulting prediction from the model.

Screenshots from the Google Cloud Console

Objectives

In this lab, you learn how to:

  • Upload a labeled dataset to Cloud Storage using a CSV file and connect it to Vertex AI as a Managed Dataset.
  • Inspect uploaded images to ensure there are no errors in your dataset.
  • Kick off an AutoML Vision model training job.
  • Request predictions from a hosted model trained on the same dataset.

Setup and requirements

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Activate Cloud Shell

Cloud Shell is a virtual machine that contains development tools. It offers a persistent 5-GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.

  1. Click the Activate Cloud Shell button (Activate Cloud Shell icon) at the top right of the console.

  2. Click Continue.
    It takes a few moments to provision and connect to the environment. When you are connected, you are also authenticated, and the project is set to your PROJECT_ID.

Sample commands

  • List the active account name:
gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • List the project ID:
gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6
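  • If the project is not set for any reason, you can set it explicitly. This is a minimal sketch; substitute your own lab project ID for the placeholder:

gcloud config set project <project_ID>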

Task 1. Upload training images to Cloud Storage

In this task you will upload the training images you want to use to Cloud Storage. This will make it easier to import the data into Vertex AI later.

To train a model to classify images of damaged car parts, you need to provide the machine with labeled training data. The model uses that data to develop an understanding of each image and learn to distinguish between the different damaged car parts.

In this example, your model will learn to classify five different damaged car parts: bumper, engine compartment, hood, lateral, and windshield.

Create a Cloud Storage bucket

  1. To start, open a new Cloud Shell window and execute the following commands to set some environment variables:
export PROJECT_ID=$DEVSHELL_PROJECT_ID
export BUCKET=$PROJECT_ID
  2. Next, to create a Cloud Storage bucket, execute the following command:
gsutil mb -p $PROJECT_ID \
    -c standard \
    -l REGION \
    gs://${BUCKET}
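For reference, with the placeholders filled in, the bucket-creation command could look like the following. The region us-central1 is only an illustration; use the region given in your lab instructions:

gsutil mb -p $PROJECT_ID -c standard -l us-central1 gs://${BUCKET}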

Upload car images to your Storage Bucket

The training images are publicly available in a Cloud Storage bucket. Again, copy and paste the script template below into Cloud Shell to copy the images into your own bucket.

  1. To copy images into your Cloud Storage bucket, execute the following command:
gsutil -m cp -r gs://car_damage_lab_images/* gs://${BUCKET}
  2. In the navigation pane, click Cloud Storage > Buckets.

  3. Click the Refresh button at the top of the Cloud Storage browser.

  4. Click on your bucket name. You should see five folders of photos for each of the five different damaged car parts to be classified:

Bucket with folders titled: bumper, engine compartment, hood, lateral, and windshield.

  5. Optionally, you can click one of the folders and check out the images inside.

Great! Your car images are now organized and ready for training.
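If you prefer the command line, a quick check from Cloud Shell (assuming the BUCKET variable set earlier) is to list the bucket contents:

# Should list the five label folders copied above
gsutil ls gs://${BUCKET}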

Click Check my progress to verify the objective.

Upload car images to your Storage Bucket

Task 2. Create a dataset

In this task you create a new dataset and connect your dataset to your training images to allow Vertex AI to access them.

Normally, you would create a CSV file where each row contains a URL to a training image and the associated label for that image. In this case, the CSV file has been created for you; you just need to update it with your bucket name and upload the CSV file to your Cloud Storage bucket.
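For reference, each row of such a CSV pairs the Cloud Storage URI of a training image with its label. The rows below are purely illustrative (the file names are made up); the actual contents come from the provided data.csv:

gs://car_damage_lab_images/bumper/133.JPG,bumper
gs://car_damage_lab_images/windshield/27.JPG,windshield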

Update the CSV file

Copy and paste the script templates below into Cloud Shell and press Enter to update and upload the CSV file.

  1. To create a copy of the file, execute the following command:
gsutil cp gs://car_damage_lab_metadata/data.csv .
  2. To update the CSV with the path to your storage, execute the following command:
sed -i -e "s/car_damage_lab_images/${BUCKET}/g" ./data.csv
  3. Verify your bucket name was inserted into the CSV properly:
cat ./data.csv
  4. To upload the CSV file to your Cloud Storage bucket, execute the following command:
gsutil cp ./data.csv gs://${BUCKET}
  5. Once the command completes, click the Refresh button at the top of the Cloud Storage browser and open your bucket.

  6. Confirm that the data.csv file is listed in your bucket.
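As an optional command-line check (a small sketch using the variables and file from the steps above), you can confirm the substitution and the upload from Cloud Shell:

# No output from grep means every image path now points at your own bucket
grep car_damage_lab_images ./data.csv
# Confirms the CSV was uploaded
gsutil ls gs://${BUCKET}/data.csv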

The data.csv file listed in the bucket

Create a managed dataset

  1. In the Google Cloud Console, on the Navigation menu (Navigation menu icon) click Vertex AI > Dashboard.

  2. Click Enable all recommended APIs.

  3. From the Vertex AI navigation menu on the left, click Datasets.

  4. At the top of the console, click + Create.

  5. For Dataset name, type damaged_car_parts.

  6. Select Image classification (Single label). (Note: in your own projects, you may want to check the "Multi-label Classification" box if you're doing multi-label classification.)

  7. For the region, select REGION.

  8. Click Create.
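If you prefer working from Cloud Shell, a roughly equivalent dataset can be created with gcloud. This is a hedged sketch: the image metadata schema URI below is the commonly documented one, so verify it against the current Vertex AI documentation, and replace REGION as before. Treat it purely as a reference; the lab's console flow in the next section expects the dataset created through the UI:

gcloud ai datasets create \
  --display-name=damaged_car_parts \
  --metadata-schema-uri=gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml \
  --region=REGION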

Connect your dataset to your training images

In this section, you will choose the location of your training images that you uploaded in the previous step.

  1. In the Select an import method section, click Select import files from Cloud Storage.

  2. In the Select import files from Cloud Storage section, click Browse.

  3. Follow the prompts to navigate to your storage bucket and click your data.csv file. Click Select.

  4. Once you've properly selected your file, a green checkbox appears to the left of the file path. Click Continue to proceed.

  5. Once the import has completed, prepare for the next section by clicking the Browse tab. (Hint: You may need to refresh the page to confirm.)

Click Check my progress to verify the objective.

Create a dataset

Task 3. Inspect images

In this task, you examine the images to ensure there are no errors in your dataset.

Image tiles on the Browse tabbed page

Check image labels

  1. If your browser page has refreshed, click Datasets, select your dataset name, and then click Browse.

  2. Under Filter labels, click any one of the labels to view the specific training images.

  3. If an image is labeled incorrectly, you can click on it to select the correct label or delete the image from your training set:

Image details

  4. Next, click on the Analyze tab to view the number of images per label. The Label Stats window appears on the right side of your browser.

Task 4. Train your model

You're ready to start training your model! Vertex AI handles this for you automatically, without requiring you to write any of the model code.

  1. From the right-hand side, click Train New Model.

  2. From the Training method window, leave the default configurations and select AutoML as the training method. Click Continue.

  3. From the Model details window, name your model damaged_car_parts_model. Click Continue.

  4. From the Training options window, click Continue.

  5. From the Explainability window, click Continue. In the Compute and pricing window, set your budget to 8 maximum node hours.

  6. Click Start Training.

Click Check my progress to verify the objective.

Train your model

Task 5. Request a prediction from a hosted model

For the purposes of this lab, a model trained on the exact same dataset is hosted in a different project so that you can request predictions from it while your own model finishes training, since that training will likely exceed the time limit of this lab.

A proxy to the pre-trained model is set up for you so you don't need to run through any extra steps to get it working within your lab environment.

To request predictions from the model, you will send requests to an endpoint inside your project, which forwards them to the hosted model and returns the output. Sending a prediction request to the AutoML proxy is very similar to the way you would interact with the model you just created, so you can use this as practice.

Get the name of AutoML proxy endpoint

  1. In the Google Cloud Console, on the Navigation menu (≡) click Cloud Run.

  2. Click automl-proxy.

automl proxy endpoint

  3. Copy the URL to the endpoint. It should look something like: https://automl-proxy-xfpm6c62ta-uc.a.run.app.

endpoint url

You will use this endpoint for the prediction request in the next section.
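If you would rather grab this value from Cloud Shell, the same URL can be read with gcloud. This assumes the service runs in the lab's default region; replace REGION accordingly:

gcloud run services describe automl-proxy --region=REGION --format='value(status.url)'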

Create a prediction request

  1. Open a new Cloud Shell window.

  2. On the Cloud Shell toolbar, click Open Editor.

  3. Click File > New File.

  4. Click on the link and copy the file content into the new file you just created.

  5. Save the file and name it payload.json.

For reference, the content you copied is a Base64-encoded string of the following image.

hood

  6. Next, set the following environment variables, copying in the AutoML proxy URL you retrieved earlier.
AUTOML_PROXY=<automl-proxy url>
INPUT_DATA_FILE=payload.json
  7. Perform an API request to the AutoML Proxy endpoint to request a prediction from the hosted model:
curl -X POST -H "Content-Type: application/json" $AUTOML_PROXY/v1 -d "@${INPUT_DATA_FILE}"

If you ran a successful prediction, your output should resemble the following:

{"predictions":[{"confidences":[0.951557755],"displayNames":["bumper"],"ids":["1960986684719890432"]}],"deployedModelId":"4271461936421404672","model":"projects/1030115194620/locations/us-central1/models/2143634257791156224","modelDisplayName":"damaged_car_parts_vertex","modelVersionId":"1"}

For this model, the prediction results are pretty self-explanatory. The displayNames field should correctly predict a bumper with a high confidence score. Now, you can change the Base64-encoded image value in the JSON file you created.

Click Check my progress to verify the objective.

Create the prediction request

  1. Right-click on each image below, then select Save image As….

  2. Follow the prompts to save each image with a unique name. (Hint: Assign a simple name like 'Image1' and 'Image2' to assist with uploading).

Image 2 and Image 3

  3. Open the Base64 Image Encoder and follow the instructions to upload and encode an image to a Base64 string. (Alternatively, you can encode the image from Cloud Shell; see the sketch after these steps.)

  4. Replace the Base64 encoded string value in the content field in your JSON payload file, and run the prediction again. Repeat for the other image(s).
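If you prefer to skip the web encoder, the payload can also be rebuilt directly in Cloud Shell. This is a sketch only: image1.jpg is a placeholder name for one of the images you saved, and it assumes your payload.json follows the common Vertex AI image-prediction shape of {"instances": [{"content": "<Base64>"}]}; treat the file provided by the lab as the source of truth.

# Encode one of the saved images (GNU base64 in Cloud Shell; -w0 disables line wrapping)
IMAGE_B64=$(base64 -w0 image1.jpg)
# Rebuild payload.json with the new Base64 content (assumed request shape)
cat > payload.json <<EOF
{"instances": [{"content": "${IMAGE_B64}"}]}
EOF
# Re-run the prediction request
curl -X POST -H "Content-Type: application/json" $AUTOML_PROXY/v1 -d "@${INPUT_DATA_FILE}"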

How did your model do? Did it predict all three images correctly? You should see the following outputs, respectively:

{"predictions":[{"ids":["5419751198540431360"],"confidences":[0.985487759],"displayNames":["engine_compartment"]}],"deployedModelId":"4271461936421404672","model":"projects/1030115194620/locations/us-central1/models/2143634257791156224","modelDisplayName":"damaged_car_parts_vertex","modelVersionId":"1"}
{"predictions":[{"displayNames":["hood"],"ids":["3113908189326737408"],"confidences":[0.962432086]}],"deployedModelId":"4271461936421404672","model":"projects/1030115194620/locations/us-central1/models/2143634257791156224","modelDisplayName":"damaged_car_parts_vertex","modelVersionId":"1"}

Task 6. Review

In this lab, you learned how to train your own custom machine learning model and generate predictions on a hosted model via an API request. You uploaded training images to Cloud Storage and used a CSV file so that Vertex AI could find those images. You inspected the labeled images for any discrepancies before finally evaluating a trained model. Now you've got what it takes to train a model on your own image dataset!

Next steps / learn more

Courtesy of Google Qwiklabs.