Due to time constraints, I am sharing screenshots from the Google Cloud project.
Google Cloud Vertex AI AutoML Vision: Identifying Damaged Cars
Vertex AI brings together the Google Cloud services for building ML under one unified UI and API. In Vertex AI, you can now easily train and compare models using AutoML or custom code training, and all your models are stored in one central model repository. These models can now be deployed to the same endpoints on Vertex AI.
AutoML Vision helps anyone with limited Machine Learning (ML) expertise train high-quality image classification models. In this hands-on lab, you will learn how to produce a custom ML model that automatically recognizes damaged car parts. Since the time it takes to train the model exceeds the time limit of the lab, you will interact with and request predictions from a hosted model in a different project trained on the same dataset. You will then tweak the values of the data for the prediction request and examine how that changes the resulting prediction from the model.
Screenshots from Google Cloud
Objectives
In this lab, you learn how to:
- Upload a labeled dataset to Cloud Storage using a CSV file and connect it to Vertex AI as a Managed Dataset.
- Inspect uploaded images to ensure there are no errors in your dataset.
- Kick off an AutoML Vision model training job.
- Request predictions from a hosted model trained on the same dataset.
Setup and requirements
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning. When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges. Accept the terms and skip the recovery resource page.
Activate Cloud Shell
Cloud Shell is a virtual machine that contains development tools. It offers a persistent 5-GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.
Click the Activate Cloud Shell button at the top right of the console.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are also authenticated, and the project is set to your PROJECT_ID.
Sample commands
- List the active account name:
gcloud auth list
(Output)
Credentialed accounts:
- <myaccount>@<mydomain>.com (active)
(Example output)
Credentialed accounts:
- google1623327_student@qwiklabs.net
- List the project ID:
gcloud config list project
(Output)
[core]
project = <project_ID>
(Example output)
[core]
project = qwiklabs-gcp-44776a13dea667a6
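Later commands in this lab reference a REGION placeholder. Optionally, you can set a default compute region now so you have it on hand (us-central1 below is only an illustrative value; use the region your lab assigns):
gcloud config set compute/region us-central1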
Task 1. Upload training images to Cloud Storage
In this task you will upload the training images you want to use to Cloud Storage. This will make it easier to import the data into Vertex AI later.
To train a model to classify images of damaged car parts, you need to provide the machine with labeled training data. The model will use that data to develop an understanding of each image and learn to differentiate between the different damaged car parts.
In this example, your model will learn to classify five different damaged car parts: bumper, engine compartment, hood, lateral, and windshield.
Create a Cloud Storage bucket
- To start, open a new Cloud Shell window and execute the following commands to set some environment variables:
export PROJECT_ID=$DEVSHELL_PROJECT_ID
export BUCKET=$PROJECT_ID
- Next, to create a Cloud Storage bucket, execute the following command:
gsutil mb -p $PROJECT_ID \
  -c standard \
  -l REGION \
  gs://${BUCKET}
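As a quick optional check, you can confirm the new bucket exists before moving on:
# List metadata for the bucket you just created (assumes the BUCKET variable from the previous step)
gsutil ls -b gs://${BUCKET}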
Upload car images to your Storage Bucket
The training images are publicly available in a Cloud Storage bucket. Again, copy and paste the script template below into Cloud Shell to copy the images into your own bucket.
- To copy images into your Cloud Storage bucket, execute the following command:
gsutil -m cp -r gs://car_damage_lab_images/* gs://${BUCKET}
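If you prefer to verify from Cloud Shell instead of the console, you can list the top-level folders in your bucket; you should see one prefix per damaged-part class:
gsutil ls gs://${BUCKET}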
In the navigation pane, click Cloud Storage > Buckets.
Click the Refresh button at the top of the Cloud Storage browser.
Click on your bucket name. You should see five folders of photos for each of the five different damaged car parts to be classified:
- Optionally, you can click one of the folders and check out the images inside.
Great! Your car images are now organized and ready for training.
Click Check my progress to verify the objective.
Task 2. Create a dataset
In this task you create a new dataset and connect your dataset to your training images to allow Vertex AI to access them.
Normally, you would create a CSV file where each row contains a URL to a training image and the associated label for that image. In this case, the CSV file has been created for you; you just need to update it with your bucket name and upload the CSV file to your Cloud Storage bucket.
Update the CSV file
Copy and paste the script templates below into Cloud Shell and press Enter to update and upload the CSV file.
- To create a copy of the file, execute the following command:
gsutil cp gs://car_damage_lab_metadata/data.csv .
- To update the CSV with the path to your storage, execute the following command:
sed -i -e "s/car_damage_lab_images/${BUCKET}/g" ./data.csv
- Verify your bucket name was inserted into the CSV properly:
cat ./data.csv
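The exact rows depend on the dataset, but each line should follow the gs://path/to/image,label format Vertex AI expects for single-label image classification. Illustrative output (hypothetical file names; your bucket name will be your project ID):
gs://<your-bucket>/bumper/001.jpg,bumper
gs://<your-bucket>/engine_compartment/002.jpg,engine_compartment
gs://<your-bucket>/hood/003.jpg,hood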
- To upload the CSV file to your Cloud Storage bucket, execute the following command:
gsutil cp ./data.csv gs://${BUCKET}
Once the command completes, click the Refresh button at the top of the Cloud Storage bucket and open your bucket.
Confirm that the data.csv file is listed in your bucket.
Create a managed dataset
In the Google Cloud Console, on the Navigation menu () click Vertex AI > Dashboard.
Click Enable all recommended APIs.
From the Vertex AI navigation menu on the left, click Datasets.
At the top of the console, click + Create.
For Dataset name, type damaged_car_parts.
Select Image classification (Single label). (Note: in your own projects, you may want to check the "Multi-label Classification" box if you're doing multi-label classification.)
Select the region REGION.
Click Create.
Connect your dataset to your training images
In this section, you will choose the location of your training images that you uploaded in the previous step.
In the Select an import method section, click Select import files from Cloud Storage.
In the Select import files from Cloud Storage section, click Browse.
Follow the prompts to navigate to your storage bucket and click your data.csv file. Click Select. Once you've properly selected your file, a green checkbox appears to the left of the file path. Click Continue to proceed.
- Once the import has completed, prepare for the next section by clicking the Browse tab. (Hint: You may need to refresh the page to confirm.)
Click Check my progress to verify the objective.
Task 3. Inspect images
In this task, you examine the images to ensure there are no errors in your dataset.
Check image labels
If your browser page has refreshed, click Datasets, select your dataset name, and then click Browse.
Under Filter labels, click any one of the labels to view the specific training images.
- If an image is labeled incorrectly, you can click on it to select the correct label or delete the image from your training set:
- Next, click on the Analyze tab to view the number of images per label. The Label Stats window appears on the right side of your browser.
Task 4. Train your model
You're ready to start training your model! Vertex AI handles this for you automatically, without requiring you to write any of the model code.
From the right-hand side, click Train New Model.
From the Training method window, leave the default configurations and select AutoML as the training method. Click Continue.
From the Model details window, enter a name for your model; use damaged_car_parts_model. Click Continue.
From the Training options window, click Continue.
From the Explainability window, click Continue, and from the Compute and pricing window, set your budget to 8 maximum node hours.
Click Start Training.
Click Check my progress to verify the objective.
Task 5. Request a prediction from a hosted model
For the purposes of this lab, a model trained on the exact same dataset is hosted in a different project so that you can request predictions from it while your local model finishes training, since the local training will likely exceed the time limit of this lab.
A proxy to the pre-trained model is set up for you so you don't need to run through any extra steps to get it working within your lab environment.
To request predictions from the model, you will send prediction requests to an endpoint inside your project, which forwards each request to the hosted model and returns the output. Sending a prediction to the AutoML Proxy is very similar to the way you would interact with the model you just created, so you can use this as practice.
Get the name of AutoML proxy endpoint
In the Google Cloud Console, on the Navigation menu (≡) click Cloud Run.
Click automl-proxy.
Copy the URL to the endpoint. It should look something like: https://automl-proxy-xfpm6c62ta-uc.a.run.app.
You will use this endpoint for the prediction request in the next section.
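If you prefer the command line, you can also capture the URL from Cloud Shell (a sketch assuming the service is named automl-proxy and runs in us-central1; adjust the region to match your lab):
# Store the Cloud Run service URL in a variable for later use
AUTOML_PROXY=$(gcloud run services describe automl-proxy --region=us-central1 --format='value(status.url)')
echo $AUTOML_PROXY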
Create a prediction request
Open a new Cloud Shell window.
On the Cloud Shell toolbar, click Open Editor.
Click File > New File.
Click on the link and copy the file content into the new file you just created.
Save the file and name it payload.json.
For reference, the content you supplied is a Base64-encoded string generated from the following image.
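If you cannot reach the link, the file follows the standard Vertex AI image prediction payload shape; a minimal sketch you could create directly in Cloud Shell looks like this (the content value is a placeholder for the real Base64 string, and the parameters values are typical defaults rather than values confirmed by this lab):
# Write a skeleton payload.json; replace <BASE64_ENCODED_IMAGE> with the real string
cat > payload.json <<'EOF'
{
  "instances": [{
    "content": "<BASE64_ENCODED_IMAGE>"
  }],
  "parameters": {
    "confidenceThreshold": 0.5,
    "maxPredictions": 5
  }
}
EOF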
- Next, set the following environment variables. Copy in the AutoML Proxy URL you retrieved earlier.
AUTOML_PROXY=<automl-proxy url>
INPUT_DATA_FILE=payload.json
- Perform an API request to the AutoML Proxy endpoint to request a prediction from the hosted model:
curl -X POST -H "Content-Type: application/json" $AUTOML_PROXY/v1 -d "@${INPUT_DATA_FILE}"
If you ran a successful prediction, your output should resemble the following:
{"predictions":[{"confidences":[0.951557755],"displayNames":["bumper"],"ids":["1960986684719890432"]}],"deployedModelId":"4271461936421404672","model":"projects/1030115194620/locations/us-central1/models/2143634257791156224","modelDisplayName":"damaged_car_parts_vertex","modelVersionId":"1"}
For this model, the prediction results are pretty self-explanatory. The displayNames field should correctly predict bumper with a high confidence score. Now, you can change the Base64-encoded image value in the JSON file you created.
Click Check my progress to verify the objective.
Right-click on each image below, then select Save image As….
Follow the prompts to save each image with a unique name. (Hint: Assign a simple name like 'Image1' and 'Image2' to assist with uploading).
Open the Base64 Image Encoder and follow the instructions to upload and encode an image to a Base64 string.
Replace the Base64-encoded string value in the content field in your JSON payload file, and run the prediction again. Repeat for the other image(s), as shown in the sketch below.
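As an alternative to the web encoder, you can encode a locally saved image and rebuild the payload entirely in Cloud Shell (a sketch assuming a bash shell with GNU base64 and an image saved as Image1.jpg in the current directory):
# Encode the image as a single-line Base64 string
B64=$(base64 -w 0 Image1.jpg)
# Overwrite payload.json with the new content value
cat > payload.json <<EOF
{"instances":[{"content":"${B64}"}],"parameters":{"confidenceThreshold":0.5,"maxPredictions":5}}
EOF
# Re-run the prediction against the proxy endpoint
curl -X POST -H "Content-Type: application/json" $AUTOML_PROXY/v1 -d "@payload.json"
Repeat with each saved image to compare the predictions.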
How did your model do? Did it predict all three images correctly? You should see the following outputs, respectively:
{"predictions":[{"ids":["5419751198540431360"],"confidences":[0.985487759],"displayNames":["engine_compartment"]}],"deployedModelId":"4271461936421404672","model":"projects/1030115194620/locations/us-central1/models/2143634257791156224","modelDisplayName":"damaged_car_parts_vertex","modelVersionId":"1"}
{"predictions":[{"displayNames":["hood"],"ids":["3113908189326737408"],"confidences":[0.962432086]}],"deployedModelId":"4271461936421404672","model":"projects/1030115194620/locations/us-central1/models/2143634257791156224","modelDisplayName":"damaged_car_parts_vertex","modelVersionId":"1"}
Task 6. Review
In this lab, you learned how to train your own custom machine learning model and generate predictions on a hosted model via an API request. You uploaded training images to Cloud Storage and used a CSV file so Vertex AI could find these images. You inspected the labeled images for any discrepancies before finally evaluating a trained model. Now you've got what it takes to train a model on your own image dataset!
Next steps / learn more
- Watch the intro video to Cloud AutoML.
- Learn more about how AutoML Vision works by listening to the GoogleCloudPlatform Podcast episode.
- Read the Cloud AutoML announcement blog post.
- Learn how to perform each step with the API.