
Saturday

Transfer Learning with Keras Hub

 


You might have experience with different types of image processing in deep learning (a branch of machine learning). One such technique is transfer learning.

Transfer Learning in CNN Image Processing

Transfer learning is a machine learning technique in which a model developed for one task is reused as the starting point for a model on a second task. In the context of Convolutional Neural Networks (CNNs) for image processing, transfer learning leverages pre-trained CNN models.


Key Concepts

Pre-trained models: Models trained on large, diverse image datasets (e.g., ImageNet).

Feature extraction: Pre-trained models extract general features (edges, shapes, textures).

Fine-tuning: Adapting pre-trained models to specific tasks through additional training.
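As a minimal sketch of the feature-extraction idea above: a frozen pre-trained backbone turns each image into a fixed-length feature vector. (`weights=None` is used here only to skip the ImageNet weight download; in practice you would pass `weights='imagenet'`.)

```python
import numpy as np
from tensorflow.keras.applications import ResNet50

# Build ResNet-50 without its classification head; weights=None only to
# avoid the ImageNet download -- use weights='imagenet' in practice.
extractor = ResNet50(weights=None, include_top=False, pooling='avg',
                     input_shape=(224, 224, 3))
extractor.trainable = False  # freeze: pure feature extraction

# One dummy image in, one 2048-dimensional feature vector out
batch = np.random.rand(1, 224, 224, 3).astype('float32')
features = extractor.predict(batch, verbose=0)
print(features.shape)  # (1, 2048)
```

These feature vectors can then feed a small task-specific classifier, which is exactly what the fine-tuning example later in this post builds.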

Benefits

Reduced training time: Leverage existing knowledge.

Improved accuracy: Pre-trained models provide a solid foundation.

Smaller datasets: Effective with limited task-specific data.


Popular Pre-trained CNN Models

VGG16: 16-layer model, excellent for feature extraction.

ResNet50: 50-layer model, top performance on ImageNet.

InceptionV3: 48-layer model, efficient and accurate.

MobileNet: Lightweight, for mobile and embedded devices.
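All four models above ship with `tensorflow.keras.applications` and can be instantiated with one line each. A rough size comparison (again with `weights=None` to skip the weight download; pass `weights='imagenet'` for real use):

```python
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3, MobileNet

# weights=None builds the architecture without downloading ImageNet weights.
for name, builder in [("VGG16", VGG16), ("ResNet50", ResNet50),
                      ("InceptionV3", InceptionV3), ("MobileNet", MobileNet)]:
    model = builder(weights=None)
    print(f"{name}: {model.count_params():,} parameters")
```

The parameter counts make the trade-off concrete: VGG16 is by far the largest, while MobileNet is small enough for embedded devices.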


Transfer Learning Strategies

Feature extraction: Freeze pre-trained layers, use as feature extractor.

Fine-tuning: Unfreeze some pre-trained layers and continue training them on the new task.

Weight transfer: Transfer selected weights to new models.
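The first two strategies differ only in which layers stay frozen. A minimal sketch (MobileNet with `weights=None` to avoid the download; the "last 10 layers" cutoff is an arbitrary illustration, not a recommendation):

```python
from tensorflow.keras.applications import MobileNet

# weights=None avoids the ImageNet download; use weights='imagenet' in practice.
base = MobileNet(weights=None, include_top=False, input_shape=(224, 224, 3))

# Feature extraction: freeze everything -- no pre-trained weight is updated.
base.trainable = False
print(len(base.trainable_weights))  # 0

# Fine-tuning: unfreeze only the last few layers and retrain them.
base.trainable = True
for layer in base.layers[:-10]:
    layer.trainable = False
print(len([l for l in base.layers if l.trainable]))  # 10
```

In both cases a new task-specific head is added on top; only the amount of pre-trained weight updating changes.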

Applications

Image classification: Adapt pre-trained models for custom classes.

Object detection: Utilize pre-trained backbones for detection.

Segmentation: Leverage pre-trained models for image segmentation.


Example Code (Keras)

Python


# Import necessary libraries
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

# Load ResNet-50 model
base_model = ResNet50(weights='imagenet', include_top=True)

# Load and preprocess image
image_path = "path_to_your_image.jpg" # Replace with your image path
img = image.load_img(image_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

# Make predictions
preds = base_model.predict(x)
decoded_preds = decode_predictions(preds, top=3)[0]

# Print predictions
for (i, (imagenetID, label, prob)) in enumerate(decoded_preds):
    print(f"{i+1}. {label}: {prob*100:.2f}%")

# Fine-tune ResNet-50 for a custom dataset (optional)

# 1. Reload the base model without the ImageNet head and freeze it
base_model = ResNet50(weights='imagenet', include_top=False, pooling='avg',
                      input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False

# 2. Add custom layers on top of the pooled features
x = base_model.output
x = keras.layers.Dense(1024, activation='relu')(x)
x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(len(your_custom_classes), activation='softmax')(x)

# 3. Compile and train
model = keras.Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(your_custom_train_data, epochs=10)


# Evaluate model
loss, accuracy = model.evaluate(your_custom_test_data)
print(f"Test Loss: {loss}")
print(f"Test Accuracy: {accuracy*100}%")

# Additional metrics
from sklearn.metrics import classification_report, confusion_matrix

# Predict classes
predictions = model.predict(your_custom_test_data)
predicted_classes = np.argmax(predictions, axis=1)
true_classes = np.argmax(your_custom_test_data.labels, axis=1)

# Classification report
print(classification_report(true_classes, predicted_classes))

# Confusion matrix
print(confusion_matrix(true_classes, predicted_classes))


Guide

Choose suitable pre-trained models.

Monitor performance and adjust strategies.

Experiment with fine-tuning and feature extraction.

By applying transfer learning effectively, you can efficiently develop accurate CNN image processing models.


For comparison, below is example code for image classification without transfer learning, using TensorFlow and Keras. This code assumes you have TensorFlow installed.


Python


# Import necessary libraries
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt

# Load dataset (e.g., CIFAR-10)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()

# Normalize pixel values
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Define the model architecture
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f'Test accuracy: {test_acc}')

# Use the model for predictions
predictions = model.predict(x_test)

# Get the class labels
class_labels = np.argmax(predictions, axis=1)

# Visualize predictions (optional)
plt.figure(figsize=(10, 10))
for i in range(9):
    plt.subplot(3, 3, i+1)
    plt.imshow(x_test[i])
    plt.title(class_labels[i])
    plt.axis('off')
plt.show()


Explanation

Import Libraries: We import the necessary TensorFlow, Keras, NumPy and Matplotlib libraries.

Load Dataset: We load the CIFAR-10 dataset, which consists of 60,000 32x32 color images in 10 classes (animals, vehicles, etc.).

Preprocess Data: We normalize pixel values to improve model performance.

Define Model Architecture: We create a convolutional neural network (CNN) using Keras' Sequential API.

Compile Model: We configure the model's optimizer, loss function and evaluation metrics.

Train Model: We train the model on the training data for 10 epochs.

Evaluate Model: We assess the model's performance on the test data.

Make Predictions: We use the trained model to predict class labels for test images.

Visualize Predictions: Optionally, we display the first nine test images with predicted class labels.


Example Use Cases

Image Classification: Train the model on your dataset for multi-class image classification tasks.

Transfer Learning: Utilize pre-trained models (e.g., ResNet50) for image classification by replacing the final layers.

Hyperparameter Tuning: Experiment with different optimizers, learning rates and architectures to enhance performance.
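As a minimal sketch of the hyperparameter-tuning idea (the tiny model and the learning-rate values here are illustrative placeholders, not tuned choices):

```python
from tensorflow import keras

def build_model(learning_rate):
    # Tiny CNN with CIFAR-10-shaped inputs, for illustration only.
    model = keras.Sequential([
        keras.Input(shape=(32, 32, 3)),
        keras.layers.Conv2D(32, (3, 3), activation='relu'),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
    )
    return model

# Try a few learning rates; in practice, train each candidate briefly
# and keep whichever gives the best validation accuracy.
for lr in [1e-2, 1e-3, 1e-4]:
    model = build_model(lr)
    print(f"lr={lr}: compiled model with {model.count_params():,} parameters")
```

The same loop pattern works for sweeping optimizers or layer sizes: build, compile, train briefly, and compare validation metrics.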


Transfer learning with TensorFlow KerasHub


However, you can now do the same thing directly with the newly launched KerasHub.

Here is example code using transfer learning with VGG16 and keras-hub:


Python


import os

# Set the Keras backend before importing Keras / KerasHub
os.environ["KERAS_BACKEND"] = "tensorflow"

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input, decode_predictions

# Load VGG16 model
classifier = VGG16(weights='imagenet', include_top=True)

# Predict label for a single image
image_url = "https://upload.wikimedia.org/wikipedia/commons/a/aa/California_quail.jpg"
image_path = tf.keras.utils.get_file(origin=image_url)
img = image.load_img(image_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = classifier.predict(x)
print(decode_predictions(preds, top=3)[0])

# Load a BERT classifier from KerasHub (the keras-hub package)
import keras_hub

classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_en_uncased",
    activation="softmax",
    num_classes=2,
)

# Fine-tune on IMDb movie reviews
imdb_train, imdb_test = tfds.load(
    "imdb_reviews",
    split=["train", "test"],
    as_supervised=True,
    batch_size=16,
)
classifier.fit(imdb_train, validation_data=imdb_test)

# Predict two new examples
preds = classifier.predict(
    ["What an amazing movie!", "A total waste of my time."]
)
print(preds)


Imported VGG16 from tensorflow.keras.applications.

Used VGG16 for image classification.

Utilized decode_predictions from tensorflow.keras.applications.vgg16 for prediction decoding.


Advice

Experiment with different pre-trained models.

Fine-tune VGG16, ResNet, or another model for optimal performance on custom datasets.

Monitor prediction accuracy.

Visit https://keras.io/keras_hub/ for details.