Posts

Showing posts with the label gemma

ADK Smart Home Multi-Agent System

📢 Excited to share my latest creation: the ADK Smart Home Multi-Agent System! 🏠💡

In a world where smart homes are becoming increasingly ubiquitous, the real challenge is creating seamlessly integrated, intelligent, and proactive systems. I've been developing a comprehensive solution that brings together real-time IoT data, weather intelligence, and AI-powered insights, all controllable and viewable through Google Home.

The ADK Smart Home Multi-Agent System is a sophisticated multi-agent microservice application designed for environmental monitoring and smart home automation. It's built around:

🏡 Real-time IoT Monitoring: Leveraging Arduino-based sensors, it captures precise indoor temperature and humidity data.
☁️ City-Wide Weather Intelligence: Integrates with external APIs (like OpenWeatherMap) to provide current outdoor temperature, humidity, and weather descriptions.
🤖 Multi-Agent Architecture (ADK): Built with the Agent Development Kit, the system featur...
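As a concrete illustration of the weather-intelligence piece, a city-level lookup against OpenWeatherMap's current-weather endpoint can be sketched as follows. The city name, units, and API-key handling here are my assumptions for illustration, not details from the post:

```python
# Hedged sketch: fetch current outdoor temperature, humidity, and a short
# description from the OpenWeatherMap current-weather API.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.openweathermap.org/data/2.5/weather"

def build_weather_url(city, api_key, units="metric"):
    """Compose the request URL for a city-level current-weather lookup."""
    query = urllib.parse.urlencode({"q": city, "appid": api_key, "units": units})
    return f"{BASE_URL}?{query}"

def fetch_weather(city, api_key):
    """Return the outdoor readings the post describes, as a small dict."""
    with urllib.request.urlopen(build_weather_url(city, api_key)) as resp:
        data = json.load(resp)
    # Field paths follow OpenWeatherMap's documented response shape.
    return {
        "temperature": data["main"]["temp"],
        "humidity": data["main"]["humidity"],
        "description": data["weather"][0]["description"],
    }
```

In the actual system this kind of call would sit behind one of the ADK agents, so the rest of the pipeline only sees a normalized reading.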

OLLama and Gemma3 Tiny Test On CPU

Have you ever tested the tiny LLM Gemma3:1B with Ollama on a laptop or system that lacks a GPU? You can build a fairly powerful GenAI application, although it can be a little slow due to CPU processing.

Steps:

1. Download and install Ollama if it isn't already on your system: go to https://ollama.com/download and get the installation command.
2. Check that Ollama is installed: `ollama --version`
3. Pull the Gemma LLM: go to https://ollama.com/library/gemma3 and run `ollama pull gemma3:1b`
4. Start the Ollama server with the LLM if it isn't already running:
   - Check the list: `ollama list`
   - Run: `ollama serve`
5. Install the pip libraries:
   - Run: `pip install ollama`
   - Run: `pip install "jupyter-ai[ollama]"`
6. To stop the Ollama server later:
   - Run: `ps aux | grep ollama`
   - Run: `kill <PID>`
   - Or run: `sudo systemctl stop ollama`

That's all. Now go to your Jupyter notebook; if it isn't running, start it with `jupyter lab` or `jupyter notebook`. You can test by running my example notebook here: https://github.co...
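Once the server is up, the setup above can be exercised from Python with the `ollama` client library. A minimal sketch, where the model name and prompt are just examples:

```python
# Minimal sketch: query the local gemma3:1b model through the `ollama`
# Python client. Assumes `ollama serve` is running on localhost:11434.

def build_messages(prompt):
    """Shape a single-turn chat request for ollama.chat()."""
    return [{"role": "user", "content": prompt}]

def ask_gemma(prompt, model="gemma3:1b"):
    """Send one prompt to the local model and return its reply text."""
    import ollama  # from `pip install ollama`
    response = ollama.chat(model=model, messages=build_messages(prompt))
    return response["message"]["content"]
```

Calling `ask_gemma("Say hello in five words.")` returns the model's reply as a string; on CPU the 1B model usually answers in a few seconds.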

Local Gemma3 as VSCode Code Generation Extension

To use the #Gemma3:1B model directly in #VSCode as a #codeassistant, you'll need to set up a local inference server or use an API that integrates with VS Code. Here's a step-by-step guide:

Option 1: Run Gemma Locally & Integrate with VS Code

1. Install Required Dependencies

Ensure you have Python (≥3.9) and `pip` installed. Then install the necessary packages:

```bash
pip install transformers torch sentencepiece
```

2. Load Gemma 3:1B in a Python Script

Create a Python script (`gemma_inference.py`) to load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-3-1b-it"  # or "google/gemma-3-4b-it" if you have more resources
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def generate_code(prompt):
    # Use model.device so this works on CPU as well as GPU
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, ...
```
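The excerpt cuts off inside `generate_code`. A plausible completion looks like the following; the generation parameters and the prompt wrapper are my assumptions for illustration, not the post's code:

```python
# Sketch of a completed generate_code(). `model` and `tokenizer` are the
# objects loaded in gemma_inference.py; generation settings are illustrative.

def format_prompt(task):
    """Wrap a plain task description as a code-generation instruction."""
    return f"Write Python code for the following task:\n{task}\n"

def generate_code(model, tokenizer, task, max_new_tokens=256):
    prompt = format_prompt(task)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Drop the echoed prompt tokens; return only the newly generated text.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Greedy decoding (`do_sample=False`) is a reasonable default for code, where deterministic output is usually preferable to creative sampling.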