Ollama and Gemma3 Tiny Test on CPU

Have you ever tested the tiny LLM Gemma3:1B with Ollama on a laptop or system that lacks a GPU?

You can build a fairly powerful GenAI application this way; however, it can be a little slow because all inference runs on the CPU.

Steps:

  1. Download and install Ollama if it is not already on your system: 
    1. Go to https://ollama.com/download and get the installation command
    2. Verify the installation with `ollama --version`
  2. Now pull the Gemma3 LLM: 
    1. Go to https://ollama.com/library/gemma3
    2. Run: `ollama pull gemma3:1b`
  3. Start the Ollama server if it is not already running
    1. Check the downloaded models: `ollama list`
    2. Run: `ollama serve`
  4. Install the Python libraries 
    1. Run: `pip install ollama`
    2. Run: `pip install "jupyter-ai[ollama]"`
  5. To stop the Ollama server when you are done
    1. Run: `ps aux | grep ollama` to find the process ID
    2. Run: `kill <PID>`
    3. Or, if Ollama is managed as a systemd service: `sudo systemctl stop ollama`
That's all. Now go to your Jupyter notebook. If it is not running, start it with `jupyter lab` or `jupyter notebook`.
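Once `jupyter-ai[ollama]` is installed (step 4), the model can be prompted directly from a notebook cell via the `%%ai` magic. The provider:model syntax below follows the jupyter-ai documentation; double-check it against your installed version:

```
%load_ext jupyter_ai_magics
```

Then, in a new cell:

```
%%ai ollama:gemma3:1b
Summarize what a large language model is in two sentences.
```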


Now it is your turn to configure, tune, and develop many different applications, from RAG to Agentic AI. You can find more code in my GitHub repos, and the quick-start guide is here in the blog. Thank you.
