Ollama and Gemma3: A Tiny LLM Test on CPU
Have you ever tested the tiny LLM Gemma3:1B with Ollama on a laptop or system that lacks a GPU?
You can build a fairly powerful GenAI application with it; however, it can be a little slow because everything runs on the CPU.
Steps:
- Download and install Ollama if it is not already on your system:
  - Go to https://ollama.com/download and get the installation command
  - Verify the installation with `ollama --version`
- Now pull the Gemma LLM:
- Go to https://ollama.com/library/gemma3
- Run: `ollama pull gemma3:1b`
- Start the Ollama server if it is not already running:
- Check the list: `ollama list`
- Run: `ollama serve`
- Install the Python libraries:
  - Run: `pip install ollama`
  - Run: `pip install "jupyter-ai[ollama]"`
- To stop the Ollama server later:
  - Run: `ps aux | grep ollama`
  - Run: `kill <PID>`
  - Or, if Ollama was installed as a systemd service: `sudo systemctl stop ollama`
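Once the server is up, you can sanity-check the model from plain Python before opening a notebook. This is a minimal sketch that talks to Ollama's REST endpoint (`/api/chat` on the default port 11434) using only the standard library; the prompt and helper names are my own illustration, not part of Ollama itself. The `ollama` pip client installed above offers the equivalent `ollama.chat(...)` call.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "gemma3:1b"  # the tiny model pulled in the steps above

def build_payload(prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete JSON reply instead of streamed chunks
    }

def ask(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires `ollama serve` to be running):
# print(ask("Why is the sky blue?"))
```

On a CPU-only machine the first call is the slowest because the model weights are loaded into RAM; subsequent calls reuse the loaded model.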
That's all. Now go to your Jupyter notebook; if it is not running, start it with `jupyter lab` or `jupyter notebook`.
You can test by running my example notebook here: https://github.com/dhirajpatra/jupyter_notebooks/blob/main/LLM/gemma3-1b-test.ipynb
Now it is your turn to configure, tune, and develop many different applications, from RAG to Agentic AI. You can find more code in my GitHub repos, and a quick-start guide here on the blog. Thank you.
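To give a flavour of the RAG direction mentioned above, here is a hedged, minimal sketch: retrieve the most relevant snippet from a tiny in-memory corpus by naive word overlap, then stuff it into the prompt for the local model. The corpus, scoring, and prompt template are all illustrative assumptions; a real RAG app would use embeddings and a vector store.

```python
# Toy "corpus" standing in for your real documents (illustrative only)
DOCS = [
    "Ollama runs LLMs locally and listens on port 11434 by default.",
    "Gemma3:1b is a small model that fits comfortably in CPU RAM.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the question (naive overlap)."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt sent to the model."""
    context = retrieve(question, docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

def answer(question: str) -> str:
    """Retrieve context, then ask the local model (needs `ollama serve` running)."""
    import ollama  # the pip client installed in the steps above
    reply = ollama.chat(
        model="gemma3:1b",
        messages=[{"role": "user", "content": build_prompt(question, DOCS)}],
    )
    return reply["message"]["content"]

# Example (requires the Ollama server):
# print(answer("What port does Ollama use?"))
```

Swapping the naive overlap scorer for Ollama's embedding models is the natural next step toward a proper RAG pipeline.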