Ollama and Gemma3 Tiny Test on CPU

Have you ever tested the tiny LLM Gemma3:1B with Ollama on a laptop or system that lacks a GPU? You can build a fairly powerful GenAI application; however, it can be a little slow because everything runs on the CPU.

Steps:

1. Download and install Ollama if it is not already on your system: go to https://ollama.com/download and get the installation command.
2. Check that Ollama is installed: `ollama --version`
3. Pull the Gemma LLM: go to https://ollama.com/library/gemma3, then run `ollama pull gemma3:1b`
4. Start the Ollama server with the LLM if it is not already running:
   - Check the list of installed models: `ollama list`
   - Start the server: `ollama serve`
5. Install the pip libraries:
   - `pip install ollama`
   - `pip install "jupyter-ai[ollama]"`
6. To stop the Ollama server later, either find and kill the process (`ps aux | grep ollama`, then `kill <PID>`), or, if it runs as a service, `sudo systemctl stop ollama`

That's all. Now go to your Jupyter notebook. If it is not running, start it with `jupyter lab` or `jupyter notebook`. You can test by running my example notebook here: https://github.co...
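Once the server is running and the `ollama` pip library is installed, a quick way to exercise the model from Python (outside the notebook) looks roughly like this; the helper names and prompt are just illustrative, not from my notebook:

```python
def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat-message format the ollama client expects."""
    return [{"role": "user", "content": prompt}]

def ask_gemma(prompt: str, model: str = "gemma3:1b") -> str:
    """Send one chat turn to the local Ollama server and return the reply text."""
    import ollama  # requires `pip install ollama` and a running `ollama serve`
    response = ollama.chat(model=model, messages=build_messages(prompt))
    return response["message"]["content"]

# Usage (with the server running and gemma3:1b pulled):
#   print(ask_gemma("Explain what a GPU is in one sentence."))
```

Expect each call to take several seconds on a CPU-only machine; that is the trade-off for running entirely locally.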

Introducing the Local Copilot Chatbot Application: Your Ultimate Document-Based Query Assistant

(actual screenshot taken of the knowledge bot)

In today's fast-paced world, finding precise information quickly can make a significant difference. Our Local Copilot Chatbot Application offers a cutting-edge solution for accessing and querying document-based knowledge with remarkable efficiency. This Flask-based application utilizes the powerful Ollama and Phi3 models to deliver an interactive, intuitive chatbot experience. Here's a deep dive into what our application offers and how it leverages modern technologies to enhance your productivity.

What is the Local Copilot Chatbot Application?

The Local Copilot Chatbot Application is designed to serve as your personal assistant for document-based queri...
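As a rough sketch of how a Flask endpoint could wire a user question through to Phi3 via Ollama (the route name, helper, and prompt template below are assumptions for illustration, not the application's actual code):

```python
def build_prompt(question: str, context_chunks: list) -> str:
    """Combine retrieved document text with the user's question (simple concatenation sketch)."""
    context = "\n\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def create_app():
    """Build a minimal Flask app exposing a single /chat endpoint."""
    from flask import Flask, request, jsonify  # requires `pip install flask ollama`
    import ollama

    app = Flask(__name__)

    @app.post("/chat")
    def chat():
        data = request.get_json()
        prompt = build_prompt(data["question"], data.get("context", []))
        reply = ollama.chat(model="phi3",
                            messages=[{"role": "user", "content": prompt}])
        return jsonify({"answer": reply["message"]["content"]})

    return app

# Usage (with `ollama serve` running and phi3 pulled):
#   create_app().run(port=5000)
```

Keeping the model call behind a single endpoint like this keeps the front-end simple: the page only ever posts a question plus retrieved document chunks and renders the returned answer.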