
Showing posts with the label code generation

Claude Skills Overview

Skills are specialized knowledge packages that help me produce higher-quality outputs for specific tasks. Think of them as expert guides that I consult before tackling certain types of work.

What Are Skills?

Skills are folders containing best practices, tested techniques, and condensed wisdom for creating specific types of outputs. For example:

- docx skill: best practices for creating professional Word documents
- pptx skill: guidance for building high-quality presentations
- xlsx skill: techniques for working with spreadsheets
- pdf skill: methods for manipulating PDF files

Each skill contains a SKILL.md file with detailed instructions that I read before starting the relevant task.

How Skills Work

When you ask me to create something, I: ...
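To make the folder idea concrete, a SKILL.md might look roughly like the sketch below. The section names and wording here are illustrative assumptions, not Anthropic's exact format:

```markdown
---
name: docx
description: Best practices for creating professional Word documents
---

# docx skill

## When to use
Consult this skill before generating any .docx output.

## Guidance
- Use built-in styles (Heading 1/2, Normal) instead of manual formatting.
- Keep tables simple; avoid merged cells where possible.
```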

Python Code Testing

Let's look at how to test our Python code and keep code coverage as high as possible. Topics covered:

- How to follow the MVC pattern in FastAPI
- How to write Pythonic code
- Types of testing with pytest
- Usage of patching, monkeypatching, fixtures, and mocking

🚀 How to Follow the MVC Pattern in FastAPI

FastAPI doesn't enforce a strict MVC structure, but you can follow an organized MVC-like layout:

🔹 MVC Directory Structure Example

```
app/
│
├── models/              # ORM models (e.g., SQLAlchemy)
│   └── user.py
│
├── schemas/             # Pydantic schemas (DTOs)
│   └── user.py
│
├── controllers/         # Business logic (aka services)
│   └── user_controller.py
│
├── routes/              # Route definitions
│   └── user_routes.py
│
├── main.py              # Entry point
└── database.py          # DB engine/session
```

🔹 MVC Mapping

- Model → app/models/
- View → app/routes/ (FastAPI endpoints)
- Controller → app/controllers/ (business logic)

🐍 How to Write Pythonic Code

Follo...
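To make the patching idea concrete, here is a minimal sketch using the standard library's unittest.mock; the helper `get_db_url` and the `MYAPP_DB_URL` variable are hypothetical examples, not part of any framework:

```python
import os
from unittest.mock import patch

def get_db_url():
    # Hypothetical helper that reads configuration from the environment.
    return os.environ.get("MYAPP_DB_URL", "sqlite:///default.db")

# patch.dict temporarily injects the variable, then restores the environment.
with patch.dict(os.environ, {"MYAPP_DB_URL": "postgresql://test"}):
    assert get_db_url() == "postgresql://test"

# Outside the patched block, the default value is back.
assert get_db_url() == "sqlite:///default.db"
```

In pytest, the built-in `monkeypatch` fixture (e.g. `monkeypatch.setenv`) achieves the same temporary override with automatic cleanup per test.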

Local Gemma3 as VSCode Code Generation Extension

To use the #Gemma3 1B model directly in #VSCode as a #codeassistant, you'll need to set up a local inference server or use an API that integrates with VS Code. Here's a step-by-step guide:

Option 1: Run Gemma Locally & Integrate with VS Code

1. Install Required Dependencies

Ensure you have Python (≥3.9) and `pip` installed. Then, install the necessary packages:

```bash
pip install transformers torch sentencepiece
```

2. Load Gemma 3:1B in a Python Script

Create a Python script (`gemma_inference.py`) to load the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-3-1b-it"  # or "google/gemma-3-7b-it" if you have more resources

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def generate_code(prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, ...
```
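The "local inference server" half of the setup can be sketched with the standard library alone. In the sketch below the model call is stubbed out so the shape of the endpoint stays clear; the `/generate` route, the JSON fields, and the stub itself are assumptions for illustration, not part of any official Gemma tooling:

```python
# Minimal sketch of a local inference endpoint that a VS Code extension
# could call over HTTP. generate_code is a stub; in practice you would
# plug in the tokenizer/model pipeline from gemma_inference.py.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def generate_code(prompt: str) -> str:
    # Stub standing in for the Gemma tokenize -> generate -> decode step.
    return f"# completion for: {prompt}"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"completion": generate_code(payload["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an OS-assigned port in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(
    f"http://127.0.0.1:{server.server_port}/generate",
    data=json.dumps({"prompt": "def add(a, b):"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.loads(resp.read())["completion"])  # prints: # completion for: def add(a, b):
server.shutdown()
```

An editor-side extension then only needs to POST the current prompt to this endpoint and insert the returned completion.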