Showing posts with label huggingface.

Saturday

Preparing a Dataset for Fine-Tuning a Foundation Model

 

I am trying to prepare a dataset for fine-tuning a foundation model on pathology lab data.


1. Dataset Collection

   - Sources: Gather data from pathology lab reports, medical journals, and any other relevant medical documents.

   - Format: Ensure that the data is in a readable format like CSV, JSON, or text files.

2. Data Preprocessing

   - Cleaning: Remove any irrelevant data, correct typos, and handle missing values.

   - Formatting: Convert the data into a format suitable for fine-tuning, usually pairs of input and output texts.

   - Example Format (a sample CSV layout is sketched after this list):

     - Input: "Patient exhibits symptoms of hyperglycemia."

     - Output: "Hyperglycemia"

3. Tokenization

   - Tokenize the text using the tokenizer that corresponds to the model you intend to fine-tune.
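
For reference, a hypothetical `pathology_lab_data.csv` with the two columns used in the code below might look like this (illustrative rows, not real patient data):

```csv
report,diagnosis
"Patient exhibits symptoms of hyperglycemia.",Hyperglycemia
"Biopsy shows benign breast tissue with no atypia.",Benign
```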


Example Code for Dataset Preparation


Using Pandas and Transformers for Preprocessing


1. Install Required Libraries:

   ```sh

   pip install pandas transformers datasets

   ```

2. Load and Clean the Data:

   ```python

   import pandas as pd


   # Load your dataset

   df = pd.read_csv("pathology_lab_data.csv")


   # Example: Remove rows with missing values

   df.dropna(inplace=True)


   # Select relevant columns (e.g., 'report' and 'diagnosis')

   df = df[['report', 'diagnosis']]

   ```

3. Tokenize the Data:

   ```python

   from transformers import AutoTokenizer


   model_name = "pretrained_model_name"

   tokenizer = AutoTokenizer.from_pretrained(model_name)


   def tokenize_function(examples):

       return tokenizer(examples['report'], padding="max_length", truncation=True)


   # Quick row-by-row tokenization with pandas (step 4 below shows the
   # preferred approach using the datasets library)
   tokenized_dataset = df.apply(tokenize_function, axis=1)

   ```

4. Convert Data to HuggingFace Dataset Format:

   ```python

   from datasets import Dataset


   dataset = Dataset.from_pandas(df)

   tokenized_dataset = dataset.map(tokenize_function, batched=True)

   ```

5. Save the Tokenized Dataset:

   ```python

   tokenized_dataset.save_to_disk("path_to_save_tokenized_dataset")

   ```
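
   To verify the save worked, you can reload the tokenized dataset directly from disk:

   ```python
   from datasets import load_from_disk

   tokenized_dataset = load_from_disk("path_to_save_tokenized_dataset")
   print(tokenized_dataset)
   ```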


Example Pathology Lab Data Preparation Script


Here is a complete script to prepare pathology lab data for fine-tuning:


```python

import pandas as pd

from transformers import AutoTokenizer

from datasets import Dataset


# Load your dataset

df = pd.read_csv("pathology_lab_data.csv")


# Clean the dataset (remove rows with missing values)

df.dropna(inplace=True)


# Select relevant columns (e.g., 'report' and 'diagnosis')

df = df[['report', 'diagnosis']]


# Initialize the tokenizer

model_name = "pretrained_model_name"

tokenizer = AutoTokenizer.from_pretrained(model_name)


# Tokenize the data

def tokenize_function(examples):

    return tokenizer(examples['report'], padding="max_length", truncation=True)


dataset = Dataset.from_pandas(df)

tokenized_dataset = dataset.map(tokenize_function, batched=True)


# Save the tokenized dataset

tokenized_dataset.save_to_disk("path_to_save_tokenized_dataset")

```


Notes

- Handling Imbalanced Data: If your dataset is imbalanced (e.g., more reports for certain diagnoses), consider techniques like oversampling, undersampling, or weighted loss functions during fine-tuning (a simple oversampling sketch follows below).

- Data Augmentation: You may also use data augmentation techniques to artificially increase the size of your dataset.
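
As a rough illustration, random oversampling of minority diagnoses can be done with pandas alone (the file names and random seed are assumptions; this is one simple option among several):

```python
import pandas as pd

df = pd.read_csv("pathology_lab_data.csv").dropna()

# Oversample each minority diagnosis so every class matches the largest one
max_count = df['diagnosis'].value_counts().max()
balanced = (
    df.groupby('diagnosis', group_keys=False)
      .apply(lambda g: g.sample(max_count, replace=True, random_state=42))
)

balanced.to_csv("pathology_lab_data_balanced.csv", index=False)
```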


By following these steps, you'll have a clean, tokenized dataset ready for fine-tuning a model on pathology lab data.

You can read my other article about data preparation. 

Friday

Develop a Customized LLM Agent

 

Photo by MART PRODUCTION at Pexels

If you’re interested in customizing an agent for a specific task, one way to do this is to fine-tune the models on your dataset. 

For preparing the dataset, you can see this article.

1. Curate the Dataset

- Using NeMo Curator:

  - Install NVIDIA NeMo: `pip install nemo_toolkit`

  - Use NeMo Curator to prepare your dataset according to your specific requirements, e.g., deduplication and quality filtering (a lightweight first-pass sketch follows below).
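
Before (or instead of) running a full curation pipeline, a minimal first pass can be sketched with pandas (the file names, the `text` column, and the length threshold below are assumptions):

```python
import pandas as pd

# Hypothetical raw dump of text records to curate
df = pd.read_csv("raw_corpus.csv")

# Drop empty rows and exact duplicates
df.dropna(subset=["text"], inplace=True)
df.drop_duplicates(subset=["text"], inplace=True)

# Filter out very short documents (threshold is an assumption)
df = df[df["text"].str.len() > 50]

df.to_csv("curated_corpus.csv", index=False)
```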


2. Fine-Tune the Model


- Using NeMo Framework:

  1. Setup NeMo:

     ```python

     import nemo

     import nemo.collections.nlp as nemo_nlp

     ```

  2. Prepare the Data:

     ```python

     # Example to prepare dataset

     from nemo.collections.nlp.data.text_to_text import TextToTextDataset

     dataset = TextToTextDataset(file_path="path_to_your_dataset")

     ```

  3. Fine-Tune the Model:

     ```python

     model = nemo_nlp.models.NLPModel.from_pretrained("pretrained_model_name")

     model.train(dataset)

     model.save_to("path_to_save_fine_tuned_model")

     ```


- Using HuggingFace Transformers:

  1. Install Transformers:

     ```sh

     pip install transformers

     ```

  2. Load Pretrained Model:

     ```python

     from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Trainer, TrainingArguments


     model_name = "pretrained_model_name"

     model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

     tokenizer = AutoTokenizer.from_pretrained(model_name)

     ```

  3. Prepare the Data:

     ```python

     from datasets import load_dataset


     dataset = load_dataset("path_to_your_dataset")

     tokenized_dataset = dataset.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)

     ```

  4. Fine-Tune the Model:

     ```python

     training_args = TrainingArguments(

         output_dir="./results",

         evaluation_strategy="epoch",

         learning_rate=2e-5,

         per_device_train_batch_size=16,

         per_device_eval_batch_size=16,

         num_train_epochs=3,

         weight_decay=0.01,

     )


     trainer = Trainer(

         model=model,

         args=training_args,

         train_dataset=tokenized_dataset['train'],

         eval_dataset=tokenized_dataset['validation']

     )


     trainer.train()

     model.save_pretrained("path_to_save_fine_tuned_model")

     tokenizer.save_pretrained("path_to_save_tokenizer")

     ```
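
     After training, a quick sanity check is to reload the saved model and run a single generation (the test input is a placeholder):

     ```python
     from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

     model = AutoModelForSeq2SeqLM.from_pretrained("path_to_save_fine_tuned_model")
     tokenizer = AutoTokenizer.from_pretrained("path_to_save_tokenizer")

     inputs = tokenizer("Your test input here", return_tensors="pt")
     outputs = model.generate(**inputs, max_new_tokens=50)
     print(tokenizer.decode(outputs[0], skip_special_tokens=True))
     ```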


3. Develop an Agent with LangChain


1. Install LangChain:

   ```sh

   pip install langchain

   ```


2. Load the Fine-Tuned Model:

   ```python
   from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
   from langchain.llms import HuggingFacePipeline


   model = AutoModelForSeq2SeqLM.from_pretrained("path_to_save_fine_tuned_model")
   tokenizer = AutoTokenizer.from_pretrained("path_to_save_tokenizer")


   # Wrap the model in a transformers pipeline and expose it to LangChain
   pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
   llm = HuggingFacePipeline(pipeline=pipe)
   ```


3. Define the Agent:

   ```python
   from langchain.agents import initialize_agent, AgentType


   # The classic LangChain agent constructor. "tools" must be a list of
   # LangChain Tool objects (the entries below are placeholders). A memory
   # object (e.g., ConversationBufferMemory) can be passed via memory= if needed.
   agent = initialize_agent(
       tools=[tool1, tool2],      # Specify the tools your agent will use
       llm=llm,
       agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
       verbose=True,
   )
   ```


4. Use the Agent:

   ```python

   response = agent.run("Your prompt here")

   print(response)

   ```


This process guides you through curating the dataset, fine-tuning the model, and integrating it into the LangChain framework to develop a custom agent.

You can find more detailed guides at the following links:

https://huggingface.co/docs/transformers/en/training

https://github.com/NVIDIA/NeMo-Curator/tree/main/examples

https://docs.smith.langchain.com/old/cookbook/fine-tuning-examples

Thursday

Code Auto-Completion with Hugging Face, LangChain, and Phi-3 SLM

 

Photo by energepic.com at Pexels


You can create your own code auto-completion co-pilot using Hugging Face, LangChain, and the Phi-3 SLM! Here's a breakdown of the steps involved:

1. Setting Up the Environment:

  • Install the required libraries:

    ```sh
    pip install langchain transformers datasets
    ```

  • Download the Phi-3 SLM model and its tokenizer. The snippet below assumes the publicly available `microsoft/Phi-3-mini-4k-instruct` checkpoint; Phi-3 is a causal language model, so it is loaded with `AutoModelForCausalLM`:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "microsoft/Phi-3-mini-4k-instruct"
    # trust_remote_code may be needed for Phi-3 on older transformers versions
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    ```

2. Preprocessing Code for LangChain:

  • The `AutoTokenizer` class comes from Hugging Face's `transformers` library (not LangChain) and handles tokenizing the code. Identify the programming language you want to support; in this example we stick with the Phi-3 tokenizer, which handles Python source text:

    ```python
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
    ```
  • Define a function to preprocess code into LangChain format. This might involve splitting the code into tokens, adding special tokens (e.g., start/end of code), and handling context (previous lines of code).
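
As a rough illustration, such a helper could keep a sliding window of the most recent lines as context (the function name, window size, and prompt wording below are assumptions for this sketch):

```python
# Hypothetical preprocessing helper for building the completion prompt
def build_prompt(previous_lines, max_context_lines=20):
    # Keep only the most recent lines as context
    context = "\n".join(previous_lines[-max_context_lines:])
    return f"# Complete the next line of code:\n{context}\n"
```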

3. Integrating Phi3 SLM with LangChain:

  • LangChain allows creating custom prompts and completions. Leverage this to integrate Phi3 SLM for code completion suggestions.

  • Here's a basic outline (a minimal sketch that calls the Phi-3 model directly through `transformers`; the prompt wording and generation settings are assumptions):

    ```python
    def generate_completion(code_input, max_new_tokens=64):
        # Build a simple completion prompt around the user's code
        prompt = f"# Complete the next line of code:\n{code_input}\n"
        inputs = tokenizer(prompt, return_tensors="pt")

        # Generate a continuation with the Phi-3 model
        outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)

        # Decode only the newly generated tokens, skipping the prompt
        new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
        return tokenizer.decode(new_tokens, skip_special_tokens=True)
    ```

    To compose this with LangChain chains or agents, wrap the same model and tokenizer in a `transformers` pipeline and pass it to LangChain's `HuggingFacePipeline` wrapper.

4. Training and Fine-tuning (Optional):

  • While Phi3 SLM is a powerful model, you can further enhance its performance for specific coding tasks by fine-tuning on a dataset of code and completions. This might involve creating a custom training loop using LangChain's functionalities.
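
A rough sketch of that optional fine-tuning with the standard Hugging Face `Trainer` is shown below (the dataset file `code_completions.jsonl`, its `text` column, and all hyperparameters are assumptions; in practice a parameter-efficient method such as LoRA is commonly used to keep memory requirements manageable):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Phi-3's tokenizer may not define a pad token; reuse EOS for padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Hypothetical JSONL file where each record has a "text" field containing
# a code snippet followed by its completion
dataset = load_dataset("json", data_files="code_completions.jsonl")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal-LM collator: labels are created from the input ids
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./phi3-code-finetune",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```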

5. User Interface and Deployment:

  • Develop a user interface (UI) to accept code input from the user and display the generated completions from your co-pilot. This could be a web application or a plugin for an existing code editor.
  • Explore cloud platforms or containerization tools (e.g., Docker) to deploy your co-pilot as a service.
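
As a concrete starting point, the `generate_completion()` helper from step 3 can be exposed as a small web service with FastAPI (the route name and request schema are assumptions):

```python
# Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CompletionRequest(BaseModel):
    code: str

@app.post("/complete")
def complete(req: CompletionRequest):
    # generate_completion() is the helper defined in step 3
    return {"completion": generate_completion(req.code)}
```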

Additional Tips:

Remember, this is a high-level overview, and you'll need to adapt and implement the code based on your specific requirements and chosen programming language. 


PDF & CDF