
santacoder-finetuning's People

Contributors

esslushy, loubnabnl, lvwerra, muhtasham, stillerman


santacoder-finetuning's Issues

moving a fine-tuned model to gpt_bigcode

Hey,

A while ago I finetuned three models starting from the main santacoder model. That one requires trust_remote_code=True due to the custom modelling files. GPT-BigCode has been native in transformers for a while, and I have also seen and used the gpt_bigcode variant of santacoder.
Now my question is: can I turn my models into the natively supported variant too? If so, do you happen to have a script or at least some pointers?
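
To make the question concrete, this is the change I am after; the paths are placeholders and the second load is what I would like to be able to do after conversion:

from transformers import AutoModelForCausalLM

# Current situation: the checkpoint still needs the custom modelling code.
model = AutoModelForCausalLM.from_pretrained(
    "./my-finetuned-santacoder", trust_remote_code=True
)

# Desired: the same weights loading as the native gpt_bigcode architecture,
# with no trust_remote_code flag.
model = AutoModelForCausalLM.from_pretrained("./my-finetuned-santacoder-gpt-bigcode")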

Linked Colab isn't working

Hello, this may or may not be the maintainer's responsibility, but the Colab linked for training doesn't appear to be working. Got the following error:


TypeError                                 Traceback (most recent call last)
in <cell line: 1>()
----> 1 next(iter(train_dataset))

in __iter__(self)
     30
     31     def __iter__(self):
---> 32         iterator = iter(self.dataset)
     33         more_examples = True
     34         while more_examples:

TypeError: 'method' object is not iterable

Ways to reproduce this approach

Hi @loubnabnl, thanks for this great repo.

I've seen a blog post from the VMware OCTO that described their work on fine-tuning StarCoder; they modified the code provided by the [SantaCoder](https://github.com/loubnabnl/santacoder-finetuning) git repository for fine-tuning, since it is focused on the code generation task.

There are some more details like:

  1. Accelerate and DeepSpeed are used to improve fine-tuning performance.
  2. Fine-tuning generates a small PEFT model.

I think this may not be the best place to discuss their approach, but since you are the expert on fine-tuning SantaCoder/StarCoder: are there any hints on how we can reproduce the approach in the blog on top of the current open-source code? I also checked the StarCoder fine-tuning repo, but it looks like it suggests using instruction-based fine-tuning.
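
This is only my guess at how LoRA could be attached on top of this repo's Trainer setup; the target module name is an assumption about the SantaCoder attention layers, not something taken from the blog or from this repo:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/santacoder", trust_remote_code=True
)

# Assumed LoRA settings; "c_attn" is my guess for the attention projection name.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["c_attn"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter is trainable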

Question (not issue) related to dataset generation

Hi @loubnabnl

Thank you so much for this nice repo for running finetuning.

I have one question and did not find a better way to communicate, so feel free to answer and then close this issue.

In the following code, input_ids and labels are the same for supervised fine-tuning.
Is there something in the model or training setup that knows this is causal LM training and will therefore shift the labels by one, so that input_ids and labels effectively form a next-token prediction task?

...
            for example in examples:
                self.current_size += 1
                yield {
                        "input_ids": torch.LongTensor(example),
                        "labels": torch.LongTensor(example),
                    }
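
For context, this is roughly the label-shifting pattern I would expect a causal LM head to apply internally during loss computation, if that is indeed what happens here (an illustrative sketch, not this repo's code):

from torch.nn import CrossEntropyLoss

def causal_lm_loss(logits, labels):
    # Drop the last position's logits and the first position's labels so that
    # position t is trained to predict token t + 1.
    shift_logits = logits[..., :-1, :].contiguous()
    shift_labels = labels[..., 1:].contiguous()
    loss_fct = CrossEntropyLoss()
    return loss_fct(
        shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
    )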

DeepSpeed Integration

Hi,
Since my GPU memory is low (12 GB), I am looking for a way to use DeepSpeed in the training code, with CPU offloading enabled.
Here is my modification so far:

"""
Fine-Tune SantaCoder on code/text dataset
"""

import argparse
import os

import torch
from datasets import load_dataset
from torch.utils.data import IterableDataset
from torch.utils.data.dataloader import DataLoader
from tqdm import tqdm
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    TrainerCallback,
    TrainerState,
    TrainerControl,
    logging,
    set_seed,
)
import deepspeed


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_path", type=str, default="bigcode/santacoder")
    parser.add_argument("--dataset_name", type=str, default="bigcode/the-stack-dedup")
    parser.add_argument("--subset", type=str, default="data")
    parser.add_argument("--split", type=str, default="train")
    parser.add_argument("--size_valid_set", type=int, default=4000)
    parser.add_argument("--streaming", action="store_true")
    parser.add_argument("--shuffle_buffer", type=int, default=5000)
    parser.add_argument("--data_column", type=str, default="content")


    parser.add_argument("--seq_length", type=int, default=1024)
    parser.add_argument("--max_steps", type=int, default=10000)
    parser.add_argument("--batch_size", type=int, default=2)
    parser.add_argument("--gradient_accumulation_steps", type=int, default=8)
    parser.add_argument("--eos_token_id", type=int, default=49152)

    parser.add_argument("--learning_rate", type=float, default=5e-5)
    parser.add_argument("--lr_scheduler_type", type=str, default="cosine")
    parser.add_argument("--num_warmup_steps", type=int, default=100)
    parser.add_argument("--weight_decay", type=float, default=0.05)

    parser.add_argument("--local_rank", type=int, default=0)
    parser.add_argument("--no_fp16", action="store_false")
    parser.add_argument("--no_gradient_checkpointing", action="store_false")
    parser.add_argument("--seed", type=int, default=0)
    parser.add_argument("--num_workers", type=int, default=None)
    parser.add_argument("--output_dir", type=str, default="./checkpoints")
    parser.add_argument("--log_freq", default=1, type=int)
    parser.add_argument("--eval_freq", default=1000, type=int)
    parser.add_argument("--save_freq", default=1000, type=int)
    return parser.parse_args()


def chars_token_ratio(dataset, tokenizer, data_column, nb_examples=400):
    """
    Estimate the average number of characters per token in the dataset.
    """
    total_characters, total_tokens = 0, 0
    for _, example in tqdm(zip(range(nb_examples), iter(dataset)), total=nb_examples):
        total_characters += len(example[data_column])
        total_tokens += len(tokenizer(example[data_column]).tokens())

    return total_characters / total_tokens


DEEPSPEED_CONFIG = \
{
    'optimizer': {'type': 'AdamW', 'params': {'lr': 1e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}},
    'scheduler': {'type': 'WarmupLR', 'params': {'warmup_min_lr': 0, 'warmup_max_lr': 1e-05, 'warmup_num_steps': 100}},
    'zero_optimization': {
        'stage': 3,
        'offload_optimizer': {'device': 'cpu', 'pin_memory': False},
        'offload_param': {'device': 'cpu', 'pin_memory': False},
        'overlap_comm': True,
        'contiguous_gradients': True,
        'sub_group_size': 1e9,
        'reduce_bucket_size': 16777216,
        'stage3_prefetch_bucket_size': 15099494.4,
        'stage3_param_persistence_threshold': 40960,
        'stage3_max_live_parameters': 1e9,
        'stage3_max_reuse_distance': 1e9,
    },
    'train_batch_size': 32,
    'train_micro_batch_size_per_gpu': 4,
    'gradient_accumulation_steps': 8,
    'gradient_clipping': 1.0,
    'steps_per_print': 8,
    'wall_clock_breakdown': False,
    'compression_training': {
        'weight_quantization': {'shared_parameters': {}, 'different_groups': {}},
        'activation_quantization': {'shared_parameters': {}, 'different_groups': {}},
        'sparse_pruning': {'shared_parameters': {}, 'different_groups': {}},
        'row_pruning': {'shared_parameters': {}, 'different_groups': {}},
        'head_pruning': {'shared_parameters': {}, 'different_groups': {}},
        'channel_pruning': {'shared_parameters': {}, 'different_groups': {}},
    },
}

class ConstantLengthDataset(IterableDataset):
    """
    Iterable dataset that returns constant length chunks of tokens from stream of text files.
        Args:
            tokenizer (Tokenizer): The tokenizer used for processing the data.
            dataset (dataset.Dataset): Dataset with text files.
            infinite (bool): If True, the iterator restarts when the dataset is exhausted; otherwise it stops.
            seq_length (int): Length of token sequences to return.
            num_of_sequences (int): Number of token sequences to keep in buffer.
            chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.
    """

    def __init__(
        self,
        tokenizer,
        dataset,
        infinite=False,
        seq_length=1024,
        num_of_sequences=1024,
        chars_per_token=3.6,
        content_field="content",
    ):
        self.tokenizer = tokenizer
        self.concat_token_id = (
            tokenizer.eos_token_id if tokenizer.eos_token_id else args.eos_token_id
        )
        self.dataset = dataset
        self.seq_length = seq_length
        self.infinite = infinite
        self.current_size = 0
        self.max_buffer_size = seq_length * chars_per_token * num_of_sequences
        self.content_field = content_field

    def __iter__(self):
        iterator = iter(self.dataset)
        more_examples = True
        while more_examples:
            buffer, buffer_len = [], 0
            while True:
                if buffer_len >= self.max_buffer_size:
                    break
                try:
                    buffer.append(next(iterator)[self.content_field])
                    buffer_len += len(buffer[-1])
                except StopIteration:
                    if self.infinite:
                        iterator = iter(self.dataset)
                    else:
                        more_examples = False
                        break
            tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"]
            all_token_ids = []
            for tokenized_input in tokenized_inputs:
                all_token_ids.extend(tokenized_input + [self.concat_token_id])
            for i in range(0, len(all_token_ids), self.seq_length):
                input_ids = all_token_ids[i : i + self.seq_length]
                if len(input_ids) == self.seq_length:
                    self.current_size += 1
                    yield {
                        "input_ids": torch.LongTensor(input_ids),
                        "labels": torch.LongTensor(input_ids),
                    }


def create_datasets(tokenizer, args):
    dataset = load_dataset(
        args.dataset_name,
        data_dir=args.subset,
        split=args.split,
        use_auth_token=True,
        num_proc=args.num_workers if not args.streaming else None,
        streaming=args.streaming,
    )
    if args.streaming:
        print("Loading the dataset in streaming mode")
        valid_data = dataset.take(args.size_valid_set)
        train_data = dataset.skip(args.size_valid_set)
        train_data = train_data.shuffle(buffer_size=args.shuffle_buffer, seed=args.seed)
    else:
        dataset = dataset.train_test_split(test_size=0.005, seed=args.seed)
        train_data = dataset["train"]
        valid_data = dataset["test"]
        print(
            f"Size of the train set: {len(train_data)}. Size of the validation set: {len(valid_data)}"
        )
    chars_per_token = chars_token_ratio(train_data, tokenizer, args.data_column)
    print(f"The character to token ratio of the dataset is: {chars_per_token:.2f}")
    train_dataset = ConstantLengthDataset(
        tokenizer,
        train_data,
        infinite=True,
        seq_length=args.seq_length,
        chars_per_token=chars_per_token,
        content_field=args.data_column,
    )
    valid_dataset = ConstantLengthDataset(
        tokenizer,
        valid_data,
        infinite=False,
        seq_length=args.seq_length,
        chars_per_token=chars_per_token,
        content_field=args.data_column,
    )
    return train_dataset, valid_dataset

class SantaCoderTrainerCallback(TrainerCallback):
    def on_step_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        torch.cuda.empty_cache()
    def on_train_end(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
        torch.cuda.empty_cache()

def run_training(args, train_data, val_data):
    print("Loading the model")
    # disable caching mechanism when using gradient checkpointing
    model = AutoModelForCausalLM.from_pretrained(
        args.model_path,
        trust_remote_code=True,
        use_cache=not args.no_gradient_checkpointing,
    )
    train_data.start_iteration = 0

    print(f"Starting main loop")

    DEEPSPEED_CONFIG['train_micro_batch_size_per_gpu'] = args.batch_size
    DEEPSPEED_CONFIG['gradient_accumulation_steps'] = args.gradient_accumulation_steps
    DEEPSPEED_CONFIG['train_batch_size'] = args.batch_size * args.gradient_accumulation_steps
    DEEPSPEED_CONFIG['scheduler']['params']['warmup_num_steps'] = args.num_warmup_steps
    DEEPSPEED_CONFIG['scheduler']['params']['warmup_max_lr'] = args.learning_rate
    DEEPSPEED_CONFIG['optimizer']['params']['lr'] = args.learning_rate
    DEEPSPEED_CONFIG['optimizer']['params']['weight_decay'] = args.weight_decay

    training_args = TrainingArguments(
        output_dir=args.output_dir,
        dataloader_drop_last=True,
        evaluation_strategy="steps",
        max_steps=args.max_steps,
        eval_steps=args.eval_freq,
        save_steps=args.save_freq,
        logging_steps=args.log_freq,
        per_device_train_batch_size=args.batch_size,
        per_device_eval_batch_size=args.batch_size,
        learning_rate=args.learning_rate,
        lr_scheduler_type=args.lr_scheduler_type,
        warmup_steps=args.num_warmup_steps,
        gradient_accumulation_steps=args.gradient_accumulation_steps,
        gradient_checkpointing=args.no_gradient_checkpointing,
        fp16=args.no_fp16,
        weight_decay=args.weight_decay,
        run_name=f"santacoder-{args.subset}",
        report_to="wandb",
        deepspeed=DEEPSPEED_CONFIG
    )

    trainer = Trainer(
        model=model, args=training_args, train_dataset=train_data, eval_dataset=val_data, callbacks=[SantaCoderTrainerCallback]
    )

    print("Training...")
    trainer.train()

    print("Saving last checkpoint of the model")
    output_dir = os.path.join(args.output_dir, "final_checkpoint/")
    os.makedirs(output_dir, exist_ok=True)
    model.save_pretrained(output_dir)


def main(args):
    tokenizer = AutoTokenizer.from_pretrained(args.model_path, use_auth_token=True)
    train_dataset, eval_dataset = create_datasets(tokenizer, args)
    run_training(args, train_dataset, eval_dataset)


if __name__ == "__main__":

    args = get_args()
    set_seed(args.seed)
    os.makedirs(args.output_dir, exist_ok=True)

    logging.set_verbosity_error()

    main(args)

Could you help me check whether I am doing this the right way? Thanks ^^ The DeepSpeed config is inherited from https://github.com/salesforce/jaxformer/blob/main/jaxformer/hf/train.py

How to run inference with the model

Hi,
I have a question. When we fine-tune locally, we produce checkpoints. If we wish to perform inference with these checkpoints, how can we do that?

model_name="checkpoint-9000"
tokenizer = AutoTokenizer.from_pretrained(model_name) # checkpoint-900

OSError: Can't load tokenizer for 'checkpoint-9000'. If you were trying to load..
It appears the tokenizers do not get saved.
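
As a workaround I am currently loading the tokenizer from the base model instead, roughly like this (assuming the base tokenizer is compatible with my fine-tuned checkpoint, and with a placeholder checkpoint path):

from transformers import AutoModelForCausalLM, AutoTokenizer

# The tokenizer comes from the base model, since only weights are saved in the
# checkpoint directory.
tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")
model = AutoModelForCausalLM.from_pretrained(
    "./checkpoints/checkpoint-9000", trust_remote_code=True
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))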

Formatting FIM data

Hi,

I want to finetune my model on FIM-only data.
If I use this repo for FIM data formatting, it seems it could frequently happen that a single chunk (i.e. a single element of ConstantLengthDataset) doesn't contain all the FIM components (or sometimes doesn't contain any of them), because long inputs need to be split across chunks.

Does this "hurt" the FIM training? Would it benefit from a different way of formatting/splitting the data so that all FIM components fit into a single chunk (so that they get passed to the model together)?

Thanks!

Finetuning on multiple languages

Hi,
I am wondering if it is possible to load a dataset covering multiple languages (C#, Python) for finetuning? Do I need to modify the code to do that? Thank you ^^
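
For example, would something along these lines be a reasonable way to combine two language subsets before handing them to ConstantLengthDataset? (The data_dir names are my assumption about The Stack's layout.)

from datasets import load_dataset, interleave_datasets

python_ds = load_dataset(
    "bigcode/the-stack-dedup", data_dir="data/python", split="train", streaming=True
)
csharp_ds = load_dataset(
    "bigcode/the-stack-dedup", data_dir="data/c-sharp", split="train", streaming=True
)

# Sample from both languages; the probabilities control the mixing ratio.
train_data = interleave_datasets(
    [python_ds, csharp_ds], probabilities=[0.5, 0.5], seed=0
)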

no_fp16 ?

Hi, thanks for sharing the code.

Can you elaborate on why the default option you chose is "--no_fp16"?
If I understand correctly, the original model was trained in fp16.
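
For reference, this is how I read the store_false pattern in train.py, i.e. fp16 seems to be enabled by default and passing --no_fp16 turns it off (a small self-contained illustration, not the repo's exact code):

import argparse

parser = argparse.ArgumentParser()
# The flag's value defaults to True; passing --no_fp16 flips it to False.
parser.add_argument("--no_fp16", action="store_false")

print(parser.parse_args([]).no_fp16)             # True  -> fp16 on by default
print(parser.parse_args(["--no_fp16"]).no_fp16)  # False -> fp16 disabled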

thanks,
Tal

I used multiple GPUs for training, but there were the following errors. How can I solve them?

lib/python3.10/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
WARNING:torch.distributed.run:
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
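
From the message I gather that with torchrun the script is expected to read the rank from the environment rather than from a --local-rank argument; something like this is how I understand it (sketch):

import os

# torchrun sets LOCAL_RANK per process; fall back to 0 for single-GPU runs.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
print(f"local rank: {local_rank}")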

weird behavior when setting batch_size

Hi there, thanks a lot for the great script. However, I'm seeing a weird behavior where setting the batch size seems to be equivalent to setting the number of GPUs: when I set batch_size=2, 2 GPUs are used; when I set batch_size=4, 4 GPUs are used, even though I have made all 4 GPUs visible to PyTorch. Have you run into a similar issue before? Thanks!

ValueError: Batch does not contain any data (`None`). At the end of all iterable data available before expected stop iteration.

Hey @loubnabnl,

Thanks for this repo - I've learned a lot from what you implemented here.

I am encountering a strange error when I attempt to use the command:

python santacoder-finetuning/train.py \
        --model_path="bigcode/santacoder" \
        --dataset_name="json" \
        --subset="./mydataset/" \
        --data_column "content" \
        --split="train" \
        --seq_length 2048 \
        --max_steps 1000 \
        --batch_size 2 \
        --gradient_accumulation_steps 4 \
        --learning_rate 5e-5 \
        --num_warmup_steps 100 \
        --eval_freq 100 \
        --save_freq 100 \
        --log_freq 1 \
        --no_fp16 \
        --fim_rate 0.5 \
        --fim_spm_rate 0.5

If I run this - I end up getting an error with that says:

ValueError: Batch does not contain any data (`None`). At the end of all iterable data available before expected stop iteration.

I hit this error when I pass through the 0.1 mark of the epoch:

{'loss': 0.2896, 'learning_rate': 4.9e-05, 'epoch': 0.1}
{'loss': 0.2095, 'learning_rate': 4.9500000000000004e-05, 'epoch': 0.1}
{'loss': 0.291, 'learning_rate': 5e-05, 'epoch': 0.1}
Traceback (most recent call last):
  File "../santacoder-finetuning/train.py", line 289, in <module>
    
  File "../santacoder-finetuning/train.py", line 279, in main
    run_training(args, train_dataset, eval_dataset)
  File "../santacoder-finetuning/train.py", line 268, in run_training
    trainer.train()
  File " /lib/python3.10/site-packages/transformers/trainer.py", line 1556, in train
    return inner_training_loop(
  File " /lib/python3.10/site-packages/transformers/trainer.py", line 1930, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File " /lib/python3.10/site-packages/transformers/trainer.py", line 2257, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
  File " /lib/python3.10/site-packages/transformers/trainer.py", line 2982, in evaluate
    output = eval_loop(
  File " /lib/python3.10/site-packages/transformers/trainer.py", line 3161, in evaluation_loop
    for step, inputs in enumerate(dataloader):
  File " /lib/python3.10/site-packages/accelerate/data_loader.py", line 582, in __iter__
    raise ValueError(
ValueError: Batch does not contain any data (`None`). At the end of all iterable data available before expected stop iteration.

My dataset train and test is small and looks like this:

Size of the train set: 295. Size of the validation set: 2
 74%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉                                                 | 295/400 [00:00<00:00, 452.00it/s]
The character to token ratio of the dataset is: 3.90

Do you have any thoughts on what I need to do to adjust the training loop? Is it because my train set is too small?

Thanks!
Adam

Issue running inference on Huggingface after model upload

I followed the instructions to create a new model repo and added the required files via Git. When I test the uploaded model via the HF sandbox, I get the following error:

Loading umm-maybe/StackStar_Santa requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

It's unclear which configuration file it's referring to, but I did notice that config.json references the parent model (santacoder) instead of mine, so I changed that. I also executed configuration_gpt2_mq.py, which did nothing. There's no trust_remote_code option in either of these files; from what I understand it's an option passed when running local inference using AutoModelForCausalLM.from_pretrained. It's not clear how to set this option for online inference via the Hugging Face Hub.
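
For local inference my understanding is that the flag is passed at load time rather than stored in any file, e.g. (sketch using my repo name from the error above):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Passing trust_remote_code=True at load time opts in to the custom modelling code.
model = AutoModelForCausalLM.from_pretrained(
    "umm-maybe/StackStar_Santa", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("umm-maybe/StackStar_Santa")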

FIM Evaluation

Hello, thanks a lot for your great work.
Could you kindly advise on how to replicate the evaluation results of FIM as shown in Table 6 of your paper? I've been searching for the evaluation code for quite some time but haven't been able to find it.
