language-model-sts-cft's Introduction

Language-Model-STS-CFT

[arXiv] [Hugging Face 🤗]

This project aims to improve the text embeddings of smaller language models (LMs) of up to 2B parameters using contrastive fine-tuning. Specifically, the InfoNCE loss is used as the training objective.

$$\min \, - \log \frac{e^{\text{sim}(\textbf{h}_i, \textbf{h}_i^+) / \tau}}{\sum_{j=1}^{N} \left( e^{\text{sim}(\textbf{h}_i, \textbf{h}_j^+) / \tau} + e^{\text{sim}(\textbf{h}_i, \textbf{h}_j^-) / \tau} \right)}$$

where $\textbf{h}_i$ denotes the embedding vector of a premise $x_i$, $\textbf{h}_j^+$ and $\textbf{h}_j^-$ denote the embeddings of an entailment and a hard negative respectively, $N$ denotes the batch size, $\tau$ denotes a temperature, and $\text{sim}(\textbf{h}_i, \textbf{h}_i^+)$ computes the cosine similarity between the embedding vectors $\textbf{h}_i$ and $\textbf{h}_i^+$.
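
A minimal PyTorch sketch of this objective (illustrative, not the repository's code; the temperature default is an assumption) shows how it reduces to a cross-entropy over in-batch positives and hard negatives:

import torch
import torch.nn.functional as F

def info_nce_loss(h, h_pos, h_neg, tau=0.05):
    """InfoNCE with in-batch positives and hard negatives.

    h, h_pos, h_neg: (batch, dim) embeddings of the anchors,
    their entailments, and their hard negatives.
    tau: temperature (0.05 is an assumed value).
    """
    h = F.normalize(h, dim=-1)
    h_pos = F.normalize(h_pos, dim=-1)
    h_neg = F.normalize(h_neg, dim=-1)

    # sim(h_i, h_j^+) and sim(h_i, h_j^-) for every pair (i, j)
    sim_pos = h @ h_pos.T / tau   # (batch, batch)
    sim_neg = h @ h_neg.T / tau   # (batch, batch)

    logits = torch.cat([sim_pos, sim_neg], dim=1)  # (batch, 2 * batch)
    # The positive for anchor i is h_i^+, i.e. column i of sim_pos.
    labels = torch.arange(h.size(0), device=h.device)
    return F.cross_entropy(logits, labels)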

We employ LoRA as our parameter-efficient fine-tuning technique in order to reduce the memory requirement.
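
As a rough illustration, a LoRA setup with the Hugging Face peft library might look like the sketch below; the rank, alpha, dropout, and target module names are assumptions, not the repository's actual hyperparameters:

from peft import LoraConfig, get_peft_model
from transformers import AutoModel

base_model = AutoModel.from_pretrained("microsoft/phi-2")

# Hypothetical hyperparameters; the repo's own configs may differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Attention projection names vary by architecture.
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="FEATURE_EXTRACTION",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable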

Embedding Extraction

  • Every prompt is appended with an [EOS] token.
  • The embedding vector is extracted from the last-layer hidden state at this [EOS] position (a sketch follows below).
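
A minimal extraction sketch with transformers, assuming Phi-2 as the base model (any of the three base models below would work similarly):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModel.from_pretrained("microsoft/phi-2")
model.eval()

def encode(text: str) -> torch.Tensor:
    # Append the [EOS] token to the prompt.
    inputs = tokenizer(text + tokenizer.eos_token, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Last-layer hidden state at the final ([EOS]) position.
    return outputs.last_hidden_state[0, -1]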

Fine-tuned Weights

We have fine-tuned three models and provide their LoRA adapter weights in this Hugging Face 🤗 collection.

The base models consist of

  1. MiniCPM-2B-dpo-bf16
  2. Gemma-2B-it
  3. Phi-2

The performance figures and fine-tuning details can be found on each Hugging Face model page.
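
To use one of the adapters, it can presumably be loaded on top of its base model with peft; the adapter repo id below is a placeholder, so check the collection for the actual ids:

from peft import PeftModel
from transformers import AutoModel

base = AutoModel.from_pretrained("microsoft/phi-2")
# "<adapter-repo-id>" is a placeholder; see the HF collection for the real id.
model = PeftModel.from_pretrained(base, "<adapter-repo-id>")
model = model.merge_and_unload()  # optionally fold the LoRA deltas into the base weights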

Dataset

We utilize the processed NLI dataset as our fine-tuning dataset. It consists of 275K triplets, each containing an anchor, its corresponding entailment, and a hard negative. Please follow this README to see how to download the dataset.
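
For illustration, each triplet can be pictured as follows (field names and sentences are invented here, not the dataset's actual schema):

triplet = {
    "anchor":   "A man is playing a guitar on stage.",
    "positive": "A person is performing music.",            # entailment
    "negative": "A man is putting his guitar into a case.",  # hard negative
}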

Fine-tuning with your own resources

If you would like to fine-tune the LMs with your own resources, we've provided the code for you. Our code works in multi-GPU settings; the more GPUs you have, the larger the batch size you can fine-tune with.

First, you need to set up the virtual environment. We have provided the environment setup file for you.

conda env create --file environment.yml
conda activate cft

Then, download the processed NLI dataset following this README.

After that, please follow this README for the fine-tuning steps.

Footnote

This work is the final project of the Natural Language Processing Spring 2024 course at Tsinghua University 🟣. We would like to express our sincere gratitude to this course!

language-model-sts-cft's People

Contributors

trapoom555


language-model-sts-cft's Issues

STS Evaluation

  • The STS Benchmark (validation set) can be used for evaluation during training
  • This benchmark indicates how well the model creates an embedding vector for a given sentence
  • Scores are calculated using the Spearman correlation (a sketch follows below)
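
A minimal sketch of this evaluation, assuming embeddings have already been computed for every sentence pair:

import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb1: np.ndarray, emb2: np.ndarray, gold: np.ndarray) -> float:
    """Spearman correlation between cosine similarities and gold STS scores.

    emb1, emb2: (n, dim) embeddings of the two sentences in each pair.
    gold: (n,) human-annotated similarity scores.
    """
    emb1 = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    emb2 = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    cosine = (emb1 * emb2).sum(axis=1)
    return spearmanr(cosine, gold).correlation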

Enlarge Batch Size Per GPU

The result of training with batch_size_per_gpu = 2 is unstable.

  • The success of contrastive learning depends heavily on the batch size (one common way to enlarge the effective negative pool is sketched below)
  • batch_size_per_gpu = 2 already consumes ~45% of GPU memory (RTX 3090)
  • Currently using a standard DDP strategy + bf16 mixed precision + LoRA for training
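
One common remedy (used in CLIP/SimCLR-style training; not necessarily what this repository does) is to all-gather embeddings across DDP ranks so every GPU contrasts against the full global batch rather than only its local shard:

import torch
import torch.distributed as dist

def gather_with_grad(local_emb: torch.Tensor) -> torch.Tensor:
    """Gather embeddings from all ranks to enlarge the in-batch negative pool.

    all_gather does not propagate gradients, so the local shard is
    re-inserted to keep its gradient path intact.
    """
    world_size = dist.get_world_size()
    gathered = [torch.zeros_like(local_emb) for _ in range(world_size)]
    dist.all_gather(gathered, local_emb)
    gathered[dist.get_rank()] = local_emb
    return torch.cat(gathered, dim=0)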

Improve discoverability of your work on HF

Hi,

Niels here from the open-source team at Hugging Face. Congrats on your work! I discovered your work through the paper page: https://huggingface.co/papers/2408.00690 (feel free to claim the paper so that it appears at your HF account!).

It's great to link the 🤗 Collection to the paper, and add the paper itself to the collection.

See here on how to do that: https://huggingface.co/docs/hub/en/paper-pages#linking-a-paper-to-a-model-dataset-or-space.

Cheers,

Niels
ML Engineer @ HF 🤗
