
chawins / detect-pretrain-code

This project is a fork of swj0419/detect-pretrain-code.


Home Page: https://swj0419.github.io/detect-pretrain.github.io/

License: Apache License 2.0


detect-pretrain-code's Introduction

๐Ÿ•ต๏ธ Detecting Pretraining Data from Large Language Models

This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by Weijia Shi*, Anirudh Ajith*, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer (*equal contribution).

Website | Paper | WikiMIA Benchmark | BookMIA Benchmark | Detection Method Min-K% Prob (this codebase)

Overview

We explore the pretraining data detection problem: given a piece of text and black-box access to an LLM, without knowing its pretraining data, can we determine whether the model was trained on the provided text? To facilitate this study, we built a dynamic benchmark, WikiMIA, to systematically evaluate detection methods, and we propose Min-K% Prob 🕵️, a method for detecting undisclosed pretraining data in large language models.
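
To make the idea concrete, the sketch below scores a text with Min-K% Prob using a Hugging Face causal LM: compute the model's log-probability for each token, keep only the k% least likely tokens, and average them. This is a minimal illustration, not the official implementation (see src/run.py); the small GPT-Neo model and k=0.2 are example choices.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Min-K% Prob: average the log-probabilities of the k% least likely tokens.
# A higher score suggests the text was more likely seen during pretraining.
def min_k_prob(text, model, tokenizer, k=0.2):
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probability the model assigns to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, input_ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    # Keep the k% lowest-probability tokens and average them.
    num_kept = max(1, int(k * token_log_probs.numel()))
    return torch.sort(token_log_probs).values[:num_kept].mean().item()

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
print(min_k_prob("The quick brown fox jumps over the lazy dog.", model, tokenizer))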

โญ If you find our implementation and paper helpful, please consider citing our work โญ :

@misc{shi2023detecting,
    title={Detecting Pretraining Data from Large Language Models},
    author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer},
    year={2023},
    eprint={2310.16789},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

📘 WikiMIA Datasets

The WikiMIA datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically for detecting pretraining data of large language models. Access our WikiMIA datasets directly on Hugging Face.

Loading the Datasets:

from datasets import load_dataset

LENGTH = 64  # available lengths: 32, 64, 128, 256
dataset = load_dataset("swj0419/WikiMIA", split=f"WikiMIA_length{LENGTH}")
  • Available Text Lengths: 32, 64, 128, 256.
  • Label 0: Refers to data unseen during pretraining. Label 1: Refers to seen data.
  • WikiMIA is applicable to all models released between 2017 and 2023, such as LLaMA1/2, GPT-Neo, OPT, Pythia, text-davinci-001, text-davinci-002, ...
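
After loading, you can quickly inspect an example and its membership label. The "input" and "label" field names below follow the dataset card and should be treated as assumptions if the schema changes.

# Peek at one example: "input" holds the text, "label" the membership flag.
example = dataset[0]
print(example["input"][:100], "...")
print("label:", example["label"])  # 1 = seen during pretraining, 0 = unseen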

📘 BookMIA Datasets for evaluating MIA on OpenAI models

The BookMIA datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically for detecting pretraining data of OpenAI models released before 2023 (such as text-davinci-003). Access our BookMIA datasets directly on Hugging Face.

The dataset contains non-member and member data:

  • Non-member data consists of text excerpts from books first published in 2023.
  • Member data includes text excerpts from older books, as categorized by Chang et al. (2023).

Loading the Datasets:

from datasets import load_dataset

dataset = load_dataset("swj0419/BookMIA")  # excerpts of length 512, labeled by membership
  • Available Text Lengths: 512.
  • Label 0: Refers to data unseen during pretraining. Label 1: Refers to seen data.
  • BookMIA is applicable to OpenAI models released before 2023, such as text-davinci-003, text-davinci-002, ...
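
For example, you can separate member from non-member excerpts with a simple filter. This sketch assumes the same "label" convention as above and the default "train" split on the Hub.

# Split the benchmark by membership label (assumed schema, as noted above).
members = dataset["train"].filter(lambda ex: ex["label"] == 1)      # older books
non_members = dataset["train"].filter(lambda ex: ex["label"] == 0)  # books from 2023
print(len(members), "member /", len(non_members), "non-member excerpts")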

🚀 Run our Min-K% Prob & Other Baselines

Our codebase supports many models, whether you're using OpenAI models that expose logits or models from Hugging Face:

  • OpenAI Models:

    • text-davinci-003
    • text-davinci-002
    • ...
  • Hugging Face Models:

    • meta-llama/Llama-2-70b
    • huggyllama/llama-65b
    • EleutherAI/gpt-neox-20b
    • ...

๐Ÿ” Important: When using OpenAI models, ensure to add your API key at Line 38 in run.py:

openai.api_key = "YOUR_API_KEY"

Use the following command to run the model:

python src/run.py --target_model text-davinci-003 --ref_model huggyllama/llama-7b --data swj0419/WikiMIA --length 64

๐Ÿ” Parameters Explained:

  • Target Model: Set using --target_model. For instance, --target_model huggyllama/llama-65b.

  • Reference Model: Defined using --ref_model. Example: --ref_model huggyllama/llama-7b.

  • Data Length: Define the length for the WikiMIA benchmark with --length. Available options: 32, 64, 128, 256.

📌 Note: For optimal results, use fixed-length inputs with our Min-K% Prob method. (When you evaluate Min-K% Prob on your own dataset, make sure the input length of each example is the same.)
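
One simple way to enforce this on your own data is to clip every example to the same token length before scoring. The snippet below is an illustrative preprocessing step (the tokenizer choice is an example), not part of the official pipeline.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Clip each text to a fixed number of tokens so all inputs are comparable.
def truncate_to_length(text, length=64):
    ids = tokenizer(text, truncation=True, max_length=length).input_ids
    return tokenizer.decode(ids, skip_special_tokens=True)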

📊 Baselines: Our script comes with the following baselines: PPL, Calibration Method, PPL/zlib_compression, PPL/lowercase_ppl
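
As a rough illustration of two of these, the snippet below derives the PPL and PPL/zlib scores from the per-token log-probabilities produced in the Min-K% sketch above: PPL is the exponential of the negative mean log-probability, and PPL/zlib calibrates the negative log-likelihood by the zlib-compressed size of the text. This follows the standard definitions rather than the script's exact code.

import math
import zlib

# token_log_probs: per-token log-probabilities, e.g. from min_k_prob above.
def baseline_scores(text, token_log_probs):
    avg_log_prob = token_log_probs.mean().item()
    ppl = math.exp(-avg_log_prob)                         # perplexity baseline
    zlib_size = len(zlib.compress(text.encode("utf-8")))  # compression-entropy proxy
    return {"ppl": ppl, "ppl/zlib": -avg_log_prob / zlib_size}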
