
open-nllb's Introduction


I'm an ex-Research Engineer at Google DeepMind & Microsoft. I run The AI Epiphany community, and I'm currently building my first startup, Runa AI. I'm also a proud father of 16 A100s and 16 H100s (generously sponsored by Together AI & Hyperstack, respectively).

Latest:
* YugoGPT - trained a SOTA 7B LLM for Croatian, Bosnian, Serbian, and Montenegrin
* SlovenianGPT - trained a SOTA 7B LLM for Slovenian
* YugoChat - talk to YugoGPT
* First Serbian LLM eval
* Open-NLLB - Replicating Meta's "no language left behind" machine translation (MT) project

Most recent OSS contributions:

  • airoboros - synthetic instruction following data generation framework

My older projects:



open-nllb's People

Contributors

alexeib, cndn, davidecaroselli, edunov, erip, freewym, gordicaleksa, huihuifan, jhcross, jingfeidu, jma127, joshim5, kahne, kartikayk, lematt1991, liezl200, liuchen9494, louismartin, maigoakisame, mortimerp9, multipath, myleott, pipibjc, skritika, sravyapopuri388, sshleifer, tangyuq, theweiho, xu-song, xutaima


open-nllb's Issues

[Future - outside current project scope] 7B lang-family-specific Open-NLLB checkpoint

The goal of the current project scope is to release an open-source, 3.3B, dense checkpoint that does machine translation for the 202 languages from the NLLB project.

Going past that, I would love to scale up to 7B-parameter dense transformers and train a set of such models for different language families:

  • 7B Open-NLLB model for Slavic languages
  • 7B Open-NLLB model for African languages
  • 7B Open-NLLB model for Germanic languages
  • ...

(I'm not a linguist, so excuse any mistakes in the preliminary list above. :) )

Native language visualizations

Go through the files for your native language and see whether there are any issues.

Check out the getting started document here for how to download public bi-text for your language.

Additionally, feel free to do whatever type of analysis you see fit:

  • embedding the sentences
  • visualizing and counting the number of sentences/words for your native language
  • ...

See if there is anything wrong in the data.

Share, for example, a Weights & Biases report as the final result of your work. We'll keep track of these for all covered languages.
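For the counting part, a minimal sketch (the file path and the W&B project name below are placeholders; it assumes the bi-text is stored as plain text, one sentence per line):

from pathlib import Path

import wandb  # only needed if you want to share the results as a W&B report

def basic_stats(path: Path) -> dict:
    # Count lines (sentences) and whitespace-separated words in one corpus file.
    num_lines, num_words = 0, 0
    with path.open(encoding="utf-8") as f:
        for line in f:
            num_lines += 1
            num_words += len(line.split())
    return {"lines": num_lines, "words": num_words}

if __name__ == "__main__":
    stats = basic_stats(Path("data/srp_Cyrl.txt"))  # placeholder path
    run = wandb.init(project="open-nllb-data-analysis")  # placeholder project name
    run.log(stats)
    run.finish()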

[Future - outside current project scope] non-English LLMs (Serbian LLM, etc.)

The AI world is obsessed with LLMs, but most of the models you can get your hands on at the moment support only (or mostly) English:
ChatGPT, GPT-4, Falcon, Llama 1/2 & derivatives, etc.

Let's train high-quality language specific LLMs and open-source them! ❤️

This will help spread the technology beyond the Western-centric world and help preserve the diversity and richness of cultures around the world.

Understand how to do 4-stage curriculum learning from the paper

Going through the codebase & existing documentation, it's not quite clear how to set up a 4-stage curriculum learning training run.

Understanding this will be super important once we start running on a bigger number of languages.

There is some mention of it in this README.

Write a report and share the learnings on how to do this.

Note: we could always do this manually by stopping and restarting 4 different jobs, but that's error-prone and I suspect Meta had a more streamlined approach. :)
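For reference, the manual fallback could look roughly like the sketch below. This is not Meta's pipeline: the stage names, data directories, and most fairseq-train flags are placeholders that would have to be replaced with the real per-stage configs.

import subprocess

# Run fairseq-train once per curriculum stage, resuming each stage from the
# previous stage's last checkpoint. All names/flags below are placeholders.
STAGES = ["stage1", "stage2", "stage3", "stage4"]
SAVE_DIR = "checkpoints/open-nllb-curriculum"

prev_ckpt = None
for stage in STAGES:
    cmd = [
        "fairseq-train", f"data-bin/{stage}",        # per-stage binarized data (placeholder)
        "--save-dir", f"{SAVE_DIR}/{stage}",
        "--task", "translation_multi_simple_epoch",  # check against the actual config
        "--arch", "transformer",
        "--max-update", "50000",                     # per-stage update budget (placeholder)
    ]
    if prev_ckpt is not None:
        # Warm-start from the previous stage; reset optimizer/dataloader state.
        cmd += ["--restore-file", prev_ckpt, "--reset-optimizer", "--reset-dataloader", "--reset-meters"]
    subprocess.run(cmd, check=True)
    prev_ckpt = f"{SAVE_DIR}/{stage}/checkpoint_last.pt"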

[Modeling] Release a 615M English -> HBS Open-NLLB checkpoint

Our goal is to release an open-source, 3.3B, dense checkpoint that does machine translation for 202 languages from the NLLB project.

To get to that point let's first start by releasing OSS checkpoints with smaller scale and smaller language scope.

The goal here will be to release a model with the following properties:

  • (small scale) 615M parameters
  • (smaller language scope) Supports only translation from English into HBS (Croatian, Bosnian, Serbian, Montenegrin).

Hydra pickle issue in generate_multi.py

Figure out the pickle issue mentioned here:
facebookresearch/fairseq#5315

Conf file

conf.zip

Current workaround:

import asyncio

import hydra
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="generate_multi_full")
def main(config: DictConfig) -> None:
    launcher = hydra.utils.instantiate(config.launcher)
    module = GenerateMultiModule(config)  # defined elsewhere in the same file
    asyncio.run(module.run())  # run the module directly instead of going through the launcher
    # asyncio.run(tabulate(config))

and modify the run method of the GenerateMultiModule class by prepending the following:

jobs = self.array()
for iteration_value in jobs:
    # ... original body of run(), now executed once per array job value

[Modeling] Release a 3.3B Open-NLLB checkpoint (~202 languages)

This is the end goal for the current project scope.

Here the goal is to release a model with the following properties:

  • Truly open-source
  • 3.3B dense
  • Supports all 202 NLLB languages in both directions

Note: it will be very hard to get a satisfactory level of quality for 202 languages with a dense checkpoint. The original work from Meta used a ~54B-parameter MoE (mixture of experts) model to get decent results, plus a ton of compute (~52k hours on A100-SXM-80GB).

We do have plans to scale beyond the 3.3B-parameter scale.

Weird line length spikes in Serbian, Croatian, Bosnian (data analysis task)

We get weird, regularly spaced histogram spikes for the HBS languages; see below:

[line-length histogram for HBS data]

On the x-axis is the line length (in chars) and on the y-axis is the number of lines with that line length in our corpus.

Understand why this is happening and write a short report.

Instructions:

  • Download the OPUS data using the download_opus.py script, plus some manual downloads (see which datasets are in the script's ignore list) from the OPUS website: https://opus.nlpl.eu/
  • Use parse_macocu.py to parse the MaCoCu data, which has to be manually downloaded from https://macocu.eu/ (only a couple of datasets, so it doesn't take too much time to do manually; for now, feel free to add functions to support the download through Python).
  • After that you can use the analyse_data.py script to replicate the graph above and then do your own analysis.
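If you just want to eyeball the histogram before running the full analyse_data.py pipeline, here is a minimal matplotlib sketch (the input path is a placeholder; one sentence per line is assumed):

import matplotlib.pyplot as plt

# Reproduce the line-length histogram for one HBS file (placeholder path).
with open("data/hbs/srp_Cyrl.txt", encoding="utf-8") as f:
    lengths = [len(line.rstrip("\n")) for line in f]

plt.hist(lengths, bins=range(0, 500))  # 1-char bins so the regular spikes stay visible
plt.xlabel("line length (chars)")
plt.ylabel("number of lines")
plt.savefig("line_length_histogram.png")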

Get a compute grant

After we estimate the necessary compute requirements (see this), ask around for grants to support the project.

If you have experience writing grants - please reach out to me! (our Discord channel)

Sub-batch creation

Understand which optimizations fairseq does under the hood (sub-batch creation with respect to source/target length).
Explore the fairseq codebase in this regard.

Question to answer

How are sub-batches generated with respect to source/target length in fairseq?
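As a point of reference while digging through the code, the general idea of length-based sub-batching looks roughly like the sketch below. This is an illustration of the technique, not fairseq's actual implementation: sort pairs by target/source length so that each batch pads as little as possible, and cut the sorted order under a max-token budget.

def make_length_buckets(pairs, max_tokens=4096):
    """Illustrative length-based sub-batching (NOT fairseq's implementation).

    pairs: list of (src_tokens, tgt_tokens) token lists.
    Returns a list of batches, each a list of indices into `pairs`.
    """
    # Sort by (target length, source length) so neighbours have similar lengths.
    order = sorted(range(len(pairs)), key=lambda i: (len(pairs[i][1]), len(pairs[i][0])))
    batches, current, longest = [], [], 0
    for i in order:
        cand_longest = max(longest, len(pairs[i][0]), len(pairs[i][1]))
        # Padded cost of a batch is num_sentences * longest_sequence.
        if current and (len(current) + 1) * cand_longest > max_tokens:
            batches.append(current)
            current = []
            cand_longest = max(len(pairs[i][0]), len(pairs[i][1]))
        current.append(i)
        longest = cand_longest
    if current:
        batches.append(current)
    return batches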

[Modeling] Release a 615M HBS (Croatian, Bosnian, Serbian) Open-NLLB checkpoint

Our goal is to release an open-source, 3.3B, dense checkpoint that does machine translation for 202 languages from the NLLB project.

To get to that point let's first start by releasing OSS checkpoints with smaller scale and smaller language scope.

The goal here will be to release a model with the following properties:

  • (small scale) 615M parameters
  • (smaller language scope) Supports only the Croatian, Bosnian, Serbian, and Montenegrin languages + English.

[Modeling] Release a 1.3B Slavic languages Open-NLLB checkpoint

Our goal is to release an open-source, 3.3B, dense checkpoint that does machine translation for 202 languages from the NLLB project.

To get to that point let's first start by releasing OSS checkpoints with smaller scale and smaller language scope.

The goal here will be to release a model with the following properties:

  • (small scale) 1.3B parameters
  • (smaller language scope) Supports only Slavic languages (~20 languages) and English.

Provide context for input and output

🚀 Feature Request

Provide context for the input without actually including the context in the translation. This is useful for real-time translation applications. Also, continue the translation from the context's translation.

Example:

Context Input: I’m in a cave and see
Input to translate: a bat.
Context Output: Estoy en una cueva y veo
Translated Output: un murciélago.

Notice how "bat" (🦇) can also mean a baseball bat unless you have the context.

Motivation

Real-time translation.

Pitch

I can implement this if given some pointers as well. Thanks!
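One naive way to approximate this today, just to make the idea concrete (a sketch using the public NLLB checkpoint on Hugging Face; the prefix-stripping heuristic is an assumption and not a proper implementation of the requested feature):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Translate context + input together, translate the context alone, then strip the
# context's translation from the front. Only a sketch of the idea, not the feature itself.
model_name = "facebook/nllb-200-distilled-600M"  # public checkpoint used as an example
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def translate(text: str, tgt_lang: str = "spa_Latn") -> str:
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang))
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]

context = "I'm in a cave and see"
full = translate(context + " a bat.")  # e.g. "Estoy en una cueva y veo un murciélago."
prefix = translate(context)            # e.g. "Estoy en una cueva y veo"
print(full[len(prefix):].strip() if full.startswith(prefix) else full)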

Creation of a small model file for a few languages

Is it correct that there is currently no model file from the project, not even a small one for a few languages?

How long (how many GPU hours) would it take to generate a small model file (600M parameters) for a few languages? Is it possible to pause the training and resume it after some time?

I can use a server with two NVIDIA A100 GPUs with 40 GB VRAM each and could possibly create a small model file for a few languages.

Standard Moroccan Tamazight is mislabeled

Standard Moroccan Tamazight (zgh) was mislabeled as Central Atlas Tamazight (tzm) in both FLORES and NLLB-SEED. This issue was fixed in the new FLORES+ and Seed datasets.
Which files should be changed to reflect this? Can I just find-and-replace (tzm -> zgh) on the entire repo, or maybe follow this commit?
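A blind find-and-replace is risky because the code can appear in filenames, language lists, and hard-coded mappings, so a small sketch that just surfaces the occurrences for review might be a safer first step (the set of file extensions scanned is an assumption):

from pathlib import Path

# List every occurrence of the old language code so the rename (tzm -> zgh) can be
# reviewed file by file instead of applied blindly across the repo.
OLD_CODE = "tzm"
EXTENSIONS = {".py", ".sh", ".txt", ".tsv", ".json", ".yaml", ".md"}  # adjust to the repo layout

for path in Path(".").rglob("*"):
    if not path.is_file() or path.suffix not in EXTENSIONS:
        continue
    try:
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
            if OLD_CODE in line:
                print(f"{path}:{lineno}: {line.strip()}")
    except UnicodeDecodeError:
        pass  # skip files that are not valid UTF-8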
