gordicaleksa / open-nllb
Effort to open-source NLLB checkpoints.
License: MIT License
Figure out the pickle issue mentioned here:
facebookresearch/fairseq#5315
To reproduce it, use this `main` entry point:

```python
import asyncio

import hydra
from omegaconf import DictConfig

@hydra.main(config_path="conf", config_name="generate_multi_full")
def main(config: DictConfig) -> None:
    launcher = hydra.utils.instantiate(config.launcher)
    module = GenerateMultiModule(config)  # defined in this repo
    asyncio.run(module.run())
    # asyncio.run(tabulate(config))
```
and modify the `run` method of the `GenerateMultiModule` class by prepending it with the following:

```python
jobs = self.array()
for iteration_value in jobs:
    ...  # loop body elided in the original issue
```
Provide context along with the input, but don't actually include the context in the translation. Useful for real-time translation applications. Also, continue the translation from the context's translation.
Example:
Context Input: I’m in a cave and see
Input to translate: a bat.
Context Output: Estoy en una cueva y veo
Translated Output: un murciélago.
Notice how "bat" (🦇) could also mean a baseball bat unless you have the context.
I can implement this (e.g. for real-time translation) if given some pointers. Thanks!
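A rough sketch of one way the idea could work with the public NLLB checkpoint on Hugging Face (an illustration, not an existing feature of this repo; the model name and language codes are just example choices): translate the context together with the input, separately translate the context alone, and strip that prefix from the full output.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tok = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def translate(text: str) -> str:
    ids = tok(text, return_tensors="pt")
    out = model.generate(
        **ids, forced_bos_token_id=tok.convert_tokens_to_ids("spa_Latn")
    )
    return tok.batch_decode(out, skip_special_tokens=True)[0]

context = "I'm in a cave and see"
full = translate(f"{context} a bat.")  # e.g. "Estoy en una cueva y veo un murciélago."
prefix = translate(context)            # e.g. "Estoy en una cueva y veo"
# Naive prefix stripping; see the caveat below.
print(full[len(prefix):].strip() if full.startswith(prefix) else full)
```

The prefix-stripping step is fragile (the context may translate differently in isolation); a more robust implementation would feed the translated context as a forced decoder prefix and decode only the continuation.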
We have access to HBS (Serbo-Croatian) bi-text data on websites such as:
I'm having conversations with local NLP experts, non-profits, and organizations that might help us acquire much higher quality bi-text.
Related issue (but not quite the same).
Our goal is to release an open-source, 3.3B, dense checkpoint that does machine translation for 202 languages from the NLLB project.
To get to that point, let's first start by releasing OSS checkpoints at smaller scale and with smaller language scope.
The goal here will be to release a model with the following properties:
Standard Moroccan Tamazight (zgh) was mislabeled as Central Atlas Tamazight (tzm) in both FLORES and NLLB-SEED. This issue was fixed in the new FLORES+ and Seed datasets.
Which files should be changed to reflect this? Can I just find-and-replace (tzm -> zgh) across the entire repo, or should I follow this commit?
This issue concerns fuzzy deduplication of text pairs.
Find the tradeoff between memory, speed, and accuracy when varying r and b.
We need to find a way to use far less than 9k, because that requires too many CPU resources.
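For reference, here is a minimal sketch of fuzzy dedup with MinHash LSH via the datasketch library, with b (bands) and r (rows per band) exposed so the memory/speed/accuracy tradeoff can be measured. The shingle size and example values are arbitrary choices; num_perm = b * r is the number of hash permutations, which is what drives the CPU cost.

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for shingle in {text[i:i + 5] for i in range(max(1, len(text) - 4))}:
        m.update(shingle.encode("utf-8"))
    return m

b, r = 32, 4  # example values; num_perm = b * r = 128
lsh = MinHashLSH(num_perm=b * r, params=(b, r))
for key, sent in enumerate(["hello world example", "hello world exampel"]):
    mh = minhash(sent, num_perm=b * r)
    if lsh.query(mh):
        print(f"sentence {key} looks like a near-duplicate")
    else:
        lsh.insert(str(key), mh)
```

Larger b (more bands) raises recall and memory use; larger r (more rows per band) raises precision; shrinking b * r cuts CPU cost at the price of accuracy.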
The AI world is obsessed with LLMs, but most of the models you can get your hands on at the moment support only (or mostly) English:
ChatGPT, GPT-4, Falcon, Llama 1/2 & derivatives, etc.
Let's train high-quality, language-specific LLMs and open-source them! ❤️
This will help spread this technology beyond the Western-centric world and help preserve the diversity and richness of cultures around the world.
Go through the files for your native language and see whether there are any issues.
Check out the getting started document here for how to download public bi-text for your language.
Additionally, feel free to do whatever type of analysis you see fit:
See if there is anything wrong with the data.
Share for example a Weights & Biases report as the final result of your work. We'll keep track of these for all covered languages.
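As one concrete starting point, a minimal alignment sanity check over a parallel corpus might look like this (the file names and the one-sentence-per-line layout are assumptions):

```python
# Flag empty sides and wildly mismatched sentence lengths in aligned bi-text.
with open("src.txt", encoding="utf-8") as fs, open("tgt.txt", encoding="utf-8") as ft:
    for i, (s, t) in enumerate(zip(fs, ft), start=1):
        s, t = s.strip(), t.strip()
        if not s or not t:
            print(f"line {i}: empty side")
        elif not 0.5 <= len(s) / len(t) <= 2.0:  # crude length-ratio filter
            print(f"line {i}: suspicious length ratio ({len(s)}/{len(t)})")
```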
Our goal is to release an open-source, 3.3B, dense checkpoint that does machine translation for 202 languages from the NLLB project.
To get to that point, let's first start by releasing OSS checkpoints at smaller scale and with smaller language scope.
The goal here will be to release a model with the following properties:
This is the end goal for the current project scope.
Here the goal is to release a model with the following properties:
Note: it will be very hard to get a satisfactory level of quality for 202 languages with a dense checkpoint. The original work from Meta used a ~54B parameter MoE (mixture of experts) model to get decent results + a ton of compute (~52k hours on A100-SXM-80GB).
We do have plans to scale beyond the 3.3B-parameter scale.
Our current project scope's goal is to release an open-source, 3.3B, dense checkpoint that does machine translation for 202 languages from the NLLB project.
Going past that, I would love to scale up to 7B-parameter dense transformers and train a set of such models for different language families:
(I'm not a linguist, so excuse any mistakes in the preliminary list above :) ).
Figure out the peak-memory issue with FSDP when running the 615M-parameter model on 2 GPUs, which I linked here:
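A small helper like the following (plain PyTorch calls, nothing fairseq-specific) can be dropped around suspect phases of the run to localize where the peak occurs:

```python
import torch
import torch.distributed as dist

def log_peak_memory(tag: str) -> None:
    """Print and reset this rank's peak allocated GPU memory."""
    peak_gib = torch.cuda.max_memory_allocated() / 1024**3
    rank = dist.get_rank() if dist.is_initialized() else 0
    print(f"[rank {rank}] {tag}: peak allocated = {peak_gib:.2f} GiB")
    torch.cuda.reset_peak_memory_stats()

# e.g. log_peak_memory("after forward"), log_peak_memory("after backward")
```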
Is it correct that there is currently no model file from the project, not even a small one for a few languages?
How long (how many GPU hours) would it take to produce a small model file (600M parameters) for a few languages? Is it possible to pause training and resume it after some time?
I can use a server with two NVIDIA A100 GPUs with 40 GB VRAM each and could possibly create a small model file for a few languages.
Figure out why we're filtering out that many sentences for Spanish (~38%) and Guarani (~50%).
Talk with the experts from Serbia to find high quality (and quantity) bi-text data.
Use this to train a couple of bilingual checkpoints before we kick off bigger runs.
So far we've only been dealing with public bi-text, and we haven't set up the pipeline for downloading & processing mined data.
There is ~450 GB of data in this dataset: https://huggingface.co/datasets/allenai/nllb
Create a pipeline for downloading & analysis of this data.
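A possible starting point is to stream the data through Hugging Face datasets so the ~450 GB never has to be fully materialized on disk; the config name below is an assumed example pair, and loading details for this dataset may vary (newer datasets versions may require trust_remote_code=True for script-based datasets):

```python
from datasets import load_dataset

# ASSUMPTION: configs are named by language pair, e.g. "eng_Latn-spa_Latn".
ds = load_dataset("allenai/nllb", "eng_Latn-spa_Latn", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example)  # sentence pair plus mining metadata (e.g. a LASER score)
    if i == 2:
        break
```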
Find the peak probability of the LID output distribution for right-skewed (left-mode) languages.
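A sketch of what "peak probability" could mean operationally, assuming we have an array of per-sentence LID confidence scores for a language (the bin count is an arbitrary choice):

```python
import numpy as np

def peak_probability(probs: np.ndarray, bins: int = 100) -> float:
    """Return the bin center of the mode of the LID score distribution."""
    counts, edges = np.histogram(probs, bins=bins, range=(0.0, 1.0))
    i = int(counts.argmax())
    return (edges[i] + edges[i + 1]) / 2
```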
Understand which optimizations fairseq does under the hood (sub-batch creation w.r.t. source/target length).
Explore the fairseq codebase on this matter.
How are sub-batches generated w.r.t. source/target length in fairseq?
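Until we trace the exact fairseq code path (batch_by_size in fairseq/data/data_utils.py is the place to look), here is an illustrative re-implementation of the general idea: sort indices by length, then greedily cut batches so that batch_size * longest_in_batch stays under a max-token budget.

```python
def make_length_buckets(lengths: list[int], max_tokens: int) -> list[list[int]]:
    """Group example indices into batches of similar length (padding-efficient)."""
    order = sorted(range(len(lengths)), key=lengths.__getitem__)
    batches, batch, max_len = [], [], 0
    for idx in order:
        max_len = max(max_len, lengths[idx])
        if batch and (len(batch) + 1) * max_len > max_tokens:
            batches.append(batch)           # flush: adding idx would overflow
            batch, max_len = [], lengths[idx]
        batch.append(idx)
    if batch:
        batches.append(batch)
    return batches
```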
Going through the codebase & the existing documentation, it's not quite clear how to set up a 4-stage curriculum-learning training run.
Understanding this will be super important once we start running on a bigger number of languages.
There is some mention of it in this README.
Write a report and share the learnings on how to do this.
Note: we could always do this manually by stopping and restarting 4 different jobs (sketched below), but that's error-prone, and I suspect Meta had a more streamlined approach. :)
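For reference, the manual workaround could be scripted roughly like this (a sketch, not Meta's streamlined approach; the per-stage data directories are assumptions, and a real run would carry many more fairseq-train flags):

```python
import subprocess

# ASSUMPTION: one pre-binarized data dir per curriculum stage.
stages = ["data-bin/stage1", "data-bin/stage2", "data-bin/stage3", "data-bin/stage4"]
for i, data in enumerate(stages, start=1):
    cmd = ["fairseq-train", data, "--save-dir", f"checkpoints/stage{i}"]
    if i > 1:
        # Resume weights from the previous stage but rebuild the data iterator.
        cmd += ["--restore-file", f"checkpoints/stage{i - 1}/checkpoint_last.pt",
                "--reset-dataloader", "--reset-lr-scheduler", "--reset-meters"]
    subprocess.run(cmd, check=True)
```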
After we estimate the necessary compute requirements (see this), ask around for grants to support the project.
If you have experience writing grants - please reach out to me! (our Discord channel)
Our first big milestone is to open-source a 3.3B NLLB checkpoint.
Estimate the necessary compute & hardware to train this in reasonable time.
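A back-of-envelope estimate, using the common ~6·N·D FLOPs rule of thumb for dense transformer training (the token budget and utilization below are loud assumptions, not NLLB's actual numbers):

```python
N = 3.3e9   # parameters (the 3.3B dense checkpoint)
D = 500e9   # ASSUMPTION: training tokens; the real NLLB budget may differ a lot
flops = 6 * N * D

a100_peak = 312e12  # A100 BF16 peak, FLOP/s
mfu = 0.35          # ASSUMPTION: achieved utilization (30-40% is a common range)
gpu_hours = flops / (a100_peak * mfu) / 3600
print(f"~{gpu_hours:,.0f} A100-hours at {mfu:.0%} utilization")
```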
We get weird, regularly spaced histogram spikes for HBS languages; see below:
On the x-axis is the line length (in chars) and on the y-axis is the number of lines with that line length in our corpus.
Understand why this is happening and write a short report.
Instructions:
- Use the download_opus.py script, plus some manual downloads; see which datasets in the script are in the ignore list, and check the OPUS website: https://opus.nlpl.eu/
- Use the parse_macocu.py script to parse the MaCoCu data, which has to be manually downloaded from https://macocu.eu/ for now (feel free to add functions to support the download through Python; it's only a couple of datasets, so it doesn't take too much time to do manually).
- Use the analyse_data.py script to replicate the graph above and then do your own analysis.
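For the replication step, a minimal version of the line-length histogram might look like this (the corpus path is an assumption; analyse_data.py in the repo presumably does more):

```python
from collections import Counter
import matplotlib.pyplot as plt

counts = Counter()
with open("corpus.txt", encoding="utf-8") as f:  # ASSUMPTION: one sentence per line
    for line in f:
        counts[len(line.rstrip("\n"))] += 1

lengths = sorted(counts)
plt.bar(lengths, [counts[n] for n in lengths], width=1)
plt.xlabel("line length (chars)")
plt.ylabel("number of lines")
plt.show()
```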