Comments (6)
Hello
Same here, even though I chose all the models!
Hi @mntalha, I tried downloading today using this link: https://llama.meta.com/llama-downloads/
I instantly got the email explaining how to proceed with the download!
Hi,
Interesting. Which models did you request access to? I chose all of them in my application.
Please try again and let us know if it doesn't work. We are trying to streamline the approval process so it should be a lot faster.
Hi! I am able to download the llama2 models directly via the email/meta release form, but have not gotten HF approval. The emails I used are the same ([email protected]). The HF repo I'm requesting access to is Llama-2-7b-chat-hf.
I can't resubmit an HF access request, and it's been a while (about a month) since my original request. Could you help me troubleshoot this?
Please try again and let us know if it doesn't work. We are trying to streamline the approval process so it should be a lot faster.
Thank you, I received an email from Meta and downloaded the model successfully.
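For anyone else stuck at the Hugging Face approval step: once access to meta-llama/Llama-2-7b-chat-hf is granted, a quick way to confirm it from Python is roughly the sketch below. This is not official guidance; it assumes huggingface_hub is installed and that you are already logged in via `huggingface-cli login`, and the exact exception raised for an ungranted gated repo can vary across huggingface_hub versions.

```python
# Rough sketch: check whether the logged-in account has been granted access
# to the gated Llama-2-7b-chat-hf repo, then fetch the files if it has.
from huggingface_hub import HfApi, snapshot_download
from huggingface_hub.utils import GatedRepoError

REPO_ID = "meta-llama/Llama-2-7b-chat-hf"

api = HfApi()  # picks up the token stored by `huggingface-cli login`
try:
    api.model_info(REPO_ID)          # raises if the gate has not been opened
    print("Access granted, downloading...")
    local_dir = snapshot_download(REPO_ID)
    print(f"Model files are in {local_dir}")
except GatedRepoError:
    print("Access not granted yet; the request is still pending on the Hub.")
```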
Related Issues (20)
- System Info
- Architecture
- Agnostic Atheist AI not Normal
- Discussing a potential bias in Llama2-Chat that can lead to content safety issues
- download.sh didn't work well
- parameter count of Llama2-70B and Llama2-13B
- Change the name of openai to closeai and change the project name to openai.
- Error: llama runner process no longer running: 3221225785
- [Generation, Question] Why does the `seed` have to be the same in different processors (`Llama.build`)?
- how can i evaluate mathematic datasets like GSM8K?
- Test Tokenizer gives Incorrect padding error
- No response from request to access models
- how to download this model
- Providing SHA-256 hashes
- This PR will implement code for reproducing results in the following paper:
- Unable to access the Hugging Face Llama-3 model repo
- [Parallel MD5] Accelerating `download.sh`
- LLaMA3 supports an 8K token context length. When continuously pretraining with proprietary data, the majority of the text data is significantly shorter than 8K tokens, resulting in a substantial amount of padding. To enhance training efficiency and effectiveness, it is necessary to merge multiple short texts into a longer text, with the length remaining below 8K tokens. However, the question arises: how should these short texts be combined into a single training sequence? Should they be separated by delimiters, or should an approach involving masking be used during the pretraining process? (See the packing sketch after this list.)
- Oddities downloading the 8b-instruct model
- How to infer answer using llama2-7b-hf?
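On the sequence-packing question in the LLaMA 3 pretraining item above: one common approach is to concatenate short documents, separated by the tokenizer's EOS token, into sequences of at most 8K tokens; some setups additionally use a block-diagonal attention mask so packed documents cannot attend to each other. The sketch below is illustrative only, with made-up token ids; in practice use tokenizer.eos_token_id and your framework's masking utilities.

```python
# Hedged sketch of greedy, delimiter-based sequence packing for pretraining:
# concatenate tokenized documents, separated by an EOS token, into sequences
# of at most MAX_LEN tokens. Token ids here are placeholders for illustration.
from typing import List

MAX_LEN = 8192  # LLaMA 3 context length mentioned in the issue
EOS_ID = 2      # placeholder; use tokenizer.eos_token_id in practice

def pack_documents(docs: List[List[int]], max_len: int = MAX_LEN,
                   eos_id: int = EOS_ID) -> List[List[int]]:
    """Greedily pack tokenized documents into sequences of <= max_len tokens."""
    packed: List[List[int]] = []
    current: List[int] = []
    for doc in docs:
        piece = doc + [eos_id]        # EOS acts as the document delimiter
        if len(piece) > max_len:      # an overly long document gets truncated
            piece = piece[:max_len]
        if len(current) + len(piece) > max_len:
            packed.append(current)    # close the current packed sequence
            current = []
        current.extend(piece)
    if current:
        packed.append(current)
    return packed

if __name__ == "__main__":
    fake_docs = [[5, 6, 7], [8, 9], [10] * 20]
    for seq in pack_documents(fake_docs, max_len=16):
        print(len(seq), seq)
```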