Comments (3)
Thanks for your valuable suggestion. We found that the skills you mentioned improved when fine-tuning with the larger LLaMA model (i.e., 13B). We will continue to explore new ideas to improve these skills.
from wizardlm.
Thank you for the timely response. I'd be interested to see how well the 13B model performed on these questions, which I can't test myself since I only have 8 GB of RAM and a fairly weak CPU, so I can only play with the model on Gradio or through llama.cpp.
Still, I find it fascinating to see how projects like this push the limits of what's possible with such a low parameter count, attracting the attention of even Google and Microsoft (referring to Google's "we have no moat" memo and Microsoft's TinyStories experiment). I wonder if any meaningful results on this complex task can be achieved at just 7B without training a model from scratch.
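For context on the hardware constraint mentioned above, here is a back-of-the-envelope sketch (my own arithmetic, not from the thread) of the weight-only memory needed to run models of these sizes locally at different quantization levels. Real usage is higher once the KV cache and runtime overhead are added, so treat the numbers as lower bounds.

```python
# Rough weight-only memory estimate for running an N-billion-parameter
# LLM locally. Real usage adds KV cache and runtime overhead on top.

def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB (1 GiB = 2**30 bytes)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for params in (7, 13, 30):
    for bits in (16, 4):
        # e.g. 7B @ 4-bit is ~3.3 GiB, 13B @ 4-bit ~6.1 GiB,
        # 30B @ 4-bit ~14.0 GiB
        print(f"{params}B @ {bits}-bit: ~{model_size_gb(params, bits):.1f} GiB")
```

This is why a 4-bit-quantized 7B (and, tightly, 13B) model can fit in 8 GB of RAM via llama.cpp, while 30B cannot.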
The newly released WizardLM 13B, whose dataset included more physics questions, has finally started forming coherent reasoning chains: it correctly does basic calculations, rearranges equations, and solves simple problems as well as GPT-3.5, which Guanaco 65B couldn't achieve.
However, interestingly, WizardLM 30B consistently hallucinates an incorrect reasoning chain, giving us snowballing hallucinations that end up with an incorrect answer. Perhaps this can give us some insight into effective scaling and training settings for a given dataset and foundation model.
Related Issues (20)
- Number of vocab in WizardCoder-1B
- which model to use for what's the root of 256256?
- Question about WizardCoder-1B-V1.0
- how to change the script to finetune based on codellama
- quantity with auto_gptq avg loss: nan
- Which version of LLama
- Error when inference with WizardCoder-33B-V1.1
- RuntimeError : indices should be either on cpu or on the same device as the indexed tensor (cpu)
- UnboundLocalError: local variable 'sentencepiece_model_pb2' referenced before assignment
- Caching doesn't work
- Where's alpaca_data_cleaned.json?
- Typo in about text
- Does the EVOL process of instruction dataset has been released?
- help!Why can't my vllm work?
- A minute of silence for wizardlm 2 (gpt-4)
- How does it support multi-turn conversations?
- WizardLM/WizardCoder-33B-V1.1 cannot find in huggingface?
- Mirrors of deleted WizardLM models
- Reasoning seems flawed
- Sorry but I have to say it: WOW?!