hannibal046 / nanorwkv
The nanoGPT-style implementation of RWKV Language Model - an RNN with GPT-level LLM performance.
License: MIT License
It is recommended to put the adjustable parameters in one block.
Hi,
I'm playing around with this repo and ran into a case where I want to train on sequences of length > 1024.
Setting block_length > 1024 accordingly seems the natural choice, since it's a model parameter.
However, I found that I also need to manually tune the T_MAX parameter, which is fixed to 1024 by default and prevents training on longer sequences. Raising it seems to work, but I have some doubts: there must be a reason why the two are not logically linked by default, right? Could you explain how I should use them properly?
Thank you for this amazing repo
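For what it's worth, a minimal sketch of keeping the two values consistent, under the assumption that T_MAX is the compile-time constant baked into the WKV CUDA kernel (via -DTmax) and block_size is the model's context length; the exact names and mechanism in the repo may differ:

```python
# Hypothetical sketch: T_MAX must cover the model's context length *before*
# the CUDA kernel is JIT-compiled, because it is baked in at compile time.
block_size = 2048          # desired training context length (> default 1024)

# T_MAX also sizes per-thread buffers inside the kernel, so making it much
# larger than needed wastes registers/shared memory -- a plausible reason it
# is a separately tuned constant rather than being derived from block_size.
T_MAX = block_size
assert T_MAX >= block_size, "kernel compiled for shorter sequences than the model uses"
```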
https://www.zhihu.com/question/612761391/answer/3128755930
I saw this answer on Zhihu - will RetNet be supported later as well?
Hi
I want to train an OCR model using RWKV but I am stuck on data formatting.
I tried prepending the image tokens to the text tokens and sending the sequence through RWKV.
After that I calculated the loss with token shifting:
import torch.nn.functional as F

# when creating the targets I set all image-token positions to zero so that
# the loss is only computed on the text tokens
logits, _ = rwkv(token_embs)          # (B, T, vocab_size)
logits = logits[:, :-1, :]            # position t predicts token t+1
targets = targets[:, 1:]              # shift targets to match
loss = F.cross_entropy(
    logits.reshape(-1, logits.size(-1)), targets.reshape(-1), ignore_index=0
)
But the model is not learning. Is there a way to build a seq2seq model with RWKV,
where the encoder provides image features from Swin and we append or add these features to RWKV's input during training?
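One thing worth checking in the setup above: ignore_index=0 assumes token id 0 is never a real text target. A self-contained sketch of the shift-and-mask loss (random logits stand in for the rwkv(token_embs) call):

```python
import torch
import torch.nn.functional as F

# Sketch of the shift-and-mask loss described in the issue. If 0 is a valid
# vocabulary id, masking with 0 silently drops real targets too -- which can
# look exactly like "the model is not learning". A dedicated ignore id
# (e.g. -100, the PyTorch default) avoids that collision.
B, T, V = 2, 8, 32
logits = torch.randn(B, T, V)                  # stand-in for rwkv output
targets = torch.randint(1, V, (B, T))
targets[:, :4] = 0                             # image positions masked out

shift_logits = logits[:, :-1, :]               # position t predicts token t+1
shift_targets = targets[:, 1:]
loss = F.cross_entropy(
    shift_logits.reshape(-1, V), shift_targets.reshape(-1), ignore_index=0
)
```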
Hi, I have been testing this code on a set of GPUs and am getting some results that seem a little too good: much faster training compared to an equivalent GPT-2 model (please see the attached loss plot).
On inspecting the code more closely, it looks like the WKV kernel is missing the causal masking that decoder transformers use to stop the model from looking ahead in a sequence. I am new to the RWKV architecture, so I'm not sure how to fix this, but could you take a look and see if my hunch is correct?
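One way to test this hunch empirically is a causality probe: perturb only future tokens and check that earlier outputs are unchanged. A minimal sketch with a toy causal stand-in model; the real check would call the WKV-based model instead:

```python
import torch

# Toy causal model: output at step t depends only on x[:, :t+1].
def toy_causal_model(x):
    return torch.cumsum(x, dim=1)

x = torch.randn(1, 16, 8)
y1 = toy_causal_model(x)

x2 = x.clone()
x2[:, 10:] += 1.0                      # perturb only future positions
y2 = toy_causal_model(x2)

# For a causal model, outputs before the perturbation must be identical;
# any nonzero difference here means information leaks backwards in time.
leak = (y1[:, :10] - y2[:, :10]).abs().max()
```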
Am I right that this isn't yet set up to do something like the following?
LLM.from_pretrained('RWKV-4-World-0.4B-v1-20230529-ctx4096', 'rwkv')
Line 162 in f36de8a
Looking at the log files, it seems RWKV 130M and GPT 124M finished training on 8x V100 in only a dozen or so hours.
But karpathy/nanoGPT mentions that reproducing GPT-2 124M took 4 days on 8x A100 to reach the same loss as this project.
I'd like to know why that is - or is my understanding wrong?
Hi, I used this code to train a 169M RWKV model, but sample.py seems to only support inference with GPT-2 checkpoints. What should I modify to run inference on an RWKV checkpoint in a CPU-only environment? Here is the error I get in a no-CUDA environment - does this mean I must have a CUDA card to run inference?
Traceback (most recent call last):
File "/Users/chris/Downloads/rwkv/sample.py", line 41, in
model = RWKV(gptconf)
File "/Users/chris/Downloads/rwkv/modeling_rwkv.py", line 277, in init
self.load_cuda_kernel(config.dtype)
File "/Users/chris/Downloads/rwkv/modeling_rwkv.py", line 602, in load_cuda_kernel
wkv_cuda = load(name=f"wkv_{T_MAX}_bf16", sources=["wkv_op_bf16.cpp", "wkv_cuda_bf16.cu"], verbose=True, extra_cuda_cflags=["-t 4", "-std=c++17", "-res-usage", "--maxrregcount 60", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-DTmax={T_MAX}"])
File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1308, in load
return _jit_compile(
File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1800, in _write_ninja_file_and_build_library
extra_ldflags = _prepare_ldflags(
File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1899, in _prepare_ldflags
if (not os.path.exists(_join_cuda_home(extra_lib_dir)) and
File "/Users/chris/opt/anaconda3/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2416, in _join_cuda_home
raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
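The traceback shows that modeling_rwkv.py JIT-compiles the CUDA WKV kernel unconditionally in __init__, which fails when no CUDA toolkit is installed. One possible workaround, sketched under the assumption that a pure-PyTorch WKV recurrence would be acceptable for CPU inference (wkv_cpu and load_wkv below are hypothetical, not functions from the repo):

```python
import torch

def wkv_cpu(*args):
    # a pure-PyTorch WKV recurrence would go here: slower, but it needs
    # neither nvcc nor CUDA_HOME
    raise NotImplementedError("reference CPU implementation elided")

def load_wkv():
    # guarding the kernel load like this would let sample.py start on a
    # CPU-only machine instead of raising OSError in load_cuda_kernel
    if torch.cuda.is_available():
        # here the real code would call torch.utils.cpp_extension.load(...)
        return "cuda-kernel (elided in this sketch)"
    return wkv_cpu
```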
Does sample.py not support inference yet? At the moment it seems to only support GPT inference.