csuhan / OneLLM
[CVPR 2024] OneLLM: One Framework to Align All Modalities with Language
License: Other
Hi! I have been having some trouble getting the repo and models working. Specifically, I tried to run the evaluation scripts (COCO captioning) as reported in the README, using the checkpoint available on the Hugging Face Hub (https://huggingface.co/csuhan/OneLLM-7B). I'm using an A500 24GB GPU for inference.
The CIDEr score I get is 0.02, much lower than expected given that the model is trained on MS COCO data. The captions are not accurate and lack variability (I pasted some examples below, followed by the sanity check I ran on the checkpoint). Moreover, the model consistently refers to the images as being black and white. I double-checked that the images are downloaded properly, and I used the code as-is after only adapting the paths. Is the checkpoint ready to use and adequate for finetuning on additional tasks? Is there any step missing from the repo docs that I should be doing?
Please feel free to request additional information about my setup that might be relevant to the problem.
Thanks!
{
"image_id": 184613,
"caption": "A close up of a black and white photo of a cat."
},
{
"image_id": 403013,
"caption": "A black and white photo of a long object."
},
{
"image_id": 562150,
"caption": "A black and white photo of a long object."
},
{
"image_id": 360772,
"caption": "A black and white photo of a long thin object."
},
{
"image_id": 340559,
"caption": "A black and white photo of a long object."
},
{
"image_id": 321107,
"caption": "A black and white photo of a black object."
},
{
"image_id": 129001,
"caption": "A black and white photo of a long object."
},
{
"image_id": 556616,
"caption": "A black and white photo of a long object."
},
{
"image_id": 472621,
"caption": "A black and white photo of a blurry object."
},
{
"image_id": 364521,
"caption": "A black and white photo of a black and white object."
},
{
"image_id": 310391,
"caption": "A black and white photo of a blank screen."
},
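For what it's worth, here is the rough sanity check I ran on the downloaded checkpoint to rule out a corrupted or partially loaded state dict. This is just a sketch: it grabs whatever .pth file the Hub repo lists (I have not verified which file the eval scripts actually expect), and the printed shapes/norms are only a quick integrity check, not the repo's intended loading path.

import torch
from huggingface_hub import list_repo_files, hf_hub_download

repo_id = "csuhan/OneLLM-7B"

# List the files in the Hub repo and grab the first checkpoint-looking file
# (assumption: the weights are stored as a .pth file; adjust if not).
ckpt_files = [f for f in list_repo_files(repo_id) if f.endswith(".pth")]
print("checkpoint files:", ckpt_files)

ckpt_path = hf_hub_download(repo_id=repo_id, filename=ckpt_files[0])
state_dict = torch.load(ckpt_path, map_location="cpu")

# Basic integrity checks: number of entries, dtypes, and a few weight norms.
# (If the file stores a nested dict, e.g. {"model": ...}, unwrap it first.)
print("num entries:", len(state_dict))
for name, value in list(state_dict.items())[:5]:
    if torch.is_tensor(value):
        print(name, tuple(value.shape), value.dtype, float(value.float().norm()))
    else:
        print(name, type(value))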
Please include a license for this to be used. Thank you!
Hello, I would like to ask: the current code seems to support only one non-text modality plus text per inference call. Is it possible to input data from multiple modalities (such as audio, video, and text) in a single inference call?
Hello! Your work is excellent and I am very interested in it. I wonder when you will open-source the training code or give some examples. Thanks!
Hi! Thank you for the great contribution. I am trying to run the demo. My setup consists of 4x 2080Ti 12GB GPUs, so I cannot run the model on a single card (it takes ~16GB as far as I know). The checkpoint is not sharded, but the model class uses fairscale distributed modules, so I haven't found a way to load the state dict on more than one GPU; a sketch of the kind of setup I expected is below. Am I missing something? If not, would you release sharded checkpoints and/or distributed inference scripts? Thanks!!
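For context, this is the model-parallel setup I assumed would be needed before loading a sharded LLaMA-style checkpoint. It is only a sketch based on the usual fairscale pattern, not the repo's actual code, and the released single-file checkpoint would presumably still have to be split per rank for this to help.

import os
import torch
import torch.distributed as dist
from fairscale.nn.model_parallel.initialize import initialize_model_parallel

def setup_model_parallel(model_parallel_size: int) -> int:
    # Expects to be launched with torchrun so that RANK/WORLD_SIZE/LOCAL_RANK are set.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")
    initialize_model_parallel(model_parallel_size)
    torch.cuda.set_device(local_rank)
    return local_rank

# e.g. local_rank = setup_model_parallel(model_parallel_size=4), followed by
# building the model and loading this rank's shard of the weights.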
Hello, thank you for this very inspiring work.
Could you give a rough description of the functional scope of the different experts? It feels like the experts are not specialized per modality, but rather capture different aspects of the image modality, which would explain why image and image-like modalities such as video are more sensitive to the number of experts.
Also, is this behavior related to using a frozen image encoder in the encoding stage, which limits what the other modalities can learn? Or, put differently, is this a kind of soft alignment that aligns the other modalities to the image modality?
Could you provide some simple inference examples, like the ones in SPHINX-inference?
Hi, any idea or reference on the expected input fMRI format, or how to process the data?
It seems that tokenizer.model is the pretrained SentencePiece model. Can we have access to the training code for this tokenizer, so that we can extend the training to tokenize more modalities? (A sketch of what I have in mind is below.)
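For reference, this is roughly what I mean. Loading the released tokenizer works with the sentencepiece package, and training a new one is possible, although a new vocabulary would not be a drop-in replacement for the released LLM weights. The file names and settings below are just placeholders.

import sentencepiece as spm

# Load the released SentencePiece tokenizer.
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
print(sp.vocab_size(), sp.encode("hello world", out_type=str))

# Hypothetical: train a new SentencePiece model on a custom corpus
# (plain text, one sentence per line). The resulting vocabulary differs from
# the original, so the LLM's embedding table would have to be resized/retrained.
spm.SentencePieceTrainer.train(
    input="my_corpus.txt",
    model_prefix="my_tokenizer",   # writes my_tokenizer.model / my_tokenizer.vocab
    vocab_size=32000,
    model_type="bpe",
)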
Can this be used with Mistral?
Thank you for releasing the model & code. Can the model work with high-resolution images and videos, such as 720x1280, without having to resize them to 224x224?
Just like CLIP, are the embeddings generated by the Universal Encoder comparable across modalities? If so, we could perform search and matching based on the similarity of embeddings for data from different modalities (a sketch of what I mean is below). Could you provide the encoder part of the model separately for testing? The overall ~15GB model is too large at the moment.
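To illustrate, this is the usual CLIP-style retrieval pattern I would hope to apply. The encode calls in the usage comment are placeholders for whatever per-sample embedding the encoder might expose, not an existing API in this repo.

import torch
import torch.nn.functional as F

def cosine_topk(query_emb: torch.Tensor, gallery_embs: torch.Tensor, k: int = 5):
    # L2-normalize so the dot product is cosine similarity, as in CLIP retrieval.
    q = F.normalize(query_emb, dim=-1)      # (D,)
    g = F.normalize(gallery_embs, dim=-1)   # (N, D)
    scores = g @ q                          # (N,) similarity of each gallery item
    return scores.topk(k)                   # top-k scores and indices

# e.g. text_emb = encode(text, modal='text'); audio_embs = encode(batch, modal='audio')
# values, indices = cosine_topk(text_emb, audio_embs)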
I slightly modified the eval code for audio to run on my dataset; however, the outputs are vague even when the audio is speech.
They are all like the ones below:
I attach my code below.
import torch
# conv_templates and make_audio_features come from the OneLLM repo's demo/eval
# utilities (same imports as in the original scripts); omitted here for brevity.

def inference_onellm(model, target_dtype, images, modal=['image']):
    # Pick a prompt template depending on the input modality.
    if 'imu' in modal:
        inps = ['Describe the motion.'] * len(images)
    if 'audio' in modal:
        inps = ['Provide a one-sentence caption for the provided audio.'] * len(images)
        # inps = ['Provide a one-sentence action description for the provided audio.'] * len(images)
    if 'image' in modal:
        inps = ['Describe the scene.'] * len(images)

    images = images.cuda().to(target_dtype)

    # Wrap each instruction in the repo's conversation template.
    prompts = []
    for inp in inps:
        conv = conv_templates["v1"].copy()
        conv.append_message(conv.roles[0], inp)
        conv.append_message(conv.roles[1], None)
        prompts.append(conv.get_prompt())

    with torch.cuda.amp.autocast(dtype=target_dtype):
        responses = model.generate(prompts, images, 128, temperature=0.1, top_p=0.75, modal=modal)

    # Strip the echoed prompt and the '###' separator from each generation.
    outputs = []
    for response, prompt in zip(responses, prompts):
        response = response[len(prompt):].split('###')[0]
        outputs.append(response.strip())
    return outputs

# Build log-mel features from the wav file and run audio captioning.
audio = torch.tensor(make_audio_features('tmp_onellm.wav', mel_bins=128).transpose(0, 1)[None, None])
result_audio = inference_onellm(model, target_dtype, audio, modal=['audio'])
Hi, Thanks for the awesome contribution to the community!
There's something that has been bugging me for hours. It is mentioned in the paper that the LLM is frozen during the training of the projection modules. However, I couldn't pinpoint the part of the released code responsible for that. Is the paper just a guideline, or has the relevant part not been released yet? Or could I have just missed the code responsible for this behavior? (A sketch of what I was looking for is below.)
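For reference, this is the kind of freezing logic I was looking for. It is only my guess at how a frozen-LLM stage is typically implemented, not the repo's actual code, and the module-name patterns are placeholders.

import torch
import torch.nn as nn

def freeze_llm_keep_projections(model: nn.Module, trainable_keywords=("projection", "resampler")):
    # Freeze every parameter, then re-enable gradients only for parameters whose
    # names match the given keywords (placeholder patterns; real names may differ).
    for p in model.parameters():
        p.requires_grad_(False)
    trainable = []
    for name, p in model.named_parameters():
        if any(k in name for k in trainable_keywords):
            p.requires_grad_(True)
            trainable.append(p)
    return trainable

# The optimizer would then only see the projection parameters:
# optimizer = torch.optim.AdamW(freeze_llm_keep_projections(model), lr=1e-4)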
Eugene
Well done! I wonder when the training- and inference-related code will be provided?
In pretrain_dataset.py, you use the client from petrel_client, but it's not a package covered in requirements.txt.
Directly installing it with pip install petrel_client doesn't work either.
Where can I find and install this package?
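In case it helps anyone else: as far as I can tell, petrel_client is an internal Ceph/object-storage client that is not published on PyPI, so as a workaround I replaced it with a small local-filesystem stand-in. This assumes the dataset code only calls client.get(path) to read raw bytes, which may not cover every code path.

class LocalClient:
    """Drop-in stand-in for petrel_client's client that reads from the local filesystem."""

    def get(self, path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()

# In pretrain_dataset.py, replace the petrel_client import/instantiation with:
# client = LocalClient()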
I have used https://github.com/csuhan/OneLLM/blob/main/docs/Evaluation.md:
Point-Text Evaluation
PointLLM Caption
Download PointLLM data from this link
Fill pretrained_path in eval/point_cap_pointllm.py and run: python eval/point_cap_pointllm.py.
Evaluate with eval/caption_eval.py. The annotation file is at datasets/Eval/point/pointllm_test_cococap.json
Several of my team members and I all got similar BLEU, METEOR, and ROUGE_L scores when trying to reproduce your Table 5 results for OneLLM. We all got very low numbers like those below, and CIDEr is zero. Can you please double-check that? We believe we are using the same point cloud files, scripts, and model. Thank you. Rob
SPICE: 0.094
Bleu_1: 0.104
Bleu_2: 0.065
Bleu_3: 0.045
Bleu_4: 0.034
METEOR: 0.131
ROUGE_L: 0.175
CIDEr: 0.000
From https://arxiv.org/pdf/2312.03700, Page 6, Table 5, Evaluation on Point Cloud-Text Tasks: "The evaluation dataset is from Objaverse [16], following the data split in PointLLM [92]. InstructBLIP takes a single-view image as input, while PointLLM and OneLLM take a point cloud as input. GPT4-Acc.: GPT4 as the accuracy evaluator [92]."

Model                   Captioning                      Classification
                        BLEU-1   ROUGE-L   METEOR       GPT4-Acc.
InstructBLIP-7B [15]    11.2     13.9      14.9         38.5
InstructBLIP-13B [15]   12.6     15.0      16.0         35.5
PointLLM-7B [92]        8.0      11.1      15.2         47.5
PointLLM-13B [92]       9.7      12.8      15.3         45.0
OneLLM-7B (Ours)        42.2     45.3      20.3         44.5
Thank you for your outstanding work.
I noticed that when running the demo you provided, QA inference for the depth/normal-map modalities seems to require providing both the RGB image and the depth/normal map together to obtain accurate answers. If only the depth/normal information is provided, the system appears unable to answer the questions.
Could you clarify whether the intended behavior of the system in the depth/normal mode aligns with the paper, which suggests that QA inference can be accomplished solely from depth/normal information?
Hey authors,
Can you share the annotation JSON for the ClothoV2 evaluation?