
CogVLM Issues

Baselines - comparison to IDEFICS

Hey, very cool work!
I like the idea of treating vision and language as different experts.
I think it would be great to have a comparison against IDEFICS-80b(-instruct), in particular to see how your model compares against larger systems!
Let me know if I can help!

run cli_demo error

Hi, when I run `torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --fp16` I encounter an error (see attached screenshot). After debugging, I found the error occurs here (see attached screenshot).

Expecting LLaVA-Instruct dataset via manual annotation

Fantastic work! Thank you so much for open-sourcing it!

I am mainly interested in the high-quality LLaVA-Instruct dataset obtained via manual inspection and annotation, which is used for SFT in the second stage of training. May I ask if I can obtain this part of the training data from you? My email is [email protected].

This will be a great help to me, and I'm looking forward to your early reply!

Error when deploying

Welcome to CogVLM-CLI. Enter an image URL or local file path to load an image. Continue inputting text to engage in a conversation. Type "clear" to start over, or "stop" to end the program.
Please enter the image path or URL (press Enter for plain text conversation): User: /mnt/models/CogVLM/examples/1.png
No image is not supported!
Please enter the image path or URL (press Enter for plain text conversation): /mnt/models/CogVLM/examples/1.png
User: wa^Hhta this
No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 1226, 16, 112) (torch.bfloat16)
key : shape=(1, 1226, 16, 112) (torch.bfloat16)
value : shape=(1, 1226, 16, 112) (torch.bfloat16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
xFormers wasn't build with CUDA support
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
triton is not available
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.bfloat16 (supported: {torch.float32})
has custom scale
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 112
Please enter the image path or URL (press Enter for plain text conversation): ^CTraceback (most recent call last):
File "/mnt/models/CogVLM/cli_demo.py", line 154, in
main()
File "/mnt/models/CogVLM/cli_demo.py", line 76, in main
image_path = [input("Please enter the image path or URL (press Enter for plain text conversation): ")]
KeyboardInterrupt

Multiple-image analysis

Hi,

Can this model be used for multi-image analysis and correlation detection? For instance, GPT-4V allows uploading several images and creating a prompt where you can ask to find the relationships between objects on different images. Is it possible to do the same with CogVLM? Or can it analyze only a single image?

Thanks,
Serhii

Fix web_demo.py


Suggest adding "torch.no_grad()" around the chat(...) call; otherwise it occupies a large amount of GPU memory, as if the model were being trained.
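
A minimal sketch of that suggestion (based on the chat(...) call shown in the PIL-image issue below; image_path and query stand in for whatever web_demo.py actually passes, and only the torch.no_grad() wrapper is the proposed change):

import torch

# model, text_processor_infer, image_processor and chat are the objects the demo
# already builds; no_grad() stops PyTorch from keeping activations for backprop,
# which is what makes inference consume GPU memory as if the model were training.
with torch.no_grad():
    response, history, cache_image = chat(
        image_path, model, text_processor_infer, image_processor,
        query, history=history, max_length=2048,
        top_p=0.9, temperature=0.7, top_k=40,
    )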

web demo does not work

The website demo does not work; it shows "Timeout! Please wait a few minutes and retry." Can you fix it?

Using PIL images instead of a path

I tried to use PIL images instead of a path, as shown below, but I get this error:

TypeError: CogVLMModel.forward() missing 2 required positional arguments: 'vision_expert_mask' and 'image_embed_mask'

with torch.no_grad():
    response, history, cache_image = chat(
        None, 
        model, 
        text_processor_infer,
        image_processor,
        "Describe the image.", 
        history=[],
        max_length=2048, 
        top_p=0.9, 
        temperature=0.7,
        top_k=40,
        invalid_slices=text_processor_infer.invalid_slices,
        no_prompt=False,
        force_pil_image=image 
    )

ncclUnhandledCudaError: Call to CUDA function failed.

Environment: WSL2, Ubuntu 22.04, running in a conda environment with Python 3.10.
CUDA 12.1 is installed and working with other tasks like ExLlamaV2.
Hardware-wise, I have one 4090 and one 3090 Ti.

After installation I attempted to run this command: torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16

The model loaded, but then threw this error:

(cogvlm) user@DESKTOP-User:~/CogVLM$ torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING]
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] *****************************************
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] *****************************************
[2023-10-26 22:47:18,276] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-26 22:47:18,299] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-26 22:47:18,781] [WARNING] Failed to load bitsandbytes:No module named 'bitsandbytes'
[2023-10-26 22:47:18,781] [WARNING] Failed to load bitsandbytes:No module named 'bitsandbytes'
[2023-10-26 22:47:18,925] [INFO] building CogVLMModel model ...
[2023-10-26 22:47:18,925] [INFO] building CogVLMModel model ...
[2023-10-26 22:47:20,543] [INFO] [RANK 0] > initializing model parallel with size 2
[2023-10-26 22:47:20,543] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
[2023-10-26 22:47:26,335] [INFO] [RANK 1] > number of parameters on model parallel rank 1: 8893252992
[2023-10-26 22:47:27,171] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 8893252992
[2023-10-26 22:47:30,484] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-10-26 22:47:30,485] [INFO] [RANK 0] building CogVLMModel model ...
[2023-10-26 22:47:35,073] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 17639685376
[2023-10-26 22:47:40,102] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/user/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
[2023-10-26 22:48:15,001] [INFO] [RANK 0] > successfully loaded /home/user/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
Traceback (most recent call last):
File "/home/user/CogVLM/cli_demo.py", line 154, in
main()
File "/home/user/CogVLM/cli_demo.py", line 34, in main
model, model_args = CogVLMModel.from_pretrained(
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/sat/model/base_model.py", line 232, in from_pretrained
torch.distributed.barrier()
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3696, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'invalid argument'
Traceback (most recent call last):
File "/home/user/CogVLM/cli_demo.py", line 154, in
main()
File "/home/user/CogVLM/cli_demo.py", line 34, in main
model, model_args = CogVLMModel.from_pretrained(
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/sat/model/base_model.py", line 232, in from_pretrained
torch.distributed.barrier()
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3696, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'invalid argument'
[2023-10-26 22:48:26,410] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 947) of binary: /home/user/miniconda3/envs/cogvlm/bin/python
Traceback (most recent call last):
File "/home/user/miniconda3/envs/cogvlm/bin/torchrun", line 8, in
sys.exit(main())
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/run.py", line 806, in main
run(args)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
cli_demo.py FAILED


Failures:
[1]:
time : 2023-10-26_22:48:26
host : DESKTOP-User.
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 948)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2023-10-26_22:48:26
host : DESKTOP-User.
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 947)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

GPU usage stuck at 100% when using 2x RTX 3090


I run this command to start cli_demo:

torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained ~/CogVLM/cogvlm-chat --version chat --english --bf16

One of the two RTX 3090s reached 100% utilization and stayed there, even though I hadn't input anything yet.
Can you give me some advice on how to solve this problem?

Driver Version: 530.30.02 CUDA Driver Version: 12.1

$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

nccl version:

$ python -c "import torch;print(torch.cuda.nccl.version())"
(2, 18, 1)

“list index out of range” when running the multi-GPU inference demo

Hello~
When I run the multi-GPU inference demo using the command
torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16
an error occurred when I entered the image URL and username:
'''
(CogVLM) tcexeexe@ea51c5d88cb7:~/CogVLM$ torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16 --local_tokenizer /hom
e/tcexeexe/checkpoints/lmsysvicuna-7b-v1.5
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING]
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] *****************************************
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] *****************************************
[2023-10-22 17:12:46,767] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-22 17:12:47,090] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-22 17:12:49,318] [INFO] building CogVLMModel model ...
[2023-10-22 17:12:49,662] [INFO] building CogVLMModel model ...
[2023-10-22 17:12:50,958] [INFO] [RANK 0] > initializing model parallel with size 2
[2023-10-22 17:12:50,960] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
[2023-10-22 17:13:07,968] [INFO] [RANK 1] > number of parameters on model parallel rank 1: 8893252992
[2023-10-22 17:13:09,795] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 8893252992
[2023-10-22 17:13:22,369] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-10-22 17:13:22,371] [INFO] [RANK 0] building CogVLMModel model ...
[2023-10-22 17:13:36,394] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 17639685376
[2023-10-22 17:14:04,724] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/tcexeexe/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
[2023-10-22 17:14:36,256] [INFO] [RANK 0] > successfully loaded /home/tcexeexe/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
[2023-10-22 17:14:39,141] [INFO] [RANK 0] > initializing model parallel with size 2
[W ProcessGroupNCCL.cpp:1849] Warning: 0NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[W ProcessGroupNCCL.cpp:1849] Warning: 0NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
Welcome to CogVLM-CLI. Enter an image URL or local file path to load an image. Continue inputting text to engage in a conversation. Type "clear" to start over, or "stop" to end the program.
Please enter the image path or URL (press Enter for plain text conversation): /home/tcexeexe/CogVLM/demo/car.jpg
User: tcexeexe
list index out of range
Please enter the image path or URL (press Enter for plain text conversation):
'''
Do you know how to solve it?

The format of coordinates

As mentioned in the paper and the web_demo scripts, the coordinates of each box are presented in the format [[bin_1_x, bin_1_y, bin_2_x, bin_2_y]]. Does each bin represent a three-digit number?

Furthermore, are there any additional special tokens employed when handling tasks related to coordinates?
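
Not an answer, but a minimal sketch of the implied decoding, assuming each bin is a three-digit value normalized to the 000–999 range over the image width/height (an assumption based on the examples in these issues, e.g. [[423,418,560,579]] below, not something confirmed by the authors); the file name and box values are purely illustrative:

from PIL import Image

def bins_to_pixels(box, image_size, num_bins=1000):
    # Map [x0, y0, x1, y1] bin values (assumed to lie in 0..num_bins-1)
    # onto absolute pixel coordinates for an image of size (width, height).
    w, h = image_size
    x0, y0, x1, y1 = box
    return (x0 / num_bins * w, y0 / num_bins * h,
            x1 / num_bins * w, y1 / num_bins * h)

img = Image.open("demo/car.jpg")  # illustrative path
print(bins_to_pixels([423, 418, 560, 579], img.size))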

Please make a Replicate demo

This model is amazing! Good work, guys! It would be awesome if you could push it to replicate.com, where many models are being shared right now.

Downloads cogvlm-base-224 when --from_pretrained cogvlm-base-490

When I tried python cli_demo.py --from_pretrained cogvlm-base-490 --version base --english --bf16 --no_prompt, the code downloaded cogvlm-base-224.zip and reported FileNotFoundError: [Errno 2] No such file or directory: '/xxx/cogvlm-base-490.zip'

What is the meaning of the different 'image_xxx_mask' fields?

After checking the text processor, I found there are three image-related masks, i.e., image_embed_mask, image_vision_mask, and image_rope_mask, and they are slightly different. I wonder what the function of each is?

Which parameters are trained during the pre-training stage?

Hello, from the paper I understand that in the SFT stage all parameters except the vision encoder are trained. Could you tell me which parameters are frozen and which are trained during the pre-training stage? I couldn't find this part in the paper.
Thank you!

AMD Support

Hello,
I would like to run the model, but I only have enough VRAM available on AMD GPUs.

which model does the web demo use?

Thanks for sharing your work!
I have tried cogvlm-base-224 and cogvlm-base-490 with cli_demo.py. I use the same image and text input but get different answers, and cli_demo's answer seems incoherent and inconsistent with the image. The web demo's answer is detailed and accurate, whereas the answer I get is very short and not accurate. I use the following command:
python cli_demo.py --from_pretrained /local_path/cogvlm-base-490 --local_tokenizer /local_path/vicuna-7b-v1.5 --version base --english --bf16 --no_prompt --top_k 5
I downloaded all the checkpoints to local_path, so I load the model from a local path. Could you tell me how to get the performance shown by the web demo? Thanks a lot.

gradio api result is inconsistent with web demo

Hi, thank you for your nice work. I'm using the gradio API and the output result is inconsistent with the web demo.

The result of the web demo is shown below, where the bounding box is perfect:
(screenshot: web demo result with the bounding box drawn)

The gradio api code used is as follows

from gradio_client import Client

client = Client("http://36.103.203.44:7861/", output_dir="./VLM_output/test")

result = client.predict(
    "Where is human hand? answer in [[x0,y0,x1,y1]] format.",	# str  in 'Input Text' Textbox component
    0.8,	# int | float (numeric value between 0 and 1) in 'Temperature' Slider component
    0.4,	# int | float (numeric value between 0 and 1) in 'Top P' Slider component
    5,	# int | float (numeric value between 1 and 50) in 'Top K' Slider component
    "test_images/hand_test.jpg",	# str (filepath on your computer (or URL) of image) in 'Image Prompt' Image component
    "test_images/tmp.json",	# str (filepath to JSON file) in 'Multi-round conversation History' Chatbot component
    "Where is human hand? answer in [[x0,y0,x1,y1]] format.",	# str  in 'parameter_24' Textbox component
    True,	# bool  in 'Grounding' Checkbox component
    fn_index=1
)
print(result)

The content in output json is:
[["Where is human hand? answer in [[x0,y0,x1,y1]] format.", "[[423,418,560,579]]"], [null, {"name": "/tmp/gradio/8e4178eabe4143800647d3c11bc3d9018e319c08/1697078423_grounding.png", "mime_type": "image/png", "alt_text": null, "data": null, "is_file": true}]]

bounding box plot:
(screenshot: bounding box plotted from the gradio API output)
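
For reference, a minimal sketch of turning the box string from the output json above into a local plot; it reuses the same 0–999 normalization assumption as the coordinate-format question earlier, plus the image path already passed to client.predict:

import re
from PIL import Image, ImageDraw

answer = "[[423,418,560,579]]"                 # the box string returned in the output json
box = [int(v) for v in re.findall(r"\d+", answer)]

img = Image.open("test_images/hand_test.jpg")  # same image sent to the API
w, h = img.size
x0, y0, x1, y1 = [v / 1000 * s for v, s in zip(box, (w, h, w, h))]  # assumes 0-999 bins
ImageDraw.Draw(img).rectangle([x0, y0, x1, y1], outline="red", width=3)
img.save("hand_test_with_box.png")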

I'm wondering which parameter can affect the result? BTW, I don't know the effect of the 'parameter_24' Textbox, and the 'Multi-round conversation History' Chatbot json file is set to the default [("", "Hi, What do you want to know about this image?")].

Thank you for your help!

How to do Model Quantization?

  1. Update web_demo.py: CogVLMModel.from_pretrained(..., device=f'cpu', ...)
  2. Run python web_demo.py --version chat --english --quant 4

Is this OK?

Commercially available?

The current weights are based on Vicuna-7B-v1.5, so the whole code and model should follow the commercial terms of Llama 2, right? I am more concerned about whether the weights of the future open-source bilingual model will remain available for commercial use. Thank you.

Can this be used for PI/CI image information extraction?

Hello, and thank you very much for your work. I tested it on some PI/CI images; my goal is to have the model produce structured data for the relevant fields, to speed up review.
However, the current demo's results are unsatisfactory: asking for the values of the relevant fields easily triggers hallucinations and incorrect answers, and the numbers it gives are all wrong. Could fine-tuning make it more accurate, or could its OCR ability be strengthened? Looking forward to your reply.
AMMER

A question about model versions

cli_demo contains the following code:
print("模型:" + response)
if tokenizer.signal_type == "grounding":
    print("Grounding 结果已保存至 ./output.png")
Judging from this, it seems that when the model version is grounding, an image with the detected target bounding boxes drawn on it is saved.

However, among the currently provided models, apart from chat, the versions (including the grounding model) are all 'base', and the code only accepts 'chat' and 'base' as version inputs.

Is this a code bug, or has a model that supports this feature simply not been released yet?

"Sorry, I can't analyze the content of the current image for you. "

When I was attempting to conduct a multi-modal test on a healthcare-related dataset, the online demo responded as follows:
"Sorry, I am unable to analyze the content of the current image for you. However, feel free to ask me other questions, and I will do my best to assist you. Thank you for your understanding!"

Is there any moderation or monitoring of inputs and outputs related to the healthcare field?

xformers attention to onnx?

The requirements include xformers; is it used to build EVA-CLIP? I've previously worked on converting EVA to ONNX: if EVA is built with xformers, one of its attention operators currently cannot be converted to ONNX, whereas an EVA built with openclip-pytorch can be converted to an ONNX model.

Also, which specific model does the EVA model "-e" in this project refer to? I haven't been able to find it; I'm currently using eva02-large-336. Can I directly replace the EVA model in this project with one built with openclip-pytorch?
