
cogvlm's Introduction

CogVLM & CogAgent

📗 中文版README

🌟 Jump to detailed introduction: Introduction to CogVLM, 🆕 Introduction to CogAgent

📔 For more detailed usage information, please refer to: CogVLM & CogAgent's technical documentation (in Chinese)

CogVLM

📖 Paper: CogVLM: Visual Expert for Pretrained Language Models

CogVLM is a powerful open-source visual language model (VLM). CogVLM-17B has 10 billion visual parameters and 7 billion language parameters, supporting image understanding and multi-turn dialogue at a resolution of 490×490.

CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30K captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA, and TDIUC.

CogAgent

📖 Paper: CogAgent: A Visual Language Model for GUI Agents

CogAgent is an open-source visual language model built upon CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters, supporting image understanding at a resolution of 1120×1120. On top of CogVLM's capabilities, it further possesses GUI Agent capabilities.

CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. It significantly surpasses existing models on GUI operation datasets including AITW and Mind2Web.

🌐 Web Demo for both CogVLM and CogAgent: this link

Table of Contents

Release

  • 🔥🔥🔥 News: 2024/4/5: CogAgent was selected as a CVPR 2024 Highlight!

  • 🔥🔥 News: 2023/12/26: We have released the CogVLM-SFT-311K dataset, which contains over 150,000 samples that we used for training CogVLM v1.0 only. You are welcome to follow and use it.

  • 🔥 News: 2023/12/18: New Web UI launched! We have launched a new web UI based on Streamlit, where users can easily talk to CogVLM and CogAgent for a better user experience.

  • News: 2023/12/15: CogAgent Officially Launched! CogAgent is an image understanding model developed based on CogVLM. It features visual-based GUI Agent capabilities and has further enhancements in image understanding. It supports image input with a resolution of 1120*1120, and possesses multiple abilities including multi-turn dialogue with images, GUI Agent, Grounding, and more.

  • News: 2023/12/8: We have updated the checkpoint of cogvlm-grounding-generalist to cogvlm-grounding-generalist-v1.1, trained with image augmentation and therefore more robust. See details.

  • News: 2023/12/7: CogVLM now supports 4-bit quantization! You can run inference with just 11 GB of GPU memory!

  • News: 2023/11/20: We have updated the checkpoint of cogvlm-chat to cogvlm-chat-v1.1, unified the chat and VQA versions, and refreshed the SOTA on various datasets. See details.

  • News: 2023/11/20: We have released cogvlm-chat, cogvlm-grounding-generalist/base, and cogvlm-base-490/224 on 🤗 Hugging Face. You can now run inference with transformers in a few lines of code!

  • 2023/10/27 CogVLM bilingual version is available online! Welcome to try it out!

  • 2023/10/5 CogVLM-17B released.

Get Started

Option 1: Inference Using Web Demo.

If you need to use Agent and Grounding functions, please refer to Cookbook - Task Prompts

Option 2: Deploy CogVLM / CogAgent by yourself

We support two interfaces for model inference: a CLI and a web demo. If you want to use the model in your own Python code, it is easy to adapt the CLI scripts to your use case.

First, we need to install the dependencies.

# CUDA >= 11.8
pip install -r requirements.txt
python -m spacy download en_core_web_sm

All code for inference is located under the basic_demo/ directory. Please switch to this directory first before proceeding with further operations.

Situation 2.1 CLI (SAT version)

Run CLI demo via:

# CogAgent
python cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16  --stream_chat
python cli_demo_sat.py --from_pretrained cogagent-vqa --version chat_old --bf16  --stream_chat

# CogVLM
python cli_demo_sat.py --from_pretrained cogvlm-chat --version chat_old --bf16  --stream_chat
python cli_demo_sat.py --from_pretrained cogvlm-grounding-generalist --version base --bf16  --stream_chat

The program will automatically download the SAT model and interact in the command line. You can generate replies by entering instructions and pressing Enter. Enter clear to clear the conversation history and stop to stop the program.

We also support model-parallel inference, which splits the model across multiple (2/4/8) GPUs. --nproc-per-node=[n] in the following command controls the number of GPUs used.

torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16
  • If you want to manually download the weights, you can replace the path after --from_pretrained with the model path.

  • Our model supports SAT's 4-bit quantization and 8-bit quantization. You can change --bf16 to --fp16, or --fp16 --quant 4, or --fp16 --quant 8.

    For example

    python cli_demo_sat.py --from_pretrained cogagent-chat --fp16 --quant 8 --stream_chat
    python cli_demo_sat.py --from_pretrained cogvlm-chat-v1.1 --fp16 --quant 4 --stream_chat
    # In the SAT version, --quant should be used together with --fp16
  • The program provides the following hyperparameters to control the generation process:

    usage: cli_demo_sat.py [-h] [--max_length MAX_LENGTH] [--top_p TOP_P] [--top_k TOP_K] [--temperature TEMPERATURE]
    
    optional arguments:
    -h, --help            show this help message and exit
    --max_length MAX_LENGTH
                            max length of the total sequence
    --top_p TOP_P         top p for nucleus sampling
    --top_k TOP_K         top k for top k sampling
    --temperature TEMPERATURE
                            temperature for sampling
    
  • Click here to view the correspondence between different models and the --version parameter.

Situation 2.2 CLI (Huggingface version)

Run CLI demo via:

# CogAgent
python cli_demo_hf.py --from_pretrained THUDM/cogagent-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogagent-vqa-hf --bf16

# CogVLM
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-grounding-generalist-hf --bf16
  • If you want to manually download the weights, you can replace the path after --from_pretrained with the model path.

  • You can change --bf16 to --fp16, or --quant 4. For example, our model supports Huggingface's 4-bit quantization:

    python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --quant 4
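
For reference, the sketch below shows one way to call the Hugging Face checkpoints directly from Python instead of through cli_demo_hf.py. It follows the usual transformers pattern; the build_conversation_input_ids helper is provided by the checkpoint's remote code, so treat the exact arguments as an assumption and consult the THUDM/cogvlm-chat-hf model card for the authoritative snippet.

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Tokenizer comes from Vicuna-7B-v1.5; the weights are pulled from the Hugging Face hub.
tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,  # loads the CogVLM modelling code shipped with the checkpoint
).to("cuda").eval()

image = Image.open("example.jpg").convert("RGB")  # any local image
# build_conversation_input_ids is defined in the checkpoint's remote code (assumed chat template).
inputs = model.build_conversation_input_ids(
    tokenizer, query="Describe this image.", history=[], images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=2048, do_sample=False)
    outputs = outputs[:, inputs["input_ids"].shape[1]:]  # keep only the newly generated tokens
    print(tokenizer.decode(outputs[0]))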

Situation 2.3 Web Demo

We also offer a local web demo based on Gradio. First, install Gradio by running pip install gradio. Then download and enter this repository, and run web_demo.py as shown below:

python web_demo.py --from_pretrained cogagent-chat --version chat --bf16
python web_demo.py --from_pretrained cogagent-vqa --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-chat-v1.1 --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-grounding-generalist --version base --bf16

The GUI of the web demo looks like:

Option 3: Finetuning CogAgent / CogVLM

You may want to use CogVLM for your own task, which may require a different output style or domain knowledge. All code for finetuning is located under the finetune_demo/ directory.

Here we provide a finetuning example for Captcha Recognition using LoRA.

  1. Start by downloading the Captcha Images dataset. Once downloaded, extract the contents of the ZIP file.

  2. To create a train/validation/test split in the ratio of 80/5/15, execute the following:

    python utils/split_dataset.py
  3. Start the fine-tuning process with this command:

    bash finetune_demo/finetune_(cogagent/cogvlm)_lora.sh
  4. Merge the model to model_parallel_size=1: (replace the 4 below with your training MP_SIZE)

    torchrun --standalone --nnodes=1 --nproc-per-node=4 utils/merge_model.py --version base --bf16 --from_pretrained ./checkpoints/merged_lora_(cogagent/cogvlm490/cogvlm224)
  5. Evaluate the performance of your model.

    bash finetune_demo/evaluate_(cogagent/cogvlm).sh

Option 4: OpenAI Vision format

We provide the same API examples as GPT-4V, which you can view in openai_demo.

  1. First, start the node:
python openai_demo/openai_api.py
  2. Next, run the request example, which demonstrates a continuous dialogue:
python openai_demo/openai_api_request.py
  3. You will get output similar to the following:
This image showcases a tranquil natural scene with a wooden pathway leading through a field of lush green grass. In the distance, there are trees and some scattered structures, possibly houses or small buildings. The sky is clear with a few scattered clouds, suggesting a bright and sunny day.
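
To call the endpoint from your own code, the sketch below uses the official openai Python client against the locally started node. The host/port (127.0.0.1:8000) and the model name are assumptions for illustration only; check openai_demo/openai_api.py and openai_demo/openai_api_request.py for the actual values.

import base64
from openai import OpenAI

# Assumption: the node started above listens on 127.0.0.1:8000 with an OpenAI-compatible /v1 route.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")

with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="cogvlm-chat-17b",  # hypothetical name; use whatever the local server reports
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)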

Hardware requirement

  • Model Inference:

    For INT4 quantization: 1 × RTX 3090 (24 GB) (CogAgent takes ~12.6 GB, CogVLM takes ~11 GB)

    For FP16: 1 × A100 (80 GB) or 2 × RTX 3090 (24 GB)

  • Finetuning:

    For FP16: 4 × A100 (80 GB) [recommended] or 8 × RTX 3090 (24 GB).

Model checkpoints

If you run the basic_demo/cli_demo*.py from the code repository, it will automatically download SAT or Hugging Face weights. Alternatively, you can choose to manually download the necessary weights.

  • CogAgent

    Model name     Input resolution  Introduction                                                                                                       Huggingface model        SAT model
    cogagent-chat  1120              Chat version of CogAgent. Supports GUI Agent, multiple-round chat and visual grounding.                            HF link / OpenXLab link  HF link / OpenXLab link
    cogagent-vqa   1120              VQA version of CogAgent. Has stronger capabilities in single-turn visual dialogue. Recommended for VQA benchmarks. HF link / OpenXLab link  HF link / OpenXLab link


Introduction to CogVLM

  • CogVLM is a powerful open-source visual language model (VLM). CogVLM-17B has 10 billion vision parameters and 7 billion language parameters.

  • CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30K captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA, and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B. CogVLM can also chat with you about images.

Click to view results on MM-VET, POPE, TouchStone.
Method LLM MM-VET POPE(adversarial) TouchStone
BLIP-2 Vicuna-13B 22.4 - -
Otter MPT-7B 24.7 - -
MiniGPT4 Vicuna-13B 24.4 70.4 531.7
InstructBLIP Vicuna-13B 25.6 77.3 552.4
LLaMA-Adapter v2 LLaMA-7B 31.4 - 590.1
LLaVA LLaMA2-7B 28.1 66.3 602.7
mPLUG-Owl LLaMA-7B - 66.8 605.4
LLaVA-1.5 Vicuna-13B 36.3 84.5 -
Emu LLaMA-13B 36.3 - -
Qwen-VL-Chat - - - 645.2
DreamLLM Vicuna-7B 35.9 76.5 -
CogVLM Vicuna-7B 52.8 87.6 742.0
Click to view results of cogvlm-grounding-generalist-v1.1.
Model                             RefCOCO (val / testA / testB)  RefCOCO+ (val / testA / testB)  RefCOCOg (val / test)  Visual7W (test)
cogvlm-grounding-generalist       92.51 / 93.95 / 88.73          87.52 / 91.81 / 81.43           89.46 / 90.09          90.96
cogvlm-grounding-generalist-v1.1  92.76 / 94.75 / 88.99          88.68 / 92.91 / 83.39           89.75 / 90.79          91.05

Examples

  • CogVLM can accurately describe images in detail with very few hallucinations.

    Click for a comparison with LLaVA-1.5 and MiniGPT-4.

  • CogVLM can understand and answer various types of questions, and has a visual grounding version.


  • CogVLM sometimes captures more detailed content than GPT-4V(ision).

Click to expand more examples.

Chat Examples

Introduction to CogAgent

CogAgent is an open-source visual language model built upon CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters.

CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. It significantly surpasses existing models on GUI operation datasets such as AITW and Mind2Web.

In addition to all the features already present in CogVLM (visual multi-round dialogue, visual grounding), CogAgent:

  1. Supports higher resolution visual input and dialogue question-answering. It supports ultra-high-resolution image inputs of 1120x1120.

  2. Possesses the capabilities of a visual Agent, being able to return a plan, next action, and specific operations with coordinates for any given task on any GUI screenshot.

  3. Enhanced GUI-related question-answering capabilities, allowing it to handle questions about any GUI screenshot, such as web pages, PC apps, mobile applications, etc.

  4. Enhanced capabilities in OCR-related tasks through improved pre-training and fine-tuning.

GUI Agent Examples

Cookbook

Task Prompts

  1. General Multi-Round Dialogue: Say whatever you want.

  2. GUI Agent Task: Use the Agent template and replace <TASK> with the task instruction enclosed in double quotes. This query can make CogAgent infer Plan and Next Action. If adding (with grounding) at the end of the query, the model will return a formalized action representation with coordinates.

For example, to ask the model how to complete the task "Search for CogVLM" on a current GUI screenshot, follow these steps:

  1. Randomly select a template from the Agent template. Here, we choose What steps do I need to take to <TASK>?.

  2. Replace <TASK> with the task instruction enclosed in double quotes, for example, What steps do I need to take to "Search for CogVLM"?. Inputting this to the model yields:

Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant result to read more about CogVLM or access further resources.

Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it.

  3. If adding (with grounding) at the end, i.e., changing the input to What steps do I need to take to "Search for CogVLM"?(with grounding), the output of CogAgent would be:

Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant result to read more about CogVLM or access further resources.

Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it. Grounded Operation:[combobox] Search -> TYPE: CogVLM at the box [[212,498,787,564]]

Tip: For GUI Agent tasks, it is recommended to conduct only single-round dialogues for each image for better results.
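
When consuming such (with grounding) replies programmatically, the grounded boxes can be pulled out of the text with a simple pattern match. A minimal sketch, assuming the [[x1,y1,x2,y2]] output format shown above:

import re

reply = ("Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it. "
         "Grounded Operation:[combobox] Search -> TYPE: CogVLM at the box [[212,498,787,564]]")

# Each box is four comma-separated integers inside double square brackets.
boxes = [tuple(map(int, m.groups()))
         for m in re.finditer(r"\[\[(\d+),(\d+),(\d+),(\d+)\]\]", reply)]
print(boxes)  # [(212, 498, 787, 564)]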

  3. Visual Grounding. Three modes of grounding are supported:

    • Image description with grounding coordinates (bounding box). Use any template from caption_with_box template as model input. For example:

    Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?

    • Returning grounding coordinates (bounding box) based on the description of objects. Use any template from caption2box template, replacing <expr> with the object's description. For example:

    Can you point out children in blue T-shirts in the image and provide the bounding boxes of their location?

    • Providing a description based on bounding box coordinates. Use a template from box2caption template, replacing <objs> with the position coordinates. For example:

    Tell me what you see within the designated area [[086,540,400,760]] in the picture.

Format of coordinates: The bounding box coordinates in the model's input and output use the format [[x1, y1, x2, y2]], with the origin at the top-left corner, the x-axis pointing right, and the y-axis pointing down. (x1, y1) and (x2, y2) are the top-left and bottom-right corners, respectively; the values are relative coordinates multiplied by 1000 (zero-padded to three digits).
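
To draw or crop such a box on the original image, the values therefore need to be scaled back by the image size. A minimal conversion sketch, assuming the coordinate format described above:

def box_to_pixels(box, width, height):
    """Convert a [[x1, y1, x2, y2]] box (relative coordinates x 1000) to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (round(x1 / 1000 * width), round(y1 / 1000 * height),
            round(x2 / 1000 * width), round(y2 / 1000 * height))

# Example: the grounded box from the GUI Agent output above, on a 1920x1080 screenshot.
print(box_to_pixels([212, 498, 787, 564], width=1920, height=1080))
# -> (407, 538, 1511, 609)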

Which --version to use

Due to differences in model functionalities, different model versions may have distinct --version specifications for the text processor, meaning the format of the prompts used varies.

model name --version
cogagent-chat chat
cogagent-vqa chat_old
cogvlm-chat chat_old
cogvlm-chat-v1.1 chat_old
cogvlm-grounding-generalist base
cogvlm-base-224 base
cogvlm-base-490 base

FAQ

  • If you have trouble accessing huggingface.co, you can add --local_tokenizer /path/to/vicuna-7b-v1.5 to load the tokenizer from a local path.
  • If you have trouble automatically downloading models with 🔨SAT, try downloading them manually from 🤖modelscope, 🤗huggingface, or 💡wisemodel.
  • When downloading models with 🔨SAT, they are saved to the default location ~/.sat_models. Change the default location by setting the environment variable SAT_HOME. For example, if you want to save the models to /path/to/my/models, you can run export SAT_HOME=/path/to/my/models before running the Python command.

License

The code in this repository is open source under the Apache-2.0 license, while the use of the CogVLM model weights must comply with the Model License.

Citation & Acknowledgements

If you find our work helpful, please consider citing the following papers:

@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models}, 
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{hong2023cogagent,
      title={CogAgent: A Visual Language Model for GUI Agents}, 
      author={Wenyi Hong and Weihan Wang and Qingsong Lv and Jiazheng Xu and Wenmeng Yu and Junhui Ji and Yan Wang and Zihan Wang and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2312.08914},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

In the instruction fine-tuning phase of CogVLM, we used some English image-text data from the MiniGPT-4, LLaVA, LRV-Instruction, LLaVAR, and Shikra projects, as well as many classic cross-modal datasets. We sincerely thank them for their contributions.

cogvlm's People

Contributors

1049451037, aisensiy, artur-ag, dm-thu, duchenzhuang, eltociear, ildar-idrisov, iyuge2, jianxindong, kq-chen, lykeven, mactavish91, shotarok, shreyasskandans, sleepychord, truebit, wenyihong, zrzrzrzrzrzrzr


cogvlm's Issues

AMD Support

Hello,
I would like to run the model, but I only have a limited amount of VRAM available on AMD GPUs.

which model does the web demo use?

Thanks for sharing your work!
I have tried cogvlm-base-224 and cogvlm-base-490 with cli_demo.py. I use the same image and text input but get different answers, and the CLI demo's answers seem incoherent and inconsistent with the image. The answer from the web demo is detailed and accurate; however, the answer I get is very short and not accurate. I use the following command:
python cli_demo.py --from_pretrained /local_path/cogvlm-base-490 --local_tokenizer /local_path/vicuna-7b-v1.5 --version base --english --bf16 --no_prompt --top_k 5
I downloaded all the checkpoints to local_path, so I use a local path to load the model. Could you tell me how to get performance like the web demo shows? Thanks a lot.

What is the meaning of different 'image_xxx_mask' ?

After checking the text processor, I found there are three image-related masks, i.e., image_embed_mask, image_vision_mask, and image_rope_mask, and they are slightly different. I wonder what the function of each is?

How to do Model Quantization?

  1. update web_demo.py:CogVLMModel.from_pretrained(...,device=f'cpu',...)
  2. python web_demo.py --version chat --english --quant 4

Is this OK?

GPU usage stuck at 100% when using 2 × RTX 3090


I run this command to start cli_demo:

torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained ~/CogVLM/cogvlm-chat --version chat --english --bf16

One of the two RTX 3090s' utilization reached 100 percent and stayed there, even though I haven't input anything.
Can you give me some advice on how to solve this problem?

Driver Version: 530.30.02 CUDA Driver Version: 12.1

$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

nccl version:

$ python -c "import torch;print(torch.cuda.nccl.version())"
(2, 18, 1)

Commercially available?

The current weights are based on Vicuna-7B-v1.5, so the whole code and model should follow the commercial terms of Llama 2, right? I am more concerned about whether the weights of the future open-source bilingual model can be kept commercially available. Thank you.

run cli_demo error

[screenshot] Hi, when I run `torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --fp16` I encounter this error. After debugging, I found the error is here: [screenshot]

Multiple images analysis

Hi,

Can this model be used for multi-image analysis and correlation detection? For instance, GPT-4V allows uploading several images and creating a prompt where you can ask to find the relationships between objects on different images. Is it possible to do the same with CogVLM? Or can it analyze only a single image?

Thanks,
Serhii

Baselines - comparison to IDEFICS

Hey, very cool work!
I like the idea of treating vision and language as different experts.
I think it would be great to have a comparison against IDEFICS-80b(-instruct), in particular to see how your model compares against larger systems!
Let me know if I can help!

gradio api result is inconsistent with web demo

Hi, thank you for your nice work. I'm using the Gradio API and the output result is inconsistent with the web demo.

The result of the web demo is shown below, where the bounding box is perfect:
[screenshot: sceenshot-2023-10-12-103220]

The Gradio API code used is as follows:

from gradio_client import Client

client = Client("http://36.103.203.44:7861/", output_dir="./VLM_output/test")

result = client.predict(
    "Where is human hand? answer in [[x0,y0,x1,y1]] format.",	# str  in 'Input Text' Textbox component
    0.8,	# int | float (numeric value between 0 and 1) in 'Temperature' Slider component
    0.4,	# int | float (numeric value between 0 and 1) in 'Top P' Slider component
    5,	# int | float (numeric value between 1 and 50) in 'Top K' Slider component
    "test_images/hand_test.jpg",	# str (filepath on your computer (or URL) of image) in 'Image Prompt' Image component
    "test_images/tmp.json",	# str (filepath to JSON file) in 'Multi-round conversation History' Chatbot component
    "Where is human hand? answer in [[x0,y0,x1,y1]] format.",	# str  in 'parameter_24' Textbox component
    True,	# bool  in 'Grounding' Checkbox component
    fn_index=1
)
print(result)

The content in output json is:
[["Where is human hand? answer in [[x0,y0,x1,y1]] format.", "[[423,418,560,579]]"], [null, {"name": "/tmp/gradio/8e4178eabe4143800647d3c11bc3d9018e319c08/1697078423_grounding.png", "mime_type": "image/png", "alt_text": null, "data": null, "is_file": true}]]

Bounding box plot:
[screenshot: screenshot-2023-10-12-104731]

I'm wondering which parameter can affect the result? BTW, I don't know the effect of the 'parameter_24' Textbox, and the 'Multi-round conversation History' Chatbot JSON file is set to the default [("", "Hi, What do you want to know about this image?")].

Thank you for your help!

download 224 when --from_pretrained cogvlm-base-490

When I tried python cli_demo.py --from_pretrained cogvlm-base-490 --version base --english --bf16 --no_prompt, the code downloaded cogvlm-base-224.zip and reported FileNotFoundError: [Errno 2] No such file or directory: '/xxx/cogvlm-base-490.zip'

Fix web_demo.py

I suggest adding torch.no_grad() before the chat(...) line; otherwise it will occupy a large amount of GPU memory, as during model training.

using PIL images instead of path

I tried to use PIL images instead of a path, as shown below, but I am getting this error:

TypeError: CogVLMModel.forward() missing 2 required positional arguments: 'vision_expert_mask' and 'image_embed_mask'

with torch.no_grad():
    response, history, cache_image = chat(
        None, 
        model, 
        text_processor_infer,
        image_processor,
        "Describe the image.", 
        history=[],
        max_length=2048, 
        top_p=0.9, 
        temperature=0.7,
        top_k=40,
        invalid_slices=text_processor_infer.invalid_slices,
        no_prompt=False,
        force_pil_image=image 
    )

"Sorry, I can't analyze the content of the current image for you. "

When I was attempting to conduct a multi-modal test on a healthcare-related dataset, the online demo responded as follows:
"Sorry, I am unable to analyze the content of the current image for you. However, feel free to ask me other questions, and I will do my best to assist you. Thank you for your understanding!"

Is there any moderation or monitoring of inputs and outputs related to the healthcare field?

Can this be used for information extraction from PI/CI images?

Hello authors, thank you very much for your work. I tested it on some related PI/CI images; my goal is to have the model produce structured data for the relevant fields so that review can be done faster.
However, the current demo's results are unsatisfactory. When asked for the values of the relevant fields, it easily hallucinates and answers incorrectly, and the numbers it returns are all wrong. Can fine-tuning make it more accurate, or can its OCR capability be improved? Looking forward to your reply.
AMMER

Error when deploying

Welcome to CogVLM-CLI. Enter an image URL or local file path to load an image. Continue inputting text to engage in a conversation. Type "clear" to start over, or "stop" to end the program.
Please enter the image path or URL (press Enter for plain text conversation): User: /mnt/models/CogVLM/examples/1.png
No image is not supported!
Please enter the image path or URL (press Enter for plain text conversation): /mnt/models/CogVLM/examples/1.png
User: wa^Hhta this
No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 1226, 16, 112) (torch.bfloat16)
key : shape=(1, 1226, 16, 112) (torch.bfloat16)
value : shape=(1, 1226, 16, 112) (torch.bfloat16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
xFormers wasn't build with CUDA support
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
triton is not available
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.bfloat16 (supported: {torch.float32})
has custom scale
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 112
Please enter the image path or URL (press Enter for plain text conversation): ^CTraceback (most recent call last):
File "/mnt/models/CogVLM/cli_demo.py", line 154, in
main()
File "/mnt/models/CogVLM/cli_demo.py", line 76, in main
image_path = [input("Please enter the image path or URL (press Enter for plain text conversation): ")]
KeyboardInterrupt

Which parameters are trained during the pre-training stage?

Hi, from the paper I understand that in the SFT stage all parameters except the visual encoder are trained. Could you tell me which parameters are frozen and which are trained during the pre-training stage? I couldn't find this part in the paper.
Thanks!

Please make a replicate demo

This model is amazing! Good work, guys! It would be awesome if you could push it to replicate.com, where many models are being shared right now.

web demo does not work

The website demo does not work; it shows "Timeout! Please wait a few minutes and retry." Can you fix it?

Question about model versions

cli_demo contains the following code:
print("模型:"+response)
if tokenizer.signal_type == "grounding":
    print("Grounding 结果已保存至 ./output.png")
From the example, this seems to indicate that when the model version is grounding, an image with the bounding boxes of the detected targets is saved.

However, among the currently provided models, apart from chat, the released versions (including the grounding model) are all base, and the code only supports chat and base inputs.

Is this a code bug, or has a model supporting this feature simply not been released yet?

xformers attention to onnx?


The requirements include xformers; is it used to build EVA-CLIP? I've worked on converting EVA to ONNX before: if EVA is built with xformers, one of its attention operators currently can't be converted to ONNX, whereas EVA built with openclip-pytorch can be converted to an ONNX model.

May I ask which specific model the EVA model (-e) in the project refers to? I can't find it at the moment; I'm currently using eva02-large-336. Can I directly replace the EVA model in the project with an openclip-pytorch-built EVA model?

Expecting LLaVA-Instruct dataset via manual annotation

Fantastic work! Thank you so much for your open-source work!

I am mainly interested in the high-quality LLaVA-Instruct dataset built via manual inspection and annotation for the SFT in the second stage of training. May I ask if I can obtain this part of the training data from you? My email is [email protected].

This will be a great help to me, and I'm looking forward to your early reply!

ncclUnhandledCudaError: Call to CUDA function failed.

Environment: WSL2, Ubuntu 22.04, running in a conda environment with Python 3.10.
CUDA 12.1 is installed and working with other tasks like ExLlamaV2.
Hardware-wise, I have one 4090 and a 3090 Ti.

After installation I attempted to run this command: torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16

The model loaded, but then threw this error:

(cogvlm) user@DESKTOP-User:~/CogVLM$ torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING]
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] *****************************************
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-10-26 22:47:16,010] torch.distributed.run: [WARNING] *****************************************
[2023-10-26 22:47:18,276] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-26 22:47:18,299] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-26 22:47:18,781] [WARNING] Failed to load bitsandbytes:No module named 'bitsandbytes'
[2023-10-26 22:47:18,781] [WARNING] Failed to load bitsandbytes:No module named 'bitsandbytes'
[2023-10-26 22:47:18,925] [INFO] building CogVLMModel model ...
[2023-10-26 22:47:18,925] [INFO] building CogVLMModel model ...
[2023-10-26 22:47:20,543] [INFO] [RANK 0] > initializing model parallel with size 2
[2023-10-26 22:47:20,543] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
[2023-10-26 22:47:26,335] [INFO] [RANK 1] > number of parameters on model parallel rank 1: 8893252992
[2023-10-26 22:47:27,171] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 8893252992
[2023-10-26 22:47:30,484] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-10-26 22:47:30,485] [INFO] [RANK 0] building CogVLMModel model ...
[2023-10-26 22:47:35,073] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 17639685376
[2023-10-26 22:47:40,102] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/user/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
[2023-10-26 22:48:15,001] [INFO] [RANK 0] > successfully loaded /home/user/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
Traceback (most recent call last):
File "/home/user/CogVLM/cli_demo.py", line 154, in
main()
File "/home/user/CogVLM/cli_demo.py", line 34, in main
model, model_args = CogVLMModel.from_pretrained(
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/sat/model/base_model.py", line 232, in from_pretrained
torch.distributed.barrier()
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3696, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'invalid argument'
Traceback (most recent call last):
File "/home/user/CogVLM/cli_demo.py", line 154, in
main()
File "/home/user/CogVLM/cli_demo.py", line 34, in main
model, model_args = CogVLMModel.from_pretrained(
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/sat/model/base_model.py", line 232, in from_pretrained
torch.distributed.barrier()
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
return func(*args, **kwargs)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3696, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1331, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.1
ncclUnhandledCudaError: Call to CUDA function failed.
Last error:
Cuda failure 'invalid argument'
[2023-10-26 22:48:26,410] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 947) of binary: /home/user/miniconda3/envs/cogvlm/bin/python
Traceback (most recent call last):
File "/home/user/miniconda3/envs/cogvlm/bin/torchrun", line 8, in
sys.exit(main())
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/run.py", line 806, in main
run(args)
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/user/miniconda3/envs/cogvlm/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
cli_demo.py FAILED


Failures:
[1]:
time : 2023-10-26_22:48:26
host : DESKTOP-User.
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 948)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2023-10-26_22:48:26
host : DESKTOP-User.
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 947)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

The format of coordinates

As mentioned in the paper and the web_demo scripts, the coordinates of each box are presented in the format [[bin_1_x, bin_1_y, bin_2_x, bin_2_y]]. Does each bin represent a three-digit number?

Furthermore, are there any additional special tokens employed when handling tasks related to coordinates?

"list index out of range" when running the multi-GPU inference demo

Hello~
When I run the multi-GPU inference demo using the command
torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16
an error occurred when I entered the image URL and username:
'''
(CogVLM) tcexeexe@ea51c5d88cb7:~/CogVLM$ torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo.py --from_pretrained cogvlm-chat --version chat --english --bf16 --local_tokenizer /hom
e/tcexeexe/checkpoints/lmsysvicuna-7b-v1.5
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING]
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] *****************************************
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-10-22 17:12:42,895] torch.distributed.run: [WARNING] *****************************************
[2023-10-22 17:12:46,767] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-22 17:12:47,090] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-10-22 17:12:49,318] [INFO] building CogVLMModel model ...
[2023-10-22 17:12:49,662] [INFO] building CogVLMModel model ...
[2023-10-22 17:12:50,958] [INFO] [RANK 0] > initializing model parallel with size 2
[2023-10-22 17:12:50,960] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
[2023-10-22 17:13:07,968] [INFO] [RANK 1] > number of parameters on model parallel rank 1: 8893252992
[2023-10-22 17:13:09,795] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 8893252992
[2023-10-22 17:13:22,369] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-10-22 17:13:22,371] [INFO] [RANK 0] building CogVLMModel model ...
[2023-10-22 17:13:36,394] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 17639685376
[2023-10-22 17:14:04,724] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/tcexeexe/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
[2023-10-22 17:14:36,256] [INFO] [RANK 0] > successfully loaded /home/tcexeexe/.sat_models/cogvlm-chat/1/mp_rank_00_model_states.pt
[2023-10-22 17:14:39,141] [INFO] [RANK 0] > initializing model parallel with size 2
[W ProcessGroupNCCL.cpp:1849] Warning: 0NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[W ProcessGroupNCCL.cpp:1849] Warning: 0NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
Welcome to CogVLM-CLI. Enter an image URL or local file path to load an image. Continue inputting text to engage in a conversation. Type "clear" to start over, or "stop" to end the program.
Please enter the image path or URL (press Enter for plain text conversation): /home/tcexeexe/CogVLM/demo/car.jpg
User: tcexeexe
list index out of range
Please enter the image path or URL (press Enter for plain text conversation):
'''
Do you know how to solve it?
