
mmc's Issues

Request for Chart-Text Alignment Data Download Link

Hello,

I have been exploring the MMC project repository and find it immensely valuable. I am particularly interested in accessing the Chart-Text Alignment Data to further my research. However, I noticed that the download link provided in the README only includes the Chart Instruction-Tuning Data.

Would it be possible for you to kindly provide the download link for the Chart-Text Alignment Data as well? Access to this dataset would greatly contribute to my work and research efforts.

Thank you very much for your time and consideration.

Best regards,

Help setting up the MMCA gradio demo

Hello there! I've been trying to set up the MMCA gradio demo and have run into some bugs. I will list them here, and hopefully get help with the last one, which I have not been able to solve by myself.

Bugs I've found:

gradio version

gradio needs to be pinned to version 3.20.0 for the demo to work, because in later releases gradio.components no longer provides the classes Changeable, IOComponent, JSONSerializable (and possibly others) that mplug-owl/serve/gradio_patch.py pulls in through a wildcard import.
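For anyone hitting the same import errors, the pin itself is a one-liner (version taken from my testing above; adjust if the repo later updates gradio_patch.py):

```shell
# Pin gradio to the version whose gradio.components still exports
# Changeable, IOComponent, JSONSerializable (removed in later releases).
pip install gradio==3.20.0

# Optional sanity check before launching the demo:
python -c "import gradio; print(gradio.__version__)"
```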

transformers, huggingface-hub and AutoTokenizer class

In mplug-owl/serve/model_worker.py, the AutoTokenizer class was giving me problems because it did not recognize mplug-owl. I first tried to fix this by changing the transformers or huggingface_hub versions, thinking mplug-owl was supported by one of them, but in the end I could not find a combination of versions that worked. Instead, I imported MplugOwlTokenizer from mplug_owl.tokenization_mplug_owl. I do not know whether this is a substantial change to the code that would affect the model, but it stopped giving me problems.
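For reference, the substitution I made looks roughly like this (a sketch of my workaround, not a tested patch; `model_path` stands for whatever path the script already passes to the tokenizer):

```python
# In mplug-owl/serve/model_worker.py:
# from transformers import AutoTokenizer                        # could not resolve mplug-owl
from mplug_owl.tokenization_mplug_owl import MplugOwlTokenizer  # direct import instead

# tokenizer = AutoTokenizer.from_pretrained(model_path)         # original call
tokenizer = MplugOwlTokenizer.from_pretrained(model_path)       # same path as before
```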

peft imports

Also in mplug-owl/serve/model_worker.py, I had to manually import LoraConfig from peft.tuners.lora.config and get_peft_model from peft.mapping, as they are used in the edited code we have to paste from this repo.
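Concretely, the imports I added were the following (module paths as in the peft version I had installed; depending on your version, the top-level peft namespace may re-export both, which would be the simpler fix):

```python
# Added near the top of mplug-owl/serve/model_worker.py:
from peft.tuners.lora.config import LoraConfig
from peft.mapping import get_peft_model

# Equivalent, if your peft version exposes them at top level:
# from peft import LoraConfig, get_peft_model
```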

checkpoint.pth missing/unexpected keys

After dealing with the other problems, again in mplug-owl/serve/model_worker.py, I hit the error below when loading the state_dict from the lora_path (the checkpoint.pth referred to in the repo). The keys do not seem to match. Is it the correct checkpoint? Am I doing something wrong in the code?

Error below (I had to remove some of the keys in the "Missing keys" part of the error so it would fit in the issue):
RuntimeError: Error(s) in loading state_dict for PeftModel: Missing key(s) in state_dict: "base_model.model.query_tokens", "base_model.model.vision_model.embeddings.cls_token", "base_model.model.vision_model.embeddings.position_embedding", "base_model.model.vision_model.embeddings.patch_embed.weight", "base_model.model.vision_model.embeddings.pre_layernorm.weight", "base_model.model.vision_model.embeddings.pre_layernorm.bias", "base_model.model.vision_model.encoder.layers.0.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.0.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.0.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.0.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.0.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.0.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.0.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.0.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.0.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.0.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.0.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.0.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.1.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.1.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.1.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.1.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.1.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.1.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.1.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.1.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.1.mlp.fc2.weight", 
"base_model.model.vision_model.encoder.layers.1.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.1.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.1.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.2.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.2.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.2.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.2.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.2.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.2.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.2.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.2.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.2.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.2.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.2.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.2.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.3.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.3.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.3.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.3.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.3.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.3.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.3.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.3.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.3.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.3.post_attention_layernorm.bias", 
"base_model.model.vision_model.encoder.layers.4.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.4.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.4.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.4.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.4.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.4.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.4.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.4.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.4.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.4.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.4.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.4.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.5.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.5.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.5.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.5.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.5.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.5.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.5.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.5.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.5.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.5.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.5.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.5.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.6.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.6.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.6.self_attn.dense.weight", 
"base_model.model.vision_model.encoder.layers.6.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.6.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.6.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.6.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.6.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.6.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.6.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.6.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.6.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.7.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.7.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.7.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.7.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.7.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.7.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.7.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.7.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.7.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.7.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.7.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.7.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.8.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.8.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.8.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.8.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.8.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.8.input_layernorm.bias", 
"base_model.model.vision_model.encoder.layers.8.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.8.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.8.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.8.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.8.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.8.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.9.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.9.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.9.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.9.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.9.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.9.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.9.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.9.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.9.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.9.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.9.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.9.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.10.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.10.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.10.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.10.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.10.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.10.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.10.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.10.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.10.mlp.fc2.weight", 
"base_model.model.vision_model.encoder.layers.10.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.10.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.10.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.11.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.11.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.11.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.11.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.11.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.11.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.11.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.11.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.11.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.11.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.11.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.11.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.12.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.12.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.12.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.12.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.12.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.12.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.12.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.12.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.12.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.12.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.12.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.12.post_attention_layernorm.bias", 
"base_model.model.vision_model.encoder.layers.13.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.13.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.13.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.13.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.13.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.13.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.13.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.13.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.13.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.13.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.13.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.13.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.14.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.14.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.14.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.14.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.14.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.14.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.14.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.14.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.14.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.14.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.14.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.14.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.15.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.15.self_attn.query_key_value.bias", 
"base_model.model.vision_model.encoder.layers.15.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.15.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.15.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.15.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.15.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.15.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.15.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.15.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.15.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.15.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.16.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.16.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.16.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.16.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.16.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.16.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.16.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.16.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.16.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.16.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.16.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.16.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.17.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.17.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.17.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.17.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.17.input_layernorm.weight", 
"base_model.model.vision_model.encoder.layers.17.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.17.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.17.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.17.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.17.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.17.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.17.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.18.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.18.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.18.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.18.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.18.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.18.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.18.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.18.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.18.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.18.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.19.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.19.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.19.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.19.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.19.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.19.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.19.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.19.mlp.fc1.bias", 
"base_model.model.vision_model.encoder.layers.19.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.19.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.19.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.19.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.20.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.20.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.20.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.20.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.20.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.20.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.20.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.20.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.20.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.20.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.20.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.20.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.21.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.21.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.21.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.21.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.21.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.21.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.21.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.21.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.21.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.21.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.21.post_attention_layernorm.weight", 
"base_model.model.vision_model.encoder.layers.21.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.22.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.22.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.22.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.22.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.22.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.22.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.22.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.22.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.22.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.22.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.22.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.22.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.23.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.23.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.23.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.23.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.23.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.23.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.23.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.23.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.23.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.23.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.23.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.23.post_attention_layernorm.bias", "base_model.model.vision_model.post_layernorm.weight", "base_model.model.vision_model.post_layernorm.bias", "base_model.model.abstractor.vit_eos", 
"base_model.model.abstractor.encoder.layers.0.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.query.bias", 
"base_model.model.abstractor.encoder.layers.1.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.key.bias", 
"base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.value.bias", 
"base_model.model.abstractor.encoder.layers.3.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.out_proj.bias", 
"base_model.model.abstractor.encoder.layers.4.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.norm2.bias", 
"base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.normk.bias", "base_model.model.abstractor.visual_fc.weight", "base_model.model.abstractor.visual_fc.bias", "base_model.model.language_model.model.embed_tokens.weight", "base_model.model.language_model.model.layers.0.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.0.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.0.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.0.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.0.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.0.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.0.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.0.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.0.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.0.mlp.gate_proj.weight", 
"base_model.model.language_model.model.layers.0.mlp.down_proj.weight", "base_model.model.language_model.model.layers.0.mlp.up_proj.weight", "base_model.model.language_model.model.layers.0.input_layernorm.weight", "base_model.model.language_model.model.layers.0.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.1.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.1.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.1.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.1.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.1.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.1.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.1.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.1.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.1.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.1.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.1.mlp.down_proj.weight", "base_model.model.language_model.model.layers.1.mlp.up_proj.weight", "base_model.model.language_model.model.layers.1.input_layernorm.weight", "base_model.model.language_model.model.layers.1.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.2.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.2.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.2.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.2.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.2.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.2.self_attn.v_proj.lora_A.default.weight", 
"base_model.model.language_model.model.layers.2.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.2.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.2.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.2.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.2.mlp.down_proj.weight", "base_model.model.language_model.model.layers.2.mlp.up_proj.weight", "base_model.model.language_model.model.layers.2.input_layernorm.weight", "base_model.model.language_model.model.layers.2.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.3.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.3.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.3.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.3.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.3.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.3.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.3.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.3.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.3.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.3.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.3.mlp.down_proj.weight", "base_model.model.language_model.model.layers.3.mlp.up_proj.weight", "base_model.model.language_model.model.layers.3.input_layernorm.weight", "base_model.model.language_model.model.layers.3.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.4.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.4.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.4.self_attn.q_proj.lora_B.default.weight", 
"base_model.model.language_model.model.layers.4.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.4.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.4.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.4.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.4.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.4.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.4.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.4.mlp.down_proj.weight", "base_model.model.language_model.model.layers.4.mlp.up_proj.weight", "base_model.model.language_model.model.layers.4.input_layernorm.weight", "base_model.model.language_model.model.layers.4.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.5.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.5.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.5.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.5.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.5.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.5.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.5.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.5.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.5.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.5.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.5.mlp.down_proj.weight", "base_model.model.language_model.model.layers.5.mlp.up_proj.weight", "base_model.model.language_model.model.layers.5.input_layernorm.weight", "base_model.model.language_model.model.layers.5.post_attention_layernorm.weight", 
"base_model.model.language_model.model.layers.6.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.6.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.6.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.6.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.6.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.6.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.6.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.6.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.6.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.6.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.6.mlp.down_proj.weight", "base_model.model.language_model.model.layers.6.mlp.up_proj.weight", "base_model.model.language_model.model.layers.6.input_layernorm.weight", "base_model.model.language_model.model.layers.6.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.7.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.7.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.7.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.7.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.7.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.7.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.7.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.7.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.7.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.7.mlp.gate_proj.weight", 
"base_model.model.language_model.model.layers.7.mlp.down_proj.weight", "base_model.model.language_model.model.layers.7.mlp.up_proj.weight", "base_model.model.language_model.model.layers.7.input_layernorm.weight", "base_model.model.language_model.model.layers.7.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.8.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.8.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.8.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.8.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.8.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.8.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.8.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.8.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.8.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.8.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.8.mlp.down_proj.weight", "base_model.model.language_model.model.layers.8.mlp.up_proj.weight", "base_model.model.language_model.model.layers.8.input_layernorm.weight", "base_model.model.language_model.model.layers.8.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.9.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.9.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.9.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.9.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.9.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.9.self_attn.v_proj.lora_A.default.weight", 
"base_model.model.language_model.model.layers.9.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.9.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.9.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.9.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.9.mlp.down_proj.weight", "base_model.model.language_model.model.layers.9.mlp.up_proj.weight", "base_model.model.language_model.model.layers.9.input_layernorm.weight", "base_model.model.language_model.model.layers.9.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.10.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.10.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.10.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.10.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.10.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.10.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.10.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.10.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.10.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.10.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.10.mlp.down_proj.weight", "base_model.model.language_model.model.layers.10.mlp.up_proj.weight", "base_model.model.language_model.model.layers.10.input_layernorm.weight", "base_model.model.language_model.model.layers.10.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.11.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.11.self_attn.q_proj.lora_A.default.weight", 
"base_model.model.language_model.model.layers.11.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.11.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.11.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.11.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.11.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.11.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.11.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.11.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.11.mlp.down_proj.weight", "base_model.model.language_model.model.layers.11.mlp.up_proj.weight", "base_model.model.language_model.model.layers.11.input_layernorm.weight", "base_model.model.language_model.model.layers.11.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.12.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.12.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.12.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.12.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.12.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.12.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.12.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.12.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.12.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.12.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.12.mlp.down_proj.weight", "base_model.model.language_model.model.layers.12.mlp.up_proj.weight", 
"base_model.model.language_model.model.layers.12.input_layernorm.weight", "base_model.model.language_model.model.layers.12.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.13.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.13.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.13.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.13.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.13.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.13.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.13.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.13.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.13.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.13.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.13.mlp.down_proj.weight", "base_model.model.language_model.model.layers.13.mlp.up_proj.weight", "base_model.model.language_model.model.layers.13.input_layernorm.weight", "base_model.model.language_model.model.layers.13.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.14.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.14.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.14.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.14.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.14.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.14.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.14.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.14.self_attn.o_proj.weight", 
"base_model.model.language_model.model.layers.14.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.14.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.14.mlp.down_proj.weight", "base_model.model.language_model.model.layers.14.mlp.up_proj.weight", "base_model.model.language_model.model.layers.14.input_layernorm.weight", "base_model.model.language_model.model.layers.14.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.15.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.15.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.15.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.15.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.15.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.15.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.15.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.15.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.15.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.15.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.15.mlp.down_proj.weight", "base_model.model.language_model.model.layers.15.mlp.up_proj.weight", "base_model.model.language_model.model.layers.15.input_layernorm.weight", "base_model.model.language_model.model.layers.15.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.16.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.16.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.16.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.16.self_attn.k_proj.weight", 
"base_model.model.language_model.model.layers.16.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.16.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.16.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.16.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.16.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.16.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.16.mlp.down_proj.weight", "base_model.model.language_model.model.layers.16.mlp.up_proj.weight", "base_model.model.language_model.model.layers.16.input_layernorm.weight", "base_model.model.language_model.model.layers.16.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.17.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.17.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.17.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.17.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.17.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.17.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.17.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.17.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.17.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.17.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.17.mlp.down_proj.weight", "base_model.model.language_model.model.layers.17.mlp.up_proj.weight", "base_model.model.language_model.model.layers.17.input_layernorm.weight", "base_model.model.language_model.model.layers.17.post_attention_layernorm.weight", 
"base_model.model.language_model.model.layers.18.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.18.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.18.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.18.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.18.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.18.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.18.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.18.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.18.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.18.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.18.mlp.down_proj.weight", "base_model.model.language_model.model.layers.18.mlp.up_proj.weight", "base_model.model.language_model.model.layers.18.input_layernorm.weight", "base_model.model.language_model.model.layers.18.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.19.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.19.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.19.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.19.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.19.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.19.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.19.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.19.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.19.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.19.mlp.gate_proj.weight", 
"base_model.model.language_model.model.layers.19.mlp.down_proj.weight", "base_model.model.language_model.model.layers.19.mlp.up_proj.weight", "base_model.model.language_model.model.layers.19.input_layernorm.weight", "base_model.model.language_model.model.layers.19.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.20.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.20.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.20.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.20.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.20.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.20.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.20.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.20.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.20.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.20.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.20.mlp.down_proj.weight", "base_model.model.language_model.model.layers.20.mlp.up_proj.weight", "base_model.model.language_model.model.layers.20.input_layernorm.weight", "base_model.model.language_model.model.layers.20.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.21.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.21.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.21.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.21.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.21.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.21.self_attn.v_proj.lora_A.default.weight", 
"base_model.model.language_model.model.layers.21.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.21.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.21.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.21.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.21.mlp.down_proj.weight", "base_model.model.language_model.model.layers.21.mlp.up_proj.weight", "base_model.model.language_model.model.layers.21.input_layernorm.weight", "base_model.model.language_model.model.layers.21.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.22.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.22.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.22.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.22.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.22.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.22.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.22.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.22.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.22.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.22.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.22.mlp.down_proj.weight", "base_model.model.language_model.model.layers.22.mlp.up_proj.weight", "base_model.model.language_model.model.layers.22.input_layernorm.weight", "base_model.model.language_model.model.layers.22.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.23.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.23.self_attn.q_proj.lora_A.default.weight", 
"base_model.model.language_model.model.layers.23.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.23.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.23.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.23.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.23.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.23.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.23.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.23.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.23.mlp.down_proj.weight", "base_model.model.language_model.model.layers.23.mlp.up_proj.weight", "base_model.model.language_model.model.layers.23.input_layernorm.weight", "base_model.model.language_model.model.layers.23.post_attention_layernorm.weight". Unexpected key(s) in state_dict: "Qformer.bert.embeddings.position_ids", "llama_model.model.layers.0.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.1.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.2.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.3.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.4.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.5.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.6.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.7.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.8.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.9.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.10.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.11.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.12.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.13.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.14.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.15.self_attn.rotary_emb.inv_freq", 
"llama_model.model.layers.16.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.17.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.18.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.19.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.20.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.21.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.22.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.23.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.24.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.25.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.26.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.27.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.28.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.29.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.30.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.31.self_attn.rotary_emb.inv_freq", "llama_proj.weight", "llama_proj.bias".
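In case it helps with debugging: the mismatch above is between `base_model.model.*` keys the PeftModel expects and `llama_model.*` / `Qformer.*` keys actually stored in the checkpoint, which suggests the checkpoint was saved from a different architecture. A small helper (my own sketch, not part of the repo) for spotting this kind of prefix mismatch:

```python
from collections import Counter

def prefix_counts(state_dict_keys):
    """Count state_dict keys by their top-level prefix, to compare what a
    checkpoint contains (e.g. "llama_model.*") against what the model
    expects (e.g. "base_model.*")."""
    return Counter(key.split(".")[0] for key in state_dict_keys)

# Usage against a real checkpoint (path is illustrative):
#   import torch
#   ckpt = torch.load("checkpoint.pth", map_location="cpu")
#   print(prefix_counts(ckpt.keys()))
#   print(prefix_counts(model.state_dict().keys()))
```

If the two prefix sets barely overlap, the checkpoint and the instantiated model simply don't belong together, and no amount of `strict=False` will fix it.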

Thanks in advance!

'load_state_dict' Missing Key(s)

Hi, thank you for your hard work.
Unfortunately, I encountered an issue when running inference with the fine-tuned model.
If I’ve made any mistakes, please let me know.
Thank you!

Code

import os
import sys
import torch
from peft import LoraConfig, get_peft_model

from mPLUG_Owl.mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration
from mPLUG_Owl.mplug_owl.tokenization_mplug_owl import MplugOwlTokenizer
from mPLUG_Owl.mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor
from transformers import AutoTokenizer

base_model = 'MAGAer13/mplug-owl-llama-7b'
load_in_8bit=False
bf16 = True
image_processor = MplugOwlImageProcessor.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)
processor = MplugOwlProcessor(image_processor, tokenizer)
model = MplugOwlForConditionalGeneration.from_pretrained(
    base_model,
    load_in_8bit=load_in_8bit,
    torch_dtype=torch.bfloat16 if bf16 else torch.half,
    device_map="auto",
)
tokenizer = processor.tokenizer

peft_config = LoraConfig(
    target_modules=r'.*language_model.*\.(q_proj|v_proj)',
    inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.05,
)
model = get_peft_model(model, peft_config)
lora_path = 'checkpoint.pth'
prefix_state_dict = torch.load(lora_path, map_location='cpu')
model.load_state_dict(prefix_state_dict)

Error

RuntimeError: Error(s) in loading state_dict for PeftModel:
Missing key(s) in state_dict: "base_model.model.query_tokens", ...
Unexpected key(s) in state_dict: "model", "optimizer", "config", "scaler", "epoch". 
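Judging from the unexpected keys (`"model"`, `"optimizer"`, `"epoch"`, ...), `checkpoint.pth` appears to hold a full training state rather than a bare state dict, so it would need unwrapping before `load_state_dict`. A minimal sketch, assuming the weights sit under the `"model"` key as the error suggests (the `rotary_emb.inv_freq` filtering is likewise an assumption, motivated by the unexpected keys in the earlier traceback):

```python
# Sketch: unwrap a training checkpoint before load_state_dict.
# Assumes the weights live under the "model" key, as the unexpected
# keys in the error ("model", "optimizer", "epoch", ...) suggest.

def unwrap_checkpoint(ckpt):
    """Extract model weights from a full training-state checkpoint."""
    state = ckpt.get("model", ckpt)
    # Drop rotary-embedding buffers that can show up as unexpected keys
    # when the checkpoint was written by an older transformers version.
    return {k: v for k, v in state.items()
            if not k.endswith("rotary_emb.inv_freq")}

# Simulated checkpoint with the structure reported in the traceback:
ckpt = {
    "model": {
        "llama_proj.weight": [0.1, 0.2],
        "llama_model.model.layers.0.self_attn.rotary_emb.inv_freq": [1.0],
    },
    "optimizer": {},
    "epoch": 3,
}
weights = unwrap_checkpoint(ckpt)
print(sorted(weights))  # ['llama_proj.weight']
```

With the real file this would look like `model.load_state_dict(unwrap_checkpoint(torch.load(lora_path, map_location='cpu')), strict=False)`; `strict=False` is needed because a LoRA checkpoint covers only a subset of the PEFT model's parameters. Whether the remaining missing keys are then safe to ignore depends on how the checkpoint was produced.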

Inquiry Regarding Chart-Text Alignment Data and Instructions

Hello,

I hope this message finds you well. I've been actively exploring the MMC project repository and must say it's an invaluable resource.

I have a few queries regarding the Chart-Text Alignment Data:

  • Regarding the non-arXiv JSON files, is the "text" section utilized for the textual alignment?
  • Regarding the arXiv JSON files, is the "caption" section utilized for the textual alignment?
  • Does the "reference_sentence_in_article" field in the arXiv JSON files play a role in the alignment process?

Your guidance on how the instructions for generating the Chart-Text Alignment Data were constructed would also be immensely helpful. If possible, could you share specifics or example prompts?
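For concreteness, the assumption behind the questions above could be sketched as follows. The field names (`text`, `caption`, `reference_sentence_in_article`) are taken from the questions themselves; whether they are actually the alignment targets is exactly what is being asked:

```python
def alignment_text(entry, source):
    """Assumed mapping of JSON fields to alignment text (unconfirmed)."""
    if source == "arxiv":
        text = entry.get("caption", "")
        # Possibly augmented by in-article sentences referencing the figure:
        refs = entry.get("reference_sentence_in_article", [])
        return " ".join([text, *refs]).strip()
    return entry.get("text", "")

print(alignment_text({"caption": "Loss curve.",
                      "reference_sentence_in_article": ["Fig. 1 shows..."]},
                     "arxiv"))
# Loss curve. Fig. 1 shows...
```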

Thank you sincerely for your time and consideration.

Warm regards,

How many epochs and what is the best loss of your model in the Chart-Text Alignment stage?

Hi,
I am trying to replicate the Chart-Text Alignment stage described in your paper. This is my first time pre-training such a large model on a large-scale dataset, so I am not sure when to stop the pre-training loop. Could you kindly share the number of epochs you trained for and the final loss you reached in the Chart-Text Alignment stage?

Thank you for your attention to this matter.

Chart-Text Alignment Data

Great work! Could you provide the Chart-Text Alignment Data, or explain how to separate it from the instruction-tuning data?

Inquiry Regarding MMC Dataset

Thank you very much for providing the data; it has been tremendously helpful to me! However, a few points about the dataset available through the current download links are confusing, and I would greatly appreciate clarification on the following:

  1. In Chart-Text Alignment Data, MMC-Instruction, the Scientific (Arxiv) Chart-Caption section contains 250k samples, whereas the paper mentions 210k. Is this difference due to an addition made after the paper's publication?
  2. The Filtered Existing Datasets part contains 160k samples, with both images and summaries sourced from Unichart. However, the paper mentions 190k drawn from five datasets; are they all included in Unichart?
  3. In the non-arXiv part of the Chart Instruction-Tuning Data, MMC-Instruction, Part1 offers 2M questions, significantly more than the 200k mentioned in the paper. Are they extracted from Unichart's questions?
  4. Do Part2 and Part3 respectively contain the GPT-4 results of Chart Information Extraction and Chart Reasoning QAs, as mentioned in the paper?

Request for summarization data for existing dataset

Thanks for open-sourcing the MMC training data, which is quite helpful for developing document-oriented MLLMs. I notice that the download link for the existing datasets has not been updated yet, and I believe it would be valuable to include them in our training. Do you have a plan to upload these images soon?

Incomplete Arxiv Image Dataset

Hello,

I have downloaded the 202k Arxiv images, but the set seems incomplete. The JSON data for Scientific (Arxiv) Chart-Caption comprises 250,000 entries in total, yet I found that 79,040 of them reference images that are missing from the dataset provided in this Google Drive link.
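For reference, a count like the one above can be reproduced with a short script. This is a minimal sketch assuming each JSON entry names its image file under an `image` field (the actual field name in the released JSON may differ):

```python
import os
import tempfile

def missing_images(entries, image_dir):
    """Return the image names referenced by entries but absent on disk."""
    return [e["image"] for e in entries
            if not os.path.exists(os.path.join(image_dir, e["image"]))]

# Tiny self-contained demo: one present image, one missing.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "fig1.png"), "w").close()
    entries = [{"image": "fig1.png"}, {"image": "fig2.png"}]
    absent = missing_images(entries, d)
    print(absent)  # ['fig2.png']
```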

Could you kindly provide the missing images or offer insight into why they are not included in the provided dataset?

Thank you for your attention to this matter.

Best regards,
