fuxiaoliu / mmc
[NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning
Hello,
I have been exploring the MMC project repository and find it immensely valuable. I am particularly interested in accessing the Chart-Text Alignment Data to further my research. However, I noticed that the download link provided in the README only includes the Chart Instruction-Tuning Data.
Would it be possible for you to kindly provide the download link for the Chart-Text Alignment Data as well? Access to this dataset would greatly contribute to my work and research efforts.
Thank you very much for your time and consideration.
Best regards,
Hello there! I've been trying to set up the MMCA gradio demo and I have run into some bugs. I will list them here, and hopefully get help with the last one, which I have not been able to solve by myself.
Bugs I've found:
`gradio` needs to be pinned to version 3.20.0 for the demo to work, because newer releases of `gradio.components` no longer provide the classes `Changeable`, `IOComponent`, `JSONSerializable`, and possibly others that `mplug-owl/serve/gradio_patch.py` pulls in through its wildcard import.
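A quick way to check whether your installed gradio still exposes what the patch expects (a minimal sketch; the class names are just the ones listed above):

```python
# Minimal check of the point above: gradio 3.20.0 still exposes the classes that
# mplug-owl/serve/gradio_patch.py imports via "from gradio.components import *".
import gradio
import gradio.components as components

print("gradio version:", gradio.__version__)  # 3.20.0 worked for me

for name in ("Changeable", "IOComponent", "JSONSerializable"):
    print(name, "available:", hasattr(components, name))
```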
In `mplug-owl/serve/model_worker.py`, the `AutoTokenizer` class was giving me problems because it did not recognize mplug-owl. I first tried changing the `transformers` or `huggingface_hub` versions, thinking mplug-owl was supported by one of them, but in the end I could not find a combination of versions that worked. Instead I imported `MplugOwlTokenizer` from `mplug_owl.tokenization_mplug_owl`. I do not know whether this is a substantial change that could affect the model, but it stopped giving me problems.
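In code, the substitution looks roughly like this (a sketch; the checkpoint directory is a placeholder for whatever `model_worker.py` is pointed at):

```python
# Sketch of the tokenizer swap described above. "checkpoint_dir" is a placeholder
# for the local mplug-owl checkpoint directory that model_worker.py loads.
# Original (failing) code:
#   from transformers import AutoTokenizer
#   tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
# Replacement that worked for me:
from mplug_owl.tokenization_mplug_owl import MplugOwlTokenizer

checkpoint_dir = "path/to/mplug-owl-checkpoint"  # placeholder path
tokenizer = MplugOwlTokenizer.from_pretrained(checkpoint_dir)
```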
Also in `mplug-owl/serve/model_worker.py`, I had to manually import `LoraConfig` from `peft.tuners.lora.config` and `get_peft_model` from `peft.mapping`, as they are used in the edited code we have to paste in from this repo.
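For reference, these are the explicit imports I added (the submodule paths match the peft version I had installed; this is just what worked for me):

```python
# Explicit imports added to model_worker.py, as described above.
# The submodule paths match the peft version I had installed; if your peft still
# exports these names at the top level, "from peft import LoraConfig, get_peft_model" works too.
from peft.tuners.lora.config import LoraConfig
from peft.mapping import get_peft_model
```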
After dealing with the other problems, again in `mplug-owl/serve/model_worker.py`, I got the error below when loading the `state_dict` from the `lora_path`, which points to the checkpoint.pth referred to in the repo. The keys do not seem to match. Is it the correct checkpoint? Am I doing something wrong with the code?
Error below (I had to remove some of the keys from the "Missing keys" part of the error so it would fit in the issue):
RuntimeError: Error(s) in loading state_dict for PeftModel: Missing key(s) in state_dict: "base_model.model.query_tokens", "base_model.model.vision_model.embeddings.cls_token", "base_model.model.vision_model.embeddings.position_embedding", "base_model.model.vision_model.embeddings.patch_embed.weight", "base_model.model.vision_model.embeddings.pre_layernorm.weight", "base_model.model.vision_model.embeddings.pre_layernorm.bias", "base_model.model.vision_model.encoder.layers.0.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.0.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.0.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.0.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.0.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.0.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.0.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.0.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.0.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.0.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.0.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.0.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.1.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.1.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.1.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.1.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.1.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.1.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.1.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.1.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.1.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.1.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.1.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.1.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.2.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.2.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.2.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.2.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.2.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.2.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.2.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.2.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.2.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.2.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.2.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.2.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.3.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.3.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.3.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.3.input_layernorm.bias", 
"base_model.model.vision_model.encoder.layers.3.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.3.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.3.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.3.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.3.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.3.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.4.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.4.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.4.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.4.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.4.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.4.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.4.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.4.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.4.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.4.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.4.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.4.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.5.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.5.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.5.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.5.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.5.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.5.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.5.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.5.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.5.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.5.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.5.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.5.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.6.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.6.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.6.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.6.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.6.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.6.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.6.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.6.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.6.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.6.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.6.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.6.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.7.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.7.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.7.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.7.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.7.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.7.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.7.mlp.fc1.weight", 
"base_model.model.vision_model.encoder.layers.7.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.7.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.7.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.7.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.7.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.8.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.8.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.8.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.8.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.8.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.8.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.8.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.8.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.8.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.8.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.8.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.8.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.9.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.9.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.9.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.9.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.9.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.9.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.9.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.9.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.9.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.9.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.9.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.9.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.10.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.10.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.10.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.10.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.10.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.10.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.10.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.10.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.10.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.10.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.10.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.10.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.11.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.11.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.11.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.11.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.11.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.11.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.11.mlp.fc1.weight", 
"base_model.model.vision_model.encoder.layers.11.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.11.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.11.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.11.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.11.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.12.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.12.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.12.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.12.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.12.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.12.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.12.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.12.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.12.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.12.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.12.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.12.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.13.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.13.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.13.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.13.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.13.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.13.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.13.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.13.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.13.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.13.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.13.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.13.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.14.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.14.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.14.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.14.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.14.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.14.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.14.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.14.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.14.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.14.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.14.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.14.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.15.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.15.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.15.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.15.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.15.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.15.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.15.mlp.fc1.weight", 
"base_model.model.vision_model.encoder.layers.15.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.15.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.15.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.15.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.15.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.16.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.16.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.16.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.16.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.16.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.16.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.16.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.16.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.16.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.16.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.16.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.16.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.17.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.17.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.17.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.17.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.17.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.17.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.17.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.17.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.17.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.17.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.17.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.17.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.18.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.18.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.18.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.18.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.18.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.18.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.18.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.18.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.18.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.18.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.19.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.19.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.19.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.19.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.19.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.19.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.19.mlp.fc1.weight", 
"base_model.model.vision_model.encoder.layers.19.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.19.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.19.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.19.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.19.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.20.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.20.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.20.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.20.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.20.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.20.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.20.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.20.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.20.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.20.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.20.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.20.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.21.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.21.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.21.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.21.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.21.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.21.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.21.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.21.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.21.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.21.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.21.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.21.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.22.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.22.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.22.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.22.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.22.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.22.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.22.mlp.fc1.weight", "base_model.model.vision_model.encoder.layers.22.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.22.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.22.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.22.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.22.post_attention_layernorm.bias", "base_model.model.vision_model.encoder.layers.23.self_attn.query_key_value.weight", "base_model.model.vision_model.encoder.layers.23.self_attn.query_key_value.bias", "base_model.model.vision_model.encoder.layers.23.self_attn.dense.weight", "base_model.model.vision_model.encoder.layers.23.self_attn.dense.bias", "base_model.model.vision_model.encoder.layers.23.input_layernorm.weight", "base_model.model.vision_model.encoder.layers.23.input_layernorm.bias", "base_model.model.vision_model.encoder.layers.23.mlp.fc1.weight", 
"base_model.model.vision_model.encoder.layers.23.mlp.fc1.bias", "base_model.model.vision_model.encoder.layers.23.mlp.fc2.weight", "base_model.model.vision_model.encoder.layers.23.mlp.fc2.bias", "base_model.model.vision_model.encoder.layers.23.post_attention_layernorm.weight", "base_model.model.vision_model.encoder.layers.23.post_attention_layernorm.bias", "base_model.model.vision_model.post_layernorm.weight", "base_model.model.vision_model.post_layernorm.bias", "base_model.model.abstractor.vit_eos", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.0.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.0.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w2.bias", 
"base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.1.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.1.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.2.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.2.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w2.weight", 
"base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.3.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.3.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w1.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.4.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.4.crossattention.normk.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.query.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.query.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.key.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.key.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.value.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.attention.value.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.out_proj.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.out_proj.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.norm2.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.norm2.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w1.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w1.bias", 
"base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w2.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w2.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w3.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w3.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.norm1.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.norm1.bias", "base_model.model.abstractor.encoder.layers.5.crossattention.normk.weight", "base_model.model.abstractor.encoder.layers.5.crossattention.normk.bias", "base_model.model.abstractor.visual_fc.weight", "base_model.model.abstractor.visual_fc.bias", "base_model.model.language_model.model.embed_tokens.weight", "base_model.model.language_model.model.layers.0.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.0.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.0.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.0.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.0.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.0.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.0.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.0.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.0.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.0.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.0.mlp.down_proj.weight", "base_model.model.language_model.model.layers.0.mlp.up_proj.weight", "base_model.model.language_model.model.layers.0.input_layernorm.weight", "base_model.model.language_model.model.layers.0.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.1.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.1.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.1.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.1.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.1.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.1.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.1.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.1.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.1.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.1.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.1.mlp.down_proj.weight", "base_model.model.language_model.model.layers.1.mlp.up_proj.weight", "base_model.model.language_model.model.layers.1.input_layernorm.weight", "base_model.model.language_model.model.layers.1.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.2.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.2.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.2.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.2.self_attn.k_proj.weight", 
"base_model.model.language_model.model.layers.2.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.2.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.2.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.2.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.2.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.2.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.2.mlp.down_proj.weight", "base_model.model.language_model.model.layers.2.mlp.up_proj.weight", "base_model.model.language_model.model.layers.2.input_layernorm.weight", "base_model.model.language_model.model.layers.2.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.3.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.3.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.3.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.3.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.3.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.3.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.3.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.3.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.3.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.3.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.3.mlp.down_proj.weight", "base_model.model.language_model.model.layers.3.mlp.up_proj.weight", "base_model.model.language_model.model.layers.3.input_layernorm.weight", "base_model.model.language_model.model.layers.3.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.4.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.4.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.4.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.4.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.4.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.4.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.4.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.4.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.4.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.4.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.4.mlp.down_proj.weight", "base_model.model.language_model.model.layers.4.mlp.up_proj.weight", "base_model.model.language_model.model.layers.4.input_layernorm.weight", "base_model.model.language_model.model.layers.4.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.5.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.5.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.5.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.5.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.5.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.5.self_attn.v_proj.lora_A.default.weight", 
"base_model.model.language_model.model.layers.5.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.5.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.5.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.5.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.5.mlp.down_proj.weight", "base_model.model.language_model.model.layers.5.mlp.up_proj.weight", "base_model.model.language_model.model.layers.5.input_layernorm.weight", "base_model.model.language_model.model.layers.5.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.6.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.6.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.6.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.6.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.6.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.6.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.6.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.6.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.6.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.6.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.6.mlp.down_proj.weight", "base_model.model.language_model.model.layers.6.mlp.up_proj.weight", "base_model.model.language_model.model.layers.6.input_layernorm.weight", "base_model.model.language_model.model.layers.6.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.7.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.7.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.7.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.7.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.7.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.7.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.7.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.7.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.7.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.7.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.7.mlp.down_proj.weight", "base_model.model.language_model.model.layers.7.mlp.up_proj.weight", "base_model.model.language_model.model.layers.7.input_layernorm.weight", "base_model.model.language_model.model.layers.7.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.8.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.8.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.8.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.8.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.8.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.8.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.8.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.8.self_attn.o_proj.weight", 
"base_model.model.language_model.model.layers.8.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.8.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.8.mlp.down_proj.weight", "base_model.model.language_model.model.layers.8.mlp.up_proj.weight", "base_model.model.language_model.model.layers.8.input_layernorm.weight", "base_model.model.language_model.model.layers.8.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.9.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.9.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.9.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.9.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.9.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.9.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.9.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.9.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.9.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.9.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.9.mlp.down_proj.weight", "base_model.model.language_model.model.layers.9.mlp.up_proj.weight", "base_model.model.language_model.model.layers.9.input_layernorm.weight", "base_model.model.language_model.model.layers.9.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.10.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.10.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.10.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.10.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.10.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.10.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.10.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.10.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.10.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.10.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.10.mlp.down_proj.weight", "base_model.model.language_model.model.layers.10.mlp.up_proj.weight", "base_model.model.language_model.model.layers.10.input_layernorm.weight", "base_model.model.language_model.model.layers.10.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.11.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.11.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.11.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.11.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.11.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.11.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.11.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.11.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.11.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.11.mlp.gate_proj.weight", 
"base_model.model.language_model.model.layers.11.mlp.down_proj.weight", "base_model.model.language_model.model.layers.11.mlp.up_proj.weight", "base_model.model.language_model.model.layers.11.input_layernorm.weight", "base_model.model.language_model.model.layers.11.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.12.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.12.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.12.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.12.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.12.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.12.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.12.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.12.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.12.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.12.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.12.mlp.down_proj.weight", "base_model.model.language_model.model.layers.12.mlp.up_proj.weight", "base_model.model.language_model.model.layers.12.input_layernorm.weight", "base_model.model.language_model.model.layers.12.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.13.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.13.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.13.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.13.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.13.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.13.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.13.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.13.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.13.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.13.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.13.mlp.down_proj.weight", "base_model.model.language_model.model.layers.13.mlp.up_proj.weight", "base_model.model.language_model.model.layers.13.input_layernorm.weight", "base_model.model.language_model.model.layers.13.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.14.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.14.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.14.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.14.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.14.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.14.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.14.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.14.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.14.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.14.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.14.mlp.down_proj.weight", "base_model.model.language_model.model.layers.14.mlp.up_proj.weight", 
"base_model.model.language_model.model.layers.14.input_layernorm.weight", "base_model.model.language_model.model.layers.14.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.15.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.15.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.15.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.15.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.15.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.15.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.15.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.15.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.15.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.15.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.15.mlp.down_proj.weight", "base_model.model.language_model.model.layers.15.mlp.up_proj.weight", "base_model.model.language_model.model.layers.15.input_layernorm.weight", "base_model.model.language_model.model.layers.15.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.16.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.16.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.16.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.16.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.16.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.16.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.16.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.16.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.16.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.16.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.16.mlp.down_proj.weight", "base_model.model.language_model.model.layers.16.mlp.up_proj.weight", "base_model.model.language_model.model.layers.16.input_layernorm.weight", "base_model.model.language_model.model.layers.16.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.17.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.17.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.17.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.17.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.17.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.17.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.17.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.17.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.17.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.17.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.17.mlp.down_proj.weight", "base_model.model.language_model.model.layers.17.mlp.up_proj.weight", "base_model.model.language_model.model.layers.17.input_layernorm.weight", 
"base_model.model.language_model.model.layers.17.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.18.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.18.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.18.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.18.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.18.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.18.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.18.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.18.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.18.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.18.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.18.mlp.down_proj.weight", "base_model.model.language_model.model.layers.18.mlp.up_proj.weight", "base_model.model.language_model.model.layers.18.input_layernorm.weight", "base_model.model.language_model.model.layers.18.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.19.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.19.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.19.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.19.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.19.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.19.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.19.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.19.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.19.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.19.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.19.mlp.down_proj.weight", "base_model.model.language_model.model.layers.19.mlp.up_proj.weight", "base_model.model.language_model.model.layers.19.input_layernorm.weight", "base_model.model.language_model.model.layers.19.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.20.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.20.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.20.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.20.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.20.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.20.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.20.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.20.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.20.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.20.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.20.mlp.down_proj.weight", "base_model.model.language_model.model.layers.20.mlp.up_proj.weight", "base_model.model.language_model.model.layers.20.input_layernorm.weight", "base_model.model.language_model.model.layers.20.post_attention_layernorm.weight", 
"base_model.model.language_model.model.layers.21.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.21.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.21.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.21.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.21.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.21.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.21.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.21.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.21.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.21.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.21.mlp.down_proj.weight", "base_model.model.language_model.model.layers.21.mlp.up_proj.weight", "base_model.model.language_model.model.layers.21.input_layernorm.weight", "base_model.model.language_model.model.layers.21.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.22.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.22.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.22.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.22.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.22.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.22.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.22.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.22.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.22.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.22.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.22.mlp.down_proj.weight", "base_model.model.language_model.model.layers.22.mlp.up_proj.weight", "base_model.model.language_model.model.layers.22.input_layernorm.weight", "base_model.model.language_model.model.layers.22.post_attention_layernorm.weight", "base_model.model.language_model.model.layers.23.self_attn.q_proj.base_layer.weight", "base_model.model.language_model.model.layers.23.self_attn.q_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.23.self_attn.q_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.23.self_attn.k_proj.weight", "base_model.model.language_model.model.layers.23.self_attn.v_proj.base_layer.weight", "base_model.model.language_model.model.layers.23.self_attn.v_proj.lora_A.default.weight", "base_model.model.language_model.model.layers.23.self_attn.v_proj.lora_B.default.weight", "base_model.model.language_model.model.layers.23.self_attn.o_proj.weight", "base_model.model.language_model.model.layers.23.self_attn.rotary_emb.inv_freq", "base_model.model.language_model.model.layers.23.mlp.gate_proj.weight", "base_model.model.language_model.model.layers.23.mlp.down_proj.weight", "base_model.model.language_model.model.layers.23.mlp.up_proj.weight", "base_model.model.language_model.model.layers.23.input_layernorm.weight", "base_model.model.language_model.model.layers.23.post_attention_layernorm.weight". 
Unexpected key(s) in state_dict: "Qformer.bert.embeddings.position_ids", "llama_model.model.layers.0.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.1.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.2.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.3.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.4.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.5.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.6.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.7.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.8.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.9.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.10.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.11.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.12.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.13.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.14.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.15.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.16.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.17.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.18.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.19.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.20.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.21.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.22.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.23.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.24.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.25.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.26.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.27.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.28.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.29.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.30.self_attn.rotary_emb.inv_freq", "llama_model.model.layers.31.self_attn.rotary_emb.inv_freq", "llama_proj.weight", "llama_proj.bias".
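For what it's worth, the unexpected keys (Qformer.*, llama_model.*, llama_proj.*) do not look like mPLUG-Owl parameter names, so the checkpoint may have been saved for a different architecture. Below is a small diagnostic sketch (my own, not from the repo) that compares the key namespaces of the checkpoint against the model built in mplug-owl/serve/model_worker.py; checkpoint.pth and model are assumed to be the file and the PeftModel from the setup described above.
import torch
# Load the raw checkpoint; some files wrap the weights under a 'model' key.
ckpt = torch.load('checkpoint.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt) if isinstance(ckpt, dict) else ckpt
# Compare top-level key prefixes of the checkpoint and the instantiated model.
ckpt_prefixes = {k.split('.')[0] for k in state_dict}
model_prefixes = {k.split('.')[0] for k in model.state_dict()}
print('checkpoint prefixes:', sorted(ckpt_prefixes))
print('model prefixes:     ', sorted(model_prefixes))
# If the two sets do not overlap (e.g. 'llama_model'/'Qformer' vs 'base_model'),
# the file targets a different model and strict loading can never succeed.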
Thanks in advance!
Hi, thank you for your hard work.
Unfortunately, I encountered an issue when running the code for the fine-tuned model inference.
If I’ve made any mistakes, please let me know.
Thank you!
Code
import os
import sys
import torch
from peft import LoraConfig, get_peft_model
from mPLUG_Owl.mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration
from mPLUG_Owl.mplug_owl.tokenization_mplug_owl import MplugOwlTokenizer
from mPLUG_Owl.mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor
from transformers import AutoTokenizer
base_model = 'MAGAer13/mplug-owl-llama-7b'
load_in_8bit = False
bf16 = True
image_processor = MplugOwlImageProcessor.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)
processor = MplugOwlProcessor(image_processor, tokenizer)
model = MplugOwlForConditionalGeneration.from_pretrained(
base_model,
load_in_8bit=load_in_8bit,
torch_dtype=torch.bfloat16 if bf16 else torch.half,
device_map="auto"
)
tokenizer = processor.tokenizer
peft_config = LoraConfig(target_modules=r'.*language_model.*\.(q_proj|v_proj)', inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.05)
model = get_peft_model(model, peft_config)
lora_path = 'checkpoint.pth'
prefix_state_dict = torch.load(lora_path, map_location='cpu')
model.load_state_dict(prefix_state_dict)
Error
RuntimeError: Error(s) in loading state_dict for PeftModel:
Missing key(s) in state_dict: "base_model.model.query_tokens", ...
Unexpected key(s) in state_dict: "model", "optimizer", "config", "scaler", "epoch".
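If it helps others hitting the same error: the unexpected keys ("model", "optimizer", "config", "scaler", "epoch") suggest the .pth file is a full training checkpoint rather than a bare state_dict. A minimal workaround sketch, assuming the trainable weights live under the checkpoint's "model" entry (not verified against the released file):
# Unwrap the training checkpoint before loading; 'model' is an assumed key.
ckpt = torch.load(lora_path, map_location='cpu')
state_dict = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt
# strict=False tolerates keys that a LoRA-only checkpoint legitimately omits;
# inspect the returned lists to confirm no LoRA weights are reported missing.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(len(missing), 'missing /', len(unexpected), 'unexpected keys')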
Hello,
I hope this message finds you well. I've been actively exploring the MMC project repository and must say it's an invaluable resource.
I have a few queries regarding the Chart-Text Alignment Data:
Your guidance on how the instructions used to generate the Chart-Text Alignment Data were constructed would also be immensely helpful. If possible, could you share some specifics, or the prompts you followed?
Thank you sincerely for your time and consideration.
Warm regards,
Hi,
I am trying to replicate the Chart-Text Alignment stage described in your paper. This is my first time pre-training such a large model on a large-scale dataset, so I am not sure when to stop the pre-training loop. Could you please share the number of epochs you trained for and the best loss you reached in the Chart-Text Alignment stage?
Thank you for your attention to this matter.
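In case a concrete stopping rule is useful while waiting for the authors' numbers, a generic early-stopping pattern (not the MMC training code; train_one_epoch and evaluate are hypothetical stand-ins for your own loops) is to halt once validation loss stops improving:
import torch

def pretrain_with_early_stopping(model, train_loader, val_loader,
                                 max_epochs=10, patience=2, min_delta=1e-3):
    # Stop once validation loss has not improved by min_delta for `patience` epochs.
    best_loss, bad_epochs = float('inf'), 0
    for epoch in range(max_epochs):
        train_one_epoch(model, train_loader)    # hypothetical training helper
        val_loss = evaluate(model, val_loader)  # hypothetical evaluation helper
        if val_loss < best_loss - min_delta:
            best_loss, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), f'align_best_epoch{epoch}.pth')
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_loss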
Hello! Is the data publicly available? If so, can I have access to it?
These files cannot be reached; they report "Sorry, you can't view or download this file at this time."
#Part 1
#Images
gdown https://drive.google.com/file/d/1Y17wNYdBlPxhB5KKiux2BD8C2FlA5MC9
#Text
gdown https://drive.google.com/file/d/1tUtntLRgsBJ9v5NcdTMvVI32ruLHAyFe
MMC-Benchmark should contain multiple tasks, such as multiple-choice and free-form questions, according to the paper. However, the data.json downloaded from https://drive.google.com/file/d/1HOVhPuFJ0roaHt-6AFyYX2E5MxKjoFug seems to contain only true-or-false questions.
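A quick way to confirm the task coverage of the downloaded file is to tally the question types directly (sketch only; 'question_type' is a guessed field name, so adjust it to whatever key data.json actually uses):
import json
from collections import Counter
# Tally question types in the downloaded benchmark file (assumes a list of dicts).
with open('data.json') as f:
    data = json.load(f)
print(Counter(str(item.get('question_type', 'unknown')) for item in data))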
Great work! Can you provide the Chart-Text Alignment Data, or explain how to separate it from the instruction-tuning data?
When will the dataset be released?
MMC-Benchmark
Questions and Answers
gdown https://drive.google.com/file/d/1iojBp5uzTAjZBtGU0cmXawzzWrnzrc1L
This link is empty; please fix it. Thanks!
As mentioned in #9, the updated MMC-Benchmark on Hugging Face still contains only true-or-false questions. Since the original issue was closed, I am reopening it here.
Thank you very much for providing the data; it has been tremendously helpful to me! However, I have encountered a few points of confusion regarding the dataset obtained through the current download links, and I would greatly appreciate clarification on the following:
Thanks for open-sourcing the MMC training data, which is quite helpful for developing document-oriented MLLMs. I noticed that the download link for the existing datasets has not been updated yet, and I believe it would be better to include them in training. Do you have plans to upload these images soon?
Hello,
I have downloaded the 202k arXiv images, but the set seems incomplete. The JSON data for the Scientific (arXiv) Chart-Caption split comprises 250,000 entries in total, yet I found that 79,040 of them reference images that are missing from the dataset provided in this Google Drive link.
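For reference, this is roughly how I counted the missing files (the JSON filename, image directory, and the 'image' field name below are placeholders, not the exact names in the release):
import json, os
# Count entries whose referenced image file is absent from the downloaded folder.
with open('arxiv_chart_caption.json') as f:   # placeholder filename
    entries = json.load(f)
image_dir = 'arxiv_images'                    # folder extracted from the Drive link
missing = [e for e in entries
           if not os.path.exists(os.path.join(image_dir, e['image']))]
print(f'{len(missing)} of {len(entries)} entries reference missing images')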
Could you kindly provide the missing images or offer insight into why they are not included in the provided dataset?
Thank you for your attention to this matter.
Best regards,