
chinese-clip's Introduction





ModelScope  |  Demo  |  Paper  |  Blog



This project provides a Chinese-language version of the CLIP model, trained on a large-scale corpus of roughly 200 million Chinese image-text pairs. It is intended to help users quickly build Chinese-domain applications such as image/text feature extraction & similarity computation, cross-modal retrieval, and zero-shot image classification. The codebase is built on the open_clip project and has been adapted to Chinese-domain data to achieve better performance on Chinese tasks. The project provides an API, training code, and evaluation code, described in detail below.

News

Models and Experiments

Model Scales & Download Links

Chinese-CLIP is currently open-sourced at 5 different model scales. Model details and download links are listed in the table below:

| Model | Download | #Params | Vision backbone | Vision #Params | Text backbone | Text #Params | Resolution |
|---|---|---|---|---|---|---|---|
| CN-CLIP RN50 | Download | 77M | ResNet50 | 38M | RBT3 | 39M | 224 |
| CN-CLIP ViT-B/16 | Download | 188M | ViT-B/16 | 86M | RoBERTa-wwm-Base | 102M | 224 |
| CN-CLIP ViT-L/14 | Download | 406M | ViT-L/14 | 304M | RoBERTa-wwm-Base | 102M | 224 |
| CN-CLIP ViT-L/14@336px | Download | 407M | ViT-L/14 | 304M | RoBERTa-wwm-Base | 102M | 336 |
| CN-CLIP ViT-H/14 | Download | 958M | ViT-H/14 | 632M | RoBERTa-wwm-Large | 326M | 224 |


Experimental Results

For image-text retrieval, we ran zero-shot and finetuning experiments on MUGE Retrieval, Flickr30K-CN, and COCO-CN. For zero-shot image classification, we evaluated on 10 datasets from ELEVATER. The results are shown in the tables below. Due to space constraints, we only list the baseline models and the best-performing Chinese-CLIP scale here; for detailed results of every Chinese-CLIP scale, please refer to Results.md.

MUGE Text-to-Image Retrieval (Official Validation Set):

| Model | Zero-shot R@1 | R@5 | R@10 | MR | Finetune R@1 | R@5 | R@10 | MR |
|---|---|---|---|---|---|---|---|---|
| Wukong | 42.7 | 69.0 | 78.0 | 63.2 | 52.7 | 77.9 | 85.6 | 72.1 |
| R2D2 | 49.5 | 75.7 | 83.2 | 69.5 | 60.1 | 82.9 | 89.4 | 77.5 |
| CN-CLIP | 63.0 | 84.1 | 89.2 | 78.8 | 68.9 | 88.7 | 93.1 | 83.6 |

Flickr30K-CN Retrieval (Official Test Set):

(T2I = text-to-image, I2T = image-to-text; ZS = zero-shot, FT = finetune)

| Model | T2I ZS R@1 | R@5 | R@10 | T2I FT R@1 | R@5 | R@10 | I2T ZS R@1 | R@5 | R@10 | I2T FT R@1 | R@5 | R@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wukong | 51.7 | 78.9 | 86.3 | 77.4 | 94.5 | 97.0 | 76.1 | 94.8 | 97.5 | 92.7 | 99.1 | 99.6 |
| Taiyi | 60.8 | 85.0 | 91.0 | - | - | - | - | - | - | - | - | - |
| R2D2 | 60.9 | 86.8 | 92.7 | 84.4 | 96.7 | 98.4 | 77.6 | 96.7 | 98.9 | 95.6 | 99.8 | 100.0 |
| CN-CLIP | 71.2 | 91.4 | 95.5 | 83.8 | 96.9 | 98.6 | 81.6 | 97.5 | 98.8 | 95.3 | 99.7 | 100.0 |

COCO-CN Retrieval (Official Test Set):

(T2I = text-to-image, I2T = image-to-text; ZS = zero-shot, FT = finetune)

| Model | T2I ZS R@1 | R@5 | R@10 | T2I FT R@1 | R@5 | R@10 | I2T ZS R@1 | R@5 | R@10 | I2T FT R@1 | R@5 | R@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wukong | 53.4 | 80.2 | 90.1 | 74.0 | 94.4 | 98.1 | 55.2 | 81.0 | 90.6 | 73.3 | 94.0 | 98.0 |
| Taiyi | 60.0 | 84.0 | 93.3 | - | - | - | - | - | - | - | - | - |
| R2D2 | 56.4 | 85.0 | 93.1 | 79.1 | 96.5 | 98.9 | 63.3 | 89.3 | 95.7 | 79.3 | 97.1 | 98.7 |
| CN-CLIP | 69.2 | 89.9 | 96.1 | 81.5 | 96.9 | 99.1 | 63.0 | 86.6 | 92.9 | 83.5 | 97.3 | 99.2 |

Zero-shot Image Classification:

| Model | CIFAR10 | CIFAR100 | DTD | EuroSAT | FER | FGVC | KITTI | MNIST | PC | VOC |
|---|---|---|---|---|---|---|---|---|---|---|
| GIT | 88.5 | 61.1 | 42.9 | 43.4 | 41.4 | 6.7 | 22.1 | 68.9 | 50.0 | 80.2 |
| ALIGN | 94.9 | 76.8 | 66.1 | 52.1 | 50.8 | 25.0 | 41.2 | 74.0 | 55.2 | 83.0 |
| CLIP | 94.9 | 77.0 | 56.0 | 63.0 | 48.3 | 33.3 | 11.5 | 79.0 | 62.3 | 84.0 |
| Wukong | 95.4 | 77.1 | 40.9 | 50.3 | - | - | - | - | - | - |
| CN-CLIP | 96.0 | 79.7 | 51.2 | 52.0 | 55.1 | 26.2 | 49.9 | 79.4 | 63.5 | 84.9 |



Getting Started!

Installation Requirements

Before starting, make sure your environment satisfies the following requirements:

  • python >= 3.6.4
  • pytorch >= 1.8.0 (with torchvision >= 0.9.0)
  • CUDA Version >= 10.2

Run the following command to install the third-party libraries required by this project.

pip install -r requirements.txt

API Quick Start

Below is a simple code snippet showing how to use the Chinese-CLIP API. Before getting started, please install cn_clip:

# Install via pip
pip install cn_clip

# Or install from source
cd Chinese-CLIP
pip install -e .

Once installed, you can easily call the API as shown below: pass in a given image (example) and texts, extract the image/text feature vectors, and compute their similarity:

import torch 
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())  
# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features; use the normalized image/text features for downstream tasks
    image_features /= image_features.norm(dim=-1, keepdim=True) 
    text_features /= text_features.norm(dim=-1, keepdim=True)    

    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # [[1.268734e-03 5.436878e-02 6.795761e-04 9.436829e-01]]

We also provide support for deploying ONNX and TensorRT models; see deployment.md for details.

If the API alone does not meet your needs, please read on to learn how to use this project to train and evaluate CLIP models.

Tutorials

The sections below cover a cross-modal retrieval tutorial (including finetuning, inference, and KNN retrieval) and a zero-shot image classification tutorial.

Cross-Modal Retrieval

Code Organization

After downloading this project, please create a new directory ${DATAPATH} to store datasets, pretrained checkpoints, and the logs & checkpoints produced by finetuning. The recommended workspace layout is:

Chinese-CLIP/
├── run_scripts/
│   ├── muge_finetune_vit-b-16_rbt-base.sh
│   ├── flickr30k_finetune_vit-b-16_rbt-base.sh
│   └── ...           # more finetune / evaluation scripts...
└── cn_clip/
    ├── clip/
    ├── eval/
    ├── preprocess/
    └── training/

${DATAPATH}
├── pretrained_weights/
├── experiments/
├── deploy/	      # for storing ONNX & TensorRT deployment models
└── datasets/
    ├── MUGE/
    ├── Flickr30k-CN/
    └── .../          # more custom datasets...

Preparation

Here we describe how to download the pretrained model weights and how to preprocess the data before finetuning.

Pretrained Checkpoints

Please refer to the Model Scales & Download Links section above to download the corresponding model checkpoint. We recommend placing the downloaded checkpoint files under ${DATAPATH}/pretrained_weights/.

Dataset Preprocessing

To work with the Chinese-CLIP code and to keep data processing and loading efficient, we recommend organizing the image-text datasets used for training & evaluation as follows:

${DATAPATH}
└── datasets/
    └── ${dataset_name}/
        ├── train_imgs.tsv      # image id & image content
        ├── train_texts.jsonl   # text id & text content, plus the list of matching image ids
        ├── valid_imgs.tsv
        ├── valid_texts.jsonl
        ├── test_imgs.tsv
        └── test_texts.jsonl

where ${dataset_name} is the name of the dataset (e.g., MUGE).

To keep file handling efficient, we do not store the images as a large number of small files. Instead, the train/validation/test images are stored in base64 form in the corresponding ${split}_imgs.tsv files. Each line of the file represents one image and contains the image id (an integer) and the base64-encoded image, separated by a tab, in the following format:

1000002	/9j/4AAQSkZJ...YQj7314oA//2Q==

Converting an image file to base64 is straightforward; run the following Python code:

from PIL import Image
from io import BytesIO
import base64

img = Image.open(file_name) # path to the image file
img_buffer = BytesIO()
img.save(img_buffer, format=img.format)
byte_data = img_buffer.getvalue()
base64_str = base64.b64encode(byte_data) # bytes
base64_str = base64_str.decode("utf-8") # str
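Building on the snippet above, a minimal sketch for writing a complete ${split}_imgs.tsv file could look like the following. The img_files list and output file name are hypothetical placeholders for your own data.

import base64
from io import BytesIO
from PIL import Image

# hypothetical (image_id, file_name) pairs
img_files = [(1000002, "images/1000002.jpg"), (1000003, "images/1000003.jpg")]

with open("train_imgs.tsv", "w", encoding="utf-8") as fout:
    for image_id, file_name in img_files:
        img = Image.open(file_name)
        img_buffer = BytesIO()
        img.save(img_buffer, format=img.format)
        base64_str = base64.b64encode(img_buffer.getvalue()).decode("utf-8")
        # one image per line: integer id and base64 string, separated by a tab
        fout.write(f"{image_id}\t{base64_str}\n")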

The text information and the image-text matching relations are stored in ${split}_texts.jsonl. Each line of the file is a JSON object in the following format:

{"text_id": 8428, "text": "高级感托特包斜挎", "image_ids": [1076345, 517602]}

For test sets that contain only texts and no known image-text matches, simply set the image_ids field of each line to an empty list, i.e., "image_ids": [].
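A matching sketch for writing ${split}_texts.jsonl is shown below; the records list is hypothetical example data.

import json

records = [
    {"text_id": 8428, "text": "高级感托特包斜挎", "image_ids": [1076345, 517602]},
    {"text_id": 8429, "text": "春季新款连衣裙", "image_ids": []},  # test-style entry with no known matches
]

with open("train_texts.jsonl", "w", encoding="utf-8") as fout:
    for rec in records:
        # ensure_ascii=False keeps the Chinese text human-readable in the file
        fout.write(json.dumps(rec, ensure_ascii=False) + "\n")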

Finally, we serialize the tsv and jsonl files together into memory-indexed LMDB database files, which allows efficient random access during training:

python cn_clip/preprocess/build_lmdb_dataset.py \
    --data_dir ${DATAPATH}/datasets/${dataset_name} \
    --splits train,valid,test

For example, for the MUGE dataset, set ${dataset_name} to MUGE. --splits specifies the dataset splits to convert, separated by commas with no spaces. After conversion, the following serialized LMDB files are added to the dataset directory:

${DATAPATH}
└── datasets/
    └── ${dataset_name}/
        └── lmdb/
            ├── train
            │   ├── imgs
            │   └── pairs
            ├── valid
            └── test

To lower the barrier to entry, we also provide preprocessed MUGE data (download link) and Flickr30K-CN data (download link) prepared with the steps above; simply download, extract, and place them under ${DATAPATH}/datasets/. If you need the COCO-CN data, please first apply for permission from the original authors, then contact us by email.

Model Finetuning

This section describes the training procedure so that users can understand the model details and finetune from our pretrained Chinese CLIP models. For the two downstream retrieval datasets MUGE and Flickr30K-CN, we provide the example training scripts run_scripts/muge_finetune_vit-b-16_rbt-base.sh and run_scripts/flickr30k_finetune_vit-b-16_rbt-base.sh. The scripts support both single-node (single- or multi-GPU) and multi-node distributed training. Before running, fill in the distributed-training configuration following the comments at the top of the script, then run the command below to start training (for multi-node training, run the command on every machine). If GPU memory is insufficient, consider enabling the gradient checkpointing option described below. The training logs and model checkpoints are saved automatically under the directory you specify:

cd Chinese-CLIP/
bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh ${DATAPATH}

The relevant training configuration options are:

  • Distributed
    • WORKER_CNT: number of machines used for training.
    • GPUS_PER_NODE: number of GPUs per machine.
  • Training/validation data
    • train-data: LMDB directory of the training data; see the preprocessing steps above for how to build the LMDB files.
    • val-data: LMDB directory of the validation data. If set to None, no validation is performed during training.
    • num-workers: number of DataLoader workers for the training set, default 4.
    • valid-num-workers: number of DataLoader workers for the validation set (if validation is enabled), default 1.
  • Training hyperparameters
    • vision-model: vision backbone, chosen from ["ViT-B-16", "ViT-L-14", "ViT-L-14-336", "ViT-H-14", "RN50"].
    • text-model: text backbone, chosen from ["RoBERTa-wwm-ext-base-chinese", "RoBERTa-wwm-ext-large-chinese", "RBT3-chinese"].
    • context-length: length of the text input sequence.
    • warmup: number of warmup steps.
    • batch-size: per-GPU batch size during training. (Make sure the total number of training samples > batch-size * number of GPUs, i.e., at least one full training batch.)
    • lr: learning rate.
    • wd: weight decay.
    • max-steps: number of training steps; alternatively, specify the number of epochs via max-epochs.
    • freeze-vision: whether to freeze the vision backbone.
    • use-augment: whether to apply AutoAugment data augmentation to images.
    • valid-batch-size: per-GPU batch size during validation. (Make sure the total number of validation samples > batch-size * number of GPUs, i.e., at least one full validation batch.)
    • valid-step-interval and valid-epoch-interval: validation frequency in steps/epochs; set to -1 to disable validation during training.
    • grad-checkpointing: enables gradient checkpointing, which does not keep intermediate activations in the forward pass, trading training time for lower GPU memory usage; useful when memory is tight. (A store_true flag; simply add --grad-checkpointing to the script. Currently requires PyTorch > 1.8.0.)
    • mask-ratio: following FLIP, randomly masks a given fraction of image patches during finetuning to reduce memory usage and speed up training. Defaults to 0.0, i.e., disabled.
    • use-flash-attention: enables FlashAttention, which significantly speeds up Chinese-CLIP finetuning and reduces memory usage without affecting accuracy. (A store_true flag; after setting up the environment, add --use-flash-attention to the script. See flash_attention.md for details.)
    • accum-freq: gradient accumulation frequency, default 1. Setting an integer greater than 1 enables contrastive-learning gradient accumulation to simulate a larger batch size: if the per-GPU batch size is m, the effective total batch size is accum_freq * m * number of GPUs (see the sketch after this list).
    • gather-with-grad: whether to gather features with full gradients in distributed training, disabled by default.
  • Output options
    • name: output path. The hyperparameter log, training log, and produced checkpoints are saved under ${DATAPATH}/experiments/${name}/.
    • save-step-frequency and save-epoch-frequency: checkpoint-saving interval in steps or epochs.
    • report-training-batch-acc: whether the log reports in-batch image-to-text & text-to-image training accuracy.
  • Checkpoint-loading options
    • resume: path of the checkpoint to load. The example scripts set this to a pretrained checkpoint path; you can also point it to your own finetuned checkpoint to continue training.
    • reset-data-offset: whether to resume from the previous data offset. If hyperparameters such as batch size or the number of GPUs have changed, it is recommended to enable this option.
    • reset-optimizer: whether to load the optimizer state.
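As a quick sanity check of the effective batch size described for accum-freq above, the arithmetic is simply:

# effective global batch size under gradient accumulation (values are illustrative)
m = 128          # per-GPU batch size
accum_freq = 2   # gradient accumulation frequency
num_gpus = 8     # GPUS_PER_NODE * WORKER_CNT
print(accum_freq * m * num_gpus)  # 2048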

After training, the log is saved automatically at ${DATAPATH}/experiments/${name}/out_${timestamp}.log. The training log looks like this:

2022-12-11,20:40:34 | INFO | Rank 0 | Global Steps: 1/735 | Train Epoch: 1 [1024/250880 (0%)] | Loss: 2.371020 | Image2Text Acc: 49.90 | Text2Image Acc: 48.73 | Data Time: 1.039s | Batch Time: 3.625s | LR: 0.000000 | logit_scale: 4.605 | Global Batch Size: 1024

The validation log looks like this:

2022-12-11,20:42:47 | INFO | Rank 0 | Validation Result (epoch 1 @ 150 steps) | Valid Loss: 0.502810 | Image2Text Acc: 84.95 | Text2Image Acc: 84.26 | logit_scale: 4.605 | Valid Batch Size: 128

Note: the convergence and stability of contrastive-learning training depend on the total batch size. If you use a smaller batch size than the default configuration (128 per GPU * 8 GPUs), we recommend a smaller learning rate. Using more GPUs and a larger batch size generally gives better results.

Prediction and Evaluation

We provide pipelines for feature extraction and for evaluating image-text retrieval, as follows:

Image/Text Feature Extraction

The code currently supports image/text feature extraction on a single GPU; please refer to the commands below. We also provide ONNX and TensorRT deployment for faster feature inference; see deployment.md.

cd Chinese-CLIP/
export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=${PYTHONPATH}:`pwd`/cn_clip

split=valid # specify whether to compute features for the valid or test split
resume=${DATAPATH}/pretrained_weights/clip_cn_vit-b-16.pt

python -u cn_clip/eval/extract_features.py \
    --extract-image-feats \
    --extract-text-feats \
    --image-data="${DATAPATH}/datasets/${dataset_name}/lmdb/${split}/imgs" \
    --text-data="${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl" \
    --img-batch-size=32 \
    --text-batch-size=32 \
    --context-length=52 \
    --resume=${resume} \
    --vision-model=ViT-B-16 \
    --text-model=RoBERTa-wwm-ext-base-chinese

The extracted features are saved under ${DATAPATH}/datasets/${dataset_name} by default. Image features are stored in ${split}_imgs.img_feat.jsonl, one image feature per line as a JSON object, in the following format:

{"image_id": 1000002, "feature": [0.0198, ..., -0.017, 0.0248]}

Text features are stored in ${split}_texts.txt_feat.jsonl, in the following format:

{"text_id": 248816, "feature": [0.1314, ..., 0.0018, -0.0002]}

KNN Retrieval

For small academic retrieval datasets, we provide a simple KNN retrieval implementation to compute the top-k recall results for text-to-image and image-to-text retrieval. (Tip: if you want to build a retrieval demo like the one in this project, we suggest extracting image/text features with Chinese-CLIP and then building the frontend/backend with the open-source framework clip-retrieval.)

For text-to-image retrieval (recalling relevant images for a text), run the following command:

cd Chinese-CLIP/
split=valid # specify whether to compute features for the valid or test split
python -u cn_clip/eval/make_topk_predictions.py \
    --image-feats="${DATAPATH}/datasets/${dataset_name}/${split}_imgs.img_feat.jsonl" \
    --text-feats="${DATAPATH}/datasets/${dataset_name}/${split}_texts.txt_feat.jsonl" \
    --top-k=10 \
    --eval-batch-size=32768 \
    --output="${DATAPATH}/datasets/${dataset_name}/${split}_predictions.jsonl"

The output is saved to the specified jsonl file; each line contains the top-k image ids recalled for one text, in the following format:

{"text_id": 153915, "image_ids": [5791244, 1009692167, 7454547004, 3564007203, 38130571, 2525270674, 2195419145, 2503091968, 4966265765, 3690431163]}

For image-to-text retrieval (recalling relevant texts for an image), similarly run the following command:

split=valid # specify whether to compute features for the valid or test split
python -u cn_clip/eval/make_topk_predictions_tr.py \
    --image-feats="${DATAPATH}/datasets/${dataset_name}/${split}_imgs.img_feat.jsonl" \
    --text-feats="${DATAPATH}/datasets/${dataset_name}/${split}_texts.txt_feat.jsonl" \
    --top-k=10 \
    --eval-batch-size=32768 \
    --output="${DATAPATH}/datasets/${dataset_name}/${split}_tr_predictions.jsonl"

Each line of the output contains the top-k text ids recalled for one image, in the following format:

{"image_id": 977856234, "text_ids": [156914, 157914, 158914, 155914, 156179, 158907, 157179, 154179, 154914, 154723]}

Computing Recall

We provide evaluation scripts that compute Recall@1/5/10 for the retrieval tasks, together with the mean recall (the average of Recall@1/5/10). Run the following commands to obtain the scores:

For text-to-image retrieval, run:

split=valid # specify whether to compute features for the valid or test split
python cn_clip/eval/evaluation.py \
    ${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/${split}_predictions.jsonl \
    output.json
cat output.json

For image-to-text retrieval, first run the command below to convert the annotation jsonl file from text-to-image format to image-to-text format:

python cn_clip/eval/transform_ir_annotation_to_tr.py \
    --input ${DATAPATH}/datasets/${dataset_name}/${split}_texts.jsonl

Then run:

split=valid # specify whether to compute features for the valid or test split
python cn_clip/eval/evaluation_tr.py \
    ${DATAPATH}/datasets/${dataset_name}/${split}_texts.tr.jsonl \
    ${DATAPATH}/datasets/${dataset_name}/${split}_tr_predictions.jsonl \
    output.json
cat output.json

The printed result will look like this:

{"success": true, "score": 85.67, "scoreJson": {"score": 85.67, "mean_recall": 85.67, "r1": 71.2, "r5": 90.5, "r10": 95.3}}

For the full cross-modal retrieval training and evaluation pipeline, we also provide a runnable Jupyter Notebook (download link) that walks through all of the steps above using the MUGE retrieval dataset (from the multimodal e-commerce image-text challenge); you are welcome to try it out.


Zero-Shot Image Classification

This section describes how to perform zero-shot image classification with Chinese-CLIP, using the datasets of the zero-shot image classification benchmark ELEVATER as an example. ELEVATER is an evaluation suite composed of several well-known classification datasets (including CIFAR-10, CIFAR-100, MNIST, etc.) that measures a model's zero-shot performance. For our experiments we prepared Chinese prompts and class labels for each dataset, together with the original images (see the data documentation), to evaluate Chinese-CLIP. For more details on the benchmark itself, please follow the link. You can also follow our procedure to prepare and evaluate your own Chinese classification dataset.

Preparation

First, prepare the data in the following format. Since zero-shot image classification only requires evaluation, you only need the test set and the pretrained model weights, organized in the following directory structure under your chosen ${DATAPATH}:

${DATAPATH}
├── pretrained_weights/
└── datasets/
    └── ${dataset_name}/
        ├── label_cn.txt
        └── test/
	    ├── 000/ # label id; if there are more than 10 labels, left-pad to 3 digits to keep lexicographic order
	    │   ├── image_0003.jpg # image sample; no special naming requirements
	    │   ├── image_0005.jpg
	    │   └── ...
	    ├── 001/
	    │   ├── image_0001.jpg
	    │   ├── image_0002.jpg
	    │   └── ...
	    └── 002/
	        ├── image_0003.jpg
	        ├── image_0005.jpg
	        └── ...
	    ...
	

Make sure the data inside the test folder is split into subfolders by label id and that the ids follow lexicographic order (for more than 10 labels, left-pad to multiple digits with label.zfill(3), e.g., 001, 002). label_cn.txt contains the data labels, one label name per line, as follows:

手风琴
飞机
锚
...

The label id of each line equals the line number minus 1: the label on line 1 has id 0, the label on line 2 has id 1, and so on. If there are more than 10 labels in total, all ids are left-padded with zeros to 3 digits; for example, with 100 labels the label ids run from 000 to 099. You need to create a folder for each label id and place the samples annotated with that label inside it. We use the CIFAR-100 dataset from ELEVATER as an example; please follow the link to download the preprocessed data. If you want to evaluate Chinese-CLIP on the other ELEVATER datasets, please refer to our data documentation.
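To lay out your own dataset this way, a minimal sketch could be the following; the samples list mapping image paths to label ids and the dataset name are hypothetical.

import os
import shutil

# hypothetical (image_path, label_id) pairs for a dataset with more than 10 labels
samples = [("raw/accordion_01.jpg", 0), ("raw/airplane_42.jpg", 1), ("raw/anchor_07.jpg", 2)]

test_dir = os.path.join("data", "datasets", "my-dataset", "test")
for image_path, label_id in samples:
    # left-pad the label id to 3 digits so folder names sort lexicographically
    label_folder = os.path.join(test_dir, str(label_id).zfill(3))
    os.makedirs(label_folder, exist_ok=True)
    shutil.copy(image_path, label_folder)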

Prediction and Evaluation

We provide a prediction script; see run_scripts/zeroshot_eval.sh. An example command:

bash run_scripts/zeroshot_eval.sh 0 \
    ${DATAPATH} ${dataset_name} \
    ${vision_model} ${text_model} \
    ${ckpt_path} ${index_file}

The parameters are:

  • The first argument 0 is the GPU id.
  • DATAPATH: see the Preparation section above; set it to the actual path.
  • dataset_name: see the Preparation section above; the directory name of the evaluation dataset, e.g., cifar-100.
  • vision_model: the model type, chosen from ["ViT-B-32", "ViT-B-16", "ViT-L-14", "ViT-L-14-336", "RN50", "ViT-H-14"].
  • text_model: chosen from ["RoBERTa-wwm-ext-base-chinese", "RoBERTa-wwm-ext-large-chinese", "RBT3-chinese"].
  • ckpt_path: full path of the pretrained model checkpoint.
  • index_file (optional; only needed when submitting to the official ELEVATER evaluation): see the data documentation.

For example, to evaluate on CIFAR-100 with the ViT-B/16 pretrained model, run the following (replace ${DATAPATH} with your actual path):

bash run_scripts/zeroshot_eval.sh 0 \
    ${DATAPATH} cifar-100 \
    ViT-B-16 RoBERTa-wwm-ext-base-chinese \
    ${DATAPATH}/pretrained_weights/clip_cn_vit-b-16.pt

The result prints the top-1 accuracy.

Result:
zeroshot-top1: 0.6444

On CIFAR-100, the ViT-B/16 Chinese-CLIP model is expected to reach 64.4%. For our zero-shot classification results with other model scales and on other ELEVATER datasets, please see Results.md.

The program also saves a json file for submission to the official ELEVATER evaluation; its content looks like this:

{"model_name": "CN-CLIP-ViT-B-16", "dataset_name": "cifar-100", "num_trainable_params": 0, "num_params": 188262913, "num_visual_params": 86192640, "num_backbone_params": 188262913, "n_shot": 0, "rnd_seeds": [123], "predictions": "prediction probability tensor [size: (1, 10000, 100)]"}

It contains model meta information such as the model name model_name, dataset name dataset_name, total parameter count num_params, and vision-tower parameter count num_visual_params, as well as the model output: the predicted probability tensor of size [1, number of samples, number of labels].

Zero-Shot Classification Online Demo

Based on the feature-extraction API integrated into Huggingface transformers, we provide online demos for zero-shot image classification on the Huggingface Model Hub 🤗 (Hosted inference API). The demo links for each model scale are listed below; feel free to try them!

Citation

If you find this project useful, please give us a star and share it with others, and feel free to cite the related work. Thank you for your support!

@article{chinese-clip,
  title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese},
  author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang},
  journal={arXiv preprint arXiv:2211.01335},
  year={2022}
}

chinese-clip's People

Contributors

dtyxs, justinlin610, jxst539246, manymuch, yangapku, zwkkk



chinese-clip's Issues

EOFError at the end of training

2023-02-23,06:08:33 | INFO | Rank 0 | Validation Result (epoch 3 @ 99 steps) | Valid Loss: 0.000000 | Image2Text Acc: 100.00 | Text2Image Acc: 100.00 | logit_scale: 4.605 | Valid Batch Size: 1
2023-02-23,06:08:40 | INFO | Rank 0 | Saved checkpoint ../clip_set/experiments/muge_finetune_vit-b-16_roberta-base_bs128_8gpu_poizon/checkpoints/epoch3.pt (epoch 3 @ 99 steps) (writing took 7.470757007598877 seconds)
2023-02-23,06:08:48 | INFO | Rank 0 | Saved checkpoint ../clip_set/experiments/muge_finetune_vit-b-16_roberta-base_bs128_8gpu_poizon/checkpoints/epoch_latest.pt (epoch 3 @ 99 steps) (writing took 7.439142227172852 seconds)
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.8/logging/handlers.py", line 1482, in _monitor
record = self.dequeue(True)
File "/usr/lib/python3.8/logging/handlers.py", line 1431, in dequeue
return self.queue.get(block)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 97, in get
res = self._recv_bytes()
File "/usr/lib/python3.8/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError

Speed of all_gather

Hi, thank you very much for open-sourcing Chinese CLIP; it has been a great help to our research.
Our lab has also tried pretraining on a large dataset, but with PyTorch the speed of all_gather degrades noticeably as the number of nodes grows, which makes training significantly longer. How did you deal with the all_gather speed issue during training? Thanks.

Training batch size

Hi, how did you manage to train ViT-L/14 with a batch size of 32k on 128 V100 GPUs? Did you use fp16? In my experiments with mixed precision the largest batch size I can fit is 10k; anything larger runs out of memory.

What does the training dataset consist of?

Hi, besides the 100M image-text pairs from Wukong, what else does the dataset contain? Also, do you plan to open-source the dataset?

Can it be used as the text encoder for Stable Diffusion?

I tried using Chinese CLIP as the text encoder for Stable Diffusion, but it keeps generating completely black images (the safety checker is disabled). Can it be used as the text encoder for SD? Has the team tested this?

Error when converting to ONNX

Hello, thanks for open-sourcing cn_clip. I finetuned cn_clip on my own data and wanted to convert the vision module to ONNX, but after conversion, running it with onnxruntime throws the error below. What could be the cause?
2022-12-06 20:54:43.837539176 [E:onnxruntime:, sequential_executor.cc:368 Execute] Non-zero status code returned while running Reshape node. Name:'Reshape_54' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:40 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{197,1,768}, requested shape:{197,2364,64}

Traceback (most recent call last):
File "test.py", line 18, in
(text_feature) = sessison.run(output_names, {'image': image})
File "/home/jovyan/.local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_54' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:40 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{197,1,768}, requested shape:{197,2364,64}

IO bottleneck when storing tens of millions of samples in LMDB

Hi, after preprocessing the Zero dataset I have about 70M samples stored in LMDB, but LMDB only produces a single data.mdb placed on one device, which causes a data-loading IO bottleneck during training and underutilized CPU and GPU. Have you run into this problem?

Imagenet zero shot results

Thanks for sharing these models and results for Chinese. Multilingual clips are important!

I see you have evaluation code for imagenet with Chinese prompts. Do you have results of your Chinese clip on imagenet with Chinese prompts and classnames?

CLIP for Image Captioning

Fantastic codebase — it feels like the most user-friendly VLP framework available right now; thanks to the authors for their patient open-source work! Do you plan to add code for finetuning on generation tasks such as image captioning? There does not yet seem to be a Chinese VLP model pretrained for captioning the way BLIP is. How does CN-CLIP perform on generation tasks? Finetuning code would be much appreciated ❤️

Multi-node multi-GPU training

Hi, training one epoch on 2 nodes with 4 A100 GPUs each takes about 10 hours, while a single node with 4 A100s takes about 30 minutes per epoch. What might be causing the drop in multi-node training efficiency?

About bert.pooler

After downloading the pretrained model and directly calling eval/extract_features.py, loading the model reports missing bert.pooler parameters (Missing key(s) in state_dict: "bert.pooler.dense.weight", "bert.pooler.dense.bias"). I see the code explicitly filters these keys (sd = {k[len('module.'):]: v for k, v in sd.items() if "bert.pooler" not in k}), but I could not find a way to solve the problem. Why does the code handle this specially, and how can I fix it?

All text-image pair similarities come out as 1

Hi, I am glad to be able to use the open-sourced Chinese CLIP, but in my experiments every text-image pair gets a similarity of 1. The pokemon demo you provide produces normal output. Is this an issue with the model's training data, or am I using it incorrectly? Please advise.
Here is my data: 32 images in total. Extract the archive into the code directory. My script is below; it takes about two minutes to run.

# coding: utf8
from glob import glob
import os
import pickle

from prettytable import PrettyTable
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name
from tqdm import tqdm

device = 'cuda:0'
model_path = './'
data_path = './samples/*'
total = len(list(glob(os.path.join(data_path, '*')))) * len(os.listdir(model_path))
res = {}
pbar = tqdm(total=total)
for model_name in ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']:
    model, preprocess = load_from_name(model_name, device=device, download_root=model_path)
    model.eval()
    res[model_name] = {}
    for dp in glob(data_path):
        text = dp.split('/')[-1]
        text_encode = clip.tokenize([text]).to(device)
        res[model_name][text] = {}
        path = os.path.join(dp, '*')
        for pic_path in glob(path):
            pic_name = pic_path.split('/')[-1].split('.')[0]
            image = preprocess(Image.open(pic_path)).unsqueeze(0).to(device)
            with torch.no_grad():
                logits_per_image, logits_per_text = model.get_similarity(image, text_encode)
                probs = logits_per_image.softmax(dim=-1).cpu().item()
                res[model_name][text][pic_name] = probs
                pbar.update(1)
for k in res:
    table = PrettyTable()
    table.title = '{} results'.format(k)
    table.field_names = ['model_name', 'text', 'true0', 'neg0', 'neg1', 'neg2']
    for kk in res[k]:
        row = [k, kk, res[k][kk]['true0'], res[k][kk]['neg0'], res[k][kk]['neg1'], res[k][kk]['neg2']]
        table.add_row(row)
    print(table)

Here are the results I obtained:

+------------+--------------------------------+-------+------+------+------+
| model_name |              text              | true0 | neg0 | neg1 | neg2 |
+------------+--------------------------------+-------+------+------+------+
|  ViT-B-16  |         踩三轮的印度人         |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  |          吃竹子的熊猫          |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  |        大家在餐桌上交谈        |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  | 非洲大草原上一头斑马正看着镜头 |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  |          好多玫瑰花呀          |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  |       沙地里的巨型仙人掌       |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  |            舞狮的人            |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-B-16  |      一艘油轮行驶在海洋中      |  1.0  | 1.0  | 1.0  | 1.0  |
+------------+--------------------------------+-------+------+------+------+
+------------+--------------------------------+-------+------+------+------+
| model_name |              text              | true0 | neg0 | neg1 | neg2 |
+------------+--------------------------------+-------+------+------+------+
|  ViT-L-14  |         踩三轮的印度人         |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  |          吃竹子的熊猫          |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  |        大家在餐桌上交谈        |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  | 非洲大草原上一头斑马正看着镜头 |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  |          好多玫瑰花呀          |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  |       沙地里的巨型仙人掌       |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  |            舞狮的人            |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-L-14  |      一艘油轮行驶在海洋中      |  1.0  | 1.0  | 1.0  | 1.0  |
+------------+--------------------------------+-------+------+------+------+
+--------------+--------------------------------+-------+------+------+------+
|  model_name  |              text              | true0 | neg0 | neg1 | neg2 |
+--------------+--------------------------------+-------+------+------+------+
| ViT-L-14-336 |         踩三轮的印度人         |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 |          吃竹子的熊猫          |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 |        大家在餐桌上交谈        |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 | 非洲大草原上一头斑马正看着镜头 |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 |          好多玫瑰花呀          |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 |       沙地里的巨型仙人掌       |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 |            舞狮的人            |  1.0  | 1.0  | 1.0  | 1.0  |
| ViT-L-14-336 |      一艘油轮行驶在海洋中      |  1.0  | 1.0  | 1.0  | 1.0  |
+--------------+--------------------------------+-------+------+------+------+
+------------+--------------------------------+-------+------+------+------+
| model_name |              text              | true0 | neg0 | neg1 | neg2 |
+------------+--------------------------------+-------+------+------+------+
|  ViT-H-14  |         踩三轮的印度人         |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  |          吃竹子的熊猫          |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  |        大家在餐桌上交谈        |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  | 非洲大草原上一头斑马正看着镜头 |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  |          好多玫瑰花呀          |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  |       沙地里的巨型仙人掌       |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  |            舞狮的人            |  1.0  | 1.0  | 1.0  | 1.0  |
|  ViT-H-14  |      一艘油轮行驶在海洋中      |  1.0  | 1.0  | 1.0  | 1.0  |
+------------+--------------------------------+-------+------+------+------+
+------------+--------------------------------+-------+------+------+------+
| model_name |              text              | true0 | neg0 | neg1 | neg2 |
+------------+--------------------------------+-------+------+------+------+
|    RN50    |         踩三轮的印度人         |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    |          吃竹子的熊猫          |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    |        大家在餐桌上交谈        |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    | 非洲大草原上一头斑马正看着镜头 |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    |          好多玫瑰花呀          |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    |       沙地里的巨型仙人掌       |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    |            舞狮的人            |  1.0  | 1.0  | 1.0  | 1.0  |
|    RN50    |      一艘油轮行驶在海洋中      |  1.0  | 1.0  | 1.0  | 1.0  |
+------------+--------------------------------+-------+------+------+------+

As you can see, the similarity is 1 for both the true matching image (true0) and the negatives (neg*), as if the model cannot distinguish the images at all.

evaluation datasets

could you please provide the muge, flickr 30 cn and coco cn you used for eval ?

thanks

Cannot reproduce the clip_cn_vit-b-16 results with the provided weights

  1. Using the dataset you provided (Flickr30k-CN) and the pretrained weights, running the code does not give the expected result:
    {"score": 74.3, "mean_recall": 74.3, "r1": 54.42, "r5": 80.82000000000001, "r10": 87.66000000000001}}

[screenshots of the run output and the launch script omitted]

Is one of the hyperparameters wrong?

Training dataset

Thanks for sharing! I would like to ask whether the ~200M image-text pairs used for training were randomly crawled from the web. Was there a data-filtering step? Do they include any public caption datasets? Looking forward to your reply.

How to load a finetuned model

Hi, I have a question: I have finished training a new model (screenshot omitted).

How do I use this model? For example, the project gives the following example:

import torch 
from PIL import Image

import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())  
# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')
model.eval()
image = preprocess(Image.open("examples/pokemon.jpeg")).unsqueeze(0).to(device)
text = clip.tokenize(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize the features; use the normalized image/text features for downstream tasks
    image_features /= image_features.norm(dim=-1, keepdim=True) 
    text_features /= text_features.norm(dim=-1, keepdim=True)    

    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # [[1.268734e-03 5.436878e-02 6.795761e-04 9.436829e-01]]

How can I make this code load my finetuned model instead? Any advice is appreciated, thanks.

Finetuning a loaded CLIP model: clip_model.encode_text() outputs NaN

Hi, I load the model with the code below and want to finetune CLIP on my task. I first froze the CLIP parameters and trained an added fully connected layer, then unfroze part of the CLIP parameters, but now the CLIP model outputs NaN. How should I finetune the CLIP model?

model, preprocess = load_from_name("ViT-B-16", device=device, download_root='./')

Question about all_gather during training

Hi, regarding lines 35 to 44 of train.py: when aggregating into the global batch, why repeat the operations of lines 35-44 again instead of directly using gathered_image_features and gathered_text_features?

My understanding (in the single-node multi-GPU case) is that only the ordering differs, and the loss on every GPU ends up the same? (screenshot omitted)

Using the Flickr data

How does CN-CLIP handle the multiple texts associated with a single image in the Flickr data? How do you avoid multiple texts of the same image appearing in the same batch and becoming false negatives? Thanks.

Putting the whole pipeline together

I want to train my own model and then run retrieval with it.
Following Pretrained Checkpoints, Dataset Preprocessing, and Model Finetuning, I can get a new .pt model file in the output directory.
How do I run image/text-to-image search against my own model file?

In the KNN retrieval example,
it is unclear how the ${split}_imgs.img_feat.jsonl file is generated,
and I cannot find where the query image/text is supplied.

Cannot reproduce the results

Hi,
I am training on the MUGE dataset, and the validation metrics stay flat throughout training:
2022-12-08,21:22:28 | INFO | Rank 1 | Validation Result (epoch 2 @ 4650 steps) | Valid Loss: 1.668444 | Image2Text Acc: 32.41 | Text2Image Acc: 32.94 | logit_scale: 4.595 | Valid Batch Size: 48
2022-12-08,21:22:28 | INFO | Rank 0 | Validation Result (epoch 2 @ 4650 steps) | Valid Loss: 1.668444 | Image2Text Acc: 32.41 | Text2Image Acc: 32.94 | logit_scale: 4.595 | Valid Batch Size: 48

The accuracy stays around 32.
How can I reproduce the 60+ accuracy reported in the repository?

UnicodeDecodeError when importing cn_clip: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequence

import cn_clip.clip as clip
发生异常: UnicodeDecodeError
Traceback (most recent call last):
File "D:\develop\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "D:\develop\anaconda3\lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "c:\Users\saizong.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy_main
.py", line 45, in
cli.main()
File "c:\Users\saizong.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 444, in main
run()
File "c:\Users\saizong.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 285, in run_file
runpy.run_path(target_as_str, run_name=compat.force_str("main"))
File "D:\develop\anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\develop\anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\develop\anaconda3\lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "d:\develop\workspace\today_video\clipcn.py", line 5, in
import cn_clip.clip as clip
File "D:\develop\anaconda3\Lib\site-packages\cn_clip\clip_init
.py", line 3, in
_tokenizer = FullTokenizer()
File "D:\develop\anaconda3\Lib\site-packages\cn_clip\clip\bert_tokenizer.py", line 170, in init
self.vocab = load_vocab(vocab_file)
File "D:\develop\anaconda3\Lib\site-packages\cn_clip\clip\bert_tokenizer.py", line 132, in load_vocab
token = convert_to_unicode(reader.readline())
UnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequence

How can I fix this? Thank you!

Triton server ONNX backend deployment error

tritonserver image version: nvcr.io/nvidia/tritonserver:22.05-py3
model: ViT-H-14
bash error: creating server: Invalid argument - load failed for model 'clip-image-onnx': version 1 is at UNAVAILABLE state: Internal: onnx runtime error 6: Exception during initialization: /workspace/onnxruntime/onnxruntime/core/optimizer/optimizer_execution_frame.cc:78 onnxruntime::OptimizerExecutionFrame::Info::Info(const std::vector<const onnxruntime::Node*>&, const InitializedTensorSet&, const onnxruntime::Path&, const onnxruntime::IExecutionProvider&, const std::function<bool(const std::__cxx11::basic_string<char>&)>&) [ONNXRuntimeError] : 1 : FAIL : tensorprotoutils.cc:622 GetExtDataFromTensorProto External initializer: visual.transformer.resblocks.31.mlp.c_proj.weight offset: 1251033600 size to read: 13107200 given file_length: 391708810 are out of bounds or can not be read in full.

MUGE finetuning image-text retrieval results

Thanks for open-sourcing this work. In Table 2 of the technical report, which shows cross-modal retrieval results on the MUGE dataset, the text says "on text-to-image retrieval AND image-to-text retrieval", but if I am not mistaken the table only shows text-to-image retrieval. For the other two datasets, COCO-CN and Flickr30K-CN, both directions are shown, so I would also like to know the image-to-text retrieval results on MUGE.

Got OrtException when deploy ONNX model to Android

Hello your team,

I followed the guide here: https://github.com/OFA-Sys/Chinese-CLIP/blob/master/deployment.md and successfully obtained the ONNX models listed below:
vit-b-16.txt.fp32.onnx 391 MB
vit-b-16.txt.fp16.onnx 2.27 MB
vit-b-16.img.fp32.onnx 332 MB
vit-b-16.img.fp16.onnx 3.34 MB
vit-b-16.txt.fp16.onnx.extra_file 194 MB
vit-b-16.img.fp16.onnx.extra_file 164 MB

But when I deployed the img model("vit-b-16.img.fp32.onnx") to Android, I just met the follow exception:
ai.onnxruntime.OrtException: Error code - ORT_INVALID_GRAPH - message: This is an invalid model. Error in Node:/visual/Unsqueeze : Node (/visual/Unsqueeze) has input size 2 not in range [min=1, max=1]. at ai.onnxruntime.OrtSession.createSession(Native Method) at ai.onnxruntime.OrtSession.<init>(OrtSession.java:82) at ai.onnxruntime.OrtEnvironment.createSession(OrtEnvironment.java:206) at ai.onnxruntime.OrtEnvironment.createSession(OrtEnvironment.java:179)

I am just a newbie here; could your team give some suggestions on how to overcome this bug?

Thanks so much.

Problem running zeroshot_eval.sh

Hi, I ran zeroshot_eval.sh with the command below and got an error. Is there something wrong with how I wrote the command? Thanks. The ${DATAPATH} folder is named data.

The error is:

$ bash run_scripts/zeroshot_eval.sh 0 \
>     data fgvc-aircraft-2013b-variants102 \
>     ViT-B-16 RoBERTa-wwm-ext-base-chinese \
>     data/ckpt
Traceback (most recent call last):
  File "E:/Project/CLIP/Chinese-CLIP-master/cn_clip/eval/zeroshot_evaluation.py", line 18, in <module>
    from cn_clip.eval.data import get_zeroshot_dataset, _preprocess_text
ImportError: cannot import name 'get_zeroshot_dataset' from 'cn_clip.eval.data' (E:\Anaconda\envs\PyTorch\lib\site-packages\cn_clip\eval\data.py)

The script is as follows:

export CUDA_VISIBLE_DEVICES=0
export PYTHONPATH=`pwd`/cn_clip

path=/data/datasets
dataset=fgvc-aircraft-2013b-variants102-example
datapath=${path}/datasets/${dataset}/test:data/datasets/fgvc-aircraft-2013b-variants102-example/test
savedir=${path}/save_predictions:data/pretrained_weights
vision_model=ViT-B-16
text_model=RoBERTa-wwm-ext-base-chinese
resume=data/pretrained_weights/clip_cn_vit-b-16.pt
label_file=${path}/${dataset}/label_cn.txt
index=${7:-}

python -u E:/Project/CLIP/Chinese-CLIP-master/cn_clip/eval/zeroshot_evaluation.py \
    --datapath="${datapath}" \
    --label-file=${label_file} \
    --save-dir=${savedir} \
    --dataset=${dataset} \
    --index=${index} \
    --img-batch-size=64 \
    --resume=${resume} \
    --vision-model=${vision_model} \
    --text-model=${text_model}

Many-to-many image-text pairs in the training set

Thanks for sharing this work.
Does the training set contain cases where one image corresponds to multiple texts, or one text to multiple images? My finetuning data contains a fairly large amount of such many-to-many pairs; what is the most appropriate way to handle them? For an image with multiple texts, is it better to concatenate them into one long text, or to split them into multiple image-text pairs? Thanks!

grad-checkpointing error

Hi an~ @yangapku

When I set the parameter 'grad-checkpointing' to True, but report this error:

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.

Do you have any suggestions to solve this problem?

Single-modality performance

Cross-modal models usually focus on img2text or text2img performance, which reflects how well the modalities are aligned. But after cross-modal contrastive pretraining, how does the single-modality retrieval ability compare to feature extractors pretrained on ImageNet, such as the ResNet family? I did a quick test: I took the image tower of ViT-B-16 as a feature extractor, built a small image-vector retrieval database, and compared it with vgg16; the results were only about on par with vgg16...

Training script does not start

Hi, with your help the previous zeroshot_eval.sh issue now runs successfully and I got the accuracy you mentioned; thank you very much.
I then tried to train a model, but the training command produces no response. Could you please help?
The ${DATAPATH} folder is named data, and datasets uses the preprocessed MUGE data you provide.

The command I entered is:
bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh data
After entering it, there is no output. (screenshot omitted)

The script settings are as follows:


GPUS_PER_NODE=1
WORKER_CNT=1
export MASTER_ADDR=localhost
export MASTER_PORT=8514
export RANK=0 

export PYTHONPATH=${PYTHONPATH}:`pwd`/cn_clip/

…

The rest of the script is unchanged.

Problem when launching training

Hi, a question: when I train with the following command, the problem below appears:
cd Chinese-CLIP/ bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh ${DATAPATH}

The error is:
root@clip-test-d9cd48656-q2zbl:~/workspace/clip/Chinese-CLIP# bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh ../clip_set/ /usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects --local_rankargument to be set, please change it to read fromos.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
WARNING:torch.distributed.run:


Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
File "cn_clip/training/main.py", line 300, in
main()
File "cn_clip/training/main.py", line 54, in main
torch.cuda.set_device(args.local_device_rank)
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 326, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 722 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 723) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 195, in
main()
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 191, in main
launch(args)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 176, in launch
run(args)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 132, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

cn_clip/training/main.py FAILED

Failures:
[1]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 724)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 725)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 4 (local_rank: 4)
exitcode : 1 (pid: 726)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[4]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 5 (local_rank: 5)
exitcode : 1 (pid: 727)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[5]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 728)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[6]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 7 (local_rank: 7)
exitcode : 1 (pid: 729)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2023-02-21_09:58:00
host : clip-test-d9cd48656-q2zbl
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 723)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================`

Can you tell what the cause is?
