
AnomalyGPT's Introduction


AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models


🌐 Project Page • 🤗 Online Demo • 📃 Paper • 🤖 Model • 📹 Video

Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao Wang


Catalogue:

  • Introduction
  • Running AnomalyGPT Demo
  • Train Your Own AnomalyGPT
  • Examples

1. Introduction: [Back to Top]


AnomalyGPT is the first Large Vision-Language Model (LVLM) based Industrial Anomaly Detection (IAD) method that can detect anomalies in industrial images without manually specified thresholds. Existing IAD methods only provide anomaly scores and require manually set thresholds, while existing LVLMs cannot detect anomalies in images. AnomalyGPT can not only indicate the presence and location of anomalies but also provide information about the image.


We leverage a pre-trained image encoder and a Large Language Model (LLM) to align IAD images with their corresponding textual descriptions via simulated anomaly data. We employ a lightweight, visual-textual feature-matching-based image decoder to obtain localization results, and design a prompt learner to provide fine-grained semantics to the LLM, fine-tuning the LVLM using prompt embeddings. Our method can also detect anomalies in previously unseen items with only a few normal samples provided.
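
To make the feature-matching idea concrete, here is a minimal sketch of how such a localization map can be computed from patch features. This is an illustration only, mirroring the cosine-similarity comparison against normal reference features that test_mvtec.py is reported to use in the issues further down; the function and argument names are hypothetical, not the repository's actual decoder:

import torch
import torch.nn.functional as F

def anomaly_map_sketch(patch_feats, normal_feats, out_size=224):
    # patch_feats:  (H*W, C) patch features of the query image
    # normal_feats: (N, C)   patch features collected from normal reference images
    q = F.normalize(patch_feats, dim=-1)
    r = F.normalize(normal_feats, dim=-1)
    sim = q @ r.t()                       # cosine similarity to every normal patch
    score = 1.0 - sim.max(dim=-1).values  # low similarity to all normals => anomalous
    hw = int(score.numel() ** 0.5)        # assumes a square patch grid
    amap = score.view(1, 1, hw, hw)
    return F.interpolate(amap, size=out_size, mode='bilinear', align_corners=False)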


2. Running AnomalyGPT Demo [Back to Top]

2.1 Environment Installation

Clone the repository locally:

git clone https://github.com/CASIA-IVA-Lab/AnomalyGPT.git

Install the required packages:

pip install -r requirements.txt

2.2 Prepare ImageBind Checkpoint:

You can download the pre-trained ImageBind model using this link. After downloading, put the downloaded file (imagebind_huge.pth) in the ./pretrained_ckpt/imagebind_ckpt/ directory.
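
As a quick, optional sanity check (not part of the repository), you can confirm the checkpoint is in place and readable before moving on:

import torch

ckpt = torch.load('./pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth', map_location='cpu')
print(type(ckpt))  # should load without errors; a state dict is expected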

2.3 Prepare Vicuna Checkpoint:

To prepare the pre-trained Vicuna model, please follow the instructions provided [here].

2.4 Prepare Delta Weights of AnomalyGPT:

We use the pre-trained parameters from PandaGPT to initialize our model. You can get the weights of PandaGPT trained with different strategies from the table below. In our experiments and online demo, we use Vicuna-7B and openllmplayground/pandagpt_7b_max_len_1024 due to computational resource limitations. Better results are expected when switching to Vicuna-13B.

Base Language Model    | Maximum Sequence Length | Huggingface Delta Weights Address
Vicuna-7B (version 0)  | 512                     | openllmplayground/pandagpt_7b_max_len_512
Vicuna-7B (version 0)  | 1024                    | openllmplayground/pandagpt_7b_max_len_1024
Vicuna-13B (version 0) | 256                     | openllmplayground/pandagpt_13b_max_len_256
Vicuna-13B (version 0) | 400                     | openllmplayground/pandagpt_13b_max_len_400

Please put the downloaded 7B/13B delta weights file (pytorch_model.pt) in the ./pretrained_ckpt/pandagpt_ckpt/7b/ or ./pretrained_ckpt/pandagpt_ckpt/13b/ directory.

After that, you can download AnomalyGPT weights from the table below.

Setup and Datasets                                          | Weights Address
Unsupervised on MVTec-AD                                    | AnomalyGPT/train_mvtec
Unsupervised on VisA                                        | AnomalyGPT/train_visa
Supervised on MVTec-AD, VisA, MVTec-LOCO-AD and CrackForest | AnomalyGPT/train_supervised

After downloading, put the AnomalyGPT weights in the ./code/ckpt/ directory.
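
For reference, the demo applies these checkpoints on top of the base model roughly as follows. This is paraphrased from the loading code quoted in the issues further down; model is assumed to be an already-constructed OpenLLAMAPEFTModel, and the exact AnomalyGPT checkpoint path depends on which weights you downloaded:

import torch

# PandaGPT delta weights first, then the AnomalyGPT weights on top.
delta_ckpt = torch.load('./pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt', map_location=torch.device('cpu'))
model.load_state_dict(delta_ckpt, strict=False)

anomalygpt_ckpt = torch.load('./code/ckpt/train_supervised/pytorch_model.pt', map_location=torch.device('cpu'))  # assumed path
model.load_state_dict(anomalygpt_ckpt, strict=False)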

In our online demo, we use the supervised setting as the default model to provide a better user experience. You can also try the other weights locally.

2.5 Deploying Demo

Upon completion of the previous steps, you can run the demo locally as follows:

cd ./code/
python web_demo.py

3. Train Your Own AnomalyGPT [Back to Top]

Prerequisites: Before training the model, make sure the environment is properly installed and the checkpoints of ImageBind, Vicuna and PandaGPT are downloaded.

3.1 Data Preparation:

You can download the MVTec-AD dataset from [this link] and VisA from [this link]. You can also download the pre-training data of PandaGPT from [here]. After downloading, put the data in the [./data] directory.

The [./data] directory should look like:

data
|---pandagpt4_visual_instruction_data.json
|---images
|-----|-- ...
|---mvtec_anomaly_detection
|-----|-- bottle
|-----|-----|----- ground_truth
|-----|-----|----- test
|-----|-----|----- train
|-----|-- capsule
|-----|-- ...
|----VisA
|-----|-- split_csv
|-----|-----|--- 1cls.csv
|-----|-----|--- ...
|-----|-- candle
|-----|-----|--- Data
|-----|-----|-----|----- Images
|-----|-----|-----|--------|------ Anomaly 
|-----|-----|-----|--------|------ Normal 
|-----|-----|-----|----- Masks
|-----|-----|-----|--------|------ Anomaly 
|-----|-----|--- image_anno.csv
|-----|-- capsules
|-----|-----|----- ...
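
To avoid the placement problems reported in several issues further down, a small layout check like the following can help (illustrative only; the paths follow the tree above):

import os

expected = [
    'data/pandagpt4_visual_instruction_data.json',
    'data/images',
    'data/mvtec_anomaly_detection/bottle/train',
    'data/VisA/split_csv/1cls.csv',
]
for p in expected:
    print(p, 'OK' if os.path.exists(p) else 'MISSING')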

3.2 Training Configurations

The table below shows the training hyperparameters used in our experiments. The hyperparameters were selected based on the constraints of our computational resources, i.e., 2 × RTX 3090 GPUs.

Base Language Model | Epoch Number | Batch Size | Learning Rate | Maximum Length
Vicuna-7B           | 50           | 16         | 1e-3          | 1024

3.3 Training AnomalyGPT

To train AnomalyGPT on MVTec-AD dataset, please run the following commands:

cd ./code
bash ./scripts/train_mvtec.sh

The key arguments of the training script are as follows:

  • --data_path: The data path for the json file pandagpt4_visual_instruction_data.json.
  • --image_root_path: The root path for training images of PandaGPT.
  • --imagebind_ckpt_path: The path of ImageBind checkpoint.
  • --vicuna_ckpt_path: The directory that saves the pre-trained Vicuna checkpoints.
  • --max_tgt_len: The maximum sequence length of training instances.
  • --save_path: The directory which saves the trained delta weights. This directory will be automatically created.
  • --log_path: The directory which saves the log. This directory will be automatically created.

Note that the number of epochs can be set via the epochs field in ./code/config/openllama_peft.yaml, and the learning rate can be set in ./code/dsconfig/openllama_peft_stage_1.json.
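
If you prefer to edit these files programmatically, a sketch follows. The epochs field name comes from the note above; the learning-rate key path inside the DeepSpeed config is an assumption based on typical DeepSpeed files, so verify it against the actual json:

import json
import yaml  # pip install pyyaml

with open('./code/config/openllama_peft.yaml') as f:
    cfg = yaml.safe_load(f)
cfg['epochs'] = 50
with open('./code/config/openllama_peft.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)

with open('./code/dsconfig/openllama_peft_stage_1.json') as f:
    ds = json.load(f)
ds['optimizer']['params']['lr'] = 1e-3  # assumed key path; check the file
with open('./code/dsconfig/openllama_peft_stage_1.json', 'w') as f:
    json.dump(ds, f, indent=2)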


4. Examples

An image of concrete with a crack.


A cracked capsule.


An image of a cut hazelnut.


A damaged bottle.


A photo of a normal carpet.


A photo of a piece of wood with a defect.


A piece of normal fabric.


License

AnomalyGPT is licensed under the CC BY-NC-SA 4.0 license.


Citation:

If you find AnomalyGPT useful in your research or applications, please cite it using the following BibTeX:

@article{gu2023anomalygpt,
  title={AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language Models},
  author={Gu, Zhaopeng and Zhu, Bingke and Zhu, Guibo and Chen, Yingying and Tang, Ming and Wang, Jinqiao},
  journal={arXiv preprint arXiv:2308.15366},
  year={2023}
}

Acknowledgments:

We borrow some code and pre-trained weights from PandaGPT. Thanks for their wonderful work!


AnomalyGPT's People

Contributors

fantasticgnu


AnomalyGPT's Issues

Question about the ablation study results in the paper

Dear authors,
Thank you for your excellent work!
I have a question. In Table 4 of your paper, comparing the decoder-only setting with the setting that adds the prompt learner and the LLM on top of the decoder, the Image-AUC on MVTec differs by only 0.3, so the addition of language does not help significantly. In the decoder-only unsupervised setting on MVTec, does that mean training only on all the normal samples, without simulated anomalous samples? Also, compared with the 99.1% reported in the PatchCore paper, your result is still slightly behind. Is there any setting that could surpass 99.1%, for example combining the unsupervised and few-shot settings?

Looking forward to your reply.
Thanks!

Vicuna fails to load during training, but web_demo loads it successfully

[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[2023-12-01 15:10:58,146] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[!] collect 161151 samples for training
Initializing visual encoder from ../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth ...
[!] collect 161151 samples for training
Initializing visual encoder from ../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth ...
Visual encoder initialized.
Initializing language decoder from ../pretrained_ckpt/vicuna_ckpt/7b_v0/ ...
Visual encoder initialized.
Initializing language decoder from ../pretrained_ckpt/vicuna_ckpt/7b_v0/ ...
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s][2023-12-01 15:14:47,133] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 383754
[2023-12-01 15:14:47,133] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 383755
[2023-12-01 15:14:49,694] [ERROR] [launch.py:434:sigkill_handler] ['/root/miniconda3/envs/AnomalyGPT_env/bin/python', '-u', 'train_mvtec.py', '--local_rank=1', '--model', 'openllama_peft', '--stage', '1', '--imagebind_ckpt_path', '../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth', '--vicuna_ckpt_path', '../pretrained_ckpt/vicuna_ckpt/7b_v0/', '--delta_ckpt_path', '../pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt', '--max_tgt_len', '1024', '--data_path', '../data/pandagpt4_visual_instruction_data.json', '--image_root_path', '../data/images/', '--save_path', './ckpt/train_mvtec/', '--log_path', './ckpt/train_mvtec/log_rest/'] exits with return code = -9

The error occurs when the model loads the pre-trained Vicuna (self.llama_model = LlamaForCausalLM.from_pretrained(vicuna_ckpt_path)).
Could you advise how to solve this? Many thanks!

about few shot anomaly map

Hi,
Thanks for your great work. I have one question about the anomaly map: in the few-shot case, the anomaly map cannot precisely point out which part is anomalous. Is there any method to refine the anomaly map?

In the no-shot case, the anomaly map is fine.

The vicuna_0 model

When merging the LLaMA model with vicuna_delta_7b_0 into a Vicuna model, I get the error: The size of tensor a (32000) must match the size of tensor b (32001) at non-singleton dimension 0. Is the v0 version no longer usable? And if I merge the v1.1 version instead, will it affect the final AnomalyGPT model?

It runs successfully now

Hello, I can run it successfully now, but every launch takes a very long time and debugging means a long wait. Is this because the weights are so large?

About reproducing the results

Can I obtain the unsupervised results on the MVTec dataset by directly running the train_mvtec.sh script? The accuracy I reproduced is only 84.

Forward-pass issue in web_demo

In the forward pass of web_demo, when asking which category the image belongs to, using "object" as the c_name for text encoding raises an error. How should I adjust this class_name so the forward-pass test does not fail?

Few shot training configuration

Hi,
Thanks for the great work. The demo looks amazing. For the few-shot experiment, I would like to ask whether the decoder and LLM are trained on the entire MVTec dataset or just on the sampled few-shot examples (e.g. 30 images for 2-shot)? If it is the latter, could you provide the checkpoints for each shot (e.g. k = 1, 2, 4)?
Thank you.

Problem generating vicuna_ckpt

Hello, and thank you for providing the code.

When I combine the weights following your instructions, the generated weight files have the .safetensors suffix rather than .bin. My directory is shown below:

What could be causing this?

Training error: [ERROR] [launch.py:320:sigkill_handler]

Hello, I ran into the following problem during training. How should I solve it? Thanks.

[email protected]:/workspace/AnomalyGPT/code$ bash ./scripts/train_mvtec.sh
Setting ds_accelerator to cuda (auto detect)
[2023-11-13 16:57:43,920] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-11-13 16:57:43,966] [INFO] [runner.py:555:main] cmd = /opt/conda/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=28400 --en
able_each_rank_log=None train_mvtec.py --model openllama_peft --stage 1 --imagebind_ckpt_path ../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth --vicuna_ckpt_path ../pretrained_ckpt/vicuna_ckpt/7b_v0/
--delta_ckpt_path ../pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt --max_tgt_len 1024 --data_path ../data/pandagpt4_visual_instruction_data.json --image_root_path ../data/images/ --save_path ./ckpt/tr
ain_mvtec/ --log_path ./ckpt/train_mvtec/log_rest/
Setting ds_accelerator to cuda (auto detect)
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.13.4-1+cuda11.7
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.13.4-1
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NCCL_VERSION=2.13.4-1
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.13.4-1+cuda11.7
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.13.4-1+cuda11.7 [15/1862]
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2023-11-13 16:57:45,547] [INFO] [launch.py:138:main] 0 NV_LIBNCCL_PACKAGE_VERSION=2.13.4-1
[2023-11-13 16:57:45,547] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2023-11-13 16:57:45,547] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0
[2023-11-13 16:57:45,547] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2023-11-13 16:57:45,547] [INFO] [launch.py:163:main] dist_world_size=2
[2023-11-13 16:57:45,547] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1

Setting ds_accelerator to cuda (auto detect)
Setting ds_accelerator to cuda (auto detect)
/opt/conda/lib/python3.10/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules di
rectly from transformers.integrations
warnings.warn(
/opt/conda/lib/python3.10/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules di
rectly from transformers.integrations
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in th
e future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in th
e future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in t
he future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in t
he future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[2023-11-13 16:57:53,230] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-11-13 16:57:53,230] [INFO] [comm.py:594:init_distributed] cdb=None
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[2023-11-13 16:57:53,259] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-11-13 16:57:53,259] [INFO] [comm.py:594:init_distributed] cdb=None
[2023-11-13 16:57:53,259] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[!] collect 161151 samples for training
[!] collect 161151 samples for training
Initializing visual encoder from ../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth ...
Initializing visual encoder from ../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth ...
Visual encoder initialized.
Initializing language decoder from ../pretrained_ckpt/vicuna_ckpt/7b_v0/ ...
Visual encoder initialized.
Initializing language decoder from ../pretrained_ckpt/vicuna_ckpt/7b_v0/ ...
[2023-11-13 16:59:54,760] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 9439
[2023-11-13 16:59:54,783] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 9440
[2023-11-13 16:59:55,650] [ERROR] [launch.py:320:sigkill_handler] ['/opt/conda/bin/python', '-u', 'train_mvtec.py', '--local_rank=1', '--model', 'openllama_peft', '--stage', '1', '--imagebind_ckpt_path',
'../pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth', '--vicuna_ckpt_path', '../pretrained_ckpt/vicuna_ckpt/7b_v0/', '--delta_ckpt_path', '../pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt', '--max_t
gt_len', '1024', '--data_path', '../data/pandagpt4_visual_instruction_data.json', '--image_root_path', '../data/images/', '--save_path', './ckpt/train_mvtec/', '--log_path', './ckpt/train_mvtec/log_rest/'
] exits with return code = -9

Why do we need text and LLM?

Hello! I have read your code and found that the anomaly map in test_mvtec.py is entirely based on calculating cosine similarity with the few-shot normal samples. The calculation of Image AUC and Pixel AUC is also based on this anomaly map. Since this alone already achieves anomaly detection, why do we still employ the text encoder and LLaMA?

Error when training with train_all_supervised_cn.sh

It appears the data is missing:

data = all_supervised_with_cn.SupervisedDataset('../data/all_anomalygpt')

What format should the all_anomalygpt data be in? Should the mvtec_anomaly_detection and VisA folders be placed under it? Thanks!

anomaly image-text data

Hello, does the "anomaly image-text data" mentioned in the paper refer to textual descriptions of the MVTec-AD and VisA datasets? I could not find a description file like pandagpt4_visual_instruction_data.json in those datasets.

Bumping a question

When running bash ./scripts/train_mvtec.sh, I get NameError: name 'LoraConfig' is not defined. But isn't LoraConfig already imported in header.py? #17

How should the datasets be placed?

As I understand it, both the MVTec-AD dataset and the PandaGPT pre-training data need to be downloaded, so how should they be placed? I arranged them as below, but it does not seem right during training. Any guidance would be appreciated, thanks!
data
|---pandagpt4_visual_instruction_data.json
|---images
|-----|----mvtec_anomaly_detection
|----------|-- bottle
|----------|-----|---- ground_truth
|----------|-----|----- test
|----------|-----|----- train
|-----|----images
|-----|-----|----- 000000124934.jpg
|-----|-----|----- 000000257414.jpg

About the experimental results in the paper

Dear authors:
Hello! In your model architecture, everything apart from the Prompt Learner and the LLM is the same as in APRIL-GAN. Why do the Table 2 results include no comparison with APRIL-GAN? On the surface, the measured metrics have no direct relation to the modules you added, so they should seemingly match APRIL-GAN's numbers, yet they are higher than that paper's. What is the reason for this, or could you explain it more clearly? Thanks.

About Issue #9

Why is issue #9 closed? I think the response is good and should be seen by more people to avoid repeated questions.

terminate called after throwing an instance of 'c10::Error'

How can I solve the error in the title, which occurs during training?
Traceback (most recent call last):
File "/workspace/AnomalyGPT/code/train_mvtec.py", line 149, in <module>
main(args)
File "/workspace/AnomalyGPT/code/train_mvtec.py", line 124, in main
agent.train_model(
File "/workspace/AnomalyGPT/code/model/agent.py", line 84, in train_model
self.ds_engine.step()
File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2041, in step
self.tput_timer.stop(global_step=self.is_gradient_accumulation_boundary(), report_speed=report_progress)
File "/opt/conda/lib/python3.10/site-packages/deepspeed/utils/timer.py", line 191, in stop
get_accelerator().synchronize()
File "/opt/conda/lib/python3.10/site-packages/deepspeed/accelerator/cuda_accelerator.py", line 63, in synchronize
return torch.cuda.synchronize(device_index)
File "/opt/conda/lib/python3.10/site-packages/torch/cuda/init.py", line 566, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
[!] loss: 1.1797; token_acc: 68.0: 23%|███████████████████████████▎ | 20490/90725 [3:23:40<11:38:10, 1.68it/s]
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:31 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f9246211457 in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f92461db3ec in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(std::string const&, std::string const&, int, bool) + 0xb4 (0x7f927127dc64 in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: + 0x1e0dc (0x7f92712550dc in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x244 (0x7f9271258054 in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: + 0x4d7d63 (0x7f929c148d63 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #6: c10::TensorImpl::~TensorImpl() + 0x1a0 (0x7f92461f19e0 in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f92461f1af9 in /opt/conda/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #8: + 0x735788 (0x7f929c3a6788 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #9: THPVariable_subclass_dealloc(_object*) + 0x2a5 (0x7f929c3a6a75 in /opt/conda/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #10: /opt/conda/bin/python() [0x4e9dc8]
frame #11: /opt/conda/bin/python() [0x4df132]
frame #12: _PyModule_ClearDict + 0x14d (0x55a86d in /opt/conda/bin/python)
frame #13: /opt/conda/bin/python() [0x5c49a3]
frame #14: Py_FinalizeEx + 0x143 (0x5c3433 in /opt/conda/bin/python)
frame #15: Py_RunMain + 0x109 (0x5b5229 in /opt/conda/bin/python)
frame #16: Py_BytesMain + 0x39 (0x585639 in /opt/conda/bin/python)
frame #17: __libc_start_main + 0xf3 (0x7f92bdf1a083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #18: /opt/conda/bin/python() [0x5854ee]

What is the meaning of 'WIDTH_BOUNDS_PCT', 'NUM_PATCHES', 'INTENSITY_LOGISTIC_PARAMS' and 'BACKGROUND' in the file of 'mvtec.py'?

The assignments of these four variables in the code are as follows:
WIDTH_BOUNDS_PCT = {'bottle':((0.03, 0.4), (0.03, 0.4)), 'cable':((0.05, 0.4), (0.05, 0.4)), 'capsule':((0.03, 0.15), (0.03, 0.4)),
'hazelnut':((0.03, 0.35), (0.03, 0.35)), 'metal_nut':((0.03, 0.4), (0.03, 0.4)), 'pill':((0.03, 0.2), (0.03, 0.4)),
'screw':((0.03, 0.12), (0.03, 0.12)), 'toothbrush':((0.03, 0.4), (0.03, 0.2)), 'transistor':((0.03, 0.4), (0.03, 0.4)),
'zipper':((0.03, 0.4), (0.03, 0.2)),
'carpet':((0.03, 0.4), (0.03, 0.4)), 'grid':((0.03, 0.4), (0.03, 0.4)),
'leather':((0.03, 0.4), (0.03, 0.4)), 'tile':((0.03, 0.4), (0.03, 0.4)), 'wood':((0.03, 0.4), (0.03, 0.4))}
NUM_PATCHES = {'bottle':3, 'cable':3, 'capsule':3, 'hazelnut':3, 'metal_nut':3,
'pill':3, 'screw':4, 'toothbrush':3, 'transistor':3, 'zipper':4,
'carpet':4, 'grid':4, 'leather':4, 'tile':4, 'wood':4}

# (k, x0) pairs

INTENSITY_LOGISTIC_PARAMS = {'bottle':(1/12, 24), 'cable':(1/12, 24), 'capsule':(1/2, 4), 'hazelnut':(1/12, 24), 'metal_nut':(1/3, 7),
'pill':(1/3, 7), 'screw':(1, 3), 'toothbrush':(1/6, 15), 'transistor':(1/6, 15), 'zipper':(1/6, 15),
'carpet':(1/3, 7), 'grid':(1/3, 7), 'leather':(1/3, 7), 'tile':(1/3, 7), 'wood':(1/6, 15)}

# (brightness, threshold) pairs

BACKGROUND = {'bottle':(200, 60), 'screw':(200, 60), 'capsule':(200, 60), 'zipper':(200, 60),
'hazelnut':(20, 20), 'pill':(20, 20), 'toothbrush':(20, 20), 'metal_nut':(20, 20)}
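
These names are consistent with NSA-style anomaly simulation code, which this file appears to borrow from; on that (unconfirmed) reading, WIDTH_BOUNDS_PCT bounds the width/height of pasted patches as fractions of the image size, NUM_PATCHES caps how many patches are pasted per image, INTENSITY_LOGISTIC_PARAMS gives the (k, x0) of a logistic curve that maps a patch's intensity change to a soft anomaly label, and BACKGROUND gives (brightness, threshold) pairs for masking out the image background. The logistic part would look like:

import math

# Assumed use of INTENSITY_LOGISTIC_PARAMS: a logistic curve turning the
# mean intensity change of a pasted patch into a soft label in (0, 1).
def soft_label(intensity_change, k, x0):
    return 1.0 / (1.0 + math.exp(-k * (intensity_change - x0)))

k, x0 = 1 / 12, 24  # e.g. the 'bottle' entry above
print(soft_label(24, k, x0))  # 0.5 at the midpoint x0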

Out-of-memory error, and multi-GPU parallelism did not help

Hello, authors.
delta_ckpt = torch.load(args['delta_ckpt_path'], map_location=torch.device('cpu'))
model.load_state_dict(delta_ckpt, strict=False)
delta_ckpt = torch.load(args['anomalygpt_ckpt_path'], map_location=torch.device('cpu'))
model.load_state_dict(delta_ckpt, strict=False)
print(device)
model = model.eval().half().to(device)
On the last line, when the model is moved to the GPU, memory runs out; even with multiple GPUs configured, only a single 10.76 GB GPU is used. Could you suggest a solution or an approach? Thanks.

Training error

Hello, I got the following error during training. Could you help take a look?
bash ./scripts/train_mvtec.sh
Setting ds_accelerator to cuda (auto detect)
[2023-10-18 10:51:44,336] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-10-18 10:51:44,350] [INFO] [runner.py:555:main] cmd = /home/witai4090/anaconda3/envs/zkh/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=28400 --enable_each_rank_log=None train_mvtec.py --model openllama_peft --stage 1 --imagebind_ckpt_path /home/witai4090/data/zkh/AnomalyGPT-main/pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth --vicuna_ckpt_path /home/witai4090/data/zkh/AnomalyGPT-main/vicuna_ckpt/7b_v0/ --delta_ckpt_path /home/witai4090/data/zkh/AnomalyGPT-main/pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt --max_tgt_len 1024 --data_path /home/witai4090/data/zkh/AnomalyGPT-main/data/pandagpt4_visual_instruction_data.json --image_root_path /home/witai4090/data/zkh/AnomalyGPT-main/data/images/ --save_path /home/witai4090/data/zkh/AnomalyGPT-main/code/ckpt/train_mvtec/ --log_path /home/witai4090/data/zkh/AnomalyGPT-main/code/ckpt/train_mvtec/log_rest/
Setting ds_accelerator to cuda (auto detect)
[2023-10-18 10:51:45,133] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2023-10-18 10:51:45,133] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0
[2023-10-18 10:51:45,133] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2023-10-18 10:51:45,133] [INFO] [launch.py:163:main] dist_world_size=2
[2023-10-18 10:51:45,133] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1
Setting ds_accelerator to cuda (auto detect)
Setting ds_accelerator to cuda (auto detect)
/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations
warnings.warn(
/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations
warnings.warn(
/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[!] load base configuration: config/base.yaml
[!] load configuration from config/openllama_peft.yaml
[2023-10-18 10:51:49,415] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-10-18 10:51:49,415] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-10-18 10:51:49,415] [INFO] [comm.py:594:init_distributed] cdb=None
[2023-10-18 10:51:49,415] [INFO] [comm.py:594:init_distributed] cdb=None
[2023-10-18 10:51:49,415] [INFO] [comm.py:625:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[!] collect 161151 samples for training
[!] collect 161151 samples for training
Initializing visual encoder from /home/witai4090/data/zkh/AnomalyGPT-main/pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth ...
Initializing visual encoder from /home/witai4090/data/zkh/AnomalyGPT-main/pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth ...
Visual encoder initialized.
Initializing language decoder from /home/witai4090/data/zkh/AnomalyGPT-main/vicuna_ckpt/7b_v0/ ...
Visual encoder initialized.
Initializing language decoder from /home/witai4090/data/zkh/AnomalyGPT-main/vicuna_ckpt/7b_v0/ ...
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:28<00:00, 14.10s/it]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:28<00:00, 14.15s/it]
trainable params: 33554432 || all params: 6771970048 || trainable%: 0.49548996469513035
trainable params: 33554432 || all params: 6771970048 || trainable%: 0.49548996469513035
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in huggingface/transformers#24565
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in huggingface/transformers#24565
Language decoder initialized.
Language decoder initialized.
[2023-10-18 10:54:12,519] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.3, git-hash=unknown, git-branch=unknown
[2023-10-18 10:54:12,519] [INFO] [comm.py:619:init_distributed] Distributed backend already initialized
[2023-10-18 10:54:34,114] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Using /home/witai4090/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Using /home/witai4090/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Traceback (most recent call last):
File "train_mvtec.py", line 152, in
main(**args)
File "train_mvtec.py", line 115, in main
agent = load_model(args)
File "/home/witai4090/data/zkh/AnomalyGPT-main/code/model/init.py", line 10, in load_model
agent = globals()[agent_name](model, args)
File "/home/witai4090/data/zkh/AnomalyGPT-main/code/model/agent.py", line 29, in init
self.ds_engine, self.optimizer, _ , _ = deepspeed.initialize(
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/init.py", line 165, in initialize
engine = DeepSpeedEngine(args=args,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 309, in init
self._configure_optimizer(optimizer, model_parameters)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1174, in _configure_optimizer
basic_optimizer = self._configure_basic_optimizer(model_parameters)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1230, in _configure_basic_optimizer
optimizer = DeepSpeedCPUAdam(model_parameters,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in init
self.ds_opt_adam = CPUAdamBuilder().load()
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load
return self.jit_load(verbose)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load
op_module = load(name=self.name,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1284, in load
return _jit_compile(
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1597, in _write_ninja_file_and_build_library
get_compiler_abi_compatibility_and_version(compiler)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 336, in get_compiler_abi_compatibility_and_version
if not check_compiler_ok_for_platform(compiler):
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 290, in check_compiler_ok_for_platform
which = subprocess.check_output(['which', compiler], stderr=subprocess.STDOUT)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['which', 'c++']' returned non-zero exit status 1.
Loading extension module cpu_adam...
Traceback (most recent call last):
File "train_mvtec.py", line 152, in
main(**args)
File "train_mvtec.py", line 115, in main
agent = load_model(args)
File "/home/witai4090/data/zkh/AnomalyGPT-main/code/model/init.py", line 10, in load_model
agent = globals()[agent_name](model, args)
File "/home/witai4090/data/zkh/AnomalyGPT-main/code/model/agent.py", line 29, in init
self.ds_engine, self.optimizer, _ , _ = deepspeed.initialize(
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/init.py", line 165, in initialize
engine = DeepSpeedEngine(args=args,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 309, in init
self._configure_optimizer(optimizer, model_parameters)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1174, in _configure_optimizer
basic_optimizer = self._configure_basic_optimizer(model_parameters)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1230, in _configure_basic_optimizer
optimizer = DeepSpeedCPUAdam(model_parameters,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/adam/cpu_adam.py", line 94, in init
self.ds_opt_adam = CPUAdamBuilder().load()
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load
return self.jit_load(verbose)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load
op_module = load(name=self.name,
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1284, in load
return _jit_compile(
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1534, in _jit_compile
return _import_module_from_library(name, build_directory, is_python_module)
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1936, in _import_module_from_library
module = importlib.util.module_from_spec(spec)
File "", line 556, in module_from_spec
File "", line 1101, in create_module
File "", line 219, in _call_with_frames_removed
ImportError: /home/witai4090/.cache/torch_extensions/py38_cu117/cpu_adam/cpu_adam.so: cannot open shared object file: No such file or directory
Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f880d33daf0>
Traceback (most recent call last):
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
self.ds_opt_adam.destroy_adam(self.opt_id)
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f5f1d8fdaf0>
Traceback (most recent call last):
File "/home/witai4090/anaconda3/envs/zkh/lib/python3.8/site-packages/deepspeed/ops/adam/cpu_adam.py", line 102, in __del__
self.ds_opt_adam.destroy_adam(self.opt_id)
AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam'
[2023-10-18 10:54:43,332] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 785186
[2023-10-18 10:54:43,379] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 785187
[2023-10-18 10:54:43,382] [ERROR] [launch.py:320:sigkill_handler] ['/home/witai4090/anaconda3/envs/zkh/bin/python', '-u', 'train_mvtec.py', '--local_rank=1', '--model', 'openllama_peft', '--stage', '1', '--imagebind_ckpt_path', '/home/witai4090/data/zkh/AnomalyGPT-main/pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth', '--vicuna_ckpt_path', '/home/witai4090/data/zkh/AnomalyGPT-main/vicuna_ckpt/7b_v0/', '--delta_ckpt_path', '/home/witai4090/data/zkh/AnomalyGPT-main/pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt', '--max_tgt_len', '1024', '--data_path', '/home/witai4090/data/zkh/AnomalyGPT-main/data/pandagpt4_visual_instruction_data.json', '--image_root_path', '/home/witai4090/data/zkh/AnomalyGPT-main/data/images/', '--save_path', '/home/witai4090/data/zkh/AnomalyGPT-main/code/ckpt/train_mvtec/', '--log_path', '/home/witai4090/data/zkh/AnomalyGPT-main/code/ckpt/train_mvtec/log_rest/'] exits with return code = 1

Suggestion: add a Baidu Netdisk mirror with all the models

1. The model file instructions are currently scattered, and you still have to find the right directory for each file.
2. You know what GitHub access is like from within China.

Suggestion: add a Baidu Netdisk mirror and put all the models there. Create the corresponding directory structure, place the models inside, and upload the whole directory to the netdisk. That way users can download it and use it directly.

LLM generation issue

How do the authors constrain the LLM's generation format? Is it through guided generation? In my unsupervised experiments, I observed that the LLM does not reliably answer yes or no given the prompts in the code.

Training error

During training I get the error No slot '5' specified on host localhost. Does anyone know how to solve this?

Training error

Hello, training fails partway through with TypeError: cannot pickle 'torch._C._distributed_c10d.ProcessGroup' object.
It looks like a distributed-training issue. Could you advise what causes it?

Anomaly Simulation

Hello, I would like to ask where the simulated anomaly data from the Anomaly Simulation part is reflected in the framework and code.

Training time

Hello! How long does it take to train AnomalyGPT on two RTX 3090s?

resume from ckpt file during training

Hi! Is there a particular way to load the trainable parameters and resume training from the ckpt file?
I already tried to initialize the model with some extra lines of code that you use during inference (i.e. test_visa.py), but there isn't an entire state_dict stored there, only the specific values that pass the requires_grad condition and are therefore trainable.


# load the delta parameters
checkpoint_path = f"{args['save_path']}/pytorch_model.pt"
if os.path.exists(checkpoint_path):
    model = OpenLLAMAPEFTModel(**args)
    checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))
    model.load_state_dict(checkpoint, strict=False)
    tokenizer = AutoTokenizer.from_pretrained(args['save_path'])
    config = AutoConfig.from_pretrained(args['save_path'])

    start_epoch = checkpoint['epoch']
    current_step = checkpoint['step']
else:
    start_epoch = 0
    current_step = 0

Any idea?
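
One possible workaround (a sketch, not a supported feature of the repository): since the saved pytorch_model.pt holds only the trainable parameters, keep the epoch/step counters in a small sidecar file of your own:

import os
import torch

state_path = f"{args['save_path']}/train_state.pt"  # hypothetical sidecar file

# at save time, wherever the delta weights are written:
torch.save({'epoch': epoch, 'step': current_step}, state_path)

# at resume time:
if os.path.exists(state_path):
    state = torch.load(state_path, map_location='cpu')
    start_epoch, current_step = state['epoch'], state['step']
else:
    start_epoch, current_step = 0, 0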

How to use LLaMA 2

Dear authors:
Hello!
I cannot download LLaMA. Using LLaMA 2 should also work, right? But then the Vicuna version also has to be the one corresponding to LLaMA 2, correct? I found that lmsys/vicuna-7b-v1.5 is fine-tuned from Llama 2, so can I download that version (https://huggingface.co/lmsys/vicuna-7b-v1.5)? With vicuna-7b-v1.5 there should be no need to download the delta weights of Vicuna or to combine the weights, since it already is the complete Vicuna weights, right? I would just put the vicuna-7b-v1.5 weights under /vicuna_ckpt/7b_v0/ and follow the other steps in your instructions. Would that work?
Thanks for your reply!

Dimension mismatch at runtime

Hello, why does LlamaForCausalLM.from_pretrained(vicuna_ckpt_path) report a dimension mismatch error during the forward pass? I am running the standard dataset. Also, could you bring the online demo back up? It appears to be down.

Test results

Hello, I trained the model twice and obtained different weights, but the test results are exactly the same. What could be the cause?
