
ACFormer

Code for Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection (ICCV 2023)

(Continually updating ...)

Overall Framework

Requirements

  • python=3.8
  • pytorch=1.12.1+cu102

Installation

Install mmcv using mim

pip install -U openmim
mim install mmcv-full==1.6.1

Clone the ACFormer repository

git clone https://github.com/LL3RD/ACFormer.git

Install

cd ACFormer
cd thirdparty/mmdetection 
python -m pip install -e .
cd ../.. 
python -m pip install -e .
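
Version mismatches are a common source of trouble with mmcv and PyTorch, so a quick sanity check of the pinned versions after installation can save debugging time. This is a minimal stdlib-only sketch; the expected version prefixes are simply the pins listed above:

```python
# Sanity-check the pinned versions after installation (illustrative sketch;
# prints a warning instead of raising when a package is missing).
import importlib

def check_version(pkg, expected_prefix):
    """Return True if `pkg` is importable and its __version__ starts with
    `expected_prefix`; otherwise print a diagnostic and return False."""
    try:
        mod = importlib.import_module(pkg)
        version = getattr(mod, "__version__", "unknown")
        ok = version.startswith(expected_prefix)
        print(f"{pkg} {version}: {'OK' if ok else 'expected ' + expected_prefix}")
        return ok
    except ImportError:
        print(f"{pkg} is not installed")
        return False

if __name__ == "__main__":
    check_version("torch", "1.12.1")   # pinned in Requirements
    check_version("mmcv", "1.6.1")     # pinned via mim above
```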

Dataset

Lizard Dataset

You can download the original Lizard dataset from the official website, or a preprocessed version that has been converted to the HoVer-Net CoNSeP format and split into patches.

CoNSeP Dataset

You can download the 20x CoNSeP dataset from here.

BRCA Dataset

You can download the BRCA dataset from the official website, or the preprocessed version.

Main Result

Lizard Dataset

Method F1d F1c Model Weights Config Files
ACFormer 0.782 0.557 Checkpoint Config

CoNSeP Dataset

Method F1d F1c Model Weights Config Files
ACFormer 0.739 0.613 Checkpoint Config

BRCA Dataset

Method F1d F1c Model Weights Config Files
ACFormer 0.796 0.485 Checkpoint Config

Train

For Lizard Dataset

First, download the preprocessed dataset, change the dataset path in configs/ACFormer_Lizard.py, and run

CUDA_VISIBLE_DEVICES=0 bash tools/dist_train.sh configs/ACFormer_Lizard.py 1 --work-dir=<path_to_save>

For your own dataset (e.g. CoNSeP 40x for three classes)

Prepare Dataset

First, install the sahi package.

cd tools/sahi
pip install -e .
cd ..

Then prepare the sliced dataset (modify the CoNSeP path in prepare_consep_dataset_40x.py):

python prepare_consep_dataset_40x.py
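
For orientation, the slicing step tiles each large image into overlapping fixed-size patches. The sketch below illustrates the coordinate arithmetic only; the patch size of 512 and 20% overlap are assumed values for illustration, not the repository's actual settings in prepare_consep_dataset_40x.py:

```python
# Illustrative sketch of patch slicing: compute the top-left coordinates of
# overlapping fixed-size patches covering an image, the way slicing tools
# such as sahi tile a slide into crops. Patch size and overlap are assumptions.
def slice_coords(width, height, patch=512, overlap_ratio=0.2):
    stride = int(patch * (1 - overlap_ratio))
    xs = list(range(0, max(width - patch, 0) + 1, stride))
    ys = list(range(0, max(height - patch, 0) + 1, stride))
    # Ensure the right and bottom borders are covered by a final patch.
    if xs[-1] + patch < width:
        xs.append(width - patch)
    if ys[-1] + patch < height:
        ys.append(height - patch)
    return [(x, y) for y in ys for x in xs]
```

For a 1000x1000 image with these defaults this yields a 3x3 grid of patch origins, with the last row and column shifted inward so every pixel is covered.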

Change the dataset path in configs/ACFormer_CoNSeP_40x.py and run

CUDA_VISIBLE_DEVICES=0 bash tools/dist_train.sh configs/ACFormer_CoNSeP_40x.py 1 --work-dir=<path_to_save>

Note that training ACFormer is not particularly stable: on small datasets, the AAT can be slow to converge.
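
Conceptually, the affine-consistency idea behind the method is that nuclei centers predicted on an affine-transformed view of an image should agree with the affine transform of the centers predicted on the original view. The sketch below is my reading of that idea for illustration only, not the repository's implementation; the function names and the L1 error are assumptions:

```python
# Conceptual sketch of affine consistency (illustration, not the repo's code):
# map the original predictions through the same affine transform used to
# augment the image, then measure how far they are from the predictions
# made directly on the augmented view.

def apply_affine(points, theta):
    """Apply a 2x3 affine matrix theta = ((a, b, tx), (c, d, ty)) to (x, y) points."""
    (a, b, tx), (c, d, ty) = theta
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

def consistency_error(points_orig, points_aug, theta):
    """Mean L1 distance between transformed original predictions and the
    predictions on the augmented view (assumes a 1:1 point correspondence)."""
    mapped = apply_affine(points_orig, theta)
    return sum(abs(mx - ax) + abs(my - ay)
               for (mx, my), (ax, ay) in zip(mapped, points_aug)) / len(mapped)
```

A perfectly consistent model drives this error to zero; in training, a differentiable version of such a term would penalize disagreement between the two views.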

Evaluation

Download the preprocessed dataset, modify the dataset and checkpoint paths in tools/inference_lizard.py (and the corresponding scripts for the other datasets), and run

python tools/inference_lizard.py
python tools/inference_consep.py
python tools/inference_brca.py
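
For reference, the detection score F1d reported in the tables above is typically computed by matching predicted nuclei centers to ground-truth centers within a distance threshold. The sketch below uses greedy nearest-neighbor matching and an assumed radius; the paper's exact matching protocol and threshold may differ:

```python
# Illustrative sketch of a detection F1 (F1d) metric for point-based nuclei
# detection: a prediction is a true positive when it lies within `radius`
# of a still-unmatched ground-truth point. Greedy matching and radius=6
# are assumptions, not the paper's exact protocol.
import math

def detection_f1(preds, gts, radius=6.0):
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best, best_d = None, radius
        for g in unmatched:
            d = math.dist(p, g)
            if d <= best_d:
                best, best_d = g, d
        if best is not None:
            unmatched.remove(best)  # each ground truth matches at most once
            tp += 1
    fp = len(preds) - tp
    fn = len(unmatched)
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```

The classification score F1c additionally requires the matched pair to carry the same cell-type label.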

Acknowledgement


Issues

Compatibility Issue with NVIDIA RTX 4060 and PyTorch: CUDA Capability sm_89 Not Supported and TypeError in MMCV Config

(ACFormer) silan@Qingbolan:~/MUST/2309/ACFormer$ CUDA_VISIBLE_DEVICES=0 bash tools/dist_train.sh configs/ACFormer_Lizard.py 1 --work-dir=./1.output/
/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/cuda/__init__.py:146: UserWarning:
NVIDIA GeForce RTX 4060 Laptop GPU with CUDA capability sm_89 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 4060 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
  File "tools/train.py", line 198, in <module>
    main()
  File "tools/train.py", line 137, in main
    cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/mmcv/utils/config.py", line 596, in dump
    f.write(self.pretty_text)
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/mmcv/utils/config.py", line 508, in pretty_text
    text, _ = FormatCode(text, style_config=yapf_style, verify=True)
TypeError: FormatCode() got an unexpected keyword argument 'verify'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 26049) of binary: /home/silan/develop/python/anaconda3/envs/ACFormer/bin/python
Traceback (most recent call last):
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/silan/develop/python/anaconda3/envs/ACFormer/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
tools/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-01_18:12:41
  host      : Qingbolan.
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 26049)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

sampler is not in the sampler registry issue

fatal: not a git repository (or any parent up to mount point /home)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
Traceback (most recent call last):
  File "tools/train.py", line 200, in <module>
    main()
  File "tools/train.py", line 188, in main
    train_detector(
  File "/home/data1/my/Project/nucleiDetCls/ACFormer/ssod/apis/train.py", line 70, in train_detector
    data_loaders = [
  File "/home/data1/my/Project/nucleiDetCls/ACFormer/ssod/apis/train.py", line 71, in <listcomp>
    build_dataloader(
  File "/home/data1/my/Project/nucleiDetCls/ACFormer/ssod/datasets/builder.py", line 68, in build_dataloader
    build_sampler(sampler_cfg, default_args=default_sampler_cfg)
  File "/home/data1/my/Project/nucleiDetCls/ACFormer/ssod/datasets/builder.py", line 40, in build_sampler
    return build_from_cfg(cfg, SAMPLERS, default_args)
  File "/home/my/.conda/envs/ACFormer/lib/python3.8/site-packages/mmcv/utils/registry.py", line 61, in build_from_cfg
    raise KeyError(
KeyError: 'sampler is not in the sampler registry'

I'm not very familiar with the mechanics of mmdetection, can you help me with this?

Cannot run: the environment was set up exactly by following the README, but training errors out at launch; the GPU is a Tesla T4

/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
usage: train.py [-h] [--work-dir WORK_DIR] [--resume-from RESUME_FROM] [--no-validate] [--gpus GPUS | --gpu-ids GPU_IDS [GPU_IDS ...]] [--seed SEED]
                [--deterministic] [--options OPTIONS [OPTIONS ...]] [--cfg-options CFG_OPTIONS [CFG_OPTIONS ...]]
                [--launcher {none,pytorch,slurm,mpi}] [--local_rank LOCAL_RANK]
                config
train.py: error: unrecognized arguments: to save
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 3059) of binary: /root/miniconda3/bin/python
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
tools/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-01-12_20:59:48
  host      : autodl-container-4edf11b9fa-bb697b37
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 3059)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
