Whole-slide CNN Training Pipeline

This repository provides scripts to reproduce the results in the paper "An annotation-free whole-slide training approach to pathological classification of lung cancer types using deep learning", including model training, inference, visualization, and statistics calculation. The pipeline is also seamlessly adaptable to other pathological cases by simply creating new configuration files.

Publication

Chen, CL., Chen, CC., Yu, WH. et al. An annotation-free whole-slide training approach to pathological classification of lung cancer types using deep learning. Nat Commun 12, 1193 (2021). https://doi.org/10.1038/s41467-021-21467-y

Chuang, WY., Chen, CC., Yu, WH. et al. Identification of nodal micrometastasis in colorectal cancer using deep learning on annotation-free whole-slide images. Mod Pathol (2021). https://doi.org/10.1038/s41379-021-00838-2

License

Copyright (C) 2021 aetherAI Co., Ltd. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).

TCGA Pre-trained Model

A reference pre-trained weight for lung cancer type classification is now available at https://drive.google.com/file/d/1XuONWICAzJ-cUKjC7uHLS0YLJhbLRoo1/view?usp=sharing.

The model was trained on the TCGA-LUAD and TCGA-LUSC diagnostic slides specified in data_configs/pure_tcga/train_pure_tcga.csv, using the config train_configs/pure_tcga/config_pure_tcga_wholeslide_4x.yaml. Since these datasets contain no normal lung slides, the model predicts a slide as either adenocarcinoma (class_id=1) or squamous cell carcinoma (class_id=2). The prediction scores for normal (class_id=0) should be ignored.

Validation results (n = 192) on data_configs/pure_tcga/val_pure_tcga.csv are listed as follows.

  • AUC (LUAD vs LUSC) = 0.9794 (95% CI: 0.9635-0.9953)
  • Accuracy (LUAD vs LUSC) = 0.9323 (95% CI: 0.8876-0.9600, at threshold 0.7 for class 1 and 0.3 for class 2)
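
For illustration only, a LUAD-vs-LUSC call could be derived from the three-class scores as sketched below. This mirrors the thresholds quoted above but is not the authors' evaluation code; the scores and variable names are hypothetical.

# Hypothetical per-slide prediction scores: [normal, LUAD, LUSC].
scores = [0.02, 0.75, 0.23]
luad_score, lusc_score = scores[1], scores[2]  # class_id=0 (normal) is ignored

# Thresholds quoted above: 0.7 for class 1 (LUAD), 0.3 for class 2 (LUSC).
if luad_score >= 0.7:
    call = "LUAD"
elif lusc_score >= 0.3:
    call = "LUSC"
else:
    call = "indeterminate"
print(call)  # -> "LUAD"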

Requirements

Hardware Requirements

Make sure the system has an adequate amount of main memory (minimum: 256 GB, recommended: 512 GB) to prevent out-of-memory errors. If you want to experiment with less concern for model accuracy, setting a lower resize ratio and image size in the configuration can drastically reduce memory consumption, making the pipeline friendlier to limited computing resources.
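
As a rough illustration of why lowering the input size helps (a back-of-the-envelope sketch, not a formal memory model of this pipeline), the input tensor alone grows quadratically with image size, and intermediate activations cost many times more:

# Back-of-the-envelope size of a single whole-slide input tensor.
height = width = 21500            # e.g. a whole-slide INPUT_SIZE at 1x magnification
channels = 3                      # RGB
bytes_per_value = 2               # float16 under mixed precision
gib = height * width * channels * bytes_per_value / 2**30
print(f"{gib:.1f} GiB")           # ~2.6 GiB for the input alone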

Packages

The code has been tested on Ubuntu 18.04 / CentOS 7.5 with Python 3.7.3, CUDA 10.0, cuDNN 7.6, and Open MPI 4.0.1. Several Python packages must be installed before running the scripts; refer to requirements.txt for the full list. Installing these packages should take a few minutes.

Usage

1. Define Datasets

To initiate a training task, several CSV files (e.g. train.csv, val.csv, and test.csv) should be prepared to define the training, validation, and testing datasets.

These CSV files should follow the format:

[slide_name_1],[class_id_1]
[slide_name_2],[class_id_2]
...

where [slide_name_*] specifies the filename (without extension) of a slide image and [class_id_*] is an integer indicating the slide-level label (e.g. 0 for normal, 1 for cancerous).
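
Such files can also be generated programmatically. The sketch below assumes a hypothetical slide directory and label mapping and writes a train.csv in this format:

import csv
from pathlib import Path

SLIDE_DIR = Path("/path/to/slides")              # hypothetical slide directory
LABELS = {"slide_0001": 0, "slide_0002": 1}      # hypothetical slide-level labels

with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for slide_path in sorted(SLIDE_DIR.glob("*.ndpi")):
        name = slide_path.stem                   # filename without extension
        writer.writerow([name, LABELS[name]])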

The dataset-defining CSV files for our experiments are placed in data_configs/.

2. Set Up Training Configurations

Model hyper-parameters are set up in a YAML file.

For convenience, you can copy one from train_configs/ (e.g. train_configs/config_wholeslide_2x.yaml) and make modifications for your own recipe.

The following table describes each field in a train_config.

Field Description
RESULT_DIR Directory to store outputs, including model weights, testing results, etc.
MODEL_PATH Path to store the model weight. (default: ${RESULT_DIR}/model.h5)
LOAD_MODEL_BEFORE_TRAIN Whether to load the model weight before training. (default: False)
CONFIG_RECORD_PATH Path to back up this config file. (default: ${RESULT_DIR}/config.yaml)
USE_MIXED_PRECISION Whether to enable mixed precision training.
USE_HMS Whether to enable whole-slide training by optimized unified memory.
USE_MIL Whether to use MIL for training.
TRAIN_CSV_PATH CSV file defining the training dataset.
VAL_CSV_PATH CSV file defining the validation dataset.
TEST_CSV_PATH CSV file defining the testing dataset.
SLIDE_DIR Directory containing all the slide image files (can be soft links).
SLIDE_FILE_EXTENSION File extension. (e.g. ".ndpi", ".svs")
SLIDE_READER Library to read slides. (default: openslide)
RESIZE_RATIO Resize ratio for downsampling slide images.
INPUT_SIZE Size of model inputs in [height, width, channels]. Resized images are padded or cropped to this size. Try decreasing this field when main memory is limited.
MODEL Model architecture to use. One of fixup_resnet50, fixup_resnet34 and resnet34.
NUM_CLASSES Number of classes.
BATCH_SIZE Number of slides processed in each training iteration for each MPI worker. (default: 1)
EPOCHS Maximal number of training epochs.
NUM_UPDATES_PER_EPOCH Number of iterations per epoch.
INIT_LEARNING_RATE Initial learning rate for Adam optimizer.
POOL_USE Global pooling method in ResNet. One of gmp and gap.
REDUCE_LR_FACTOR Factor by which the learning rate is reduced when the validation loss has not improved for several consecutive epochs.
REDUCE_LR_PATIENCE Number of consecutive epochs without validation loss improvement before the learning rate is reduced.
TIME_RECORD_PATH Path to store a CSV file recording per-iteration training time.
TEST_TIME_RECORD_PATH Path to store a CSV file recording per-iteration inference time.
TEST_RESULT_PATH Path to store the model predictions after testing in a JSON format. (default: ${RESULT_DIR}/test_result.json)
USE_TCGA_VAHADANE Whether to enable color normalization on TCGA images to TMUH color style. (default: False)
ENABLE_VIZ Whether to draw prediction maps when testing. (default: False)
VIZ_SIZE Size of the output prediction maps in [height, width].
VIZ_FOLDER Folder to store prediction maps. (default: ${RESULT_DIR}/viz)

The following fields are valid only when USE_MIL: True.

Field Description
MIL_PATCH_SIZE Patch size of the MIL model in [height, width].
MIL_INFER_BATCH_SIZE Batch size of the MIL inference pass that finds representative patches.
MIL_USE_EM Whether to use EM-MIL.
MIL_K Number of representative patches. (default: 1)
MIL_SKIP_WHITE Whether to skip white patches. (default: True)
POST_TRAIN_METHOD Patch aggregation method to use. One of svm, lr, maxfeat_rf, milrnn and "" (disable).
POST_TRAIN_MIL_PATCH_SIZE (As MIL_PATCH_SIZE, but for the patch aggregation training process.)
POST_TRAIN_INIT_LEARNING_RATE (As INIT_LEARNING_RATE, but for the patch aggregation training process.)
POST_TRAIN_REDUCE_LR_FACTOR (As REDUCE_LR_FACTOR, but for the patch aggregation training process.)
POST_TRAIN_REDUCE_LR_PATIENCE (As REDUCE_LR_PATIENCE, but for the patch aggregation training process.)
POST_TRAIN_EPOCHS (As EPOCHS, but for the patch aggregation training process.)
POST_TRAIN_NUM_UPDATES_PER_EPOCH (As NUM_UPDATES_PER_EPOCH, but for the patch aggregation training process.)
POST_TRAIN_MODEL_PATH Path to store patch aggregation model weights.
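
As an illustration of how a train_config is consumed, the sketch below loads such a YAML file and expands the ${RESULT_DIR} placeholders used by the default paths above. This mimics the convention shown in the tables; it is not the repository's actual config loader.

import yaml  # PyYAML

def load_train_config(path):
    """Load a train_config YAML and expand ${RESULT_DIR} placeholders."""
    with open(path) as f:
        config = yaml.safe_load(f)
    result_dir = config["RESULT_DIR"]
    return {
        key: value.replace("${RESULT_DIR}", result_dir) if isinstance(value, str) else value
        for key, value in config.items()
    }

config = load_train_config("train_configs/config_wholeslide_2x.yaml")
print(config["MODEL_PATH"])  # e.g. "<RESULT_DIR>/model.h5" with the placeholder expanded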

3. Train a Model

To train a model, simply run

python -m whole_slide_cnn.train --config YOUR_TRAIN_CONFIG.YAML [--continue_mode]

where --continue_mode is an optional flag that loads the saved model weights before training begins.

To enable multi-node, multi-GPU distributed training, simply add mpirun in front of the above command, e.g.

mpirun -np 4 -x CUDA_VISIBLE_DEVICES="0,1,2,3" python -m whole_slide_cnn.train --config YOUR_TRAIN_CONFIG.YAML
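
A multi-node variant uses standard Open MPI host options, for example (the hostnames and the 4-GPUs-per-node layout are hypothetical):

mpirun -np 8 -H node1:4,node2:4 -x CUDA_VISIBLE_DEVICES="0,1,2,3" python -m whole_slide_cnn.train --config YOUR_TRAIN_CONFIG.YAML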

Note that you should cd to the root folder of this repo before calling the above commands.

Typically, this step takes days to complete, depending on the available computing power. You can trace the progress in real time from the program output.

4. (Optional) Post-train Patch Aggregation Model for MIL

EM-MIL-SVM, EM-MIL-LR, MIL-RNN, and CNN-MaxFeat-based RF involve training a second, patch aggregation model, which requires running another script. Just like the command above, simply call

[mpirun ...] python -m whole_slide_cnn.post_train --config YOUR_TRAIN_CONFIG.YAML 

5. Evaluate the Model

To evaluate the model and optionally generate prediction heatmaps, call

[mpirun ...] python -m whole_slide_cnn.test --config YOUR_TRAIN_CONFIG.YAML

This command will generate a JSON file in the result directory named test_result.json by default. The file contains the model predictions for each testing slide.
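
The exact schema of test_result.json is defined by the repository and is not documented here. Purely as a sketch, assuming each entry were a hypothetical record with "label" and "scores" fields, a quick accuracy check might look like:

import json

# ASSUMED, hypothetical schema: a list of records such as
#   {"slide_name": "...", "label": 1, "scores": [0.1, 0.7, 0.2]}
# Inspect the actual file before relying on this.
with open("RESULT_DIR/test_result.json") as f:
    results = json.load(f)

correct = 0
for entry in results:
    scores = entry["scores"]
    pred = max(range(len(scores)), key=lambda i: scores[i])  # argmax over classes
    correct += int(pred == entry["label"])

print(f"accuracy = {correct / len(results):.4f}")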

To statistically analyze the results, some scripts are provided in tools/. See the following table for the usage of each tool.

Tool Description Example
tools/calc_auc.R Calculate AUC and CI. tools/calc_auc.R RESULT_DIR/test_result.json
tools/compare_auc.R Test whether the AUCs of two models differ significantly. tools/compare_auc.R RESULT_DIR_1/test_result.json RESULT_DIR_2/test_result.json
tools/draw_roc.py Draw the ROC diagram. python tools/draw_roc.py test_result.json:MODEL_NAME:#FF0000
tools/gen_bootstrap_aucs.R Generate 100 AUCs by bootstrapping. tools/gen_bootstrap_aucs.R RESULT_DIR/test_result.json

Note that these tools are currently tailored to lung cancer main-type classification and should be modified before being applied to your own tasks.

Data Availability

The slide data from TMUH, WFH, and SHH are not publicly available due to patient privacy constraints, but are available upon reasonable request from the corresponding authors Chao-Yuan Yeh or Cheng-Yu Chen. The slide data supporting the cross-site generalization capability in this study were obtained from TCGA via the Genomic Data Commons Data Portal (https://gdc.cancer.gov).

A dataset consisting of several slides from TCGA-LUAD and TCGA-LUSC is suitable for testing our pipeline on a small scale, after making the appropriate modifications to the configuration files described above.


Issues

Problem with train.py

Hello dear author,

I have tried your code, but I got

Process finished with exit code 137 (interrupted by signal 9: SIGKILL)

for this block in train.py:

if config["USE_MIL"]:
    model.fit(
        train_dataloader,
        workers=0,  # MIL dataloader should be in the main thread
        max_queue_size=1,
        use_multiprocessing=False,
        epochs=config["EPOCHS"],
        steps_per_epoch=config["NUM_UPDATES_PER_EPOCH"],
        validation_data=val_dataloader,
        callbacks=callbacks,
        shuffle=False,  # Shuffling is already done in dataloader
        verbose=(1 if is_rank0 else 0),
    )

Do you know how to solve this problem?

Thank you very much

Training problem when using HMS

My TensorFlow version is 2.4.1, and this is my training config:

RESULT_DIR: "result_wholeslide_1x"
MODEL_PATH: "${RESULT_DIR}/model.h5"
LOAD_MODEL_BEFORE_TRAIN: False
CONFIG_RECORD_PATH: "${RESULT_DIR}/config.yaml"

USE_MIXED_PRECISION: True
USE_HMS: True
USE_MIL: False

TRAIN_CSV_PATH: "/home/de1119151/PycharmProjects/whole-slide-cnn-main/slide_data_targos/Train_SKIN_TCGA.csv"
VAL_CSV_PATH: "/home/de1119151/PycharmProjects/whole-slide-cnn-main/slide_data_targos/Val_SKIN_TCGA.csv"
TEST_CSV_PATH: "/home/de1119151/PycharmProjects/whole-slide-cnn-main/slide_data_targos/Test_SKIN_TCGA.csv"
SLIDE_DIR: "/mnt/data/RawImages/HE_SKIN_WSI_TCGA/"
SLIDE_FILE_EXTENSION: ".svs"
SLIDE_READER: "openslide"
RESIZE_RATIO: 0.05 # 1x magnification for 20x WSIs
INPUT_SIZE: [21500, 21500, 3]

MODEL: "fixup_resnet50"
NUM_CLASSES: 3
BATCH_SIZE: 1
EPOCHS: 200
NUM_UPDATES_PER_EPOCH: 100
INIT_LEARNING_RATE: 0.00002
POOL_USE: "gmp"
REDUCE_LR_FACTOR: 0.1
REDUCE_LR_PATIENCE: 24
TIME_RECORD_PATH: "${RESULT_DIR}/time_record.csv"
TEST_TIME_RECORD_PATH: "${RESULT_DIR}/test_time_record.csv"

MIL_PATCH_SIZE: NULL
MIL_INFER_BATCH_SIZE: NULL
MIL_USE_EM: False
MIL_K: NULL
MIL_SKIP_WHITE: NULL

TEST_RESULT_PATH: "${RESULT_DIR}/test_result.json"
ENABLE_VIZ: False
VIZ_SIZE: [2150, 2150]
VIZ_FOLDER: "${RESULT_DIR}/viz"

DEBUG_PATH: NULL

I tried this config and got:
Traceback (most recent call last):
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/train.py", line 128, in <module>
    model = build_model(
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/model.py", line 129, in build_model
    conv_block = get_conv_block(input_shape)
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/model.py", line 85, in get_conv_block
    conv_block = model_fn(
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/model.py", line 26, in <lambda>
    "fixup_resnet50": lambda *args, **kwargs: ResNet50(
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/resnet.py", line 557, in ResNet50
    return ResNet(stack_fn, False, True, 'resnet50',
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/resnet.py", line 436, in ResNet
    x = _ZeroPadding2D(padding=((3, 3), (3, 3)), name='conv1_pad')(x)
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/huge_layer_wrapper.py", line 206, in __call__
    res = super(HugeLayerWrapper, self).__call__(inputs, **kwargs)
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1012, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/huge_layer_wrapper.py", line 267, in call
    output_tensor_list = self._do_padding(inputs, **kwargs)
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/whole_slide_cnn/huge_layer_wrapper.py", line 517, in _do_padding
    self.layer.compute_output_shape(self._get_shape(inputs)),
  File "/home/de1119151/PycharmProjects/whole-slide-cnn-main/venv/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py", line 2868, in compute_output_shape
    if input_shape[1] is not None:
IndexError: list index out of range

Process finished with exit code 1

Package installation error with Poetry

Hi,
I am trying to install the pipeline in a Python 3.9 conda environment. I followed these steps:
-> poetry install
It gives errors when installing the dependency packages.

Package operations: 35 installs, 7 updates, 0 removals

• Updating importlib-metadata (4.11.4 -> 6.0.0)
• Updating numpy (1.18.0 -> 1.18.5)
• Updating h5py (3.7.0 -> 2.10.0): Failed

CalledProcessError

Command '['/home/user/.conda/envs/whole2/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/home/user/.conda/envs/whole2', '--upgrade', '--no-deps', '/home/user/.cache/pypoetry/artifacts/67/73/11/59cd898ab55f65d9213500215b00e663dafb02beb2e48301f1ffc078ae/h5py-2.10.0.tar.gz']' returned non-zero exit status 1.

at ~/.conda/envs/whole2/lib/python3.9/subprocess.py:528 in run
524│ # We don't call process.wait() as .__exit__ does that for us.
525│ raise
526│ retcode = process.poll()
527│ if check and retcode:
→ 528│ raise CalledProcessError(retcode, process.args,
529│ output=stdout, stderr=stderr)
530│ return CompletedProcess(process.args, retcode, stdout, stderr)
531│
532│

The following error occurred when trying to handle this error:

EnvCommandError

Command ['/home/user/.conda/envs/whole2/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/home/user/.conda/envs/whole2', '--upgrade', '--no-deps', '/home/user/.cache/pypoetry/artifacts/67/73/11/59cd898ab55f65d9213500215b00e663dafb02beb2e48301f1ffc078ae/h5py-2.10.0.tar.gz'] errored with the following return code 1, and output:
Processing /home/user/.cache/pypoetry/artifacts/67/73/11/59cd898ab55f65d9213500215b00e663dafb02beb2e48301f1ffc078ae/h5py-2.10.0.tar.gz
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: h5py
Building wheel for h5py (pyproject.toml): started
Building wheel for h5py (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error

× Building wheel for h5py (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [109 lines of output]
    running bdist_wheel
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-cpython-39
    creating build/lib.linux-x86_64-cpython-39/h5py
    copying h5py/__init__.py -> build/lib.linux-x86_64-cpython-39/h5py
    copying h5py/h5py_warnings.py -> build/lib.linux-x86_64-cpython-39/h5py
    copying h5py/highlevel.py -> build/lib.linux-x86_64-cpython-39/h5py
    copying h5py/ipy_completer.py -> build/lib.linux-x86_64-cpython-39/h5py
    copying h5py/version.py -> build/lib.linux-x86_64-cpython-39/h5py
    creating build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/__init__.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/attrs.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/base.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/compat.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/dataset.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/datatype.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/dims.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/files.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/filters.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/group.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/selections.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/selections2.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    copying h5py/_hl/vds.py -> build/lib.linux-x86_64-cpython-39/h5py/_hl
    creating build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/__init__.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/common.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_attribute_create.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_attrs.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_attrs_data.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_base.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_completions.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_dataset.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_datatype.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_deprecation.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_dimension_scales.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_dtype.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_file.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_file2.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_file_image.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_filters.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_group.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_h5.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_h5f.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_h5p.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_h5pl.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_h5t.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_objects.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_selections.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_slicing.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    copying h5py/tests/test_threads.py -> build/lib.linux-x86_64-cpython-39/h5py/tests
    creating build/lib.linux-x86_64-cpython-39/h5py/tests/test_vds
    copying h5py/tests/test_vds/__init__.py -> build/lib.linux-x86_64-cpython-39/h5py/tests/test_vds
    copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-x86_64-cpython-39/h5py/tests/test_vds
    copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-x86_64-cpython-39/h5py/tests/test_vds
    copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-x86_64-cpython-39/h5py/tests/test_vds
    running build_ext
    Traceback (most recent call last):
      File "/home/user/.conda/envs/whole2/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
        main()
      File "/home/user/.conda/envs/whole2/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
        json_out['return_val'] = hook(**hook_input['kwargs'])
      File "/home/user/.conda/envs/whole2/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 249, in build_wheel
        return _build_backend().build_wheel(wheel_directory, config_settings,
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 413, in build_wheel
        return self._build_with_temp_dir(['bdist_wheel'], '.whl',
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 398, in _build_with_temp_dir
        self.run_setup()
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 484, in run_setup
        super(_BuildMetaLegacyBackend,
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 335, in run_setup
        exec(code, locals())
      File "<string>", line 140, in <module>
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
        return run_commands(dist)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
        dist.run_commands()
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
        self.run_command(cmd)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
        super().run_command(command)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
        cmd_obj.run()
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 325, in run
        self.run_command("build")
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
        self.distribution.run_command(command)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
        super().run_command(command)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
        cmd_obj.run()
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 132, in run
        self.run_command(cmd_name)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
        self.distribution.run_command(command)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
        super().run_command(command)
      File "/tmp/pip-build-env-empfbsy4/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
        cmd_obj.run()
      File "/tmp/pip-req-build-lrcwoa_e/setup_build.py", line 161, in run
        from Cython.Build import cythonize
    ModuleNotFoundError: No module named 'Cython'
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for h5py

Failed to build h5py
ERROR: Could not build wheels for h5py, which is required to install pyproject.toml-based projects

at ~/.conda/envs/whole2/lib/python3.9/site-packages/poetry/utils/env.py:1540 in run
1536│ output = subprocess.check_output(
1537│ command, stderr=subprocess.STDOUT, env=env, **kwargs
1538│ )
1539│ except CalledProcessError as e:
→ 1540│ raise EnvCommandError(e, input=input_)
1541│
1542│ return decode(output)
1543│
1544│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:

The following error occurred when trying to handle this error:

PoetryException

Failed to install /home/user/.cache/pypoetry/artifacts/67/73/11/59cd898ab55f65d9213500215b00e663dafb02beb2e48301f1ffc078ae/h5py-2.10.0.tar.gz

at ~/.conda/envs/whole2/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│

It would be great if you could help me fix this issue. I would really appreciate your help!

Thanks in advance

Where can I find the requirements.txt file?

The README says the full list of required libraries can be found in requirements.txt, but the file is nowhere to be found. Am I missing something, or is it genuinely missing? Sorry if this is a trivial issue.

Trained model availability?

Dear authors,

I was led here by your paper in Nature Communications. Is the model file available for a test on some in-house whole-slide images? Thanks!

Best regards,
QW
