
histocartography's Introduction


Documentation | Paper

Welcome to the histocartography repository! histocartography is a Python-based library designed to facilitate the development of graph-based computational pathology pipelines. The library includes plug-and-play modules to perform:

  • standard histology image pre-processing (e.g., stain normalization, nuclei detection, tissue detection)
  • entity-graph representation building (e.g., cell graph, tissue graph, hierarchical graph)
  • modeling with Graph Neural Networks (e.g., GIN, PNA)
  • feature-attribution-based graph interpretability techniques (e.g., GraphGradCAM, GraphGradCAM++, GNNExplainer)
  • visualization tools

All the functionalities are grouped under a user-friendly API.

If you encounter any issue or have questions regarding the library, feel free to open a GitHub issue. We'll do our best to address it.

Installation

PyPI installer (recommended)

pip install histocartography

Development setup

  • Clone the repo:
git clone https://github.com/histocartography/histocartography.git && cd histocartography
  • Create a conda environment:
conda env create -f environment.yml

NOTE: To use GPUs, install GPU-compatible PyTorch, Torchvision and DGL packages according to your OS, package manager, and CUDA version.

  • Activate it:
conda activate histocartography
  • Add histocartography to your python path:
export PYTHONPATH="<PATH>/histocartography:$PYTHONPATH"
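
To quickly confirm that GPU-enabled builds of PyTorch and DGL were picked up, the following optional sanity check can be run (a convenience only, not part of the official setup):

python -c "import torch, dgl; print(torch.__version__, torch.version.cuda, torch.cuda.is_available(), dgl.__version__)"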

Tests

To ensure proper installation, run the unit tests with:

python -m unittest discover -s test -p "test_*" -v

Running the tests on CPU can take up to 20 minutes.

Using histocartography

The histocartography library provides a set of helpers grouped into different modules, namely preprocessing, ml, visualization and interpretability.

For instance, in histocartography.preprocessing, building a cell-graph from an H&E image is as simple as:

>> import numpy as np
>> from PIL import Image
>> from histocartography.preprocessing import NucleiExtractor, DeepFeatureExtractor, KNNGraphBuilder
>> 
>> nuclei_detector = NucleiExtractor()
>> feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=72)
>> knn_graph_builder = KNNGraphBuilder(k=5, thresh=50, add_loc_feats=True)
>>
>> image = np.array(Image.open('docs/_static/283_dcis_4.png'))
>> nuclei_map, _ = nuclei_detector.process(image)
>> features = feature_extractor.process(image, nuclei_map)
>> cell_graph = knn_graph_builder.process(nuclei_map, features)
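
The returned cell_graph is a DGL graph, so it can be inspected with the usual DGL accessors (the 'feat' node-feature key below is an assumption about how the builder stores features):

>> print(cell_graph.number_of_nodes(), cell_graph.number_of_edges())
>> print(cell_graph.ndata['feat'].shape)  # node features; key name assumed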

The output can then be visualized with:

>> from histocartography.visualization import OverlayGraphVisualization, InstanceImageVisualization

>> visualizer = OverlayGraphVisualization(
...     instance_visualizer=InstanceImageVisualization(
...         instance_style="filled+outline"
...     )
... )
>> viz_cg = visualizer.process(
...     canvas=image,
...     graph=cell_graph,
...     instance_map=nuclei_map
... )
>> viz_cg.show()
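
The returned visualization behaves like a PIL image (it exposes show()), so it can presumably also be written to disk:

>> viz_cg.save('283_dcis_4_cell_graph.png')  # assumes a PIL-style save() method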

A set of examples illustrating the capabilities of the histocartography library is provided in examples. The examples will show you how to perform:

  • stain normalization with the Vahadane or Macenko algorithm
  • cell graph generation to transform an H&E image into a graph-based representation where nodes encode nuclei and edges encode nuclei-to-nuclei interactions. It includes nuclei detection with HoverNet pretrained on the PanNuke dataset, deep feature extraction and kNN graph building.
  • tissue graph generation to transform an H&E image into a graph-based representation where nodes encode tissue regions and edges encode tissue-to-tissue interactions. It includes tissue detection based on superpixels, deep feature extraction and RAG graph building (a sketch is given after this list).
  • feature cube extraction to extract deep representations of the individual patches depicting the image
  • cell graph explainer to generate an explanation that highlights salient nodes. It includes inference with a pretrained CG-GNN model followed by the GraphGradCAM explainer.
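
As a rough counterpart to the cell-graph snippet above, a tissue graph can be assembled from the superpixel, feature and RAG builders. The constructor arguments below are illustrative assumptions; check the tissue-graph example for the exact settings used:

>> from histocartography.preprocessing import ColorMergedSuperpixelExtractor, DeepFeatureExtractor, RAGGraphBuilder
>>
>> tissue_detector = ColorMergedSuperpixelExtractor(superpixel_size=500, compactness=20)  # parameters assumed
>> feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=144)      # patch size assumed
>> rag_graph_builder = RAGGraphBuilder(add_loc_feats=True)
>>
>> superpixels, _ = tissue_detector.process(image)
>> features = feature_extractor.process(image, superpixels)
>> tissue_graph = rag_graph_builder.process(superpixels, features)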

A tutorial with detailed descriptions and visualizations of some of the main functionalities is provided here as a notebook.

External Resources

Learn more about GNNs

  • We have prepared a gentle introduction to Graph Neural Networks. In this tutorial, you can find slides, notebooks and a set of reference papers.
  • For those of you interested in exploring Graph Neural Networks in depth, please refer to this content or this one.

Papers already using this library

  • Hierarchical Graph Representations for Digital Pathology, Pati et al., Medical Image Analysis, 2021. [pdf] [code]
  • Quantifying Explainers of Graph Neural Networks in Computational Pathology, Jaume et al., CVPR, 2021. [pdf] [code]
  • Learning Whole-Slide Segmentation from Inexact and Incomplete Labels using Tissue Graphs, Anklin et al., MICCAI, 2021. [pdf] [code]

If you use this library, please consider citing:

@inproceedings{jaume2021,
    title = {HistoCartography: A Toolkit for Graph Analytics in Digital Pathology},
    author = {Guillaume Jaume and Pushpak Pati and Valentin Anklin and Antonio Foncubierta and Maria Gabrani},
    booktitle = {MICCAI Workshop on Computational Pathology},
    pages = {117--128},
    year = {2021}
}

@article{pati2021,
    title = {Hierarchical Graph Representations for Digital Pathology},
    author = {Pushpak Pati and Guillaume Jaume and Antonio Foncubierta and Florinda Feroce and Anna Maria Anniciello and Giosuè Scognamiglio and Nadia Brancati and Maryse Fiche and Estelle Dubruc and Daniel Riccio and Maurizio Di Bonito and Giuseppe De Pietro and Gerardo Botti and Jean-Philippe Thiran and Maria Frucci and Orcun Goksel and Maria Gabrani},
    journal = {Medical Image Analysis},
    volume = {75},
    pages = {102264},
    year = {2021}
}

histocartography's People

Contributors

afoncubierta, code-for-papers, kevthan, patricio-astudillo, pushpak-pati


histocartography's Issues

CUDA Error

Hi, I am trying to run the example you provided, but I am getting the following error at the line (feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=72)):
"RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1."

Any idea why I am getting this error?
Thanks!
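
The "no kernel image is available" message usually means the installed PyTorch wheel was not built for the GPU's compute capability. A quick way to compare the two, using standard PyTorch calls (shown only as a diagnostic sketch):

import torch
print(torch.__version__, torch.version.cuda)   # CUDA version the wheel was built against
print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))  # physical GPU and its compute capability
print(torch.cuda.get_arch_list())              # architectures supported by this wheel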

Pypi Installer issue

Hi, I'm having an issue.
When I try to install histocartography with pip I get the following error:

ERROR: Could not find a version that satisfies the requirement dgl==0.4.3.post2 (from histocartography) (from versions: 0.9.0, 0.9.1, 1.0.0, 1.0.1, 1.1.0, 1.1.1, 1.1.2, 1.1.2.post1, 1.1.3, 2.0.0)
ERROR: No matching distribution found for dgl==0.4.3.post2

Also, I was wondering if you would be able to specify the versions of the packages used in the requirements.txt file so that it can be installed properly.

Thank you.

Vahadane StainNormalizer raises error

Hi,

I am trying to run Macenko and Vahadane stain normalizer on my datasets.

The dataset has separate folders; I make a list of the files, initialize a VahadaneStainNormalizer instance and call the _normalize_image(img) method. It works in the beginning but stops suddenly after some time with the error below. I have tried different datasets but the error is the same.

Images are in PNG, I am loading them using PIL, converting them to RGB, and making ndarray. I do not understand where NaN or inf values might be appearing.
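
A minimal sketch of the loop described above, with a guard against blank or non-finite tiles (a common trigger for the SVD failure); the folder layout, the constructor arguments and the use of the public process() method are assumptions:

import glob
import numpy as np
from PIL import Image
from histocartography.preprocessing import VahadaneStainNormalizer

normalizer = VahadaneStainNormalizer()  # constructor arguments (e.g., a target image) omitted here
for path in glob.glob('dataset/*.png'):  # hypothetical folder layout
    img = np.array(Image.open(path).convert('RGB'))
    if not np.isfinite(img).all() or img.std() < 1e-3:  # skip degenerate tiles before the lstsq call
        print('skipping', path)
        continue
    norm_img = normalizer.process(img)  # the issue calls the private _normalize_image() directly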

Traceback (most recent call last):
  File "normalizer.py", line 58, in <module>
    norm_img = normalization._normalize_image(target)
  File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/histocartography/preprocessing/stain_normalizers.py", line 498, in _normalize_image
    input_image, stain_matrix_source
  File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/histocartography/preprocessing/stain_normalizers.py", line 103, in _get_concentrations
    stain_matrix.T, optical_density.T, rcond=-1)[0].T
  File "<__array_function__ internals>", line 6, in lstsq
  File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 2306, in lstsq
    x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj)
  File "/home/neel/miniconda3/envs/DKL/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 100, in _raise_linalgerror_lstsq
    raise LinAlgError("SVD did not converge in Linear Least Squares")
numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares

Can someone please suggest here?

about BRACS dataset

Can you explain how you downloaded the BRACS dataset? I followed the instructions, but I still cannot download the dataset.

Question on the expected behavior for RAGGraphBuilder

Hello,

Thank you for your work, this package has been very useful and I enjoy learning from your source code.

I was preparing a dataset using the RAGGraphBuilder and noticed that all my tissue graphs had edges that connected all the "background" or 0 instance map valued nodes in the graph.

Original RAG graph:

before_RAG_fix

I noticed in the adjacency graph that the nth row and column correspond to the 0 instance map node, and that you can remove these edges by modifying the _build_topology method to:
for instance_id in np.arange(1, len(instance_ids) + 1):
    mask = (instance_map == instance_id).astype(np.uint8)
    dilation = cv2.dilate(mask, kernel, iterations=1)
    boundary = dilation - mask
    idx = pd.unique(instance_map[boundary.astype(bool)])
    instance_id -= 1  # because instance_map ids start from 1
    idx -= 1          # because instance_map ids start from 1
    idx = idx[idx >= 0]  # remove background idx and prevent "end" node -1 from making edges
    adjacency[instance_id, idx] = 1

Modified RAG graph:

after_RAG_fix

I am new to learning about GNNs and am not familiar with the pros or cons of having the graph connected this way originally or disconnected. I would think that having a connected graph with the "background" nodes connected by edges would allow the GNN to perform message passing through those edges, so the original method may be desirable.

More importantly, I was wondering if these edges between the 0 instance map nodes was the expected behavior for the method.

Thank you again!
Jack

greycoprops/comatrix renamed to graycoprops/comatrix in skimage 0.19

greycoprops/greycomatrix were renamed to graycoprops/graycomatrix in skimage 0.19, as stated here, meaning that installation via the included environment.yml results in import errors for histocartography.preprocessing.feature_extraction.
We need to either pin skimage==0.18 in the requirements file or change the import statement in histocartography.preprocessing.feature_extraction.
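
Until the pin or the import change lands, a backwards-compatible import along these lines works with both scikit-image versions (sketch):

try:
    # scikit-image >= 0.19 uses the "gray" spelling
    from skimage.feature import graycomatrix, graycoprops
except ImportError:
    # scikit-image <= 0.18 used the "grey" spelling
    from skimage.feature import greycomatrix as graycomatrix
    from skimage.feature import greycoprops as graycoprops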

Pretrained Weights for CellGraphModel not available

Hi,
Whilst running cell_graph_explainer.py, I am facing issues in loading pretrained weights for the CellGraphModel -
https://github.com/BiomedSciAI/histocartography/blob/5ec422092adbc2aae2cde3dbcbd4b28dca6685e2/histocartography/ml/models/cell_graph_model.py#L53C22-L53C38

The code tries to fetch a file named bracs_cggnn_3_classes_gin.pt, but the URL doesn't work -
https://github.com/BiomedSciAI/histocartography/blob/5ec422092adbc2aae2cde3dbcbd4b28dca6685e2/histocartography/ml/models/zoo.py#L11C6-L11C34

Can you please have a look at this?
Thanks

Process tensor with shape (batch,channels,w,h)

Thanks for your excellent work! Can this framework process multiple images at once, i.e., a batch of shape (batch_size, 3, width, height)? I can only process one image of shape (width, height, 3) at a time.
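
The preprocessing steps operate on single H x W x 3 arrays, but a batched tensor can still be handled by looping over it. A sketch, assuming a torch tensor in (B, C, H, W) layout with values in [0, 255]:

import numpy as np

def process_batch(batch, nuclei_detector):
    """batch: torch tensor of shape (B, 3, H, W)."""
    outputs = []
    for img in batch:  # iterate over the batch dimension
        img_np = img.permute(1, 2, 0).cpu().numpy().astype(np.uint8)  # -> (H, W, 3)
        outputs.append(nuclei_detector.process(img_np))  # (nuclei_map, centroids) per image
    return outputs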

forward() missing 1 required positional argument: 'H'

Got this error when trying to implement my own model into the histocartography model

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
c:\Users\raman\OneDrive - softsensor.ai\histocartography\v1.ipynb Cell 1 in <cell line: 13>()
     10 knn_graph_builder = KNNGraphBuilder(k=5, thresh=50, add_loc_feats=True)
     12 image = np.array(Image.open('docs/_static/283_dcis_4.png'))
---> 13 nuclei_map, _ = nuclei_detector.process(image)
     14 features = feature_extractor.process(image, nuclei_map)
     15 cell_graph = knn_graph_builder.process(nuclei_map, features)

File c:\Users\raman\OneDrive - softsensor.ai\histocartography\histocartography\pipeline.py:138, in PipelineStep.process(self, output_name, *args, **kwargs)
    135     return self._process_and_save(
    136         *args, output_name=output_name, **kwargs)
    137 else:
--> 138     return self._process(*args, **kwargs)

File c:\Users\raman\OneDrive - softsensor.ai\histocartography\histocartography\preprocessing\nuclei_extraction.py:118, in NucleiExtractor._process(self, input_image, tissue_mask)
    106 def _process(  # type: ignore[override]
    107     self,
    108     input_image: np.ndarray,
    109     tissue_mask: Optional[np.ndarray] = None,
    110 ) -> Tuple[np.ndarray, np.ndarray]:
    111     """Extract nuclei from the input_image
    112     Args:
    113         input_image (np.array): Original RGB image
    (...)
...
-> 1130     return forward_call(*input, **kwargs)
   1131     # Do not call functions when jit is used
   1132     full_backward_hooks, non_full_backward_hooks = [], []

TypeError: forward() missing 1 required positional argument: 'H'

How do I proceed?

Memory requirements

Thanks for the awesome repository!
I am trying to run the cell graph generation example, but I get CUDA out-of-memory errors. I am using a GPU with 8.5 GB of memory, not running anything else and not shared in any way.
Is there a minimum requirement for graph representation inference?

Environment dependency issue

Hi, I'm using mac and I followed the command to create conda environments, but encountered something like "torchvision requires torch 1.2.1 but torch version requires 1.3.0." I then tried to remove the version requirement for torch, but encountered PIL issues such as "cannot import name 'PILLOW_VERSION' from 'PIL'". I don't see a similar issue from anyone else, and I don't know if this is a mac problem or not. Thank you!

Weights instead of model

Nuclei_extraction.py gives an option to use your own model instead of the existing model trained on the PanNuke dataset.

In my case I have the model weights and not the model itself. How would I have to adapt the code to run the nuclei detector with just my model weights?
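
Since the extractor's _load_model_from_path simply calls torch.load on a complete model object (see the traceback in the next issue), one workaround is to rebuild the architecture, load the state dict into it, and re-save the whole model. The HoverNet constructor call below is a placeholder and must match the architecture the weights were trained with:

import torch
from histocartography.ml.models.hovernet import HoverNet  # module path as it appears in the logs below

model = HoverNet()  # placeholder: pass whatever arguments your architecture actually needs
model.load_state_dict(torch.load('my_weights.pth', map_location='cpu'))
torch.save(model, 'my_full_model.pt')  # a full-model checkpoint that torch.load can restore directly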


RuntimeError: unexpected EOF, expected 4530578 more bytes. The file might be corrupted.

from histocartography.preprocessing import (
VahadaneStainNormalizer, # stain normalizer
NucleiExtractor, # nuclei detector
DeepFeatureExtractor, # feature extractor
KNNGraphBuilder, # kNN graph builder
ColorMergedSuperpixelExtractor, # tissue detector
DeepFeatureExtractor, # feature extractor
RAGGraphBuilder, # build graph
AssignmnentMatrixBuilder # assignment matrix
)

nuclei_detector = NucleiExtractor()

When this code runs, the error is:


File already downloaded.
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.HoverNet' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.Encoder' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.Conv2dWithActivation' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.BNReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.ResidualBlock' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.SamepaddingLayer' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.Decoder' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.Upsample2x' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'torch.nn.modules.upsampling.Upsample' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py:658: SourceChangeWarning: source code of class 'histocartography.ml.models.hovernet.DenseBlock' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/histocartography/preprocessing/nuclei_extraction.py", line 82, in __init__
    self._load_model_from_path(model_path)
  File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/histocartography/preprocessing/nuclei_extraction.py", line 88, in _load_model_from_path
    self.model = torch.load(model_path)
  File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/yyang3/anaconda3/envs/yy/lib/python3.6/site-packages/torch/serialization.py", line 781, in _legacy_load
    deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 4530578 more bytes. The file might be corrupted.

multi-gpu usage

Thanks for your great package! It would be wonderful if multi-gpu training is possible.

Torch version issue

I'm trying to run my own .pth model, but it is based on a recent torch version and does not work in 1.10.1, which is needed by nuclei_detection. Is there a way to overcome the "AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'" without downgrading the torch package?

Graph creation doesn't behave properly when used in patho-quant-explainer

See these three issues in the patho-quant-explainer repository.

To summarize, I was trying to reproduce the results shown in the pathology quantitative explainer paper, but failed to do so. After doing debugging and traceback I found that it was because the nuclei extractor detects too few nuclei even on the original (latest) BRACS dataset. One example would be detecting only 4 nuclei in BRACS_1897_DCIS_4.png. The lack of nuclei sometimes causes the DeepFeatureExtractor to fail, which then causes the KNNGraphBuilder to fail and the graph output to file function to throw an error.

I've also tried running the patho-quant-explainer pipeline on the previous version of the dataset, but that method fails on the very first graph in the test set because the KNNGraphBuilder fails to run, causing a save error.

Since I made no modifications to the source code, this could be due to an environment or hardware issue. As mentioned in another issue, the environment yaml files provided in the histocartography repositories appear to be incomplete, outdated, or both. If this error isn't replicated by the maintenance team, would you be able to provide the exact environment you're using? Thanks!

Nuclei labels

Thanks for your great package! I was wondering whether it is possible to return the nuclei labels e.g. inflammatory cells? I can't seem to see this option. Thank you

Extract coordinates of nuclei in image

Hello!

Hope all is well. I have two images: one with the H&E staining, and another one, which is exactly the same, but colored by cell label. I'm wondering if it is possible to extract the coordinates of each nucleus in the input image? I would like to go from a nucleus in one picture to its label in the other using the coordinates. Is this possible?

See the two images attached to the original issue (from an open-source dataset).
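
Since NucleiExtractor.process returns both the instance map and the nucleus centroids, one way to do this (a sketch; it assumes the two images are pixel-aligned and that centroids are returned as (x, y) pairs) is:

nuclei_map, centroids = nuclei_detector.process(he_image)     # he_image / label_image are hypothetical names
labels = [label_image[int(y), int(x)] for x, y in centroids]  # label of each detected nucleus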

'Upsample' object has no attribute 'recompute_scale_factor'

Hi, thanks for making this available.

I'm running into an issue when trying to execute the following code:

feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=25)
knn_graph_builder = KNNGraphBuilder(k=6, thresh=50, add_loc_feats=True)
nuclei_map, x = nuclei_detector.process(img)

(The attached screenshot shows the resulting AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'.)

I tried the solution suggested here but it didn't help: https://stdworkflow.com/1508/attributeerror-upsample-object-has-no-attribute-recompute-scale-factor

Has anyone come across this and found a solution?
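
A commonly suggested monkey-patch for this torch-version mismatch (not an official fix) is to null out the missing attribute on every Upsample module after the model has been loaded:

import torch

# `model` stands for the loaded HoverNet instance, e.g. nuclei_detector.model (as seen in another issue's traceback)
for module in model.modules():
    if isinstance(module, torch.nn.Upsample):
        module.recompute_scale_factor = None  # attribute referenced by newer torch forward() implementations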

Memory issues with Nuclei Extractor

Hello! Really loving the package. I'm working on scaling this to more images, and am finding that the program uses a crazy amount of memory. The progress bar for the nuclei extractor gets to 100%, it hangs, then my program kills it because it surpasses the memory allocated for it (for reference, I have allocated 200GB). The image that I'm using is around 50MB, which could be causing this. Here's the image: https://drive.google.com/file/d/1HThowD4uzjJz9nZ7QYaBcdpiB2e9ZPMP/view?usp=sharing

This is the code that I'm using:

image_fnames = glob(os.path.join(image_path, '*[!A].jpg'))

print(image_fnames)
# 2. define nuclei extractor
nuclei_detector = NucleiExtractor()

# 3. define feature extractor: Extract patches of 72x72 pixels around each
# nucleus centroid, then resize to 224 to match ResNet input size.
feature_extractor = DeepFeatureExtractor(
    architecture='resnet34',
    patch_size=72,
    resize_size=224,
)

# 4. define k-NN graph builder with k=5 and thresholding edges longer
# than 50 pixels. Add image size-normalized centroids to the node features.
# For e.g., resulting node features are 512 features from ResNet34 + 2
# normalized centroid features.
knn_graph_builder = KNNGraphBuilder(k=5, thresh=50, add_loc_feats=True)

# 5. define graph visualizer
visualizer = OverlayGraphVisualization()

# 6. process all the images
for image_path in tqdm(image_fnames):

    # a. load image
    _, image_name = os.path.split(image_path)
    image = np.array(Image.open(image_path))

    # b. extract nuclei
    nuclei_map, centroids = nuclei_detector.process(image)

Any help is appreciated!!

GraphGradCAMExplainer use of backpropogation

When using the GraphGradCAMExplainer, we use a pretrained torch GNN model set to eval mode since we're no longer training the model. However, the Explainer module uses backpropagation to find the node importances via the weight coefficients of the hooked activation maps, which shouldn't be possible on an eval model instance.


For whatever reason, this doesn't throw an error in the recommended python 3.7, dgl 0.4.3post2, and torch 1.10 environment, but does in my more up-to-date python 3.9, dgl 0.9, torch 1.12.1 env even though the written code is identical.

The only solution I've found so far is to set the model used in the Explainer to training mode before running the explainer, but that's far from ideal.

Is there a way to find the node importances without committing to backpropagation? Is that what backpropagating in the original histocartography environment does instead? If it doesn't, is it not an issue that the model is being updated via backpropagation during the process of explaining node importance?
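
For what it's worth, eval() only changes layer behavior (dropout, batch norm); it does not disable gradient computation, and backpropagation by itself does not update parameters unless an optimizer step is taken. A rough sketch of the distinction (the forward call and class_idx are hypothetical):

model.eval()                            # inference-mode layer behavior, gradients still allowed
logits = model(graph)                   # hypothetical forward call on a DGL graph
logits[:, class_idx].sum().backward()   # populates gradients w.r.t. the hooked activations
# no optimizer.step() is ever called, so the parameters themselves are never updated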

Nuclei detection using StarDist?

Hello, thanks for this repo! I was wondering if it is possible to contribute to nuclei detection by writing a function that detects nuclei using StarDist? It uses TensorFlow, so I am not sure if you would allow it.
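
For reference, such a contribution would roughly wrap StarDist's own API (independent of histocartography; the pretrained-model name below is the one StarDist publishes for H&E, and the normalization percentiles follow their examples):

from stardist.models import StarDist2D
from csbdeep.utils import normalize

model = StarDist2D.from_pretrained('2D_versatile_he')
labels, details = model.predict_instances(normalize(image, 1, 99.8))  # instance map + per-nucleus details
centroids = details['points']  # (N, 2) nucleus centroids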

AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

2. nuclei detection

nuclei_map, nuclei_centroids = nuclei_detector.process(image)

I get an error at this line. Could you help me?

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
     40
     41 # 2. nuclei detection
---> 42 nuclei_map, nuclei_centroids = nuclei_detector.process(image)
     43
     44 # 3. nuclei feature extraction

11 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1184             return modules[name]
   1185         raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1186             type(self).__name__, name))
   1187
   1188     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'

how to change the device id?

Hi, thanks for your project, but I have a problem while using it.
It seems that "cuda:0" is set as the default device in some modules such as DeepFeatureExtractor, but I have come across cases where I need to change it. Could you please add a parameter to allow selecting the device ID?
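
Until a device argument is exposed, restricting GPU visibility before torch/dgl initialize CUDA is a practical workaround (a sketch):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # make physical GPU 1 appear as cuda:0; must be set before CUDA is initialized

from histocartography.preprocessing import DeepFeatureExtractor
feature_extractor = DeepFeatureExtractor(architecture='resnet34', patch_size=72)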

Error

I get this error after installation
Help me please

    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "E:\anime\pythonProject1seg.venv\lib\site-packages\torch\serialization.py", line 777, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x0a'.

HE Reference matrix

Hi,

I had a question regarding the Macenko stain normalizer. I see that the H&E reference matrix is a hard-coded matrix of shape (2, 3). Can you shed some light on how you obtained this matrix? I am looking for a paper where this matrix is provided. Below is the line of code I am referring to, in histocartography/preprocessing/stain_normalizers.py:

self.stain_matrix_target = np.array(
    [[0.5626, 0.7201, 0.4062], [0.2159, 0.8012, 0.5581]]
)
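
For context on how a 2 x 3 matrix like this is used: each row is a stain vector (haematoxylin, eosin) expressed in optical-density space, and per-pixel stain concentrations are obtained by a least-squares fit against the pixel optical densities, much like the _get_concentrations call visible in the earlier Vahadane traceback. An illustrative sketch (the exact OD conversion offset may differ from the library's):

import numpy as np

stain_matrix = np.array([[0.5626, 0.7201, 0.4062],   # haematoxylin OD vector
                         [0.2159, 0.8012, 0.5581]])  # eosin OD vector
rgb = image.reshape(-1, 3).astype(np.float64)        # `image` is an H x W x 3 RGB array
optical_density = -np.log10((rgb + 1) / 256)         # Beer-Lambert conversion (offset assumed)
concentrations = np.linalg.lstsq(stain_matrix.T, optical_density.T, rcond=-1)[0].T  # shape (H*W, 2)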

Aborted (core dumped)

Hi
I am using your code for cell graph generation, but I'm getting the error below.
Would you please help me with it?
OMP: Error #179: Function Can't open SHM2 failed:
OMP: System error #13: Permission denied
Aborted (core dumped)

PyPi's skimage import

Would you be able to push the current code to PyPI? The version there is not compatible with the current skimage version. Thanks.
