
Comments (10)

SahPet commented on August 26, 2024

A tutorial/description of how to run multiclass prediction with DeepMIB at the WSI level (no need for patches) on rendered, downsampled whole slide images (WSIs) exported from QuPath, and how to import the predictions directly back into QuPath.

  • Download and install the latest MIB version, which supports creating TIFs directly from predictions: http://mib.helsinki.fi/web-update/MIB2_Win.exe

  • Download and install the latest QuPath version: https://github.com/qupath/qupath/releases/download/v0.3.2/QuPath-0.3.2-Windows.msi

  • This is an example from the PANDA dataset, which can be downloaded here (whole slide images of prostate with corresponding masks, where label values 1, 2, 3, 4, 5 = "Stroma", "Benign", "Gleason3", "Gleason4", "Gleason5"): https://www.kaggle.com/competitions/prostate-cancer-grade-assessment/data
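
For reference, that mask-value-to-class mapping can be kept handy as a tiny Groovy map (purely illustrative, not part of any script used in this tutorial):

// PANDA Radboud mask values and the class names used in this tutorial
def pandaClasses = [1: 'Stroma', 2: 'Benign', 3: 'Gleason3', 4: 'Gleason4', 5: 'Gleason5']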

  • In this example we've used a few images from the PANDA Radboud dataset:
    Screenshot_1672

  • Annotations can be imported by using the same script that we'll use later to import the downsampled WSI prediction tifs from DeepMIB; it can be found in the NoCodeSeg repository (created by @andreped): https://github.com/andreped/NoCodeSeg/blob/main/source/importStitchedTIFfromMIB.groovy

  • You will have to add a "Labels_" prefix to each mask file name (e.g. with Bulk Rename Utility, see below, or the small sketch after the screenshot) or remove that prefix from the script a bit further down. The image below is slightly inaccurate in that FastPathology is set to "true"; it should be "false", and DeepMIB should be set to "true":
    Screenshot_1685
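
If you'd rather script the renaming than use Bulk Rename Utility, a minimal Groovy sketch could look like the following (the folder path is hypothetical, and the mask file extension may differ in your copy of PANDA):

// Sketch: prepend "Labels_" to every mask file in a folder so the import script can find them
import java.nio.file.*

def maskDir = Paths.get('D:/PANDA/masks')   // hypothetical path, adjust to your mask folder
Files.newDirectoryStream(maskDir, '*.tif*').each { p ->
    def name = p.fileName.toString()
    if (!name.startsWith('Labels_'))
        Files.move(p, p.resolveSibling('Labels_' + name))
}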

  • Say we've already trained a deep segmentation network in DeepMIB on the PANDA dataset, by exporting tiles from QuPath and training in DeepMIB, as described in this tutorial: https://youtu.be/9dTfUwnL6zY

  • The network was trained on patches exported from the PANDA WSIs with their corresponding imported annotations. We've deleted the "Stroma" class (import value 1) annotations and created a combined "Tumor" class for "Gleason3", "Gleason4", and "Gleason5" with this QuPath script (a sketch of the Stroma-removal step follows the script):

replaceClassification('Gleason3', 'Tumor')
replaceClassification('Gleason4', 'Tumor')
replaceClassification('Gleason5', 'Tumor')
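
The replaceClassification calls handle the class merge; the "Stroma" removal mentioned above is a separate step. A minimal sketch of that removal step using standard QuPath scripting calls (an illustration, not taken from the original tutorial):

import static qupath.lib.gui.scripting.QPEx.*

// Remove every annotation classified as "Stroma" so only the remaining classes are exported for training
def stroma = getAnnotationObjects().findAll { it.getPathClass() == getPathClass('Stroma') }
removeObjects(stroma, true)
fireHierarchyUpdate()
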
  • A lot of the PANDA dataset is incompletely or falsely labelled, so say we want to use the trained deep segmentation network to predict a few more WSIs that we've identified as incorrectly labelled, so that we can later correct the annotations in QuPath and add them to our training data.

  • First, we'll export a rendered 2x downsampled version of the PANDA WSIs we want to predict in DeepMIB using this script from Pete Bankhead, the creator of QuPath:

/**
 * Script to export a rendered (RGB) image in QuPath v0.2.0.
 *
 * This is much easier if the image is currently open in the viewer,
 * then see https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_images.html
 *
 * The purpose of this script is to support batch processing (Run -> Run for project (without save)),
 * while using the current viewer settings.
 *
 * Note: This was written for v0.2.0 only. The process may change in later versions.
 *
 * @author Pete Bankhead
 */

import qupath.imagej.tools.IJTools
import qupath.lib.gui.images.servers.RenderedImageServer
import qupath.lib.gui.viewer.overlays.HierarchyOverlay
import qupath.lib.regions.RegionRequest

import static qupath.lib.gui.scripting.QPEx.*

// It is important to define the downsample!
// This is required to determine annotation line thicknesses
double downsample = 2

// Add the output file path here
String path = buildFilePath(PROJECT_BASE_DIR, 'Rendered_DS2_WSIs_040522', getProjectEntry().getImageName() + '.jpg')

// Request the current viewer for settings, and current image (which may be used in batch processing)
def viewer = getCurrentViewer()
def imageData = getCurrentImageData()

// Create a rendered server that includes a hierarchy overlay using the current display settings
def server = new RenderedImageServer.Builder(imageData)
    .downsamples(downsample)
    .layers(new HierarchyOverlay(viewer.getImageRegionStore(), viewer.getOverlayOptions(), imageData))
    .build()

// Write or display the rendered image
if (path != null) {
    mkdirs(new File(path).getParent())
    writeImage(server, path)
} else
    IJTools.convertToImagePlus(server, RegionRequest.createInstance(server)).getImage().show()

  • The 2x downsampled images are stored in a folder in the QuPath project. Remember to move the resultant jpg images into an "Images" folder before proceeding to DeepMIB prediction, as DeepMIB looks for an "Images" folder inside the specified prediction folder (a small Groovy sketch of this move follows the screenshot):
    Screenshot_1682
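
If you prefer to do the move from within QuPath rather than in the file explorer, a small Groovy sketch (assuming the same export folder name as in the script above) could be:

import static qupath.lib.gui.scripting.QPEx.*
import java.nio.file.*

// Move the rendered jpgs into an "Images" subfolder, which DeepMIB expects in the prediction directory
def exportDir = Paths.get(buildFilePath(PROJECT_BASE_DIR, 'Rendered_DS2_WSIs_040522'))
def imagesDir = exportDir.resolve('Images')
Files.createDirectories(imagesDir)
Files.newDirectoryStream(exportDir, '*.jpg').each { p ->
    Files.move(p, imagesDir.resolve(p.fileName), StandardCopyOption.REPLACE_EXISTING)
}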

  • We'll set this as the prediction folder in DeepMIB and create a "4_Results...." folder which will contain the tif files from the predictions.
    Screenshot_1675

  • We'll set the prediction output to compressed TIF format and also tick "bigimage mode" (this prevents overloading the GPU when predicting on larger image files).
    Screenshot_1678

  • The resultant tifs can now be found here after pressing "Predict":
    Screenshot_1684

  • For this demo I've used jpgs converted from the original tiff images in the PANDA dataset to save some disk space. The ".jpg" is, for some reason, included in the filename after the rendering export from QuPath, so we'll remove ".jpg" from the filenames in Bulk Rename Utility (https://www.bulkrenameutility.co.uk/); a scripted alternative follows the screenshot:
    Screenshot_1681
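
A scripted alternative to Bulk Rename Utility, assuming the export produced doubled names like slide_001.jpg.jpg (the folder path is hypothetical):

import java.nio.file.*

// Strip the duplicated ".jpg" so the rendered file names match the image names in the QuPath project
def renderedDir = Paths.get('D:/QuPathProject/Rendered_DS2_WSIs_040522/Images')   // adjust to your folder
Files.newDirectoryStream(renderedDir, '*.jpg.jpg').each { p ->
    def cleaned = p.fileName.toString().replace('.jpg.jpg', '.jpg')
    Files.move(p, p.resolveSibling(cleaned))
}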

  • We'll also copy the path to the "ResultsModels" folder in our DeepMIB project directory and change the backslashes to forward slashes in Notepad first (see the short note after the screenshot):
    Screenshot_1680
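
The slash change matters because backslashes are escape characters in Groovy string literals; the variable name and path below are only illustrative and may differ in the import script:

// A Windows path pasted into a Groovy script is safest with forward slashes
def resultsFolder = 'D:/DeepMIB_project/ResultsModels'        // works as-is
// def resultsFolder = 'D:\\DeepMIB_project\\ResultsModels'   // with backslashes they must be doubled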

  • Now we're ready to import the WSI predictions into QuPath: just copy the path to the "ResultsModels" folder into the script from above and run it in batch mode for all the images you've predicted in DeepMIB (the script can be found here: https://github.com/andreped/NoCodeSeg/blob/main/source/importStitchedTIFfromMIB.groovy)
    Screenshot_1683

  • That's it. You've now predicted WSIs with a multiclass deep segmentation network in DeepMIB from downsampled WSIs exported from QuPath, and imported the multiclass predictions into QuPath, all without patch generation. You're now ready to correct your predictions in QuPath and expand your dataset further through this active learning process.

  • The multiclass-capable import script above was created by @andreped.


Ajaxels commented on August 26, 2024

@ajr82 wrote:

> We actually tried predicting (singleclass) directly on WSIs (.svs) in DeepMIB, but it didn't work out well. Your workflow, with the downsampling and jpg files, will be much more efficient.

Have you been using bigimage mode? We currently have a beta version of DeepMIB that works much better with large rasters.


andreped commented on August 26, 2024

I assume @SahPet's comment above will be of interest to you, @pr4deepr, @ajr82, @aaronsathya.

I have moved this comment to its own wiki page:
https://github.com/andreped/NoCodeSeg/wiki/Tutorial-on-how-to-import-multiclass-predictions-from-MIB-into-QuPath

Great work, @SahPet !!


pr4deepr commented on August 26, 2024

Great stuff.
Thanks for tagging me!


ajr82 commented on August 26, 2024

Thank you @andreped and @SahPet!
We actually tried predicting (singleclass) directly on WSIs (.svs) in DeepMIB, but it didn't work out well. Your workflow, with the downsampling and jpg files, will be much more efficient.
We also saw improvements in training after using your fast-stain-normalization technique on the tiles. Now, with the DeepLabV3Resnet18 architecture in the latest release of DeepMIB, things will likely improve further in our upcoming projects.
Thank you for your help!


aaronsathya commented on August 26, 2024

Wow, fantastic. Awesome, @SahPet and @andreped, this is great. We were able to get multiclass working through our email discussion, but having this tutorial will be a significant help moving forward.


andreped commented on August 26, 2024

Great to hear, @ajr82!

I will be returning to the task of integrating stain normalization into FastPathology quite soon, which will enable you to perform stain normalization during deployment.


ajr82 commented on August 26, 2024

@Ajaxels Hi Ilya, not as much, but eventually we plan on using it. I will try the beta version. Thanks for letting me know. Also thank you very much for DeepMIB. DeepMIB and NoCodeSeg, along with QuPath, have made our ongoing projects easier to work on (we are all pathology residents and pathologists)!


Ajaxels commented on August 26, 2024

@ajr82 That beta has not been released yet; if you want, I can share the MATLAB version.
But I also suggest testing at least bigimage mode (there is a checkbox in the Predict tab).


andreped commented on August 26, 2024

As this seems to have been solved, I am closing this for now. If you have any other requests or issues, please let me know by opening a new issue.

