
Semantic segmentation in computer vision enables precise brain tumor diagnosis, differentiating tumors from surrounding brain regions. It empowers healthcare with micro-level insights for enhanced patient care and diagnostics.

Topics: artificial-intelligence, brain, braintumorsegmentation, cnn, computer-vision, convolutional-neural-networks, data-science, deep-learning, image, image-processing, medical-imaging, opensource, segmentation, semantic-segmentation, u-net


Volumetric MRI - Validation of Additional Brain Structures

Semantic segmentation is a core computer-vision technique for brain tumor analysis, enabling earlier, more precise, and safer diagnoses. It performs pixel-level classification: each pixel in an image is assigned a label representing the object or tissue class it belongs to. In tumor detection, this makes it possible to identify a tumor and distinguish it from neighboring regions of the brain image, such as healthy tissue, edema, and necrotic regions. By training deep learning architectures such as U-Net on annotated datasets, segmentation models learn to recognize tumors and delineate their boundaries precisely.

This capability has transformative potential for biomedical imaging. A fine-grained view of a tumor and its interaction with the surrounding anatomy supports earlier and more reliable diagnoses, better-informed clinical decisions, and more tailored treatment strategies, ultimately improving patient care.


Dataset: BraTS2020 (Training + Validation)

Source: https://www.kaggle.com/datasets/awsaf49/brats20-dataset-training-validation

The BraTS (Multimodal Brain Tumor Segmentation) dataset is a widely used collection of brain MRI scans. Each scan is stored as a NIfTI file (.nii.gz), a common format for medical imaging data.

MRI Modalities Included:

  1. T1 (T1-weighted):

     • Image Type: Native image.
     • Acquisition: Sagittal or axial 2D acquisitions.
     • Slice Thickness: 1–6 mm.
    
  2. T1c (T1-weighted with Contrast Enhancement):

     • Image Type: Contrast-enhanced (Gadolinium) image.
     • Acquisition: 3D acquisition.
     • Voxel Size: 1 mm isotropic voxel size for most patients.
    
  3. T2 (T2-weighted):

     • Image Type: T2-weighted image.
     • Acquisition: Axial 2D acquisition.
     • Slice Thickness: 2–6 mm.
    
  4. FLAIR (T2-weighted FLAIR):

     • Image Type: T2-weighted FLAIR image.
     • Acquisition: Axial, coronal, or sagittal 2D acquisitions.
     • Slice Thickness: 2–6 mm.
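
To make the data layout concrete, here is a minimal loading sketch using nibabel and NumPy (an assumption about tooling; the repository's notebook may load the data differently). The case directory and patient ID below are hypothetical, and the file extension may be .nii or .nii.gz depending on the download.

```python
import numpy as np
import nibabel as nib  # common library for reading NIfTI files

# Hypothetical path to one BraTS2020 training case; adjust to your local copy.
case_dir = "BraTS2020_TrainingData/BraTS20_Training_001"
case_id = "BraTS20_Training_001"

# The four MRI modalities described above (T1, T1c, T2, FLAIR).
modalities = ["t1", "t1ce", "t2", "flair"]

# Load each modality volume and stack them into one 4D array of shape
# (height, width, depth, channels); BraTS volumes are 240 x 240 x 155 voxels.
volumes = [
    nib.load(f"{case_dir}/{case_id}_{m}.nii.gz").get_fdata() for m in modalities
]
image = np.stack(volumes, axis=-1)

# The ground-truth segmentation mask labels each voxel (in BraTS labeling:
# 0 = background, 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor).
mask = nib.load(f"{case_dir}/{case_id}_seg.nii.gz").get_fdata()

print(image.shape, mask.shape)
```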
    

U-Net Model Architecture

U-Net is a deep convolutional neural network architecture designed specifically for semantic segmentation, where the objective is to assign every pixel in an image to an appropriate class, as in glioma segmentation of pre-operative MRI scans. It is known for capturing fine-grained detail while preserving context, which makes it well suited to intricate segmentation tasks.


  Fig: U-Net Model Architecture 

The main elements of the U-Net architecture are as follows:

Encoder

The encoder processes the input image through a series of convolutional and pooling layers. It is responsible for capturing contextual information and extracting the significant patterns and features that enable accurate segmentation. Each encoder block applies two 3x3 convolutions, each followed by a rectified linear unit (ReLU), and then a 2x2 max pooling operation with stride 2 for downsampling. By stacking several such blocks, the encoder extracts progressively more abstract features: the spatial size of the feature maps gradually decreases while their depth increases. In short, the encoder is a conventional stack of convolutional and max pooling layers that performs feature extraction and downsampling.


  Fig: Encoder in U-Net Model
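
Below is a minimal sketch of one encoder block, assuming a TensorFlow/Keras implementation (the repository's notebook may use a different framework or layer configuration): two 3x3 convolutions, each followed by ReLU, then a 2x2 max pooling with stride 2. The pre-pooled feature map is also returned so the decoder can reuse it through a skip connection.

```python
from tensorflow.keras import layers

def encoder_block(x, filters):
    """Two 3x3 conv + ReLU layers, then 2x2 max pooling with stride 2."""
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(f)
    p = layers.MaxPooling2D(pool_size=2, strides=2)(f)
    # f is kept for the skip connection; p feeds the next, deeper block.
    return f, p
```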

Decoder

The decoder upsamples the features back to the original image resolution. It does this with transposed convolutions (also called deconvolutions) together with further standard convolutional layers, reassembling the segmented output from the features the encoder extracted. A transposed convolution creates the same connectivity pattern as a standard convolution but in the opposite direction: its input volume is a low-resolution feature map and its output volume is a higher-resolution one. Each decoder step therefore consists of upsampling, concatenation with the corresponding encoder feature map (a skip connection), and standard convolutions, progressively restoring the spatial resolution of the feature maps up to that of the input image.


  Fig: Decoder in U-Net Model
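
Continuing the same hedged Keras sketch, a decoder block upsamples with a 2x2 transposed convolution, concatenates the matching encoder feature map (the skip connection), and applies two 3x3 convolutions; chaining the blocks gives a small U-Net. Filter counts, input shape, and class count are illustrative, not taken from the repository.

```python
from tensorflow.keras import layers, Model

def encoder_block(x, filters):
    """Repeated from the encoder sketch above so this block runs on its own."""
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(f)
    p = layers.MaxPooling2D(pool_size=2, strides=2)(f)
    return f, p

def decoder_block(x, skip, filters):
    """Upsample with a transposed conv, concatenate the encoder skip
    connection, then apply two 3x3 conv + ReLU layers."""
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.concatenate([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 4), num_classes=4):
    """Assemble encoder, bottleneck, and decoder into a small U-Net.
    Shape and depths are illustrative (e.g. 128x128 crops of the 4 modalities)."""
    inputs = layers.Input(shape=input_shape)

    s1, p1 = encoder_block(inputs, 32)
    s2, p2 = encoder_block(p1, 64)
    s3, p3 = encoder_block(p2, 128)

    # Bottleneck: deepest feature maps, no pooling afterwards.
    b = layers.Conv2D(256, 3, padding="same", activation="relu")(p3)
    b = layers.Conv2D(256, 3, padding="same", activation="relu")(b)

    d3 = decoder_block(b, s3, 128)
    d2 = decoder_block(d3, s2, 64)
    d1 = decoder_block(d2, s1, 32)

    # 1x1 convolution maps the features to per-pixel class probabilities.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d1)
    return Model(inputs, outputs, name="unet_sketch")

model = build_unet()
model.summary()
```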

Results


  Fig: Sample Input


  Fig: Output

