Semantic segmentation of LandCover.ai dataset

The LandCover.ai dataset consists of aerial images of urban and rural areas of Poland. This project applies various neural networks, chiefly DeepLabv3+ implemented in TensorFlow 2, to semantic segmentation of this imagery, including a reconstruction of the neural network implemented by the dataset's authors.

The dataset used in this project is the LandCover.ai dataset, originally published with the paper "LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery" [1], which is also available on Papers with Code.

Please note that I am not the author or owner of this dataset; I use it under the terms of the license specified by the original authors. All credit for the dataset goes to its authors and contributors.

Make predictions on custom images

After installing the necessary dependencies, execute the following scripts.

Run prediction on images in models/custom_data/input:

python3 models/scripts/run_prediction_on_folder.py

This script runs the DeepLabv3+ model on a folder of custom input images. The following parameters customize the prediction process (see the example invocation after this list):

  • model_revision: This optional parameter selects which model revision is used for predictions. The default is deeplabv3plus_v5.10.2; the list of available revisions is shown by --help.

  • input_folder: This optional parameter specifies the folder containing the input images to run predictions on. The default is models/custom_data/input. Accepted image formats are JPG, PNG, and TIFF.

  • output_folder: This optional parameter specifies the folder where the output predictions will be saved. The default is models/custom_data/output.
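
For example, to run a specific revision on a custom folder (an illustrative invocation; the flag spellings mirror the parameter names above, and --help shows the authoritative syntax):

python3 models/scripts/run_prediction_on_folder.py --model_revision deeplabv3plus_v5.10.2 --input_folder models/custom_data/input --output_folder models/custom_data/output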

To get more information on how to use the script, execute the following command:

python3 models/scripts/run_prediction_on_folder.py --help

Sample result

The image used in this sample is a high-resolution TIFF orthophotomap covering an area of approximately 3.5 km². It measures 25453 × 13176 pixels and is not part of the project dataset. Similar imagery of Polish regions can be obtained free of charge from the Head Office of Geodesy and Cartography through its web service.

To facilitate processing, the image is split into tiles and a prediction is made on each tile. The tile outputs are then stitched back together at the original size to produce the final result.
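
This tile-and-stitch flow can be summarized in a minimal sketch. It is an illustration only, not the project's implementation (run_prediction_on_folder.py is authoritative); the tile size, input normalization, and absence of tile overlap are assumptions:

```python
import numpy as np
import tensorflow as tf  # assumes a trained tf.keras DeepLabv3+ model

TILE = 512  # assumed tile edge; the project's scripts may use a different size

def predict_full_image(image: np.ndarray, model: tf.keras.Model) -> np.ndarray:
    """Split an (H, W, 3) image into square tiles, predict each, stitch back."""
    h, w = image.shape[:2]
    # Zero-pad so both dimensions are multiples of the tile size.
    pad_h, pad_w = (-h) % TILE, (-w) % TILE
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)))
    mask = np.zeros(padded.shape[:2], dtype=np.uint8)
    for y in range(0, padded.shape[0], TILE):
        for x in range(0, padded.shape[1], TILE):
            tile = padded[y:y + TILE, x:x + TILE].astype(np.float32) / 255.0
            # The model outputs per-pixel class scores; argmax yields the class map.
            scores = model.predict(tile[np.newaxis], verbose=0)[0]
            mask[y:y + TILE, x:x + TILE] = np.argmax(scores, axis=-1)
    return mask[:h, :w]  # crop the padding to restore the original size
```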

Legend

  • #000000 Background
  • #FF0000 Buildings
  • #008000 Woodland
  • #0000FF Water
  • #FFFFFF Roads
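
To render a predicted class map in these colors, a palette can be indexed by class id. A minimal sketch, assuming class indices 0-4 follow the legend order above (verify against the project's label encoding):

```python
import numpy as np

# Legend colors in assumed class-index order (0-4).
CLASS_COLORS = np.array([
    [0x00, 0x00, 0x00],  # 0: background
    [0xFF, 0x00, 0x00],  # 1: buildings
    [0x00, 0x80, 0x00],  # 2: woodland
    [0x00, 0x00, 0xFF],  # 3: water
    [0xFF, 0xFF, 0xFF],  # 4: roads
], dtype=np.uint8)

def colorize(mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
    return CLASS_COLORS[mask]
```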

Sample prediction (prediction.png) alongside the source orthophotomap (orthophotomap.png).

Installation

There are two ways to run this project: running a Docker container (recommended) or installing the environment via Anaconda.

Docker

Installation guide - Docker

Anaconda Environment (legacy)

Installation guide - Conda

Jupyter Notebooks

Jupyter notebooks used in early-stage development.

Jupyter notebook templates for machine learning operations in the project.

Available templates

DeepLabv3+ Architecture - Legacy Revisions

Development notebooks

| Ver. | Backbone | Weights | Frozen convolution base | Loss function | Data augmentation | Train dataset size | Loss weights | mIoU on test dataset |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5.1 | TensorFlow Xception | ImageNet | Yes | Sparse Categorical Crossentropy | No | 7470 | No | 0.587 |
| 5.2 | TensorFlow Xception | ImageNet | Yes | Sparse Categorical Crossentropy | Yes | 14940 | No | 0.423 |
| 5.3 | TensorFlow Xception | ImageNet | Yes | Sparse Categorical Crossentropy | No | 7470 | Yes | 0.542 |
| 5.4 | Modified Xception | Cityscapes | Yes | Sparse Categorical Crossentropy | No | 7470 | No | 0.549 |
| 5.4 | Modified Xception | Cityscapes | Yes | Sparse Categorical Crossentropy | No | 7470 | Yes | 0.562 |
| 5.5 | Modified Xception | Cityscapes | Yes | Sparse Categorical Crossentropy | No | 7470 | Yes | 0.567 |
| 5.6 | Modified Xception | Cityscapes | Yes | Sparse Categorical Crossentropy | No | 7470 | Yes | 0.536 |
| 5.7 | Modified Xception | Cityscapes | No | Sparse Categorical Crossentropy | No | 7470 | Yes | 0.359 |
| 5.8 | Modified Xception | Cityscapes | Yes | Soft Dice Loss | No | 7470 | No | 0.559 |
| 5.9 | Modified Xception | Pascal VOC | Partially | Soft Dice Loss | No | 7470 | No | 0.607 |
| 5.10 | Modified Xception | Cityscapes | Partially | Soft Dice Loss | No | 7470 | No | 0.718 |
| 5.11 | Modified Xception | Cityscapes | Partially | Soft Dice Loss | Yes | 14940 | No | 0.659 |
| 5.12 | Modified Xception | Cityscapes | Partially | Soft Dice Loss | Yes | 7470 | No | 0.652 |
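
Revisions 5.8 and later train with Soft Dice Loss instead of crossentropy. For reference, a common formulation is sketched below, assuming one-hot targets and softmax predictions of shape (batch, height, width, classes); the notebooks may differ in details such as smoothing or class weighting:

```python
import tensorflow as tf

def soft_dice_loss(y_true: tf.Tensor, y_pred: tf.Tensor, eps: float = 1e-6) -> tf.Tensor:
    """Soft Dice loss: 1 minus the mean soft Dice coefficient over batch and classes."""
    spatial_axes = (1, 2)  # sum over height and width, keep batch and class axes
    intersection = tf.reduce_sum(y_true * y_pred, axis=spatial_axes)
    denominator = tf.reduce_sum(y_true + y_pred, axis=spatial_axes)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - tf.reduce_mean(dice)
```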

Currently best mIoU score


Notebook v5.10 achieves the currently best score, with mean IoU = 0.718.
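
For reference, mIoU is the intersection-over-union averaged across the five classes. A minimal NumPy sketch of the metric (the notebooks may instead rely on tf.keras.metrics.MeanIoU):

```python
import numpy as np

def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int = 5) -> float:
    """Mean IoU over classes that appear in the ground truth or the prediction."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union:  # skip classes absent from both masks
            ious.append(intersection / union)
    return float(np.mean(ious))
```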

Notebooks are available on Google Drive.

References

[1] A. Boguszewski, D. Batorski, N. Ziemba-Jankowska, T. Dziedzic and A. Zambrzycka, "LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery," 2021.

[2] A. Abdollahi, B. Pradhan, G. Sharma, K. N. A. Maulud and A. Alamri, "Improving Road Semantic Segmentation Using Generative Adversarial Network," in IEEE Access, vol. 9, pp. 64381-64392, 2021, doi: 10.1109/ACCESS.2021.3075951.

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── architectures      <- Model architectures available for training
│   │   ├── predict_model.py
│   │   └── model_builder.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io

Citation

If you use this software, please cite it using these metadata.

@software{Tabaka_Semantic_segmentation_of_2021,
author = {Tabaka, Marcin Jarosław},
license = {Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)},
month = {11},
title = {{Semantic segmentation of LandCover.ai dataset}},
url = {https://github.com/MortenTabaka/Semantic-segmentation-of-LandCover.ai-dataset},
year = {2021}
}

Project based on the cookiecutter data science project template. #cookiecutterdatascience


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.



