
aitlas's Introduction

Project status: Active – the project has reached a stable, usable state and is being actively developed. Requires Python 3.7+. License: Apache License 2.0.


The AiTLAS toolbox (Artificial Intelligence Toolbox for Earth Observation) includes state-of-the-art machine learning methods for exploratory and predictive analysis of satellite imagery, as well as a repository of AI-ready Earth Observation (EO) datasets. It can easily be applied to a variety of Earth Observation tasks, such as land use and cover classification, crop type prediction, and localization of specific objects (semantic segmentation). The main goal of AiTLAS is to facilitate the usability and adoption of novel AI methods (and models) by EO experts, while offering AI experts easy access to EO datasets in a standardized format, which allows benchmarking of various existing and novel AI methods tailored for EO data.

Getting started

AiTLAS Introduction: https://youtu.be/-3Son1NhdDg

AiTLAS Software Architecture: https://youtu.be/cLfEZFQQiXc

AiTLAS in a nutshell: https://www.youtube.com/watch?v=lhDjiZg7RwU

AiTLAS examples:

Installation

The best way to install aitlas is to create a virtual environment and install the requirements with pip. Here are the steps:

  • Go to the folder where you cloned the repo.
  • Create a virtual environment:
conda create -n aitlas python=3.8
  • Activate the virtual environment:
conda activate aitlas
  • On Windows, install the prebuilt GDAL, Fiona and rasterio wheels (the filenames below are for 64-bit Python 3.8 builds; pick the wheels matching your setup):
pip install GDAL-3.4.1-cp38-cp38-win_amd64.whl
pip install Fiona-1.8.20-cp38-cp38-win_amd64.whl
pip install rasterio-1.2.10-cp38-cp38-win_amd64.whl
  • Install the requirements:
pip install -r requirements.txt

And that's it, you can start using aitlas! For example:

python -m aitlas.run configs/example_config.json

If you want to use aitlas as a package, run

pip install .

in the folder where you cloned the repo.


Note: You will have to download the datasets from their respective sources. You can find a link for each dataset in the corresponding dataset class in aitlas/datasets/, or use the AiTLAS Semantic Data Catalog.
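Once a dataset is downloaded, using aitlas as a package looks roughly like the sketch below. This is illustrative pseudocode only, assembled from class and method names that appear in the issue reports further down this page (AIRSDataset, train_model, run_id); the exact import paths and constructor arguments should be checked against the repository code.

```
# Illustrative pseudocode only: names are taken from issue reports on this
# page and may not match the current API exactly.
from aitlas.datasets import AIRSDataset
from aitlas.models import DeepLabV3   # import path is a guess

train_dataset = AIRSDataset(...)      # dataset configuration omitted
model = DeepLabV3(...)                # model configuration omitted

model.train_model(
    train_dataset=train_dataset,
    epochs=epochs,
    model_directory=model_directory,
    run_id="1",                       # see the segmentation issue below
)
```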


Citation

For attribution in academic contexts, please cite our work 'AiTLAS: Artificial Intelligence Toolbox for Earth Observation', published in Remote Sensing (2023), as:

@article{aitlas2023,
  AUTHOR = {Dimitrovski, Ivica and Kitanovski, Ivan and Panov, Panče and Kostovska, Ana and Simidjievski, Nikola and Kocev, Dragi},
  TITLE = {AiTLAS: Artificial Intelligence Toolbox for Earth Observation},
  JOURNAL = {Remote Sensing},
  VOLUME = {15},
  YEAR = {2023},
  NUMBER = {9},
  ARTICLE-NUMBER = {2343},
  ISSN = {2072-4292},
  DOI = {10.3390/rs15092343}
}

The AiTLAS Ecosystem

AiTLAS: Benchmark Arena

An open-source benchmark framework for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO). It presents a comprehensive comparative analysis of more than 500 models derived from ten state-of-the-art architectures, compared across a variety of multi-class and multi-label classification tasks from 22 datasets with different sizes and properties. In addition to models trained entirely on these datasets, it also benchmarks models trained in a transfer learning setting, leveraging pre-trained model variants, as is typically done in practice. All presented approaches are general and can easily be extended to many other remote sensing image classification tasks. To ensure reproducibility and facilitate usability and further development, all experimental resources (the trained models, model configurations, and processing details of the datasets, with the corresponding splits used for training and evaluating the models) are available in the repository.

repo: https://github.com/biasvariancelabs/aitlas-arena

paper: Current Trends in Deep Learning for Earth Observation: An Open-source Benchmark Arena for Image Classification, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 197, pp. 18-35

AiTLAS Semantic Data Catalog of Earth Observation (EO) datasets (beta)

A novel semantic data catalog of numerous EO datasets, pertaining to various EO and ML tasks. The catalog, which includes properties of different datasets and provides further details for their use, is available here

aitlas's People

Contributors

alicebaudhuin, elenamer, ivankitanovski, ivicadimitrovski, popovstefan, simidjievskin


aitlas's Issues

ignore_index in the loss calculation

Hello,

In case I would like to ignore certain areas of images, how could it be done?

I suppose we have to add an "ignore_index" parameter in the loss function, but I am not sure how to set it in this framework.

Many thanks and best regards,
T.
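For context on what the question asks for: in PyTorch losses such as torch.nn.CrossEntropyLoss, ignore_index marks target values that are excluded from the loss entirely. The pure-Python sketch below mimics that behaviour for a per-pixel loss; how to thread the parameter through an aitlas configuration is not shown here, since that depends on the framework's loss wrappers.

```python
import math

def cross_entropy(probs, targets, ignore_index=-100):
    """Mean negative log-likelihood, skipping targets equal to ignore_index.

    Mirrors the semantics of ignore_index in torch.nn.CrossEntropyLoss:
    ignored positions contribute neither to the sum nor to the averaging
    denominator.
    """
    total, count = 0.0, 0
    for p, t in zip(probs, targets):
        if t == ignore_index:
            continue                 # masked-out pixel: skip entirely
        total += -math.log(p[t])
        count += 1
    return total / count if count else 0.0

# Two pixels, the second masked out with label -100:
probs = [[0.9, 0.1], [0.2, 0.8]]
loss = cross_entropy(probs, [0, -100])
# Only the first pixel contributes: loss == -log(0.9)
```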

Data import isn't working

Hi,
I imported the dataset using from aitlas.datasets import AIRSDataset but didn't find any.

Any help is appreciated!
Thanks

Can't use aitlas out of the box

Hi,

I'm trying to run the segmentation example notebook, but there are some problems.

First, when running model.train_model(train_dataset=train_dataset, epochs=epochs, model_directory=model_directory) I get:
TypeError: join() argument must be str or bytes, not 'NoneType'

The solution to this is adding run_id="1" as an argument to the previous call.

The error now becomes: AttributeError: 'DeepLabV3' object has no attribute 'optimizer'

The solution to this is adding the following to __init__ in deeplabv3.py:
self.criterion = self.load_criterion()
self.optimizer = self.load_optimizer()
self.lr_scheduler = self.load_lr_scheduler()

After which the model trains, but it throws a warning and then an error after finishing the first epoch while computing the loss:
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([4, 480, 3, 480])) that is different to the input size (torch.Size([4, 3, 480, 480])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction)
then:
RuntimeError: The size of tensor a (480) must match the size of tensor b (3) at non-singleton dimension 2

Two questions:

1- How to deal with the previous error?
2- Why are there so many errors? I installed through cloning the repository and python setup.py install.
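On question 1: one plausible reading of the size warning is that the target masks carry their channel axis at position 2 (shape (4, 480, 3, 480)) while the model output carries it at position 1 (shape (4, 3, 480, 480)). If so, permuting the target axes before the loss, e.g. target.permute(0, 2, 1, 3) in PyTorch, would align the shapes; whether that is the correct fix depends on how the masks are loaded, which cannot be verified from the report alone. The sketch below only checks the axis arithmetic:

```python
def permuted_shape(shape, order):
    """Shape a tensor would have after tensor.permute(*order)."""
    return tuple(shape[i] for i in order)

target_shape = (4, 480, 3, 480)  # target size reported in the warning
input_shape = (4, 3, 480, 480)   # model output size reported in the warning

# Moving axis 2 (the channel axis) to position 1 makes the shapes agree:
assert permuted_shape(target_shape, (0, 2, 1, 3)) == input_shape
```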
