
cse524_project's Introduction

Tasks:

  1. Filter down data to train a segmentation model.

    1. Train a weakly supervised model for efficient segmentation.

    2. Train a fully unsupervised model.

  2. Visualize the result on top of Google Earth view.

    1. Project Penguin Colony Visualization for a specific set of locations.

    2. Create an application for running this visualization as a desktop app.

Progress Report:

Task 1: Filter down data to train a segmentation model

Summary: For the given locations, the guano patches are hand labelled and this information is stored in KML files, which are parsed to create the relevant geometries and generate two images per location: the raw image of the location and an image with the hand-labelled guano patches marked. These two images form our dataset for training and testing the segmentation model, with the latter serving as the ground truth. A U-Net is used for segmentation.

For the 19 locations, each KML file is parsed to capture the details and to generate the original raw image and the ground truth image for the location. This prepares the dataset for training and testing the segmentation model. The process consists of the following steps (a code sketch follows the list):

  1. Original Image set

    1. Read the KML files, compute the bounding box and centroid, and store them in a dictionary keyed by place name
    2. Using the dictionary, save the HTML files for the maps
    3. Using Selenium, open each HTML file and take a screenshot
    4. Save the screenshot with the suffix oi
  2. Ground Truth Image set

    1. Use the geometries to mark the labelled regions on the map
    2. Save the HTML files for the maps
    3. Using Selenium, open each HTML file and take a screenshot
    4. Save the screenshot with the suffix gt
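
The sketch below illustrates these steps end to end. The directory layout, file names, and the choice of xml.etree for KML parsing are assumptions made for illustration, and the real pipeline renders the Sentinel-2 imagery rather than the default base map.

```python
# Sketch of the dataset-generation steps (file layout and names are assumptions).
import glob, os, time
import xml.etree.ElementTree as ET
import folium
from selenium import webdriver

KML_NS = "{http://www.opengis.net/kml/2.2}"
place_info = {}   # place name -> bounding box, centroid, and labelled polygons

# Step 1.1: read each KML file, collect the polygon coordinates, and derive the
# bounding box and centroid, stored in a dictionary keyed by place name.
for path in glob.glob("kml/*.kml"):
    name = os.path.splitext(os.path.basename(path))[0]
    polygons = []
    for coords in ET.parse(path).iter(KML_NS + "coordinates"):
        pts = [tuple(map(float, c.split(",")[:2])) for c in coords.text.split()]
        polygons.append([(lat, lon) for lon, lat in pts])  # KML stores lon,lat; folium wants lat,lon
    lats = [lat for poly in polygons for lat, _ in poly]
    lons = [lon for poly in polygons for _, lon in poly]
    place_info[name] = {
        "bbox": (min(lats), min(lons), max(lats), max(lons)),
        "centroid": (sum(lats) / len(lats), sum(lons) / len(lons)),
        "polygons": polygons,
    }

driver = webdriver.Chrome()   # assumes a chromedriver is on PATH

for name, info in place_info.items():
    # Steps 1.2-1.4: map centred on the location, saved to HTML and screenshotted ("oi").
    # (The real pipeline shows the satellite imagery here; the base map is a stand-in.)
    m = folium.Map(location=info["centroid"], zoom_start=14)
    m.save(f"{name}_oi.html")
    # Steps 2.1-2.4: same map with the hand-labelled guano geometries drawn on top ("gt").
    for poly in info["polygons"]:
        folium.Polygon(poly, color="red", fill=True).add_to(m)
    m.save(f"{name}_gt.html")
    for suffix in ("oi", "gt"):
        driver.get("file://" + os.path.abspath(f"{name}_{suffix}.html"))
        time.sleep(3)   # give the map tiles time to load before the screenshot
        driver.save_screenshot(f"{name}_{suffix}.png")

driver.quit()
```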

For a better understanding, the process is represented in the flowchart below:

[Flowchart: dataset generation process]

This process captures and generates image sets as follows:

Capebatterbee

Original Image:

[image]

Ground Truth Image:

[image]

For all the locations the images are ready for segmentation, except Acuna and Wpec, which returned the following results:

Acuna

Original Image:

[image]

Wpec

[image]

This happened because Sentinel-2A had no capture for these locations in the year 2019. Hence, we discard these two locations for the subsequent segmentation steps.

Next, we create the U-Net model as follows:

[U-Net architecture diagram: https://miro.medium.com/max/800/1*OkUrpDD6I0FpugA_bbYBJQ.png]

This model is now ready for training, so we take the dataset and split it into training and testing data.
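
A minimal Keras sketch of such a U-Net, followed by the train/test split, is shown below; the depth, filter counts, patch size, and split ratio are assumptions rather than the exact configuration used.

```python
# Minimal U-Net sketch (depth, filters, and input size are assumptions).
import numpy as np
from tensorflow.keras import layers, Model
from sklearn.model_selection import train_test_split

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    # Contracting path: conv blocks followed by max pooling.
    c1 = conv_block(inputs, 16); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 64);     p3 = layers.MaxPooling2D()(c3)
    b = conv_block(p3, 128)      # bottleneck
    # Expanding path: upsample, concatenate the skip connection, then a conv block.
    u3 = layers.Concatenate()([layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b), c3])
    c4 = conv_block(u3, 64)
    u2 = layers.Concatenate()([layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4), c2])
    c5 = conv_block(u2, 32)
    u1 = layers.Concatenate()([layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c5), c1])
    c6 = conv_block(u1, 16)
    # Single-channel sigmoid output: per-pixel guano / background probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for the 17 usable (raw image, ground-truth mask) pairs;
# loading the *_oi / *_gt screenshots into arrays is left out of this sketch.
images = np.zeros((17, 256, 256, 3), dtype="float32")
masks = np.zeros((17, 256, 256, 1), dtype="float32")
X_train, X_test, y_train, y_test = train_test_split(images, masks, test_size=0.2, random_state=42)
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=2)
```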

Task 2: Visualize the results on top of Google Earth View

Summary: 19 locations were provided for mapping the penguin colonies. The locations are given as KML files, which are analysed to obtain coordinates for each location; these coordinates are then used to collect images of the location from specific datasets for the current time. The median of the image collection is used for segmentation, and the result is presented as a marker at the location which opens a pop-up box showing the results for that specific location. This has been achieved using the Google Earth Engine Python API and Folium.
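
A rough sketch of that Earth Engine step is shown below; the Sentinel-2 dataset id, cloud filter, buffer size, and example coordinates are assumptions beyond what the text states.

```python
# Sketch of fetching a median Sentinel-2 composite for one location with the
# Earth Engine Python API (dataset id, cloud filter, and buffer size are assumptions).
import ee

ee.Initialize()

def median_composite(lat, lon, buffer_m=2000):
    """Return the median Sentinel-2 image over a small region around (lat, lon) for 2019."""
    region = ee.Geometry.Point([lon, lat]).buffer(buffer_m).bounds()
    collection = (ee.ImageCollection("COPERNICUS/S2")
                  .filterBounds(region)
                  .filterDate("2019-01-01", "2019-12-31")
                  .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))
    return collection.median().clip(region), region

# Example call with a hypothetical colony centroid taken from the parsed KML dictionary.
image, region = median_composite(-67.0, 51.0)
```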

For the 19 locations provided, once the segmentation process is done, a marker is placed at every location. Clicking a marker displays the corresponding segmentation result in a pop-up box. This has been implemented using Folium's IFrame, and the result is shown below:

[Map with location markers showing segmentation result pop-ups]
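
The pop-ups can be built roughly as sketched below; the result-image file names and the map centre are placeholders, and place_info is the dictionary from the KML-parsing sketch above.

```python
# Sketch of placing a marker per location whose pop-up shows the segmentation result
# via folium's IFrame (image file names and map centre are placeholders).
import base64
import folium
from folium import IFrame

m = folium.Map(location=[-70.0, 0.0], zoom_start=3)    # rough continent-wide view

for name, info in place_info.items():                  # dictionary from the KML-parsing step
    with open(f"{name}_segmentation.png", "rb") as f:  # assumed name of the saved result image
        encoded = base64.b64encode(f.read()).decode()
    html = f'<h4>{name}</h4><img src="data:image/png;base64,{encoded}" width="300">'
    iframe = IFrame(html, width=320, height=320)
    folium.Marker(location=info["centroid"],
                  popup=folium.Popup(iframe),
                  tooltip=name).add_to(m)

m.save("penguin_colonies.html")                        # open in a browser to inspect the pop-ups
```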

The results are currently displayed for the 19 provided locations, but the same code can be reused to run segmentation on an unknown location and test the model's performance. For an unknown location, the user inputs a (lat, long) tuple for the approximate centre of the location. The project is a Jupyter notebook application that can be run with Anaconda after installing the following packages: folium, selenium, and the Google Earth Engine Python API.
