
The official repository accompanying the paper "Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations".

License: MIT License


Deep Vision-Based Framework for Coastal Flood Prediction Under Climate Change Impacts and Shoreline Adaptations

This repository contains the complete source code and data for reproducing the results reported in the paper. The proposed framework and models were implemented in tensorflow.keras (v2.1). The weights of all trained DL models are included.

The implementations of the SWIN-Unet and Attention U-net were adapted from the keras-unet-collection repository of Yingkai (Kyle) Sha.

Repository Structure

  • data includes the raw data, as well as the datasets (in tf.data.Dataset format) derived from it, on which the coastal flood prediction models were trained, validated and tested.

  • models contains the implementation of the models (in tensorflow.keras v2.1) along with the weights of the trained models (in h5 format).

  • model_training.ipynb provides sample code for training Deep Vision-based coastal flood prediction models with the proposed approach.

  • performance_evaluation.ipynb includes sample code for assessing the performance of the developed models and visualizing their predictions (see also Illustrations.ipynb).

Training From Scratch

To re-train the aforementioned three models (SWIN-Unet, Attention U-net, CASPIAN):

1: Open model_training.ipynb, select the model and define your desired hyperparameters:

```python
grid_size = 1024
AUTOTUNE = tf.data.AUTOTUNE
batch_size = 2
split = 1
output_1d = False
EPOCHS = 200

MODEL_NAME = "SWIN-Unet"
LR = 0.0008
MIN_LR = LR / 10
WARMUP_EPOCHS = 20
```
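The warm_up_lr callback used later in the notebook is not shown in this excerpt. A minimal sketch of a linear warm-up schedule consistent with the LR, MIN_LR and WARMUP_EPOCHS values above (the ramp shape is an assumption; the notebook's actual callback may differ):

```python
# Hypothetical linear warm-up: ramp the learning rate from MIN_LR up to
# LR over WARMUP_EPOCHS epochs, then hold it at LR.
LR = 0.0008
MIN_LR = LR / 10
WARMUP_EPOCHS = 20

def warmup_schedule(epoch):
    """Return the learning rate for a given (0-indexed) epoch."""
    if epoch >= WARMUP_EPOCHS:
        return LR
    return MIN_LR + (LR - MIN_LR) * epoch / WARMUP_EPOCHS
```

Such a function can be wrapped in tf.keras.callbacks.LearningRateScheduler(warmup_schedule) and passed to model.fit().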

2: Load the dataset:

```python
ds = {
    'train': tf.data.Dataset.load("./data/train_ds_aug_split_%d" % split).map(
        lambda f, x, y, yf: tf.py_function(clear_ds,
                                           inp=[f, x, y, yf, output_1d],
                                           Tout=[tf.float32, tf.float32])),
    'val': tf.data.Dataset.load("./data/val_ds_aug_split_%d" % split).map(
        lambda f, x, y, yf: tf.py_function(clear_ds,
                                           inp=[f, x, y, yf, output_1d],
                                           Tout=[tf.float32, tf.float32]))
}
```

In the current implementation, the training and validation datasets are assumed to be pre-augmented. To recreate these datasets, run the data/Dataset_construction.ipynb notebook. For a more memory-efficient implementation, the augmentation can be performed on the fly during training by passing a data generator to the model.fit() function.
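As a sketch of that on-the-fly alternative, assuming each raw sample is a pair of 2-D arrays (input grid, inundation map) — the generator name and the flip-based augmentation are illustrative, not the repository's actual pipeline:

```python
# Illustrative on-the-fly augmentation via a Python generator.  The
# augmentation choice (random flips) is an assumption for the sketch.
import numpy as np

def augmenting_generator(raw_pairs, rng=None):
    """Yield randomly flipped copies of (input_grid, label) pairs."""
    if rng is None:
        rng = np.random.default_rng()
    for x, y in raw_pairs:
        if rng.random() < 0.5:          # horizontal flip
            x, y = np.fliplr(x), np.fliplr(y)
        if rng.random() < 0.5:          # vertical flip
            x, y = np.flipud(x), np.flipud(y)
        # add the batch and channel axes expected by model.fit()
        yield x[None, ..., None], y[None, ..., None]
```

A generator like this can be passed directly to model.fit(), trading extra compute per epoch for not holding the augmented dataset in memory.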

3: Define the remaining hyperparameters and callbacks, then initiate the training:

```python
model.summary()

# Warm-up phase with a gradually increasing learning rate
history_warmup = model.fit(ds['train'],
                           epochs=WARMUP_EPOCHS,
                           validation_data=ds['val'],
                           callbacks=[checkpoint, tensorboard_callback, warm_up_lr])

# Main training phase, restarted from the best warm-up checkpoint
model.load_weights("./models/trained_models/%s/initial/" % MODEL_NAME)
history = model.fit(ds['train'],
                    epochs=EPOCHS,
                    validation_data=ds['val'],
                    callbacks=[checkpoint, tensorboard_callback, early_stop, reduce_lr])

# Reload the best checkpoint and save the final model
model.load_weights("./models/trained_models/%s/initial/" % MODEL_NAME)
model.save("./models/trained_models/" + MODEL_NAME + "_split_{}".format(str(split)), save_format='h5')
```
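The checkpoint, tensorboard_callback, early_stop and reduce_lr callbacks referenced above are not defined in this excerpt. A minimal sketch using standard tf.keras callbacks — the monitored metric, patience values and log directory are assumptions, not the notebook's actual settings:

```python
import tensorflow as tf

MODEL_NAME = "SWIN-Unet"
MIN_LR = 0.0008 / 10

# Save the best weights seen so far (path mirrors the one loaded above)
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "./models/trained_models/%s/initial/" % MODEL_NAME,
    monitor="val_loss", save_best_only=True, save_weights_only=True)

# Log training curves for TensorBoard (log directory is an assumption)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")

# Stop when validation loss plateaus, and lower the learning rate first
# (patience and factor values are assumptions)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=30)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=10, min_lr=MIN_LR)
```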

Applying the Trained Models

To ensure the robustness of the results, the models were trained on three randomly generated data splits. To produce a flood inundation map for a given shoreline protection scenario, select a split, load the corresponding weights of the chosen trained model, and provide the hypothetical flood susceptibility map as input:

```python
# Load the trained model for the chosen split
model = tf.keras.models.load_model("./models/trained_models/" + MODEL_NAME + "_split_{}".format(str(split)), compile=False)

# Predict an inundation map for each test scenario
for sample in ds_eval['test'].as_numpy_iterator():
    scenario, input_grid, label, label_flat = sample
    pred = model.predict(input_grid)[0, :, :, 0]
```
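A quick way to sanity-check a prediction against its reference map — the RMSE metric here is illustrative, not necessarily the evaluation used in performance_evaluation.ipynb:

```python
import numpy as np

def rmse(pred, label):
    """Root-mean-square error between predicted and reference depth maps."""
    pred = np.asarray(pred, dtype=np.float64)
    label = np.asarray(label, dtype=np.float64)
    return float(np.sqrt(np.mean((pred - label) ** 2)))
```

Inside the loop above this would be something like rmse(pred, label), once both arrays are reduced to the same 2-D grid.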

