
rapid_inflows

Louis R. Rosas, Dr Riley Hales

Rapid Inflows is a Python interface for RAPID that assists in preparing inflow data. It takes in gridded runoff data along with lists of basins and their areas, computes the runoff for each basin at every time step, and writes the resulting flows to an output inflow NetCDF file, the main dataset used by RAPID. Inspired by ERDC's RAPIDpy. More information about RAPID can be found here.

Inputs

The create_inflow_file function takes four parameters:

  1. a list of gridded runoff NetCDF datasets
  2. a weight table
  3. a comid_lat_lon_z file
  4. an output filename

Items 2-4 may instead be wrapped together in a list of iterable objects (tuples, lists, arrays, etc.) and passed as 'input_tuples'. This allows the NetCDF datasets to be opened once, while multiple weight tables matching the datasets are used to produce multiple outputs.
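As a sketch of the grouped-input pattern (all filenames here are hypothetical), items 2-4 for each desired output can be zipped together into one list of tuples:

```python
# Hypothetical filenames; real paths depend on your project layout.
runoff_datasets = ["era5_runoff_2020.nc", "era5_runoff_2021.nc"]

weight_tables = ["weight_721x1440_basinA.csv", "weight_721x1440_basinB.csv"]
comid_files = ["comid_lat_lon_z_basinA.csv", "comid_lat_lon_z_basinB.csv"]
output_files = ["inflow_basinA.nc", "inflow_basinB.nc"]

# Each tuple bundles (weight table, comid file, output filename), so the
# runoff datasets only need to be opened once for all outputs.
input_tuples = list(zip(weight_tables, comid_files, output_files))
```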

Gridded Runoff Datasets

The input NetCDF datasets should follow these conventions. Each dataset is expected to have 3 or 4 dimensions, ordered either (time, latitude, longitude) or (time, expver, latitude, longitude). The 4th dimension is often present in the most recent releases of the ECMWF datasets, which include an 'experimental version' dimension; internally, this dimension is flattened and the data summed, and its name does not matter. The runoff, longitude, and latitude variables may be named any of the following, respectively: 'ro', 'RO', 'runoff', 'RUNOFF'; 'lon', 'longitude', 'LONGITUDE', 'LON'; 'lat', 'latitude', 'LATITUDE', 'LAT'. Latitude and longitude are expected in even steps, from 90 to -90 degrees and 0 to 360 degrees respectively. Time is expected as 'time'. The time step between datasets should be constant (typically one day) and should be an integer.
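The variable-name aliases above can be resolved with a small lookup helper. This is an illustrative sketch of the convention, not the package's actual code:

```python
# Accepted names for each variable, as listed in the conventions above.
RUNOFF_NAMES = ('ro', 'RO', 'runoff', 'RUNOFF')
LON_NAMES = ('lon', 'longitude', 'LONGITUDE', 'LON')
LAT_NAMES = ('lat', 'latitude', 'LATITUDE', 'LAT')

def resolve_name(dataset_vars, candidates):
    """Return the first accepted alias present among a dataset's variable names."""
    for name in candidates:
        if name in dataset_vars:
            return name
    raise ValueError(f'None of {candidates} found in dataset')

# Example with variable names one might see in a recent ERA5 file.
variables = {'time', 'expver', 'latitude', 'longitude', 'ro'}
```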

The user may input as many datasets as desired. A built-in memory check warns the user if the memory required exceeds 80% of the available RAM, and terminates the process if the memory required exceeds all the available memory.
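The memory policy described above can be sketched as a pure function (the thresholds come from the text; the real implementation and its function name may differ):

```python
def check_memory(required_bytes, available_bytes):
    """Mirror the documented policy: warn above 80% of available RAM,
    raise if the requirement exceeds all available memory."""
    if required_bytes > available_bytes:
        raise MemoryError('Not enough RAM for the requested datasets')
    if required_bytes > 0.8 * available_bytes:
        return 'warn'
    return 'ok'
```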

Example NetCDFs can be found in /tests/inputs/era5_721x1440_sample_data.

Weight Table

A single csv file containing at least the following 5 columns of data, with these column names:

  1. 'area_sqm': the area, in square meters, of a given basin that intersects with the gridded runoff data
  2. 'lon': longitude of the centroid of the gridded runoff data cell that intersects with a given basin
  3. 'lat': latitude of the centroid of the gridded runoff data cell that intersects with a given basin
  4. 'lon_index': x index of the gridded runoff data cell that intersects with a given basin
  5. 'lat_index': y index of the gridded runoff data cell that intersects with a given basin
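Conceptually, a basin's inflow volume at each time step is the sum, over its intersecting cells, of runoff depth times 'area_sqm'. A minimal sketch with made-up numbers (the real code reads these values from the weight table and the NetCDF datasets):

```python
# Hypothetical weight-table rows for one basin: each row pairs an
# intersected cell (lat_index, lon_index) with the intersection area.
weights = [
    {'lat_index': 10, 'lon_index': 20, 'area_sqm': 1_000_000.0},
    {'lat_index': 10, 'lon_index': 21, 'area_sqm': 500_000.0},
]

# Hypothetical runoff depths in meters at one time step, keyed by cell.
runoff_m = {(10, 20): 0.002, (10, 21): 0.004}

# Inflow volume (m^3) = sum of depth * intersected area over all cells.
inflow_m3 = sum(
    runoff_m[(w['lat_index'], w['lon_index'])] * w['area_sqm']
    for w in weights
)
```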

Weight table filenames are expected to contain some digits, followed by an 'x', followed by more digits (e.g. weight_123x4567.csv). These digits represent the lat-lon dimensions of the input NetCDFs. An internal check verifies that the weight table and NetCDF dimensions are the same (currently order does not matter). This file can be generated using either these ArcGIS tools or these python scripts.
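The filename convention can be checked with a regular expression. An order-insensitive sketch (function names are hypothetical, not the package's API):

```python
import re

def dims_from_filename(filename):
    """Extract the <digits>x<digits> pair from a weight table filename."""
    match = re.search(r'(\d+)x(\d+)', filename)
    if match is None:
        raise ValueError(f'No NxM dimension pair in {filename!r}')
    return int(match.group(1)), int(match.group(2))

def dims_match(filename, dataset_shape):
    """Compare filename dims to (nlat, nlon); order ignored, as the text notes."""
    return sorted(dims_from_filename(filename)) == sorted(dataset_shape)
```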

An example weight table can be found in /tests/inputs.

Comid csv

A csv whose first column holds the unique ids (COMIDs, reach ids, LINKNOs, etc.) of the basins/streams being modeled, sorted in the order in which the user wants RAPID to process them. Two other columns must be present, labeled 'lat' and 'lon', giving a point associated with each river reach. This file can be generated using either these ArcGIS tools or these python scripts.
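Such a file can be written with the standard library. In this sketch the first-column header name 'rivid' and the id values are assumptions; the text only requires that ids come first, in processing order, with 'lat' and 'lon' columns alongside:

```python
import csv
import io

# Hypothetical reach ids in the desired RAPID processing order,
# each paired with a representative point (lat, lon).
rows = [
    (110001, 40.25, -111.65),
    (110002, 40.30, -111.60),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(['rivid', 'lat', 'lon'])  # 'rivid' header name is an assumption
writer.writerows(rows)
comid_csv = buffer.getvalue()
```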

An example comid csv can be found in /tests/inputs.

Outputs

The create_inflow_file function outputs a new NetCDF 3 (classic) dataset. It is designed to be accepted by RAPID and to pass all of its internal tests. Example output datasets can be found in /tests/validation.

Contributors

rileyhales, rickytheguy, j-ogden99
