
pybrat's Introduction

This is the Beaver Restoration Assessment Tool (pyBRAT) source code and its documentation. pyBRAT has been superseded by sqlBRAT.

The documentation was previously hosted at http://brat.joewheaton.org and can now be accessed at http://brat.riverscapes.net.

The last stable release of pyBRAT (with ArcPy 10.x dependencies) was 3.1.0 (latest release), which was also published as:

  • Jordan Gilbert, Joe Wheaton, Wally Macfarlane, & Margaret Hallerud. (2019). pyBRAT - Beaver Restoration Assessment Tool (Python) (3.1.0). Zenodo. DOI: 10.5281/zenodo.7086388

A Riverscapes Consortium Report Card for v3.1.0 is available here.

Next Versions

Note: pyBRAT as an ArcGIS toolbox is no longer supported, as we have shifted our development focus to a production-grade sqlBRAT and ESRI is sunsetting support for ArcGIS 10.x. sqlBRAT is currently a command-line tool.

pybrat's People

Contributors

albonicomt, banderson1618, bangen, brittgraham, joewheaton, jtgilbert, lhaycock, matthewmeier, mattreimer, mhallerud, philipbaileynar, tyler1218hatch, wally-mac


pybrat's Issues

Landfire VEG_CODE and CODE Key

The following is an Excel sheet that describes the VEG_CODE and CODE values used for different Landfire codes. Let me know what you think, specifically whether the value for developed land needs to be lowered. My premise was that, even though the land is developed, beavers (as you said) are not discriminatory about who owns the land, and that the CODE would take care of the conflicts that would arise from land ownership and agriculture. https://usu.box.com/s/jsm3nmwckr74rmklb4f7heh10hnvtouz
@wally-mac @bangen @joewheaton @banderson1618

Conversation about where Field App Can go

From @webernick79:

Some more ideas for the validation workflow. These are just concepts of what could work and are not meant to be finished products.

This example shows how Arc Explorer and Survey123 work together to allow offline data collection:
https://www.youtube.com/watch?v=26rqpWeay-o — this makes use of a vector tile package as a basemap, which is brought into Arc Explorer using the mobile map package; that also launches Survey123 to record validation observations.

The Survey123 data would then be available as a layer in ArcGIS Online. This can be used in a web map or app; here’s an app example: https://www.youtube.com/watch?v=ieZPHtKUPxk

Let’s chat about the limitations of this approach and others soon and decide where to go. If this is a route we consider, I would need to start getting final layers to develop basemaps, validation layers, etc.

Navigation buttons

@banderson1618
I noticed you made a new page under the documentation for the braid handler. Unfortunately, there are now a couple of navigation buttons that are not working. I took one last look at the dead link checker and noticed the last two buttons are not working (see below). Let me know if you need more info about how to fix them.
image

UnboundLocalError

Do you have any idea what could be causing this error? I am running ArcGIS 10.6, and it works for other datasets, so I re-downloaded and cleaned up some things, but I am still getting this error:
image
I sent the project folder to Sara in an email.

Title problem

@philipbaileynar I tried adding
--- Title ---
to each of the .md pages in the pyBRAT repo that I have created. See the last three commits:

2a9089c
4954570
f19ff02

It looks like something else major happened on the last one I did, because it says I changed 100 files with over 14,000 additions, and I also got an email saying the page build failed...

Can you help me debug what happened?

pyBRAT could use a development branch

We've been talking about creating a development branch for pyBRAT for at least a month now, so it seems fair to make it an issue. I propose that @bangen should be the one to make the branch, primarily because she would be the main person using it, but also so that she can gain experience and familiarity with branches. I'm willing to do it myself, but I believe the experience would be more valuable for @bangen than for me.

In case you need a refresher on how branching works, here's a link to the Branch Demo I made back in March. There are plenty of other resources online about branching, which can be useful if you get stuck.

@wally-mac and @joewheaton, should we finally pull the trigger on this? Feedback is appreciated.

BRAT-cIS (Capacity Inference System)

This is from #29

Joe thought that it would be useful to have a BRAT calculator that would allow the user to run a simple inference system and get a capacity output “in the field.”
-NW: Yup, should be OK

What it is:

I think we need to build out what we'll call the BRAT-cIS (capacity inference system) and distinguish it from the 'model' of BRAT-cFIS (capacity fuzzy inference system). This is nothing more than the combination of the two inference systems that comprise the capacity model of BRAT from Macfarlane et al. (2015). The difference is that we don't make the inference system fuzzy; we just run it from the two rule tables based on categorical inputs from the user. The first assesses capacity solely on vegetation next to the stream versus within 100 m of the stream (on either side... but this could be broken into river right and river left):
table2
This output is analogous to BRAT's oVC_EX output.

The second takes that output, and feeds it in with a few other questions:
table3

Unlike the FIS, the output of the IS is NOT a continuous number (in dams/km) but rather a category of Maximum Dam Density:

legend_brat_damdensity_wide
This output is analogous to BRAT's oCC_EX output but is actually directly comparable to the Ex_Category output.
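The two-stage lookup described above could be sketched roughly as follows. The real rules live in the rule tables shown in the images; the vegetation levels, weighting, and flood caps below are placeholder assumptions purely to illustrate the mechanism:

```python
# Hypothetical sketch of the two-stage BRAT-cIS lookup. The weighting
# and the flood-response caps are placeholders, NOT the actual rules
# from the rule tables above.

VEG_LEVELS = ["none", "barely adequate", "moderately adequate", "suitable", "preferred"]
CATEGORIES = ["None", "Rare", "Occasional", "Frequent", "Pervasive"]

def veg_capacity(streamside, buffer_100m):
    """Stage 1: streamside vs. 100 m buffer vegetation (analogous to oVC_EX)."""
    s = VEG_LEVELS.index(streamside)
    b = VEG_LEVELS.index(buffer_100m)
    # Placeholder rule: streamside vegetation weighted twice as heavily.
    return CATEGORIES[round((2 * s + b) / 3)]

def dam_capacity(veg_cat, can_build_at_baseflow, flood_response):
    """Stage 2: cap the vegetation capacity using the hydrologic answers."""
    if not can_build_at_baseflow:
        return "None"
    idx = CATEGORIES.index(veg_cat)
    if flood_response == "Blowout":
        idx = min(idx, CATEGORIES.index("Rare"))
    elif flood_response == "Occasional Blowout":
        idx = min(idx, CATEGORIES.index("Occasional"))
    return CATEGORIES[idx]
```

The point is only that the IS is a pair of categorical lookups, not a fuzzy system, so it can run from a form in the field.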

The Specific Request

I think one of the Data Capture Apps (probably an Arc Survey123 form) would be similar to what Nick has already made (https://www.youtube.com/watch?v=26rqpWeay-o) but set up so it can be run either from the desktop or the field (i.e. #31), so that the call could be based on field data or a desktop call. We just need to know who is making the call, where they are making it, and what evidence they are basing it on.

The User Inputs

The first part should be called the 'Vegetation to Support Beaver Dam Building'. Right now it looks like:
brat_veg
That's pretty much there...

The second part brings in the 'Combined Capacity to Support Dam Building'. Here we need the following questions:

  1. Can beaver build a dam at baseflows?
  • Probably can build dam
  • Can build dam
  • Can build dam (saw evidence of recent dams)
  • Could build dam at one time (saw evidence of relic dams)
  • Cannot build dam (stream power really high)
  2. If beavers build a dam, what happens to the dam(s) in a typical flood (e.g. the mean annual flood)?
  • Blowout
  • Occasional Blowout
  • Occasional Breach
  • Dam Persists
    What empirical evidence exists of this at that location:
  • None
  3. How does the reach slope impact their ability or need to build dams?
  • So steep they cannot build a dam (e.g. > 20% slope)
  • Probably can build dam
  • Can build dam (inferred)
  • Can build dam (evidence of current or past dams)
  • Really flat (can build dam, but might not need as many, as one dam might back up water > 0.5 km)

This stuff needs to expand the lower part of @webernick79's mock form
brat_rest

Some additional contextual components should be:

  • Streamflow

    • Perennial
    • Potentially Intermittent
    • Intermittent
    • Potentially Ephemeral
    • Ephemeral
  • If 'Intermittent' - also get 'Proximity to Perennial Water Source'

    • > 5 km
    • 1 - 5 km
    • < 1 km

  • Channel Setting

    • Only Channel
    • Primary Anabranch
    • Secondary Anabranch or Side Channel
    • Backwater
  • Current Activity

    • Signs of Beaver Activity (checkboxes)
    • Current
    • Recent
    • Relic
    • Type of Beaver Activity
      • Dam Building
      • Dam Maintenance
      • Food Caching
      • Woody Material Harvest
    • Existing Beaver Dams
    • How many primary
    • How many secondary
    • How many complexes
    • How active (radio):
      • All maintained
      • Some maintained
      • Some maintenance but all intact
      • Some maintenance - mixed intact, breached, blownout
      • No maintenance - but all intact
      • No maintenance - mixed intact, breached, blownout
      • No maintenance - all breached or blownout

The Output

  • The output would be the polyline segment (250 m to 500 m long) of the stream where the assessment was made, carrying the category of dam capacity (none, rare, occasional, frequent, or pervasive). It would have all of the above fields, with inputs having an idc prefix (for input data capture, instead of i for model inputs) and outputs having an odc prefix.

Analyses it Could Support

We'll need to work with @banderson1618 and @bangen on these. A few that come to mind:

  • Comparison of odc_cIS to Ex_Category, on a reach by reach basis and having a map output with categories of:
    • Agreement
    • Minor Model Underprediction
    • Major Model Underprediction
    • Minor Model Overprediction
    • Major Model Overprediction
  • A pie chart (with percentages) and bar plot (with distances of stream) of above.
  • An error matrix table
  • An actual dam count
    • Total
    • Primary
    • Secondary
    • Complexes
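The reach-by-reach comparison in the first bullet could be computed as below, assuming both outputs use the same five ordered capacity categories; the one-step vs. two-step cutoff between "Minor" and "Major" is an assumption for illustration:

```python
# Sketch of the proposed odc_cIS vs. Ex_Category comparison. Assumes
# both values are one of the five ordered capacity categories; the
# thresholds separating "Minor" from "Major" are an assumption.

ORDER = ["None", "Rare", "Occasional", "Frequent", "Pervasive"]

def agreement(field_cat, model_cat):
    """Classify one reach: how far the model category is from the field call."""
    diff = ORDER.index(model_cat) - ORDER.index(field_cat)
    if diff == 0:
        return "Agreement"
    size = "Minor" if abs(diff) == 1 else "Major"
    direction = "Overprediction" if diff > 0 else "Underprediction"
    return f"{size} Model {direction}"
```

Tallying these labels per reach (weighted by segment length) would feed the pie chart, bar plot, and error matrix directly.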

BRAT Manuscript Ideas

@joewheaton and I discussed the need to outline the manuscripts that could come from our various BRAT work. @bangen and @joewheaton, let's get your ideas here.

  • Case studies: highlight all the various work we are doing (compare and contrast)
  • Ground-verified outputs vs. non-verified, and what gains you get with additional data capture
  • Conflict and Management Model
  • BRAT Castor comparison?

Have Elijah and Kristen as co-authors.

Question about /images folder

Philip,
Would it be best practice for us to always use the {{ site.baseurl }}/assets/images/ folder naming convention instead of the {{ site.baseurl }}/images/ convention? Joe thinks you only need assets folders when the site is hosted inside the docs folder, but it seems like it might be easiest to always use the assets/images folder regardless of whether the site is in the docs folder or in the master root.

Add or Modify Tool to add Existing Dams to BRAT Table

We need to add a tool to BRAT to ingest the dam census data. We already have a couple of fields to populate: the e_DamCt, eDamDens and eDAMPcC attributes. We need a tool that:

  • Asks user to load:
    • Specify BRAT Project
    • Choose BRAT Realization/Output Table to Add to
    • Load from Shapefile (of type point) a Dam census layer
    • Choose (radio button) whether this is a 'Dam Inventory', or a 'Scenario'
  • Specify the date this dam census corresponds to.
  • Upon loading:
    1. snaps points to network
    2. populates e_DamCt for each segment by counting points that intersect the line (do we want to do this separately for primary dams or complexes?)
    3. Calculates eDamDens for each segment by dividing e_DamCt by iGeo_Length
    4. Calculates eDamPcC by dividing eDamDens by oCC_EX
  • Then we need to output as layers
    • eDamDens symbolized in same way as capacity
    • eDamPcC with new symbology to show off the areas of the network with additional capacity.

In the management model, we will need to add this eDamPcC to look specifically at restoration potential and limit it by conflict-potential areas. Similarly, we need to look at segments where eDamCt is high and focus on those as conservation zones. We should anticipate this being one of many Dam Inventories added to a BRAT project (say, over time).
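A minimal sketch of steps 2-4 of the proposed tool, assuming the snapping has already happened and each dam point carries the ID of the segment it intersects. The data structures here are hypothetical; the field names (e_DamCt, eDamDens, eDamPcC, iGeo_Length, oCC_EX) come from the text above:

```python
from collections import Counter

# Hypothetical sketch: compute e_DamCt, eDamDens and eDamPcC per segment
# from already-snapped dam points. `segments` maps segment ID to a dict
# with iGeo_Length (metres) and oCC_EX (dams/km); `snapped_segment_ids`
# is one segment ID per snapped dam point.

def dam_attributes(segments, snapped_segment_ids):
    counts = Counter(snapped_segment_ids)
    out = {}
    for seg_id, seg in segments.items():
        e_damct = counts.get(seg_id, 0)
        # density in dams/km: count divided by length in km
        e_damdens = e_damct / (seg["iGeo_Length"] / 1000.0)
        occ_ex = seg["oCC_EX"]
        # percent of existing capacity; guard against zero capacity
        e_dampcc = e_damdens / occ_ex if occ_ex > 0 else 0.0
        out[seg_id] = {"e_DamCt": e_damct, "eDamDens": e_damdens, "eDamPcC": e_dampcc}
    return out
```

In the actual tool this loop would run over the BRAT output table (e.g. via an arcpy update cursor) rather than plain dicts.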

Need to move BRAT source code over.

@jtgilbert, we need to move the BRAT source code over here. Maybe you could start by uploading the Matlab version 1 code from the old Bitbucket repo? Then use the release feature and publish that as version 1.0. Maybe ask @philipbaileynar about best practice here. We could then either start a new branch or move the old stuff into a MatlabBRAT_v1 folder. Then start pushing all your pyBRAT code for version 2 over to the repo, and publish a release of that. This will allow us to keep both in the same repo. Philip, any suggestions?

Data Capture Distinctions

@wally-mac and I were discussing some of @webernick79's work on the Arc Survey123 and Arc Explorer web apps (see also #29). We think the following are important distinctions, and we have tried to genericize this from a specific 'Validation' exercise or 'field app' to something more generic aimed at capturing additional data to serve different needs, captured in a Riverscapes Project (something @banderson1618 and @bangen can help us with by creating placeholders in the BRAT Riverscapes Project). Those needs include (we'll want to split these out into specific issues with requests after we hash out the details):

  • Verification Exercises of specific inputs, intermediates and model outputs
  • Calculation of useful 'performance metrics' (i.e. comparisons of model outputs to data capture summaries)
  • Analyses and Calculation of Data (captured) summaries that provide independent metrics, graphs, tables, and map output summaries of the collected data; e.g.:
    • For Beaver Dam Counts
      • A layer of dam density (dams/km) for each segment (i.e. a field)
      • A summary of total dams, dam densities,
    • For Beaver Activity Surveys
    • For BRAT-cIS (i.e. BRAT Capacity Inference System)
    • For Vegetation Observations
    • For Nuisance Beaver Activities
    • For Nuisance Beaver Potential Impacts

@banderson1618 should think about where in the Riverscapes BRAT project the raw data capture files go (maybe an Inputs\DataCapture folder?), where the

Concepts of Data Capture

@wally-mac and I thought it made sense to genericize the concept of data capture into modes, sample-design, and spatial scale.

Modes

We see three <Modes> of Data Capture:

  • <Mode>Field-Based</Mode> - Note this is most likely done with Arc Survey123 using collector forms, from the 'field'.
  • <Mode>Desktop</Mode> - This is done at a computer, in a browser, through a Web App (e.g. Nick's example: https://www.youtube.com/watch?v=ieZPHtKUPxk)? @webernick79 can you clarify?
  • <Mode>Combined Field-Desktop</Mode> - This would be a mix of the two. For example, you went in the field and captured some observations, but maybe not a complete sample; using your memory, and having seen it on the ground, you augment the field sample with a desktop sample.

Sample-Design

We see the following types of sample-design worth differentiating:

  • <SampleDesign>Unstructured</SampleDesign> - This is just opportunistic: take points where you feel like it. These will often be biased, but still useful, samples (i.e. people will go to where they are interested or where they have easy access).
  • <SampleDesign>Structured</SampleDesign> - By contrast, a structured sample design would follow some pre-defined method, independently derived ahead of time (i.e. we make a draw of 50 sample locations: sample points, specific polyline reach segments, or specific polygons like the valley-bottom Thiessen polygons in RCAT or the 30 m or 100 m riparian buffers from BRAT). Specific examples would include:
    • <SampleDesign>Structured - Random</SampleDesign> - random points within a defined extent (polygon) or along a defined route (polyline network)
    • <SampleDesign>Structured - GRTS </SampleDesign> - spatially balanced version of above
    • <SampleDesign>Structured - Stratified </SampleDesign> -
    • <SampleDesign>Structured - Systematic </SampleDesign> -
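The Structured - Random and Structured - Stratified designs could be as simple as the sketch below; the GRTS and systematic variants need spatially aware logic, so they are omitted. The reach IDs and strata here are hypothetical:

```python
import random

# Illustrative sketch of two of the structured sample designs above,
# drawing reach IDs from a network. A fixed seed makes the draw
# reproducible, which matters for an "independently derived" design.

def random_draw(reach_ids, n, seed=0):
    """Structured - Random: n reaches drawn uniformly without replacement."""
    rng = random.Random(seed)
    return rng.sample(list(reach_ids), n)

def stratified_draw(reaches_by_stratum, n_per_stratum, seed=0):
    """Structured - Stratified: a fixed draw from each stratum."""
    rng = random.Random(seed)
    draw = []
    for stratum, ids in sorted(reaches_by_stratum.items()):
        draw.extend(rng.sample(list(ids), min(n_per_stratum, len(ids))))
    return draw
```

Either way, the draw (and its seed) should be recorded in the Riverscapes Project so the sample design is reproducible.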

Spatial Scale

It is important to recognize that there is an implicit spatial scale to the data capture in terms of extent, resolution, and spatial data type. For the most part, the raw data capture will always be vector, and the sample design explains how many samples there are. We don't need to define these independently, but we should recognize that the extent could be a watershed or a whole stream (a collection of reaches). The vector types are points, polyline stream segments, and polygons (Thiessen valley-bottom polygons or the 30 m or 100 m buffers in BRAT).

Additional fields in the BRAT output

Purpose: documentation for relating beaver dam censuses, collected in the field or in Google Earth, to a recent run of BRAT.
Creator: Matthew Meier
Date: 4/30/2018
Steps to creating new fields:

  1. The dam point shapefiles need to be snapped to the edges of the BRAT network. The user needs to define the search radius.
  2. Use the Spatial Join tool to get the count of dams on each line segment. You will want the BRAT output as your target features, the dams point shapefile as your join features, this tool's output polyline file as the output feature class, 'join one to one' as the join operation, keep all target features, and 'intersect' as the match option; the search radius should not matter because you snapped the points to the line (you might not even need the snap with this tool, but I did while working through it).
    a. Note that fields carried over from the points shapefile may or may not be desired, depending on the information in them.
    b. The tool creates a Join Count field; you will create an e_DamCt field that matches this join count field, i.e. how many dams are on each line segment. The description is underlined, the attribute field name is bold, and the specifics of the field name in ArcGIS are italicized.
    • Empirical: Actual Dam Count
    e_DamCt: estimated dam count per segment (double, precision = 0, scale = 0)
    From this information you will run through the following calculations…
    Calculating additional fields… (note: all of these will be new fields and could be done through the update cursor function in Python, but these are the formulas as I did them in Excel.)
    • Actual Dam Density [dams/km]
    e_DamDens: =(e_DamCt/seg_length)*1000 (double, precision = 0, Scale = 0)
    • Actual Percent of Existing Capacity Ratio [dimensionless]
    e_DamPcC: =IFERROR(e_DamCt/ oCC_EX,0) (double, precision = 0, Scale = 0)
    • Existing Capacity Category - oCC_EX binned into the dam capacity categories
    Ex_Categor: =IF(oCC_EX=0,"None",IF(AND(oCC_EX>0, oCC_EX<=1),"Rare",IF(AND(oCC_EX>1, oCC_EX<=5),"Occasional",IF(AND(oCC_EX>5, oCC_EX<=15),"Frequent",IF(AND(oCC_EX>15, oCC_EX<=40),"Pervasive"))))) (text)
    • Potential Capacity Category - oCC_PT binned into the dam capacity categories
    Pt_Categor: =IF(oCC_PT=0,"None",IF(AND(oCC_PT>0, oCC_PT<=1),"Rare",IF(AND(oCC_PT>1, oCC_PT<=5),"Occasional",IF(AND(oCC_PT>5, oCC_PT<=15),"Frequent",IF(AND(oCC_PT>15, oCC_PT<=40),"Pervasive"))))) (text)
    • Existing Capacity Dam Count - Product of oCC_EX and Segment Length [dams]
    mCC_EX_Ct: =(oCC_EX * SegLen)/1000 (double, precision = 0, Scale = 0)
    • Potential Capacity Dam Count - Product of oCC_PT and Segment length [dams]
    mCC_PT_Ct: =( oCC_PT * SegLen)/1000 (double, precision = 0, Scale = 0)
    • Existing to Potential Capacity Ratio - Ratio of actual to potential dam densities [dimensionless ratio between 0 and 1]
    mCC_EXtoPT: =IFERROR(oCC_EX / oCC_PT,0) (double, precision = 0, Scale = 0)
    Once these are added to the BRAT shapefile, you are done with the additional attributes that I have calculated by hand.
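As the note above suggests, these Excel formulas could run inside a Python update cursor instead. A plain-Python translation (arcpy omitted so the arithmetic is explicit; treating everything above 15 as "Pervasive" is an assumption, since the nested IFs above leave oCC_EX > 40 undefined):

```python
# Plain-Python versions of the Excel formulas above. In pyBRAT these
# would run row-by-row inside an arcpy.da.UpdateCursor loop.

def e_damdens(e_damct, seg_length_m):
    """e_DamDens: =(e_DamCt/seg_length)*1000, i.e. dams per km."""
    return (e_damct / seg_length_m) * 1000.0

def e_dampcc(e_damct, occ_ex):
    """e_DamPcC: =IFERROR(e_DamCt/oCC_EX, 0) -- zero capacity yields 0."""
    return e_damct / occ_ex if occ_ex else 0.0

def capacity_category(occ):
    """Ex_Categor / Pt_Categor: the nested-IF binning of oCC_EX / oCC_PT."""
    if occ == 0:
        return "None"
    if occ <= 1:
        return "Rare"
    if occ <= 5:
        return "Occasional"
    if occ <= 15:
        return "Frequent"
    return "Pervasive"  # assumption: the Excel IFs only cover up to 40

def mcc_ct(occ, seg_len_m):
    """mCC_EX_Ct / mCC_PT_Ct: =(oCC * SegLen)/1000, a dam count in [dams]."""
    return occ * seg_len_m / 1000.0
```

mCC_EXtoPT follows the same IFERROR pattern as e_dampcc, with oCC_EX over oCC_PT.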

Existing Drainage area for Large rivers

When a large river passes through your watershed, the drainage area upstream of where that river enters your watershed will appear as zero, based on the relative window you're looking at. A good way to solve this is to select the mainstem of that river through your watershed, find a DA value for it on StreamStats, and apply that value to the whole mainstem in your watershed.

Issue with segments

I didn't know exactly where to report this, Sara, but there are some issues I have been seeing with the network building for Idaho, and the ones you have uploaded recently need to be addressed. Not including the artificial segments indeed saves time with canals, but it introduces the possibility of discontinuity due to small segments missing from the network. These are difficult to see as you are editing, and they can result in the following issues where overlapping NHD areas occur or where culverts run under roads. I think this is a prime candidate for a tweak in the code, and after speaking with Wally it is pretty important to remedy. Examples include...
image
image
Maybe this could be an 'extend line segment to connect to the vertex of another' step in the segmenting network tool.

Some ideas for changes to Conflict Model

Background
I would like to see our management model improved. Currently, we use the following lines of evidence:

  • Distance to bridges/culverts
  • Distance to roads (in valley bottom)
  • Distance to railroads (in valley bottom)
  • Proximity to developed or agricultural land use

We use the above to calculate a probability of conflict based on the notion that the closer you are, the higher the potential for conflict (a simple transform function between distance and probability). We calculate this independently for each line of evidence, but then just go with the maximum probability (or worst case / most conservative).
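The current approach reads roughly like the sketch below: one distance-to-probability transform per line of evidence, then the maximum taken. The exponential form and the 1000 m decay scale are placeholder assumptions; the actual transform in pyBRAT may differ:

```python
import math

# Sketch of the current conflict model as described above. The
# exponential decay and the 1000 m scale are illustrative assumptions,
# not the transform pyBRAT actually uses.

def conflict_probability(distance_m, scale_m=1000.0):
    """Closer infrastructure -> higher probability of conflict."""
    return math.exp(-distance_m / scale_m)

def conflict(distances):
    """distances: line-of-evidence name -> distance in metres.
    Take the max (most conservative) across lines of evidence."""
    return max(conflict_probability(d) for d in distances.values())
```

Separating impacts from tolerances, as proposed below, would replace this single max with impact-specific evidence.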

What we'd like
I'd like to see us separate out:
What are the potential impacts:

  • Flooding
    • Backwater flooding or raised water level (urban/ or roads inundated in valley bottom)
    • Clogging flooding (culverts or crossings in small streams)
    • Dam break - downstream flood risk (big enough pond... maybe we only have that if we do #15)
  • Undesirable Harvest of Trees
    • Damage from trees falling on infrastructure (proximity of good riparian veg/tall trees (veg ht) in proximity to infrastructure)
    • Harvest of ornamental (trees in back yards; trees in urban parks and recreational areas)
    • Harvest of ag orchards (trees in orchards)
    • Harvest of scientific monitoring equipment (e.g. USGS gages, PIT tag readers)
  • Impacting Flow Diversion or Irrigation Works
    • Points of Diversion
    • Canals

What are the tolerances for beaver or mitigation:
Could be:

  • No tolerance (only good beaver is a dead beaver)
  • Can't allow Beaver (e.g. Superfund sites)
  • Some potential impacts, but willing to mitigate (start simple first, escalate only if necessary)
  • Beaver can do no harm (even if flooding my land)
  • Beaver Conservation Zones

To get started:

  • Start with Property Ownership or Parcel Map
    • Assume the worst for all private property (no tolerance)
    • Update as you have input...
    • Attempt to get it by agency by specifically finding out...

**Discussion Ideas** 👍

  • Another line of evidence could be - Proximity to higher capacity reaches.
  • Also, we should lower probability where capacity is low!!!! Sara points out we should use logic from management model....
  • Scott suggests changing from probability of conflict to potential for conflict
  • Nick suggested using e_DamCt, eDamDens and eDAMPcC (from the attribute table) to ask: if you already have lots of dams in close proximity, is conflict actually pretty low? Ask if mitigation is taking place, or lethal removal, etc.
  • Nick mentioned thinking through how these intermediates and outputs can be used to guide building of an adaptive beaver management plan
  • Bring Topology in (talk to @MattReimer) and look at Network Profiler and GNAT

Structured Data Capture Navigation - An Arc Explorer Mobile App for Truckee

@bangen wrote to @webernick79:

Hey Nick,

Attached below is a zip with the shapefiles for the Truckee. There is a polyline shapefile of the 300 m reaches in the Truckee. I added a 'ValReach' field for our workshop validation reaches (1 = validation reach; 0 = not a validation reach). I also included the 30 m and 100 m buffers for all the reaches in the basin.

I selected 43 reaches that are mostly within walking distance of the field station and were flagged by Kristen as areas worth checking out. This might increase depending on the feedback I get from Wally.

Let me know if something is missing, needs to be subsetted differently, etc.

Sara

Zip file:
https://drive.google.com/open?id=1TZGn9Zw5T4Qemw-GbXpMdVatSrqEOLmT

favicon.ico

Hey @philipbaileynar or @MattReimer. An annoying one for you. I've tried all sorts of tricks to get the favicon.ico to display properly. These were semi-useful:

But ultimately, none of them worked. Any suggestions? I haven't tried saving it as a PNG instead and changing from the <link rel="shortcut icon" type="image/x-icon" href="favicon.ico"> syntax to <link rel="shortcut icon" type="image/png" href="favicon.png">. I tried it on a few sites... no luck. I'm not sure if putting this HTML in the index.md is the best thing. Someone mentioned putting it in the <head> part of the document. Not sure how to do that with md?

Is there a way we can have parent pages in navigation sidebar tree?

@philipbaileynar
It is really nice that the page menu automatically populates simply based on file structure. I also really like that it builds its labels from the title: SOME PAGE NAME header in each file. Some really annoying things are:

  • You can't have a parent page (only a parent folder). This makes it really difficult to implement the concept of child pages of a page, and it means that if we implement breadcrumbs, they are only possible based on folder structure. I don't like this, and it creates a real problem for how we've built a lot of our websites in the past and for porting them over. Is it possible to add a new parameter in the frontmatter (maybe a parentof: FolderName) where, if specified, it would appear in the menu as both an expandable node and a viewable page (as opposed to only an expandable folder)? We'd obviously have to make sure we specify the correct FolderName of a folder in the same root folder as the current page making the call.
  • The sort order of the navigation tree is only alphabetical. This means that we can only sort pages by adding numbers to the title tag in the frontmatter (e.g. like I've done here). This is annoying if we don't want the numbers to show up. Is it not possible to add a sortorder tag with an integer argument that overrides alphabetical sorting?

Mainstem attribute missing error

It seems to me that the Mainstem attribute field is no longer being created since the 3.0.6 release. I get the following error when I run Braiden's Braid Handler tool:
image
notice no "is mainstem" field was created...
image

BRAT Summary Report

Looking into the output of the BRAT Summary Report tool, the attribute table fields from the dam point shapefile are being added to this tool's output. Because multiple dams are snapped to some segments, these carried-over point shapefile fields are not applicable and should not be included in the output. @banderson1618 @wally-mac @bangen

Let's make a BRAT data capture snapshot of dam activity for Sagehen

There is a ton of data in this Beier and Barrett (1989) paper from Kristen for Sagehen. We should turn this into an opportunity to test the concept of a 'desktop' data capture event (#31) and translate these data into point-based feature classes from which we can run subsequent analyses.

Let's use this information from Kristen too

Attached is a KMZ file with ideas for beaver validation locations near Sagehen. I identified 250-600 m reaches of stream that are within walking distance of the field station where we are staying. I would suggest we stay near the field station. The one exception is two reaches downstream of the highway; we can drive and park nearby if we want to visit those sites. There is a nice trail that parallels the river downstream. I put notes in the location names to indicate why these reaches might work well. Unfortunately, there are few good viewpoints from which to see 300 m reaches.
Also attached are some references to share with participants, if they are interested, about the history of beaver research at Sagehen.

https://usu.box.com/s/mkt5fy3t3s4rvspu2ku2ph8w54oh17at - KMZ file...
Summary of beaver research at Sagehen from Kristen: https://usu.box.com/s/rq4mgf2hg88ruxnsa1p5srxrngiqikf3

Add releases

Hey @jtgilbert
Can you make at least one release of the latest functional version of pyBRAT (v3.x?)? If there are other good benchmark v3.x releases, make multiple releases and just mark the most recent as the latest. See #4 for background on this, as well as this Trello card for suggestions. You'll need to package the necessary files into a zip file for the ArcPy Toolbox to work. We'll need to update the documentation for how to access this once @lhaycock moves content over from the old website.

Beaver Dam Data Capture

Based on the suggestion of @webernick79, @joewheaton and I decided on 5/10/18 that a BRAT data collector web map app would be a useful investment. As initially conceived, the app will allow users to collect dam location data in a web version of ArcGIS Online. The app will have a drop-down form that would allow the user to collect:

  • active dams
  • inactive dams
  • stock ponds (as training data for AI)
  • lodges
  • primary dams (optional)
  • secondary dams (optional)
  • how confident they are in their “call”, i.e. high, medium or low
  • the date of the imagery

This app would also allow users to collect additional information such as:
  • known conflict areas
  • beaver translocation areas
  • BDA locations
  • sensitive infrastructure

The thought here is that the app should be extended beyond just dam counts. The rationale for the app is that our current method of using Google Earth is fraught with flaws. It's clunky for users to collect data in Google Earth because of difficulties in assigning attributes, resulting in time-consuming data collection. It's also difficult for novice users to create KMZ files, and even more difficult for them to save the files, so there is a high potential to lose data.
The advantage of having an app is that it eliminates the above issues altogether. Nick, are you interested in giving the development of this app a go? @bangen please chime in if you have any suggestions.

weighted contents/folders

@philipbaileynar I am trying to add weights to some of the content in this pyBRAT repo. I have given Vision weight one, and it is at the top of the summary on the side. If I understood our conversation yesterday, I needed to rename my "summary.md" page in the Documentation folder to "index.md", and that should make the folder also a link, like the example ThingsA in the Template Docs. I am not sure if it is taking longer than normal to show these changes on http://brat.riverscapes.xyz/ or if I have done something incorrectly. Let me know if you can figure out what is going on.
Part 2 of my question: to assign a weight to a folder, do I then add the weight in the index file of the subfolders?
Let me know if my question is clear or if it would be easier to get on a GoToMeeting again.

Data Request: Unedited BRAT Valley Bottoms

Chris Murphy, Wetland/Riparian Ecologist with Idaho Department of Fish and Game, wants to use the unedited valley bottoms feeding the BRAT tool. We will revisit this when we complete the valley bottoms. @bangen I'm making you aware of this because it suggests that people are going to want to use the valley bottoms we produce, so the better we can make them with minimal edits, the better.

BRAT Table tool crashes when running Euclidean Distance on road crossings

The tool crashed while running the Euclidean Distance tool on the road crossings shapefile. I have re-run the Project Builder and the BRAT Table tool, which did not change the error message. I also restarted ArcGIS, which did not change the error message. The error message is pasted below. I'm currently looking into what went wrong and will update when I know more.

Traceback (most recent call last):

  File "<string>", line 298, in execute
  File "C:\Users\A02150284\Documents\ArcGIS\AddIns\Desktop10.6\pyBRAT\BRAT_table.py", line 133, in main
    ipc_attributes(out_network, road, railroad, canal, valley_bottom, buf_30m, buf_100m, landuse, scratch)
  File "C:\Users\A02150284\Documents\ArcGIS\AddIns\Desktop10.6\pyBRAT\BRAT_table.py", line 397, in ipc_attributes
    ed_roadx = EucDistance(roadx, "", 5) # cell size of 5 m
  File "c:\program files (x86)\arcgis\desktop10.6\arcpy\arcpy\sa\Functions.py", line 989, in EucDistance
    out_direction_raster)
  File "c:\program files (x86)\arcgis\desktop10.6\arcpy\arcpy\sa\Utils.py", line 53, in swapper
    result = wrapper(*args, **kwargs)
  File "c:\program files (x86)\arcgis\desktop10.6\arcpy\arcpy\sa\Functions.py", line 983, in Wrapper
    out_direction_raster)
  File "c:\program files (x86)\arcgis\desktop10.6\arcpy\arcpy\geoprocessing\_base.py", line 510, in <lambda>
    return lambda *args: val(*gp_fixargs(args, True))
ExecuteError: ERROR 999999: Error executing function.
Failed to execute (EucDistance). 

Management Model Improvements and Ideas

@bangen please post what your ideas and such are for improvements and refinements to the Management model here.

Discussion Points:

  • I would like to see the management model expanded to break out 'restoration/conservation' zones into separate restoration and conservation zones, based on the eDAMPcC attributes. In other words, if beaver are already there and conflict is low, you want to keep encouraging them.
  • Also, we should look at 1 - eDAMPcC attributes as an indication of additional capacity!
  • Sara @bangen points out that land use is not really being used meaningfully.
  • Can we do a better job of providing an input of where 'incised (stage 1) channels' are?
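One way to sketch the proposed restoration/conservation split, assuming per-reach attributes for modeled capacity, surveyed dam density, and a conflict score (the field names and thresholds here are placeholders, not the shipped model):

```python
def management_zone(capacity, dam_density, conflict):
    """Classify a reach into a draft management zone.

    capacity    -- modeled dam capacity (dams/km), e.g. an oCC_EX-style output
    dam_density -- surveyed dam density (dams/km) from a census
    conflict    -- conflict potential score in [0, 1]

    Sketch of the proposed logic: reaches where beaver are already
    present and conflict is low are conservation zones; unoccupied
    low-conflict reaches with spare capacity are restoration zones.
    """
    # the "1 - eDAMPcC" idea: capacity not yet used by existing dams
    additional_capacity = max(capacity - dam_density, 0)
    if conflict >= 0.5:
        return "conflict management"
    if dam_density > 0:
        return "conservation"
    if additional_capacity > 0:
        return "restoration"
    return "unsuitable"
```

The 0.5 conflict threshold is arbitrary here; the point is only that occupancy and conflict, not just capacity, drive the zone assignment.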

Project Structure Revision

Mic and I have been assigned with figuring out what the folder structure of pyBRAT should look like, including showing intermediary data.

My rough draft for folder structure is laid out below, with bullet points.

  • Inputs
    • Conflict Layers
      • Valley Bottom
      • Roads
      • Railroads
      • Canals
      • Land Use Raster
    • Network
      • Stream Network
    • Vegetation
      • Existing Vegetation
      • Historic Vegetation
    • Topography
      • DEM
      • Slope
      • Flow Accumulation
      • Hillshade
  • Intermediates
    • Buffers
      • 30 m Buffer
      • 100 m Buffer
    • BRAT Table output
    • BRAT Braid Handler output
    • iHyd Streamflow Attributes output
    • BRAT Vegetation Dam Capacity Model output
  • Output
    • BRAT Combined Dam Capacity Model output
    • BRAT Conflict Potential output
    • BRAT Conservation and Restoration Model output
    • Summary Report output

We could potentially put the Slope, Flow Accumulation, and Hillshade into the Intermediates folder instead, since we calculate them in the tool. Either option seems reasonable to me.
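If we want the tool to lay this structure down automatically, the outline above can be scripted; a minimal sketch (the exact subfolder names are still under discussion, so treat these as placeholders):

```python
import os

# Draft pyBRAT project skeleton, mirroring the outline above.
FOLDERS = [
    "Inputs/Conflict_Layers",
    "Inputs/Network",
    "Inputs/Vegetation",
    "Inputs/Topography",
    "Intermediates/Buffers",
    "Output",
]

def build_project(root):
    """Create the draft folder structure under `root` (idempotent)."""
    for folder in FOLDERS:
        os.makedirs(os.path.join(root, folder), exist_ok=True)

build_project("DemoProject")
```

Keeping the list in one constant would make it easy to move Slope/Flow Accumulation/Hillshade between Inputs and Intermediates later without touching the rest of the tool.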

I'll be having more conversations with Mic over email, since (to my knowledge) he does not have a GitHub account, but I'll do my best to keep this page updated with what we're working on, and I'll link my commits to it.

Best Practice for Making Releases of Software to Users (NOT PROGRAMMERS)

@MattReimer and @philipbaileynar, we need your professional guidance here. Git seems great at keeping track of all our changes to code and documenting that transparently to serve programmers and developers who may use the code. I've noticed the concept of a 'release', which I asked about in #2. My question is: what is the best way to package up just the file(s) that constitute a 'download' of the software for an end user who doesn't know anything about Git and certainly doesn't need all the source code? Do 'releases' just make a snapshot of the whole repo, or do they allow the release to include just the files that the user needs (e.g. an *.addin, a *.zip, or a *.exe if there is an installer)? Do you have some examples you can point us to, and some suggestions of best practice, or at least options? Is it as simple as packaging up what you want and organizing it in a separate 'releases' folder? Or do we have to put the files on an external site (e.g. in our AWS S3 buckets set up as file servers)?

Thanks for help!
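For what it's worth, GitHub releases always attach an automatic source snapshot, but they also let you upload arbitrary binary assets (a .zip or .esriaddin) alongside it, so end users only ever click one file. Building that asset can be scripted; a minimal sketch, where the file list is hypothetical (the real list would be the .addin plus whatever it needs):

```python
import os
import zipfile

# Hypothetical list of the files an end user actually needs.
USER_FILES = ["pyBRAT.esriaddin", "README.md"]

def make_release_zip(version, files, out_dir="releases"):
    """Bundle only the end-user files into releases/pyBRAT-<version>.zip."""
    os.makedirs(out_dir, exist_ok=True)
    archive = os.path.join(out_dir, "pyBRAT-%s.zip" % version)
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # flatten paths so the user unzips to a single folder
            zf.write(f, arcname=os.path.basename(f))
    return archive
```

The resulting zip can then be uploaded manually on the release page, so no external S3 hosting is strictly required.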

Issues with euclidean distance

For documentation purposes I had been working with ArcGIS 10.5, and there is an issue with that version of Arc that corrupts the Euclidean distance for railroads in my case, falsely reporting that the distance from a railroad crossing was 0 when the reach was nowhere near a railroad. I installed ArcGIS 10.6, which fixed that issue, but unfortunately 10.6 has a different issue earlier in the process of gathering the LANDFIRE inputs: when you clip the raster, it does not transfer the attribute table, so I performed that step on another machine with a different version of Arc.

Implement a new Dam Placement Algorithm

We discussed moving beyond just the dam-density output of the FIS; we could turn modifications of Konrad's dam placement algorithm on their head to:

  1. Place a primary dam (starting similarly in rank order based on vegetation)...
  2. Then, sample from an empirical distribution of dam heights (preferably from low-gradient systems).
  3. Then, use his BDSWEA to backwater from that dam and figure out how far upstream the pond would extend (also take the volume and area outputs).
  4. Then, using a new empirical distribution of dam complex counts in low-gradient systems, take a random sample of how many dams to include.
  5. Then, if there are more dams, look upstream and downstream and decide whether beaver would build a dam there (based on the vegetation outputs in those areas/reaches).
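The steps above can be sketched as follows; everything here is a toy stand-in (the distributions, the reach attributes, and especially the backwater step, which would really come from BDSWEA, are placeholders):

```python
import random

def place_dam_complex(reaches, dam_heights, complex_sizes, seed=None):
    """Toy version of the proposed placement algorithm.

    reaches       -- list of dicts with 'id' and 'veg_capacity' (dams/km)
    dam_heights   -- empirical sample of dam heights (m)
    complex_sizes -- empirical sample of dams-per-complex counts
    """
    rng = random.Random(seed)
    # 1. primary dam at the reach with the best vegetation rank
    primary = max(reaches, key=lambda r: r["veg_capacity"])
    # 2. sample a dam height from the empirical distribution
    height = rng.choice(dam_heights)
    # 3. stand-in for BDSWEA: pretend backwater length scales with height
    backwater_m = height * 100.0  # placeholder, not a hydraulic model
    # 4. sample how many dams the complex holds
    n_dams = rng.choice(complex_sizes)
    # 5. fill out the complex with the next-best vegetated reaches
    ranked = sorted(reaches, key=lambda r: r["veg_capacity"], reverse=True)
    members = [r["id"] for r in ranked[:n_dams]]
    return {"primary": primary["id"], "height_m": height,
            "backwater_m": backwater_m, "dams": members}
```

Running this many times per reach (Monte Carlo style) would give the dam-count and inundation-area distributions behind the proposed new metrics.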

In the end, you'd get locations of dams (points), plus inundated areas and volumes, and we could develop some new metrics along the lines of:

  • Dam count and density backed out from this algorithm - Added to network line as new attribute
  • Dam inundation areas (polygon) - Added to network line as new attribute
  • Dam surface volume (upper and lower estimates too) - as new attribute
  • Ratio of water surface storage to valley bottom area.
  • Potential inundated area divided by reach length (i.e. a normalized water depth storage per kilometer)

Just some thoughts based on conversations with @bangen, @wally-mac, Scott Shahveridan, Nicole Stohl and Nick Bouwes. Please add to discussion...

New field for summary report idea

Hi Wally and Joe,

Further investigation of the dam points for the Rock Creek area is given below as a layer package, to better illustrate the under-predictions of the BRAT model in this area. Here are some statistics:

Of the total 34 stream segments with validation dam counts, 28 did not exceed the capacity estimates, indicating that the model effectively segregates the factors controlling beaver dam occurrence and density 82% of the time. (There are ~8,351 total segments in the network.)
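The comparison behind those numbers is simple to script; a minimal sketch with made-up per-segment counts, illustrating how a ~82% figure falls out when 28 of 34 surveyed segments are within the predicted capacity:

```python
def percent_within_capacity(observed, predicted):
    """Share (%) of segments whose surveyed dam count does not exceed
    the modeled capacity; both lists are per-segment, same order."""
    within = sum(1 for obs, cap in zip(observed, predicted) if obs <= cap)
    return 100.0 * within / len(observed)

# Made-up example: 28 of 34 validation segments within capacity.
observed = [2] * 28 + [9] * 6
predicted = [5] * 34
print(round(percent_within_capacity(observed, predicted), 1))  # → 82.4
```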

The following are some more statistics from when I dug into the reasons why these under-predictions occurred.


The linked layer package describes this further, and more information can be found there. From examining it, I found that the LANDFIRE classes on which the under-predicted reaches occur are primarily sagebrush and agricultural land, with "VEG_CODE" values of 2 and 1 respectively. There is also one reach with a dirt road crossing it, which is throwing it into conflict. After speaking with Joe today, the origin of the dam census, in terms of duration and the precise year(s) collected, is unknown.
Another idea is to base the calibration and quality assessment of the model on censused dam counts.
I hope this provides some useful information, and I look forward to your input.
@wally-mac @bangen @joewheaton @banderson1618
