nshm-cous-2014's People

Contributors

bclayton-usgs, emartinez-usgs, pmpowers-usgs

nshm-cous-2014's Issues

Magnitude scaling relation consistency

In the 2014 model, a Somerville MSR was used for point-source distance corrections but WC94 continued to be used for fixed-strike RLME grid sources. Moreover, several magnitude scaling relations are used to broaden the epistemic uncertainty of Cascadia unsegmented rupture magnitudes, but Youngs/Geomatrix is then used to compute rupture length for floating (segmented) ruptures.

Moved from nshmp-haz-fortran Issue 19.

Correct Seattle Fault Zone (SFZ) rates and magnitudes

Reference issue: nshmp-haz-fortran #32

Table 1. The following table summarizes the SFZ GR parameterization spanning 3 model updates: 2002, 2008, and 2014.

| Strand | M (2002) | M (2008) | L (2008), km | a (2008) | M (2014) | L (2014), km | a (2014) | M (WC94L) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| North | 7.2 | 7.2 | 71 | 1.15 | 7.23 | 71 | 1.15 | 7.23 |
| Middle | 7.1¹ | 7.2¹ | 64 | 1.11 | 7.2 | 64 | 1.15 | 7.18 |
| South | 7.2¹ | 7.1¹ | 56 | 1.16 | 7.2 | 56 | 1.15 | 7.10 |

¹ The 2002 OFR describes the magnitudes as presented, but this is inconsistent with their lengths and was likely a typo.

Table 2. Recurrence Intervals (years)

| Year | GR | CH | Total (50% CH, 50% GR) |
| --- | --- | --- | --- |
| 2008 | 977 | 5000 | 1629 |
| 2014 | 982 | 5000 | 1636 |
| 2014x² | | | 1408 |

² This is the recurrence interval for the incorrectly updated 2014 model that needs to be reverted.

Background

  1. For the 2014 model (in the USGS fortran codes), there was a lot of back and forth between members of the team that involved both the periodic automatic generation and manual editing of hazard input files. This introduced a variety of errors and inconsistencies that were corrected in late 2015, e.g. #28, #30, #31.
  2. In computing fault slip rates, the geodetic modelers were to be supplied with only the primary traces for faults. That is, for faults with alternate traces, for example the Southern Whidbey Island Fault, only the primary 'middle' trace was to be used when inverting for slip rates. This detail was agreed upon internally but not recorded in the 2014 OFR.
  3. The primary description of the Seattle Fault Zone is on p. 79 of the 2014 OFR. Although the description includes updated references relative to the 2002 and 2008 models, it states: "the recurrence parameters are the same as those used in the 2008 NSHMP maps". Only on p. 204, in the discussion of hazard changes in the WUS, is it mentioned that "We have not modified this fault using the geodetic model." Although recurrence rates for the SFZ are difficult to constrain, this treatment is inconsistent with the treatment of WUS faults generally, and should be better justified and documented.
  4. The implementation approach that was used to 'ignore' the geodetic models was to swap in the geologic rates so as to not have to change the weights on the geologic model. However, in the 2014 model, geologic magnitudes and rates for the SFZ changed, without apparent justification, further confusing things; more on this below.

Introduced Error

In making the corrections in item 1 above, item 3 was overlooked and therefore geodetic-based slip rates were incorrectly added to the model. This issue serves as a marker that this change needs to be reverted.

GR Parameter Changes (2008 to 2014)

In 2002, Art Frankel revised the SFZ to be modeled as three distinct strands. The northern strand was modeled as a characteristic M7.2 rupture with a 5000 yr recurrence receiving 50% weight, and all three strands were used to model GR behavior (also with 50% weight), with the total rate across all three strands approximating a M≥6.5 recurrence interval of 1100 yr. The actual MFDs for the 2008 GR branch give a recurrence interval closer to 1000 years (977 years in Table 2). The total return period considering both GR and CH branches is around 1630 years.
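
As a sanity check on Table 2, the totals follow from combining the branch rates (not the branch intervals): for 2008, 0.5 × (1/977 yr) + 0.5 × (1/5000 yr) ≈ 6.1 × 10⁻⁴/yr, or a total recurrence of roughly 1630 years, consistent with the 1629 years tabulated (the small difference presumably reflects rounding of the branch intervals).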

In 2008, the Seattle model was not modified in any way, and there is no discussion of it in the corresponding OFR. For some reason, however, despite citing consistency with 2008 in the 2014 OFR, both the geologic rates and magnitudes changed (the values in Table 1 that differ between 2008 and 2014). While the rate changes have little effect on recurrence interval (1636 vs 1629 years), the magnitude changes certainly will affect hazard. Although the traces were modified slightly in 2014 to correct GIS projection errors, they only moved on the scale of meters and the lengths remained the same. Table 1 also shows the Wells & Coppersmith (1994) length-based magnitudes to two decimal places. In the case of the northern strand, the magnitude change reflects an increase in precision consistent with the length of its trace. If a precision change was to be made, however, why was it not also applied to the middle and southern strands? In any case, the M7.2 for the southern strand is inconsistent with the WC94L value of M7.1 and is likely in error.
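
As a quick check of the M (WC94L) column, the sketch below assumes Table 1 uses the Wells & Coppersmith (1994) all-slip-type surface-rupture-length regression, M = 5.08 + 1.16·log10(L) with L in km; it reproduces the tabulated values to within about 0.01 magnitude units.

```java
// Rough check of Table 1 M(WC94L); assumes the WC94 all-slip-type
// surface-rupture-length regression M = 5.08 + 1.16 * log10(L[km]).
public class Wc94Check {
  public static void main(String[] args) {
    double[] lengthsKm = { 71.0, 64.0, 56.0 }; // north, middle, south strands
    for (double L : lengthsKm) {
      double m = 5.08 + 1.16 * Math.log10(L);
      System.out.printf("L = %2.0f km -> M = %.2f%n", L, m); // 7.23, 7.18, 7.11
    }
  }
}
```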

The 2014 model also added CH branches with nominal rates of 1e-7 for the middle and southern strands. This has little effect on M≥6.5 rates and hazard, and provides consistency between GR and CH representations.

Pending further comment, the 2014 SFZ model will be reverted to match that used in 2008 while preserving the 1e-7 characteristic branches.

See also: #17

Develop timeline and process document for 2017 model

Placeholder for developing a timeline and process for the 2017 NSHM.

Everyone in the group should feel free to add issues and questions relating to the 2017 maps to this repository. Once a repository has been created for the 2017 maps, we'll migrate the issues there.

cc: @usgs/nshmp

Notes:
  • Filter low-probability sources etc. out of the fault model prior to hand-off to deformation modelers.
  • Lock fault model subsequent to hand-off.
  • Modelers must record changes to fault geometry necessitated by modeling codes.

Review slab mMax branch weights

The text of the 2014 OFR and Figure 37 both indicate the two slab mMax branches should get 50/50 weight. In the Fortran codes and this model, the implemented weights are 90% (mMax 7.5) / 10% (mMax 8.0).

(Attached image: fig37-2014ofr, Figure 37 from the 2014 OFR.)

Seattle fault weights and rates

The Seattle fault is modeled as having three possible strands in the geologic model; only the northern strand is included with the geodetic source models (see nshmp-haz-fortran #32).

In the geologic model, the characteristic rate of the northern strand is assigned the a-priori 5000 yr rate, and the middle and southern strands are assigned a negligible 10-million-year rate. The GR rates for all three strands, however, are equal. Why?
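
For scale, the a-priori characteristic rate on the northern strand is 1/5000 yr = 2 × 10⁻⁴/yr, while the 10-million-year rate on the middle and southern strands corresponds to the nominal 1e-7/yr characteristic branches noted above, more than three orders of magnitude smaller.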

Remove unsupported multi-point GMMs

To produce sets of hazard curves for multiple periods and sites, we need to remove Idriss from the NGAW-2 lineup, as well as AB03 from slab and interface. New release required.

Revisit M=6.5 epi/alea uncertainty cutoff

Revisit brange.3dip.gr.xml, which throws a warning that the lower epi branch has no mags, but does not qualify for M<6.5 suppression of all epistemic uncertainty:
dMag = 0.14
mMin = 6.5 --> 6.57 (min bin center)
mMax = 6.78 --> 6.71 (max bin center)

mMax - 0.2 = 6.51
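
For reference, a minimal sketch of the bin-center arithmetic above (illustrative only, not the nshmp-haz implementation; epiDelta is the assumed lowest epistemic magnitude offset):

```java
// Illustrative bin-center arithmetic for brange.3dip.gr.xml (not nshmp-haz source).
public class MagBinCheck {
  public static void main(String[] args) {
    double dMag = 0.14;
    double mMin = 6.5;   // lower edge of first bin
    double mMax = 6.78;  // upper edge of last bin
    double minCenter = mMin + dMag / 2.0; // 6.57
    double maxCenter = mMax - dMag / 2.0; // 6.71
    double epiDelta = 0.2;                // assumed lowest epistemic branch offset
    // 6.71 - 0.2 = 6.51, still above 6.5, so the source does not qualify
    // for the M < 6.5 suppression of all epistemic uncertainty.
    System.out.printf("bin centers %.2f..%.2f; maxCenter - epiDelta = %.2f%n",
        minCenter, maxCenter, maxCenter - epiDelta);
  }
}
```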

Related question: When bins in gridded sources are capped (to avoid double counting) are they capped at a magnitude that considers uncertainty (M - epiDelta)?

Review UCERF3 zTop implementation

The current implementation considers the aseismicity factor as a downdip reduction in width. This reduction was imposed in the UCERF3 inversion to reduce the moment release possible on creeping fault sections, but it is inconsistent with the NGAW2 definition of zTop (or zTor). Review should consider the fact that almost all smaller-M ruptures will now come to the surface (zTop = 0.0); there is currently no capacity, with the way the UCERF3 fault system is represented, to model down-dip floating ruptures as are modeled elsewhere in the WUS, and previously in CA. Also note that zTop for a rupture is the area-weighted average of the aseis-reduced zTops of all participating fault sections.
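
For illustration, a minimal sketch of the area-weighted zTop averaging described above (not the actual UCERF3/nshmp-haz code; the section areas and aseis-reduced zTops are assumed inputs):

```java
// Illustrative area-weighted rupture zTop; inputs are per-section areas (km^2)
// and aseismicity-reduced top-of-rupture depths (km) of participating sections.
public class RuptureZTop {

  static double zTop(double[] sectionArea, double[] sectionZTop) {
    double areaSum = 0.0;
    double weightedSum = 0.0;
    for (int i = 0; i < sectionArea.length; i++) {
      areaSum += sectionArea[i];
      weightedSum += sectionArea[i] * sectionZTop[i];
    }
    return weightedSum / areaSum;
  }

  public static void main(String[] args) {
    // Two hypothetical sections: one reaching the surface, one with its
    // zTop lowered by the aseismicity (width) reduction.
    double z = zTop(new double[] { 120.0, 80.0 }, new double[] { 0.0, 2.5 });
    System.out.printf("rupture zTop = %.2f km%n", z); // 1.00 km
  }
}
```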

See also the white paper and email exchanges with @njgregor on this topic.

Caballo Fault NM-TX

There are two Caballo faults, one in NM and one in TX. The geodetic modelers did not consider (were not supplied) the Texas faults as a class in 2014. However, it seems the TX Caballo fault variant, with longitudes in the -105° range, was supplied in lieu of the NM variant. The Texas faults without geodetic slip rates were generally corrected to receive a weight of 1; however, the Caballo fault model is overweighted in TX (the geologic branch weights assume there are no geodetic branches, but there are) and underweighted in NM (the geologic branch weights assume there are geodetic branches, but there aren't any).
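
For illustration with hypothetical weights (the actual 2014 branch weights may differ): if a fault with geodetic branches carries a geologic weight of 0.8 plus 0.1 each for BIRD and ZENG, while a fault without them carries a geologic weight of 1.0, then the TX Caballo branches sum to 1.0 + 0.1 + 0.1 = 1.2 (overweighted) and the NM Caballo branches sum to only 0.8 (underweighted).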

In reformatting 2014 for 2018 we're leaving everything as is.

add gmmUncertainty parameter to Western US model config.json file

Update Western US/config.json file; add gmmUncertainty parameter and set to 'true' so that this is the default:

{
  "model": {
    "name": "NSHMP Western U.S. 2014",
    "surfaceSpacing": 1.0,
    "ruptureFloating": "NSHM",
    "ruptureVariability": false,
    "pointSourceType": "FINITE",
    "areaGridScaling": "SCALED_SMALL"
  },
  "curve": {
    "exceedanceModel": "TRUNCATION_UPPER_ONLY",
    "truncationLevel": 3.0,
    "gmmUncertainty": true,
    "imts": ["PGA", "SA0P2", "SA1P0"]
  }
}

Update fault database and export process

The postgres tables used to generate input files for Fortran (via fltrate.2.f) should be modified to:

  • Be more future-proof
  • Provide consistent names
  • Use nshmp-haz enum equivalents as IDs for fields that represent logic tree branches (e.g., ZENG, BIRD, GEO for slip rate; see the sketch below)

See the "Fault Database Plan" email thread for more details.

Also needed:

  • Export utilities to read postgres db or GIS and create versionable XML.
  • Alternate fault source model that includes slip rate from which MFDs can be derived.

Segregate sources and supply inline comments for exceptions

The 2014 update saw the unification of WUS sources in input files based on whether they are geologic or geodetic. This created much confusion with respect to GR/CH weight ratios, which vary by region (50/50 or 33/67). It also didn't solve the problem of having the same fault defined in multiple files (GR vs CH).

The preferred approach would be to have single files per (and named by) region or state (reverting to 2008 practice) wherein a fault source is defined once and all associated MFDs are listed together. This would provide a complete picture of any particular fault in a single place. Additional MFD attributes would be used to identify the source of the slip rate used to derive rupture rates.

Any exceptional faults would also be carved out into their own file with inline explanations of how their numbers were derived. For example, there might be independent files for all fixed magnitude faults (e.g. Seattle, Dixie Valley), faults designated as having a probability of occurrence <1.0 (PNW offshore faults), and faults with alternate slip rate branches (e.g. Saddle Mtn.; for 2014 this would have 8 MFDs: 2 Bird, 2 Zeng, and 4 geologic).

cc: @mpetersen-usgs @haller-usgs

Review UCERF3 rupture parameter averaging

For example: rake, zTop, and width

Currently, rJB and rRup are calculated from the fault section closest to the site, but other parameters are area-weight averaged. We should consider the implications of changing those parameters to a closest-fault-section approach as well.
