
ccpp-physics's Introduction

CCPP Physics

The Common Community Physics Package (CCPP) is designed to facilitate the implementation of physics innovations in state-of-the-art atmospheric models, the use of various models to develop physics, and the acceleration of transition of physics innovations to operational NOAA models.

Please see more information about the CCPP at the locations below.

For the use of CCPP with its Single Column Model, see the Single Column Model User's Guide.

For the use of CCPP with NOAA's Unified Forecast System (UFS), see the UFS Medium-Range Application User's Guide, the UFS Short-Range Application User's Guide and the UFS Weather Model User's Guide.

Questions can be directed to the CCPP User Support Forum or posted in the CCPP Physics GitHub discussions or CCPP Framework GitHub discussions. When using the CCPP with NOAA's UFS, questions can be posted in the UFS Weather Model section of the UFS Forum.

Corresponding CCPP Standard Names dictionary

This revision of the CCPP physics library is compliant with version 0.1.1 of the CCPP Standard Names dictionary.

Licensing

The Apache license will be in effect unless superseded by an existing license in specific files.

ccpp-physics's People

Contributors

anningcheng-noaa, barlage, binliu-noaa, chunxizhang-noaa, climbfuji, domheinzeller, dusanjovic-noaa, dustinswales, ericaligo-noaa, grantfirl, gthompsnwrf, haiqinli, helinwei-noaa, jeffbeck-noaa, joeolson42, jongilhan66, lisa-bengtsson, mdtoynoaa, microted, mzhangw, pjpegion, qingfu-liu, rmontuoro, samueltrahannoaa, smoorthi-emc, tanyasmirnova, uturuncoglu, xiaqiongzhou-noaa, xiasun-atmos, yihuawu-noaa


ccpp-physics's Issues

add three parameters to namelist for NSSL MP

Description

To support the RRFS multiphysics ensemble, three NSSL MP parameters are being added to input.nml: the rain shape parameter (nssl_alphar), the graupel-droplet collection efficiency (nssl_ehw0_in), and the hail-droplet collection efficiency (nssl_ehlw0_in).
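For illustration, a hedged sketch of how the new entries might look in input.nml, assuming they join the existing NSSL options in the &gfs_physics_nml block; the values shown are purely illustrative, not recommended defaults:

    &gfs_physics_nml
      nssl_alphar   = 0.0    ! rain shape (spectral) parameter
      nssl_ehw0_in  = 0.9    ! graupel-droplet collection efficiency
      nssl_ehlw0_in = 0.9    ! hail-droplet collection efficiency
    /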

Solution

A related PR will be submitted shortly.

Convective reflectivity

Description

Convective reflectivity is estimated and combined with the reflectivity computed by the Thompson and NSSL microphysics schemes, for both deep/shallow GF convection and SAS convection.

Solution

@RuiyuSun added the calculations for SAS and Thompson. I moved the calculations over to GFS_MP_generic_post.F90 in order for the calculations to be available for multiple applications. The 3D array, refl_10cm, will contain both a convective and microphysics component.

The attachment illustrates the impact of these changes on reflectivity in RRFS with GF convection. Note that as currently coded in ufs-weather-model, the convective reflectivity is treated as constant from the surface up to the model top, which is not physical. The new code instead uses the convective precipitation rate to estimate a constant reflectivity up to the freezing level, above which it decays with height up to cloud top, using the convective cloud-top arrays from both the GF and SAS schemes.

https://docs.google.com/presentation/d/1a0EpXKLxu-1Rz5yb3pQ-ePR3DJcrnCy05Ogg1ZItCCI/edit?usp=sharing
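For context, combining a microphysics reflectivity with a convective estimate is usually done in linear Z space rather than directly in dBZ. A minimal sketch of that operation (illustrative only, not the actual GFS_MP_generic_post.F90 code):

    ! combine two reflectivities (dBZ) by summing their linear-space values
    elemental function combine_refl_dbz(refl_mp, refl_conv) result(refl_tot)
      implicit none
      real, intent(in) :: refl_mp    ! microphysics reflectivity (dBZ)
      real, intent(in) :: refl_conv  ! estimated convective reflectivity (dBZ)
      real :: refl_tot               ! combined reflectivity (dBZ)
      refl_tot = 10.0 * log10(10.0**(0.1*refl_mp) + 10.0**(0.1*refl_conv))
    end function combine_refl_dbz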

Parameterize real variable kind where necessary

Description

@dongli submitted #66 in order to avoid relying on default real kind compiler options, which can be a problem on some hosts. A better solution is to parameterize the real kind where variables are declared within CCPP subroutines. Some modifications to subroutines/functions may need to be made to facilitate this approach; with care, this should be possible without affecting regression test baselines. This approach should also be consistent with (and help with) running the physics at different precisions.

Solution

Add real(kind_phys) as appropriate within schemes. This should be done alongside understanding which variables need to retain double precision and declaring them explicitly as such (e.g. kind_dbl instead of kind_phys).
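A minimal sketch of the proposed pattern (the machine module providing kind_phys is what ccpp-physics schemes already use; the double-precision kind and the computation below are illustrative):

    subroutine example_scheme_run(nx, t, errmsg, errflg)
      use machine, only: kind_phys
      implicit none
      integer,              intent(in)    :: nx
      real(kind=kind_phys), intent(inout) :: t(nx)
      character(len=*),     intent(out)   :: errmsg
      integer,              intent(out)   :: errflg
      ! a work variable that must keep double precision regardless of kind_phys
      integer, parameter :: kind_dbl = selected_real_kind(15, 307)
      real(kind=kind_dbl) :: accum
      integer :: i
      errmsg = ''
      errflg = 0
      accum = 0.0_kind_dbl
      do i = 1, nx
        accum = accum + real(t(i), kind_dbl)   ! accumulate in double precision
      end do
      t = t - real(accum / nx, kind_phys)      ! e.g. remove the column mean
    end subroutine example_scheme_run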

Alternatives (optional)

Continue to rely on mixed parameterized real kind and default real kind behavior controlled by compiler flags.

Related to (optional)

#66

module_sf_ruclsm imprecision breaks gfortran in debug mode for RRFS tests

Description

module_sf_ruclsm contains comparisons to exactly zero, which break the code when numbers are very close to 0, such as 1e-322. This happens with the gfortran compiler when compiled with -DDEBUG=ON, since the option to truncate subnormal numbers is turned off.
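A minimal sketch of the usual remedy (names and threshold are illustrative, not the actual module_sf_ruclsm change): treat anything below a small threshold as zero rather than comparing against 0.0 exactly, so subnormal values such as 1e-322 cannot slip through.

    elemental function safe_ratio(numer, denom) result(r)
      implicit none
      real, intent(in) :: numer, denom
      real :: r
      real, parameter :: eps = 1.0e-20   ! values below this are treated as zero
      if (abs(denom) > eps) then
        r = numer / denom
      else
        r = 0.0
      end if
    end function safe_ratio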

Steps to Reproduce

  1. Fix bugs in the fv_regional_bc.F90 that prevent RRFS from running in DEBUG=ON mode
  2. Run one of the RRFS tests in DEBUG=ON mode with the gnu compiler on Hera
  3. View the resulting core dumps using gdb
  4. Sadness

Additional Context

The model will fail much earlier, due to known bugs. I'm about to submit a PR and some issues to fix that.

32-bit physics crashes in aerinterp due to type mismatch in NetCDF calls.

Description

The aerinterp routines use real*4 arrays to read real*8 data when compiled with 32-bit physics. This causes a crash because the NetCDF Fortran 77 interface does not know the datatype of the array it is given.
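A hedged sketch of the sort of fix implied here (file, variable, and routine names are illustrative): read through the netCDF Fortran 90 generic interface into a buffer whose kind matches the data on disk, then convert once to the physics kind.

    subroutine read_aer_field(fname, vname, nx, field, ierr)
      use netcdf
      use machine, only: kind_phys
      implicit none
      character(len=*),     intent(in)  :: fname, vname
      integer,              intent(in)  :: nx
      real(kind=kind_phys), intent(out) :: field(nx)
      integer,              intent(out) :: ierr
      integer      :: ncid, varid, cstat
      real(kind=8) :: buf(nx)              ! matches the real*8 data in the file
      ierr = nf90_open(fname, nf90_nowrite, ncid)
      if (ierr /= nf90_noerr) return
      ierr = nf90_inq_varid(ncid, vname, varid)
      if (ierr == nf90_noerr) ierr = nf90_get_var(ncid, varid, buf)
      if (ierr == nf90_noerr) field = real(buf, kind_phys)   ! single explicit conversion
      cstat = nf90_close(ncid)
      if (ierr == nf90_noerr) ierr = cstat
    end subroutine read_aer_field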

Steps to Reproduce

  1. Create a 32-bit physics version of the RRFS 13km conus ufs-weather-model regression tests.
  2. Run it and you'll see the crash.
  3. Run the gnu version of the test through valgrind, and you'll see where and why it crashes.

Additional Context

Tested with hera.gnu and hera.intel using the FV3_HRRR suite in the UFS regression tests. Must compile with 32-bit physics to trigger the error.

Output

N/A

binary/unary operator precedence error on Cray compiler

This issue was reported by @michalakes

Description

There are violations of the Fortran standard that prevent module_bl_mynn.F90 and perhaps other components of CCPP physics from compiling with the Cray compiler on Narwhal (the DoD HPC system we’re using).

Gnu generates warning messages but compiles anyway. Cray just flat out generates an error:

ftn-855 ftn: ERROR MODULE_BL_MYNN, File = ccpp/ccpp-physics/physics/module_bl_mynn.F90, Line = 239, Column = 8

The compiler has detected errors in module "MODULE_BL_MYNN". No module information file will be created for this module.

ftn-1725 ftn: ERROR MYNN_MIX_CHEM, File = /ccpp/ccpp-physics/physics/module_bl_mynn.F90, Line = 5481, Column = 26

Unexpected syntax while parsing the assignment statement : "operand" was expected but found "-".

ftn-1725 ftn: ERROR PHIM, File = /ccpp/ccpp-physics/physics/module_bl_mynn.F90, Line = 7675, Column = 45

Unexpected syntax while parsing the assignment statement : "operand" was expected but found "-".

ftn-1725 ftn: ERROR PHIH, File = ccpp/ccpp-physics/physics/module_bl_mynn.F90, Line = 7726, Column = 45

Unexpected syntax while parsing the assignment statement : "operand" was expected but found "-".

In each of the referenced lines, there is an instance of a binary operator (*) followed immediately by a unary operator (-); for example,

5474 DO ic = 1,nchem

5475 k=kts

5476

5477 a(k)= -dtz(k)*khdz(k)*rhoinv(k)

5478 b(k)=1.+dtz(k)*(khdz(k+1)+khdz(k))*rhoinv(k) - 0.5*dtz(k)*rhoinv(k)*s_aw(k+1)

5479 c(k)= -dtz(k)*khdz(k+1)*rhoinv(k) - 0.5*dtz(k)*rhoinv(k)*s_aw(k+1)

5480 d(k)=chem1(k,ic) & !dtz(k)*flt !neglecting surface sources

5481 & + dtz(k) * -vd1(ic)*chem1(1,ic) &

5482 & - dtz(k)*rhoinv(k)*s_awchem(k+1,ic)

The Fortran standard explicitly (and, I would say, arbitrarily) disallows consecutive operators and requires the order of operations be made explicit with parentheses:

5481 & + dtz(k) * -(vd1(ic)*chem1(1,ic)) &

I was able to find a reference:

“Fortran doesn’t allow consecutive operators! (Many compilers, Intel Fortran for example, will let you do this as an extension, but it’s non-standard). To conform to the standard you would have to write this [using explicit parentheses].”

https://stevelionel.com/drfortran/2021/04/03/doctor-fortran-in-order-order/

Steps to Reproduce

Compile module_bl_mynn.F90 with Cray compiler. Note that the GNU compiler issues warnings for this same issue.

RUC LSM - adding leaf area index and other surface parameters to the history

Description

Output requested by stakeholders, such as leaf area index and other RUC LSM surface parameters, is missing from the history files.

Solution

Add the missing variables to the history files.


Related to (optional)

PR #116

Update geopotential height between cumulus scheme and microphysics scheme

Description

The geopotential height is updated after radiation+PBL+GWD+O3+H2O are called, which makes it consistent with the updates of air temperature and moisture. However, the geopotential height was not updated between the cumulus and microphysics schemes after the updates of temperature and moisture from the cumulus scheme. This could impact all CCPP suites with a cumulus scheme turned on. A preliminary test for C96 control_p8 showed that the impacts were small. The precipitation from one time step appeared better organized, but the total precipitation within 24 hours was very similar. Please see the attached file.
phii_recalculation.pptx
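For reference, a minimal sketch of the kind of hydrostatic recalculation being described (illustrative only, not the actual CCPP scheme code): the interface geopotential is rebuilt from the cumulus-updated temperature and moisture.

    subroutine update_phii(km, con_rd, con_fvirt, prsi, t, q, phii)
      implicit none
      integer, intent(in)  :: km                  ! number of layers
      real,    intent(in)  :: con_rd, con_fvirt   ! R_d and (R_v/R_d - 1)
      real,    intent(in)  :: prsi(km+1)          ! interface pressures (Pa), surface first
      real,    intent(in)  :: t(km), q(km)        ! updated temperature and specific humidity
      real,    intent(out) :: phii(km+1)          ! interface geopotential (m2 s-2)
      real    :: tv
      integer :: k
      phii(1) = 0.0                               ! geopotential relative to the surface
      do k = 1, km
        tv = t(k) * (1.0 + con_fvirt * q(k))      ! virtual temperature
        phii(k+1) = phii(k) + con_rd * tv * log(prsi(k) / prsi(k+1))
      end do
    end subroutine update_phii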

Some CODEOWNERS lack write access, breaking CODEOWNERS functionality.

For the CODEOWNERS functionality to work, all accounts in the CODEOWNERS file must have write access to this repository. That means you need to do one of these:

  1. Remove people from CODEOWNERS.
  2. Delete CODEOWNERS.
  3. Grant accounts in CODEOWNERS write access.
  4. Ignore the problem, and have CODEOWNERS continue to be non-functional, but still match the NCAR ccpp-physics CODEOWNERS file.

If you want the repository to be identical to NCAR ccpp-physics, but don't want CODEOWNERS to work, then option 4 is the best way. In that case, you should give this bug the "will not fix" label.

0 hour restart file

I know this is a strange question, but I'm dealing with implementing some passive tracers in UFS. While interpolated output is nice to use for interpretation, only the native fields should be somewhat "perfect" with respect to mass conservation questions. It would be nice to know if anything strange is happening with my tracer initialization. Interpolated fields at 0 hrs are pretty close to the init conds but not exact. A restart file written at 0 hrs would allow me to verify that the init conds are being input accurately. Is this possible? I often just see the ability to write restarts every 6 or 12 hrs, with an additional option to write at the end of the run.

Reduce library dependencies in ccpp-physics

Description

The ccpp-physics repository depends on 3 libraries in NCEPlibs: sp, w3emc, and bacio. The sp library is used by GFS_phys_time_vary.fv3.F90/gcycle/sfccycle (the actual dependency is in sfcsub.F). The dependency amounts to roughly 300 lines (with comments) from the sp library, all contained in splat.F and lapack_gen.F in https://github.com/NOAA-EMC/NCEPLIBS-sp. lapack_gen.F contains code from Numerical Recipes for matrix inversion; splat.F, according to its comments, computes cosines of colatitude and Gaussian weights for sets of latitudes.

Additionally, several schemes use subroutines from the w3emc library, which also has a dependency on the bacio library.

Solution

It would be relatively straightforward to remove the dependency on the sp library by including the necessary code directly in sfcsub.F, albeit duplicating code from the library.

Removing the w3 dependency looks like significantly more work.

@ChunxiZhang-NOAA also mentioned a more complete refactoring of gcycle to remove the dependency.

Alternatives (optional)

Keep the dependencies.

Related to (optional)

#41

CLM Lake initialization issues cause an energy budget problem

Description

Some 3D lake variables are either initialized only on one 2D slice, or have the same values in all levels of a column. This unphysical situation causes energy budget issues in the CLM Lake Model.

@tanyasmirnova discovered this problem

Steps to Reproduce

Run the model with clm_lake_debug turned on and you'll see trouble reported for the first hour or so. As a consequence, some of the surface fields will lack lake ice during that period.

Additional Context

  • Machine - hera
  • Compiler - intel
  • Suite Definition File or Scheme - FV3_HRRR

Output

No.

Instability issue for regression test merra2_thompson

Description

The regression test (RT) merra2_thompson has many Warn_K warnings. RT merra2_thompson is the aerosol-aware Thompson scheme with water- and ice-friendly aerosols from MERRA2. It is likely that something is not correctly implemented.

Steps to Reproduce

./rt.sh -k -n merra2_thompson > rt.log 2>&1&

Check the out file in the run directory.

HEAP memory in physics schemes

Description

Some schemes rely on the dynamic use of HEAP memory (e.g. module memory). This is problematic for host model applications that wish to run multiple instances of ccpp-physics within the same executable, where we run into the issue of multiple tasks trying to read/write the same memory.

Solution

Remove dynamic HEAP references from parameterizations and replace them with Interstitials. This way each ccpp instance is self-contained and "stateless" (no dynamic shared memory). This is straightforward for some schemes (e.g. Thompson MP), while quite invasive for others (e.g. ozone physics and RRTMGP).
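A minimal sketch of the direction described above (all names hypothetical): state that used to live in module variables, and was therefore shared by every ccpp-physics instance in the executable, moves into an interstitial derived type allocated once per instance.

    module example_interstitial
      implicit none
      type :: example_interstitial_type
        ! previously a module-level (shared) array
        real, allocatable :: lookup_table(:)
      end type example_interstitial_type
    contains
      subroutine example_scheme_init(interstitial, n)
        type(example_interstitial_type), intent(inout) :: interstitial
        integer, intent(in) :: n
        if (.not. allocated(interstitial%lookup_table)) then
          allocate(interstitial%lookup_table(n))
          interstitial%lookup_table = 0.0
        end if
      end subroutine example_scheme_init
    end module example_interstitial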

incorrect filename in warning message for missing noahmptable.tbl

Description

When the model requires noahmptable.tbl, and it isn't present, the error message is wrong. The message asks for noahmptable.tb without the final l.

    inquire( file='noahmptable.tbl', exist=file_named )
    if ( file_named ) then
       open(15, file="noahmptable.tbl", status='old', form='formatted', action='read', iostat=ierr)
    else
       open(15, status='old', form='formatted', action='read', iostat=ierr)
    end if
    if ( ierr /= 0 ) then
       errmsg = 'warning: cannot find file noahmptable.tb' !<---- ERROR IS HERE
       errflg = 1
       return
!      write(*,'("warning: cannot find file noahmptable.tbl")')
    endif
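The fix is presumably just restoring the final character of the filename in the message string:

       errmsg = 'warning: cannot find file noahmptable.tbl'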

Steps to Reproduce


  1. Run a NOAHMP configuration without noahmptable.tbl
  2. Get the wrong error message

Additional Context

  • Machine - N/A
  • Compiler - N/A
  • Suite Definition File or Scheme - anything that uses noahmp

Improve stability of unified drag suite and add diagnostics

Description

The unified gravity wave physics (UGWP) package has experienced numerical instability during FV3GFS Prototype 8 testing at C384 horizontal resolution. (Model crashes are documented here and here.) The source of the instabilities has been traced to the mesoscale gravity wave drag (GWD) and turbulent orographic form drag (TOFD) parameterizations contained in the CCPP module drag_suite.F90. The instability can be alleviated by taking a smaller physics time step; however, this is not a viable solution, as it would not be amenable to operational forecasting.

Solution

The solution is to limit the tendencies calculated by the mesoscale GWD scheme in drag_suite.F90 in a manner similar to what is done in the GFSv16 scheme in gwdps.f. The TOFD instability traces back to an earlier modification of the scheme that removed a limit on the subgrid terrain variability and caused excessively large tendencies. The limits have been re-introduced to help alleviate the instability problem.
Another change to the code was the addition of optional diagnostic outputs, controlled by the namelist option "ldiag_ugwp", which outputs the drag contributions from each of the drag components of the UGWP: mesoscale GWD, low-level blocking, small-scale GWD, TOFD and non-stationary GWD.

FV3_HRRR_c3 suite crashes in c3 calls on Cheyenne with gnu compiler in debug mode.

Description

The FV3_HRRR_c3 suite crashes in c3 calls on Cheyenne when compiled with the gnu compiler in debug mode.

Steps to Reproduce

Please provide detailed steps for reproducing the issue.

  1. Create a hrrr_c3_debug variant of hrrr_control_debug in the ufs weather model regression tests
  2. Run it on Cheyenne with the gnu compiler.
  3. Model crashes.

Additional Context


  • Machine - Cheyenne
  • Compiler - gnu
  • Suite Definition File or Scheme - FV3_HRRR_c3
  • fixed by #102

Add new RRTMGP aerosol-optics

Opened on behalf of @yangfanglin and @Qingfu-Liu

RRTMGP now provides procedures to compute the aerosol optical properties. Currently RRTMGP uses the same external code as RRTMG. Everything done for the gas/cloud optics in GP needs to be done for the aerosols, using these files.

Some background and comments from offline discussions.

Both the longwave and shortwave have their own gas and cloud optics modules, each with their own derived-data type that contains BOTH the data needed for the optical calculations and the routines to compute them.
rrtmgp_lw_gas_optics.F90 loads the lw_gas_props DDT
rrtmgp_lw_cloud_optics.F90 loads the lw_cloud_props DDT
rrtmgp_sw_gas_optics.F90 loads the sw_gas_props DDT
rrtmgp_sw_cloud_optics.F90 loads the sw_cloud_props DDT

These modules all operate as follows. Read in file, load DDT. The "read" part needs to be MPI compliant, so there are additional commands/logic in the code for this.
The input file (netcdf) is provided through the namelist; this file name is passed in as an interstitial variable. If you look at the contents of any of these input files and compare them to their respective routines, you will see that they are doing something very simple, but it looks complicated. They are just reading in everything in the file and calling an rrtmgp routine to load the data into the DDT. You will need to create analogous routines, starting with the aerosol data that is part of the rte-RRTMGP aerosol optics.

Later these DDTs are referenced by the main RRTMGP drivers, one for LW, one for SW.
rrtmgp_lw_main.F90
rrtmgp_sw_main.F90
They are set up during the ccpp _init() phase, and referenced/used during the ccpp _run() phase. You will need to add calls to initialize the aerosol optics in the main drivers' _init() phase, and add calls to the aerosol optics routines within the main radiation loop in the _run() phase.

Backing up to how this differs from RRTMG: in RRTMG, the data, constants, and code used by the gas optics calculation are in varying places. For example, in the longwave, the k-distribution data for the gas optics is defined in radlw_datatb.f, the configuration information is mixed into radlw_param.f, and all of it is referenced in radlw_main.F90 when calling taumol(). Whereas in GP, everything needed by the main longwave driver is contained within lw_gas_props (e.g. lw_gas_props%get_nbands, lw_gas_props%gas_optics, ...).

The aerosols in RRTMGP are isolated to a single module, rrtmgp_aerosol_optics.F90, where the LW/SW aerosol optical properties are computed using the routine setaer(); this is common to both RRTMG and RRTMGP. Then in the main GP driver, we add in the contributions from the aerosol optical properties by incrementing the gaseous optics. This needs explaining... Within the RRTMGP DDTs mentioned above, there are native routines (type-bound procedures) to combine optical properties, called incrementing functions. These routines are "smart" in that they know how to combine two optical properties of varying spectral definitions, as long as they have the same underlying band structure (see https://github.com/earth-system-radiation/rte-rrtmgp/blob/main/rte/mo_optical_props.F90#L39 for more details). Unrelated, but there are other instances of incrementing in the cloud-mp coupling module, where optical properties of varying cloud types are incremented together before calling McICA. Sorry if I got off-track, but these increment calls support the philosophy that GP can be configured to use an arbitrary number of radiative contributors.

Clean up unified convection scheme and general convection pre/post

Description

The starting point of the unified convection scheme is by and large inherited from the GF scheme. In order to be transitioned to operations, it needs to be optimized for speed, stop redefining CCPP-defined physical constants, have its old goto statements removed, etc. Many comments have been provided by code managers in PR #56. In addition, there is a large overlap in the pre/post routines for the different convection schemes that should be combined.

Solution

Address comments in PR #56, and potential additional optimizations. Combine pre/post for convection schemes that need them.

CFL violations and issues plotting output from changes made in PR #65 (3a306a4) for RRFS_CONUS_25km grid

Description

While updating the UFS-WM to the latest hash (e403bb4), the SRW App's WE2E tests started failing with the following error message:

FATAL from PE 1: compute_qs: saturation vapor pressure table overflow, nbad= 1

Ultimately, we were able to get around this issue by decreasing DT_ATMOS from 180 to 150. This change caused the grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v17_p8_plot WE2E test to fail. To make this test work again, DT_ATMOS was set to 180 for this specific test.

Trying to identify the issue that led to the failures, I attempted to change the cq parameter values in mfpbltq.f, mfscuq.f, samfdeepcnv.f, samfshalcnv.f, and satmedmfvdifq.f from 1.0 back to 1.3. Changing this value in samfdeepcnv.f allowed the RRFS_CONUS_25km tests to run using the original DT_ATMOS value of 180. Further, this change corrected the issue seen in the plotting WE2E test.

Unfortunately, I'm not familiar enough with CCPP to know what the cq parameter is used for. Is there a reason that it was reduced from 1.3 to 1.0 in the noted routines as part of PR #65? Would it be possible to set this value back to 1.3 for samfdeepcnv.f, or maybe add a namelist variable so that the value can be set at the application level?

Tagging @grantfirl, @JongilHan66, and @Qingfu-Liu since these individuals are either the PR owner or worked closely with the changes made in PR #65 for HR2.

Steps to Reproduce

  1. Clone the SRW App on Hera: git clone git@github.com:ufs-community/ufs-srweather-app.git
  2. cd ufs-srweather-app
  3. ./manage_externals/checkout_externals
  4. ./devbuild.sh -p=hera
  5. module use $PWD/modulefiles
  6. module load wflow_hera
  7. conda activate workflow_tools
  8. vi ush/predef_grid_params.yaml find RRFS_CONUS_25km and set DT_ATMOS from 150 to 180
  9. cd tests/WE2E
  10. ./run_WE2E_tests.py -t= grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2 -m=hera -a=<insert account here>
  11. See the noted error message in the description in the log/run_fcst* log file. A copy of the error in the log file has been added to the end of this issue as well.

Additional Context

  • Issues were encountered on UCAR's Cheyenne (with Intel compilers) and Hera (both Intel and GNU).
  • Cheyenne's Intel compiler used is 2022.1, Hera's Intel compiler used is 2022.1.2, and Hera's GNU compiler used is 9.2.0.
  • The test noted above uses the FV3_GFS_v15p2 SDF, while the noted failure of the grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v17_p8_plot test uses the FV3_GFS_v17_p8 SDF.
  • The issues became apparent in PR #65 in the ccpp-physics repo, PR #1731 in the ufs-weather-model repo (the PR that brought the changes made in PR #65 into the UFS-WM), and PR #799 in the SRW App, where the UFS-WM hash was updated.

Output

grid_RRFS_CONUS_25km_ics_FV3GFS_lbcs_FV3GFS_suite_GFS_v15p2_run_fcst_log

Floating point exception in module_sf_noahmp_glacier.F90

Description

Potential division by zero (or 0/0) at line 2656 when dzsnso(0) = 0 [and snice(0)=0]. In debug compile of UFS model/FV3. Print statements show that isnow=0 at the (first) point where it fails and sneqv is just barely exceeding mwd.

module_sf_noahmp_glacier.F90, lines 2653-2661

!to obtain equilibrium state of snow in glacier region
       
   if(sneqv > mwd) then   ! 100 mm -> maximum water depth
      bdsnow      = snice(0) / dzsnso(0)
      snoflow     = (sneqv - mwd)
      snice(0)    = snice(0)  - snoflow 
      dzsnso(0)   = dzsnso(0) - snoflow/bdsnow
      snoflow     = snoflow / dt
   end if
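A minimal sketch of the guard suggested later in this issue (not a vetted NoahMP fix): skip the equilibrium adjustment when the top snow layer is empty, so bdsnow can never be 0/0.

    if (sneqv > mwd .and. dzsnso(0) > 0.0 .and. snice(0) > 0.0) then
       bdsnow      = snice(0) / dzsnso(0)
       snoflow     = (sneqv - mwd)
       snice(0)    = snice(0)  - snoflow
       dzsnso(0)   = dzsnso(0) - snoflow/bdsnow
       snoflow     = snoflow / dt
    end if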

Steps to Reproduce


The RT rrfs_v1nssl test in debug compile fails with an FPE (tested on hera with intel compilers)
./opnReqTest -n rrfs_v1nssl -c dbg -ek

Additional Context

Curiously, the rrfs_v1beta SDF does not trigger this failure, even though the only difference is the microphysics option (Thompson instead of NSSL). I haven't yet found an explanation for why that may be. When isnow=0, dzsnso is still initialized to zero but doesn't get updated in snowh2o_glacier. I don't know this code at all, so I hope somebody with knowledge can help isolate a possible cause (or just a patch, like checking that dzsnso(0) /= 0 and snice(0) /= 0, or just isnow /= 0).

Presumably (I haven't checked), with an optimized compile, a NaN would be produced but doesn't cause any other harm.

Output

On hera:

/scratch1/BMC/gsd-fv3-dev/Ted.Mansell/ufs/rt/stmp2/Ted.Mansell/FV3_OPNREQ_TEST/opnReqTest_24904/rrfs_v1nssl_intel_dbg_base/err:

 46: [h9c28:309589:0:309589] Caught signal 8 (Floating point exception: floating-point invalid operation)
 46: ==== backtrace (tid: 309589) ====
 46:  0 0x000000000004d455 ucs_debug_print_backtrace()  ???:0
 46:  1 0x000000000c33bf83 noahmp_glacier_routines_mp_snowwater_glacier_()  /scratch1/BMC/gsd-fv3-dev/Ted.Mansell/ufs/ufs-nssl3m/FV3/ccpp/physics/physics/module_sf_noahmp_glacier.F90:2656
 46:  2 0x000000000c339822 noahmp_glacier_routines_mp_water_glacier_()  /scratch1/BMC/gsd-fv3-dev/Ted.Mansell/ufs/ufs-nssl3m/FV3/ccpp/physics/physics/module_sf_noahmp_glacier.F90:2518

Change of iswmode from a parameter to an argument inhibits optimization efforts.

commit f3499e3 added iswmode as an argument to spcvrtm() (among others) and deleted the physparam.f file which previously had iswmode set to 2. The setting in physparam.f had the 'parameter' attribute which makes the value of iswmode available to the compiler.

The reason I noticed this is that I'm working on some optimizations to the spcvrtm() routine. Having iswmode unknown at compile time is destroying my efforts. The iswmode value controls some branch logic inside a performance critical loop. With iswmode set as a parameter, the compiler can eliminate that messy branch code. Eliminating the branch code and making another code modification allows the compiler to vectorize that loop.

Besides cleanup, was there some other reason for making iswmode an argument to spcvrtm()?
If I wanted to change iswmode back to a parameter, is the physcons module a good place to put the iswmode parameter assignment?
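For illustration, a hedged sketch of what restoring the compile-time constant might look like; where to place it (physcons or elsewhere) is exactly the open question above.

    ! a named constant lets the compiler prune the iswmode branches in spcvrtm()
    integer, parameter :: iswmode = 2   ! previously set this way in physparam.f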

GFS_phys_time_vary_init does not report errmsg/errflg correctly due to thread race condition

Description

GFS_phys_time_vary_init is parallelized using OpenMP sections, but it does not correctly handle errmsg or errflg. All threads update the same errmsg and errflg, which means a failure status can be overwritten by a success status from a later step.

Pinging @hu5970 who encountered the problem and @tanyasmirnova who is debugging it too.

To visualize this, suppose there are two threads running at once. For simplicity's sake, let's say there are only two initialization calls: init_that_fails() and init_that_succeeds().

Failure happens first

Events happened in this order:

Thread 1: Completes init_that_fails() and sets errflg=1
Thread 2: Completes init_that_succeeds() and sets errflg=0

The errflg is 0, so the model will run even though one of the initialization steps failed.

Failure happens second

Events happened in this order:

Thread 2: Completes init_that_succeeds() and sets errflg=0
Thread 1: Completes init_that_fails() and sets errflg=1

The errflg is 1, so the model will abort as expected.
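A minimal sketch of one way to avoid the race (not the actual GFS_phys_time_vary fix), using the two hypothetical init calls from the example above: each section records its own status, and the first failure wins after the parallel region ends.

    subroutine phys_time_vary_init_sketch(errmsg, errflg)
      implicit none
      character(len=*), intent(out) :: errmsg
      integer,          intent(out) :: errflg
      character(len=512) :: errmsg_sec(2)
      integer            :: errflg_sec(2), i
      errmsg = ''
      errflg = 0
      errmsg_sec = ''
      errflg_sec = 0
    !$omp parallel sections
    !$omp section
      call init_that_fails(errmsg_sec(1), errflg_sec(1))
    !$omp section
      call init_that_succeeds(errmsg_sec(2), errflg_sec(2))
    !$omp end parallel sections
      do i = 1, 2                        ! report the first failure, if any
        if (errflg_sec(i) /= 0) then
          errmsg = errmsg_sec(i)
          errflg = errflg_sec(i)
          return
        end if
      end do
    end subroutine phys_time_vary_init_sketch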

Steps to Reproduce


  1. Delete noahmptable.tbl
  2. Use a scheme that does not require that file.
  3. Run the model a few times with at least two threads.
  4. Notice that it fails sporadically instead of 100% of the time.

Additional Context

This was discovered in an RRFS parallel. The machine, compiler, etc. doesn't matter. However, the easiest way to see it is to run a non-NOAHMP suite without noahmptable.tbl.

Remove the use of the FV3 sigma pressure hybrid coordinate "ak" and "bk" coefficients in the UGWP scheme

Description

The three modules that call the non-stationary gravity wave drag (NGWD) schemes of the UGWP (cires_ugwp.F90, unified_ugwp.F90 and ugwpv1_gsldrag.F90) explicitly use the FV3 sigma-pressure hybrid-coordinate coefficients "ak" and "bk" to determine the "launch level" for non-stationary gravity waves. This prevents the CCPP-physics library from being "agnostic" to the host dynamical core, as not all dycores use this vertical coordinate system.

Solution

The goal would be to find a solution that reproduces identical results to the current code. No such solution has been found yet. A compromise may be to have results that are "close enough" to those of the current code.

ccpp-physics not ready for new fms

Description

The latest FMS beta tag fms/2023.02-beta1 doesn't work with ccpp-physics. This is because starting with this version, the legacy fms_io (version 1) is no longer compiled unless explicitly requested in the build step of FMS (see NOAA-GFDL/FMS#1225 (comment)). Please update ccpp-physics to use FMS_io version 2.

cd /Users/heinzell/work/ufs-bundle/20230714/build-atm-fms-2023.02-beta1-debug/ufs-weather-model/src/ufs-weather-model-build/FV3/atmos_cubed_sphere && /Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/openmpi-4.1.5-j7pjg6h/bin/mpif90 -DDEBUG -DENABLE_QUAD_PRECISION -DGFS_PHYS -DGFS_TYPES -DINTERNAL_FILE_NML -DMOIST_CAPPA -DSPMD -DUSE_COND -DUSE_GFSL63 -Duse_WRTCOMP -I/Users/heinzell/work/ufs-bundle/20230714/build-atm-fms-2023.02-beta1-debug/ufs-weather-model/src/ufs-weather-model-build/FV3/atmos_cubed_sphere/include/fv3 -I/Users/heinzell/work/ufs-bundle/20230714/ufs-bundle-use-jedi-develop/ufs-weather-model/FV3/atmos_cubed_sphere -I/Users/heinzell/work/ufs-bundle/20230714/ufs-bundle-use-jedi-develop/ufs-weather-model/FV3/atmos_cubed_sphere/tools -I/Users/heinzell/work/ufs-bundle/20230714/build-atm-fms-2023.02-beta1-debug/ufs-weather-model/src/ufs-weather-model-build/FV3/ccpp/mod -I/Users/heinzell/work/ufs-bundle/20230714/build-atm-fms-2023.02-beta1-debug/ufs-weather-model/src/ufs-weather-model-build/FV3/ccpp/framework/src -I/Users/heinzell/work/ufs-bundle/20230714/build-atm-fms-2023.02-beta1-debug/ufs-weather-model/src/ufs-weather-model-build/FV3/ccpp/physics -I/Users/heinzell/scratch/fms-beta-testing/spack-stack-rel-141-add-fms-beta/envs/fms-beta-chained-clang/install/apple-clang/13.1.6/fms-2023.02-beta1-chtgof5/include_r8 -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/netcdf-fortran-4.6.0-4jlf4fw/include -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/netcdf-c-4.9.2-vrrvi2u/include -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/w3emc-2.9.2-6jxtpbc/include_d -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/bacio-2.4.1-wc2qkbz/include_4 -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/sp-2.3.3-s7cl343/include_d -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/esmf-8.4.2-j4unthd/include -I/Users/heinzell/prod/spack-stack-1.4.1/envs/unified-env/install/apple-clang/13.1.6/parallelio-2.5.9-r5txd2q/include -fPIC -ggdb -fbacktrace -cpp -fcray-pointer -ffree-line-length-none -fno-range-check -fallow-argument-mismatch -fallow-invalid-boz -fdefault-real-8 -fdefault-double-8 -ggdb -fbacktrace -cpp -fcray-pointer -ffree-line-length-none -fno-range-check -fdefault-real-8 -fdefault-double-8 -fallow-argument-mismatch -fallow-invalid-boz -O0 -fno-unsafe-math-optimizations -frounding-math -fsignaling-nans -ffpe-trap=invalid,zero,overflow -fbounds-check -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -Jinclude/fv3 -fPIC -c /Users/heinzell/work/ufs-bundle/20230714/ufs-bundle-use-jedi-develop/ufs-weather-model/FV3/atmos_cubed_sphere/model/molecular_diffusion.F90 -o CMakeFiles/fv3.dir/model/molecular_diffusion.F90.o
/Users/heinzell/work/ufs-bundle/20230714/ufs-bundle-use-jedi-develop/ufs-weather-model/FV3/atmos_cubed_sphere/model/molecular_diffusion.F90:44:52:

   44 |       use fms_mod,            only: check_nml_error, open_namelist_file, close_file
      |                                                    1
Error: Symbol 'open_namelist_file' referenced at (1) not found in module 'fms_mod'
/Users/heinzell/work/ufs-bundle/20230714/ufs-bundle-use-jedi-develop/ufs-weather-model/FV3/atmos_cubed_sphere/model/molecular_diffusion.F90:44:72:

   44 |       use fms_mod,            only: check_nml_error, open_namelist_file, close_file
      |                                                                        1
Error: Symbol 'close_file' referenced at (1) not found in module 'fms_mod'
make[5]: *** [FV3/atmos_cubed_sphere/CMakeFiles/fv3.dir/model/molecular_diffusion.F90.o] Error 1
make[4]: *** [FV3/atmos_cubed_sphere/CMakeFiles/fv3.dir/all] Error 2
make[3]: *** [all] Error 2
make[2]: *** [ufs-weather-model/src/ufs-weather-model-stamp/ufs-weather-model-build] Error 2
make[1]: *** [CMakeFiles/ufs-weather-model.dir/all] Error 2
make: *** [all] Error 2

Steps to Reproduce

Build latest fms/2023.02-beta1 with standard options and then try to build the UFS weather model.

Additional Context

n/a

Output

See above

RRTMGP shortwave radiation cache blocking code fails with blocksize > 1

Description

Cache blocking code was recently added to the RRTMGP radiation scheme. The run-time speedup can be dramatic when the longwave blocking factor is set to a small value such as rrtmgp_lw_phys_blksz = 4. On a small c96 test run this yields a 20% run-time improvement!

However, setting the shortwave blocking to anything larger than 1 results in model instability, with gas_optics and cloud_optics hitting run-time checks in those routines. We believe the problem is that some MPI ranks have daylight column counts that do not map evenly onto the block size, in which case some array indices are set to 0. For MPI ranks that do not have this problem, we see run-time speedups of 15-20%.
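A hedged sketch of the suspected remedy (not the actual RRTMGP driver code; all names are illustrative): when looping over daylight columns in blocks, clamp the last block so no index runs past the daylight count or falls to zero.

    subroutine sw_blocked_loop_sketch(nday, blksz, idxday)
      implicit none
      integer, intent(in) :: nday, blksz    ! daylight column count and block size
      integer, intent(in) :: idxday(nday)   ! indices of the daylight columns
      integer :: ibs, ibe
      do ibs = 1, nday, blksz
        ibe = min(ibs + blksz - 1, nday)    ! partial final block when nday is not a multiple of blksz
        ! process columns idxday(ibs:ibe); the block width ibe-ibs+1 may be
        ! smaller than blksz on the last pass
        call process_block(idxday(ibs:ibe), ibe - ibs + 1)   ! hypothetical worker routine
      end do
    end subroutine sw_blocked_loop_sketch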

Steps to Reproduce

  1. set shortwave blocking factor to 4 in UFS input.nml rrtmgp_sw_phys_blksz = 4
  2. Run will encounter errors like the following

NaN in input field of mpp_reproducing_sum(_2d), this indicates numerical instability

ERROR(rrtmgp_sw_main_gas_optics):
gas_optics(): array tlay has values outside range
ERROR(rrtmgp_sw_main_cloud_optics):
cloud optics: ice effective radius is out of bounds

Additional Context

  • WCOSS2 HPE/Cray EX
  • Intel 19.1.3.304
  • RRTMGP radiation scheme

Output

Explore adding additional compilation flags to physics for speedup of execution with Intel

Description

The following compiler options may speed up execution of physics:
(for Intel) -fp-model=fast -fprotect-parens -fimf-precision=high

@dkokron in #76 found a significant speedup for the UGWP v0 scheme. Code managers decided it was best to not single out one scheme for such treatment, however, and would like to see the impact of using these flags on all physics, or at least an entire suite.

Solution

Add the given flags "higher up" in the build system hierarchy, perhaps controlled by an environment variable.

Alternatives (optional)

Add the flags in the ccpp/physics CMakeLists.txt

Related to (optional)

#76

Add OpenACC directives to speed up the MYNN SFC scheme

Description

NOAA GSL scientists have been working to add OpenACC directives to GPU-accelerate schemes within one physics suite. The MYNN SFC scheme needs to have these directives added to join the GF convection, MYNN PBL, and Thompson MP schemes.

Solution

Add OpenACC directives as necessary for GPU acceleration to the MYNN SFC scheme.

Adding tendency limiter for mesosphere and horizontal wave number filter for orographic gravity wave drag in UGWP

Description

  1. The orographic gravity wave drag (OGWD) component of the unified gravity wave physics (UGWP) suite of parameterizations can cause model crashes due to excessive wind tendencies. These crashes occur at high altitudes, typically in the mesosphere, where the rarefied air density causes parameterized waves to (physically) significantly increase in amplitude. (See slide 1 in the pdf attachment below.) These crashes can be remedied by taking a much smaller time step, but this is not practical.
  2. The zonally averaged surface OGWD stress is excessive in the southern hemisphere, in association with the Andes mountain range. (See slide 3 in pdf attachment below.) The near-surface winds are frequently very strong in these regions, and the OGWD scheme does not limit the OGWD stress when wind speed reaches a point where the waves become evanescent and do not impart a drag force.

Solution

  1. Model crashes in the upper levels of the model can be avoided by limiting the velocity tendency due to OGWD such that in one time step the winds cannot reach zero (or reverse sign); a minimal sketch of this limiter follows this list. Proposed code for the fix above the pressure level "plolevmeso", which we've tested at 0.7 mb (roughly the height of the stratopause), is shown in slide 2 of the attached pdf. Model crashes are avoided with the proposed tendency limiter.
  2. From linear theory, if the ratio of the static stability (N) to the wind speed (U) is less than the horizontal wave number of the topography (k_s), that is, if N/U < k_s, then orographic gravity waves do not propagate vertically and their wave stress is zero. We've added code to the OGWD component to take this into account, where we set k_s from the minimum sub-grid topographic wavelength expected from a grid size of dx. We consider the assumed maximum sub-grid wavelength to be 0.5*dx.
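A minimal sketch of the limiter described in item 1 (variable names are illustrative; the actual change is proposed for the UGWP OGWD code above the plolevmeso level):

    elemental subroutine limit_ogwd_tend(u, dt, dudt)
      implicit none
      real, intent(in)    :: u, dt   ! wind component (m/s) and physics time step (s)
      real, intent(inout) :: dudt    ! OGWD wind tendency (m/s2), limited in place
      ! cap |du/dt| at |u|/dt so a single step cannot take the wind past zero
      dudt = sign(min(abs(dudt), abs(u) / dt), dudt)
    end subroutine limit_ogwd_tend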

CCPP_Issue_slides_2023_08_04.pdf

How to shut off parameterized mixing of tracers

I'm tasked with putting passive tracers into UFS. I'm adding a few tracers into the field_table and diag_table and dumping them into the tracer arrays in UFS. It appears to be working. However, I'm getting a small continuous loss of global tracer mass (0.5 ppm per 6 hrs on a 400 ppm background) and local unmixing of flat tracer fields. I've noticed a lot of spurious concentrations near the top few model levels. I'd like to investigate to what degree this is caused by advection in FV3 (probably a smaller but not insignificant piece), or possibly by parameterized mixing of the tracers by either the PBL or convection routines. I'd essentially like to know how to turn off/on any parameterized mixing schemes in order to reduce the model to pure advection by FV3. For now, I'm using the FV3_GFS_v17_p8 suite w/ merra2_thompson as the RT sandbox choice. I'm assuming I'll need to remove/comment out pieces of the following suite definition file? Does anybody know which ones are relevant? And as I move to other physics suites, how should I approach this same question?

#####################

GFS_time_vary_pre GFS_rrtmg_setup GFS_rad_time_vary GFS_phys_time_vary GFS_suite_interstitial_rad_reset GFS_rrtmg_pre GFS_radiation_surface rad_sw_pre rrtmg_sw rrtmg_sw_post rrtmg_lw_pre rrtmg_lw rrtmg_lw_post GFS_rrtmg_post GFS_suite_interstitial_phys_reset GFS_suite_stateout_reset get_prs_fv3 GFS_suite_interstitial_1 GFS_surface_generic_pre GFS_surface_composites_pre dcyc2t3 GFS_surface_composites_inter GFS_suite_interstitial_2
<!-- Surface iteration loop -->
<subcycle loop="2">
  <scheme>sfc_diff</scheme>
  <scheme>GFS_surface_loop_control_part1</scheme>
  <scheme>sfc_nst_pre</scheme>
  <scheme>sfc_nst</scheme>
  <scheme>sfc_nst_post</scheme>
  <scheme>noahmpdrv</scheme>
  <scheme>sfc_sice</scheme>
  <scheme>GFS_surface_loop_control_part2</scheme>
</subcycle>
<!-- End of surface iteration loop -->
<subcycle loop="1">
  <scheme>GFS_surface_composites_post</scheme>
  <scheme>sfc_diag</scheme>
  <scheme>sfc_diag_post</scheme>
  <scheme>GFS_surface_generic_post</scheme>
  <scheme>GFS_PBL_generic_pre</scheme>
  <scheme>satmedmfvdifq</scheme>
  <scheme>GFS_PBL_generic_post</scheme>
  <scheme>gml_ghg_emi_wrapper</scheme>
  <scheme>GFS_GWD_generic_pre</scheme>
  <scheme>unified_ugwp</scheme>
  <scheme>unified_ugwp_post</scheme>
  <scheme>GFS_GWD_generic_post</scheme>
  <scheme>GFS_suite_stateout_update</scheme>
  <scheme>ozphys_2015</scheme>
  <scheme>h2ophys</scheme>
  <scheme>get_phi_fv3</scheme>
  <scheme>GFS_suite_interstitial_3</scheme>
  <scheme>GFS_DCNV_generic_pre</scheme>
  <scheme>samfdeepcnv</scheme>
  <scheme>GFS_DCNV_generic_post</scheme>
  <scheme>GFS_SCNV_generic_pre</scheme>
  <scheme>samfshalcnv</scheme>
  <scheme>GFS_SCNV_generic_post</scheme>
  <scheme>GFS_suite_interstitial_4</scheme>
  <scheme>cnvc90</scheme>
  <scheme>GFS_MP_generic_pre</scheme>
  <scheme>mp_thompson_pre</scheme>
  <scheme>gml_ghg_wrapper</scheme>
  </subcycle>
  <subcycle loop="1">
    <scheme>mp_thompson</scheme>
  </subcycle>
    <subcycle loop="1">
    <scheme>mp_thompson_post</scheme>
    <scheme>GFS_MP_generic_post</scheme>
    <scheme>maximum_hourly_diagnostics</scheme>
  </subcycle>
GFS_stochastics phys_tend

Fix issue with level of dividing streamline (rdxzb) being overwritten and affecting stochastic physics in ugwp

Description

As part of ccpp-physics PR #22, the order in which subroutines "gwdps_run" and "drag_suite_run" are called by subroutine "unified_ugwp_run" was reversed. Subroutine "drag_suite_run" is now called last, and it initializes the variable "RDXZB" (the level of the dividing streamline for blocking), which zeros out the value that was calculated in "gwdps_run". RDXZB is needed outside of the unified_ugwp subroutine by the stochastic physics. This caused stochastic physics runs to crash. The bug was not caught by regression testing.

The problem has been fixed by changing the declaration of "RDXZB" from "intent(out)" to "intent(inout)" in drag_suite.F90 and drag_suite.meta, and initializing it in drag_suite.F90 only when the GSL drag suite blocking is performed, i.e., when "do_gsl_drag_ls_bl" is equal to .true.

Steps to Reproduce

This problem doesn't occur when running a deterministic forecast. It only causes crashes when running stochastic physics.

RRTMGP radiation physics fails when using more than 1 OpenMP thread

Description

We are running UFS on WCOSS2 HPE/Cray systems to evaluate the RRTMGP radiation scheme and find that it fails in the mo_gas_concentrations routine in the longwave code if more than 1 OMP thread is used. The shortwave code (and the other RRTMGP routines executed before it) is OK.

Steps to Reproduce

  1. Build UFS with OpenMP enabled
  2. Run with OMP_NUM_THREADS=1 and get successful completion
  3. Run with OMP_NUM_THREADS=2 and the code will segfault in the mo_gas_concentrations routine

Additional Context

  • WCOSS2 HPE/Cray EX
  • Intel 19.1.3.304
  • RRTMGP (RRTM radiation scheme is OK)

Output

Traceback and errors from UFS

68: fv3.exe 00000000043CEE02 Unknown Unknown Unknown
68: fv3.exe 000000000393039E mo_gas_concentrat 288 mo_gas_concentrations.F90
68: fv3.exe 00000000038066A7 rrtmgp_lw_main_mp 300 rrtmgp_lw_main.F90
68: fv3.exe 0000000003429255 ccpp_fv3_gfs_v17_ 518 ccpp_FV3_GFS_v17_p8_rrtmgp_radiation_cap.F90
68: fv3.exe 00000000030E13B6 ccpp_static_api_m 943 ccpp_static_api.F90
68: fv3.exe 00000000030DDDA2 ccpp_driver_mp_cc 188 CCPP_driver.F90

78: [h24c21:2818 :0:2884] Caught signal 11 (Segmentation fault: address not mapped to object at address (nil))
56: [h24c12:18836:0:18888] Caught signal 11 (Segmentation fault: address not mapped to object at address (nil))
41: [h24c12:18821:0:18894] Caught signal 11 (Segmentation fault: address not mapped to object at address (nil))
68: forrtl: severe (153): allocatable array or pointer is not allocated

issue for RRFS-SD upgrade

  1. The FV3_HRRR and FV3_HRRR_smoke suites are unified as one suite, FV3_HRRR. If the input namelist option rrfs_sd is false, it runs as the standalone RRFS. If rrfs_sd is true, it runs with smoke and dust for RRFS-SD. The meteorological fields are identical between runs with rrfs_sd=T and rrfs_sd=F.

  2. There are three tracers in RRFS-SD: smoke, dust (fine dust), and coarsepm (mainly from coarse dust emission). Smoke and fine dust could be used for PM2.5 diagnostics, and coarsepm could be used for PM10 diagnostics.

  3. There is just one field_table, which contains the smoke, dust, and coarsepm tracers. If the input namelist option rrfs_sd is false, it runs as the standalone RRFS. If rrfs_sd is true, it runs with smoke, fine dust, and coarsepm for RRFS-SD. The meteorological fields are identical between runs with rrfs_sd=T and rrfs_sd=F.

  4. The dry deposition velocity of smoke, fine dust, and coarsepm is an input variable to the MYNN-EDMF PBL.

  5. The smoke and dust code has been significantly cleaned up and improved, which significantly improved the model integration speed. According to tests on Hera and WCOSS2, the additional computing cost of RRFS-SD is less than 5% compared to the atmosphere-only RRFS.

  6. An update of the smoke/dust direct and indirect feedback.

  7. An emission bugfix and clean-up in the FENGSHA dust scheme. The issue of unrealistically strong dust emission was resolved.

ozinterp and h2ointerp data is read when it is not needed

Description

The ozinterp and h2ointerp data is read unconditionally in GFS_time_vary_pre.fv3.F90. This makes the files mandatory, even for configurations that don't need them. That error may not be caught due to bug #105 but it will be caught after that bug is fixed.

Steps to Reproduce


  1. Choose a suite that doesn't need one of those files, and ensure the file is missing.
  2. Run it many times. It'll succeed frequently due to bug #105 but it'll eventually fail.

GFS diagnostic AOD output is not at exact 550nm

Description

In the current GFS code, the diagnostic AOD output is not at exactly 550 nm but is instead a band average covering 550 nm. Most observational AOD data are at exactly 550 nm, so the wavelengths are not consistent when comparing with observed AOD. It would be better to calculate the exact 550 nm AOD instead of the current band-average AOD covering 550 nm. Also, the band-average AOD covering 550 nm is higher than the exact 550 nm AOD by up to 20%, especially over fire regions. This causes a high bias in AOD evaluation when comparing with observed AOD from satellite or in situ measurements.

Some related experiments can be found at: https://docs.google.com/presentation/d/1d7l2ZlQmIDAGr9xZX4sWVpoQPiM_ipZ0X3n9c1_jIu4/edit#slide=id.p

Solution

I have modified the code to calculate the exact 550 nm AOD, replacing the current band-average AOD covering 550 nm.
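One common way to obtain AOD at exactly 550 nm from two surrounding band values is Angstrom-exponent interpolation; a hedged sketch of that approach (not necessarily the method used in the modified code):

    elemental function aod_550nm(aod1, lam1, aod2, lam2) result(aod)
      implicit none
      real, intent(in) :: aod1, lam1   ! AOD and wavelength (nm) below 550 nm
      real, intent(in) :: aod2, lam2   ! AOD and wavelength (nm) above 550 nm
      real :: aod, alpha
      alpha = -log(aod2 / aod1) / log(lam2 / lam1)   ! Angstrom exponent
      aod   = aod1 * (550.0 / lam1)**(-alpha)        ! power-law interpolation to 550 nm
    end function aod_550nm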


Possible correction to declaration intent for diagnostic variables in drag_suite.F90

Description

The UGWP diagnostic variables listed below are declared with "intent(out)" in unified_ugwp.F90 (subroutine "unified_ugwp_run") and are initialized in the same subroutine. These variables are then passed to subroutine "drag_suite_run". In subroutine "drag_suite_run" (in drag_suite.F90), these variables are declared with "intent(out)", and then calculations are performed on most of the grid points. If a grid point is skipped (mainly ocean points), then the initialized value holds. Because of this, I believe the variables should be declared with "intent(inout)" instead of "intent(out)" in "drag_suite_run".

List of affected diagnostic variables:
dusfc_ms(:),dvsfc_ms(:), dusfc_bl(:),dvsfc_bl(:), dusfc_ss(:),dvsfc_ss(:), dusfc_fd(:),dvsfc_fd(:)
dtaux2d_ms(:,:),dtauy2d_ms(:,:), dtaux2d_bl(:,:),dtauy2d_bl(:,:),dtaux2d_ss(:,:),dtauy2d_ss(:,:), dtaux2d_fd(:,:),dtauy2d_fd(:,:),

Steps to Reproduce

This "bug" does not appear to be causing any issues. I changed the declaration in drag_suite.F90 from intent "out" to "inout", and there was no change in the results. I'm not a computer scientist, but I believe the correct way is to declare the variables in drag_suite.F90 as "inout", since they come in to the subroutine already initialized.

Please let me know if this should indeed be changed and I can issue a PR with this minor change.

Additional Context

  • Machine: Hera
  • Compiler: Intel
  • Suite Definition File or Scheme: unified_ugwp

Output

N/A

Possible bug in GFS_diagnostics.F90, instantaneous versus accumulated short/longwave upward-directed, TOA

Description

In my simulation, I am attempting to use the diag_table to output values of instantaneous (not time-averaged or accumulated) top-of-atmosphere outgoing (upward-directed) shortwave and longwave radiation. The variables USWRFItoa and ULWRFItoa are clearly growing larger during my simulation, appearing to be accumulations rather than instantaneous values.

I definitely need to see instantaneous, not time-averaged, values. The two relevant sections of code are lines 302-326 (labeled as instantaneous) and lines 579-607 (time-averaged); the two sections differ only in the time-averaging lines present in the latter and absent from the former.

The instantaneous upward-directed SW/LW radiation fluxes from surface are easy to locate in the file, but why not have TOA values as well, or have I missed them somewhere?

The `noahmptable.tbl` file is mandatory, even for configurations that don't need it.

Description

The GFS_phys_time_vary.fv3.F90 reads the noahmptable.tbl file unconditionally. Even configurations that don't need that table will try to read it. The code tries to set errflg=1 if the file is missing. (Though, as of this writing, that file's error checking is broken: #105)

Steps to Reproduce


  1. Run a non-noahmp suite many times.
  2. Some of the runs will fail. It should be 100% of the runs, but the error checking is broken (#105) so errors in that routine are usually not reported.

Additional Context

Discovered by @hu5970 in the RRFS parallels

missing type-kind in module_nst_water_prop.f90

The module interfaces for sw_ps_9b and sw_ps_9b_aw lack kind definitions for their input arguments. This works fine for the UWM, where the default type for real variables is set at compile time (-real-size 64).

+ 4.8314e-4 * s**2
end subroutine density
!
!======================
!
!>\ingroup gfs_nst_main_mod
!! This subroutine computes the fraction of the solar radiation absorbed
!! by the depth z following Paulson and Simpson (1981) \cite paulson_and_simpson_1981 .
elemental subroutine sw_ps_9b(z,fxp)
!
! fraction of the solar radiation absorbed by the ocean at the depth z
! following paulson and simpson, 1981
!
! input:
! z: depth (m)
!
! output:
! fxp: fraction of the solar radiation absorbed by the ocean at depth z (w/m^2)
!
implicit none
real,intent(in):: z
real,intent(out):: fxp

I was attempting to compile this module from inside CMEPS, however, which does not specify a default real size. It failed with:

There is no matching specific subroutine for this generic subroutine call.   [SW_PS_9B]

Specifying kind=kind_phys in the required interfaces should not have any impact on current UWM RTs. However, testing found that two tests did in fact fail with this change:

hafs_regional_atm_wav_intel
atmwav_control_noaero_p8_intel

Both tests ran to completion but failed in the comparison of only the (binary) wave restart file:

[Hera: hfe03] /scratch1/NCEPDEV/nems/Denise.Worthen/WORK/ufs_dev/tests: grep NOT logs/RegressionTests_hera.log
4862: Comparing ufs.hafs.ww3.r.2019-08-29-21600 .........NOT OK
5793: Comparing ufs.atmw.ww3.r.2021-03-22-64800 .........NOT OK
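The proposed change amounts to giving the dummy arguments an explicit kind, for example (assuming kind_phys is available from the machine module, as the suggestion above implies):

    real(kind=kind_phys), intent(in)  :: z     ! depth (m)
    real(kind=kind_phys), intent(out) :: fxp   ! absorbed solar fraction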

CLM Lake restarts nine 3D fields that are constant

Description

These fields in the CLM Lake model are constant:

  • lake_z3d
  • lake_dz3d
  • lake_soil_watsat3d
  • lake_csol3d
  • lake_soil_tkmg3d
  • lake_soil_tkdry3d
  • lake_soil_tksatu3d
  • lake_clay3d
  • lake_sand3d

The latter seven are 2D arrays that are needlessly duplicated in the vertical (lake k) dimension. The first two are 3D. All of these are sparse arrays, zero everywhere except at lake points.

All nine should be removed.

Steps to Reproduce

  1. Read the code. The lakeini in clm_lake.f90 calculates those variables. They're never modified after that.

Additional Context

The CLM Lake Model is taking up too much disk space in the RRFS. Removing these nine variables would reduce that problem.

Problem identified by @hu5970 and @tanyasmirnova

Test updated RRTMGP cloud optics

Description

One of the radiation codes used in CCPP physics, RRTMGP, just updated the cloud optics to correct an error in the spectral ordering. This introduces changes of a few W/m2 in thick clouds. The updated data comes in a standalone file read at run time.

Solution

Forecasts should be run to determine whether this correction changes forecast skill for better or worse. This will require updating the dtc/ccpp branch of the RRTMGP repository (last updated by Sam Trahan), using the updated code to build the UFS, and changing the name of the data file used for the SW cloud optics (the old file no longer being present).
