iris's Issues

Cube merge dimension hint

The current merge process will choose the first candidate dimension or first candidate dimension group that satisfies a functional relationship.

The chosen dimension may not be the one a user expects or wants.

Investigate the hint mechanism and factory querying in order to determine the best default ordering for candidate dimensions.

i.e.

  • time over forecast_reference_time over forecast_period,
  • and level_height over model_level_number over sigma (or atmosphere_hybrid_height_coordinate) for hybrid-height.

Update: Refer to #25 for an update to the preferred order (model_level_number is now primary)

iris.sample_data_path widely misused

I have now seen several code files from users in which "sample_data_path" has been used inappropriately.
For example:

fname = iris.sample_data_path('my', 'file', 'that', 'isnt', 'part', 'of', 'the', 'sample', 'data')

(Spot the pointlessness.)
I'm sure this just comes from copying example code.
So I think it could be better managed, either in the documentation or in the code structure; a sketch of intended vs. unintended use follows the queries below.

I have a couple of related queries about this:

  1. Although "sample_data_path" appears all over the examples, there is no documentation entry for it, because of the visibility restriction built into iris/__init__.py ...
# Restrict the names imported when using "from iris import *"
__all__ = ['load', 'load_strict', 'save', 'Constraint', 'AttributeConstraint']
  2. Possibly, it should live in iris_tests anyway?
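
For contrast, a minimal sketch of intended versus unintended use (the '/path/to/my/data' path below is purely illustrative):

import os.path
import iris

# Intended use: locating data shipped with Iris.
fname = iris.sample_data_path('air_temp.pp')

# A user's own file: plain os.path handling is all that is needed here;
# sample_data_path adds nothing.
my_fname = os.path.join('/path/to/my/data', 'my_file.pp')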

Improve the error message when pcolormeshing a non-bounded cube

qplt.pcolormesh(iris.load(iris.sample_data_path('uk_hires.pp'))[0][0, 0, ...])

Generates the exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "lib/iris/quickplot.py", line 158, in pcolormesh
    result = iplt.pcolormesh(cube, *args, **kwargs)
  File "lib/iris/palette.py", line 138, in wrapper_func
    return func(*args, **kwargs)
  File "lib/iris/plot.py", line 705, in pcolormesh
    return _draw_2d_from_bounds('pcolormesh', cube, *args, **kwargs)
  File "lib/iris/plot.py", line 214, in _draw_2d_from_bounds
    result = _map_common(draw_method_name, None, iris.coords.BOUND_MODE, cube, data, *args, **kwargs)
  File "lib/iris/plot.py", line 349, in _map_common
    lats, lons = iris.analysis.cartography.get_lat_lon_contiguous_bounded_grids(cube)
  File "lib/iris/analysis/cartography.py", line 196, in get_lat_lon_contiguous_bounded_grids
    lons = lon_coord.contiguous_bounds()
  File "lib/iris/coords.py", line 548, in contiguous_bounds
    self._sanity_check_contiguous()
  File "lib/iris/coords.py", line 533, in _sanity_check_contiguous

Attempting to collapse a scalar coordinate raises an exception

Currently the user must perform an additional check to see whether the coordinate is scalar or dimensioned before attempting a collapse.

cubecol = cube.collapsed('time',iris.analysis.MEAN)

"... iris/cube.py",
line 1839, in collapsed
raise iris.exceptions.CoordinateCollapseError('Cannot
collapse a dimension which does not describe any data.')
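
A minimal sketch of the extra check users currently have to write by hand, assuming cube is an iris.cube.Cube with a 'time' coordinate:

import iris.analysis

time = cube.coord('time')
if cube.coord_dims(time):
    # 'time' describes a data dimension, so it can be collapsed.
    cubecol = cube.collapsed('time', iris.analysis.MEAN)
else:
    # 'time' is a scalar coordinate; collapsing it would raise
    # CoordinateCollapseError, so leave the cube alone.
    cubecol = cube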

Use of CF variable names

The use of standard names and long names in netCDF files is sometimes inconsistent. netCDF files sometimes contain coordinate variables where the variable name carries significant meaning and may be more suitable than the long name as the coordinate name on the resulting cube. For example, consider a file with a coordinate variable called 'level' that has no standard name, only a descriptive long name containing lots of spaces. Constrained loading then becomes troublesome, as:

iris.load(filename, my long descriptive long name coordinate=10)

clearly won't work. Perhaps this is an indication that this syntax is a little strange, and there is of course the option of the coord_values keyword (see the sketch after the list below). However, what the user actually wants to write is:

iris.load(filename, level=10)

where level is the CF variable name; but as things currently stand, this pseudo-metadata is lost on loading.

Consider:

  1. Should we retain the CF variable name?
  2. If so, how?
  3. How does this fit with the CF data model?
  4. Should it be used on save to netCDF?
  5. What about saving to other formats?
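
For reference, a hedged sketch of the coord_values alternative mentioned above; it avoids the broken keyword syntax, but still depends on the name Iris gave the coordinate (here the long descriptive long_name), because the CF variable name 'level' is discarded on load:

import iris

constraint = iris.Constraint(
    coord_values={'my long descriptive long name coordinate': 10})
cubes = iris.load(filename, constraint)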

Example data

Prepare and make available the data for the gallery examples.

Cube merge dimension order

Review the iris._merge.ProtoCube._guess_axis() method.

Align this with the iris.util.guess_coord_axis() function.

In particular, ensure the correct ordering for forecast_time.

NB. This change will result in widespread .cml changes due to cube dimension reordering.

Also see #7.

Lat, lon parameter ordering is inconsistent

Conventionally latitude precedes longitude.

analysis.cartography.get_lat_lon_grids(cube) returns lats, lons
analysis.cartography.unrotate_pole(...) returns lons, lats

More generally:
grep -r 'lat._lon' * | wc -l
1668
grep -r 'lon._lat' * | wc -l
206

Should this be made consistent, or should these functions return a dictionary such as {lat: y, lon: x}?

Currently the Iris documentation does not state in which order the coordinates are returned by rotate_pole and unrotate_pole.

Fix gitwash docs

The gitwash docs point to the old Iris.git repository rather than the iris.git repository. Re-build the gitwash docs with the appropriate command.

Note: See docs/iris/src/developers_guide and specifically, gitwash_command.txt

Remove CF default ellipsoid

As a hangover from the transition to the CF data model, the CF loader is creating a default GeogCS when no coordinate system is supplied in the file. This behaviour should be removed.

Nimrod plotting

#54 avoided Nimrod plotting, pending #67.

Once #67 is merged, add a test to plot the loaded Nimrod data.

Remove symlinks

Convert the tests and results symlinks back to real, in-line content.

"Optional" dependencies are not optional

(py2.7sci)pelson@~/dev/iris> python -c "import matplotlib"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named matplotlib
(py2.7sci)pelson@~/dev/iris> python setup.py build
running build
running build_py
/Users/pelson/dev/virtual_envs/py2.7sci/bin/python tools/generate_std_names.py etc/cf-standard-name-table.xml build/lib/iris/std_names.py
Traceback (most recent call last):
  File "setup.py", line 201, in <module>
    'std_names': MakeStdNames},
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build.py", line 127, in run
    self.run_command(cmd_name)
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "setup.py", line 165, in run
    import iris.fileformats._pyke_rules
  File "build/lib/iris/__init__.py", line 32, in <module>
    import iris.cube
  File "build/lib/iris/cube.py", line 35, in <module>
    import iris.analysis
  File "build/lib/iris/analysis/__init__.py", line 38, in <module>
    import iris.coords
  File "build/lib/iris/coords.py", line 33, in <module>
    import iris.analysis.calculus
  File "build/lib/iris/analysis/calculus.py", line 35, in <module>
    import iris.analysis.cartography
  File "build/lib/iris/analysis/cartography.py", line 25, in <module>
    from mpl_toolkits.basemap import pyproj
  File "/Users/pelson/dev/virtual_envs/py2.7sci/lib/python2.7/site-packages/mpl_toolkits/basemap/__init__.py", line 15, in <module>
    from matplotlib import __version__ as _matplotlib_version
ImportError: No module named matplotlib
(py2.7sci)pelson@~/dev/iris> 

Review iris.util.guess_coord_axis() behaviour

Remove the legacy heuristics from iris.util.guess_coord_axis().

if coord.name().upper() == "X":
    axis = "X"
if coord.name().upper() == "Y":
    axis = "Y"
if coord.name().upper() == "Z":
    axis = "Z"

Note that these cases are used by the calculus code.

Cube.aggregated_by() broken

Cube.aggregated_by() appears to be broken.
The aggregated data dimension is not being collapsed.

https://github.com/bblay/iris_issue_files/blob/master/test_agg.py

$  python test_agg.py 
------------------
original_cube                       (w: 3; z: 3; y: 3; x: 3)
     Dimension coordinates:
          w                           x     -     -     -
          z                           -     x     -     -
          y                           -     -     x     -
          x                           -     -     -     x
shape (3, 3, 3, 3)
------------------
aggregated_cube                     (*ANONYMOUS*: 3; z: 3; y: 3; x: 3)
     Dimension coordinates:
          z                                     -     x     -     -
          y                                     -     -     x     -
          x                                     -     -     -     x
     Auxiliary coordinates:
          w                                     x     -     -     -
     Attributes:
          history: Mean of original_cube aggregated over w
     Cell methods:
          mean: w
shape (3, 3, 3, 3)

Plot cube over OS map

Scenario:

  • load data
  • set up the current map projection as OSGB (with the region taken from the cube)
  • plot an OS map image (with the region taken from the current map)
  • cell-plot data

Hypothetical example code:

cube = iris.load("my_file")
iris.plot.map_setup("osgb", cube)
iris.plot.draw_wms(wms_url)
iris.quickplot.contourf(cube)

Regridding a masked cube

Currently Iris's cube regridding ignores the fact that masked data may be present in the cube.

The culprit is in the analysis module interpolate.py, where the numpy data array of the resulting cube is created by the following:
new_data = numpy.empty(new_shape, dtype=source_cube.data.dtype)

A simple example of current behaviour is here:
https://gist.github.com/3512677
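
A minimal sketch of the kind of change suggested, with source_cube and new_shape as in interpolate.py: allocate a masked result array when the source data is masked, rather than calling numpy.empty unconditionally.

import numpy as np

if np.ma.isMaskedArray(source_cube.data):
    new_data = np.ma.empty(new_shape, dtype=source_cube.data.dtype)
else:
    new_data = np.empty(new_shape, dtype=source_cube.data.dtype)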

Transition to new XML format

i.e.

  • Switch XML output to new format.
  • Remove legacy XML support.
  • Update XML_NAMESPACE_URI to urn:x-iris:cubeml-0.2.
  • Validate test results.

cube_maths documentation typo

docs/iris/src/userguide/cube_maths.rst lines 28 and 29 refer to a cube called air_press. This should be air_temp.

Met Office attributes

Use standardised Met Office attribute names and values to store STASH codes (as STASH() instances) and UM version numbers.

  • Ensure the STASH instances are written out in netCDF as strings, not a triple of numbers - even when the STASH attribute exists on a coordinate variable (a small sketch of the string form follows this list).
  • Ensure UM version number is expressed in standard form.
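
A small sketch of the string form in question (assuming the STASH class from the PP loader):

from iris.fileformats.pp import STASH

stash = STASH(1, 3, 236)   # model, section, item
print(str(stash))          # expected to give something like 'm01s03i236'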

iris pp loading

iris pp loading (metadata translation) is rather slow.
There seems to be some room for code optimisation within the rules, though the potential saving is not yet known.

Cynthia Brewer colour schemes

Library changes:

  • Add colour-deficiency-friendly colour schemes from http://colorbrewer2.org/.
  • Each colour scheme file should include the licence as shown below.
  • Convenience function for adding a citation to a plot.

Documentation changes:

Licence text:

Apache-Style Software License for ColorBrewer software and ColorBrewer Color Schemes

Copyright (c) 2002 Cynthia Brewer, Mark Harrower, and The Pennsylvania State University.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

Interpolation user guide

There is no interpolation user guide, only reference documentation.
A few examples might be helpful. Perhaps a new section 4.4?
Or, as there's quite a bit of material, perhaps a new section after section 4?

Refactor _Groupby.group()

Refactor the while loop in the _Groupby.group() method in iris.analysis.__init__.py to remove its reliance on None elements in the items list variable.

This will involve binding the contents of the items and groups variables.

Mac-compatible shared library refs

User feedback from Mac installation:

Mac OS X dynamic libraries end in .dylib rather than .so.

I cheated by soft-linking libc.dylib to libc.so.6 and libudunits2.dylib to libudunits2.so.

Extend iris.unit to support Mac-compatible references to libc and libudunits2 - i.e. the dylib suffix.

Remove the iterability of a cube

Making a cube iterable causes more issues than it is worth (@munslowa found this to his surprise).

import iris
fname = iris.sample_data_path('air_temp.pp')
temperature = iris.load_strict(fname)
for thing in temperature:
    print thing

Remove this capability from the cube.
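
A hedged alternative for users who relied on iterating a cube: slice explicitly over named dimensions with Cube.slices() instead, e.g.

import iris

fname = iris.sample_data_path('air_temp.pp')
temperature = iris.load_strict(fname)
for subcube in temperature.slices(['longitude']):
    print(subcube.summary(shorten=True))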

bbox extract

This ticket proposes an enhancement to allow a region to be extracted by specifying a bounding box.

Currently, users may attempt to extract a region from a global dataset using constraints:

latc = iris.Constraint(latitude=lambda cell: minlat <= cell <= maxlat)
lonc = iris.Constraint(longitude=lambda cell: minlon <= cell <= maxlon)
region = cube.extract(latc & lonc)

However, the output will retain the circular property, which they would have to change manually:
region.coord('longitude').circular = False
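
A hypothetical helper (extract_bbox is not an existing Iris function) wrapping the workaround above, including the manual circular fix-up:

import iris

def extract_bbox(cube, minlon, maxlon, minlat, maxlat):
    # Build the bounding-box constraints and extract the region.
    latc = iris.Constraint(latitude=lambda cell: minlat <= cell <= maxlat)
    lonc = iris.Constraint(longitude=lambda cell: minlon <= cell <= maxlon)
    region = cube.extract(latc & lonc)
    if region is not None:
        # The extracted region no longer wraps around the globe.
        region.coord('longitude').circular = False
    return region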

Cube summary misalignment

The cube summary markup can be misaligned.

See the output for dimension coordinates and auxiliary coordinates for the following netCDF file:

>>> import iris
>>> filename = '/data/local/dataZoo/NetCDF/label_and_climate/FC_167_mon_19601101.nc'
>>> cube = iris.load_strict(filename)
>>> print cube
air_temperature                     (*ANONYMOUS*: 120; *ANONYMOUS*: 21; latitude: 73; longitude: 144)
     Dimension coordinates:
          latitude                              -                 -             x              -
          longitude                             -                 -             -              x
     Auxiliary coordinates:
          forecast_period                                     x                 -             -              -
          forecast_reference_time                             x                 -             -              -
          Experiment identifier                               -                 x             -              -
          Institution responsible for the forecast system     -                 x             -              -
          Method of production of the data                    -                 x             -              -
          realization                                         -                 x             -              -
     Scalar coordinates:
          height: 2.0 m
     Attributes:
          Comment: Data interpolated from original model grid into a regular grid. Data restrictions:...
          Generator: SeasPy v1.1
          unit_long: kelvin
          Title: ENSEMBLES project
          Created: Thu Apr 16 14:36:57 2009
          Conventions: CF-1.0
          data_type: float
          References: http://www.ecmwf.int/research/EU_projects/ENSEMBLES/index.html, http:/...
     Cell methods:
          mean: leadtime (interval 6 h)
>>> 

The cube summary should also clip the displayed name of a coordinate, particularly for coordinates that have no standard_name and only a lengthy long_name.

Unsightly unicode 'u' in warnings

lib/iris/fileformats/_pyke_rules/fc_rules_cf.krb line 700 issues a warning when encountering an invalid unit. The use of %r means that the user can get warnings like:

 UserWarning: Ignoring netCDF variable u'time' invalid units u'wibble'
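
A minimal illustration of the difference under Python 2 - using %s instead of %r avoids the u'' repr prefix on unicode strings:

name, units = u'time', u'wibble'
print('Ignoring netCDF variable %r invalid units %r' % (name, units))
# -> Ignoring netCDF variable u'time' invalid units u'wibble'
print("Ignoring netCDF variable '%s' invalid units '%s'" % (name, units))
# -> Ignoring netCDF variable 'time' invalid units 'wibble'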

Rolling cubes

When working with global fields it is occasionally useful to extract data across the longitude boundary, for example when attempting to extract the region 30W-30E. At the moment it is not straightforward to extract such a region in Iris (for plotting or analysis), but Carwyn (cpelley) has provided a workaround using the following code:

import iris
import numpy as np
import sys

extb = iris.load_strict(fname)

longco = extb.coord('longitude')
if longco.is_monotonic():
    # Roll along the longitude dimension (assumed here to be the last axis).
    extb.data = np.roll(extb.data, len(longco.points) // 2, axis=-1)
    longco.points -= 180.
    if longco.has_bounds():
        longco.bounds = longco.bounds - 180.
else:
    print 'ERROR: the points are not monotonic, this workaround will not work'
    sys.exit(1)

after which a suitably phrased constraint can be used to extract the data from the cube extb.

The numpy "roll" operation along an axis could be implemented directly on the cube object, allowing this task to be performed more easily (see the sketch below). Such a method could also be very useful for preparing global fields so that the most important region is centred when plotted.
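
A hypothetical sketch of such a cube-level roll (roll_longitude is not an existing Iris method), mirroring the workaround quoted above but rolling along the longitude dimension explicitly:

import numpy as np

def roll_longitude(cube):
    lon = cube.coord('longitude')
    if not lon.is_monotonic():
        raise ValueError('longitude points are not monotonic')
    # Roll the data half-way along the longitude dimension.
    (dim,) = cube.coord_dims(lon)
    rolled = cube.copy()
    rolled.data = np.roll(rolled.data, len(lon.points) // 2, axis=dim)
    # Shift the longitude coordinate to match.
    new_lon = rolled.coord('longitude')
    new_lon.points = new_lon.points - 180.0
    if new_lon.has_bounds():
        new_lon.bounds = new_lon.bounds - 180.0
    return rolled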

Maintain the type of scalars when merging.

The type of a scalar coordinate is not preserved when merging:

>>> ml_10 = iris.Constraint(model_level_number=10)
>>> print type(iris.load_strict('/data/local/dataZoo/iris_resources/tests/PP/globClim1/dec_subset.pp', \
...                                                  'air_potential_temperature' & ml_10).coord('sigma'))
<class 'iris.coords.DimCoord'>

>>> print type(iris.load_strict('/data/local/dataZoo/iris_resources/tests/PP/globClim1/dec_subset.pp',  \
...                                                 'air_potential_temperature').extract(ml_10).coord('sigma'))
<class 'iris.coords.AuxCoord'>

doctest failure

Investigate what PR change caused the userguide/cube_statistics.rst doctest to fail in upstream/master and resolve.

git bisect may help here.

Record dimension when writing NETCDF files with Iris

As far as I can tell, writing NetCDF files with Iris won’t let me specify that, for example, time is a “record dimension”. It seems that time in this case has to be a coordinate, just like latitude, longitude, etc. Is it possible to make time (or anything) a record dimension?

The record dimension of a NetCDF file has size unlimited. It also means that the files can be simply concatenated with ncrcat. If there are no record dimensions then ncrcat doesn’t know which dimension of the data to append to.

Iris automatically making the outermost dimension unlimited would be a good and useful first step. Ultimately, though, it would be useful to have some sort of override to specify which dimension, if any, is unlimited.
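
A minimal illustration with the plain netCDF4 library (not Iris) of what a record dimension is - a dimension created with size None is unlimited, so files that share it can be concatenated along it with ncrcat:

import netCDF4

ds = netCDF4.Dataset('example.nc', 'w')
ds.createDimension('time', None)        # unlimited / record dimension
ds.createDimension('latitude', 73)
time_var = ds.createVariable('time', 'f8', ('time',))
ds.close()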

Remove Coord trig methods

Remove Coord.sin(), Coord.cos(), and Coord._trig_method() from the Coord classes. This will require updating iris.analysis.calculus and various tests.
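
A hedged sketch of the kind of replacement callers (such as the calculus code) might use once the trig methods are gone, assuming a cube with a 'latitude' coordinate in degrees:

import numpy as np

latitude = cube.coord('latitude')
sin_lat = np.sin(np.deg2rad(latitude.points))
cos_lat = np.cos(np.deg2rad(latitude.points))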
