
Comments (10)

SpacemanPaul avatar SpacemanPaul commented on August 24, 2024

You will need to update the materialised views after indexing new data, and the range tables after adding a new layer:

datacube-ows-update --views; datacube-ows-update

(If you haven't done this, this is almost certainly your problem.)

from datacube-ows.

SpacemanPaul avatar SpacemanPaul commented on August 24, 2024

Sorry for the slow response. OWS is pretty stable now so I don't look here all that often. For support, reach out on Slack - there's a #ows channel.


SpacemanPaul avatar SpacemanPaul commented on August 24, 2024

Other random feedback on your indexing:

In product yaml:

load:
  crs: EPSG:4326
  resolution:
    lon: 0.01
    lat: 0.01

storage:
...

Storage is ignored if the load section is provided - remove the storage section.
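Applying that advice, the product yaml would keep only the load section. A sketch of the trimmed fragment (values copied from the snippet above; not a complete product definition):

```yaml
# Keep only the load section; "storage" is ignored whenever "load"
# is present, so it has been dropped entirely.
load:
  crs: EPSG:4326
  resolution:
    lon: 0.01
    lat: 0.01
```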

In dataset yaml:

    spatial_reference: GEOGCS["GCS_WGS_1984",DATUM["WGS_1984",SPHEROID["WGS_84",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["degree",0.0174532925199433],AUTHORITY["EPSG","4326"]]

Spatial reference inside a grid definition is not used. The grid always uses the CRS of the dataset.

We will be supporting this in future, but it will be called crs rather than spatial_reference, and EPSG codes are always preferred over WKT format.


lucapaganotti avatar lucapaganotti commented on August 24, 2024

Hi Paul, thanks for your answer.

I already ran datacube-ows-update --views; datacube-ows-update. I made the changes you suggested to my product and dataset metadata files. Everything runs as before, so the storage and spatial_reference sections were indeed unnecessary, thank you.

I think I'm now managing a multi-variable NetCDF file. I got these output images from my datacube-ows instance, all referring to the same NetCDF file:

(four screenshots: Schermata da 2023-07-13 16-15-57, 16-06-37, 15-11-16, 15-11-09)

I created a queryable layer, inside a folder layer, in the ows_cfg_example.py file for each variable in my NetCDF file. Is this the right way to proceed?

Now I'd like to set up a layer referring to a NetCDF file with more than one time slot, and then mix up the whole thing.

As of today I'm querying datacube-ows with something like this:

http://192.168.88.40:5000/wms?request=GetMap&crs=EPSG:32632&layers=sentinel_5p_cf&bbox=168701.015089,4657521.06215,956095.072713,5222660.65263&width=800&height=600&format=image/png&service=WMS&version=1.3.0

to get some output.

I tried adding the time parameter to the previous URL as &time=2015-04-10, where 2015-04-10 is the time reference in the NetCDF file I'm using, and I get the same image. If I change the date (e.g. to 2015-04-09) I get

<ServiceExceptionReport xmlns="http://www.opengis.net/ogc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.3.0" xsi:schemaLocation="http://www.opengis.net/ogc http://schemas.opengis.net/wms/1.3.0/exceptions_1_3_0.xsd">
<ServiceException code="InvalidDimensionValue" locator="Time parameter"> Time dimension value '2015-04-09' not valid for this layer </ServiceException>
</ServiceExceptionReport>

so I think the time parameter can be used to request different maps by time. Is that right?
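For reference, here is a small sketch of how such a GetMap request can be built programmatically. The helper function and its hard-coded defaults are illustrative only, not part of datacube-ows; the host, layer, and bbox values come from the URL above.

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, time=None):
    """Build a WMS 1.3.0 GetMap URL; TIME is an optional WMS dimension."""
    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "crs": "EPSG:32632",
        "layers": layer,
        "bbox": ",".join(str(v) for v in bbox),
        "width": 800,
        "height": 600,
        "format": "image/png",
    }
    if time is not None:
        # Must match a timestamp actually indexed for this layer,
        # otherwise OWS returns an InvalidDimensionValue exception.
        params["time"] = time
    return base + "?" + urlencode(params)

url = getmap_url(
    "http://192.168.88.40:5000/wms",
    "sentinel_5p_cf",
    (168701.015089, 4657521.06215, 956095.072713, 5222660.65263),
    time="2015-04-10",
)
```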

Is there any other meaningful parameter I can add to the URL query string to further filter my datasets?

P.S. How can I get in touch with OWS developers/architects via Slack to get more information?

Sincere thanks for helping me get ODC running with my own data.

Have a nice day.


SpacemanPaul avatar SpacemanPaul commented on August 24, 2024
  1. Mapping products to layers

I created a queryable layer, inside a folder layer, in the ows_cfg_example.py file for each variable in my NetCDF file. Is this the right way to proceed?

That's one way to proceed. Another would be to use a single queryable layer for the product, and expose each variable as a style of that layer. You also don't need to have a folder layer - a folder hierarchy is optional.

My normal answer would be to do whatever makes your data easiest for your users to navigate. OWS tries not to be prescriptive on this question.
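As a rough sketch of the styles-per-variable alternative, a layer entry might be shaped like this. The product and style names are illustrative, and a real datacube-ows style needs additional keys (colour ramps, band mappings, etc.) beyond what is shown:

```python
# One queryable layer for the whole product; each NetCDF variable
# is exposed as a separate style of that layer.
layer_cfg = {
    "title": "Sentinel-5P NO2",
    "name": "sentinel_5p",            # single layer, no folder needed
    "product_name": "sentinel_5p",    # the ODC product it maps to
    "styling": {
        "default_style": "no2_tropospheric",
        "styles": [
            {"name": "no2_tropospheric",
             "title": "Tropospheric NO2 column density"},
            {"name": "no2_total",
             "title": "Total NO2 column density"},
        ],
    },
}

style_names = [s["name"] for s in layer_cfg["styling"]["styles"]]
```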

  2. Accessing multiple time slices from a stacked NetCDF file

An ODC dataset is associated with a single time value (or range). If you have n time values stacked up in one NetCDF file, you will need to create n dataset yamls (each referring to a different time slice of the same NetCDF file) and index all of them as separate datasets.

The datasets will (at minimum) differ in the following respects:

  • id - each time slice needs its own dataset id (uuid).
  • The time field(s) (in the properties section, normally either dtr:start_datetime and dtr:end_datetime, or simply datetime).
  • The mapping to the NetCDF time index, for each measurement, as explained below.

The preferred way to access NetCDF time slices is as follows in the measurements section:

measurements:
  tropospheric_NO2_column_number_density:
    path: file:///home/buck/dev/odc/data/netcdf/rodrigo_5.nc
    layer: tropospheric_NO2_column_number_density
    band: 1
  NO2_column_number_density:
    path: file:///home/buck/dev/odc/data/netcdf/rodrigo_5.nc
    layer: NO2_column_number_density
    band: 1

Where band is a NetCDF "part" number, using rasterio-style 1-based indexing (i.e. the first part is band: 1, the second part band: 2, etc.).

For completeness, you may also see examples using a "part" anchor in the path url:

  NO2_column_number_density:
    path: file:///home/buck/dev/odc/data/netcdf/rodrigo_5.nc#part=0
    layer: NO2_column_number_density

Note that in contrast to "band" above, this is a zero-based index, with the first NetCDF part being part=0.

But this usage is considered confusing and ambiguous, and is planned to be deprecated and then removed in future releases, so please stick to using "layer" and the 1-based "band", as per my first example above.
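Putting the above together, generating the n dataset documents can be scripted. This sketch is mine, not part of any ODC tool: field names follow the conventions discussed above, the path and measurement name are placeholders, and it assumes, for illustration, one slice per day. Only the id, the datetime, and the 1-based band vary between slices:

```python
import uuid
from datetime import date, timedelta

def dataset_docs(path, measurement, start, n):
    """Build one dataset document per time slice of a stacked NetCDF file."""
    docs = []
    for i in range(n):
        docs.append({
            "id": str(uuid.uuid4()),  # each slice gets its own uuid
            "properties": {
                # simple "datetime" form; dtr:start/end_datetime also works
                "datetime": (start + timedelta(days=i)).isoformat(),
            },
            "measurements": {
                measurement: {
                    "path": path,
                    "layer": measurement,
                    "band": i + 1,  # rasterio-style, 1-based part index
                },
            },
        })
    return docs

docs = dataset_docs("file:///data/stack.nc", "NO2_column_number_density",
                    date(2015, 4, 10), 3)
```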


SpacemanPaul avatar SpacemanPaul commented on August 24, 2024

The best place to leave questions on Slack is the #ows channel.


SpacemanPaul avatar SpacemanPaul commented on August 24, 2024

I am talking about the opendatacube Slack workspace. Accounts are not shared between Slack workspaces. You do not need to pay for access to the ODC Slack.

