

OGC API - Features

This GitHub repository contains OGC's multi-part standard for querying geospatial information on the web, "OGC API - Features". All approved versions of the specification can be found here.

OGC API standards define modular API building blocks to spatially enable Web APIs in a consistent way. OpenAPI is used to define the reusable API building blocks with responses in JSON and HTML.

The OGC API family of standards is organized by resource type. OGC API Features specifies the fundamental API building blocks for interacting with features. The spatial data community uses the term 'feature' for things in the real world that are of interest.

If you are unfamiliar with the term 'feature', the explanations on Spatial Things, Features and Geometry in the W3C/OGC Spatial Data on the Web Best Practice document provide more detail.

Overview

OGC API Features provides access to collections of geospatial data.

GET /collections

Lists the collections of data on the server that can be queried (section 7.13). Each entry describes basic information about the geospatial data collection, like its id and description, as well as the spatial and temporal extents of all the data contained.

GET /collections/buildings/items?bbox=160.6,-55.95,-170,-25.89

Requests all the data in the collection "buildings" that is in the New Zealand economic zone. The response format (typically HTML or a GeoJSON feature collection, but GML is supported, too, and extensions can easily supply others) is determined using HTTP content negotiation.

Data is returned in pageable chunks, with each response containing a next link, as many collections are quite large. The core specification supports a few basic filters in addition to the bbox filter above, with extensions providing more advanced options (section 7.15).
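As an illustration (not part of the standard), here is a minimal client sketch in Python, assuming the requests library and a hypothetical server at https://example.org; it lists the collections and then pages through the bbox query above by following the next links:

import requests

BASE = "https://example.org"  # hypothetical server

# List the available collections.
collections = requests.get(BASE + "/collections",
                           headers={"Accept": "application/json"}).json()
for collection in collections["collections"]:
    # 'id' in later drafts; early drafts used 'name'
    print(collection.get("id"), "-", collection.get("description", ""))

# Page through all buildings in the New Zealand economic zone,
# following the rel="next" link in each response.
url = BASE + "/collections/buildings/items"
params = {"bbox": "160.6,-55.95,-170,-25.89"}
while url:
    page = requests.get(url, params=params,
                        headers={"Accept": "application/geo+json"}).json()
    for feature in page["features"]:
        print(feature["id"])
    url = next((link["href"] for link in page.get("links", [])
                if link.get("rel") == "next"), None)
    params = None  # the next link already encodes the query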

GET /collections/{collectionId}/items/{featureId}

Returns a single 'feature' - something in the real world (a building, a stream, a county, etc.) that is typically described by a geometry plus other properties. This provides a stable, canonical URL for linking to the 'thing' (section 7.16).
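For example, a sketch of fetching one feature in two representations via content negotiation (hypothetical URL and feature id):

import requests

url = "https://example.org/collections/buildings/items/123"  # hypothetical
as_geojson = requests.get(url, headers={"Accept": "application/geo+json"}).json()
as_html = requests.get(url, headers={"Accept": "text/html"}).text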

See here for an overview of the extensions to support additional coordinate reference systems beyond WGS 84.

Using the standard

The standard is on the OGC website:

Those who just want to see the endpoints and responses can explore examples of OpenAPI definitions.

The reference versions of the OpenAPI components and XML schemas are published in the OGC schema repository.

Server and client implementations

Overview of tools implementing OGC API Features

OGC Product Database

OGC maintains a public database of software products implementing approved OGC standards. The database also identifies products that pass the OGC Compliance Test for the standard, where available.

Implementers of OGC standards are encouraged to register their products in the database.

Communication

Join the mailing list or chat at https://gitter.im/opengeospatial/WFS_FES

Almost all work on the specification takes place in GitHub issues, so browse there to get a good idea of what is happening, as well as of past decisions.

Additional parts of OGC API - Features

The OGC Features API SWG has identified the following extensions as the highest priority:

OGC API Features in ISO

Part 1 (Core) has been published by ISO as ISO 19168-1:2020.

Part 2 (Coordinate Reference Systems by Reference) has been published by ISO as ISO 19168-2:2022.

Additional information

Open issues for all parts are organized in GitHub projects:

Additional links:

Building

The latest drafts of each standard in this repository are built daily (based on the configuration contained in the asciidoctor.json file):

To generate the HTML versions of the standards from this repository yourself, ensure that you have Ruby and Asciidoctor set up and installed. Then run:

# Part 1: Core
asciidoctor -a data-uri -r asciidoctor-diagram core/standard/17-069.adoc
# Part 2: Coordinate Reference Systems by Reference
asciidoctor -a data-uri -r asciidoctor-diagram extensions/crs/standard/18-058.adoc
# Part 3: Filtering
asciidoctor -a data-uri -r asciidoctor-diagram extensions/filtering/standard/19-079.adoc
# Common Query Language (CQL2)
asciidoctor -a data-uri -r asciidoctor-diagram cql2/standard/21-065.adoc
# Part 4: Create, Replace, Update and Delete
asciidoctor -a data-uri -r asciidoctor-diagram extensions/transactions/create-replace-update-delete/standard/20-002.adoc
# Part 5: Schemas
asciidoctor extensions/schemas/standard/23-058.adoc
# Part 6: Property Selection
asciidoctor extensions/property-selection/standard/24-019.adoc
# Part 7: Geometry Simplification
asciidoctor extensions/geometry-simplification/standard/24-020.adoc

The resulting HTML files will be built in the same directory as the AsciiDoc file, e.g. as core/standard/17-069.html.

The contributor understands that any contributions, if accepted by the OGC Membership and ISO/TC 211, shall be incorporated into OGC and ISO/TC 211 OGC API standards documents and that all copyright and intellectual property shall be vested to the OGC.

The Features API Standards Working Group (SWG) is the group at OGC responsible for the stewardship of the standard; it aims to do as much of its work in public as possible.

Pull Requests from contributors are welcome. However, please note that by sending a Pull Request or commit to this GitHub repository, you are agreeing to the terms in the Observer Agreement: https://portal.ogc.org/files/?artifact_id=92169


ogcapi-features's Issues

WFS 2.0 to 3.0 Mapping

The WFS 3.0 Guide has been created on GitHub. It includes a mapping of the WFS 2.0 compliance tests against proposed WFS 3.0 core and extension capabilities. Please review this mapping and:

  1. suggest improvements to the format (does this mapping do what we need it to do?)
  2. verify that the mapping is correct (or suggest corrections).

Bounding Box Terminology for Geographic CRS

7.8.4.18 currently refers to:

The bounding box is provided as four numbers:

  1. Lower corner, coordinate axis 1 (minimum longitude)
  2. Lower corner, coordinate axis 2 (minimum latitude)
  3. Upper corner, coordinate axis 1 (maximum longitude)
  4. Upper corner, coordinate axis 2 (maximum latitude)

And the OpenAPI yml also has

bbox:
  description: minimum longitude, minimum latitude, maximum longitude, maximum latitude

For geographic/polar/cylindrical/etc. coordinate systems it's important to use (left, bottom, right, top) or (west, south, east, north) when referring to bounding boxes, since it's clearly possible to have useful bounding boxes that span the discontinuity.

e.g. describing the area of the New Zealand Exclusive Economic Zone: longitudes 160.6°E to 170°W (spanning the antimeridian) and latitudes 55.95°S to 25.89°S

This should be represented as {160.6, -55.95, -170, -25.89}

But using "minimum" and "maximum" terminology leads to lots of software either:

  • asserting that x0 < x1 and failing because -170 < 160.6
  • attempting to correct "bad" boxes by normalising them (ie. to {-170, -55.95, 160.6, -25.89}) so it now describes the the rest of the world except the area we want

Alternatively, the spec could support longitudes <-180° and >+180° (currently not permitted), but that moves the problem to the server, since in order to filter the dataset properly the server needs to re-normalise or otherwise adapt them. Or leave it undefined, like most of the other OGC specs.

OGC 06-121r3 D.13 Minimum bounding boxes discusses this and sits firmly on the fence ("[the min/max approach] can always be used by proper selection of the CRS"), but describes the above WSEN approach as:

b) For a circular coordinate, specify that the LowerCorner shall define the box edge furthest toward decreasing values, and the UpperCorner shall define the box edge furthest toward larger values. For longitude, the LowerCorner longitude would define the West-most box edge, and the UpperCorner longitude would define the East-most box edge. (The LowerCorner would no longer always use the minimum value, and the UpperCorner would no longer always use the maximum value. The value at the LowerCorner can be greater than at the UpperCorner when this bounding box crosses the value discontinuity.)
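To make the WSEN interpretation concrete, a small Python sketch (illustrative only): a box whose west value is greater than its east value crosses the antimeridian and can be split into two conventional boxes for filtering:

def split_bbox(west, south, east, north):
    """Return one or two boxes that never cross the antimeridian."""
    if west <= east:
        return [(west, south, east, north)]
    # Crosses the 180/-180 discontinuity, e.g. the NZ EEZ example above.
    return [(west, south, 180.0, north), (-180.0, south, east, north)]

print(split_bbox(160.6, -55.95, -170.0, -25.89))
# [(160.6, -55.95, 180.0, -25.89), (-180.0, -55.95, -170.0, -25.89)]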

Lack of standardization for collection metadata response with application/xml media type

As far as I can see in the spec, there's no XML schema defined for the collection metadata response with the application/xml media type.

For example,
$ curl -s -H "Accept: application/xml" "https://www.ldproxy.nrw.de/kataster" | xmllint --format -

returns

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<wfs:Collections xmlns:wfs="http://www.opengis.net/wfs/3.0">
  <wfs:Collection>
    <wfs:name>flurstueck</wfs:name>
    <wfs:title>Flurstück</wfs:title>
    <wfs:extent>
      <wfs:bbox>5.61272621360749</wfs:bbox>
      <wfs:bbox>50.2373512077239</wfs:bbox>
      <wfs:bbox>9.58963433710139</wfs:bbox>
      <wfs:bbox>52.5286304537795</wfs:bbox>
    </wfs:extent>
  </wfs:Collection>
  <wfs:Collection>
    <wfs:name>gebaeudebauwerk</wfs:name>
    <wfs:title>Gebäude, Bauwerk</wfs:title>
    <wfs:extent>
      <wfs:bbox>5.61272621360749</wfs:bbox>
      <wfs:bbox>50.2373512077239</wfs:bbox>
      <wfs:bbox>9.58963433710139</wfs:bbox>
      <wfs:bbox>52.5286304537795</wfs:bbox>
    </wfs:extent>
  </wfs:Collection>
  <wfs:Collection>
    <wfs:name>verwaltungseinheit</wfs:name>
    <wfs:title>Verwaltungseinheit</wfs:title>
    <wfs:extent>
      <wfs:bbox>5.61272621360749</wfs:bbox>
      <wfs:bbox>50.2373512077239</wfs:bbox>
      <wfs:bbox>9.58963433710139</wfs:bbox>
      <wfs:bbox>52.5286304537795</wfs:bbox>
    </wfs:extent>
  </wfs:Collection>
</wfs:Collections>

but it is not immediately obvious that this is how https://github.com/opengeospatial/WFS_FES/blob/master/core/openapi/schemas/content.yaml should be encoded in XML.

Align the Guide with OGC github conventions

The current version of the WFS 3.0 Guide is a single AsciiDoc file. This is not consistent with the OGC conventions for building specifications. The existing content will be refactored to comply.

Proposal: Remove 'resultType' from core and put in an extension

(Happy to create a PR for the spec changes for this and other issues I'll put in, but I've delayed getting this feedback out, so I figured I'd post now instead of continuing to wait.)

Though being able to ask for 'hits' instead of actual results seems pretty trivial to implement, it is additional overhead, and it can be more of a pain with a backend like Elasticsearch than with a standard RDBMS, especially when you get to hundreds of millions of features. At Planet, our data API has a dedicated 'stats' endpoint that returns the equivalent of 'hits' (see the full API spec and search for stats). Ours can also return time-bucketed data, which drives nice-looking graphs. One could see an advanced 'stats' endpoint being used for crossfilter-style interactive filtering.

So I believe a good microservices architecture would have 'hits' as its own endpoint that takes the same 'query' as the 'results' endpoint, so it could be backed by a cluster dedicated to and optimized for aggregations. Hits as an extension would allow services to include it if it's easy to implement, but would not require it for minimal implementations.

For the SpatioTemporal Asset Catalog spec we started with a draft WFS 3.0 Swagger spec but removed the result type; see the Swagger spec.

Pick a recommended encoding

Making people choose between HTML and JSON (as well as XML, unless #24 is accepted) decreases interoperability if there is no guidance on which one must be supported. Some may choose HTML, some JSON, some XML.

It forces them to do additional thinking and research, or just go with their gut - but no matter what, there is additional decision making.

It matters less what the default is, and indeed the second one can be strongly encouraged.

My gut often says HTML, but I've been convinced that it should be JSON. The primary consumer of WFS APIs is machines, and it should be a clear machine-readable format. HTML should be strongly encouraged, and there could well be open source libraries that make it really easy to turn the JSON responses into useful HTML - even doing it on the fly in JavaScript.

Precision level filter responsibility?

I'm wondering whether limiting precision should be managed in the spec with a filter like 'precision=6', kept at the data file/store behind the web service, or both.

This viewpoint comes from two main reasons:

  • Sending ten digits of precision has no meaning: see this summary https://gis.stackexchange.com/a/8674
  • The cost of geometry precision: for WGS 84, each coordinate with 10-digit precision would cost around 30% more geometry size

Geometry with 6 decimal digits (10 characters):

179.999999

Geometry with 10 decimal digits (14 characters):

179.9999999999

For point geometries the gain is small, but for lines and polygons the improvement in response size would be worthwhile.

The other question this would open, if considered in the core spec, is whether that precision is achieved by rounding or truncating.
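For illustration, a rounding-based sketch (one of the two options named above), applied recursively to nested GeoJSON coordinate arrays:

def round_coords(coords, precision=6):
    """Round all coordinates in a (possibly nested) coordinate array."""
    if isinstance(coords, (int, float)):
        return round(coords, precision)
    return [round_coords(c, precision) for c in coords]

print(round_coords([179.999999999, -55.123456789]))  # [180.0, -55.123457]
# Note the difference to truncation: truncating 179.999999999 to six
# decimals gives 179.999999, while rounding gives 180.0.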

Lack of DescribeFeatureType request

Part 1 Core doesn't have a DescribeFeatureType sort of request, which makes it a bit inconvenient for clients like GDAL/OGR that need to know the schema quite early in order to expose a layer / feature collection. For full feature collection ingestion, one can potentially download the whole response and analyze it afterwards to discover the schema. But for filtering based on properties, this wouldn't be an appropriate behaviour.

One way of implementing DescribeFeatureType would be to extend the feature collection metadata. Or, in the OpenAPI description, instead of referring to the standard GeoJSON FeatureCollection, refer to a derived FeatureCollection that describes the properties (although that solution would require a description for each encoding format, and it probably only works for JSON-based encodings).

Do not include /api path in OpenAPI definition

It seems to be rather uncommon to include an operation in an OpenAPI definition that returns an OpenAPI document.

One of the reasons for excluding the operation is that it will not work well with code generation tools.

Proposal: Simply require that the OpenAPI definition is available at /api, but drop the example and exclude the /api operation from all other examples, too.

Considering streaming output

Like the previous issue I opened (#14), I would like to be able to use streaming output like http://ndjson.org.
Why? It's about partial consumption of data and lowering the footprint on the back-end: without streaming, an output size of 600 MB means 600 MB of RAM consumption on the server.

HTTP headers vs payload

One design question is whether information about a resource returned by the server should be put in the HTTP headers or included in the payload.

Advantages of using HTTP headers:

  • If well-known headers exist like "Date" (for the timestamp) or "Link" (for qualified links), using those generic Web mechanisms would seem clearer than defining OGC/API-specific payload.
  • If existing payloads cannot be extended with additional information (e.g. a GML-SF0 feature collection or a PNG), using headers enables the direct reuse of such payloads, too.

Disadvantages of using HTTP headers:

  • For responses that will benefit from "streaming" (i.e., starting the response before the complete response is known), e.g. a query response, some information may not be available when the headers are written. numberMatched or some resource links are examples.
  • If only the payload is saved, the information in the header is "lost". It is unclear if this is really an issue.
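For illustration, a client-side sketch of consuming both mechanisms, assuming the Python requests library and a hypothetical server (requests parses the HTTP Link header into response.links):

import requests

r = requests.get("https://example.org/collections/buildings/items")  # hypothetical

# Header-based: qualified links from the HTTP 'Link' header.
if "next" in r.links:
    print("next page (header):", r.links["next"]["url"])

# Payload-based: links embedded in the JSON body.
for link in r.json().get("links", []):
    if link.get("rel") == "next":
        print("next page (payload):", link["href"])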

Ability to set no null/empty geometry in returned response

Most data consumers from WFS need the geometry, but sometimes the service can provide data for charts or HTML tables, where the geometry would slow down data retrieval due to its size in the response.

I'm not sure if the specification implicitly manages this use case, as I didn't find a way to do it from my understanding of the current Core spec.

Considering binary format output?

Contrary to most REST APIs, the response size in web mapping can be really large due to complex geometry data.

Most of the time, the trick is to restrict the number of returned features with attribute filtering, geometry filtering or a limit.

Unfortunately, for a complex geometry, it's not always enough.

Providing an output based on Protocol buffers or MessagePack could lower waiting time.

An existing implementation example for geospatial could be https://github.com/mapbox/geobuf based on 'Protocol buffer'.

Include a list of conformance classes somewhere?

The goal is to support both types of developers:

  • those that have never heard about WFS - it should be possible to create a client using the OpenAPI definition (they may need to learn a little bit about geometry, etc., but it should not be required to read the WFS spec, for example);
  • those that want to write a "generic" client that can access WFSs (and maybe WMSs, etc.), i.e. one that is not specific to a particular API/server.

To make it simpler for the second group, should we include somewhere an array of the requirements classes met by the server?

This could either be in the root resource or in some extension element in the OpenAPI document.
Here is an example of how this could be added to the root resource:

{
   "collections" : [ ... ],
   "conformsTo" : [ 
       "http://www.opengis.net/spec/wfs-1/3.0/req/core", 
       "http://www.opengis.net/spec/wfs-1/3.0/req/geojson" 
    ]
}

Those that do not care about WFS etc. could simply ignore that information.
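A sketch of what the second group's generic client could do with this (requirements-class URIs taken from the example above; hypothetical server):

import requests

root = requests.get("https://example.org/",
                    headers={"Accept": "application/json"}).json()
conforms = set(root.get("conformsTo", []))
if "http://www.opengis.net/spec/wfs-1/3.0/req/geojson" in conforms:
    print("server supports GeoJSON responses")
else:
    print("fall back to inspecting the OpenAPI definition")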

How does a client determine which security protocols/standards/etc. a server supports

From teleconference 12-FEB-2018:

  • discussion between Peter and Chuck about security
  • Chuck mentioned that the support path of geoapi security (i.e. OAuth and https) is not sufficient in a number of cases
  • Peter asked what is the basic issue
  • Chuck mentioned that there is no mechanism to allow a client to determine from the server what security protocols/standards/etc. are supported
  • this is related to the TB12/TB13 work of putting this information in the capabilities document
  • Peter thought that a new access path, something like /api/security, could be used as an endpoint to get this information
  • Chuck confirmed that there are no standards in this area yet but that he, in cooperation with others, is working on something

UML for OpenAPI

I have put together a UML model (Enterprise Architect) of OpenAPI 3.0. I thought it might be of use to this effort. Since I put it together for my own use, it is still a bit rough around the edges. I'll put in the time to clean it up and align it with other WFS UML models if there is any interest.

OpenAPI_20180130.zip

Response format specification: f parameter and content negotiation

The current draft allows a client to pass the f response-format parameter and to use the Accept header mechanism at the same time.

For discussion:

  • Since passing conflicting response formats is possible, I would expect an error message in that case. I'm afraid that otherwise errors are harder to trace.
  • Wouldn't the user assume that if they specify the f parameter, only the desired format is returned?

One possibility that would allow both mechanisms would be the use of format suffixes, like .json / .html / .xml:

/buildings/{fid}:
    get:
      summary: retrieve a feature
      operationId: getFeature
      tags:
        - Features
      parameters:
        # f parameter dropped
        - $ref: '#/components/parameters/id'
      responses:
        '200':
          description: A feature.
          content:
            application/geo+json:
              schema:
                $ref: '#/components/schemas/buildingGeoJSON'
            application/gml+xml;version=3.2:
              schema:
                $ref: '#/components/schemas/buildingGML'
            text/html:
              schema:
                type: string
        default:
          description: An error occurred.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/exception'
            application/xml:
              schema:
                $ref: '#/components/schemas/exception'
            text/html:
              schema:
                type: string

/buildings/{fid}.json:
    get:
      summary: retrieve a feature 
      operationId: getFeatureJson
      tags:
        - Features
      parameters:
        - $ref: '#/components/parameters/id'
      responses:
        '200':
          description: A feature.
          content:
            application/geo+json:
              schema:
                $ref: '#/components/schemas/buildingGeoJSON'
        default:
          description: An error occurred.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/exception'

/buildings/{fid}.xml
...
/buildings/{fid}.html
...

A generic client could use only the /buildings/{fid} path and the content negotiation mechanism.
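For illustration, the two mechanisms side by side (hypothetical server; f is the draft's query parameter):

import requests

url = "https://example.org/buildings/123"  # hypothetical

# 1. HTTP content negotiation.
r1 = requests.get(url, headers={"Accept": "application/geo+json"})

# 2. The f query parameter from the draft.
r2 = requests.get(url, params={"f": "json"})

# The open question above: what should happen when both are supplied
# and disagree, e.g. Accept: text/html together with ?f=json?
r3 = requests.get(url, params={"f": "json"}, headers={"Accept": "text/html"})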

HTTPS

Does there need to be any discussion around secure channels (e.g. HTTPS) vs HTTP? How about protocol-relative URIs? I understand that they're an anti-pattern. A best practice might be warranted.

Change path structure

During discussions at the WFS 3.0 hackathon, the following new path structure was discussed by a subgroup and was supported by the participants (underlined paths are part of the core, all others are in extensions):

[Image: whiteboard photo of the proposed path structure (img_4596)]

Advantages:

  • avoids naming collisions
  • clearer hooks for extensions
  • more compatible with some/many URI routers
  • can support servers with many collections (paging of collection metadata may be specified in an extension)

Review terminology

We need to make the document understandable to the developers that have no geo/OGC background (in addition to those that have that background). This may require some clarifications (e.g. use of "service" vs "API").

Typo in the Conformance section

The following section has a typo "defines a six requirements"

WFS_FES/core/standard/clause_2_conformance.adoc

Another typo "None of these encodings is mandatory" should be "None of these encodings are mandatory"

OpenAPI Validation

Note on Requirement 36: "Currently, no tool is known to validate that a server implements the API specified in its OpenAPI definition."
One result from the hackathon should be a first cut at Annex A and a strategy for how it will be tested using TeamEngine. A general-purpose OpenAPI test (does your service conform to your OpenAPI advertisement?) seems like a good place to start.

Missing example for links in GeoJSON FeatureCollection response

It would be good to have an example illustrating this part of Requirement 25 /req/geojson/content:
"The links specified in the requirements /req/core/fc-links and /req/core/f-links SHALL be added in a extension property (foreign member) with the name links."

Clarification on features being part of exactly one collection

During WFS 3.0 hackathon a question was raised about the below.

Section 7.1 of OGC Web Feature Service 3.0 - Part 1: Core specifies the following:
Each Collection consists of the features in the collection where each feature in the distribution is part of exactly one collection.

Can more clarification please be given on this point?

It might seem the statement tries to imply how data supporting the WFS implementation is to be organised, which might sound unnecessary.

Although it makes perfect sense, some data models might require a different approach. An example of a feature belonging to more than one collection was given during the event, e.g.:

  • a feature of id: 7 can be part of collections:
    -- buildings
    -- skyscrapers
    The above example also sparked conversations on how to deal with transactions on such features (if building id: 7 is changed, the skyscraper id: 7 should logically be changed as well).

Is the statement therefore possibly a 'best practice'?

Add contributor IPR statement to Readme

We should add a brief note on contribution to the standard as was done with GeoPackage:

"Contributing
The contributor understands that any contributions, if accepted by the OGC Membership, shall be incorporated into the formal OGC GeoPackage standards document and that all copyright and intellectual property shall be vested to the OGC.

The GeoPackage Standards Working Group (SWG) is the group at OGC responsible for the stewardship of the standard, but is working to do as much GeoPackage work in public as possible."

Semantics of attribute filtering?

https://rawgit.com/opengeospatial/WFS_FES/master/docs/17-069.html#_parameters_for_filtering_on_feature_properties

specifies that query parameters could be added for significant feature properties and goes on to show a case in which an enum is used.
What about open-ended strings and numbers, though - what are the semantics of the filtering?
For open-ended strings it might be useful to match anything that contains the specified string ("like"-filter-wise); for numbers, and also for dates, it would be useful to support not just equality matches but also ranges.

This could allow a minimal filtering interface to be supported in WFS core (for more complex needs, a dedicated extension based on FES or CQL filters would be nice).
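One possible (purely illustrative) interpretation, mapping open-ended strings to 'contains' matches and using a hypothetical min/max convention for numeric ranges:

def build_filter(params, schema):
    """Translate query parameters into SQL-like clauses (naive string
    building for illustration; a real server would bind parameters)."""
    clauses = []
    for name, value in params.items():
        if schema[name] == "string":
            clauses.append(f"{name} LIKE '%{value}%'")   # 'contains' semantics
        elif schema[name] == "number" and "/" in value:
            lo, hi = value.split("/")                    # e.g. height=10/50
            clauses.append(f"{name} BETWEEN {lo} AND {hi}")
        else:
            clauses.append(f"{name} = {value}")          # equality by default
    return " AND ".join(clauses)

print(build_filter({"name": "main", "height": "10/50"},
                   {"name": "string", "height": "number"}))
# name LIKE '%main%' AND height BETWEEN 10 AND 50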

Move XML output to extension

Having XML as a top-tier option brings in unneeded complexity. It makes a lot of sense as an extension, especially for GML 3.2 compatibility and alignment with past WFSs.

Even though it is clearly optional in the core specification, it requires more cognitive overhead to figure out what is needed and how to take out all the references. We may also end up with API implementations that just leave some of the XML definitions in even though they don't support them.

The core spec should be as short and sweet as possible, and having XML in there makes it not as short as it could be.

OpenAPI documents and normative statements

In another issue (#1), a discussion started about normative OpenAPI documents or normative elements in an OpenAPI document. Since this is out of scope for the other issue, I have created a new topic.

Beyond WFS, it might be worth seeking broader guidance (from the OAB perhaps?) as to whether API descriptions can be used at all to contain any normative constraints.

I would prefer that we come to a conclusion first of what we think works best and then discuss with the OAB, if they see any concerns.

But the answer to this question wouldn't stand in the way of continuing on the current WFS course (since you're currently just using it as a source of examples).

Yes, but some parts of the OpenAPI definition of a WFS implementation will be based upon normative statements. See https://github.com/opengeospatial/WFS_FES/blob/master/core/standard/clause_5_conventions.adoc#references-to-openapi-components-in-normative-statements.

The idea behind the general approach in the current draft is that it should be possible to implement a single API (i.e. have a single OpenAPI definition) that conforms to, for example, WFS, WMS and WMTS at the same time (and using the same segmentation of layers/feature collections). There are very likely still elements in the current draft that do not yet fit this idea, but it is on my list to review the normative statements, and the general text, from that perspective.

In other words, the idea is not to define standalone services, but rather to define "building blocks" for an API that supports spatial aspects. This is closely related to the idea of the OGC essentials. In that sense, the use of "service" in the names of our standards, which implies that a WFS instance and a WMS instance are separate services, would no longer be true.

Impact of access restrictions

If an implementation limits the access to a subset of the operations for some users (or if the server implements even more fine-grained access), it may no longer be possible for such a user to de-reference all links, including relationships between features. Does this require a discussion? Or more?

CRS support beyond WGS84 lon/lat

Discussed in the Gitter chat during the WFS 3.0 hackathon:

@aaime: I'm reading the spec but not understanding CRS treatment... on some parts it seems like it would be ignorable, in others that everything should default to CRS84... any clue?

@cportele: CRS handling in the Core is only a placeholder for future extensions. In the Core it is always WGS84 lon/lat.

@aaime: what about data that is some other CRS? Should it be left as is? Reprojected to WGS84?

@cportele: Reprojected to WGS84

@aaime: Agh... that's bad for all data in some other datum, the precision loss is significant... does that mean WFS3 core is not for public administrations? (In Italy, for example, WGS84 data delivery is an extra; projected systems are the mandatory ones - and unfortunately projected systems not even based on WGS84, but on some other datum.)

@hannesaddec: @aaime and imagine one wants to display on an EPSG:3035 map... are we then projecting twice?

@cportele: It is the same in Germany (and most of Europe). If high precision is needed, the core will not be sufficient and an extension is needed. Support for additional CRSs is one of the extensions we will be looking at this year. A key problem: how to encode data in JSON? GeoJSON is always WGS84 lon/lat...

@hannesaddec: only in the new GeoJSON spec...
2008 (yes) vs 2016 (restricted to WGS84 with a backdoor): "However, where all involved parties have a prior arrangement, alternative coordinate reference systems can be used without risk of data being misinterpreted." https://tools.ietf.org/html/rfc7946#page-12

@cportele: Yes, that has been discussed at great length - if you use it in any other CRS, all your users need to be aware of that - and their libraries would need to support that, too. I do not think this is the situation for most public WFSs, where users may not be known in advance. So, yes, you may use a WFS with GeoJSON in other CRSs, but I do not see this as an approach to take in a conformance class of the standard.

I guess one approach would be to specify a new JSON encoding that is basically the GeoJSON encoding but adds a capability to specify a CRS. This could not be called GeoJSON, but would have to use a new name (if the IETF licensing rules allow that approach). In a way this is similar to the approach in GeoSPARQL for WKT, where GeoSPARQL specifies how to state the CRS for a WKT geometry.

@aaime: the WFS specification could add that "prior arrangement" between server and client as an extra attribute; if the client has to explicitly ask for a CRS other than WGS84, there is probably not going to be confusion.
Looking at the spec, I don't see any indication that using a different CRS would make it not GeoJSON anymore; they explicitly discuss the loophole instead of denying any other CRS usage.

@cportele: I understand that, but having a geometry serialization that does not have a fixed CRS or a capability to declare the CRS of the coordinates seems to be a hack / "broken".

@aaime: the problem happens if you download the data in the context of the protocol, store it, and then give it to some other software outside of the arrangement.
The real issue is in GeoJSON; the format became useless in a lot of applications due to the fixed CRS...

@pduchesne: +1 on having the core spec say that a CRS query parameter /should/ be supported, and that if supported and specified, the returned GeoJSON /must/ use that CRS. But since the GeoJSON spec mandates the use of WGS84, wouldn't we have to return both geometries, in the requested CRS and in WGS84? To me, adding an extra attribute (like 'projected_geom') doesn't break the spec, but changing the semantics of the existing 'geometry' attribute is another story.

@aaime: "doubling" the size of the payload will likely make many cry out loud

@pduchesne: indeed

need to add a license

We need to add a license to the repo. Discussion is underway on whether we use Apache or OGC for the repo.

More detail / examples on caching

The section on caching (https://rawgit.com/opengeospatial/WFS_FES/master/docs/17-069.html#_web_caching) is pretty light, but it is quite important.

There's a todo there for adding an OpenAPI example, which would be a good start. I'm opening this issue because it'd be really good to get that todo done and to start figuring out how to test for caching. We should make at least some recommendations on which headers to use (ETags vs Cache-Control), when to update them / how long to set them for, proper HTTP responses (304 Not Modified), and making proper requests (If-Modified-Since or If-None-Match), etc.

I think many people in the geospatial world aren't fluent in the web caching paradigm, so giving some solid recommendations would go a long way.

Ideally we'd also have a tutorial of some sort, showing how you could use a polling mechanism with cache headers to keep two services in sync. And/or perhaps we make a little extension to request all features that have been updated since a certain time.
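For illustration, the basic conditional-request pattern such recommendations would cover (Python requests, hypothetical URL):

import requests

url = "https://example.org/collections/buildings/items/123"  # hypothetical

first = requests.get(url)
etag = first.headers.get("ETag")
cached_body = first.content

# Revalidate later: the server answers 304 Not Modified if unchanged.
later = requests.get(url, headers={"If-None-Match": etag} if etag else {})
if later.status_code == 304:
    body = cached_body    # reuse the local copy
else:
    body = later.content  # refresh the cache (and store the new ETag)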

Listing of all applicable HTTP Status Codes

Requirement 38 - I question whether this is feasible. Implementers should not be required to research and describe every status code which may be returned by the HTTP protocol. Many of these are artifacts of the network topology and the HTTP protocol itself. It is feasible for implementers to describe any status codes produced by their service which are not standard HTTP codes. I suspect that is the intent of the requirement. But without a specific definition of "service" as the software logic accessible through the Base URL, this requirement is ambiguous.

Support for CORS (or JSONP?)

Servers should support a mechanism to overcome cross-domain restrictions.

To be consistent with HTTP, it probably should be CORS. Is that only a separate requirement or does it need to be reflected in the OpenAPI definition? The discussion here suggests that it should not be reflected explicitly in the OpenAPI definition.

Is the convenience of JSONP important enough to support it, too, even with its drawbacks?

Query parameter collisions

How should a server handle the case where a property name of a feature collides with the name of one of the defined API parameters? COUNT, RESULTTYPE and BBOX are all API parameter names that could conceivably also be property names in a feature.
Perhaps we might consider prefixing the API parameter names with "wfs:". Just a thought.

Remove OpenAPI 3.0 dependency from the Core requirements class

Discussed during the meeting on 2017-11-28:

To avoid lock-in to OpenAPI 3.0 we should change the requirements to provide a formal API description at /api and have a separate conformance class for OpenAPI 3.0, but not preclude that in the future or in parallel other versions of OpenAPI or other description mechanisms are provided by a server.

HTTP 1.1 - what is the current RFC?

Currently we reference RFC 2616. It has no marking as being obsolete, so I assume this is still the HTTP 1.1 specification.

However, since references to the 723x RFCs came up in discussions, I wanted to raise this to see if I am wrong. There is a revision of the HTTP 1.1 spec, split into multiple RFCs (7230 to 7235). According to http://www.rfc-editor.org/info/rfc7231, that RFC is not approved and is still a "proposed standard". This is the early phase in IETF, even before it is called a "draft standard". The version is from 2014, so I have no idea what its status in IETF really is, but my conclusion was/is that RFC 2616 is and will continue to be the HTTP 1.1 spec for some time.

Or am I wrong?

Support response that returns only the ids of the selected features

The following is feedback from using an implementation:

If a client already has a copy of the relevant features locally, a query to a feature collection would only need to return the feature ids, not the whole features. This can reduce network load and improve application performance.

This could be implemented in several ways. One option could be to use resultType=ids. However, to be consistent with the media types, this would still need to be a feature collection (but without geometries or other properties). In that sense it would be a special projection case and could also be handled in the extension that will cover the projection capability.

Features without a collection

Just being a devil's advocate here, but is it possible that we may have a feature without a collection? I am looking at #18... I imagine that a "dataset" could be any arbitrary "collection" of web resources (i.e. "spatial things" aka "features"). For example, a dataset could contain references to "features" that exist in zero or many collections, and while that may be a collection unto itself, it isn't necessarily so.

More communication forums

It'd be great to link to a couple more communication forums in the readme, where people can ask questions, etc.

  • Email - I think the SWG list is closed? There seems to be an old, unused public list - https://lists.opengeospatial.org/mailman/listinfo/wfs-dev. Should we revive that and make it a public discussion list? Or make a new public one? In general, GitHub issues are where people will likely communicate most things, but having an email list can be more welcoming.

  • Chat - I'm liking http://gitter.im recently - great GitHub integration. We could also use Slack. For this to work we'd probably need at least either Clemens or Peter to occasionally pop in.

I'd be happy to help create the chat channel.

add temporal support

It would be valuable to add:

  • temporal extent definition in feature collection metadata
  • temporal dimension definition in feature collections to give clients the capability to query features temporally (OpenAPI may already allow for this by design, by specifying the data type along with the attribute query property) - see the sketch after this list
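A sketch of such a temporal query, assuming the datetime parameter with interval syntax (start/end) that Part 1 later standardized (hypothetical server and collection):

import requests

r = requests.get("https://example.org/collections/observations/items",
                 params={"datetime": "2018-02-12T00:00:00Z/2018-03-18T12:31:12Z"})
print(r.json().get("numberReturned"))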

Feature Identifiers

There are two types of feature identifier, and we need to make sure we distinguish between them.

  1. The feature instance identifier identifies a specific instance of a feature resource.
  2. The feature object identifier associates a real-world object with the feature resource.
    As the feature is updated over time, the instance identifier will change with each update; the object identifier will not. This is important for linked data. An association with a real-world object has different semantics from an association with a specific digital representation of that object. In the first case, the feature can be modified without impacting the association. This is not true in the second case.

Move Annex B to the release notes?

Annex B is meant as a discussion / explanation for the community that uses WFS today. Maybe we should remove it from the document and make it a separate document (part of the release notes)?

Paging improvements

So I believe paging as it stands right now is a bit problematic. Hopefully a developer who has actually worked with paging on RESTful APIs with Elasticsearch backends will chime in and explain more.

But in Planet's data API we don't have 'startIndex' or anything like that, since it's a lot more complexity to enable clients to request arbitrary locations in an index. Indeed, if the index changes, with data added or deleted, that can throw off the results.

Planet solves this by requiring users to create a saved search, or there's a 'quicksearch' shortcut to get results right away. But both create an index on the server side that is fixed in time, which the user can page through. DigitalGlobe actually doesn't enable paging on their Catalog results, and the underlying vector data store seems to require a similar creation of a paging ID - https://gbdxdocs.digitalglobe.com/v1/docs/vs-retrieve-page-of-vector-items

I don't think we need to specify that level of paging / search in the core of the spec (though possibly in an extension / best practice). But I think we should remove 'startIndex' in favor of just having results supply their 'next' link, which can be generated by the service. Planet's 'next' links look like https://api.planet.com/data/v1/searches/f8abab5007a14b31b5ccfb8a3d3f02d1/results?_page=eyJxdWVyeV9wYXJhbXMiOiB7fSwgInNvcnRfcHJldiI6IGZhbHNlLCAicGFnZV9zaXplIjogMjUwLCAic29ydF9ieSI6ICJwdWJ

(with a string that is actually 3 times that long).

In the STAC spec we just have an optional nextPageToken, and then the 'links' section uses that token.

Services wouldn't be required to use an obscure string for paging through results. Though we should make a recommendation on whether services are expected to return consistent results, or whether it is OK that they return fewer than the 'count'. See https://developers.facebook.com/blog/post/478/ for an API that doesn't guarantee the number of results.

There is additional discussion on paging best practices at https://stackoverflow.com/questions/13872273/api-pagination-best-practices - we should perhaps recommend some default ordering of results that paging can be driven off of.

The other thing that seems a bit arbitrary is the 10000 maximum limit on the 'count' parameter. I couldn't figure out from the spec whether that is something implementations can change, or whether it is hard-coded. At Planet we enable dynamic setting of the page size, but the limit is 250 results per page, and the operations team resists any increases to that, as it introduces more complexity to support.
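The token in the sample Planet link above is base64-encoded JSON (the eyJ prefix is base64 for '{"'). A sketch of that opaque-token approach, assuming the server serializes its paging state (state keys are hypothetical):

import base64
import json

def encode_page_token(state):
    raw = json.dumps(state).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_page_token(token):
    padded = token + "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Hypothetical paging state; clients treat the token as opaque and
# simply follow the 'next' link that embeds it.
token = encode_page_token({"sort_by": "published", "page_size": 250,
                           "last_seen_id": "20180301_1234"})
print(decode_page_token(token))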
