solar-and-wind-potentials's Issues

Config schema

We can validate our config using a JSON Schema (see the Snakemake docs), which would help to manage what a user is able to define.

As well as listing configurable options, we can set which ones are required and which are optional (with defaults), and constrain each option's data type and set of permitted values. Already mentioned in #7 is the need to limit the NUTS years that can be defined.

Others I see are (a schema sketch follows the list):

  1. Countries: they must be within the European continent
  2. Bounds: they must make sense as coordinates
  3. Administrative units: must be one of [nuts0, nuts1, gadm0, lau2, etc.]
  4. Float parameters: must be positive, and some have more stringent ranges
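
To make this concrete, here is a minimal sketch of such a schema using the jsonschema package directly (Snakemake's snakemake.utils.validate helper wraps the same machinery). All key names, the country list, and the bounds layout are illustrative assumptions, not the repository's actual config:

```python
# Minimal sketch of config validation with jsonschema. Key names, the country
# list, and the bounds layout are illustrative assumptions.
import jsonschema

CONFIG_SCHEMA = {
    "type": "object",
    "required": ["countries", "bounds", "layers"],
    "properties": {
        "countries": {
            "type": "array",
            "items": {"enum": ["Austria", "Germany", "Iceland", "Switzerland"]},  # European countries only (list truncated)
        },
        "bounds": {
            "type": "object",
            "required": ["x_min", "x_max", "y_min", "y_max"],
            "properties": {
                "x_min": {"type": "number", "minimum": -180, "maximum": 180},
                "x_max": {"type": "number", "minimum": -180, "maximum": 180},
                "y_min": {"type": "number", "minimum": -90, "maximum": 90},
                "y_max": {"type": "number", "minimum": -90, "maximum": 90},
            },
        },
        "layers": {
            "type": "array",
            "items": {"enum": ["nuts0", "nuts1", "nuts2", "nuts3", "gadm0", "lau2"]},
        },
        "max-slope": {"type": "number", "minimum": 0},  # floats must be positive
    },
}

config = {  # hypothetical user config
    "countries": ["Germany", "Iceland"],
    "bounds": {"x_min": -25, "x_max": 35, "y_min": 30, "y_max": 70},
    "layers": ["nuts0", "lau2"],
    "max-slope": 20,
}
jsonschema.validate(instance=config, schema=CONFIG_SCHEMA)  # raises ValidationError on bad input
```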

Make the workflow more technology-agnostic

Currently, there are exactly four technologies, two of which compete for land (onshore wind and solar farms). These four technologies (including the competing ones) are baked into the analysis early on, when land eligibility is determined here.

This has two problems. First, the workflow is not as technology-agnostic as it could be: adding or removing technologies is difficult. Second, it mingles the competing technologies in an error-prone way (all the pv-prio/wind-prio variants in downstream rules and output files, like here).

A cleaner solution would be to assess all technologies in isolation and decide between competing technologies only at the very last step. For example, instead of an integer map with Eligibility categories baked in, we'd create four boolean maps:
build/technically-eligible-land.tif (with values 0, 250, 180, 110, 40) -> build/technically-eligible-land-{technology}.tif (with values 0 and 1).

All other maps would change accordingly. Again, this will create many more maps and will require more disk space, but the workflow code and result data will be much cleaner, hopefully leading to fewer misuses and allowing other technologies to be introduced more easily.
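
A rough sketch of that split, using the category codes from above but a purely hypothetical mapping of codes to technologies:

```python
# Sketch: split the integer eligibility map into per-technology boolean maps.
# The mapping of category codes to technologies below is a hypothetical
# assumption for illustration, not the repository's actual encoding.
import numpy as np
import rasterio

TECH_CATEGORIES = {
    "rooftop-pv": {250},
    "open-field-pv": {180, 40},
    "onshore-wind": {180, 110, 40},
    "offshore-wind": {110},
}

with rasterio.open("build/technically-eligible-land.tif") as src:
    eligibility = src.read(1)
    meta = src.meta.copy()

meta.update(dtype="uint8", nodata=None)

for tech, categories in TECH_CATEGORIES.items():
    mask = np.isin(eligibility, list(categories)).astype("uint8")
    with rasterio.open(f"build/technically-eligible-land-{tech}.tif", "w", **meta) as dst:
        dst.write(mask, 1)
```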

Make this repo a submodule in Euro-Calliope

Instead of packaging up the output of this repository and then pulling that information from Zenodo when running the Euro-Calliope workflow, I think it would be better to have this as a submodule of Euro-Calliope, similar to how Euro-Calliope is a submodule of the OSE model workflow. Some reasons for this:

a. They share a lot of the same input data, so there isn't much overhead in terms of preparing the datasets. In fact, downloading and generating shapefiles (one of the more time-intensive tasks) only needs to be done once.
b. If a user wants to change something in Euro-Calliope that requires different technical potential data, they currently have to wait for this repository's output to be re-packaged on Zenodo. This includes changing the spatial scope (see #1) and using a different spatial resolution (e.g. NUTS3).
c. You could re-generate the technical eligibility data only for the resolutions of interest to your energy system model (and skip the report generation), so the time/memory penalty would be low (see the sketch after this list).
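
For illustration, here is a minimal sketch of how Euro-Calliope's Snakefile could pull targets from such a submodule using Snakemake's subworkflow directive; the paths, config file, and target names are all assumptions:

```python
# Sketch (Snakefile syntax): consume this repo as a git submodule through a
# Snakemake subworkflow. All paths and target names are assumptions.
subworkflow potentials:
    workdir: "solar-and-wind-potentials"
    snakefile: "solar-and-wind-potentials/Snakefile"
    configfile: "config/potentials.yaml"

rule import_potentials:
    input: potentials("build/technical-potential/nuts0/potentials.csv")
    output: "build/model/potentials.csv"
    shell: "cp '{input}' '{output}'"
```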

Consistent source of elevation data

At the moment, 3 arc second data is used for most of Europe, but it doesn't cover anything farther north than 60°N. To get the northern Nordic countries covered, I think the current approach is to supplement the data with 7.5 arc second GMTED data. Perhaps, for consistency, a single data source could be used, such as this attempt at filling in missing 3 arc second SRTM data: www.viewfinderpanoramas.org/dem3.html
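
Whatever the source, the supplement-and-merge step could look roughly like this (file paths are assumptions; rasterio's merge gives precedence to the first dataset where coverage overlaps):

```python
# Sketch: mosaic fine SRTM data with coarser GMTED data for latitudes above
# 60N, resampled onto the SRTM resolution. File paths are assumptions.
import rasterio
from rasterio.merge import merge

with rasterio.open("data/elevation/srtm-europe.tif") as srtm, \
     rasterio.open("data/elevation/gmted-above-60n.tif") as gmted:
    mosaic, transform = merge([srtm, gmted], res=srtm.res)  # SRTM wins where both cover
    meta = srtm.meta.copy()

meta.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)
with rasterio.open("build/elevation-europe.tif", "w", **meta) as dst:
    dst.write(mosaic)
```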

Add a minimal test of entire workflow

As we are adding changes to this repo from time to time now, it would be good to have a continuous integration test. For that, the workflow must be 100% automatic, and we should have a configuration that requires minimal downloads and minimal runtime. We can then use a simple GitHub action that runs Snakemake with this configuration (example).
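
The CI entry point could be as small as a single smoke test; the config path is an assumption:

```python
# Sketch: CI smoke test that runs the full workflow with a minimal
# configuration. The config file path is an assumption.
import subprocess

def test_minimal_workflow():
    result = subprocess.run(
        ["snakemake", "--use-conda", "--cores", "1",
         "--configfile", "tests/resources/minimal.yaml"],
        check=False,
    )
    assert result.returncode == 0, "workflow failed with minimal config"
```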

Communes are not always LAU2

The dataset you use to inform the 'LAU2'/'commune' spatial resolution is not strictly 2013 LAU2. For instance, the UK data is more accurately 'wards'. The IDs for these wards don't always match LAU2 and, in the case of Scotland and Northern Ireland, their boundaries don't match LAU2 either. I suspect that all the communes with None values in the TRUE_COMM_ column are not what they seem.


This isn't a problem for your paper's analysis, since you don't aggregate based on IDs but rather using spatial joins. I only noticed the issue when trying to match known UK electricity consumption at the LAU1 resolution to data coming out of this workflow; I thought it would be easiest (ha!) to do so by aggregating with LAU2 -> LAU1 correspondence tables.
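
A quick way to quantify the suspicion, assuming the units are available in a single file (the path is an assumption):

```python
# Sketch: count units whose TRUE_COMM_ value is missing, which may flag
# shapes that are not genuine 2013 LAU2 communes. The path is an assumption.
import geopandas as gpd

units = gpd.read_file("build/administrative-borders-lau2.gpkg")
suspect = units[units["TRUE_COMM_"].isna()]
print(f"{len(suspect)} of {len(units)} units lack a TRUE_COMM_ value")
```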

Update to EPSG:3035 and then verify that buffering invalid shapes doesn't destroy data

As discussed in #9, we buffer shapes that are 'invalid' to stop them breaking the workflow elsewhere. Invalid shapes are those with edges that self-intersect, e.g. think of a 'bow tie'. Many shape operations cannot be undertaken on invalid shapes (aggregating, overlaying, joining, etc.). Validity can be checked via the is_valid property of a shape object. Buffering cleans this up, but can cause bits of shapes to disappear completely (see the cautionary comment here).

At the moment, in #9, a check has been added to ensure the final area of a buffered shape is the same as the pre-buffered shape's area, within a tolerance. Ideally we would set this tolerance in absolute units, e.g. m². This can only be done, however, if the coordinate reference system is in metres (e.g. EPSG:3035), which isn't always the case.

This can be resolved by moving the entire workflow to operate in a single coordinate reference system from very early on. The absolute buffer tolerance could then be set to match that reference system. EPSG:3035 would possibly be the best choice, but if the reference system is configurable, we need to ensure that the buffer tolerance changes accordingly.
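
Putting the pieces together, the check could look like this once everything operates in EPSG:3035 (the file path and tolerance value are assumptions):

```python
# Sketch: repair invalid shapes with a zero-width buffer in a metric CRS and
# assert that no shape lost more area than an absolute tolerance. The file
# path and the tolerance value are assumptions.
import geopandas as gpd

AREA_TOLERANCE_M2 = 1e4  # hypothetical: at most 1 ha of change per shape

shapes = gpd.read_file("build/administrative-borders.gpkg").to_crs("EPSG:3035")
original_area = shapes.geometry.area
repaired = shapes.geometry.buffer(0)  # fixes self-intersections like 'bow ties'
area_change = (repaired.area - original_area).abs()
assert (area_change <= AREA_TOLERANCE_M2).all(), "buffering destroyed parts of a shape"
```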

Adding Iceland to analysis also adds random islands

This issue relates to adding Iceland for use in a downstream dependency, Euro-Calliope (see https://github.com/timtroendle/euro-calliope/issues/15).

Including Iceland here is relatively straightforward; it just requires:

  1. Updating the snakemake config to include Iceland in the list of countries (including at every resolution), and extending the spatial bounds to cover Iceland.
  2. Updating the renewables.ninja capacity factor datasets, now that there are more points to simulate.

There's only one issue: while creating the ninja input configs, I noticed that the extended spatial bounds lead to a bunch of island territories ending up in the model.


These same islands will end up elsewhere in the analysis, since the spatial bounds are used to set the study area, e.g. when generating shapefiles. Thoughts on the easiest way to exclude these, @timtroendle, or do you imagine they won't be a problem? One possible filter is sketched below.
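
One candidate filter: keep only the sites that actually intersect the configured countries' shapes, rather than everything inside the bounding box. File paths and layer names are assumptions:

```python
# Sketch: restrict the study area to the configured countries instead of the
# raw bounding box, dropping stray island territories. Paths are assumptions.
import geopandas as gpd

countries = gpd.read_file("build/national-borders.gpkg")
sites = gpd.read_file("build/ninja-input-sites.geojson")
in_scope = gpd.sjoin(sites, countries, how="inner", predicate="intersects")
in_scope.drop(columns="index_right").to_file(
    "build/ninja-input-sites-filtered.geojson", driver="GeoJSON"
)
```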

Update shared coast IDs to EEZ MRGID

Currently, shared-coast.csv is built based on EEZ 'ids', which are just the feature numbers that fiona generates when reading the shapefile. These IDs are not actually in the dataset itself; instead, we should use MRGID, which is a unique identifier for each EEZ shape.

This update is required for calliope-project/euro-calliope#99 to function correctly, since automatic download of EEZs is only possible for v11 (currently, we use v10), which orders the shapes differently, breaking the use of feature numbers as identifiers.
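
The change itself is small: index the EEZ shapes by the dataset's own MRGID column instead of fiona's positional feature number (the file path is an assumption):

```python
# Sketch: use the dataset-native MRGID attribute as the shape identifier
# rather than fiona's positional feature number. File path is an assumption.
import geopandas as gpd

eez = gpd.read_file("data/eez/eez_v11.shp").set_index("MRGID")
print(eez.index[:5])  # stable across dataset versions, unlike feature order
```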
