calliope-project / solar-and-wind-potentials

Estimation of solar and wind power generation potentials in Europe.
License: MIT License
To streamline editor configurations such as "newline at end of file."
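One common way to do this is an EditorConfig file at the repository root; a minimal sketch (the settings shown are illustrative, not necessarily what the project would choose):

```ini
# .editorconfig — illustrative sketch, not the project's actual settings
root = true

[*]
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
```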
Currently, shared-coast.csv is built based on EEZ 'ids', which are based on the geojson feature numbers generated by fiona when reading the shapefile. These IDs are not actually in the dataset itself; instead, we should use mrgid, which is a unique identifier for each EEZ shape.
This update is required for calliope-project/euro-calliope#99 to function correctly, since automatic download of EEZs is only possible with v11 (currently, we use v10), which has a different order of shapes, invalidating the use of feature IDs as identifiers.
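The failure mode can be illustrated without the actual shapefiles. The feature dicts below mimic what fiona yields when reading a shapefile; the property names follow the Marine Regions attribute table (MRGID, GEONAME), but the values are made up for illustration:

```python
# Sketch: feature order (and hence fiona's auto-generated feature ids) can
# differ between EEZ dataset versions, so shapes should be keyed by the
# stable mrgid attribute instead.
v10_features = [
    {"properties": {"MRGID": 5676, "GEONAME": "German EEZ"}},
    {"properties": {"MRGID": 5696, "GEONAME": "Dutch EEZ"}},
]
v11_features = list(reversed(v10_features))  # same shapes, different order

# Fragile: index-based ids point at different shapes in v10 and v11.
assert v10_features[0] is not v11_features[0]

# Robust: an mrgid-keyed lookup is independent of feature order.
by_mrgid_v10 = {f["properties"]["MRGID"]: f for f in v10_features}
by_mrgid_v11 = {f["properties"]["MRGID"]: f for f in v11_features}
assert by_mrgid_v10[5676] is by_mrgid_v11[5676]
```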
This issue relates to adding Iceland for use in a downstream dependency, Euro-Calliope (see https://github.com/timtroendle/euro-calliope/issues/15).
Including Iceland here is relatively straightforward; it just requires:
There's only one issue: when creating the ninja input configs, I noticed that the extended spatial bounds of the problem lead to a bunch of island territories ending up in the model:
These same islands will end up elsewhere in the analysis, since the spatial bounds are used to set the study area, e.g. when generating shapefiles. Thoughts on the easiest way to exclude these, @timtroendle, or do you imagine they won't be a problem?
We can validate our config using a JSON Schema (see the Snakemake docs), which would help to manage what a user is able to define.
As well as listing configurable options, we can set which ones are required and which are optional (with defaults), and limit each option's data type and set of possible entries. Already mentioned in #7 is the need to limit the NUTS years that can be defined.
Others I see are:
[nuts0, nuts1, gadm0, lau2, etc.]
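As a sketch, a Snakemake-style YAML schema restricting the spatial resolution and NUTS year could look like the following (the key names and permitted values are assumptions for illustration, not the repo's actual config keys). It would be applied via `snakemake.utils.validate(config, "schema.yaml")`:

```yaml
# schema.yaml — illustrative sketch of a config schema
type: object
properties:
  resolution:
    type: string
    enum: [nuts0, nuts1, nuts2, nuts3, gadm0, lau2]
  nuts-year:
    type: integer
    enum: [2006, 2010, 2013, 2016]
    default: 2013
required:
  - resolution
```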
Currently, there are exactly four technologies, two of which compete for land (onshore wind and solar farms). These four technologies (including the competing ones) are baked into the analysis early on, when land eligibility is determined here.
This has two problems. First, the workflow is not as technology-agnostic as it could be: adding or removing technologies is difficult. Second, it mingles the competing technologies in an error-prone way (all the pv/wind-prio handling in subsequent rules and output files, like here).
A cleaner solution would be to assess all technologies in isolation and decide between competing technologies only at the very last step. For example, instead of an integer map with eligibility categories baked in, we'd create four boolean maps: build/technically-eligible-land.tif (with values 0, 250, 180, 110, 40) -> build/technically-eligible-land-{technology}.tif (with values 0 and 1).
All other maps would change accordingly. Again, this will create many more maps and require more disk space, but the workflow code and result data will be much cleaner, hopefully leading to fewer misuses, and it will make it easier to introduce other technologies.
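The split itself is mechanical. In the sketch below, the category codes (0, 250, 180, 110, 40) are the ones from the current integer map, but which codes belong to which technology is an assumption made purely for illustration:

```python
# Sketch: splitting the combined integer eligibility map into one boolean
# map per technology. Lists stand in for raster pixel arrays.
eligibility = [250, 0, 180, 40, 110, 250]  # stand-in for pixel values

codes_per_technology = {
    "wind-onshore": {250, 180},  # assumed code->technology mapping
    "pv-farm": {250, 110},       # assumed
    "wind-offshore": {40},       # assumed
}

boolean_maps = {
    tech: [1 if pixel in codes else 0 for pixel in eligibility]
    for tech, codes in codes_per_technology.items()
}

assert boolean_maps["wind-onshore"] == [1, 0, 1, 0, 0, 1]
```

Note that a pixel can be eligible for several technologies at once (here, code 250 appears in both the wind and pv sets), which is exactly what deferring the prioritisation decision requires.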
The dataset you use to inform the 'LAU2'/'commune' spatial resolution is not strictly 2013 LAU2. For instance, the data for the UK is more correctly 'wards'. The IDs for these wards don't always match LAU2 and, in the case of Scotland and Northern Ireland, their boundaries don't match LAU2 either. I suspect that all the communes with None values in the TRUE_COMM_ column are not what they seem, i.e.:
This isn't a problem for your paper's analysis, since you don't aggregate based on IDs but by using spatial joins. I only noticed the issue when trying to match known electricity consumption in the UK, at the LAU1 resolution, to data coming out of this workflow. I thought it would be easiest (ha!) to do it by aggregating using LAU2 -> LAU1 correspondence tables.
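That aggregation step is simple when the IDs do match; the sketch below shows the intended LAU2 -> LAU1 roll-up (the codes and values are made up for illustration), which silently breaks as soon as the 'LAU2' IDs are actually ward IDs missing from the correspondence table:

```python
# Sketch: aggregating LAU2-level values to LAU1 via a correspondence table.
correspondence = {  # LAU2 code -> LAU1 code (illustrative codes)
    "E05000001": "E07000001",
    "E05000002": "E07000001",
}
consumption_lau2 = {"E05000001": 10.0, "E05000002": 5.0}

consumption_lau1 = {}
for lau2, value in consumption_lau2.items():
    lau1 = correspondence[lau2]  # KeyError here if the ID is really a ward ID
    consumption_lau1[lau1] = consumption_lau1.get(lau1, 0.0) + value
```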
At the moment, 3 arc-second data is used for most of Europe, but it doesn't cover anything north of 60°N. To cover the northern Nordic countries, the current approach is to supplement the data with 7.5 arc-second GMTED data. Perhaps, for consistency, a single data source could be used, such as this attempt at filling in missing 3 arc-second SRTM data: www.viewfinderpanoramas.org/dem3.html
Instead of packaging up the output of this repository and then pulling that information from Zenodo when running the Euro-Calliope workflow, I think it would be better to have this as a submodule of Euro-Calliope, similar to how Euro-Calliope is a submodule of the OSE model workflow. Some reasons for this:
a. They share a lot of the same input data, so there isn't much overhead in terms of preparing the datasets. In fact, downloading and generating shapefiles (one of the more time intensive tasks) only needs to be done once.
b. If a user wants to change something in Euro-Calliope that leads to needing different technical potential data, they have to wait for this to be re-packaged on Zenodo. This includes changing the spatial scope (see #1) and having a different spatial resolution (e.g. NUTS3).
c. You could re-generate the technical eligibility data only for the resolutions of interest to your energy system model (and skip the report generation), so the time/memory penalty would be low.
As discussed in #9, we buffer shapes that are 'invalid' to stop them breaking the workflow elsewhere. Invalid shapes are those with edges that self-intersect, e.g. think of a 'bow tie'. Many shape operations cannot be undertaken on invalid shapes (aggregating, overlaying, joining, etc.). Validity can be checked via the is_valid attribute of a shape object. Buffering cleans this up, but can cause bits of shapes to disappear completely (see cautionary comment here).
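The 'bow tie' problem can be demonstrated without any geometry library: the shoelace formula gives a polygon's signed area, and for a self-intersecting ring the two lobes have opposite orientation and cancel, so naive computations on the invalid shape give nonsense:

```python
# Sketch: why self-intersecting ("bow tie") rings break area computations.
def shoelace_area(ring):
    """Signed area of a polygon given as a list of (x, y) vertices."""
    n = len(ring)
    return sum(
        ring[i][0] * ring[(i + 1) % n][1] - ring[(i + 1) % n][0] * ring[i][1]
        for i in range(n)
    ) / 2.0

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
bow_tie = [(0, 0), (2, 2), (2, 0), (0, 2)]  # edges cross at (1, 1)

assert shoelace_area(square) == 4.0
assert shoelace_area(bow_tie) == 0.0  # the two lobes cancel: invalid shape
```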
At the moment, in #9, a check has been added to ensure the final area of a buffered shape is the same as the pre-buffered shape area, within a tolerance. Ideally we would set this tolerance in absolute units of e.g. m2. This can only be done, however, if the coordinate reference system is in metres (e.g. EPSG3035), which isn't always the case.
This can be resolved by moving the entire workflow to operate in a single coordinate reference system from very early on. Depending on the reference system, an absolute buffer tolerance could then be set. EPSG:3035 would possibly be the best for this, but if it is configurable, then we need to ensure that the buffer tolerance changes accordingly.
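The check itself would then be trivial; a minimal sketch, assuming a metric CRS such as EPSG:3035 (the function name and default tolerance are illustrative, not the workflow's actual values):

```python
# Sketch: absolute area tolerance for the buffering fix. Only meaningful
# when the CRS uses metres (e.g. EPSG:3035).
def area_preserved(area_before_m2, area_after_m2, tolerance_m2=1.0):
    """True if buffering changed the shape's area by at most tolerance_m2."""
    return abs(area_after_m2 - area_before_m2) <= tolerance_m2

assert area_preserved(1_000_000.0, 1_000_000.4)
assert not area_preserved(1_000_000.0, 999_990.0)
```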
As we are now adding changes to this repo from time to time, it would be good to have a continuous integration test. For that, the workflow must be 100% automatic, and we should have a configuration that requires minimal downloads and minimal runtime. We can then use a simple GitHub action that runs Snakemake with this configuration (example).
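Such an action could be sketched as follows; the file paths, action versions, and the name of the minimal config file are all assumptions:

```yaml
# .github/workflows/test.yaml — illustrative sketch
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: conda-incubator/setup-miniconda@v2
        with:
          environment-file: environment.yaml
      - name: Run minimal workflow
        shell: bash -l {0}
        run: snakemake --configfile config/minimal.yaml --cores 1
```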