timex's Introduction

ℹ️ This package is still under development and some functionalities may change in the future.

This is a Python package for time-explicit Life Cycle Assessment that helps you assess the environmental impacts of products and processes over time. bw_timex builds on top of the Brightway LCA framework.

Features

This package enables you to account for:

  • Timing of processes throughout the supply chain (e.g., end-of-life treatment occurs 20 years after construction)
  • Variable and/or evolving supply chains & technologies (e.g., increasing shares of renewable electricity in the future)
  • Timing of emissions (by applying dynamic characterization functions)

You can define temporal distributions for process and emission exchanges, which are then automatically propagated through the supply chain and mapped to corresponding time-explicit databases. The resulting time-explicit LCI reflects the technology status within the production system at the actual time of each process. bw_timex also keeps track of the timing of emissions, which means that you can apply dynamic characterization functions.
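
As a minimal, illustrative sketch, a temporal distribution can be attached to an exchange via bw_temporalis and then picked up by a TimexLCA calculation. The TimexLCA part is shown as comments because it requires set-up Brightway databases, and parameter and method names (e.g. database_date_dict) follow the documentation but may differ between versions:

    import numpy as np
    from bw_temporalis import TemporalDistribution

    # Spread an exchange over time: 30% now, 70% two years later.
    td = TemporalDistribution(
        date=np.array([0, 2], dtype="timedelta64[Y]"),
        amount=np.array([0.3, 0.7]),
    )
    # exchange["temporal_distribution"] = td   # attach to a Brightway exchange
    # exchange.save()

    # Sketch of a calculation (names may differ between versions):
    # from bw_timex import TimexLCA
    # tlca = TimexLCA(demand={activity: 1}, method=some_method,
    #                 database_date_dict=database_date_dict)
    # tlca.build_timeline()
    # tlca.lci()
    # tlca.static_lcia()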

Use cases

bw_timex is ideal for cases with:

  • Variable or strongly evolving production systems
  • Long-lived products
  • Biogenic carbon

Documentation and Resources

Contributing

We welcome contributions! If you have suggestions or want to fix a bug, please:

Support

If you have any questions or need help, do not hesitate to contact us:

timex's Issues

Dynamic characterisation bug with latest bw2data and ecoinvent 3.10

In some cases, the dynamic characterisation doesn't work. The error occurs when the default dynamic characterisation functions get mapped to the biosphere flows:
https://github.com/TimoDiepers/timex/blob/66634c0b913c95be57933c32c952535d454b9812/timex_lca/dynamic_characterization.py#L341
https://github.com/TimoDiepers/timex/blob/66634c0b913c95be57933c32c952535d454b9812/timex_lca/dynamic_characterization.py#L350-L351

Timex expects the impact assessment method's CFs to be stored as tuples of ((database, key), CF), whereas we now saw the problem that sometimes, in recent versions with brand-new environments, they are stored as (database_id, CF).
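
A hedged sketch of how the two key formats could be normalized when mapping CFs; get_node is from bw2data 4.x, and the method name in the usage comment is just an example:

    import bw2data as bd

    def flow_node_id(cf_key):
        # Normalize a CF key to a numeric node id: older setups store
        # (database, code) tuples, newer environments may already store ids.
        if isinstance(cf_key, tuple):
            return bd.get_node(database=cf_key[0], code=cf_key[1]).id
        return cf_key

    # Example usage (method name is illustrative):
    # for key, cf in bd.Method(("EF v3.1", "climate change", "GWP100")).load():
    #     print(flow_node_id(key), cf)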

wrong matrix modifications when biosphere flows are added at different levels

See linked notebook:
If biosphere flows exist both in the deep background (e.g. in a process that is in the background database but not directly linked to the foreground) and in the foreground, the matrix is modified correctly.
If biosphere flows only exist in the deep background, or if biosphere flows exist at the intersection between background and foreground (temporal markets), the matrices get modified incorrectly, see screenshot.

I do not know why this is the case, but I assume it might have something to do with the ids and/or the biosphere_datapackage.

[screenshot]

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/workflows/python-package-deploy.yml
  • actions/setup-python v5
.github/workflows/python-test.yml
  • actions/checkout v4
  • conda-incubator/setup-miniconda v3
pep621
pyproject.toml
  • bw2calc >=2.0.dev13
  • bw2data >=4.0.dev39
  • bw_graph_tools >0.4
  • bw_temporalis >=1.1
  • dynamic_characterization >=0.0.4
  • setuptools >=68.0

Explosion of processes that are not consumed

Currently, foreground processes are exploded even if they are not consumed. This adds unnecessary columns and rows to the matrix. This should be changed in the future, especially as we move towards bigger databases.

bug: for processes with earlier inputs, new exchanges are added to the wrong process copies; add tests

For a simple case study (see screenshot below), Case 1 (TD only at B) works as anticipated. However, in Case 2, where C happens 1 year (or any amount of time) earlier than B, the code doesn't work.

[screenshot]

The problem is that the new inputs of C into the new exploded process copies of B are created for the time of C, in this case 2019 and 2021, instead of at the time of B, in this case 2020 and 2022. This leads to empty inputs from C for the new process copies of B in 2020 and 2022, which are the rows consumed by A, and results in a wrong LCIA score.

I'm not entirely sure how to fix it myself, but I think we need to make sure that we take the t(link) of the processes, not of the exchanges, when relinking the exchanges. In Case 1 we didn't have this problem because tlink(exchange) == tlink(process).
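
A toy sketch of the intended relinking logic (all names and dates are hypothetical): the new input from C should be keyed by the timestamp of the consuming process copy of B, not by the timestamp of C:

    from datetime import datetime

    # Exploded copies of B, keyed by their own process time t(link).
    exploded_copies_of_B = {
        datetime(2020, 1, 1): "B_2020",
        datetime(2022, 1, 1): "B_2022",
    }

    def relink_input_from_C(consumer_time):
        # Use the time of the consuming process B (the t(link) of the
        # process), so the input from C lands in the column A consumes.
        return exploded_copies_of_B[consumer_time]

    assert relink_input_from_C(datetime(2020, 1, 1)) == "B_2020"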

See also attached notebook and excel:
issue_consecutive_TDs.zip

ids between mlca.activity_time_mapping_dict and mlca.activity_dict don't match unless project directory is purged

The ids in mlca.activity_time_mapping_dict start at len(biosphere)+1, assuming that biosphere ids start at 0 and simply skipping those of the biosphere flows. This only works if we always start from scratch by deleting the project directory, forcing BW to restart counting its internal ids at 0.

If we don't start from scratch by running

    import bw2data as bd

    if project_name in bd.projects:
        bd.projects.delete_project(project_name)
        bd.projects.purge_deleted_directories()

then the BW ids start at a higher integer and no longer match the ids in mlca.activity_time_mapping_dict.

[screenshot]

In the screenshot:
id = ids from mlca.activity_time_mapping_dict
self.activity_dict contains the BW ids
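
A possible direction, sketched under the assumption that the biosphere database is called "biosphere3": derive the first free id from the actual node ids in the project instead of assuming they start at 0:

    import bw2data as bd

    # Offset the mapping by the real maximum node id, not by len(biosphere),
    # so it also works when the project directory was not purged.
    biosphere_ids = [node.id for node in bd.Database("biosphere3")]
    first_free_id = max(biosphere_ids) + 1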

simplify mapping to seasonal or diurnal databases

Currently, databases are linked to a specific point in time, e.g. 2020 or 2020-01-01 12:00 am, and are then matched based on temporal closeness (closest neighbor or linear interpolation between the closest neighbors). For seasonal or diurnal analysis, we want to draw from the database that best represents the time within the year or within the day.

Here we should add a function that maps to the specific database based on the season/time of day instead of on temporal closeness.
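
A minimal sketch of such a mapping function (database names and season boundaries are illustrative):

    from datetime import datetime

    SEASONAL_DATABASES = {
        "winter": "db_winter",
        "spring": "db_spring",
        "summer": "db_summer",
        "autumn": "db_autumn",
    }

    def seasonal_database(date):
        # Map a timestamp to the database representing its season instead of
        # matching by absolute temporal closeness.
        season = {12: "winter", 1: "winter", 2: "winter",
                  3: "spring", 4: "spring", 5: "spring",
                  6: "summer", 7: "summer", 8: "summer",
                  9: "autumn", 10: "autumn", 11: "autumn"}[date.month]
        return SEASONAL_DATABASES[season]

    seasonal_database(datetime(2024, 7, 15))  # -> "db_summer"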

Adding biosphere exchanges to exploded processes

Currently, we only relink the technosphere exchanges of the exploded processes. These new processes also need to inherit the biosphere exchanges from their parent process. This corresponds to copying the biosphere matrix entries of the parent process to the exploded process copy. The temporal distributions of biosphere flows are dealt with separately in an additional temporal-mapping-cube.
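
Conceptually, this amounts to copying the parent's biosphere column to the new column of the exploded copy; a toy sketch with scipy.sparse (sizes and indices are illustrative):

    import numpy as np
    from scipy import sparse

    B = sparse.lil_matrix((3, 3))               # biosphere matrix (toy size)
    B[:, 0] = np.array([[1.0], [0.5], [0.0]])   # column of the parent process
    # Append a column for the exploded copy that inherits the parent's flows:
    B_extended = sparse.hstack([B, B[:, 0]]).tolil()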

Potential future case study

Maybe we could use the "Car example" from the Brightway from-the-ground-up repo as an easy-to-grasp case study in the future. I think the effects of changing background databases would be easy to see, e.g. when looking at the use phase of the electric car. Also, the biofuel car could include some temporalized biosphere flows, which could be interesting.

Maybe I'm thinking too far ahead here, but I stumbled upon this again and thought I'd leave a note here.

[screenshot]

Wrong scores when no filter function is defined

When no filter function is defined, it defaults to a function that always returns False, i.e. no nodes are skipped. Not defining a filter has worked before, but yields wrong results now. I guess it's because of the biosphere stuff?

add check for cutoff for graph traversal for prospective databases

The current approach traverses the original database and stops at a defined cut-off. It may happen that in a prospective database the production technologies are so different that processes with a large impact end up excluded by the cut-off, or that processes which should fall below the cut-off are included.

non-unitary foreground unit processes are not scaled

see test test_nonunitary_unitprocess on this issue branch.

If a foreground process produces an amount other than 1, this amount is not used to scale the inputs in the timeline, leading to wrong scores. It should be relatively easy to adjust our code.
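
The fix is essentially a division by the production amount; a trivial sketch with made-up numbers:

    # If the foreground process produces 2.5 units instead of 1, its inputs
    # in the timeline must be scaled to the functional unit:
    production_amount = 2.5
    exchange_amount = 10.0
    scaled_amount = exchange_amount / production_amount  # input per unit output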

Split dynamic technosphere and biosphere creation?

Currently, the technosphere and biosphere creation is always coupled. If users are not interested in the timing of the emissions but just want to know the "new" overall score, they should be able to skip this step, as it takes quite some time.

Fix amounts and add tests

The amounts just happened to work out because some exchange amounts were set to 1, so the interpolated shares had no influence on the total. We should also add tests checking whether the MedusaLCA calculations are correct.

add check for database names

Differences in the database name strings between the bw project and the assignment of their temporal validity (database_date_dict) simply result in a zero medusa LCA score.
We should add a check with an error message.
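
A sketch of such a check (variable names are illustrative; bd.databases is the project's database registry in bw2data):

    import bw2data as bd

    database_date_dict = {"db_2020": "2020-01-01", "db_2030": "2030-01-01"}

    # Fail loudly instead of silently returning a zero score when a database
    # name in database_date_dict does not exist in the current project.
    missing = [name for name in database_date_dict if name not in bd.databases]
    if missing:
        raise ValueError(f"Databases not found in this project: {missing}")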

add test for advanced biosphere

Add a test that checks if the static LCA inventory per emission (e.g. CO2) is the same as the sum of the temporalized emissions of the same type.
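
A sketch of what such a test could assert, with made-up numbers:

    import numpy as np

    static_co2 = 123.4                                # static LCA inventory for CO2
    temporalized_co2 = np.array([50.0, 40.0, 33.4])   # CO2 amounts per time step

    # The temporalized emissions of one flow should sum to the static inventory.
    assert np.isclose(temporalized_co2.sum(), static_co2)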

add documentation

A lot of functions will be hard to understand a few weeks from now -> add proper documentation.

Improve user-friendliness

I think there's potential to make the use of the medusa classes and functions more user-friendly.

  • integrate build_datapackage() into lci()
  • make the filter function clearer or provide a helper function for easy exclusion of e.g. background databases while still allowing custom filter functions
  • harmonise the naming of functions and variables

@muelleram feel free to add what comes to your mind

add temporal distribution of biosphere flows in a mapping cube.

Currently, we only "store" the time of the technosphere activities by creating time-specific copies of activities, but we do not account for an additional shift in time of the biosphere flows. In reality this is important, since e.g. a landfill process may release emissions over decades after the waste was dumped, or a tree may have captured CO2 for decades before harvest. The Super-B-Matrix in our approach only contains the amounts of biosphere flows, without (potentially) specific temporal distributions. Thus, we want to add an additional mapping cube to store the temporal information, with the dimensions: rows = biosphere flows, columns = processes of the Super-A-Matrix, 3rd dimension (depth) = time steps of the biosphere flows. The values along the 3rd dimension could sum to 1 for each flow/process pair, and multiplying the Super-B-Matrix value by the value at the respective time step would then yield the time-specific emission. The implementation should allow aggregating the time steps flexibly to enable dynamic LCIA with different temporal resolutions (e.g. seasonal CFs).
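
A toy numpy sketch of the proposed cube and how it would be applied (dimensions and values are illustrative):

    import numpy as np

    n_flows, n_processes, n_steps = 2, 3, 4
    cube = np.zeros((n_flows, n_processes, n_steps))
    cube[0, 1, :] = [0.25, 0.25, 0.25, 0.25]   # flow 0 of process 1, spread evenly

    b_amount = 8.0                              # value from the Super-B-Matrix
    emissions_over_time = b_amount * cube[0, 1, :]   # -> [2., 2., 2., 2.]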

Adapting remap_inventory_dicts()

On calling lca.remap_inventory_dicts(), the newly added columns and rows cannot be mapped. Once we have created a "MedusaLCA" or whatever class, we should also override this function and allow remapping back to the database keys. So basically, revert the *10000 etc. and add in the correct prospective database.

Generalize DynamicCharacterization interface

Instead of the specific timex implementation, this could be generalised into a kind of "interface" in the dynamic_characterization package. The default input to the interface would just be the dynamic inventory and a characterization function dict, and optionally also the temporal grouping and demand_timing_dict (by default it should just look for the -1 in consumer).

make t(link) (temporal aggregation of new process copies) flexible

Currently, t(link) (the temporal aggregation of new process copies) is set to 1 year. It would be great to let users define the level of aggregation themselves, possibly within a reasonable temporal range. E.g., seconds or minutes might not be suitable, as this would lead to a large number of new processes in the database and does not fit common LCA problems.
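
A minimal sketch of user-selectable aggregation (only year and month shown; names are illustrative):

    from datetime import datetime

    def aggregate_timestamp(date, resolution="year"):
        # Round a process timestamp down to the chosen temporal resolution.
        if resolution == "year":
            return datetime(date.year, 1, 1)
        if resolution == "month":
            return datetime(date.year, date.month, 1)
        raise ValueError(f"Unsupported resolution: {resolution}")

    aggregate_timestamp(datetime(2022, 7, 15), "month")  # -> 2022-07-01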

substitution exchanges are not recognized

While modifying the technosphere matrix, exchanges of type "substitution" are not recognized as such but are written as type "technosphere". This leads to the wrong sign for these exchanges.
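
A simplified sketch of the sign convention that needs to be respected (Brightway flips technosphere inputs to negative, while substitution exchanges stay positive); this is a sketch, not the actual matrix-building code:

    def signed_amount(exchange_type, amount):
        # Substitution (credited/avoided output) keeps a positive sign,
        # regular technosphere inputs are flipped to negative.
        if exchange_type == "substitution":
            return amount
        if exchange_type == "technosphere":
            return -amount
        raise ValueError(f"Unexpected exchange type: {exchange_type}")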

Prevent redundant calculations in dynamic_biosphere_builder

Right now, the LCI for each 'exploded' or timeline activity is calculated separately. However, this includes many duplicate calculations. We could create a dict containing all LCIs that have already been calculated and, before calculating a new LCI for a given demand, check whether it has already been computed.
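
A minimal sketch of such a cache (names are illustrative, not the actual bw_timex internals):

    lci_cache = {}

    def cached_lci(activity_id, compute_lci):
        # Reuse an LCI that was already calculated for the same demand.
        if activity_id not in lci_cache:
            lci_cache[activity_id] = compute_lci(activity_id)
        return lci_cache[activity_id]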
