trynthink / scout


A tool for estimating the future energy use, carbon emissions, and capital and operating cost impacts of energy efficiency and demand flexibility technologies in the U.S. residential and commercial building sectors.

Home Page: https://scout.energy.gov

License: Other

Languages: Python 99.99%, JavaScript 0.01%
Topics: building-energy energy-data energy-consumption energy-efficiency demand-side-management carbon-emissions

scout's People

Contributors

ardeliam, asatremeloy, aspeake, carlobianchi89, dewittpe, handichan, jlreyna, jtlangevin, lainsworth8801, robertfares, trynthink, vnubbe


scout's Issues

Consider valuation of ancillary measure benefits

We should think about how we can quantitatively capture ancillary benefits or "value streams" from various measures. These value streams are derived from a given measure but are not a result of the particular technology's energy efficiency. For example, envelope components can be a structural component of a building, supplanting other structural elements. As the efficiency of building components increases, the apparent energy benefit of an improved envelope declines, but that might be mitigated by including these additional benefits.

Add function that automatically checks external links

For maintenance purposes, it would be helpful to have a snippet of code that checks links that point outside of the site/repository to see if they are still valid and flags broken links by changing the link text color.
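A rough sketch of what such a checker could look like in Python, using only the standard library (the regex-based link extraction and the idea of printing broken links for follow-up are assumptions, not existing code):

```python
# Sketch of an external-link checker; flagging links by changing their
# text color would be a separate step once broken URLs are identified.
import re
import urllib.request
import urllib.error

EXTERNAL_LINK = re.compile(r'href="(https?://[^"]+)"')

def extract_external_links(html):
    """Return all absolute http(s) URLs found in href attributes."""
    return EXTERNAL_LINK.findall(html)

def link_is_valid(url, timeout=10):
    """Return True if the URL responds with a non-error status code."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

Running `extract_external_links` over each page's HTML and passing the results through `link_is_valid` would yield the list of links needing repair.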

Capture interactions between microsegments from some measures

Some measures, while they might apply to a set of microsegments, can also have interaction effects on other microsegments. As an example, a measure for LED bulbs is a lighting measure and should be coded as impacting the appropriate lighting microsegments. At the same time, replacing fluorescent or incandescent bulbs with solid-state LED bulbs will also reduce cooling load and increase heating load. These interactions need to be captured. They should probably be included in the model itself, and not be coded into the measure definition, as that would require that individuals creating measures accurately note the relevant interactions for that particular measure, and might create an opportunity for errors (e.g. if someone forgets to code for changes in both heating and cooling from a lighting-specific measure).
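One way the model-side handling could be sketched (the factor values, end-use names, and sign convention below are illustrative placeholders, not real data):

```python
# Illustrative sketch of model-side interaction handling. Positive
# factors mean the affected end use also saves energy; negative factors
# mean its load increases (e.g., the lighting heat-replacement effect).
INTERACTION_FACTORS = {
    "lighting": {"cooling": 0.15, "heating": -0.10},  # made-up values
}

def apply_interactions(savings_by_end_use):
    """Adjust per-end-use savings for cross-end-use interaction effects."""
    adjusted = dict(savings_by_end_use)
    for end_use, savings in savings_by_end_use.items():
        for affected, factor in INTERACTION_FACTORS.get(end_use, {}).items():
            adjusted[affected] = adjusted.get(affected, 0.0) + factor * savings
    return adjusted
```

Keeping the factor table in the model, rather than in each measure definition, means a lighting measure author never has to remember the heating/cooling side effects.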

Change measure details modal into a static form

Reformat the measure details modal into a form with the class .form-control-static. The “Edit Measure” button should then change the class of the form such that it becomes editable. The “Edit Measure” button should then also be replaced with a “Save Changes” (or similar) button, and a “Cancel” button should be added that switches back to a static form by again editing the class.

Support handling of 'all' within the microsegment subfields of measures

It might be helpful to incorporate the ability to parse "all" for some of the microsegment definition fields in the measures, particularly building type and climate zone. I don't think it would be too difficult to implement, and would make the measure database more readable (if less explicit). I don't know if the lack of specificity of "all" makes this approach problematic or helpful in the long run.
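A minimal sketch of how "all" expansion might work (the valid-value lists below are illustrative, not the full sets used in the microsegments data):

```python
# Expand "all" in a measure's microsegment fields into explicit lists.
# The field names and value lists here are illustrative placeholders.
VALID_VALUES = {
    "climate_zone": ["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
    "bldg_type": ["single family home", "multi family home", "mobile home"],
}

def expand_field(field_name, value):
    """Expand 'all' into every valid value for the field; pass through
    explicit selections unchanged (wrapping scalars in a list)."""
    if value == "all":
        return list(VALID_VALUES[field_name])
    return value if isinstance(value, list) else [value]
```

Expanding "all" at read-in would keep the measure JSON compact while the rest of the code always sees explicit lists.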

Automatically calculate data quality score

In the interest of automating as many management tasks as possible, we might be able to develop a method for automatically generating a score for each of the data sources. This automation would eliminate the need for someone to manually review each new measure for quality. For example, using some regexes on the source information provided, we could determine the type of source indicated and whether it is considered a "trusted" or "high quality" source. As needed, we could use some webcrawling capability to explore the linked page to make sure the measure and the sources provided are related. We could also automatically flag just those measures that can't be easily scored using automatic means and push them to someone for review.

Allow measures to have microsegment-specific performance levels

Currently, each measure can have only a single performance (e.g., IEER 20) or energy savings (e.g., 20%) level specified. For measures that apply to commercial buildings, output from EnergyPlus commercial building models will provide data to support multiple performance or energy savings levels. Performance levels could be specified by climate zone, building type, and end use.

The model should be able to support multiple performance levels at various levels of specificity. At a minimum, HVAC measures should be able to have separate performance levels if they apply to both heating and cooling (e.g. heat pumps).
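One way to support both scalar and nested performance specs, sketched in Python (the key order and the fallback-to-scalar behavior are assumptions about the eventual schema):

```python
# Resolve a performance level that may be either a single scalar or a
# dict nested by climate zone, building type, and end use; stopping at
# the first scalar lets a spec be as specific or as coarse as needed.
def resolve_performance(perf, climate_zone, bldg_type, end_use):
    """Walk a nested performance spec, stopping early at a scalar."""
    for key in (climate_zone, bldg_type, end_use):
        if not isinstance(perf, dict):
            break
        perf = perf[key]
    return perf
```

Under this scheme a heat pump measure could carry `{"heating": ..., "cooling": ...}` at the innermost level while simpler measures keep a single number.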

Provide square footage information in microsegments JSON

The "RESDBOUT.txt" file includes number of units associated with each of the "supply" microsegment technologies in the residential sector, allowing us to normalize each of these microsegments' energy consumption values to a per unit basis. This normalization facilitates comparison across microsegments and measures.

However, number of units information is not applicable in the case of residential "demand" microsegments (e.g., wall insulation upgrades), and will not be available in the AEO for any of the commercial microsegments. In these cases, we can normalize the microsegment energy consumption values to a per square foot floor area basis, as floor area data are available in the AEO for both the residential and commercial sectors, and are broken down by census division and building type.

To add this information to the existing "microsegments.json", we can create a new "square footage" level, and wrap the existing microsegment information in an "energy & number units" level, as follows:

{ "square footage" : { (new square footage information here) } },
{ "energy & number units" : { (existing microsegments information here) } }
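The restructuring step itself would be a small transformation (the function name and the shape of the square footage data are assumptions):

```python
# Wrap existing microsegment content and new floor area data under the
# two proposed top-level keys; sqft_data's internal shape is TBD.
def add_square_footage(msegs, sqft_data):
    """Nest existing microsegment data and new floor area data under
    the proposed top-level keys."""
    return {
        "square footage": sqft_data,
        "energy & number units": msegs,
    }
```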

Configure RESTful API for microsegments JSON

To be able to query the JSON for the market definition calculator (and to provide general access to outside users), the database should be set up with an API. It seems that there are many methods for setting up a REST API, so we'll need to figure out what offers the best/easiest/cleanest approach.

Address handling of embedded product measures

A measure might be specific to electric motors, where improvements in the efficiency of the technology have the potential to affect multiple microsegments, but the efficiency improvement in that technology will not have a uniform effect on the relevant microsegments. For example, a new electric motor design might use 23% less electricity to produce an equivalent amount of work, but that might reduce the electricity use of a central air conditioning system by 1% or a dishwasher by 3%.

Another embedded product measure could be an improvement in the efficiency of AC-DC converters rated less than 100W. The efficiency improvement would have unequal impacts on different microsegments, but, more importantly, would probably apply to only some fraction of the products in multiple microsegments. These variations in applicability by microsegment create a further complication.

It is not obvious how to implement such a measure without having to create a separate measure for each affected microsegment, which is not desirable. It would be far better if the efficiency improvements relevant to different microsegments could somehow be incorporated into a single measure.

Develop site-source and carbon emissions conversion factors and add to "run.py"

Site-source conversion factors are needed for each EIA projection year to convert our site energy numbers to source (or primary) energy numbers. At the same time, data on the carbon intensities of each fuel source included in our analysis will allow us to estimate the avoided carbon emissions associated with calculated measure energy savings.

The above information may be gleaned from EIA AEO summary tables, which include data on delivered/total energy consumption and carbon emissions for the entire buildings sector, for each projection year.

Going forward, we may continue to construct site-source and carbon intensity information from the summary tables (in Excel); however, it would ultimately be easier to mine this information from a raw data file attached to each new version of the AEO. Whether this file exists is a question for EIA.

Additionally, we may want to break our carbon intensity data down to a more granular level, as the carbon intensity of the electricity grid varies greatly on a regional basis. If census-level carbon intensity data exist, for example, these data could be easily integrated into our existing "microsegments.json" structure.

Capturing ET spending on measures

We should probably include the ability to track ET spending on various measures, but I'm not sure if it should be a separate database or combined into the measure definition JSON.

Add fields to sample object in JSON database

There are several data fields that could or should be added to all of the measures in the JSON database:

  • Data quality measures (energy savings, cost, and market)
  • ET program spending and target market entry date
  • Person who submitted/last updated (should probably timestamp this record as well)
  • Current TRL/R&D status
  • Fuel switching/fuel mix
  • Cascade/staging parameters

Do we want to have the capability to include multiple estimates and sources for e.g., energy savings?

Develop test for random sampling outputs in "run.py"

The "run.py" module now accommodates input distributions for measure performance and cost, which yield value distributions for associated outputs. A robust way of testing these output distributions is needed. This test might check for approximately correct values of distribution parameters (e.g., mean and variance) and the correct number of output values (based on sampling N), and could also check that the output values are drawn from a population with the expected probability distribution (e.g., via a chi-square test).
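The parameter and count checks could look something like this, using only the standard library (the tolerances and sample size are illustrative; a distribution-shape test would be layered on top):

```python
# Sketch of the proposed output-distribution checks: verify the sample
# count and that mean/standard deviation land near expected values.
import random
import statistics

def check_samples(samples, expected_n, expected_mean, expected_sd,
                  mean_tol=0.1, sd_tol=0.1):
    """Raise AssertionError if the sample count or the sample mean and
    standard deviation fall outside the given tolerances."""
    assert len(samples) == expected_n, "wrong number of sampled values"
    assert abs(statistics.mean(samples) - expected_mean) <= mean_tol
    assert abs(statistics.stdev(samples) - expected_sd) <= sd_tol
```

For example, 5,000 draws from `random.gauss(10.0, 2.0)` should pass `check_samples(samples, 5000, 10.0, 2.0)` with modest tolerances.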

Change the recording of years associated with the microsegment data

The stock and consumption data for each microsegment are currently stored in lists, where each entry corresponds to a year indicated in the original data but not currently stored; the associated years are kept in a separate list in the JSON database. Instead, the years should be recorded alongside the data in a dict at each leaf node in the database. For example, at a leaf node:

Current method: "stock": [6714.1, 6703.5, ... , 6169.3, 6151.2]
New method: "stock": {"2009": 6714.1, "2010": 6703.5, ... , "2039": 6169.3, "2040": 6151.2}

This approach should reduce the risk of the data and years being mismatched without warning or visibility into the error. Both mseg.py and mseg_test.py will require updating, as will switching microsegments.json back to its earlier structure.
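The conversion from the current parallel lists to the proposed dict is a one-liner plus a length check (a sketch; the guard against mismatched lengths is the point of the change):

```python
# Convert parallel years/values lists into the proposed year-keyed
# dict, refusing silently mismatched inputs instead of zipping them.
def to_year_dict(years, values):
    """Pair each year with its value, failing loudly on length mismatch."""
    if len(years) != len(values):
        raise ValueError("years and values lists have different lengths")
    return {str(year): value for year, value in zip(years, values)}
```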

Determine workflow for verifying and adding submitted measures

To prevent junk measures, a captcha and/or an API key approved by a real person might be needed. Once accepted, should measures be added directly or be stored in a separate JSON file awaiting approval and semi-manual addition to the approved measures list?

Support both unitized and scalable measures

Some measures relate to unitized devices or technologies, such as induction cooktops or set-top boxes. On the other hand, HVAC technologies in particular come in many sizes. I'm not sure if there's some reason we might need to be able to define a unit size (e.g. 3 ton) for these technologies where the system could be increased or decreased in size for a given installation.

Configure market calculator to clearly show only valid selection pathways

As an example to illustrate the problem with the way the calculator is set up now, there are some cases where multiple end uses will share fuel type selections, and in these cases, the fuel type selection block will appear for each end use. This will inevitably lead to confusion about which fuel type selection goes with which end use, and it would be better if multiple end uses could share only one selection of fuel types.

Unless we choose to restrict the user to only “sensible” combinations of selections (preventing, for example, an HVAC selection along with TVs), some revisions to the interface might be required to clearly convey that a fuel type field applies only to some of the end uses chosen. Without that, users might be misled into thinking, using the prior example, that the fuel type selection for the HVAC end use also applies to TVs and that there’s some category for natural gas-powered TVs.

Support measures that include fuel switching

Some measures might incorporate fuel switching within a segment, such as replacing a gas cooktop or range with a more efficient induction electric unit. Such measures might require defining two microsegments, corresponding to the incumbent and new fuel types, as well as other supporting information not in traditional measures.

It is also possible that we should support fuel switching by simply defining measures and then instructing the solver/engine to select the appropriate technologies either with or without fuel switching. Incorporating it into the solution step might be inconsistent with the installed cost data in each measure.

Update HTML to support accessibility

Bootstrap's documentation includes several notes on how to configure various page elements to support accessibility. These changes should be incorporated throughout.

Understand the commercial microsegment data

The commercial microsegment data are divided across two files and the content in the files is often unclear. The following questions remain unanswered:

  • KSDOUT
    • Which rows need to be included or can be ignored (considering, in particular, the Description column)?
    • What are the units of the Eff column?
    • What is the meaning of the v (vintage?) column?
  • KDBOUT
    • What do the Label column entries mean?
    • Can any of these data be ignored?

Generate baseline microsegment performance levels

I believe it is preferred that measures be described with absolute technical performance, e.g. COP, HSPF, kWh, lumens/W, etc. instead of percent energy savings, which is how many measures were defined historically. If measures are defined in absolute terms, to determine the impact of the measure against the baseline, do we then need to define the baseline or stock absolute performance level?

For example, if we have a measure for "ultrasonic clothes dryers" that uses 56 Wh/load, am I correct in thinking that without knowing the Wh/load performance of the existing stock of clothes dryers, we can't determine the impact of the measure?

If these data are needed to support the tool, it is possible that it belongs in the microsegment database or a companion database with the same structure, not embedded in the measures.
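The clothes dryer example can be made concrete (the 600 Wh/load baseline below is a made-up placeholder, not real stock data, which is exactly the missing piece):

```python
# Illustrative impact calculation for the clothes dryer example; the
# 600 Wh/load baseline is a fabricated placeholder, not stock data.
def relative_savings(baseline_per_unit, measure_per_unit):
    """Fractional energy savings of a measure versus the baseline stock."""
    return 1 - measure_per_unit / baseline_per_unit
```

With a 56 Wh/load measure, `relative_savings` is undefined until a baseline value is supplied, which is why the baseline performance data would need to live somewhere the engine can find it.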

Implement ability to group measures together

Each measure should be elemental; that is, it should not combine multiple efficiency improvements or multiple technologies. Rather, it should be possible for the user to join or link multiple measures together for staging purposes, while keeping each measure separate and specific.

Put another way, to consider the effect of measures A, B, and C, as well as combinations of the three, one would need to create measures for each of the combinations of the three, for a total of seven measures (A, B, C, AB, BC, AC, ABC). This requirement creates a chance for errors to be introduced; if A has a mistake, it is possible that measure A will be updated but measures AB, AC, and ABC could be missed. Trying to impose a requirement on users to carefully check all other measures when applying corrections is not a sustainable long-term strategy. Instead, by allowing users to define groups of measures, when one measure is updated, all of the groups using that measure will be updated as well.
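The seven-combination arithmetic above falls out of itertools directly; a sketch of generating groups from elemental measures (so only A, B, and C ever need to be defined or corrected):

```python
# Generate every non-empty combination of elemental measures, so a fix
# to one measure automatically propagates to all groups containing it.
from itertools import combinations

def all_groups(measures):
    """Every non-empty combination of the given elemental measures."""
    return [combo for r in range(1, len(measures) + 1)
            for combo in combinations(measures, r)]
```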

Incorporate deployment actions

Deployment actions include, for example, "incorporate technology C into commercial codes" or "accelerate retirement of product D." These actions modify technology measures (by shortening the lifetime of the current stock, for example) or modify the operation of the engine by redefining the market conditions for a microsegment (by increasing market uptake of improved products in that microsegment, for example).

These actions are currently commingled with the technology measures, but they should probably be separated, since they have a different effect on the technology prioritization calculations. These actions are only relevant once we can support the adoption potential scenarios.

Tracking units in the model and databases

It might be helpful to have a method that tracks and checks units of numbers in the various databases. As it is currently set up, the measure JSON database includes a field to support strings denoting units. Units are not tracked in the microsegment data. While we can include units in the wiki, if new versions of the microsegment source data are reported in different units, there is no method currently in place to ensure that those changes will be detected and/or corrected. Additionally, units in measures are not necessarily standardized, and might require conversion for use in the model.
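A minimal unit registry could start like this (the conversion factors are standard physical constants, e.g. 1 kWh = 3,412 Btu and 1 therm = 100,000 Btu, but the registry design itself is an assumption):

```python
# Minimal sketch of a unit registry that fails loudly on unknown unit
# pairs instead of silently mixing units across databases.
CONVERSIONS = {
    ("kWh", "MMBtu"): 3412 / 1e6,  # 1 kWh = 3,412 Btu
    ("therm", "MMBtu"): 0.1,       # 1 therm = 100,000 Btu
}

def convert(value, from_unit, to_unit):
    """Convert a value between units, raising on unknown pairs."""
    if from_unit == to_unit:
        return value
    try:
        return value * CONVERSIONS[(from_unit, to_unit)]
    except KeyError:
        raise ValueError(f"no conversion from {from_unit} to {to_unit}")
```

Requiring every stored quantity to pass through such a registry would surface unit changes in new source data versions as errors rather than silent mismatches.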

Support Non-Energy Benefits (NEBs) evaluation

Each measure we include in our technical potential analysis may have benefits beyond energy savings/energy cost savings alone (Non-Energy Benefits, or NEBs), including [1]:

  1. Participant NEBs (i.e., increased value, comfort, health, and safety).
  2. Utility NEBs (i.e., infrastructure savings).
  3. Societal NEBs (i.e., reduced carbon emissions/water use, other environmental benefits, job creation, labor productivity, neighborhood stability)

This milestone seeks to incorporate carbon emissions reductions as an initial NEB consideration in our analysis framework. Carbon emissions data for various fuel sources are available from EIA, and the Cost of Conserved Carbon has recently been added to the existing version of the p-tool; both resources may be leveraged in achieving this milestone.

[1] http://www.cpuc.ca.gov/NR/rdonlyres/BA1A54CF-AA89-4B80-BD90-0A4D32D11238/0/AddressingNEBsFinal.pdf

Use Jekyll to provide header and footer templates

GitHub Pages supports Jekyll, which enables templates for common page elements, like headers and footers. Since the headers and footers on all the pages are the same, using Jekyll should reduce maintenance effort.

Create measure integrity tests

There are two schools of thought regarding testing data (measures, in our case) for integrity. One view is that the data should be checked to prevent later errors in the program; the opposing view is that those later errors necessarily expose the problem with the underlying data, and thus checking the data initially is a waste of computation time. There appear to be strong ideas supporting both approaches, but since the measure data are largely static, I think it makes sense to check them when they are read into the program, rather than waiting for a later error that is harder to diagnose.
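A read-in check could be as simple as validating required fields and types (the field names and types below are illustrative, not the actual measure schema):

```python
# Sketch of load-time measure integrity checks; returning a problem
# list rather than raising lets all issues be reported at once.
REQUIRED_FIELDS = {
    "name": str,
    "energy_efficiency": (int, float, dict),
    "installed_cost": (int, float),
    "bldg_type": (str, list),
}

def check_measure(measure):
    """Return a list of integrity problems (empty means the measure passed)."""
    problems = []
    for field, types in REQUIRED_FIELDS.items():
        if field not in measure:
            problems.append(f"missing field: {field}")
        elif not isinstance(measure[field], types):
            problems.append(f"wrong type for field: {field}")
    return problems
```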

Describe technologies supplanted by measures

In creating the sample measures, I realized we might need to provide some information defining the performance of the products being replaced as well as the measures (or replacing products).

  • Service lifetime
  • Cost

It is possible that this information is only needed for the adoption potential-type simulations.

Figure out how to filter or subset instances of the Measure class

I have a lingering concern that using a (single) class to define all of the measures will leave us without a way of selecting a subset of the measures on which to perform an analysis. I imagine we'd want to be able to filter on microsegment and filter out data with low data quality rankings.

There appear to be several candidate approaches, including (this isn't an exhaustive list):

  • assigning each measure to lists representing the appropriate group(s) when it is created
  • coding measures something like itertools
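A sketch of the first approach, using a class-level registry populated at creation time (the attribute names and `select` interface are assumptions, not existing code):

```python
# Sketch of a Measure class that registers every instance so subsets
# can be selected by attribute (e.g., end use or data quality rank).
class Measure:
    registry = []  # every instance, appended at creation

    def __init__(self, name, end_use, data_quality):
        self.name = name
        self.end_use = end_use
        self.data_quality = data_quality
        Measure.registry.append(self)

    @classmethod
    def select(cls, **criteria):
        """All instances whose attributes match every given criterion."""
        return [m for m in cls.registry
                if all(getattr(m, k) == v for k, v in criteria.items())]
```

Range-based filters (e.g., data quality above a threshold) would need predicates rather than equality, but the registry pattern stays the same.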

Revise market entry year dropdown to be dynamic

  • Generate list of years based on years in microsegments JSON
  • Ensure that current year (as of the user’s access date) is the earliest year in the list
  • Continue, with dynamically generated list, to have 2030 as the default year

Restructure commercial data

Since the commercial data from EIA are not given as total energy for each microsegment, as in the residential database, the data must be restructured into that form, as is done in the p-tool.

Developing microsegments in conjunction with ongoing Excel work

We need to develop a market microsegments database for our analysis framework to reference, which can be updated as new AEO, RECS/CBECS, and other relevant data sources are refreshed. These microsegments could be defined through a tree structure, such as for a highly insulating window: Climate Zone X -> Commercial Buildings -> Commercial Building Type X -> Heating Energy -> Heating Energy Lost through Windows (note in that particular case, we might create a parallel segment for Cooling Energy to identify effects of higher window insulation on cooling energy).

Some of this work is already being done in the existing Excel framework to bring the tool up to AEO 2014 and ensure the microsegments are correctly defined. There is a question of whether the above efforts may be combined with this ongoing Excel work to save time. One possibility discussed is to establish a code that updates all the microsegments (or perhaps just the microsegments not already updated) and pastes the resultant data into the existing Excel microsegments tab as a .csv.

Add AJAX to measure details modal

Revise the measure details modal to include all of the measure content for the measure selected using an AJAX call to the measures JSON file.

Add license to repository

Define a license with text (named LICENSE.md) and an accompanying link to the master license from the source organization. Whatever we choose should allow our use of components or tools that have other licenses (e.g. MIT, BSD). We should also avoid a copyleft license that requires derivative works to be licensed similarly (e.g. GPL, CC BY-SA).
