
hack's Introduction



Welcome to Carbon Hack 2024

🌍 Beyond Carbon, Beyond Limits

From 18th March to 8th April 2024, technology professionals are invited to revolutionize the way we measure software to reduce its environmental impact!

This year, we're pushing boundaries by going beyond carbon: exploring the full spectrum of environmental impacts with the Impact Framework (IF), an open source measurement tool (currently in beta) for measuring software across all components and environments in order to reduce its ecological footprint.

We're challenging practitioners to come together in small virtual teams to showcase IF's prowess in reducing software's ecological footprint: carbon emissions, water usage, land impact, and more. This is your chance to make a real impact.

Closing & Awards Ceremony

ATTEND THE CLOSING & AWARD CEREMONY LIVE STREAM ON APRIL 18TH! 🏆

For three weeks, software practitioners around the globe have been at work redefining environmental impact measurement using Impact Framework, an innovative open source measurement tool that's breaking convention to make software measurement transparent, auditable, and verifiable for everyone.

Witness the culmination of this global hackathon, which has drawn more than 500 software hacktivists and launched 50+ new solutions.


Join us https://hack.greensoftware.foundation/awards/

Impact Framework

Check out our latest video explaining the use and power of Impact Framework (IF), a cutting-edge, open source measurement tool debuting at Carbon Hack 24 that is democratizing impact data and decentralizing software measurement to reduce software's environmental impacts.

Impact Framework explainer video

Sponsors

Accenture      Amadeus      Aveva      IMDA      Nedbank      NTT Data      Sentry Software

Community Partners

Microsoft

Quick Links

Contact Us

For support or questions, email [email protected].

hack's People

Contributors

adamj89, jawache, jmcook1186, russelltrow, seanmcilroy29, zanete



Forkers

mdejolier

hack's Issues

k8s importer and visualization

Prize category

Best Plugin

Overview

Kubernetes has emerged as the de facto container orchestration platform and is supported by all major cloud service providers (CSPs).

We intend to build a k8s metrics importer that imports metrics from the k8s metrics-server and exposes them, in near real time, as Prometheus metrics to be scraped by Prometheus and subsequently visualised in Grafana. This will give us carbon metrics at the same rate as other operational metrics like utilization. Real-time carbon metrics in Prometheus open up many possibilities for alerting and for triggering automations when certain thresholds are reached.

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@gholtzhausen, @kungelaxyz, @tshepotshabalala, @Njuraa, @ShanelUchee02, @adamaucamp, @Choogenhout

Terms of Participation


Project Submission

Summary:

Kubernetes (k8s) is the main container orchestration platform, supported by all major Cloud Service Providers (CSPs).

Our project revolves around importing real time metrics from Kubernetes, processing those metrics through a standard Impact Framework (IF) pipeline to derive the SCI (Software Carbon Intensity), and then exporting them as Prometheus metrics. These metrics are subsequently scraped by a Prometheus server and visualised in Grafana, appearing alongside regular performance metrics that we currently collect from Kubernetes.

This enables us to scale our application based on carbon metrics, treating them with the same importance as, say, CPU or memory. Carbon thus becomes a top priority when evaluating application performance and efficiency.
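The SCI derivation in the pipeline follows the GSF specification: SCI = ((E × I) + M) per R, where E is energy consumed, I is grid carbon intensity, M is embodied emissions, and R is the functional unit. A minimal TypeScript sketch of that formula (function and parameter names are ours, not the team's code):

```typescript
// Software Carbon Intensity per the GSF spec: SCI = ((E * I) + M) / R
// E = energy consumed (kWh), I = grid carbon intensity (gCO2e/kWh),
// M = embodied emissions (gCO2e), R = functional unit (e.g. requests)
function sci(
  energyKwh: number,
  gridIntensity: number,
  embodied: number,
  functionalUnit: number,
): number {
  if (functionalUnit <= 0) throw new Error("functional unit must be positive");
  return (energyKwh * gridIntensity + embodied) / functionalUnit;
}

// Example: 0.5 kWh at 900 gCO2e/kWh, 50 gCO2e embodied, per 1000 requests
const score = sci(0.5, 900, 50, 1000); // 0.5 gCO2e per request
```

Each plugin in the IF pipeline contributes one of these terms before the final SCI step.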

Problem:

Our solution addresses the pressing need for real time metric acquisition and visualisation, with a particular focus on data generated from the IF. Despite the widespread adoption of Kubernetes across our organization, Nedbank, we encountered a significant gap: the absence of emissions tracking capability within Kubernetes. Furthermore, within IF, the lack of real time emissions monitoring posed a challenge, as existing solutions did not offer this capability.

Aiming to quantify the environmental impact of software, we identified several underlying problems to solve. Firstly, we needed to procure real time usage metrics from k8s, which is our chosen compute platform. Thereafter, we implemented a system to export these metrics as Prometheus metrics and expose them on an endpoint accessible to a Prometheus server for scraping.

Leveraging the IF as an open-source tool, our solution aims to bridge these gaps and provide a comprehensive approach to measuring and visualising the environmental impact of software in real time.

Application:

The application consists of three key components:

  • The Kubernetes importer plugin, responsible for importing metrics from Kubernetes.
  • The Prometheus exporter plugin, facilitating the export of metrics.
  • The server, which continuously executes the Impact Framework (IF) upon request of the “/metrics” endpoint, exposing the exported Prometheus metrics.

These importer and exporter components are integrated into a standard IF file, which is executed when a Prometheus server scrapes the metrics from the server.
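The exporter's job at the "/metrics" endpoint is to render IF outputs in the Prometheus text exposition format. A simplified sketch, with illustrative metric and label names (the actual plugin's naming may differ):

```typescript
// Hypothetical shape of one IF output row (field names are illustrative).
interface SciOutput {
  pod: string;
  namespace: string;
  sci: number;
}

// Render outputs in the Prometheus text exposition format that a
// scraper expects from a /metrics endpoint.
function toPrometheus(outputs: SciOutput[]): string {
  const lines = [
    "# HELP sci_gco2e Software Carbon Intensity per functional unit",
    "# TYPE sci_gco2e gauge",
  ];
  for (const o of outputs) {
    lines.push(`sci_gco2e{pod="${o.pod}",namespace="${o.namespace}"} ${o.sci}`);
  }
  return lines.join("\n") + "\n";
}
```

The server then needs only to run the IF pipeline on each request and return this string with the `text/plain` content type Prometheus expects.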

The application is designed to operate within a standard k8s cluster with the metrics-server installed. To streamline visualisation and scraping processes, we opted for the kube-prometheus stack, the go-to Prometheus installation for monitoring and visualizing Kubernetes metrics.

Additionally, the solution features a default dashboard that can be effortlessly imported into Grafana. This dashboard includes a pre-configured alert set to trigger when carbon usage (SCI) surpasses a specified threshold within any 5-minute window.

This setup ensures seamless integration, efficient monitoring, and proactive alerting, making it easier to manage and optimise our carbon footprint in real time.

Prize category:

Best Plugin

Judging Criteria

Overall Impact:

Our solution facilitates real time export and visualization of metrics via IF -> Prometheus -> Grafana, enabling any metric to be calculated and exported in real time. This shifts sustainability from an afterthought to a proactive decision. Achieving this impact requires monitoring teams to incorporate IF metrics into their monitoring practices and to be willing to share and publish the YAML files containing the pipelines that inform these real time metrics.

Opportunity:

While our chosen platform for the hack was Kubernetes, our system can be applied to any metric for real time calculation and export. The IF's composability allows any valid YAML file to add the Prometheus exporter, facilitating metric export and visualisation. We intentionally decomposed our implementation so the components can be used independently, in a variety of scenarios.

Modular:

Our implementation aligns well with the Unix philosophy and micro-model architecture. The k8s-importer and Prometheus-exporter plugins each perform a single task efficiently. The Express.js server seamlessly exposes Prometheus-formatted metrics from the IF, ensuring modularity and interoperability with other plugins. Each component can function independently or in conjunction with other official and unofficial plugins, promoting flexibility and integration within the Green Software Foundation (GSF) ecosystem.

Video

Youtube Link

Artefacts

The kubernetes importer plugin:
nb-green-ops/if-k8s-metrics-importer (github.com)

The prometheus exporter plugin:
nb-green-ops/if-prometheus-exporter (github.com)

The main "hack" application repo that brings it all together:
nb-green-ops/carbon-hack-24 (github.com)

Usage

The main usage Readme. Located in the main repo:
carbon-hack-24/README.md at main · nb-green-ops/carbon-hack-24 (github.com)

Process

Coming from an organization versed in the Agile way of doing things, we started by identifying the end goal and the tasks we needed to complete to reach it. We then prioritized and distributed these tasks based on each team member's skill set. Checking in regularly was key, as we are a distributed team; we held standups to facilitate this and to ensure everyone was on the right track. We soon started integrating everyone's work and holding larger working sessions where everyone collaborated to put the solution together and make sure all the pieces fit.

This iterative process continued as we solved challenges and changed implementations as needed for the various components to interface correctly. We finished the technical part of the hack sooner than expected and had some fun figuring out video editing tools and discovering the YouTuber within ourselves.

Inspiration

At Nedbank, our organisation is actively engaged in several strategic initiatives centered around ESG and sustainability. Among these initiatives is the modernisation of numerous legacy applications, often involving containerisation and deployment to Kubernetes. This is the case for most financial service organisations in South Africa.

We aim to embed Green Engineering principles and practices within our squads. As Peter Drucker said, "You can't improve what you don't measure". Therefore, we recognize the need for a real time way to measure and inform squads about their performance from an ESG perspective. As we know, the key to making ESG real is not to measure for the sake of measuring but to measure what matters.

Challenges

Dynamic Infrastructure:

Navigating the rigidity of the current framework posed a challenge as it lacked a mechanism to dynamically generate a tree of components before pipeline execution. In the dynamic landscape of modern cloud environments, particularly within Kubernetes, the constant creation and deletion of pods and nodes presented a hurdle. We addressed this by devising a solution that involved specifying a flat list and appending infrastructure-specific values to observations, enabling us to identify them accurately upon completion.

(A flat list refers to a list structure where all elements are at the same level, typically arranged sequentially with no hierarchical relationship.)
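A sketch of that idea, with illustrative field names: each observation in the flat list carries its own infrastructure labels (node, pod), so results can be re-attributed to workloads after the pipeline completes.

```typescript
// Each observation in the flat list carries the infrastructure labels it was
// taken from, so there is no need for a pre-built hierarchical tree.
// Field names here are illustrative, not the plugin's actual schema.
interface Observation {
  timestamp: string;
  node: string;
  pod: string;
  "cpu/utilization": number;
}

// Re-group a flat list of processed observations by pod name.
function groupByPod(observations: Observation[]): Map<string, Observation[]> {
  const groups = new Map<string, Observation[]>();
  for (const o of observations) {
    const bucket = groups.get(o.pod) ?? [];
    bucket.push(o);
    groups.set(o.pod, bucket);
  }
  return groups;
}
```

Because pods and nodes come and go, this keeps the manifest static while the label values vary per observation.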

Node Package Manager (NPM):

We encountered difficulties with NPM which was frustrating as it failed to install packages from GitHub correctly, rendering our newly developed plugins ineffective. To circumvent this issue, we devised a workaround involving traditional git clones and NPM linking, ensuring all necessary files were included in the package upon installation.

Tests:

Understanding the testing framework within the plugin template repository proved challenging, given our team’s limited expertise in JavaScript and TypeScript development. Overcoming this hurdle required diligent debugging and familiarisation with the testing procedures.

Acronyms:

The learning curve associated with mastering the numerous acronyms linked to ESG, k8s, Prometheus, and Grafana proved to be steep.

Accomplishments

We tackled the challenges we faced head-on and seamlessly integrated various components into our solution. Our solution's scalability underscores our team's expertise. By integrating four complex components, namely IF, k8s, Prometheus and Grafana, we demonstrated our team's problem-solving abilities. Despite the numerous challenges we faced, our determination yielded a successful solution. Not only did we create two IF plugins, but we also made our Grafana visualisation available as a JSON template, enabling widespread sharing and use.

Through targeted communication efforts we sparked the interest of both internal and external stakeholders. We raised awareness about the Carbon Hack 24, emphasizing the environmental impact of software, and highlighting the potential significance of leveraging Kubernetes for carbon reduction.

Learnings

Our project journey involved learning about the IF and tracking emissions through coding practices. Kubernetes, despite its complexity, proved to be invaluable.
A notable challenge was the lack of publicly accessible ESG data for programmatic use.
Effective time management and collaboration was achieved with a team of seven members through daily stand-ups and the use of Git, ensuring everyone was synchronized with the project’s progress. We individually expanded our knowledge on plugin frameworks/architecture and developed skills in crafting clear, comprehensible technical documentation.

Some additional learnings came from data analysis, which underscored the energy efficiency of macOS, which conserves power by powering down when inactive. We noted our assumptions, and the dashboard data clearly shows an increase in emissions whenever a more demanding workload was run. While this may feel trivial, it goes a long way towards validating the functionality of the system and helps us grasp the impact software has on the environment.

What's next?

Our proposed solution will bode well for large enterprises that use Kubernetes. With the rise of sustainability tracking and monitoring, being able to demonstrate and integrate the IF with current enterprise architectures will ensure long term use. At Nedbank, we’re looking towards implementing our solution in all our Kubernetes clusters (where possible) to enable us to track our carbon emissions more accurately and report on them timeously. This will contribute towards long-term adoption of the framework.

Through Kubernetes, we demonstrated how the IF can be used in real time. This functionality did not previously exist, and our solution is a proposal for the GSF to consider adding it to the IF.

We believe our solution can contribute to the geographical footprint of the IF. Google Cloud recently launched its first cloud data centre in South Africa (SA), with research estimating that the cloud ecosystem could potentially contribute US$2bn to SA’s GDP and support the creation of 40k jobs by 2030. While it’s still early days, we believe our solution has the potential to be used as a base for the Africa leg of the ecosystem, and we hope to build a local community around green software using our solution as demonstration.

Mitigating Harmful Designs at the Source

Prize category

Best Contribution to the Framework

Overview

We propose developing a Figma plugin (or a regular plugin, if that's not allowed) that uses Impact Framework to provide environmental impact assessments to UI/UX designers during the design phase. The goal is to inform designers of the potential carbon footprint of their designs before those designs get shipped, while also highlighting any deceptive design patterns like infinite scroll or default newsletter opt-ins.

I think we'll need to create some type of 'carbon impact multiplier' to quantify the additional environmental costs of addictive design patterns.

The reinforcement loop of addictive designs --> higher carbon outputs --> poorer mental health outcomes could also be mentioned, although we're not sure how to include this.
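One possible shape for the 'carbon impact multiplier' idea: start from an estimated per-view transfer footprint and scale it by pattern-specific multipliers. All coefficients and names below are illustrative assumptions, not values from any published methodology.

```typescript
// Illustrative multipliers for deceptive/addictive design patterns.
// The values are placeholder assumptions for the sketch, not measured data.
const patternMultipliers: Record<string, number> = {
  "infinite-scroll": 3.0, // assumed: users load roughly 3x more content
  "autoplay-video": 2.5,
  "default-newsletter-opt-in": 1.2,
};

// Scale a base per-view footprint by each detected pattern's multiplier.
function adjustedCo2PerViewGrams(
  baseCo2PerViewGrams: number,
  patterns: string[],
): number {
  return patterns.reduce(
    (total, p) => total * (patternMultipliers[p] ?? 1),
    baseCo2PerViewGrams,
  );
}
```

A Figma plugin could detect which patterns a design contains and feed the adjusted figure into an IF pipeline for the final impact estimate.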

Questions to be answered

  • Is this feasible and allowed? (We work in front-end dev/UX roles, with some backend knowledge)
  • Any ideas on how we could map deceptive or addictive design patterns to a carbon output value?

Have you got a project team yet?

Yes and we are now complete :)

Project team

@sgiori11
@Joakim-Andersson

Terms of Participation

Build a plugin for calculating SCI of blockchains

Type of project

Building a plug-in for Impact Framework

Overview

Background

The energy consumption of blockchains is a growing concern. According to some research, Bitcoin alone currently accounts for 0.38% of global electricity consumption (https://www.jbs.cam.ac.uk/2023/bitcoin-electricity-consumption). The Bitcoin mining process also consumes a lot of fresh water for cooling; some researchers estimate that US Bitcoin miners use around 93–120 gigalitres (GL) per year, equivalent to the average annual water consumption of around 300,000 U.S. households.

With the growing popularity of blockchain technology, more blockchains will be created in the future and more users will take part in the ecosystem. That means more nodes, more transactions, and more resources such as electricity and fresh water being used.

There have been some efforts by blockchain organizations to reduce the energy consumption of their nodes. For example, Ethereum's switch from a PoW (proof-of-work) consensus mechanism to PoS (proof-of-stake) reduced its energy consumption by more than 99.9%, a fantastic result.
I think the Impact Framework can contribute to raising blockchain users' awareness of the impact their actions, software, and systems can have on the environment.

About the plugin

Here is what I have in mind for this plugin right now.
The plugin would calculate the resources consumed for these types of users:

  • Blockchain developers: the people who develop and deploy blockchains themselves. The plugin should give them an overview of how many resources their blockchains will consume given inputs such as the consensus mechanism (PoW, PoS, token, ...), the number of nodes, and the node configuration.
  • Cryptocurrency miners/validators: to answer the question: if someone deploys a miner/validator machine with a given configuration (x GB of RAM, an 8-core CPU, ...) on blockchain X (Bitcoin, Ethereum, ...), how many resources, such as electricity and fresh water, will it need?
  • Blockchain traders: give them an overview of how many resources these blockchains/tokens consume annually, so they know which blockchains are "green".
  • Smart contract developers: smart contracts are just programs running on top of blockchains, so it would be nice if developers could calculate how many resources their smart contracts use.

Questions to be answered

These are some issues I would like to discuss with the Carbon Hack team:

  1. The most important thing is data and research concerning the energy consumption of blockchains. The Crypto Carbon Ratings Institute (https://carbon-ratings.com/) provides an API service where we can get some of the data, such as a blockchain's annual energy consumption. Are you aware of any other data sources or research we can use to do the calculations ourselves?

  2. The calculation of the energy consumed by smart contracts is the hardest question here, I think. I've done some research and haven't found any solid studies, and no data sources either.

Have you got a project team yet?

Yes, and we are still open to extra members

Project team

No response

Terms of Participation

Project Submission

Summary

EcoChain is a GSF plugin that helps measure environmental impacts (carbon emissions, fresh water consumption,
electronic waste, land conversion).

Problems

Blockchain, especially Bitcoin, has a profound environmental impact. While numerous studies have calculated the
environmental impact of blockchains, they typically provide only an aggregate number reflecting the environmental
impacts of entire blockchain networks. While such figures are meaningful for raising awareness among blockchain users,
they fail to illustrate the environmental impacts of individual users' actions in utilizing blockchains.

Application

This plugin attempts to show users the impact of their own actions by calculating the environmental impacts of each
specific blockchain transaction.
The environmental impacts of blockchain transactions will serve as the basis for calculating users' environmental
impacts when interacting with blockchains. With this data, we can assess the environmental footprint of various DeFi
projects such as UniSwap, USDC coin, Vault, and others. Smart contract developers will be able to estimate the
environmental impacts of their smart contracts on different platforms such as Solana, Avalanche, and others.
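The simplest form of such per-transaction attribution divides a network's aggregate annual footprint by its annual transaction count. A sketch with placeholder field names and numbers (not EcoChain's published coefficients or methodology):

```typescript
// Top-down attribution: share a network's annual footprint equally across
// its annual transactions. Real methodologies weight this further (e.g. by
// transaction complexity); this is only the basic idea.
interface NetworkFootprint {
  annualCo2Tonnes: number;
  annualTxCount: number;
}

function perTransactionCo2Grams(n: NetworkFootprint): number {
  if (n.annualTxCount <= 0) throw new Error("transaction count must be positive");
  return (n.annualCo2Tonnes * 1_000_000) / n.annualTxCount; // tonnes -> grams
}

// Placeholder example: 100 tCO2e/year spread over one million transactions
const gramsPerTx = perTransactionCo2Grams({
  annualCo2Tonnes: 100,
  annualTxCount: 1_000_000,
});
```

The same division applies to the other impact dimensions (water, e-waste, land) given the corresponding annual totals.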

Prize Category

Beyond Carbon

Judging criteria

  1. Measure impacts beyond carbon:
  • The plugin calculates Bitcoin transactions' electronic waste, fresh water
    consumption and land conversion (other blockchains are not supported yet).
  2. Overall impact:
  • What potential impact will this model have on the broader sustainability movement?:

    • This plugin can help educate normal users about their actions' environmental impacts
    • Once we raise awareness within the blockchain user community, blockchain users may be persuaded to switch to using
      greener blockchain networks, greener DeFi projects, or greener cryptocurrencies.
    • Eventually, this plugin can contribute to transforming the blockchain industry towards a greener future.
  • What things need to happen for that impact to occur?

    • The environmental impacts output by this plugin must be utilized and understood by blockchain users whenever they
      are about to engage with a blockchain network, cryptocurrency, DeFi project, or create a new smart contract.
  3. Educational Value:
  • Does the project help people understand more about emissions and planetary boundaries? Imagine it as a good
    teacher—making complex topics easy to grasp and sparking interest in learning more:
    • The project helps blockchain users understand the impacts of their actions (transferring crypto, calling a smart
      contract, deploying a smart contract, ...)
    • Blockchain users can see how much carbon emissions, land conversion, freshwater consumption, and electronic waste
      their actions produce.
  4. Synthesizing:
  • How well does the model integrate and combine information from existing research? Are any coefficients, methodologies,
    or techniques backed up with good citations? How can we trust the outputs of this model are correct?:
    • The plugin methodology is based on numerous widely accepted research studies. All coefficients, methodologies, and
      techniques are derived from reputable sources
    • The plugin also cross-checks the output with numbers from trusted sources.

Video

https://youtu.be/KhUEAdZiwpw

Artefacts

https://github.com/ktg9/EcoChain

Usage

https://github.com/ktg9/EcoChain/blob/main/examples/Basic-Usage.md

Process

The process of building this plugin:

  • Find research papers to inform the plugin methodology
  • Gather data sources and process them
  • Use the data to build models for calculation
  • Write the GSF plugin code
  • Compare the result with other sources
  • Write unit tests and documentation

Inspiration

Two factors that inspired me to develop this plugin are my experiences working in the blockchain industry and my desire
to contribute efforts towards reducing global warming.

Challenges

Challenges that I encountered building this plugin:

  • The main problem is data

    • Data comes from many sources and the process of validating/retrieving/processing
      these data takes a lot of time.
  • The complex structures of some blockchains:

    • Ethereum, for example, has multiple L2 solutions connected to the main chain. This makes it
      difficult to estimate the environmental impact of an Ethereum transaction. As stated in the
      doc, I couldn't find a good enough model for Ethereum, so it isn't supported.
  • A significant amount of research papers needed to be reviewed to develop the methodology for the plugin.

Accomplishments

  • This plugin calculates carbon emissions for 8 blockchains. For Bitcoin, three more
    environmental impacts are quantified: fresh water consumption, electronic waste and land conversion.
  • The plugin also provides methodologies that could be applied to other blockchains if enough data
    is gathered.

Learning

  • Technical learning:
    • GSF plugin and impact framework
    • How to quantify numerous blockchain environmental impacts
  • Other learning:
    • The ecological ceiling, the current status of global warming, and resource scarcity (fresh water, land,
      ...) on Earth

What's next?

This plugin could serve as a background/reference for future development covering other blockchains'
environmental impacts.

Endava - Team EndIf: AWS-Inventory & Climatiq.io plugins

Prize category

Best Plugin

Overview

Endava's team hopes to build two plugins to enable the identification and observation of AWS resources within an application boundary that can then be consumed by the Impact Framework. This should open up the IF to AWS cloud customers.

If time permits, we intend to build a total of three plugins, including one that uses Climatiq.io's Cloud Calculation APIs to perform automated SCI scoring of the Fraud Detection Tool that we submitted to the SCI-Case-Studies repository, as an end-to-end use case.

The aim is to demonstrate the use of the three plugins, along with the existing GSF and community plugins required to achieve this, in a manifest file with an initial child node representing an AWS subscription/application and an 'application' tag. The plugin will then return the EC2 VMs and storage as children in the tree, for observations and further pipeline execution to produce an SCI score.

AWS-Inventory plugin: #89
AWS-Importer plugin: #90
Climatiq plugin: #92

We have prioritised the plugin development for the hackathon as follows:

  1. AWS-Importer (component output)
  2. AWS-Inventory (grouping/children, manual manifest alternative exists)
  3. Climatiq (component, an alternative plugin chain could be used)

Questions to be answered

  • Are any other teams working on AWS-based components?
  • Are all IF refactorings to output children to the graph completed (kind=children)?
  • Are any changes planned to the Boavizta or CCF models to expand embodied carbon support? (We are considering options to determine SSD embodied emissions for EBS storage, and were thinking about forking the existing plugin to add support for the component/ssd | hdd APIs.)

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@jcendava
@viktoria-mahmud
en-andrei-serdulet
@eblenert
en-vasiliuralucaelena

Terms of Participation


Project Submission

Summary

Endava’s team wanted to see how the Impact Framework and its plugins could be used to perform automated sustainability calculations such as the SCI score, of large or complex cloud applications. We decided to use our Fraud Detection solution as our target use-case, building IF plugins to replicate the process we followed in our SCI case-study submission.

Problems

  • Our calculation of the SCI score had included a significant amount of manual work, identifying resources, recording utilisation and performing manual calculations. Even using the IF, defining the tree or graph for a large or complex application with various compute and storage resources and multiple regions could be time consuming, and changes to application infrastructure would need to be reflected in a manifest. We wished to contribute plugins to the IF that would keep those manual tasks to a minimum.
  • There was no native support within the IF plugins for AWS-based observations. We wanted to contribute a plugin to provide this support.
  • There was no plugin for Climatiq.io, who provide cloud calculation APIs that simplify some of the process of calculating energy and emissions from cloud resources. We wanted to contribute a plugin that enabled Climatiq outputs to be included in an IF pipeline.

Application

Endava's team has built two (technically, three) Impact Framework plugins as part of the hackathon submission:

  • The AWS-Importer plugin enables retrieval of observations of AWS VM and storage resources, based on a time-span, region and resource tag parameters, using the AWS SDK to retrieve usage data from AWS EC2 Service and AWS CloudWatch.
  • The Climatiq plugin allows calculation of energy consumption and of operating and embodied CO2e emissions for VM, storage, CPU and memory components using the service's cloud calculation APIs.
  • The Boavizta-storage plugin adds support for the Boavizta component APIs that enable calculation of embodied emissions for storage devices.

In our project, the plugins have been chained together with the SCI plugin to build a manifest that calculates the SCI score of our Fraud Detection Tool for given dates and time-spans.
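To illustrate the importer side: CloudWatch returns utilisation datapoints that must be reshaped into IF observations before the rest of the pipeline can run. A simplified sketch with assumed field names (the real plugin uses the AWS SDK and richer responses):

```typescript
// Simplified shape of a datapoint as returned by CloudWatch
// GetMetricStatistics (real responses carry more fields, e.g. Unit).
interface Datapoint {
  Timestamp: string;
  Average: number;
}

// Map CPU-utilisation datapoints to IF-style observations. The output field
// names are illustrative, not necessarily what the plugin emits.
function toObservations(points: Datapoint[], periodSeconds: number) {
  return points.map((p) => ({
    timestamp: p.Timestamp,
    duration: periodSeconds,
    "cpu/utilization": p.Average,
  }));
}
```

Downstream plugins (energy, grid intensity, embodied, SCI) then consume these observations exactly as they would manually authored ones.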

Prize Category

Best plugin

Judging Criteria

The plugin opens up the IF to AWS customers. AWS is currently the cloud provider with the largest market share, so the plugin may enable a larger number of software applications to use the Impact Framework. Support for further AWS services can be added to realise the plugin's full potential; this will, however, require the ability to interpret observations to measure the energy consumption or environmental impacts of those services. Similar plugins will be needed for other cloud and XaaS providers to bring the Impact Framework to the full spectrum of software solutions.
Both the AWS-Importer and Climatiq plugins follow the micro-model architecture, using standard inputs and outputs where possible and introducing standard-type parameters where no existing ones could be found. The Climatiq plugin offers configuration options to return or omit emissions, summed energy, and intensity parameters so that subsequent plugins can perform those calculations.

Video

https://www.youtube.com/watch?v=EUoKxxaLD4M

Artefacts

Usage

Process

We began by looking at the SCI calculation exercise we had performed, and broke down that process into logical steps based on the SCI specification components themselves. This helped identify the plugins that would be required to automate the same process within the Impact Framework, and an idea of the manifest that might represent our application for scoring. We created a Jira backlog with Epics and user stories for each plugin, prioritizing the AWS-Importer as we felt it delivered the most value to the Impact Framework. As work was completed on each plugin, we created a manifest connecting the plugins within a single pipeline, ensuring parameters were successfully passed from one to another. With the plugins successfully integrated, we were finally able to execute the pipeline against the full (representative of production) infrastructure we had used for our SCI case-study.

Inspiration

Endava's existing SCI case-study, and the potential to automate what had been a fairly manual exercise, provided the inspiration for our hackathon solution, along with the potential to use the Impact Framework and these plugins with our clients to introduce sustainability serviceability into deployment pipelines or operational monitoring.

Challenges

We had a few challenges around how to structure our plugins and manifest: whether and how to group results, and the number of plugins actually required. The AWS SDKs were useful starting points, but accessing the correct CloudWatch metrics took some time to get right, as did keeping the parameter interfaces (inputs/outputs) between plugins consistent. One of the toughest challenges was working with plugin parameter requirements that offered no configuration to switch modes. Determining embodied emissions for storage proved the most challenging data-wise, and we ended up creating a quick plugin using Boavizta's APIs to access SSD and HDD embodied data, although we're not sure how it compares to server-grade componentry.

Additionally, the team were all working on paid client engagements, so we all had to fit our hackathon activities around our working days.

Accomplishments

  • The AWS-Importer simplifies the manifest for a large AWS EC2 based application, requiring only tags to identify resources within the application boundary and populate output parameters with observations.
  • The Climatiq plugin utilises batch APIs, reducing network calls and streamlining calculations over time series or multiple resource observations.
  • Delivering two working plugins with a team who were fitting hackathon activity around paid client engagements :-)

Learnings

We learnt a great deal about the Impact Framework itself, how to build plugins and structure manifests, and also learnt our way round the AWS v3 SDK. The hackathon also helped us understand the challenges around determining emissions for cloud software services in general, where data is not readily available. Having completed the hackathon, we feel confident we can help our clients with similar projects in the future.

What Next

We hope that contributing two plugins that simplify two aspects of the SCI calculation (resource utilisation observations, and energy & emissions calculation) will help expand the current capabilities of the Impact Framework, and encourage other AWS customers to explore or adopt it.
Support for additional AWS services, or new Climatiq cloud-calculation endpoints, can be added in future to broaden the capabilities of each plugin. We could also look to fork the Boavizta plugin and add the storage component functions to the community plugin.

Create a model plugin to calculate the energy and carbon consumed during AI model training

Prize category

Best Plugin

Overview

This is one implementation of the problem statement
#33

I am thinking of implementing this using https://mlco2.github.io/impact/. We can take Hardware type, Hours Used, Provider, and Region of Compute as inputs and return the raw carbon emissions produced and the approximate offset carbon emissions. I know the data is outdated, but if we get an updated source in the future we can use it.
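A rough sketch of the calculation I have in mind (all hardware and grid-intensity figures below are made-up placeholders, not real data; the real values would come from mlco2.github.io/impact or a newer source):

```python
# Sketch of the planned estimate:
# emissions = hardware power draw * hours used * regional grid intensity.

# Placeholder lookup tables -- NOT real data; a plugin would load these
# from the ML CO2 Impact dataset or an updated source.
HARDWARE_WATTS = {"V100": 300.0, "T4": 70.0}
REGION_KG_CO2_PER_KWH = {"aws-us-east-1": 0.38, "gcp-europe-west1": 0.06}

def estimate_emissions_kg(hardware: str, hours: float, region: str) -> float:
    """Raw carbon emissions (kgCO2eq) for a training run."""
    energy_kwh = HARDWARE_WATTS[hardware] / 1000.0 * hours
    return energy_kwh * REGION_KG_CO2_PER_KWH[region]

print(estimate_emissions_kg("V100", 100, "aws-us-east-1"))  # ~11.4 kg
```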

If we need more accurate results, we can also integrate with paid services like https://www.climatiq.io/

Questions to be answered

No response

Have you got a project team yet?

I am working on this project alone

Project team

No response

Terms of Participation

Project Submission

Summary

Count the carbon emitted while creating an AI model using the Impact Framework, and display it on Hugging Face

Problem

Creating models requires a lot of resources, making it one of the larger carbon-emitting processes, yet sustainability is not integrated into AI development.

Application

Hugging Face serves as the GitHub for datasets and models. However, the process of creating models requires significant resources, contributing to substantial carbon emissions. Currently, benchmarks are used to compare models, but I foresee a future where creators also present carbon usage as a metric when introducing new models.

To achieve this, I developed an application using existing plugins within the Impact Framework. In the future, creators will simply need to add five configurations to the config.json file during model upload, enabling Hugging Face to automatically display carbon usage.

Prize category

Best Content

Judging Criteria

This solution aligns with our goal of integrating sustainability into AI development, ensuring that environmental impact is considered alongside technological advancements.

Video

https://youtu.be/ZoPPBWSqf7Y

Artefacts

https://github.com/sann3/AI-Carbon-Counter

Usage

https://github.com/sann3/AI-Carbon-Counter/blob/main/README.md

Process

Initially I started with plugin development, but after understanding the IF ecosystem I realized that we don't need a new plugin; the goal can be achieved by chaining the existing plugins.

Inspiration

I care about sustainability and introduce it into my day-to-day life. I want sustainability implemented in the AI ecosystem.

Learnings

Learned how to develop a plugin and an application using the Impact Framework

What's next?

If the Impact Framework is integrated into the Hugging Face ecosystem, then we can show carbon impact in many areas, such as model creation, model inference, dataset creation, etc.

Measure and report on the carbon impact of a website

Overview

User Story

As a web developer I want to know how to get started with the Impact Framework with the ultimate goal of measuring my website's carbon footprint.

Rationale

There are numerous components that go into measuring a website's carbon footprint, which vary greatly based on the specific website being measured. These can be calculated using existing models in the IF repositories. However, calculating the impact of a real website requires carefully configuring the manifest file, choosing the right models, and finding accurate input data.

When each of us started using the Impact Framework, we found we needed more support in configuring the manifest file etc., and the content we've created aims to offer that support to other technologists.

Impact

If this idea is successfully implemented, the GSF will have a great number of resources to support people onboarding into the Impact Framework, including easily understandable tutorials, sample code, a plugin specifically for website measurement, and suggestions for improvements to the framework and surrounding infrastructure to enhance its consistency and legibility overall.

Scope Of Work

List some of the tasks that will be required to implement this idea

  • A new model plugin
  • A manifest file will have to be written
  • Implement test cases
  • A new feature to the framework itself
  • It will require changes to the documentation

Examples and resources

Originally posted by @jmcook1186 in #35

Questions to be answered

  • What resources and knowledge do new developers need in order to onboard into using the Impact Framework for the first time?
  • How can the Impact Framework be used to measure website energy usage?
  • What framework improvements, documentation, and tools could be added to the framework to make it easier to use?

Project team

@heaversm, @rachaelcodes, @rachanakabi, @iretep, @Bro-mar

Terms of Participation

Carbon_aware_advisor_model

Prize category

Best Plugin

Overview

The CarbonAwareAdvisor model is designed to provide carbon emission data based on specified locations and timeframes. It interacts with the Carbon Aware SDK to fetch the most carbon-efficient options for given parameters.

Key Features

  • Location Filtering: Users can specify a list of locations to consider for carbon emission data.
  • Timeframe Filtering: Users can define time ranges to narrow down the search for carbon emission data.
  • Sampling: An optional parameter that allows users to specify the number of data points to sample from the available data, providing a broader view of the carbon emission landscape. If sampling is not defined in the impl, no data points are sampled and plotted-points is not added to the ompl.
Outputs

  • Suggestions: A list of the best location and time combinations to minimize the carbon score, along with that score.
  • Plotted-points: Only present if the sampling parameter is initialized in the impl. A sampling number of samples for trade-off visualization. The best combination from each timeframe is always included, so sampling must be >= the number of time regions in the allowed-timeframes. The plotter model can then be used in the pipeline to plot these samples.
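The sampling rule could be sketched roughly as follows (the dict-of-lists data model is a simplification for illustration, and filling the remaining slots by random sampling is an assumption; the real plugin works against Carbon Aware SDK responses):

```python
import random

# Simplified sketch of the sampling rule: the best (lowest-carbon) point
# from each timeframe is always kept, then the remaining slots up to
# `sampling` are filled from the leftover points. The data model and the
# random fill strategy are assumptions for illustration.

def sample_points(by_timeframe: dict, sampling: int) -> list:
    if sampling < len(by_timeframe):
        raise ValueError("sampling must be >= number of allowed timeframes")
    best = [min(pts, key=lambda p: p["carbon"]) for pts in by_timeframe.values()]
    leftovers = [p for pts in by_timeframe.values()
                 for p in pts if p not in best]
    return best + random.sample(leftovers, sampling - len(best))
```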

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@jimbou
@TomasKopunec
@JasonLuuk
@TelmaGudmunds

Terms of Participation

Carbon CI Pipeline - more CI providers

Type of project

Building a plug-in for Impact Framework

Overview

Interested in Continuous integration and helping bring sustainability into the conversation? Help us bring sustainability to every developer by integrating our CI pipeline tooling into the most popular CI platforms, while using the Impact Framework data!

https://github.com/Green-Software-Foundation/carbon-ci

Questions to be answered

What CI pipelines can we target?

GitHub Actions
GitLab CI
Travis CI
Circle CI
SourceHut Builds

Where is the best value for developers? And how can we bring the greatest value to developers, to help them reduce their carbon intensity in both cloud deployments and local code?

Have you got a project team yet?

No, we would like your help to find team-mates

Project team

No response

Terms of Participation

GPU Carbon Estimator

Prize category

Best Plugin

Overview

At least one plugin that makes progress on an end-to-end method to estimate carbon emissions from GPUs.

Input is GPU power at regular timestamps (source of power measurement is up to the user). The output is carbon emissions in kgCO2.

The intention of the overall project is to function like the Boavizta plugin, but for GPUs. I will encapsulate computations into multiple plugins as needed. I assume GPU energy can easily be converted into carbon emissions using existing plugins, but I'll need to verify that.

Questions to be answered

  • None.

Have you got a project team yet?

I am working on this project alone.

Project team: "Meridian"

@dukeofjukes

Terms of Participation


Project Submission

Summary

I made an IF plugin called gpu-carbon-estimator that estimates carbon emissions from periodic power readings from a GPU.

Problem

Currently, IF has no plugins that address carbon emissions for GPUs. While the logic employed by my plugin is not necessarily unique to GPUs, it is a first step towards a workflow for estimating GPU carbon emissions using IF.

Application

The plugin takes GPU power (watts), duration (seconds), and region as input. From power and duration it calculates the energy (kWh). With the energy and region, it gets the carbon emissions (kgCO2) from the Climatiq API and returns them as output.
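The core conversion can be sketched as follows (the grid-intensity value here is a hard-coded illustrative placeholder; the actual plugin fetches a region-specific factor from the Climatiq API):

```python
# power (W) x duration (s) -> energy (kWh) -> carbon (kgCO2).
# 0.4 kgCO2/kWh below is an illustrative placeholder, not Climatiq data.

def gpu_energy_kwh(power_watts: float, duration_s: float) -> float:
    # W * s = joules; 1 kWh = 3.6e6 J
    return power_watts * duration_s / 3.6e6

def gpu_carbon_kg(power_watts: float, duration_s: float,
                  kg_co2_per_kwh: float = 0.4) -> float:
    return gpu_energy_kwh(power_watts, duration_s) * kg_co2_per_kwh

# A 250 W GPU running for one hour:
print(gpu_energy_kwh(250, 3600))  # 0.25 (kWh)
```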

Prize Category

Best Plugin.

Judging Criteria

Overall Impact

This plugin is a first step towards supporting estimations of GPU carbon emissions using IF. Luckily, IF already has plugins that can estimate CPU and memory emissions, so this plugin can be plugged into a holistic pipeline very easily. Future work would need to investigate how this plugin would work for cloud use cases.

Opportunity

As mentioned above, IF does not currently have any GPU support. This plugin enables users of IF to add GPU estimations to their workflow. This plugin may also attract future users due to the fact that it strengthens IF's ability to holistically estimate a machine's carbon emissions.

Modular

Since GPU power is the primary input, any method can be used to gather this data. For example, novice users can use nvidia-smi or its underlying NVML to estimate the GPU power of a program or a running system. Or, if a user has access to more accurate, direct measurement tools, they can use those instead.

Video

https://youtu.be/BG4bEf-eIRY?si=tC_CrKUTj60tUu9D

Artefacts

Repository: https://github.com/dukeofjukes/gpu-carbon-estimator

Usage

Instructions to install and run the plugin are found in the repo's README.md.

Process

I looked at a number of existing plugins (e.g., Boavizta, Teads Curve, TDP Finder) to consider the scope of what was possible with IF. In order to keep my plugin as modular as possible, I only wanted to handle the back-end conversion from power to carbon emissions, and leave the difficult problem of measuring/estimating power consumption of a GPU to those with more expertise. With these constraints, I landed on the solution of simply querying an existing carbon database.

Inspiration

My PhD research is in parallel computing (primarily in GPUs), so I felt that the lack of a GPU plugin in IF was a major blind spot.

Challenges

As a mostly Python/C/C++/CUDA programmer, the refresher on Typescript/Node.js was the biggest hurdle. Luckily, the documentation of IF and existing plugins alleviated that quickly. I wanted to investigate the cloud computing use case, but my lack of experience in that field and the time limit prevented me from being able to address it fully in this iteration. Perhaps I'll return to this work (or make another plugin) that can address it.

Accomplishments

I'm glad I was able to get something done in time for the submission deadline!

Learnings

I learned a lot about carbon emission estimation, mostly that estimating emissions is not as easy as it would seem. Developing my coding skills was also a nice benefit.

What's next?

I want to make a simple script that runs a program and measures its power periodically, then writes the measurements to an output manifest for direct input into my pipeline. This hackathon submission is part of a project in my Green Computing course at Texas State University, so making this script will effectively enable the ability of end-to-end measurement/estimation.

AquaQuantx

Prize category

Beyond Carbon

Overview

Through AquaQuantx, companies will be able to calculate and monitor their water consumption and footprint, and take appropriate measures to ensure efficient usage of their water supplies, with specific attention to their data centers.

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

No response

Terms of Participation

Grasp: A Plugin to convert a software's carbon emission into easier to "grasp" values

Prize category

Beyond Carbon

Overview

Inspiration comes from the overview page websitecarbon.com shows after a test. There, the user is given comparison values that are easier to understand.

Input: Either the total carbon emissions (grams of CO2eq), or the carbon emissions per unit of work (grams of CO2eq) plus a realistic number of units over a time range (maybe a year?)

Output: A Set of interesting comparison values. Some ideas:

  • Cups of coffee
  • Km / miles a regular car could drive before emitting that amount of carbon
  • Number of trees that would be necessary to absorb that carbon
  • Social cost / damages in $
  • ...
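A minimal sketch of such a conversion (every factor below is a rough, illustrative ballpark figure, not a sourced value; a real plugin would need to document and cite its factors):

```python
# Convert grams of CO2eq into "graspable" comparisons. All factors are
# rough illustrative ballparks only -- a real plugin must source these.
FACTORS_PER_GRAM = {
    "km_driven": 1 / 192.0,      # assume ~192 gCO2eq per km, petrol car
    "cups_of_coffee": 1 / 21.0,  # assume ~21 gCO2eq per cup brewed
    "tree_years": 1 / 21000.0,   # assume one tree absorbs ~21 kg per year
}

def grasp(total_g_co2eq: float) -> dict:
    """Map a total in grams of CO2eq onto each comparison value."""
    return {name: total_g_co2eq * f for name, f in FACTORS_PER_GRAM.items()}

# e.g. 21 kg of CO2eq expressed as coffee, driving, and tree-years:
print(grasp(21000))
```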

Questions to be answered

  • What else could be an interesting value to make the impact of a piece of software more "graspable"?
  • Do you see any other obstacles for the idea or things I need to research?

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@hoernschen
@tr1ng0

Terms of Participation

Project Submission

Summary

The plugin provides insights and metrics for better understanding the software’s implications and “hidden” costs.

  • Impact: Focuses on the socioeconomic effects of carbon emissions, based on the application's activity.
    • Social Cost of Carbon (SCC): Quantifies the economic costs of CO2 emissions, currently paid by society.
    • Premature Deaths and Displacements: Highlights the implications of climate change for human lives and habitable areas.
  • Comparison: Offers comparative carbon emission metrics for various products and actions.
    • Lifecycle Emissions: Assesses the carbon footprint of bananas, chocolate, and coffee, based on life cycle assessments.
    • Transportation Emissions: Compares greenhouse gas emissions per passenger km for different transport modes.

Problems

We recognize the difficulty in grasping and understanding the measured data when evaluating application activity. Is a ton of carbon emissions a good or a bad value? Why should we aim to reduce emissions when the impact of our activities on society is not very clear, even if we know the amount of emissions? We believe there is a lack of insights that translate these emissions into tangible metrics and into the direct implications for human health and displacement due to climate change.

Application

Our solution translates carbon emissions data into actionable insights by providing a detailed analysis of its broader implications. It employs metrics such as the Social Cost of Carbon (SCC), and quantifies the effects of emissions on human health and displacement, offering a multifaceted view of the impact of software activities. By converting abstract emission figures into relatable, concrete metrics, the plugin aids users in comprehending and evaluating the environmental and social footprint of their software, enabling informed decisions towards sustainability.
More details can be found in the Wiki.

Judging criteria

Overall Impact

By providing insights into different socioeconomic aspects such as the Social Cost of Carbon (SCC), and the health and displacement effects of climate change, our plugin offers a comprehensive view of environmental impact, encouraging more sustainable decision-making beyond just carbon metrics.

Educational Value

Our model helps people better understand what the carbon emissions of their software compare to, and what impact they can have on society.

Video

https://youtu.be/BAu1d-xsXGY

Artefacts

https://github.com/hoernschen/grasp

Usage

https://github.com/hoernschen/grasp

Process

The solution was developed based on the IF Plugin template. We split the development into two categories, comparison and impact, and each of us did our own research on the topic, which was the most time-intensive part of the work.
After completing the research phase, we poured the results into the documentation and code, with a strong focus on transparency and on encouraging others to contribute more data or metrics to the plugin in the future.
During the development phase we decided to split the model into separate plugins, to give users more freedom over which numbers they are interested in and to avoid inflating the pipeline output.

Inspiration

Initial inspiration came from the summary page of websitecarbon.com. There, below the estimated carbon emissions of one website load, the user gets a number of comparison values, which we found a very fun way of making the carbon emission metric more understandable.
But these values missed something from an evaluation perspective, so we added more values based on the impact on society. This comes from the principle of internalizing externalities, a way of incorporating a completely neglected cost factor of business activities. A second inspiration therefore came from the principle of sustainable accounting, of which a carbon tax is one part.
Additionally, Asim gave us inspiration regarding the impact of carbon emissions on physical health when we registered this plugin.

Challenges

Finding the right literature for the metrics was very challenging. The IPCC has already labelled them with "high confidence" regarding the impact of climate change, and therefore of carbon emissions, but much research focused on identifying the impact rather than making it quantifiable against a specific amount of carbon. Research also often focused on a very small part of a bigger system, which made it difficult to draw generalized assumptions from it.

Accomplishments: Share what you are most proud of

We believe that enabling decision makers to understand the implications and "graspable" values of application activities is a major step towards reduction. We are therefore really proud of connecting measured data with metrics, backed by scientific evidence, to enable evaluation of ICT activity.

Learnings: Share what you learned while hacking

  • It is very difficult to find trustworthy, open and, especially, general research on a topic.
  • The research often has good proof of connections between action and carbon emissions, but often lacks a concrete value for the impact.

What's next?: How will your solution contribute long term to the Impact Framework eco-system

We believe that measuring alone is sometimes not enough and requires understanding of the implications. This plugin therefore tries to enable decision makers to better evaluate, and increase transparency around, the implications of their activities for global society.
We want to keep updating and extending our set of metrics to make it a valuable tool for explaining the impact of a software's carbon emissions and the value of reducing them.

Azure Sustainability API Plugin

Type of project

Building a plug-in for Impact Framework

Overview

The primary objective of this plugin is to seamlessly integrate data from Microsoft's Sustainability API for Azure into the Impact Framework. Microsoft's API provides detailed metrics on carbon emissions and resource usage specific to Azure cloud services. By incorporating this data into the IF, users can gain a more comprehensive understanding of the environmental impact of their software applications on Azure.

Questions to be answered

Can I write the plugin in C#.Net? Does the build need to be a DLL, executable, or something else? What does the interface between IF and a plugin look like? What kind of data would be most beneficial to IF?

Have you got a project team yet?

No, but we will find people ourself

Project team

No response

Terms of Participation

GSF Impact Framework extension for the Microsoft Azure Data Studio Tool - Calculate the Impacts of SQL Query payloads

Prize category

Best Content

Overview

Microsoft Azure Data Studio (https://github.com/microsoft/azuredatastudio) is a database administration tool written as an Electron application. It supports a wide range of database systems, runs on Windows, Linux and macOS, and provides SQL query performance metrics.
The goal of the hack is to build an Azure Data Studio extension (https://github.com/microsoft/azuredatastudio/wiki/List-of-Extensions) and an SQL Query Impact Framework extension that can hook into these performance metrics and produce impact estimates.
Azure Data Studio also supports (Jupyter) Notebooks. As part of the GSF extension for Azure Data Studio, I would like to deliver a notebook that introduces the Green Software Foundation and the basics of the Impact Framework, besides demonstrating how to use the extension with SQL query payloads.

Questions to be answered

No response

Have you got a project team yet?

No, but we will find people ourself

Project team

No response

Terms of Participation

Plotter_model

Prize category

Best Plugin

Overview

The Plotter model created for the Impact Engine Framework is designed to visualize data through various types of graphs, such as bar, line, and scatter plots. It takes input in YAML or CSV format, defining the x and y values along with additional parameters to customize the plots.

This model is typically used in a pipeline following data-enrichment models like carbon-aware-advisor, which populates the plotted-points parameter required by Plotter. Users who prefer to can specify the plotted-points parameter themselves in the Impl file, but the main value of the model is its ability to visualize the data provided by other models of the Impact Engine Framework. The user can also specify a CSV file from which to read the data to plot.

Questions to be answered

No response

Have you got a project team yet?

I am working on this project alone

Project team

@jimbou

Terms of Participation

Generic CPU Power Curve model

Type of project

Building a plug-in for Impact Framework

Overview

User Story

As an IF user, I want to be able to create and use custom CPU power curves in IF, so that I can calculate the carbon emissions of my system based on its specific power consumption model.

Rationale

Based on our work on the Intel model (aka IEE = "Intel Energy Estimator"), we know that many clients (private, OEMs and others) use customized systems where the power consumption characteristics (aka power curves) may differ from the generic ones provided by manufacturers (if any). The idea here is to provide a model where users only need to plug in their custom power curve (as a data file) and have the model carry the burden of, among other things, energy calculations at runtime.
Such a model would be backed by a script that generates a power curve on a given system.
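A minimal sketch of what such a model could do with a user-supplied curve (the utilisation-to-watts pairs below are made-up values, and linear interpolation is one plausible choice, not necessarily what the final plugin implements):

```python
import bisect

# Given a user-supplied power curve (CPU utilisation % -> watts),
# interpolate power at an observed utilisation and integrate over the
# observation duration. Curve values below are made up.
CURVE = [(0, 40.0), (10, 60.0), (50, 120.0), (100, 180.0)]  # (% util, W)

def power_at(util: float) -> float:
    xs = [u for u, _ in CURVE]
    i = bisect.bisect_left(xs, util)
    if i == 0:
        return CURVE[0][1]
    if i == len(CURVE):
        return CURVE[-1][1]
    (x0, y0), (x1, y1) = CURVE[i - 1], CURVE[i]
    return y0 + (y1 - y0) * (util - x0) / (x1 - x0)  # linear interpolation

def energy_kwh(util: float, duration_s: float) -> float:
    # watts * seconds -> joules; 1 kWh = 3.6e6 J
    return power_at(util) * duration_s / 3.6e6

print(power_at(30))  # 90.0 -- halfway between 60 W @ 10% and 120 W @ 50%
```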

Impact

Successfully implementing this idea will:

  • Significantly simplify getting started with IF for those wanting a pipeline that accurately reflects their specific system modifications and customizations. Acknowledging the importance of CPU modeling in the manifest file, we can see how this will improve onboarding of such users, as they will be able to create their own power curves and then only provide the resulting data to the model, without writing any code.
  • Increase transparency under a unified standard: once all CPU-modeling users use the same Generic CPU Power Curve generation script and model, we know that while the data naturally differs between users, the process of creating the power curves, and the calculations done based on those curves, are the same across all users. Even if users can't share their data (power curves), we at least have transparency over their computations.

Discussion

This project idea originated from this discussion:
#37

Questions to be answered

No response

Have you got a project team yet?

Yes - "The GreenChips" :)

Project team

@ajagann - Akshaya Jagannadharao
@greeliyahu - Eli Greenberg
@dgolive - Danilo Oliveira
@pazbardanl - Paz Barda
@OrhenG - Orhen Oren Greenberg

Terms of Participation

Submission Content:

Summary

Our project aims at measuring the energy consumption of software execution on a CPU, or any other device/component that can be characterized by a power curve / power load line. An innovative part of the project is tackling the challenge of generating such power curves for any CPU on any given system.
Generic CPU IF Plugin: https://github.com/pazbardanl/if-plugins/tree/generic-cpu
Power curve generator: https://github.com/ajagann/powercurve-generator/tree/feature/update_polling_result_integ

Problems

The current industry standard for generating power loadlines is to utilize SPECPower or SERT. However, these tools are not universally accessible and come with their own set of limitations. For instance, they require direct access to hardware for measurements and operate within highly structured benchmarking environments that may not fully replicate real-world scenarios. One significant limitation we aim to address is the reliance of SPECPower and SERT on a set of standard benchmarks, which may not cover all software scenarios adequately.
Open-source solutions like Teads and CodeCarbon also utilize loadlines in their own calculations, showcasing the growing recognition of loadline-based methodologies in energy consumption estimation. However, these solutions may have their own limitations, or may not provide the flexibility required for certain user scenarios.
The plugin we're submitting also acts as a simple and generic way to bring power curves (of any device, not just CPUs) into the Impact Framework.

Application

To address these limitations and provide a more accessible solution, we propose the development of a framework that enables users to generate their own power loadlines for personal use. Our goal is to create an extensible and flexible framework that empowers users to generate power loadlines tailored to their specific systems. While our framework may not achieve the same level of accuracy as SPECPower or SERT, it provides developers with energy estimates that are more likely to reflect their unique situations.
Having obtained power loadlines (aka power curves), users can then leverage the simplicity and effectiveness of the IF to measure the energy consumption of any software workload.

Prize category:

Best Plugin

Judging criteria

Impact on the broader sustainability movement

CPUs carry a significant part of software workloads. The ability to accurately calculate their energy consumption is crucial to the growth of software energy measurability.

What things need to happen for that impact to occur?

In our vision, vendor-generated power curves are important for transparency and provide very good accuracy. But oftentimes end-user systems deviate from the generic, "clean" power curve generated in the vendor's lab, due to customizations, aging effects and custom workloads that are not well represented by the generic curves. For this impact to reach its full potential, we envision users using the Impact Framework and feeding it with their own custom-generated power curves to get better, more accurate energy measurements.

Opportunity

Given the tool to generate custom power curves, and the IF plugin to utilize these curves, users can now measure energy for any SW platform, regardless of vendor provided data.

Modular

Our IF plugin complies with the IF standard and interfaces.
Our power-curve generator tool can execute any benchmark, be it a standard or a custom one.

Video

https://www.youtube.com/watch?v=BWZpbMFQADA

Artefacts:

Generic CPU IF Plugin: https://github.com/pazbardanl/if-plugins/tree/generic-cpu
Power curve generator: https://github.com/ajagann/powercurve-generator/tree/feature/update_polling_result_integ

Usage

Please refer to the respective README files of the artefacts above

Process

We started by studying the power curve generation process of SPECPower and SERT. This involves a calibration phase to determine the workload necessary to fully utilize all available CPU resources, marked as 100% utilization. They then measure power consumption while executing this workload. Subsequently, they iteratively adjust the workload to achieve lower utilization levels, measuring power consumption at each step. For instance, to determine the workload at 90% utilization, they calculate 90% of the workload at 100% utilization and measure the corresponding power. Inspired by this methodology, we structured our benchmark implementation accordingly. Our subsequent task was to develop a flexible framework for future customization. This framework aims to accommodate diverse power polling methods and benchmarks, ensuring extensibility. Our code structure allows users to create their own benchmarks, providing flexibility for granularity adjustments and hardware optimization exploitation.
IF plugin development was done according to the IF plugin interface and development guidelines.
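The calibrate-then-scale-down loop described above can be sketched like this (the runner and power-polling callables are hypothetical placeholders for whatever benchmark and polling method the user plugs in):

```python
# SPECPower/SERT-inspired loop: start from the calibrated 100% workload,
# scale it down in 10% steps, and record power at each utilisation level.
# `run_workload` and `read_power_watts` are hypothetical placeholders.

def generate_power_curve(run_workload, read_power_watts, calibrated_ops: int):
    """Return (utilisation %, watts) pairs suitable for a power-curve file."""
    curve = []
    for target in range(100, -1, -10):        # 100%, 90%, ..., 0%
        ops = calibrated_ops * target // 100  # scale the calibrated workload
        run_workload(ops)
        curve.append((target, read_power_watts()))
    return curve
```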

Inspiration

Our quest to gauge the carbon footprint and energy consumption of our software led us to explore open-source solutions. While impressed by Teads and Boavizta's efforts, closer scrutiny revealed their reliance on power curves for estimation. However, the methods used to generate these curves raised doubts. While power curves and TDP are crucial for energy calculations, their assumptions often don't hold in real-world scenarios. Typically crafted in controlled environments, these curves may not align with the diverse workloads we sought to measure. This disconnect between idealized settings and practical application left us with unresolved questions.

Challenges

Our team is globally dispersed, leading to common communication challenges. Our proficiency in Python also varied among team members, with some seizing the opportunity to enhance their skills in the language. Furthermore, while none of us are power experts, we were fortunate to have a team member with substantial knowledge in this domain. Bringing the team up to speed on the issue and devising an approach required considerable trial and error.
Moreover, having two components in the same project meant significant time and effort spent on integration and troubleshooting.

Accomplishments

We are incredibly proud of achieving our intended functionality. Throughout this journey, we've gained valuable insights into the realm of power management and significantly expanded our proficiency in Python. While acknowledging that there's more to accomplish, we celebrate the solid foundation we've laid down with our framework.

Learnings

The key lesson we gleaned from our hackathon experience is the complexity of power and energy measurement. Despite the well-defined nature of the field and the familiarity with its challenges, addressing these issues meaningfully proved challenging. Whether it's the inability to isolate processes within the machine or ensuring measurement reliability without physical devices, we encountered hurdles. In response, we recognized the necessity of documenting our assumptions and devising strategies to address potential sources of error in future endeavors.

What's next?

Our next steps involve:

  • Comprehensive testing across various environments, including containerized environments and diverse bare-metal setups, to ensure the robustness and adaptability of our framework.
  • Deeper investigation of power measurement methodologies for applications running on single or multiple sockets, with code modifications where needed to track their performance accurately.
  • Rigorous testing of the functionality for hyperthreaded applications, with adjustments to accommodate their nuances.
  • Enhanced CLI support and extended compatibility with AMD processors.
  • A robust suite of test cases and continuous integration practices to fortify the framework's reliability.
  • A broader understanding of power dynamics, seeking insights into the underlying reasons behind observed results.
  • More flexible scripts that dynamically adapt to each benchmark's information requirements.

In general, we need to increase the robustness of the framework.

Integrate Impact Framework into load & performance tests

Prize category

Best Content

Overview

Why?
It would be great to enable development teams to directly integrate Impact Framework into their performance tests, because:

  • Performance and load tests should in general play an important role in measuring how green a piece of software is.

  • Enabling teams not only to measure response times and throughput in performance tests but also to relate the software under test directly to its carbon footprint could motivate them to optimize for minimizing it.

  • The boundary problem of measuring the environmental impact of software should also be easier to handle in the context of performance tests, since teams typically pay close attention to well-defined and repeatable conditions during performance testing.

  • Last but not least: if the load & performance tests are run regularly, improvements to the CO2 footprint can be compared directly over time, and deployments that would drastically increase the environmental impact can be discovered more easily.

How?

  1. Develop two plugins for data exchange between performance test tools (e.g. JMeter, Gatling, or K6) and the Impact Framework: an "Import plugin" to transfer the conditions and parameters of the load-test scripts to IF, clearly describing the boundaries, and an "Export plugin" to transfer the results of IF's impact calculations back to the performance test tools to enrich their result reports.
  2. Develop a template/example manifest combining these import and export plugins with other IF monitoring and impact-calculation plugins (we could start with the SCI plugin).

Questions to be answered

  1. Do we already have monitoring plugins in IF? The scope would certainly become too big if we also had to develop those. I only see the proposal of a Prometheus plugin in the list of proposed projects for this hackathon.
  2. I have not secured my team yet (I am speaking with 2-3 former colleagues), and I am non-technical myself, so we might need to recruit contributors.
  3. Can this project proposal be brought up and briefly discussed in the next Live Prep and Q&A session this coming Monday (March 11th) to get an idea of whether it is worth pursuing (and to still try to get a team)? I hope you spot it before then.

Have you got a project team yet?

Yes

Project team

@yannlv ; @stephane-batteux

Terms of Participation

Project Submission

Summary

We explored how we could utilize the Impact Framework to obtain the carbon emissions of applications during load and performance tests.

We demonstrate how to expand a K6 performance test script to transfer the test parameters and results as input parameters for the GSF Impact Framework and to calculate the operational carbon emitted during the performance test run. The demo scope shows this for the example of the e-net plugin, which calculates the network energy.

Problem

User Story:
As a load tester
I want to integrate my performance tests with GSF Impact Framework,
so that I can get carbon emissions of the application as part of my performance test results.

Load and performance tests are executed under well-defined conditions and with repeatable load inputs. They are therefore an ideal use case for IF, as they inherently solve the boundary challenge. This also provides an easy approach to directly measuring improvements in carbon efficiency over time when load and performance tests are repeated on a regular basis.

Application

We created an example of integrating IF into performance tests using K6, one of the most commonly used performance test tools. To achieve this, we expanded the performance script to generate an IF manifest and invoke it through the command line as the last step of the test process. The performance test parameters and data volumes are transferred into the manifest as input parameters for the carbon calculation.
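The generate-and-invoke step described above can be sketched as follows. This is a minimal Python sketch, not the project's actual K6/JavaScript code; the plugin path, input field names, and the `ie` CLI invocation are assumptions for illustration.

```python
import os
import subprocess
import tempfile

def build_manifest(duration_s, data_sent_gb, data_received_gb):
    """Render a minimal IF manifest embedding the performance-test
    outputs as observations for a network-energy plugin.
    Plugin path and field names are illustrative assumptions."""
    return f"""\
name: k6-carbon-demo
initialize:
  plugins:
    e-net:
      method: ENet
      path: '@grnsft/if-unofficial-plugins'  # assumed package name
tree:
  children:
    load-test:
      pipeline: [e-net]
      inputs:
        - timestamp: '2024-04-01T00:00:00Z'
          duration: {duration_s}
          network/data-in: {data_received_gb}
          network/data-out: {data_sent_gb}
"""

def run_impact_framework(manifest_yaml, ie_cmd="ie"):
    """Write the manifest to a temp file and invoke the IF CLI
    (command name assumed); returns the completed process."""
    with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
        f.write(manifest_yaml)
        path = f.name
    try:
        return subprocess.run([ie_cmd, "--manifest", path],
                              capture_output=True, text=True)
    finally:
        os.unlink(path)
```

In the actual project this logic lives at the end of the K6 test run, so the manifest always reflects the parameters and data volumes of the test that just finished.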

Prize category

Best Content

Judging Criteria

Overall Impact: The easy utilization of IF in the context of load & performance tests creates great potential for the actual usage of IF.

  • Load and performance tests are already a common practice across the industry.
  • Directly related to the resources-utilization.
  • Load and performance tests are executed under very well-defined conditions, giving a good understanding of the boundary of the application observed.
  • Making the carbon measurement repeatable.
  • The conceptual approach should be easily transferable to other performance test tools.

Clarity: The solution is simply a short extension of the performance test script. Anyone already familiar with K6 should be able to follow the example easily.

Innovation: This project brings IF right into the best spot of the software development lifecycle to learn about carbon emissions and motivate improvement!

Video

https://youtu.be/5-LSVtmG1vA

Artefacts

Code: https://github.com/yannlv/k6-perfQA4impact
Presentation:
https://www.dropbox.com/scl/fo/sn7ubp5y3h9nivyyi3gnk/h?rlkey=k5kc9gb3xtrj0ts0ms32t81gh&dl=0

Inspiration

Load and performance testing appeared to be the most natural fit for approaching carbon measurement of running software applications. We therefore aimed to integrate IF into this task.

Challenges

There are no monitoring plugins available yet, so the approach is currently based on mock data for CPU and memory. Integrating real monitoring data in addition to the performance test outputs will need to come in a next step, but should be easy once those plugins become available.

Accomplishments

We laid out an easy approach: expand the K6 script to transfer the parameters and outputs of the performance tests to IF and get the operational carbon emissions in return.
The demo shows the principle by invoking the e-net plugin, using test parameters and data-volume results from the performance test.

Learnings

We first wanted to approach the problem by creating an importer plugin, but quickly realized it would be more suitable to have the performance test itself invoke the execution of IF. So we came to the idea of generating the manifest and invoking it through the command line at the end of the test run.

What's next?

Expand the script to deliver the operational carbon.
Integrate upcoming monitoring plugins (e.g. the Prometheus Importer) into the solution so that the real CPU and memory usage during the test run is considered.
Transfer the solution to other performance test tools (e.g. JMeter or Gatling).

Object storage Carbon estimates

Prize category

Best Plugin

Overview

A plugin to estimate the carbon emissions of object storage based on a few factors such as:

  • data amount
  • length of time data is stored
  • availability of data
  • data velocity

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@nickbarber, @mgriffin-scottlogic, @jmain-scottlogic, @ishmael-burdeau

Terms of Participation

Project Submission

Summary

A series of plugins to estimate the energy used by data storage, particularly cloud object storage, as well as the impact of reading and writing data to storage devices. We have created four plugins that can be used in conjunction with each other, and with other plugins as needed: one to estimate the energy used by cloud object storage, one to return a replication-factor multiplier based on the defaults of some cloud providers, one to estimate the energy consumed by stored data, and one to estimate the energy consumed by reading/writing data.

Problems

How to calculate the carbon emissions generated by data storage, including the reading and writing of data as well as object (blob) storage.

Application

Consists of multiple plugins, designed so that they can be used together or as separate components for non-cloud usage.
One plugin retrieves the replication factor for cloud storage services.
Another takes drive size and power along with duration, data stored, and the replication factor to estimate the total energy associated with storage.
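The apportioning just described can be sketched as follows. The linear capacity-fraction formula is an assumption for illustration, not the plugin's published method.

```python
def storage_energy_kwh(data_stored_gb, drive_size_gb, drive_power_w,
                       duration_s, replication_factor=1.0):
    """Estimate the energy attributable to stored data by apportioning
    a drive's power draw to the fraction of its capacity the data
    occupies, multiplied across replicas.

    The linear apportioning and constant power draw are simplifying
    assumptions; real drives draw different power when idle vs active.
    """
    capacity_fraction = data_stored_gb / drive_size_gb
    watt_seconds = drive_power_w * capacity_fraction * replication_factor * duration_s
    return watt_seconds / 3_600_000  # W·s -> kWh
```

For example, 1 TB stored on a 2 TB, 10 W drive for one hour with 3x replication attributes half the drive's draw to the data, tripled for the replicas.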

Prize category

Best Plugin

Judging criteria

Gives users a way to calculate the carbon emissions of their data storage, and has been created in such a way that it can be applied from single drives up to cloud managed services as long as the user has access to the relevant required data.

Video

https://www.youtube.com/watch?v=dGMidYCsEnk

Artefacts

https://github.com/mgriffin-scottlogic/if-carbon-hack-plugin

Usage

https://github.com/mgriffin-scottlogic/if-carbon-hack-plugin?tab=readme-ov-file#usage

Process

We tried to get data out of cloud services that would give us an indication of what was being used on the back end of cloud object storage services but were unable to get much useful information.
We used an open-source object storage system to test hypotheses we had about the impact of storage, as well as looking up research others had done on the topic. We therefore decided to create a plugin solution in its simplest form, to output the energy used by a storage device. We discovered that energy usage differs drastically between read/write and idle, and therefore split the plugins.

Inspiration

Discussions between Scott Logic and DWP about the impact of 30TB of data in S3 on carbon emissions, and the fact that there was no clear way to calculate or estimate it outside of AWS reporting.

Challenges

Getting data out of cloud services and understanding what is happening at lower levels, especially with replication, redundancy, and availability levels (e.g. intelligent tiering).
Finding the right places to break up the plugins.
Understanding the CPU/memory impact of object storage on top of the storage component, whether a hosted service or a system running locally.

Accomplishments

Calculating an estimate within reasonable error of AWS's own reporting for Common Crawl.
Making use of existing if-plugins to estimate the embodied carbon of storage.

Learnings

Simply storing data has less impact than we expected; CPU usage for reading/writing has a bigger impact.
There is lots of scope to reuse the standard Impact Framework plugins in helpful ways (e.g. reusing embodied-carbon plugins for data on drives).

What's next?

Further research into what information is required to get more detailed calculations for cloud object storage services.

Computation and memory overheads of object storage systems

Erasure coding vs replication.

Automated duplication of observations for replicated regions

Prometheus Importer

Prize category

Best Plugin

Overview

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company. To emphasize this, and to clarify the project's governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes.

Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

Problem/Solution Statement

We propose to build a Prometheus Importer that will facilitate the consumption and processing of metrics through the Impact Framework, especially in a Kubernetes environment.
We plan to allow PROM queries to be used as input for our model, with a specified aggregation window. The resulting array of measurements would be further processed by other models in the pipeline.
This would be especially useful if and when the Impact Framework is deployed within a Kubernetes environment, significantly reducing the overhead of preparing the measurements to be used in the manifest file.

Scope and Limitations

  • Prometheus is a prerequisite, either stand-alone or deployed via Kube-Prometheus-Stack
  • the output array of the model will only contain the timestamp and metric fields and their respective values
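Assuming the standard Prometheus HTTP API `query_range` response shape (result type `matrix`), the transformation into the timestamp-plus-metric rows described above might look like the sketch below; the output field name is illustrative, not the plugin's actual schema.

```python
from datetime import datetime, timezone

def prom_to_observations(prom_response, metric_name):
    """Flatten a Prometheus `query_range` JSON response (resultType
    'matrix') into timestamp/metric observation rows of the kind an
    IF pipeline consumes. `metric_name` is the output key to use."""
    rows = []
    for series in prom_response["data"]["result"]:
        # Each matrix series carries [unix_ts, "value-as-string"] pairs.
        for ts, value in series["values"]:
            rows.append({
                "timestamp": datetime.fromtimestamp(
                    ts, tz=timezone.utc).isoformat(),
                metric_name: float(value),
            })
    return rows
```

A downstream plugin in the pipeline would then consume these rows exactly as it would manually prepared observations in the manifest.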

Questions to be answered

  • Are there any plans to containerize the Impact Framework?
  • Are there any plans on exposing the Impact Framework via REST API? (especially useful if deployed as microservice in a Kubernetes Cluster to be used in conjunction with our model)

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@ParvanAndrei @MXG99 @laurandrei994 @AmeliaCrizbasan @AlexHusleag @angelcataron @hyper11011

Terms of Participation


Project Submission

Summary

Our Prometheus Importer greatly facilitates the consumption and processing of metrics through Impact Framework, especially in a Kubernetes environment.

Problem

Apart from outputting standard metrics, a custom PROM query can be used as input for our model, with a specified aggregation window. The resulting array of measurements can then be further processed by other models in the pipeline.
This would be especially useful if and when the Impact Framework is deployed within a Kubernetes environment, significantly reducing the overhead of preparing the measurements to be used in the manifest file.

Application

Our Impact Framework plugin automatically fetches the most useful information about a system from Prometheus. The metrics exposed automatically are CPU Utilization, Memory Available, Memory Used, and Memory Utilization. Additionally, one can request an extra metric by using the 'Custom Query' input.

Prize category

Best Plugin

Judging criteria

Overall impact: Prometheus and especially the Kube-Prometheus-Stack is the de-facto monitoring solution within Kubernetes Clusters. Increased migration and development of new micro-services will only strengthen the choice of containers and container management systems, like Kubernetes.

Opportunity: Having enabled IF to rely on Prometheus as a data source in a pipeline, the solution can be used on an increasing number of operational environments, cloud infrastructures, cloud services, including container run-time services, etc.

Modular: Having developed this IF module as a data source, it can be easily integrated with other modules downstream. It follows the convention in the output it delivers, and it has been tested against both official and unofficial IF modules.

Video

https://youtu.be/f3sPHGQWGP0

Artefacts

https://github.com/andreic94/if-prometheus-importer

Usage

Process

In terms of development, we started from the IF model template project and then added the necessary implementations following the Azure Importer Module as reference.

Inspiration

While IF is powerful in its capabilities, there are limitations in the tools/services it can use as input data sources. We use Prometheus extensively, and we thought that giving IF this power would pave the road for further adoption by other individuals who use this monitoring solution.

Challenges

Getting familiar with Typescript and its pitfalls reduced our efficiency in providing this module.

Accomplishments

We managed to develop everything we set out to do. The introduction of the 'custom query' input is definitely a highlight.

Learnings

Always get familiar with the language of choice beforehand.

What's next?

Having a way to assess the sustainability impact of micro-services, especially in containerized environments, will allow us to minimize the damage they cause to the ecosystem.

Prefrontal Cortex from Capgemini

Type of project

Writing content about Impact Framework

Overview

Team or Project Name: Prefrontal Cortex from Capgemini

Type of project: Writing content about the Impact Framework, which is key for any initiative, product, or service to be successful, purposeful, and impactful.

Questions to be answered

No response

Have you got a project team yet?

Yes and we are still open to extra members

Project team

Team Mates: @kumarchinnakali

Terms of Participation

Create a manifest file for training an LLM

Type of project

Building a plug-in for Impact Framework

Overview

The idea comes from this discussion.

Large Language Models (LLMs) have evolved rapidly over the past year and have shown great potential in many areas. At the same time, we cannot ignore their impact on the environment.
Therefore, we would like to conduct a survey of existing LLMs to generate a manifest file that could be used to calculate the energy and carbon consumed by LLM training. The actual models/calculations are not requirements here - it is enough to create a manifest file that could run, as this will expose gaps in the current stack.

  • The manifest MUST be a single .yaml file
  • The manifest file MUST have all the required fields described here
  • The manifest MUST define the necessary inputs required for the calculation
  • The manifest MUST include all necessary models in a pipeline
  • The manifest MUST define the expected outputs and their units
  • The manifest SHOULD include comments showing which models are necessary but currently unavailable
  • The manifest SHOULD include some documentation in a separate README explaining the execution flow, any known problems or missing components.
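A skeleton manifest illustrating the MUST and SHOULD items above might look like the following; the plugin names, paths, and input fields are placeholders for illustration, not a validated manifest.

```yaml
name: llm-training-carbon
description: >
  Skeleton manifest for estimating LLM training emissions.
  Plugin names below are placeholders; some required models
  do not exist yet (see comments), per the SHOULD items.
initialize:
  plugins:
    # TODO: no GPU-energy plugin is available yet; placeholder name.
    gpu-energy:
      method: GpuEnergy
      path: 'placeholder'
    sci:
      method: Sci
      path: '@grnsft/if-plugins'   # assumed package name
tree:
  children:
    training-run:
      pipeline:
        - gpu-energy
        - sci
      inputs:
        - timestamp: '2024-03-18T00:00:00Z'
          duration: 86400          # seconds of training, illustrative
          gpu/count: 512           # devices, illustrative
          gpu/power-draw: 400      # watts per device, illustrative
      # Expected outputs: energy (kWh), carbon (gCO2e).
```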

Questions to be answered

At the moment we have no questions.
If there are relevant suggestions or resources, feel free to leave a message.

Have you got a project team yet?

Yes and we aren't recruiting

Project team

Team members: @Xiaoliang1122, @Irenia111

Terms of Participation

Green IT: modeling the impact of software development

Prize category

Best Content

Overview

Green IT is an integral part of ESG and, in the tertiary sector, a major component. This project integrates the recommendations of The Green Grid for the data center and the SCI index for the software part, allowing us to model the impact of software tuning and providing basic capacity-planning capabilities.

Questions to be answered

No response

Have you got a project team yet?

I am working on this project alone

Project team

[email protected]

Terms of Participation

Manifest Generator

Type of project

Enhancing the Impact Framework

Overview

I plan to build an intuitive interface, either a GUI or TUI, aimed at simplifying the process of generating manifest files for software projects. This tool will lower the entry barrier, making green software practices more accessible and appealing to new users, thereby encouraging widespread adoption and contributing to a more sustainable software development ecosystem.

Questions to be answered

Is this project idea viable considering the current stability of the manifest file specifications?

Would a TUI (as an addition to the current CLI, offering more integrated process support) or a GUI (capable of displaying more details, such as graphs) be more suitable?

How should the configuration of plugins be handled? Should it include plugin-specific questions (at least for the standard library) or simply provide an input field with a reference to the plugin documentation for configuration details?

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@fapfaff

Terms of Participation


Project Submission

Summary

This addition to the Impact Framework CLI demystifies the initial complexity of creating manifest files.
By interactively querying users about their project, the tool generates a preliminary manifest tailored to it. After adding the missing inputs and configuring the remaining sections, the manifest is ready to use.
This lowers the entry barrier for developers to take advantage of the Impact Framework and helps make green development more accessible to a broader audience.

Problems

The solution addresses the challenge faced by developers, including myself, who initially feel overwhelmed by the possibilities within the Impact Framework.
Understanding the framework can be daunting, from learning the manifest's structure to navigating through various plugins to ascertain necessary inputs and outputs for a project. The multiplicity of approaches to configuring the manifest adds another layer of complexity. However, the Impact Framework is designed to ease the transition into green software development.
This CLI enhancement simplifies these initial hurdles, making it more approachable for developers to contribute to sustainable software practices effectively. It ensures that the potential for significant environmental impact through software isn't lost due to complexity in the setup process.

Application

I developed an additional command (--init) for the Impact Engine to facilitate the creation of manifest files for developers unfamiliar with green software or the Impact Framework. This command initiates an interactive session, asking straightforward questions about the project, such as its type (Web/App/ML/IoT) or its deployment (OnPremise/Cloud/Hybrid).
The answers to these questions guide the automatic creation of a draft manifest file, hiding the technical details like configuring plugins, allowing users to focus on high-level project characteristics.
This approach makes the process accessible even to those with no prior knowledge of green software principles.
Users are only required to add the inputs later, a step designed to avoid the complexity of more intricate configurations.

Prize Category

✨ Best Contribution

Judging Criteria

My contribution simplifies the initial steps to create a manifest file through an interactive CLI, substantially lowering the entry barrier. This means more developers can utilize IF to write environmentally conscious software, significantly amplifying its overall impact. The likelihood of reaching its full potential is high, especially if integrated directly into IF or highlighted within its documentation, ensuring visibility to all interested users.

It complements the Impact Framework's design philosophy, enhancing user experience without altering its core principles. It ensures transparency with visible outputs via manifest files and simplifies the entry for new users by providing adaptable examples, aligning with the framework's flexibility.
While it is a direct contribution to the Impact Framework CLI, extracting it as a standalone tool is also conceivable.

In terms of innovation and creativity, this project introduces a novel approach by abstracting away the complexities involved in configuring a manifest file, making the process easier for users without experience in green software.

From a user experience perspective, the tool is designed to be straightforward, guiding users through simple questions to generate a manifest file without knowledge of technical details, enabling a smooth start. This approach ensures an almost nonexistent learning curve, making it exceptionally user-friendly.

Video

https://youtu.be/mv5lKIeleYk

Artifacts

The Impact Framework repository: fapfaff/if-tui#1
The documentation repository: fapfaff/if-docs#1

Usage

The updated section in the Docs

Process

Developing the solution involved detailed planning to accommodate the diverse application types.
I meticulously reviewed all available plugins to identify their potential applications across different project components. Faced with the complexity of designing an intuitive yet comprehensive question flow, I opted to structure the development around a State Machine model. This approach allowed for a logical, maintainable architecture. Abstraction from the Inquirer UI ensures flexibility, allowing the tool to evolve with the Impact Framework.
This architecture ensures the tool's adaptability to modifications to the Impact Framework and makes it able to incorporate new plugins in the future.
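The State Machine structure described above can be sketched as follows; the states, questions, and transitions are hypothetical, not the tool's actual flow.

```python
# Minimal sketch of a state-machine question flow for manifest
# generation. States, questions, and transitions are illustrative.
STATES = {
    "project_type": {
        "question": "What type of project? (web/app/ml/iot)",
        "next": lambda answer: "deployment",
    },
    "deployment": {
        "question": "Where is it deployed? (onpremise/cloud/hybrid)",
        "next": lambda answer: "done",
    },
}

def run_flow(ask):
    """Walk the states, collecting answers. `ask` is any callable
    mapping a question string to an answer string (e.g. `input`,
    or an Inquirer prompt in the real tool)."""
    answers, state = {}, "project_type"
    while state != "done":
        node = STATES[state]
        answers[state] = ask(node["question"])
        # Transitions can branch on the answer, which is what makes
        # the flow adaptable to different project types.
        state = node["next"](answers[state])
    return answers
```

Because the UI is passed in as a callable, the flow can be driven by Inquirer, a plain terminal, or a test harness without changing the state machine itself.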

Inspiration

My inspiration stemmed from my own initial confusion about creating manifest files and understanding the functionalities of various plugins within the Impact Framework. I sought to create a tool analogous to create-react-app, which brilliantly simplifies the setup process for new React applications by abstracting away the complexities. This parallel approach aimed to eliminate the steep learning curve for developers new to green software, providing an intuitive gateway to sustainable software development.

Challenges

Despite writing this submission while confined to bed with a fever, the development journey introduced several challenges, notably deciding which components to include or exclude, such as training, inference, and data collection, particularly in areas outside my expertise. Selecting the most appropriate plugins was complex due to the current scarcity of options and a reliance on unofficial plugins, which added an element of uncertainty. This necessitated a thoughtful approach to ensure the tool remained versatile and reliable for various types of software projects.

Accomplishments

Taking part in my first hackathon, especially on a topic I'm passionate about - combining development with sustainability - has been fulfilling. Through this journey, I aimed to make green software more accessible and straightforward for others, contributing to environmental sustainability. This project not only aligned with my passion but also offered a chance to contribute positively to the greater good. I'm proud of the opportunity to merge my interests in a way that could potentially bring about meaningful change.

Learnings

Diving into this project, I ventured into the realm of Green Software, uncovering the nuances and potential of the Impact Framework. This exploration not only broadened my understanding of sustainable software development but also sparked curiosity about the framework's future. Additionally, the process of developing an interactive CLI tool provided practical experience and insights into user-friendly design principles, enhancing my skills in creating tools that are both impactful and accessible.

What's next?

My solution aims to be a long-term asset to the Impact Framework ecosystem, whether by donating the project or by continuing its maintenance myself.
I plan to keep it evolving in line with the Framework's growth.
I'm particularly keen on integrating new plugins developed during the hackathon, making it possible to configure even more components through questions.
This approach aims to make green software more accessible, encouraging new developers to engage with sustainability in technology.

Refactoring Legacy Systems for Sustainability

Prize category

Best Content

Overview

A deep dive into a project (TBD) that involved refactoring an existing, inefficient legacy system to adhere to sustainable software principles.

Questions to be answered

No response

Have you got a project team yet?

No, we would like your help to find team-mates

Project team

@Mercy-Iyanu

Terms of Participation

Create a manifest file for training an LLM

Type of project

Writing content about Impact Framework

Overview

The idea comes from this discussion.

Large Language Models (LLMs) have evolved rapidly over the past year and have shown great potential in many areas. At the same time, we cannot ignore their impact on the environment.
Therefore, we would like to conduct a survey of existing LLMs to generate a manifest file that could be used to calculate the energy and carbon consumed by LLM training. The actual models/calculations are not requirements here - it is enough to create a manifest file that could run, as this will expose gaps in the current stack.

  • The manifest MUST be a single .yaml file
  • The manifest file MUST have all the required fields described here
  • The manifest MUST define the necessary inputs required for the calculation
  • The manifest MUST include all necessary models in a pipeline
  • The manifest MUST define the expected outputs and their units
  • The manifest SHOULD include comments showing which models are necessary but currently unavailable
  • The manifest SHOULD include some documentation in a separate README explaining the execution flow, any known problems or missing components.

Questions to be answered

At the moment we have no questions.
If there are relevant suggestions or resources, feel free to leave a message.

Have you got a project team yet?

Yes and we aren't recruiting

Project team

Team members: @Xiaoliang1122, @Irenia111, @Jjing-Liang

Terms of Participation


Project Submission

Summary

This tutorial outlines methods for estimating the carbon emissions of Large Language Models (LLMs) during training and inference using the Impact Framework. It offers manifest examples for various levels of emission estimates and helps compare emissions across different LLM configurations, promoting carbon reduction. An updated dataset with papers on LLM carbon emissions and related public data is provided for convenient calculation, aiming for a sustainable future for AI and the environment.

Problems

Large Language Models (LLMs) have evolved rapidly over the past year and have shown great potential in many areas. At the same time, we cannot ignore their impact on the environment. The current version of the Impact Framework lacks a comprehensive plugin designed to accurately calculate the carbon emissions generated by LLMs. This deficiency creates a challenge for users seeking to understand and manage the environmental impact of their AI-driven operations. Our proposed solution leverages the capabilities of the Impact Framework to define a manifest specifically tailored to the assessment of carbon emissions from LLMs. This will enable users to gain insights into the constituents of LLM carbon footprints and identify the critical factors influencing them. Additionally, our research will involve the compilation of a readily accessible dataset of public data pertinent to LLM computation, facilitating straightforward lookup and application for those aiming to evaluate and mitigate the carbon emissions associated with their AI models.

Application

Our solution serves as a tutorial guide for calculating the LLM carbon footprint, consisting of a collection of manifest files and explanatory articles. Using the Impact Framework, we illustrate the carbon footprint of LLMs with IF plugins. By supplying simple data inputs, you can achieve a comprehensive understanding of LLM carbon emissions. The manifests and accompanying explanatory content will facilitate your exploration of the LLM carbon footprint structure and assist in comprehending each emission component. An updated dataset with papers on LLM carbon emissions and related public data is also provided for convenient calculation.
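A first-order estimate of operational training emissions, of the kind such a manifest pipeline would encode, multiplies device power by device count and training time, then scales by data-centre PUE and grid carbon intensity. The sketch below illustrates the arithmetic only; all default values are illustrative, not taken from the tutorial's dataset.

```python
def training_emissions_kgco2e(gpu_count, gpu_power_w, hours,
                              pue=1.2, grid_gco2e_per_kwh=450.0):
    """Operational training emissions estimate.

    gpu_count x gpu_power_w x hours gives device energy in Wh;
    PUE scales it to whole-facility energy; grid intensity converts
    kWh to grams of CO2e. Defaults are illustrative placeholders.
    """
    energy_kwh = gpu_count * gpu_power_w * hours * pue / 1000.0
    return energy_kwh * grid_gco2e_per_kwh / 1000.0  # g -> kg
```

Embodied emissions from hardware manufacturing, which the tutorial's manifests also consider, would be a separate additive term.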

Prize category

Best content

Judging criteria

Overall Impact:
The tutorial's approach to evaluating LLMs carbon emissions stands to bolster sustainability efforts, offering a methodology for assessing AI's environmental footprint. By equipping users with the Impact Framework to make eco-conscious decisions, it fosters more sustainable AI development and promotes a shift towards greener technology practices. To actualize this impact, the tutorial requires broad outreach to engage the AI and sustainability sectors, integration with the Impact Framework for ongoing support, and persistent research for methodological refinement and data expansion.

Clarity:
The tutorial clarifies the complex subject of LLM carbon emissions with well-structured guidance and relatable examples. Its use of visual aids and plain language ensures that the material is comprehensible to a wide range of users, from seasoned professionals to those new to the field, making the information both accessible and engaging.

Innovation:
Innovative in both concept and execution, the tutorial breaks new ground in AI sustainability by harnessing the Impact Framework for carbon emission analysis in LLMs. The introduction of diverse manifest examples not only streamlines the estimation process but also educates users on the nuances of carbon footprint assessment, marking a significant advancement in the field.

Video

https://youtu.be/pOIdXF0N9HQ

Artefacts

https://github.com/Jjing-Liang/LLMCarbon--/blob/main/content.md

Usage

https://github.com/Jjing-Liang/LLMCarbon--/blob/main/README.md

Process

The development process of this tutorial involved thorough research and analysis of the environmental impact of LLMs and of existing carbon emissions evaluation tools. The Impact Framework was extensively studied to understand its functionalities and applicability. Based on this research and framework analysis, multiple manifests of different granularity were designed to calculate LLM carbon emissions, considering factors such as server energy consumption, training time, data transfer, and manufacturing costs. The tutorial provides users with a simple and comprehensive method to calculate and assess the carbon emissions of their LLMs, and underwent testing and optimization to ensure accuracy and reliability.
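
The operational part of such a calculation can be approximated with a back-of-the-envelope sketch. The function, its parameters, and the example figures below are illustrative assumptions for demonstration only, not values or code from the tutorial:

```python
# Rough sketch of the operational carbon of an LLM training run.
# All numbers below are illustrative assumptions, not measured values.

def training_carbon_kg(gpu_count, gpu_power_w, train_hours, pue, grid_gco2_per_kwh):
    """Operational emissions: energy drawn by the GPUs, scaled by the
    datacenter's PUE, multiplied by the grid's carbon intensity."""
    energy_kwh = gpu_count * gpu_power_w * train_hours / 1000 * pue
    return energy_kwh * grid_gco2_per_kwh / 1000  # grams -> kilograms

# Example: 1,000 GPUs at 300 W for 720 hours, PUE 1.2, 400 gCO2e/kWh grid.
print(round(training_carbon_kg(1000, 300, 720, 1.2, 400)))
```

A real manifest would also add embodied (manufacturing) emissions and data-transfer costs, which is exactly what the different-granularity manifests in the tutorial break out.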

Inspiration

Our inspiration for developing this tutorial came from three key sources. Firstly, the growing awareness of environmental issues and the need for sustainable development motivated us to address the environmental impact of artificial intelligence, particularly LLMs. Secondly, the widespread application of LLMs in various domains highlighted the potential environmental challenges associated with their resource-intensive nature. Lastly, discovering the Impact Framework's capabilities in environmental assessment inspired us to write a tutorial that integrates with the framework to evaluate LLM carbon emissions. By combining these motivations, we aimed to contribute to the sustainable development of AI by providing a comprehensive solution for assessing and managing LLM carbon footprints.

Challenges

We encountered several challenges during the process, including data availability and collection complexities for evaluating the environmental impact of LLMs. Understanding the complex operations of LLMs and accurately evaluating their environmental impact required expertise in AI and environmental evaluation methodologies. Integrating the evaluation framework with LLMs proved challenging, necessitating customization to align with their specific requirements. However, through extensive research, collaboration, and optimization, we successfully addressed these challenges, resulting in a solution for assessing and managing the environmental impact of LLMs.

Accomplishments

We are proud of our achievement in rapidly constructing the LLMs evaluation and comprehending its relationship with carbon emissions. Despite starting from scratch, we dedicated substantial time and effort to quickly acquire knowledge and apply it effectively, yielding impressive outcomes. Throughout the process, we overcame technical and theoretical hurdles through extensive research and practical experimentation. Our team demonstrated remarkable adaptability and learning capabilities, enabling us to complete the task efficiently. Ultimately, our pride stems from our ability to swiftly build the LLMs manifest and understand the complexities of carbon emissions. This accomplishment showcases our team's commitment and competence, establishing a solid foundation for future endeavors.

Learnings

Through this hackathon, our team has acquired valuable skills and insights. We now understand the factors influencing carbon emissions in large language models. The experience improved our ability to gather and organize information effectively, enabling informed decision-making. Utilizing the IF framework, we navigated complex challenges with logical statements and conditions. Our problem-solving and critical-thinking skills have also sharpened, allowing us to approach challenges creatively. Overall, this journey has provided us with invaluable knowledge and skills.

What’s next

Our solution aims to have a lasting impact on the Impact Framework ecosystem by deepening our understanding of carbon emissions in LLMs, optimizing processes through advanced technologies, and ensuring scalability and replicability.

We hope we can conduct continuous research and data analysis to gain a deeper understanding of the factors that influence carbon footprints in LLMs. This knowledge will enable us to develop targeted strategies for reducing emissions. Additionally, we want to streamline processes and enhance data analytics capabilities through the integration of advanced technologies and intelligent tools. This will improve the efficiency and accuracy of data collection, organization, and analysis, allowing participants to effectively monitor and evaluate their carbon emissions. Furthermore, our solution will be designed to be scalable and replicable across different scales, regions, and sectors. This will facilitate its widespread adoption and replication, maximizing its impact and encouraging more stakeholders to embrace low-carbon practices.

In summary, our solution will contribute to the long-term development of the Impact Framework ecosystem by deepening our understanding of carbon emissions, optimizing processes, and ensuring scalability and replicability. Through these efforts, we will drive the adoption of sustainable practices and contribute to a more sustainable future.

Synapse-Carbon-Extractor

Prize category

Best Plugin

Overview

The goal of the Synapse-carbon-extractor project is to integrate synapse utilization metrics into Impact framework from Azure Synapse Spark clusters, SQL databases, and data processing pipelines to calculate carbon intensity metrics. By measuring the energy consumption associated with data processing tasks, the project aims to provide insights into the environmental impact of cloud-based data processing and empower organizations to optimize their workflows for sustainability.

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

No response

Terms of Participation

iMPACT earth


Type of project

Writing content about Impact Framework

Overview

We are introducing "iMPACT", an innovative apparel line that visually embodies the principles of the Impact Framework. Collaborating with various artists, each design reflects the Framework's role in evaluating the environmental iMPACT of software. Through our online store, the "iMPACT" line aims to create a tangible connection between consumers and the concept of sustainable software. Our goal is to illustrate the synergy between the Impact Framework's objectives and our "iMPACT" apparel line, showcasing how creativity can be harnessed to enhance understanding and visibility of sustainability in software.

Questions to be answered

none

Have you got a project team yet?

Yes and we are still open to extra members

Project team

No response

Terms of Participation

Create a model plugin to calculate the carbon emissions of Neural Networks training and inference

Prize category

Best Plugin

Overview

The idea is to build a model that can be used to calculate the carbon emissions produced during the neural network training and inference phases, while also accounting for embodied carbon emissions.

Questions to be answered

No response

Have you got a project team yet?

Yes and we are still open to extra members

Project team

@SpanishInquisition49 @WoralQuaz

Terms of Participation

Black is Green

Type of project

Building a plug-in for Impact Framework

Overview

Context

It has been known for some time now that darker colours require less energy to illuminate on LED/OLED displays: black is the lowest-energy colour and white the most energy-intensive. Red, green and blue lie in the middle, with blue being less energy-efficient than red and green.
That said, almost all websites load images. It would be fascinating to see how much each image contributes to energy consumption, and eventually to carbon emissions.

Idea/Problem statement

I am thinking of building a pipeline with the following steps:

  1. Image importer plugin - pulls an image from an AWS S3 bucket
  2. Image analyser plugin - analyses the image to figure out the R, G, B percentages, and the height and width of the image
  3. Image efficiency plugin - takes the output of the previous plugin, along with the size of the display area, and calculates total power consumption at a standard/typical brightness
  4. Brightness adjuster plugin - takes the total power consumption and adjusts it according to a given brightness input
  5. Finally, to calculate SCI, the total power consumption needs to be converted to carbon and used as input to the standard SCI model from IF.
    Other standard models like SCI-E and SCI-M can be used, OR a new one might be needed to apply the monitor/display's embodied carbon (not sure at this point).
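
Steps 2-4 above can be sketched in a few lines. The per-channel wattages and the simple linear model below are illustrative assumptions for demonstration, not measured panel coefficients:

```python
# Sketch of steps 2-4: analyse an image's RGB make-up and estimate OLED
# panel power. Per-channel wattages are illustrative assumptions; a real
# plugin would need measured coefficients for the target panel.

# Assumed peak draw (watts, whole panel) per fully-lit channel.
# Blue is modelled as the least efficient channel, as noted above.
CHANNEL_WATTS = {"r": 0.9, "g": 0.8, "b": 1.2}

def channel_fractions(pixels):
    """Average R, G, B levels (0-1) over an iterable of (r, g, b) 0-255 pixels."""
    n = 0
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r; totals[1] += g; totals[2] += b
        n += 1
    return [t / (255 * n) for t in totals]

def panel_power_watts(pixels, area_fraction=1.0, brightness=1.0):
    """Steps 3 + 4: linear model -- power scales with average channel level,
    fraction of the display area covered, and the brightness setting."""
    r, g, b = channel_fractions(pixels)
    base = r * CHANNEL_WATTS["r"] + g * CHANNEL_WATTS["g"] + b * CHANNEL_WATTS["b"]
    return base * area_fraction * brightness

# A pure-white image draws the most power; pure black (ideally) draws none.
print(round(panel_power_watts([(255, 255, 255)]), 2))  # -> 2.9
print(round(panel_power_watts([(0, 0, 0)]), 2))        # -> 0.0
```

The output of this estimate, converted to carbon, would then feed the standard SCI model as described in step 5.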

Scope

For hackathon, it would be limited to

  1. LED/OLED displays (LCDs use a backlight and work on a different technique than LEDs).
  2. AWS S3, though it can later be extended to other providers.
  3. A direct linear correlation between brightness and consumption (as brightness increases, power consumption increases linearly).

Impact

There are hundreds of websites that host stock photos and images for download and use, e.g. Shutterstock, Unsplash, Getty Images, etc.
If these images were to display SCI as part of their metadata, then users could become 'Carbon Aware' and would know the impact of including an image in their webpage.

Language

Leaning toward Python but not given up on TypeScript yet.

Questions to be answered

  1. Are there plugins already available which do these image/colour manipulations?
  2. Is a pipeline better, OR can all the steps mentioned above be part of a single plugin/model?
  3. Would passing total-embodied-emissions in the model config for SCI-M help me calculate the embodied carbon of displays/monitors?
  4. I am not sure if I can use SCI-E and SCI-O directly, given that their inputs are mostly CPU, GPU and memory related.
  5. SCI should be fine to use, as it takes carbon as input; as long as my plugins output carbon, I can use SCI.

Have you got a project team yet?

I am working on this project alone

Project team

@mads6879

Terms of Participation

XBox Game Carbon Emission calculator

Prize category

Best Plugin

Overview

To capture the carbon emissions of Xbox games, we will develop a tool based on the Green Software Foundation's SCI (Software Carbon Intensity) methodology and the Impact Framework (IF). It will capture carbon emissions across the Xbox game lifecycle: measuring power consumption during gameplay, estimating server and download emissions, raising awareness of development impact, utilizing "carbon-aware" features, and promoting responsible recycling.

Questions to be answered

No response

Have you got a project team yet?

Yes and we are still open to extra members

Project team

No response

Terms of Participation

GreenCode - helping developers write greener code

Prize category

Best Plugin

Overview

GreenCode is evolving. My existing MVP, a robust static analyzer within VSCode, now embarks on a groundbreaking journey to integrate IF. This implementation is designed to revolutionize how sustainability departments monitor and forecast software energy consumption directly from the development environment.

Core Features:

  • Energy Consumption Analysis: GreenCode will now assess the energy impact of code changes, translating complex analyses into actionable insights.
  • Impact Framework Integration: By incorporating IF, GreenCode will offer precise kWh forecasts, empowering developers to see the future energy footprint of their software as they code.
  • Sustainability at Your Fingertips: Tailored for both developers and sustainability departments, our tool aims to bridge the gap between software development and environmental stewardship.

Questions to be answered

No response

Have you got a project team yet?

No, we would like your help to find team-mates

Project team

No response

Terms of Participation

Summary
GreenCode revolutionizes sustainable software development by seamlessly integrating the detection and suggestion of code optimizations into the developer's workflow, focusing on reducing energy consumption and carbon footprint. In the demo video, you can see two different code snippets: standard (unoptimized) and green (optimized with GreenCode). GreenCode leverages the Impact Framework to provide precise measurements of these optimizations' environmental impact; in the demo video, IF is executed in the background of the VSCode extension. IF with its Azure plugin is part of the GreenCode VSCode extension as of version 0.2.1.

Problems
The environmental impact of software development, notably its energy consumption and carbon footprint, often goes unnoticed, leading to inefficient and unsustainable applications. GreenCode tackles this by enabling developers to directly identify and rectify energy inefficiencies in their code, while the Impact Framework quantifies the environmental benefits of these optimizations.

Application
GreenCode, a VSCode extension, empowers developers to detect and replace inefficient code patterns with energy-efficient alternatives in real-time. Working alongside the Impact Framework, it measures the environmental impact of software, offering a comprehensive toolkit for sustainable development by providing actionable insights and precise impact measurements.

Judging Criteria

  • Overall Impact & Sustainability Integration: GreenCode introduces a novel solution to a critical but often overlooked aspect of software development—environmental sustainability. By enabling developers to detect and rectify inefficiencies in code that contribute to unnecessary energy consumption, GreenCode not only promotes energy efficiency but also empowers a broad base of software developers to contribute positively to the sustainability movement. The integration with the Impact Framework for accurate measurements amplifies this impact, providing a clear and quantifiable connection between code optimizations and environmental benefits. Further investment in training software developers and making them conscious about this movement will make the integration part easier.

  • Opportunity
    By integrating with the IF Azure plugin, GreenCode unlocks the potential for developers to apply IF's sustainability assessments within Azure, and by extension, across varied environments like AWS and Kubernetes. This capability not only broadens the scope of environments where IF's insights can be leveraged, but also simplifies the process for developers to incorporate sustainability metrics into their workflow, promoting widespread adoption of environmental impact assessments across multiple platforms.

  • Modular
    The integration between GreenCode and the IF Azure plugin exemplifies the Unix philosophy by doing one job exceptionally well—providing accurate, actionable environmental impact data directly within the developers' workflow. This focused approach ensures the plugin remains lightweight and efficient, fitting seamlessly into existing development pipelines. The IF Azure plugin's ability to deliver specific, valuable insights with minimal overhead showcases its effectiveness and utility as a model of modularity and precision in the development ecosystem.

  • Innovation and Creativity: The project stands out for its creative approach to sustainability in software development. GreenCode's real-time detection and suggestion mechanism, coupled with actionable insights from the Impact Framework, represent an innovative leap in making sustainability a core aspect of software engineering practices.

  • Alignment with Design Philosophy: GreenCode's development was guided by the Impact Framework's design philosophy, emphasizing transparency, neutrality, modularity, and flexibility. By providing developers with immediate feedback on the environmental implications of their code, GreenCode aligns perfectly with these principles, fostering an ecosystem where sustainability can be seamlessly integrated into any software project.

  • User and Developer Experience: For users, GreenCode demystifies the often complex domain of software sustainability, offering a user-friendly interface and actionable insights that make sustainable coding accessible to all developers. For developers, it ensures a smooth integration into existing workflows, with minimal setup and a focus on enhancing rather than disrupting the development process. This dual focus on UX and DX is crucial for widespread adoption and effectiveness.

  • Community and Ecosystem Contribution: Beyond its direct functionalities, GreenCode contributes to the broader Impact Framework ecosystem by serving as a model for how tools can bridge the gap between sustainability metrics and everyday coding practices. It opens the door for further innovations and tools in this space, encouraging a community-wide shift towards more sustainable development practices.

Video
https://youtu.be/Qe9_IBCE3r0

Artefacts
https://marketplace.visualstudio.com/items?itemName=GreenCode.greencode

Usage
Ensure Prerequisites: Confirm that both GreenCode and the Azure plugin for the Impact Framework are installed and configured in your VSCode environment.

Open Your Project: Launch VSCode and navigate to the project you wish to analyze.

Configure IF Settings: In your project, make sure you have an azure_test_gc.yaml (or similarly named) file, as well as a .env file, configured with the necessary details for the Azure plugin. The YAML file specifies your Azure subscription ID, resource group, and the name of the VM you're monitoring; the .env file holds the tenant ID, client ID, and client secret.
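
A minimal sketch of the .env file described above; the variable names are illustrative assumptions, not the plugin's documented keys (check the plugin's README for the exact names):

```text
# .env -- Azure credentials for the IF Azure plugin
# (variable names are illustrative; see the plugin docs for the exact keys)
AZURE_TENANT_ID=<your-tenant-id>
AZURE_CLIENT_ID=<your-client-id>
AZURE_CLIENT_SECRET=<your-client-secret>
```

The azure_test_gc.yaml file then supplies the subscription ID, resource group, and VM name that the plugin should monitor.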

Analyze CPU Utilization:

Open the command palette in VSCode using Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (macOS).
Type and select GreenCode: Run IF and display results. This command triggers the Azure plugin through the Impact Framework to start the analysis.
GreenCode communicates with the Impact Framework, which then fetches and processes CPU utilization data from your specified Azure resources.

View Results:
Once the analysis is complete, GreenCode displays the CPU utilization insights directly in VSCode.
Look for the output panel or a pop-up notification in VSCode showing the CPU utilization percentage, observation window, and any recommendations for optimization if the utilization suggests inefficiencies.

Process
The development journey began with integrating the Impact Framework's measurement capabilities into GreenCode, enabling real-time environmental impact assessments. Focusing on using IF to measure two different sets of instructions run in SQL (eco vs. standard), I conducted extensive testing to ensure the tool's accuracy and effectiveness in development environments.

Inspiration
The inspiration behind GreenCode was the realization that software developers could play a pivotal role in environmental sustainability if given the right tools. By creating a solution that not only detects inefficiencies but also leverages the Impact Framework for impact measurement, I envisioned transforming software development into a force for good.

Challenges
One of the main challenges was making the Azure plugin work in IF. I wanted to keep the workflow simple for the hackathon and had the idea of running the code on an Azure VM, but I had to make many additional changes (not part of the IF tutorial) for the plugin to work. The challenge after that was integrating IF as smoothly as possible into the GreenCode VSCode extension.

Accomplishments
I'm particularly proud of how the plugin facilitates a significant shift towards sustainable development practices. Its ability to detect, suggest, and, with the Impact Framework, measure the environmental impact of code changes represents a groundbreaking advancement in eco-friendly software development.

Learnings
Developing the plugin taught me the profound potential of integrating sustainability into the software development lifecycle. I learned the technical challenges of accurate environmental impact measurement and the importance of providing developers with immediate, actionable feedback on their code's sustainability.

What's Next?
GreenCode aims to broaden its detection capabilities, support more programming languages, and deepen integration with the Impact Framework for even more precise impact measurements. Moreover, GreenCode is currently in the process of being implemented in a corporate environment. By building a community around sustainable development, GreenCode will continue to drive the software industry towards a more environmentally conscious future.

[Org Carbon Footprint As a Data Product] - Visualise and track an organisation's carbon footprint so that the organisation can make better decisions to maintain a sustainable environment

Prize Category

Best Plugin

Overview

Creating an organisation's carbon footprint as a data product involves collecting, analyzing, and presenting data related to the organisation's environmental impact with the help of visualizations and dashboards. For the MVP we have started with visualizations as the output, in the form of HTML, hence the name visualization-plugin.
The output of this plugin is an HTML file of visualizations that presents key metrics and insights related to the organization's carbon footprint.
The following steps are involved in building this data product:

  1. Data Collection:

Gather data from various application sources within the organisation, such as energy consumed by CPUs and carbon emitted by cloud servers.
Uses the mock-observations plugin to consume data as its input.

  2. Data Processing and Analysis:

Clean and preprocess the collected data to ensure consistency and accuracy.
Calculate carbon emissions for each data source using established methodologies and emission factors.
Invokes the cloud-metadata, teads-curve, watttime, and operational-carbon plugins to produce the operational carbon footprint for the servers.

  3. Visualization and Reporting:

Developed interactive dashboards and visualizations to present key metrics and insights related to the organization's carbon footprint, producing an HTML file as the output product of the plugin so that stakeholders can unlock insights from the data.
Use charts, graphs, maps, and other visual elements to communicate complex data in a clear and intuitive manner.
Create customized reports and presentations for different stakeholders, such as executives, sustainability teams, investors, and regulators.

Use feedback loops to gather input from stakeholders and incorporate suggestions for improvement.
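
The reporting stage above could be sketched with nothing but the standard library; the function name, field names and styling below are illustrative assumptions, not the plugin's actual implementation:

```python
# Minimal sketch of the reporting stage: turn per-server carbon figures
# into a standalone HTML bar chart. Names and styling are illustrative.

def render_report(rows):
    """rows: list of (server_name, carbon_grams) tuples -> HTML string."""
    peak = max(c for _, c in rows) or 1  # avoid division by zero on all-zero data
    bars = "\n".join(
        f'<div><span>{name}: {carbon} g</span>'
        f'<div style="background:#2e7d32;height:12px;width:{carbon / peak * 100:.0f}%"></div></div>'
        for name, carbon in rows
    )
    return f"<html><body><h1>Carbon per server</h1>{bars}</body></html>"

html = render_report([("web-1", 120), ("db-1", 340)])
# The plugin would then write this out, e.g.:
# open("report.html", "w").write(html)
```

A real IF plugin would receive these rows as pipeline inputs and emit the HTML file path in its outputs.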

Impact :
By treating the organization's carbon footprint as a data product, we can provide stakeholders with valuable insights into environmental performance, drive positive change, and demonstrate a commitment to sustainability and corporate responsibility.

Questions to be answered

No response

Have you got a project team yet?

Yes and we are not recruiting

Project team

Team Name : Datamusers
Team Members :
@ramgsuri
@Aditya-Amgaonkar
@ravinderj
@vanshkapoor

Terms of Participation


Project Submission

Summary

The Visualisation Plugin seamlessly integrates into the pipeline to surface insights about the carbon emissions and CPU consumption of an organization's servers, and returns a report with detailed comparison visualisations for your list of servers over a time range. As of now, our report provides analysis of two parameters: CPU consumption and carbon emissions by your servers.

Through these reports, stakeholders can analyse and track how much carbon and CPU consumption is happening over time and take further steps. The reports can act as a go-to source for understanding server emissions and utilization, so stakeholders can take appropriate actions and business decisions.

Problem

It is very hard to make decisions from raw data presented in tabular format. If a business needs to unlock insights from data, visualisations can help stakeholders make appropriate business decisions.
Hence, with the help of visualization-plugin, stakeholders can now plug the plugin into the pipeline and, using the Impact Engine, produce visualizations as the data product output. These help them unlock insights from the data and track the carbon emissions of an organisation, so they can take appropriate actions and business decisions accordingly.

Application

The Visualisation Plugin comprises inputs and outputs. It seamlessly integrates into the pipeline to surface insights about the carbon emissions and CPU consumption of an organization's server data, and returns a report with detailed comparison visualisations for your list of servers over a time range. As of now, our report provides analysis of two parameters: CPU consumption and carbon emissions by your servers.

Prize category

Best Plugin

Judging Criteria
Overall Impact 👩🏽‍⚖️
By integrating this plugin into the pipeline, stakeholders can now unlock insights from data, visualize the graphs, and take appropriate actions to reduce carbon emissions.

Opportunity
Opportunity to unlock insights from data and present visuals as output so that stakeholders can take better decisions from organisation's standpoint.

Modularity
The code is modular and can easily be integrated with existing IF official and unofficial plugins.
It has a modular architecture and follows the Single Responsibility Principle.
DevOps is integrated, as CI/CD is enabled via GitHub Actions.

Video

https://youtu.be/pz5tNm1TRNE

Artefacts

https://github.com/ramgsuri/visualization-plugin

Usage

https://github.com/ramgsuri/visualization-plugin/blob/main/readme.md

Process

We took a clone of the IF repo and then leveraged the copy-plugin example to start the plugin and work through the codebase. This helped us start simple and integrate CI/CD into the repository from the beginning, keeping the process DevOps-enabled and agile. We then ran the standalone plugin and integrated it into the current framework; once E2E was done, we started adding code and functionality to the plugin, enriching the config and inputs and consuming them in code.

Inspiration

The simplicity of the Impact Framework, together with a sense of responsibility towards the environment and sustainable living, inspired us to take some time out of daily work and chores, learn from the Green Software Foundation course, and get hands-on with the IF codebase.

Challenges

The challenge we faced was integrating the existing plugins and leveraging the inputs of the earlier pipeline stages. Since we produce visualizations as output, we also wanted to make sure our plugin can be used in the middle of the pipeline, not just at the end, even though its final output is HTML visualizations.

Accomplishments

We were able to integrate the plugin in the current IF codebase and were also able to leverage the current existing plugins into our work and integrate CI/CD into the development work as well.

Learnings

We learned about the Impact Framework and new terminology around developing green software and making the environment sustainable for all.

What's next?

To scale the solution and the amount of data, we plan to leverage LLMs on our data and train a model so that the solution can incorporate storytelling techniques to highlight successes, challenges, and opportunities for reducing carbon emissions.
Continuous monitoring and improvement will help organisations keep track of their OKRs for carbon footprint.
We will also set targets and goals for reducing carbon emissions and track progress over time.

Kepler Model for Kubernetes Clusters

Type of project

Building a plug-in for Impact Framework

Overview

Context

Kepler is a CNCF project for measuring energy consumption of bare metal Kubernetes clusters using RAPL. A Kepler model for Impact Framework would query Prometheus for Kepler metrics and allow calculating the SCI score for applications deployed to Kubernetes clusters when Kepler is deployed.

Idea/Problem statement

We are thinking of building a pipeline with the following steps:

  1. Prometheus importer plugin - pulls Kepler metrics from the Prometheus server
  2. Kepler plugin - analyzes the Kepler Prometheus metrics and normalizes the data for consumption by the official IF SCI models
  3. IF official SCI models - calculate the SCI score
  4. Prometheus metrics publisher plugin - finally, publishes the SCI score back to the Prometheus server
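
The four steps above could be wired together in an IF manifest along these lines. The plugin names, methods, and paths here are placeholders sketching the shape of the pipeline, not the published schema of any of these plugins:

```yaml
# Sketch of the four-step pipeline as an IF manifest.
# Plugin names, methods, and paths are placeholders, not a published schema.
name: kepler-sci-pipeline
initialize:
  plugins:
    prometheus-importer:   # step 1: pull Kepler metrics from Prometheus
      method: PrometheusImporter
      path: '<importer-plugin-path>'
    kepler:                # step 2: normalize Kepler metrics for the SCI models
      method: Kepler
      path: '<kepler-plugin-path>'
    sci:                   # step 3: official SCI calculation
      method: Sci
      path: '<official-plugins-path>'
    prometheus-publisher:  # step 4: push the SCI score back to Prometheus
      method: PrometheusPublisher
      path: '<publisher-plugin-path>'
tree:
  children:
    k8s-workload:
      pipeline:
        - prometheus-importer
        - kepler
        - sci
        - prometheus-publisher
```

The manifest linked in the Artifacts section below shows the team's actual pipeline.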

Scope

For hackathon, it would be limited to

  1. Measuring SCI score for Kubernetes workload running on bare metal
  2. We would require Kepler and Prometheus/Grafana to be installed on the Kubernetes cluster
  3. IF pipeline to calculate SCI score will be invoked via a Kubernetes cron job

Questions to be answered

  1. What programming languages can we use?
  2. Are there any generic models we can build on? e.g for Prometheus or Kubernetes

Have you got a project team yet?

Yes and we are still open to extra members

Project team

@greenscale-nandesh @rossf7 @SFulpius @tmcclell

Terms of Participation


Project Submission

Summary

The Kepler Importer plugin is an essential tool for Kubernetes users who are serious about understanding the environmental impact of their workloads. With this powerful plugin, users can achieve a deeper insight into how their workloads affect the environment, and make informed decisions to reduce their impact.

Problem

Kepler (Kubernetes-based Efficient Power Level Exporter) is an impressive tool that leverages eBPF to probe performance counters and system stats. Using advanced ML models, it accurately estimates energy consumption based on these stats and exports them as Prometheus metrics. However, it currently lacks support for assessing the environmental impact of K8s workloads.

Application

The Kepler Importer plugin is a powerful tool that enables easy and efficient importing of Kepler metrics into the Impact Framework. By leveraging this plugin, users can effortlessly calculate the SCI score using the official plugins - a crucial metric for evaluating the environmental impact of applications deployed to Kubernetes clusters. With its advanced features and user-friendly interface, the Kepler Importer plugin is the perfect solution for businesses looking to optimize their Kubernetes deployments.
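
The SCI score mentioned here follows the Green Software Foundation's specification, SCI = ((E × I) + M) per R. A minimal sketch of that formula (the example figures are illustrative assumptions):

```python
# SCI per the Green Software Foundation spec: ((E * I) + M) per R, where
# E = energy (kWh), I = grid carbon intensity (gCO2e/kWh),
# M = embodied emissions (gCO2e), R = functional unit (e.g. requests served).

def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# 2 kWh on a 400 gCO2e/kWh grid, plus 200 g embodied, over 1,000 requests:
print(sci(2, 400, 200, 1000))  # -> 1.0 gCO2e per request
```

In this pipeline, E comes from the Kepler metrics the importer plugin normalizes, and the official IF plugins supply I and M.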

Prize category

📦 Best Plugin

Judging Criteria

Overall Impact
This plugin could increase the practice of measuring energy consumption in Kubernetes clusters, enabling teams to reduce their footprint. It could help increase the adoption of both Impact Framework and Kepler. This is important because a 2022 report stated that 70% of organizations had adopted Kubernetes.

Opportunity
It opens the door to using Impact Framework and Kepler across both Kubernetes and the wider cloud native ecosystem. Kepler supports using RAPL on bare metal and can use machine learning when deployed to the cloud, where RAPL is not available. Our plugin supports both of these modes.

Modular
We have followed the Unix philosophy to keep a small scope and ensure the plugin just integrates with Prometheus. We also tested it with the other IF plugins to ensure they can use the energy value the plugin provides.

Video

https://www.youtube.com/watch?v=sn0zrhghrjE
https://docs.google.com/presentation/d/1xXVTVBOXM1ZJXWYE67anPl9cWEwAis8-KZoJL4p-UM8/edit?usp=sharing

Artifacts

Plugin - https://github.com/greenscale-nandesh/if-unofficial-models/tree/main/src/lib/kepler-importer
Manifest - https://github.com/greenscale-nandesh/if-unofficial-models/blob/main/kepler-pipeline-manifest.yaml

Usage

https://github.com/greenscale-nandesh/if-unofficial-models/blob/main/src/lib/kepler-importer/README.md

Process

We started by reviewing the existing plugins, agreed the design on a Miro board, and created GitHub issues. We then found a Prometheus SDK for TypeScript and used the WG Green Reviews cluster, which has Kepler deployed, for testing. Next, we created a manifest with a full SCI pipeline to integrate with the other plugins. We held weekly planning calls before the start and twice-weekly team calls to sync during development.

Inspiration

Some of our teammates are part of the CNCF Green Reviews working group. During the development of its monitoring pipeline, they realized that an Impact Framework plugin that generates an SCI score for K8s workloads using Kepler would be useful for the working group and also the wider cloud native community.

Challenges

One challenge was debugging the inputs and outputs between plugins: initially we didn't realize we needed to pass all inputs through to the next stage in the pipeline. Another was that Kepler provides metrics in joules, which we needed to convert to kWh.
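The joules-to-kWh conversion mentioned above is a one-liner, since 1 kWh = 3.6 × 10⁶ J. A minimal sketch in TypeScript (the helper name is ours, not the plugin's):

```typescript
// Kepler exports energy counters in joules; IF pipelines expect kWh.
// 1 kWh = 3,600,000 J, so divide by 3.6e6.
const JOULES_PER_KWH = 3_600_000;

function joulesToKwh(joules: number): number {
  return joules / JOULES_PER_KWH;
}

console.log(joulesToKwh(3_600_000)); // 1
console.log(joulesToKwh(180_000));   // 0.05
```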

Accomplishments

  • Great team work - Working across geo-locations to deliver this plugin (even thru the time change twice)
  • Integrating Kepler Plugin with rest of the IF official plugins to generate the SCI score

Learnings

  • Getting a deeper understanding of how SCI scores are calculated
  • How to use Impact Framework and write manifests
  • Learning Kepler’s design and the metrics it exposes
  • How to calculate embodied carbon using the Boavizta API

What's next?

  • Discuss with Green Reviews WG whether they can use the plugin for calculating SCI scores for CNCF projects
  • Discuss with the Kepler team if there are any enhancements they would recommend and how to increase adoption of both the plugin and Kepler itself

Extending the Impact Framework to consider CI pipelines

Type of project

Enhancing the Impact Framework

Overview

The Impact Framework from the GSF is making strides in measuring the real carbon intensity of compute resources, giving us developers a chance to understand the impacts of our software and cloud architecture. Wouldn't it be great if we could contribute to the framework's roadmap by unlocking GitHub Actions or other CI technologies? This could put the IF at every developer's fingertips when writing code and deploying cloud resources, and could significantly increase the number of developers who are aware of the carbon impact of their software.

Since the IF measures, simulates and models the environmental aspects of software, it could be a great addition to allow you to assess the SCI of software during your own CI process.

We believe this would be a highly beneficial addition to the roadmap of the IF in terms of driving usage, but also increasing awareness of SCI, and allowing developers to take action to reduce their environmental impact.

We need to consider popular tools and approaches to get the most use from developers, and also consider what data is most useful for developers to be able to act to reduce their SCI.

IF: https://github.com/Green-Software-Foundation/if

Questions to be answered

Can we improve the IF by considering CI pipelines? What data can we extract and use? Is the integration of the IF and a CI pipeline a sustainable approach in itself, or does it drive up the SCI of a solution with limited impact? How should this information be displayed to a user as part of a CI process?

What CI pipeline tools can we use? What popular platforms could be investigated:

GitHub Actions
GitLab CI
Travis CI
Circle CI
SourceHut Builds
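As a rough sketch of what a GitHub Actions integration could look like, everything below is hypothetical: the IF CLI entry point is shown as `if-run` and the manifest path is invented, so treat this as an illustration of the idea rather than a working recipe.

```yaml
# Hypothetical workflow: run an Impact Framework manifest on every push.
name: sci-check
on: [push]
jobs:
  measure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm install -g @grnsft/if        # IF package name, assumed
      - run: if-run --manifest ./manifest.yml # CLI name and flag, assumed
```

The SCI output could then be surfaced as a job summary or PR comment to put the measurement in front of the developer at review time.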

Have you got a project team yet?

No, we would like your help to find team-mates

Project team

No response

Terms of Participation

Environmental impact risk scorecard for services

Prize category

Beyond Carbon

Overview

We plan to create a YAML manifest, or possibly an exhaust plugin, that produces a rating for processes or services running on known cloud VMs.

Inputs will be rated for SCI level, Software Water Intensity (we will need to build a plugin, as per the first part of the discussion at https://github.com/Green-Software-Foundation/hack/discussions/82) and total energy consumption.

Given the lack of independent criteria, we will allow users to set the RAG thresholds themselves, with the intention of using them to show improvement or decline over time.

As a stretch goal we may look at adding another plugin to assess SADPI, or Software Abiotic Depletion Potential Intensity, based on Boavizta data.

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@paulonevrything @omotrium @OpenYast @colesadam @yohan-oc

Terms of Participation


Project Submission

Summary

We decided to create a plugin for a risk scorecard that rates the environmental impact of software based on Software Carbon Intensity (CO2e/R) and Software Water Intensity (l/R). This gives us a way to classify a functional unit (e.g. an HTTP request) in terms of its impact with regard to two environmental concerns.

To calculate SWI we also needed to create plugins for measuring the water consumed when cooling the software workload. We have also proposed that the water consumed during power generation should be added, and identified datasets that could be used for estimation.

Problem

It is difficult for non-technical stakeholders, or those who are not familiar with sustainability concepts, to understand the numeric readings from calculations such as the SCI. We believed that a Red/Amber/Green (RAG) rating would provide a more obvious indicator of the impact that software under development will have in real life. We had seen that this system is currently used in areas such as assessing the sustainability of websites.

We also believe that stakeholders are familiar with global warming as a consequence of fossil fuel power generation, which is easily linked mentally to the electricity used by datacentres, whereas many would not be aware of the other large environmental impact caused by water use. Linking the two concepts in a similar way should help raise awareness among stakeholders and development teams.

There is no current plugin designed to measure water use so we knew we would need to create one ourselves. Prompted by discussion in the GSF forum we set out to establish the logic behind a single software water intensity figure.

Application

We developed an app to measure data inputs and create a risk scorecard showing the RAG status of SCI and SWI.

The app has 4 plugins:

  • Water Power -- uses the Electricity Maps API to obtain the power generation mix for a location during a period. Using total energy and a dataset that maps each generation type to water use, we derive the water used to generate the energy in the same period.

  • Water Cloud -- multiplies energy by the default datacentre WUE figure of 1.8 l/kWh to get a total for cooling water for each process during the period.

  • SWI -- totals water(power) and water(cloud) and allocates to functional unit/time according to the function:

SWI = (E * (I1 + I2) + M) per R

where:
E is the energy consumed by the software, in kWh
I1 is the water used to produce that electricity, in l/kWh (water-generation)
I2 is the water used to cool data centres, in l/kWh (water-cloud)
M is the water used during the creation and destruction of the hardware the software runs on (water-embodied; not yet implemented)
R is the functional unit

  • RAG Rating -- applies user-set thresholds to SCI and SWI and outputs Red, Amber or Green.
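As a worked example of the SWI formula and the RAG rating described above, here is a minimal sketch in TypeScript. The numbers and thresholds are illustrative only, and the function names are ours:

```typescript
// SWI = (E * (I1 + I2) + M) / R, per the formula above.
// E: energy consumed (kWh); I1: water per kWh of generation (l/kWh);
// I2: cooling water per kWh (l/kWh, e.g. the default WUE of 1.8);
// M: embodied water (l); R: number of functional units (e.g. HTTP requests).
function swi(E: number, I1: number, I2: number, M: number, R: number): number {
  return (E * (I1 + I2) + M) / R;
}

// RAG classification against user-set thresholds: green below `amber`,
// amber below `red`, red otherwise.
function rag(value: number, amber: number, red: number): "Green" | "Amber" | "Red" {
  if (value < amber) return "Green";
  if (value < red) return "Amber";
  return "Red";
}

// 2 kWh, grid water 1 l/kWh, WUE 1.8 l/kWh, no embodied water, 1000 requests:
const score = swi(2, 1, 1.8, 0, 1000); // ≈ 0.0056 l per request
console.log(rag(score, 0.005, 0.01));  // "Amber"
```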

Prize category

Beyond carbon

Judging criteria

Overall Impact

The main purpose of rating software using a simple traffic light system is to make it clearer to developers and stakeholders whether they are meeting appropriate standards. While using the SCI measurement is a good standard, the addition of an SWI measurement takes the assessment beyond carbon and broadens the understanding of sustainable software.

Educational Value

Most software teams and stakeholders will be able to make the connection between energy use and power plants emitting CO2, but the link to water is less obvious. Presenting the amount of water used by software processes alongside carbon provides a useful mental model and a link to real human problems.

Synthesizing

The plugins use a variety of data sources:

  • The Electricity Maps API is used to get a breakdown of energy produced per power source; this is used to calculate water use due to power generation. Data is not available for every country or zone, so we default to an average of 1 l/kWh.

  • In the future we plan to incorporate water stress information coming from https://ourworldindata.org/water-use-stress to apply a coefficient to the SWI RAG rating.

References

Video

https://youtu.be/1oIL9X0m780

Artefacts

GitHub Repo - https://github.com/opencastsoftware/carbon-hack-2024

Usage

Read Me - https://github.com/opencastsoftware/carbon-hack-2024/blob/main/README.md

Process

At the start of the project none of the team had any hands-on experience with the Impact Framework so the first stage involved upskilling the development team to provide them with a foundation upon which the 4 plugins could be developed.

The next stage involved one of our data analysts carrying out extensive research on water consumption across datacentres. This allowed the data analyst to create a data model expressing the relationship between energy consumption and water consumption. This model, when combined with other metrics, allowed us to calculate a value for SWI.

The development team then took the data model and applied it across the series of 4 plugins described in a previous section.

The team worked collaboratively with daily stand-ups and mob sessions to progress the development work and refine the data model.

Inspiration

The team identified a gap in the capability of the Impact Framework.

Currently the framework allows energy consumption to be evaluated but lacks the ability for users to determine the impact of water consumption during software development.

A component to show water consumption and its impact on software development would enhance the case for the adoption of sustainable practices.

By incorporating a region-based water stress coefficient into the SWI RAG threshold calculation, the "if risk scorecard" plugin can provide context-aware guidance to users.

This approach acknowledges the varying levels of water stress in different regions and encourages users to prioritise water conservation in areas with higher water stress.

Opencast has made a commitment to adopting sustainable practices. The enhancement of the Impact Framework is an example of how Opencast seeks to promote sustainability through the provision of toolsets that provide visibility of the impact of software development on the environment.

Challenges

As part of our assessment we wanted to add a beyond carbon measurement and chose Software Water Intensity modelled partly on SCI.

We used a default value for WUE (water usage effectiveness) of 1.8l/kWh averaged across all datacentres. This value includes renewables as they too consume water.

We were able to add a dataset of water usage per kWh produced for different types of generation (e.g. coal, nuclear, hydro) and used the Electricity Maps API to derive a breakdown of grid usage in the period being measured.

The SWI model does not include a measure for embodied water; this could be added in future.
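The grid-mix calculation described above amounts to a weighted average of per-source water intensities. A sketch of the idea, with made-up intensity figures (the values in the real dataset differ, and the function name is ours):

```typescript
// Weight each generation type's water intensity (l/kWh) by its share of
// the grid mix for the period. All numbers below are illustrative only.
const waterIntensity: Record<string, number> = {
  coal: 1.9,    // l/kWh -- placeholder value
  nuclear: 2.5, // placeholder
  hydro: 17.0,  // placeholder
  wind: 0.0,    // placeholder
};

function gridWaterIntensity(mix: Record<string, number>): number {
  // `mix` maps each generation type to its fraction of total energy
  // for the period (fractions sum to 1). Unknown sources count as 0.
  return Object.entries(mix).reduce(
    (total, [source, share]) => total + share * (waterIntensity[source] ?? 0),
    0
  );
}

// 50% coal, 50% wind -> 0.5 * 1.9 + 0.5 * 0 = 0.95 l/kWh
console.log(gridWaterIntensity({ coal: 0.5, wind: 0.5 })); // 0.95
```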

The biggest issue, which this plugin does not address, is assigning initial threshold values for ratings, as data is either not available or context-specific.

We propose that these thresholds could be set at default values and adjusted over time based on empirical evidence.

Accomplishments

None of the team had used the Impact Framework and we welcomed the opportunity to progress our understanding in an immersive environment.

It was clear the development of reliable data models was crucial to our ability to design impactful plugins.

We spent a significant amount of time carrying out research and data analysis to derive a data model and data set that reflects water consumption within datacentres.

The establishment of a measurable relationship between energy consumption and water consumption during the software development lifecycle was an important breakthrough that enabled the development of the plugins.

Overcoming the challenge of translating theoretical data models into a working solution that enhances the Impact Framework and allows users to visualise the impact of software applications on water consumption was a major achievement for us.

We feel this addition to the framework will serve to enhance the case for the adoption of sustainable practices.

Learnings

Although all team members had taken the GSF/LF green software course, none had any practical experience with SCI or the Impact Framework and this hackathon has helped them gain real practical experience of the concepts and product. As one of our team members said -- "I've started thinking about carbon every time I write code."

Because the team had chosen a beyond carbon target we ended up thinking about broader sustainability topics, such as the impact of excessive water use on different countries around the world. Addressing the use of water during power generation highlighted some of the complexities of attempting to make sustainable choices -- for instance, all forms of power generation use some water, so no matter how low-carbon the grid, you can only reduce water consumption by reducing demand.

Finally, we learned the importance of good data -- much of it has yet to be made available.

What's next?

Our solution represents one way of implementing a scoring mechanism for the Impact Framework. The RAG plugin could be extended or modularised to produce scores for other types of sustainability targets. It could also become more of an exhaust plugin, producing data that can be directly consumed by third-party dashboard or visualisation tools.

The problem of setting appropriate rating thresholds for what could be a wide variety of loads and architectures seems difficult to solve -- we have sidestepped this by allowing user input that could be refined. It's possible that data produced by IF could end up being analysed to produce a more dynamic and realistic mechanism for deciding what a sustainable score is, or alternatively future designers could decide to code to sustainability NFRs, e.g. no user request should ever create more than 1g of CO2eq or consume more than 5ml of water.

Finally, we would like to progress with our SWI plugins to include embedded water use -- both in the creation of datacentres and even the power generators themselves as only through thorough measurement can the true impact of software be highlighted.

GCP Importer

Prize category

Best Plugin

Overview

Story

As a user of Google Cloud Platform, I want to be able to retrieve usage metrics for my application. Right now, IF only enables me to do this for Azure VMs.

Rationale

We have an Azure importer implemented in IF, but no such model exists for Google Cloud Platform. This will address the current limitation of IF having an Azure-only importer and enable GCP users to retrieve observations of GCP resources via Google Monitor (or similar APIs) and pipeline that utilisation data into other IF models, as suggested here (#30).

Scope

This should be a model plugin that conforms to the IF model plugin interface.

Implementation guidelines

This model MUST conform to the IF model plugin interface
The model MUST return metadata about the application and usage data (e.g. CPU utilisation, RAM used, etc.) similar to the Azure importer
The model MUST return data in the expected units (kWh, seconds, GB...)
The model MUST accept duration, in units of seconds, as input
The model SHOULD be written in TypeScript
The model MUST have unit tests that demonstrate correct execution
The model MUST have documentation in the form of a README that explains how to use it
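A minimal sketch of what conforming to the IF model plugin interface could look like. The interface shape here is paraphrased from IF conventions rather than copied from the spec, and the hard-coded metric values stand in for real Google Monitor responses:

```typescript
// Hypothetical minimal importer following an execute-style plugin
// interface: take an array of input observations, return them enriched.
type PluginParams = Record<string, unknown>;

interface ModelPlugin {
  execute(inputs: PluginParams[]): PluginParams[];
}

const GcpImporter: ModelPlugin = {
  execute: (inputs) =>
    inputs.map((input) => ({
      ...input,
      // In a real implementation these would come from Google Monitor;
      // hard-coded here purely for illustration.
      "cpu/utilization": 12.5,
      "memory/utilization": 40.2,
    })),
};

const out = GcpImporter.execute([
  { timestamp: "2024-03-18T00:00:00Z", duration: 3600 },
]);
console.log(out[0]["cpu/utilization"]); // 12.5
```

Unit tests would then assert that each input observation comes back with the expected usage fields attached, as the guidelines above require.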

Questions to be answered

No

Have you got a project team yet?

I am working on this project alone

Project team

No response

Terms of Participation

Project Submission

Summary

As a user of Google Cloud Platform, I want to be able to retrieve usage metrics for my application. The GCP Importer plugin enables GCP users to retrieve observations of GCP resources via the Google Monitor APIs and automatically populate a manifest with usage metrics, which can then be passed along a plugin pipeline to calculate energy and carbon impacts.

Problems

To enable GCP users to retrieve observations of GCP resources, the following problems have been addressed:

  1. Authenticating with the GCP APIs
  2. Retrieving the monitoring metrics (e.g. CPU and memory usage) of Compute Engine VMs from all regions of a GCP project
  3. Populating usage metrics that can be passed along to other IF plugins

Application

  • Method: provides one method, GcpImporter, to retrieve CPU utilization and memory data from GCP; the method is of the execute type.
  • Config: accepts a GCP project ID. The GCP importer plugin lists usage data for all Compute Engine VMs of that GCP project.
  • Inputs: accepts timestamp and duration as input; timestamp is the start time of the usage data and duration is the number of seconds the usage data covers.
  • Validation: the GCP importer plugin validates the config and input data; a ConfigValidationError is raised if validation fails.
  • GCP APIs: the first version of the GCP plugin uses the Google Compute API and the Google Monitoring API. The Google Compute API retrieves all Compute Engine VMs of a GCP project along with instance details such as instance name and machine type; the Google Monitoring API retrieves the CPU and memory utilization of the VMs.
  • Enriched outputs: the GCP plugin populates the manifest with usage metrics that can be passed to other plugins, e.g. CCF.
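Put together, a manifest entry for the plugin could look roughly like this. The field names follow the description above, but the exact manifest schema belongs to the IF and the package path and project ID are placeholders, so treat this fragment as illustrative:

```yaml
# Illustrative manifest fragment for the GCP importer pipeline.
initialize:
  plugins:
    gcp-importer:
      method: GcpImporter
      path: '@grnsft/if-unofficial-plugins'  # package path, assumed
      global-config:
        gcp-project-id: my-gcp-project       # placeholder project ID
tree:
  children:
    child:
      pipeline:
        - gcp-importer
      inputs:
        - timestamp: '2024-03-18T00:00:00Z'  # start of the usage window
          duration: 3600                     # seconds the data covers
```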

Prize category

Best Plugin

Judging criteria

Overall Impact

GCP is one of the world's top three cloud providers; the GCP importer plugin enables GCP users to use the Impact Framework.

Opportunity

The GCP importer is easy to use for systems hosted in Google Compute Engine VMs, and the plugin can easily be extended to support more GCP services such as App Engine and Cloud Run.

Modular

The plugin is built on the architecture of the Azure importer, and it works well with other plugins such as CCF.

Video

https://youtu.be/q4mPejLqj-A

Artefacts

https://github.com/tianbiao/if-unofficial-plugins/tree/gcp-importer

Usage

https://github.com/tianbiao/if-unofficial-plugins/blob/gcp-importer/src/lib/gcp-importer/README.md

Process

  • Ideation: as a GCP user, I wanted to build a plugin that lets me use the Impact Framework.
  • Research: understand the current unofficial plugins and follow the design of the Azure Importer plugin, since ours is a similar plugin for a different cloud provider.
  • Code

Inspiration

  1. I have been a GCP user for 4 years
  2. I'd like to understand and contribute to green software open source tools

Challenges

  1. Choosing the right Google monitoring metrics when retrieving CPU and memory utilization
  2. GCP provides many compute services (e.g. Compute Engine, App Engine and Kubernetes Engine), and it was hard to implement support for them all in such a short hack time. I therefore started by building the plugin around Compute Engine usage metrics.

Accomplishments

The GCP importer plugin returns enough detail in the usage data for carbon calculations.

Learnings

  • Understanding IF and its plugins
  • How to build IF plugins

What's next?

  1. Introduce my teams that use GCP to IF and this plugin for calculating carbon
  2. Extend the current plugin to support more GCP resources

Azure importer Extension Plugin

Type of project

Best Plugin

Overview

The Azure Importer is a plugin that aims to provide a capability to calculate the Software Carbon Intensity (SCI) of workloads deployed on Azure services, using the Impact Framework. The project plans to extend the existing proof of concept (POC) of the Impact Framework for Azure VMs, which can be found here: https://if.greensoftware.foundation/tutorials/how-to-use-azure-vm

Based on our analysis, we have observed that workloads running on any compute, storage or network platform draw energy from the underlying host. This energy can be measured through metrics like CPU utilization, memory utilization, etc. By measuring these metrics over a fixed time period and passing them through the IF computational pipeline we can calculate the SCI score for these workloads on Azure.

The plugin will use the Azure Monitor API to collect observations from Azure resources and feed them into the Impact Framework computation pipeline. The project will also support batch API and multiple metrics for the SCI calculation.

The importer will take the subscriptionID and resource group name as input and run through all the Azure services deployed in that specific resource group. For the hackathon, we aim to cover Azure Virtual Machines, Virtual Machine Scale Sets, Azure SQL and Azure Kubernetes Service (as a stretch goal).

From an Impact Framework perspective this is a plugin, but the plugin itself can be considered a framework for extending to more Azure services in future.

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@srini1978 @yelghali @vineeth123-png

Terms of Participation


Project Submission

Summary

The Impact Framework repository today has an Azure importer available as an unofficial plugin, which provides the capability to calculate an SCI score for a single virtual machine. In this hack we have extended the Azure importer to calculate SCI emission scores for all the resource types present in an Azure subscription, e.g. Azure SQL, Virtual Machine Scale Sets, Azure Functions, etc.

Problem

In most applications developed for Azure, the resource group and/or the subscription serve as the bounded context, i.e. all the components of the application are deployed entirely in a single resource group or spread across multiple resource groups within the same subscription.

Hence, if we want to automate the calculation of software emissions for an entire application built on Azure with the help of the Impact Framework, we should be able to provide just the subscription ID and have the pipeline calculate the end result. We cannot achieve this with the code in the unofficial-plugins repository today, because the released version (7.0.0) of the Azure SDK for TypeScript uses an older version of the Azure Monitor API (the 2018 version), which does not support retrieving observations for multiple resources in a subscription.

This is a known bug in the Azure SDK for TypeScript, captured in this GitHub issue: Azure/azure-sdk-for-js#29013

We addressed this issue as part of the hack: we worked with the Azure core SDK team to upgrade the API version being used, and a new version of the Azure SDK for TypeScript has been released (8.0.0-beta.5): https://www.npmjs.com/package/@azure/arm-monitor/v/8.0.0-beta.5

Application

Azure importer is a plugin on top of Impact Framework that will be pulling telemetry or observations from Azure services and passing it through the IF computation pipeline for calculation of SCI score. The plugin will use the Azure Monitor API to collect observations from Azure resources and feed them into the Impact Framework computation pipeline.

The following diagram illustrates the high-level architecture of the Azure importer plugin extension.

image

High level Design

The Azure importer plugin consists of the following software components:

  • A configuration module that allows the user to specify the Azure resources, metrics, and time range for collecting observations. In our case this is the manifest file.
  • An API client module that interacts with the Azure Monitor API to retrieve the metric values for the specified resources and time range.
  • The Impact Framework computation pipeline, which takes the output from the Azure importer and provides the operational and embodied emissions. To do this, the Impact Framework has multiple plugins added to its computational pipeline, such as Teads Curve, Cloud Metadata, Cloud Carbon Footprint, SCI-E, SCI-O and SCI. How the plugins are stitched together to produce the output is detailed in the Process section below.

Prize category

Best Plugin

Judging Criteria

  1. Since this is an existing plugin that we are extending, we can call this component the "Azure Importer Extension". The singular aim of building this extension is to enable calculation of software emissions from Azure cloud at scale. By running the importer with just a subscriptionID as input, a multitude of scenarios can be enabled for developers, architects and DevOps professionals.

  2. This opens the door to measuring emissions for all types of software on Azure cloud: batch jobs, serverless functions, managed services, Kubernetes nodes, storage queues and Synapse extensions, front-end apps, middle-tier applications and distributed databases can all be included in the Azure importer pipeline.

  3. Large enterprise customers who run all their systems on Azure cloud can now benefit by providing a single parameter and having end-to-end emissions calculated.

  4. The extension may also give an impetus to other hyperscalers to do similar implementations: GCP and AWS can use this as a reference implementation for their own cloud environments.

Video

A link to your video submission on YouTube

Artefacts

Link to the code or content

Usage

https://github.com/srini1978/if-unofficial-plugins-AzureHackers/blob/main/src/lib/azure-importer/README.md

Process

Manifest File design

The Impact Framework is a pipeline of model plugins that are chained to produce an output. This section describes the plugins that were leveraged, the input and output parameters used for each, and the processing done in each plugin.

The pipeline uses the following plugins, in order:

1. Azure Importer (extended)
   • Inputs: subscriptionID, metric namespace, cpu/utilization
   • Process: a call is made to the management API to retrieve all resource types (metric namespaces) in the subscription, e.g. microsoft.compute/virtualmachines, microsoft.storage/storageaccounts, Microsoft.Sql/managedInstances. For each resource type a call is made to the Azure Monitor REST API, which takes the subscription ID along with region name and metric namespace and returns all metrics associated with all resources of that type.
   • Outputs: the list of metrics for all resource types present in the subscription

2. Cloud Metadata
   • Inputs: cloud/vendor, cloud/instance-type
   • Process: Cloud Metadata is a standard IF plugin. It gives the thermal design power and physical processor used in the underlying Azure instance, based on the cloud/vendor and cloud/instance-type values.
   • Outputs: physical-processor, vcpus-allocated, vcpus-total, cpu/thermal-design-power

3. Teads Curve
   • Inputs: vcpus-allocated and vcpus-total from Cloud Metadata, CPU utilization from the Azure importer, cpu/thermal-design-power from Cloud Metadata
   • Process: the Teads model gives CPU energy in kWh given the TDP value of the processor. Getting the TDP value requires the name of the underlying processor, which comes from Cloud Metadata; if that is unavailable we use a default of cpu/thermal-design-power: 100. If vcpus-allocated and vcpus-total are available, they are used to scale the CPU energy usage; if not, we assume the entire processor is being used. For example, if only 1 of 64 available vCPUs is allocated, we scale the processor TDP by 1/64.
   • Outputs: cpu/energy in kWh

4. CCF
   • Inputs: duration from the input params; cpu/utilization, cloud/vendor and cloud/instance from the Azure importer
   • Process: CCF is a community plugin backed by a self-contained dataset. Given the input parameters, it returns the embodied emissions for the cloud/vendor and cloud/instance. Energy is also returned, but we ignore it because we take the energy value from the Teads Curve model.
   • Outputs: carbon-embodied

5. SCI-M
   • Inputs: vcpus-allocated and vcpus-total from Cloud Metadata, carbon-embodied from CCF, device expected lifespan (a default value of 3 years can be provided)
   • Outputs: adjusted carbon-embodied

6. SCI-E
   • Inputs: cpu/energy in kWh from Teads Curve
   • Process: SCI-E converts energy-cpu to energy.
   • Outputs: energy in kWh

7. WattTime
   • Inputs: the region/location where the workload runs
   • Outputs: grid/carbon-intensity

8. SCI-O
   • Inputs: energy in kWh from SCI-E, grid/carbon-intensity from WattTime
   • Process: SCI-O must always be preceded by SCI-E.
   • Outputs: carbon-operational
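The vCPU scaling arithmetic in the Teads Curve step can be sketched in a few lines. This is an illustration of the scaling rule described above, not the Teads Curve plugin itself, and the function name is ours:

```typescript
// Scale processor TDP by the share of vCPUs allocated, as the Teads
// Curve step describes. If allocation data is missing, assume the
// whole processor is in use.
function scaledTdp(
  tdpWatts: number,
  vcpusAllocated?: number,
  vcpusTotal?: number
): number {
  if (!vcpusAllocated || !vcpusTotal) return tdpWatts;
  return tdpWatts * (vcpusAllocated / vcpusTotal);
}

// 1 of 64 vCPUs allocated on a 100 W TDP processor -> 100/64 W
console.log(scaledTdp(100, 1, 64)); // 1.5625
console.log(scaledTdp(100));        // 100 (default: entire processor)
```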

Inspiration

Tell us what inspired you to develop the solution
Max 150 words

Challenges

Share the challenges you ran into

Accomplishments

  1. With this hack we have enabled anyone who wants to measure emissions from their Azure solutions. The existing version of the Azure importer in the repo was affected by a bug in the Azure SDK for TypeScript and how metrics could be retrieved from the Azure Monitor API.

To give some more detail: the Azure Monitor API can give metrics output either for a single resource or for an entire subscription, but only the latest version of the API allows querying at subscription level. The TypeScript SDK's code had never been updated to the latest version and hence pointed to the 2018 version of the API instead of the 2023 version.

https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/rest-api-walkthrough?tabs=portal

Because of this, the API rejected our calls. We worked with the Azure TypeScript SDK team to point to the right version of the API. Bug fixed: Azure/azure-sdk-for-js#29013

  2. I also provided mentorship to a team in Microsoft who works on data pipelines. They were inspired by what I was doing and submitted a hack themselves: #87

Learnings

  1. Learnt how to chain the different plugins as part of the Impact Framework computational pipeline.
  2. TypeScript development and node package usage were new learnings.
  3. Learnt how to design a manifest file.

What's next?

How will your solution contribute long term to the Impact Framework eco-system
Max 200 words

puiley - prod push pull

Type of project

Enhancing the Impact Framework

Overview

Groundbreaking project that aims to transition our society from climate indifference to active environmental stewardship. The project's core intention is to cultivate a culture of climate consciousness and responsibility, empowering individuals, communities, and corporations to make informed, environmentally-friendly decisions.

Central is the use of technology as a catalyst for this transformation. Recognizing the influential role of technology in shaping behaviors and perspectives, Focusing on developing innovative tech solutions that raise awareness about climate change, encourage sustainable practices, and drive policy changes. From educational apps to data-driven decision-making tools, Harnessing the power of technology to tackle the pressing issue of climate change.

This is a movement towards a more sustainable future: creating a global community that is informed, proactive, and committed to the environment. We envision a world where every decision, whether made by an individual, a community, or a corporation, takes into account its impact on the environment. Ultimately, the project is dedicated to making climate consciousness a universal way of life.

Questions to be answered

None

Have you got a project team yet?

Yes, and we are still open to extra members

Project team

No response

Terms of Participation

[GSF Example Project]

Prize category

Beyond Carbon

Overview

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque congue quam tellus, sed mattis leo suscipit in. In semper malesuada nulla sit amet cursus. Vivamus ac orci id elit varius feugiat. Integer viverra magna dui, non accumsan augue congue sagittis. Quisque sed dolor vitae magna maximus accumsan. Fusce sed urna a lacus rutrum tristique quis ut nulla. Lorem ipsum dolor sit amet, consectetur adipiscing elit.

Questions to be answered

None

Have you got a project team yet?

I am working on this project alone

Project team

@russelltrow

Terms of Participation


Project Submission

Summary

A brief overview of your project
100 words max

Problem

Describe the problems the solution addresses
200 words max

Application

Describe what the solution actually does
200 words max

Prize category

Specify which prize category you are entering

Judging Criteria

Explain how what you built meets the judging criteria for your prize category
e.g. For Beyond Carbon - "Overall Impact", "Educational Value", "Synthesizing"
Max 200 words

Video

A link to your video submission on YouTube

Artefacts

Link to the code or content

Usage

Link to usage instructions if applicable

Process

Describe how you developed the solution
Max 150 words

Inspiration

Tell us what inspired you to develop the solution
Max 150 words

Challenges

Share the challenges you ran into

Accomplishments

Share what you are most proud of
Max 150 words

Learnings

Share what you learned while hacking
Max 150 words

What's next?

How will your solution contribute long term to the Impact Framework eco-system
Max 200 words

Create a DataDog plugin to collect resource usage metrics

Prize category

Best Plugin

Overview

DataDog is a monitoring service, particularly useful for cloud applications, that collects various metrics related to running and profiling applications. In particular, a user can collect resource usage metrics, such as processor, memory, and storage usage that can be used to create an energy usage profile of an application. Let us create a plugin which can capture these metrics from DataDog and use them in the Impact Framework to generate an emissions profile.
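The step from resource-usage metrics to an energy profile can be sketched as below. The linear power model and the 100 W TDP figure are assumptions for illustration; a real plugin would fetch the samples from the DataDog API and use a calibrated power curve.

```typescript
// Illustrative sketch: turning resource-usage samples (as a DataDog-style
// importer plugin might fetch them) into a rough energy estimate.

interface UsageSample {
  durationSeconds: number;
  cpuUtilization: number; // 0..1
}

function estimateEnergyKwh(samples: UsageSample[], tdpWatts = 100): number {
  return samples.reduce((kwh, s) => {
    const watts = tdpWatts * s.cpuUtilization;          // naive linear model
    return kwh + (watts * s.durationSeconds) / 3_600_000; // watt-seconds -> kWh
  }, 0);
}
```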

Questions to be answered

No response

Have you got a project team yet?

Yes, and we are still open to extra members

Project team

@bvickers7, @chargao, @asibille-indeed, @GoethelTori, and @atilanog-indeed.

Terms of Participation

API Carbon Score

Prize category

Best Content

Overview

API Carbon Score with baseline footprint

Project team

@Dhanushguptha19 @gmlavanya @Harsha-kommuru

Terms of Participation

Project Submission

Summary

API Carbon Score is a tool that reports a carbon score with a baseline footprint, enabling consumers to optimize invocation and adoption. We combine the Impact Framework with other tools and libraries that give us metrics for a real-time carbon score for each API invocation.

Problem

Billions of API calls are made each day, and they contribute significantly to the global carbon footprint. Some of these emissions are unnecessary, and at scale this amounts to a substantial volume of avoidable carbon emissions every day. As developers, we have no insight into how much carbon is emitted at the API level. The problem at hand is to give developers these valuable insights so they can make informed decisions to optimize their invocations.

Application

We have a client/server architecture with an API gateway in between that routes requests. Whenever a call is made, the gateway routes it to the server, where a process ID identifies the serving process. We calculate the footprint at the JVM level using power-measurement tools such as JoularJX. On top of this there is embodied carbon at the gateway and a network footprint determined by the request and response sizes. We sum all of these with the help of the SCI plugin and return this baseline carbon score to the end user.
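The summation described above can be sketched as follows. The shape mirrors the Software Carbon Intensity idea of operational emissions (energy times grid intensity) plus embodied carbon; the field names and input values here are illustrative, not the team's actual code.

```typescript
// Sketch of the baseline carbon-score summation for one API call.
// All inputs are illustrative assumptions.

interface ApiCallFootprint {
  energyKwh: number;     // e.g. measured at the JVM via JoularJX
  gridIntensity: number; // gCO2e per kWh for the serving region
  embodiedGrams: number; // amortized embodied carbon (server + gateway)
  networkGrams: number;  // transfer footprint from request/response sizes
}

function carbonScoreGrams(f: ApiCallFootprint): number {
  const operational = f.energyKwh * f.gridIntensity; // gCO2e
  return operational + f.embodiedGrams + f.networkGrams;
}
```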

Prize category

Best Content

Judging Criteria

Overall Impact 👩🏽‍⚖️
The potential upside is that developers become more conscious when invoking APIs, as each API carries a baseline carbon footprint. These conscious efforts will not only help us control the amount of carbon emitted daily but will also change how we approach IT-based solutions for a more sustainable environment.

Clarity
We have prepared a detailed technical documentation on this topic as well as the demo video for better understanding.

Innovation
We did some research and could not find existing ideas that achieve what we set out to do. We also referred to multiple tools and libraries in this domain and tried to integrate them into our project.

Video

API Carbon Score

Content

API Carbon Score Technical Documentation

Process

We built a sample Spring Boot application with some test APIs. We used JoularJX for JVM-level insights, combining it with IF to calculate the carbon impact. These metrics are then returned in the response payload to the user as the carbon score.

Inspiration

As part of our day-to-day SDLC work we use Postman/SoapUI to invoke APIs. Many of those invocations are unnecessary: for example, if a session is valid for 5 minutes, there is no need to invoke the API again within those 5 minutes. We thought of a solution that gives developers insights into these metrics and so contributes to greener IT.

Challenges

Building a team and then collaborating with so many ideas in play. To arrive at this idea we also spent a lot of time thinking before actually implementing the solution. Since not much research has been done in this area, we faced challenges in coming up with an innovative approach to contribute to the hackathon. We were also busy with customer commitments, but all of us worked hard to put this content in place.

Accomplishments

We accomplished a lot, going from having no idea to submitting 5 ideas and then implementing a few. We worked as a team and came up with an innovative solution.

Learnings

We learnt a lot about IF and green IT.

What's next?

The goal is to provide insights and relevant metrics to the developers and people actually using this tool to raise awareness. We want to add real-time metrics at the Gateway level and enhance the way we calculate the energy at the server level.

Starter Pack for Startups

Type of project

Building a plug-in for Impact Framework

Overview

A simple-as-possible plugin that gets startups an 80/20 view of the carbon impact of their software without writing any code.

How it could work:

  1. Takes standard inputs via a GUI/website
  2. Generates an impl file
  3. Runs the IF model
  4. Reads the ompl file
  5. Displays the outputs to the user

How this could be extended:
a) Graphical displays of information
b) Benchmarking versus other startups
c) Analyses sensitivity to inputs
d) Suggests & prioritises options to reduce emissions
e) Suggests options for offsetting

Questions to be answered

Key questions:

  • Key standard inputs to ask for
  • Best way of interfacing with the user
  • Decisions over standard settings (choice of models etc.)

Have you got a project team yet?

No, we would like your help to find team-mates

Project team

No response

Terms of Participation

WaterWise

Prize category

Beyond Carbon

Overview

Our goal with this project is to examine the impact of cloud computing (datacenters) on water. We plan to build a WUE and/or human-impact estimation based on the suggestion by @jawache here: #82. As a stretch goal, we would like to use the plugin to compare cloud deployments across regions and contrast the results with the decisions you might make if considering only carbon impact or other factors.
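The core estimate is simple arithmetic: datacenter water use is commonly expressed as WUE in litres per kWh of IT energy, so water consumed is energy times WUE. The WUE values below are made-up numbers purely to illustrate a regional comparison.

```typescript
// Back-of-envelope sketch of the estimate such a plugin could produce.
// WUE (water usage effectiveness) is litres of water per kWh of IT energy.

function waterUsageLitres(energyKwh: number, wueLitresPerKwh: number): number {
  return energyKwh * wueLitresPerKwh;
}

// Comparing two candidate regions for the same 120 kWh workload,
// with assumed (illustrative) WUE values:
const regionA = waterUsageLitres(120, 1.8); // assumed WUE 1.8 L/kWh
const regionB = waterUsageLitres(120, 0.4); // assumed WUE 0.4 L/kWh
```

A region that looks best on carbon intensity alone may fare worse once water is included, which is the comparison the stretch goal targets.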

Questions to be answered

TBD

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@ridhee1gupta

Terms of Participation

Importer for github actions

Prize category

Best Plugin

Overview

The GitHub Actions importer plugin will be designed to retrieve data relevant to CI/CD workflows. Data can be acquired either after each individual workflow run completes or in bulk, covering multiple workflows at once.

The aim is to define the spectrum of measurements and parameters needed for seamless integration with the other plugins in an Impact Framework pipeline.
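As a sketch of what the importer could emit: the standard GitHub REST endpoint for listing workflow runs (`GET /repos/{owner}/{repo}/actions/runs`) returns timestamps per run, from which a timestamp/duration observation can be derived. The field mapping below is an assumption about what downstream IF plugins would need, not the plugin's actual output.

```typescript
// Convert a GitHub Actions workflow run into an IF-style observation.
// run_started_at / updated_at are fields returned by the GitHub REST API.

interface WorkflowRun {
  run_started_at: string; // ISO 8601 timestamp
  updated_at: string;     // ISO 8601 timestamp
}

function toObservation(run: WorkflowRun) {
  const start = Date.parse(run.run_started_at);
  const end = Date.parse(run.updated_at);
  return {
    timestamp: run.run_started_at,
    duration: (end - start) / 1000, // seconds, as IF inputs expect
  };
}
```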

Questions to be answered

No response

Have you got a project team yet?

I am working on this project alone

Project team

No response

Terms of Participation

A plugin which can estimate carbon released by web traffic

Prize category

Best Contribution

Overview

We propose to build a visual plugin that shows web users the emissions generated while they use a web page.

  • On the one hand, it provides users with a clear and quantifiable understanding of carbon emissions;
  • On the other hand, we hope to feed optimization guidance and data into the ENet model during the development process;
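A page-level estimate of this kind is typically derived from bytes transferred. The energy-per-gigabyte factor and grid intensity below are commonly cited average figures used purely for illustration; the actual plugin's model (ENet) may differ.

```typescript
// Rough sketch of a bytes-transferred -> emissions estimate.
// 0.81 kWh/GB and 442 gCO2e/kWh are illustrative average factors.

function pageEmissionsGrams(
  bytesTransferred: number,
  kwhPerGb = 0.81,
  gridIntensity = 442,
): number {
  const gb = bytesTransferred / 1e9;
  return gb * kwhPerGb * gridIntensity; // gCO2e for this page load
}
```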

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

https://github.com/ziJinChampion
https://github.com/lyxHanHan

Terms of Participation

Project Submission

Summary

A Google plug-in that contains the following features:

  • Utilizing the ENet model;
  • Carbon emissions generated by network resource transmission;
  • Real-time acquisition, duration: 1 hour;

Problem

It can provide users with a clear and quantifiable understanding of carbon emissions

Application

More intuitively show users the carbon emissions generated by actual network operations, enhancing environmental awareness

Prize category

Best Contribution

Judging Criteria

  • Overall Impact: establishing a connection between online behavior and carbon emissions lets users perceive the link clearly, and consciousness is the driver of action;
  • Innovation and Creativity: real-time acquisition of network transmission data, coupled with a Google plugin for the first time, makes it convenient for users;
  • User Experience: intuitive user interface and simple operation;

Video

Youtube link

Artefacts

GitHub address

Usage

Process

  • Build a project
  • Choose the appropriate mode
  • Obtain network transmission data
  • Building Google plugins

Inspiration

Energy conservation and emission reduction are about protecting our own homes. We are willing to do what we can on green software, hoping that a single spark can start a prairie fire.

Challenges

  • Obtain network transmission data;

Accomplishments

  • The first work to focus on green software;
  • The first Google plugin;

Learnings

  • Obtain network transmission data;
  • The development of Google plugins;

What's next?

Improve the accuracy of data obtained through network transmission

My team name

Summary

A brief overview of your project, 100 words max

Problem

Describe the problems the solution addresses, 200 words max

Application

Describe what the solution actually does, 200 words max

Prize category

Specific prize category

Judging Criteria

Explain how what you built meets the judging criteria for your prize category. The below are the criteria for the Beyond Carbon prize; the criteria for your prize might be different, see the link above for details.

Overall Impact

100 words max

Educational Value

100 words max

Synthesizing

100 words max

| Field | Description | Word limit |
| --- | --- | --- |
| Video | A link to your video submission on YouTube | - |
| Artefacts | Link to the code or content | - |
| Usage | Link to usage instructions if applicable | - |
| Process | Describe how you developed the solution | 150 |
| Inspiration | Tell us what inspired you to develop the solution | 150 |
| Challenges | Share the challenges you ran into | 150 |
| Accomplishments | Share what you are most proud of | 150 |
| Learnings | Share what you learned while hacking | 150 |
| What's next? | How will your solution contribute long term to the Impact Framework eco-system | 200 |


Terms of Participation

The Green Mile: Streamlining Impact Framework Accessibility (Team: Green HighTech #Innovators)

https://github.com/Green-Software-Foundation/hack/wiki/Submit-your-project-for-judging

Members: (Lead) @WBurggraaf, @teun2408, @Thomcdrom, @kees2125

Project Submission

Summary | A brief overview of your project | 100

The project tackles Impact Framework plugin challenges by re-engineering it into an accessible API for simpler manifest file creation and plugin development.

It introduces four innovative sustainability plugins (vehicle-embodied carbon, weather impact prediction, EV charging emissions, and package delivery emissions) to boost sustainability efforts.

Supported by a fictitious EV fleet sustainability case study, the project's clear README files and comprehensive analyses extend beyond carbon emissions to educate users.

Additionally, a simulation API streamlines Impact Framework use and spurs innovation.

Concrete thought experiments inspire scientists to explore new uses of the Impact Framework
by offering context, innovative modeling techniques, simulation basics, and simplified coding extensions through tangible examples.

Problems | Describe the problems the solution addresses | 200

Problems the Project Addresses

The Impact Framework, while a powerful sustainability analysis tool, suffers from inherent complexity. This complexity manifests in the creation of manifest files, hindering adoption and potentially leading to inaccurate analysis. Additionally, while plugins offer flexibility, their development process can remain challenging. This complexity, along with a conventional focus on carbon emissions, obscures the true potential of the Impact Framework for new users.

How the Project Solves These Problems

Our project tackles these challenges head-on. By reimagining the Impact Framework as an accessible API, we radically simplify interaction for users working with manifest files. This lowers the barrier to entry and reduces the chances of errors. We also provide extensive documentation and a fabricated EV fleet sustainability case study to enhance understanding. Additionally, our plugins expand the focus to include embodied carbon, water usage, and other key sustainability metrics. Lastly, we've added a specific tool – a simulation API – to further empower users. These combined efforts make the Impact Framework significantly more accessible, fostering a thriving and innovative development community.

Application | Describe what the solution actually does | 200

Our solution empowers delivery companies to optimize their EV fleet operations for maximum environmental benefit. It achieves this through several key components:

Holistic Impact Assessment: Custom plugins for the Green Software Foundation's Impact Framework go beyond carbon emissions. They calculate embodied carbon, water usage, and waste generation throughout a vehicle's lifecycle, providing a comprehensive view of the fleet's environmental footprint.

Weather Impact Predictions: The solution analyzes how various weather conditions influence EV range and battery health. This enables proactive route adjustments and charging strategies for weather-optimized performance.

Grid-Aware Charging Optimization: Fleet managers can schedule charging to coincide with peak renewable energy generation periods on the grid. This minimizes the emissions associated with charging and potentially reduces costs.

Data-Driven Route Analysis: The solution analyzes planned routes, factoring in real-time traffic, weather, and vehicle performance data. This enables the identification of the most energy-efficient and emissions-conscious routes.

Proactive Maintenance Insights: Battery health projections allow for predictive maintenance scheduling. This extends the vehicle's lifespan, reduces unexpected downtime, and minimizes the long-term environmental impact.

Overall, our solution provides comprehensive tools and insights for delivery companies to make informed decisions that prioritize environmental sustainability in their EV fleet operations.

Prize category | Specify which prize category you are entering | -

✨ Best Contribution

Judging criteria | Explain how what you built meets the judging criteria for your prize category | 200

✨ Best Contribution

Why our project fits perfectly

Overall Impact 👩🏽‍⚖️

API-Driven Simplicity: Turning the Impact Framework into an API simplifies interaction, benefiting users by reducing complexity and minimizing errors in manifest file operations.

Focus on Usability: Providing a fabricated case study PDF, clear README files, and diverse plugins emphasizes user-friendliness, empowering new users to utilize the Impact Framework beyond carbon emission analyses.

Supplemental Tools: A simulation API application extends the core framework's functionality, making it more adaptable and approachable across various use cases.

Innovation and Creativity

Transformation into an API: The innovative shift to an API-driven model streamlines interaction, enhancing usability and reducing errors.

Diverse Plugins and Tools: The inclusion of various plugins and a simulation API demonstrates creativity in extending the Impact Framework's capabilities and usability.

Alignment

Empowering Users: Through usability-focused initiatives like case studies and clear documentation, the project aligns with user needs and expectations, making the Impact Framework more accessible and powerful.

Encouraging Expansion: By providing tools and plugins, the project aligns with the goal of inspiring scientists and specialists to build upon the existing framework and contribute to its evolution.

User Experience

Simplified Interaction: The emphasis on API-driven simplicity and user-friendly resources like case studies and README files enhances the overall user experience with the Impact Framework.

Expanded Functionality: Additional plugins and tools contribute to a more versatile and adaptable user experience, accommodating various use cases and needs.

Video | A link to your video submission on YouTube | -

VIDEO

Artefacts | Link to the code or content | -

The main actor is the IF API <- Code Quality
Use the big manifest file example here to test it!:
IF-API: SWAG-UI <- API link (where you can run the JSON files linked beneath)

Using if api Swagger UI:

  • Locate the Endpoint: In your Swagger UI interface, find the section listing API endpoints. Expand the "POST /if" endpoint.
  • Parameters: This endpoint takes no specific query or path parameters. The primary input is provided through the request body.
  • Request Body:
    • Content-Type: Ensure it's set to "application/json". This tells the server you're sending JSON data.
    • Prepare Your JSON: The endpoint description ("Convert your YAML manifests to JSON...") indicates you'll need to provide correctly formatted JSON data, likely obtained from converting your YAML manifests.
    • Paste JSON into the Textbox: Swagger UI should provide a text area for the request body. Paste your prepared JSON data into this field.
  • Try it out: Click the "Try it out" button.

Examples you can use from the IF examples directory, or create your own -> https://jsonformatter.org/yaml-to-json:
MANIFESTS/SCI.JSON
MANIFESTS/BASIC.JSON

The initial step we took was to develop a fabricated case study, ensuring that both our team members and subsequently the jury would fully grasp the core narrative of our project.
Content: PDF case study<- Docs and Transparency

Plugin: vehicle-embodied-carbon
README <- Docs and Transparency
EXAMPLE JSON <- Use this in the API link
EXAMPLE YAML <- Docs and Transparency
PLUGIN SOURCE <- Code Quality

Plugin: weather-impact-prediction
README <- Docs and Transparency
EXAMPLE JSON<- Use this in the API link
EXAMPLE YAML <- Docs and Transparency
PLUGIN SOURCE <- Code Quality

Plugin: ev-charging-emissions
README <- Docs and Transparency
EXAMPLE JSON<- Use this in the API link
EXAMPLE YAML <- Docs and Transparency
PLUGIN SOURCE <- Code Quality

Plugin: package-delivery-emissions
README <- Docs and Transparency
EXAMPLE JSON<- Use this in the API link
EXAMPLE YAML <- Docs and Transparency
PLUGIN SOURCE <- Code Quality

Usage | Link to usage instructions if applicable | -

Use the base simulator to generate a big example manifest file, usable for this solution:
BASE-SIMULATOR-SWAG-UI

Using basesim api Swagger UI:

  1. Locate the Endpoint: Navigate to the Swagger UI interface for your API. Find the section where your endpoints are listed and expand the "GET /mondayAtGreenLogisticsTheExampleLtd/Simulate" endpoint.
  2. Parameters: This endpoint has no specific parameters listed. This means you can execute the simulation directly without providing additional input.
  3. Try it out: Click the "Try it out" button associated with the endpoint.

Base Simulator Source Code <- Code Quality

Execute: Press the "Execute" button. This will send the GET request to your API server.

Process | Describe how you developed the solution | 150

Our solution development process emphasized collaboration and iteration:

Ideation: We brainstormed potential plugins and features to enhance the Impact Framework's capabilities.

Planning: Using Azure DevOps scrum board, we structured tasks focused on individual plugins, simulators, API integration, and UI development.

Development: Prioritizing TypeScript ensured alignment with the Impact Framework's design. Pair programming and frequent discussions facilitated collaboration.

Adaptation: We remained flexible, shifting towards an API-driven architecture, resulting in additional simulator creation.

Alignment: Weekly meetings ensured progress, addressed challenges, and refined the solution for a cohesive outcome.

Inspiration | Tell us what inspired you to develop the solution | 150

Picture this: a whirlwind brainstorming session fueled by a shared passion for sustainability and a sprinkle of tech geekery! One of us throws out "logistics optimization," another mentions Picnic's EV grocery deliveries, and the Discord buzzes with the complex beauty of charging logistics and electronics. Suddenly, the intricate dance of routes, optimization, and a story about vehicle inspections and weather-dependent charging strategies takes center stage.

It's a eureka moment! We envision a solution built on the Impact Framework – custom plugins quantifying often-overlooked factors like embodied water usage and waste generation throughout a vehicle's lifecycle. We see the potential to predict weather-driven battery performance and align charging with renewable energy peaks.

The excitement is palpable as we imagine a world where delivery fleets aren't just about efficiency but a holistic commitment to minimizing their environmental footprint. This isn't just about building a tool – it's empowering change, making sustainable logistics tangible and actionable. We're ready to dive into the beautiful mess of code, APIs, and data-driven insights, fueled by the thrill of driving real impact!

Challenges | Share the challenges you ran into | 150

We hit some roadblocks! The existing plugin examples weren't the most intuitive, and our varied TypeScript backgrounds made the initial learning curve steeper. Thankfully, one teammate stepped up with a clear template, and that's when things started clicking into place.

Plus, being tech-focused, we had to shift our thinking from pure calculations to the scientific complexities of real-world data. Understanding the nuances of observations versus absolute measurements within the larger context – that was a mind-bender at times!

But hey, that's the fun of it, right? We tackled these challenges head-on, learning as a team and recognizing the inherent limitations of any model. In the end, we focused on what we could achieve, delivering a solution that pushes the boundaries of sustainable fleet analysis.

Accomplishments | Share what you are most proud of | 150

We've revolutionized the understanding and optimization of electric vehicles for a sustainable future by delving into Impact Framework data. Our analysis uncovers insights into embodied carbon, material waste, and EV battery composition. We've quantified carbon footprints, identified resource depletion hotspots, and studied intricate EV charging dynamics crucial for advancing green transportation.

To amplify the dataset's impact, we transformed the Impact Framework into a user-friendly API, simplifying interactions and empowering users with clear guides and diverse plugins. Our tools and simulation API applications offer adaptability and specialized functions.

Our focus on usability makes the Impact Framework accessible to scientists and specialists, democratizing knowledge and tools for sustainable innovation in transportation. We pave the way for a greener future by inspiring innovation in sustainable technology.

Learnings | Share what you learned while hacking | 150

This hackathon was (and still is) an incredible learning journey! Here are some key takeaways:

The Power of Collaboration: Diverse perspectives sparked our creativity, and teamwork got us through those tricky problem-solving moments. We learned the value of leaning on each other's strengths to create something truly impactful.

Impact Framework's Potential: Diving deep into the Impact Framework revealed its vast potential for quantifying environmental impacts far beyond carbon. There's so much room to explore and build upon this foundation.

Bridging the Gap: We had to grapple with turning scientific complexities into usable data models. This highlighted the importance of bridging the divide between scientific research and practical software tools for effective sustainability solutions.

Flexibility is Key: Our pivot towards an API-driven architecture reinforced that adaptability is a hacker's best friend. Embracing change allowed us to create a more robust and scalable solution.

Passion is Contagious: Our shared enthusiasm for sustainable tech-fueled our determination to build something meaningful. It taught us that passion is a powerful catalyst for innovation and change.

What's next? | How will your solution contribute long-term to the Impact Framework eco-system | 200

Our work has the potential to create a ripple effect within the Impact Framework ecosystem and beyond:

Pioneering a Holistic Approach: By introducing plugins that quantify embodied water usage and waste generation, we pave the way for a more comprehensive understanding of software's environmental footprint. This can inspire new metrics, plugins, and analyses focused on these often-overlooked impact areas.

Open-Source Empowerment: Prioritizing open-source development and detailed documentation lowers barriers to entry. We envision developers, researchers, and businesses alike utilizing and building upon our plugins and methodology. This fosters a collaborative community dedicated to expanding the Impact Framework's reach.

Driving Data-Driven Sustainability: Our solution demonstrates the power of data in optimizing EV fleet operations for sustainability. This can serve as a blueprint for other industries and use cases, encouraging the adoption of the Impact Framework as a decision-making tool for minimizing environmental impact.

Beyond Fleet Management: The modular design of our plugins lays the groundwork for adaptation to other transport sectors (aviation, shipping, etc.), potentially leading to industry-specific tools and optimizations.

Mainstreaming "Beyond Carbon" Thinking: By making complex concepts like embodied water and waste accessible, we aim to shift mindsets within the tech industry. We believe our work can nurture a collective understanding that a truly sustainable digital future demands consideration of the full spectrum of environmental impacts.

Ultimately, we envision a world where assessing and optimizing the environmental footprint of software is seamlessly integrated into development processes. Our solution strives to be a catalyst for this transformation, contributing to a more responsible and sustainable digital landscape.

Right_Sizing_Model

Prize category

Best Plugin

Overview

The right-sizing model for the Impact Engine Framework is designed to identify cloud computing instances that better align with the performance and capacity requirements of the customer's workload, with the goal of achieving the highest possible CPU and RAM utilization while minimising cost and maintaining performance. It takes input in YAML format and ultimately suggests more optimal instances based on the current cloud instance type, cloud vendor, current CPU utilization, target CPU utilization, and RAM requirements. Currently, the model primarily targets Azure and AWS virtual machines.
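The selection logic described above can be sketched as follows. The instance names and specs are made up for illustration; the real plugin reads them from cloud vendor instance catalogues.

```typescript
// Illustrative right-sizing sketch: pick the smallest catalogue instance
// that covers the workload's CPU need (scaled to a target utilization)
// and its RAM requirement.

interface Instance { name: string; vcpus: number; ramGb: number; }

function rightSize(
  current: { vcpus: number; cpuUtilization: number }, // utilization 0..1
  ramRequiredGb: number,
  targetUtilization: number,
  catalogue: Instance[],
): Instance | undefined {
  // vCPUs actually needed so the new instance runs at the target level.
  const neededVcpus = (current.vcpus * current.cpuUtilization) / targetUtilization;
  return catalogue
    .filter(i => i.vcpus >= neededVcpus && i.ramGb >= ramRequiredGb)
    .sort((a, b) => a.vcpus - b.vcpus)[0]; // smallest instance that fits
}
```

For example, an 8-vCPU machine idling at 20% utilization needs only 2 vCPUs of capacity at an 80% target, so a 2-vCPU instance with enough RAM would be recommended.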

Questions to be answered

No response

Have you got a project team yet?

Yes and we aren't recruiting

Project team

@jimbou
@GrayNekoBean
@mosalehio
@wangyiyuelva
@pazmiller

Terms of Participation
