PACTA

This repository contains code for the Paris Agreement Capital Transition Assessment (PACTA) project, which consists of an OpenAPI v3-based API and a Nuxt-based frontend.

Running

# First, run a credential service, which you'll need if you want to log in.
# Otherwise, you can manually create a token with genjwt and use the API directly.

cd <path to credential service>

# Run the credential service
bazel run //scripts:run_server -- --use_azure_auth

# In a new terminal, from this directory, run the PACTA database
bazel run //scripts:run_db

# In another terminal, run the PACTA server
bazel run //scripts:run_server -- --with_public_endpoint=$USER

# In one last terminal, run the frontend
cd frontend
npm run local

Status

This project is at a very early stage; expect things to change rapidly.

Testing the PACTA workflow

To run the PACTA workflow code (e.g. from this repo), first create the relevant directories:

# From the repo root
mkdir workflow-data
cd workflow-data

mkdir -p analysis-output pacta-data real-estate score-card survey benchmarks portfolios report-output summary-output

And then load in the relevant files:

  • pacta-data - Should contain timestamped directories (e.g. one per year or quarter) that contain the actual data
  • benchmarks - Should contain timestamped directories containing pre-rendered result sets for comparison to outputs
  • portfolios - Should contain a single default_portfolio.csv; an example can be seen here

Look at scripts/run_workflow.sh for more details. Once all the files are in the correct location, start a run with:

bazel run //scripts:run_workflow

You should see output like:

DEBUG [...] Checking configuration.
INFO [...] Running PACTA
INFO [...] Starting portfolio audit
...

Security

Please report security issues to [email protected], or by using one of the contact methods available on our Contact Us page.

Contributing

Contribution guidelines can be found on our website.

Issues

Allow SSR Requests to Fetch Data

We currently get 401s when SSR'ing pages that request data. As a temporary workaround (mostly to avoid complicating an existing PR), I'm using onMounted hooks instead.

Copy Needed Tracking Bug

This bug should be used everywhere in the code where we need copy suggestions from PACTA owners (probably Hodie). We'll assemble that list using this bug, and ship them all off to him in one fell swoop in ~a month.

Make standard composable for data fetching

Copying Grady's comment from here:

I like this (generally) but notice that almost all of this is standard/boilerplate.

proposal: can we come up with a wrapper for useAsyncData that does a few things:

1 - reuses the key between the loading OpKey (which we can do now that it isn't used for error reporting) and the useAsyncData key
2 - auto-wraps stuff in withLoading so that we don't need to have that ~anywhere
3 - adds the handleOAPIError if the return type of the promise can include the api.Error type
4 - automatically throws createError if an error is returned.
5 - ONLY hands back { data, refresh }, since error handling will all be handled in a standard way.

If we have use cases where we need more customization, folks can still choose to use useAsyncData in a custom way, but the utility described would cover 90% of our use cases with way less boilerplate.
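A framework-free sketch of the proposed wrapper might look like the following. All names here (useTypedData, ApiError, the loading set) are hypothetical stand-ins for the real Nuxt useAsyncData, withLoading, and handleOAPIError helpers mentioned above; this only illustrates the shape of the pattern, not the actual implementation:

```typescript
// Hypothetical stand-in for api.Error from the generated OpenAPI client.
interface ApiError {
  message: string;
}

function isApiError(v: unknown): v is ApiError {
  return typeof v === "object" && v !== null && "message" in v;
}

// Stand-in for withLoading: tracks which keys are currently in flight,
// so the loading key and the data key are one and the same (point 1).
const loading = new Set<string>();

async function useTypedData<T>(
  key: string,
  fetcher: () => Promise<T | ApiError>,
): Promise<{ data: T; refresh: () => Promise<T> }> {
  const run = async (): Promise<T> => {
    loading.add(key); // point 2: auto-wrap in loading state
    try {
      const result = await fetcher();
      if (isApiError(result)) {
        // points 3 + 4: detect API errors and throw instead of returning
        throw new Error(result.message);
      }
      return result;
    } finally {
      loading.delete(key);
    }
  };
  const data = await run();
  // point 5: the caller only ever sees { data, refresh }
  return { data, refresh: run };
}
```

In a real Nuxt composable, run would be handed to useAsyncData under the shared key and the thrown error would go through createError instead of a bare Error.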

Parsing Input/Output Correlation

In the meeting ~1 week ago, we discussed this problem:

The parsing image takes in a folder (containing potentially multiple files) and parses it into (potentially multiple) output portfolios. This means that (a) we don't have a mapping between input files and output portfolios, and (b) we lack good names for the output portfolios.

Ideally, alongside the output portfolios, the parsing image would create a JSON file (or similar) with a structure like:

[
  {
    "output_portfolio_file_name": "uuid-1.txt",
    "number_of_elements": 123, // Or something similar and non-sensitive, to help distinguish at a glance.
    "portfolio_name": "the name of the portfolio from the file",
    "input_file_name": "path_of_file_derived_from.txt"
  },
  ...
]

@AlexAxthelm Could you add something like this to the parsing image? Without it, we're stuck naming portfolios after their UUIDs, which is not ideal from the user's perspective.
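To make the request concrete, here is a consumer-side sketch of the proposed mapping file. Field names are taken from the JSON sketch above; the ParsedPortfolioEntry type and indexByInput helper are hypothetical, not an agreed-on schema:

```typescript
// Hypothetical type for the proposed parsing-output mapping file.
interface ParsedPortfolioEntry {
  output_portfolio_file_name: string;
  number_of_elements: number;
  portfolio_name: string;
  input_file_name: string;
}

// Example consumer: build an input-file -> portfolio-names index, which is
// exactly the mapping the app is currently missing.
function indexByInput(entries: ParsedPortfolioEntry[]): Map<string, string[]> {
  const byInput = new Map<string, string[]>();
  for (const e of entries) {
    const names = byInput.get(e.input_file_name) ?? [];
    names.push(e.portfolio_name);
    byInput.set(e.input_file_name, names);
  }
  return byInput;
}
```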

Portfolio grouping structure

For "grouped" portfolios (logical composites of multiple real portfolios), I see two main ways we could approach this:

  1. The app handles the grouping, and sends a single portfolio object to the PACTA image, which returns a single PACTA report. This would require some CSV processing on the app's part.
  2. Whenever there are multiple portfolios exposed to the PACTA image (mounted in), the image combines them all into a single portfolio and emits a single report. This is my preferred option, since it keeps all that logic in the PACTA image.

Combining the CSVs is the easy part, since those should be standardized after upload. The trickier part will be managing the portfolio parameters and keeping the rules about which portfolios can and cannot be grouped consistent.

For example, we probably shouldn't allow portfolios with different holdings dates to be grouped, or maybe portfolios that are not attached to the same initiative.

I'm thinking the easiest way to handle this would be to have the app handle the logic of grouping portfolios, pass in a single set of portfolio parameters, and expose the relevant portfolios via volume mount.

So if a "normal" (ungrouped) portfolio has parameters along the lines of:

{
  "name": "my cool portfolio",
  "holdingsDate": 20231231,
  "initiative": "GENERAL",
  ...
}

a grouped portfolio might have something like:

{
  "name": "my grouped portfolio",
  "portfolios": [
    "54a66368-7388-4e27-9cce-c0dc3d169d64",
    "efb9110b-4d35-4600-94f9-769f20938555",
    "d824c50f-96aa-47d0-984a-4fc7873cb038",
    "17467439-142f-4314-b32d-cc8533f10ad0"
  ],
  "holdingsDate": 20231231,
  "initiative": "GENERAL",
  ...
}

@gbdubs @bcspragu do you have any preferences here?
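The grouping constraint described above (matching holdings dates, and possibly matching initiatives) could be sketched as a validation check. PortfolioParams and canGroup are hypothetical names, and the exact rules are still up for discussion:

```typescript
// Hypothetical shape of the per-portfolio parameters shown above.
interface PortfolioParams {
  name: string;
  holdingsDate: number; // e.g. 20231231
  initiative: string;
}

// A group must contain at least two portfolios, and all members must
// share a holdings date and an initiative.
function canGroup(portfolios: PortfolioParams[]): boolean {
  if (portfolios.length < 2) return false;
  const [first, ...rest] = portfolios;
  return rest.every(
    (p) =>
      p.holdingsDate === first.holdingsDate &&
      p.initiative === first.initiative,
  );
}
```

Whichever component owns the grouping (app or image) could run a check like this before building the combined portfolio, so mismatched inputs fail fast instead of producing a misleading report.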

Error Yields Hundreds of Requests

If a user is logged out and loads a page and gets an error, we seem to retry hundreds of times. My guess is a poorly considered event listener. I'll find + fix.

Improve Logging Signal to Noise Ratio

When developing locally, the information I need is often terribly buried in the server's output logs. There are tons of CORS entries, tons of 200 responses logged with their full HTTP request and response headers, and error logs that bury the error itself at the end of the message. Improving this would have a significant impact on developer velocity.

Centralize + Import RMI PVDesigner Codebase

A good solution here would:

  • Have a codebase for the PVDesigner core code
  • Have a clear system in place for customizations that could be contributed upstream
  • Have a clear system in place for customizations that are expected to only be used by the given project (e.g. _extensions.scss)
  • Probably be an NPM module of some kind, because we'd want to be able to enforce versioning constraints between this module, PVDesigner, and PV.
