
template-application-flask's Introduction

Template Application Flask

Overview

This is a template application that can be used to quickly create an API using Python and the Flask framework. It comes with a number of already-implemented features and modules, including:

  • Python/Flask-based API, with example endpoints, that writes to a database and authenticates requests with an API key
  • PostgreSQL database + Alembic migrations configured for updating the database when the SQLAlchemy database models are updated
  • Thorough formatting & linting tools
  • Logging, with formatting in both human-readable and JSON formats
  • Backend script that generates a CSV locally or on S3 with proper credentials
  • Ability to run the various utility scripts inside or outside of Docker
  • Restructured and improved API request and response error handling that gives more detail than the out-of-the-box behavior of both Connexion and Pydantic
  • Easy environment variable configuration for local development using a local.env file (see the example below)
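
For illustration, a local.env might contain entries along these lines. The values are hypothetical, and apart from the POSTGRES_* prefix discussed in the issues below, the variable names are assumptions, not the template's actual names:

    # local.env -- hypothetical example values
    ENVIRONMENT=local
    POSTGRES_HOST=localhost
    POSTGRES_PORT=5432
    POSTGRES_USER=app
    POSTGRES_PASSWORD=secret123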

The template application is intended to work with the infrastructure from template-infra.

Installation

To get started using the template application on your project:

  1. Run the download and install script in your project's root directory.

    curl https://raw.githubusercontent.com/navapbc/template-application-flask/main/template-only-bin/download-and-install-template.sh | bash -s

    This script will:

    1. Clone the template repository
    2. Copy the template files into your project directory
    3. Remove any files specific to the template repository
  2. Optional, if using the Platform infra template: Follow the steps in the template-infra README to set up the various pieces of your infrastructure.

Note on memory usage

If you are using template-infra, you may want to increase the default memory allocated to the ECS service to 2048 MB (2 GB) to avoid the gunicorn workers running out of memory. This is because the application is currently configured to create multiple workers based on the number of virtual CPUs available, which can use more memory.
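
For context, CPU-based worker scaling in gunicorn typically looks like the following. This is a minimal sketch; the template's actual configuration may differ:

    # gunicorn.conf.py -- sketch of CPU-based worker scaling;
    # the template's actual configuration may differ
    import multiprocessing

    # A common heuristic is two workers per CPU, plus one
    workers = multiprocessing.cpu_count() * 2 + 1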

Getting started

Now you're ready to get started.


template-application-flask's Issues

Investigate DB options regarding the ORM & Migrations

Currently this is built out to use SQLAlchemy + Alembic to handle most operations with the Postgres database, along with FactoryBoy + Faker to generate the test data.

Consider what alternative options could be available that might be:

  • Simpler
  • DB language agnostic

Add make test-watch

I noticed we have make test and make test-coverage but not make test-watch. It could be useful for developer productivity.

Keep openapi.yml file in sync with code

Context

make openapi-spec will generate the openapi.yml file from the schema files, but it is currently run manually, so the file can get out of sync.

Options

  1. Have a CI check that runs make openapi-spec and fails if it changes the file (e.g. the git index isn't clean); see the sketch after this list
  2. Have a workflow that runs make openapi-spec on PRs and, if there are any changes, automatically commits and pushes them
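
Option 1 could be as small as a CI step that regenerates the spec and fails on a dirty diff. A minimal sketch, assuming openapi.yml is tracked at the path the Makefile writes to:

    make openapi-spec
    git diff --exit-code openapi.yml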

Tracking mutations in the database

This is a draft idea for discussion.

Goal

Record the operation (batch job or API POST/PATCH/PUT) that created or last modified each row in the database. This is often helpful when troubleshooting or debugging.

Inspiration:

  • created_at and updated_at track when a row was created or modified (TimestampMixin), but not who or what did it
  • in pfml, some tables have a reference to import_log to track which batch job created or updated each row

Logs could answer this too, but it's not always easy to pull out of logs (especially in bulk), and it requires having the right logging calls in place.

Proposal

Add created_mutation_id and updated_mutation_id columns, both FKs to a new table mutation:

mutation
  mutation_id       int   primary key, auto increment (or uuid)
  api_operation_id  int   FK to a lookup table of API operations; NULL if this wasn't an API call
  user_id           uuid  FK to user, if there was an authenticated user
  batch_name        text  name of the batch job; NULL if this wasn't a batch
  batch_step        text  step of the batch job
  status            text  status of the batch job
  etc.
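
In SQLAlchemy terms (which this template already uses), the proposed table could look roughly like the sketch below. The column names mirror the proposal; the referenced table names are assumptions:

    # Hypothetical model for the proposed mutation table; the referenced
    # table/column names ("api_operation", "user") are assumptions.
    from sqlalchemy import Column, ForeignKey, Integer, Text
    from sqlalchemy.dialects.postgresql import UUID
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Mutation(Base):
        __tablename__ = "mutation"

        mutation_id = Column(Integer, primary_key=True, autoincrement=True)
        # NULL unless the mutation came from an API call
        api_operation_id = Column(Integer, ForeignKey("api_operation.api_operation_id"), nullable=True)
        # Set when there was an authenticated user
        user_id = Column(UUID(as_uuid=True), ForeignKey("user.user_id"), nullable=True)
        batch_name = Column(Text, nullable=True)  # NULL unless from a batch job
        batch_step = Column(Text, nullable=True)
        status = Column(Text, nullable=True)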

Examples

Let's say the address table has these columns. It looks like this:

address_id  created_at  updated_at  created_mutation_id  updated_mutation_id  street                   ...
...
100         04:30:00    11:30:00    4001                 4500                 "1500 Pennsylvania Ave"  ...
101         04:30:00    NULL        4001                 NULL                 "2200 New York Ave"      ...
...

The mutation table has:

mutation_id  api_operation_id      user_id   batch_name        batch_step           status
...
4001         NULL                  NULL      "address.import"  "AddressImportStep"  "success"
4500         fk → (POST /address)  33445566  NULL              NULL                 NULL
...

This shows that address 100 was:

  • Created at 04:30 by batch job "address.import"
  • Later updated at 11:30 by an API POST from user 33445566

Address 101, meanwhile, was created by the same batch job and has not been modified since.

Out of scope

  • Tracking all changes. Only creation and the most recent modification are recorded; keeping a full history of every change in the db is too much for most use cases.

Add CI

Context

template-application-flask still doesn't have CI. We want to add a ci-app.yml that runs the checks for the Flask application. This CI should be usable on any project that uses the template.

See this GitHub thread.

Rename api package to something more generic

Context

The current folder structure is /app/api/....
Let's rename it to /app/lib/... or something else that can apply to both API code and background jobs.

Once we do this, we can rename /lib/route to /lib/api, since the contents of the route package are really API-specific (API schemas, API handlers/controller layer).

Document instructions for local database management

Context

The database commands in the Makefile could use more comments, plus some additional documentation in the docs folder explaining how and when to use them.

I also couldn't find any instructions on how to start the database from scratch, i.e. deleting the volume, which would be especially useful for template development; a sketch of one approach follows.
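
A from-scratch reset might look like the following, assuming the database runs under docker compose. The service name "db" is a guess; the actual name is whatever docker-compose.yml defines, and migrations would then be re-run with the appropriate make target:

    # Stop containers and delete volumes, wiping all database state
    docker compose down --volumes
    # Bring the database service back up empty
    docker compose up -d db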

Document ADR for choice of poetry as package management tool

Context

Poetry is being used as the package management tool; it'd be good to retroactively document this decision as an ADR.

References

Some references turned up in a quick googling; additional resources may be available in internal Slack.

Separate out project packages from core packages

Needs refinement, but the idea is to separate out project-specific packages from core packages to solidify the hexagonal architecture and make it harder to add business logic to low-level adapters, etc.

Investigate alternative API validation approaches

Our current setup has validation occurring in OpenAPI/Swagger and again in the Pydantic models we define. Connexion requires an OpenAPI spec, but OpenAPI can't express many validations and is clunky to set up in a lot of ways.

A lot of this validation ends up duplicated, as in the sketch below. Investigate ways of doing it differently.
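
For example, a simple length constraint ends up stated twice: once in the OpenAPI schema and once in a Pydantic model roughly like this (the schema and field names here are hypothetical):

    # Hypothetical request schema; these constraints would also have to
    # be declared in the OpenAPI spec, which is the duplication at issue
    from pydantic import BaseModel, Field

    class AddressRequest(BaseModel):
        street: str = Field(..., max_length=100)
        city: str = Field(..., min_length=1)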

A few leads:

Add with_db_session and with_db_session_transaction decorators

It could be a nice convenience to add decorators that automatically create a new session in a context manager (and optionally start a transaction).

Design options

This could be two decorators, with_db_session and with_db_session_transaction, or one decorator with a parameter: with_db_session(start_transaction: bool).
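
Assuming SQLAlchemy sessions, the one-decorator variant could look roughly like this. All names are illustrative; SessionFactory stands in for however the db module actually creates sessions:

    # Sketch only: SessionFactory is a placeholder for the app's real
    # session factory, which would be bound to the database engine.
    import functools
    from sqlalchemy.orm import sessionmaker

    SessionFactory = sessionmaker()

    def with_db_session(start_transaction: bool = False):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # Open (and automatically close) a session
                with SessionFactory() as session:
                    if start_transaction:
                        # begin() commits on success, rolls back on error
                        with session.begin():
                            return func(*args, db_session=session, **kwargs)
                    return func(*args, db_session=session, **kwargs)
            return wrapper
        return decorator

A function decorated with @with_db_session(start_transaction=True) would then receive the open session as a db_session keyword argument.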

Connect to DB using IAM rather than username/password

Add an option in db module to connect to postgres db using IAM rather than with username/password
Use this as the default option
Ideally keep username/password option as an option for projects that don't need complexity of IAM auth
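
With RDS, the IAM route typically means generating a short-lived auth token and passing it as the password when connecting. A sketch using boto3 (host, port, user, and region values are placeholders):

    # Sketch of IAM DB auth: the returned token is used in place of a
    # password when opening the connection
    import boto3

    def get_iam_auth_token(host: str, port: int, user: str, region: str) -> str:
        client = boto3.client("rds", region_name=region)
        return client.generate_db_auth_token(
            DBHostname=host, Port=port, DBUsername=user
        )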

Considerations

Need to figure out how to test this locally.

Dependencies

This might depend on the Aurora DB work in template-infra.

Make the API generic - removing WIC references

After #2, go through and remove the WIC MT-specific naming and implementation, including:

  • Documentation
  • API implementation
  • Script implementation

Note this should produce an equivalent, but not fully formed, implementation (e.g. missing thorough examples).

Rename DB env vars to DB prefix

DB env vars are currently prefixed POSTGRES_*, but I think they should be DB_*: I prefer names that represent the role/purpose rather than the language/tech.
