
asgi-caches's Introduction

NOTICE

Now unused. See florimondmanca/www.


Personal

Build Status Angular DigitalOcean

This is the repository for the frontend application powering my personal blog.

For the backend API, see personal-api.

Install

Install Angular CLI:

$ npm install -g @angular/cli

Install the dependencies:

$ npm install

Quickstart

Create an environment file called .env (it will be excluded from version control) at the project root, containing the following variables:

  • API_KEY: a valid API key created via the backend admin site.
  • BACKEND_URL: the URL to the backend root (without trailing slash).

For example:

# .env
API_KEY=myapikey
BACKEND_URL=http://localhost:8000

Generate your development environment file:

$ npm run config -- --env=dev

Start the development server, which will run on http://localhost:4200/:

$ ng serve -c dev

Using server-side rendering

Server-side rendering is implemented using Angular Universal.

Server-side rendering sends fully rendered HTML pages to clients, instead of sending them a blank page and letting Angular fill it in the browser. This reduces the time to first meaningful paint, helps with search engine indexing, and enables integration with social media.

To use the server-rendered app, you must first create a build of the app:

$ npm run build:dev

Note: in production, use npm run build instead to create a production-optimized build.

Then start the server-rendered app (an Express server):

$ npm run serve:ssr

Scripts

See package.json for the available NPM scripts.

CI/CD

Travis CI is configured on this repo and generates a production build on every push to a branch.

asgi-caches's People

Contributors

florimondmanca


asgi-caches's Issues

How to pass cache objects between files?

@invokermain wrote in #22:

A side question I had which would be nice to have documented: do you have a recommended way of passing cache objects to submodules (e.g. if you have various submodules defining a starlette router and some endpoints), or just around the project in general? Having some more example patterns in the documentation would be great 👍

Ability to define cached routes on CacheMiddleware

@invokermain wrote in #22:

For my use case it would be amazing to be able to specify the routes a cache applies to when adding the middleware. Looking through the source code, I cannot see why this would not be possible. I imagine the user could specify this by passing a list of routes, like ['/api/y', '/api/x'], so that anything matching these routes is cached?

Add support for private and public Cache-Control attributes

While #4 will add support for adding arbitrary Cache-Control attributes, the private and public attributes are a bit special. They are exclusive, in that only one of them should be present in the Cache-Control header. Besides, if private is present then the response should never be cached.

To implement this feature:

  • Modify the Cache-Control header generation to allow passing private=True or public=True to the @cache_control decorator (resulting in the private or public directive being present in the Cache-Control header).
    • Make sure that only one of private or public is passed to @cache_control, or raise an exception (e.g. ValueError).
    • If one of public or private is already present in the Cache-Control header (this may happen if e.g. the view returned a response with Cache-Control: public), it should be overridden.
  • In store_in_cache(), make sure to not store the response in the cache if the private cache-control directive is present.
  • Add tests for the new behavior (and modify the one that was testing that private and public weren't supported yet).

Note that this can only be tackled once #4 has been resolved.
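
The exclusivity and override rules above can be sketched as a small helper. This is a hypothetical sketch (the function name and the parsed-directives dict representation are assumptions, not the project's actual API):

```python
def apply_privacy(directives: dict, private: bool = False, public: bool = False) -> dict:
    """Return a copy of parsed Cache-Control directives with the private/public
    rules applied: the two are mutually exclusive, and either one overrides any
    pre-existing private/public directive on the response."""
    if private and public:
        raise ValueError("'private' and 'public' are mutually exclusive")
    # Drop any pre-existing private/public directive so it gets overridden.
    result = {k: v for k, v in directives.items() if k not in ("private", "public")}
    if private:
        result["private"] = True
    if public:
        result["public"] = True
    return result
```

The separate "never store private responses" rule would then be a simple check for the "private" key inside store_in_cache().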

Multiple caches for multiple TTLs?

Hi flori,

This project looks very exciting and very useful for my use case (a web API that gets/posts data from a database), but when I sat down today to try it out I realised it is quite rigid in terms of how it can be used. On the surface my options are:

  1. Either application wide caching with one TTL.
  2. Or define multiple caches with multiple TTLs and assign them per endpoint (clunky, and I have to pass cache objects around the whole project).

For my use case it would be amazing to be able to specify the routes a cache applies to when adding the middleware. Looking through the source code, I cannot see why this would not be possible. I imagine the user could specify this by passing a list of routes, like ['/api/y', '/api/x'], so that anything matching these routes is cached?

It might be possible to take this one step further and specify TTLs per route, with the middleware handling the creation of the various caches. But I am not familiar with the Cache package.

What are your thoughts? Happy to fork and try to implement the above.

Thanks

P.S. Looking at the code, I can see that caching for GET requests is unique to each set of parameters. This is great for the API use case; it would be nice to have this in the documentation.

Document the cache key algorithm

@invokermain wrote in #22:

P.S. Looking at the code, I can see that caching for GET requests is unique to each set of parameters. This is great for the API use case; it would be nice to have this in the documentation.

While it is true that this particular behavior is not documented yet, I think it is one aspect of a broader issue: the cache keying algorithm is currently not documented at all.

There are already various bits of info embedded in docstrings for various functions in asgi_caches/utils/cache.py, but we should probably also surface them in our prose docs, perhaps even with some diagrams.
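
As a rough illustration of what such docs might describe, here is a simplified, hypothetical sketch (not the actual algorithm in asgi_caches/utils/cache.py): the key combines the request method, the URL, and the values of any request headers listed in the response's Vary header.

```python
import hashlib

def build_cache_key(method: str, url: str, varying_headers: list, request_headers: dict) -> str:
    """Simplified sketch of a Vary-aware cache key (for illustration only)."""
    vary_hash = hashlib.md5()
    for name in varying_headers:
        # Different values for a varied request header yield different keys,
        # so e.g. desktop and mobile clients get separate cache entries.
        vary_hash.update(request_headers.get(name.lower(), "").encode())
    url_hash = hashlib.md5(url.encode()).hexdigest()
    return f"cache_page.{method}.{url_hash}.{vary_hash.hexdigest()}"
```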

Docs site

Currently the documentation is very much usage-centered, e.g. "if you want to solve X then use Y".

Various issues start to surface the need for other forms of docs, e.g. reference on the cache keying algorithm (#24), or usage patterns or best practices (#25).

I'd also like to add some API docs at some point.

As the docs expand we're quickly going to reach the limit of what a README can hold without becoming unreadable or hard to navigate.

So… time for a MkDocs site?

Update docs with Starlette 0.13+ usage

Starlette 0.13 introduced a new declarative style for defining apps, routes, etc.:

routes = [
    Route('/', homepage),
    Mount('/users', routes=[
        Route('/', users, methods=['GET', 'POST']),
        Route('/{username}', user),
    ])
]

app = Starlette(routes=routes)

Our documentation is now out of date w.r.t. https://www.starlette.io. So although things aren't broken, we need to update the code snippets to use the declarative style for consistency with the newest Starlette release.

A note on per-endpoint caching docs: we should keep the @cached() decorator on top of HTTPEndpoint classes, even though #23 makes me think we should probably rework the @cached() decorator into something that can act both as a decorator and as a standard function (e.g. cache_endpoint(home, cache=cache)).

Debug logs

There's no way to tell from a response whether the server-side cache was hit (e.g. server-side caching does not result in sending back a 304), which is a pain for debugging.

So let's add logs on two levels:

  • DEBUG: show HTTP-level behavior (cache miss/cache hit/uncachable request)
  • TRACE: show exactly what happens, in particular any operations with the cache system (cache get/set, key generation, uncachable response cases, etc)
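
Since the stdlib logging module has no built-in TRACE level, one would need to register a custom one. A minimal sketch (the level number, logger name, and messages are assumptions, not the project's final logging scheme):

```python
import logging

TRACE = 5  # below DEBUG (10); not a built-in stdlib level
logging.addLevelName(TRACE, "TRACE")

logger = logging.getLogger("asgi_caches")

def log_lookup(hit: bool, key: str) -> None:
    # DEBUG: HTTP-level behavior (cache hit/miss).
    logger.debug("cache_hit" if hit else "cache_miss")
    # TRACE: exact cache operation detail, e.g. the key that was used.
    logger.log(TRACE, "cache_get key=%r", key)
```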

Implement endpoint-level overriding of TTL

Currently the max-age directive in the Cache-Control header is defined globally by cache.ttl. But users may want to specify a different value for a given endpoint (for example a longer TTL, because the resource doesn't change very often).

The proposed API is a ttl parameter on the @cached decorator:

from starlette.endpoints import HTTPEndpoint
from asgi_caches.decorators import cached

@app.route("/pi")
@cached(cache, ttl=60 * 60)  # Cache for 1 hour
class Pi(HTTPEndpoint):
    async def get(self, request):
        ...

To implement this feature:

  • Add an optional ttl parameter on the @cached decorator.
  • If it is passed, use it instead of cache.ttl when computing the max_age.
  • Add a test to verify that if given, the custom ttl is used instead of the cache.ttl.
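
The override logic of the steps above can be sketched as follows (the Cache stub and the max_age attribute are stand-ins for the real middleware machinery, used here only to show the precedence rule):

```python
class Cache:
    """Stand-in for the real cache object, which carries a global TTL."""
    def __init__(self, ttl: float):
        self.ttl = ttl

def cached(cache: Cache, *, ttl: float = None):
    """Sketch of the proposed API: an endpoint-level ttl overrides cache.ttl."""
    max_age = cache.ttl if ttl is None else ttl
    def decorate(endpoint):
        # The real implementation would emit Cache-Control: max-age=<max_age>.
        endpoint.max_age = max_age
        return endpoint
    return decorate
```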

Disallow multiple endpoint-level cache configuration

Currently, it is theoretically possible for a user to do the following:

@cached(cache)
@cached(cache)
...
@cached(cache)
class Endpoint(HTTPEndpoint):
    ...

i.e. to apply the @cached decorator introduced by #15 an arbitrary number of times.

I can't see a situation where this would make sense. Even when #3 lands, the only reasonable behavior is to apply the top-most decorator, which means the other decorators have no effect and should be disallowed to prevent confusion.

Users could also have CacheMiddleware applied to the app, and then apply @cached to an endpoint. This shouldn't be allowed either.

To implement this feature:

  • Modify the @cached decorator to raise an exception (ValueError is probably fine) if the decorated app is an instance of CacheMiddleware (because this means that the decorator has already been applied).
  • Add a test to verify that the exception is raised if we try to apply the decorator twice.
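
The guard described above could look like this (simplified stand-ins for CacheMiddleware and @cached, to show the isinstance check only):

```python
class CacheMiddleware:
    """Stand-in: wrapping an app in this marks it as already cached."""
    def __init__(self, app):
        self.app = app

def cached(cache=None):
    def decorate(app):
        if isinstance(app, CacheMiddleware):
            # Already decorated, or already wrapped in CacheMiddleware.
            raise ValueError("cache configuration is already applied to this app")
        return CacheMiddleware(app)
    return decorate
```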

Implement never_cache decorator

Currently, when CacheMiddleware is applied, all application endpoints are cached according to cache.ttl. Users may want to exclude a specific endpoint from caching, e.g. because its response should always be fresh.

The proposed API is a new @never_cache decorator:

from datetime import datetime
from starlette.endpoints import HTTPEndpoint
from starlette.responses import JSONResponse
from asgi_caches.decorators import never_cache

@app.route("/datetime")
@never_cache
class DateTime(HTTPEndpoint):
    async def get(self, request):
        return JSONResponse({"time": datetime.utcnow().isoformat()})

Alternatives that were considered:

  • Use @cached(ttl=0): this would work, as currently if the TTL is zero we do not cache at all (see #10), but it is not the most intuitive API.

To implement this feature:

  • Add the @never_cache decorator in decorators.py. You may want to implement it as a proxy to @cached(ttl=0), if it turns out to work.
  • Add a test that if the decorator is applied, the endpoint is not cached even if the application is wrapped in CacheMiddleware.

(Note: the situation where both @cached and @never_cache are applied isn't within the scope of this issue. It will be dealt with as part of #16.)
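
The proxy approach from the checklist can be sketched like this (the ttl attribute is a stand-in for the real caching behavior, used here only to show that @never_cache can be expressed in terms of @cached):

```python
def cached(cache=None, *, ttl: float = None):
    """Stand-in: records the effective TTL; ttl=0 means 'do not cache' (#10)."""
    def decorate(endpoint):
        endpoint.ttl = ttl
        return endpoint
    return decorate

# @never_cache is then just @cached(ttl=0), exposed as a plain decorator
# (no parentheses needed at the call site).
never_cache = cached(ttl=0)
```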

Implement cache_control decorator

Cache-Control is a header that allows multiple key=value pairs. There is currently no way to pass arbitrary Cache-Control directives to the response (at least without manually setting the header as part of the endpoint, e.g. on a Starlette response). Indeed, right now asgi-caches automatically adds max-age based on the ttl, but more directives exist, e.g. no-transform, must-revalidate, etc. (see the HTTP cache directives registry).

The proposed API is a @cache_control decorator:

from starlette.endpoints import HTTPEndpoint
from asgi_caches.decorators import cache_control

@cache_control(must_revalidate=True, no_transform=True)
class View(HTTPEndpoint):
    ...

To implement this feature:

  • Define a @cache_control decorator in decorators.py.
    • It should accept arbitrary **kwargs and add them to the Cache-Control header. (This processing should probably be performed by a utility in utils/cache.py.)
    • It should not apply caching by itself (that is the role of @cached, which users should be able to apply on top of or below this decorator). Again, it should only add the **kwargs to the Cache-Control header.
    • Names should be converted to kebab-case (i.e. replace _ with -).
    • Items with the boolean value True should result in the bare directive being added to the header (e.g. no_transform=True should result in no-transform, without =True).
    • Non-boolean items should be added as a key=value pair to the header (e.g. stale_if_error=60 should result in stale-if-error=60).
    • Items in **kwargs should be merged into any Cache-Control header pre-existing on the response. In particular, this means that False items should result in removing the directive if it is already present. For example, Cache-Control: max-age=60, must-revalidate combined with must_revalidate=False should result in Cache-Control: max-age=60.
  • Add a test to verify that kwargs passed to @cache_control are correctly transcribed into the Cache-Control header.

Note that dealing with the private and public attributes is out of scope of this issue, because they require some extra behavioral changes. They will be tackled as part of #17. They should be disallowed for now (e.g. raise a NotImplementedError if they are in **kwargs).
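
The merging rules above can be sketched as a helper (hypothetical name; real parsing would also need to handle quoted directive values, which this sketch skips):

```python
def patch_cache_control(header: str, **kwargs) -> str:
    """Merge **kwargs into an existing Cache-Control header value:
    kebab-case names, bare directives for True, key=value otherwise,
    and False removes an existing directive."""
    directives = {}
    for part in filter(None, (p.strip() for p in header.split(","))):
        key, _, value = part.partition("=")
        directives[key] = value if value else True
    for name, value in kwargs.items():
        if name in ("private", "public"):
            raise NotImplementedError("private/public are handled separately (#17)")
        key = name.replace("_", "-")
        if value is False:
            directives.pop(key, None)  # False removes the directive
        else:
            directives[key] = value
    return ", ".join(
        key if value is True else f"{key}={value}" for key, value in directives.items()
    )
```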

Alternatives considered:

  • Allowing arbitrary **kwargs to be passed to @cached: this isn't desirable, because we want to be able to apply Cache-Control attributes without necessarily setting the max-age attribute and Expires header (which is what @cached does).

Implement CacheMiddleware

Still need to figure out:

  • Whether separating request processing (fetching from the cache) from response processing (storing in the cache) is relevant. The key observation is that fetching from the cache must occur first in the request processing chain, while storing in the cache must occur last in the response processing chain. This suggests that a single middleware is sufficient, so long as it is applied last (we'll need to document this).
  • How to implement as a generic ASGI middleware.
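
A bare skeleton of the single-middleware approach, written as a pure ASGI callable (hook points only; the actual lookup and storage logic is omitted):

```python
class CacheMiddleware:
    """Skeleton of a generic ASGI cache middleware (sketch, not the real code)."""

    def __init__(self, app, cache=None):
        self.app = app
        self.cache = cache

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            # Pass non-HTTP connections (e.g. websocket, lifespan) through untouched.
            await self.app(scope, receive, send)
            return

        # Request phase: a cache hit would short-circuit here and send the
        # stored response instead of calling the inner app.

        async def send_wrapper(message):
            # Response phase: 'http.response.start' carries the status and
            # headers, which is where store_in_cache() would hook in.
            await send(message)

        await self.app(scope, receive, send_wrapper)
```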

Implement vary_on_headers

(This functionality is not pre-documented in the README yet.)

asgi-caches already takes the Vary response header into account when building its cache key. (Vary lists request headers that may result in a different response. For example, Vary: User-Agent means that clients may get a different response depending on the device they access the website from, e.g. desktop vs mobile.)

However, users currently have no way to robustly specify what goes into the Vary header. They could use their framework-specific response headers API (e.g. return PlainTextResponse("...", headers={"Vary": "Accept-Encoding"}) in Starlette), but this would replace the Vary header. Instead, we want to allow them to add request headers to the Vary header.

The proposed API is a @vary_on_headers() decorator:

from starlette.endpoints import HTTPEndpoint
from asgi_caches.decorators import vary_on_headers

@vary_on_headers('User-Agent', 'Cookie')
class View(HTTPEndpoint):
    ...

To implement this feature:

  • Define a @vary_on_headers() decorator in decorators.py.
  • It should accept a variable number of (case-insensitive) request headers.
  • The provided headers should be added to the Vary header of the response. (Starlette already provides a utility for this in the form of response.headers.add_vary_header().)
  • Add one or more tests to verify that using @vary_on_headers() results in the listed headers being added to the response's Vary header.
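
The header-merging step can be sketched as a plain string helper (hypothetical; in practice Starlette's add_vary_header() does the real work on the response headers):

```python
def add_vary_headers(vary: str, *headers: str) -> str:
    """Merge new header names into an existing Vary value,
    case-insensitively and without introducing duplicates."""
    existing = [h.strip() for h in vary.split(",") if h.strip()]
    seen = {h.lower() for h in existing}
    for header in headers:
        if header.lower() not in seen:
            existing.append(header)
            seen.add(header.lower())
    return ", ".join(existing)
```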

Note: the equivalent functionality in Django is documented here.
