pallets-eco / flask-caching
A caching extension for Flask
Home Page: https://flask-caching.readthedocs.io
License: Other
I just realized that trying to get an item set by another app always returns None. How can I share a cache between apps using the same cache server? Does the extension add a unique identifier to the key when setting an item in the cache? If so, how would I get that identifier?
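As far as I know, flask-caching prepends the CACHE_KEY_PREFIX config value (empty by default) to every key, so two apps pointed at the same server can share entries by agreeing on the prefix and backend settings. A minimal sketch; the "shared_" prefix is a made-up value for this example:

```python
# Both apps must use the same CACHE_KEY_PREFIX (and the same backend
# settings) so that a key set by one app is readable by the other.
shared_cache_config = {
    "CACHE_TYPE": "redis",
    "CACHE_REDIS_HOST": "127.0.0.1",
    "CACHE_KEY_PREFIX": "shared_",   # hypothetical prefix both apps agree on
}

def full_key(logical_key):
    # What actually reaches the cache server: prefix + key.
    return shared_cache_config["CACHE_KEY_PREFIX"] + logical_key
```

Note that keys created by the view decorator also embed the request path, so sharing those additionally requires matching routes.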
Is there a method to limit the cache size (e.g. by number of entries or by size in bytes)?
If I set timeout=None, I may end up with a lot of entries in the cache after a while, so I need a way to limit them.
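For the simple and filesystem backends there is an entry-count cap, if I remember correctly: CACHE_THRESHOLD sets the maximum number of items before the backend starts pruning (I am not aware of a size-in-bytes limit). A sketch of the relevant config, with made-up numbers:

```python
# CACHE_THRESHOLD caps the number of stored entries; the directory and
# the 10000 figure are example values, not recommendations.
config = {
    "CACHE_TYPE": "filesystem",
    "CACHE_DIR": "/tmp/app-cache",   # hypothetical cache directory
    "CACHE_DEFAULT_TIMEOUT": 0,      # entries never expire on their own
    "CACHE_THRESHOLD": 10000,        # prune once 10k entries accumulate
}
```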
In the context of my plugin flask-capitains-nemo (https://github.com/Capitains/flask-capitains-nemo), I am trying to memoize a certain list of functions. I currently cache these dynamically through:
if self.cache is not None:
for func, instance in self.cached:
setattr(instance, func.__name__, self.cache.memoize()(func))
It works well as long as I don't cache functions that have no enforced args or kwargs scheme but are actually route dispatchers:
def render(self, template, **kwargs):
will always create the key flask_nemo.Nemo.render('main::collection.html',){} because the output of self._memoize_kwargs_to_args does not handle a **kwargs signature. It is most likely the result of the use of get_arg_names, which will find only one defined arg.
I'll be trying to propose a fix for this issue.
Why is decorated_function used here?
Can you explain, please?
It's a bit unclear to me whether the CACHE_* configs can be put into the regular Flask config, or whether they are separate from it.
Is this valid?
class Config(object):
    # Unrelated examples
    SECRET_KEY = configfile.get("env", "secret")
    CORS_HEADERS = ["Content-Type", "Authorization", "User"]
    # Related to flask-caching
    CACHE_TYPE = "redis"
    CACHE_REDIS_HOST = "127.0.0.1"
    # etc

CONFIG = Config()
app = Flask(__name__)
app.config.from_object(CONFIG)
cache = Cache(app)
Or should it be separate?
class Config(object):
    # Unrelated examples
    SECRET_KEY = configfile.get("env", "secret")
    CORS_HEADERS = ["Content-Type", "Authorization", "User"]

class CacheConfig(object):
    # Related to flask-caching
    CACHE_TYPE = "redis"
    CACHE_REDIS_HOST = "127.0.0.1"

CONFIG = Config()
CACHE_CONFIG = CacheConfig()
app = Flask(__name__)
cache = Cache(app, CACHE_CONFIG)
app.config.from_object(CONFIG)
Thanks!
def delete_many(self, *keys):
    """Deletes multiple keys at once.

    :param keys: The function accepts multiple keys as positional
                 arguments.
    :returns: Whether all given keys have been deleted.
    :rtype: boolean
    """
    return all(self.delete(key) for key in keys)
Currently the delete loop stops at the first key whose deletion returns False (that is how all() behaves when it consumes a generator).
This is annoying because you have to check that every key you want to delete is actually present in the cache.
The default behaviour should be to continue deleting despite the errors.
The trick is to force full evaluation before all() can short-circuit:
return all([self.delete(key) for key in keys])
Or add it as an option?
def delete_many(self, *keys, ignore_errors=False)
Note: I found this problem with the file system cache
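To illustrate the difference, here is a dict-backed stand-in cache (not the real backend) showing that the list comprehension forces every delete to run, while all() over a generator stops at the first False:

```python
class DemoCache:
    """Dict-backed stand-in for a cache backend, for illustration only."""

    def __init__(self):
        self.store = {"a": 1, "b": 2}
        self.delete_calls = []

    def delete(self, key):
        self.delete_calls.append(key)
        return self.store.pop(key, None) is not None

    def delete_many_lazy(self, *keys):
        # all() short-circuits: deletion stops at the first missing key.
        return all(self.delete(key) for key in keys)

    def delete_many_eager(self, *keys):
        # The list comprehension runs every delete before all() looks at it.
        return all([self.delete(key) for key in keys])

lazy = DemoCache()
lazy.delete_many_lazy("missing", "a", "b")
# only "missing" was attempted; "a" and "b" remain cached

eager = DemoCache()
eager.delete_many_eager("missing", "a", "b")
# all three deletes were attempted; "a" and "b" are gone
```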
Often it is useful to know the hit to miss ratio for a particular view function, or maybe even for a memoization key, which includes both the function name and call parameters.
It would be very useful if Flask-Caching provided a way to register hooks that would be called on each hit and miss. It may also be useful to have a hook for errors thrown from the cache backend.
What do people think? I can work on a PR if there is a chance that it would be accepted.
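As a starting point for discussion, the hook API could be as small as a wrapper around get; the names on_hit/on_miss here are invented for the sketch, not part of any existing API:

```python
class InstrumentedCache:
    """Wraps any object with a get(key) method and fires hit/miss hooks."""

    def __init__(self, backend, on_hit=None, on_miss=None):
        self.backend = backend
        self.on_hit = on_hit or (lambda key: None)
        self.on_miss = on_miss or (lambda key: None)

    def get(self, key):
        value = self.backend.get(key)
        if value is None:
            self.on_miss(key)
        else:
            self.on_hit(key)
        return value

stats = {"hit": 0, "miss": 0}
cache = InstrumentedCache(
    {"k": "v"},  # a dict works as the backend: dict.get returns None on miss
    on_hit=lambda key: stats.__setitem__("hit", stats["hit"] + 1),
    on_miss=lambda key: stats.__setitem__("miss", stats["miss"] + 1),
)
cache.get("k")       # hit
cache.get("absent")  # miss
```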
Hello!
I ran into the following problem:
I tried to memoize a property getter this way:
class TestClass:
    def __init__(self, value):
        self._value = value

    @property
    @cache.memoize(100)
    def value(self):
        return self._value
and I got:
DeprecationWarning("Deleting messages by relative name is " "no longer reliable, please switch to a " "function reference.")
I think it is a bug, and I fixed it in this pull request: #9
Curious if this exists; happy to look into implementation if it doesn't.
Not all caches are reliable, and sometimes services go down, so I'm looking for something that can set a connection timeout.
For instance, if Redis doesn't respond in 5 seconds, ignore the cache and proceed with business as usual.
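One possible route (an assumption on my part, worth verifying against your version) is CACHE_OPTIONS, whose entries are forwarded to the underlying redis client; redis-py itself accepts socket_timeout and socket_connect_timeout:

```python
# The CACHE_OPTIONS keys shown are redis-py client parameters; after 5
# seconds the client raises a timeout error instead of hanging.
config = {
    "CACHE_TYPE": "redis",
    "CACHE_REDIS_HOST": "localhost",
    "CACHE_OPTIONS": {
        "socket_connect_timeout": 5,
        "socket_timeout": 5,
    },
}
```

The cached decorator already logs "Exception possibly due to cache backend" and falls through to the view on backend errors, so a timeout plus that behaviour may get you most of the way.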
Exception possibly due to cache backend.
Traceback (most recent call last):
File "https://github.com/sh4nks/flask-caching/blob/master/flask_caching/__init__.py"
rv = self.cache.get(cache_key)
File "https://github.com/sh4nks/flask-caching/blob/master/flask_caching/backends/clients.py"
return self._get(key)
serialized = ''.join([v for v in result if v is not None])
TypeError: sequence item 0: expected str instance, bytes found
Python 3.6.0
Flask-Cache==0.13.1
How do I delete a specific cache entry?
I mean, cache.clear() will remove the whole cache; what if I want to delete only one cached route?
ref: https://stackoverflow.com/questions/36180066/flask-cache-equivalent-of-delete-memoized-for-clear
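To the best of my knowledge, a view cached with @cache.cached is stored under the request path with a "view/" prefix by default, so a single route can be evicted without clearing everything. A sketch; the helper name is made up:

```python
def view_cache_key(path):
    # Default key flask-caching uses for @cache.cached views:
    # the "view/%s" template with the request path filled in.
    return "view/%s" % path

# Hypothetical usage against a real cache object:
# cache.delete(view_cache_key("/reports/daily"))  # evicts just that route
# cache.delete_memoized(func)                     # for memoized functions
```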
It seems that this library only supports pylibmc. This is a request to support other libs, e.g., pymemcache and bmemcached.
I am using the FileSystem cache, but on Windows, when I try to do cache.clear(), it doesn't actually delete anything.
Reading from the cache also quite often results in the traceback below. Any ideas? Thanks.
File "C:\Anaconda3\lib\site-packages\flask_caching\__init__.py", line 665, in decorated_function
f, *args, **kwargs
File "C:\Anaconda3\lib\site-packages\flask_caching\__init__.py", line 492, in make_cache_key
f, args=args, timeout=_timeout, forced_update=forced_update
File "C:\Anaconda3\lib\site-packages\flask_caching\__init__.py", line 481, in _memoize_version
timeout=timeout)
File "C:\Anaconda3\lib\site-packages\werkzeug\contrib\cache.py", line 195, in set_many
if not self.set(key, value, timeout):
File "C:\Anaconda3\lib\site-packages\werkzeug\contrib\cache.py", line 776, in set
rename(tmp, filename)
File "C:\Anaconda3\lib\site-packages\werkzeug\posixemulation.py", line 97, in rename
old = "%s-%08x" % (dst, random.randint(0, sys.maxint))
AttributeError: module 'sys' has no attribute 'maxint'
Flask-Caching==1.3.3 currently supports connecting to Redis over TLS when you create a connection using the CACHE_REDIS_URL configuration option:
In [1]: from flask_caching import Cache
In [2]: from flask import Flask
In [3]: config = {'CACHE_TYPE': 'redis', 'CACHE_REDIS_URL': 'rediss://:[email protected]:6380/0'}
In [4]: cache = Cache(config=config)
In [5]: app = Flask(__name__)
In [6]: cache.init_app(app)
In [7]: cache.get("my-key")
In [8]: cache.set('hi', 'hello')
Out[8]: True
In [9]: cache.get("hi")
Out[9]: 'hello'
but this support is not mentioned in the documentation. We should add a note that CACHE_REDIS_URL accepts rediss:// URLs.
Hi there, it's me again.
One thing I'd like to do is to be able to provide my own backend object instead of going through config, via a backend keyword argument on __init__(). This way I can share or have specific backend objects across my code base :)
If this feature is okay, I'll put in the work. Question: would type checking against werkzeug.contrib.BaseCache be required?
Cheers
Hello,
I think the docs for template caching could be improved in two ways:
Describe that the keys control whether something is served from the cache: if the keys change, the cached content is not reused. I'm not sure of the best wording, but on first reading the documentation it wasn't clear to me what the keys were for.
Show an example of using multiple keys. I had to discover that a comma is required.
{% cache 60*5, project_id | string(), release_id | string() if release_id else '' %}
best regards,
@app.route('/cache/')
@cache.memoize(timeout=0)
def test_cache():
    print 'cache'
    import random
    return "%s" % random.randint(0, 9)

@app.route('/delete-mem/')
def clear_one():
    cache.delete_memoized(test_cache)
    return 'clear one'
The cache mode is simple. The problem is that after I delete the cache with clear_one(), every time I run test_cache() (i.e. every page reload) it gives a new number.
I'd like to use Flask-Caching in my Flask-RESTful app. I wanted to take advantage of the make_cache_key attribute that gets added to the decorated method, but could not figure out how to make it work. Any ideas?
from flask import Flask, url_for
from flask_restful import Api, Resource
from flask_caching import Cache

app = Flask(__name__)
api = Api(app)
cache = Cache(app, config={'CACHE_TYPE': 'simple'})

@api.resource('/whatever/')
class Foo(Resource):
    @cache.cached()
    def get(self, param):
        return expensive_db_operation()

    def post(self):
        ## Before I update the DB, I'd like to invalidate the result of get().
        ## I ended up with this ugly thing... is there a better way?
        cache.delete('view/' + url_for('foo'))
        update_db_here()
        return something_useful()
The uwsgi backend introduced a dependency on Werkzeug 0.12+.
Without this requirement being specified, deployments break in existing environments because nothing forces the Werkzeug dependency to be upgraded.
Hi! What would be the best way to approach using Flask-Caching with Redis Sentinel? It doesn't look like werkzeug's RedisCache supports Sentinel.
I'm willing to work on a PR for the support.
In my application views are implemented by subclassing View, e.g.:
class SomeView(View):
    def dispatch_request(self, *args, **kwargs):
        # impl
Adding the decorator on dispatch_request does not work; it would be nice if it could be supported.
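Flask's class-based views already have a hook for this: everything listed in View.decorators is applied to the plain view function that as_view() generates, which is the level @cache.cached expects to sit at. A toy re-implementation of that mechanism, for illustration only:

```python
# Stand-in "cached" decorator that just counts calls, so the mechanism
# is observable without a real cache backend.
def cached(fn):
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

class View:
    decorators = []

    @classmethod
    def as_view(cls):
        def view(*args, **kwargs):
            return cls().dispatch_request(*args, **kwargs)
        # This loop is the hook Flask offers: decorators wrap the view
        # function, not the bound dispatch_request method.
        for decorator in cls.decorators:
            view = decorator(view)
        return view

class SomeView(View):
    decorators = [cached]

    def dispatch_request(self, name):
        return "hello %s" % name

view = SomeView.as_view()
view("world")
```

With real Flask this becomes `class SomeView(View): decorators = [cache.cached(timeout=60)]`, with dispatch_request left untouched.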
It seems that the current release is missing the backends internal module, which makes the code break when installing from pip (installing from source is fine).
For example:
[2016-12-09 11:05:41,383] ERROR in app: Failed to initialize entry point: inspire_cache = inspirehep.modules.cache.ext:INSPIRECache
Traceback (most recent call last):
File "/virtualenv/bin/inspirehep", line 11, in <module>
load_entry_point('Inspirehep', 'console_scripts', 'inspirehep')()
File "/virtualenv/lib/python2.7/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/virtualenv/lib/python2.7/site-packages/flask/cli.py", line 345, in main
return AppGroup.main(self, *args, **kwargs)
File "/virtualenv/lib/python2.7/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/virtualenv/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/virtualenv/lib/python2.7/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/virtualenv/lib/python2.7/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/virtualenv/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/virtualenv/lib/python2.7/site-packages/flask/cli.py", line 228, in decorator
with __ctx.ensure_object(ScriptInfo).load_app().app_context():
File "/virtualenv/lib/python2.7/site-packages/flask/cli.py", line 201, in load_app
rv = self.create_app(self)
File "/virtualenv/lib/python2.7/site-packages/invenio_base/app.py", line 159, in create_cli_app
app = create_app(debug=get_debug_flag())
File "/virtualenv/lib/python2.7/site-packages/invenio_base/app.py", line 120, in _create_app
modules=extensions,
File "/virtualenv/lib/python2.7/site-packages/invenio_base/app.py", line 179, in app_loader
modules=modules)
File "/virtualenv/lib/python2.7/site-packages/invenio_base/app.py", line 230, in _loader
init_func(ep.load())
File "/virtualenv/lib/python2.7/site-packages/invenio_base/app.py", line 178, in <lambda>
_loader(app, lambda ext: ext(app), entry_points=entry_points,
File "/code/inspirehep/modules/cache/ext.py", line 37, in __init__
self.init_app(app)
File "/code/inspirehep/modules/cache/ext.py", line 41, in init_app
self.cache = Cache(app)
File "/virtualenv/lib/python2.7/site-packages/flask_caching/__init__.py", line 150, in __init__
self.init_app(app, config)
File "/virtualenv/lib/python2.7/site-packages/flask_caching/__init__.py", line 191, in init_app
self._set_cache(app, config)
File "/virtualenv/lib/python2.7/site-packages/flask_caching/__init__.py", line 196, in _set_cache
from . import backends
ImportError: cannot import name backends
Cheers!
Not an issue, obviously.
I have been concerned about Flask-Cache for a while and its complete lack of updates. Has the original author been in communication at all? Can the repo be shared/transferred so that we don't need people to update dependencies?
Either way I'll be attempting to use your fork; thanks for your time.
I have a need to configure a function at the app level which would transform keys passed in to get/set calls on the cache prior to them being sent to the backend. Specifically, this would allow us to md5 hash keys prior to them getting to memcached to avoid hitting the 250-character limit on keys there while keeping our application-facing keys readable and sufficiently verbose.
Django has a hook similar to this called 'KEY_FUNCTION': https://docs.djangoproject.com/en/1.8/ref/settings/#std:setting-CACHES-KEY_FUNCTION so this would also aid in cache sharing between Django and Flask apps (something we would also like to do).
Do others think this would be useful? The key prefix could also be provided to the function similarly to Django's implementation.
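The hashing half of this is small enough to sketch; the function below mirrors the shape of Django's KEY_FUNCTION hook (its name and the prefix are invented for the example):

```python
import hashlib

def md5_key_function(key, key_prefix=""):
    # The caller-facing key stays readable; the backend sees a
    # fixed-length digest safely under memcached's 250-character limit.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return "%s%s" % (key_prefix, digest)

long_key = "user:profile:" + "x" * 500   # would be rejected by memcached
hashed = md5_key_function(long_key, key_prefix="myapp:")
```

The open question is where flask-caching would call such a function: ideally once, in the code path every get/set/delete shares.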
I'm using flask-caching to cache the results of an API endpoint. I'd like to decide whether to cache depending on the result, e.g. don't cache the response if it is something like {"error": "message"}.
What's the best way to do this? Should I override a method? If so, which method?
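If my reading of the docs is right, recent flask-caching versions accept a response_filter argument on cached: return False from it and that response is not stored. The filter itself is plain Python:

```python
def cacheable(response):
    # Don't cache error payloads like {"error": "message"}.
    if isinstance(response, dict) and "error" in response:
        return False
    return True

# Hypothetical usage (requires a Flask app and cache object):
# @app.route("/api/things")
# @cache.cached(timeout=60, response_filter=cacheable)
# def things():
#     return compute_things()
```

The older `unless` parameter won't help here, since it is evaluated before the view runs and never sees the response.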
One of the things that changed in Python 3.6 is consistent, insertion-preserving ordering for dicts. It may be possible to leverage this to skip having to sort query string parameters with sorted() in Python 3.6.
I think this may be mostly addressed by the fact that sorted is only used when query_string=True, but then perhaps it should be called legacy_cache_key_fix or something?
Just brainstorming, something I thought of... I'll try to come back and elaborate/update the issue.
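One caveat worth noting: insertion ordering in 3.6 only makes a single dict stable, but two call sites can pass the same parameters in different orders, so sorting is still what makes the cache key stable across calls:

```python
def stable_key(**kwargs):
    # Sorting by name makes the key independent of call-site order.
    return tuple(sorted(kwargs.items()))

def insertion_key(**kwargs):
    # Relies on 3.6+ insertion order: differs between equivalent calls.
    return tuple(kwargs.items())

assert stable_key(a=1, b=2) == stable_key(b=2, a=1)
assert insertion_key(a=1, b=2) != insertion_key(b=2, a=1)
```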
Hi guys,
Sorry for opening a question issue here, but I couldn't find any user group for the library (I hope it's OK).
I'm starting to experiment with Flask-Caching for an application I'm building, and I'd like to know if the library supports multi-level caching. For example, can I have something like this?:
memory_cache = Cache(config=config_for_memory)
persistent_cache = Cache(config=config_for_persistent)
memory_cache.init_app(app)
persistent_cache.init_app(app)
Is the above expected to work? I need this because this application will have to deal with very fast access to in-memory cache but also a (not so very) fast persisted cache in certain cases.
Thanks!
Diogo
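My understanding (not verified against every backend) is that two independent Cache objects on one app should work, since each init_app call only reads config and sets up its own backend. A read-through helper then checks the fast tier first; plain dicts stand in for the two backends in this sketch:

```python
memory_tier = {}                                   # fast, in-memory tier
persistent_tier = {"report": "expensive-result"}   # e.g. filesystem/redis tier

def tiered_get(key):
    # Fast path: in-memory tier.
    if key in memory_tier:
        return memory_tier[key]
    # Slow path: persistent tier, promoting the value on a hit so the
    # next read is served from memory.
    value = persistent_tier.get(key)
    if value is not None:
        memory_tier[key] = value
    return value
```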
I get some odd behaviour when trying to use Redis. Take the following example:
from flask import Flask
from flask_caching import Cache
import time
app = Flask(__name__)
app.config['CACHE_TYPE'] = 'RedisCache'
app.config['CACHE_REDIS_HOST'] = 'localhost'
app.config['CACHE_REDIS_PORT'] = '6379'
cache = Cache()
cache.init_app(app)
@app.route("/")
@cache.cached(timeout=5)
def index():
return str(time.time())
app.run()
Accessing / throws:
Traceback (most recent call last):
File "/Users/jensgeyti/git/data-web/env/lib/python3.5/site-packages/flask_caching/__init__.py", line 340, in decorated_function
rv = self.cache.get(cache_key)
File "/Users/jensgeyti/git/data-web/env/lib/python3.5/site-packages/werkzeug/contrib/cache.py", line 598, in get
return self.load_object(self._client.get(self.key_prefix + key))
AttributeError: 'Flask' object has no attribute 'get'
It seems Cache._set_cache calls cache_obj(app, config, cache_args, cache_options), but cache_obj is werkzeug.contrib.cache.RedisCache, whose constructor expects (self, host='localhost', port=6379, password=None, db=0, default_timeout=300, key_prefix=None, **kwargs).
In essence, it seems Flask-Caching passes an app where it should pass a hostname. Any idea why that's happening?
The simple cache works fine.
I am trying to enable Flask caching. We are currently using redis-sentinel to handle failover and to increase concurrency performance. However, this way of caching is proving more difficult than imagined. Perhaps you can help me.
from flask_caching import Cache
from redis import StrictRedis
from werkzeug.contrib.cache import BaseCache
from redis.sentinel import Sentinel
from app import Config
class SentinelCache(BaseCache):
    def __init__(self, key_prefix=None, **kwargs):
        BaseCache.__init__(self, default_timeout=300)
        host = str(kwargs['host'])
        port = int(kwargs['port'])
        # print(host + ":" + port)
        print(Sentinel([(host, port)]))
        # Setting self._client to be a sentinel
        self._client = Sentinel([(host, port)])

def sentinel(app, config, args, kwargs):
    # Get master
    sentinel = Sentinel(list(map(lambda x: (x.split(':')[0], int(x.split(':')[1])), Config.get('REDIS_SENTINEL_SERVERS').split(','))))
    master = sentinel.discover_master('rdwhizz')
    config['SENTINEL_SERVERS'] = master
    kwargs.update(dict(
        host=master[0],
        port=master[1]
    ))
    print(kwargs)
    return SentinelCache(*args, **kwargs)

cache = Cache(config={'CACHE_TYPE': 'app.cache.sentinel'})
Here is what I have currently. We were able to retrieve the master host and port, but what we are unable to do is subclass BaseCache to enable Sentinel caching.
Thank you!
When calling cache.add or cache.set, the result is currently lost. This makes it necessary to call cache.cache.add instead to get the result of the underlying cache implementation (and to be able to implement a cache-based locking mechanism using cache.add).
Since those functions are bare wrappers around a caching implementation (Werkzeug most of the time), I expected them to also return the results.
Is there a way to store compressed data in the cache backend?
I use Memcached with pylibmc and cache a product list page where much of the content is repeated: it can be 1 MB as plain text but something like 60 KB gzipped, so storing it uncompressed wastes Memcached storage. It can even hit Memcached's default max item size (1 MB); with python-memcache it seemed to me that flask-caching simply didn't work :) pylibmc at least throws an exception for the max size problem.
Can Flask itself help somehow? Maybe by returning a gzipped result of the view function? But that wouldn't be a solution for memoize() either.
Or is it possible and reasonable to add compression at https://github.com/sh4nks/flask-caching/blob/master/flask_caching/__init__.py#L351 and decompression at https://github.com/sh4nks/flask-caching/blob/master/flask_caching/__init__.py#L340, controlled by configuration?
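Nothing built in that I know of, but a serializer pair like this sketch could wrap the set/get paths linked above; zlib on repetitive HTML gives roughly the ratios described:

```python
import pickle
import zlib

def dumps_compressed(obj):
    # Serialize, then deflate, before the value reaches the client,
    # keeping large repetitive pages under memcached's 1 MB item limit.
    return zlib.compress(pickle.dumps(obj))

def loads_compressed(blob):
    return pickle.loads(zlib.decompress(blob))

page = "<li>repeated product markup</li>" * 20000   # highly repetitive HTML
blob = dumps_compressed(page)
```

As usual with pickle, this is only safe when the cache backend is trusted; for shared backends a JSON + zlib pair would be the more cautious choice.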
There are several cases when you may want to exclude some parameters from cache key generation. For example:
@cache.memoize(timeout=1000)
def search(title, softIds):
    # some search stuff

search("Title 1", {1, 2, 3})
search("Title 1", {3, 2, 1})
In this case we want the second search to be served from the cache, but it isn't. I propose this solution:
@cache.memoize(timeout=1000, exclude_params=["softIds"])
def search(title, softIds):
    # some search stuff
exclude_params: an array of parameter names which should be excluded from key generation. The key will then be based only on the "title" parameter, and the second search will be served from the cache.
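The key-generation side of this proposal can be prototyped with inspect.signature; the function name and key layout below are invented for the sketch, not flask-caching's actual format:

```python
import inspect

def memoize_key(fn, exclude_params=(), *args, **kwargs):
    # Bind the call's arguments to parameter names, drop the excluded
    # ones, then build a deterministic key from what remains.
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()
    kept = tuple(
        (name, repr(value))
        for name, value in bound.arguments.items()
        if name not in exclude_params
    )
    return "%s|%r" % (fn.__qualname__, kept)

def search(title, softIds):
    pass
```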
First, thanks for continuing the development of Flask-Cache. It's a great piece of software.
So, I'm using the memoize decorator with a 60-second timeout on a method of a class. Data returned by this method may differ between timeouts (calls). This class has a fixed __repr__ (i.e. always the same string).
I noticed that a new cache file is created each time the timeout is reached: the first call creates a file with the cached data, then on the next call after the timeout the data is fetched again and another file is created (the old ones never get deleted). This could lead to a folder with thousands of useless files.
Why this behavior? Aren't cache files supposed to be reused?
After I read some big JSON file from somewhere, I want to make it available internally for my app to work on, or to return it from the cache when a user queries it. I can use @cache.memoize() on the get function for both the user and my app, but there is a one-time hit of fetching the JSON from my DB.
As I am populating that data myself, is it possible to populate the cache explicitly so my app just uses it?
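Explicit population is essentially cache.set under the key you later read back (memoized functions are trickier, since their keys embed the function and its arguments). A dict-backed sketch of the warm-then-read pattern, with the real calls noted in comments:

```python
backend = {}   # stand-in for the cache; real code uses cache.set/cache.get

def warm(key, loader):
    # Populate explicitly, e.g. right after reading the big JSON file.
    value = loader()
    backend[key] = value             # real code: cache.set(key, value, timeout=...)
    return value

def get_or_load(key, loader):
    value = backend.get(key)         # real code: cache.get(key)
    if value is None:
        value = warm(key, loader)    # one-time hit only if nobody warmed it
    return value

loads = []
warm("big.json", lambda: loads.append(1) or {"rows": 1000})
result = get_or_load("big.json", lambda: loads.append(1) or {"rows": 1000})
```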
We are going to remove werkzeug.contrib.cache from Werkzeug. Flask-Caching depends on it; can you copy that part into your repo and maintain it in Flask-Caching?
It took me hours to fix this. Every time it gave me this error: raise RuntimeError('no redis module found').
I thought it was a server problem, so I searched and tried to fix it, which took a lot of time.
Finally I found that I was wrong; the mistake was not on the server side.
I had to install redis first:
pip install redis
I have tested that on:
Python 2.7
Flask-Caching==1.3.3
Below you can find a new test that fails for me:
def test_memoize_when_using_args_unpacking(app, cache):
    with app.test_request_context():
        class Mock(object):
            @classmethod
            @cache.memoize(5)
            def big_foo(cls, *args):
                return sum(args) + random.randrange(0, 100000)

        result = Mock.big_foo(5, 2)
        result2 = Mock.big_foo(5, 3)

        time.sleep(1)

        assert Mock.big_foo(5, 2) == result
        assert Mock.big_foo(5, 2) == result
        assert Mock.big_foo(5, 3) != result
        assert Mock.big_foo(5, 3) == result2

        cache.delete_memoized(Mock.big_foo)

        assert Mock.big_foo(5, 2) != result
        assert Mock.big_foo(5, 3) != result2
Looking through the CI logs, I noticed that the Redis, Memcached, and UWSGICache tests are skipped.
This is dangerous, as it means breaking changes can find their way into master unless manual verification is done beforehand, which is easy to miss.
Any ideas why the arrangement of the arguments in functions matters in Python > 3.3 (I tested it with Python 3.6 but the tests for Python 3.5 are also failing, so it probably can also appear in Python 3.4)?
For example, to fix the kwargs tests I had to re-arrange the arguments in the unittest method test_10a_arg_kwarg_memoize_var_keyword (L354-L368) from
assert f(1, 2, d=5, e=8) == f(1, 2, e=8, d=5)
assert f(1, b=2, c=3, d=5, e=8) == f(1, 2, e=8, d=5, b=2, c=3)
to
assert f(1, 2, d=5, e=8) == f(1, 2, d=5, e=8)
assert f(1, b=2, c=3, d=5, e=8) == f(1, b=2, c=3, d=5, e=8)
Hey, thanks for the fork. I'm looking into switching to this if it will continue to be maintained.
Is there any way we can include an option to pass a datetime for cache expiration as well? For example, PHP's Memcached implementation allows sending either a datetime or an int.
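Until such an option exists, an absolute expiry can be converted to the relative timeout the current API takes; the helper name here is invented:

```python
from datetime import datetime, timedelta

def timeout_from(expires_at, now=None):
    # Convert an absolute expiry datetime into the relative seconds that
    # cache.set(key, value, timeout=...) already understands.
    now = now or datetime.utcnow()
    return max(0, int((expires_at - now).total_seconds()))

fixed_now = datetime(2024, 1, 1, 12, 0, 0)
seconds = timeout_from(fixed_now + timedelta(minutes=5), now=fixed_now)
```

A past datetime clamps to 0, which (depending on the backend) may mean "no expiry", so callers might want to treat that case specially.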
Hi,
I'm working on an app where I'd like to cache the result of a method, no matter which instance of the object it is.
For example,
class Thing(object):
    @cache.memoize()
    def method(self, a, b):
        ...do some work...

obj1 = Thing()
obj2 = Thing()
val1 = obj1.method(1, 2)
val2 = obj2.method(1, 2)  # cached!
It seems like flask-caching keeps a "version" of the function to be cached (in _memoize_version, which gets it from function_namespace). Currently, when it has an instance of an object, the instance's token defaults to repr(instance), which is something like <abc.object at 0xabcdef123456>. I'd like to add a function (called instance_token?) that takes an instance and by default returns its repr, but could be overridden to return something based on the class module/name.
So my extension could look like:
class MyCache(Cache):
    def instance_token(self, inst):
        cls = inst.__class__
        mod = cls.__module__
        name = cls.__name__
        return '{}.{}'.format(mod, name)
Does that make sense? I certainly don't have the depth of experience of actually writing this library, so if there's some facet of python (multi-threading?) that makes it not feasible, I can certainly work around it. If it does seem feasible, I'll take a run at a PR, if it would be helpful.
Are there any breaking changes between Flask-Caching 1.0.1 and the last version of Flask-Cache (0.13)? Other than having to change from flask.ext.cache import Cache to from flask_caching import Cache, of course. Looking through the changelog I don't see anything that looks like a breaking change, but I wanted to verify. Perhaps it would be worthwhile to mention this in the README.md?
I think it would be a good idea to put a lock inside cached/memoize.
Caching is usually used for heavy/expensive functionality, which means that when the app has just started and multiple clients call a "cached" function, there is no cached value ready yet, so they all start the same heavy computation simultaneously.
A basic lock can prevent this: after the lock is released, all pending threads get the cached value.
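The per-process version of this idea is small; a sketch with one threading.Lock per key (distributed deployments would need a cache-side lock instead, e.g. via an atomic add):

```python
import threading

locks = {}
store = {}
registry_lock = threading.Lock()   # protects the locks dict itself

def cached_with_lock(key, compute):
    # One lock per key: the first caller computes; the rest block on the
    # lock and then read the freshly cached value instead of recomputing.
    with registry_lock:
        lock = locks.setdefault(key, threading.Lock())
    with lock:
        if key not in store:
            store[key] = compute()
        return store[key]

calls = []

def expensive():
    calls.append(1)
    return "value"

threads = [threading.Thread(target=cached_with_lock, args=("k", expensive))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# expensive() ran once; the other seven threads got the cached value
```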
If one uses CACHE_ARGS or CACHE_OPTIONS to try to configure a memcached backend client, you get a traceback like this:
File "/usr/lib64/python2.7/site-packages/flask_caching/__init__.py", line 193, in init_app
self._set_cache(app, config)
File "/usr/lib64/python2.7/site-packages/flask_caching/__init__.py", line 219, in _set_cache
cache_options)
File "/usr/lib64/python2.7/site-packages/flask_caching/backends/backends.py", line 81, in memcached
return MemcachedCache(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'behaviors'
I was trying to pass the "behaviors" argument mentioned in the pylibmc docs (http://sendapatch.se/projects/pylibmc/).
It seems that flask-caching faithfully adds the options I've specified, so that's good: the library is doing what it should. The problem is that the underlying werkzeug cache library does not accept any extra arguments or keyword arguments; see around line 488 in https://github.com/pallets/werkzeug/blob/master/werkzeug/contrib/cache.py
From my quick perusal, the backends in cache.py are not consistent in their __init__ signatures. That's probably reasonable given how different the backends are. Only the Redis backend accepts **kwargs and passes them on to the underlying client; none accept *args.
At the very minimum, CACHE_OPTIONS shouldn't be processed and passed for memcached backends. It would be much better if werkzeug upstream actually accepted **kwargs and passed them on for memcached, though.
workaround
For pylibmc, I think I can:
cache = flask_caching.Cache("localhost")
cache.cache._client.behaviors = {"tcp_nodelay" : True}
I am in the process of migrating my server to a FIPS-compliant OS which doesn't have md5 available, which results in a segmentation fault when using the memoize decorator.
I'm wondering if it's possible to override the make_cache_key method, or to use some other way to create the memoize cache key with sha256 instead of md5.
https://github.com/sh4nks/flask-caching/blob/master/flask_caching/__init__.py#L513
Thanks!
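If I remember right, newer flask-caching releases added a hash_method argument to memoize for exactly this case (worth checking your installed version). The underlying idea is just a pluggable hash over the function identity and argument repr; the key layout below is a simplification, not the library's exact format:

```python
import hashlib

def memoize_key(function_name, args_repr, hash_method=hashlib.sha256):
    # Same shape as memoize's key (function identity + argument repr fed
    # to a hash), but with the hash pluggable so FIPS hosts avoid md5.
    payload = ("%s%s" % (function_name, args_repr)).encode("utf-8")
    return hash_method(payload).hexdigest()

key = memoize_key("mymodule.search", "('Title 1',){}")
```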
This makes it easier to work with a specific version of the extension :)
Hi,
I'm using a Flask app with uWSGI and nginx, so I configured 'CACHE_TYPE': 'filesystem', 'CACHE_DIR': tempfile.gettempdir().
I have a job that updates the cache sometimes:
cache.delete_memoized(f)
It works fine, but the problem is that when I check the temp directory, the old files still exist.
14/03/2018 03:57 PM 5,452,969 1c90fe01368dd1f4a1e99707ac31e684
14/03/2018 03:57 PM 12,242,036 ba6d5eebaeb05afe29e3bde20f396fea
14/03/2018 03:57 PM 10,772,996 f611d864d2a4206fa6ac2d49e33cc2bd
14/03/2018 04:02 PM 5,493,106 c05247ee8087ffc1f43239a1afc3d0d9
14/03/2018 04:02 PM 12,286,872 35b28b3cd29a3afaf90ea86e66cd40e8
14/03/2018 04:03 PM 10,829,939 a4c14575c6412333e59cf0f700354943
14/03/2018 04:04 PM 22 00881cc24cbc428e8dfd137afb40ba2c
14/03/2018 04:05 PM 5,534,646 fd45b7456ce7cd62f7828d023b302083
14/03/2018 04:05 PM 12,339,879 8809360ecbe1ead558691cb13ccabfd8
14/03/2018 04:05 PM 10,858,896 f43b0ecfbb733c4c0af02e6f036b15af
14/03/2018 04:05 PM 8 2029240f6d1128be89ddc32729463129
Is there any way to remove them through the library? If not, is there a way to add a prefix so I can identify which files to delete?
I have requests that I want to cache that have array parameters (so they don't fit well in GET query strings; I have to put them in the POST body). However, either there is no way to get flask-caching to include the POST body as part of the cache key, or it's really not clear from the docs how to do it.
It seems like an easy solution would be for the decorator to accept a function argument for make_cache_key.
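In the meantime, key_prefix already accepts a callable (evaluated per request, if I read the docs right), which can fold the POST body into the key. The helper below is hypothetical:

```python
import hashlib

def body_cache_key(path, body):
    # Fold a digest of the request body into the key so POSTs with
    # different payloads get different cache entries.
    digest = hashlib.sha1(body).hexdigest()
    return "view/%s/%s" % (path, digest)

# Hypothetical usage inside a Flask app:
# @cache.cached(timeout=60, key_prefix=lambda: body_cache_key(
#     request.path, request.get_data()))
```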
It should not be difficult to make a wrapper to load the UWSGICache class ;)
pallets/werkzeug@b082922