jazzband / django-redis
Full featured redis cache backend for Django.
License: Other
It would be nice to have access to Redis's SETNX functionality, either through an additional method or through an additional optional parameter to the set method.
I'm ready to implement it either way if there is a chance of it being merged. What do you think?
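A sketch of what the optional-parameter flavor could look like, built on redis-py's SET-with-NX support; the helper name and shape are a proposal, not an existing django-redis API:

```python
def set_nx(client, key, value, timeout=None):
    """Proposed "set if not exists" helper (name and shape are mine).

    Maps to the Redis SET command with the NX flag, which redis-py
    exposes as set(..., nx=True). Returns True if the key was created,
    False if it already existed.
    """
    return bool(client.set(key, value, nx=True, ex=timeout))
```

The NX flag makes the check-and-set a single atomic server-side operation, which is the point of SETNX over a get-then-set in application code.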
If I perform an incr(key) where the key is not already set, incr() should set the key to 1. However, the default behavior of the cache returned by Django's get_cache function does not allow you to do this. Is there a way to drop down to the redis-py client to perform this atomic incr?
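One way to drop down to the raw client, sketched below; the helper name is mine, and `client` would come from django-redis's get_redis_connection in a real project:

```python
def atomic_incr(client, key, delta=1):
    """Atomic increment via the raw redis-py client (helper name is mine).

    Redis INCRBY initialises a missing key to 0 before incrementing,
    unlike cache.incr, which raises ValueError on a missing key.
    """
    return client.incrby(key, delta)
```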
My application uses a couple of Redis instances for different purposes, and I would like to ignore exceptions for only one of them. For now, DJANGO_REDIS_IGNORE_EXCEPTIONS is a global setting. Something like this:
CACHES = {
    'default': {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": config['redis_url_first'],
        "OPTIONS": {
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
            "DJANGO_REDIS_IGNORE_EXCEPTIONS": True
        }
    },
    'other': {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": config['redis_url_second'],
        "OPTIONS": {
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
            "DJANGO_REDIS_IGNORE_EXCEPTIONS": False
        }
    }
}
connect() takes exactly 1 argument (2 given)
With the latest code, I am getting an ImportError for the stats app. It seems you removed ConnectionPoolHandler from utils, but it is still imported and used in stats/views.
Here is the traceback:
Traceback:
File "/home/vagrant/project/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
response = middleware_method(request)
__import__(name)
url(r'^redis/status/', include('redis_cache.stats.urls', namespace='redis_cache')),
urlconf_module = import_module(urlconf_module)
__import__(name)
Exception Type: ImportError at /
Exception Value: cannot import name ConnectionPoolHandler
I use a custom key function, which I can specify via Django's KEY_FUNCTION setting. But django-redis performs an additional key extraction:
return [k.split(":", 2)[2] for k in encoding_map]
https://github.com/niwibe/django-redis/blob/master/redis_cache/client/default.py#L450
What if I don't use the standard ":" separator? Key extraction should be customizable.
I'm writing a patch to https://github.com/ui/django-rq to allow it to tailgate its configuration on the django-redis cache setup. I want this patch to be compatible with both django-redis and django-redis-cache, but the method of getting the Redis connection differs between the two. For testing purposes, I want to test the two packages separately, but the fact that they both import as redis_cache makes this a bit difficult. I'm basing my import-time detection of which package is installed on the existence of get_redis_connection in the redis_cache module, but I wanted to check if this is the best way to do this check.
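For what it's worth, a minimal sketch of the attribute-probing approach described above; the module name is parameterized so the logic is testable, and whether get_redis_connection is the most stable discriminator is exactly the open question:

```python
import importlib

def detect_backend(module_name="redis_cache"):
    """Distinguish django-redis from django-redis-cache at import time.

    Probes for get_redis_connection, which django-redis exports at the
    package top level and django-redis-cache does not (an assumption
    worth re-verifying against the installed versions).
    """
    module = importlib.import_module(module_name)
    if hasattr(module, "get_redis_connection"):
        return "django-redis"
    return "django-redis-cache"
```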
This is for the 3.0 release. All code should be on the devel branch.
The documentation says:
You can optionally set a timeout for redis operations by specifying an integer or float value for SOCKET_TIMEOUT in your CACHES entry
But in the file "pool.py":
kwargs['socket_timeout'] = int(self.options['SOCKET_TIMEOUT'])
Because of the int() cast, it's impossible to set a timeout of less than one second. Is there a reason for this?
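A sketch of the suggested fix (function name is mine): parse the option with float() instead of int(), since redis-py ultimately hands socket_timeout to socket.settimeout, which accepts floats:

```python
def parse_socket_timeout(options):
    """Parse SOCKET_TIMEOUT without truncating sub-second values.

    float() keeps 0.2 as 0.2, where the current int() cast truncates
    it to 0 (no timeout at all in practice).
    """
    raw = options.get("SOCKET_TIMEOUT")
    if raw is None:
        return None
    return float(raw)
```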
Hello @niwibe,
IMHO, I've found strange django-redis behavior when caching a simple JSON string:
from django.core.cache import cache
cache.set("123", '{"test":"test"}')
Redis monitor:
~$ redis-cli monitor
OK
1394490905.897550 [1 127.0.0.1:49782] "SETEX" "123" "300" "\x80\x02U\x0f{\"test\":\"test\"}q\x01."
Why does it pickle a simple JSON string? The pickled version takes more memory.
Please explain: is this the desired behavior?
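For context, the pickling overhead is easy to measure without a server; the raw-client write in the final comment is a hedged workaround, not an official django-redis API:

```python
import pickle

payload = '{"test":"test"}'

# The Django cache API pickles every value so that arbitrary Python
# objects round-trip; for a string that is already JSON this only adds
# framing bytes, which is the overhead visible in the monitor output.
pickled = pickle.dumps(payload)
assert len(pickled) > len(payload)

# If the overhead matters, one workaround is writing the raw string
# through the underlying redis-py client instead of the cache API
# (sketch only; `client` would come from get_redis_connection):
#   client.setex("123", 300, payload)
```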
I have followed the instructions, installed django-redis, and set up the cache using the redis_cache backend (confirmed that it is working), but it breaks the session tests for cache backends. It appears that sessions are not expiring appropriately in the test_actual_expiry test. Using Django 1.5.4:
Traceback (most recent call last):
File "env/lib/python2.7/site-packages/django/contrib/sessions/tests.py", line 291, in test_actual_expiry
self.assertNotIn('foo', new_session)
AssertionError: 'foo' unexpectedly found in <django.contrib.sessions.backends.cached_db.SessionStore object at 0x10b919e90>
Traceback (most recent call last):
File "env/lib/python2.7/site-packages/django/contrib/sessions/tests.py", line 291, in test_actual_expiry
self.assertNotIn('foo', new_session)
AssertionError: 'foo' unexpectedly found in <django.contrib.sessions.backends.cached_db.SessionStore object at 0x10b92edd0>
Traceback (most recent call last):
File "env/lib/python2.7/site-packages/django/contrib/sessions/tests.py", line 291, in test_actual_expiry
self.assertNotIn('foo', new_session)
AssertionError: 'foo' unexpectedly found in <django.contrib.sessions.backends.cache.SessionStore object at 0x10b931750>
Hello everyone,
I'm starting to use django-redis, but when the Redis server is down the website crashes.
I believe a better approach would be to let you specify another backend to fall back on, even the dummy backend, plus a callback, so the whole site doesn't break. The data is usually also in your database (MySQL or similar), so if you can still reach MySQL, why take the whole site down?
For example: you specify in django-redis the name of a second backend, looked up in settings.CACHES, plus a function. If django-redis fails to connect to the server, the other backend is used and the function is called; the function could, for instance, notify the admin that the Redis server is down, or even run a command to bring it back up.
Cya!
The PyPI name is django-redis, but the Python package is redis_cache, and there is another PyPI package called django-redis-cache.
I couldn't find any info in the official Django docs on how cache backends should treat negative or zero timeout values, but according to django/contrib/sessions/tests.py (SessionTestsMixin.test_actual_expiry), Django now expects negative values to remove items from the cache, at least starting with the 1.6.x/stable branch.
django-redis currently just sets the item without a timeout if the timeout is less than or equal to zero. I propose supporting negative timeouts, since Redis's EXPIRE supports them. SETEX doesn't seem to support them, but that can be solved with pipelining. (I also intend to ask @antirez whether that's a bug or a by-design decision.)
I'll attach a pull request shortly.
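A sketch of the proposed semantics (function and parameter names are mine, and a plain DELETE is used here in place of the SET+EXPIRE pipeline): a non-positive timeout removes the key instead of storing it forever:

```python
def set_with_timeout(client, key, value, timeout):
    """Proposed behavior: timeout <= 0 removes the key.

    Redis EXPIRE accepts negative TTLs and deletes the key; SETEX does
    not, so a plain DELETE (or a SET+EXPIRE pipeline) covers that case.
    """
    if timeout is not None and timeout <= 0:
        client.delete(key)
    elif timeout is None:
        client.set(key, value)
    else:
        client.setex(key, timeout, value)
```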
It would be great to implement a fake client for testing purposes that supports the majority of API calls. We could use django.core.cache.backends.locmem.LocMemCache as a basis.
Maybe the default Redis connection pool settings should be lowered a bit. I was the one who set them, and now I am running into problems with them :P
I am running a uWSGI web server with the following settings on an almost-no-traffic Django site (default Ubuntu 12.04 installation, fewer than 100 requests per day):
processes = 4
threads = 2
I checked with ps and there are indeed 4 uWSGI processes.
Somehow Redis connections keep piling up, clogging the server:
redis-ser 31546 redis 994u IPv4 332011 0t0 TCP localhost:6379->localhost:47332 (ESTABLISHED)
And then:
lsof|grep -i redis|wc -l
1020 <--- open Redis TCP/IP connections
As far as I can tell, the maximum number of open connections should be number of processes * max connections per pool?
Apparently something is not closing connections, or uWSGI is somehow misusing the pool.
Or am I missing something?
My application already has a Redis connection pool configured. It would be nice if there were a documented method for sharing that connection pool with django-redis. Right now I'm overriding the connect method of the DefaultCache… but that seems less than ideal.
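A sketch of what a documented hook could look like; the class below is a stand-in rather than the real django-redis client, and the constructor shape is an assumption made so the pattern stands alone:

```python
class SharedPoolClient:
    """Sketch of a cache client that reuses one process-wide pool.

    In a real project this would subclass the django-redis client class,
    `shared_pool` would be a redis.ConnectionPool built by the
    application, and `connection_factory` would be redis.Redis; all
    three are left abstract here.
    """

    def __init__(self, shared_pool, connection_factory):
        self._pool = shared_pool
        self._factory = connection_factory

    def connect(self, *args, **kwargs):
        # Hand every cache connection the pre-existing pool instead of
        # building a new pool from LOCATION/OPTIONS.
        return self._factory(connection_pool=self._pool)
```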
Hey, is it possible to configure django-redis so that it fails silently if the cache server is not running? I don't want the whole site down just because the cache is down. Perhaps it could just log an error.
We just upgraded from 3.3 to 3.6 and now we get a new warning every time we use django-redis. This would be fine if it made sense.
/home/<>/.virtualenvs/<>/local/lib/python2.7/site-packages/redis_cache/client/__init__.py:19: RuntimeWarning: sentinel client is unsuported with redis-py<2.9
RuntimeWarning)
We have redis-py 2.9.1 installed, which can be verified by running pip freeze | grep "redis==2", which gives:
redis==2.9.1
Having the warning in the __init__ file also means that the notice is visible whenever the module is imported, which is pretty much whenever anything involving django-redis is used... even if the sentinel client isn't.
I have the result of a query that is very expensive. It is cached in Redis for 15 minutes. Once the cache expires, the queries are obviously run and the cache is warmed again.
But at the point of expiration, the thundering herd problem can happen:
http://en.wikipedia.org/wiki/Thundering_herd_problem
Currently I'm using an algorithm that continues to serve the stale cache while a thread kicks in at the moment of expiration and updates the cache.
But it would be much better if this were a built-in feature of the caching backend, and it is in some backends: https://github.com/ericflo/django-newcache
Any plans for this feature?
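The stale-serving algorithm described above can be sketched on top of any Django cache API; the key suffixes and the lock scheme are assumptions, and it assumes cached values are never falsy:

```python
def get_with_herd_protection(cache, key, compute, ttl=900, grace=60):
    """Stale-while-revalidate sketch on top of any Django cache API.

    The value is stored with an extended hard TTL; a separate ":fresh"
    marker expires at the real TTL. When the marker lapses, the first
    caller to win the ":lock" (cache.add is atomic) recomputes, and
    everyone else keeps serving the stale value.
    """
    value = cache.get(key)
    if value is not None and cache.get(key + ":fresh"):
        return value            # still fresh
    if value is not None:
        if not cache.add(key + ":lock", 1, grace):
            return value        # someone else is refreshing; serve stale
    value = compute()           # cold cache or lock winner: recompute
    cache.set(key, value, ttl + grace)
    cache.set(key + ":fresh", 1, ttl)
    cache.delete(key + ":lock")
    return value
```

The single atomic add() is what prevents the herd: at most one caller per grace period pays for the expensive recomputation.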
When using multiple threads or greenlets, the default ConnectionPool may exceed its connection limit. redis.connection.BlockingConnectionPool easily solves this problem, but it can't be used without tricky monkey-patching.
It would be better if the ConnectionPool class and its arguments could be specified in settings.
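A sketch of the proposed setting; the option names CONNECTION_POOL_CLASS and CONNECTION_POOL_KWARGS are this proposal, not an existing django-redis setting:

```python
import importlib

def make_pool(options):
    """Resolve the pool class from a dotted path in the cache OPTIONS.

    Proposed usage (option names are assumptions):

        "OPTIONS": {
            "CONNECTION_POOL_CLASS": "redis.connection.BlockingConnectionPool",
            "CONNECTION_POOL_KWARGS": {"max_connections": 50, "timeout": 20},
        }
    """
    path = options.get("CONNECTION_POOL_CLASS", "redis.connection.ConnectionPool")
    module_path, cls_name = path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**options.get("CONNECTION_POOL_KWARGS", {}))
```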
Great project! It was really easy to install and has worked well so far. Is there a suggested value for MAX_ENTRIES in the Django cache options dictionary?
I think the default is 300, but it seems like you could reasonably make that a lot bigger with this project.
https://github.com/niwibe/django-redis/blob/master/redis_cache/cache.py#L290
The same code is executed twice, on lines 290 and 292.
When trying to work with a unix socket, I got an error from redis-py: "Error 22 connecting to unix socket: . Invalid argument.".
I debugged further and saw that the "path" parameter, which should contain the socket path, is empty.
I found the culprit in util.py:73 (in ConnectionPoolHandler.connection_pool).
Instead of:
kwargs['path'] = kwargs['unix_socket_path']
it should be:
params['path'] = kwargs['unix_socket_path']
Fixing it on my local machine fixed that error.
I'll also send a corresponding pull request.
I wrote a test like:

```python
def test_incr(self):
    def inc(cache):
        for i in range(100000):
            cache.incr("num")
    self.cache.set("num", 0)
    threads = [Thread(target=inc, args=(self.cache,)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    self.assertEqual(self.cache.get("num"), 1000000, "%d != 1000000" % (self.cache.get("num")))
```

and got a result like:
Traceback (most recent call last):
File "/home/lasi/devel/django-redis/tests/redis_backend_testapp_inc/tests.py", line 36, in test_incr
self.assertEqual(self.cache.get("num"), 1000000, "%d != 1000000" % (self.cache.get("num")))
AssertionError: 233816 != 1000000
I just noticed, by looking at master, that django-redis does not fully support Django 1.6. Previously, passing None explicitly would use the default timeout value; now it caches the value forever. This change was made to allow timeout=0 to expire the cache immediately. I'll most likely make a fork, because I need this badly.
I'm using redis_cache something like this:

from redis_cache import get_redis_connection
from rq import Queue

try:
    queue = Queue(connection=get_redis_connection('db'))
    queue.enqueue(some_action, some_data)
except Exception:
    do_something_else(some_data)
I was expecting a ConnectionError or some other exception when the remote server refused my connection, but the program just didn't respond at all.
I set the timeout in the CACHES settings, but that did not work either.
"db": {
"BACKEND": "redis_cache.cache.RedisCache",
"LOCATION": "127.0.0.1:6379:2",
"TIMEOUT": 120,
"OPTIONS": {
"CLIENT_CLASS": "redis_cache.client.DefaultClient",
}
},
Could someone give a clue about this?
It is not documented what you need to build the docs, or how. Cleaner instructions would encourage patches to the documentation, since submitters could preview their changes on their local computer.
For example, you need something like asciidoc, pygments, and niwi. The last is a Pygments theme, but I could not figure out where or how to install it, or whether it is needed at all.
I think it would be a good feature to implement a way to fallback to database cache instead of other Redis instance.
Is it possible to be done? I would like to add this feature if it fits the django-redis purposes.
I'm a first-time Redis user. I didn't realize that the example configuration in the README file uses hiredis as the parser class, and I was unaware that the hiredis parser is not included in the redis package. Maybe hiredis should be added to the install requirements, or at least mentioned in the README, to prevent confusion for first-time users.
The original django-redis-cache seems to be alive again now. But the main point of this fork, according to the README file, is to revive the project. What else?
If you run build-docs.sh against a clean checkout with asciidoc not installed, the script will clear all files in your repository, leaving only the static folder, and make a commit on top of that. You may lose your unfinished edits.
Maybe check that there are no unfinished changes before doing rm -rf *.
This can now be tested on the devel branch.
I have set up a master-slave solution, but I would like selective access to the master or slave raw client via the get_redis_connection command. Is this possible?
Regards
I've noticed two differences from the official memcached backends:
I find both features very useful and would like to know whether you have plans to integrate them, or whether you are interested in contributions around this feature.
Hi,
Getting this error from time to time when using django-redis:
redis_cache.hash_ring in get_node_pos
IndexError: list index out of range
Context:
'_hash': 'fff669c4a9b088e574b496b38144e59563d99ee1ab9c6e5d4952072757a77158'
'idx': 128
Possible fix (file: hash_ring.py, line 49):
Instead of:
idx = min(idx, (self.replicas * len(self.nodes))-1)
Use this instead (because the list is 0-indexed):
idx = min(idx, (self.replicas * len(self.nodes))) - 1
Not sure if this causes any other issues.
Full Sentinel support was not added until redis-py 2.9.0, yet your setup.py lists "redis>=2.7.0".
Hello, can anybody confirm that the TIMEOUT setting is respected by django-redis? This is my configuration (I want keys/values to never expire):
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'LOCATION': '127.0.0.1:6379',
        'KEY_PREFIX': 'my-prefix',
        'OPTIONS': {
            'TIMEOUT': None
        }
    }
}
When using the stats app, there is an import error: "cannot import name CacheConnectionPool".
I tried using django-redis, but it throws this error in production only; probably the combination of software versions is causing it:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/core/management/base.py", line 217, in execute
translation.activate('en-us')
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/utils/translation/__init__.py", line 105, in activate
return _trans.activate(language)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/utils/translation/trans_real.py", line 194, in activate
_active.value = translation(language)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/utils/translation/trans_real.py", line 183, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/utils/translation/trans_real.py", line 160, in _fetch
app = import_module(appname)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/django_redis-3.1.2-py2.6.egg/redis_cache/__init__.py", line 3, in <module>
from django.core.cache import get_cache
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/core/cache/__init__.py", line 187, in <module>
cache = get_cache(DEFAULT_CACHE_ALIAS)
File "/srv/coclea_virt_env/lib/python2.6/site-packages/Django-1.4.2-py2.6.egg/django/core/cache/__init__.py", line 178, in get_cache
"Could not find backend '%s': %s" % (backend, e))
django.core.cache.backends.base.InvalidCacheBackendError: Could not find backend 'redis_cache.cache.RedisCache': 'tuple' object has no attribute 'major'
Software Debian 6.0 (Squeeze):
django.core.cache.backends.base.DEFAULT_TIMEOUT is by default a blank object() in Django 1.6. Apps that explicitly pass this value as the timeout will fail because proper int checks are not in place.
The line client.expire(key, int(timeout)) within DefaultClient.set causes a TypeError while trying to convert object() to int.
Django's base cache implementation catches this exception and deals with it accordingly. The DefaultClient redis implementation checks for timeout > 0, which returns True for the default object, but then fails while trying to cast it.
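A sketch of the fix (the function name is mine, and `sentinel` stands in for the real DEFAULT_TIMEOUT object so the logic is self-contained): resolve the sentinel before any comparison or int() cast:

```python
def resolve_timeout(timeout, default_timeout, sentinel):
    """Map Django's DEFAULT_TIMEOUT sentinel before any int() cast.

    `sentinel` stands in for django.core.cache.backends.base.DEFAULT_TIMEOUT
    (a bare object() in Django 1.6); comparing it with > or casting it
    with int() is what raises the TypeError described above.
    """
    if timeout is sentinel:
        timeout = default_timeout
    if timeout is None:
        return None  # no expiry
    return int(timeout)
```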
When ShardedRedisCache.close() is called, an exception is raised. It's because ShardedRedisCache uses self.get_server() instead of self._client().
Hi,
I'm testing the 3.x branch on Django 1.4.2 and the sharded client does not work; I get this exception upon connection close:
File "/Users/uroy1/.virtualenvs/calltime-api/lib/python2.6/site-packages/redis_cache/cache.py", line 144, in close
self.client.close(**kwargs)
TypeError: close() got an unexpected keyword argument 'signal'
Having a look at the code, it seems the close method in the ShardClient class should at least have this signature (and should actually be doing something with the connections if it's configured to close them):
def close(self, **kwargs):
pass
Best regards,
Urtzi.