
Introduction

Object::Cache

Easy caching of Ruby objects, using Redis as a backend store.

Installation

Add this line to your application's Gemfile:

gem 'object-cache'

And then execute:

bundle

Or install it yourself as:

gem install object-cache

Quick Start

# require the proper libraries in your project
require 'redis'
require 'object/cache'

# set the backend to a new Redis instance
Cache.backend = Redis.new

# wrap your object in a `Cache.new` block to store the object on first usage,
# and retrieve it again on subsequent usages
Cache.new { 'hello world' }

# add the core extension for easier access
require 'object/cache/core_extension'
cache { 'hello world' }

Usage

Using Object::Cache, you can cache objects in Ruby that have a heavy cost attached to initializing them, and then replay the recorded object on any subsequent requests.

For example, database query results can be cached, or HTTP requests to other services within your infrastructure.

Caching an object is as easy as wrapping that object in a Cache.new block:

Cache.new { 'hello world' }

Here, the object is of type String, but it can be any type of object that can be marshalled using the Ruby Marshal library.

marshalling data

You can only marshal data, not code, so anything that produces code that is executed later to return data (like Procs) cannot be cached. You can still wrap those in a Cache.new block, and the block will return the Proc as expected, but no caching will occur, so there's no point in doing so.
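Both points can be seen with the Marshal module from the Ruby standard library, independent of this gem:

```ruby
# Plain data round-trips through Marshal without loss:
restored = Marshal.load(Marshal.dump('hello world'))
restored # => "hello world"

# A Proc carries code, not data, so Marshal refuses to dump it:
error = begin
  Marshal.dump(proc { 'hello' })
  nil
rescue TypeError => e
  e
end
# error is a TypeError ("no _dump_data is defined for class Proc")
```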

ttl

By default, a cached object has a ttl (time to live) of one week. This means that every request after the first request uses the value from the cached object. After one week, the cached value becomes stale, and the first request after that will again store the (possibly changed) object in the cache store.

You can globally set the default ttl to a different value:

Cache.default_ttl = 120

You can easily modify the ttl per cached object, using the keyword argument by that same name:

Cache.new(ttl: 60) { 'remember me for 60 seconds!' }

Or, if you want the cached object to never go stale, disable the TTL entirely:

Cache.new(ttl: nil) { 'I am forever in your cache!' }
Cache.new(ttl: 0) { 'me too!' }

Note that it is best to never leave a value in the backend forever. Since this library uses file names and line numbers to derive its keys, a change in your code can mean a new cache object is created after a deployment, leaving the old cache object orphaned and polluting your storage forever.

namespaced keys

When storing the key/value object into Redis, the key name is based on the file name and line number where the cache was initiated. This allows you to cache objects without specifying any namespacing yourself.

If, however, you are storing an object that changes based on input, you need to add a unique namespace to the cache key to make sure the correct object is returned from cache:

Cache.new(email) { User.find(email: email) }

In the above case, we use the customer's email to correctly namespace the returned object in the cache store. The provided namespace argument is still merged together with the file name and line number of the cache request, so you can re-use that same email namespace in different locations, without worrying about any naming collisions.

key prefixes

By default, the eventual key ending up in Redis is a 6-character long digest, based on the file name, line number, and optional key passed into the Cache object:

Cache.new { 'hello world' }
Cache.backend.keys # => ["22abcc"]

This makes working with keys quick and easy, without worrying about conflicting keys.
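As an illustration only (the gem's actual digest function is internal and may differ), the key scheme can be sketched as a short hash over the file name, line number, and optional namespace:

```ruby
require 'digest'

# Hypothetical key derivation: a 6-character digest over the call site
# and an optional caller-supplied namespace.
def cache_key(file, line, namespace = nil)
  Digest::SHA1.hexdigest([file, line, namespace].compact.join('#'))[0, 6]
end

k1 = cache_key('app/models/user.rb', 42)
k2 = cache_key('app/models/user.rb', 42, 'alice@example.com')
# k1 and k2 differ: the namespace distinguishes entries at the same call site,
# while the file name and line number keep different call sites apart.
```

The same inputs always produce the same key, which is what makes cache hits on subsequent requests possible.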

However, this does make it more difficult to selectively delete keys from the backend if you want to purge specific keys before their TTL expires.

To support this use-case, you can use the key_prefix attribute:

Cache.new(key_prefix: 'hello') { 'hello world' }
Cache.backend.keys # => ["hello_22abcc"]

This allows you to selectively purge keys from Redis:

keys = Cache.backend.keys('hello_*')
Cache.backend.del(keys)

You can also use the special value :method_name to dynamically set the key prefix based on where the cached object was created:

Cache.new(key_prefix: :method_name) { 'hello world' }
Cache.backend.keys # => ["test_key_prefix_method_name_22abcc"]

Or, use :class_name to group keys in the same class together:

Cache.new(key_prefix: :class_name) { 'hello world' }
Cache.backend.keys # => ["CacheTest_22abcc"]

You can also define these options globally:

Cache.default_key_prefix = :method_name

redis replicas

Earlier, we used the following setup to connect Object::Cache to a Redis backend:

Cache.backend = Redis.new

The Ruby Redis library has primary/replicas support built-in using Redis Sentinel.

If, however, you have your own setup and want writes and reads split between different Redis instances, you can pass a hash with a primary and replicas key as the backend:

Cache.backend = { primary: Redis.new, replicas: [Redis.new, Redis.new] }

When writing the initial object to the backend, the primary Redis is used. On subsequent requests, a random replica is used to retrieve the stored value.

The above example obviously only works if the replicas receive the written data from the primary instance.
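The routing described above can be sketched in a few lines of plain Ruby. This is an illustrative stand-in, not the gem's implementation; hashes replace the real Redis connections:

```ruby
# Writes go to the primary store; reads go to a randomly chosen replica.
class SplitBackend
  def initialize(primary:, replicas:)
    @primary  = primary
    @replicas = replicas
  end

  def set(key, value)
    @primary[key] = value
  end

  def get(key)
    @replicas.sample[key]
  end
end

primary_store = {}
# Assume replication keeps replicas in sync; here they share the same hash.
backend = SplitBackend.new(primary: primary_store,
                           replicas: [primary_store, primary_store])
backend.set('22abcc', 'hello world')
backend.get('22abcc') # => "hello world"
```

Spreading reads over replicas keeps read load off the primary, at the cost of replication lag: a read may briefly miss a value that was just written.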

core extension

Finally, if you want, you can use the cache method for more convenient access to the cache object:

require 'object/cache/core_extension'

# these are the same:
cache('hello', ttl: 60) { 'hello world' }
Cache.new('hello', ttl: 60) { 'hello world' }

That's it!

License

The gem is available as open source under the terms of the MIT License.


Issues

Replace 'Marshal' with JSON serialiser

The use of Marshal.load poses a security risk: it can lead to remote code execution when loading untrusted data. I don't think it is beyond the realm of possibility that some program or piece of code manages to update data stored at object-cache-defined keys, which is then deserialised by object-cache (and thus Marshal.load).

As far as I can tell this library only supports the serialisation and deserialisation of simple types, which means it is probably as easy as replacing Marshal with a JSON serialiser?
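As a rough sketch of what the swap would look like with Ruby's stdlib json (illustrative only, not this gem's code), including one caveat the change would introduce:

```ruby
require 'json'

# Simple types survive a JSON round-trip unchanged:
value = { 'name' => 'alice', 'count' => 3 }
copy  = JSON.parse(JSON.generate(value))
copy == value # => true

# But JSON is lossier than Marshal: symbol keys come back as strings,
# so cached hashes with symbol keys would change shape.
symbols = JSON.parse(JSON.generate({ name: 'alice' }))
# symbols == { 'name' => 'alice' }, not { name: 'alice' }
```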

Add the option to return _stale_ values whilst refreshing in background

Currently this gem caches values until they expire; when that happens, a new value is generated, cached, and returned. This means that once every ttl, one user/client/connection has to wait for the cache to refresh.

It would be great if you could specify the time after which a value is stale (time to stale?), where stale means the value can still be used but a new version should be generated in the background.

Ex:

def get_highscore
  Cache.new(tts: 5.minutes, ttl: 10.minutes) {
    get_resource('https://example.org/api/highscore.json')
  }
end

# snip…snip…snip

puts get_highscore # first time, fetches data

sleep 1.minute
puts get_highscore # use cache

sleep 5.minutes
puts get_highscore # cache is stale; method returns immediately, background job fetches new data
puts get_highscore # background job might still be busy; stale cache still used, no second job started

sleep 5.seconds
puts get_highscore # new data is used, timers reset

sleep 15.minutes
puts get_highscore # value completely expired, method blocks to fetch new data.

This way values are kept up to date and no user/client/connection is blocked every once in a while.

Two more ideas:

  • Pluggable backend, defaulting to Thread.new. If this gem is used in an EventMachine environment, EventMachine should be used instead.
  • Automatic stale. If you measure the average time a refresh takes, you can predict when a new version should be fetched, e.g. tts = ttl - (measured_time * 2) (this should be disabled by default).
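The automatic-stale heuristic from the last bullet is just arithmetic; with hypothetical numbers (ten-minute expiry, 30-second average refresh):

```ruby
ttl           = 10 * 60  # cache expiry in seconds
measured_time = 30       # average refresh duration in seconds

# Start a background refresh early enough that, even if the refresh takes
# twice its average duration, it finishes before the value expires.
tts = ttl - (measured_time * 2)
tts # => 540, i.e. mark the value stale one minute before it would expire
```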

Cheers,
—Koen

Still usable?

Hi, I'd like to use this library, but I'm wondering whether it's still in a usable state with Ruby 3 and actively maintained. The last commit was 2 years ago, hence my concern.
