dasebe / adaptsize
A caching system that maximizes hit ratios under highly variable traffic.
License: BSD 2-Clause "Simplified" License
adaptsize.c: In function ‘main’:
adaptsize.c:32:7: error: implicit declaration of function ‘fprint’; did you mean ‘fprintf’? [-Werror=implicit-function-declaration]
fprint(stderr, "Unusual size of %l bytes for a memory cache.\nDid you state cachesize in GB?\nExiting", cache_size);
^~~~~~
fprintf
Hi, AdaptSize developers.
Your results are really outstanding.
I have read dozens of articles about caching algorithms in
Content Delivery Networks (CDNs), and none of the algorithms they consider
took the size of the content into account when caching:
in general, all of them simply admitted every new piece of content ("newcomer")
and relied only on the policy of eviction from the cache.
I am interested in caching algorithms and have read a lot of articles, but
I could not understand why researchers do not consider the size of the content,
since the size of the content strongly influences cache performance.
When I read your article
"AdaptSize: Orchestrating the Hot Object Memory Cache in a Content Delivery Network",
I was very pleased that our thoughts coincided.
Your article is unique in every sense: there is a good mathematical model, there is a very detailed
study of the algorithm's performance compared to existing algorithms, and besides all this there is
also an implementation (these days, reference implementations of the algorithms from
papers are often absent). But in order to advance science, it is very important to be able to
repeat the experiments described in scientific papers, and this can only be done with a detailed
description of the experiment (the requirements for the computing system, a description of the
dataset) or with an implementation. Your article has it all.
When I read the article, I wanted to reproduce its results.
I downloaded the AdaptSize, webtracereplay, and webcachesim repositories.
Then I ran all the algorithms in webcachesim, as well as AdaptSize (with webtracereplay),
on my own datasets. It turned out that the results of AdaptSize were lower than
those of all the other algorithms (LRU, LRU-K, S4LRU, LFU-DA, etc.).
Is there a pure implementation of the AdaptSize algorithm (not as a part of Varnish),
like the implementations in webcachesim?
Or is there somewhere I can get test data so I can repeat the experiment?
I have been looking for a long time for an article that takes the size of the content into account,
and I really want to reproduce the results that you presented in your article.
Hi, AdaptSize developers:
Thanks for this awesome framework! I'm not sure how to write an appropriate Varnish VCL
file to make AdaptSize work. Would you mind giving an example?
Thanks
We replay a production trace from Akamai, which comes from an edge cache that serves highly multiplexed traffic which is challenging to cache. An unmodified Varnish version achieves an average hit ratio of 0.42 due to many objects being evicted before being requested again. The hit ratio under Varnish is also highly variable: the hit ratio's coefficient of variation is 23%.
AdaptSize achieves a hit ratio of 0.66, which is a 1.57x improvement over unmodified Varnish. Additionally, the hit ratio under AdaptSize is much more stable: the hit ratio's coefficient of variation is 5%, which is a 4.6x improvement.
While AdaptSize significantly improves the hit ratio, it does not impose a throughput overhead. Specifically, AdaptSize does not add any locks and thus scales exactly like an unmodified Varnish does.
AdaptSize is a new caching management policy. Unlike the vast majority of prior caching policies (see webcachesim for examples), AdaptSize does not change the cache eviction policy. Eviction policies decide which object to evict from the cache, if we need space. Typically, this is a variant of LRU.
AdaptSize introduces a new cache admission policy, which decides which objects get admitted into the cache in the first place. The admission decision is based on the following intuition:
If cache space is limited (as in memory caches), admitting large objects can be bad for the hit ratio, as they force the eviction of potentially many other objects, which then won't be available on their next request. Accordingly, large objects need to prove their worth before being allowed into the cache.