
Derive the Optimal Caching Policy for Internet and Storage Request Traces

The tools in this repository allow calculating the optimal caching policy and its hit ratio for request traces where object sizes are variable. More information is available in our Sigmetrics 2018 paper.

Motivation

In computer architecture, Belady's algorithm (also known as OPT or clairvoyant) gives the optimal hit ratio that can be achieved on a given trace. Clearly, this is a very useful way to benchmark the performance of caching policies.

Unfortunately, when cached objects vary in size (the number of bytes they occupy in the cache), Belady is no longer optimal. In fact, on real-world traces, Belady can be outperformed by recent online caching policies. Variable object sizes are the common case: they arise in in-memory caches like memcached, in CDN caches like Varnish, and in storage systems like Ceph.
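To see why size-awareness matters, here is an illustrative toy simulation (not the repo's code; the trace, sizes, and the bypass threshold are made up for this example). With a 10-byte cache, Belady's always-admit policy evicts a small, frequently requested object to make room for a large one, while a policy that simply bypasses large objects gets more hits:

```python
# Illustrative toy simulation (not the repo's code): with variable object
# sizes, always-admit Belady can lose to a simple size-aware admission rule.

def simulate(trace, sizes, capacity, admit=lambda size: True):
    """Farthest-next-use eviction; `admit` decides whether a miss is cached."""
    cache, hits = {}, 0  # object id -> size
    for i, obj in enumerate(trace):
        if obj in cache:
            hits += 1
            continue
        if not admit(sizes[obj]):
            continue  # bypass: count the miss, but do not cache the object
        def next_use(o):
            try:
                return trace.index(o, i + 1)
            except ValueError:
                return len(trace)  # never requested again -> evicted first
        while sum(cache.values()) + sizes[obj] > capacity:
            del cache[max(cache, key=next_use)]
        cache[obj] = sizes[obj]
    return hits

trace = ["B", "A", "B", "B", "B", "B"]
sizes = {"A": 10, "B": 5}
belady_hits = simulate(trace, sizes, capacity=10)  # Belady: admit everything
smart_hits = simulate(trace, sizes, 10, admit=lambda s: s <= 5)  # bypass A
# Belady gets 3 hits (it evicts B to admit A); bypassing A gets 4.
```

Belady evicts B to fit A even though B is requested four more times, which is exactly the kind of mistake a size-aware policy avoids.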

This repo introduces a new set of algorithms that enable calculating the optimal hit ratio and the optimal sequence of caching decisions for workloads with variable object sizes. Specifically, we relax the goal of computing OPT exactly to obtaining accurate upper and lower bounds on OPT's hit ratio.

Example Results

For variable object sizes, there are different ways of measuring a cache's performance, e.g., the object hit/miss ratio and the byte hit/miss ratio (defined below). We focus on the most common metric, the object miss ratio, and consider several online caching policies (LRU, AdaptSize, GDSF). Specifically, we compare these online policies to several bounds on the optimal cache miss ratio. We mark bounds with a U, for upper bounds, and with an L, for lower bounds.

Object Miss Ratio of Several Online and Offline Caching Policies

This experiment shows that online caching policies can perform much better than Belady. Furthermore, even advanced variants of Belady (such as Belady-Size) perform only similarly to online policies, which might suggest that online policies are near optimal with regard to their miss ratio. In contrast, our new bounds (PFOO-U and PFOO-L) show that a gap still remains, and work is now underway to bridge it.

Public Traces for Your Experiments

We release the following CDN request trace for use in your experiments (we ask academic works that use this trace to cite the SIGMETRICS'18 paper at the bottom of this page).

  • Format: LZMA-compressed space-separated table.
  • Three columns: request number, anonymized object-id, object size in bytes.
  • See New trace links

Object Hit Ratio (OHR) & Object Miss Ratio

For the object hit/miss ratio optimization goal, every object counts the same (i.e., a hit for a large 1GB object and a hit for a small 10B object both count as a "hit"). This is appropriate for in-memory caches, where the cache's purpose is to minimize the number of I/O operations (random seeks) going to secondary storage.

All code for this optimization goal can be found under the directory "OHRgoal".

Offline Algorithms

  • Flow Offline Optimum (FOO): asymptotically exact derivation of optimal caching (OPT)
  • Practical FOO (PFOO): fast calculation of upper and lower bounds on OPT (PFOO-U and PFOO-L)
  • OFMA: prior OPT approximation from the paper "Page replacement with multi-size pages and applications to web caching" [Irani. STOC'97]
  • LocalRatio: prior OPT approximation from the paper "A unified approach to approximating resource allocation and scheduling" [Bar-Noy, Bar-Yehuda, Freund, Naor, and Schieber. J. ACM 48 (2001)]
  • various other approximations for OPT (Belady-Size, Belady, Freq-Size)

Usage

Traces are expected in the webcachesim space-separated format with three columns (time, id, size in bytes) and a separate request on each line. See the download link above for an example.
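For reference, the format described above can be parsed with a few lines of Python (a sketch, assuming well-formed input; the example trace contents are made up):

```python
import io

def read_trace(f):
    """Yield (time, object_id, size_in_bytes) tuples from a
    webcachesim-format trace: three space-separated columns per line."""
    for line in f:
        t, obj_id, size = line.split()
        yield int(t), obj_id, int(size)

# hypothetical three-request trace: object 42 is requested twice
example = io.StringIO("1 42 1024\n2 17 512\n3 42 1024\n")
requests = list(read_trace(example))
# requests[0] == (1, '42', 1024)
```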

The CLI parameters of some of the tools (with examples) are as follows.

  • FOO

    • format (four parameters, all required) and example:
      ./foo [trace name] [cachesize in bytes] [pivot rule] [output name]
      ./foo trace.txt 1073741824 4 foo_decision_variables.txt
      
    • pivot rule denotes the network simplex's pivot rule
  • PFOO-U

    • format (five parameters, all required) and example:
      ./pfoou [trace name] [cachesize in bytes] [pivot rule] [step size] [output name]
      ./pfoou trace.txt 1073741824 4 500000 pfoo_decision_variables.txt
      
    • step size denotes the number of decision variables that PFOO-U changes in each iteration; 500k is a good starting point (a lower value is faster, a higher value is more accurate)
  • PFOO-L

    • format (three parameters, all required) and example:
      ./pfool [trace name] [cachesize in bytes] [output name]
      ./pfool trace.txt 1073741824 pfoo_decision_variables.txt
      

Byte Hit Ratio (BHR) & Byte Miss Ratio

For the byte hit/miss ratio optimization goal, every object is weighted in proportion to its number of bytes (e.g., a hit to a 4KB object is 4x more important than a hit to a 1KB object). This is appropriate for disk/flash caches (e.g., CDNs), where each miss incurs a bandwidth cost (which is linear in the number of missed bytes).
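Both metrics can be computed from the same per-request outcomes; only the weighting differs. A minimal sketch (not the repo's code; `results` is a hypothetical list of per-request (was_hit, size) pairs):

```python
def hit_ratios(results):
    """results: list of (was_hit, object_size_in_bytes), one per request."""
    hits = sum(1 for h, _ in results if h)
    hit_bytes = sum(s for h, s in results if h)
    total_bytes = sum(s for _, s in results)
    ohr = hits / len(results)      # object hit ratio: every request equal
    bhr = hit_bytes / total_bytes  # byte hit ratio: weighted by object size
    return ohr, bhr

# a hit to a 1 GB object plus a miss on a 10 B object:
# half the requests hit, but nearly all of the bytes hit
ohr, bhr = hit_ratios([(True, 10**9), (False, 10)])
```

This is why the two goals can favor very different policies: under BHR a single large object dominates, while under OHR it counts the same as the smallest object.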

All code for this optimization goal is in the directory "BHRgoal".

Offline Algorithms

  • Practical FOO Lower (PFOO-L): lower bound on the byte miss ratio, i.e., upper bound on the BHR that any online policy can achieve
  • Practical FOO Upper (PFOO-U): (currently still being ported)
  • Belady: as above

Usage

Traces are expected in the webcachesim space-separated format with three columns (time, id, size in bytes) and a separate request on each line. See the download link above for an example.

Each directory contains a Makefile; they have been tested with g++ version 7 and above.

The CLI parameters are as follows.

  • PFOO-L

    • input: two parameters (trace path and cache size)
    • output (to cout): OHR and BHR
    • example:
      ./BHRgoal/PFOO-L/pfool [trace name] [cachesize in bytes]
      ./BHRgoal/PFOO-L/pfool trace.txt 1073741824
      
  • PFOO-U

    • to be ported from OHR
  • Belady

    • input: three parameters (trace path, cache size, and sampling size)
    • output (to cout): OHR and BHR
    • example:
    ./BHRgoal/Belady/belady2 [trace name] [cachesize in bytes] [samples]
    ./BHRgoal/Belady/belady2 trace.txt 1073741824 100
    

Contributors are welcome

Want to contribute? Great! We follow the GitHub contribution workflow: fork the repository and submit a GitHub pull request to get your changes merged into this code base.

This is an early-stage research prototype. There are many ways to help out.

Bug Reports

If you come across a bug in this code, please file a bug report by creating a new issue. This is an early-stage project, which depends on your input!

Write Test Cases

This project has not been thoroughly tested; test cases are likely to get a speedy merge.

Algorithmic Issues (Network Flow)

Both FOO and PFOO-U are much slower than they need to be; see the corresponding issue "PFOO-U is too slow". This is fixable, but currently open.

Academic References

We ask academic works that build on this code to reference the following papers:

Practical Bounds on Optimal Caching with Variable Object Sizes
Daniel S. Berger, Nathan Beckmann, Mor Harchol-Balter. 
ACM SIGMETRICS, June 2018.
Also appeared in ACM POMACS, vol. 2, issue 2, as article No. 32, June 2018.

Towards Lightweight and Robust Machine Learning for CDN Caching
Daniel S. Berger
ACM HotNets, November 2018 (to appear).

External libraries

This software uses the LEMON and Catch2 C++ libraries.

  • LEMON: Library for Efficient Modeling and Optimization in Networks

    • Copyright 2003-2012 Egerváry Research Group on Combinatorial Optimization (EGRES)
    • Boost Software License, Version 1.0, see lemon/LICENSE
    • Authors: lemon/AUTHORS
  • Catch2: C++ Automated Test Cases in a Header

    • Boost Software License, Version 1.0, see tests/LICENSE.txt

Contributors

dasebe

optimalwebcaching's Issues

How to output the exact decision of PFOO for trace?

Hi,

I want to get the caching decisions from PFOO. But when I run a command like ./pfoou [trace name] [cachesize in bytes] [pivot rule] [step size] [output name], I only obtain some numbers like fluid2 1.0000 4 12 0.3333 1.0833 4 in the output file. Am I supposed to see the binary caching decision for each request in the output file?

If this is not the correct way, can you provide some instructions on how to obtain these binary decisions from PFOO? Thanks!

PFOO-U yields impossible result

I tried PFOO-U with a short test trace and discovered what appears to be an impossible solution. My trace was

1 1 10
2 1 10
3 2 10
4 1 10
5 2 10
6 2 10
7 2 10
8 1 10

I ran it with cache size = 11 and the maximum step size, and my solution from PFOO-U (using OHRgoal/PFOO-U/pfoou) was

// output from PFOO-U
// ID, Size, Utility, decision_var, is_hit

1 10 0.1    1    0
1 10 0.05   1    1
2 10 0.05   0.1  0
1 10 0.025  1    1    **
2 10 0.1    1    0.1  **
2 10 0.1    1    1
2 10 0      0    1
1 10 0      0    1

I've highlighted the two rows that I believe yield an impossible solution, as caching both objects entirely would exceed the cache size. The regular FOO-U (OHRgoal/FOO/foo) implementation seems to yield the correct integer hit rate of 4/8:

// output from FOO
// Time, ID, Size, decision_var

1 1 10 1
2 1 10 0.1
3 2 10 1
4 1 10 0.1
5 2 10 1
6 2 10 1
7 2 10 0
8 1 10 0

As a result, the miss rate from PFOO-U (3/8) is less than the miss rate from FOO-U (4/8) on this trace, which seems to violate the claim from the paper that PFOO-L ≤ FOO-L ≤ OPT ≤ FOO-U ≤ PFOO-U.

PFOO-U is too slow

PFOO-U is painfully slow on traces above 50 million requests. Technically, it should be a linear-time algorithm and should outperform PFOO-L. But PFOO-L is orders of magnitude faster in practice, due to its simplicity. In addition, PFOO-U is single threaded and memory bound.

About the dataset

Hello. Previously, we closely followed your work on "LFO: Towards Lightweight and Robust Machine Learning for CDN Caching" [HotNets'18]. We conducted some tests based on the dataset you provided ("We use a 2016 request trace from the CDN of an anonymous top-ten US website. Recorded on a San Francisco CDN server, the trace spans about a week (500 million requests).") for evaluation purposes.

Unfortunately, we noticed that the dataset is no longer available for download on your homepage (https://www.microsoft.com/en-us/research/people/daberg/data-and-software/). Moreover, the above link contains the trace from 2018 instead of 2016. As we aim to reproduce our algorithm's experimental results using your dataset, we were wondering if there are any alternative means to obtain this dataset. Many thanks!
