
twemcache's Issues

All threads spike to 100% CPU

Strace on one of the threads didn't give much information beyond confirming that it was in some kind of loop.

epoll_wait(13, {{EPOLLIN, {u32=44, u64=44}}}, 32, 4294967295) = 1
epoll_wait(13, {{EPOLLIN, {u32=44, u64=44}}}, 32, 4294967295) = 1
epoll_wait(13, {{EPOLLIN, {u32=44, u64=44}}}, 32, 4294967295) = 1
epoll_wait(13, {{EPOLLIN, {u32=44, u64=44}}}, 32, 4294967295) = 1

The process is still responsive to requests, and the only way to fix it has been to restart. It was running across a few hundred machines and the spike was happening about twice a day.

Has anybody else experienced this issue?
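For what it's worth, a busy loop like this can happen with level-triggered epoll when a descriptor stays readable but the handler never drains it, so every wait returns immediately with the same event. A minimal C sketch of that failure mode (illustrative only, not twemcache's actual event loop):

#include <sys/epoll.h>

/* If fd 44 is registered level-triggered with EPOLLIN and is readable, but
 * the handler never read()s it, every epoll_wait() returns instantly with
 * the same event: 100% CPU, yet the process still works whenever a request
 * does get handled. */
static void
spin_forever(int epfd)
{
    struct epoll_event events[32];

    for (;;) {
        int n = epoll_wait(epfd, events, 32, -1);  /* -1 shows as 4294967295 in strace */

        for (int i = 0; i < n; i++) {
            /* bug: the ready descriptor is reported but never drained */
        }
    }
}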

Be able to turn off UDP

In the wake of the recent memcached amplification attacks
https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-port-11211/
it makes sense to be able to disable UDP support.

The twemcache code seems to be written so that UDP binding is always required:

if (tcp_specified && !udp_specified) {
    settings.udpport = settings.port;
} else if (udp_specified && !tcp_specified) {
    settings.port = settings.udpport;
}
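A minimal sketch of one way to make UDP optional, following memcached's convention that a UDP port of 0 means "UDP disabled" (this reuses the settings fields from the snippet above and is an illustration, not a tested patch):

/* Do not force a UDP listener just because a TCP port was given;
 * treat settings.udpport == 0 as "UDP disabled". */
if (tcp_specified && !udp_specified) {
    settings.udpport = 0;       /* was: settings.udpport = settings.port; */
} else if (udp_specified && !tcp_specified) {
    settings.port = settings.udpport;
}

/* Later, during socket setup, bind UDP only when it was asked for: */
if (settings.udpport > 0) {
    /* create and dispatch the UDP listeners */
}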

Scalability of Twemcache

Dear Twemcache team,
I would like to ask about the scalability of Twemcache. What is the maximum throughput you could achieve in terms of concurrent requests per second, and how many cores could you scale up to?

Based on my own experience, I could saturate only two (Xeon) cores with a single twemcache process achieving a maximum throughput of 320K requests per second. My request mix is 95% reads to 5% writes and each request reads/writes a 1KB record.

Is there any evidence that Twemcache can be scaled up to a higher number of cores, or to a higher number of concurrent requests?
Could you please share your experience with Twemcache's scalability? Are there any known scalability bottlenecks that I should consider in my tuning process?

Regards,

php user

Hi,

How can I use it in PHP?
Is there a PHP extension for it?

memcached.org changes?

Hi,

I apologize if I come across as grumpy; this sort of thing is a recurring theme for us over at memcached central.

Have you folks seen the performance and slab balancing changes that went in six months ago? From what I can see, most of the other changes on your end are around stats and logging, which are pretty trivial.

In my own testing (with mc-crusher) I wasn't able to get the stats locks (as of 1.4.13) to cause any contention (most of the stats were split into per-thread locks ages ago). This is probably untrue with a NUMA machine, and I would be curious to know how you caused those issues.

Thanks!

Add Garbage Collector

I see that you've invested a lot of time in stackable eviction strategies.
That's good news for me, because it means you've already taken the first step: admitting that the default LRU is not as good as it could be.

Perhaps you would like to incorporate this changeset:
https://groups.google.com/forum/?fromgroups#!topic/memcached/MdNPv0oxhO8
which makes sure that expired items never linger in memory, and does so in O(1).
We have been using it at nk.pl for years and it works great (evictions dropped to 0, monitoring memory consumption is more informative now, and slabs now have a chance to become empty and be disposed of).

The only drawback I can see is the additional O(1) memory per item for the doubly linked list pointers. I believe Twitter employs some tough hackers who could make this overhead smaller (how about the XOR linked list trick?).
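For readers who don't want to dig through the thread: the core of the idea is to keep every item on an intrusive doubly linked list bucketed by expiration time, so that when a bucket's time comes due, its items can be unlinked and reclaimed in O(1) each, without scanning the LRU. A rough sketch under those assumptions (hypothetical names, not the actual changeset):

#include <stddef.h>
#include <time.h>

#define EXP_BUCKETS 3600    /* one bucket per second, one-hour horizon */

/* The two pointers below are the "O(1) extra memory per item". */
struct item {
    struct item *exp_prev;
    struct item *exp_next;
    time_t       exptime;
    /* ... existing item fields ... */
};

static struct item *exp_bucket[EXP_BUCKETS];

static void
exp_link(struct item *it)
{
    struct item **head = &exp_bucket[it->exptime % EXP_BUCKETS];

    it->exp_prev = NULL;
    it->exp_next = *head;
    if (*head != NULL) {
        (*head)->exp_prev = it;
    }
    *head = it;
}

static void
exp_unlink(struct item *it)
{
    if (it->exp_prev != NULL) {
        it->exp_prev->exp_next = it->exp_next;
    } else {
        exp_bucket[it->exptime % EXP_BUCKETS] = it->exp_next;
    }
    if (it->exp_next != NULL) {
        it->exp_next->exp_prev = it->exp_prev;
    }
}

A periodic pass (or the code that advances the clock) walks the bucket for the current second and frees everything on it, which is how evictions can drop to zero as described above.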

Can't install, please help!

I downloaded the zip file and unzipped it. Then I tried to install it with the command ./configure, but it says the configure file is not found.
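In case it helps: a zip download of the repository is a raw source tree, so the autoconf-generated configure script does not exist yet and has to be generated first. Assuming a standard autotools setup (autoconf, automake, and libtool installed), the usual sequence would be:

autoreconf -fvi
./configure
make
sudo make install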

Add metrics to track memory/heap consumption

Learning about the real memory consumption of Twemcache is important to correctly estimate overhead and avoid paging. To many people's surprise, slab memory doesn't account for the entire heap size in many cases, and it would be helpful to have metrics reflecting actual heap size and its composition.

Aside from slabs, large memcache instances usually allocate a lot of memory for the hashtable(s); and for instances with many connections, connection buffers are also a significant source of memory overhead. So it would be nice to have the following metrics for starters:

heap_curr /* total heap size, everything allocated through mc_*alloc */
heap_hashtable /* size of the current hashtable, and if in transition, hashtables */
heap_conn /* connection buffer related overhead */

There are others that could be added, such as total slab memory (which can currently be computed from slab_curr and the slab size), the suffix buffers for reply messages, etc. It would be nice to come up with a more comprehensive component list, but those probably aren't as important as the ones above.
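As a rough illustration of how something like heap_curr could be maintained, the mc_*alloc wrappers could record each allocation's size and adjust a counter on free. The sketch below is hypothetical (the _tracked names are not twemcache's API, and real code would need thread-safe counter updates and proper alignment of the size header):

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical counter backing a "heap_curr" stat. */
static uint64_t heap_curr;

/* Prepend the requested size so the free path can subtract it back out. */
void *
mc_alloc_tracked(size_t size)
{
    size_t *p = malloc(sizeof(size_t) + size);

    if (p == NULL) {
        return NULL;
    }
    *p = size;
    heap_curr += size;
    return p + 1;
}

void
mc_free_tracked(void *ptr)
{
    if (ptr != NULL) {
        size_t *p = (size_t *)ptr - 1;

        heap_curr -= *p;
        free(p);
    }
}

Similar accounting hooks at the hashtable and connection-buffer allocation sites would cover heap_hashtable and heap_conn.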

mc_cache problem

mc_cache.c, line 61: based on the other functions, I think it should be:
ptr = mc_calloc(initial_pool_size, sizeof(char *));
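For context, the reasoning is that the pool is an array of pointer slots, so the element size passed to mc_calloc should be sizeof(char *), the size of a pointer, rather than the size of whatever the pointers reference. A minimal, hypothetical illustration of that pattern (not the actual mc_cache internals):

#include <stdlib.h>

/* A free-object pool is an array of pointer slots; each slot is one pointer
 * wide, regardless of the size of the objects the slots will point to. */
static char **
pool_create(size_t initial_pool_size)
{
    return calloc(initial_pool_size, sizeof(char *));
}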

maximum memory of twemcache

Hi,
I found that the service can grow past its maximum memory limit with a loop of "set" followed by "get". I started it with:
twemcache -d -m 20
Test script:

import memcache
mc = memcache.Client(['localhost:11211'])
i = 0
while True:
    i += 1
    key = value = str(i)
    mc.set(key, value)
    mc.get(key)

The memory size keeps increasing and goes past the maximum. Is this caused by some internal data structure?

thanks

memory slab calcification problem.

Once memory has been allocated to certain slab classes, if the distribution of item sizes later changes, some slab classes will run short of space while others sit on pages that are mostly empty. For example, if a cache is first filled with 100-byte items and the workload then shifts to 1KB items, the memory held by the 100-byte slab class cannot be reused for the larger items.

memory leak

It seems that there is still a memory leak.
I ran Twemcache under Valgrind and submitted many small objects (a sequence of set/get of an object with key "i" and a random value, for i = 1, 2, 3, ...).
The memory keeps growing.
Valgrind's output says that, when a suffix is created, the call sequence is
asc_respond_get (mc_ascii.c) -> asc_create_suffix (mc_ascii.c) -> cache_alloc (mc_cache.c)
but the allocated memory is not properly freed or managed (blocks are definitely lost).

Could you please check?

Typo in documentation

On documentation page https://github.com/twitter/twemcache it is written:

"No eviction (0) ...
Item LRU eviction (1) ...
Random eviction (2) ...
Slab LRA eviction (4) ...
Slab LRC eviction (8) ..."

and later:
"For example, -M 5 means that if slab LRU eviciton fails, Twemcache will try item LRU eviction".

But there is no slab LRU in the list above; I suppose it means slab LRA.

Please change that abbreviation accordingly.

Best regards,
Maxim

Possible bug in the mc_strtoll implementation?

The original code is from here: https://github.com/twitter/twemcache/blob/master/src/mc_util.c

#include <algorithm>
#include <cctype>
#include <cerrno>
#include <cstdint>
#include <cstdlib>
#include <string>

bool
mc_strtoll(const char *str, int64_t *out)
{
    char *endptr;
    long long ll;

    errno = 0;
    *out = 0LL;

    ll = std::strtoll(str, &endptr, 10);

    if (errno == ERANGE) {
        return false;
    }

#if 0
    /* original check -- buggy: for input "123 abc" it only inspects the
     * first trailing character, sees a space, and accepts the string */
    if (isspace(*endptr) || (*endptr == '\0' && endptr != str)) {
        *out = ll;
        return true;
    }
#endif

    /* accept the value only if everything after the digits is whitespace,
     * or the digits run all the way to the end of a non-empty string */
    std::string s(endptr);
    if (std::all_of(s.begin(), s.end(),
                    [](unsigned char c) { return std::isspace(c); }) ||
        (*endptr == '\0' && endptr != str)) {
        *out = ll;
        return true;
    }

    return false;
}
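A few illustrative calls under the stricter trailing-whitespace check (values hypothetical):

int64_t v;

mc_strtoll("123", &v);      /* true,  v == 123 */
mc_strtoll("123  \n", &v);  /* true,  v == 123 -- only whitespace after the digits */
mc_strtoll("123 abc", &v);  /* false; the original check returned true here because
                               it only looked at the first character after the digits */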
