pegasus-go-client's Issues

Increase rpc io buffer and request channel size to improve throughput

Surprisingly, in our experiment we found that enlarging the request channel made no improvement, while enlarging the response buffer (rpc.ReadStream) significantly improved both average latency and throughput.

As we increased the response buffer from 256KB to 512KB, the system seemed to reach its limit (about 40k QPS), so further performance tuning there makes little sense. In the future we could instead explore optimizations that reduce memory use.

The following benchmarks ran on 5 Pegasus replica nodes with 16 partitions, 100,000 (10w) insertions, and 100-byte records.
We reached the same conclusion when we changed the workload to 300,000 (30w) read operations.

response buffer 64kb, no request channel buffer

INSERT - Count: 192010, Avg(us): 3482, Min(us): 386, Max(us): 42951, 95th(us): 8000, 99th(us): 14000
INSERT - Count: 387387, Avg(us): 3447, Min(us): 356, Max(us): 45644, 95th(us): 8000, 99th(us): 14000
INSERT - Count: 584503, Avg(us): 3412, Min(us): 356, Max(us): 45644, 95th(us): 7000, 99th(us): 13000
INSERT - Count: 774928, Avg(us): 3438, Min(us): 356, Max(us): 45644, 95th(us): 7000, 99th(us): 13000
INSERT - Count: 965434, Avg(us): 3451, Min(us): 338, Max(us): 77322, 95th(us): 7000, 99th(us): 13000
INSERT - Count: 1000000, Avg(us): 3443, Min(us): 338, Max(us): 77322, 95th(us): 7000, 99th(us): 13000
Run finished, takes 51.837521852s

response buffer 128kb, no request channel buffer

INSERT - Count: 225254, Avg(us): 3139, Min(us): 357, Max(us): 36666, 95th(us): 7000, 99th(us): 14000
INSERT - Count: 458059, Avg(us): 3110, Min(us): 357, Max(us): 42223, 95th(us): 7000, 99th(us): 14000
INSERT - Count: 683384, Avg(us): 3135, Min(us): 340, Max(us): 42223, 95th(us): 7000, 99th(us): 14000
INSERT - Count: 915600, Avg(us): 3157, Min(us): 322, Max(us): 57728, 95th(us): 7000, 99th(us): 15000
INSERT - Count: 999999, Avg(us): 3140, Min(us): 322, Max(us): 57728, 95th(us): 7000, 99th(us): 15000
Run finished, takes 43.703584059s

response buffer 256kb, no request channel buffer

INSERT - Count: 366927, Avg(us): 2511, Min(us): 347, Max(us): 50030, 95th(us): 7000, 99th(us): 15000
INSERT - Count: 701266, Avg(us): 2649, Min(us): 344, Max(us): 73976, 95th(us): 8000, 99th(us): 17000
INSERT - Count: 1000000, Avg(us): 2615, Min(us): 340, Max(us): 73976, 95th(us): 8000, 99th(us): 17000
Run finished, takes 28.381599693s

response buffer 512kb, no request channel buffer

INSERT - Count: 366486, Avg(us): 2596, Min(us): 332, Max(us): 83957, 95th(us): 8000, 99th(us): 17000
INSERT - Count: 725917, Avg(us): 2624, Min(us): 320, Max(us): 83957, 95th(us): 8000, 99th(us): 18000
INSERT - Count: 999999, Avg(us): 2634, Min(us): 320, Max(us): 95898, 95th(us): 8000, 99th(us): 18000
Run finished, takes 27.91239882s

response buffer 1M, no request channel buffer

INSERT - Count: 387340, Avg(us): 2482, Min(us): 322, Max(us): 83280, 95th(us): 7000, 99th(us): 16000
INSERT - Count: 757737, Avg(us): 2542, Min(us): 322, Max(us): 83280, 95th(us): 8000, 99th(us): 17000
INSERT - Count: 999995, Avg(us): 2616, Min(us): 322, Max(us): 83280, 95th(us): 8000, 99th(us): 17000
Run finished, takes 27.33208115s

INSERT - Count: 400155, Avg(us): 2401, Min(us): 338, Max(us): 41996, 95th(us): 7000, 99th(us): 15000
INSERT - Count: 789793, Avg(us): 2435, Min(us): 338, Max(us): 51444, 95th(us): 7000, 99th(us): 15000
INSERT - Count: 999993, Avg(us): 2441, Min(us): 316, Max(us): 57813, 95th(us): 7000, 99th(us): 16000
Run finished, takes 25.50218167s

INSERT - Count: 376762, Avg(us): 2562, Min(us): 308, Max(us): 44060, 95th(us): 8000, 99th(us): 17000
INSERT - Count: 750331, Avg(us): 2577, Min(us): 308, Max(us): 49770, 95th(us): 8000, 99th(us): 16000
INSERT - Count: 999998, Avg(us): 2577, Min(us): 308, Max(us): 49770, 95th(us): 8000, 99th(us): 17000
Run finished, takes 26.846375043s

response buffer 2M, no request channel buffer

INSERT - Count: 387340, Avg(us): 2482, Min(us): 322, Max(us): 83280, 95th(us): 7000, 99th(us): 16000
INSERT - Count: 757737, Avg(us): 2542, Min(us): 322, Max(us): 83280, 95th(us): 8000, 99th(us): 17000
INSERT - Count: 999995, Avg(us): 2616, Min(us): 322, Max(us): 83280, 95th(us): 8000, 99th(us): 17000
Run finished, takes 27.33208115s

no response buffer, no request channel buffer

INSERT - Count: 191814, Avg(us): 3587, Min(us): 323, Max(us): 41650, 95th(us): 8000, 99th(us): 12000
INSERT - Count: 383523, Avg(us): 3577, Min(us): 323, Max(us): 69097, 95th(us): 8000, 99th(us): 12000
INSERT - Count: 574214, Avg(us): 3603, Min(us): 323, Max(us): 69097, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 762215, Avg(us): 3615, Min(us): 323, Max(us): 69097, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 953002, Avg(us): 3610, Min(us): 323, Max(us): 69097, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 1000000, Avg(us): 3601, Min(us): 323, Max(us): 69097, 95th(us): 8000, 99th(us): 13000
Run finished, takes 52.512368797s

INSERT - Count: 187798, Avg(us): 3606, Min(us): 376, Max(us): 31063, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 379229, Avg(us): 3583, Min(us): 348, Max(us): 42985, 95th(us): 8000, 99th(us): 12000
INSERT - Count: 565811, Avg(us): 3609, Min(us): 348, Max(us): 49867, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 749010, Avg(us): 3646, Min(us): 346, Max(us): 143944, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 936218, Avg(us): 3648, Min(us): 346, Max(us): 143944, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 1000000, Avg(us): 3645, Min(us): 322, Max(us): 143944, 95th(us): 8000, 99th(us): 13000
Run finished, takes 53.474150495s

increase request channel buffer to 1024

INSERT - Count: 187842, Avg(us): 3686, Min(us): 354, Max(us): 67325, 95th(us): 8000, 99th(us): 14000
INSERT - Count: 375630, Avg(us): 3695, Min(us): 354, Max(us): 86987, 95th(us): 8000, 99th(us): 15000
INSERT - Count: 566584, Avg(us): 3672, Min(us): 354, Max(us): 86987, 95th(us): 8000, 99th(us): 14000
INSERT - Count: 756894, Avg(us): 3658, Min(us): 314, Max(us): 86987, 95th(us): 8000, 99th(us): 14000
INSERT - Count: 949095, Avg(us): 3639, Min(us): 314, Max(us): 86987, 95th(us): 8000, 99th(us): 14000
INSERT - Count: 999986, Avg(us): 3623, Min(us): 314, Max(us): 86987, 95th(us): 8000, 99th(us): 14000
Run finished, takes 52.666922249s

INSERT - Count: 190783, Avg(us): 3581, Min(us): 345, Max(us): 29590, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 384480, Avg(us): 3542, Min(us): 345, Max(us): 41550, 95th(us): 8000, 99th(us): 12000
INSERT - Count: 575013, Avg(us): 3571, Min(us): 345, Max(us): 48848, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 767113, Avg(us): 3568, Min(us): 345, Max(us): 84307, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 954669, Avg(us): 3588, Min(us): 345, Max(us): 87353, 95th(us): 8000, 99th(us): 13000
INSERT - Count: 999998, Avg(us): 3577, Min(us): 345, Max(us): 87353, 95th(us): 8000, 99th(us): 13000
Run finished, takes 52.395231448s

response buffer 64 kb, request channel buffer 1024

INSERT - Count: 198474, Avg(us): 3277, Min(us): 354, Max(us): 32772, 95th(us): 7000, 99th(us): 12000
INSERT - Count: 389445, Avg(us): 3473, Min(us): 341, Max(us): 169640, 95th(us): 7000, 99th(us): 16000
INSERT - Count: 582581, Avg(us): 3490, Min(us): 341, Max(us): 169640, 95th(us): 8000, 99th(us): 16000
INSERT - Count: 775263, Avg(us): 3484, Min(us): 324, Max(us): 169640, 95th(us): 8000, 99th(us): 15000
INSERT - Count: 962725, Avg(us): 3514, Min(us): 324, Max(us): 251650, 95th(us): 8000, 99th(us): 16000
INSERT - Count: 999998, Avg(us): 3494, Min(us): 324, Max(us): 251650, 95th(us): 7000, 99th(us): 16000
Run finished, takes 51.934792143s

INSERT - Count: 201381, Avg(us): 3191, Min(us): 380, Max(us): 28021, 95th(us): 7000, 99th(us): 11000
INSERT - Count: 401748, Avg(us): 3243, Min(us): 346, Max(us): 87484, 95th(us): 7000, 99th(us): 12000
INSERT - Count: 603700, Avg(us): 3245, Min(us): 346, Max(us): 87484, 95th(us): 7000, 99th(us): 12000
INSERT - Count: 805674, Avg(us): 3249, Min(us): 325, Max(us): 87484, 95th(us): 7000, 99th(us): 12000
INSERT - Count: 1000000, Avg(us): 3239, Min(us): 325, Max(us): 87484, 95th(us): 7000, 99th(us): 12000
Run finished, takes 49.658922395s

response buffer 1M, request channel buffer 1024

INSERT - Count: 387271, Avg(us): 2485, Min(us): 340, Max(us): 40965, 95th(us): 8000, 99th(us): 15000
INSERT - Count: 779784, Avg(us): 2468, Min(us): 323, Max(us): 62745, 95th(us): 8000, 99th(us): 15000
INSERT - Count: 1000000, Avg(us): 2459, Min(us): 323, Max(us): 62745, 95th(us): 7000, 99th(us): 15000
Run finished, takes 25.798325495s

INSERT - Count: 362033, Avg(us): 2668, Min(us): 319, Max(us): 74088, 95th(us): 8000, 99th(us): 18000
INSERT - Count: 750832, Avg(us): 2569, Min(us): 319, Max(us): 74088, 95th(us): 8000, 99th(us): 17000
INSERT - Count: 1000000, Avg(us): 2534, Min(us): 319, Max(us): 74088, 95th(us): 8000, 99th(us): 17000
Run finished, takes 26.448898855s

scanner.Next(ctx) returns hashKey and sortKey correctly, but value contains no data

for i, scanner := range scanners {
    // Iterate over each partition's scanner sequentially.
    start := time.Now()
    cnt := 0
    for {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
        completed, hashKey, sortKey, value, err := scanner.Next(ctx)
        cancel() // release this iteration's timeout resources
        if err != nil {
            logger.Print(err)
            return
        }
        if completed {
            logger.Printf("scanner %d completes", i)
            break
        }
        _ = value // value is unexpectedly empty -- this is the reported bug
        if len(sortKey) == 8 {
            res := int(binary.BigEndian.Uint64(sortKey))
            if res < oneYearAgo {
                logger.Printf("hashkey=%s, sortkey=%d\n", string(hashKey), res)
            }
        }

        cnt++
        if time.Since(start) > time.Minute {
            logger.Printf("scan 1-min, %d rows in total", cnt)
            start = time.Now()
        }
    }
}

idl/base.thrift does not correspond with idl/base/*

I want to fix a bug by adding a field in idl/rrdb.thrift, but generating the Go code with "thrift -I idl/ -out idl --gen go:thrift_import='github.com/pegasus-kv/thrift/lib/go/thrift',package_prefix='github.com/XiaoMi/pegasus-go-client/idl/' idl/rrdb.thrift" causes an error (screenshot omitted).

Then I found that idl/base.thrift does not match the idl/base directory: it lacks the definition of 'blob' (screenshot omitted).

failed tests on travis

#1

=== RUN   TestPegasusTableConnector_ScanHalfInclusive
2019/12/30 03:09:57 create session with [0.0.0.0:34601(meta)]
2019/12/30 03:09:57 create session with [0.0.0.0:34602(meta)]
2019/12/30 03:09:57 create session with [0.0.0.0:34603(meta)]
2019/12/30 03:09:57 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2019/12/30 03:09:57 stop dialing for [0.0.0.0:34601(meta)], connection state: ConnStateReady
2019/12/30 03:09:57 create session with [10.20.0.42:34803(replica)]
2019/12/30 03:09:57 create session with [10.20.0.42:34802(replica)]
2019/12/30 03:09:57 create session with [10.20.0.42:34801(replica)]
2019/12/30 03:09:57 stop dialing for [10.20.0.42:34802(replica)], connection state: ConnStateReady
2019/12/30 03:09:57 stop dialing for [10.20.0.42:34803(replica)], connection state: ConnStateReady
2019/12/30 03:09:57 stop dialing for [10.20.0.42:34801(replica)], connection state: ConnStateReady
2019/12/30 03:09:58  Scanning on all partitions has been completed
2019/12/30 03:09:58  Scanning on all partitions has been completed
2019/12/30 03:09:59 Close session with [0.0.0.0:34601(meta)]
2019/12/30 03:09:59 failed to read response from [0.0.0.0:34601(meta)]: read tcp 127.0.0.1:46676->127.0.0.1:34601: use of closed network connection
2019/12/30 03:09:59 Close session with [0.0.0.0:34602(meta)]
2019/12/30 03:09:59 Close session with [0.0.0.0:34603(meta)]
2019/12/30 03:09:59 Close session with [10.20.0.42:34803(replica)]
2019/12/30 03:09:59 failed to read response from [10.20.0.42:34803(replica)]: read tcp 10.20.0.42:60490->10.20.0.42:34803: use of closed network connection
2019/12/30 03:09:59 Close session with [10.20.0.42:34802(replica)]
2019/12/30 03:09:59 failed to read response from [10.20.0.42:34802(replica)]: read tcp 10.20.0.42:33802->10.20.0.42:34802: use of closed network connection
2019/12/30 03:09:59 Close session with [10.20.0.42:34801(replica)]
panic: test timed out after 1m0s
goroutine 1393 [running]:
testing.(*M).startAlarm.func1()
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1377 +0x11c
created by time.goFunc
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/time/sleep.go:168 +0x52
goroutine 1 [chan receive]:
testing.(*T).Run(0xc00011a000, 0x9b4b06, 0x2b, 0x9c1178, 0x1)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:961 +0x68a
testing.runTests.func1(0xc00011a000)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1202 +0xa7
testing.tRunner(0xc00011a000, 0xc00010bd18)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:909 +0x19a
testing.runTests(0xc0000a0500, 0xcea9c0, 0x17, 0x17, 0x0)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1200 +0x522
testing.(*M).Run(0xc000102080, 0x0)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1117 +0x300
github.com/XiaoMi/pegasus-go-client/pegasus.TestMain(0xc000102080)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/pegasus/main_test.go:16 +0x71
main.main()
	_testmain.go:86 +0x224
goroutine 1383 [chan receive]:
github.com/XiaoMi/pegasus-go-client/session.(*nodeSession).Close(0xc00012e120, 0x0, 0x0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/session.go:360 +0x13d
github.com/XiaoMi/pegasus-go-client/session.(*ReplicaManager).Close(0xc000250180, 0x0, 0x0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/replica_session.go:201 +0x169
github.com/XiaoMi/pegasus-go-client/pegasus.(*pegasusClient).Close(0xc0002501b0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/pegasus/client.go:64 +0x222
github.com/XiaoMi/pegasus-go-client/pegasus.TestPegasusTableConnector_ScanHalfInclusive(0xc00006c000)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/pegasus/table_connector_test.go:669 +0x682
testing.tRunner(0xc00006c000, 0x9c1178)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:909 +0x19a
created by testing.(*T).Run
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:960 +0x652
goroutine 1409 [semacquire]:
sync.runtime_SemacquireMutex(0xc00012e17c, 0x900000000, 0x1)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc00012e178)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/sync/mutex.go:138 +0x1c1
sync.(*Mutex).Lock(0xc00012e178)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/sync/mutex.go:81 +0x7d
github.com/XiaoMi/pegasus-go-client/session.(*nodeSession).notifyCallerAndDrop(0xc00012e120, 0xc00026e210)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/session.go:186 +0x83
github.com/XiaoMi/pegasus-go-client/session.(*nodeSession).loopForResponse(0xc00012e120, 0x4ae8cf, 0xc00009e070)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/session.go:258 +0x660
gopkg.in/tomb%2ev2.(*Tomb).run(0xc0000b4550, 0xc00026f210)
	/home/travis/gopath/pkg/mod/gopkg.in/[email protected]/tomb.go:163 +0x3c
created by gopkg.in/tomb%2ev2.(*Tomb).Go
	/home/travis/gopath/pkg/mod/gopkg.in/[email protected]/tomb.go:159 +0x127
FAIL	github.com/XiaoMi/pegasus-go-client/pegasus	60.019s

#2

2020/01/02 01:42:43 create session with [0.0.0.0:34601(meta)]
2020/01/02 01:42:43 create session with [0.0.0.0:34602(meta)]
2020/01/02 01:42:43 create session with [0.0.0.0:34603(meta)]
2020/01/02 01:42:43 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/02 01:42:43 stop dialing for [0.0.0.0:34601(meta)], connection state: ConnStateReady
2020/01/02 01:42:43 create session with [10.20.0.168:34803(replica)]
2020/01/02 01:42:43 create session with [10.20.0.168:34802(replica)]
2020/01/02 01:42:43 create session with [10.20.0.168:34801(replica)]
2020/01/02 01:42:43 stop dialing for [10.20.0.168:34801(replica)], connection state: ConnStateReady
2020/01/02 01:42:43 stop dialing for [10.20.0.168:34802(replica)], connection state: ConnStateReady
2020/01/02 01:42:43 stop dialing for [10.20.0.168:34803(replica)], connection state: ConnStateReady
2020/01/02 01:42:43 Close session with [0.0.0.0:34601(meta)]
2020/01/02 01:42:43 failed to read response from [0.0.0.0:34601(meta)]: read tcp 127.0.0.1:52870->127.0.0.1:34601: use of closed network connection
2020/01/02 01:42:43 Close session with [0.0.0.0:34602(meta)]
2020/01/02 01:42:43 Close session with [0.0.0.0:34603(meta)]
2020/01/02 01:42:43 Close session with [10.20.0.168:34802(replica)]
2020/01/02 01:42:43 failed to read response from [10.20.0.168:34802(replica)]: read tcp 10.20.0.168:38248->10.20.0.168:34802: use of closed network connection
2020/01/02 01:42:43 Close session with [10.20.0.168:34801(replica)]
2020/01/02 01:42:43 failed to read response from [10.20.0.168:34801(replica)]: read tcp 10.20.0.168:33930->10.20.0.168:34801: use of closed network connection
2020/01/02 01:42:43 Close session with [10.20.0.168:34803(replica)]
panic: test timed out after 1m0s
goroutine 1685 [running]:
testing.(*M).startAlarm.func1()
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1377 +0x11c
created by time.goFunc
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/time/sleep.go:168 +0x52
goroutine 1 [chan receive]:
testing.(*T).Run(0xc0000ee000, 0x9b01db, 0x22, 0x9c1138, 0x1)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:961 +0x68a
testing.runTests.func1(0xc0000ee000)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1202 +0xa7
testing.tRunner(0xc0000ee000, 0xc0000e1d18)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:909 +0x19a
testing.runTests(0xc00000e500, 0xceaa00, 0x17, 0x17, 0x0)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1200 +0x522
testing.(*M).Run(0xc0000d8080, 0x0)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:1117 +0x300
github.com/XiaoMi/pegasus-go-client/pegasus.TestMain(0xc0000d8080)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/pegasus/main_test.go:16 +0x71
main.main()
	_testmain.go:86 +0x224
goroutine 1653 [chan receive]:
github.com/XiaoMi/pegasus-go-client/session.(*nodeSession).Close(0xc000128000, 0x0, 0x0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/session.go:362 +0x13d
github.com/XiaoMi/pegasus-go-client/session.(*ReplicaManager).Close(0xc000218180, 0x0, 0x0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/replica_session.go:201 +0x169
github.com/XiaoMi/pegasus-go-client/pegasus.(*pegasusClient).Close(0xc0002181b0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/pegasus/client.go:64 +0x222
github.com/XiaoMi/pegasus-go-client/pegasus.TestPegasusTableConnector_BatchGet(0xc0000d4200)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/pegasus/table_connector_test.go:1088 +0x1551
testing.tRunner(0xc0000d4200, 0x9c1138)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:909 +0x19a
created by testing.(*T).Run
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/testing/testing.go:960 +0x652
goroutine 1678 [semacquire]:
sync.runtime_SemacquireMutex(0xc00012805c, 0x900000000, 0x1)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc000128058)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/sync/mutex.go:138 +0x1c1
sync.(*Mutex).Lock(0xc000128058)
	/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/sync/mutex.go:81 +0x7d
github.com/XiaoMi/pegasus-go-client/session.(*nodeSession).notifyCallerAndDrop(0xc000128000, 0xc0001003e0)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/session.go:186 +0x83
github.com/XiaoMi/pegasus-go-client/session.(*nodeSession).loopForResponse(0xc000128000, 0x4ae8cf, 0xc000308e60)
	/home/travis/gopath/src/github.com/XiaoMi/pegasus-go-client/session/session.go:258 +0x660
gopkg.in/tomb%2ev2.(*Tomb).run(0xc0003081e0, 0xc000100340)
	/home/travis/gopath/pkg/mod/gopkg.in/[email protected]/tomb.go:163 +0x3c
created by gopkg.in/tomb%2ev2.(*Tomb).Go
	/home/travis/gopath/pkg/mod/gopkg.in/[email protected]/tomb.go:159 +0x127
FAIL	github.com/XiaoMi/pegasus-go-client/pegasus	60.018s

#3


=== RUN   TestNodeSession_ConcurrentCall
2020/01/07 10:38:50 create session with [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 querying configuration of table(temp) from [0.0.0.0:34601(meta)]
2020/01/07 10:38:50 stop dialing for [0.0.0.0:34601(meta)], connection state: ConnStateReady
2020/01/07 10:38:50 close session [0.0.0.0:34601(meta)]
panic: test timed out after 1m0s

Adapt partition split

  1. Fill the correct partition_hash value in thrift_header
  • The Thrift header has a reserved field called 'partition_hash' with a default value of 0. Servers that support partition split use it to decide whether a request targets the parent or the child partition.
  2. Add a need_check_hash field in get_scanner_request
  • After a partition split, the parent still holds data belonging to the child, and the child also holds data belonging to the parent. As a result, we have to check the hash during unordered_scan to remove duplicated data. We don't need to check the hash during hash_scan: hash_scan reads only one partition, so it cannot return duplicates.
  3. Handle the split-related error codes ERR_SPLITTING and ERR_PARENT_PARTITION_MISUSED
  • While the parent partition is registering the child partition, it rejects read and write requests with ERR_SPLITTING; the client should not query meta and should retry within the remaining timeout.
  • Once the child partition is active, requests should be sent to the child; the parent returns ERR_PARENT_PARTITION_MISUSED, and the client should update all partition configurations to pick up both the parent's and the child's.
  4. Handle the case where the client queries the config of a splitting table
  • When the client queries the config of a splitting table, it gets the child partition's config; if the child is not ready, its requests should be redirected to the parent.
  • The server now supports split (new_count = old_count * 2) and cancel split (new_count = old_count / 2); the client should update its configuration in both cases.

Fill in partition_hash in thrift_header

The Thrift header has a reserved field called 'partition_hash' with a default value of 0; the server doesn't use this field currently. However, servers that support partition split will use it, so we should fill in the correct value.

bad readme

From the README, I cannot understand what this project is used for. A more useful introduction to the project would be helpful.

Can't get pegasus-go-client by `go get -u`

smilencer@mi:~$ cat result 
# github.com/XiaoMi/pegasus-go-client/idl/cmd
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:42:36: not enough arguments in call to iprot.ReadStructBegin
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:47:55: not enough arguments in call to iprot.ReadFieldBegin
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:61:25: not enough arguments in call to iprot.Skip
	have (thrift.TType)
	want (context.Context, thrift.TType)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:80:31: not enough arguments in call to iprot.ReadFieldEnd
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:84:31: not enough arguments in call to iprot.ReadStructEnd
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:91:31: not enough arguments in call to iprot.ReadString
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:100:37: not enough arguments in call to iprot.ReadListBegin
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:108:32: not enough arguments in call to iprot.ReadString
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:115:29: not enough arguments in call to iprot.ReadListEnd
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:122:34: not enough arguments in call to oprot.WriteStructBegin
	have (string)
	want (context.Context, string)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:122:34: too many errors
# github.com/XiaoMi/pegasus-go-client/idl/base
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/blob.go:18:31: not enough arguments in call to iprot.ReadBinary
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/blob.go:27:26: not enough arguments in call to oprot.WriteBinary
	have ([]byte)
	want (context.Context, []byte)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/error_code.go:105:34: not enough arguments in call to iprot.ReadString
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/error_code.go:110:26: not enough arguments in call to oprot.WriteString
	have (string)
	want (context.Context, string)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/gpid.go:18:25: not enough arguments in call to iprot.ReadI64
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/gpid.go:30:23: not enough arguments in call to oprot.WriteI64
	have (int64)
	want (context.Context, int64)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/rpc_address.go:26:31: not enough arguments in call to iprot.ReadI64
	have ()
	want (context.Context)
Code/GOPATH/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/rpc_address.go:35:23: not enough arguments in call to oprot.WriteI64
	have (int64)
	want (context.Context, int64)

I can't find a way to fix it, but the package is usable via Go modules.
