
vertx-hazelcast's Introduction

Hazelcast Cluster Manager


This is a cluster manager implementation for Vert.x that uses Hazelcast.

It is the default cluster manager in the Vert.x distribution, but it can be replaced with another implementation as Vert.x cluster managers are pluggable.
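For reference, a minimal sketch of selecting this cluster manager programmatically (assuming the standard Vert.x 3.x clustering API, as also used in the issues below):

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

ClusterManager mgr = new HazelcastClusterManager();
VertxOptions options = new VertxOptions().setClusterManager(mgr);
Vertx.clusteredVertx(options, res -> {
  if (res.succeeded()) {
    Vertx vertx = res.result(); // clustered Vertx instance, ready to use
  } else {
    res.cause().printStackTrace(); // clustering failed
  }
});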

Please see the main documentation on the website for a full description.

Running tests

To run the clustering test suite, open a terminal and type:

mvn test

There are additional integration tests in this project. To run these tests as well:

mvn verify

vertx-hazelcast's People

Contributors

adnanel, afloarea, anierbeck, apatrida, aschrijver, cescoffier, dependabot[bot], frant-hartm, hasancelik, jerrinot, julianladisch, o7bfg, pflanzenmoerder, pmlopes, purplefox, richardf, rikardscg, slinkydeveloper, stampy88, thomas-melusyn, tsegismont, vbekiaris, vietj, zyclonite


vertx-hazelcast's Issues

Add support for Hazelcast configuration using system property

As discussed here, the -Dhazelcast.config system property is not supported by Vert.x, and PR #35 adds a note about this.
But currently the code uses an opinionated way of providing configuration: putting a file named cluster.xml on the (Vert.x) classpath.

Supporting the hazelcast.config system property configuration mechanism would be more flexible and would align with the instructions you'll find on the Hazelcast website.
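For illustration, a minimal sketch of what such support could look like (a hypothetical helper, not the current HazelcastClusterManager code; ConfigLoader is an invented name):

import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.config.Config;
import com.hazelcast.config.FileSystemXmlConfig;

import java.io.FileNotFoundException;

public final class ConfigLoader {

  public static Config load() throws FileNotFoundException {
    // Honour -Dhazelcast.config if it is set, matching the Hazelcast documentation.
    String path = System.getProperty("hazelcast.config");
    if (path != null) {
      return new FileSystemXmlConfig(path);
    }
    // Fall back to the current opinionated default: cluster.xml on the classpath.
    return new ClasspathXmlConfig("cluster.xml");
  }
}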

Unconventional serialization / deserialization pattern, incompatible with many serialization libraries and formats

Observed:

clusterSerializable.readFromBuffer(0, Buffer.buffer(bytes));

Most serialization frameworks take data as an input and return a new instance themselves. For example, to rapidly support my existing serialization pattern, I'd like to do...

public interface DefaultClusterSerializable extends ClusterSerializable {

	default void writeToBuffer(Buffer buffer) {
		try {
			// Java serialization of `this` into the Buffer
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}

	default int readFromBuffer(int pos, Buffer buffer) {
		// ??? -- there is no sensible way to implement this: the serialization
		// framework already produces a brand new instance, but this method is
		// expected to mutate `this` in place.
	}
}

Expected:

ClusterSerializable could support an encoder pattern like the EventBus does.

Or, ClusterSerializable could add a method that returns a new instance. However, the tricks you would need in the caller (i.e., ConversionUtils) would be very hairy if you wanted to avoid allocating a throwaway instance just to call an instance method.
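For illustration, a hedged sketch of what an encoder-style SPI could look like, mirroring io.vertx.core.eventbus.MessageCodec (ClusterEncoder is an invented name, not an existing Vert.x interface):

import io.vertx.core.buffer.Buffer;

public interface ClusterEncoder<T> {

  // Serialize the value into the buffer, like MessageCodec#encodeToWire.
  void encode(T value, Buffer buffer);

  // Decode and return a brand new instance from the wire data, like
  // MessageCodec#decodeFromWire, instead of mutating an existing instance.
  T decode(int pos, Buffer buffer);
}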

EventLoop thread blocked on HazelcastLock.release()

Vert.x 3.2.1.
The event loop thread is blocked on HazelcastLock.release().

Stacktrace:

19:11:04.917 [vertx-blocked-thread-checker] WARN  i.v.core.impl.BlockedThreadChecker - Thread Thread[vert.x-eventloop-thread-0,5,main] has been blocked for 5014 ms, time limit is 2000
io.vertx.core.VertxException: Thread blocked
    at java.lang.Object.wait(Native Method) ~[na:1.8.0]
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.pollResponse(InvocationFuture.java:299) ~[hazelcast-3.5.2.jar:3.5.2]
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:247) ~[hazelcast-3.5.2.jar:3.5.2]
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224) ~[hazelcast-3.5.2.jar:3.5.2]
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204) ~[hazelcast-3.5.2.jar:3.5.2]
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.getSafely(InvocationFuture.java:216) ~[hazelcast-3.5.2.jar:3.5.2]
    at com.hazelcast.concurrent.semaphore.SemaphoreProxy.release(SemaphoreProxy.java:115) ~[hazelcast-3.5.2.jar:3.5.2]
    at com.hazelcast.concurrent.semaphore.SemaphoreProxy.release(SemaphoreProxy.java:106) ~[hazelcast-3.5.2.jar:3.5.2]
    at io.vertx.spi.cluster.hazelcast.HazelcastClusterManager$HazelcastLock.release(HazelcastClusterManager.java:388) ~[vertx-hazelcast-3.2.1.jar:na]
    ....
    at io.vertx.ext.jdbc.impl.actions.AbstractJDBCAction$$Lambda$150/1550800554.handle(Unknown Source) ~[na:na]
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$18(ContextImpl.java:335) ~[vertx-core-3.2.1.jar:na]
    at io.vertx.core.impl.ContextImpl$$Lambda$8/1067262186.run(Unknown Source) ~[na:na]
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
    at java.lang.Thread.run(Thread.java:744) ~[na:1.8.0]

Maybe you forgot to wrap the call in something like vertx.executeBlocking in the Lock implementation.
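For illustration, a hedged sketch of that suggestion, assuming the Vert.x 3.x executeBlocking API; vertx and semaphore stand in for the fields a HazelcastLock would hold:

public void release() {
  vertx.executeBlocking(fut -> {
    // The blocking Hazelcast semaphore call now runs on a worker thread,
    // not on the event loop.
    semaphore.release();
    fut.complete();
  }, res -> {
    // Nothing else to do; a failure could be logged here.
  });
}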

Vert.x doesn't send messages immediately while processing handlers!

It seems that Vert.x only sends clustered messages when it is idle (nothing left to process).
That means that while a Vert.x instance is very busy, no messages are dispatched on the event bus until it becomes completely idle, which could be a big limitation when there is a need to send messages immediately.

What is the solution to force Vert.x to send a message immediately, without waiting for it to become idle?

In order to demonstrate this issue, I have created two very simple verticles:

  • Verticle ping: sends 60 messages to pong and prints the replies.
  • Verticle pong: emulates 50 ms of processing time and replies to the message.

Scenarios: (cluster mode)

  • Test1: pong is a worker verticle:
  • Test2: pong is a multithread worker verticle:
java -cp 'target/*' io.vertx.core.Launcher start my.pacakge.Pong -id pong -worker -cluster
java -cp 'target/*' io.vertx.core.Launcher start my.pacakge.Ping -id ping -worker -cluster

For these tests the ping verticle is a worker verticle ([w-##] means vert.x-worker-thread-##); however, it is not relevant whether it is a standard verticle, because both configurations behave similarly. Both ping and pong suffer from the reported issue, and therefore the messages were stuck on the event bus until the instance was idle (nothing left to process).

Note that if ping is a multithreaded worker, the extra 1 second of work is not noticeable and the entire run is 1 second quicker (see the ping code); however, pong still suffers from the reported issue.

ping verticle code:

public class Ping extends AbstractVerticle {
    private final static Logger LOG = LoggerFactory.getLogger(Ping.class);

    @Override
    public void start() {
        for (int i = 1; i <= 60; i++) {
            String ball = String.format("ball%02d", i);
            getVertx().eventBus().send("table", ball, response -> {
                if (response.succeeded()) {
                    LOG.info("ping> recv " + response.result().body());
                } else {
                    LOG.error("ping> fail " + response.cause());
                }
            });
        }
        LOG.info("ping> send ball01 to ball60");
        doExtraStuff(); // 1 second wait
    }

    private static void doExtraStuff() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}

pong verticle code:

public class Pong extends AbstractVerticle {
    private final static Logger LOG = LoggerFactory.getLogger(Pong.class);

    @Override
    public void start() {
        getVertx().eventBus().consumer("table", message -> {
            String ball = (String)message.body();
            LOG.info("pong> recv " + ball);
            process(); // 50ms wait
            LOG.info("pong> send " + ball);
            message.reply(ball);
        });
    }

    private static void process() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
  • Test 1 results: ping and pong are worker verticles:
[2016-09-04 02:12:23.169] [w-02] ping> send ball01 to ball60 

# -> (ping is doing extra stuff for 1 second and the 60 messages aren't dispatched to pong)

[2016-09-04 02:12:24.294]                              [w-03] pong> recv ball01 
[2016-09-04 02:12:24.349]                              [w-03] pong> send ball01 
[2016-09-04 02:12:24.364]                              [w-03] pong> recv ball02 
[2016-09-04 02:12:24.415]                              [w-03] pong> send ball02 
#... (2x57 lines - recv and send for ball03 to ball59 - time between recv and send is ~50ms)
[2016-09-04 02:12:27.392]                              [w-03] pong> recv ball60 
[2016-09-04 02:12:27.443]                              [w-03] pong> send ball60 

# -> (pong is idle and only now the 60 messages are dispatched to ping)
# -> (ping didn't receive answers for at least 3 seconds; pong processing time is 50ms!)

[2016-09-04 02:12:27.463] [w-04] ping> recv ball01 
[2016-09-04 02:12:27.465] [w-04] ping> recv ball02 
#... (57 lines - recv for ball03 to ball59)
[2016-09-04 02:12:27.504] [w-04] ping> recv ball60 

# -> (ping received the 60 reply messages practically in one go!)
  • Test 2 results: ping is a worker verticle and pong is a multithreaded worker verticle:
[2016-09-04 02:12:37.623] [w-02] ping> send ball01 to ball60 

# -> (ping is doing extra stuff for 1 second and the 60 messages aren't dispatched to pong)

[2016-09-04 02:12:38.754]                              [w-12] pong> ball01 recv 
[2016-09-04 02:12:38.754]                              [w-13] pong> ball02 recv 
#... (17 lines - recv for random bal01 to bal20 except ball01, ball02 and ball18)
[2016-09-04 02:12:38.760]                              [w-09] pong> ball18 recv 

# -> (pong doesn't have more idle threads - all 20 pool threads are processing for 50ms)

[2016-09-04 02:12:38.805]                              [w-13] pong> ball02 send 
[2016-09-04 02:12:38.805]                              [w-14] pong> ball03 send 
[2016-09-04 02:12:38.805]                              [w-13] pong> ball03 recv
#... (2x18 lines - with random send/recv bal01 to bal20 except already displayed balls)
[2016-09-04 02:12:38.835]                              [w-12] pong> ball40 recv 

# -> (pong doesn't have more idle threads - all 20 pool threads are processing for 50ms)

[2016-09-04 02:12:38.872]                              [w-16] pong> ball26 send 
[2016-09-04 02:12:38.872]                              [w-06] pong> ball27 send 
[2016-09-04 02:12:38.873]                              [w-16] pong> ball41 recv 
#... (2x17 lines - with random send/recv bal21 to bal40 except already displayed balls)
[2016-09-04 02:12:38.882]                              [w-02] pong> ball59 recv 
[2016-09-04 02:12:38.887]                              [w-14] pong> ball40 send 
[2016-09-04 02:12:38.888]                              [w-14] pong> ball60 recv 

# -> (pong doesn't have more idle threads - all 20 pool threads are processing for 50ms)

[2016-09-04 02:12:38.925]                              [w-16] pong> ball41 send 
[2016-09-04 02:12:38.926]                              [w-19] pong> ball44 send 
#... (17 lines - send for random bal41 to bal60 except ball41, ball44 and ball60)
[2016-09-04 02:12:38.938]                              [w-14] pong> ball60 send 

# -> (pong has idle threads and only now the 60 messages are dispatched to ping)
# -> (ping didn't receive answers for at least 150 ms; pong processing time is 50ms!)

[2016-09-04 02:12:38.945] [w-04] ping> recv ball09 
[2016-09-04 02:12:38.959] [w-04] ping> recv ball04 
#... (57 lines - recv for random bal01 to bal60 except ball09, ball04 and ball44)
[2016-09-04 02:12:39.004] [w-04] ping> recv ball44 

# -> (ping received the 60 reply messages practically in one go!)

What is the solution to force Vert.x to send a message immediately, without waiting for it to become idle?
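A commonly suggested workaround (a hedged sketch, not an answer recorded in this thread): do not block the verticle's thread, so that queued event-bus tasks can be processed in between. For example, the ping verticle could run doExtraStuff on the worker pool via executeBlocking:

@Override
public void start() {
    for (int i = 1; i <= 60; i++) {
        String ball = String.format("ball%02d", i);
        getVertx().eventBus().send("table", ball, response -> { /* handle reply as before */ });
    }
    LOG.info("ping> send ball01 to ball60");
    // Run the blocking "extra stuff" off this verticle's thread, so the
    // pending event-bus work is not held up behind the 1 second sleep.
    getVertx().executeBlocking(fut -> {
        doExtraStuff(); // 1 second wait
        fut.complete();
    }, res -> LOG.info("ping> extra stuff done"));
}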

HazelcastAsyncMultiMap: Near-cache is not used under constant load

This results in poor performance: a Hazelcast address lookup on every event-bus message drops the rate by 10x.

The problem seems to come from this commit; the algorithm does not converge:

7199d39

In the case of constant concurrent accesses to HazelcastAsyncMultiMap, the variable inProgressCount will never be 0, thus the cache will never be used even though there is data in it.

It requires one consumer to unregister or register to bring the value to 1 again, and then concurrent accesses will keep it high.
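For illustration, a minimal self-contained sketch of the non-converging guard (this is the shape of the logic described above, not the actual vertx-hazelcast code; the class and method names are invented):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

class NearCacheGuardSketch {
  private final Map<String, Object> cache = new ConcurrentHashMap<>();
  private final AtomicInteger inProgressCount = new AtomicInteger();

  void get(String address, Consumer<Object> handler) {
    if (inProgressCount.get() == 0 && cache.containsKey(address)) {
      handler.accept(cache.get(address)); // fast path: only taken when nothing is in flight
      return;
    }
    inProgressCount.incrementAndGet();
    remoteLookup(address, result -> {     // slow Hazelcast round trip
      cache.put(address, result);
      inProgressCount.decrementAndGet();  // under constant load this never gets back to 0
      handler.accept(result);
    });
  }

  private void remoteLookup(String address, Consumer<Object> handler) {
    // Stand-in for the asynchronous Hazelcast multimap lookup.
    handler.accept(new Object());
  }
}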

connecting to an existing hazelcast cluster in a rancher/docker network

Hi there! We are using vertx (3.4.0) in following docker setup:

We have a cross-host Rancher/Docker network that all our containers are deployed to. The core of our Vert.x communication is a Hazelcast cluster of standalone Hazelcast containers independent of Vert.x, apart from the map configuration from here.

Our vertx service containers connect to this hazelcast cluster as hazelcast clients. This works perfectly from inside the rancher network.

Now we would like to be able to connect to the cluster from outside the rancher network, e.g. to debug directly from the IDE. Therefore we opened the hazelcast port 5701 to access containers from outside the network via the hosts. When using plain hazelcast this works fine. Unfortunately it does not if we use the hazelcast instance in this vertx cluster manager.

After it says HazelcastClient 3.8 is CLIENT_CONNECTED, sometimes nothing happens and sometimes the event loop thread gets blocked forever. The same code works when connecting to a Hazelcast cluster in the same network as the client (e.g. localhost). It seems like Vert.x tries to use the internal IP of the Rancher network for communication, instead of the external host IP.
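For context, the client setup roughly corresponds to this sketch (assuming Hazelcast 3.8 client APIs and the HazelcastClusterManager constructor that takes an existing HazelcastInstance; the address is an example value):

ClientConfig clientConfig = new ClientConfig();
// Address of a cluster member as reachable from outside the Rancher network (example value).
clientConfig.getNetworkConfig().addAddress("external-host.example.com:5701");

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
ClusterManager mgr = new HazelcastClusterManager(client);

Vertx.clusteredVertx(new VertxOptions().setClusterManager(mgr), res -> {
  if (res.failed()) {
    res.cause().printStackTrace();
  }
});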

It would be really kind if you could give a hint as to whether this could be a Vert.x issue. I can provide more information on this if needed.

Greetings!

OSX unexpected event bus message body

Stacktrace

Starting test: HazelcastClusteredEventbusTest#sendNoContext 
[LOCAL] [dev] [3.4] Picked Address[127.0.0.1]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true 
[LOCAL] [dev] [3.4] Picked Address[127.0.0.1]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true 
[127.0.0.1]:5701 [dev] [3.4] Backpressure is disabled 
[127.0.0.1]:5702 [dev] [3.4] Backpressure is disabled 
[127.0.0.1]:5701 [dev] [3.4] Starting with 4 generic operation threads and 8 partition operation threads. 
[127.0.0.1]:5702 [dev] [3.4] Starting with 4 generic operation threads and 8 partition operation threads. 
[127.0.0.1]:5701 [dev] [3.4] Hazelcast 3.4 (20141224 - 3dc5214) starting at Address[127.0.0.1]:5701 
[127.0.0.1]:5701 [dev] [3.4] Copyright (C) 2008-2014 Hazelcast.com 
[127.0.0.1]:5702 [dev] [3.4] Hazelcast 3.4 (20141224 - 3dc5214) starting at Address[127.0.0.1]:5702 
[127.0.0.1]:5702 [dev] [3.4] Copyright (C) 2008-2014 Hazelcast.com 
[127.0.0.1]:5701 [dev] [3.4] Creating MulticastJoiner 
[127.0.0.1]:5702 [dev] [3.4] Creating MulticastJoiner 
[127.0.0.1]:5701 [dev] [3.4] Address[127.0.0.1]:5701 is STARTING 
[127.0.0.1]:5702 [dev] [3.4] Address[127.0.0.1]:5702 is STARTING 
[127.0.0.1]:5701 [dev] [3.4] 


Members [1] {
    Member [127.0.0.1]:5701 this
}

[127.0.0.1]:5701 [dev] [3.4] Address[127.0.0.1]:5701 is STARTED 
[127.0.0.1]:5701 [dev] [3.4] Initializing cluster partition table first arrangement... 
[127.0.0.1]:5702 [dev] [3.4] Trying to join to discovered node: Address[127.0.0.1]:5701 
[127.0.0.1]:5702 [dev] [3.4] Connecting to /127.0.0.1:5701, timeout: 0, bind-any: true 
[127.0.0.1]:5701 [dev] [3.4] Accepting socket connection from /127.0.0.1:62996 
[127.0.0.1]:5702 [dev] [3.4] Established socket connection between /127.0.0.1:62996 and localhost/127.0.0.1:5701 
[127.0.0.1]:5701 [dev] [3.4] Established socket connection between /127.0.0.1:5701 and localhost/127.0.0.1:62996 
[127.0.0.1]:5701 [dev] [3.4] 

Members [2] {
    Member [127.0.0.1]:5701 this
    Member [127.0.0.1]:5702
}

[127.0.0.1]:5702 [dev] [3.4] 

Members [2] {
    Member [127.0.0.1]:5701
    Member [127.0.0.1]:5702 this
}

[127.0.0.1]:5701 [dev] [3.4] Re-partitioning cluster data... Migration queue size: 135 
[127.0.0.1]:5701 [dev] [3.4] All migration tasks have been completed, queues are empty. 
[127.0.0.1]:5702 [dev] [3.4] Address[127.0.0.1]:5702 is STARTED 
java.lang.AssertionError: expected:<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 
725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999]> but was:<[64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 0, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 1, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 2, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 3, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 4, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 5, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 6, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 7, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 8, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 9, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 10, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 11, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 12, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 13, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 14, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 15, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 16, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 17, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 18, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 19, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 20, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 21, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 22, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 23, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 24, 25, 26, 475, 476, 27, 477, 478, 28, 479, 480, 481, 482, 
483, 484, 485, 486, 487, 29, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 30, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 31, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 32, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 33, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 34, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 35, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 36, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 37, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 38, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 39, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 40, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 41, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 42, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 43, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 44, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 45, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 46, 993, 994, 995, 996, 997, 998, 999, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63]>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:144)
    at io.vertx.test.core.AsyncTestBase.assertEquals(AsyncTestBase.java:354)
    at io.vertx.test.core.ClusteredEventBusTest.lambda$sendNoContext$1202(ClusteredEventBusTest.java:415)
    at io.vertx.test.core.ClusteredEventBusTest$$Lambda$123/23211803.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.EventBusImpl$HandlerRegistration.handle(EventBusImpl.java:1113)
    at io.vertx.core.eventbus.impl.EventBusImpl.lambda$doReceive$181(EventBusImpl.java:756)
    at io.vertx.core.eventbus.impl.EventBusImpl$$Lambda$88/1107661936.handle(Unknown Source)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$15(ContextImpl.java:312)
    at io.vertx.core.impl.ContextImpl$$Lambda$22/2009185574.run(Unknown Source)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at java.lang.Thread.run(Thread.java:745)
Unhandled exception 
java.lang.AssertionError: expected:<[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 
725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999]> but was:<[64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 0, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 1, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 2, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 3, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 4, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 5, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 6, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 7, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 8, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 9, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 10, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 11, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 12, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 13, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 14, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 15, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 16, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 17, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 18, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 19, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 20, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 21, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 22, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 23, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 24, 25, 26, 475, 476, 27, 477, 478, 28, 479, 480, 481, 482, 
483, 484, 485, 486, 487, 29, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 30, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 31, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 32, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 33, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 34, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 35, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 36, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 37, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 38, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 39, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 40, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 41, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 42, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 43, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 44, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 45, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 46, 993, 994, 995, 996, 997, 998, 999, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63]>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:144)
    at io.vertx.test.core.AsyncTestBase.assertEquals(AsyncTestBase.java:354)
    at io.vertx.test.core.ClusteredEventBusTest.lambda$sendNoContext$1202(ClusteredEventBusTest.java:415)
    at io.vertx.test.core.ClusteredEventBusTest$$Lambda$123/23211803.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.EventBusImpl$HandlerRegistration.handle(EventBusImpl.java:1113)
    at io.vertx.core.eventbus.impl.EventBusImpl.lambda$doReceive$181(EventBusImpl.java:756)
    at io.vertx.core.eventbus.impl.EventBusImpl$$Lambda$88/1107661936.handle(Unknown Source)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$15(ContextImpl.java:312)
    at io.vertx.core.impl.ContextImpl$$Lambda$22/2009185574.run(Unknown Source)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at java.lang.Thread.run(Thread.java:745)

[127.0.0.1]:5701 [dev] [3.4] Address[127.0.0.1]:5701 is SHUTTING_DOWN 
[127.0.0.1]:5702 [dev] [3.4] Address[127.0.0.1]:5702 is SHUTTING_DOWN 
[127.0.0.1]:5702 [dev] [3.4] Shutting down multicast service... 
[127.0.0.1]:5701 [dev] [3.4] Shutting down multicast service... 
[127.0.0.1]:5701 [dev] [3.4] Shutting down connection manager... 
[127.0.0.1]:5702 [dev] [3.4] Shutting down connection manager... 
[127.0.0.1]:5702 [dev] [3.4] Connection [Address[127.0.0.1]:5701] lost. Reason: Socket explicitly closed 
[127.0.0.1]:5701 [dev] [3.4] Connection [Address[127.0.0.1]:5702] lost. Reason: Socket explicitly closed 
[127.0.0.1]:5702 [dev] [3.4] Shutting down node engine... 
[127.0.0.1]:5701 [dev] [3.4] Shutting down node engine... 
[127.0.0.1]:5701 [dev] [3.4] Destroying node NodeExtension. 
[127.0.0.1]:5702 [dev] [3.4] Destroying node NodeExtension. 
[127.0.0.1]:5701 [dev] [3.4] Hazelcast Shutdown is completed in 8 ms. 
[127.0.0.1]:5702 [dev] [3.4] Hazelcast Shutdown is completed in 8 ms. 
[127.0.0.1]:5701 [dev] [3.4] Address[127.0.0.1]:5701 is SHUTDOWN 
[127.0.0.1]:5702 [dev] [3.4] Address[127.0.0.1]:5702 is SHUTDOWN 

When on the same PC it's OK, but on a different PC there is an error

Feb 24, 2018 11:37:24 AM io.vertx.core.eventbus.impl.clustered.ConnectionHolder
WARNING: Connecting to server localhost:5614 failed
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: localhost/127.0.0.1:5614
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:325)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.ConnectException: Connection refused: no further information
... 11 more

source code:

/**
 * Starts the cluster. Responsible for sending a message to the other node.
 *
 * @author Administrator
 */
public class APPClusterSlave {

    public static void main(String[] args) {
        // Config config = new ClasspathXmlConfig("xmlconfig/simple-config.xml");
        ClusterManager mgr = new HazelcastClusterManager();
        VertxOptions options = new VertxOptions().setClusterManager(mgr);
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                System.out.println("ffffffffffffff");
                Vertx vertx = res.result();
                vertx.eventBus().send("testMsg", "1111");
                // vertx.setPeriodic(1000, id -> {
                //     vertx.eventBus().send("testMsg", "laofan send::" + new Date().toGMTString());
                //     vertx.eventBus().send("testMsg1", "laofan send::" + new Date().toGMTString());
                // });
            } else {
                // clustering failed, nothing is done here
            }
        });
    }
}

public static void main(String[] args) {
    // Config config = new ClasspathXmlConfig("xmlconfig/simple-config.xml");
    ClusterManager mgr = new HazelcastClusterManager();
    VertxOptions options = new VertxOptions().setClusterManager(mgr);
    Vertx.clusteredVertx(options, res -> {
        if (res.succeeded()) {
            Vertx vertx = res.result();

            vertx.eventBus().consumer("testMsg", str -> {
                System.out.println("Received message: " + str.body());
                try {
                    Thread.sleep(1500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            });

            vertx.eventBus().consumer("testMsg1", str -> {
                System.out.println("Received message 1: " + str.body());
                try {
                    Thread.sleep(1500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            });
        } else {
            // clustering failed, nothing is done here
        }
    });
}
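A hedged note, not a confirmed answer from this thread: the "Connecting to server localhost:5614 failed" warning usually indicates that the event bus advertised localhost to the remote node. With Vert.x 3.x, the commonly suggested adjustment is to set the cluster host to an address the other machine can reach, for example:

ClusterManager mgr = new HazelcastClusterManager();
VertxOptions options = new VertxOptions()
    .setClusterManager(mgr)
    .setClusterHost("192.168.1.10"); // replace with this machine's LAN address (example value)

Vertx.clusteredVertx(options, res -> {
  if (res.succeeded()) {
    res.result().eventBus().send("testMsg", "1111");
  } else {
    res.cause().printStackTrace();
  }
});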

Cluster manager is corrupted after merging of hazelcast partition

I think there is a problem: Vert.x uses the Hazelcast local endpoint UUID as a unique, constant identification of the node; see io.vertx.spi.cluster.hazelcast.HazelcastClusterManager#getNodeID and the initialization of the nodeID field: nodeID = hazelcast.getLocalEndpoint().getUuid() [in io.vertx.spi.cluster.hazelcast.HazelcastClusterManager#join, called from io.vertx.core.impl.VertxImpl#VertxImpl()].

This nodeID is used to register the node in the multimap of topic subscribers, "__vertx.subs", and it is used e.g. to remove subscribers of disconnected nodes (lambda in io.vertx.core.eventbus.impl.clustered.ClusteredEventBus#setClusterViewChangedHandler).

But it looks like in some situations this UUID is regenerated, see com.hazelcast.instance.Node#setNewLocalMember, e.g. during merging of Hazelcast partitions.

After that, Hazelcast knows the new node UUID, but Vert.x still registers topics under the old value; I did not find any place where the nodeID would be updated. And the lambda from io.vertx.core.eventbus.impl.clustered.ClusteredEventBus#setClusterViewChangedHandler will remove the subscribers of this node from the subscriber multimap.
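As a quick way to observe the described mismatch right when it happens (in addition to the periodic check below), one could listen for the MERGED lifecycle event; a hedged sketch assuming Hazelcast 3.x lifecycle APIs, where hazelcast, clusterManager and LOGGER stand in for instances already available in the application:

hazelcast.getLifecycleService().addLifecycleListener(event -> {
  if (event.getState() == LifecycleEvent.LifecycleState.MERGED) {
    String newUuid = hazelcast.getLocalEndpoint().getUuid();
    if (!newUuid.equals(clusterManager.getNodeID())) {
      // The cluster manager still holds the UUID captured at join time.
      LOGGER.error("Hazelcast local endpoint UUID {} differs from Vert.x nodeID {}",
          newUuid, clusterManager.getNodeID());
    }
  }
});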

I added a logging mechanism which every 30 s compares the nodeID from Hazelcast and from the Vert.x HazelcastClusterManager:

final ClusterManager clusterManager = ((VertxImpl) vertx).getClusterManager();
final String currentNodeID = ((VertxImpl) vertx).getNodeID();

if (clusterManager instanceof HazelcastClusterManager) {
    String currentHazelcastNodeID = ((HazelcastClusterManager) clusterManager).getHazelcastInstance().getLocalEndpoint().getUuid();
    if (!currentNodeID.equals(currentHazelcastNodeID)) {
            getLogger().error("Hazelcast local endpoint {} UUID {} differs from Vertx NodeId {}",
                    ((HazelcastClusterManager) clusterManager).getHazelcastInstance().getLocalEndpoint().getSocketAddress().toString(),
                    currentHazelcastNodeID, currentNodeID);
    }
}

And after the Hazelcast cluster merge, this is in the log:

TID: [2018-06-21 15:01:39,947] WARN [c.h.i.c.i.DiscoveryJoiner] (hz.MCI_SERVICE_CAMPAIGN.cached.thread-11) [] - [10.148.250.33]:5703 [hazelcast-consul-discovery-spi] [3.8.2] [10.148.250.33]:5703 is merging [tcp/ip] to [10.148.250.34]:5702
TID: [2018-06-21 15:01:39,973] WARN [c.h.i.c.i.o.MergeClustersOperation] (hz.MCI_SERVICE_CAMPAIGN.cached.thread-11) [] - [10.148.250.33]:5703 [hazelcast-consul-discovery-spi] [3.8.2] [10.148.250.33]:5703 is merging to [10.148.250.34]:5702, because: instructed by master [10.148.250.33]:5703
TID: [2018-06-21 15:01:39,977] INFO [c.c.m.l.c.m.h.l.NodeLifecycleListener] (hz.MCI_SERVICE_CAMPAIGN.cached.thread-17) [] - Hazelcast state changed: LifecycleEvent [state=MERGING]
TID: [2018-06-21 15:01:39,978] WARN [c.hazelcast.instance.Node] (hz.MCI_SERVICE_CAMPAIGN.cached.thread-17) [] - [10.148.250.33]:5703 [hazelcast-consul-discovery-spi] [3.8.2] Setting new local member. old uuid: 82ffa5f9-f059-48be-be16-7528c547fdd8 new uuid: 2446732d-70df-4201-bccb-7bec82f384fd
TID: [2018-06-21 15:01:46,082] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5702 - 455989de-a9bc-4964-83d1-ec463bdda952,type=added}
TID: [2018-06-21 15:01:46,082] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.37]:5701 - efef4cfe-8463-4e3b-aa34-eca29b0b6157,type=added}
TID: [2018-06-21 15:01:46,082] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5701 - 7efdcfcb-5460-4e8d-ac61-1ac1a8eaba8b,type=added}
TID: [2018-06-21 15:01:46,082] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5706 - 26cf5948-8718-4230-a5bc-1b9ee0ed6015,type=added}
TID: [2018-06-21 15:01:46,082] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5707 - dcfdeb3d-6ebb-474d-80bc-9bfece2d771a,type=added}
TID: [2018-06-21 15:01:46,082] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5702 - aaf66b6b-4026-452c-80d4-cd6cd15fa3a9,type=added}
TID: [2018-06-21 15:01:46,083] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5708 - 72ee2e95-f237-4687-9b1e-973c9cd427b6,type=added}
TID: [2018-06-21 15:01:46,083] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5709 - 170ab501-1b1a-48b1-ad7f-e4cbe12fa5dc,type=added}
TID: [2018-06-21 15:01:46,083] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5703 - ee16f3e5-88f8-4fa4-9efb-a06749ee0996,type=added}
TID: [2018-06-21 15:01:46,083] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5710 - 9f9b3669-4d8b-42c8-b724-52286908f6e0,type=added}
TID: [2018-06-21 15:01:46,083] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5706 - e14bd98a-060a-460f-b352-2fb39399101a,type=added}
TID: [2018-06-21 15:01:46,083] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5711 - eaeeca1d-5c20-4847-8d34-388ed2167f4c,type=added}
TID: [2018-06-21 15:01:46,087] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5704 - 800be4ec-4921-46c8-b20e-067fe4ac3f84,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5705 - 80e49d13-6de8-409e-85ac-59f17deb8f9e,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5703 - fa85141d-b02c-4078-91c6-ed66cd176452,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5704 - b82add35-1d7e-43c8-9388-fdfd997f4121,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5701 - 39e7ae3e-bb62-47c1-8da7-190669c058ef,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.34]:5705 - 69b6f627-3840-4b6e-9705-dfd158e64dc3,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5707 - 92d52042-b967-4075-b4b9-f9023bba2d49,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5708 - 09771edf-56fd-496e-a958-725fd4120357,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5709 - 4cd1905b-2e71-4bbd-8733-5f7e5268f30d,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.238.196]:5710 - 366ebd54-714c-4ed4-8637-03cdadac87fe,type=added}
TID: [2018-06-21 15:01:46,088] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.33]:5701 - 764ba439-4812-418d-975c-0c3ad4a84b0f,type=added}
TID: [2018-06-21 15:01:46,089] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.33]:5709 - 23f95919-4682-4003-be2e-cac876b47f70,type=added}
TID: [2018-06-21 15:01:46,089] DEBUG [c.c.m.l.c.m.h.l.ClusterMembershipListener] (hz.MCI_SERVICE_CAMPAIGN.event-7) [] - Hazelcast member added: MembershipEvent {member=Member [10.148.250.33]:5702 - 2991f9e2-72d8-49e2-9135-b3dd964fe53d,type=added}
TID: [2018-06-21 15:01:46,325] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-6) [] - Hazelcast migration started: MigrationEvent{partitionId=0, status=STARTED, oldOwner=Member [10.148.250.33]:5701 - 764ba439-4812-418d-975c-0c3ad4a84b0f, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,359] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-6) [] - Hazelcast migration completed: MigrationEvent{partitionId=0, status=COMPLETED, oldOwner=Member [10.148.250.33]:5701 - 764ba439-4812-418d-975c-0c3ad4a84b0f, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,646] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-8) [] - Hazelcast migration started: MigrationEvent{partitionId=52, status=STARTED, oldOwner=Member [10.148.238.196]:5710 - 366ebd54-714c-4ed4-8637-03cdadac87fe, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,656] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-8) [] - Hazelcast migration completed: MigrationEvent{partitionId=52, status=COMPLETED, oldOwner=Member [10.148.238.196]:5710 - 366ebd54-714c-4ed4-8637-03cdadac87fe, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,685] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-10) [] - Hazelcast migration started: MigrationEvent{partitionId=64, status=STARTED, oldOwner=Member [10.148.250.34]:5701 - 39e7ae3e-bb62-47c1-8da7-190669c058ef, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,693] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-10) [] - Hazelcast migration completed: MigrationEvent{partitionId=64, status=COMPLETED, oldOwner=Member [10.148.250.34]:5701 - 39e7ae3e-bb62-47c1-8da7-190669c058ef, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,755] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-9) [] - Hazelcast migration started: MigrationEvent{partitionId=33, status=STARTED, oldOwner=Member [10.148.238.196]:5703 - ee16f3e5-88f8-4fa4-9efb-a06749ee0996, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,755] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-9) [] - Hazelcast migration completed: MigrationEvent{partitionId=33, status=COMPLETED, oldOwner=Member [10.148.238.196]:5703 - ee16f3e5-88f8-4fa4-9efb-a06749ee0996, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,755] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-9) [] - Hazelcast migration started: MigrationEvent{partitionId=58, status=STARTED, oldOwner=Member [10.148.250.34]:5706 - 26cf5948-8718-4230-a5bc-1b9ee0ed6015, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,755] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-9) [] - Hazelcast migration completed: MigrationEvent{partitionId=58, status=COMPLETED, oldOwner=Member [10.148.250.34]:5706 - 26cf5948-8718-4230-a5bc-1b9ee0ed6015, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,779] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-9) [] - Hazelcast migration started: MigrationEvent{partitionId=103, status=STARTED, oldOwner=Member [10.148.250.34]:5703 - fa85141d-b02c-4078-91c6-ed66cd176452, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,788] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-9) [] - Hazelcast migration completed: MigrationEvent{partitionId=103, status=COMPLETED, oldOwner=Member [10.148.250.34]:5703 - fa85141d-b02c-4078-91c6-ed66cd176452, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,804] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-10) [] - Hazelcast migration started: MigrationEvent{partitionId=124, status=STARTED, oldOwner=Member [10.148.250.37]:5701 - efef4cfe-8463-4e3b-aa34-eca29b0b6157, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,822] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-10) [] - Hazelcast migration completed: MigrationEvent{partitionId=124, status=COMPLETED, oldOwner=Member [10.148.250.37]:5701 - efef4cfe-8463-4e3b-aa34-eca29b0b6157, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,875] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-10) [] - Hazelcast migration started: MigrationEvent{partitionId=169, status=STARTED, oldOwner=Member [10.148.238.196]:5701 - 7efdcfcb-5460-4e8d-ac61-1ac1a8eaba8b, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,892] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-10) [] - Hazelcast migration completed: MigrationEvent{partitionId=169, status=COMPLETED, oldOwner=Member [10.148.238.196]:5701 - 7efdcfcb-5460-4e8d-ac61-1ac1a8eaba8b, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,892] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-8) [] - Hazelcast migration started: MigrationEvent{partitionId=187, status=STARTED, oldOwner=Member [10.148.250.34]:5708 - 72ee2e95-f237-4687-9b1e-973c9cd427b6, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,907] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-8) [] - Hazelcast migration completed: MigrationEvent{partitionId=187, status=COMPLETED, oldOwner=Member [10.148.250.34]:5708 - 72ee2e95-f237-4687-9b1e-973c9cd427b6, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,928] DEBUG [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-6) [] - Hazelcast migration started: MigrationEvent{partitionId=200, status=STARTED, oldOwner=Member [10.148.238.196]:5705 - 80e49d13-6de8-409e-85ac-59f17deb8f9e, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:46,944] INFO [c.c.m.l.c.m.h.l.ClusterMigrationListener] (hz.MCI_SERVICE_CAMPAIGN.event-6) [] - Hazelcast migration completed: MigrationEvent{partitionId=200, status=COMPLETED, oldOwner=Member [10.148.238.196]:5705 - 80e49d13-6de8-409e-85ac-59f17deb8f9e, newOwner=Member [10.148.250.33]:5703 - 2446732d-70df-4201-bccb-7bec82f384fd this}
TID: [2018-06-21 15:01:47,279] ERROR [c.c.m.s.c.Application] (vert.x-eventloop-thread-0) [] - Hazelcast local endpoint /10.148.250.33:5703 UUID 2446732d-70df-4201-bccb-7bec82f384fd differs from Vertx NodeId 82ffa5f9-f059-48be-be16-7528c547fdd8

But newly registered subscribers are still registered under 82ffa5f9-f059-48be-be16-7528c547fdd8. I registered some subscribers after this operation, and the MultiMap contains, for example, this:

{
  "key": "topic-getCampaignMaterials",
  "values": [
	{
	  "serverId": "10.148.250.34:15702", -- subsriber from another node
	  "nodeId": "455989de-a9bc-4964-83d1-ec463bdda952"
	},
	{
	  "serverId": "10.148.250.33:15703", -- Hazelcast port + 10000
	  "nodeId": "82ffa5f9-f059-48be-be16-7528c547fdd8"
	}
  ]
}

but 82ffa5f9-f059-48be-be16-7528c547fdd8 is not in the list of Hazelcast members; there is only the UUID 2446732d-70df-4201-bccb-7bec82f384fd for [10.148.250.33]:5703. And if nodes are removed from or added to the cluster, the lambda in io.vertx.core.eventbus.impl.clustered.ClusteredEventBus#setClusterViewChangedHandler will remove these subscribers, I think.

The subscribers registered earlier from this node are also lost, because multimap recovery was only implemented in Hazelcast in recent releases. I tried the latest Hazelcast release, where multimap recovery is possibly solved, but the problem of the stale nodeId/UUID remains, so the subscribers are removed from the map by Vert.x anyway.

Shouldn't the nodeId be updated after a Hazelcast merge notification?

Used versions: Vert.x 3.5.2, Hazelcast 3.8.2 (and 3.10.2).
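For illustration only (not a fix inside the cluster manager): a minimal sketch, assuming direct access to the underlying HazelcastInstance, of how the post-merge member UUID could be observed so it can be compared with the nodeId that Vert.x registered the subscribers under. The class name is hypothetical.

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.LifecycleEvent;
import com.hazelcast.core.LifecycleListener;

// Hypothetical helper: logs the local member UUID after a split-brain merge.
public class MergeWatcher implements LifecycleListener {

  private final HazelcastInstance hazelcast;

  public MergeWatcher(HazelcastInstance hazelcast) {
    this.hazelcast = hazelcast;
    hazelcast.getLifecycleService().addLifecycleListener(this);
  }

  @Override
  public void stateChanged(LifecycleEvent event) {
    if (event.getState() == LifecycleEvent.LifecycleState.MERGED) {
      // After a merge the local member may carry a new UUID, which no longer
      // matches the nodeId stored in the subscription entries created before the merge.
      String uuid = hazelcast.getCluster().getLocalMember().getUuid();
      System.out.println("Post-merge local member UUID: " + uuid);
    }
  }
}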

Link to original discussion topic.

Create README/docs

It should describe how this is used and configured, including cluster.xml, default-cluster.xml, etc.

Investigate the blocked thread when leaving the cluster.

It may be normal, but it is worth investigating.

When leaving the cluster, the calling thread is blocked, probably because rebalancing of the data is done in the background.

io.vertx.core.VertxException: Thread blocked
    at java.lang.Object.wait(Native Method)
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.pollResponse(InvocationFuture.java:300)
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:245)
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:222)
    at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:202)
    at com.hazelcast.concurrent.lock.LockProxySupport.tryLock(LockProxySupport.java:136)
    at com.hazelcast.concurrent.lock.LockProxySupport.tryLock(LockProxySupport.java:125)
    at com.hazelcast.concurrent.lock.LockProxy.tryLock(LockProxy.java:93)
    at io.vertx.spi.cluster.hazelcast.HazelcastClusterManager.beforeLeave(HazelcastClusterManager.java:315)
    at io.vertx.core.impl.VertxImpl.closeClusterManager(VertxImpl.java:439)
    at io.vertx.core.impl.VertxImpl.lambda$null$420(VertxImpl.java:471)
    at io.vertx.core.impl.VertxImpl$$Lambda$78/776405214.handle(Unknown Source)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.lambda$null$144(ClusteredEventBus.java:163)
    at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus$$Lambda$88/1936993836.handle(Unknown Source)
    at io.vertx.core.net.impl.NetServerImpl.lambda$executeCloseDone$30(NetServerImpl.java:375)
    at io.vertx.core.net.impl.NetServerImpl$$Lambda$96/1795172180.handle(Unknown Source)
    at io.vertx.core.impl.ContextImpl.lambda$wrapTask$3(ContextImpl.java:337)
    at io.vertx.core.impl.ContextImpl$$Lambda$13/2099574098.run(Unknown Source)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
    at java.lang.Thread.run(Thread.java:745)

[127.0.0.1]:5702 [dev] [3.6.1] Address[127.0.0.1]:5702 is SHUTTING_DOWN 
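Not a fix for the blocking inside beforeLeave() itself, but a sketch of one way to keep the wait off an event loop thread: close Vert.x from a plain thread (for example a shutdown hook) and block there until the close completes. Class and method names are hypothetical.

import java.util.concurrent.CountDownLatch;
import io.vertx.core.Vertx;

public class GracefulShutdown {

  // Install a shutdown hook that closes Vert.x from a non-event-loop thread,
  // so waiting on the Hazelcast lock in beforeLeave() does not trigger
  // the "Thread blocked" warning on an event loop.
  public static void install(Vertx vertx) {
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      CountDownLatch latch = new CountDownLatch(1);
      vertx.close(ar -> latch.countDown());
      try {
        latch.await();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }));
  }
}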

Update Hazelcast to 3.7+

We are using some Hazelcast 3.7 features in our Vert.x-based project. If there is no reason to stay at 3.6, it would be really convenient for us if Vert.x used 3.7 or 3.8 itself.

vert.x exception while shutting down

From @tobiloeb on May 24, 2016 14:0

Version

  • vert.x core: 3.2.1
  • vert.x web: 3.2.1
  • vert.x hazelcast: 3.2.1

Context

Hello,

We have started Vert.x in cluster mode with two nodes. Everything works fine until we shut down the servers; the shutdown does not complete correctly and fails with the attached exception.
Both servers are shut down at the same time via a JVM TERM signal.

Seems to be related to Hazelcast.

2016-05-24 15:50:43,034 INFO hz._hzInstance_1_dev.IO.thread-in-1 com.hazelcast.nio.tcp.TcpIpConnection - [server1]:38232 [dev] [3.5.2] Connection [Address[server2]:38232] lost. Reason: java.io.EOFException[Remote socket closed!]
2016-05-24 15:50:43,034 INFO hz._hzInstance_1_dev.IO.thread-in-0 com.hazelcast.nio.tcp.TcpIpConnection - [server1]:38232 [dev] [3.5.2] Connection [Address[server2]:38232] lost. Reason: java.io.EOFException[Remote socket closed!]
2016-05-24 15:50:43,107 WARN hz._hzInstance_1_dev.IO.thread-in-0 com.hazelcast.nio.tcp.ReadHandler - [server1]:38232 [dev] [3.5.2] hz._hzInstance_1_dev.IO.thread-in-0 Closing socket to endpoint Address[server2]:38232, Cause:java.io.EOFException: Remote socket closed!
2016-05-24 15:50:43,107 WARN hz._hzInstance_1_dev.IO.thread-in-1 com.hazelcast.nio.tcp.ReadHandler - [server1]:38232 [dev] [3.5.2] hz._hzInstance_1_dev.IO.thread-in-1 Closing socket to endpoint Address[server2]:38232, Cause:java.io.EOFException: Remote socket closed!
2016-05-24 15:50:43,157 INFO hz.ShutdownThread com.hazelcast.core.LifecycleService - [server1]:38232 [dev] [3.5.2] Address[server1]:38232 is SHUTTING_DOWN
2016-05-24 15:50:43,157 WARN hz.ShutdownThread com.hazelcast.instance.Node - [server1]:38232 [dev] [3.5.2] Terminating forcefully...
2016-05-24 15:50:43,158 INFO hz.ShutdownThread com.hazelcast.instance.Node - [server1]:38232 [dev] [3.5.2] Shutting down connection manager...
2016-05-24 15:50:43,164 INFO hz.ShutdownThread com.hazelcast.instance.Node - [server1]:38232 [dev] [3.5.2] Shutting down node engine...
2016-05-24 15:50:43,175 INFO hz.ShutdownThread com.hazelcast.instance.NodeExtension - [server1]:38232 [dev] [3.5.2] Destroying node NodeExtension.
2016-05-24 15:50:43,176 INFO hz.ShutdownThread com.hazelcast.instance.Node - [server1]:38232 [dev] [3.5.2] Hazelcast Shutdown is completed in 19 ms.
2016-05-24 15:50:43,214 ERROR vert.x-eventloop-thread-17 i.v.c.e.impl.clustered.ClusteredEventBus - Failed to remove sub
com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
at com.hazelcast.spi.AbstractDistributedObject.throwNotActiveException(AbstractDistributedObject.java:81)
at com.hazelcast.spi.AbstractDistributedObject.lifecycleCheck(AbstractDistributedObject.java:76)
at com.hazelcast.spi.AbstractDistributedObject.getNodeEngine(AbstractDistributedObject.java:70)
at com.hazelcast.multimap.impl.ObjectMultiMapProxy.remove(ObjectMultiMapProxy.java:125)
at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.lambda$remove$31(HazelcastAsyncMultiMap.java:123)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$17(ContextImpl.java:296)
at io.vertx.core.impl.OrderedExecutorFactory$OrderedExecutor.lambda$new$265(OrderedExecutorFactory.java:91)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-24 15:50:43,222 ERROR vert.x-eventloop-thread-10 io.vertx.core.impl.DeploymentManager - Failure in calling handler
com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
at com.hazelcast.spi.AbstractDistributedObject.getService(AbstractDistributedObject.java:93)
at com.hazelcast.map.impl.proxy.MapProxySupport.toData(MapProxySupport.java:1143)
at com.hazelcast.map.impl.proxy.MapProxyImpl.remove(MapProxyImpl.java:178)
at io.vertx.core.impl.HAManager.stop(HAManager.java:195)
at io.vertx.core.impl.VertxImpl.lambda$close$278(VertxImpl.java:461)
at io.vertx.core.impl.DeploymentManager.lambda$undeployAll$158(DeploymentManager.java:238)
at io.vertx.core.impl.DeploymentManager.lambda$reportResult$161(DeploymentManager.java:398)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$18(ContextImpl.java:335)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
2016-05-24 15:50:43,261 INFO hz.ShutdownThread com.hazelcast.core.LifecycleService - [server1]:38232 [dev] [3.5.2] Address[server1]:38232 is SHUTDOWN

Thanks in advance for any help
Tobias

Copied from original issue: vert-x3/vertx-web#384

Hazelcast cluster implementation tries to send events to dead nodes

Hi All,

I have a problem with Hazelcast cluster manager impl.

After I kill several nodes in the cluster, Vert.x still tries to send events to the dead nodes.
To fix this I have to restart all the nodes in the cluster.

My reproducer, with a description of how to reproduce the problem:
https://github.com/SergeyMagid/vertx-reproducer
It uses vertx-hazelcast:3.1.0-SNAPSHOT; with 3.0.0 I have the same problem.
I get this problem with both the AWS and multicast join configs.

I also tried to write a JUnit test, but after I kill several nodes the dead nodes still receive events. Maybe that's because the kill is simulated? It works as expected if I kill only one node.
https://github.com/SergeyMagid/vertx-hazelcast/blob/master/src/test/java/io/vertx/test/core/HazelcastClusteredEventbusTest.java

I also found that this callback https://github.com/vert-x3/vertx-hazelcast/blob/master/src/main/java/io/vertx/spi/cluster/hazelcast/impl/HazelcastAsyncMultiMap.java#L145 is not called from Hazelcast, so the cache can become corrupted, while the ClusterManager callback memberRemoved https://github.com/vert-x3/vertx-hazelcast/blob/master/src/main/java/io/vertx/spi/cluster/hazelcast/HazelcastClusterManager.java#L225 is called every time.

microservice jar file size

Hi,
I use Vert.x to create microservices running on OpenShift.
But I noticed that if I include this module as a dependency, my fat jars grow to a file size of > 50 MB.
Is there a way to keep my jars small? IMHO having microservice jars that big is not the idea behind microservices.

regards, Marco

AsyncMap timeout not cleared if put is invoked

See io.vertx.test.core.AsyncMapTest#testMapPutTtlThenPut:

    getVertx().sharedData().getAsyncMap("foo", onSuccess(map -> {
      map.put("pipo", "molo", 10, onSuccess(vd -> {
        map.put("pipo", "mili", onSuccess(vd2 -> {
          vertx.setTimer(20, l -> {
            getVertx().sharedData().getAsyncMap("foo", onSuccess(map2 -> {
              map2.get("pipo", onSuccess(res -> {
                assertEquals("mili", res);
                testComplete();
              }));
            }));
          });
        }));
      }));
    }));
    await();

This test fails with Hazelcast so for now it's ignored in this CM.

Interaction with the async map puts the request into an ended state

I'm attempting to use the async map to cache certain aspects of requests. This is done in some middleware, put onto the routing context, and then I call routingContext.next(). However, interaction with the async map causes an exception:
java.lang.IllegalStateException: Request has already been read

Using Hazelcast's map directly does not cause this problem. Somehow the event loop receives a LastHttpContent object and closes the request. I believe this has something to do with the event loop, because even scheduling Hazelcast's put and get on workers causes the same issue.

I've hacked up a repro here: https://gist.github.com/robbieknuth/67e46195b30c7f99b1d742e5af415887

The real meat and potatoes there is that the middleware interacts with the async map and then calls routingContext.next().

Using the repro, this is fine: http://127.0.0.1:9000/without
This blows up http://127.0.0.1:9000/with

Versions:

    compile group: 'io.vertx', name: 'vertx-hazelcast', version: '3.4.0'
    compile group: 'io.vertx', name: 'vertx-web', version: '3.4.0'
    compile group: 'io.vertx', name: 'vertx-core', version: '3.4.0'

Exception:

Apr 07, 2017 11:27:18 AM io.vertx.ext.web.impl.RoutingContextImplBase
SEVERE: Unexpected exception in route
java.lang.IllegalStateException: Request has already been read
	at io.vertx.core.http.impl.HttpServerRequestImpl.checkEnded(HttpServerRequestImpl.java:426)
	at io.vertx.core.http.impl.HttpServerRequestImpl.endHandler(HttpServerRequestImpl.java:239)
	at io.vertx.ext.web.impl.HttpServerRequestWrapper.endHandler(HttpServerRequestWrapper.java:57)
	at EntryPoint$Verticle.actualRoute(EntryPoint.java:122)
	at io.vertx.ext.web.impl.RouteImpl.handleContext(RouteImpl.java:217)
	at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:78)
	at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:133)
	at EntryPoint$Verticle.lambda$null$3(EntryPoint.java:106)
	at EntryPoint$HandlerCallBackAdapter.setResult(EntryPoint.java:260)
	at EntryPoint$HandlerCallBackAdapter.onResponse(EntryPoint.java:249)
	at com.hazelcast.util.executor.DelegatingFuture$DelegatingExecutionCallback.onResponse(DelegatingFuture.java:170)
	at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture$1.run(InvocationFuture.java:127)
	at EntryPoint$VertxExecutorAdapter.lambda$execute$0(EntryPoint.java:287)
	at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:324)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at java.lang.Thread.run(Thread.java:745)
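One workaround that is commonly suggested for this class of problem (a sketch, not necessarily the root-cause fix) is to pause the request before performing the async work in the middleware and resume it before calling next(), so the body and end events are not consumed while the async map call is in flight. The map name and context key below are hypothetical.

import io.vertx.core.Vertx;
import io.vertx.ext.web.RoutingContext;

public class CachingMiddleware {

  private final Vertx vertx;

  public CachingMiddleware(Vertx vertx) {
    this.vertx = vertx;
  }

  public void handle(RoutingContext ctx) {
    // Pause so the request body/end events are buffered while we go async.
    ctx.request().pause();
    vertx.sharedData().getAsyncMap("request-cache", mapRes -> {
      if (mapRes.failed()) {
        ctx.request().resume();
        ctx.fail(mapRes.cause());
        return;
      }
      mapRes.result().get(ctx.request().path(), getRes -> {
        if (getRes.succeeded() && getRes.result() != null) {
          ctx.put("cached", getRes.result());
        }
        // Resume before handing off to the next handler in the chain.
        ctx.request().resume();
        ctx.next();
      });
    });
  }
}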

cluster not working for me

I am using Java 8 with the official vertx-examples project.

/Users/WB/Workspace/Vertx3/vertx-examples/core-examples/src/main/java/io/vertx/example/core/ha/Server.java

I run Server.java in different command windows, but the cluster is just not working.

So I wrote test code like this:

    public static void main(String[] args) {
        System.setProperty("vertx.cwd", "src/main/java/io/hale/core");
        Consumer<Vertx> runner = node -> {
            //.....
        };
        ClusterManager mgr = new HazelcastClusterManager();
        VertxOptions options = new VertxOptions()
                .setClustered(true)
                .setClusterManager(mgr);
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx join = res.result();
                System.out.println("joined cluster node: " + join.toString());
                runner.accept(join);
            } else {
                throw new RuntimeException(res.cause());
            }
        });
    }

And I started two instances of this; still not working.

The two applications just do not find each other.
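A frequent cause of this symptom is that multicast is blocked (VMs, Docker, many office networks) or that Vert.x binds the event bus to the wrong interface. Below is a sketch that forces a TCP-IP join and pins the cluster host; the IP addresses are placeholders and must be replaced with the real ones.

import com.hazelcast.config.Config;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class ClusterStarter {

  public static void main(String[] args) {
    // Placeholder addresses: replace with the real hosts of the cluster members.
    Config hazelcastConfig = new Config();
    hazelcastConfig.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    hazelcastConfig.getNetworkConfig().getJoin().getTcpIpConfig()
        .setEnabled(true)
        .addMember("192.168.1.10")
        .addMember("192.168.1.11");

    HazelcastClusterManager mgr = new HazelcastClusterManager(hazelcastConfig);
    VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterManager(mgr)
        .setClusterHost("192.168.1.10"); // the interface other nodes can reach

    Vertx.clusteredVertx(options, res -> {
      if (res.succeeded()) {
        System.out.println("joined cluster as " + mgr.getNodeID());
      } else {
        res.cause().printStackTrace();
      }
    });
  }
}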

3.4.1 incompatibility with 3.3.3

I have a 3.3.3 app which, after successfully joining a hazelcast cluster containing 3.4.1 nodes, attempts to send a clustered event bus message and gets the failure below.

2017-04-20T11:59:13.604-0500 [vert.x-eventloop-thread-7] ERROR i.v.c.e.i.c.ClusteredEventBus - Failed to send message
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.HazelcastClusterNodeInfo', exception: io.vertx.spi.cluster.hazelcast.impl.HazelcastClusterNodeInfo
	at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:130) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:47) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:46) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:170) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:234) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.multimap.impl.operations.MultiMapResponse.getObjectCollection(MultiMapResponse.java:72) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.multimap.impl.ObjectMultiMapProxy.get(ObjectMultiMapProxy.java:116) ~[hazelcast-3.6.3.jar:3.6.3]
	at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.lambda$get$2(HazelcastAsyncMultiMap.java:96) ~[vertx-hazelcast-3.3.3.jar:na]
	at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:259) ~[vertx-core-3.3.3.jar:na]
	at io.vertx.core.impl.OrderedExecutorFactory$OrderedExecutor.lambda$new$0(OrderedExecutorFactory.java:94) ~[vertx-core-3.3.3.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_91]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]
Caused by: java.lang.ClassNotFoundException: io.vertx.spi.cluster.hazelcast.impl.HazelcastClusterNodeInfo
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_91]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_91]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0_91]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_91]
	at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:137) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:115) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.nio.ClassLoaderUtil.newInstance(ClassLoaderUtil.java:68) ~[hazelcast-3.6.3.jar:3.6.3]
	at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:119) ~[hazelcast-3.6.3.jar:3.6.3]
	... 12 common frames omitted

CI failure: io.vertx.test.core.HazelcastHATest.testSimpleFailover

java.lang.IllegalStateException: Timed out
    at io.vertx.test.core.AsyncTestBase.waitUntil(AsyncTestBase.java:611)
    at io.vertx.test.core.AsyncTestBase.waitUntil(AsyncTestBase.java:596)
    at io.vertx.test.core.HATest.testSimpleFailover(HATest.java:65)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

See https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-hazelcast/3962/testReport/io.vertx.test.core/HazelcastHATest/testSimpleFailover/

Removing a cluster node causes an NPE

When I just terminate another Hazelcast instance, it often shows an error on the surviving one.

@Override
public void entryRemoved(EntryEvent<K, V> entry) {
  removeEntry(entry.getKey(), entry.getOldValue());
}

When entry.getOldValue() is null, this causes an NPE:

java.lang.NullPointerException
    at java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
    at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
    at io.vertx.core.impl.ConcurrentHashSet.remove(ConcurrentHashSet.java:79)
    at io.vertx.spi.cluster.hazelcast.impl.ChoosableSet.remove(ChoosableSet.java:57)
    at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.removeEntry(HazelcastAsyncMultiMap.java:151)
    at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.entryRemoved(HazelcastAsyncMultiMap.java:145)
    at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatch0(MultiMapEventsDispatcher.java:94)
    at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEntryEventData(MultiMapEventsDispatcher.java:68)
    at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEvent(MultiMapEventsDispatcher.java:39)
    at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:355)
    at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:64)
    at com.hazelcast.spi.impl.EventServiceImpl$EventPacketProcessor.process(EventServiceImpl.java:549)
    at com.hazelcast.spi.impl.EventServiceImpl$RemoteEventPacketProcessor.run(EventServiceImpl.java:630)
    at com.hazelcast.util.executor.StripedExecutor$Worker.process(StripedExecutor.java:190)
    at com.hazelcast.util.executor.StripedExecutor$Worker.run(StripedExecutor.java:174)
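A defensive sketch of the listener shown above: skip the removal when the old value was not delivered with the event. Whether the cluster manager should instead refresh its whole local cache in that case is a separate question.

@Override
public void entryRemoved(EntryEvent<K, V> entry) {
  V oldValue = entry.getOldValue();
  if (oldValue == null) {
    // The old value may be missing from the event (e.g. when a member dies);
    // ConcurrentHashMap.remove(key, null) would throw the NPE seen above.
    return;
  }
  removeEntry(entry.getKey(), oldValue);
}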

Deadlock while acquiring and releasing lock in HazelcastClusterManager in vertx 3.3.2

public class Test {

    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), result -> testFunction(result.result()));
    }

    private static void testFunction(Vertx vertx) {
        Context context = vertx.getOrCreateContext();
        context.runOnContext(v -> testFunctionOnContext(vertx, "r1"));
        context.runOnContext(v -> testFunctionOnContext(vertx, "r2"));
        context.runOnContext(v -> vertx.setTimer(20000L, event -> context.runOnContext(v1 -> testFunctionOnContext(vertx, "r3"))));
    }

    private static void testFunctionOnContext(Vertx vertx, String req) {
        System.out.println("************ TRY TO GET LOCK " + req + " ************");
        vertx.sharedData().getLockWithTimeout("abc", 15000L, lockResult -> {
            if (lockResult.succeeded()) {
                System.out.println("************ " + req + " GOT LOCK ************");
                vertx.setTimer(10000L, event -> {
                    System.out.println("************ " + req + " TRYING TO RELEASE LOCK ************");
                    lockResult.result().release();
                });
            } else {
                lockResult.cause().printStackTrace();
            }
        });
    }
}

Using vertx 3.3.2

In the above case, both getLockWithTimeout and the release call on the lock use executeBlocking with ordered = true by default.
We are running on a single context.

So, when r1 tries to get the lock, it will acquire the lock for abc and start doing its work; assume the work takes 10 seconds.

When r2 tries to get the lock before r1 finishes its work, r2 will not get the lock and will wait up to 15 seconds for r1 to release it.

But r1's release is queued behind r2's acquisition, since executeBlocking runs with ordered = true.

So r2 will never get the lock and fails with a timeout exception, even though r1 finished its work in 10 seconds. Only after r2 fails with the timeout does r1 release the lock, since its release was waiting for r2's executeBlocking call to finish.

This would work if executeBlocking were called with ordered = false. Why is ordered true by default, which makes executeBlocking calls from the same context run sequentially?
In that case, I cannot use a lock on a single context when multiple requests arrive for the same resource.
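For reference, this is what the ordered flag being discussed looks like at the API level; it only illustrates the flag on vertx.executeBlocking and does not change how SharedData uses executeBlocking internally.

import io.vertx.core.Vertx;

public class UnorderedBlockingExample {

  // With ordered = false, blocking tasks submitted from the same context
  // may run concurrently instead of being queued one behind another.
  public static void runUnordered(Vertx vertx) {
    vertx.executeBlocking(fut -> {
      // ... potentially blocking work (placeholder) ...
      fut.complete();
    }, false, res -> {
      // Resumed on the original context when the blocking task completes.
    });
  }
}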

Add warning about smart client support

Smart clients do not respect the HA behavior and so must not be used in such scenarios. The documentation should be updated to clearly state that smart clients should be used very carefully.
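For reference, a smart-client setup, the configuration this warning targets, looks roughly like the sketch below (it assumes the hazelcast-client dependency is on the classpath; the address is a placeholder). With such a setup, Vert.x HA must not be relied upon.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class SmartClientExample {

  public static void main(String[] args) {
    ClientConfig clientConfig = new ClientConfig();
    clientConfig.getNetworkConfig().addAddress("192.168.1.20:5701"); // placeholder address

    // A client instance joins the data cluster without becoming a member.
    HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
    HazelcastClusterManager mgr = new HazelcastClusterManager(client);

    VertxOptions options = new VertxOptions().setClustered(true).setClusterManager(mgr);
    Vertx.clusteredVertx(options, res -> {
      // Note: with a client instance, Vert.x HA must not be relied upon (see warning above).
    });
  }
}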

NullPointer in sendToSubs

We're using Vert.x 3.3.3 in a production environment clustered with Hazelcast.
We are observing the following NullPointerException from time to time:

2016-12-07 09:57:08,063 ERROR [vert.x-eventloop-thread-4] [] io.vertx.core.impl.ContextImpl - Unhandled exception
java.lang.NullPointerException
	at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.sendToSubs(ClusteredEventBus.java:279)
	at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.lambda$sendOrPub$133(ClusteredEventBus.java:182)
	at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.lambda$get$30(HazelcastAsyncMultiMap.java:116)
	at io.vertx.core.impl.FutureImpl.checkCallHandler(FutureImpl.java:158)
	at io.vertx.core.impl.FutureImpl.setHandler(FutureImpl.java:100)
	at io.vertx.core.impl.ContextImpl.lambda$null$16(ContextImpl.java:305)
	at io.vertx.core.impl.ContextImpl.lambda$wrapTask$18(ContextImpl.java:335)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
	at java.lang.Thread.run(Thread.java:745)

I guess this happens when we unregister an event bus consumer at the same time sendToSubs is trying to submit a message to it.

Hazelcast "smart client" vert.x cluster member is not removed after forced shutdown

When using a "smart client" ClusterManager, if the vert.x instance is killed or disconnected from the network the eventbus registrations of this instance are not removed.

The memberRemoved(MembershipEvent membershipEvent) method is never called on surviving instances, and messages are delivered to dead addresses.

The Hazelcast MembershipListener interface, which is used by the cluster manager provides only member disconnection events and no client events.

"time-to-live-seconds" and "max-idle-seconds" are not working as expected.

I am trying to use an AsyncMap as a cache, and I have configured Hazelcast's cluster.xml as below.

But "time-to-live-seconds" and "max-idle-seconds" are not working as expected.
The AsyncMap data is not evicted.

Do I need to add any code to use this function?

Map configuration in cluster.xml

<!--
    Number of backups. If 1 is set as the backup-count for example,
    then all entries of the map will be copied to another JVM for
    fail-safety. 0 means no backup.
-->
<backup-count>0</backup-count>
<async-backup-count>1</async-backup-count>
<!--
    Maximum number of seconds for each entry to stay in the map. Entries that are
    older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
    will get automatically evicted from the map.
    Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
-->
<time-to-live-seconds>0</time-to-live-seconds>
<max-idle-seconds>10</max-idle-seconds>
<eviction-policy>NONE</eviction-policy>
<max-size policy="PER_NODE">0</max-size>
<eviction-percentage>25</eviction-percentage>
<!--
    While recovering from split-brain (network partitioning),
    map entries in the small cluster will merge into the bigger cluster
    based on the policy set here. When an entry merges into the
    cluster, there might be an existing entry with the same key already.
    Values of these entries might be different for that same key.
    Which value should be set for the key? Conflict is resolved by
    the policy set here. Default policy is PutIfAbsentMapMergePolicy.

    There are built-in merge policies such as
    com.hazelcast.map.merge.PassThroughMergePolicy; entry will be added if there is no existing entry for the key.
    com.hazelcast.map.merge.PutIfAbsentMapMergePolicy; entry will be added if the merging entry doesn't exist in the cluster.
    com.hazelcast.map.merge.HigherHitsMapMergePolicy; entry with the higher hits wins.
    com.hazelcast.map.merge.LatestUpdateMapMergePolicy; entry with the latest update wins.
-->
<merge-policy>com.hazelcast.map.merge.LatestUpdateMapMergePolicy</merge-policy>
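The same map settings can also be supplied programmatically, as in the sketch below. Note that the map name configured (in XML or in code) must match the name passed to sharedData().getAsyncMap(...); the name "my-cache" here is a placeholder.

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class TtlConfigExample {

  // Build a cluster manager whose Hazelcast config applies TTL / max-idle
  // to the map backing the AsyncMap named "my-cache" (placeholder name).
  public static HazelcastClusterManager clusterManagerWithTtl() {
    Config config = new Config();

    MapConfig cacheConfig = new MapConfig("my-cache")
        .setTimeToLiveSeconds(60)
        .setMaxIdleSeconds(10)
        .setBackupCount(0)
        .setAsyncBackupCount(1);

    config.addMapConfig(cacheConfig);
    return new HazelcastClusterManager(config);
  }
}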

Problem while reading DataSerializable

2017-03-16_05:42:43.32981 com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.Hazelcas
tAsyncMap$DataSerializableHolder', exception: Failed to load class io.vertx.ext.web.sstore.impl.SessionImpl
2017-03-16_05:42:43.32982 at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:130)
2017-03-16_05:42:43.32983 at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:47)
2017-03-16_05:42:43.32983 at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:46)
2017-03-16_05:42:43.32984 at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:170)
2017-03-16_05:42:43.32985 at com.hazelcast.map.impl.proxy.MapProxySupport.toObject(MapProxySupport.java:940)
2017-03-16_05:42:43.32985 at com.hazelcast.map.impl.proxy.MapProxyImpl.get(MapProxyImpl.java:94)
2017-03-16_05:42:43.32985 at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMap.lambda$get$0(HazelcastAsyncMap.java:46)
2017-03-16_05:42:43.32986 at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:263)
2017-03-16_05:42:43.32988 at io.vertx.core.impl.OrderedExecutorFactory$OrderedExecutor.lambda$new$0(OrderedExecutorFactory.java:94)
2017-03-16_05:42:43.32989 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2017-03-16_05:42:43.32989 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2017-03-16_05:42:43.32989 at java.lang.Thread.run(Thread.java:745)
2017-03-16_05:42:43.32990 Caused by: java.lang.IllegalStateException: Failed to load class io.vertx.ext.web.sstore.impl.SessionImpl
2017-03-16_05:42:43.32990 at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMap$DataSerializableHolder.readData(HazelcastAsyncMap.java:178)
2017-03-16_05:42:43.32991 at com.hazelcast.internal.serialization.impl.DataSerializer.read(DataSerializer.java:121)
2017-03-16_05:42:43.32991 ... 11 more
2017-03-16_05:42:43.32992 Caused by: java.lang.InstantiationException: io.vertx.ext.web.sstore.impl.SessionImpl
2017-03-16_05:42:43.32992 at java.lang.Class.newInstance(Class.java:427)
2017-03-16_05:42:43.32992 at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMap$DataSerializableHolder.readData(HazelcastAsyncMap.java:175)
2017-03-16_05:42:43.32995 ... 12 more
2017-03-16_05:42:43.32995 Caused by: java.lang.NoSuchMethodException: io.vertx.ext.web.sstore.impl.SessionImpl.<init>()
2017-03-16_05:42:43.32997 at java.lang.Class.getConstructor0(Class.java:3082)
2017-03-16_05:42:43.32997 at java.lang.Class.newInstance(Class.java:412)
2017-03-16_05:42:43.32998 ... 13 more
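The root cause is the missing public no-arg constructor needed for deserialization. Below is a minimal sketch (assuming Vert.x 3.x, where the interface lives in io.vertx.core.shareddata.impl) of an object that can safely be stored in a clustered map or session; the class is hypothetical. Upgrading vertx-web may also resolve the SessionImpl case.

import io.vertx.core.buffer.Buffer;
import io.vertx.core.shareddata.impl.ClusterSerializable;

// Hypothetical example: a public no-arg constructor is required so
// Hazelcast can instantiate the object on deserialization.
public class CachedValue implements ClusterSerializable {

  private String payload;

  public CachedValue() {
    // required by the deserializer
  }

  public CachedValue(String payload) {
    this.payload = payload;
  }

  @Override
  public void writeToBuffer(Buffer buffer) {
    byte[] bytes = payload.getBytes();
    buffer.appendInt(bytes.length).appendBytes(bytes);
  }

  @Override
  public int readFromBuffer(int pos, Buffer buffer) {
    int len = buffer.getInt(pos);
    pos += 4;
    payload = new String(buffer.getBytes(pos, pos + len));
    return pos + len;
  }
}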

Bump hazelcast version to 3.6

HZ 3.6 was just released, we should update.
This is for 3.3.0 and not 3.2.1.

The pom can be cleaned up as the new HZ 3.6 pom does not import transitive dependencies anymore.

HazelcastAsyncMap should not rely on executeBlocking()

The current implementation of HazelcastAsyncMap uses executeBlocking() for all its methods.

I wrote a small prototype showing how to hook into Hazelcast ICompletableFuture. See: jerrinot@2637471

This way you do not consume a thread per in-flight operation. The prototype relies on casting to ICompletableFuture; however, this is very unlikely to change in future versions of Hazelcast, and if it does change it will be caught by tests anyway.

Would you accept a Pull Request with a changeset similar to the one I linked?

Any kind of feedback is welcome!
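A minimal sketch of the bridging idea described above (not the linked prototype itself): adapt a Hazelcast ICompletableFuture to a Vert.x handler via an ExecutionCallback and dispatch the result back onto the caller's context.

import com.hazelcast.core.ExecutionCallback;
import com.hazelcast.core.ICompletableFuture;
import io.vertx.core.AsyncResult;
import io.vertx.core.Context;
import io.vertx.core.Future;
import io.vertx.core.Handler;

public final class HazelcastFutures {

  // Bridge a Hazelcast ICompletableFuture to a Vert.x handler without executeBlocking:
  // the callback is dispatched back onto the given Vert.x context.
  public static <T> void adapt(Context context, ICompletableFuture<T> future,
                               Handler<AsyncResult<T>> handler) {
    future.andThen(new ExecutionCallback<T>() {
      @Override
      public void onResponse(T response) {
        context.runOnContext(v -> handler.handle(Future.succeededFuture(response)));
      }

      @Override
      public void onFailure(Throwable t) {
        context.runOnContext(v -> handler.handle(Future.failedFuture(t)));
      }
    });
  }
}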

Documentation should indicate that default JDK needs to be overridden in custom cluster.xml

The default cluster.xml contains

<properties> 
   <property name="hazelcast.logging.type">jdk</property> 
</properties>

This means that even if you add something like this to the command line:
-Dhazelcast.logging.type=slf4j
the default JDK logging will take priority.

This should be highlighted in the documentation.

Please see the following discussion
https://groups.google.com/forum/#!topic/vertx/PiOviVwwcM0
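One way around it (a sketch): when configuring Hazelcast programmatically, set the property on the Config handed to the cluster manager. Bear in mind that a programmatic Config replaces cluster.xml rather than merging with it, so the rest of the configuration has to be provided too.

import com.hazelcast.config.Config;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class LoggingConfig {

  // The property set in cluster.xml wins over -Dhazelcast.logging.type,
  // so override it in the Config that is passed to the cluster manager instead.
  public static HazelcastClusterManager withSlf4j() {
    Config config = new Config();
    config.setProperty("hazelcast.logging.type", "slf4j");
    return new HazelcastClusterManager(config);
  }
}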

vertx.close() fails if clusterManager.leave() has been called first

If I have explicitly initialized a HazelcastClusterManager and call HazelcastClusterManager.leave(), a subsequent call to vertx.close() throws an exception.

The reason I want to explicitly call HazelcastClusterManager.leave() is to ensure that all events are handled until the node leaves the cluster and no more events are received by the node. Otherwise, calling vertx.close() first closes the event bus, which doesn't immediately inform the cluster/sender that this node is no longer active, and results in some messages timing out.

SEVERE: Failed to remove sub
com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
    at com.hazelcast.spi.AbstractDistributedObject.throwNotActiveException(AbstractDistributedObject.java:85)
    at com.hazelcast.spi.AbstractDistributedObject.lifecycleCheck(AbstractDistributedObject.java:80)
    at com.hazelcast.spi.AbstractDistributedObject.getNodeEngine(AbstractDistributedObject.java:74)
    at com.hazelcast.multimap.impl.ObjectMultiMapProxy.remove(ObjectMultiMapProxy.java:124)
    at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.lambda$remove$4(HazelcastAsyncMultiMap.java:133)
    at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:271)
    at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Hard-killing a node in a cluster using the EventBus causes a NullPointerException

Incredibly easy to reproduce...

I created a dummy Verticle that uses the EventBus... Did a Maven "fat-jar" project, included Vertx 3.0.0 and Hazelcast 3.4.

NOTE: This bug does not reproduce if you don't use an EventBus. It's definitely EventBus related.

The verticle was simply:

package com.test;

import io.vertx.core.eventbus.EventBus;
import io.vertx.core.AbstractVerticle;

public class FailureVerticle extends AbstractVerticle {
  // Called when verticle is deployed
  public void start() {
    EventBus eb = vertx.eventBus();
    eb.consumer("broadcasts", message -> {
      System.out.println("got: " + message.body());
      message.reply("pong");
    });

    vertx.setPeriodic(1000, (l) -> {
      eb.send("broadcasts", "ping");
    });
  }
}

Started 3 instances in separate terminals

./bin/start

kill one off

jps | grep -v Jps | head -n 1 | xargs kill -9

Then I get this:

Aug 05, 2015 11:32:00 AM com.hazelcast.spi.impl.EventServiceImpl
SEVERE: [X.X.X.X]:5701 [dev] [3.4] hz._hzInstance_1_dev.event-2 caught an exception while processing task:com.hazelcast.spi.impl.EventServiceImpl$LocalEventDispatcher@22bb6975
java.lang.NullPointerException
    at java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
    at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
    at io.vertx.core.impl.ConcurrentHashSet.remove(ConcurrentHashSet.java:79)
    at io.vertx.spi.cluster.hazelcast.impl.ChoosableSet.remove(ChoosableSet.java:57)
    at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.removeEntry(HazelcastAsyncMultiMap.java:151)
    at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.entryRemoved(HazelcastAsyncMultiMap.java:145)
    at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatch0(MultiMapEventsDispatcher.java:94)
    at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEntryEventData(MultiMapEventsDispatcher.java:68)
    at com.hazelcast.multimap.impl.MultiMapEventsDispatcher.dispatchEvent(MultiMapEventsDispatcher.java:39)
    at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:339)
    at com.hazelcast.multimap.impl.MultiMapService.dispatchEvent(MultiMapService.java:64)
    at com.hazelcast.spi.impl.EventServiceImpl$LocalEventDispatcher.run(EventServiceImpl.java:666)
    at com.hazelcast.util.executor.StripedExecutor$Worker.process(StripedExecutor.java:190)
    at com.hazelcast.util.executor.StripedExecutor$Worker.run(StripedExecutor.java:174)

Example project:
https://github.com/DevelopStuff/vertx-hazelcast-bug

Improve build stability

The test suite almost always fails on Cloudbees.

Most impacted tests seem to be:

  • counter tests
  • cluster wide map put if absent tests

HazelcastAsyncMap and HazelcastInternalAsyncMap should both rely on ConversionUtils

When adding elements to a Hazelcast map, values are wrapped within a DataSerializableHolder, but the two implementations use duplicated code to do so.

HazelcastAsyncMap has its own private inner class (storing HazelcastAsyncMap$DataSerializableHolder objects as values).

HazelcastInternalAsyncMap uses the inner class declared in ConversionUtils (storing ConversionUtils$DataSerializableHolder objects as values).

HazelcastAsyncMap should be modified to use the helper instead of duplicating code, and it would be nice to change ConversionUtils from package-private visibility to public. When using a Hazelcast EntryProcessor we need a way to read the values stored in the map, but with the current implementation the object is only accessible by using reflection to read the clusterSerializable property.

thanks

Microservices sometimes don't discover services over Kubernetes

Hi,

I'm building some microservices that use the vertx-hazelcast library, and I'm deploying them with Kubernetes on AWS EC2 instances. The microservices sometimes get disconnected from the Hazelcast cluster and sometimes are not discoverable. Is there anything I'm missing? I need some advice.

My architecture


Fat Jar -> Docker Image -> Kubernetes (AWS EC-2)

Lock release should release a single permit

Hi, there,

I just noticed that the implementation of Locks with Hazelcast is via ISemaphore, which can potentially release a permit multiple times. Though it is the developer's obligation to use the semaphore correctly, from the Lock's point of view it should be possible to call release more than once, e.g. in multiple exception handlers. But multiple release() calls should not result in the Lock being acquirable multiple times.

This could be a drawback of using ISemaphore to implement the Lock; I think Ignite's implementation, which uses a queue with a cap (max size), is more appropriate.

Any ideas?
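A defensive sketch (not part of Vert.x) on the application side: wrap the obtained lock so release() is idempotent and repeated calls from different error paths cannot add extra permits to the underlying ISemaphore.

import java.util.concurrent.atomic.AtomicBoolean;
import io.vertx.core.shareddata.Lock;

// Hypothetical wrapper: makes release() idempotent.
public class SingleReleaseLock implements Lock {

  private final Lock delegate;
  private final AtomicBoolean released = new AtomicBoolean();

  public SingleReleaseLock(Lock delegate) {
    this.delegate = delegate;
  }

  @Override
  public void release() {
    // Only the first call reaches the underlying lock / semaphore.
    if (released.compareAndSet(false, true)) {
      delegate.release();
    }
  }
}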

Document and test Hazelcast lite member usage

We got good feedback from users about cluster stability with external data nodes and Vert.x nodes as lite members.

We should document the setup and add integration tests in the project to make sure all features work fine with such a setup.
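A sketch of the setup described above, assuming programmatic configuration: the Vert.x nodes join as Hazelcast lite members (owning no data partitions), while separate full members hold the cluster data.

import com.hazelcast.config.Config;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class LiteMemberExample {

  // Cluster manager for a Vert.x node that joins as a lite member;
  // the data-owning full members are configured and started separately.
  public static HazelcastClusterManager liteMemberManager() {
    Config config = new Config();
    config.setLiteMember(true);
    return new HazelcastClusterManager(config);
  }
}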

Hazelcast clustering, when used with sessions, is sensitive to what constructors are available in the objects

Local storage, for example the local session store, is tolerant of classes that are deserialized and do not have a default constructor. Hazelcast, when used in clustering with a clustered session store, throws exceptions and cannot deserialize the classes that were allowed to be stored into the session.

There needs to be a uniform standard across Vert.x projects about this, because some, like vertx-auth, which is obviously going to write to the session, do not do so correctly: https://github.com/vert-x3/vertx-auth/issues/26 (at least for JWT).

And it appears there is no test coverage for these session objects with the clustered session store and Hazelcast.

This should be improved across the board so it is consistent and well known what is required of objects that go into the session or clustered shared memory.
