californium's Issues

issue with url-encoded path segments on server-side

If you are unlucky enough to find yourself in a situation where you get non-ASCII characters or even blanks in the path, it's crucial to URI-decode the path segments. Quick fix:

replace line:
https://github.com/eclipse/californium/blob/master/californium-core/src/main/java/org/eclipse/californium/core/server/ServerMessageDeliverer.java#L142

by something like:

String name = path.removeFirst();
// decode percent-encoded path segments; note that URLDecoder.decode(String, String)
// throws the checked UnsupportedEncodingException, which must be handled or declared
String sanitizedName = java.net.URLDecoder.decode(name, "UTF-8");
current = current.getChild(sanitizedName);

sc-dtls-example demo app not working properly?

I built the Cf project from source and wanted to try out the demo app for DTLS. If I run sc-dtls-example-1.1.0-SNAPSHOT.jar, I only see ExampleDTLSServer prints. There are no ExampleDTLSClient prints:

sudeepta@Sudeepta-ThinkStation-P300:~/californium/demo-apps/run$ java -jar sc-dtls-example-1.1.0-SNAPSHOT.jar
1 CONFIG [DTLSConnector]: Cannot determine MTU of network interface, using minimum MTU [1280] of IPv6 instead - (org.eclipse.californium.scandium.DTLSConnector.java:331) start() in thread main at (2016-09-17 14:36:34)
1 INFO [DTLSConnector]: DTLS connector listening on [0.0.0.0/0.0.0.0:5684] with MTU [1,280] using (inbound) datagram buffer size [16,474 bytes] - (org.eclipse.californium.scandium.DTLSConnector.java:281) start() in thread main at (2016-09-17 14:36:34)
9 CONFIG [DTLSConnector$Worker]: Starting worker thread [DTLS-Sender-0.0.0.0/0.0.0.0:5684] - (org.eclipse.californium.scandium.DTLSConnector$Worker.java:-1) run() in thread DTLS-Sender-0.0.0.0/0.0.0.0:5684 at (2016-09-17 14:36:34)
10 CONFIG [DTLSConnector$Worker]: Starting worker thread [DTLS-Receiver-0.0.0.0/0.0.0.0:5684] - (org.eclipse.californium.scandium.DTLSConnector$Worker.java:-1) run() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-09-17 14:36:34)

NOTE: For the other examples in the demo-apps directory, separate jars for client and server are created (e.g. cf-helloworld-client-1.1.0-SNAPSHOT.jar and cf-helloworld-server-1.1.0-SNAPSHOT.jar). But for sc-dtls-example only one jar is created.

NullPointerException on concurrent requests via async CoapClient#advanced

From time to time my tests were failing with the following stack trace:

32 INFO [CoapEndpoint]: Starting endpoint at 0.0.0.0/0.0.0.0:0 - (org.eclipse.californium.core.network.CoapEndpoint.java:115) start() in thread Thread-6 at (2016-06-15 19:46:40)
    at org.eclipse.californium.core.network.CoapEndpoint.runInProtocolStage(CoapEndpoint.java:692)
    at org.eclipse.californium.core.network.CoapEndpoint.sendRequest(CoapEndpoint.java:390)
    at org.eclipse.californium.core.CoapClient.send(CoapClient.java:897)
    at org.eclipse.californium.core.CoapClient.send(CoapClient.java:879)
    at org.eclipse.californium.core.CoapClient.asynchronous(CoapClient.java:742)
    at org.eclipse.californium.core.CoapClient.advanced(CoapClient.java:663)
    at <SKIPPED>
    at java.lang.Thread.run(Thread.java:745)
32 INFO [EndpointManager]: Created implicit default endpoint 0.0.0.0/0.0.0.0:39102 - (org.eclipse.californium.core.network.EndpointManager.java:98) createDefaultEndpoint() in thread Thread-6 at (2016-06-15 19:46:40)

The code made concurrent asynchronous requests via CoapClient using the default endpoint (no endpoint was specified).

After some debugging, I discovered a race condition.

EndpointManager:

   public Endpoint getDefaultEndpoint() {
        if (default_endpoint == null) {
            createDefaultEndpoint();
        }
        return default_endpoint;
    }

    private synchronized void createDefaultEndpoint() {
        if (default_endpoint != null) return;

        default_endpoint = new CoapEndpoint();

        try {
            default_endpoint.start();
                        ...

CoapEndpoint:

    public synchronized void start() throws IOException {
        ....

        if (this.executor == null) {
                             ...
            setExecutor(Executors.newSingleThreadScheduledExecutor(
                    new Utils.DaemonThreadFactory("CoapEndpoint-" + connector.getAddress() + '#'))); //$NON-NLS-1$
                 ...

     private void runInProtocolStage(final Runnable task) {
        executor.execute(new Runnable() {
                            ...

So, when two threads invoked CoapClient#advanced(CoapHandler, Request), they both ended up calling EndpointManager#getDefaultEndpoint somewhere down in the call hierarchy.

getDefaultEndpoint invokes the synchronized createDefaultEndpoint, but since getDefaultEndpoint itself is not synchronized, the second thread saw default_endpoint != null and proceeded. The default_endpoint was not started yet, however, so CoapEndpoint#executor was not set.

That is the actual cause of the NPE.
createDefaultSecureEndpoint is not synchronized either.
I am preparing a PR (actually just adding two 'synchronized' keywords) now.
But I am not sure it is possible to write a unit test for this case.
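
A minimal sketch of the fix, using the same field and method names as the snippets above; synchronizing the read path as well guarantees that a second thread blocks until the endpoint has been both created and started:

    // EndpointManager, proposed fix: the getter must be synchronized too,
    // so no thread can observe a non-null but not-yet-started endpoint
    public synchronized Endpoint getDefaultEndpoint() {
        if (default_endpoint == null) {
            createDefaultEndpoint(); // already synchronized; starts the endpoint
        }
        return default_endpoint;
    }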

Keep Block handling in BlockwiseLayer

#36 fixed a missing onError() call when an intermediary block times out.

To push back a bit on cross-layer concerns, I would like to solve this purely within the BlockwiseLayer, for instance by attaching a MessageObserver to the intermediary blocks that then calls the onError() of the original request, as sketched below.
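
A minimal sketch of that idea (the adapter and callback names follow the 1.x-era MessageObserver API and may differ between branches):

    // in BlockwiseLayer, when sending an intermediary block request:
    blockRequest.addMessageObserver(new MessageObserverAdapter() {
        @Override
        public void onTimeout() {
            // forward the block timeout to the original request so that
            // the user's CoapHandler#onError fires
            exchange.getRequest().setTimedOut(true);
        }
    });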

This is more "nice to have" than urgent...

The canceled flag of CoapObserveRelation is not reset to false after calling reregister()

Hello,

We are using Californium in our software and we are facing the following situation:
the network communication is very bad and the devices could stop working at any moment. So after establishing an observe relation, we check the status of the CoapObserveRelation every X minutes, and if the relation is canceled we need to re-register with the same token. (The automatic re-register after max-age has passed may not work because the device is not very stable.)
However, it seems that after calling coapObserverRelation.reregister(), the value of coapObserverRelation.isCanceled() is still true (it should be set back to false) after an ACK is received. Could this be a bug that should be fixed?

Thank you very much.
Jie

Californium re-uses unacknowledged MIDs in outbound requests

Californium currently uses a simple counter for generating message IDs; it is incremented with every message sent and wraps at 2^16.

When client code uses Californium to send requests to another CoAP server at a high rate, this may lead to re-use of message IDs for which Californium hasn't received an ACK or CON yet (see the sketch below). While such a client's behavior seems questionable (flooding a constrained device with hundreds of requests per second doesn't really fit the constrained-device environment), Californium should make sure not to re-use IDs in this erroneous way. Also see #57 for a more detailed discussion of the issue.
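
For illustration, the scheme as described boils down to something like the following sketch (not the actual Matcher code):

    // one shared counter; wrapping at 2^16 means MID n and MID n + 65536
    // collide, even if the first request is still awaiting its ACK
    private final AtomicInteger counter = new AtomicInteger();

    private int nextMID() {
        return counter.getAndIncrement() & 0xFFFF;
    }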

token mismatch after 65536 (=2^16) requests/responses

Hi Californium team,

I believe I found a somewhat strange problem inside the Californium core. It was originally discovered while doing performance tests with an application based on Leshan (see the initial issue here).

I have tried to do some debugging of the core using Leshan, and I found that beginning with the 65537th response (i.e. after sending 2^16 responses), sendResponse of org.eclipse.californium.core.network.Matcher is called with the wrong exchange object.

The test setup looks as follows: I have a Leshan server and client, and the client is registered on the server. I then send HTTP requests to the Leshan server, which leads to CoAP requests being sent to the Leshan client, to which the client responds. After 65536 requests, the server is unable to match the response to the initial request, because the tokens don't match.

To better illustrate the problem, I added some log prints to sendRequest, sendResponse, receiveRequest and receiveResponse. This is what happens around the 65537th request when I run my test:

Server side:

...
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher sendRequest
WARNING: Sending request with mid KeyMID[20394] and token KeyToken[dc89]
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveResponse
WARNING: Received response with mid KeyMID[20394] and token KeyToken[dc89]
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher sendRequest
WARNING: Sending request with mid KeyMID[20395] and token KeyToken[1b502f9c21]
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveResponse
WARNING: Received response with mid KeyMID[20395] and token KeyToken[1b502f9c21]
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher sendRequest
WARNING: Sending request with mid KeyMID[20396] and token KeyToken[4dbb3da333b5]
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveResponse
WARNING: Received response with mid KeyMID[20396] and token KeyToken[53]
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveResponse
WARNING: Discarding unmatchable piggy-backed response from [/127.0.0.1:57,885]: ACK-2.05   MID=20396, Token=53, OptionSet={"Content-Format":"unknown/1542"}, c6 01 2d 31 32 36 2e 30 

Client side:

...
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveRequest
WARNING: Received request with mid KeyMID[20394] and token KeyToken[dc89]; exchange hash: 1,904,375,366
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher sendResponse
WARNING: Sending response with mid KeyMID[20394] and token KeyToken[dc89]; exchange hash: 1,904,375,366
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveRequest
WARNING: Received request with mid KeyMID[20395] and token KeyToken[1b502f9c21]; exchange hash: 740,501,474
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher sendResponse
WARNING: Sending response with mid KeyMID[20395] and token KeyToken[1b502f9c21]; exchange hash: 740,501,474
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher receiveRequest
WARNING: Received request with mid KeyMID[20396] and token KeyToken[4dbb3da333b5]; exchange hash: 940,539,153
Jun 15, 2016 6:05:47 PM org.eclipse.californium.core.network.Matcher sendResponse
WARNING: Sending response with mid KeyMID[20396] and token KeyToken[53]; exchange hash: 642,583,859

As you can see from the log prints, when the client tries to respond to the request with MID 20396, sendResponse is called with the wrong exchange object (observable by the exchange hashes printed in the log), which leads to the wrong token being used, so the server side is unable to match the response to the initial request.

From this point on, every time the client tries to respond to a request, the wrong exchange object is used.

This test was done using the current master branch of Californium.

I hope this information is enough for you to find the source of the problem, since so far I was unable to.

Adding new resources dynamically - CoAP

Once the CoAP server is started, I need to add new resources dynamically. But currently I have to stop and start the server again in order to access the new resources. I assume adding a new resource should work like adding a new HTTP servlet to an already started HTTP server.

Here is the source code I use for adding dynamic resources. If I am missing anything here, let me know.

private static CoapServer server;

public CoAPEventAdapter(InputEventAdapterConfiguration eventAdapterConfiguration,
                        Map<String, String> globalProperties) {
    this.eventAdapterConfiguration = eventAdapterConfiguration;
    this.globalProperties = globalProperties;
    if(server == null){
        server = new CoapServer();
        server.start();
    }
}

@Override
public void connect() {
    registerDynamicEndpoint(eventAdapterConfiguration.getName());
    isConnected = true;
} 

private void registerDynamicEndpoint(String adapterName) {
        server.stop();
        server.add(new HelloWorldResource(adapterName));
        server.start();
}


class HelloWorldResource extends CoapResource {

    public HelloWorldResource(String resourceName) {
        // set resource identifier
        super(resourceName);
        // set display name
        getAttributes().setTitle("Hello-World Resource");
    }

    @Override
    public void handleGET(CoapExchange exchange) {

        // respond to the request
        exchange.respond("Hello World!");
    }
}
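
For what it's worth, the resource tree can normally be modified while the server is running, because the message deliverer walks the tree for each incoming request; under that assumption, the restart should not be necessary (a minimal sketch):

    private void registerDynamicEndpoint(String adapterName) {
        // add() on a running CoapServer makes the resource reachable
        // immediately; no stop()/start() cycle should be required
        server.add(new HelloWorldResource(adapterName));
    }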

Randomly wrong responses in blockwise (option block2)

(I'm using Leshan with Californium 1.0.3.)
During tests I noticed that sometimes the CoAP server resends already sent blocks
(e.g. block 23 is resent while block 24 is requested).
Any idea what could cause this?

- (org.eclipse.californium.core.network.interceptors.MessageTracer.java:64) sendResponse() in thread pool-4-thread-4 at (2016-04-06 17:58:21)
33 INFO [MessageTracer]: /192.168.10.106:12.345 ==> req CON-GET    MID=50932, Token=8ff6, OptionSet={"Uri-Path":["file","fw.zip"], "Block2":"(szx=3/128, m=false, num=22)"}, no payload - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:78) receiveRequest() in thread pool-4-thread-3 at (2016-04-06 17:58:21)
33 INFO [MessageTracer]: /192.168.10.106:12.345 <== res ACK-2.05   MID=50932, Token=8ff6, OptionSet={"Block2":"(szx=3/128, m=true, num=22)"}, 
06 02 00 1d 07 02 00 57 07 02 00 fe 06 02 00 3b 07 02 00 70 07 02 00 89 07 02 00 9f 07 02 00 c3
07 02 00 dd 07 02 00 2e 08 02 00 47 08 02 00 8c 20 00 20 57 08 02 00 6e 08 02 00 ec 20 00 20 ec
25 00 20 9c 21 00 20 9c 20 00 20 28 26 00 20 10 27 00 00 80 08 02 00 e2 04 00 00 9b 08 02 00 30
b5 04 1c 85 b0 0d 1c 00 2a 04 d0 10 1c 24 49 24 4a ff f7 d7 fc 2b 78 15 2b 3e d1 22 4b 03 93 10
 - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:64) sendResponse() in thread pool-4-thread-3 at (2016-04-06 17:58:21)
54 INFO [MessageTracer]: /192.168.10.106:12.345 ==> req CON-GET    MID=51188, Token=90f6, OptionSet={"Uri-Path":["file","fw.zip"], "Block2":"(szx=3/128, m=false, num=23)"}, no payload - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:78) receiveRequest() in thread pool-4-thread-7 at (2016-04-06 17:58:21)
54 INFO [MessageTracer]: /192.168.10.106:12.345 <== res ACK-2.05   MID=51188, Token=90f6, OptionSet={"Block2":"(szx=3/128, m=true, num=23)"}, 
23 02 93 ff f7 be fc 20 1c 01 a9 05 f0 46 fe 00 28 22 d1 ff f7 b6 fc 1c 4b 1d 4a 18 88 0c 21 ff
f7 3c fc 08 23 02 1c 04 1c 9a 43 0e d0 02 22 18 4b c3 18 93 43 09 d0 11 28 07 d0 16 4b 98 42 04
d0 a0 21 0f 4a ff 31 ff f7 ac fc 21 1c 13 48 06 f0 9c fd ff f7 96 fc 0f e0 11 4b 98 42 01 d1 10
48 02 e0 10 28 03 d1 0f 48 06 f0 8f fd 04 e0 d7 21 04 4a 49 00 ff f7 95 fc 00 20 05 b0 30 bd cb
 - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:64) sendResponse() in thread pool-4-thread-7 at (2016-04-06 17:58:21)
52 INFO [MessageTracer]: /192.168.10.106:12.345 ==> req CON-GET    MID=51444, Token=91f6, OptionSet={"Uri-Path":["file","fw.zip"], "Block2":"(szx=3/128, m=false, num=24)"}, no payload - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:78) receiveRequest() in thread pool-4-thread-5 at (2016-04-06 17:58:21)
52 INFO [MessageTracer]: /192.168.10.106:12.345 <== res ACK-2.05   MID=51188, Token=90f6, OptionSet={"Block2":"(szx=3/128, m=true, num=23)"}, 
23 02 93 ff f7 be fc 20 1c 01 a9 05 f0 46 fe 00 28 22 d1 ff f7 b6 fc 1c 4b 1d 4a 18 88 0c 21 ff
f7 3c fc 08 23 02 1c 04 1c 9a 43 0e d0 02 22 18 4b c3 18 93 43 09 d0 11 28 07 d0 16 4b 98 42 04
d0 a0 21 0f 4a ff 31 ff f7 ac fc 21 1c 13 48 06 f0 9c fd ff f7 96 fc 0f e0 11 4b 98 42 01 d1 10
48 02 e0 10 28 03 d1 0f 48 06 f0 8f fd 04 e0 d7 21 04 4a 49 00 ff f7 95 fc 00 20 05 b0 30 bd cb
 - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:64) sendResponse() in thread pool-4-thread-5 at (2016-04-06 17:58:21)
37 INFO [MessageTracer]: /192.168.10.106:12.345 ==> req CON-GET    MID=51444, Token=91f6, OptionSet={"Uri-Path":["file","fw.zip"], "Block2":"(szx=3/128, m=false, num=24)"}, no payload - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:78) receiveRequest() in thread pool-4-thread-4 at (2016-04-06 17:58:23)
37 INFO [MessageTracer]: /192.168.10.106:12.345 <== res ACK-2.05   MID=51188, Token=90f6, OptionSet={"Block2":"(szx=3/128, m=true, num=23)"}, 
23 02 93 ff f7 be fc 20 1c 01 a9 05 f0 46 fe 00 28 22 d1 ff f7 b6 fc 1c 4b 1d 4a 18 88 0c 21 ff
f7 3c fc 08 23 02 1c 04 1c 9a 43 0e d0 02 22 18 4b c3 18 93 43 09 d0 11 28 07 d0 16 4b 98 42 04
d0 a0 21 0f 4a ff 31 ff f7 ac fc 21 1c 13 48 06 f0 9c fd ff f7 96 fc 0f e0 11 4b 98 42 01 d1 10
48 02 e0 10 28 03 d1 0f 48 06 f0 8f fd 04 e0 d7 21 04 4a 49 00 ff f7 95 fc 00 20 05 b0 30 bd cb
 - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:64) sendResponse() in thread pool-4-thread-4 at (2016-04-06 17:58:23)
54 INFO [MessageTracer]: /192.168.10.106:12.345 ==> req CON-GET    MID=51444, Token=91f6, OptionSet={"Uri-Path":["file","fw.zip"], "Block2":"(szx=3/128, m=false, num=24)"}, no payload - (org.eclipse.californium.core.network.interceptors.MessageTracer.java:78) receiveRequest() in thread pool-4-thread-7 at (2016-04-06 17:58:26)
54 INFO [MessageTracer]: /192.168.10.106:12.345 <== res ACK-2.05   MID=51188, Token=90f6, OptionSet={"Block2":"(szx=3/128, m=true, num=23)"}, 
23 02 93 ff f7 be fc 20 1c 01 a9 05 f0 46 fe 00 28 22 d1 ff f7 b6 fc 1c 4b 1d 4a 18 88 0c 21 ff
f7 3c fc 08 23 02 1c 04 1c 9a 43 0e d0 02 22 18 4b c3 18 93 43 09 d0 11 28 07 d0 16 4b 98 42 04
d0 a0 21 0f 4a ff 31 ff f7 ac fc 21 1c 13 48 06 f0 9c fd ff f7 96 fc 0f e0 11 4b 98 42 01 d1 10
48 02 e0 10 28 03 d1 0f 48 06 f0 8f fd 04 e0 d7 21 04 4a 49 00 ff f7 95 fc 00 20 05 b0 30 bd cb

Duplicated CoAP Tokens

In Matcher there is a method createUnusedToken() that should ensure there are no duplicated CoAP tokens. However, there are issues that lead to duplicated tokens anyway.

These issues differ on master branch and 2.0.x branch.

Master branch:

The createUnusedToken mechanism is not thread-safe: the code checks whether a token already exists, but the new token is only added to the map later on, i.e. two threads may generate the same "unused" token.

2.0.x branch:

  • The same concurrency issue as above,
  • the AND condition in the while loop needs to be an OR condition.

I think on the master branch we could fix this with putIfAbsent() (see the sketch below), but on the 2.0.x branch things get more complicated. Are there any suggestions on how to fix this? Of course, one fix would be to introduce some locks.
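
A minimal sketch of the putIfAbsent() idea; the field and helper names here are simplified placeholders, not the actual Matcher members:

    private final ConcurrentMap<KeyToken, Exchange> exchangesByToken =
            new ConcurrentHashMap<KeyToken, Exchange>();

    private KeyToken createUnusedToken(Exchange exchange) {
        KeyToken token;
        do {
            token = randomToken(); // hypothetical helper: 1-8 random bytes
        } while (exchangesByToken.putIfAbsent(token, exchange) != null);
        return token; // the token is now atomically reserved for this exchange
    }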

Remark (I am not a maths expert): one may assume that the probability of this clash is not that high, but it is (especially on the 2.0.x branch). Because of the variable token length we have a 1-in-8 probability of generating a one-byte token, and within these one-byte tokens we then have a kind of birthday problem. In summary, in a test run I get a clash after about 165 iterations on average.

Thanks
Daniel

Scandium certificate generation problem

Hi,

The generated Scandium certificates are not working. I generated the certificates by following the steps mentioned at https://github.com/eclipse/californium/tree/master/scandium-core. The certificates were generated, but after that we get a BAD CERTIFICATE error from Scandium when we run the example DTLS server and example DTLS client with the newly generated certificates.

Scandium reports: "certificate validation failed due to signature check failed".

Small possibility of Exchange leak when receiving non-piggybacked responses

Repro steps:

  1. Client sends a CON request
  2. Server calls exchange.acknowledge(), but then never sends a response, either due to a bug, or because server crashed.
  3. The exchange on the client side never times out.

This has two undesired effects:

  1. If the client used a CoapClient.(put|get|delete|post) variant with a CoapHandler, then neither onLoad() nor onError() is ever called.
  2. The offending exchange is never removed from the Matcher's tracking maps.

Potential solution: have one of the CoapStack layers add a task that fires after EXCHANGE_LIFETIME and calls Exchange.timeout() if the Exchange is still incomplete, as sketched below.
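
A minimal sketch of that idea; 'executor', the lifetime value, and the completeness check stand in for whatever the layer actually has available:

    // scheduled at send time; fires after EXCHANGE_LIFETIME and times the
    // exchange out if no response has arrived by then
    executor.schedule(new Runnable() {
        @Override
        public void run() {
            if (!exchange.isComplete()) {
                exchange.getRequest().setTimedOut(true); // triggers onError()
            }
        }
    }, exchangeLifetimeMillis, TimeUnit.MILLISECONDS);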

Why an additional ResumptionSupportingConnectionStore

@sbernard31 : Just a thought.

When resuming is considered to be a feature of a Connection, why not simply add the one method markAllAsResumptionRequired (or any other future methods that would allow resumption of a particular connection based on a session ID) to the ConnectionStore interface directly, instead of introducing a new interface ResumptionSupportingConnectionStore and marking ConnectionStore as deprecated?

Blockwise transfer on slow network fails with "Wrong block number. Expected 0 but received 9. Respond with 4.08 (Request Entity Incomplete)"

I run CoAP POST requests at a constant rate (1 request every 2 seconds), with a 10 kB payload in both request and response. When the network is an average/poor mobile connection (3G/2G), about every second request fails with this message on the server: "Wrong block number. Expected X but received Y. Respond with 4.08 (Request Entity Incomplete)".

It works fine under ideal network conditions (Ethernet/WiFi, or a good 4G mobile connection). It also works fine if I reduce the request rate to about 1 request every 4 seconds.

I can reproduce the failure using a network shaper configured with 100 ms delay and data rates of 420 kbit/s up, 850 kbit/s down. This should be more than enough bandwidth to get 1 request every 2 seconds through.

I have attached detailed client and server logs, and also client and server tcpdumps. You can see the first occurrence of the error at server.log line 230, and in server.pcap packet 58.

Maybe the blockwise response of the first request somehow gets intermingled with the second blockwise request?

Logs & dumps:
blockwise_issue.zip

DTLS Connection saving to store

Starting a new handshake (upon receiving a CLIENT_HELLO) is done in
org.eclipse.californium.scandium.DTLSConnector#startNewHandshake.
The very first step is saving the connection to the store.
Then, deep under the invocation of
handshaker.processMessage(record),
the method org.eclipse.californium.scandium.dtls.ServerHandshaker#receivedClientHello is called,
which in turn "updates" the just-saved connection entity (via handshakeStarted()).
But this "update" has no effect on the persisted entity, so when the DTLS client side sends the next message, the connection fetched from the store lacks any information about the "updated" objects (e.g. DTLSSession or Handshaker), and thus the whole handshake fails.
It works with org.eclipse.californium.scandium.dtls.InMemoryConnectionStore because there you refer to the same object instance.
Q: Is this by design, i.e. is the storage always expected to have some kind of in-memory LRU cache? Otherwise I think this is a bug.

Client Hello after Handshake crash

It seems the issue was not completely fixed.

On the Leshan sandbox, I regularly encounter this issue. Here is the log:

Feb 08, 2016 6:29:08 PM org.eclipse.californium.scandium.DTLSConnector processHandshakeRecordWithConnection
FINE: Discarding HANDSHAKE message [epoch=0] from peer [/84.14.163.130:56830] which does not match expected epoch(s)

When this happens, the device is not able to connect to the server with the same host/port...

I suspect the issue was here.
I think we should not test whether we have an established session: if we get a CLIENT_HELLO with epoch 0, we probably want to do a fresh handshake.
The only case where we don't want that is a retransmission, but we already have this code to prevent it.

Block2 request does not seem to notify CoapHandler#onError if the exchange gets stuck on a non-first request

Hello!
I've been playing around with block transfer and noticed one thing.
I use Californium 1.0.3.
I have a client which sends a CON GET and a server which responds with a large payload.
Then I tried to emulate network loss and added a connector to the client which skips some of the ACKs.

When I skip the first ACK, after the retries and some time it logs "Timeout: retransmission limit reached, exchange failed, message: ..." and my CoapHandler#onError is invoked.

When I skip any intermediate ACK, it logs the same line but does not invoke CoapHandler#onError.
I've done some debugging.

org.eclipse.californium.core.network.stack.ReliabilityLayer.java:

protected abstract class RetransmissionTask implements Runnable {
    <skimmed>
    } else {
        LOGGER.fine("Timeout: retransmission limit reached, exchange failed, message: " + message);
        exchange.setTimedOut();
        message.setTimedOut(true);
    <skimmed>

And if the message is an intermediate block-transfer request, it does not have any handlers, so no onError can be called. But exchange#request holds the original request, which does have its handlers.

Maybe exchange.getRequest().setTimedOut(true) should be called there in the block-transfer case (see the sketch below)? That looks reasonable to me because, in the success case, CoapHandler#onLoad of the original request is called.
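
A minimal sketch of that proposal inside the RetransmissionTask branch quoted above (the guard is illustrative, not taken from the sources):

    exchange.setTimedOut();
    message.setTimedOut(true);
    // additionally propagate the timeout to the originating request so that
    // its registered handlers (CoapHandler#onError) are notified
    if (exchange.getRequest() != null && exchange.getRequest() != message) {
        exchange.getRequest().setTimedOut(true);
    }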

The sample code will be provided if requested.

When BlockwiseLayer is manually turned off, NullPointerException is thrown.

A proposed quick fix is to check for "response != null" in Matcher.java.

Jul 20, 2016 10:26:19 AM org.eclipse.californium.core.network.CoapEndpoint$5 run
SEVERE: Exception in protocol stage thread: null
java.lang.NullPointerException
at org.eclipse.californium.core.network.Matcher$ExchangeObserverImpl.completed(Matcher.java:535)
at org.eclipse.californium.core.network.Exchange.setComplete(Exchange.java:449)
at org.eclipse.californium.core.network.Matcher.sendResponse(Matcher.java:225)
at org.eclipse.californium.core.network.CoapEndpoint$OutboxImpl.sendResponse(CoapEndpoint.java:530)
at org.eclipse.californium.core.network.stack.CoapStack$StackBottomAdapter.sendResponse(CoapStack.java:217)
at org.eclipse.californium.core.network.stack.AbstractLayer.sendResponse(AbstractLayer.java:71)
at org.eclipse.californium.core.network.stack.ReliabilityLayer.sendResponse(ReliabilityLayer.java:162)
at org.eclipse.californium.core.network.stack.AbstractLayer.sendResponse(AbstractLayer.java:71)
at org.eclipse.californium.core.network.stack.ObserveLayer.sendResponse(ObserveLayer.java:120)
at org.eclipse.californium.core.network.stack.AbstractLayer.sendResponse(AbstractLayer.java:71)
at org.eclipse.californium.core.network.stack.CoapStack$StackTopAdapter.sendResponse(CoapStack.java:176)
at org.eclipse.californium.core.network.stack.CoapStack.sendResponse(CoapStack.java:123)
at org.eclipse.californium.core.network.CoapEndpoint.sendResponse(CoapEndpoint.java:443)
at org.eclipse.californium.core.network.Exchange.sendResponse(Exchange.java:199)
at org.eclipse.californium.core.server.resources.CoapExchange.respond(CoapExchange.java:290)
at org.eclipse.californium.core.server.resources.CoapExchange.respond(CoapExchange.java:211)
at org.eclipse.californium.core.server.resources.CoapExchange.respond(CoapExchange.java:192)
at org.eclipse.californium.examples.HelloWorldServer$HelloWorldResource.handlePOST(HelloWorldServer.java:152)
at org.eclipse.californium.core.CoapResource.handleRequest(CoapResource.java:217)
at org.eclipse.californium.core.server.ServerMessageDeliverer.deliverRequest(ServerMessageDeliverer.java:86)
at org.eclipse.californium.core.network.stack.CoapStack$StackTopAdapter.receiveRequest(CoapStack.java:185)
at org.eclipse.californium.core.network.stack.AbstractLayer.receiveRequest(AbstractLayer.java:91)
at org.eclipse.californium.core.network.stack.AbstractLayer.receiveRequest(AbstractLayer.java:91)
at org.eclipse.californium.core.network.stack.ReliabilityLayer.receiveRequest(ReliabilityLayer.java:236)
at org.eclipse.californium.core.network.stack.AbstractLayer.receiveRequest(AbstractLayer.java:91)
at org.eclipse.californium.core.network.stack.CoapStack.receiveRequest(CoapStack.java:133)
at org.eclipse.californium.core.network.CoapEndpoint$InboxImpl.receiveMessage(CoapEndpoint.java:654)
at org.eclipse.californium.core.network.CoapEndpoint$InboxImpl.access$700(CoapEndpoint.java:583)
at org.eclipse.californium.core.network.CoapEndpoint$InboxImpl$1.run(CoapEndpoint.java:597)
at org.eclipse.californium.core.network.CoapEndpoint$5.run(CoapEndpoint.java:746)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

cf-coap-blockwise-bug.txt

UDPConnector's Receiver Thread Keeps Starting and Stopping

Hi,

I'm working on multicast handling, but there seems to be a problem with the UDPConnector's receiver thread, as it keeps calling start() and stop(). This results in CON messages being missed. I also tried using Request.waitForResponse(waitTime) and increasing the receiveThreadCount (in UDPConnector), but without success.

Is it true that the current implementation only handles one message at a time?

Change Block1 blocksize

Is it possible to change the default Block1 size when doing a PUT? It is currently set to 512, but I want to change it to 1024.
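
If it helps: in the 1.x line the preferred block size comes from the NetworkConfig. A minimal sketch (key names follow the 1.x NetworkConfig.Keys; verify them against your version):

    NetworkConfig config = NetworkConfig.getStandard();
    // use 1024-byte blocks instead of the 512-byte default
    config.setInt(NetworkConfig.Keys.PREFERRED_BLOCK_SIZE, 1024);
    config.setInt(NetworkConfig.Keys.MAX_MESSAGE_SIZE, 1024);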

Query: Scandium DTLS session timeout

The LeastRecentlyUsedCache used for saving connections has an expiration threshold of 36 hours. I assume this implies that if a client sleeps for more than 36 hours and then wakes up to send a data packet, the packet will be discarded and the client will need to reconnect. I did a quick, incomplete read of RFC 6347 and couldn't find anything about a need for session timeouts.

Is this session timeout mandatory? Why don't we let the session stay forever until the LRU capacity is reached? Or is my understanding wrong? Kindly let me know if this query should go to a mailing list instead.

Retransmission issue

I was testing CoAP in a lossy network environment and ran into a problem with retransmission.

I sent CON requests (GET, PUT) in synchronous mode from a client to a server. With some probability of packet loss, a percentage of requests is not delivered to the server, which triggers retransmission by the client. This part works well: I can see those requests successfully delivered via retransmission and processed by the server.

As described in the RFC, responses are sent back to the client piggybacked in an ACK (which I confirmed using .advanced().getType()). If the original request is not ACKed, then either the request was not delivered to the server, or the ACK was not delivered back to the client.

The problem I experienced occurs in the second situation. If the server received and responded to the original request but the ACK was lost, the client resends the request. However, the resent request was not answered by the server that had already responded to the original request. As a result, no ACK was sent, and the client ultimately received no response to the request.

For example, on a connection with 20% packet loss, [1-(20%)^5 = 99.97%] of my requests should be delivered to the server. A round trip succeeds with 80% x 80% = 64%, so each attempt fails with probability 36%; if retransmission worked properly, I should therefore get [1-(36%)^5 = 99.40%] of my requests finally acknowledged at the client. However, about 99.9% of my requests were processed on the server side, but only about 80% of my total requests were ACKed in my experiment. This indicates that the server did not respond to any retransmitted request whose original request had been answered but whose response was lost.

I tried both GET and PUT and got similar results.

I guess it may be because of the way the server deals with the message ID or token, which prevents it from processing an already-processed request. Just my guess.

Request.scheme is not set properly or at all

I'm working on an HTTP-CoAP proxy server based on Californium, and I'm using cf-rd from the californium.tools project. My intention is to support both coap and coaps endpoints on the proxy server so that, with the help of the cf-rd tools, CoAP peers can POST to /rd using either coap or coaps.

However I noticed that although my CoAP device registered using coaps, the logs showed

INFO [RDResource]: Adding new endpoint: coap://127.0.0.1:50961

On further investigation: in the RDNodeResource class, when LinkFormat.CONTEXT is either missing or empty, the code assumes the scheme to be coap. I can't set LinkFormat.CONTEXT in the URI query when POSTing to resource /rd, because of NAT. Based on the RFC, if LinkFormat.CONTEXT is not set, then the source address and port should be used. RDNodeResource uses the Request object to correctly set the source address and port, but not the scheme.

So I decided to change the logic in RDNodeResource to rely on the Request.getScheme() method to create an appropriate LinkFormat.CONTEXT. But this change produced the following in the logs:

INFO [RDResource]: Adding new endpoint: //127.0.0.1:50961

Is Request.scheme ever set? Which component is responsible for setting this variable to the right value? Is there a better way of setting LinkFormat.CONTEXT in RDNodeResource other than using the Request object?

DTLS Handshake Failure with 'Unsupported Certificate'

I recently tried to connect our mbed client to a local instance of Leshan and noticed a problem which I believe may be caused by a bug in the Californium DTLS stack.

Here is the setup:

  • Mbed client (with mbed TLS) supporting PSK and certificate-based authentication (but not raw public keys).
  • The Leshan server (with californium) supports raw public keys and PSK-based authentication (but not certificates). I am using version 0.1.11-M10-Snapshot of the Leshan-demo-server.

At the DTLS handshake, one would expect the client and the server to agree on the use of a PSK-based ciphersuite, but in this specific case they didn't.

Instead the handshake fails with an alert saying ‘Unsupported Certificate’.

I believe this error is caused by an incorrect handling of the ciphersuite selection. If the server only possesses raw public keys and the client does not offer the extensions defined in RFC 7250 then the server should not even try to use certificates.

If I configure the mbed client to only offer a PSK-based ciphersuite (and no certificate-based ciphersuites) then the exchange is successful.

A packet trace of the exchange is available at packet-trace.txt

Scheme not set on Maven Central 1.0.4 version

Hi,

Just a heads-up for other developers so they don't spend time scratching their heads investigating why the scheme is always null, like I did for the past 2 days.

In the class org.eclipse.californium.core.network.CoapEndpoint, inner class InboxImpl, method private void receiveMessage(RawData raw), the versions on the Maven Central repository don't contain this line:

    request.setScheme(raw.isSecure() ? "coaps" : "coap");

which leads to the scheme not being set.

It is set, however, in 1.1.0-SNAPSHOT, so you may want to use that (by building the source from the GitHub repo).

Thanks.

Blockwise transfer slow: Each block is only transferred after the previous is acknowledged

When analyzing issue #44, I noticed that the client transfers the next block only after the server has acknowledged the previous block. You can see this very clearly in the attached tcpdumps.

This makes block transfers very slow on networks with large delays (e.g. mobile connections).

Is this something the spec mandates (I could not find anything)? Otherwise it would be a large performance boost if the client sent blocks without waiting for acknowledgement of the previous ones.

Name threads in thread pools for easier jstack reading

For example in CoapEndpoint.java:
final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(new Utils.DaemonThreadFactory());

This generates thread names like:
"pool-2-thread-1" #26 prio=5 os_prio=0 tid=0x00007fadc03c5000 nid=0x21 waiting on condition [0x00007fad8feed000]

This makes reading a Java thread dump pretty hard.
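
The fix looks straightforward, since Utils.DaemonThreadFactory already accepts a name prefix elsewhere in the code base (see the CoapEndpoint#start snippet quoted in the NullPointerException issue above):

    final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(
            new Utils.DaemonThreadFactory("CoapEndpoint-" + connector.getAddress() + '#'));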

Unable to connect to server on restart with different inetAddress.

Currently we cannot restart a device with a different inetAddress.
It doesn't work because we reuse the connection, but the server doesn't expect that since the device has a new address.

The solution should be to:

  • reuse the connection if the inetAddress doesn't change,
  • resume the session if the inetAddress changes.

See more details on this thread on the mailing list.

PR #11 tries to fix that, but we cannot implement it without adding some new API, so it cannot be integrated into the 1.0.x branch.

For the 1.0.x branch, the only solution is probably to:

  • reuse the connection if the inetAddress doesn't change,
  • clear the connection store to force a new full handshake if the inetAddress changes.

CoAP over TCP leaks exchanges

With the switch to CoAP over TCP, we no longer close outgoing exchanges that never receive any response. This is because, in the past, the reliability layer plus the deduplicator performed this function, and the TCP stack no longer uses those.

This has two impacts:

  1. Clients using async invocation never get called back
  2. TcpMatcher will slowly accumulate old exchanges in its map.

To solve this, we likely need to introduce a TCP version of the reliability layer that closes expired exchanges.

Manual blockwise transfer

How can I manually conduct a blockwise transfer (Californium 1.0.3), i.e. perform it without loading the whole payload into memory?
I have a large payload (e.g. backed by an InputStream) that I don't want to load into memory fully.
Instead, I read chunk by chunk from the InputStream and prepare an
org.eclipse.californium.core.coap.Request, setting the Block1 options properly, roughly as sketched below.
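
(A minimal sketch of what I mean; the target URI and chunk variables are placeholders, and the exact OptionSet#setBlock1 signature should be checked against 1.0.3:)

    Request block = new Request(CoAP.Code.PUT);
    block.setURI("coap://host/resource");               // placeholder target
    block.getOptions().setBlock1(szx, moreBlocks, num); // manual Block1 option
    block.setPayload(chunk);                            // bytes read from the InputStream
    block.send();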
Unfortunately, receiving a response to such a request yields an NPE from
org.eclipse.californium.core.network.stack.BlockwiseLayer.receiveResponse(BlockwiseLayer.java:324),
which is of course due to the missing
org.eclipse.californium.core.network.stack.BlockwiseStatus instance in the Exchange abstraction.

Is this missing functionality, or is it "by design"?

Error when using new certificates

As the certificates provided have expired, I tried to create new ones following the instructions at https://github.com/eclipse/californium/tree/master/scandium-core.
Nevertheless, when I use them, I get the following error:

18 FINE [CertificateMessage]: Certificate validation failed due to signature check failed - (org.eclipse.californium.scandium.dtls.CertificateMessage.java:401) verifyCertificate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:39912 at (2016-06-03 11:19:18)

18 INFO [DTLSConnector]: Aborting handshake with peer [localhost/127.0.0.1:5684]: Certificate chain could not be validated - (org.eclipse.californium.scandium.DTLSConnector.java:1487) terminateOngoingHandshake() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:39912 at (2016-06-03 11:19:18)

Too extensive logging in CoapEndpoint in case of CoAP ping

CoapEndpoint logs every response to a CoAP ping at log level INFO. In my opinion that is too verbose: it is not helpful for an administrator to see that the server got pinged.

For example, to check whether our server is still reachable, we send a ping to it every second. If we do not lower the log level to WARNING for CoapEndpoint, our logs fill up with these ping log statements.

What do you think would be a more appropriate log level? FINEST?

DTLS and Californium Sandbox Server not working

Hello!
I've written an application which uses Californium and Scandium to provide CoAP client functionality. To test the app I've written a little test server (also using Californium). Connecting to this test server works without any problems (with and without DTLS).
Now I'm also trying to connect to one of these:
coaps://californium.eclipse.org:5684
coaps://vs0.inf.ethz.ch:5684
coaps://vs0.inf.ethz.ch:5685

which doesn't work at the moment.
To find out why it doesn't work, I used the example provided here.
Using this example code (with a modified server URL) doesn't work either. The output with fine logging is:

Jun 24, 2016 12:22:31 AM org.eclipse.californium.core.network.config.NetworkConfig createStandardWithFile
INFORMATION: Loading standard properties from file Californium.properties
Jun 24, 2016 12:22:31 AM org.eclipse.californium.core.network.CoapEndpoint start
INFORMATION: Starting endpoint at 0.0.0.0/0.0.0.0:0
1 CONFIG [DTLSConnector]: Cannot determine MTU of network interface, using minimum MTU [1280] of IPv6 instead - (org.eclipse.californium.scandium.DTLSConnector.java:331) start() in thread main at (2016-06-24 00:22:31)
11 CONFIG [DTLSConnector$Worker]: Starting worker thread [DTLS-Receiver-0.0.0.0/0.0.0.0:58507] - (org.eclipse.californium.scandium.DTLSConnector$Worker.java:-1) run() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:31)
1 INFO [DTLSConnector]: DTLS connector listening on [0.0.0.0/0.0.0.0:58507] with MTU [1.280] using (inbound) datagram buffer size [16.474 bytes] - (org.eclipse.californium.scandium.DTLSConnector.java:281) start() in thread main at (2016-06-24 00:22:31)
10 CONFIG [DTLSConnector$Worker]: Starting worker thread [DTLS-Sender-0.0.0.0/0.0.0.0:58507] - (org.eclipse.californium.scandium.DTLSConnector$Worker.java:-1) run() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:31)
Jun 24, 2016 12:22:31 AM org.eclipse.californium.core.CoapClient setEndpoint
INFORMATION: Started set client endpoint 0.0.0.0/0.0.0.0:58507
10 FINE [Connection]: Handshake with [californium.eclipse.org/129.132.15.80:5684] has been started - (org.eclipse.californium.scandium.dtls.Connection.java:831) handshakeStarted() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:31)
10 FINE [ECDHECryptography$SupportedGroup]: Group [brainpoolP256r1] is not supported by JRE - (org.eclipse.californium.scandium.dtls.cipher.ECDHECryptography$SupportedGroup.java:373) <init>() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:31)
10 FINE [ECDHECryptography$SupportedGroup]: Group [brainpoolP384r1] is not supported by JRE - (org.eclipse.californium.scandium.dtls.cipher.ECDHECryptography$SupportedGroup.java:381) <init>() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:31)
10 FINE [ECDHECryptography$SupportedGroup]: Group [brainpoolP512r1] is not supported by JRE - (org.eclipse.californium.scandium.dtls.cipher.ECDHECryptography$SupportedGroup.java:389) <init>() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:31)
12 FINE [DTLSConnector]: Re-transmitting flight for [californium.eclipse.org/129.132.15.80:5684], [3] retransmissions left - (org.eclipse.californium.scandium.DTLSConnector.java:112) handleTimeout() in thread DTLS RetransmitTask 1 at (2016-06-24 00:22:32)
No response received.
10 FINE [Connection]: Handshake with [californium.eclipse.org/129.132.15.80:5684] has been started - (org.eclipse.californium.scandium.dtls.Connection.java:831) handshakeStarted() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:33)
12 FINE [DTLSConnector]: Re-transmitting flight for [californium.eclipse.org/129.132.15.80:5684], [3] retransmissions left - (org.eclipse.californium.scandium.DTLSConnector.java:112) handleTimeout() in thread DTLS RetransmitTask 1 at (2016-06-24 00:22:34)
12 FINE [DTLSConnector]: Re-transmitting flight for [californium.eclipse.org/129.132.15.80:5684], [2] retransmissions left - (org.eclipse.californium.scandium.DTLSConnector.java:112) handleTimeout() in thread DTLS RetransmitTask 1 at (2016-06-24 00:22:36)
10 FINE [Connection]: Handshake with [californium.eclipse.org/129.132.15.80:5684] has been started - (org.eclipse.californium.scandium.dtls.Connection.java:831) handshakeStarted() in thread DTLS-Sender-0.0.0.0/0.0.0.0:58507 at (2016-06-24 00:22:37)

From the log it seems that the server simply doesn't respond to the DTLS handshake request. Even increasing the retransmission timeout via the DtlsConnectorConfig.Builder doesn't change anything.
This behavior is the same on all three servers, whereas my test server works flawlessly.
I've tried versions 1.0.0, 1.0.4 and 1.1.0-SNAPSHOT (built from source), but the behavior is the same with all of them.
I hope you can help me, because at least the californium.eclipse.org server is also running Californium.
Is there something I'm doing wrong, or is there a problem with these servers?
Thanks for the great project and your help.

DTLS requests with CON fail after longer network outage

My client sends regular messages (2 per second, 256 bytes each) via a mobile connection to a server, using CONs, with the DTLSConnector of Californium 1.0.3.

After a longer network outage (no mobile connection at all for > 60 s), the client fails to send any message to the server once the connection is restored. Every message sent fails with onError() of the CoapHandler. Note that the client keeps trying to send during the network outage, regardless of the errors.

The problem does not occur if I use the UDPConnector instead of the DTLSConnector, or if I use NONs instead of CONs. In those cases, the client can successfully send messages again after the connection is restored.

pom.xml of element-connector sub module still contains scm and repository infos

The pom of element-connector still contains (old?) scm configuration and duplicates the repositories of the parent pom. Is this a leftover from the move to a multi-module project, or was it done intentionally?

<scm>
    <developerConnection>scm:git:https://github.com/eclipse/californium.element-connector.git</developerConnection>
    <url>https://github.com/eclipse/californium.element-connector</url>
    <tag>HEAD</tag>
</scm>

<repositories>
    <repository>
        <id>eclipse_snapshots</id>
        <name>Eclipse Snapshots</name>
        <url>https://repo.eclipse.org/content/repositories/snapshots/</url>
    </repository>
    <repository>
        <id>eclipse_releases</id>
        <name>Eclipse Releases</name>
        <url>https://repo.eclipse.org/content/repositories/releases/</url>
    </repository>
</repositories>

Concurrent requests to the server return "null" response

In my scenario, there is one Android server and several clients (all simulated on a PC, using threads). The clients connect to the server and send data arbitrarily, which may lead to concurrency. Each client has a unique ID and sends data to the resource named by its ID, so no clients share a resource (otherwise I found it shows "Wrong block number. Expected xxx but received xxx"). The problem is that if more than 15 clients send requests together, most of them get a "null" response. The Android server still has enough CPU and memory. Since CoAP is a really lightweight protocol, I wonder whether I misused the framework and that is why the efficiency is so low.

Scandium server with wolfSSL client handshake failure

Hi,
I am using Scandium as a server and wolfSSL as a client, with the cipher suite TLS_PSK_WITH_AES_128_CBC_SHA256. In Wireshark I saw the ServerHelloDone succeed, but at the server key exchange it gets stuck, requesting again and again.
Below is the log:

Secure CoAP server powered by Scandium (Sc) is listening on port 5684

15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:21)

15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:21)

15 FINE [Record]: Parsing message without a session - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:21)

15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [NULL] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:21)

15 FINE [DTLSConnector]: Processing CLIENT_HELLO from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:864) processClientHello() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:21)

15 FINER [DTLSConnector]: Verifying client IP address [/127.0.0.1:52174] using HELLO_VERIFY_REQUEST - (org.eclipse.californium.scandium.DTLSConnector.java:949) sendHelloVerify() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINE [Record]: Parsing message without a session - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [NULL] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINE [DTLSConnector]: Processing CLIENT_HELLO from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:864) processClientHello() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINER [DTLSSession]: Setting MTU for peer [/127.0.0.1:52174] to 1,280 bytes - (org.eclipse.californium.scandium.dtls.DTLSSession.java:212) setMaxTransmissionUnit() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINER [DTLSSession]: Setting maximum fragment length for peer [/127.0.0.1:52174] to 1,227 bytes - (org.eclipse.californium.scandium.dtls.DTLSSession.java:597) determineMaxFragmentLength() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINER [DTLSSession]: Checking sequence no [1] using bit mask [10] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINE [ServerHandshaker]: Processing Handshake (22) message from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:238) doProcessMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINE [Connection]: Handshake with [/127.0.0.1:52174] has been started - (org.eclipse.californium.scandium.dtls.Connection.java:831) handshakeStarted() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)

15 FINE [ECDHECryptography$SupportedGroup]: Group [brainpoolP256r1] is not supported by JRE - (org.eclipse.californium.scandium.dtls.cipher.ECDHECryptography$SupportedGroup.java:380) () in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ECDHECryptography$SupportedGroup]: Group [brainpoolP384r1] is not supported by JRE - (org.eclipse.californium.scandium.dtls.cipher.ECDHECryptography$SupportedGroup.java:388) () in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ECDHECryptography$SupportedGroup]: Group [brainpoolP512r1] is not supported by JRE - (org.eclipse.californium.scandium.dtls.cipher.ECDHECryptography$SupportedGroup.java:396) () in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [ServerHandshaker]: Negotiated cipher suite [TLS_PSK_WITH_AES_128_CBC_SHA256] with peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:493) negotiateCipherSuite() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSConnector]: Sending flight of 2 message(s) to peer [/127.0.0.1:52174] using 1 datagram(s) of max. 1,280 bytes - (org.eclipse.californium.scandium.DTLSConnector.java:1267) sendFlight() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ServerHandshaker]: Processed CLIENT_HELLO (1) message with message sequence no [1] from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:372) doProcessMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSSession]: Updated receive window with sequence number [1]: new upper boundary [63], new bit vector [10] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:378) markRecordAsRead() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [PSK] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSSession]: Checking sequence no [2] using bit mask [100] against received records [10] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ServerHandshaker]: Processing Handshake (22) message from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:238) doProcessMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [ServerHandshaker]: Client [/127.0.0.1:52174] uses PSK identity [Client_identity] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:265) receivedClientKeyExchange() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ServerHandshaker]: Processed CLIENT_KEY_EXCHANGE (16) message with message sequence no [2] from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:372) doProcessMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSSession]: Updated receive window with sequence number [2]: new upper boundary [63], new bit vector [110] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:378) markRecordAsRead() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSConnector]: Received 2 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINER [DTLSSession]: Checking sequence no [3] using bit mask [1000] against received records [110] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ServerHandshaker]: Processing Change Cipher Spec (20) message from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:238) doProcessMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [ServerHandshaker]: Processed Change Cipher Spec (20) message from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.dtls.ServerHandshaker.java:372) doProcessMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
15 FINE [DTLSConnector]: Discarding Handshake (22) record from peer [/127.0.0.1:52174]: MAC validation failed - (org.eclipse.californium.scandium.DTLSConnector.java:790) discardRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:22)
18 FINE [DTLSConnector]: Re-transmitting flight for [/127.0.0.1:52174], [3] retransmissions left - (org.eclipse.californium.scandium.DTLSConnector.java:1347) handleTimeout() in thread DTLS RetransmitTask 1 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
18 FINER [DTLSConnector]: Sending flight of 2 message(s) to peer [/127.0.0.1:52174] using 1 datagram(s) of max. 1,280 bytes - (org.eclipse.californium.scandium.DTLSConnector.java:1360) sendFlight() in thread DTLS RetransmitTask 1 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [Record]: Parsing message without a session - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [NULL] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSSession]: Checking sequence no [4] using bit mask [10000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSSession]: Checking sequence no [5] using bit mask [100000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Discarding Handshake (22) record from peer [/127.0.0.1:52174]: MAC validation failed - (org.eclipse.californium.scandium.DTLSConnector.java:790) discardRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [Record]: Parsing message without a session - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [NULL] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSSession]: Checking sequence no [6] using bit mask [1000000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSSession]: Checking sequence no [7] using bit mask [10000000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Discarding Handshake (22) record from peer [/127.0.0.1:52174]: MAC validation failed - (org.eclipse.californium.scandium.DTLSConnector.java:790) discardRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [Record]: Parsing message without a session - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [NULL] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSSession]: Checking sequence no [8] using bit mask [100000000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSSession]: Checking sequence no [9] using bit mask [1000000000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
15 FINE [DTLSConnector]: Discarding Handshake (22) record from peer [/127.0.0.1:52174]: MAC validation failed - (org.eclipse.californium.scandium.DTLSConnector.java:790) discardRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:23)
18 FINE [DTLSConnector]: Re-transmitting flight for [/127.0.0.1:52174], [2] retransmissions left - (org.eclipse.californium.scandium.DTLSConnector.java:1347) handleTimeout() in thread DTLS RetransmitTask 1 at (2016-07-29 18:02:25)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINE [DTLSConnector]: Received Handshake (22) record from peer [/127.0.0.1:52174] - (org.eclipse.californium.scandium.DTLSConnector.java:455) processHandshakeRecord() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
18 FINER [DTLSConnector]: Sending flight of 2 message(s) to peer [/127.0.0.1:52174] using 1 datagram(s) of max. 1,280 bytes - (org.eclipse.californium.scandium.DTLSConnector.java:1360) sendFlight() in thread DTLS RetransmitTask 1 at (2016-07-29 18:02:25)
15 FINE [Record]: Parsing message without a session - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINER [Record]: Parsing HANDSHAKE message plaintext using KeyExchange [NULL] and receiveRawPublicKey [false] - (org.eclipse.californium.scandium.dtls.Record.java:830) decryptHandshakeMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINER [DTLSSession]: Checking sequence no [10] using bit mask [10000000000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINER [DTLSConnector]: Received 1 DTLS records using a 16,474 byte datagram buffer - (org.eclipse.californium.scandium.DTLSConnector.java:422) receiveNextDatagramFromNetwork() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINER [DTLSSession]: Checking sequence no [11] using bit mask [100000000000] against received records [0] with lower boundary [0] - (org.eclipse.californium.scandium.dtls.DTLSSession.java:359) isDuplicate() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)
15 FINER [Handshaker$InboundMessageBuffer]: Discarding message from peer [/127.0.0.1:52174] from past epoch [0] < current epoch [1] - (org.eclipse.californium.scandium.dtls.Handshaker$InboundMessageBuffer.java:362) getNextMessage() in thread DTLS-Receiver-0.0.0.0/0.0.0.0:5684 at (2016-07-29 18:02:25)

Server Treats NON-Message as CON-Message

I'm trying to send a non-confirmable message from the client as follows:

        Request request = new Request(Code.POST, Type.NON);
        request.setPayload("from CoAP client");
        // advanced() sends the request and blocks until a response arrives
        CoapResponse response = client.advanced(request);

In this case, I'm not expecting any response back, but it seems the server always treats the request as a confirmable message.

INFO: UDPConnector Receiver: org.eclipse.californium.elements.RawData@4a1f79c5
UDPConnector Receiver: org.eclipse.californium.elements.RawData@4a1f79c5

org.eclipse.californium.core.server.ServerMessageDeliverer.handleRequest(): request=[CON-POST MID=48305, Token=76369a, OptionSet={"Uri-Path":"helloWorld"}, "from CoAP client"]

org.eclipse.californium.examples.HelloWorldServer$HelloWorldResource.handleRequest(): request=[CON-POST MID=48305, Token=76369a, OptionSet={"Uri-Path":"helloWorld"}, "from CoAP client"]

request=[CON-POST MID=48305, Token=76369a, OptionSet={"Uri-Path":"helloWorld"}, "from CoAP client"]
message from client = [from CoAP client];

As the output shows, the server treats the NON-POST request as a CON-POST request.
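
For comparison, if no reply is expected at all, the blocking client.advanced(request) call can be avoided by sending the request directly. The following is a minimal sketch (assuming the Californium 1.x API; the URI and class name are hypothetical):

    import org.eclipse.californium.core.coap.CoAP.Code;
    import org.eclipse.californium.core.coap.CoAP.Type;
    import org.eclipse.californium.core.coap.Request;

    public class NonPostExample {
        public static void main(String[] args) {
            // Non-confirmable POST, as in the report above
            Request request = new Request(Code.POST, Type.NON);
            request.setPayload("from CoAP client");
            request.setURI("coap://localhost:5683/helloWorld"); // hypothetical endpoint
            // Fire-and-forget: send() hands the request to the default endpoint
            // without waiting for a response, unlike client.advanced(request)
            request.send();
        }
    }

This does not change how the server logs the incoming request, but it keeps the client side consistent with "not expecting any response back".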

Certificates are not valid anymore

Hey,

today I tried to install Californium. However, I had problems with the Scandium tests: I had to reset my date and time to somewhere in June 2015 to make the tests pass, so I assume the demo certificates are not valid anymore and should be replaced. Correct me if I'm wrong.

Thanks and kind regards,
Jan
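
Whether the bundled demo certificates have expired can be checked directly from the keystore. A small diagnostic sketch (assuming the Scandium demo keystore location certs/keyStore.jks and password endPass; adjust both if your checkout differs):

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.X509Certificate;
    import java.util.Enumeration;

    public class CertValidityCheck {
        public static void main(String[] args) throws Exception {
            KeyStore ks = KeyStore.getInstance("JKS");
            // Location and password of the demo keystore (assumption, see above)
            ks.load(new FileInputStream("certs/keyStore.jks"), "endPass".toCharArray());
            for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements();) {
                String alias = aliases.nextElement();
                X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
                if (cert != null) {
                    System.out.println(alias + ": valid from " + cert.getNotBefore()
                            + " until " + cert.getNotAfter());
                }
            }
        }
    }

If getNotAfter() lies in the past for the server or client certificate, the failing tests are explained and the certificates need to be regenerated.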

Why is reregistration forced for every observe relation?

As I can see from my experiments and the source code, if the server doesn't set the Max-Age option, each observe relation is forced to re-register once a minute. Why is that? And how can I disable it?

At CoapClient:1045 the prepareReregistration method is called, which in turn creates (cf. CoapObserveRelation:192) a Runnable that executes reregister() 60+2 seconds after the last delivery, because the default Max-Age is 60 seconds.
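
Since the timer is driven by Max-Age, one way to stretch the interval (if you control the server) is to advertise a longer Max-Age on each notification; the client then schedules the forced re-registration Max-Age + 2 seconds after the last delivery. A sketch (assuming a Californium server resource; the resource name and payload are hypothetical):

    import org.eclipse.californium.core.CoapResource;
    import org.eclipse.californium.core.server.resources.CoapExchange;

    public class LongMaxAgeResource extends CoapResource {

        public LongMaxAgeResource() {
            super("state"); // hypothetical resource name
            setObservable(true);
        }

        @Override
        public void handleGET(CoapExchange exchange) {
            // With Max-Age = 3600 s the client re-registers roughly once an
            // hour instead of every 60 + 2 seconds
            exchange.setMaxAge(3600);
            exchange.respond("value");
        }
    }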

Request for change in the CoapClient

Hi,

We want to create a Request with setContentFormat() and some custom options (via coapRequest.getOptions().addOption()) before calling observe, but the currently exposed observe API only accepts a response handler and an Accept option.

Could you please provide a way to pass a customized request to the observe API, or make the two APIs below public?

private CoapObserveRelation observeAndWait(Request request, CoapHandler handler)

private CoapObserveRelation observe(Request request, CoapHandler handler)
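
Until such an API is exposed, a possible workaround is to set the Observe option on a fully customized request and send it through the public asynchronous advanced() call. A sketch (assuming Request.setObserve() and CoapClient.advanced(CoapHandler, Request) from the public API; the URI and content format are hypothetical). Note that this bypasses CoapObserveRelation, so there is no helper for re-registration or proactive cancellation:

    import org.eclipse.californium.core.CoapClient;
    import org.eclipse.californium.core.CoapHandler;
    import org.eclipse.californium.core.CoapResponse;
    import org.eclipse.californium.core.coap.MediaTypeRegistry;
    import org.eclipse.californium.core.coap.Request;

    public class CustomObserveSketch {
        public static void main(String[] args) {
            CoapClient client = new CoapClient("coap://localhost:5683/helloWorld"); // hypothetical URI
            Request request = Request.newGet();
            request.setURI("coap://localhost:5683/helloWorld");
            request.setObserve(); // registers interest (Observe option = 0)
            request.getOptions().setContentFormat(MediaTypeRegistry.APPLICATION_JSON);
            // further custom options: request.getOptions().addOption(...)
            client.advanced(new CoapHandler() {
                @Override
                public void onLoad(CoapResponse response) {
                    System.out.println("notification: " + response.getResponseText());
                }
                @Override
                public void onError() {
                    System.err.println("observe request failed");
                }
            }, request);
        }
    }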

Regards,
Rajesh Kumar.
