
sparkplug's People

Contributors

anipper, cirrus-link, ckienle, hwbrill, jrmclaurin, mattmiller2112, wes-johnson

sparkplug's Issues

Oneof value inside metric

Hello,

I am trying to write a custom proto file for a project that uses the Sparkplug proto file. In my proto file, I have fields that should have the same type as the "oneof value" of the Metric message in Sparkplug. Since the oneof value is not defined as a separate message, I cannot import it and use it as the type of those fields (as you can see below, I repeated a similar oneof value structure inside a message of my own). But even so, after protoc generates the code, it is a different type from the Sparkplug oneof value.

Is there any workaround? (I hesitate to modify the Sparkplug proto file by refactoring the oneof value into a separate message.)

My proto file:

message ValueType {
 
    oneof value {
        uint32 int_value = 1;
        uint64 long_value = 2;
        float float_value = 3;
        double double_value = 4;
        bool boolean_value = 5;
        string string_value = 6;
        bytes bytes_value = 7;
        MetricValueExtension extension_value = 8;
    }
}

message MetricValueExtension {
    repeated google.protobuf.Any extensions = 1;
}

message myMessage {
    ValueType default_value = 1;
    ValueType safe_value = 2;
    ValueType min_value = 3;
    ValueType max_value = 4;
}

Sparkplug proto file:

message Metric {
    string name      = 1; // Metric name - should only be included on birth
    uint64 alias     = 2; // Metric alias - tied to name on birth and included in all later DATA messages
    uint64 timestamp = 3; // Timestamp associated with data

    // acquisition time
    uint32 datatype        = 4; // DataType of the metric/tag value
    bool is_historical     = 5; // If this is historical data and should not update real time tag
    bool is_transient      = 6; // Tells consuming clients such as MQTT Engine to not store this as a tag
    bool is_null           = 7; // If this is null - explicitly say so rather than using -1, false, etc. for some datatypes.
    MetaData metadata      = 8; // Metadata for the payload
    PropertySet properties = 9;
    
    oneof value {
        uint32 int_value        = 10;
        uint64 long_value       = 11;
        float float_value       = 12;
        double double_value     = 13;
        bool boolean_value      = 14;
        string string_value     = 15;
        bytes bytes_value       = 16; // Bytes, File
        DataSet dataset_value   = 17;
        Template template_value = 18;

        MetricValueExtension extension_value = 19;
    }
}

message MetricValueExtension {
    repeated google.protobuf.Any details = 1;
}

node-red-contrib-sparkplug: not possible to send DDEATH

After last year's update, when @mattmiller2112 updated index.js and added the "options" parameter, I think node-red-contrib-sparkplug stopped receiving DDEATH messages properly.

The direct result of that change is that devices cannot get individual DDEATH messages, which I think is quite important when one node can hold several devices. QOS data will be kept "Good" :(

Specifically:
node-red-contrib-sparkplug/sparkplug/sparkplug.js calls publishDeviceDeath:

else if (messageType === "DDEATH") {
    // Clear device cache
    delete deviceCache[deviceId];
    // Publish device data
    sparkplugClient.publishDeviceDeath(deviceId, payload);

while javascript/sparkplug-client/index.js expects the options parameter, which is not passed:

// Publishes Device DEATH certificates for the edge node
this.publishDeviceDeath = function(deviceId, payload, options) {
    var topic = version + "/" + groupId + "/DDEATH/" + edgeNode + "/" + deviceId;
    // Add seq number
    addSeqNumber(payload);
    // Publish
    logger.info("Publishing DDEATH for device " + deviceId);
    client.publish(topic, encodePayload(maybeCompressPayload(payload, options)));
    messageAlert("published", topic, payload);
Keep up the good work!

decode_payload does not work

Hey folks,

The C function decode_payload does not work when a custom property is attached to a metric.
My simple program prints:
"ERROR: Wrong metric name: TypeId"

Here is a program to reproduce it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
/* plus the Sparkplug B C library headers (e.g. sparkplug_b.h) that declare these types and helpers */

int main(int argc, char* argv[])
{
    uint32_t valUint32 = 7;
    char* unit = "mm";
    char* myMetricName = "MyMetric1";
    uint32_t typeId = 3;

    com_cirruslink_sparkplug_protobuf_Payload payload;
    get_next_payload(&payload);
    payload.uuid = "myuuid";

    com_cirruslink_sparkplug_protobuf_Payload_Metric metric1 = com_cirruslink_sparkplug_protobuf_Payload_Metric_init_default;
    init_metric(&metric1, myMetricName, true, 1, METRIC_DATA_TYPE_INT32, false, false, false, &valUint32, sizeof(uint32_t));

    com_cirruslink_sparkplug_protobuf_Payload_PropertySet properties1 = com_cirruslink_sparkplug_protobuf_Payload_PropertySet_init_default;
    add_property_to_set(&properties1, "TypeId", PROPERTY_DATA_TYPE_UINT32, false, &typeId, sizeof(typeId));
    add_propertyset_to_metric(&metric1, &properties1);

    add_metric_to_payload(&payload, &metric1);

    size_t buffer_length = 1024;
    uint8_t *binary_buffer = (uint8_t *)malloc(buffer_length * sizeof(uint8_t));
    size_t message_length = encode_payload(&binary_buffer, buffer_length, &payload);
    free_payload(&payload);

    com_cirruslink_sparkplug_protobuf_Payload inbound_payload = com_cirruslink_sparkplug_protobuf_Payload_init_zero;
    if (!decode_payload(&inbound_payload, binary_buffer, message_length))
    {
        fprintf(stderr, "ERROR: Failed to decode the payload\n");
    }

    if (inbound_payload.metrics_count != 1)
    {
        fprintf(stderr, "ERROR: Wrong metrics_count: %ld\n", (long)inbound_payload.metrics_count);
    }

    if (strcmp(inbound_payload.metrics[0].name, myMetricName) != 0)
    {
        fprintf(stderr, "ERROR: Wrong metric name: %s\n", inbound_payload.metrics[0].name);
    }

    free_payload(&inbound_payload);
    free(binary_buffer);

    fprintf(stdout, "Testing finished\n");

    return 0;
}

segmentation fault running example.c on a Linux ARM9 platform

file example.c
function publish_node_birth()

fprintf(stdout, "Adding metric: 'Node Metric0'\n");
char nbirth_metric_zero_value[] = "hello node";
add_simple_metric(&nbirth_payload, "Node Metric0", true, Node_Metric0, METRIC_DATA_TYPE_STRING, false, false, false, &nbirth_metric_zero_value, sizeof(nbirth_metric_zero_value));

nbirth_metric_zero_value already decays to a char pointer, so there is no need to take its address when passing it; the correct call is:
add_simple_metric(&nbirth_payload, "Node Metric0", true, Node_Metric0, METRIC_DATA_TYPE_STRING, false, false, false, nbirth_metric_zero_value, sizeof(nbirth_metric_zero_value));

Another issue regards the sparkplug_b client C library:
file sparkplug_b.c
There is improper use of the %zd specifier in print functions, for example:

else if (datatype == METRIC_DATA_TYPE_STRING || datatype == METRIC_DATA_TYPE_TEXT || datatype == METRIC_DATA_TYPE_UUID) {
		DEBUG_PRINT(("Setting datatype: %zd, with value: %s\n", datatype, (char *)value));
		...

This in particular makes the example crash with a segmentation fault. %zd should be used only for printing values of type size_t; on ARM9, size_t is 32 bits, but the code above tries to print a 64-bit value.
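
A portable way to print such a value, assuming the datatype argument really is a 64-bit unsigned integer as this report suggests, is the fixed-width format macro PRIu64 from <inttypes.h>, or an explicit cast to a type that matches the specifier. A minimal standalone sketch (not a patch against sparkplug_b.c):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t datatype = 12;            /* stand-in for the metric datatype value */
    const char *value = "hello node";

    /* PRIu64 expands to the right conversion for uint64_t on every platform */
    printf("Setting datatype: %" PRIu64 ", with value: %s\n", datatype, value);

    /* alternatively, cast to a type that matches the format specifier */
    printf("Setting datatype: %lu, with value: %s\n", (unsigned long)datatype, value);

    return 0;
}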

BdSeq Number and Message Seq Number. Are those seq numbers in an order?

Hello, I am using the C# Sparkplug library and trying to publish data. I need clarification about the bdSeq number and the message seq number.

Should we use only one variable for the seq number, starting at 0 and incrementing up to 255 when publishing NBIRTH, DBIRTH, NDATA and DDATA? What is the difference between the bdSeq number and the message seq number?

Also, I have seen that the bdSeq metric takes a datatype of UInt64. If the limit is only 255, why are we using UInt64?

Thanks.
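
For what it's worth, the two numbers are usually tracked as independent counters: seq travels with every message the edge node publishes and wraps from 255 back to 0, while bdSeq changes once per MQTT connect and is reported as a metric in the NDEATH/NBIRTH pair. The sketch below only illustrates that split with made-up names; it is not code from the C# (or any other) Sparkplug client library.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative per-edge-node state; the variable and function names are made up. */
static uint8_t  seq    = 0;   /* message seq: carried by every payload, 0..255 then wraps */
static uint64_t bd_seq = 0;   /* birth/death seq: one value per MQTT session */

/* seq for the next published message (NBIRTH, DBIRTH, NDATA, DDATA, ...) */
static uint8_t next_seq(void)
{
    uint8_t current = seq;
    seq = (uint8_t)(seq + 1);          /* uint8_t arithmetic wraps 255 -> 0 */
    return current;
}

/* bdSeq for a new MQTT session; the same value goes into the NDEATH registered
 * with CONNECT and into the NBIRTH that follows. */
static uint64_t next_bd_seq(void)
{
    seq = 0;                           /* a new session's NBIRTH typically restarts the message seq at 0 */
    return bd_seq++;
}

int main(void)
{
    printf("bdSeq = %" PRIu64 "\n", next_bd_seq());   /* 0 on the first connect */
    printf("NBIRTH seq = %u\n", next_seq());          /* 0 */
    printf("NDATA  seq = %u\n", next_seq());          /* 1 */
    printf("DDATA  seq = %u\n", next_seq());          /* 2 */
    return 0;
}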

too much debug

There's too much console debug output:

info: Publishing DDATA for device sensor1
info: packetsend: publish
info: packetsend: pingreq
info: packetreceive: pingresp
info: packetsend: pingreq
info: packetreceive: pingresp
info: packetsend: pingreq
info: packetreceive: pingresp
info: packetsend: pingreq
info: packetreceive: pingresp

Sparkplug B specification: clarification about figure 4

Hello,

On page 22, figure 4 (EoN node MQTT Session Establishment) of the Sparkplug B specification:

  • Message 2 is labeled SUBSCRIBE, but it is a PUBLISH, right?
  • Message 3 is labeled PUBLISH, but it is a SUBSCRIBE, right?
  • After the second NBIRTH is sent, the state is ONLINE, right?
  • The last message is a PUBLISH of NDATA, right?

Thanks and best regards,
Sabrine

Memory leaks and other issues reported by valgrind

Hi folks!

A Valgrind memory check reports memory leaks and other issues within the C client library.
Please check the attached Valgrind results and the example program.

Could you please fix these issues as soon as possible? We want to implement support for the Inductive Automation Ignition platform, so we would like to avoid any problems with our project schedule.

Results screenshot:
sparkplugmemoryleaksandotherissues2

Example program
SparkplugMemoryCheck2.zip

The C library message decoder does not properly decode messages from an Ignition instance

Our software is C++, so I followed the instructions and example applications for the C library. I was able to write a server and client that can publish and subscribe to messages, and decoding over this loopback was correct. When I pointed our client at the Ignition server, I was able to get the message, and the Cirrus C library "successfully" decoded it, but the data values were incorrect (all the values were 0.0000 (double), and not all of the metric fields were decoded). The decode method reported success, but the result was wrong. We're using spBv1.0, and we tested with specific topics as well as with the # (all topics) wildcard.

I went through a bunch of permutations and this is where I am:

Server Kelvin (C)      / Client Kelvin (C)                   -> SUCCESS
Server Kelvin (C)      / Client Cirrus-Link Example (Python) -> SUCCESS
Server Kelvin (C)      / MQTT.fx (JAVA)                      -> SUCCESS
Server Ignition (JAVA) / Client Kelvin (C)                   -> FAILED
Server Ignition (JAVA) / Client Cirrus-Link Example (C)      -> FAILED
Server Ignition (JAVA) / Client Cirrus-Link Example (Python) -> SUCCESS
Server Ignition (JAVA) / MQTT.fx (JAVA)                      -> SUCCESS

I believe connecting the C library to an Ignition server will reproduce the problem.

SparkplugRaspberryPiExample: clarification about the bdSeq

Hello,

I am reading this example:
https://github.com/Cirrus-Link/Sparkplug/blob/master/sparkplug_b/raspberry_pi_examples/java/src/main/java/com/cirruslink/example/SparkplugRaspberryPiExample.java

According to section 16.1 of the Sparkplug B specification:

A bdSeq number as a metric should be included in the payload. This should match the bdSeq number
provided in the MQTT CONNECT packet’s LW&T payload. This allows backend applications to correlate
NBIRTHs to NDEATHs. The bdSeq number should start at zero and increment by one on every new MQTT
CONNECT

In the example, if I understood correctly, the bdSeq is not the same in the MQTT CONNECT and in the NBIRTH; I saw that it is incremented in publishBirth(). I don't understand why, since it should match the one in the MQTT CONNECT.

Have I missed something?

Thanks and best regards,
Sabrina
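
For illustration only, the quoted requirement can be read as: fix a bdSeq value before connecting, put it in the NDEATH payload that is registered as the MQTT will in CONNECT, publish the NBIRTH with that same value, and only advance the counter for the next reconnect. The sketch below uses made-up helper names; it is not the Java example's or any library's API.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for illustration only. */
static void register_will_with_ndeath(uint64_t session_bd_seq)
{
    printf("CONNECT will (NDEATH) carries bdSeq = %" PRIu64 "\n", session_bd_seq);
}

static void publish_nbirth(uint64_t session_bd_seq)
{
    printf("NBIRTH carries bdSeq = %" PRIu64 "\n", session_bd_seq);
}

static uint64_t bd_seq = 0;

/* One bdSeq per MQTT session: the NDEATH registered with CONNECT and the NBIRTH
 * published right after connecting report the same value; the counter only
 * advances for the next (re)connect. */
static void establish_session(void)
{
    register_will_with_ndeath(bd_seq);
    /* ... MQTT CONNECT is sent here ... */
    publish_nbirth(bd_seq);
    bd_seq++;
}

int main(void)
{
    establish_session();   /* first session: bdSeq 0 in both the will and the NBIRTH */
    establish_session();   /* after a reconnect: bdSeq 1 in both */
    return 0;
}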

Does the DDEATH message contain a payload or not?

Hello,

I am a little bit confused about the DDEATH payload.
In the specification, section 16.7:

The DDEATH message requires the following payload components.
The DDEATH must include a seq number in the payload and it must have a value of one greater than
the previous MQTT message from the EoN node contained, unless the previous MQTT message contained
a value of 255. In this case the seq number must be 0.

And in section 17.8:

The DDEATH does not include a payload.

Which is the correct one?

Thanks and best regards,
Sabrine
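
Without taking a position on which section is authoritative, the section 16.7 reading would mean a DDEATH carries a minimal payload whose only required component is the seq number. The sketch below illustrates that reading with a made-up struct and helper; the real C library uses the generated com_cirruslink_sparkplug_protobuf_Payload type instead.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified, hypothetical payload struct for illustration only. */
typedef struct {
    bool     has_seq;
    uint64_t seq;
    size_t   metrics_count;   /* stays 0: no metrics are needed under this reading */
} ddeath_payload_t;

/* seq is one greater than the previous message from the EoN node, wrapping 255 -> 0 */
static ddeath_payload_t build_ddeath_payload(uint64_t previous_seq)
{
    ddeath_payload_t p = {0};
    p.has_seq = true;
    p.seq = (previous_seq == 255) ? 0 : previous_seq + 1;
    return p;
}

int main(void)
{
    ddeath_payload_t d = build_ddeath_payload(255);
    printf("DDEATH seq = %llu, metrics_count = %zu\n",
           (unsigned long long)d.seq, d.metrics_count);
    return 0;
}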
