cirrus-link / sparkplug
License: Eclipse Public License 1.0
Hi folks!
Valgrind reports memory leaks and other issues in the C client library.
Please check the attached Valgrind results and the example program.
Could you please fix these issues ASAP? We want to implement support for the Inductive Automation Ignition platform, so we would like to avoid any delays in our project schedule...
Example program
SparkplugMemoryCheck2.zip
Hey folks,
The C function decode_payload does not work when a custom property is attached to a metric.
My simple program prints:
"ERROR: Wrong metric name: TypeId"
Here is the program code to reproduce:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <sparkplug_b.h>
#include <sparkplug_b.pb.h>

int main(int argc, char *argv[])
{
    uint32_t valUint32 = 7;
    char *unit = "mm";  /* unused in this repro */
    char *myMetricName = "MyMetric1";
    uint32_t typeId = 3;

    com_cirruslink_sparkplug_protobuf_Payload payload;
    get_next_payload(&payload);
    payload.uuid = "myuuid";

    com_cirruslink_sparkplug_protobuf_Payload_Metric metric1 =
            com_cirruslink_sparkplug_protobuf_Payload_Metric_init_default;
    init_metric(&metric1, myMetricName, true, 1, METRIC_DATA_TYPE_INT32,
                false, false, false, &valUint32, sizeof(uint32_t));

    com_cirruslink_sparkplug_protobuf_Payload_PropertySet properties1 =
            com_cirruslink_sparkplug_protobuf_Payload_PropertySet_init_default;
    add_property_to_set(&properties1, "TypeId", PROPERTY_DATA_TYPE_UINT32,
                        false, &typeId, sizeof(typeId));
    add_propertyset_to_metric(&metric1, &properties1);
    add_metric_to_payload(&payload, &metric1);

    size_t buffer_length = 1024;
    uint8_t *binary_buffer = (uint8_t *)malloc(buffer_length * sizeof(uint8_t));
    size_t message_length = encode_payload(&binary_buffer, buffer_length, &payload);
    free_payload(&payload);

    com_cirruslink_sparkplug_protobuf_Payload inbound_payload =
            com_cirruslink_sparkplug_protobuf_Payload_init_zero;
    if (!decode_payload(&inbound_payload, binary_buffer, message_length)) {
        fprintf(stderr, "ERROR: Failed to decode the payload\n");
    }
    if (inbound_payload.metrics_count != 1) {
        fprintf(stderr, "ERROR: Wrong metrics_count: %u\n",
                (unsigned)inbound_payload.metrics_count);
    }
    if (strcmp(inbound_payload.metrics[0].name, myMetricName) != 0) {
        fprintf(stderr, "ERROR: Wrong metric name: %s\n",
                inbound_payload.metrics[0].name);
    }
    free_payload(&inbound_payload);
    free(binary_buffer);

    fprintf(stdout, "Testing finished\n");
    return 0;
}
```
Hello,
On page 22, figure 4 (EoN node MQTT Session Establishment) of the Sparkplug B specification:
Thanks and best regards,
Sabrine
The print statements in tahu/client_libraries/python/sparkplug_b.py (lines 185, 242, 261, 300) need to be updated to Python 3 syntax.
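For reference, a minimal sketch of the kind of change needed (the string contents and variable names below are illustrative assumptions, not the actual lines from sparkplug_b.py):

```python
# Python 2 statement form (a SyntaxError under Python 3):
#     print "Tag: %s, Value: %s" % (name, value)
# Python 3 function form (also valid in Python 2.7):
name, value = "Node Metric0", "hello node"  # hypothetical tag, for illustration
message = "Tag: %s, Value: %s" % (name, value)
print(message)
```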
There's too much console debug output:
info: Publishing DDATA for device sensor1
info: packetsend: publish
info: packetsend: pingreq
info: packetreceive: pingresp
info: packetsend: pingreq
info: packetreceive: pingresp
info: packetsend: pingreq
info: packetreceive: pingresp
info: packetsend: pingreq
info: packetreceive: pingresp
sparkplug_b.c
After last year's update, when @mattmiller2112 updated index.js and added the "options" parameter:
I think this is why node-red-contrib-sparkplug is not receiving DDEATH messages properly.
The direct result of that change is that devices cannot get individual DDEATH messages, which I think is quite important when one node can hold several devices. The data quality will be kept "Good" :(
Specifically:
node-red-contrib-sparkplug/sparkplug/sparkplug.js calls publishDeviceDeath:

```javascript
else if (messageType === "DDEATH") {
    // Clear device cache
    delete deviceCache[deviceId];
    // Publish device data
    sparkplugClient.publishDeviceDeath(deviceId, payload);
}
```

while javascript/sparkplug-client/index.js expects the `options` parameter, which is not included:

```javascript
// Publishes Device DEATH certificates for the edge node
this.publishDeviceDeath = function(deviceId, payload, options) {
    var topic = version + "/" + groupId + "/DDEATH/" + edgeNode + "/" + deviceId;
    // Add seq number
    addSeqNumber(payload);
    // Publish
    logger.info("Publishing DDEATH for device " + deviceId);
    client.publish(topic, encodePayload(maybeCompressPayload(payload, options)));
    messageAlert("published", topic, payload);
};
```
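One possible patch, sketched below under the assumption that `options` can simply be defaulted (this is not the actual upstream fix), is to make the third parameter optional so existing two-argument callers like sparkplug.js keep working:

```javascript
// Sketch only: default the `options` parameter so two-argument callers
// keep working. The real client also adds the seq number and publishes
// over MQTT; those steps are elided here and replaced by a return value.
function publishDeviceDeath(deviceId, payload, options) {
    options = options || {};
    return "Publishing DDEATH for device " + deviceId +
           " (compress: " + Boolean(options.compress) + ")";
}

console.log(publishDeviceDeath("sensor1", { timestamp: Date.now() }));
```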
Keep up the good work!
Hello, I am using the C# Sparkplug library and trying to publish data. I need clarification about the bdSeq number and the message seq number.
Should we use only one variable for the seq number, starting at 0 and incrementing to 255 when publishing NBIRTH, DBIRTH, NDATA and DDATA? What is the difference between the bdSeq number and the message seq number?
Also, I have seen that the metric for bdSeq takes a datatype of UInt64. If the limit is only 255, why are we using UInt64?
Thanks.
file example.c
function publish_node_birth()

```c
fprintf(stdout, "Adding metric: 'Node Metric0'\n");
char nbirth_metric_zero_value[] = "hello node";
add_simple_metric(&nbirth_payload, "Node Metric0", true, Node_Metric0,
                  METRIC_DATA_TYPE_STRING, false, false, false,
                  &nbirth_metric_zero_value, sizeof(nbirth_metric_zero_value));
```

nbirth_metric_zero_value is a char array that already decays to a pointer, so there is no need to take its address when passing it. The correct call shall be:

```c
add_simple_metric(&nbirth_payload, "Node Metric0", true, Node_Metric0,
                  METRIC_DATA_TYPE_STRING, false, false, false,
                  nbirth_metric_zero_value, sizeof(nbirth_metric_zero_value));
```
Another issue regarding the sparkplug_b C client library:
file sparkplug_b.c
There is improper use of the %zd specifier in print functions, such as:

```c
else if (datatype == METRIC_DATA_TYPE_STRING || datatype == METRIC_DATA_TYPE_TEXT || datatype == METRIC_DATA_TYPE_UUID) {
    DEBUG_PRINT(("Setting datatype: %zd, with value: %s\n", datatype, (char *)value));
    ...
```

This in particular makes the example crash with a segmentation fault.
%zd shall be used only for printing size_t values: on ARM9, size_t is 32 bits, but the source above tries to print a 64-bit value.
Our software is C++, so I followed the instructions and example applications for the C client. I was able to write a server and client that publish and subscribe to messages, and decoding on this loopback was correct. When I pointed our client at the Ignition server, I received the message and the Cirrus C library "successfully" decoded it, but the data values were incorrect: all the values were 0.0000 (double) and not all the metric fields were decoded. The decode method returned success, but the result was wrong. We're using spBv1.0, and we tested with specific topics as well as the # wildcard topic.
I went through a bunch of permutations and this is where I am:
Server Kelvin (C) / Client Kelvin (C) -> SUCCESS
Server Kelvin (C) / Client Cirrus-Link Example (Python) -> SUCCESS
Server Kelvin (C) / MQTT.fx (JAVA) -> SUCCESS
Server Ignition (JAVA) / Client Kelvin (C) -> FAILED
Server Ignition (JAVA) / Client Cirrus-Link Example (C) -> FAILED
Server Ignition (JAVA) / Client Cirrus-Link Example (Python) -> SUCCESS
Server Ignition (JAVA) / MQTT.fx (JAVA) -> SUCCESS
I believe connecting the C library to an Ignition server will reproduce the problem.
Hello,
I am trying to write a custom proto file for a project that uses the Sparkplug proto file. In my proto file, I have fields that should have the same type as the "oneof value" of the Metric message in Sparkplug. Since the oneof value is not defined as a separate message, I cannot import it and use it as the type of a field in my proto file (as you can see below, I repeated a similar oneof value structure inside a message). But even after protoc generates the code, it is a different type from the Sparkplug oneof value.
Is there any workaround? (I hesitate to modify the Sparkplug proto file by refactoring the oneof value into a separate message.)
```proto
message ValueType {
    oneof value {
        uint32 int_value = 1;
        uint64 long_value = 2;
        float float_value = 3;
        double double_value = 4;
        bool boolean_value = 5;
        string string_value = 6;
        bytes bytes_value = 7;
        MetricValueExtension extension_value = 8;
    }
}

message MetricValueExtension {
    repeated google.protobuf.Any extensions = 1;
}

message myMessage {
    ValueType default_value = 1;
    ValueType safe_value = 2;
    ValueType min_value = 3;
    ValueType max_value = 4;
}
```
```proto
message Metric {
    string name = 1;        // Metric name - should only be included on birth
    uint64 alias = 2;       // Metric alias - tied to name on birth and included in all later DATA messages
    uint64 timestamp = 3;   // Timestamp associated with data acquisition time
    uint32 datatype = 4;    // DataType of the metric/tag value
    bool is_historical = 5; // If this is historical data and should not update real time tag
    bool is_transient = 6;  // Tells consuming clients such as MQTT Engine to not store this as a tag
    bool is_null = 7;       // If this is null - explicitly say so rather than using -1, false, etc. for some datatypes.
    MetaData metadata = 8;  // Metadata for the payload
    PropertySet properties = 9;
    oneof value {
        uint32 int_value = 10;
        uint64 long_value = 11;
        float float_value = 12;
        double double_value = 13;
        bool boolean_value = 14;
        string string_value = 15;
        bytes bytes_value = 16; // Bytes, File
        DataSet dataset_value = 17;
        Template template_value = 18;
        MetricValueExtension extension_value = 19;
    }
}

message MetricValueExtension {
    repeated google.protobuf.Any details = 1;
}
```
Hi,
Platform: i.MX28 (cross compilation)
I'm trying to use the sparkplug library with the example.c application. It executes fine when compiled against the static library (libsparkplug_b.a), but the same example.c gives a segmentation fault when compiled against the shared library (libsparkplug_b.so).
Please help me find and fix the issue.
Hello,
I am reading this example:
https://github.com/Cirrus-Link/Sparkplug/blob/master/sparkplug_b/raspberry_pi_examples/java/src/main/java/com/cirruslink/example/SparkplugRaspberryPiExample.java
According to the Sparkplug B specification, section 16.1:
A bdSeq number as a metric should be included in the payload. This should match the bdSeq number
provided in the MQTT CONNECT packet’s LW&T payload. This allows backend applications to correlate
NBIRTHs to NDEATHs. The bdSeq number should start at zero and increment by one on every new MQTT
CONNECT
In the example, if I have understood well, the bdSeq is not the same in the MQTT CONNECT and the NBIRTH; I saw that it is incremented in publishBirth(). I don't understand why, since it should match the one in the MQTT CONNECT.
Have I missed something?
Thanks and best regards,
Sabrina
Hello,
I am a little bit confused about the DDEATH payload.
In the specification, section 16.7:
The DDEATH message requires the following payload components.
The DDEATH must include a seq number in the payload, and it must have a value one greater than the one the previous MQTT message from the EoN node contained, unless the previous MQTT message contained a value of 255. In that case the seq number must be 0.
And in section 17.8:
The DDEATH does not include a payload.
Which is the correct one?
Thanks and best regards,
Sabrine
Currently the proto file is the latest, and the example given in the Node-RED **emulated-device.js** file is not working. Can we have a sample flow for the Node-RED example?
Update: posting this at the official GitHub repository.