Cybus::Endpoint¶
An endpoint describes the address of a single data endpoint within the language of a specific protocol. A single endpoint address is always mapped into a single topic of the internal broker.
The topic can either be specified explicitly by using the topic property, or it is auto-generated (see note at topic). In a concrete service, the endpoint topic will typically be mapped (using a Cybus::Mapping resource) into an application-specific topic, which will then be used by other services such as a dashboard application.
The actual data address of the protocol is specified by the properties below the subscribe, read, or write property of the endpoint. Note that these important properties are not immediately below the endpoint but one level lower, namely below subscribe / read / write, which in turn is below the endpoint. These two levels must not be confused.
Operation results¶
Endpoints of type read or write generate an additional topic with a /res suffix (“result”) where the results of the operation are sent to, loosely following the JSON-RPC 2.0 Specification.
A read endpoint named myEndpoint will listen to requests on the MQTT topic myEndpoint/req, and publish the result as a message to the MQTT topic myEndpoint/res.
A write endpoint named myEndpoint similarly will listen to requests on the MQTT topic myEndpoint/set, and publish the result as a message to the MQTT topic myEndpoint/res.
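For illustration, a minimal request message could carry just a request identifier that is echoed back in the result (see below). The exact request payload accepted on the req and set topics depends on the concrete protocol implementation; the field shown here is only an assumption for illustration:

{
  "id": 29194
}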
The data message on the result topic will have the following format in the successful case:
{
  "id": 29194,
  "timestamp": 1629351968526,
  "result": {
    "value": 0
  }
}
In this result message, id is the request identifier that was sent on the original request, timestamp is the Unix timestamp of when the result was received, and result is the JSON object with the actual result; its content depends on the concrete protocol implementation.
If there was an error, the resulting JSON object does not contain a result property but instead an error property. The content of the error property also depends on the concrete protocol implementation. Often it is simply a string containing an explanatory error message. Hence, in the error case the data message on the result topic will have the following format:
{
  "id": 29194,
  "timestamp": 1629351968526,
  "error": "Wrong input values"
}
Polling interval and Subscribe¶
When an endpoint should subscribe to some data point on the device, it should be defined with the subscribe operation. Some protocols support such a subscription directly (e.g. OPC UA), whereas others only support regularly polling the data point from the Connectware side. Depending on the available choices, the actual behaviour can be chosen with the properties in the subscribe / read / write section.
If the endpoint is set to polling, there is the choice between specifying an interval or a time schedule expression for polling from the Connectware side.
An interval specifies the waiting time between subsequent polls. There is no guarantee on the exact time interval, only that the time interval is matched on average, i.e. if the protocol needed longer for one interval, the next one will be chosen shorter. Typical numbers for a specified time interval of 1000 milliseconds are actual intervals in the range of 950 to 1050 milliseconds, but this also strongly depends on the protocol and device behaviour.
A time schedule expression is specified in cronExpression syntax, see https://github.com/node-cron/node-cron, for example "0 * * * *" for “every hour at minute 00”, such as 00:00h, 01:00h, 02:00h, and so on. In this case there is no guarantee on the exact time when data is received, but one polling will be triggered for each time expression match. So you can rely on receiving 24 polling results per day if “once per hour” has been specified in the cronExpression.
For any subscribe endpoint in the protocols where polling is available, you can either specify an interval, or a cronExpression (which takes precedence over the interval property), or neither, in which case interval will be used with its default value.
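As a sketch of the cron-based variant (reusing the Bacnet endpoint address from the examples below; the concrete subscribe properties depend on the protocol), an hourly polling endpoint could look like this:

bacnetHourlyPoll:
  type: Cybus::Endpoint
  properties:
    protocol: Bacnet
    connection: !ref bacnetConnection
    subscribe:
      objectType: analog-input
      objectInstance: 2796204
      # Poll once per hour ("0 * * * *"); if an interval were also given,
      # the cronExpression would take precedence.
      cronExpression: "0 * * * *"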
Properties¶
| Property | Type | Required |
|---|---|---|
| protocol | enum | Required |
| connection | string | Required |
| subscribe | object | Optional* |
| read | object | Optional* |
| write | object | Optional* |
| rules | array | Optional |
| qos | integer | Optional |
| retain | boolean | Optional |
| targetState | enum | Optional |
| topic | string | Optional |
| agentName | string | Obsolete |
| buffering | object | Optional |
| inputBuffering | object | Optional |
* One out of subscribe, read, and write is required.
protocol¶
Identifies the protocol for which a connection should be established
is required
type: enum
The value of this property must be one of the supported protocols or a custom connector. Custom connectors allow you to connect to a device or sensor that is not yet supported by the Protocol Mapper. Connectware supports the following protocols out of the box:
Ads
Bacnet
EthernetIp
Focas
GenericVrpc
Hbmdaq
Heidenhain
Http
InfluxDB
Kafka
Modbus
Mqtt
Mssql
Opcda
Opcua
Profinet
S7
Shdr
Sinumerik
Sopas
Sql
Systemstate
Werma
connection¶
Reference to a Cybus::Connection resource
is required
type: string
subscribe / read / write¶
one of these is required
type: object
Depending on the protocol type, this property needs the following parameters (properties), which specify the actual data address in the respective protocol:
Bacnet Endpoint Properties
EthernetIp Endpoint Properties
GenericVrpc Connection Properties
Focas Endpoint Properties
Hbmdaq Endpoint Properties
Heidenhain Endpoint Properties
Http Endpoint Properties
InfluxDB Endpoint Properties
Kafka Endpoint Properties
Modbus Endpoint Properties
Mqtt Endpoint Properties
Mssql Endpoint Properties
Opcda Endpoint Properties
Opcua Endpoint Properties
Profinet Endpoint Properties
Shdr Endpoint Properties
Sinumerik
Sopas Endpoint Properties
Systemstate Endpoint Properties
Werma Endpoint Properties
Note
Strictly speaking, the protocol’s properties mentioned here are not properties of the endpoint itself but of its subscribe / read / write property. In other words, these properties must appear one level deeper in the YAML file: not directly below the endpoint, but below subscribe / read / write, which in turn is below the endpoint. These two levels must not be confused.
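As a minimal sketch of this nesting (taken from the OPC UA example further below), the protocol-specific address property nodeId sits inside the subscribe block, one level below the endpoint-level properties:

opcuaSubscribeToCurrentServerTime:
  type: Cybus::Endpoint
  properties:
    protocol: Opcua                   # endpoint-level property
    connection: !ref opcuaConnection  # endpoint-level property
    subscribe:
      nodeId: i=2258                  # protocol-specific address property, one level deeper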
rules¶
is optional
type: array of Rules Objects
You may specify rules that are applied to the payload before it is sent to the internal broker for the first time.
Note
These rules transform the raw data as received from this protocol. They affect all further steps in the processing chain.
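A sketch of attaching rules to an endpoint, assuming the rule engine provides a transform rule with a JSONata expression (see the Rules Objects documentation for the actually supported rule types):

temperatureEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Opcua
    connection: !ref opcuaConnection
    subscribe:
      nodeId: i=2258
    rules:
      # Hypothetical transform rule: scale the raw value before it is
      # published to the internal broker for the first time.
      - transform:
          expression: '{ "value": value / 10 }'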
qos¶
MQTT Quality of Service (QoS) for the internal messaging from Endpoint to internal MQTT broker.
If this endpoint runs on an agent, setting this to 1 instead of the default 0 will activate the simple buffering of MQTT client implementations.
is optional
type: integer, must be one of 0, 1, 2
default: 0
Note: QoS level 2 is most likely not useful in an industrial context and is not recommended here.
retain¶
Whether the last message should be retained (last-value-cached) on the internal MQTT broker.
If this endpoint runs on an agent, setting this to true instead of default false might be useful in certain applications to have some value on the topic if the agent disconnects. However, in other applications this might not make sense.
is optional
type: boolean, must be one of true, false
default: false
targetState¶
The state this resource should be in, after start-up.
is optional
type: enum, must be one of enabled, disabled
default: enabled
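The qos, retain, and targetState properties sit directly at the endpoint level. A sketch, reusing the OPC UA endpoint from the examples below:

serverTimeCached:
  type: Cybus::Endpoint
  properties:
    protocol: Opcua
    connection: !ref opcuaConnection
    subscribe:
      nodeId: i=2258
    qos: 1                # activate simple MQTT client buffering when running on an agent
    retain: true          # keep the last value on the internal broker
    targetState: disabled # the resource should be disabled after start-up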
topic¶
Explicit topic name to which this endpoint address should be mapped.
Note
The provided topic name is prefixed with the value of the Cybus::MqttRoot global parameter. This global parameter by default has the value services/<serviceId>, where <serviceId> is replaced with the actual ServiceID of the current service. Hence, in the default case the full endpoint topic will expand to: services/<serviceId>/<topic>
See the explanation at Cybus::MqttRoot if alternative topic structures are needed.
Providing a custom topic and avoiding an additional mapping resource improves overall performance as the message has to travel one hop less. Endpoints with custom topics can still be mapped using a regular mapping (see Cybus::Mapping).
is optional
type: string
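For example, an endpoint with an explicit topic (a sketch assuming an MQTT connection named mqttConnection and a hypothetical topic on the device-side broker):

machineStatus:
  type: Cybus::Endpoint
  properties:
    protocol: Mqtt
    connection: !ref mqttConnection
    # With the default Cybus::MqttRoot this expands to services/<serviceId>/machine/status
    topic: machine/status
    subscribe:
      topic: external/machine/status  # hypothetical topic on the connected device-side broker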
agentName¶
Obsolete - this value is no longer being used. The agentName of the referenced connection is always used if this connection and endpoint are being used on an agent instance, separate from the Connectware.
buffering¶
The buffering section can optionally switch on output data buffering on write endpoints. With this feature, write operations are buffered while the connection to the device is lost, in order to avoid data loss.
The buffering mechanism kicks in when a device disconnection is detected and will start buffering any incoming values. After the connection is reestablished, buffered messages will be written to the machine (“flushed”).
The flushing of the buffer is implemented to handle subsequent disconnections during flushing correctly. In such a case newly incoming values will be buffered, too. Once the connection is reestablished again, the flushing will continue where it left off.
By default, this feature is switched off. To enable it, the property enabled must be set to true, and most likely additional properties should be set according to the expected behaviour in the actual application scenario. The supported properties of buffering are:
enabled (default: false): Whether buffering should be enabled when the connection to the source device is lost.
keepOrder (default: true): Whether to keep the order of messages when going into redelivery mode after an endpoint came back online.
burstInterval (default: 100): Time in milliseconds to wait between each re-publishing of buffered messages after the connection is re-established.
burstSize (default: 1): The number of messages to send in one burst when flushing the buffer upon re-connection.
bufferMaxSize (default: 100000): The maximum number of messages to be buffered. Older messages are deleted when this limit is reached.
bufferMaxAge (default: 86400, i.e. one day): The number of seconds the buffered data will be kept. If messages have been buffered for longer than this number of seconds, they will be discarded.
Note
It is important to keep a balanced configuration of these properties to avoid potentially unwanted behaviour. For example, if a very large buffer (bufferMaxSize) is configured along with a very slow burstInterval and a small burstSize, flushing the buffer could take very long, and depending on the bufferMaxAge it would be possible for messages to expire.
The values should be configured based on the target device capabilities.
The keepOrder property, which is switched on by default, will keep the order of arriving messages while a flush of the buffer is in progress. This will delay newly arriving messages until all the buffered messages have been sent.
For example, if the values 1, 2, 3, 4 were in the buffer, the buffer starts flushing after a reconnection, and the values 5, 6 are received in the meantime, then the machine will get the values in exactly that order: 1, 2, 3, 4, 5, 6. If this property were set to false in the same scenario, the order of arrival of the new values is unspecified and the end result would be an interleaved set of values, for example: 1, 5, 2, 3, 6, 4.
inputBuffering¶
The input data to each endpoint can optionally be managed through an individual input buffer (also called input queue) to establish fine-grained control over high data rate behaviour. By default, this input buffering is disabled and all input data is handled on the global event queue instead, which works fine as long as there is no risk of out-of-memory exceptions due to unexpectedly slow data processing or forwarding.
When enabling the individual input buffer, the buffer properties determine the behaviour in situations when the input buffer is filling up. The buffer fills up when the message arrival rate is larger than the processing or forwarding (publishing) rate, i.e. when messages arrive faster than they can be processed or forwarded. If this situation persists for a longer time, the input buffer will reach its configured capacity limit and arriving messages will be dropped, so that the system does not run into an uncontrollable out-of-memory exception. This is a fundamental and unavoidable property of distributed systems due to their finite resources, but the actual behaviour of the input buffer can be adapted to the actual application scenario by setting the properties in the optional inputBuffering section.
Supported properties are (all optional):
enabled (type: boolean, default: false): Enable or disable input buffering.
maxInputBufferSize (type: integer, default: 5000): Maximum number of input messages that are queued in the input buffer. Exceeding messages will be discarded. Adjust this to a higher value if you are handling bursty traffic.
maxConcurrentMessages (type: integer, default: 2): Maximum number of concurrently processed messages as long as the input buffer queue is non-empty.
waitingTimeOnEmptyQueue (type: integer, default: 10): Waiting time in milliseconds after the input buffer queue ran empty and before checking again for newly queued input messages. Regardless of this value, while the input buffer queue is non-empty, all messages will be processed without waiting time in between until the queue is empty again.
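For example, input buffering could be enabled on a high-rate endpoint like this (a sketch reusing the OPC UA endpoint address from the examples below; the buffer sizes are only illustrative):

highRateEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Opcua
    connection: !ref opcuaConnection
    subscribe:
      nodeId: i=2258
    inputBuffering:
      enabled: true
      maxInputBufferSize: 20000  # allow larger bursts than the default of 5000
      maxConcurrentMessages: 4
      waitingTimeOnEmptyQueue: 10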
Examples¶
Bacnet
bacnetSubscribe:
  type: Cybus::Endpoint
  properties:
    protocol: Bacnet
    connection: !ref bacnetConnection
    subscribe:
      objectType: analog-input
      objectInstance: 2796204
      interval: 1000

# This subscribes to a Bacnet analog input of object instance 2796204
OPC UA
opcuaSubscribeToCurrentServerTime:
  type: Cybus::Endpoint
  properties:
    protocol: Opcua
    connection: !ref opcuaConnection
    subscribe:
      nodeId: i=2258

# This subscribes to the OPC UA server node that publishes the current time
MQTT with write buffering enabled
writeEndpoint:
  type: Cybus::Endpoint
  properties:
    protocol: Mqtt
    connection: !ref mqttConnection
    buffering:
      enabled: true
      keepOrder: true
      burstInterval: 10
      burstSize: 100
      bufferMaxSize: 20000
      bufferMaxAge: 5000
    write:
      topic: test/write

# This configures a write endpoint which will buffer up to 20000 messages
# if the connection is lost and will publish 100 messages every 10 milliseconds
# when the connection is reestablished. New incoming messages will be published
# only when the originally buffered items were all published