What is Data Distribution Service (DDS)?

DDS is an Object Management Group (OMG) machine-to-machine middleware standard for real-time systems. It addresses the data-exchange needs of applications in aerospace, defense, air-traffic control, medical devices, robotics, power generation, simulation, and testing.

Publish-Subscribe Model

The Publish-Subscribe model of DDS lets application components communicate data, events, and commands. DDS handles all transfer chores: message addressing, data marshalling and de-marshalling (so subscribers can be on different platforms), delivery, flow control, retries, etc.
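To make this concrete, here is a minimal publish-side sketch using the standardized ISO C++ DDS API (DDS-PSM-Cxx), as shipped with implementations such as Eclipse Cyclone DDS and RTI Connext. The type SensorData::Reading, its header SensorData.hpp, and the Topic name "SensorReadings" are illustrative assumptions; in practice the type comes from your own IDL definition and the vendor's code generator.

// Minimal publish-side sketch (ISO C++ DDS API).
// Assumes SensorData::Reading was generated from IDL such as:
//   module SensorData { struct Reading { long id; double value; }; };
#include <dds/dds.hpp>
#include "SensorData.hpp"   // assumed IDL-generated header

int main() {
    // Join DDS domain 0; no addresses or ports are configured here.
    dds::domain::DomainParticipant participant(0);

    // The Topic names and types the data being published.
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReadings");

    // The DataWriter publishes samples of that Topic; the middleware
    // handles addressing, marshalling, and delivery to subscribers.
    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic);

    SensorData::Reading sample;
    sample.id() = 1;
    sample.value() = 36.6;
    writer.write(sample);
    return 0;
}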

A DDS implementation decides on the underlying transport mechanism that moves data between publishers and subscribers (TCP, UDP multicast, shared memory, and so on). Choosing one is a trade-off between the transport's latency, bandwidth, and reliability on one side and the flexibility and simplicity the DDS specification leaves to implementations on the other.

In a DDS application, devices or applications publish data described by a Topic object, and applications that want to receive that data subscribe to the same Topic. The data travels across the DDS network and is consumed by subscribing applications on other nodes. DDS applications use a combination of objects called DataWriters and DataReaders; each DataWriter is associated with a single Topic. When the DataWriter's on_publication_matched() callback is invoked, the current_count field of the PublicationMatchedStatus structure it receives reports how many DataReaders have been discovered listening to its publication Topic.
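As a hedged sketch of that matching callback, the listener below overrides on_publication_matched() and reads current_count from the PublicationMatchedStatus it is handed. It reuses the assumed SensorData::Reading type from the earlier example.

// DataWriter listener reporting matched DataReaders (ISO C++ DDS API).
#include <iostream>
#include <dds/dds.hpp>
#include "SensorData.hpp"   // assumed IDL-generated header

class MatchListener : public dds::pub::NoOpDataWriterListener<SensorData::Reading> {
    void on_publication_matched(
        dds::pub::DataWriter<SensorData::Reading>& /*writer*/,
        const dds::core::status::PublicationMatchedStatus& status) override {
        // current_count() is the number of DataReaders currently
        // matched with this DataWriter's Topic.
        std::cout << "Matched readers: " << status.current_count() << std::endl;
    }
};

int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReadings");
    dds::pub::Publisher publisher(participant);

    MatchListener listener;
    dds::pub::DataWriter<SensorData::Reading> writer(
        publisher, topic, publisher.default_datawriter_qos(),
        &listener, dds::core::status::StatusMask::publication_matched());
    // ... write samples as readers come and go ...
    return 0;
}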

Message Addressing

DDS supports dynamic discovery of publishers and subscribers over best-effort transports, enabling applications to find each other at runtime without pre-configured communication ports. This simplifies distributed applications and encourages modular, well-structured programs. It also provides data reliability and automatic failure handling, for example hot-swapping to a redundant publisher when the primary fails and returning subscribers to the primary publisher's data once it recovers.
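The subscribe side illustrates the point. The sketch below (again using the ISO C++ DDS API and the assumed SensorData::Reading type) names only a domain ID and a Topic; matching publishers are located by DDS discovery at run time, with no configured addresses or ports.

// Minimal subscribe-side sketch (ISO C++ DDS API).
#include <chrono>
#include <iostream>
#include <thread>
#include <dds/dds.hpp>
#include "SensorData.hpp"   // assumed IDL-generated header

int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReadings");
    dds::sub::Subscriber subscriber(participant);
    dds::sub::DataReader<SensorData::Reading> reader(subscriber, topic);

    while (true) {
        // take() removes any available samples from the reader's cache.
        dds::sub::LoanedSamples<SensorData::Reading> samples = reader.take();
        for (const auto& s : samples) {
            if (s.info().valid()) {
                std::cout << "id=" << s.data().id()
                          << " value=" << s.data().value() << std::endl;
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}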

DDS can also control how much data is discarded by including Quality of Service (QoS) parameters in the communication protocol. These can specify a deadline, latency budget, or transport priority that governs how much data is sent and how quickly. This matters especially in mobile networks, where performance can degrade because of intermittent connectivity and changing IP addresses. Implementations such as RTI Connext and Eclipse Cyclone DDS exchange these QoS parameters during discovery and identify every Entity with a Globally Unique Identifier (GUID), which application programs can use to determine where a sample originated.
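The sketch below shows how those three policies might be set on a single DataWriter with the ISO C++ DDS API; the durations and priority value are placeholders, not recommendations.

// Setting Deadline, LatencyBudget, and TransportPriority on a DataWriter.
#include <dds/dds.hpp>
#include "SensorData.hpp"   // assumed IDL-generated header

int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReadings");
    dds::pub::Publisher publisher(participant);

    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::Deadline(dds::core::Duration(1, 0))               // expect an update at least once per second
        << dds::core::policy::LatencyBudget(dds::core::Duration(0, 10000000))   // 10 ms delivery-delay hint to the middleware
        << dds::core::policy::TransportPriority(10);                            // relative send priority

    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic, qos);
    // ... write samples ...
    return 0;
}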

Flow Control

DDS is an application middleware standard published by the Object Management Group (OMG) that provides low-latency data connectivity with high reliability and scalability. It is widely used in automotive, industrial automation, and other commercial-grade IoT applications.

DDS abstracts applications from operating systems, network transports, and low-level data formats. This lets developers focus on business logic and reduces application complexity. The middleware takes care of the transfer chores listed earlier (message addressing, marshalling and de-marshalling, delivery, retries), including the flow control discussed in this section.

Depending on the implementation, DDS supports several kinds of flow control. Eclipse Cyclone DDS uses a threshold-based model that limits the number of samples queued for retransmission: it combines the Internal/NackDelay setting with the default retransmit-queue size to decide how many new retransmission requests are scheduled, so that in-transit messages can still be sent without overflowing the queue. This avoids the extra Heartbeat and AckNack traffic, and the added latency, that a full retransmission queue would otherwise cause.
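Those thresholds are implementation configuration (in Cyclone DDS they are set through its XML configuration, e.g. Internal/NackDelay) rather than standard API calls. A related but distinct, portable way to bound how much data a reliable writer will queue is the History and ResourceLimits QoS, sketched below with illustrative limits; this is not the same mechanism as Cyclone DDS's internal retransmit-queue thresholds.

// Bounding a reliable DataWriter's queue with standard QoS (ISO C++ DDS API).
#include <dds/dds.hpp>
#include "SensorData.hpp"   // assumed IDL-generated header

int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReadings");
    dds::pub::Publisher publisher(participant);

    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::Reliability::Reliable()
        << dds::core::policy::History::KeepAll()
        // max_samples, max_instances, max_samples_per_instance (illustrative values)
        << dds::core::policy::ResourceLimits(1024, 16, 64);

    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic, qos);
    // write() now blocks (or times out) instead of queuing without bound.
    return 0;
}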

Retries

DDS supports retries to mitigate transient failures. However, retrying lost samples can create other problems for DataReaders. In particular, the application may appear to be "dropping" messages if a DataWriter's max_blocking_time is exceeded (see "Writing Data Using Strict Reliability", section 31.6.9 of the RTI Connext documentation).
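A hedged sketch of such a strict-reliability setup in the ISO C++ DDS API: reliable delivery, keep-all history, and a bounded max_blocking_time, with the write call raising dds::core::TimeoutError if the sample cannot be accepted in time. The 100 ms value and the SensorData::Reading type are illustrative assumptions.

// Strict-reliability style DataWriter with a bounded max_blocking_time.
#include <iostream>
#include <dds/dds.hpp>
#include "SensorData.hpp"   // assumed IDL-generated header

int main() {
    dds::domain::DomainParticipant participant(0);
    dds::topic::Topic<SensorData::Reading> topic(participant, "SensorReadings");
    dds::pub::Publisher publisher(participant);

    dds::pub::qos::DataWriterQos qos = publisher.default_datawriter_qos();
    qos << dds::core::policy::Reliability::Reliable(dds::core::Duration(0, 100000000))  // 100 ms max_blocking_time
        << dds::core::policy::History::KeepAll();

    dds::pub::DataWriter<SensorData::Reading> writer(publisher, topic, qos);

    SensorData::Reading sample;
    sample.id() = 1;
    sample.value() = 36.6;
    try {
        writer.write(sample);
    } catch (const dds::core::TimeoutError& e) {
        // The sample could not be accepted within max_blocking_time.
        std::cerr << "write timed out: " << e.what() << std::endl;
    }
    return 0;
}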

Data loss can also cause a DataWriter to send many redundant repair samples if the nack_suppression_duration of the DATA_WRITER_PROTOCOL QosPolicy (section 47.5 of the RTI Connext documentation) is set too low. Tuning nack_suppression_duration together with min_nack_response_delay controls how soon, and how often, the DataWriter answers retransmission requests, so that it does not repair samples that are already on their way to the DataReader.

A DataWriter configured for strict reliability retransmits a missing sample only when a DataReader negatively acknowledges it, and suppresses requests for samples that are already in flight. If a sample still cannot be delivered within the allowed time, the write call fails rather than queuing ever more data. This keeps a downstream DataReader from being overwhelmed by retransmissions of data it has already received.
