Discussion:
[OSPL-Dev] Network partitioning and discovery
Andrea Reale
2012-01-11 10:34:52 UTC
Hello everyone.

I am confused about how static discovery works in relation to network
partitioning. In particular, here is my use case.

On one node (call it N1), I run a domain participant with one data
writer that writes some data to a topic 'T' in partition 'part'.
The data writer's reliability QoS is best-effort, with KEEP_LAST
history and history.depth = 1.

The ospl configuration for that node (N1), as far as network
partitions are concerned, is as follows:

...
<Partitioning>
<GlobalPartition Address="224.0.0.42"/>
<NetworkPartitions>
<NetworkPartition Address="N2 N3 N4 N5" Connected="true"
Name="part"/>
</NetworkPartitions>
<PartitionMappings>
<PartitionMapping DCPSPartitionTopic="part.*"
NetworkPartition="inputoutput"/>
</PartitionMappings>
</Partitioning>
...

N2, N3, N4, and N5 are the unicast IP addresses of four other potential
domain participants.

Now, if no data reader matching the data writer on N1 is started in the
domain, I see no traffic going out from N1, as one would expect.
However, if I start exactly one data reader on -- for example -- N2, I
see that N1 generates UDP traffic towards ALL the hosts in the partition
(i.e., N2, N3, N4, N5), even though no OpenSplice instance is running on
N3, N4 and N5. The destination port of these messages is 53370, the port
of the best-effort channel.

Is this behaviour normal? I would have expected no traffic to be
generated towards the nodes not running OpenSplice...


Thanks,
andrea
Andrea Reale
2012-01-11 10:48:12 UTC
While writing the previous post I made a mistake when copying the
excerpt from my configuration file.
The one I am actually using is:

<Partitioning>
<GlobalPartition Address="224.0.0.42"/>
<NetworkPartitions>
<NetworkPartition Address="N2 N3 N4 N5" Connected="true"
Name="part"/>
</NetworkPartitions>
<PartitionMappings>
<PartitionMapping DCPSPartitionTopic="part.*"
NetworkPartition="part"/>
</PartitionMappings>
</Partitioning>

Sorry for the double post, and thanks again for any help you can
provide.

Regards,
Andrea
Hans van't Hag
2012-01-19 13:16:40 UTC
Hi Andrea,

Sorry for the late reply... anyhow, yes, this behavior is normal, as you
explicitly state that data sent to the logical DDS partition "part" should
be 'pushed' out to the NetworkPartition "part", which is defined as the
N2/N3/N4/N5 unicast address set.

If you have discovery enabled (which you have), there is the
optimisation that as long as there is nobody interested in the data,
OpenSplice won't even bother to send it on the wire; yet as soon as
there is one interested node, it WILL be sent to the wire following the
partition definitions as set up.

Technically it would of course be possible to optimize the algorithm
further, yet that is currently not in place in the community edition's
RT-networking service.
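
To make the contrast concrete, here is a minimal sketch (not taken from the
original posts) of the alternative discussed further down in this thread:
giving the network partition a multicast address instead of a unicast list,
so that only hosts that actually join the group receive the traffic. The
multicast address below is illustrative only:

<Partitioning>
  <GlobalPartition Address="224.0.0.42"/>
  <NetworkPartitions>
    <!-- Illustrative: hosts that never join this multicast group (for
         example hosts not running OpenSplice) should not receive the
         samples, assuming IGMP snooping is in effect on the switches. -->
    <NetworkPartition Address="239.0.0.42" Connected="true" Name="part"/>
  </NetworkPartitions>
  <PartitionMappings>
    <PartitionMapping DCPSPartitionTopic="part.*" NetworkPartition="part"/>
  </PartitionMappings>
</Partitioning>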



Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078

PrismTech is a global leader in standards-based, performance-critical
middleware. Our products enable our OEM, Systems Integrator, and End User
customers to build and optimize high-performance systems primarily for
Mil/Aero, Communications, Industrial, and Financial Markets.

Andrea Reale
2012-01-20 10:19:46 UTC
Hi Hans,

thanks for your very clear answer.
So, if I did not misunderstand your explanation, does this practically
mean that the main use cases for defining network partitions associated
with unicast addresses are those where using multicast is impossible
due to administration-related issues (e.g., multicast being filtered)?

Are there any other use cases that I am not seeing?

Thanks again for your support.
Andrea
Hans van't Hag
2012-01-20 12:48:55 UTC
Hi Andrea,



I think you have to decouple the use cases of unicast addressing from
those of networkPartitions.



- OpenSplice NetworkPartitions are a means to physically partition
the communication space (and, by means of the mappings, to relate this to
the logical partitioning of the global data space into DDS partitions).

- Unicast addressing is one of the communication methods that can
be used to get data distributed within a networkPartition (or the
'GlobalPartition' if there are no explicit networkPartitions defined).



W.r.t. unicast addressing there is still another distinct OpenSplice DDS
feature, in that we also support a dedicated "dynamic unicast-discovery"
mechanism in OpenSplice:



- Given the size and (unicast) protocol restrictions of many
large-scale/WAN systems, a discovery mechanism is required where the
scalability of the dynamic system is ensured whilst minimizing the
communication overhead of the required discovery process. For these reasons
OpenSplice DDS provides a dynamic unicast-discovery protocol where the
physical network can be overlaid with a notion of "Roles" and related
communication scopes, such that only nodes within a defined
"scope of interest" will be automatically discovered and their state
maintained in a distributed, fault-tolerant manner by the OpenSplice DDS
middleware. Other DDS vendors either rely on a protocol that requires
multicast for discovery of all DDS entities (rather than
communication nodes) or rely on a centralized service that can become a
single point of failure in the dynamic system. Finally, especially in
hierarchical systems, a scalable discovery protocol (such as in OpenSplice
DDS) actually PREVENTS "horizontal" communication between physically
connected endpoints (e.g. nodes on the same "level", yet in another "branch"
of a hierarchical system) even if, from a DDS perspective, they share
interest in the same information (topics/partitions). Without a clear
notion of (hierarchical) "role" and "scope", other DDS implementations
are likely to "blow up" the underlying platform with discovery
activities/traffic, as information will start flowing "horizontally"
between nodes that are on "the same" hierarchical level (yet belong to
different "branches"), in combination with protocols that require each
individual application-level communication endpoint to be discovered and
its state maintained (by individual heartbeats).
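
As a very rough sketch of how such a role/scope-based unicast discovery
might be expressed in the configuration, using the terms Role, Scope and
ProbeList that come up later in this thread: the element names, their exact
placement and the addresses below are assumptions for illustration, not
taken from the original posts, and should be checked against the deployment
guide:

<NetworkService name="networking">
  ...
  <Discovery enabled="true">
    <!-- Assumed: a small set of well-known nodes probed at start-up to
         bootstrap unicast discovery (addresses and list format are
         illustrative). -->
    <ProbeList>10.0.0.1,10.0.0.2</ProbeList>
    <!-- Assumed: only nodes whose declared Role matches this scope
         expression are discovered and have their state maintained. -->
    <Scope>branchA.*</Scope>
  </Discovery>
  ...
</NetworkService>

Each node would additionally declare its own Role elsewhere in its
configuration; again, the exact element name and location are assumptions
here.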





-Hans

Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078

PrismTech is a global leader in standards-based, performance-critical
middleware. Our products enable our OEM, Systems Integrator, and End User
customers to build and optimize high-performance systems primarily for
Mil/Aero, Communications, Industrial, and Financial Markets.



Andrea Reale
2012-01-20 14:27:54 UTC
Hi Hans,

thanks for your answer, and for again making things much clearer.
I find the OpenSplice feature of mapping networkPartitions to DDS
Partitions/topics very interesting as it allows a clear and controllable
isolation of the network traffic generated by semantically different
sets of data.

However, given the current implementation, I was wondering whether there
is any other reason, apart from administration/filtering constraints,
to use unicast addressing within a single networkPartition.

As for the second mechanism, i.e. dynamic discovery, I totally agree
with your considerations about the need for scalable protocols (not only
for discovery) to enable the realization of very large-scale systems,
both in terms of number of participants and of geographical dispersion.

From what I have read about dynamic discovery, it basically provides a
means of extending the 'GlobalPartition': by querying one or more nodes
in the statically configured ProbeList, it obtains the unicast addresses
of the nodes whose role matches one (or more) given scope expressions.
One thing that is not really clear to me is the following: doesn't the
fact that they are all added to the same 'GlobalPartition' prevent me
from achieving the fine-grained control over network traffic that
networkPartitions give? I will try to explain myself better with an
example.
example.

Consider, for example, the simple scenario in which a DomainParticipant
'DP1' on one node creates two publishers: one for partition 'A', and the
other for partition 'B'. For each of those publishers, a data writer is
also created; both data writers periodically write data on topic 'T'.
Now, imagine that 'DP1' discovers, through dynamic discovery, a list of
remote unicast addresses, perhaps corresponding to nodes "on the other
side" of a WAN. However, some of these nodes are only interested in the
instances of topic 'T' in partition 'A', the others in the instances of
topic 'T' in partition 'B'. Since all these addresses are in the
'GlobalPartition', isn't every data sample written by DP1 forwarded to
all of them?
Is there a way to combine the fine control granted by a mechanism such
as OpenSplice networkPartitions with the flexibility of dynamic
discovery?
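
For reference, the static side of that fine-grained control could look like
the sketch below, following the same Partitioning structure used earlier in
this thread (partition/topic names and multicast addresses are purely
illustrative; whether and how this combines with dynamically discovered
addresses is exactly the open question here):

<Partitioning>
  <GlobalPartition Address="224.0.0.42"/>
  <NetworkPartitions>
    <!-- Illustrative: separate traffic flows for DDS partitions 'A' and 'B'. -->
    <NetworkPartition Address="239.1.1.1" Connected="true" Name="npA"/>
    <NetworkPartition Address="239.1.1.2" Connected="true" Name="npB"/>
  </NetworkPartitions>
  <PartitionMappings>
    <PartitionMapping DCPSPartitionTopic="A.T" NetworkPartition="npA"/>
    <PartitionMapping DCPSPartitionTopic="B.T" NetworkPartition="npB"/>
  </PartitionMappings>
</Partitioning>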

Kind regards,
Andrea
Ravi Chandran
2012-01-20 15:43:12 UTC
Hi Hans,

Does this mean that messages published into the GDS and mapped to a
particular network partition would eventually exist physically only on
the nodes defined within that network partition, and not in the logical
GDS space of other network partitions?

--
Thanks & Regards
Ravi
Hans van't Hag
2012-01-20 16:28:25 UTC
Messages published in DDS are always published in (one or more) DDS
partitions. Such a logical DDS partition might be mapped onto an OpenSplice
NetworkPartition, which can be configured to "bound" the traffic flow:
when specifying a set of unicast addresses, the data will ONLY flow to
those nodes; or, when specifying a multicast address for a networkPartition,
some nodes might set the "Connected" parameter of that networkPartition to
"false", which basically implies that that node will never join the
multicast group, regardless of any emerging interest by applications (or
durability services) running on that node.



Now, to your question: data exists in logical DDS partitions, i.e. those
partitions into which the data has been published; this is the most
important thing to realize.

The optional usage of OpenSplice NetworkPartitions is introduced to make
better use of network resources, for example by reserving a multicast group
for distributing data that exists in one or more logical partitions.

An "edge case" would then be to use these same NetworkPartitions to
"exclude" certain nodes from the distribution, either by not adding them to
the unicast list of a NetworkPartition or by explicitly not connecting a
NetworkPartition on a specific node to the network at all. This use case
can be useful if you want to ensure that certain data never reaches a node.
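
A minimal sketch of that exclusion, using the same elements as the
configuration shown earlier in this thread (the partition name, the
DCPSPartitionTopic expression and the multicast address are illustrative
only): on the node that must never receive the data, the network partition
is declared with Connected="false", so the node never joins the
corresponding multicast group:

<Partitioning>
  <GlobalPartition Address="224.0.0.42"/>
  <NetworkPartitions>
    <!-- Illustrative: this node never joins the group for "restricted",
         so data mapped to it never reaches this node. -->
    <NetworkPartition Address="239.1.2.3" Connected="false" Name="restricted"/>
  </NetworkPartitions>
  <PartitionMappings>
    <PartitionMapping DCPSPartitionTopic="secret.*"
                      NetworkPartition="restricted"/>
  </PartitionMappings>
</Partitioning>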











Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078

PrismTech is a global leader in standards-based, performance-critical
middleware. Our products enable our OEM, Systems Integrator, and End User
customers to build and optimize high-performance systems primarily for
Mil/Aero, Communications, Industrial, and Financial Markets.