Discussion:
[OSPL-Dev] OSPL RELIABILITY question
Attila Balint
2009-11-30 14:45:08 UTC
Hello,

I saw the presentation you made on OpenSplice and it caught my
attention. I've gone through most of the documentation provided in the
git repository.

I took the example from the C++ Tutorial as a base for my wrapper
library. What I want to do is to be able to send multiple samples of an
instance - which is done in the Chatter application. I saw in the
documentation that if we set the topic reliability to RELIABLE, DDS will
ensure through data retransmission that all the data gets to the other
side safely. I've noticed that if I take out the "sleep" instruction in
the Chatter, the MessageBoard application will not receive all 10 messages,
although it is started well before the Chatter app.

- I've tested the Chatter and MessageBoard applications on the same machine
where the ospl is running.

Could you please tell me how, or through which settings, I can ensure that
all my data goes through from the Chatter application to the MessageBoard?

Thank you for your answer in advance,
With regards,

Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
Hans van't Hag
2009-11-30 15:57:29 UTC
Hi Attila,



You're basically right about the RELIABILITY QoS-policy, in that when a
dataWriter has set its reliability QoS policy to RELIABLE, the
middleware will ensure the *DELIVERY* into the dataReader's cache. Please
note the capitalization of *DELIVERY*, as that's very important.

Something that could easily go unnoticed is that, unlike typical
'messaging' middleware, DDS is much more like a *distributed database*,
in the sense that a dataReader cache is organized like a database where
arriving data (samples) is inserted (in this case following successful
reliable delivery) according to its *KEY* attributes (where a *KEY* is a
list of zero or more topic-type attributes whose values uniquely identify
samples of an 'instance' of that topic).

In DDS, key-fields are identified already in the IDL-file that defines the
types that are used as topics. Now, when you look at the Chat.idl code of
the chatroom tutorial, you'll notice that there's only one attribute used
as a key-field, which is the userID. This is done to separate chat messages
from multiple chatters so that they'll be stored at different locations
(these would be 'rows' in a regular database and are called 'instances' in
DDS terminology) in the dataReader's cache.

Now here's what's happening when you remove the 'sleep' in the Chatter:
you'll write the 10 samples very fast, very likely so fast that the
MessageBoard application doesn't even get a chance to see them all
arriving, as they are all inserted at the same location in the
dataReader's cache and therefore *overwrite* each other upon arrival.
This is typical of any database, i.e. *new data will replace old(er)
data*. If that's unexpected, then the good news is that there's also the
HISTORY QoS-policy (of dataReaders in this case), which allows you to
specify how many 'historical' samples should be preserved, i.e. very much
like a ring-buffer or 'queue' that will hold the 'n' newest samples rather
than just the single newest sample (which is tied to the default
HISTORY_DEPTH value of 1). You could even specify a KEEP_ALL history
policy for a dataWriter, which would imply an end-to-end frequency
coupling between publishers and subscribers; that is typically something
you don't want, as the *decoupling in space and time* is one of the
driving concepts behind the DDS specification.
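The overwrite behaviour can be sketched with a toy model in plain C++.
This is not the DDS API; ToyReaderCache and every name in it are made up
purely to illustrate "one bounded history per key value":

```cpp
#include <cstddef>
#include <deque>
#include <map>
#include <string>
#include <vector>

// Toy model of a dataReader cache: one bounded queue ("history") per
// key value ("instance"). Real DDS does this bookkeeping internally;
// this is only an illustration, not the OpenSplice API.
struct ToyReaderCache {
    explicit ToyReaderCache(std::size_t depth) : depth_(depth) {}

    // Reliable delivery: the sample always arrives, but when the
    // instance's history is full, the oldest sample is overwritten.
    void deliver(const std::string& key, int sample) {
        std::deque<int>& history = cache_[key];
        if (history.size() == depth_) history.pop_front();  // overwrite
        history.push_back(sample);
    }

    // What a (late) reader would observe for one instance.
    std::vector<int> take(const std::string& key) {
        std::deque<int>& history = cache_[key];
        std::vector<int> result(history.begin(), history.end());
        history.clear();
        return result;
    }

private:
    std::size_t depth_;
    std::map<std::string, std::deque<int>> cache_;
};
```

With a depth of 1 (the default), ten rapid writes to the same userID leave
only the last sample behind; with a depth of 10, all ten survive until read.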



So what you're experiencing is the separation of *delivery* and *storage*
of information (which is very similar to the 'real world', where you can
ask for reliable/acknowledged delivery of a letter to be mailed, yet that
doesn't imply that, once delivered, the letter will actually be read by the
recipient :-)).



Hope that explains it somewhat.



Cheers,

Hans





*Hans van 't Hag*

OpenSplice DDS Product Manager

PrismTech Netherlands

Email: hans.vanthag at prismtech.com

Tel: +31742472572

Fax: +31742472571

Gsm: +31624654078
Sveta Shasharina
2009-11-30 16:05:19 UTC
Hi Hans,

A related question: if I have a topic with many attributes, and every
change in each attribute should lead to a separate instance, should I
specify all attributes as keys, or is there a simpler way to express this?
Or maybe I should add one more attribute which changes each time the topic
is published (maybe related to the publishing time) to make sure that all
samples are saved in the dataReader's cache?

Thanks,
Sveta
_______________________________________________
OpenSplice DDS Developer Mailing List
Developer at opensplice.org
Subscribe / Unsubscribe http://www.opensplice.org/mailman/listinfo/developer
Hans van't Hag
2009-11-30 16:17:04 UTC
Hi Sveta,



I suspect that what you mean is that every change in each attribute should
lead to a new sample being published rather than implying a new instance.

It would be pretty unusual for all topic-attributes to be key-attributes,
since typically the 'key-attributes' 'just' provide the unique
identification of a sample while the non-key fields contain the actual
'information' update of the sample. For example, in a climate-control
system, each room might have a temperature sensor that publishes a sample
of a 'room-temperature' topic that has the roomID as key-field and
Temperature as non-key-field, so that each room can 'publish'
room-temperature samples periodically and the temperatures of each room
are clearly separated by belonging to different 'instances' (i.e.
key-values).
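Expressed in IDL, that could look like the sketch below (following the same
#pragma keylist convention as the tutorial's Chat.idl; the names are only
illustrative):

```idl
// One instance ("row") per room; a new sample for the same roomID
// updates that instance rather than creating a new one.
struct RoomTemperature {
    string roomID;      // key: uniquely identifies the instance
    float  temperature; // non-key: the actual information update
};
#pragma keylist RoomTemperature roomID
```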



If you don't want any samples to 'get lost' (during periods when a
dataReader is not reading data), you can either define a sufficiently large
HISTORY depth for the reader, or specify a KEEP_ALL behavior (inducing
peer-to-peer flow-control between writer and reader once the history is
exhausted, i.e. the specified RESOURCE_LIMITS are reached or memory has
been exhausted :-)). Of course you could also add a sequence number as an
additional key-field, but that's a lot like using the HISTORY QoS yet with
a lot more overhead, as you'll create large numbers of instances that all
have their own administration (instance-state, view-state, sample-state)
that you probably don't need.
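In the classic DDS C++ API, configuring a reader that way could look
roughly like the fragment below. This is a sketch only (not compiled
against OpenSplice here); the subscriber variable is assumed to exist as
in the tutorial code:

```cpp
// Sketch: keep the last 100 samples per instance instead of only 1.
DDS::DataReaderQos reader_qos;
subscriber->get_default_datareader_qos(reader_qos);
reader_qos.history.kind  = DDS::KEEP_LAST_HISTORY_QOS;
reader_qos.history.depth = 100;
// Or, to never drop samples (bounded only by RESOURCE_LIMITS):
// reader_qos.history.kind = DDS::KEEP_ALL_HISTORY_QOS;
reader_qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;
```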





Regards,

Hans









Hans van 't Hag

OpenSplice DDS Product Manager

PrismTech Netherlands

Email: hans.vanthag at prismtech.com

Tel: +31742472572

Fax: +31742472571

Gsm: +31624654078



Sveta Shasharina
2009-11-30 16:20:30 UTC
Thank you, Hans!
You understood my question correctly :-)
Sveta
Sveta Shasharina
2009-11-30 23:28:36 UTC
Hi Hans,
I think I understand you, but I found these statements a bit confusing:

"KEY is a list of zero or more topic-type attributes whose values
uniquely identify samples of an 'instance' of that topic."

"It would be pretty unusual for all topic-attributes to be
key-attributes, since typically the 'key-attributes' 'just' provide the
unique identification of a sample while the non-key fields contain the
actual 'information' update of the sample. For example, in a
climate-control system, each room might have a temperature sensor that
publishes a sample of a 'room-temperature' topic that has the roomID as
key-field and Temperature as non-key-field, so that each room can
'publish' room-temperature samples periodically and the temperatures of
each room are clearly separated by belonging to different 'instances'
(i.e. key-values)."

The confusion comes from my understanding of your example as follows:

struct RoomTemperature {
    string roomID;
    float  temperature;
};
#pragma keylist RoomTemperature roomID

Then instances of the room-temperature topic will have roomID = A,
roomID = B, etc. Each instance could have a different temperature. If
this is correct, then the key values do not identify samples, as is
stated in the quotes above.

Could you please clarify? If one wants to keep many instances on the
reader side, is there a way to identify them, or does one just iterate
from the last one?

Thanks,
Sveta
Post by Hans van't Hag
Hi Sveta,
I suspect that what you mean is that every change in each attribute
should lead to a new sample being published rather than implying a new
instance.
It would be pretty unique that all topic-attributes would be
key-attributes since typically the ?key-attributes? ?just? provide the
unique identification of a sample where the non-key-fields contain the
actual ?information? update of the sample. For example in a
climate-control system, each room might have a temperature-sensor that
publishes a sample of a ?room-temperature? topic that has the roomID
as key-field and Temperature as non-key-field so that each room can
?publish? room-temperature samples periodically where the temparatures
of each room are clearly separated by belonging to different
?instance? (i.e. key-values).
If you don?t want any samples to ?get lost? (during periods that a
dataReader is not reading data), you can either define a sufficiently
large HISTORY-DEPTH for the reader, specify a KEEP_ALL behavior
(inducing peer-to-peer flow-control between writer/reader once the
history is exhausted i.e. the specified RESOURCE_LIMITS reached or
memory has been exhausted J). Of course you could also add a
sequence-number as an additional key-field, but that?s a lot like
using the HISTORY_QoS yet with a lot more overhead as you?ll create
large amounts of instances that all have their own administration
(instance-state, view-state, sample-state) that you probably don?t need
Regards,
Hans
Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com <mailto:hans.vanthag at prismtech.com>
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
-----Original Message-----
From: developer-bounces at opensplice.org
<mailto:developer-bounces at opensplice.org>
[mailto:developer-bounces at opensplice.org
<mailto:developer-bounces at opensplice.org>] On Behalf Of Sveta Shasharina
Sent: Monday, November 30, 2009 5:05 PM
To: OpenSplice DDS Developer Mailing List
Subject: Re: [OSPL-Dev] OSPL RELIABILITY question
Hi Hans,
A related question. If I have a topic with many attributes and every
change in each
attribute should be lead to a separate instance, should I specify all
attributes
as keys or is there a simpler way to show this? Or maybe I should add one
more attribute which changes each time the topic is published
(maybe related to the publishing time) that would make sure
that all samples are saved in the dataReader's cache?
Thanks,
Sveta
Post by Hans van't Hag
Hi Attila,
You're basically right about the RELIABILITY QoS-policy, in that when a
dataWriter has set its reliability QoS policy to RELIABLE, the
middleware will ensure the *DELIVERY* into the dataReader's cache.
Please note the capitalization of *DELIVERY*, as that's very important.
Something that could easily go unnoticed is that, unlike typical
'messaging' middleware, DDS is much more like a /distributed database/
in the sense that a dataReader cache is organized like a database,
where arriving data (samples) are inserted (in this case following
successful reliable delivery) according to their *KEY* attributes
(where a *KEY* is a list of zero or more topic-type attributes
whose values uniquely identify samples of an 'instance' of that topic).
In DDS, key-fields are already identified in the IDL file that defines
the types that are used as topics. Now, when you look at the Chat.idl
code of the chatroom tutorial, you'll notice that there's only one
attribute used as a key-field, which is the userID. This is done to
separate chat messages from multiple chatters, so that they'll be
stored at different locations (these would be 'rows' in a regular
database and are called 'instances' in DDS terminology) in the
dataReader's cache. Now here's what's happening when you remove the
'sleep' in the Chatter: you write the 10 samples so fast that the
MessageBoard application doesn't even get a chance to see them all
arriving, as they are all inserted at the same location in the
dataReader's cache and therefore /overwrite/ each other upon arrival.
This is very typical of any database, i.e. /new data will replace
old(er) data/. If that's unexpected, then the good news is that there's
also the HISTORY QoS-policy (of dataReaders in this case), which allows
you to specify how many 'historical' samples should be preserved, i.e.
very much like a ring-buffer or 'queue' that will hold the 'n' newest
samples rather than just the single newest sample (which is tied to the
default HISTORY_DEPTH value of 1). You could even specify a KEEP_ALL
history policy for a dataWriter, which would imply an end-to-end
frequency coupling between publishers and subscribers; that is
typically something you don't want, as the /decoupling in space and
time/ is one of the driving concepts behind the DDS specification.
So what you're experiencing is the separation of /delivery/ and
/storage/ of information, which is very similar to the real world,
where you can ask for reliable/acknowledged delivery of a letter to be
mailed, yet that doesn't imply that, once delivered, the letter will
actually be read by the recipient :-)
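To make the overwrite behaviour concrete, here is a small toy model in
Python (not the OpenSplice API, just a hypothetical sketch of a
dataReader cache that groups samples per key value and keeps a bounded
HISTORY depth per instance):

```python
from collections import defaultdict, deque

class ReaderCache:
    """Toy model of a DDS dataReader cache: samples are grouped per key
    value (instance), and each instance keeps at most `depth` samples."""
    def __init__(self, depth=1):
        self.instances = defaultdict(lambda: deque(maxlen=depth))

    def insert(self, key, sample):
        # Reliable delivery got the sample this far; storage still
        # discards the oldest sample once `depth` is exceeded.
        self.instances[key].append(sample)

    def take(self, key):
        samples = list(self.instances[key])
        self.instances[key].clear()
        return samples

# Default HISTORY depth of 1: ten fast writes, only the newest survives.
shallow = ReaderCache(depth=1)
for i in range(10):
    shallow.insert("user1", f"msg {i}")
print(shallow.take("user1"))     # ['msg 9']

# HISTORY depth of 10: all ten samples are preserved for the instance.
deep = ReaderCache(depth=10)
for i in range(10):
    deep.insert("user1", f"msg {i}")
print(len(deep.take("user1")))   # 10
```

A slow reader therefore needs either the writer to pace itself (the
tutorial's sleep) or a deep enough reader-side history.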
Hope that explains somewhat …
Cheers,
Hans
Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
*From:* developer-bounces at opensplice.org [mailto:developer-bounces at opensplice.org] *On Behalf Of* Attila Balint
*Sent:* Monday, November 30, 2009 3:45 PM
*To:* developer at opensplice.org
*Subject:* [OSPL-Dev] OSPL RELIABILITY question
Hello,
I saw the presentation you've made on OpenSplice and it caught my
attention. I've gone through most of the documentation provided in the
git repository.
I took the example from the Tutorial in C++ as a base for my wrapper
library. What I want to do is to be able to send multiple samples of an
instance, which is done in the Chatter application. I saw in the
documentation that if we set the topic reliability to RELIABLE, the DDS
will ensure through data retransmission that all the data will get to
the other side safely. I've noticed that if I take out the "sleep"
instruction in the Chatter, the MessageBoard application will not
receive all 10 messages, although it's started well before the Chatter
app.
- I've tested the Chatter and MessageBoard applications on the same
machine where ospl is running.
Could you please tell me how, or through which settings, I can ensure
that all my data goes through from the Chatter application to the
MessageBoard.
Thank you for your answer in advance,
With regards,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
------------------------------------------------------------------------
_______________________________________________
OpenSplice DDS Developer Mailing List
Developer at opensplice.org
Subscribe / Unsubscribe http://www.opensplice.org/mailman/listinfo/developer
Hans van't Hag
2009-12-01 09:20:46 UTC
Permalink
Hi Sveta,

Each subsequent sample published by a temperature sensor in a room will
contain the (new) temperature as well as the identification of the room
to which it applies (i.e. the roomID key-value that identifies the
'room-instance' to which the samples belong).

Without the key(s), it would be impossible to relate DDS samples to the
*'outside-world objects'* they belong to, i.e. in this example to which
*room* a *temperature* would apply.

This is exactly like any (other) DBMS, where 'keys' identify rows in a
table. What is 'extra' for DDS is that 'time' (or 'HISTORY' in DDS
terminology) also plays an important role, in the sense that it's
typical that you'll publish multiple samples of an instance over time
(like publishing the room temperature every minute) and that you might
want to maintain a set of *'historical'* data instead of just replacing
an old temperature sample for a specific room with the new measurement
for that room. The HISTORY QoS-policy allows you to specify the
'depth', i.e. the number of historical samples (for each instance!) to
be maintained by the DDS middleware.

There are also DDS-introduction presentations on YouTube
(www.youtube.com/OpenSpliceTube) and SlideShare
(www.slideshare.net/Angelo.Corsaro) that might help in understanding
the data-modeling and QoS concepts as applicable to the OMG-DDS
standard and/or our implementation of it.

In http://www.slideshare.net/Angelo.Corsaro/a-gentle-introduction-to-opensplice-dds
the concept of samples/keys is also explained in slides 15 to 23.
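As a toy illustration of 'keys identify rows, history keeps time' (a
hypothetical plain-Python sketch with illustrative names, not the DDS
API):

```python
from collections import deque

DEPTH = 3   # HISTORY depth: historical samples kept per instance
cache = {}  # the reader cache modeled as a table keyed by roomID

def on_sample(roomID, temperature):
    # Each key value gets its own "row" (instance), holding a bounded
    # history of that room's most recent temperature samples.
    cache.setdefault(roomID, deque(maxlen=DEPTH)).append(temperature)

# Two rooms publish periodically; their samples never collide, because
# they belong to different instances (different key values).
for t in [20.0, 20.5, 21.0, 21.5]:
    on_sample("roomA", t)
on_sample("roomB", 18.0)

print(sorted(cache))           # ['roomA', 'roomB']  (one instance per key)
print(list(cache["roomA"]))    # [20.5, 21.0, 21.5]  (last DEPTH samples)
```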





Regards,

Hans







Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078



-----Original Message-----
From: developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] On Behalf Of Sveta Shasharina
Sent: Tuesday, December 01, 2009 12:29 AM
To: OpenSplice DDS Developer Mailing List
Subject: Re: [OSPL-Dev] OSPL RELIABILITY question



Hi Hans,

I think I understand you, but found these statements a bit confusing:

"KEY is a list of zero or more topic-type attributes whose values
uniquely identify samples of an 'instance' of that topic."

"It would be pretty unique that all topic-attributes would be
key-attributes, since typically the 'key-attributes' 'just' provide the
unique identification of a sample where the non-key-fields contain the
actual 'information' update of the sample. For example, in a
climate-control system each room might have a temperature sensor that
publishes a sample of a 'room-temperature' topic that has the roomID as
key-field and Temperature as non-key-field, so that each room can
'publish' room-temperature samples periodically where the temperatures
of each room are clearly separated by belonging to different
'instances' (i.e. key-values)."

The confusion comes from my understanding of your example as follows:

struct RoomTemperature {
    string roomID;
    float temperature;
};
#pragma keylist RoomTemperature roomID

Then instances of the room-temperature topic will have roomID = A,
roomID = B, etc. Each instance could have a different temperature. If
this is correct, then the key values do not identify samples as is
stated in the quotes above.

Could you please clarify? If one wants to keep many instances on the
reader side, is there a way to identify them, or does one just iterate
from the last one?

Thanks,
Sveta
Post by Hans van't Hag
Hi Sveta,
I suspect that what you mean is that every change in each attribute
should lead to a new sample being published, rather than implying a new
instance.
It would be pretty unique that all topic-attributes would be
key-attributes, since typically the 'key-attributes' 'just' provide the
unique identification of a sample where the non-key-fields contain the
actual 'information' update of the sample. For example, in a
climate-control system each room might have a temperature sensor that
publishes a sample of a 'room-temperature' topic that has the roomID as
key-field and Temperature as non-key-field, so that each room can
'publish' room-temperature samples periodically where the temperatures
of each room are clearly separated by belonging to different
'instances' (i.e. key-values).
If you don't want any samples to 'get lost' (during periods when a
dataReader is not reading data), you can either define a sufficiently
large HISTORY depth for the reader, or specify a KEEP_ALL behavior
(inducing peer-to-peer flow-control between writer and reader once the
history is exhausted, i.e. the specified RESOURCE_LIMITS are reached or
memory has been exhausted :-) ). Of course you could also add a
sequence number as an additional key-field, but that's a lot like using
the HISTORY QoS yet with a lot more overhead, as you'll create large
amounts of instances that all have their own administration
(instance-state, view-state, sample-state) that you probably don't need.
Regards,
Hans
Hans van 't Hag
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
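The tradeoff in the quoted reply (reader-side history versus a sequence
number in the key) can be sketched with a toy Python model; the names
are illustrative, not DDS API:

```python
from collections import deque

def deliver(cache, key, sample, depth):
    # Insert a sample into the per-instance history for `key`.
    cache.setdefault(key, deque(maxlen=depth)).append(sample)

# Option 1: deep reader history -- one instance, ten samples kept.
history_cache = {}
for seq in range(10):
    deliver(history_cache, "roomA", (seq, 21.5), depth=10)
print(len(history_cache))   # 1 instance to administer

# Option 2: sequence number as extra key -- ten instances of one sample
# each, every one carrying its own instance/view/sample-state bookkeeping.
keyed_cache = {}
for seq in range(10):
    deliver(keyed_cache, ("roomA", seq), (seq, 21.5), depth=1)
print(len(keyed_cache))     # 10
```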
Sveta Shasharina
2009-12-01 13:57:35 UTC
Permalink
Dear Hans,
Thank you for your detailed response! I understood you the first time, but
was just confused by "identify sample" (probably my Russian language
interfering).
Sveta
Attila Balint
2009-11-30 22:25:58 UTC
Permalink
Thank you Hans,
for taking the time to write this thorough explanation. It helped a lot
in understanding how OpenSplice DDS works.

I understand now, and I think I know how to approach the 'problem'.

Thank you again.
With regards,

Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com


Attila Balint
2009-12-02 16:27:01 UTC
Permalink
Hi Hans,

I've tried the solution you gave me, using the History QoS policy, and I've
even set it to KEEP_ALL. I've modified the example files Chatter and
MessageBoard. I was expecting the following behaviour:

- I was sending 10k messages from Chatter with the same key, and I was
expecting to see all 10k messages on the MessageBoard. This is not the case.
I've also tried KEEP_LAST, but it somehow seems that this doesn't affect the
reader at all. I've attached the modified samples.

Is this not implemented yet? Or what is the problem here?
Any help is appreciated,

Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com


-------------- next part --------------
A non-text attachment was scrubbed...
Name: SACPP_abalint.7z
Type: application/x-7z-compressed
Size: 24582 bytes
Desc: not available
URL: <http://dev.opensplice.org/pipermail/developer/attachments/20091202/e999de79/attachment.7z>
Hans van't Hag
2009-12-02 17:06:51 UTC
Permalink
Hi Attila,



My fault for not explaining that HISTORY is also applicable to a DataWriter
(and the default is KEEP_LAST with history-depth 1). So if you write faster
than the network can 'handle', you'll start overwriting data already in the
writer's history (ensuring that, when networking is ready to send another
sample, it will send the latest value and not some 'old' value).

In your use case, which is like a 'messaging' use case, you basically want a
'synchronous write' to the network, i.e. block for networking to keep up.



So what you need to do is use a KEEP_ALL history policy on the writer and
then set the resource limits to an appropriate value, for instance 100
samples per instance.

For optimal throughput, it's wise to select a 'reasonable' history so that
networking, when it's ready to send the next 'packet', can pack multiple
samples into a single UDP fragment (of configurable size), which is more
efficient than each sample passing through the UDP/IP stack individually.



As the writer may now block for history space to become available, you also
might want to set a time-out on the write operation to prevent it from
blocking indefinitely.

So, assuming dwq is the DataWriterQos structure, here are the required
(extra) settings for setting up the dataWriter:



dwq.history.kind = KEEP_ALL_HISTORY_QOS;
dwq.resource_limits.max_samples_per_instance = 100;
dwq.reliability.max_blocking_time.sec = 10;
dwq.reliability.max_blocking_time.nanosec = 0;


If your reader is fast enough to keep up with the incoming flow, you might
not see a difference between KEEP_LAST and KEEP_ALL, but if you want to
ensure that no incoming sample overwrites a previous sample before it is
actually read (or taken) by your application, you'll need to specify a
sufficient history depth (or, in extremis, use KEEP_ALL with a proper
RESOURCE_LIMITS setting).



Good luck,

Hans







*Hans van 't Hag*
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
Attila Balint
2009-12-07 15:27:01 UTC
Permalink
Hi Hans,

Sorry that I couldn't test your suggestion sooner, but for some reason it's
still not working. I've set the History QoS policy in the writer and the
reader to KEEP_ALL, but if I take out the sleep so it runs as fast as
possible, only ~25 of the 10k elements reach the other side. Could you
please tell me why? I've already read the docs three times where the History
QoS is explained, and I couldn't figure out what I'm doing wrong.

Thank you and regards,

Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com


-------------- next part --------------
A non-text attachment was scrubbed...
Name: Chatter.cpp
Type: text/x-c++src
Size: 11574 bytes
Desc: not available
URL: <http://dev.opensplice.org/pipermail/developer/attachments/20091207/775eba74/attachment.cpp>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MessageBoard.cpp
Type: text/x-c++src
Size: 11302 bytes
Desc: not available
URL: <http://dev.opensplice.org/pipermail/developer/attachments/20091207/775eba74/attachment-0001.cpp>
Attila Balint
2009-12-10 09:57:34 UTC
Permalink
Hi,

Could someone please help me with the question below?

Any help would be appreciated.
Thank you,

Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
Post by Attila Balint
Hi Hans,
Sorry that I couldn't test your suggestion sooner, but for some reason it's
still not working. I've set the History QoS policy in the writer and the
reader to KEEP_ALL, but if I take out the sleep so it runs as fast as
possible, only ~25 of the 10k elements reach the other side. Could you
please tell me why? I've already read the docs three times where the History
QoS is explained, and I couldn't figure out what I'm doing wrong.
Thank you and regards,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
On Wed, Dec 2, 2009 at 7:06 PM, Hans van't Hag <hans.vanthag at prismtech.com
Post by Hans van't Hag
Hi Attila,
My fault for not explaining that HISTORY is also applicable to a
DataWriter (and the default is KEEP_LAST with history-depth 1). So if you
write fast than the network can ?handle?, you?ll start overwriting data
already in the writer?s history (assuring that ?when? networking is ready to
send another sample, it will write the latest value and not some ?old?
value).
In your usecase which is like a ?messaging-usecase? you basically want a
?synchronous write? to the network i.e. block for networking to keep-up.
So what you need to do is to use a KEEP_ALL history-policy on the writer
and then set the resource-limits to an appropriate value to for instance 100
samples per instance.
For achieving optimal throughput, its wise to select a ?reasonable?
history so that networking, when its ready to send the next ?packet?, can
?pack? multiple samples in a single UDP-fragment (of configurable size)
which is more efficient than each sample needing to pass through the UDP/IP
stack.
As the writer now may block for history-space to become available, you
also might want to set a time-out on the write-operation to prevent it from
blocking indefinitely.
So assuming dwq is the data-writer-qos structure, here are the required
dwq.history.kind = KEEP_ALL_HISTORY_QOS;
dwq.resource_limits.max_samples_per_instance = 100;
dwq.reliability.max_blocking_time.sec = 10;
dwq.reliability.max_blocking_time.nanosec =0;
If you reader is fast enough to ?keep-up? with the incoming flow, you
might not see a difference between KEEP_LAST and KEEP_ALL, but if you want
to assure that no incoming new sample will overwrite a previous sample
before its actually read (or taken) by your application, you?ll need to
specify a sufficient history-depth (or in extrema use KEEP_ALL with proper
RESOURCE_LIMITS setting).
Good luck,
Hans
*Hans van 't Hag*
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
------------------------------
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Wednesday, December 02, 2009 5:27 PM
*To:* OpenSplice DDS Developer Mailing List
*Subject:* Re: [OSPL-Dev] OSPL RELIABILITY question
Hi Hans,
I've tried the solution you've given me with using the HistoryQoS Policy
and I've even set it to Keep_All. I've modified the example file Chatter and
MessageBoard. I was expecting the following behaviour.
- I was sending from Chatter 10k message with the same key. I was
expecting to see all 10k messages on the Messageboard. This is not the case.
I've tried to see what happens if I put Keep_Last but it somehow seems that
this doesn't affect the reader at all. I've attached the modified samples.
Is this not implemented yet? Or what is the problem here?
Any help is appreciated,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
On Mon, Nov 30, 2009 at 5:57 PM, Hans van't Hag <
Hi Attila,
You?re basically right about the RELIABILITY QoS-policy in that when a
dataWriter has set his reliability QoS policy to RELIABLE that the
middleware will ensure the *DELIVERY* into the dataReader?s cache. Please
note the capitalization of ?*DELIVERY?* as that?s very important.
Something that could go by unnoticed easily is that unlike typical
?messaging? middleware, DDS is much more like a *distributed database* in
the sense that a dataReader cache is organized like a database where
arriving data (samples) are inserted (in this case following successful
reliable delivery) according to their *KEY* attributes (where a *KEY* is
a list of zero or more topic-type attributes who?s values uniquely
identify samples of an ?instance? of that topic).
In DDS, key-fields are identified already in the IDL file that defines the
types that are used as topics. Now, when you look at the Chat.idl code of
the chatroom tutorial, you'll notice that there's only one attribute used
as a key-field, which is the userID. This is done to separate
chat messages from multiple chatters so that they'll be stored at different
locations (which would be "rows" in a regular database and are called
"instances" in DDS terminology) in the dataReader's cache.
Now here's what's happening when you remove the "sleep" in the Chatter: you
write the 10 samples so fast that the MessageBoard application doesn't even
get a chance to see them all arriving, as they are all inserted at the same
location in the dataReader's cache and therefore *overwrite* each other upon
arrival. This is very typical of any database, i.e. *new data will replace
old(er) data*. If that's unexpected, then the good news is that there's also
the HISTORY QoS-policy (of dataReaders in this case), which allows you to
specify how many "historical" samples should be preserved, i.e. very much
like a ring-buffer or "queue" that will hold the n newest samples rather
than just the single newest sample (which is tied to the default
history depth value of 1). You could even specify a KEEP_ALL history
policy for a dataWriter, which would imply an end-to-end frequency coupling
between publishers and subscribers; that is typically something you don't
want, as the *decoupling in space and time* is one of the driving
concepts behind the DDS specification.
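The per-instance overwrite behaviour described above can be sketched with a
small standalone model (plain C++, not the OpenSplice API; the names
ReaderCacheModel, insert and depth are invented here for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <map>
#include <string>

// Toy model of a dataReader cache: arriving samples are grouped per key
// value (the "instance"), and each instance keeps at most `depth`
// samples -- the KEEP_LAST semantics. With the default depth of 1, fast
// writes to one key simply overwrite each other before they are read.
struct ReaderCacheModel {
    std::size_t depth;                               // HISTORY depth per instance
    std::map<long, std::deque<std::string> > cache;  // key (userID) -> samples

    explicit ReaderCacheModel(std::size_t d) : depth(d) {}

    void insert(long userID, const std::string& msg) {
        std::deque<std::string>& inst = cache[userID];
        inst.push_back(msg);
        if (inst.size() > depth)
            inst.pop_front();                        // newest replaces oldest
    }
};
```

With depth 1 only the last of 10 same-key samples survives; with depth 10
all of them are retained, which is exactly the effect of raising the
reader's history depth.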
So what you're experiencing is the separation of *delivery* and *storage*
of information (which is very similar to the "real world", where
you can ask for reliable/acknowledged delivery of a letter to be mailed, yet
that doesn't imply that, once delivered, the letter will actually be read by
the recipient :-))
Hope that explains it somewhat.
Cheers,
Hans
*Hans van 't Hag*
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
------------------------------
*From:* developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Monday, November 30, 2009 3:45 PM
*To:* developer at opensplice.org
*Subject:* [OSPL-Dev] OSPL RELIABILITY question
Hello,
I saw the presentation which you've made on OpenSplice and it caught my
attention. I've gone through most of the documentation provided in the
git repository.
I've taken the example from the Tutorial in C++ as a base for my wrapper
library. What I want to do is to be able to send multiple samples of an
instance, which is done in the Chatter application. I've seen in the
documentation that if we set the topic reliability to RELIABLE, DDS will
ensure through data retransmission that all the data gets to the other
side safely. I've noticed that if I take out the "sleep" instruction in
Chatter, the MessageBoard application will not receive all 10 messages,
although it's started well before the Chatter app.
- I've tested the Chatter and MessageBoard application on the same machine
where the ospl is running.
Could you please tell me how, or through which settings, I can ensure that
all my data goes through from the Chatter application to the MessageBoard.
Thank you for your answer in advance,
With regards,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
_______________________________________________
OpenSplice DDS Developer Mailing List
Developer at opensplice.org
Subscribe / Unsubscribe
http://www.opensplice.org/mailman/listinfo/developer
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Chatter.cpp
Type: text/x-c++src
Size: 11574 bytes
Desc: not available
URL: <http://dev.opensplice.org/pipermail/developer/attachments/20091210/51aed8a3/attachment.cpp>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MessageBoard.cpp
Type: text/x-c++src
Size: 11302 bytes
Desc: not available
URL: <http://dev.opensplice.org/pipermail/developer/attachments/20091210/51aed8a3/attachment-0001.cpp>
Hans van't Hag
2009-12-10 12:31:19 UTC
Permalink
Hi Attila,



The Chatroom tutorial is perhaps not the easiest example to adapt with
your changes, as it also includes a simulated multitopic (*an optional DCPS
capability that allows a user to "join" multiple topics that share the same
key-attributes, which is a small subset of the capabilities of the DDS
DLRL layer that OpenSplice DDS supports via its commercial subscriptions*)
that is used so transparently that you probably haven't noticed it.



Chapter 7.3 of the Tutorial explains how/why this is used in this example
(and is perhaps advisable to read for a better understanding of the
structure/purpose of this example).



The simulated multitopic functionality *joins* the ChatMessage and
NameService topics and writes the "joined information" as the NamedMessage
topic.
This "intermediate step" is where you now lose the data, as the multitopic
emulation logic uses the QoS-policies as defined on the topics for its 2
readers and writer.



So the thing to do to fix this issue is to also set the KEEP_ALL history
on the constructed topic QoS policy in both Chatter.cpp and
MessageBoard.cpp:



/* Set the ReliabilityQosPolicy to RELIABLE and the HistoryQosPolicy to KEEP_ALL. */
status = participant->get_default_topic_qos(reliable_topic_qos);
checkStatus(status, "DDS::DomainParticipant::get_default_topic_qos");
reliable_topic_qos.reliability.kind = RELIABLE_RELIABILITY_QOS;
reliable_topic_qos.history.kind = KEEP_ALL_HISTORY_QOS;



Also, in MessageBoard.cpp you don't have to explicitly set the KEEP_ALL
policy when you just refer back to the original code where the topic QoS
is used:



/* Create a DataReader for the NamedMessage Topic (using the appropriate QoS). */
parentReader = chatSubscriber->create_datareader(
    namedMessageTopic.in(),
    DATAREADER_QOS_USE_TOPIC_QOS,
    NULL,
    ANY_STATUS);
checkHandle(parentReader, "DDS::Subscriber::create_datareader");



Hope that (finally) sorts out your example.



*Hans van 't Hag*

OpenSplice DDS Product Manager

PrismTech Netherlands

Email: hans.vanthag at prismtech.com

Tel: +31742472572

Fax: +31742472571

Gsm: +31624654078
------------------------------

*From:* developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Thursday, December 10, 2009 10:58 AM
*To:* OpenSplice DDS Developer Mailing List
*Subject:* Re: [OSPL-Dev] OSPL RELIABILITY question



Hi,



Could someone please help me on the question below?



Any help would be appreciated.

Thank you,


Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com

On Mon, Dec 7, 2009 at 5:27 PM, Attila Balint <abalint21 at gmail.com> wrote:

Hi Hans,



Sorry that I couldn't test your suggestion sooner, but for some reason it's
still not working. I've set the HistoryQosPolicy in the writer and reader to
Keep_All, but if I take out the sleep so it runs as fast as possible,
only ~25 of the 10k elements reach the other side. Could you please tell
me why? I've read the docs 3 times already where the HistoryQos is explained,
and I couldn't figure out what I'm doing wrong.



Thank you and regards,


Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com

On Wed, Dec 2, 2009 at 7:06 PM, Hans van't Hag <
hans.vanthag at prismtech.com> wrote:

Hi Attila,



My fault for not explaining that HISTORY is also applicable to a DataWriter
(and the default is KEEP_LAST with history-depth 1). So if you write faster
than the network can handle, you'll start overwriting data already in the
writer's history (assuring that, when networking is ready to send another
sample, it will write the latest value and not some old value).
In your use case, which is like a messaging use case, you basically want a
synchronous write to the network, i.e. block for networking to keep up.


So what you need to do is use a KEEP_ALL history policy on the writer and
then set the resource limits to an appropriate value, for instance 100
samples per instance.
For achieving optimal throughput, it's wise to select a reasonable history
so that networking, when it's ready to send the next packet, can pack
multiple samples into a single UDP fragment (of configurable size), which is
more efficient than each sample passing through the UDP/IP stack on its own.



As the writer now may block for history space to become available, you also
might want to set a time-out on the write operation to prevent it from
blocking indefinitely.
So, assuming dwq is the data-writer QoS structure, here are the required
(extra) settings for setting up the dataWriter:



dwq.history.kind = KEEP_ALL_HISTORY_QOS;
dwq.resource_limits.max_samples_per_instance = 100;
dwq.reliability.max_blocking_time.sec = 10;
dwq.reliability.max_blocking_time.nanosec = 0;



If your reader is fast enough to keep up with the incoming flow, you might
not see a difference between KEEP_LAST and KEEP_ALL, but if you want to
assure that no incoming new sample will overwrite a previous sample before
it's actually read (or taken) by your application, you'll need to specify a
sufficient history-depth (or, in the extreme, use KEEP_ALL with a proper
RESOURCE_LIMITS setting).
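The blocking-write behaviour can be made concrete with a small standalone
model (plain C++, not the OpenSplice API; the names WriterHistoryModel,
pending and drain are invented for illustration). A real writer would block
for up to max_blocking_time rather than refuse immediately:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Toy model of a KEEP_ALL dataWriter history bounded by RESOURCE_LIMITS:
// at most `max_samples_per_instance` samples may be pending; a further
// write is refused until the network drains the history.
enum WriteResult { WRITE_OK, WRITE_TIMEOUT };

struct WriterHistoryModel {
    std::size_t max_samples_per_instance;
    std::deque<int> pending;  // samples not yet drained by the network

    explicit WriterHistoryModel(std::size_t max)
        : max_samples_per_instance(max) {}

    WriteResult write(int sample) {
        if (pending.size() >= max_samples_per_instance)
            return WRITE_TIMEOUT;  // history full: writer cannot accept more
        pending.push_back(sample);
        return WRITE_OK;
    }

    void drain() {  // networking sends one sample, freeing history space
        if (!pending.empty()) pending.pop_front();
    }
};
```

Writing into a full history fails until drain() frees a slot, which mirrors
how the real writer's flow is throttled to the pace of the network.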



Good luck,

Hans







*Hans van 't Hag*

OpenSplice DDS Product Manager

PrismTech Netherlands

Email: hans.vanthag at prismtech.com

Tel: +31742472572

Fax: +31742472571

Gsm: +31624654078
------------------------------

*From:* developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Wednesday, December 02, 2009 5:27 PM
*To:* OpenSplice DDS Developer Mailing List
*Subject:* Re: [OSPL-Dev] OSPL RELIABILITY question



Hi Hans,



I've tried the solution you've given me, using the HistoryQoS policy,
and I've even set it to Keep_All. I've modified the example files Chatter and
MessageBoard. I was expecting the following behaviour:

- I was sending 10k messages from Chatter with the same key. I was
expecting to see all 10k messages on the MessageBoard. This is not the case.
I've tried to see what happens if I use Keep_Last, but it seems that
this doesn't affect the reader at all. I've attached the modified samples.

Is this not implemented yet, or what is the problem here?

Any help is appreciated,


Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com

Randy Groves
2012-02-21 22:02:45 UTC
Permalink
I didn't realize that I hadn't signed up for the dev list on this account.
Re-posting so this will be sent to the list.

-randy