Hi Attila,
The Chatroom tutorial is perhaps not the 'easiest' example to adapt with
your changes, as it also includes a simulated multitopic (*an optional DCPS
capability that allows a user to 'join' multiple topics that share the same
key-attributes; it is a small subset of the capabilities of the DDS
DLRL-layer, which OpenSplice DDS supports via its commercial subscriptions*)
that is used so transparently that you probably haven't noticed it.
Chapter 7.3 of the Tutorial explains how and why it is used in this example
(and is recommended reading for a better understanding of the
structure and purpose of this example).
The simulated multitopic functionality *joins* the ChatMessage and
NameService topics and writes the joined information as the NamedMessage topic.
This intermediate step is where you currently lose the data, because the multitopic
emulation logic uses the QoS policies as defined on the topics for its two
readers and its writer.
So the way to 'fix' this issue is to also set the KEEP_ALL history
on the constructed topic QoS in both Chatter.cpp and
MessageBoard.cpp:
/* Set the ReliabilityQosPolicy to RELIABLE and the HistoryQosPolicy to KEEP_ALL. */
status = participant->get_default_topic_qos(reliable_topic_qos);
checkStatus(status, "DDS::DomainParticipant::get_default_topic_qos");
reliable_topic_qos.reliability.kind = RELIABLE_RELIABILITY_QOS;
reliable_topic_qos.history.kind = KEEP_ALL_HISTORY_QOS;
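For completeness, that modified QoS then has to be passed when the topic itself is
created, just as in the original tutorial code. A rough sketch from memory (the topic
and type names below are only placeholders, the exact ones are in Chatter.cpp and
MessageBoard.cpp):
/* Create the ChatMessage topic using the RELIABLE / KEEP_ALL topic QoS
   (topic and type names are placeholders in this sketch). */
chatMessageTopic = participant->create_topic(
    "Chat_ChatMessage",
    chatMessageTypeName,
    reliable_topic_qos,
    NULL,
    ANY_STATUS);
checkHandle(chatMessageTopic.in(), "DDS::DomainParticipant::create_topic");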
Also, in MessageBoard.cpp you don't have to explicitly set the KEEP_ALL
policy on the DataReader when you just refer back to the original code where the
topic QoS is used (the DATAREADER_QOS_USE_TOPIC_QOS value makes the reader
inherit the topic's QoS settings):
/* Create a DataReader for the NamedMessage Topic (using the appropriate
QoS). */
parentReader = chatSubscriber->create_datareader(
namedMessageTopic.in(),
DATAREADER_QOS_USE_TOPIC_QOS,
NULL,
ANY_STATUS);
checkHandle(parentReader, "DDS::Subscriber::create_datareader");
Hope that (finally) sorts out your example.
*Hans van 't Hag*
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
------------------------------
*From:* developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Thursday, December 10, 2009 10:58 AM
*To:* OpenSplice DDS Developer Mailing List
*Subject:* Re: [OSPL-Dev] OSPL RELIABILITY question
Hi,
Could someone please help me on the question below?
Any help would be appreciated.
Thank you,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
On Mon, Dec 7, 2009 at 5:27 PM, Attila Balint <abalint21 at gmail.com> wrote:
Hi Hans,
Sorry that I couldn't test your suggestion sooner, but for some reason it's
still not working. I've set the HistoryQosPolicy in the writer and reader to
KEEP_ALL, but if I take out the sleep so it runs as fast as possible,
out of 10k elements only ~25 reach the other side. Could you please tell
me why? I've already read the docs three times where the HistoryQos is explained,
and I couldn't figure out what I'm doing wrong.
Thank you and regards,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
On Wed, Dec 2, 2009 at 7:06 PM, Hans van't Hag <
hans.vanthag at prismtech.com> wrote:
Hi Attila,
My fault for not explaining that HISTORY is also applicable to a DataWriter
(and the default is KEEP_LAST with history-depth 1). So if you write faster
than the network can handle, you'll start overwriting data already in the
writer's history (assuring that, when networking is ready to send another
sample, it will write the latest value and not some 'old' value).
In your use case, which is like a 'messaging' use case, you basically want a
'synchronous write' to the network, i.e. block for networking to keep up.
So what you need to do is use a KEEP_ALL history policy on the writer and
then set the resource limits to an appropriate value, for instance 100
samples per instance.
For achieving optimal throughput, it's wise to select a 'reasonable' history
so that networking, when it's ready to send the next packet, can pack
multiple samples into a single UDP fragment (of configurable size), which is
more efficient than each sample needing to pass through the UDP/IP stack individually.
As the writer now may block for history space to become available, you also
might want to set a time-out on the write operation to prevent it from
blocking indefinitely.
So assuming dwq is the data-writer-qos structure, here are the required
(extra) settings for setting up the dataWriter:
dwq.history.kind = KEEP_ALL_HISTORY_QOS;
dwq.resource_limits.max_samples_per_instance = 100;
dwq.reliability.max_blocking_time.sec = 10;
dwq.reliability.max_blocking_time.nanosec = 0;
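For context, dwq would typically be obtained from the publisher and then used when
creating the dataWriter, roughly like this (I'm paraphrasing the tutorial code from
memory, so the chatPublisher, chatMessageTopic and talker names may differ):
status = chatPublisher->get_default_datawriter_qos(dwq);
checkStatus(status, "DDS::Publisher::get_default_datawriter_qos");
dwq.reliability.kind = RELIABLE_RELIABILITY_QOS;  /* make the writer reliable as well */
/* ... apply the four extra settings listed above ... */
talker = chatPublisher->create_datawriter(
    chatMessageTopic.in(),
    dwq,
    NULL,
    ANY_STATUS);
checkHandle(talker, "DDS::Publisher::create_datawriter");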
If your reader is fast enough to keep up with the incoming flow, you might
not see a difference between KEEP_LAST and KEEP_ALL, but if you want to
assure that no incoming new sample will overwrite a previous sample before
it is actually read (or taken) by your application, you'll need to specify a
sufficient history depth (or, in the extreme, use KEEP_ALL with a proper
RESOURCE_LIMITS setting).
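To make that concrete on the reader side, a minimal sketch of such a dataReader QoS
(variable names are illustrative only, not the literal tutorial code):
status = chatSubscriber->get_default_datareader_qos(drq);
checkStatus(status, "DDS::Subscriber::get_default_datareader_qos");
drq.reliability.kind = RELIABLE_RELIABILITY_QOS;
drq.history.kind = KEEP_LAST_HISTORY_QOS;   /* keep the 'n' newest samples per instance */
drq.history.depth = 100;                    /* n = 100 in this sketch */
/* or, in the extreme:
   drq.history.kind = KEEP_ALL_HISTORY_QOS;
   drq.resource_limits.max_samples_per_instance = 100; */
reader = chatSubscriber->create_datareader(
    chatMessageTopic.in(),
    drq,
    NULL,
    ANY_STATUS);
checkHandle(reader, "DDS::Subscriber::create_datareader");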
Good luck,
Hans
*Hans van 't Hag*
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
------------------------------
*From:* developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Wednesday, December 02, 2009 5:27 PM
*To:* OpenSplice DDS Developer Mailing List
*Subject:* Re: [OSPL-Dev] OSPL RELIABILITY question
Hi Hans,
I've tried the solution you've given me using the HistoryQoS policy,
and I've even set it to Keep_All. I've modified the example files Chatter and
MessageBoard. I was expecting the following behaviour:
- I was sending 10k messages with the same key from Chatter, and I was
expecting to see all 10k messages on the MessageBoard. This is not the case.
I've tried to see what happens if I use Keep_Last, but it somehow seems that
this doesn't affect the reader at all. I've attached the modified samples.
Is this not implemented yet? Or what is the problem here?
Any help is appreciated,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
On Mon, Nov 30, 2009 at 5:57 PM, Hans van't Hag <hans.vanthag at prismtech.com>
wrote:
Hi Attila,
You're basically right about the RELIABILITY QoS policy in that, when a
dataWriter has set its reliability QoS policy to RELIABLE, the
middleware will ensure the *DELIVERY* into the dataReader's cache. Please
note the capitalization of *DELIVERY*, as that's very important.
Something that could easily go unnoticed is that, unlike typical
'messaging' middleware, DDS is much more like a *distributed database* in
the sense that a dataReader cache is organized like a database where
arriving data (samples) are inserted (in this case following successful
reliable delivery) according to their *KEY* attributes (where a *KEY* is a
list of zero or more topic-type attributes whose values uniquely identify
samples of an 'instance' of that topic).
In DDS, key-fields are identified already in the IDL file that defines the
types that are used as topics. Now, when you look at the Chat.idl code of
the chatroom tutorial, you'll notice that there's only one attribute used as
a key-field, which is the userID. This is done to separate chat messages from
multiple chatters so that they'll be stored at different locations (which would be
'rows' in a regular database and are called 'instances' in DDS terminology)
in the dataReader's cache.
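For illustration, such a key is declared in the IDL roughly as follows (a sketch of
the idea only, not the literal Chat.idl contents):
module Chat {
    struct ChatMessage {
        long   userID;   /* key: identifies the 'instance' (one chatter) */
        long   index;    /* successive samples within that instance */
        string content;
    };
#pragma keylist ChatMessage userID
};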
Now here's what's happening when you remove the 'sleep' in the Chatter: you
write the 10 samples very, very fast, and very likely so fast that the MessageBoard
application doesn't even get a chance to see them all arriving, as they are
all inserted at the same location in the dataReader's cache and therefore
*overwrite* each other upon arrival.
This is very typical of 'any' database, i.e. *new data will replace old(er)
data*. If that's unexpected, then the good news is that there's also
something like the HISTORY QoS policy (of dataReaders in this case) which
allows you to specify how many 'historical' samples should be preserved, i.e.
very much like a ring-buffer or 'queue' that will hold the 'n' newest
samples rather than just the single newest sample (which is tied to the
default HISTORY_DEPTH value of 1). You could even specify a KEEP_ALL history
policy for a dataWriter, which would imply an end-to-end frequency coupling
between publishers and subscribers; that is typically something you don't
want, as the *decoupling in space and time* is one of the driving
concepts behind the DDS specification.
So what you're experiencing is the separation of *delivery* and *storage*
of information (which is very similar to the 'real world', where you can
ask for reliable/acknowledged delivery of a letter to be mailed, yet that
doesn't imply that, once delivered, the letter will actually be read by the
recipient :-))
Hope that explains it somewhat...
Cheers,
Hans
*Hans van 't Hag*
OpenSplice DDS Product Manager
PrismTech Netherlands
Email: hans.vanthag at prismtech.com
Tel: +31742472572
Fax: +31742472571
Gsm: +31624654078
------------------------------
*From:* developer-bounces at opensplice.org [mailto:
developer-bounces at opensplice.org] *On Behalf Of *Attila Balint
*Sent:* Monday, November 30, 2009 3:45 PM
*To:* developer at opensplice.org
*Subject:* [OSPL-Dev] OSPL RELIABILITY question
Hello,
I saw the presentation you've made on OpenSplice and it caught my
attention. I've gone through most of the documentation provided in the
git repository.
I took the example from the Tutorial in C++ as a base for my wrapper
library. What I want to do is to be able to send multiple samples of an
instance, which is done in the Chatter application. I saw in the
documentation that if we set the topic reliability to RELIABLE, the DDS will
ensure through data retransmission that all the data gets to the other
side safely. I've noticed that if I take out the "sleep" instruction in both
applications, the MessageBoard application will not receive all 10 messages,
although it is started well before the Chatter app.
- I've tested the Chatter and MessageBoard applications on the same machine
where ospl is running.
Could you please tell me how, or through which settings, I can ensure that all
my data goes through from the Chatter application to the MessageBoard?
Thank you for your answer in advance,
With regards,
Attila Balint
Mobile: +4(0)740791399
E-Mail: abalint21 at gmail.com
_______________________________________________
OpenSplice DDS Developer Mailing List
Developer at opensplice.org
Subscribe / Unsubscribe http://www.opensplice.org/mailman/listinfo/developer