I've been having a small offlist chat with Kevin about this so I
thought I'd post an update for those interested.
After a bit of investigation, I now better understand how EnMasse is
behaving in my system. I don't know if this is the expected behaviour
so maybe others can confirm my findings or suggest reasons my install
may not be happy.
My confusion came from the very simple fact that I expected EnMasse to
behave much like a print queue. I expected to be able to open as many
TCP connections as I wanted (HTTP/SOAP connections in this case, as
I'm using Fairy) and push data at EnMasse. I expected EnMasse to hoover
up all this data and spool it and then, when a rendering node was
available, pass it on to be rendered and spool the render output.
Finally, I expected EnMasse to send the spooled results back down the
waiting TCP connection. I expected all of this to be effectively
asynchronous: I could open multiple connections, throw data at EnMasse
and then wait for replies, and EnMasse would start a new render while
it was still sending back data from a previous render. However, this
doesn't seem to be the case.
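To make that mental model concrete, here is a toy Python sketch of the print-queue behaviour I was expecting. Everything here is illustrative only (the stub render function just stands in for XEP; none of these names are EnMasse's API): submissions are accepted into a spool immediately, and a small fixed pool of "render nodes" drains it, so callers never block waiting for a free node.

```python
import queue
import threading

RENDER_NODES = 2          # stand-in for the number of XEP instances
spool = queue.Queue()     # input spool: accepts jobs regardless of node state
results = {}              # job id -> rendered output

def render_node(node_id):
    # Stub for a rendering node; a real node would run XEP on the FO input.
    while True:
        job_id, fo_data = spool.get()
        if job_id is None:                    # shutdown sentinel
            spool.task_done()
            return
        results[job_id] = f"PDF({fo_data})"   # pretend to render
        spool.task_done()

workers = [threading.Thread(target=render_node, args=(i,))
           for i in range(RENDER_NODES)]
for w in workers:
    w.start()

# Ten submissions are accepted at once, even though only two nodes exist:
# this is the "spooling" behaviour I expected from EnMasse.
for i in range(10):
    spool.put((i, f"fo-{i}"))

spool.join()                  # wait for the spool to drain
for _ in workers:
    spool.put((None, None))   # stop the nodes
for w in workers:
    w.join()

print(sorted(results))        # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```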
What I have found on my system is slightly but significantly
different. Yes, EnMasse allows you to open multiple connections - but
it then refuses all data writes on those connections unless there is a
free rendering node. When a rendering node becomes available, one of
the open connections is allowed to write data and this is passed on to
the rendering node. Then the rendering takes place and the result data
is sent back. Only when this entire process is complete and the
connection terminated is a new connection allowed to write.
My misunderstanding meant I was using the more efficient blocking
writes when opening a connection and so, of course, creating a deadlock
whereby my application was waiting to write while EnMasse was waiting
to reply to a previous connection. The problem was only visible when
the read/write sizes were larger than could be held in the
system-level buffers, so small renders were never a problem :) This is
now fixed: I use the less efficient non-blocking writes (so reading
and writing are both driven by C-style 'select' event triggers) and
everything is much happier.
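For anyone hitting the same deadlock, here is a minimal sketch of the fix in Python (the socketpair peer echoing data back merely stands in for EnMasse; this is the pattern, not our actual client code). Instead of a blocking send that can wedge when the peer is also trying to send, both directions are driven from select(), so incoming data is drained while the outgoing buffer is flushed:

```python
import select
import socket

a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

payload = b"x" * 1_000_000   # larger than typical kernel socket buffers
to_send = memoryview(payload)
received = bytearray()
echo_buf = bytearray()       # what the "peer" still has to echo back

while len(received) < len(payload):
    # Only ask to write on sockets that actually have pending output.
    write_set = ([a] if len(to_send) else []) + ([b] if echo_buf else [])
    readable, writable, _ = select.select([a, b], write_set, [])
    if a in writable and len(to_send):
        n = a.send(to_send[:65536])      # non-blocking partial write
        to_send = to_send[n:]
    if b in readable:
        echo_buf += b.recv(65536)        # peer buffers our data...
    if b in writable and echo_buf:
        n = b.send(bytes(echo_buf[:65536]))  # ...and echoes it back
        del echo_buf[:n]
    if a in readable:
        received += a.recv(65536)        # drain replies while still sending

print(len(received))  # 1000000
```

A pair of blocking sendall() calls on both ends of this link would deadlock as soon as both kernel buffers filled, which is exactly the failure mode I described above.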
However, I really would like to question the behaviour of EnMasse in
this case as it seems to me that its mode of operation is not very
efficient. Firstly, not allowing transmission of queued render input
files means that the rendering engine is sitting idle while data is
being transferred. The same goes for return of rendering output.
Secondly, if a rendering engine fails for some reason, I now have
reason to suspect I would need to requeue the input data in order to
have the render complete on another node. Lastly, EnMasse is unable to
pre-heat any sort of caches and is unable to use modern
cluster-computing techniques (such as identifying 'slow' nodes and
rescheduling around them, or sending the same job to multiple nodes to
reduce 'tail' time in a batch job) to make maximum use of the
rendering nodes.
Does anyone know if this is the designed behaviour of EnMasse or
whether there are ways to change the configuration to allow for
spooling-style operation? I can't see how increasing the number of
agents will help, as each agent (I am led to believe) is there to load
and monitor a single XEP instance. Increasing the 'data-backlog' value
allows more TCP connections, but the behaviour continues to be as
described above.
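For reference, the knobs mentioned in this thread look like this in the EnMasse configuration file. The values are examples only, and I'm assuming the three options can sit side by side, which I haven't verified:

```xml
<!-- Example values only: 'agents-count' and 'log-level' come from
     Khachik's reply below, 'data-backlog' from my own config.
     I am assuming these option elements can appear together. -->
<option name="agents-count" value="4"/>
<option name="data-backlog" value="16"/>
<option name="log-level" value="all"/>
```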
thanks
Robert
On 1 Sep 2008, at 10:05, Khachik Kocharyan wrote:
> Hi Robert,
>
> you can use
> <option name="agents-count" value="INTEGER"/>
>
> The default value is the number of XEP servers in the EnMasse
> configuration file. With a greater 'agents-count', EnMasse starts
> more agents and can handle more connections.
> There is one option to control logging:
> <option name="log-level" value="'all'|'error'|'none'"/>
>
> Khachik
> http://www.renderx.net/
> http://www.renderx.com/
>
> On Aug 28, 2008, at 17:01, Robert Goldsmith wrote:
>
>> Hi all,
>>
>> We are using EnMasse with its SOAP interface, Fairy, to manage a
>> cluster of XEP instances but we are having problems with queueing
>> submitted jobs. We have our own front-end (we call it Eiocha) and
>> this does lots of pre-processing and central handling of binary
>> files etc. before then sending the xsl-fo onto EnMasse for
>> processing. However, if we try to send more SOAP requests than
>> there are instances of XEP, EnMasse effectively hangs. When we then
>> kill the SOAP connections, EnMasse then throws an exception and
>> crashes out. EnMasse does not seem to have very much control over
>> logging and letting us know what it is having problems with so we
>> can't really tell what's going on. If we throttle the connections
>> it works fine (although nothing ever shows up listed in the
>> 'submitted' section of the web status page) but if it's not going
>> to do queueing we might as well not bother with EnMasse and talk
>> directly to the XEP instances.
>>
>> Has anyone seen a similar problem? Or know how to turn on
>> additional logging?
>>
>> If we did decide to dump EnMasse and talk to XEP directly, do we
>> need to pay for an additional product or is there some sort of tcp-
>> based comms available as part of the standard XEP package (or as
>> part of the EnMasse license)? We are developers so we are not going
>> to be fazed by complex protocols (we are just lazy and would
>> prefer it if EnMasse did it for us!) :)
>>
>> Thanks in advance,
>>
>> Robert
>> ---
>> Robert Goldsmith
>> Systems Integrator
>> SP Group
>>
>>
>>
>> "Please consider the environment before printing this e-mail"
>>
>> ***********************************************************************************
>> IMPORTANT - this email and the information that it contains may be
>> confidential, legally privileged and/or protected by law. It is
>> intended solely for the use of the individual or entity to whom it
>> is addressed. If you are not the intended recipient, please notify
>> the sender immediately and do not disclose the contents to any other
>> person, use it for any purpose, or store or copy the information in
>> any medium (including by printing it out). Please also delete all
>> copies of this email and any attachments from your system and shred
>> any printed copies.
>> We do not accept any liability for losses or damages that you may
>> suffer as a result of your receipt of this email including but not
>> limited to computer service or system failure, access delays or
>> interruption, data non-delivery or mis-delivery, computer viruses or
>> other harmful components.
>> Any views expressed in this message are those of the individual
>> sender except where the sender specifically states them to be the
>> views of the SP Group.
>> Copyright in this email and any attachments belongs to the SP Group,
>> the sender or its licensors.
>> ***********************************************************************************
>>
>> -------------------
>> (*) To unsubscribe, send a message with words 'unsubscribe xep-
>> support'
>> in the body of the message to majordomo@renderx.com from the address
>> you are subscribed from.
>> (*) By using the Service, you expressly agree to these Terms of
>> Service http://www.renderx.com/terms-of-service.html
>
>
---
Robert Goldsmith
Systems Integrator
SP Group

Received on Mon Sep 1 08:57:59 2008
This archive was generated by hypermail 2.1.8 : Mon Sep 01 2008 - 08:58:00 PDT