Apples and Oranges UK, Sunday, 9th January 2011
This is the third of six articles in the series Apples and Oranges comparing stormmq and Amazon SQS.
Amazon SQS is a superb product which implements a simple messaging API. One of its advantages is a straightforward approach to consuming messages: consume a message and, within a time frame, acknowledge you’ve got it. If it isn’t acknowledged (‘acked’), it goes back on the queue. stormmq, using AMQP, provides a very similar model. If you’re just occasionally pulling a message off a queue (i.e., polling for new messages) then there’s little more to say: they’re both simple and unfussy. However, there are common situations when simple acknowledgements won’t do.
Let’s take an example of an application processing a stream of data, such as a series of position changes for the Mars Rover, or a feed of corporate bond data on a Stock Exchange (e.g. price, sales volume, yield to maturity, Z score and PV01, say). A common way of sending such data is a ‘key frame’ or ‘last value’ of all the values followed by a series of messages just detailing changes – it keeps data volume low (the same technique is used for encoded video). Ideally, you’d want new data pushed to you, rather than having to go and get it every time – it’s more efficient. Both stormmq and JMS do this, but Amazon SQS doesn’t. Of course, if the volume or throughput of the stream is quite high, you might not be able to, or might not need to, acknowledge every message. But if your application fails, you’d want anything it hadn’t finished with to be put back on the queue.
That’s something that JMS can only do with heavyweight transactions, but AMQP can do with lightweight selective acknowledgements. Simply tell the server (‘ack’) the unique id of the latest message you handled, and it’ll mark as received and used every previous message sent to you.
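In AMQP 0-9-1 terms this is a cumulative acknowledgement: acking one delivery tag with the ‘multiple’ flag settles everything up to it. As a toy in-memory sketch of those semantics (the broker class here is invented for illustration, not a real client library):

```python
# Toy model of AMQP cumulative acknowledgement: acking delivery tag N
# with multiple=True settles every unacked message with tag <= N.

class ToyBroker:
    def __init__(self, messages):
        self.queue = list(messages)
        self.unacked = {}            # delivery_tag -> message body
        self.next_tag = 1

    def deliver(self):
        """Push the next message to the consumer, tracking its tag."""
        body = self.queue.pop(0)
        tag = self.next_tag
        self.next_tag += 1
        self.unacked[tag] = body
        return tag, body

    def ack(self, delivery_tag, multiple=False):
        """Settle one message, or everything up to and including the tag."""
        if multiple:
            for tag in [t for t in self.unacked if t <= delivery_tag]:
                del self.unacked[tag]
        else:
            del self.unacked[delivery_tag]

    def consumer_died(self):
        """Requeue whatever was delivered but never acknowledged."""
        requeued = [self.unacked[t] for t in sorted(self.unacked)]
        self.unacked.clear()
        self.queue = requeued + self.queue
        return requeued

broker = ToyBroker(["m1", "m2", "m3", "m4", "m5"])
for _ in range(5):
    broker.deliver()
broker.ack(3, multiple=True)         # settles m1, m2 and m3 in one call
print(broker.consumer_died())        # only ['m4', 'm5'] go back on the queue
```

One ack for a whole batch is the point: the consumer does a fraction of the round trips, yet nothing it hadn’t reached is lost.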
Managing Quality of Service
The messaging experts amongst you might be getting very worried by now. How long should stormmq’s server wait (either in time or number of messages) before assuming something bad happened? What if we don’t ack for 1000 messages and the client gets swamped? That doesn’t sound good! That’s where AMQP’s Quality of Service can be used – simply tell the server how many messages it can send before waiting for an ack. Used judiciously, this can become a very simple and effective way of having a client cache incoming messages, effectively smoothing out the profile of that stream of data, so the valuable application logic you’d write on top is fully utilised. Trying to do the same with JMS transactions will consume resources like mad and slow down the whole setup for everybody.
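The prefetch window is easiest to see in a toy model (again an invented class, not a real client – in AMQP 0-9-1 the knob itself is the prefetch count on ‘basic.qos’): the broker never has more messages in flight than the window allows, and each ack opens a slot.

```python
# Toy model of AMQP Quality of Service: the broker keeps at most
# `prefetch_count` unacknowledged messages outstanding per consumer.

class ToyQosBroker:
    def __init__(self, messages, prefetch_count):
        self.queue = list(messages)
        self.prefetch = prefetch_count
        self.unacked = []            # delivered but not yet acked

    def pump(self):
        """Deliver messages until the prefetch window is full."""
        delivered = []
        while self.queue and len(self.unacked) < self.prefetch:
            msg = self.queue.pop(0)
            self.unacked.append(msg)
            delivered.append(msg)
        return delivered

    def ack(self, msg):
        """Acking frees a slot in the window, letting more messages flow."""
        self.unacked.remove(msg)

broker = ToyQosBroker([f"m{i}" for i in range(1, 11)], prefetch_count=3)
print(broker.pump())                 # ['m1', 'm2', 'm3'] - window now full
print(broker.pump())                 # [] - the consumer hasn't acked yet
broker.ack("m1")
print(broker.pump())                 # ['m4'] - one slot opened, one delivered
```

However fast the producer runs, the consumer is never holding more than three unprocessed messages – that’s the smoothing effect described above.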
Turning it all Off
Of course, sometimes acknowledgements just plain get in the way. Imagine you’re writing an application to display the latest status of something – perhaps a monitoring app displaying log messages in near realtime, a call centre ‘calls handled’ display, or a small ‘latest stock price’ app. It doesn’t really matter if you don’t process every message – another one will be along in a second or two. Perhaps they come almost too fast – stormmq can transmit a message from creator to consumer in under 14ms. In that case, you can just turn them off with stormmq – something you just can’t do with Amazon SQS.
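The trade-off with acknowledgements switched off (AMQP’s no-ack mode) is that messages count as settled the moment they leave the broker. A toy sketch of what that means for a crashed consumer (the class is invented for illustration):

```python
# Toy model of no-ack consumption: with no_ack=True, a message is settled
# on delivery, so nothing is tracked and nothing is ever requeued.

class ToyNoAckBroker:
    def __init__(self, messages):
        self.queue = list(messages)
        self.unacked = []            # stays empty in no-ack mode

    def deliver(self, no_ack=False):
        msg = self.queue.pop(0)
        if not no_ack:
            self.unacked.append(msg)  # normal mode: track until acked
        return msg                    # no-ack mode: fire and forget

    def consumer_died(self):
        """Only tracked (unacked) messages can be requeued."""
        requeued = self.unacked
        self.unacked = []
        self.queue = requeued + self.queue
        return requeued

broker = ToyNoAckBroker(["tick1", "tick2", "tick3"])
broker.deliver(no_ack=True)
broker.deliver(no_ack=True)
print(broker.consumer_died())        # [] - the dropped ticks are simply lost
print(broker.queue)                  # ['tick3'] - only what was never sent
```

For a ‘latest stock price’ display that loss is exactly the right bargain: another tick will be along in a second or two.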
Making sure Messages Go!
It’s often the case in messaging that ensuring a message gets onto a queue matters – a sort of reverse of acknowledgements. The simple way we do this in stormmq is with transactions. Transactions in AMQP are like those more commonly found in databases, but without a lot of the complexity. They’re designed to guarantee that a message (or group of messages) definitively got onto a queue, and that an ack really happened. They exist in JMS, but are absent from Amazon SQS.
The simplest need is to make sure a message gets onto a queue. In normal AMQP, a transaction isn’t needed – as long as the connection is still open after posting a message, and no error was reported, it got there – AMQP only reports failures. However, if you’re using a mobile device, that might not be enough. A recent client we worked with was constrained to using a GPRS modem. The connection to stormmq, over TCP/IP, was initiated by the modem, but established at the mobile company’s data centre. As is the way with mobiles, the GPRS modem would often lose the signal and drop the connection – but only to the data centre. It had no way of knowing when the mobile company’s servers would kill its TCP connection to us – was it before or after the signal was restored? If signal was lost just after sending a message… well, let’s just say with mobiles, if it can happen, it will (and to some, TCP and GPRS don’t mix – something us developers have no control over). By using a transaction, though, they could determine delivery.
A more complex example might be a server that publishes several related messages – perhaps the breakdown of a bill, or instructions to a group of systems (perhaps an address change from the CRM to accounts, stock control and procurement). They either all go on the queue or nothing goes on the queue. stormmq supports that too.
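In AMQP 0-9-1 this is the ‘tx.select’ / ‘tx.commit’ pair: published messages are buffered on the channel and only reach the queue on commit. A toy sketch of that atomicity, covering both the dropped-connection case and the all-or-nothing batch (the class and message bodies are invented for illustration):

```python
# Toy model of AMQP channel transactions (tx.select / tx.commit):
# publishes are buffered and reach the queue only on commit, so a dropped
# connection before commit leaves the queue untouched.

class ToyTxChannel:
    def __init__(self):
        self.queue = []              # the broker-side queue
        self.tx_buffer = None        # None = non-transactional mode

    def tx_select(self):
        """Switch the channel into transactional mode."""
        self.tx_buffer = []

    def publish(self, msg):
        if self.tx_buffer is not None:
            self.tx_buffer.append(msg)   # held until commit
        else:
            self.queue.append(msg)       # normal immediate publish

    def tx_commit(self):
        """Atomically enqueue everything published since the last commit."""
        self.queue.extend(self.tx_buffer)
        self.tx_buffer = []

    def connection_dropped(self):
        """Uncommitted work is discarded, as on a real broker."""
        self.tx_buffer = []

# A connection dropped mid-batch (the GPRS case) enqueues nothing...
chan = ToyTxChannel()
chan.tx_select()
chan.publish("bill line 1")
chan.publish("bill line 2")
chan.connection_dropped()
print(chan.queue)                    # []

# ...while a committed batch lands as a whole.
chan.tx_select()
for line in ("bill line 1", "bill line 2", "bill line 3"):
    chan.publish(line)
chan.tx_commit()
print(chan.queue)                    # all three lines, or none at all
```

Either every line of the bill is on the queue, or none of it is – the consumer never sees a half-published batch.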