Bulky versus chatty applications is a topic that comes up a lot when we design applications for the cloud, and believe me, it's not a random topic!
When we talk about bulky versus chatty, we're essentially talking about an application's communication style. When the payloads exchanged by the application are large, we say that it is bulky. When we say that an application is chatty, we mean that it exchanges many messages.
Most conversations about bulky versus chatty are complex, but they're essential to the success of your project. Depending on the type of application that you are building and on its composition, you will need to size messages appropriately. To do so, you will need to enumerate the constraints imposed by your infrastructure (essentially bandwidth versus the cost of data transfers).
Imagine, for a second, that your Cloud Service sends out a massive amount of data to thousands of mobile devices. These devices could be watches, phones, tablets or even cars, and you need to remember that they all have different communication stacks. Some of these devices incur additional operating costs for end-users, while others can use public Wi-Fi.
If you start pushing gigabytes of data to these devices, they may not be able to do anything with it. In the worst-case scenario, they will run out of memory and the end-user will have to pay for the bandwidth.
This issue doesn't apply only to communications from your Cloud Services to their clients; it also applies to all the Roles that compose your Cloud Services. In a post about why size matters, I demonstrated that reducing the size of your messages can drastically improve the overall performance of your application. It contributes to a better end-user experience.
Essentially, by reducing the size of your messages, you reduce the time required to transfer them over the wire. You can achieve this by designing succinct messages and removing all unnecessary data from them.
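To make this concrete, here is a minimal sketch in Python (the record fields are hypothetical) showing how stripping a message down to what the consumer actually needs shrinks the serialized payload:

```python
import json

# Hypothetical full record as it sits in our data store.
full_record = {
    "id": 42,
    "status": "shipped",
    "customer": {"name": "Jane Doe", "address": "123 Main St", "notes": "x" * 500},
    "audit_trail": [{"step": i, "note": "processed"} for i in range(100)],
}

# The consumer only needs the id and the status, so that is all we send.
succinct_message = {"id": full_record["id"], "status": full_record["status"]}

bulky_size = len(json.dumps(full_record).encode("utf-8"))
succinct_size = len(json.dumps(succinct_message).encode("utf-8"))
print(f"bulky: {bulky_size} bytes, succinct: {succinct_size} bytes")
```

The same idea applies whatever the serialization format: every field you leave out is bandwidth the wire never has to carry.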
Size isn't everything. Trust me, it isn't!
If the application exchanges a moderate number of messages, the consuming Roles and Services should be able to keep up. Consequently, the application should meet its performance targets.
Workflows that are composed of many short steps might result in extremely chatty applications.
A tale from my personal experience
A few months ago, I tried to distribute a complex workflow over multiple physical machines. I thought that breaking down the workflow into small steps was the way to scale it efficiently. Having completed the modifications to the code base, I fired it up and left for the night.
The next morning, still half asleep, I fired up Visual Studio and started to go through each queue to see if the work had completed. Sadly, the application had only processed 200,000 messages and still had 16 million messages to go! I had failed…
That morning I learned the importance of striking the right balance between the amount of work accomplished by a task, the number of messages exchanged by each process and the size of each message.
In the end, after much thought and careful analysis, I drastically reduced the size of my messages and the number of messages required to complete each process. I went from 16 million messages to roughly 50,000 messages. By adjusting the amount of work performed at each step in my workflow, I also went from 16 hours of processing down to 40 minutes over 3 Extra Small Role instances!
Consider Bulkiness & Chattiness Instead of Bulky vs. Chatty
When designing a Cloud Service, it's imperative that you consider both its communication bulkiness and chattiness. Finding the right balance is all about trial and error. Keep a close eye on metrics and diagnostics, then floor it! Then kick it back a notch and observe how it reacts.
- If your application is too chatty, try to group related messages together.
- If the messages are too bulky, try to remove any unnecessary data from them. You can also try to compress messages before transferring them over the wire.
- In both cases, you can set up caching mechanisms to help limit the number of calls being made over the network.
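The first two tactics can be sketched together. This is a minimal Python illustration (the message shape and batch size are assumptions, not from the original post) of grouping many small messages into one payload and then compressing that payload before it goes over the wire:

```python
import gzip
import json

# Hypothetical stream of small messages a chatty producer would send one by one.
messages = [{"order_id": i, "status": "shipped"} for i in range(1000)]

# Chatty: 1,000 network calls, one per message.
# Batched: group related messages into a single payload -> one call.
batch_payload = json.dumps(messages).encode("utf-8")

# Bulky: if the batch is large, compress it before transferring it.
compressed = gzip.compress(batch_payload)

print(f"batch: {len(batch_payload)} bytes, compressed: {len(compressed)} bytes")

# The receiver reverses both steps to recover the original messages.
recovered = json.loads(gzip.decompress(compressed))
```

Batching trades a little latency for far fewer round trips, and compression trades a little CPU for less bandwidth; measure both before committing to either.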