Whether we are building a distributed application or a server queue, one of the biggest challenges we face is reducing the bandwidth used for communication. When broadcasting a message or collecting data from many devices, the design decisions weigh heavily on performance and determine the bottlenecks of the architecture. We'll review the basics of the different approaches to see which is best in each situation.
The centralized approach
Most services and servers use a single entry point for communication from clients, giving them a well-known address to reach the server. This entry point may or may not mask a load-balancing service that keeps individual servers from becoming congested, but it does receive all the network transactions to and from the server.
With this method the clients only deal with their own data usage, but the central entry point is flooded with all the transmissions. One server with 1000 clients, each sending 1000 messages per second, would be receiving 1 million transactions per second, and that number scales linearly with the client count. Also, unless it's confined to a private network, if the server needs to broadcast data to each of its thousand clients, it will need to send a thousand packets.
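The arithmetic above can be captured in a minimal sketch; the function names here are hypothetical, chosen just to illustrate how the load scales:

```python
# Hypothetical helpers modeling the load on a centralized entry point.
def server_inbound_load(clients: int, msgs_per_client_per_sec: int) -> int:
    """Transactions per second arriving at the single entry point."""
    return clients * msgs_per_client_per_sec

def broadcast_cost(clients: int) -> int:
    """Packets the server must send to reach every client once."""
    return clients

print(server_inbound_load(1000, 1000))  # 1000000 inbound transactions/sec
print(broadcast_cost(1000))             # 1000 packets per broadcast
```

Both costs grow linearly in the number of clients, which is exactly why the entry point becomes the bottleneck.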
But this model isn't all loss: centralization means the server knows each client's address at all times and can reach it directly. It also keeps the client itself lean, making it a great choice when storage size or client bandwidth are constraints.
The tree topology
In cases where groups of clients can message each other, a much more efficient distribution method is available: the server sends a broadcast message to just a few clients and delegates to them the responsibility of spreading it to their group. In turn, these clients delegate in the same way to the few peers they communicate with. This creates a root at the server and branching nodes at the repeater clients, forming a tree graph. From a performance standpoint, we have traded propagation time and complexity for a more even use of bandwidth.
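A minimal sketch of this delegation, assuming a fixed fan-out per node (the helper names and the fan-out of 2 are illustrative, not prescriptive):

```python
from collections import deque

def build_tree(peers: list, fanout: int) -> dict:
    """Assign each peer the children it must forward a broadcast to."""
    children = {p: [] for p in peers}
    for i, p in enumerate(peers):
        for j in range(fanout):
            c = i * fanout + j + 1
            if c < len(peers):
                children[p].append(peers[c])
    return children

def broadcast(root, children: dict):
    """Spread a message by delegation; return delivery order and hop count."""
    order, depth = [], {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        for c in children[node]:
            depth[c] = depth[node] + 1
            queue.append(c)
    return order, max(depth.values())

peers = list(range(7))
kids = build_tree(peers, fanout=2)
order, hops = broadcast(0, kids)
print(hops)  # 2 hops to reach all 7 peers; no node sends more than 2 messages
```

The trade-off is visible in the numbers: no single node transmits more than `fanout` messages, but delivery now takes a logarithmic number of hops instead of one.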
However, this structure has other advantages. In cases of work distributed among peers or workers, a complex calculation, or one that needs peer data, can be partially performed at each node. Once a partial computation finishes, it's transmitted up the chain to the parent node, aggregated with that peer's own result, and retransmitted upward, repeating the process until the root peer obtains the total. This methodology greatly reduces both the traffic to the root peer and the processing power it needs.
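The upward aggregation can be sketched as a recursive reduce over the tree; the tree shape and per-peer values here are made up for illustration:

```python
# Hypothetical tree: each node maps to the children that report to it.
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
local = {n: n + 1 for n in children}  # each peer's own partial result

def aggregate(node: int) -> int:
    """Combine the children's totals with this peer's own, then pass it up."""
    return local[node] + sum(aggregate(c) for c in children[node])

total = aggregate(0)
print(total)  # the root receives one number per child, not one per peer
```

The root only processes messages from its direct children; every other peer has already folded its subtree's data into a single value.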
The mesh pattern
Building on the calculation case, we can refine it further. If the computed data needs to be known by all the peers, the root peer must perform a broadcast every time a computation completes. If instead of assigning a root peer we treat every peer as an equal, we can create a crisscrossing pattern of messages between the peers so that each receives its neighbors' data and adds it to its own, while making sure that no piece of data is counted twice at any peer. This crisscross approach yields the quickest distribution of results to all nodes.
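A minimal sketch of this meshed exchange, tagging each contribution with its origin so no data is ever counted twice (the link layout and values are hypothetical):

```python
# Each peer starts knowing only its own contribution, keyed by origin id.
peers = {i: {i: 10 * i} for i in range(4)}
links = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]  # crisscross links

changed = True
while changed:  # exchange over the links until no peer learns anything new
    changed = False
    for a, b in links:
        for src, dst in ((a, b), (b, a)):
            for origin, value in peers[src].items():
                if origin not in peers[dst]:  # dedup: never count data twice
                    peers[dst][origin] = value
                    changed = True

totals = {p: sum(vals.values()) for p, vals in peers.items()}
print(totals)  # every peer converges on the same aggregate
```

Deduplicating by origin id is one simple way to satisfy the "no data used twice" requirement; real gossip protocols use sequence numbers or version vectors to the same effect.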
No silver bullet
As is often the case in computer architecture, there is no one-size-fits-all solution to connectivity. When deciding our approach, we must be sure of our priorities and constraints, and of the heaviest load on our network. Does traffic mainly report to a single place? Is it a top-down distribution of data? Is it a divisible, intensive computation? Do the clients or peers rely heavily on the result of a common aggregation? Knowing which model fits the problem best is a big boon to performance, reliability, and resource usage.