Software-based load balancer and high availability

What is a load balancer?

A load balancer is a piece of software, together with its configuration, that spreads incoming work over two or more processes, which may run on the same computer or on different ones. That is why, when designing a robust solution architecture, one of the most important points is taking care of load-balancing requirements between processes and nodes; in this way we provide fault tolerance and scalability to the platform.

Regarding horizontal scalability, this technique lets us grow by adding nodes with minimal changes to the software-based balancer's configuration. In this particular case, the proposal presented below is used to balance SOAP requests.
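For instance, with the Apache-based balancer configured later in this article, bringing a third node into the cluster only takes one extra line in the balancer pool (host03 and node3 are hypothetical names for the new backend):

```apache
# Scaling out: declare one additional backend in the balancer pool
BalancerMember http://host03:8989 route=node3
```

After a graceful restart of Apache, the new node starts receiving its share of the traffic; no change is needed on the existing nodes.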

The operating principle is based on configuring Apache to act as the load balancer, while Linux Heartbeat manages a virtual IP.

The virtual IP is the single point of contact used to route SOAP requests through the balancer. This address is attached to the network interface of one of the computers that are part of the cluster (this decision is made by the Heartbeat process). External systems send their requests to this floating IP exposed by the cluster.

If Heartbeat (HA) detects that the node holding the floating IP (until then considered the master) is out of service, it takes the necessary actions to attach the IP to another node, which becomes the master from that moment on. Under this scheme we always have a single point of contact (the virtual IP), whose high availability is managed by Heartbeat.
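The failover behavior described above can be sketched with a minimal Heartbeat v1 configuration. Everything here is illustrative: the interface name, node names, timings, and the address 192.168.0.100 are assumptions, not values from the platform described in this article.

```apache
# /etc/ha.d/ha.cf (same on both nodes)
logfile /var/log/ha-log
keepalive 2          # heartbeat interval, in seconds
deadtime 10          # declare a peer dead after 10 s of silence
bcast eth0           # interface used for heartbeat messages
auto_failback on     # return resources to node1 when it recovers
node node1
node node2

# /etc/ha.d/haresources (identical on both nodes)
# node1 is the preferred owner of the virtual IP 192.168.0.100
node1 IPaddr::192.168.0.100/24/eth0
```

With this in place, if node1 stops answering heartbeats, Heartbeat on node2 brings up 192.168.0.100 on its own eth0, and external clients keep using the same address.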

Once a request reaches the platform through the virtual IP, on the port declared in the Apache configuration, it is routed by the Apache proxy modules according to the rules established in the detailed configuration shown below.

As an example, the detailed configuration is as follows:

Configuration file: httpd.conf (add the following lines)

# Proxy modules for load balancing
# =======================================
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

# Virtual hosts
Include conf/extra/httpd-vhosts.conf

Configuration file: extra/httpd-vhosts.conf

NameVirtualHost *:9090

<VirtualHost *:9090>
    ServerName IntrawayWildfly
    ErrorLog "/var/log/iway/apache/lb_9090_error_log"
    CustomLog "/var/log/iway/apache/lb_9090_access_log" common

    <Proxy balancer://clusterAPPSRV>
        BalancerMember http://host01:8989 route=node1
        BalancerMember http://host02:8989 route=node2
        Order allow,deny
        Allow from all
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyRequests Off

    ProxyPass / balancer://clusterAPPSRV/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On nofailover=On
    ProxyPassReverse / http://host01:8989
    ProxyPassReverse / http://host02:8989

    RewriteEngine On
    RewriteRule ^(.*) - [E=CLIENT_IP:%{REMOTE_ADDR},L]
    RequestHeader set X-Forwarded-For %{CLIENT_IP}e
</VirtualHost>

In these settings we can see that requests arriving on port 9090 (Listen 9090 in httpd.conf and NameVirtualHost *:9090 in extra/httpd-vhosts.conf) are distributed between the two backends declared as BalancerMember http://host01:8989 route=node1 and BalancerMember http://host02:8989 route=node2 in extra/httpd-vhosts.conf.
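To make the lbmethod=byrequests policy concrete, here is a hypothetical Python sketch (not Apache's actual code): each request goes to the member that has served the fewest requests so far, which for two equally weighted, healthy members reduces to round-robin.

```python
class Balancer:
    """Toy model of mod_proxy_balancer's byrequests selection."""

    def __init__(self, members):
        # member -> number of requests routed to it so far
        self.counts = {m: 0 for m in members}

    def pick(self):
        # choose the least-used member (ties broken by declaration order)
        member = min(self.counts, key=self.counts.get)
        self.counts[member] += 1
        return member


lb = Balancer(["http://host01:8989", "http://host02:8989"])
routed = [lb.pick() for _ in range(4)]
# alternates: host01, host02, host01, host02
```

The real module also takes per-member weights (loadfactor) and health into account; sticky sessions (stickysession=JSESSIONID above) override this choice so that a session keeps hitting the node recorded in its route suffix.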

Thus, we have a software-based load balancer that distributes requests between two processes or nodes forming a cluster.

I hope that this technique is useful for everyone!
