Networking

Load balancers / reverse proxies

Load balancers and reverse proxies sit between users and application servers. They control how requests are routed, add layers of security, and enable scaling by distributing traffic across multiple instances.

Load Balancers

A load balancer distributes incoming requests across multiple backend servers. This makes applications more reliable by preventing any single server from becoming overloaded, and it improves performance by spreading traffic across the pool.

Load balancing is typically combined with reverse proxying — the proxy receives requests, then decides which backend server should handle each one.

Key benefits of load balancers:

  • High availability – if one server goes down, traffic is redirected to others.

  • Scalability – more servers can be added to handle increased load.

  • Flexibility – traffic can be distributed based on rules, such as round-robin, least connections, or client IP.

  • Zero-downtime deployments – new versions of applications can be introduced gradually by routing part of the traffic to updated servers.
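As a sketch of how the high-availability point can be expressed in an NGINX upstream block: max_fails and fail_timeout enable passive health checks, and a backup server only receives traffic when the others are unavailable. The third IP here is hypothetical, added for illustration:

    upstream app_servers {
        # Mark a server as failed after 3 errors within 30 seconds
        server 192.168.1.101 max_fails=3 fail_timeout=30s;
        server 192.168.1.102 max_fails=3 fail_timeout=30s;
        # Hypothetical spare: only used when the servers above are down
        server 192.168.1.103 backup;
    }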

Example: NGINX as a Load Balancer

upstream app_servers {
    server 192.168.1.101;
    server 192.168.1.102;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Here, NGINX forwards requests for example.com to two backend servers.

Load Balancing Strategies in NGINX

NGINX supports several strategies for deciding which server gets the next request:

  • Round Robin (default): Requests are distributed evenly in rotation across all servers.

  • Least Connections: New requests go to the server with the fewest active connections. Useful when requests have varying durations.

  • IP Hash: Requests from the same client IP always go to the same server. This can be useful when session stickiness is needed without shared session storage.

  • Weighting: Servers can be given weights to send more traffic to stronger servers. Example:

upstream app_servers {
    server 192.168.1.101 weight=3;
    server 192.168.1.102 weight=1;
}

In this case, server 192.168.1.101 receives three times as much traffic as 192.168.1.102.
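For the other strategies, a single directive at the top of the upstream block switches the algorithm. A sketch, reusing the same example servers:

    upstream app_servers {
        least_conn;    # or: ip_hash;
        server 192.168.1.101;
        server 192.168.1.102;
    }

With no directive, NGINX falls back to the round-robin default.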

Alternatives

  • HAProxy – often used for high-performance load balancing.

  • Traefik – integrates well with container orchestration (Docker, Kubernetes).

  • Cloud load balancers – Azure Load Balancer, AWS ELB/ALB, and Google Cloud Load Balancer provide fully managed options.

Reverse Proxies

A reverse proxy accepts requests from clients and forwards them to one or more backend servers. The client only communicates with the proxy, which hides the backend infrastructure.

Reverse proxies are widely used because they provide multiple benefits at once:

  • Security and abstraction – backend servers are not directly exposed to the internet. The proxy can filter traffic, block malicious requests, and enforce limits.

  • SSL/TLS termination – HTTPS certificates are managed in one place, instead of on every backend server.

  • Routing flexibility – traffic can be routed to different services depending on subdomain or path. For example, api.example.com to an API, and www.example.com to a website.

  • Performance – caching, compression, and connection reuse reduce load on backend servers.

  • Central logging – all traffic goes through the proxy, making it easier to collect logs and metrics.
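The SSL/TLS termination point above might look like the following sketch. The certificate paths are hypothetical, and the backends continue to speak plain HTTP behind the proxy:

    server {
        listen 443 ssl;
        server_name example.com;

        # Hypothetical certificate paths
        ssl_certificate     /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.key;

        location / {
            proxy_pass http://app_servers;    # plain HTTP to the backends
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }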

Example: NGINX as a Reverse Proxy

server {
    listen 80;
    server_name www.itiden.se;

    # Serve the main site
    location / {
        root /var/www/html;
        index index.html index.htm;
    }

    # Forward a folder to a Node.js app
    location /proxied-folder/ {
        proxy_pass http://localhost:4000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

In this setup, the main site is served from /var/www/html, while www.itiden.se/proxied-folder/ is forwarded to a Node.js application running on port 4000.
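Subdomain-based routing works the same way: each server block matches a different server_name and proxies to its own backend. A sketch, where the backend port is an assumption:

    server {
        listen 80;
        server_name api.example.com;

        location / {
            # Hypothetical API backend port
            proxy_pass http://localhost:5000;
            proxy_set_header Host $host;
        }
    }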