
Multi-tier Web Application Architecture Part 2

In the first part of this article, I laid out a basic architecture for a web application, built on just two machines for simplicity. In this second part, I will address various techniques that can be used to add capacity, improve resilience to failure, or improve performance. If you want to improve performance or be able to cope with more clients, there are two basic options:

  1. Add capacity; use more hardware, otherwise known as "throwing money at the problem".
  2. Make an effort to improve the configuration.

This article focuses on the former, but choosing which approach to take is the Art.

When to add capacity or tune for performance

Since the architecture is modular, you can add capacity or optimize performance at any layer (or combination of layers), but when should you invest your time/effort/money?

The answer to this is rather beyond the scope of this article, but in short: "Not until you can demonstrate that you need to".

[Diagram: Web Application Architecture]

At which of these tiers should you add capacity? Well, you have performance metrics for each component of your architecture going into a graphing tool such as Graphite, right? That should help you decide. If you don't have those numbers, you're shooting in the dark and doing it wrong. Go set up a system to collect at least some basic performance metrics first, and then come back here.

Load Balancer: Tier 0

In part 1, I ducked the issue of load balancing with a footnote referring to round-robin DNS. Here I will present a more sophisticated (real) load-balanced architecture, using HAProxy. The proxy functionality of Nginx can be used to implement primitive load balancing, but if you want or need something more sophisticated, HAProxy is a common choice.

The simplest way to use HAProxy is Layer 4 load balancing, that is, at the transport layer. This means that requests arriving at a particular address/port to which HAProxy is bound will be proxied to one of a set of backend servers to fulfill the request.

If you're using HAProxy as packaged by your distribution vendor, you'll likely have a decent default configuration in /etc/haproxy/haproxy.cfg, in particular various default options such as timeouts, limits, etc. I'll just provide the frontend and backend configurations to match the example in my diagram.
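
For reference, the defaults stanza in a stock distribution config looks something like the sketch below; the exact values vary by distribution, so treat these as illustrative and check your own haproxy.cfg:

defaults
    mode                http    # many packaged configs default to HTTP mode; use 'mode tcp' for pure Layer 4
    log                 global
    option              httplog
    retries             3
    timeout connect     10s
    timeout client      1m
    timeout server      1m
    maxconn             3000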

frontend:

frontend main *:80
    default_backend myapp

backend:

backend myapp
    balance roundrobin
    server machineA 192.168.10.10:9000 check weight 20
    server machineB 192.168.10.11:9000 check weight 10

balance
This specifies the algorithm used to select which backend server will receive the next connection. 'roundrobin' is the simplest: it sends to each server in turn, and can optionally be weighted as shown here. 'leastconn' is an alternative that sends each connection to the backend server that is 'least connected', that is, has the fewest connections. See the HAProxy docs for details and other algorithms.

check
This specifies that health checks will be performed periodically against the server to ensure that it is available for requests. By default, these are just TCP checks.
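
For example, you can tune the check behaviour or switch to an HTTP health check with 'option httpchk'. This is just a sketch: the intervals are illustrative, and /health is a hypothetical endpoint your appservers would have to implement:

backend myapp
    balance leastconn
    # /health is a hypothetical endpoint; use a URL your appserver actually serves
    option httpchk GET /health
    server machineA 192.168.10.10:9000 check inter 5s fall 3 rise 2
    server machineB 192.168.10.11:9000 check inter 5s fall 3 rise 2

Here 'fall 3' marks a server down after three consecutive failed checks, and 'rise 2' brings it back into rotation after two successes.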

Static files/assets

One obvious improvement over the trivial setup is to send requests for dynamic content to the appservers, but to send requests for static assets such as images, stylesheets and JavaScript files to a separate HTTP server. Here's a modification to the above frontend stanza and an additional backend stanza:

frontend  main *:80
    acl url_static              path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             myapp

backend static
    mode        http
    balance     roundrobin
    server      machineA 192.168.10.10:80 check
    server      machineB 192.168.10.11:80 check

This requires suitable Nginx configuration on those machines to ensure that the static files can be found.
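
Something like the following Nginx server block on machineA and machineB would serve those assets; note that /srv/myapp/static is an assumed path, so point root at wherever your assets actually live:

server {
    listen 80;

    # /srv/myapp/static is an assumed path; adjust to your deployment
    root /srv/myapp/static;

    location ~* \.(jpg|gif|png|css|js)$ {
        expires 7d;          # allow clients to cache static assets
        access_log off;      # optional: reduce log noise for asset requests
        try_files $uri =404;
    }
}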

Adding web servers - Tier 1

This is the easiest tier of all to change: just add more machines until you're satisfied (or you run out of cash), and make sure their addresses are added to the appropriate backend stanza(s) in your HAProxy config, as shown below. Unfortunately, this tier is not likely to be your bottleneck, so again, don't bother unless you have data demonstrating that this is where you need to improve.
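
For example, adding a third static web server is a single extra line in the backend stanza (192.168.10.12 is an address invented for illustration):

backend static
    mode        http
    balance     roundrobin
    server      machineA 192.168.10.10:80 check
    server      machineB 192.168.10.11:80 check
    server      machineC 192.168.10.12:80 check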

Adding appservers - Tier 2

Here we have an assumption: your application is designed so that multiple instances can run in parallel against the same database. If that assumption does not hold, you're likely in for a world of pain when you try to achieve resilience to failure or scale beyond "low usage".

If it is true, then you likely have no problem at all; it's as easy as Tier 1 - just add more instances and ensure that any configuration stanzas that point to your appservers point to all of them.
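
Continuing the sketch from above, a third appserver (again with an invented address) is just another server line in the myapp backend:

backend myapp
    balance roundrobin
    server machineA 192.168.10.10:9000 check weight 20
    server machineB 192.168.10.11:9000 check weight 10
    server machineC 192.168.10.12:9000 check weight 10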

Separating Tier 3 - the database

Migrating the databases onto their own hardware often makes sense because database services tend to be very resource-intensive. Of course, it depends on the nature of your application, but database services often require lots of memory (attempting to keep the entire database in memory instead of on disk), lots of disk I/O (if the database is too big to cache in memory, you'll want it on the fastest disks you can justify/afford), or lots of CPU time (particularly if your application depends on stored procedures or sophisticated queries with many or complex JOINs).

Read-only slaves

If the application allows you to configure multiple databases, you can perhaps scale by creating additional machines as database slaves and ensuring that only read queries are sent to the slaves, while write queries are sent to the master(s). For example, WordPress has a plugin called HyperDB which allows this sort of configuration.
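
As a sketch of what that looks like, HyperDB's db-config.php registers each database with read/write flags; the hostnames here are invented, and the plugin's sample config documents the full set of options:

<?php
// Master: takes writes (and reads).
$wpdb->add_database(array(
    'host'     => 'db-master.example.com',   // invented hostname
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 1,
    'read'     => 1,
));

// Slave: read-only queries can land here.
$wpdb->add_database(array(
    'host'     => 'db-slave1.example.com',   // invented hostname
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 0,
    'read'     => 1,
));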