
AllGoodBits.org


Multi-tier Web Application Architecture Part 1

I was recently asked to describe an architecture for a web application (PHP/MySQL) with some attempt to provide resilience to failure and with some options for scaling/additional capacity.

There are 3 basic services involved, and I'm going to consider them each as a separate logical tier of the architecture:

  1. HTTP server acting as a reverse proxy for:
  2. Application server which connects to:
  3. Database server
[Diagram: Web Application Architecture]

In part 1 of this article I'm only using two machines, each of which serves all architectural tiers. Part 2 will address separating the tiers onto different machines.

Two machines

The key to achieving any resilience to failure is to have more than one instance of each service, so this setup satisfies that requirement in a minimal way. However, I like to keep services more separated than two machines allow, so I consider this only a starting point.

This approach uses two identical machines, both running all three tiers: MachineA is 192.168.10.10 and MachineB is 192.168.10.11. Your application code should be installed in /var/www/myapp.example.com/docroot/ and must be readable by the nginx user. You may need to pay attention to permissions and/or SELinux settings, but that's outside the current scope.

Nginx

Nginx is configured with an upstream directive in the http context, containing two server directives. Nginx proxies requests to the local appserver via a Unix domain socket unless that service is unavailable, in which case it proxies the request to the other machine's appserver via a TCP socket.

MachineA will have /etc/nginx/conf.d/myapp.conf:

upstream myapp {
    server unix:/var/run/php-fpm/myapp.sock;
    server machineB.example.com:9000 backup;
}

server {
    server_name     myapp.example.com;
    root            /var/www/myapp.example.com/docroot;
    index           index.php index.html index.htm;

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   myapp;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /var/www/myapp.example.com/docroot$fastcgi_script_name;
        fastcgi_param  PATH_INFO $fastcgi_script_name;
    }
}

MachineB will have an identical Nginx config, except that the second server directive in the upstream stanza will be server machineA.example.com:9000 backup.
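For concreteness, the upstream block in MachineB's /etc/nginx/conf.d/myapp.conf would look like this:

upstream myapp {
    server unix:/var/run/php-fpm/myapp.sock;
    server machineA.example.com:9000 backup;
}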

PHP-FPM

Both machines can have an identical configuration for php-fpm, /etc/php-fpm.d/myapp.conf. Note that a pool accepts only one listen directive, so the Unix socket and the TCP port need separate pools (each pool also requires the usual pm process-manager settings, omitted here for brevity):

[myapp]
listen = /var/run/php-fpm/myapp.sock
user = nginx
group = nginx

[myapp-tcp]
listen = 9000
user = nginx
group = nginx

MySQL

Both machines have almost identical MySQL (or MariaDB) configuration, set up as a multi-master replicated pair. The only differences ensure that each server treats the other as its replication master: the 'server-id' directive must differ between the machines, as must 'auto_increment_offset'; both machines should also set 'auto_increment_increment = 2' so the two masters never generate colliding auto-increment values. Here are example fragments from /etc/my.cnf.

MachineA:

server-id                = 10
auto_increment_increment = 2
auto_increment_offset    = 1
log-bin                  = mysql-bin
master-host              = MachineB.example.com

MachineB:

server-id                = 11
auto_increment_increment = 2
auto_increment_offset    = 2
log-bin                  = mysql-bin
master-host              = MachineA.example.com

Documentation describing the details of configuring multi-master MySQL replication is widely available on the web, including my own article. Note that the master-host directive was removed in MySQL 5.5; on current versions the master is set at runtime with CHANGE MASTER TO instead. A more modern approach would be to create a synchronously replicating cluster using MariaDB/Galera or Percona XtraDB Cluster.
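On MySQL versions without the master-host directive, the peer is configured at runtime instead. A hedged sketch for MachineA — the repl user, its password, and the binary log coordinates are placeholder assumptions (create the user with GRANT REPLICATION SLAVE first, and read the real coordinates from SHOW MASTER STATUS on the peer):

CHANGE MASTER TO
    MASTER_HOST='MachineB.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;

MachineB runs the same statement with MASTER_HOST='MachineA.example.com'.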

Iptables Firewall

Nginx

Just the ports you need, presumably TCP80/443:

-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

PHP-FPM

Open the port only from the other machine because local connections will come over the unix domain socket.

MachineA:

-A INPUT --src 192.168.10.11 -p tcp --dport 9000 -j ACCEPT

MachineB:

-A INPUT --src 192.168.10.10 -p tcp --dport 9000 -j ACCEPT

MySQL

Each mysqld should be able to receive TCP connections from both our machines.

MachineA:

-A INPUT --src 127.0.0.0/8 -p tcp --dport 3306 -j ACCEPT
-A INPUT --src 192.168.10.11/32 -p tcp --dport 3306 -j ACCEPT

MachineB:

-A INPUT --src 127.0.0.0/8 -p tcp --dport 3306 -j ACCEPT
-A INPUT --src 192.168.10.10/32 -p tcp --dport 3306 -j ACCEPT

Some assumptions/notes:

  • The application can be configured to make database queries to a secondary server if the primary is unavailable. If that assumption doesn't hold for the application in question, you'll need to set up a separate load balancer/proxy service and point the appservers at it.
  • DNS is round-robin across both machines (alternatively, a single DNS record and LVS/Keepalived sharing a virtual IP between the load balancers)
  • With a configuration management tool such as Puppet/Chef/Salt/etc., machines can be configured automatically for any of these roles
  • Set SELinux to permissive if you're having problems, but then watch the logs and use the warnings to create appropriate rules so that you can set it back to enforcing.
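The first assumption above can be handled in the application itself. A minimal sketch in PHP, assuming the mysqli extension; the credentials and database name are placeholders:

<?php
// Try each database host in order, falling back to the peer on failure.
$hosts = array('192.168.10.10', '192.168.10.11');
$db = null;
foreach ($hosts as $host) {
    // Suppress the connection warning; failure is handled by trying the next host.
    $conn = @mysqli_connect($host, 'myapp_user', 'myapp_pass', 'myapp');
    if ($conn) {
        $db = $conn;
        break;
    }
}
if ($db === null) {
    die('No database server available');
}

On each machine, list the local address first so that queries stay local in normal operation.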