
AllGoodBits.org


Serving HTTP with nginx

Web server configuration, like almost anything else in this game, depends on your situation. Your server hardware, OS, network connectivity, application requirements, client access patterns, performance requirements, tolerance for failure, etc., etc. are not necessarily the same as mine. These configuration suggestions will get you going with something reasonable, something that's good enough for HTTP services that are not mission-critical. Having said that, it's also intended as a good starting point for the things that are mission-critical; not to be blindly copy-pasted, but to give you a flying start in the right direction.

These are complex tools and it's not always predictable whether a particular change will improve things. Therefore, if you want to serve HTTP well (high performance, low resource usage, HA, etc.), you'll need to test, measure and evaluate against your workload and your situation. I'll not labour it too hard here, but optimization effort that isn't guided by carefully collected, well-presented and well-interpreted measurements is a waste of time. Having said that, if you have a common situation such as a PHP webapp like WordPress, you can get a most-of-the-way-there solution using nginx, php-fpm and microcaching.
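To make the microcaching idea concrete, here is a sketch of what it can look like. The cache path, zone name and sizes are my own illustrative placeholders, not defaults from any distribution; adjust for your environment.

```nginx
# In the http{} context: define a small cache backed by disk.
fastcgi_cache_path /var/cache/nginx/micro levels=1:2
                   keys_zone=microcache:10m max_size=100m inactive=10m;

server {
    location ~ \.php$ {
        fastcgi_cache           microcache;
        fastcgi_cache_key       $scheme$host$request_uri;
        fastcgi_cache_valid     200 1s;       # "micro": cache good responses for one second
        fastcgi_cache_use_stale updating;     # serve stale while one request refreshes it
        fastcgi_pass            unix:/var/run/php-fpm/www.sock;
        include                 fastcgi_params;
    }
}
```

Even a one-second cache lifetime collapses a burst of identical requests into a single hit on PHP, which is usually the expensive part.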

One last comment in that area: if you don't measure carefully, you may spend your efforts in the wrong areas; if your performance bottleneck is in one area, then any effort at all in another area is a waste of time. For example, if your application code is the bottleneck, tuning your HTTP server doesn't help; if your database service is crawling through molasses, tuning the PHP bytecode cache is pointless; and so on. (For more on this, look into the more IT-focused work in the area of the theory of operations: people like Gene Kim, building on work by W. Edwards Deming, Goldratt, et al.)

Note that picking SSL ciphers correctly is difficult; you'll need to understand the relative costs and benefits (performance vs. security). This list is admittedly somewhat cargo-cult, but it was drawn up based on my research (current at the time, but of course destined to obsolesce) from reading the work of people who focus exclusively on these areas. If you're trying to avoid being low-hanging fruit, these will serve you well, at least for a while. If you need to protect against a serious/targeted attacker, you'll need to do your own research and build your own understanding from primary sources. I'm a generalist, not a security/crypto specialist. Caveat lector.
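For context, the `ssl_*` directives below live in the `http{}` context and take effect in any TLS-enabled vhost. A minimal sketch of such a vhost (the certificate paths are placeholders, not from any real deployment):

```nginx
server {
    listen              443 ssl;
    server_name         www.example.com;
    # Placeholder paths -- point these at your real certificate and key.
    ssl_certificate     /etc/nginx/ssl/www.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/www.example.com.key;
    # ssl_ciphers, ssl_protocols and ssl_prefer_server_ciphers are
    # inherited from the http{} context, so they need not be repeated here.
}
```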

nginx.conf

user                  nginx;
# one worker, or equal to the number of _real_ CPU cores (in which case turn on accept_mutex)
worker_processes      1;
worker_priority       10;

error_log  /var/log/nginx/error.log;
#error_log  /var/log/nginx/error.log  notice;
#error_log  /var/log/nginx/error.log  info;

pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
    # serially accept() connections and pass to workers. Turn on only if worker_processes > 1
    accept_mutex        off;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr $host $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" $ssl_cipher $request_time';

    access_log  /var/log/nginx/access.log  main;

    # Perfect Forward Secrecy (PFS) high strength ciphers first.
    # PFS ciphers are those which start with [EC]DHE ([Elliptic Curve] Diffie-Hellman Ephemeral).
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:AES256-SHA;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

    # Timeouts, to avoid unnecessarily long-lived connections.
    # Helps protect against Slowloris-type attacks AND reduces resource usage.
    keepalive_timeout       75s; # timeout for a single keep-alive connection to stay open
    send_timeout             4s; # maximum pause between nginx packets
    client_body_timeout      6s; # maximum pause between client packets
    client_header_timeout    6s; # maximum time for client to send the entire header to nginx

    aio                 on; # asynchronous file I/O
    directio            64k; # use O_DIRECT when reading files larger than this size
    directio_alignment  512; # increase to 4k when using XFS on Linux
    charset             utf-8; # adds the charset to the "Content-Type" response header
    # disable on-the-fly gzip compression (which adds latency); only use gzip_static.
    # enable this only if you prefer low bandwidth over low latency!
    gzip                off;
    gzip_proxied        any; # allow compressed responses for any proxied request

    keepalive_requests        50;  # number of requests per connection; does not affect SPDY
    keepalive_disable         none; # allow all browsers to use keepalive connections
    max_ranges                0;   # disabled to stop Range-header DoS attacks
    msie_padding              off;
    open_file_cache           max=1000 inactive=2h;
    open_file_cache_errors    on;
    open_file_cache_min_uses  1;
    open_file_cache_valid     1h;
    output_buffers            1 512;
    postpone_output           1440;   # ensure sends match our Maximum Segment Size
    read_ahead                512K;   # kernel read-ahead size
    reset_timedout_connection on;  # reset timed-out connections, freeing RAM
    sendfile                  on;  # on for decent direct disk I/O
    server_tokens             off; # don't show the version number in error pages or the Server header
    server_name_in_redirect   off; # if off, nginx will use the requested Host header
    source_charset            utf-8; # same value as "charset"
    tcp_nodelay               on; # disable Nagle's algorithm; applies to keepalive connections only
    tcp_nopush                off;

    ignore_invalid_headers    on;

    # Load config files from the /etc/nginx/conf.d directory
    include /etc/nginx/conf.d/*.conf;

}
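The gzip comments above refer to serving pre-compressed files via the `ngx_http_gzip_static_module`; nginx then sends `foo.css.gz` when `foo.css` is requested by a client that accepts gzip. A minimal sketch (the module must be compiled in, and you create the `.gz` files yourself, e.g. at deploy time):

```nginx
# Serve pre-compressed .gz neighbours of static assets when present.
location ~* \.(css|js|svg)$ {
    gzip_static on;
}
```

This gives you the bandwidth saving without paying the compression cost per request.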

Example vhost configs

conf.d/static-site.conf:

server {
    server_name      www.example.com;
    root            /var/www/www.example.com/docroot;
    try_files $uri $uri/ =404;

    location /dl {
       auth_basic              "Not very private";
       #this path is relative to /etc/nginx/
       auth_basic_user_file    htpasswd;
       autoindex on;
    }

    location = /robots.txt {
       allow all;
       log_not_found off;
       access_log off;
    }

    location /nginx_status {
       stub_status on;
       access_log   off;
       #allow all;
       allow 127.0.0.1;
       deny all;
    }
}

conf.d/wordpress.conf:

server {
    server_name     wordpress.example.com;
    root            /var/www/wordpress.example.com/docroot;
    index           index.php index.html index.htm;

    # No need to proxy for static content, just serve it if it's there!
    try_files $uri $uri/ /index.php?$args;

    location ~ \.php$ {
       fastcgi_pass unix:/var/run/php-fpm/www.sock;
       #fastcgi_pass 127.0.0.1:9000;
       fastcgi_index  index.php;
       # include first, so the SCRIPT_FILENAME below is not overridden
       include        fastcgi_params;
       fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
       fastcgi_param  PATH_INFO $fastcgi_script_name;
    }
}

conf.d/monitoring.conf:

server {
    server_name     monitoring.example.com;
    root            /var/www/monitoring.example.com/docroot;
    index           index.php index.html index.htm;

    location        /nagios {
            proxy_pass  http://localhost:8080/nagios;
    }

    location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm/www.sock;
            #fastcgi_pass 127.0.0.1:9000;
            fastcgi_index  index.php;
            # include first, so the SCRIPT_FILENAME below is not overridden
            include        fastcgi_params;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            fastcgi_param  PATH_INFO $fastcgi_script_name;
    }
}
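When proxying like the /nagios location above, the backend only sees nginx's own address unless you pass the client details along. A commonly used (but not mandatory) set of headers, shown here on the same location as an illustration:

```nginx
location /nagios {
    proxy_pass        http://localhost:8080/nagios;
    # Pass the original request details through to the backend.
    proxy_set_header  Host             $host;
    proxy_set_header  X-Real-IP        $remote_addr;
    proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
}
```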

conf.d/perl-appserver.conf:

server {
    server_name     articles.example.com;
    root            /var/www/articles.example.com/docroot;
    # serve static files directly; hand everything else to the app server
    try_files $uri $uri/ @app;

    location @app {
            proxy_pass      http://unix:/var/run/starman.sock;
    }
}
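Equivalently, and handy once you need more than one backend process, the socket can be named via an upstream block. The name `app_server` here is my own illustration, not anything Starman requires:

```nginx
upstream app_server {
    server unix:/var/run/starman.sock;
    # additional backends could be listed here for load balancing
}

server {
    server_name     articles.example.com;
    root            /var/www/articles.example.com/docroot;
    try_files       $uri $uri/ @app;

    location @app {
            proxy_pass  http://app_server;
    }
}
```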