Nginx Performance Tuning
Many useful tools can tell you where web page optimizations can be made. For example, Firefox's PageSpeed add-on (which requires Firebug) or YSlow will point out optimizations you could make on your server. Let's say we have a running Nginx installation on which we would like to tweak a few parameters.
Enable compression
Gzip compression can greatly reduce the size of static files. And by greatly, I really mean greatly. Try it yourself!
<code bash>
# ls -al | grep index.html
-rw-r--r-- 1 www-data adm  133986 2010-03-10 22:48 index.html
# gzip -9 -c index.html > index.html.gz
# ls -al | grep index.html
-rw-r--r-- 1 www-data adm  133986 2010-03-10 22:48 index.html
-rw-r--r-- 1 root     root  11317 2010-04-12 12:34 index.html.gz
# gzip -l index.html.gz
         compressed        uncompressed  ratio uncompressed_name
              11317              133986  91.6% index.html
</code>
The compression ratio is 91.6%! It means that, instead of sending 133986 bytes of data (which represents at least 90 full TCP packets over Ethernet, not counting the handshake), your server will only send 11317 bytes (only 8 packets!).
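If you want to redo that packet arithmetic, a shell one-liner is enough. It assumes a typical Ethernet TCP MSS of 1460 bytes, which is a simplification that ignores TCP options and the handshake:

<code bash>
# ceiling division: how many 1460-byte TCP segments each payload needs
$ echo $(( (133986 + 1459) / 1460 ))
92
$ echo $(( (11317 + 1459) / 1460 ))
8
</code>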
Nginx can use gzip compression. However, this feature must be used carefully, because some browsers, such as IE6 and earlier, don't support it properly.
Gzip compression is configured globally in /etc/nginx/nginx.conf, in the 'http' block (you can also configure it for each 'server' block). Let's take a closer look at the configuration:
<code>
http {
    [...]
    gzip on;
    gzip_static on;
    gzip_comp_level 9;
    gzip_min_length 1400;
    gzip_types text/plain text/css image/png image/gif image/jpeg
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;
    gzip_vary on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
}
</code>
gzip on
Activate gzip compression.
gzip_static on
This feature allows you to use static gzipped versions of your files (you need to create them yourself) and serve those gzipped files directly through Nginx. Nginx simply looks for a filename identical to the one requested by the browser, but with a “.gz” extension (e.g. if 'index.html' is requested, Nginx will look for an 'index.html.gz' file next to it). If it exists, Nginx serves this already-compressed file and does not perform gzip compression, thus reducing CPU overhead and response time.
If the “.gz” file does not exist, Nginx serves the original file and performs the gzip compression on the fly.
The trade-off is that you need to keep two versions of your static files. There are patches to avoid that, but I am not going to cover them here. It also means that, if you modify the original, you need to regenerate the gzipped file as well.
You can test Nginx's behavior as follows:
- create a file “testgzip.html” containing the text: “Hello world, file generated at unixtime 1271070001”
- gzip the file using 'gzip -9 -c testgzip.html > testgzip.html.gz'
- get the file using curl (with compression support activated): 'curl http://nginxserver/testgzip.html --compressed'; it displays the expected text
- now, change the original file's content to: “Hello world, file generated at unixtime 1271070001 modified at 1271071107”
- get the file with curl again: it still displays the original text…
<note warning>When using gzip_static, you need to constantly make sure that your gzipped versions are up to date!</note>
A simple way to keep the gzipped files up to date is to use the following script. It checks the modification timestamp of all the original files (matching the extensions) and regenerates the corresponding gzipped files if needed.
<code bash>
#!/bin/bash
# this script checks a list of directories for a list of extensions and
# generates gzipped versions of the files that are found
#
# if the modification date of a file is newer than its gzipped version
# then the gzip file is regenerated

# specify a filetype like *.css or a filename like index.html
# leave one space between entries
FILETYPES="*.css *.jpg *.jpeg *.gif *.png *.js *.html"

# specify a list of directories to check recursively
DIRECTORIES="/var/www/nginx_default/*"

for currentdir in $DIRECTORIES
do
  for extension in $FILETYPES
  do
    find $currentdir -iname "$extension" -exec bash -c 'PLAINFILE={};GZIPPEDFILE={}.gz; \
      if [ -e $GZIPPEDFILE ]; \
      then if [ `stat --printf=%Y $PLAINFILE` -gt `stat --printf=%Y $GZIPPEDFILE` ]; \
        then echo "$GZIPPEDFILE outdated, regenerating"; \
          gzip -9 -f -c $PLAINFILE > $GZIPPEDFILE; \
        fi; \
      else echo "$GZIPPEDFILE is missing, creating it"; \
        gzip -9 -c $PLAINFILE > $GZIPPEDFILE; \
      fi' \;
  done
done
</code>
Launch it via crontab every 2 hours and your static files should be reasonably up to date.
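For example, assuming you saved the script as /usr/local/bin/gzip-statics.sh (the path and log file are up to you), a crontab entry like this one runs it every 2 hours:

<code bash>
# m h  dom mon dow  command
0 */2 *   *   *    /usr/local/bin/gzip-statics.sh >> /var/log/gzip-statics.log 2>&1
</code>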
Roman Stingler contributed a bash script that is slightly smarter than mine. Feel free to test it.
<code bash>
#!/bin/bash
FILETYPES=( "*.html" "*.woff" "*.css" "*.jpg" "*.jpeg" "*.gif" "*.png" "*.js" )

# specify a list of directories to check recursively
DIRECTORIES="/var/www/gr3at-html5-app/"

for currentdir in $DIRECTORIES
do
  for i in "${FILETYPES[@]}"
  do
    find $currentdir -iname "$i" -exec bash -c 'PLAINFILE={};GZIPPEDFILE={}.gz; \
      if [ -e $GZIPPEDFILE ]; \
      then if [ `stat --printf=%Y $PLAINFILE` -gt `stat --printf=%Y $GZIPPEDFILE` ]; \
        then echo "$GZIPPEDFILE outdated, regenerating"; \
          gzip -9 -f -c $PLAINFILE > $GZIPPEDFILE; \
        fi; \
      else echo "$GZIPPEDFILE is missing, creating it"; \
        gzip -9 -c $PLAINFILE > $GZIPPEDFILE; \
      fi' \;
  done
done
</code>
gzip_comp_level
Gzip can compress at different levels. Level 9 takes more CPU but creates smaller files.
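You can measure that trade-off yourself on the index.html from earlier; each run prints the compressed size for one level:

<code bash>
# compare output sizes across compression levels (prefix each with `time` to
# also see the CPU cost climb)
for level in 1 5 9; do
    printf "level %s: " "$level"
    gzip -"$level" -c index.html | wc -c
done
</code>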
gzip_min_length
Compression is only useful on reasonably large responses. If the content to be returned fits into a single TCP packet, then compressing it is useless for at least two reasons:
- we will need this TCP packet to send the compressed content anyway;
- the CPU overhead might be higher than the network delivery time.
'gzip_min_length' defines the minimum size an HTTP response must be to get compressed. By setting this value to 1400 bytes, we basically say “compress only if the size is bigger than the MTU”.
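If you are unsure of your interface's MTU, you can check it quickly (eth0 is an example interface name; adapt it to your system):

<code bash>
# print the MTU of the interface serving your traffic
ip link show eth0 | grep -o 'mtu [0-9]*'
</code>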
<note important>This directive doesn't apply if a static .gz file is available in the directory. In that case, the .gz file will be served whatever its size might be.</note>
gzip_types
The list of MIME types eligible for compression.
gzip_vary
Sets the 'Vary: Accept-Encoding' HTTP header, as defined in the RFC, so intermediate caches know the response differs depending on the encoding the client accepts.
gzip_http_version
Requires that the client announces HTTP/1.1 to receive compressed responses.
gzip_disable
Disables compression for user-agents matching the given regular expression (here: do not use compression with IE 6 and earlier, unless the SV1 token is present).
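You can verify the exclusion by replaying a request with an IE6 User-Agent string (nginxserver is the same placeholder host as before):

<code bash>
# an IE6 user-agent should get no Content-Encoding: gzip header back
curl -s -D - -o /dev/null --compressed \
     -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" \
     http://nginxserver/index.html | grep -i '^content-encoding' || echo "not compressed"
</code>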
Test it
Restart Nginx with those parameters and verify the size of your responses using Firebug.
Before (without gzip):
After (with gzip):
That is a gain of 38% in response size. Of course, the results depend entirely on the type of content you serve, but if it's mostly static text files with a few images, like most websites, then you should see interesting results here.
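If you prefer the command line to Firebug, curl can report the transferred size directly, with and without compression (nginxserver again being your test host):

<code bash>
# bytes actually downloaded, uncompressed vs compressed
curl -s -o /dev/null -w 'plain: %{size_download} bytes\n' http://nginxserver/index.html
curl -s -o /dev/null -w 'gzip:  %{size_download} bytes\n' --compressed http://nginxserver/index.html
</code>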
Expires Header
For a browser to be able to cache content, the response from the HTTP server must contain an 'Expires' header. It tells the browser how long it may keep the content in its local cache. Nginx can set the Expires header on HTTP responses based on the type of content it's serving. You should set this value according to your website's goals and structure. I personally find it acceptable to set an Expires value of 72 hours for all static content, since I don't change my CSS and images on a regular basis.
This time, the configuration goes in the 'server' section (the values mostly depend on the website type). So, in the server description, we have the following:
<code>
server {
    listen 80;
    server_name jve.linuxwall.info;
    [ ... ]
    # cache control: all statics are cacheable for 72 hours
    location / {
        if ($request_uri ~* \.(ico|css|js|gif|jpe?g|png)$) {
            expires 72h;
            break;
        }
    }
}
</code>
The location directive is set to “/”, which means this applies to all the resources located under the root. Then we use a regular expression match that is case insensitive (thanks to the star character in ~*). The regular expression will match any of the following types:
- ico: icon images
- css: the style sheets
- js: the JavaScript files
- gif, jpg, jpeg and png: the images
For all of those types, the 'expires' directive will set two HTTP headers similar to these:
<code>
Expires: Wed, 14 Apr 2010 08:44:29 GMT
Cache-Control: max-age=86400
</code>
Some useful information from the RFC:
- The Expires entity-header field gives the date/time after which the response is considered stale.
- The Cache-Control general-header field is used to specify directives that MUST be obeyed by all caching mechanisms along the request/response chain.
<note tip>The expiration time of an entity MAY be specified by the origin server using the Expires header (see section 14.21). Alternatively, it MAY be specified using the max-age directive in a response.</note>
<note tip>If a response includes both an Expires header and a max-age directive, the max-age directive overrides the Expires header, even if the Expires header is more restrictive. </note>
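To check that the headers are actually emitted, query any static resource on your server (logo.png is a placeholder name):

<code bash>
# the Expires and Cache-Control headers should reflect the 72h setting
curl -s -D - -o /dev/null http://nginxserver/logo.png | grep -Ei '^(expires|cache-control)'
</code>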
php5-xcache
Install xcache:
<code bash>
aptitude install php5-xcache
</code>
If Nginx runs in a chroot, you need to copy the following files:
<code bash>
sachiel:/var/www/chroot# cp /usr/lib/php5/20090626/xcache.so usr/lib/php5/20090626/
sachiel:/var/www/chroot# cp /etc/php5/conf.d/xcache.ini etc/php5/conf.d/
sachiel:/var/www/chroot# mknod -m 0666 /var/www/chroot/dev/zero c 1 5
sachiel:/var/www/chroot# chmod 777 tmp/
sachiel:/var/www/chroot# mkdir var/www/xcache
sachiel:/var/www/chroot# cp -r /usr/share/xcache/* var/www/xcache/
sachiel:/var/www/chroot# chown chroot-nginx:chroot-nginx var/www/xcache/ -R
</code>
Configure Xcache in its xcache.ini file:
<code>
[xcache-common]
;; install as zend extension (recommended), normally "$extension_dir/xcache.so"
zend_extension = /usr/lib/php5/20090626/xcache.so

[xcache.admin]
xcache.admin.enable_auth = On
; Configure this to use admin pages
xcache.admin.user = "admin"
xcache.admin.pass = "io3qh984hf982hf02h09fh23fh4820fh40hf"

[xcache]
; ini only settings, all the values here is default unless explained
; select low level shm/allocator scheme implemenation
xcache.shm_scheme = "mmap"
; to disable: xcache.size=0
; to enable : xcache.size=64M etc (any size > 0) and your system mmap allows
xcache.size = 64M
; set to cpu count (cat /proc/cpuinfo |grep -c processor)
xcache.count = 1
; just a hash hints, you can always store count(items) > slots
xcache.slots = 8K
; ttl of the cache item, 0=forever
xcache.ttl = 0
; interval of gc scanning expired items, 0=no scan, other values is in seconds
xcache.gc_interval = 0

; same as aboves but for variable cache
xcache.var_size = 0
xcache.var_count = 1
xcache.var_slots = 8K
; default ttl
xcache.var_ttl = 0
xcache.var_maxttl = 0
xcache.var_gc_interval = 300

xcache.test = Off
; N/A for /dev/zero
xcache.readonly_protection = Off
; for *nix, xcache.mmap_path is a file path, not directory.
; Use something like "/tmp/xcache" if you want to turn on ReadonlyProtection
; 2 group of php won't share the same /tmp/xcache
; for win32, xcache.mmap_path=anonymous map name, not file path
xcache.mmap_path = "/dev/zero"

; leave it blank(disabled) or "/tmp/phpcore/"
; make sure it's writable by php (without checking open_basedir)
xcache.coredump_directory = ""

; per request settings
xcache.cacher = On
xcache.stat = On
xcache.optimizer = On

[xcache.coverager]
; per request settings
; enable coverage data collecting for xcache.coveragedump_directory and
; xcache_coverager_start/stop/get/clean() functions (will hurt executing performance)
xcache.coverager = Off

; ini only settings
; make sure it's readable (care open_basedir) by coverage viewer script
; requires xcache.coverager=On
xcache.coveragedump_directory = "/tmp/coverage/"
</code>
You need to put the MD5 fingerprint of an admin password of your choosing into the configuration file (the xcache.admin.pass setting). That will allow you to open the XCache administration interface in your browser.
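One way to produce that MD5 hash (the password here is obviously just an example):

<code bash>
# xcache.admin.pass expects the MD5 hash of your chosen password
echo -n "my_admin_password" | md5sum
</code>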
Now reload the php5-cgi process (most likely by restarting spawn-fcgi).
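How exactly depends on how you launch spawn-fcgi on your system; one common pattern looks like the sketch below, where the address, port and user are assumptions to adapt to your setup:

<code bash>
# kill the running FastCGI backend and respawn it so the new extension loads
# (adapt the bind address, port and user to however you start spawn-fcgi)
pkill -f php5-cgi
spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -f /usr/bin/php5-cgi
</code>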
XCache works by caching the compiled PHP pages in memory. The PHP code doesn't need to be modified for it to work.
Let's try a basic benchmark on a simple PHP page used to display a gallery of images. The injection tool is inject32, by Willy Tarreau.
Test without xcache:
<code>
$ ./inject32 -u 25 -H "Host: website.com" -G "server:80/gallery.php?cat=city" -o 100
[...]
  hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout  htime  sdht  ptime
 27257   219    340   219 131358285  1641  1191     0     0 2709.3 117.6 3335.0
 27692   435    341   435 133632260  1649  2273     0     0 2745.0  69.9 3013.0
 28013   321    341   321 134961725  1645  1329     0     0 2734.8 154.8 3619.2
 28372   359    341   359 136936336  1649  1974     0     0 2571.6  72.4 5454.8
 28760   388    342   388 138623997  1650  1687     0     0 2576.0  92.2 2883.8
 28992   232    341   232 139831426  1645  1207     0     0 2615.9  91.3 3075.0
 29441   449    342   449 142422089  1656  2590     0     0 2600.1  65.4 3219.8
 29747   306    341   306 143578873  1650  1156     0     0 2636.4 123.7 3122.0
 29973   226    340   226 144890733  1646  1311     0     0 2774.9 120.6 3792.3
 30379   406    341   406 147110499  1652  2219     0     0 2810.8 138.8 3166.2
^C
Fin.
Clients : 47
Hits    : 30379 + 0 abortés
Octets  : 147110499
Duree   : 89313 ms
Debit   : 1647 kB/s
Reponse : 340 hits/s
Erreurs : 0
Timeouts: 0
Temps moyen de hit: 2810.8 ms
Temps moyen d'une page complete: 3166.2 ms
Date de demarrage: 1285798045 (30 Sep 2010 - 0:07:25)
Ligne de commande : ./inject32 -u 25 -H Host: website.com -G server:80/gallery.php -o 100
</code>
That's a decent 340 hits per second (on a small VIA Nano processor U2250 (1.6GHz Capable)).
Test with xcache:
<code>
$ ./inject32 -u 25 -H "Host: website.com" -G "server:80/gallery.php?cat=city" -o 100
[...]
  hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout  htime  sdht  ptime
 28200   471    440   470 165119438  2579  2760     2     0 1943.2 112.7 2589.2
 28647   447    440   447 167744669  2580  2627     2     0 1981.0  71.9 2258.0
 29085   438    440   438 170317043  2580  2572     2     0 1952.0 123.5 2258.7
 29469   384    439   384 172572275  2575  2257     2     0 2142.0  92.1 2618.5
 29955   486    440   486 175426553  2579  2854     2     0 2288.4 407.1 3555.7
 30403   448    440   448 178057657  2580  2631     2     0 2005.5  99.4 3727.4
^C
Fin.
Clients : 47
Hits    : 30403 + 74 abortés
Octets  : 178057657
Duree   : 69834 ms
Debit   : 2549 kB/s
Reponse : 435 hits/s
Erreurs : 2
Timeouts: 0
Temps moyen de hit: 2005.5 ms
Temps moyen d'une page complete: 3727.4 ms
Date de demarrage: 1285798978 (30 Sep 2010 - 0:22:58)
Ligne de commande : ./inject32 -u 25 -H Host: website.com -G server:80/gallery.php -o 100
</code>
435 hits per second. That's almost 30% better than without XCache. Not too bad for only 64MB of RAM ;) Further testing with different types of PHP code would be interesting.
<note important>Another use of XCache, with a Drupal website, showed more significant improvements. Without XCache, the average response rate was around 50 hits per second. With XCache enabled, it reached a steady 100 hits per second. That's a 100% performance improvement.</note>
Memcached
Installation is straightforward:
<code bash>
# aptitude install php5-memcache memcached
</code>
Then, we can grant more or less memory to memcached in /etc/memcached.conf:
<code>
# memcached default config file
# 2003 - Jay Bonci <jaybonci@debian.org>
# This configuration file is read by the start-memcached script provided as
# part of the Debian GNU/Linux distribution.

# Run memcached as a daemon. This command is implied, and is not needed for the
# daemon to run. See the README.Debian that comes with this package for more
# information.
-d

# Log memcached's output to /var/log/memcached
logfile /var/log/memcached.log

# Be verbose
# -v

# Be even more verbose (print client commands as well)
# -vv

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 256

# Default connection port is 11211
-p 11211

# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u nobody

# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
-l 127.0.0.1

# Limit the number of simultaneous incoming connections. The daemon default is 1024
# -c 1024

# Lock down all paged memory. Consult with the README and homepage before you do this
# -k

# Return error when memory is exhausted (rather than removing items)
# -M

# Maximize core file limit
# -r
</code>
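Once memcached is restarted, a quick sanity check over the loopback interface confirms it answers ('version' and 'quit' are standard memcached protocol commands):

<code bash>
# memcached should reply with its version string
printf 'version\r\nquit\r\n' | nc 127.0.0.1 11211
</code>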
Then, php5-memcache consists only of a shared library, which we can copy into Nginx's chroot:
<code bash>
# cp /usr/lib/php5/20090626/memcache.so /var/www/chroot/usr/lib/php5/20090626/
# cp /etc/php5/conf.d/memcache.ini /var/www/chroot/etc/php5/conf.d/
</code>
Now, unlike php5-xcache, memcached must be integrated into the PHP code. Most CMS provide memcached support. Dotclear, for example, has a plugin that does just that.
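Once the plugin is active, you can watch the cache doing its job through memcached's hit/miss counters:

<code bash>
# get_hits should climb as pages are served from the cache
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | grep -E 'get_hits|get_misses'
</code>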
Benchmark before memcached:
<code>
  hits ^hits hits/s  ^h/s    bytes  kB/s  last  errs  tout  htime  sdht  ptime
   533    10     11    10  6760694   140   133     0     0 8804.3 114.2 8932.3
   545    12     11    12  6905695   140   145     0     0 8807.6 105.1 8939.7
   556    11     11    11  7045076   140   139     0     0 8829.3 104.2 8951.3
   568    12     11    12  7197128   141   152     0     0 8813.1 107.7 8944.7
   578    10     11    10  7337940   141   140     0     0 8788.9 121.0 8971.0
   589    11     11    11  7470270   140   132     0     0 8783.5 117.4 8914.3
^C
Fin.
Clients : 25
Hits    : 589 + 0 abortés
Octets  : 7470270
Duree   : 53332 ms
Debit   : 140 kB/s
Reponse : 11 hits/s
Erreurs : 0
Timeouts: 0
Temps moyen de hit: 8783.5 ms
Temps moyen d'une page complete: 8914.3 ms
Date de demarrage: 1285804175 (30 Sep 2010 - 1:49:35)
Ligne de commande : ./inject32 -u 25 -H Host: jve.linuxwall.info -G sachiel.linuxwall.info:80/blog/index.php -o 4 -F
</code>
11 hits/s on average… (6 to 7 hits/s without xcache)
Now, activate memcached by adding the plugin and its parameters in inc/config.php, then relaunch the benchmark:
<code>
  hits ^hits hits/s  ^h/s     bytes  kB/s  last  errs  tout  htime  sdht  ptime
  1497    15     15    15  51862209   540   550     0     0 6370.9  83.7 6460.0
  1513    16     15    16  52384599   540   522     0     0 6393.8  77.3 6476.0
  1528    15     15    15  52935522   540   550     0     0 6423.8  91.9 6530.5
  1544    16     15    16  53489490   540   553     0     0 6391.5  89.3 6487.8
  1560    16     15    16  54011880   540   522     0     0 6417.4  87.4 6506.8
  1575    15     15    15  54531225   539   519     0     0 6433.8  78.8 6528.0
  1590    15     15    15  55050570   539   519     0     0 6426.8  99.8 6547.0
  1605    15     15    15  55569915   539   519     0     0 6425.3  87.6 6515.2
  1620    15     15    15  56089260   539   519     0     0 6422.5  88.5 6504.2
  1636    16     15    16  56674806   539   585     0     0 6413.4  88.8 6516.8
  1652    16     15    16  57228774   539   553     0     0 6379.6  90.4 6476.2
  1668    16     15    16  57751164   539   522     0     0 6374.8  84.1 6463.2
^C
Fin.
Clients : 50
Hits    : 1668 + 0 abortés
Octets  : 57751164
Duree   : 107719 ms
Debit   : 536 kB/s
Reponse : 15 hits/s
Erreurs : 0
Timeouts: 0
Temps moyen de hit: 6374.8 ms
Temps moyen d'une page complete: 6463.2 ms
Date de demarrage: 1285804931 (30 Sep 2010 - 2:02:11)
Ligne de commande : ./inject32 -u 25 -H Host: jve.linuxwall.info -G sachiel.linuxwall.info:80/blog/index.php -o 4 -F
</code>
15 hits/s. That's still slow, but better. I'll investigate whether the gain is greater with other CMS, such as Drupal.