Speeding up an nginx webserver
After properly securing my nginx webserver, I tweaked the cache and connection settings to improve performance, as measured by www.webpagetest.org, and documented the process below.
The results: a 3.2x faster document-ready time and an 18x reduction in bandwidth. Click on an iteration below to see the webpagetest.org results:


Additionally, I reduced the number of data files (csv) requested from 16 to 4 by concatenating the data, and dropped the dependency on external (css) files that were not that important.
Note that most of the nginx settings below might already be in your conf file, just commented out.
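As a sketch of the concatenation step (file names invented for illustration), the CSVs can be merged while keeping a single header row:

```shell
# Two small sample CSVs standing in for the real data files
printf 'time,value\n1,10\n' > q1.csv
printf 'time,value\n2,20\n' > q2.csv

# Keep the header from the first file only, then append data rows
head -n 1 q1.csv > combined.csv
for f in q1.csv q2.csv; do
    tail -n +2 "$f" >> combined.csv
done
```

The same idea extends to 16 source files; the page then issues one request instead of 16.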
iteration | change | doc ready | requests | size | full load | requests | size
---|---|---|---|---|---|---|---
0 | baseline | 2.9 s | 13 | 1660 kB | 4.9 s | 21 | 1690 kB
1 | http2, combine files | 1.5 s | 5 | 1570 kB | 3.2 s | 16 | 1660 kB
2 | gzip | 1.1 s | 4 | 322 kB | 2.9 s | 19 | 358 kB
3 | compile js | 0.8 s | 5 | 88 kB | 2.4 s | 16 | 116 kB
4 | cache-control | 0.9 s | 5 | 88 kB | 2.4 s | 19 | 124 kB


Iter 1: Add HTTP/2, combine files
HTTP/2 has many advantages over HTTP/1.1, one of them being speed. Nginx has supported it since September 2015 (version 1.9.5). Note that the current (January 2019) version of nginx on Debian stretch is 1.10.3, which is higher than 1.9.5 (I was briefly confused comparing '1.1' and '1.9'). Following these instructions, I enabled it in /etc/nginx/sites-enabled/default:
server {
    # SSL configuration
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    ...
}
Iter 2: gzip
Even though a Raspberry Pi is not that fast, it is still faster to gzip content before sending it. To this end I added the following to /etc/nginx/nginx.conf:

http {
    ...
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/csv text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    ...
}
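To get a feel for the payoff, here is a small sketch (sample data invented) that compresses repetitive CSV-like text at the same level 6 configured above:

```shell
# Generate a few thousand lines of repetitive CSV-like text
seq 1 5000 | awk '{print $1 ",sample,row"}' > data.csv

# Compress at level 6, matching gzip_comp_level
gzip -c -6 data.csv > data.csv.gz

echo "original: $(wc -c < data.csv) bytes, gzipped: $(wc -c < data.csv.gz) bytes"
```

Repetitive text like this typically shrinks by an order of magnitude, which lines up with the bandwidth drop in the results table.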
To properly gzip text/csv data I had to add this mime type to /etc/nginx/mime.types as instructed here.
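For reference, the entry sits inside the existing types block in /etc/nginx/mime.types, roughly like this (neighbouring entries elided):

```nginx
types {
    ...
    text/csv    csv;
    ...
}
```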
Iter 3: Compile javascript
I'm using a 1.6 MB javascript library which compresses well (because it's text), but compiling / minifying it further allows for even better compression. To this end I used Google's Closure Compiler with only 'simple' optimisation, which reduces the js file size from 1532 kB to 220 kB. Gzip compression of the minified file will be a bit less effective, but that's fine.

Iter 4: Add Cache-control
Cache-control settings instruct browsers how often they should refresh their content, so you can tune the requests per filetype. In my case, html and js files change rarely, but I have csv files that I update every 5 minutes. Some background info here. Note that this will not improve performance on the first load, but it will on subsequent requests.
I used settings found here:
server {
    ...
    # Cache control - cache regular files for 30d
    location ~* \.(?:js|css|png|jpg|jpeg|gif|ico)$ {
        expires 30d;
        add_header Cache-Control "public, no-transform";
    }
    # Cache control - never cache csv files as they are updated continuously
    location ~* \.(?:csv)$ {
        expires 0;
    }
    ...
}
Comments
Now this is the kind of content I like!
Did you try gzipping the assets on beforehand?
I like to put Cloudflare in front of every public website I own.
Another good tip: nginx can serve pre-gzipped files, so it doesn't have to gzip a static file on every request. If you put filename.js.gz next to the original file, it will be used. See https://theartofmachinery...06/nginx_gzip_static.html
Gzip is very fast, but under stress this reduces the load on the CPU.
Also use ETags: if a doc/page/file/image has not changed even though the cache has expired, you can send a 304.
- Use a tool like https://gtmetrix.com/ to figure out errors and what you can do to make it better. It uses the Google speed test but is much better; you can also create reports and such.
- Minify html content
- Minify css
- Minify js with the Google Closure Compiler (you already did this)
- Remove tags and other info from images
- Use the PNGoo PNG file optimizer if you are using png images
- Try to send non-chunked content
- Use a CDN for static content or create a 'fake' cdn subdomain to avoid cookies on non-content.
- Redirect every page to www.[yourdomain.com] to avoid cookies on all other subdomains
With all this I mostly get scores of 100% / 97% on GTmetrix.
See also this image: https://drive.google.com/...maFeZoMlwtqkxGQAEDUHhOiVi
Success!
Thanks all for your ideas! I'll try to implement them when I have some time.
ETags are not recommended at all! Only if your content tends to change often and you would still like to cache it (static (generated) HTML pages, for example).

codebeat wrote on Sunday 10 February 2019 @ 23:20:
Also use ETAG's, if a doc/page/file/image is not changed however cache is expired, you can send a 304.
On images / CSS / JS and such, ETags actually slow down rendering by waiting for the 304. Those files rarely change, and when they do you can just append a unique hash to the filename, so an updated file gets a different name and requires a fresh download anyway.
Forgot to mention this in my previous message. Using a CDN is considered bad practice as well (unless you are still using HTTP/1). Cookies are a very minimal payload, just a few bytes.

codebeat wrote on Sunday 10 February 2019 @ 23:20:
- Use a CDN for static content or create a 'fake' cdn subdomain to avoid cookies on non-content.
A separate DNS request and SSL handshake are required for the CDN domain, which causes quite a bit of latency and delays rendering. Compared to those few extra bytes I would rather go for faster rendering, since bandwidth is not a problem anymore in 2019.
CDNs were popular because with HTTP/1 you could only have 8 connections at a time to a domain. If you were to load a page with 100 photos, it would load in sets of 8. With multiple CDN domains this limitation was avoided. CDNs are not usually used to save a few bytes on cookies.