Nginx FastCGI cache to RAM and its limitations

There are many guides around the internet on how to set up the Nginx FastCGI cache, and even software, such as Webinoly, that will do it for you in a single command. However, not many focus on an important configuration detail: the amount of space the cache is allowed to occupy. The result may be the notorious "No space left on device while reading upstream" error.

This is what the error may look like:

[alert] 12637#12637: *212239 write() "/var/run/nginx-cache/2/8c/{longhash}" failed (28: No space left on device) while reading upstream, client: ### , server:, request: "GET / HTTP/1.1", subrequest: "/index.php", upstream: "fastcgi://", host: ""

The confusing part is that checking free space with the df command may not show a shortage on any file system, and that the path /var/run/nginx-cache appears to be mounted on the main disk.

But appearances are deceiving. The path /var/run is actually a symlink to /run, and /run is a temporary file system (tmpfs) that resides in RAM. So the error message is actually referring to the /run file system, which may or may not be full by the time you look, because the cache manager may have deleted some cached data in the meantime.
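You can verify this on your own server; a quick check (these paths are common Linux defaults, but details may differ on your distribution):

```shell
# /var/run is usually a symlink to /run on modern Linux systems
ls -ld /var/run

# /run is typically a tmpfs mount backed by RAM; df shows its size and usage
df -h /run

# findmnt (where available) confirms the file system type
findmnt -t tmpfs /run
```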

"As previously mentioned, the amount of cached data can temporarily exceed the limit during the time between cache manager activations."

— Nginx documentation

Why is this happening and how to prevent it?

The problem is not the fact that cached data exceeds the limit, but the limit itself. The tmpfs partition mounted on /run has a fixed size defined by the operating system, usually about 20% of the available RAM. You can see the exact amount with the df -h command on your system.

The max_size setting should never exceed the space available on /run. Since /run contains various other data, not just the nginx cache, I would recommend setting max_size to at most 70% of the space available on /run.
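As a rough sketch, you could compute such a limit from the current size of /run (the 70% figure is the rule of thumb above, not a hard requirement; note that df --output is a GNU coreutils option):

```shell
# Read the total size of /run in 1K blocks and suggest 70% of it in megabytes
run_kb=$(df --output=size /run | tail -n 1)
echo "suggested max_size: $((run_kb * 70 / 100 / 1024))m"
```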

The cause of the error is usually that max_size is set too high compared to the space available on the tmpfs mounted on /run.

To change the max_size setting, find the cache path directive in your nginx configuration and adjust the value. Below is one example; for a FastCGI cache the directive is fastcgi_cache_path (proxy_cache_path takes the same parameters for a proxy cache).

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=one:10m max_size=200m;
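For context, a minimal sketch of how such a cache is wired up for PHP (the zone name, cache key, and validity times are illustrative assumptions, as is the PHP-FPM socket path):

```nginx
# http {} context: define the cache; levels and inactive are illustrative
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=one:10m max_size=200m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust to your PHP-FPM socket

        fastcgi_cache one;                        # zone name from keys_zone above
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 60m;          # cache successful responses for an hour
    }
}
```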

If you have a large site or many virtual hosts on your nginx, you may need more cache than the space available on /run. You can either create a new RAM partition, if you are sure you have enough spare RAM lying around, or write the cache to disk.

To create a new RAM disk, refer to this answer on StackExchange. Don't forget to point your cache path directive to the newly created disk.
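Such a RAM disk is just another tmpfs mount; a sketch, assuming a 512 MB size and a mount point of /var/cache/nginx-ram (both are arbitrary choices — size it to your spare RAM):

```shell
# Create the mount point and mount a dedicated tmpfs on it (requires root)
sudo mkdir -p /var/cache/nginx-ram
sudo mount -t tmpfs -o size=512m,mode=0700 tmpfs /var/cache/nginx-ram

# Equivalent /etc/fstab entry so the mount survives reboots:
# tmpfs  /var/cache/nginx-ram  tmpfs  size=512m,mode=0700  0  0
```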

Writing the cache to disk comes with some speed loss; on SSDs, however, it should not be noticeable. In this case all you have to do is point your cache path directive to a directory on the SSD. Typically this would be /var/cache/nginx or /data/nginx/cache, or really any directory of your choice. The directory should be readable and writable only by the nginx user (usually www-data).
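Putting that together for a disk-backed cache (the path, zone name, and sizes are illustrative; on disk you can usually afford a much larger max_size):

```nginx
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=one:10m max_size=2g inactive=60m;
```

Create the directory first and restrict it to the nginx user, e.g. chown www-data:www-data /var/cache/nginx followed by chmod 700 /var/cache/nginx.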