This section describes how to enable and configure caching of responses received from proxied servers.

Introduction

When caching is enabled, NGINX Plus saves responses in a disk cache and uses them to respond to clients without having to proxy requests for the same content every time.

To learn more about NGINX Plus’s caching capabilities, watch the Content Caching with NGINX webinar on demand and get an in-depth review of features such as dynamic content caching, cache purging, and delayed caching.

Enabling the Caching of Responses

To enable caching, include the proxy_cache_path directive in the top-level http context. The mandatory first parameter is the local filesystem path for cached content, and the mandatory keys_zone parameter defines the name and size of the shared memory zone that is used to store metadata about cached items:

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
}

Then include the proxy_cache directive in the context (protocol type, virtual server, or location) for which you want to cache server responses, specifying the zone name defined by the keys_zone parameter to the proxy_cache_path directive (in this case, one):

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;

    server {
        proxy_cache one;
        location / {
            proxy_pass http://localhost:8000;
        }
    }
}

Note that the size defined by the keys_zone parameter does not limit the total amount of cached response data. Cached responses themselves are stored with a copy of the metadata in specific files on the filesystem. To limit the amount of cached response data, include the max_size parameter to the proxy_cache_path directive. (But note that the amount of cached data can temporarily exceed this limit, as described in the following section.)
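For example, the following snippet caps the total amount of cached data at 10 gigabytes (the 10g value is illustrative; choose a limit that fits your disk):

proxy_cache_path /data/nginx/cache keys_zone=one:10m max_size=10g;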

NGINX Processes Involved in Caching

There are two additional NGINX processes involved in caching:

- The cache manager is activated periodically to check the state of the cache. If the cache size exceeds the limit set by the max_size parameter, the cache manager removes the data that was accessed least recently.
- The cache loader runs only once, right after NGINX starts. It loads metadata about previously cached data into the shared memory zone. Loading the whole cache at once could consume enough resources to slow NGINX performance during the first few minutes after startup. To avoid this, configure iterative loading of the cache with the following parameters to the proxy_cache_path directive: loader_threshold (duration of an iteration, in milliseconds; the default is 200), loader_files (maximum number of items loaded during one iteration; the default is 100), and loader_sleeps (delay between iterations, in milliseconds; the default is 50).

In the following example, iterations last 300 milliseconds or until 200 items have been loaded:

proxy_cache_path /data/nginx/cache keys_zone=one:10m loader_threshold=300 loader_files=200;

Specifying Which Requests to Cache

By default, NGINX Plus caches all responses to requests made with the HTTP GET and HEAD methods the first time such responses are received from a proxied server. As the key (identifier) for a request, NGINX Plus uses the request string. If a request has the same key as a cached response, NGINX Plus sends the cached response to the client. You can include various directives in the http, server, or location context to control which responses are cached.
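By default, the key is similar to the following string, which corresponds to the documented default value of the proxy_cache_key directive:

proxy_cache_key $scheme$proxy_host$request_uri;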

To change the request characteristics used in calculating the key, include the proxy_cache_key directive:

proxy_cache_key "$host$request_uri$cookie_user";

To define the minimum number of times that a request with the same key must be made before the response is cached, include the proxy_cache_min_uses directive:

proxy_cache_min_uses 5;

To cache responses to requests with methods other than GET and HEAD, list them along with GET and HEAD as parameters to the proxy_cache_methods directive:

proxy_cache_methods GET HEAD POST;

Limiting or Bypassing Caching

By default, responses remain in the cache indefinitely. They are removed only when the cache exceeds the maximum configured size, and then in order by length of time since they were last requested. You can set how long cached responses are considered valid, or even whether they are used at all, by including directives in the http, server, or location context:

To limit how long cached responses with specific status codes are considered valid, include the proxy_cache_valid directive:

proxy_cache_valid 200 302 10m;
proxy_cache_valid 404      1m;

In this example, responses with the code 200 or 302 are considered valid for 10 minutes, and responses with code 404 are valid for 1 minute. To define the validity time for responses with all status codes, specify any as the first parameter:

proxy_cache_valid any 5m;

To define conditions under which NGINX Plus does not send cached responses to clients, include the proxy_cache_bypass directive. Each parameter defines a condition and consists of a number of variables. If at least one parameter is not empty and does not equal “0” (zero), NGINX Plus does not look up the response in the cache, but instead forwards the request to the backend server immediately.

proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;

To define conditions under which NGINX Plus does not cache a response at all, include the proxy_no_cache directive, defining parameters in the same way as for the proxy_cache_bypass directive.

proxy_no_cache $http_pragma $http_authorization;

Purging Content From the Cache

NGINX Plus makes it possible to remove outdated cached files from the cache, which prevents serving old and new versions of web pages at the same time. The cache is purged upon receipt of a special “purge” request that contains either a custom HTTP header or the PURGE HTTP method.

Configuring Cache Purge

Let’s set up a configuration that identifies requests that use the “PURGE” HTTP method and deletes matching URLs.

  1. On the http level, create a new variable, for example $purge_method, whose value depends on the $request_method variable:
    http {
        ...
        map $request_method $purge_method {
            PURGE 1;
            default 0;
        }
    }
  2. In the location where caching is configured, include the proxy_cache_purge directive to specify a condition for cache-purge requests. In our example, it is the $purge_method configured in the previous step:
    server {
        listen      80;
        server_name www.example.com;
    
        location / {
            proxy_pass  https://localhost:8002;
            proxy_cache mycache;
    
            proxy_cache_purge $purge_method;
        }
    }
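The steps above trigger purging with the PURGE method, but the condition can just as well be a custom HTTP header, since proxy_cache_purge treats any value that is non-empty and not “0” as a purge condition. A minimal sketch, assuming a hypothetical X-Purge-Cache request header:

location / {
    proxy_pass  https://localhost:8002;
    proxy_cache mycache;

    # Purge the matching cache entry when the client sends
    # "X-Purge-Cache: 1" (hypothetical header name). For ordinary
    # requests $http_x_purge_cache is empty, so they are processed
    # as usual.
    proxy_cache_purge $http_x_purge_cache;
}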

Sending the Purge Command

When the proxy_cache_purge directive is configured, you need to send a special cache-purge request to purge the cache. You can issue purge requests with a range of tools, for example the curl command:

$ curl -X PURGE -D - "https://www.example.com/*"
HTTP/1.1 204 No Content
Server: nginx/1.5.7
Date: Tue, 01 Dec 2015 16:33:04 GMT
Connection: keep-alive

In the example, the resources that have a common URL part (specified by the asterisk wildcard) are removed. However, such cache entries are not removed completely from the cache: they remain on disk until they are deleted for inactivity (as determined by the inactive parameter of the proxy_cache_path directive), are processed by the cache purger process (described below), or a client attempts to access them.
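If removing purged entries from disk promptly matters, you can shorten the inactivity timeout with the inactive parameter. A hedged example (the 1-hour value is illustrative):

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m inactive=1h;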

Restricting Access to the Purge Command

It is recommended that you limit the IP addresses that are allowed to send a cache-purge request:

geo $purge_allowed {
   default         0;  # deny from all other addresses
   10.0.0.1        1;  # allow from this address
   192.168.0.0/24  1;  # allow from this subnet
}

map $request_method $purge_method {
   PURGE   $purge_allowed;
   default 0;
}

In this example, NGINX checks whether the PURGE method is used in a request and, if so, analyzes the client IP address. $purge_method is then set to the value of $purge_allowed: “1” permits purging and “0” denies it.

Completely Removing Files from the Cache

To completely remove cache files that match an asterisk, activate a special cache purger process that continuously iterates through all cache entries and deletes the entries that match the wildcard key. On the http level, add the purger parameter to the proxy_cache_path directive:

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m purger=on;

Cache Purge Configuration Example

http {
    ...
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m purger=on;

    geo $purge_allowed {
       default         0;  # deny from all other addresses
       10.0.0.1        1;  # allow from this address
       192.168.0.0/24  1;  # allow from this subnet
    }

    map $request_method $purge_method {
        PURGE   $purge_allowed;
        default 0;
    }

    server {
        listen      80;
        server_name www.example.com;

        location / {
            proxy_pass        https://localhost:8002;
            proxy_cache       mycache;
            proxy_cache_purge $purge_method;
        }
    }
}

Byte-Range Caching

The initial cache-fill operation sometimes takes quite a long time, especially for large files. For example, when the first request starts downloading a part of a video file, subsequent requests have to wait for the entire file to be downloaded and put into the cache.

NGINX makes it possible to cache such range requests, and to gradually fill the cache, with the Cache Slice module. The file is divided into smaller “slices”. Each range request chooses the particular slices that cover the requested range and, if this range is not yet cached, puts them into the cache. All other requests for these slices take the data from the cache.

To enable byte-range caching:

  1. Make sure your NGINX is compiled with the slice module.
  2. Specify the size of the slice with the slice directive:
    location / {
        slice  1m;
    }

    Choose a slice size that makes slice downloads fast. Too small a size may result in excessive memory usage and a large number of open file descriptors while processing the request; too large a value may result in latency.

  3. Include the $slice_range variable in the cache key:
    proxy_cache_key $uri$is_args$args$slice_range;
  4. Enable caching of responses with the 206 status code:
    proxy_cache_valid 200 206 1h;
  5. Enable passing range requests to the proxied server by passing the $slice_range variable in the Range header field:
    proxy_set_header  Range $slice_range;

Byte-range caching example:

location / {
    slice             1m;
    proxy_cache       cache;
    proxy_cache_key   $uri$is_args$args$slice_range;
    proxy_set_header  Range $slice_range;
    proxy_cache_valid 200 206 1h;
    proxy_pass        http://localhost:8000;
}

Note that if slice caching is turned on, the initial file should not be changed.
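To check that slicing works, you can request a byte range with curl (the URL is hypothetical); the first such request fills the covering slice, and repeated requests for the same range are then answered from the cache:

$ curl -o /dev/null -H "Range: bytes=0-1048575" "http://www.example.com/video.mp4"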

Combined Configuration Example

The following sample configuration combines some of the caching options described above.

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m loader_threshold=300 
                     loader_files=200 max_size=200m;

    server {
        listen 8080;
        proxy_cache one;

        location / {
            proxy_pass http://backend1;
        }

        location /some/path {
            proxy_pass http://backend2;
            proxy_cache_valid any 1m;
            proxy_cache_min_uses 3;
            proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
        }
    }
}

In this example, two locations use the same cache but in different ways.

Because responses from the backend1 server rarely change, no cache-control directives are included. Responses are cached the first time a request is made, and remain valid indefinitely.

By contrast, responses to requests served by backend2 change frequently, so they are considered valid for only 1 minute and aren’t cached until the same request is made 3 times. Moreover, if a request matches the conditions defined by the proxy_cache_bypass directive, NGINX Plus immediately passes the request to backend2 without looking for it in the cache.
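For example, a request like the following (the nocache argument value is illustrative) makes $arg_nocache non-empty and not equal to “0”, so NGINX Plus forwards it straight to backend2 without consulting the cache:

$ curl "http://localhost:8080/some/path?nocache=true"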