This deployment guide explains how to use NGINX and NGINX Plus to load balance HTTP and HTTPS traffic across a pool of Apache Tomcat™ application servers. The detailed instructions in this guide apply to both cloud-based and on-premises deployments of Tomcat.
NGINX is an open source web server and reverse proxy that has grown in popularity in recent years due to its scalability. NGINX was first created to solve the C10K problem (serving 10,000 simultaneous connections on a single web server). NGINX’s features and performance have made it a staple of high performance sites – now powering 1 in 3 of the world’s million busiest web properties.
NGINX Plus is the commercially supported version of the open source NGINX software. NGINX Plus is a complete application delivery platform, extending the power of NGINX with a host of enterprise-ready capabilities that enhance a Tomcat deployment and are instrumental to building web applications at scale.
Apache Tomcat is an open source software implementation of the Java Servlet, JavaServer Pages, Java Expression Language, and Java WebSocket technologies.
We tested the procedures in this guide against Apache Tomcat 8.0.
The instructions assume you have basic Linux system administration skills; full instructions for those routine tasks are not provided.
example.com is used as a sample domain name (in key names and configuration blocks); replace it with your organization's name.
If you plan to enable TLS/SSL encryption of traffic between NGINX or NGINX Plus and clients of your Tomcat application, you need to configure a server certificate for NGINX or NGINX Plus. When compiling open source NGINX from source, include the --with-http_ssl_module configure parameter to enable TLS/SSL support for HTTP traffic (the corresponding parameter for TCP is --with-stream_ssl_module, and for email it is --with-mail_ssl_module, but this guide does not cover either of those protocol types).
There are several ways to obtain a server certificate, such as using a certificate you already have, generating a self-signed certificate, or requesting one from a certificate authority (CA) or your organization's internal security group. For your convenience, step-by-step instructions are provided below for the latter two options.
For more details on TLS/SSL termination, see the NGINX Plus Admin Guide.
Generate a public-private key pair and a self-signed server certificate in PEM format that is based on them.
Log in as the root user on a machine that has the openssl software installed.
Generate the key pair in PEM format (the default). To encrypt the private key, include the -des3 parameter. (Other encryption algorithms are available, listed on the man page for the genrsa command.) You are prompted for the passphrase used as the basis for encryption.
root# openssl genrsa -des3 -out ~/private-key.pem 2048
Generating RSA private key ...
Enter pass phrase for private-key.pem:
Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.
root# cp ~/private-key.pem secure-dir/private-key.pem.backup
Generate the certificate. Include the -new and -x509 parameters to make a new self-signed certificate. Optionally include the -days parameter to change the certificate's validity lifetime from the default of 30 days (10950 days is about 30 years). Respond to the prompts with values appropriate for your testing deployment.
root# openssl req -new -x509 -key ~/private-key.pem -out ~/self-cert.pem \
-days 10950
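Optionally, inspect the new certificate with the standard openssl x509 command to confirm its subject and validity dates before deploying it:
root# openssl x509 -in ~/self-cert.pem -text -noout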
Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX or NGINX Plus server.
Log in as the root user on a machine that has the openssl software installed.
Create a private key to be packaged in the certificate.
root# openssl genrsa -out ~/example.com.key 2048
Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.
root# cp ~/example.com.key secure-dir/example.com.key.backup
Create a Certificate Signing Request (CSR) file.
root# openssl req -new -sha256 -key ~/example.com.key -out ~/example.com.csr
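Optionally, verify the CSR's signature and review its contents before submitting it:
root# openssl req -in ~/example.com.csr -noout -text -verify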
Request a certificate from a CA or your internal security group, providing the CSR file (example.com.csr). As a reminder, never share private keys (.key files) directly with third parties.
The certificate needs to be PEM format rather than in the Windows-compatible PFX format. If you request the certificate from a CA website yourself, choose NGINX or Apache (if available) when asked to select the server platform for which to generate the certificate.
Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX Plus server.
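For example, assuming the CA returned the certificate in a file called example.com.crt (the filename and extension vary by CA), you might install the files like this:
root# mkdir -p /etc/nginx/ssl
root# mv ~/example.com.crt ~/example.com.key /etc/nginx/ssl/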
To reduce errors, this guide has you copy directives from files provided by NGINX, Inc. into your configuration files, instead of using a text editor to type in the directives yourself. Then you go through the sections in this guide (starting with Configuring Virtual Servers for HTTP and HTTPS Traffic) to learn how to modify the directives as required for your deployment.
As provided, there is one file for basic load balancing (with NGINX or NGINX Plus) and one file for enhanced load balancing (with NGINX Plus). If you are installing and configuring NGINX or NGINX Plus on a fresh Linux system and using it only to load balance Tomcat traffic, you can use the provided file as your main configuration file, which by convention is called /etc/nginx/nginx.conf.
We recommend, however, that instead of a single configuration file you use the scheme that is set up automatically when you install an NGINX Plus package, especially if you already have an existing NGINX or NGINX Plus deployment or plan to expand your use of NGINX or NGINX Plus to other purposes in the future. In the conventional scheme, the main configuration file is still called /etc/nginx/nginx.conf, but instead of including all directives in it, you create separate configuration files for different functions and store the files in the /etc/nginx/conf.d directory. You then use the include directive in the appropriate contexts of the main file to read in the contents of the function-specific files.
To download the complete configuration file for basic load balancing:
root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/tomcat-basic.conf > tomcat-basic.conf
To download the complete configuration file for enhanced load balancing:
root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/tomcat-enhanced.conf > \
tomcat-enhanced.conf
(You can also access the URL in a browser and download the file that way.)
To set up the conventional configuration scheme, add an http configuration block in the main nginx.conf file, if it does not already exist. (The standard placement is below any global directives.) Add this include directive with the appropriate filename:
http {
include conf.d/tomcat-(basic|enhanced).conf;
}
You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files function-http.conf, this is an appropriate include directive:
http {
include conf.d/*-http.conf;
}
For reference purposes, the full configuration files are also provided in this document:
We recommend, however, that you do not copy text directly from this document. It does not necessarily use the same mechanisms for positioning text (such as line breaks and white space) as text editors do. In text copied into an editor, lines might run together and indenting of child statements in configuration blocks might be missing or inconsistent. The absence of formatting does not present a problem for NGINX or NGINX Plus, because (like many compilers) they ignore white space during parsing, relying solely on semicolons and curly braces as delimiters. The absence of white space does, however, make it more difficult for humans to interpret the configuration and modify it without making mistakes.
We recommend that each time you complete a set of updates to the configuration, you run the nginx -t command to test the configuration file for syntactic validity.
root# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
To tell NGINX or NGINX Plus to start using the new configuration, run one of the following commands:
root# nginx -s reload
root# service nginx reload
This section explains how to set up NGINX or NGINX Plus as a load balancer in front of two Tomcat servers. The instructions in the first two sections, which cover virtual servers and basic load balancing, are mandatory; the instructions in the remaining sections are optional, depending on the requirements of your application.
The complete configuration file appears in Full Configuration for Basic Load Balancing.
If you are using NGINX Plus, you can configure additional enhanced features after you complete the configuration of basic load balancing. See Configuring Enhanced Load Balancing with NGINX Plus.
These directives define virtual servers for HTTP and HTTPS traffic in separate server blocks in the top-level http configuration block. All HTTP requests are redirected to the HTTPS server.
Configure a server block that listens for requests for https://example.com received on port 443.
The ssl_certificate and ssl_certificate_key directives are required; substitute the names of the certificate and private key you chose in Configuring a TLS/SSL Certificate for Client Traffic. The other directives are optional but recommended.
# in the 'http' block
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/nginx/ssl/certificate-name;
ssl_certificate_key /etc/nginx/ssl/private-key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
}
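If you want to tighten the TLS settings beyond these defaults, you can add the ssl_protocols and ssl_ciphers directives to the same server block. The values below are only illustrative; choose protocol versions and cipher suites according to your own security policy:
# in the 'server' block for HTTPS traffic
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;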
Configure a server block that permanently redirects requests received on port 80 for http://example.com to the HTTPS server, which is defined in the previous step.
If you're not using SSL for client connections, omit the return directive. When instructed in the remainder of this guide to add directives to the server block for HTTPS traffic, add them to this block instead.
# in the 'http' block
server {
listen 80;
server_name example.com;
# Redirect all HTTP requests to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
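To check the redirect, you can send a HEAD request with curl from a host where example.com resolves to your NGINX or NGINX Plus server; the response (abridged here) includes the 301 status and a Location header pointing at the HTTPS URL:
$ curl -I http://example.com/
HTTP/1.1 301 Moved Permanently
Location: https://example.com/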
For more information about configuring SSL, see the NGINX Plus Admin Guide and the reference documentation for the HTTP SSL module.
To configure load balancing, you first create a named "upstream group," which lists the backend servers. You then set up NGINX or NGINX Plus as a reverse proxy and load balancer by referring to the upstream group in one or more proxy_pass directives.
Configure an upstream group called tomcat with two Tomcat application servers listening on port 8080, one on IP address 10.100.100.11 and the other on 10.100.100.12.
# in the 'http' block
upstream tomcat {
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}
In the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, include these two location blocks:
The first one matches HTTPS requests in which the path starts with /tomcat-app/, and proxies them to the tomcat upstream group we created in the previous step.
The second one funnels all traffic to the first location block, by doing a temporary redirect of all requests for https://example.com/.
# in the 'server' block for HTTPS traffic
location /tomcat-app/ {
proxy_pass http://tomcat;
}
location = / {
return 302 /tomcat-app/;
}
Note that these blocks handle only standard HTTPS traffic. If you want to load balance WebSocket traffic, you need to add another location block as described in Configuring Proxy of WebSocket Traffic.
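Separately, it is common (though not required by this guide) to pass information about the original client request to Tomcat by adding proxy_set_header directives to the /tomcat-app/ location block. The header names below are conventional choices, and your application must be set up to read them:
# in the 'location /tomcat-app/' block
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;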
By default, NGINX and NGINX Plus use the Round Robin algorithm for load balancing among servers. The load balancer runs through the list of servers in the upstream group in order, forwarding each new request to the next server. In our example, the first request goes to 10.100.100.11, the second to 10.100.100.12, the third to 10.100.100.11, and so on. For information about the other available load-balancing algorithms, see Application Load Balancing with NGINX Plus.
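For example, to switch the upstream group from Round Robin to the Least Connections algorithm, which forwards each request to the server with the fewest active connections, add the least_conn directive (a minimal sketch of the upstream group from above):
# in the 'http' block
upstream tomcat {
least_conn;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}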
In NGINX Plus, you can also set up dynamic reconfiguration of an upstream group when the set of backend servers changes, using DNS or an API; see Enabling On-the-Fly Reconfiguration of Upstream Groups.
For more information about proxying and load balancing, see Reverse Proxy and Load Balancing in the NGINX Plus Admin Guide, and the documentation for the Proxy and Upstream modules.
If your application requires basic session persistence (also known as sticky sessions), you can implement it in NGINX by using the IP Hash load-balancing algorithm. (NGINX Plus offers a more sophisticated form of session persistence, as described in Configuring Advanced Session Persistence.)
With the IP Hash algorithm, for each request NGINX calculates a hash based on the client’s IP address, and associates the hash with one of the upstream servers. It sends all requests with that hash to that server, thus establishing session persistence.
If the client has an IPv6 address, the hash is based on the entire address. If it has an IPv4 address, the hash is based on just the first three octets of the address. This is designed to optimize for ISP clients that are assigned IP addresses dynamically from a subnetwork (/24) range. However, it is not effective in these cases:
The majority of the traffic to your site is coming from one forward proxy or from clients on the same /24 network, because in that case IP Hash maps all clients to the same server.
A client’s IP address can change during the session, for example when a mobile client switches from a Wi-Fi network to a cellular one.
To configure session persistence in NGINX, add the ip_hash directive to the upstream block created in Configuring Basic Load Balancing:
# in the 'http' block
upstream tomcat {
ip_hash;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}
You can also use the Hash load balancing method for session persistence, with the hash based on any combination of text and NGINX variables you specify. For example, you can hash on full (four-octet) client IP addresses with the following configuration.
# in the 'http' block
upstream tomcat {
hash $remote_addr;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}
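If you expect to add or remove servers from the group, you can append the consistent parameter to the hash directive to enable ketama consistent hashing, which remaps only a small share of the keys when the set of servers changes:
# in the 'http' block
upstream tomcat {
hash $remote_addr consistent;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}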
The WebSocket protocol (defined in RFC 6455) enables simultaneous two-way communication over a single TCP connection between clients and servers, where each side can send data independently from the other. To initiate the WebSocket connection, the client sends a handshake request to the server, upgrading the request from standard HTTP to WebSocket. The connection is established if the handshake request passes validation, and the server accepts the request. When a WebSocket connection is created, a browser client can send data to a server while simultaneously receiving data from that server.
Tomcat 8 does not enable WebSocket by default, but instructions for enabling it are available in the Tomcat documentation. If you want to use NGINX or NGINX Plus to proxy WebSocket traffic to your Tomcat application servers, add the directives discussed in this section.
NGINX and NGINX Plus by default use HTTP/1.0 for upstream connections. To be proxied correctly, WebSocket connections require HTTP/1.1 along with some other configuration directives that set HTTP headers:
# in the 'http' block
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# in the 'server' block for HTTPS traffic
location /wstunnel/ {
proxy_pass http://tomcat;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
The first proxy_set_header directive is needed because the Upgrade request header is hop-by-hop; that is, the HTTP specification explicitly forbids proxies from forwarding it. This directive overrides the prohibition.
The second proxy_set_header directive sets the Connection header to a value that depends on the test in the map block: if the request has an Upgrade header, the Connection header is set to upgrade; otherwise, it is set to close.
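To exercise the tunnel, you can connect with a WebSocket client such as wscat (an npm package, assumed to be installed here); the path is illustrative and depends on the WebSocket endpoints your Tomcat application exposes:
$ wscat --connect wss://example.com/wstunnel/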
For more information about proxying WebSocket traffic, see WebSocket proxying and NGINX as a WebSocket Proxy.
Caching responses from your Tomcat app servers can both improve response time to clients and reduce load on the servers, because eligible responses are served immediately from the cache instead of being generated again on the server. There are a variety of useful directives that can be used to fine-tune caching behavior; for a detailed discussion, see A Guide to Caching with NGINX.
To enable basic caching in NGINX or NGINX Plus, add the following configuration:
Include the proxy_cache_path directive to create the local disk directory /tmp/NGINX_cache/ for use as a cache. The keys_zone parameter allocates 10 megabytes (MB) of shared memory for a zone called backcache, which is used to store cache keys and metadata such as usage timers. A 1-MB zone can store data for about 8,000 keys.
# in the 'http' block
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
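The proxy_cache_path directive also accepts parameters that bound the cache. For example (the values here are illustrative, not part of this guide's baseline configuration), max_size caps the amount of disk space the cache can use and inactive removes items that have not been accessed within the specified period:
# in the 'http' block
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m max_size=1g inactive=60m;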
In the location block that matches HTTPS requests in which the path starts with /tomcat-app/, include the proxy_cache directive to reference the cache created in the previous step.
# in the 'server' block for HTTPS traffic
location /tomcat-app/ {
proxy_pass http://tomcat;
proxy_cache backcache;
}
By default, the cache key is similar to this string of NGINX variables: $scheme$proxy_host$request_uri. To change the list of variables, specify them with the proxy_cache_key directive. One effective use of this directive is to create a cache key for each user based on the JSESSIONID cookie. This is useful when the cache is private, for example containing shopping cart data or other user-specific resources. Include the JSESSIONID cookie in the cache key with this directive:
proxy_cache_key $proxy_host$request_uri$cookie_jsessionid;
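The proxy_cache_key directive belongs in the same context as the proxy_cache directive it modifies. As a minimal sketch, here it is placed in the /tomcat-app/ location block from the previous step:
# in the 'server' block for HTTPS traffic
location /tomcat-app/ {
proxy_pass http://tomcat;
proxy_cache backcache;
proxy_cache_key $proxy_host$request_uri$cookie_jsessionid;
}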
For more complete information about caching, refer to the documentation for the Proxy module and the NGINX Plus Admin Guide.
HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and later.
If using NGINX, note that in NGINX 1.9.5 and later the SPDY module is completely removed from the NGINX codebase and replaced with the HTTP/2 module. After upgrading to version 1.9.5, you can no longer configure NGINX to use SPDY. If you want to keep using SPDY, use the latest binary available from the NGINX 1.8.x branch.
In NGINX Plus R8 and later, the nginx-plus and nginx-plus-extras packages support HTTP/2 by default (and SPDY is no longer supported). If using NGINX Plus R7, you must install the nginx-plus-http2 package instead of the nginx-plus or nginx-plus-extras package.
To enable HTTP/2 support, add the http2 parameter to the listen directive in the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, so that it looks like this:
# in the 'server' block for HTTPS traffic
listen 443 ssl http2;
To verify that HTTP/2 translation is working, you can use the “HTTP/2 and SPDY indicator” plug-in available for Google Chrome and Firefox.
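If your curl build includes HTTP/2 support, you can also check from the command line; the -k flag skips certificate verification, which is convenient if you are testing with a self-signed certificate:
$ curl -I --http2 -k https://example.com/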
The full configuration for basic load balancing appears here for your convenience. It goes in the http context. The complete file is available for download from the NGINX, Inc. website.
We recommend that you do not copy text directly from this document, but instead use the method described in Creating and Modifying Configuration Files to include these directives in your configuration: add an include directive to the http context of the main nginx.conf file to read in the contents of /etc/nginx/conf.d/tomcat-basic.conf.
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream tomcat {
# Use IP Hash for session persistence
ip_hash;
# List of Tomcat application servers
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}
server {
listen 80;
server_name example.com;
# Redirect all HTTP requests to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /etc/nginx/ssl/certificate-name;
ssl_certificate_key /etc/nginx/ssl/private-key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
# Load balance requests for /tomcat-app/ across Tomcat application servers
location /tomcat-app/ {
proxy_pass http://tomcat;
proxy_cache backcache;
}
# Return a temporary redirect to the /tomcat-app/ directory
# when user requests '/'
location = / {
return 302 /tomcat-app/;
}
# WebSocket configuration
location /wstunnel/ {
proxy_pass http://tomcat;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
This section explains how to configure enhanced load balancing with some of the extended features in NGINX Plus.
Note: Before setting up the enhanced features described in this section, you must complete the instructions for basic load balancing in Configuring Virtual Servers for HTTP and HTTPS Traffic and Configuring Basic Load Balancing. Except as noted, all optional basic features (described in the other subsections of Configuring Basic Load Balancing in NGINX and NGINX Plus) can be combined with the enhanced features described here.
The features described in the following sections are all optional.
The complete configuration file appears in Full Configuration for Enhanced Load Balancing.
NGINX Plus provides more sophisticated session persistence methods than open source NGINX, implemented in three variants of the sticky directive. In the following example, we add the sticky route directive to the upstream group we created in Configuring Basic Load Balancing, to base session persistence on the jvmRoute attribute set by the Tomcat application.
Remove or comment out the ip_hash directive, leaving only the server directives:
# in the 'http' block
upstream tomcat {
#ip_hash;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
}
Add the following lines to the configuration files for your backend Tomcat servers to append an identifier based on the jvmRoute attribute (here, set to either a or b) to the end of the JSESSIONID cookie value:
#on host 10.100.100.11
<Engine name="Catalina" defaultHost="www.example.com" jvmRoute="a">
#on host 10.100.100.12
<Engine name="Catalina" defaultHost="www.example.com" jvmRoute="b">
Configure NGINX Plus to select the upstream server by inspecting the JSESSIONID cookie and URL in each request and extracting the jvmRoute value.
# in the 'http' block
map $cookie_jsessionid $route_cookie {
~.+\.(?P<route>\w+)$ $route;
}
map $request_uri $route_uri {
~jsessionid=.+\.(?P<route>\w+)$ $route;
}
upstream tomcat {
server 10.100.100.11:8080 route=a;
server 10.100.100.12:8080 route=b;
sticky route $route_cookie $route_uri;
}
The first map directive extracts the final element (following the period) of the JSESSIONID cookie, recording it in the $route_cookie variable. The second map directive extracts the final element (following the period) from the trailing jsessionid= element of the request URL, recording it in the $route_uri variable.
The sticky route directive tells NGINX Plus to use the value of the first non-empty variable it finds in the list of parameters, which here is the two variables set by the map directives. In other words, it uses the final element of the JSESSIONID cookie if it exists, and the final element of the jsessionid= URL element otherwise.
The route parameters to the server directives instruct NGINX Plus to send the request to 10.100.100.11 if the value is a, and to 10.100.100.12 if the value is b.
Another option for implementing session persistence is to use the sticky learn directive, so that the session identifier is the JSESSIONID cookie created by your Tomcat application.
Remove or comment out the ip_hash directive in the upstream block, as in Step 1 above.
Include the sticky learn directive in the upstream block:
# in the 'http' block
upstream tomcat {
server 10.100.100.11:8080;
server 10.100.100.12:8080;
sticky learn create=$upstream_cookie_JSESSIONID
lookup=$cookie_JSESSIONID
zone=client_sessions:1m;
}
The create and lookup parameters specify how new sessions are created and existing sessions are searched for, respectively. For new sessions, NGINX Plus sets the session identifier to the value of the $upstream_cookie_JSESSIONID variable, which captures the JSESSIONID cookie sent by the Tomcat application server. When checking for existing sessions, it uses the JSESSIONID cookie sent by the client (the $cookie_JSESSIONID variable) as the session identifier.
Both parameters can be specified more than once (each time with a different variable), in which case NGINX Plus uses the first non-empty variable for each one.
The zone argument creates a shared memory zone for storing information about sessions. The amount of memory allocated (here, 1 MB) determines how many sessions can be stored at a time (the number varies by platform). The name assigned to the zone (here, client_sessions) must be unique for each sticky directive.
For more information about session persistence, see the NGINX Plus Admin Guide.
Health checks are out-of-band HTTP requests sent to a server at fixed intervals. They are used to determine whether a server is responsive and functioning correctly, without requiring an actual request from a client.
Because the health_check directive is placed in the location block, we can enable different health checks for each application.
In the location block that matches HTTPS requests in which the path starts with /tomcat-app/ (created in Configuring Basic Load Balancing), add the health_check directive.
Here we configure NGINX Plus to send an out-of-band request for the top-level URI / (slash) to each of the servers in the tomcat upstream group every 2 seconds, which is more aggressive than the default 5-second interval. If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes five subsequent health checks in a row. We include the match parameter to define a nondefault set of health-check tests.
# in the 'server' block for HTTPS traffic
location /tomcat-app/ {
proxy_pass http://tomcat;
proxy_cache backcache;
health_check interval=2s fails=1 passes=5 uri=/
match=tomcat_check;
}
In the http context, include a match directive to define the tests that a server must pass to be considered functional. In this example, it must return status code 200, the Content-Type response header must be text/html, and the response body must match the indicated regular expression.
# in the 'http' block
match tomcat_check {
status 200;
header Content-Type = text/html;
body ~ "Apache Tomcat/8";
}
In the tomcat upstream group, include the zone directive to define a shared memory zone that stores the group's configuration and run-time state, which are shared among worker processes.
# in the 'http' block
upstream tomcat {
zone tomcat 64k;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
...
}
NGINX Plus also has a slow-start feature that is a useful auxiliary to health checks. When a failed server recovers, or a new server is added to the upstream group, NGINX Plus slowly ramps up the traffic to it over a defined period of time. This gives the server time to “warm up” without being overwhelmed by more connections than it can handle as it starts up. For more information, see the NGINX Plus Admin Guide.
For example, to set a slow-start period of 30 seconds for your Tomcat application servers, include the slow_start parameter in their server directives:
# in the 'upstream' block
server 10.100.100.11:8080 slow_start=30s;
server 10.100.100.12:8080 slow_start=30s;
For information about customizing health checks, see the NGINX Plus Admin Guide.
NGINX Plus includes a Status module for live activity monitoring that tracks key load and performance metrics in real time. The module includes a built-in dashboard that graphically displays the statistics, along with a RESTful JSON API that makes it easy to feed the data to a custom or third-party monitoring tool. These instructions show how to configure NGINX Plus to enable the Status module and display the dashboard.
For more information about live activity monitoring, see the NGINX Plus Admin Guide.
The quickest way to configure the module and the built-in NGINX Plus dashboard is to download the sample configuration file from the NGINX, Inc. website and modify it as necessary. For more complete instructions, see Live Activity Monitoring of NGINX Plus in 3 Simple Steps.
Download the status.conf file to the NGINX Plus server:
# cd /etc/nginx/conf.d
# curl https://www.nginx.com/resource/conf/status.conf > status.conf
Customize the file for your deployment as specified by comments in the file. In particular, the default settings in the file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods:
IP address-based access control lists (ACLs). In the sample configuration file, uncomment the allow and deny directives, and substitute the address of your administrative network for 10.0.0.0/8. Only users on the specified network can access the dashboard.
allow 10.0.0.0/8;
deny all;
HTTP basic authentication. In the sample configuration file, uncomment the auth_basic and auth_basic_user_file directives and add user entries to the /etc/nginx/users file (for example, by using an htpasswd generator). If you have an Apache installation, another option is to reuse an existing htpasswd file.
auth_basic on;
auth_basic_user_file /etc/nginx/users;
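For example, assuming the Apache htpasswd utility is installed (from the apache2-utils or httpd-tools package, depending on your distribution), the following command creates the users file and adds a user named admin, prompting for a password:
root# htpasswd -c /etc/nginx/users admin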
Client certificates, which are part of a complete configuration of SSL or TLS. For more information, see the NGINX Plus Admin Guide and the documentation for the HTTP SSL module.
Firewall. Configure your firewall to disallow outside access to the port for the dashboard (8080 in the sample configuration file).
In each upstream group that you want to monitor, include the zone directive to define a shared memory zone that stores the group's configuration and run-time state, which are shared among worker processes.
For example, to monitor your Tomcat application servers, add the zone directive to the tomcat upstream group (if you followed the instructions in Configuring Application Health Checks, you already made this change).
# in the 'http' block
upstream tomcat {
zone tomcat 64k;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
...
}
In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add the status_zone directive:
# in the 'server' block for HTTPS traffic
status_zone tomcat;
When you reload the NGINX Plus configuration file, for example by running the nginx -s reload command, the NGINX Plus dashboard is available immediately at http://nginx-server-address:8080.
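The same statistics are available as JSON. Assuming the sample status.conf exposes the API at its conventional /status location, you can fetch them with curl:
$ curl http://nginx-server-address:8080/status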
With NGINX Plus, you can reconfigure load-balanced server groups on-the-fly using the Domain Name System (DNS) or a simple HTTP API. For a detailed discussion, see the NGINX Plus Admin Guide and Dynamic Reconfiguration with NGINX Plus.
To enable on-the-fly reconfiguration of your upstream group of Tomcat app servers using the API:
Include the zone directive in the tomcat upstream group to create a shared memory zone that stores the group's configuration and run-time state, which are shared among worker processes. (If you followed the instructions in Configuring Application Health Checks or Enabling Live Activity Monitoring, you already made this change.)
# in the 'http' block
upstream tomcat {
zone tomcat 64k;
server 10.100.100.11:8080;
server 10.100.100.12:8080;
...
}
In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add a new location block for the on-the-fly reconfiguration API. It contains the upstream_conf directive (upstream_conf is also the conventional name for the location, as used here).
We strongly recommend that you restrict access to the location so that only authorized administrators can access the reconfiguration API. The allow and deny directives in the following example permit access only from the localhost address (127.0.0.1).
# in the 'server' block for HTTPS traffic
location /upstream_conf {
upstream_conf;
allow 127.0.0.1; # permit access from localhost
deny all; # deny access from everywhere else
}
With this configuration in place, you can run curl commands on the NGINX Plus server's command line to add and remove servers in the tomcat upstream group. The following sequence of commands checks the status of the upstream servers, drains a server of its active connections in preparation for taking it down, removes it from the group, and then re-adds it:
$ curl http://localhost:8080/upstream_conf?upstream=tomcat
server 10.100.100.11:8080; # id=0
server 10.100.100.12:8080; # id=1
$ curl http://localhost:8080/upstream_conf?upstream=tomcat\&id=0\&drain=1
server 10.100.100.11:8080; # id=0 draining
$ curl http://localhost:8080/upstream_conf?upstream=tomcat
server 10.100.100.11:8080; # id=0 draining
server 10.100.100.12:8080; # id=1
$ curl http://localhost:8080/upstream_conf?remove=\&upstream=tomcat\&id=0
server 10.100.100.12:8080; # id=1
$ curl http://localhost:8080/upstream_conf?add=\&upstream=tomcat\&server=10.100.100.11:8080\&max_fails=1
server 10.100.100.11:8080; # id=3
$ curl http://localhost:8080/upstream_conf?upstream=tomcat
server 10.100.100.12:8080; # id=1
server 10.100.100.11:8080; # id=3
The full configuration for enhanced load balancing appears here for your convenience. It goes in the http context. The complete file is available for download from the NGINX, Inc. website.
We recommend that you do not copy text directly from this document, but instead use the method described in Creating and Modifying Configuration Files to include these directives in your configuration: add an include directive to the http context of the main nginx.conf file to read in the contents of /etc/nginx/conf.d/tomcat-enhanced.conf.
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
# WebSocket configuration
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# Extract the data after the final period (.) in the
# JSESSIONID cookie and store it in the $route_cookie variable.
map $cookie_jsessionid $route_cookie {
~.+\.(?P<route>\w+)$ $route;
}
# Search the URL for a trailing jsessionid parameter, extract the
# data after the final period (.), and store it in
# the $route_uri variable.
map $request_uri $route_uri {
~jsessionid=.+\.(?P<route>\w+)$ $route;
}
# Application health checks
match tomcat_check {
status 200;
header Content-Type = text/html;
body ~ "Apache Tomcat/8";
}
upstream tomcat {
# Shared memory zone for application health checks, live activity
# monitoring, and on-the-fly reconfiguration
zone tomcat 64k;
# List of Tomcat application servers
server 10.100.100.11:8080 slow_start=30s;
server 10.100.100.12:8080 slow_start=30s;
# Session persistence based on the jvmRoute value in
# the JSESSIONID cookie
sticky route $route_cookie $route_uri;
# Uncomment the following directive (and comment the preceding
# 'sticky route' and JSESSIONID 'map' directives) for session
# persistence based on the JSESSIONID
#sticky learn create=$upstream_cookie_JSESSIONID
# lookup=$cookie_JSESSIONID
# zone=client_sessions:1m;
}
server {
listen 80;
server_name example.com;
# Redirect all HTTP requests to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
server {
listen 443 ssl http2;
server_name example.com;
# Required for live activity monitoring of HTTPS traffic
status_zone tomcat;
ssl_certificate /etc/nginx/ssl/certificate-name;
ssl_certificate_key /etc/nginx/ssl/private-key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
# Load balance requests to /tomcat-app/ among Tomcat application servers
location /tomcat-app/ {
proxy_pass http://tomcat;
proxy_cache backcache;
# Active health checks
health_check interval=2s fails=1 passes=5 uri=/
match=tomcat_check;
}
# Return a 302 redirect to the /tomcat-app/ directory when user requests '/'
location = / {
return 302 /tomcat-app/;
}
# WebSocket configuration
location /wstunnel/ {
proxy_pass http://tomcat;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Secured access to the on-the-fly reconfiguration API
location /upstream_conf {
upstream_conf;
allow 127.0.0.1; # permit access from localhost
deny all; # deny access from everywhere else
}
}
NGINX and NGINX Plus can both be used to load balance Tomcat application servers effectively, and NGINX Plus provides enhanced features to help you better manage and monitor your Tomcat environment. For further information about NGINX and NGINX Plus, see the NGINX Plus Admin Guide and the other resources referenced throughout this guide.