This deployment guide explains how to use NGINX and NGINX Plus to load balance HTTP and HTTPS traffic across a pool of JBoss application servers. It provides complete instructions for configuring NGINX or NGINX Plus as required.
NGINX is an open source web server and reverse proxy that has grown in popularity in recent years because of its scalability. NGINX was originally created to solve the C10K problem (serving 10,000 simultaneous connections on a single web server). Its features and performance have made it a staple of high-performance sites – now powering 1 in 3 of the world’s million busiest web properties.
NGINX Plus is the commercially supported version of the open source NGINX software. NGINX Plus is a complete application delivery platform, extending the power of NGINX with a host of enterprise-ready capabilities that enhance a JBoss application server deployment and are instrumental to building web applications at scale:
The JBoss application server (also called simply JBoss) is a Java EE platform for developing and deploying enterprise Java applications, portals, and web applications and services. Java EE (which stands for Java Platform, Enterprise Edition, and was formerly known as J2EE) allows the use of standardized modular components and enables the Java platform to handle many aspects of programming automatically.
The commercially supported version of the software is Red Hat® JBoss®. For this guide, we use WildFly 9, the open source version of JBoss. WildFly is a flexible, lightweight, managed application runtime that helps you build applications. As free and open source software, WildFly is distributed under the GNU Lesser General Public License (LGPL), version 2.1. This guide also applies to commercial JBoss application servers.
The instructions assume you have basic Linux system administration skills, including the following. Full instructions are not provided for these tasks.
example.com is used as a sample organization name (in key names and configuration blocks). Replace it with your organization’s name.

If you plan to enable SSL/TLS encryption of traffic between NGINX or NGINX Plus and clients of your JBoss application, you need to configure a server certificate for NGINX or NGINX Plus.
If you are using open source NGINX compiled from source, include the --with-http_ssl_module configure parameter to enable SSL/TLS support for HTTP traffic (the corresponding parameter for TCP is --with-stream_ssl_module, and for email is --with-mail_ssl_module, but this guide does not cover either of those protocol types).

There are several ways to obtain a server certificate, including the following. For your convenience, step‑by‑step instructions are provided for the second and third options.
For more details on SSL/TLS termination, see the NGINX Plus Admin Guide.
Generate a public‑private key pair and a self‑signed server certificate in PEM format that is based on them.
Log in as the root user on a machine that has the openssl software installed.

Generate the key pair in PEM format (the default). To encrypt the private key, include the -des3 parameter. (Other encryption algorithms are available, listed on the man page for the genrsa command.) You are prompted for the passphrase used as the basis for encryption.
root# openssl genrsa -des3 -out ~/private-key.pem 2048
Generating RSA private key …
Enter pass phrase for private-key.pem:
Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.
root# cp ~/private-key.pem secure-dir/private-key.pem.backup
Generate the certificate. Include the -new and -x509 parameters to make a new self‑signed certificate. Optionally include the -days parameter to change the certificate’s validity lifetime from the default of 30 days (10950 days is about 30 years). Respond to the prompts with values appropriate for your testing deployment.
root# openssl req -new -x509 -key ~/private-key.pem -out ~/self-cert.pem \
-days 10950
Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX or NGINX Plus server.
Log in as the root user on a machine that has the openssl software installed.
Create a private key to be packaged in the certificate.
root# openssl genrsa -out ~/example.com.key 2048
Create a backup of the key file in a secure location. If you lose the key, the certificate becomes unusable.
root# cp ~/example.com.key secure-dir/example.com.key.backup
Create a Certificate Signing Request (CSR) file.
root# openssl req -new -sha256 -key ~/example.com.key -out ~/example.com.csr
Request a certificate from a CA or your internal security group, providing the CSR file (example.com.csr). As a reminder, never share private keys (.key files) directly with third parties.
The certificate needs to be in PEM format rather than the Windows‑compatible PFX format. If you request the certificate from a CA website yourself, choose NGINX or Apache (if available) when asked to select the server platform for which to generate the certificate.
Copy or move the certificate file and associated key files to the /etc/nginx/ssl directory on the NGINX or NGINX Plus server.
To reduce errors, this guide has you copy directives from files provided by NGINX, Inc. into your configuration files, instead of using a text editor to type in the directives yourself. Then you go through the sections in this guide (starting with Configuring Virtual Servers for HTTP and HTTPS Traffic) to learn how to modify the directives as required for your deployment.
As provided, there is one file for basic load balancing (with NGINX or NGINX Plus) and one file for enhanced load balancing (with NGINX Plus). If you are installing and configuring NGINX or NGINX Plus on a fresh Linux system and using it only to load balance JBoss traffic, you can use the provided file as your main configuration file, which by convention is called /etc/nginx/nginx.conf.
We recommend, however, that instead of a single configuration file you use the scheme that is set up automatically when you install an NGINX Plus package, especially if you already have an existing NGINX or NGINX Plus deployment or plan to expand your use of NGINX or NGINX Plus to other purposes in future. In the conventional scheme, the main configuration file is still called /etc/nginx/nginx.conf, but instead of including all directives in it, you create separate configuration files for different functions and store the files in the /etc/nginx/conf.d directory. You then use the include directive in the appropriate contexts of the main file to read in the contents of the function‑specific files.
To download the complete configuration file for basic load balancing:
root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/jboss-basic.conf > jboss-basic.conf
To download the complete configuration file for enhanced load balancing:
root# cd /etc/nginx/conf.d
root# curl https://www.nginx.com/resource/conf/jboss-enhanced.conf > \
jboss-enhanced.conf
(You can also access the URL in a browser and download the file that way.)
To set up the conventional configuration scheme, add an http configuration block in the main nginx.conf file, if it does not already exist. (The standard placement is below any global directives.) Add this include directive with the appropriate filename:
http {
include conf.d/jboss-(basic|enhanced).conf;
}
You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files function‑http.conf, this is an appropriate include directive:
http {
include conf.d/*-http.conf;
}
For reference purposes, the full configuration files are also provided in this document:
We recommend, however, that you do not copy text directly from this document. It does not necessarily use the same mechanisms for positioning text (such as line breaks and white space) as text editors do. In text copied into an editor, lines might run together and indenting of child statements in configuration blocks might be missing or inconsistent. The absence of formatting does not present a problem for NGINX or NGINX Plus, because (like many compilers) they ignore white space during parsing, relying solely on semicolons and curly braces as delimiters. The absence of white space does, however, make it more difficult for humans to interpret the configuration and modify it without making mistakes.
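As an illustration (reusing the HTTP-to-HTTPS redirect server block that appears later in this guide), the two forms below are parsed identically by NGINX, because only semicolons and curly braces delimit statements; only the formatted version is comfortable for humans to read and maintain:

```
# Collapsed onto one line: valid, but hard to maintain
server{listen 80;server_name example.com;return 301 https://$server_name$request_uri;}

# The same block with conventional formatting
server {
listen 80;
server_name example.com;
return 301 https://$server_name$request_uri;
}
```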
We recommend that each time you complete a set of updates to the configuration, you run the nginx -t command to test the configuration file for syntactic validity.
root# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
To tell NGINX or NGINX Plus to start using the new configuration, run one of the following commands:
root# nginx -s reload
root# service nginx reload
This section explains how to set up NGINX or NGINX Plus as a load balancer in front of two JBoss servers. The instructions in the first two sections are mandatory:
The instructions in the remaining sections are optional, depending on the requirements of your application:
The complete configuration file appears in Full Configuration for Basic Load Balancing.
If you are using NGINX Plus, you can configure additional enhanced features after you complete the configuration of basic load balancing. See Configuring Enhanced Load Balancing with NGINX Plus.
These directives define virtual servers for HTTP and HTTPS traffic in separate server blocks in the top‑level http configuration block. All HTTP requests are redirected to the HTTPS server.
Configure a server block that listens for requests for https://example.com received on port 443.
The ssl_certificate and ssl_certificate_key directives are required; substitute the names of the certificate and private key you chose in Configuring an SSL/TLS Certificate for Client Traffic.
The other directives are optional but recommended.
# in the 'http' block
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/nginx/ssl/certificate-name;
ssl_certificate_key /etc/nginx/ssl/private-key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
}
Configure a server block that permanently redirects requests for http://example.com received on port 80 to the HTTPS server defined in the previous step.

If you’re not using SSL/TLS for client connections, omit the return directive. When instructed in the remainder of this guide to add directives to the server block for HTTPS traffic, add them to this block instead.
# in the 'http' block
server {
listen 80;
server_name example.com;
# Redirect all HTTP requests to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
For more information on configuring SSL/TLS, see the NGINX Plus Admin Guide and the reference documentation for the HTTP SSL/TLS module.
To configure load balancing, you first create a named “upstream group,” which lists the backend servers. You then set up NGINX or NGINX Plus as a reverse proxy and load balancer by referring to the upstream group in one or more proxy_pass directives.
Configure an upstream group called jboss with two JBoss application servers listening on port 8080, one on IP address 192.168.33.11 and the other on 192.168.33.12.
# in the 'http' block
upstream jboss {
server 192.168.33.11:8080;
server 192.168.33.12:8080;
}
In the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, include these two location blocks:
The first one matches HTTPS requests in which the path starts with /webapp/, and proxies them to the jboss upstream group we created in the previous step.
The second one funnels all traffic to the first location block, by doing a temporary redirect of all requests for http://example.com/.
# in the 'server' block for HTTPS traffic
location /webapp/ {
proxy_pass http://jboss;
}
location = / {
return 302 /webapp/;
}
Note that these blocks handle only standard HTTPS traffic. If you want to load balance WebSocket traffic, you need to add another location block as described in Configuring Proxy of WebSocket Traffic.
By default, NGINX and NGINX Plus use the Round Robin algorithm for load balancing among servers. The load balancer runs through the list of servers in the upstream group in order, forwarding each new request to the next server. In our example, the first request goes to 192.168.33.11, the second to 192.168.33.12, the third to 192.168.33.11, and so on. For information about the other available load‑balancing algorithms, see Application Load Balancing with NGINX Plus.
In NGINX Plus, you can also set up dynamic reconfiguration of an upstream group when the set of backend servers changes, using DNS or an API; see Enabling On‑the‑Fly Reconfiguration of Upstream Groups.
For more information on proxying and load balancing, see Reverse Proxy and Load Balancing in the NGINX Plus Admin Guide, and the documentation for the Proxy and Upstream modules.
If your application requires basic session persistence (also known as sticky sessions), you can implement it in NGINX by using the IP Hash load‑balancing algorithm. (NGINX Plus offers a more sophisticated form of session persistence, as described in Configuring Advanced Session Persistence.)
With the IP Hash algorithm, for each request NGINX calculates a hash based on the client’s IP address, and associates the hash with one of the upstream servers. It sends all requests with that hash to that server, thus establishing session persistence.
If the client has an IPv6 address, the hash is based on the entire address. If it has an IPv4 address, the hash is based on just the first three octets of the address. This is designed to optimize for ISP clients that are assigned IP addresses dynamically from a subnetwork (/24) range. However, it is not effective in these cases:
The majority of the traffic to your site is coming from one forward proxy or from clients on the same /24 network, because in that case IP Hash maps all clients to the same server.
A client’s IP address can change during the session, for example when a mobile client switches from a Wi‑Fi network to a cellular one.
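To make the IPv4 behavior concrete, here is a short Python sketch. It is an illustration only: it models the documented /24 behavior using an arbitrary hash function and a hypothetical server list, and is not NGINX’s actual implementation.

```python
# Illustration of IP Hash semantics, NOT NGINX's actual algorithm:
# for IPv4 clients, only the first three octets (the /24 network) feed
# the hash, so every client in the same /24 reaches the same server.
import hashlib

SERVERS = ["192.168.33.11:8080", "192.168.33.12:8080"]

def pick_server(client_ip: str) -> str:
    key = ".".join(client_ip.split(".")[:3])  # drop the last octet
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

# Clients in the same /24 subnetwork always map to the same server
assert pick_server("203.0.113.10") == pick_server("203.0.113.250")
```

This is why a single /24 network full of clients defeats the algorithm: every client produces the same key, so every request lands on one server.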
To configure session persistence in NGINX, add the ip_hash directive to the upstream block created in Configuring Basic Load Balancing:
# in the 'http' block
upstream jboss {
ip_hash;
server 192.168.33.11:8080;
server 192.168.33.12:8080;
}
You can also use the Hash load balancing method for session persistence, with the hash based on any combination of text and NGINX variables you specify. For example, you can hash on full (four‑octet) client IP addresses with the following configuration.
# in the 'http' block
upstream jboss {
hash $remote_addr;
server 192.168.33.11:8080;
server 192.168.33.12:8080;
}
The WebSocket protocol works out of the box on JBoss app servers, so no additional JBoss configuration is required. If you want NGINX or NGINX Plus to proxy WebSocket traffic, add the directives discussed in this section.
NGINX and NGINX Plus by default use HTTP/1.0 for upstream connections. To be proxied correctly, WebSocket connections require HTTP/1.1 along with some other configuration directives that set HTTP headers:
# in the 'http' block
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# in the 'server' block for HTTPS traffic
location /wstunnel/ {
proxy_pass http://jboss;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
The first proxy_set_header directive is needed because the Upgrade request header is hop‑by‑hop; that is, the HTTP specification explicitly forbids proxies from forwarding it. This directive overrides the prohibition.
The second proxy_set_header directive sets the Connection header to a value that depends on the test in the map block: if the request has an Upgrade header, the Connection header is set to upgrade; otherwise, it is set to close.
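The logic of the map block can be summarized in a few lines of Python (an illustration of the mapping semantics only, not NGINX internals; note that when the Upgrade header is absent, the $http_upgrade variable is the empty string):

```python
# Mirrors the 'map $http_upgrade $connection_upgrade' block: an empty
# (or absent) Upgrade request header maps to 'close'; any other value
# falls through to the default, 'upgrade'.
def connection_upgrade(http_upgrade: str) -> str:
    return "close" if http_upgrade == "" else "upgrade"

assert connection_upgrade("websocket") == "upgrade"  # WebSocket handshake
assert connection_upgrade("") == "close"             # ordinary HTTP request
```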
For more information about proxying WebSocket traffic, see WebSocket proxying and NGINX as a WebSocket Proxy.
Caching responses from your JBoss app servers can both improve response time to clients and reduce load on the servers, because eligible responses are served immediately from the cache instead of being generated again on the server.
One choice for caching is Infinispan, an open source, distributed cache and key‑value NoSQL data store developed by Red Hat. Java application servers (including JBoss and WildFly) can embed it as a library or use it as a service, and non‑Java applications can use it as a remote service over TCP/IP.
Another alternative is to cache server responses on the NGINX host by creating this configuration:
Include the proxy_cache_path directive to create the local disk directory /tmp/NGINX_cache/ for use as a cache. The keys_zone parameter allocates 10 megabytes (MB) of shared memory for a zone called backcache, which is used to store cache keys and metadata such as usage timers. A 1‑MB zone can store data for about 8,000 keys.
# in the 'http' block
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
In the location block that matches HTTPS requests in which the path starts with /webapp/, include the proxy_cache directive to reference the cache created in the previous step.
# in the 'server' block for HTTPS traffic
location /webapp/ {
proxy_pass http://jboss;
proxy_cache backcache;
}
For more complete information on caching, refer to the documentation for the Proxy module and the NGINX Plus Admin Guide.
HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and later.
If using NGINX, note that in NGINX 1.9.5 and later the SPDY module is completely removed from the NGINX codebase and replaced with the HTTP/2 module. After upgrading to version 1.9.5, you can no longer configure NGINX to use SPDY. If you want to keep using SPDY, use the latest binary available from the NGINX 1.8.x branch.
In NGINX Plus R8 and later, the nginx‑plus and nginx‑plus‑extras packages support HTTP/2 by default (and SPDY is no longer supported). If using NGINX Plus R7, you must install the nginx‑plus‑http2 package instead of the nginx‑plus or nginx‑plus‑extras package.
To enable HTTP/2 support, add the http2 parameter to the listen directive in the server block for HTTPS traffic that we created in Configuring Virtual Servers for HTTP and HTTPS Traffic, so that it looks like this:
# in the 'server' block for HTTPS traffic
listen 443 ssl http2;
To verify that HTTP/2 translation is working, you can use the “HTTP/2 and SPDY indicator” plug‑in available for Google Chrome and Firefox.
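You can also check from the command line with curl, assuming your build of curl includes HTTP/2 support (the write‑out variable shown requires curl 7.50 or later) and that example.com resolves to your NGINX or NGINX Plus server:

```
# Prints the HTTP version the server negotiated; "2" means HTTP/2 is active
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/webapp/
```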
The full configuration for basic load balancing appears here for your convenience. It goes in the http context. The complete file is available for download from the NGINX, Inc. website.

We recommend that you do not copy text directly from this document, but instead use the method described in Creating and Modifying Configuration Files to include these directives in your configuration – add an include directive to the http context of the main nginx.conf file to read in the contents of /etc/nginx/conf.d/jboss-basic.conf.
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream jboss {
# Use IP Hash for session persistence
ip_hash;
# List of JBoss application servers
server 192.168.33.11:8080;
server 192.168.33.12:8080;
}
server {
listen 80;
server_name example.com;
# Redirect all HTTP requests to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /etc/nginx/ssl/certificate-name;
ssl_certificate_key /etc/nginx/ssl/private-key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
# Load balance requests for /webapp/ across JBoss application servers
location /webapp/ {
proxy_pass http://jboss;
proxy_cache backcache;
}
# Return a temporary redirect to the /webapp/ directory when user requests '/'
location = / {
return 302 /webapp/;
}
# WebSocket configuration
location /wstunnel/ {
proxy_pass http://jboss;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
This section explains how to configure enhanced load balancing with some of the extended features in NGINX Plus.
Note: Before setting up the enhanced features described in this section, you must complete the instructions for basic load balancing in Configuring Virtual Servers for HTTP and HTTPS Traffic and Configuring Basic Load Balancing. Except as noted, all optional basic features (described in the other subsections of Configuring Basic Load Balancing in NGINX and NGINX Plus) can be combined with the enhanced features described here.
The features described in the following sections are all optional.
The complete configuration file appears in Full Configuration for Enhanced Load Balancing.
NGINX Plus has more sophisticated session persistence methods available than open source NGINX, implemented in three variants of the sticky directive. In the following example, we add the sticky learn directive to the upstream group we created in Configuring Basic Load Balancing.
Remove or comment out the ip_hash directive, leaving only the server directives:
# in the 'http' block
upstream jboss {
#ip_hash;
server 192.168.33.11:8080;
server 192.168.33.12:8080;
}
Configure session persistence that uses the JSESSIONID cookie created by your JBoss application as the session identifier:
# in the 'http' block
upstream jboss {
server 192.168.33.11:8080;
server 192.168.33.12:8080;
sticky learn create=$upstream_cookie_JSESSIONID lookup=$cookie_JSESSIONID
zone=client_sessions:1m;
}
The create and lookup parameters specify how new sessions are created and existing sessions are searched for, respectively. For new sessions, NGINX Plus sets the session identifier to the value of the $upstream_cookie_JSESSIONID variable, which captures the JSESSIONID cookie sent by the JBoss application server. When checking for existing sessions, it uses the JSESSIONID cookie sent by the client (the $cookie_JSESSIONID variable) as the session identifier.
Both parameters can be specified more than once (each time with a different variable), in which case NGINX Plus uses the first non‑empty variable for each one.
The zone argument creates a shared memory zone for storing information about sessions. The amount of memory allocated – here, 1 MB – determines how many sessions can be stored at a time (the number varies by platform). The name assigned to the zone – here, client_sessions – must be unique for each sticky directive.

For more information on session persistence, see the NGINX Plus Admin Guide.
Health checks are out‑of‑band HTTP requests sent to a server at fixed intervals. They are used to determine whether a server is responsive and functioning correctly, without requiring an actual request from a client.
Because the health_check directive is placed in the location block, we can enable different health checks for each application.
In the location block that matches HTTPS requests in which the path starts with /webapp/ (created in Configuring Basic Load Balancing), add the health_check directive.
Here we configure NGINX Plus to send an out‑of‑band request for the top‑level URI / (slash) to each of the servers in the jboss upstream group every 5 seconds (the URI and frequency are the defaults). If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes a subsequent health check. We include the match parameter to the health_check directive to define a nondefault set of health‑check tests.
# in the 'server' block for HTTPS traffic
location /webapp/ {
proxy_pass http://jboss;
proxy_cache backcache;
health_check match=jboss_check;
}
In the http context, include a match directive to define the tests that a server must pass to be considered functional. In this example, the server must return status code 200, the Content‑Type response header must be text/html, and the response body must match the indicated regular expression.
# in the 'http' block
match jboss_check {
status 200;
header Content-Type = text/html;
body ~ "Your WildFly 9 is running";
}
In the jboss upstream group, include the zone directive to define a shared memory zone that stores the group’s configuration and run‑time state, which are shared among worker processes.
# in the 'http' block
upstream jboss {
zone jboss 64k;
server 192.168.33.11:8080;
server 192.168.33.12:8080;
...
}
NGINX Plus also has a slow‑start feature that is a useful auxiliary to health checks. When a failed server recovers, or a new server is added to the upstream group, NGINX Plus slowly ramps up the traffic to it over a defined period of time. This gives the server time to “warm up” without being overwhelmed by more connections than it can handle as it starts up. For more information, see the NGINX Plus Admin Guide.
For example, to set a slow‑start period of 30 seconds for your JBoss application servers, add the slow_start parameter to their server directives:
# in the 'upstream' block
server 192.168.33.11:8080 slow_start=30s;
server 192.168.33.12:8080 slow_start=30s;
For information about customizing health checks, see the NGINX Plus Admin Guide.
NGINX Plus includes a Status module for live activity monitoring that tracks key load and performance metrics in real time. The module includes a built‑in dashboard that graphically displays the statistics, along with a RESTful JSON API that makes it very easy to feed the data to a custom or third‑party monitoring tool. These instructions show how to configure NGINX to enable the Status module and display the dashboard.
For more information about live activity monitoring, see the NGINX Plus Admin Guide.
The quickest way to configure the module and the built‑in NGINX Plus dashboard is to download the sample configuration file from the NGINX, Inc. website and modify it as necessary. For more complete instructions, see Live Activity Monitoring of NGINX Plus in 3 Simple Steps.
Download the status.conf file to the NGINX Plus server:
# cd /etc/nginx/conf.d
# curl https://www.nginx.com/resource/conf/status.conf > status.conf
Customize the file for your deployment as specified by comments in the file. In particular, the default settings in the file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods:
IP address‑based access control lists (ACLs). In the sample configuration file, uncomment the allow and deny directives, and substitute the address of your administrative network for 10.0.0.0/8. Only users on the specified network can access the status page.
allow 10.0.0.0/8;
deny all;
HTTP basic authentication. In the sample configuration file, uncomment the auth_basic and auth_basic_user_file directives and add user entries to the /etc/nginx/users file (for example, by using an htpasswd generator). If you have an Apache installation, another option is to reuse an existing htpasswd file.
auth_basic on;
auth_basic_user_file /etc/nginx/users;
Client certificates, which are part of a complete configuration of SSL/TLS. For more information, see the NGINX Plus Admin Guide and the documentation for the HTTP SSL/TLS module.
Firewall. Configure your firewall to disallow outside access to the port for the dashboard (8080 in the sample configuration file).
In each upstream group that you want to monitor, include the zone directive to define a shared memory zone that stores the group’s configuration and run‑time state, which are shared among worker processes.
For example, to monitor your JBoss application servers, add the zone directive to the jboss upstream group (if you followed the instructions in Configuring Application Health Checks, you already made this change).
# in the 'http' block
upstream jboss {
zone jboss 64k;
server 192.168.33.11:8080;
server 192.168.33.12:8080;
...
}
In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add the status_zone directive:
# in the 'server' block for HTTPS traffic
status_zone jboss;
When you reload the NGINX Plus configuration file, for example by running the nginx -s reload command, the NGINX Plus dashboard is available immediately at http://nginx-server-address:8080.
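You can also query the statistics directly as JSON, assuming the status API is exposed at the /status location on port 8080 as in the sample status.conf file (adjust the location and port if you customized them):

```
# Retrieve the full set of live activity statistics in JSON format
curl http://nginx-server-address:8080/status
```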
With NGINX Plus, you can reconfigure load‑balanced server groups on‑the‑fly using the Domain Name System (DNS) or a simple HTTP API. For a detailed discussion, see the NGINX Plus Admin Guide and Dynamic Reconfiguration with NGINX Plus.
To enable on‑the‑fly reconfiguration of your upstream group of JBoss app servers using the API:
Include the zone directive in the jboss upstream group to create a shared memory zone that stores the group’s configuration and run‑time state, which are shared among worker processes. (If you followed the instructions in Configuring Application Health Checks or Enabling Live Activity Monitoring, you already made this change.)
# in the 'http' block
upstream jboss {
zone jboss 64k;
server 192.168.33.11:8080;
server 192.168.33.12:8080;
...
}
In the server block for HTTPS traffic (created in Configuring Virtual Servers for HTTP and HTTPS Traffic), add a new location block for the on‑the‑fly reconfiguration API. It contains the upstream_conf directive (upstream_conf is also the conventional name for the location, as used here).
We strongly recommend that you restrict access to the location so that only authorized administrators can access the reconfiguration API. The allow and deny directives in the following example permit access only from the localhost address (127.0.0.1).
# in the 'server' block for HTTPS traffic
location /upstream_conf {
upstream_conf;
allow 127.0.0.1; # permit access from localhost
deny all; # deny access from everywhere else
}
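With the location in place, you can inspect and modify the upstream group with simple HTTP requests. The examples below use parameters documented for the upstream_conf module; because in this guide the location sits in the server block for HTTPS traffic, they connect over HTTPS to the localhost address (include the -k option if your certificate is self‑signed):

```
# List the servers in the 'jboss' upstream group, with their internal IDs
curl -k 'https://127.0.0.1/upstream_conf?upstream=jboss'

# Add another JBoss application server without reloading NGINX Plus
curl -k 'https://127.0.0.1/upstream_conf?add=&upstream=jboss&server=192.168.33.13:8080'
```

Changes made this way apply to the running instance; see the NGINX Plus Admin Guide for how to make them persist across restarts.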
The full configuration for enhanced load balancing appears here for your convenience. It goes in the http context. The complete file is available for download from the NGINX, Inc. website.

We recommend that you do not copy text directly from this document, but instead use the method described in Creating and Modifying Configuration Files to include these directives in your configuration – add an include directive to the http context of the main nginx.conf file to read in the contents of /etc/nginx/conf.d/jboss-enhanced.conf.
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;
# WebSocket configuration
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# Application health checks
match jboss_check {
status 200;
header Content-Type = text/html;
body ~ "Your WildFly 9 is running";
}
upstream jboss {
# Shared memory zone for application health checks, live activity monitoring,
# and on-the-fly reconfiguration
zone jboss 64k;
# List of JBoss application servers
server 192.168.33.11:8080 slow_start=30s;
server 192.168.33.12:8080 slow_start=30s;
# Session persistence based on JSESSIONID
sticky learn create=$upstream_cookie_JSESSIONID
lookup=$cookie_JSESSIONID
zone=client_sessions:1m;
}
server {
listen 80;
server_name example.com;
# Redirect all HTTP requests to HTTPS
location / {
return 301 https://$server_name$request_uri;
}
}
server {
listen 443 ssl http2;
server_name example.com;
# Required for live activity monitoring of HTTPS traffic
status_zone jboss;
ssl_certificate /etc/nginx/ssl/certificate-name;
ssl_certificate_key /etc/nginx/ssl/private-key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
# Load balance requests to /webapp/ among JBoss application servers
location /webapp/ {
proxy_pass http://jboss;
proxy_cache backcache;
# Active health checks
health_check match=jboss_check;
}
# Return a 302 redirect to the /webapp/ directory when user requests '/'
location = / {
return 302 /webapp/;
}
# WebSocket configuration
location /wstunnel/ {
proxy_pass http://jboss;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Secured access to the on-the-fly reconfiguration API
location /upstream_conf {
upstream_conf;
allow 127.0.0.1; # permit access from localhost
deny all; # deny access from everywhere else
}
}
NGINX and NGINX Plus can both be used to effectively load balance JBoss application servers, and NGINX Plus provides enhanced features to help you better manage and monitor your JBoss environment. For further information about NGINX and NGINX Plus, please see the following: