How to Configure Nginx: A Step by Step Guide


Setting up Nginx is key to making sure your web server runs smoothly and securely. Whether you’re managing multiple websites or fine-tuning your server for faster speeds, knowing how to configure Nginx can make a big difference.

Data shows that Nginx powers about a third of all web servers, specifically 33.8% of the market according to October 2024 usage data from W3Techs. It also handles the traffic of 46.9% of the top 1,000 sites on the internet, as indicated in Nginx’s official blog.

Nginx’s popularity isn’t just a trend; it’s a testament to how well it performs under pressure in a variety of server setups. With this in mind, now we will walk you through how to configure Nginx and optimize it for your server. By the end, you’ll know how to install Nginx, adjust firewall settings, configure server blocks, and more.

 

Setup

Nginx configuration is key to server performance and request handling. Here’s a guide to installing Nginx on an Ubuntu Server and adjusting firewall settings for the necessary traffic. We also recommend checking out the official docs for more in-depth information about this web server.

Hosting, a major hosting provider, indicates that Nginx “stands out for its versatility and advanced capabilities.”

It’s clear that Nginx is a great choice to power our websites, so let’s see how to install it.

Installing Nginx on Ubuntu Server

Nginx is a powerful open-source web server and is often used as a reverse proxy or HTTP cache. To install Nginx on an Ubuntu Server, update your package list with this command:

sudo apt update

Then install Nginx with:

sudo apt install nginx

You can check that Nginx was installed correctly using the “nginx -V” command, just like in the image below:

[Image: nginx-version]

Now start the service:

sudo systemctl start nginx

and visit your server’s public IP address in a web browser to check that Nginx is working.

The Nginx configuration files live in the /etc/nginx directory, and nginx.conf is the main file. It contains the directives that control the server. After making changes to the configuration, reload Nginx with sudo systemctl reload nginx. Getting the configuration right from the start is key to handling web traffic reliably.

Firewall Settings

Adjusting the firewall settings to allow HTTP (port 80) and HTTPS (port 443) traffic is required for your Nginx server. Add rules for these ports to allow incoming traffic from any IP address so your web server can serve pages and communicate securely.

These firewall settings are important for your server to be accessible and secure.
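
If your Ubuntu server uses UFW, the distribution’s default firewall front end, a minimal sketch of opening these ports looks like the commands below. The ‘Nginx Full’ application profile (covering ports 80 and 443) is registered automatically when Nginx is installed from the Ubuntu repositories; allow OpenSSH first so you don’t lock yourself out:

sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo ufw status

If you prefer explicit rules, sudo ufw allow 80/tcp and sudo ufw allow 443/tcp achieve the same result.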

Nginx Configuration Files

Nginx configuration files are the foundation of your server. The default configuration file at /etc/nginx/nginx.conf contains the directives for the core server and its modules. Understanding these files is key to server management.

We’ll look at the default configuration file and the ‘sites-available’ and ‘sites-enabled’ directories which are used to manage multiple sites.

Default Configuration File

Located at /etc/nginx/nginx.conf, the default configuration file is the heart of your Nginx setup. It manages server-wide settings like user permissions and logging options, with simple directives ending in a semicolon and container directives grouping related settings within curly braces.

Understanding and configuring this file is the first step to a solid Nginx setup.
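
As a rough sketch of that syntax, here is a trimmed-down fragment of the kind of directives you will typically find in nginx.conf (the exact contents vary by distribution and version):

user www-data;
worker_processes auto;     # a simple directive ends with a semicolon

events {
     worker_connections 768;     # related settings live inside container directives
}

http {
     include /etc/nginx/mime.types;
     access_log /var/log/nginx/access.log;
     include /etc/nginx/sites-enabled/*;
}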

Sites-Available and Sites-Enabled Directories

The ‘sites-available’ directory contains the configuration files for the potential sites and ‘sites-enabled’ contains the symlinks to the active configurations. This allows administrators to manage multiple sites on a single server by enabling or disabling site configurations without removing the files.

Using these directories in the file system is key to managing complex server environments.
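
For example, assuming a configuration file named example.com already exists in sites-available, enabling the site, testing the configuration, and reloading would look roughly like this; removing the symlink disables the site again without deleting its configuration:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
sudo rm /etc/nginx/sites-enabled/example.com     # disable the site again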

Basic Nginx Configuration

Configuring the Nginx web server involves telling Nginx how to handle URLs and process HTTP requests for resources. Virtual servers for HTTP traffic are defined with server blocks, and location directives inside them control how specific URIs are handled. Let’s learn to configure server blocks and use location directives to manage your server.

[Image: Nginx Configuration Checklist]

Server Blocks

Server blocks in Nginx allow different configurations for different domains on the same server, like different root directories, log files, and proxy settings. The correct configuration of server blocks is key to managing multiple domains and having everything work properly.

Example: Basic Server Block Configuration

server {
listen 80;
server_name example.com www.example.com;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;

location / {
     try_files $uri $uri/ =404;
}

error_page 404 /404.html;
location = /404.html {
     internal;
}

access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;
}
  • listen 80; defines that this block will handle traffic on port 80 (HTTP).
  • server_name example.com www.example.com; sets the domain names this server block is responsible for.
  • root /var/www/example.com/html; defines the root directory where files are served from.
  • location / is used to match requests for the root URL.
  • error_page 404 /404.html; serves a custom error page for 404 errors.

 

Location Directives

Location blocks in Nginx control how requests for specific URIs are processed, matching by prefix or, with the ~ and ~* modifiers, by Perl Compatible Regular Expressions (PCRE). The = modifier defines an exact match and stops the search as soon as it is found, which speeds up processing.

Location directives let you return specific status codes or redirects with the return directive, and give you fine-grained control over request handling.

Example: Location Directives for Static Content and Redirects

server {

listen 80;
server_name example.com;
root /var/www/example.com/html;

# Match requests for static files
location /images/ {
     root /data;
}

# Exact match for URL
location = /about {
     return 301 http://newsite.com/about;
}

# Match all requests starting with /blog
location /blog/ {
     proxy_pass http://127.0.0.1:8080;
}

# Custom 403 and 404 error pages
error_page 403 /403.html;
location = /403.html {
     internal;
}

error_page 404 /404.html;
location = /404.html {
     internal;
}
}
  • location /images/ serves static files from the /data directory; because root appends the request URI, a request for /images/logo.png is served from /data/images/logo.png.
  • location = /about does an exact match and redirects /about to another site.
  • location /blog/ proxies all requests that start with /blog to a backend server.
  • Custom error pages are configured for 403 and 404 status codes using error_page.

Advanced Nginx Configuration Techniques

Advanced configurations can make Nginx much faster by distributing the traffic and managing the workload. Look into load balancing and caching static content, two techniques that improve server scalability and responsiveness.

Load Balancing

Load balancing distributes client requests across multiple backend servers, improving performance and reliability. Nginx supports algorithms like round-robin, least connections, and IP hash. Use the upstream module to define the server groups for the load-balancing configuration you wish to use.

Balancing requests across multiple servers improves both performance and reliability. Choosing the right load-balancing algorithm ensures optimal resource usage and response times, which is especially valuable for high-traffic sites that require a robust and scalable infrastructure.

Example: Basic Load Balancing Configuration

http {

upstream backend_servers {
     server backend1.example.com;
     server backend2.example.com;
     server backend3.example.com;
}

server {
     listen 80;
     server_name example.com;
     location / {

         proxy_pass http://backend_servers;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
     }
}
}
  • upstream backend_servers defines a group of backend servers for load balancing.
  • proxy_pass http://backend_servers; forwards requests to one of the backend servers in the upstream block.
  • Nginx will use round-robin load balancing by default, distributing requests evenly across the backend servers.
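
The example above uses the default round-robin algorithm. As a sketch, switching to least connections or IP hash only requires one extra directive at the top of the upstream block; the server names here are placeholders:

upstream backend_servers {
     least_conn;     # or: ip_hash;
     server backend1.example.com;
     server backend2.example.com;
     server backend3.example.com;
}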

Caching Static Content

Caching static content speeds up delivery and reduces server load. Nginx uses directives like proxy_cache and fastcgi_cache to store static content and serve it faster on subsequent requests. The proxy_cache_path directive defines the cache zone and the storage path.

Cached responses are stored until the cache reaches its maximum size, at which point the least recently used items are removed. Cache purging removes outdated responses using a special HTTP method or a custom header. Proper caching configuration speeds up response times and reduces backend server load, improving overall performance.

Example: Caching Static Content with proxy_cache

http {

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;
server {
     listen 80;
     server_name example.com;

     location / {
         proxy_pass http://backend_server;
         proxy_cache my_cache;
         proxy_cache_valid 200 302 10m;
         proxy_cache_valid 404 1m;
         add_header X-Cache-Status $upstream_cache_status;
     }
}
}
  • proxy_cache_path sets the cache storage path, the cache size (max_size=1g), and the expiration rules.
  • proxy_cache my_cache; enables caching for the location block.
  • proxy_cache_valid defines how long responses should be cached, e.g., 10 minutes for 200 and 302 responses, and 1 minute for 404 responses.
  • add_header X-Cache-Status adds a custom header to indicate whether a request was served from the cache or the origin server.

Nginx Processes

Managing Nginx processes is key to a stable server environment; Nginx uses multiple worker processes to handle incoming requests. Learn how to start, stop, and reload Nginx so you can apply your configurations without interrupting the service.

Start and Stop Nginx

Start the Nginx service with:

sudo systemctl start nginx

The nginx.pid file stores the master process ID, which is used to send signals to the master process.

To stop Nginx use:

sudo systemctl stop nginx

You can also issue a restart with a single command:

sudo systemctl restart nginx

Restarting Nginx properly will keep the server smooth and stable.

Reload Nginx

Reloading Nginx applies the new configuration without interrupting current connections. Use nginx -s reload to signal the master process to reload the configuration.

If there are no syntax errors in the new configuration, Nginx will start new worker processes with the new settings.

Test Nginx Settings

Before issuing a restart or a reload of Nginx, make sure to test the current settings using:

nginx -t

Errors and Status Codes

Custom error pages and well-chosen HTTP status codes improve both the user experience and debugging. Let’s look at custom error pages and at returning specific status codes with the return directive.

Custom Error Pages

Custom error pages for multiple status codes provide a consistent user experience. The error_page directive can point to a single file for multiple error codes, making the configuration simpler.

Custom error pages are usually stored in the default document root for easy access.
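
As a minimal sketch, assuming a 50x.html page exists under /var/www/html, a single file can cover several server-side error codes:

error_page 500 502 503 504 /50x.html;

location = /50x.html {
     root /var/www/html;
     internal;
}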

Return Specific Status Codes

The return directive in Nginx allows you to send a specific HTTP status code or redirect without further processing. This directive can be used in a location or server context.

Configuring error pages for different status codes will give you more control over the response.
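
For illustration, here are two hedged examples of the return directive; the paths and the target URL are placeholders:

# Permanent redirect for a moved page
location /old-page {
     return 301 https://example.com/new-page;
}

# Deny a path with a bare status code
location /private/ {
     return 403;
}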

Nginx as Reverse Proxy

Setting up Nginx as a reverse proxy is common. It acts as an intermediary for requests from clients to backend servers. Let’s go over the basic reverse proxy setup and the advanced proxy settings that help you optimize it.

Basic Reverse Proxy

The proxy_pass directive routes requests to the correct backend server in a reverse proxy setup. Add it inside a location block, pointing to the backend server’s URL, so that client requests are forwarded properly.

A reverse proxy setup distributes the load and improves security by hiding the backend servers from direct client access. Define the upstream server and make sure requests are routed properly; this is very useful for high-traffic sites and for server redundancy.
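
A minimal sketch of such a setup, assuming an application listening on 127.0.0.1:3000 behind the proxy, might look like this:

server {
     listen 80;
     server_name app.example.com;

     location / {
          proxy_pass http://127.0.0.1:3000;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
     }
}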

Advanced Proxy

An advanced proxy configuration goes beyond simply routing requests and lets you fine-tune how the reverse proxy behaves. Directives like proxy_set_header give you fine-grained control over HTTP headers, while tuning the proxy buffers with proxy_buffers and proxy_buffering manages memory usage and improves performance.

The proxy_bind directive specifies the source IP address used when connecting to the backend servers, which helps with routing and identifying requests. Together, these settings give you more control and make your reverse proxy more efficient and robust.
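
Putting those directives together, a hedged sketch of an advanced proxy location could look like this; the backend address and the local source IP are placeholders:

location / {
     proxy_pass http://127.0.0.1:8080;

     # Pass the original request details to the backend
     proxy_set_header Host $host;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

     # Tune response buffering
     proxy_buffering on;
     proxy_buffers 16 8k;

     # Connect to the backend from a specific local IP address
     proxy_bind 192.0.2.10;
}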

Security Best Practices

Securing your Nginx server protects it from common vulnerabilities. Let’s walk through a few basic hardening measures, from disabling unneeded modules to enabling HTTPS with Let’s Encrypt and IP whitelisting.

Note: these are just some basic things you can do to secure your Nginx server. If you want to dig deeper, read our Nginx Security Hardening Guide.

Disable Unneeded Modules

Since Nginx operates in a modular fashion, not all modules are required for every setup. To minimize potential vulnerabilities, deactivate or remove any modules that aren’t essential to your configuration. For instance, if you’re not using features like WebDAV or FastCGI, it’s best to disable them.
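
You can see which modules your binary was built with by running nginx -V. If you build Nginx from source, default modules you don’t need can be left out at compile time; here is a sketch, assuming a source build (the module choices are examples):

nginx -V

./configure --without-http_fastcgi_module --without-http_autoindex_module
make
sudo make install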

Limit Request Sizes and Implement Rate Limiting

Nginx is designed to handle large file uploads and high volumes of requests by default. However, attackers can exploit this by overwhelming the server with large or excessive requests. To prevent this, configure limits on request size with client_max_body_size and implement rate limiting through limit_req_zone. These measures help guard against denial-of-service (DoS) and brute force attacks.
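
Here is a hedged sketch of both measures; the zone name, size limit, and rate are illustrative values to tune for your own traffic:

http {
     # Reject request bodies larger than 10 MB
     client_max_body_size 10m;

     # Track clients by IP and allow at most 10 requests per second each
     limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;

     server {
          location /login {
               limit_req zone=req_limit burst=20 nodelay;
          }
     }
}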

Regularly Update Nginx

It’s crucial to keep Nginx updated to the latest version. Updates often include security fixes and new features that patch known vulnerabilities. Running outdated software is a common security risk, so make sure you’re always using the most current stable version to protect your server.
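
If you installed Nginx from the Ubuntu repositories, staying current is a package-manager task:

sudo apt update
sudo apt install --only-upgrade nginx
nginx -v     # confirm the installed version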

Enable HTTPS with Let’s Encrypt

Let’s Encrypt is a certificate authority that provides free SSL/TLS certificates to secure your Nginx-hosted sites with HTTPS. Obtain and install the certificate to boost security and trust. Let’s Encrypt also offers automation features to manage the certificates, including auto-renewal.

Using HTTPS is a must nowadays, and it’s the foundation for protecting web traffic from eavesdropping and tampering. Enabling HSTS will also boost your security by forcing web browsers to load your sites using HTTPS.
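
On Ubuntu, the usual route is Certbot with its Nginx plugin. Here is a sketch, assuming your server block’s server_name already matches your domain:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com
sudo certbot renew --dry-run     # verify that automatic renewal works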

IP Whitelisting

IP whitelisting limits access to specific areas of your Nginx-hosted site. Configure Nginx to allow only specific IP addresses to reach sensitive directories; that way only trusted clients can connect, which reduces the risk of unauthorized access and potential attacks.
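
A minimal sketch, assuming you want to restrict an /admin/ area to a single trusted address (the IP below is a placeholder from a documentation range):

location /admin/ {
     allow 203.0.113.10;
     deny all;
}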

Testing the Security of Your Nginx Server

When the time comes to test if your website runs smoothly under Nginx, you can use our website scanner to check if your site is secure and works as expected. 

This test will provide you with an overview of the top Nginx / HTTP misconfigurations, helping ensure your website is always protected.

[Image: Nginx Misconfiguration Scanner]

Follow these steps:

  1. Start by accessing our web misconfiguration scanner.
  2. Type your website URL in the input box, and don’t forget to check the boxes for “Clear cache” and “Follow redirects.”
  3. Hit the Scan button and wait a few seconds for your results.

FAQs

How to install Nginx on Ubuntu Server?

To install Nginx on Ubuntu Server, update your package list with sudo apt update, then use sudo apt install nginx to install it. Start the service with sudo systemctl start nginx and access your server’s public IP address in a web browser.

What are ‘sites-available’ and ‘sites-enabled’ in Nginx?

The ‘sites-available’ directory holds the configuration files for all of your websites, whereas the ‘sites-enabled’ directory holds symlinks to the active configurations. This makes it easy to enable or disable sites in Nginx.

How to set up a basic reverse proxy with Nginx?

To set up a basic reverse proxy with Nginx, you should use the proxy_pass directive inside a location block in your configuration file, which will route the client requests to your backend server’s URL. This will forward the incoming requests to the server.

What are the benefits of Let’s Encrypt for HTTPS?

Let’s Encrypt gives you free SSL certificates and automates issuance and renewal, simplifying the process of securing your Nginx-hosted sites with HTTPS. This secures your website and builds visitor trust at no cost.

How does IP whitelisting secure Nginx?

IP whitelisting will secure Nginx by allowing only specific IP addresses to access certain parts of the website and will reduce the risk of unauthorized access. This targeted control will strengthen your web application’s security.

Summary

We have covered the best practices for Nginx, from setting up the environment to advanced configurations and security. Understanding and managing the Nginx configuration files, optimizing the server with load balancing and caching, and securing it with HTTPS and IP whitelisting are key to running a robust and efficient web server. Follow these practices and you can use Nginx to its full potential when handling web traffic. Keep exploring and applying them to keep your server optimal and secure.
