What is a Reverse Proxy? Basics & Example

This site runs, in part, on a reverse proxy (along with the Ghost publishing platform). What is that? Why is it used?

I don't know exactly when the concept came about, but it keeps crossing my path, and I imagine it's crossing yours as well. In this article, I'll cover what a reverse proxy is, why you might use one, and walk through an example configuration.

Of course, this is a large topic, so I'm narrowing my coverage to an NGINX reverse proxy in front of an Express server, with a brief look at SSL termination.

What is it?

I want to keep this simple: a reverse proxy isn't limited to port 80, HTTP traffic, or even a single server, but that simple case will serve as the basis of our example.

A reverse proxy runs on the web server, listening on port 80 (HTTP), and is the first to accept an incoming request. Based on the server configuration (the subdomain, in this example), it will either handle the request itself - perhaps with a 301 redirect - or pass it off to another web server running on a different port on the same machine.

Reverse-Proxy - Basic Example Diagram

The above example is one of many configurations possible.

Uses

So, why do this? If the NGINX web server just passes the request off to another web server, what is the point?

Some of the highlight reasons - and this isn't exhaustive - are:

  1. More control over the server's configuration and redirects without touching (in my case) Ghost and its configuration.
  2. Multiple websites on one server... routed by subdomain (theopensourceu.org, blog.theopensourceu.org, etc.).
  3. The benefit of NGINX's traffic management capabilities.
  4. SSL termination proxy... perhaps for several sites.
  5. Many other reasons. A more comprehensive list can be found on Wikipedia.
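As a sketch of point 2, routing by subdomain comes down to multiple server blocks whose server_name values differ; NGINX matches the request's Host header against them. The backend ports here are illustrative:

```nginx
# Route by subdomain: each server block matches a different Host
# header and proxies to its own local backend.
server {
    listen 80;
    server_name theopensourceu.org;

    location / {
        proxy_pass http://127.0.0.1:8000;   # main site
    }
}

server {
    listen 80;
    server_name blog.theopensourceu.org;

    location / {
        proxy_pass http://127.0.0.1:8001;   # blog (e.g. Ghost)
    }
}
```

Both blocks listen on the same port 80; the Host header alone decides which backend serves the request.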

Furthermore, this setup is commonly used when running an Express or Kestrel server on Linux. The Express case is our focus here.

Example Configuration

Let's take a look at an annotated configuration of a reverse proxy in NGINX.

The target is to have an NGINX web server listening on port 80 and an Express or Kestrel server running on port 8000.

This is an NGINX configuration file example:

server {
    listen 80;         # IPv4 - Listen on port 80
    listen [::]:80;    # IPv6 - Listen on port 80

    server_name theOpenSourceU-example.org;
    
    #
    # ... Content removed for readability ...
    # 
    
    # Root of the webserver
    location / {
        # 
        # proxy_set_header passes or sets the headers from 
        # NGINX to the proxy server. Otherwise these 
        # would not be available to the proxied server.
        #
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        
        #
        # Declare that any request coming in to 
        # http://theOpenSourceU-Example.org/
        # will go to http://127.0.0.1:8000 - a different 
        # web server on this web server.
        #
        proxy_pass http://127.0.0.1:8000;
        
        #
        # proxy_pass is the magic to invoke the reverse 
        # proxy feature of NGINX. 
        #
    }
}
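If you also want NGINX to handle SSL termination (use #4 from the list above), a second server block listens on 443 and decrypts the traffic before proxying. The certificate paths below are placeholders - point them at your real certificate and key:

```nginx
server {
    listen 443 ssl;         # IPv4 - HTTPS
    listen [::]:443 ssl;    # IPv6 - HTTPS

    server_name theOpenSourceU-example.org;

    # Placeholder paths - substitute your actual certificate and key.
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;  # "https" here
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;

        # TLS ends at NGINX; the hop to the local backend is plain HTTP.
        proxy_pass http://127.0.0.1:8000;
    }
}
```

The backend never touches certificates; X-Forwarded-Proto tells it the original request arrived over HTTPS, which is all most apps need to know.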

Summary

Reverse proxies have become a constant in my workday. If you're experiencing the same thing, I hope this helped shed some light on what they are, why you'd use one and, more importantly, why they are so versatile.

The example, while not production worthy, was meant to help clarify what the configuration looks like.

If you have any questions or would like me to expand on a specific topic or area, please drop me a comment below.

Frank Villasenor

Owner and principal author of this site. Professional Engineering Lead, Software Engineer & Architect working in the Chicagoland area as a consultant. Cert: AWS DevOps Pro
Chicago, IL