Hosting multiple websites on a single VPS via Docker is pretty cool, but some might find it too bloated or complex for their needs. Instead of Docker, we can use Linux Containers, also known as LXC, to do the same thing in a more streamlined, more Linux-y fashion.
Aim of this tutorial
In this tutorial, we will learn how to run multiple websites on a single VPS with a single public IP address. These websites will all run in their own separate LXC containers, each with its own unique Fully Qualified Domain Name (FQDN), e.g., SUBDOMAIN1.DOMAIN.TLD, SUBDOMAIN2.DOMAIN.TLD, and so on.
We will achieve this by using one container as a load balancer, which will listen for all requests arriving on port 80 or 443 (more on these later) of your VPS. It will read the header information of each incoming request: if the request is for, say, SUBDOMAIN1.DOMAIN.TLD, it will forward it to the corresponding backend container; if it is for SUBDOMAIN2.DOMAIN.TLD, the request goes to another container running a different website.
Lastly, we will set up TLS to encrypt the entire communication between a client computer and your HAProxy container.
Prerequisites
- A registered domain name. We will use the placeholder DOMAIN.TLD; substitute your own domain name wherever it appears.
- An understanding of DNS records, especially A records and how to set those up.
- Root access to a VPS with a static public IP.
- Basic understanding of the Linux command line and how to use terminal-based editors like vim or nano. Use nano if you are new to this.
How are websites resolved?
a.k.a. Your DNS setup.
When you visit a website, say EXAMPLE.COM, your web browser makes a request to DNS servers, like those belonging to Google (8.8.8.8), OpenDNS (208.67.222.222), or Cloudflare (1.1.1.1). These Domain Name System (DNS) servers check their records to see which public IP(s) the name points to. The IP is sent back to the browser, which then makes requests to that IP address; the server responds, typically by sending over the contents of a webpage.
On the browser side, you typically expect these web pages to be served over port 80 (for HTTP) or port 443 (for HTTPS) of the server, so once the browser has an IP address, it sends requests to these particular ports. How are we supposed to run multiple websites if only one or two ports are available to us?
Well, since we have a public IP with our VPS, we can set up multiple A records pointing different names to the same IP. So, if we want to launch the websites SUBDOMAIN1.DOMAIN.TLD and SUBDOMAIN2.DOMAIN.TLD on a single server, both should point to the same IP address. Later on, we will set up a reverse proxy server which will send the traffic coming for SUBDOMAIN1.DOMAIN.TLD to one container, SUBDOMAIN2.DOMAIN.TLD to another container, and so on.
The important thing is that we will still be listening only on ports 80 and 443 on our web server, and no other ports, which is what we desire.
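For reference, here is a minimal sketch of what the two A records might look like in a BIND-style zone file, using the documentation address 203.0.113.10 as a stand-in for your VPS's public IP (your DNS provider's control panel will present the same fields in its own way):
SUBDOMAIN1.DOMAIN.TLD.    3600    IN    A    203.0.113.10
SUBDOMAIN2.DOMAIN.TLD.    3600    IN    A    203.0.113.10
Once the records have propagated, running dig +short SUBDOMAIN1.DOMAIN.TLD from any machine should print your VPS's public IP.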
Initializing LXD and creating Linux containers
Let's start with a clean-slate Ubuntu 18.04 LTS server, with no additional packages installed or modifications made to it. Let's run a customary update and upgrade to make sure we have the latest packages available to us.
$ sudo apt update
$ sudo apt upgrade
LXD init
LXD is the background process (a daemon), and Linux Containers (LXC) is the containerization technology behind it. Now we can run lxd init, which will ask us several questions and set up the containerization environment for us. We will use ZFS to store and manage our container-related data, so install the ZFS utilities first.
$ sudo apt install zfsutils-linux
$ sudo lxd init
LXD will ask you a lot of technical questions, and we can't cover all of them in depth here, so let's stick to brief descriptions and get our LXC environment up and running.
- When it comes to LXD clustering, we will use the default option, which is no. Just press <Enter>.
- New storage pool? Again, the default option, yes.
- Name of the storage pool? You can give it a reasonable name like lxc_pool, because default is not a meaningful name.
- Storage backend? Let's go with zfs, the default option.
- Say yes to creating a new ZFS pool.
- With the block device question, you have the option to create a new ZFS pool over an entire block device. If your VPS doesn't have additional block storage attached to it, go with the default option of no. If you selected no, you will be asked to assign some space in the current file system for LXC to use. The default of 15GB is a good starting point.
- MAAS server connection is not required. Enter no.
- The local network bridge is extremely important for what we are going to do. Answer that with a yes.
- Let the bridge have the default name lxdbr0.
- IPv4 addresses? Leave that at the default, auto.
- IPv6 addresses are strictly optional. For this tutorial, we are going to say none and not use the default value.
- Say no to making LXD available over the network.
- Automatic update of cached images? yes, of course!
- You can print a summary of the entire configuration at the last prompt if you want to. Here's the output of our configuration for reference:
config: {}
cluster: null
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    size: 15GB
  description: ""
  name: lxc_pool
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: lxc_pool
      type: disk
  name: default
If you are logged in as a user other than root, add that USER to the lxd group.
$ sudo usermod -aG lxd USER
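Note that group membership is only picked up on a new login. To use the new group in your current session without logging out and back in, you can start a fresh shell with the group applied:
$ newgrp lxd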
Now that we have LXD up and running, we can start creating our containers. LXC containers are different from Docker containers: you can treat a Docker container as, in a sense, simply a package that you're installing. LXC containers, on the other hand, are treated more like lightweight virtual machines, each with its own private IP address, a full file system, and all the other things you typically associate with a VM.
Creating Linux containers
We will launch three containers by running:
$ lxc launch ubuntu:18.04 SUBDOMAIN1
$ lxc launch ubuntu:18.04 SUBDOMAIN2
$ lxc launch ubuntu:18.04 HAProxy
SUBDOMAIN1 is just a placeholder. You can name the container anything reasonable, like blog or portfolio. You can see the state of each of them by using the command:
$ lxc list
+------------+---------+-----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------+---------+-----------------------+------+------------+-----------+
| HAProxy | RUNNING | 10.188.233.252 (eth0) | | PERSISTENT | 0 |
+------------+---------+-----------------------+------+------------+-----------+
| SUBDOMAIN1 | RUNNING | 10.188.233.245 (eth0) | | PERSISTENT | 0 |
+------------+---------+-----------------------+------+------------+-----------+
| SUBDOMAIN2 | RUNNING | 10.188.233.87 (eth0) | | PERSISTENT | 0 |
+------------+---------+-----------------------+------+------------+-----------+
The values above are just examples; your values will likely differ from these.
iptables rules
Since all the incoming traffic (on ports 80 and 443) should go through HAProxy first, let's set some rules to enforce that. First, run the command:
$ ifconfig
Notice which network interface has the public IP address assigned to it. For example, one of the blocks in the ifconfig output will show your public IP under the inet entry. That particular interface's name can be eth0, as shown below, or something else depending on your cloud provider. Use that name as your INTERFACE_NAME.
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet PUBLIC_IP_ADDRESS netmask 255.255.240.0 broadcast
inet6 fe80::387f:b5ff:fe8d:5960 prefixlen 64 scopeid 0x20<link>
ether 3a:7f:b5:8d:59:60 txqueuelen 1000 (Ethernet)
RX packets 1664438 bytes 1952362406 (1.9 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 853505 bytes 389343547 (389.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
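If ifconfig is not available (minimal Ubuntu installs ship without the net-tools package that provides it), the iproute2 equivalent shows the same information:
$ ip -4 addr show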
Forwarding traffic to the HAProxy container
Once that is done, note the IP address of your HAProxy container as it was shown in lxc list; let's call it HAPROXY_IP_ADDRESS. Also make a note of your VPS's public IP address; let's call it PUBLIC_IP_ADDRESS. Now, given these values, run:
$ sudo iptables -t nat -I PREROUTING -i INTERFACE_NAME -p TCP -d PUBLIC_IP_ADDRESS/32 --dport 80 -j DNAT --to-destination HAPROXY_IP_ADDRESS:80
$ sudo iptables -t nat -I PREROUTING -i INTERFACE_NAME -p TCP -d PUBLIC_IP_ADDRESS/32 --dport 443 -j DNAT --to-destination HAPROXY_IP_ADDRESS:443
We won't go into the nuances of iptables right now; just understand that these rules route all traffic arriving on ports 80 and 443 to the HAProxy container. These rules would reset upon system reboot; we can use the package iptables-persistent to fix that:
$ sudo apt install iptables-persistent
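During installation, iptables-persistent will offer to save the current IPv4 and IPv6 rules; answer yes so the DNAT rules above survive a reboot. If you change the rules later, you can save the active ruleset again with the helper that ships with the package:
$ sudo netfilter-persistent save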
For additional security, let's secure the VPS by configuring the ufw firewall. It is essential that you allow SSH connections before you enable ufw; otherwise, you might lock yourself out of your VPS.
$ sudo ufw allow http
$ sudo ufw allow https
$ sudo ufw allow ssh
$ sudo ufw enable
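You can verify the resulting firewall state at any time with:
$ sudo ufw status verbose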
Configuring the HAProxy container
Log in to your HAProxy container by executing the following command:
$ lxc exec HAProxy -- bash
The prompt will change to root@HAProxy:~#, indicating that you are inside the container as the root user. We will stay in this environment for the rest of this section. To go back to the main VPS environment, press Ctrl+D or run exit.
As the root user, run apt update and then install the HAProxy server:
# apt update
# apt install haproxy
This command installs and starts HAProxy, a reverse proxy server. Now, we would like to achieve the following:
- Route traffic for SUBDOMAIN1.DOMAIN.TLD to the SUBDOMAIN1 container.
- Let the container know the client's IP address so that it can keep track of different visitors to SUBDOMAIN1.DOMAIN.TLD.
- Ditto for SUBDOMAIN2.DOMAIN.TLD.
- Optionally, add TLS certificates from Let's Encrypt.
The configuration file for HAProxy is located at /etc/haproxy/haproxy.cfg. It has two sections, global and defaults. Let's modify the global section first by adding two lines:
maxconn 2048
tune.ssl.default-dh-param 2048
And to the defaults section, add:
option forwardfor
option http-server-close
The result looks something like this:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    maxconn 2048

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
The maxconn parameter need not be 2048; set it to whatever value you deem fit as the maximum number of simultaneous connections. The forwardfor option is there to retain the real source IP, so the websites you host can keep track of real-world visitors. Without it, it may appear that only the reverse proxy server has ever visited your websites SUBDOMAIN1.DOMAIN.TLD and SUBDOMAIN2.DOMAIN.TLD.
Now we need to add a frontend section at the bottom of the file, which will tell HAProxy how to filter the incoming requests. Then we will add a couple of backend sections telling HAProxy where the filtered requests should go:
frontend http_frontend
    bind *:80
    acl web_host1 hdr(host) -i SUBDOMAIN1.DOMAIN.TLD
    acl web_host2 hdr(host) -i SUBDOMAIN2.DOMAIN.TLD
    use_backend subdomain1 if web_host1
    use_backend subdomain2 if web_host2

backend subdomain1
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server SUBDOMAIN1 SUBDOMAIN1.lxd:80 check

backend subdomain2
    balance leastconn
    http-request set-header X-Client-IP %[src]
    server SUBDOMAIN2 SUBDOMAIN2.lxd:80 check
The above text is what you should append to the /etc/haproxy/haproxy.cfg file if you are interested in using just HTTP, without SSL. The frontend reads the Host header (hdr(host) -i ...) and distributes traffic accordingly. The backends forward the source IP (http-request set-header X-Client-IP %[src]) and refer to the LXC containers using local domain names like SUBDOMAIN1.lxd. This is a useful feature of LXD which we can use to our advantage; it is completely separate from the actual SUBDOMAIN1.DOMAIN.TLD that you would use to view your websites.
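Before going further, you can sanity-check that these .lxd names resolve from inside the HAProxy container:
# ping -c 1 SUBDOMAIN1.lxd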
You can now check whether the configuration file is valid by running:
# /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c
If everything checks out, reload HAProxy:
# service haproxy reload
You can now safely exit this container by running the exit command.
Running multiple websites
Now you can log in to the backend containers named SUBDOMAIN1 and SUBDOMAIN2. We will install the Apache web server in one and Nginx in the other to see how they work.
$ lxc exec SUBDOMAIN1 -- bash
# apt update
# apt install apache2
# exit
$ lxc exec SUBDOMAIN2 -- bash
# apt update
# apt install nginx
# exit
Now if you visit SUBDOMAIN1.DOMAIN.TLD from your web browser, you will see that it is running the Apache2 web server. Visiting SUBDOMAIN2.DOMAIN.TLD will show you an Nginx landing page instead!
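If your DNS records haven't propagated yet, you can test the routing directly from the VPS by sending requests to the HAProxy container with the appropriate Host header (substitute your actual HAPROXY_IP_ADDRESS):
$ curl -sI -H "Host: SUBDOMAIN1.DOMAIN.TLD" http://HAPROXY_IP_ADDRESS/ | grep -i '^server'
$ curl -sI -H "Host: SUBDOMAIN2.DOMAIN.TLD" http://HAPROXY_IP_ADDRESS/ | grep -i '^server'
The first command should report Apache and the second Nginx, confirming that HAProxy is routing by hostname.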
You now have two different websites hosted on a single VPS. Log in to their respective containers to install WordPress, Ghost, or another CMS if you're starting a blog. You can install anything you'd install on a barebones Linux VPS, such as self-hosted web apps, too.
Installing Let’s Encrypt certificates
We haven't added SSL certificates yet. If you are using Cloudflare as your DNS provider, you can enable SSL over there, which works just fine. It is free and easy to set up, it will get your websites the padlock symbol next to the URL, and you won't have to worry about certificate renewal.
If you want to use Let's Encrypt for a free SSL certificate instead, you will have to jump through a couple of hoops. Let's log back into the HAProxy container: lxc exec HAProxy -- bash.
Obtaining certs
First, stop the HAProxy service so we can get started with certificate installation (Certbot's standalone mode needs port 80 to itself). Add the Certbot PPA to your list of repositories and then install Certbot, which will fetch certificates for us.
# service haproxy stop
# add-apt-repository ppa:certbot/certbot
# apt update
# apt install certbot
Obtain the certificates by running certbot certonly. This command will ask you several questions, including the domain names you want certificates for.
# certbot certonly
How would you like to authenticate with the ACME CA?
-------------------------------------------------------------------------------
1: Spin up a temporary web server (standalone)
2: Place files in webroot directory (webroot)
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):
Select 1. It will then ask for your email address; enter it and agree to the terms and conditions. The command will also ask if you're willing to share your email address with the EFF; you can opt out of it. Then, most importantly, you will be asked for your domain name(s). Enter your registered domain names; for example, we will enter SUBDOMAIN1.DOMAIN.TLD, SUBDOMAIN2.DOMAIN.TLD, separated by a comma and a space.
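If you would rather skip the interactive prompts, the same request can be made with flags; a sketch, assuming the standalone method, with your own email and domains substituted in:
# certbot certonly --standalone --agree-tos -m YOU@DOMAIN.TLD -d SUBDOMAIN1.DOMAIN.TLD -d SUBDOMAIN2.DOMAIN.TLD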
Configuring HAProxy to serve SSL
If successful, Certbot will tell you where the certs are stored. Typically the certificates are saved in the /etc/letsencrypt/live/SUBDOMAIN1.DOMAIN.TLD directory, which is named after the first FQDN you entered while obtaining the certificates.
HAProxy expects the full certificate chain and the private key in a single file, so we will combine the fullchain.pem and privkey.pem files into one file inside the directory /etc/haproxy/certs.
# mkdir -p /etc/haproxy/certs
# cat /etc/letsencrypt/live/SUBDOMAIN1.DOMAIN.TLD/fullchain.pem /etc/letsencrypt/live/SUBDOMAIN1.DOMAIN.TLD/privkey.pem > /etc/haproxy/certs/SUBDOMAIN1.DOMAIN.TLD.pem
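You can confirm the combined file reads back correctly; this prints the subject and validity window of the first certificate in the bundle:
# openssl x509 -in /etc/haproxy/certs/SUBDOMAIN1.DOMAIN.TLD.pem -noout -subject -dates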
Next we must revisit our /etc/haproxy/haproxy.cfg file and add an SSL frontend to it, as well as modify the backends to redirect all non-SSL requests to SSL. We will add the SSL frontend first.
frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/SUBDOMAIN1.DOMAIN.TLD.pem
    reqadd X-Forwarded-Proto:\ https
    acl host_web1 hdr(host) -i SUBDOMAIN1.DOMAIN.TLD
    acl host_web2 hdr(host) -i SUBDOMAIN2.DOMAIN.TLD
    use_backend subdomain1 if host_web1
    use_backend subdomain2 if host_web2
Next, we will add the line redirect scheme https if !{ ssl_fc } to each of our backend sections to redirect plain-HTTP requests to SSL, like so:
backend subdomain1
    balance leastconn
    http-request set-header X-Client-IP %[src]
    redirect scheme https if !{ ssl_fc }
    server SUBDOMAIN1 SUBDOMAIN1.lxd:80 check
The final result would look something like this:
frontend www-https
    bind *:443 ssl crt /etc/haproxy/certs/SUBDOMAIN1.DOMAIN.TLD.pem
    reqadd X-Forwarded-Proto:\ https
    acl host_web1 hdr(host) -i SUBDOMAIN1.DOMAIN.TLD
    acl host_web2 hdr(host) -i SUBDOMAIN2.DOMAIN.TLD
    use_backend subdomain1 if host_web1
    use_backend subdomain2 if host_web2

frontend http_frontend
    bind *:80
    acl web_host1 hdr(host) -i SUBDOMAIN1.DOMAIN.TLD
    acl web_host2 hdr(host) -i SUBDOMAIN2.DOMAIN.TLD
    use_backend subdomain1 if web_host1
    use_backend subdomain2 if web_host2

backend subdomain1
    balance leastconn
    http-request set-header X-Client-IP %[src]
    redirect scheme https if !{ ssl_fc }
    server SUBDOMAIN1 SUBDOMAIN1.lxd:80 check

backend subdomain2
    balance leastconn
    http-request set-header X-Client-IP %[src]
    redirect scheme https if !{ ssl_fc }
    server SUBDOMAIN2 SUBDOMAIN2.lxd:80 check
You can now run service haproxy reload for the new configuration to take effect. Check whether SSL is working by visiting your FQDNs from a web browser; you will see a padlock symbol in the URL bar if everything has worked out fine.
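From your own computer, you can also verify the certificate on the command line; the second command prints the issuer and expiry dates of the certificate HAProxy presents:
$ curl -I https://SUBDOMAIN1.DOMAIN.TLD
$ openssl s_client -connect SUBDOMAIN1.DOMAIN.TLD:443 -servername SUBDOMAIN1.DOMAIN.TLD </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates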
Renewing certificates
Let's Encrypt certificates expire every 90 days, and you will need to renew them accordingly. While this process can be automated (a sketch of one approach follows the list below), let's look at the manual way first for the sake of simplicity.
- Log in to the HAProxy container: lxc exec HAProxy -- bash.
- Stop the service: service haproxy stop.
- Run: certbot renew.
- Remove the older certificate from the HAProxy configs: rm /etc/haproxy/certs/SUBDOMAIN1.DOMAIN.TLD.pem.
- Add the newer certs: cat /etc/letsencrypt/live/SUBDOMAIN1.DOMAIN.TLD/fullchain.pem /etc/letsencrypt/live/SUBDOMAIN1.DOMAIN.TLD/privkey.pem > /etc/haproxy/certs/SUBDOMAIN1.DOMAIN.TLD.pem
- Start HAProxy again: service haproxy start.
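As for automating this, one option is a Certbot deploy hook, which runs after every successful renewal. Here is a minimal sketch; the script name haproxy-refresh.sh is our own choice, and it assumes the certificate layout used above and a Certbot version that supports the renewal-hooks directory:
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/haproxy-refresh.sh
# Rebuild the combined PEM that HAProxy reads, then reload HAProxy.
DOMAIN=SUBDOMAIN1.DOMAIN.TLD
cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem \
    /etc/letsencrypt/live/$DOMAIN/privkey.pem \
    > /etc/haproxy/certs/$DOMAIN.pem
service haproxy reload
Mark the script executable with chmod +x, and certbot renew (run from cron, for instance) will pick it up automatically. Keep in mind that renewal in standalone mode still needs port 80 free, so a fully hands-off setup would stop and start HAProxy around the renewal using Certbot's --pre-hook and --post-hook options.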
Conclusion
The overall setup covered here is quite involved and can appear frustrating at times. If you feel that way, it's okay! Knowing what you have done at every step of the way can teach you a lot about how these systems work and how you can use them optimally.
If you have any doubts, queries or corrections, feel free to drop a comment below.
A note about tutorials: We encourage our users to try out tutorials, but they aren't fully supported by our team—we can't always provide support when things go wrong. Be sure to check which OS and version it was tested with before you proceed.
If you want a fully managed experience, with dedicated support for any application you might want to run, contact us for more information.