Aug 16, 2018


Hosting Multiple Websites With Containers And HAProxy

Written by Vippy The VPS
Hosting multiple websites on a single VPS via Docker is pretty cool, but some might find Docker too bloated or complex for their needs. Instead of Docker, we can use Linux Containers, also known as LXC, to do the same thing in a more streamlined, more Linux-y fashion.

Aim of this tutorial

In this tutorial, we will learn how to run multiple websites on a single VPS with a single public IP address. These websites will all run in their own separate LX containers, each with its own unique Fully Qualified Domain Name (FQDN), e.g., SUBDOMAIN1.DOMAIN.TLD, SUBDOMAIN2.DOMAIN.TLD, and so on. We will achieve this by using one container as a load balancer, which will listen for all requests arriving on port 80 or 443 (more on these later) of your VPS. It will read the header information of each incoming request: if the requested host is, say, SUBDOMAIN1.DOMAIN.TLD, it will forward the request to the corresponding backend container; if it is SUBDOMAIN2.DOMAIN.TLD, the request goes to another container running a different website. Lastly, we will set up TLS to encrypt the entire communication between a client computer and your HAProxy container.
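The Host-header routing described above can be sketched as an HAProxy configuration. This is a minimal sketch only: the backend names and the container IP addresses (10.71.16.10 and 10.71.16.11) are placeholder assumptions, not values from this tutorial, and your actual container IPs will come from the lxdbr0 bridge we set up later.

```
frontend http_in
    bind *:80
    mode http
    # Route on the Host header of the incoming request
    acl host_sub1 hdr(host) -i SUBDOMAIN1.DOMAIN.TLD
    acl host_sub2 hdr(host) -i SUBDOMAIN2.DOMAIN.TLD
    use_backend site1 if host_sub1
    use_backend site2 if host_sub2

backend site1
    mode http
    # Placeholder IP: the container serving SUBDOMAIN1
    server web1 10.71.16.10:80 check

backend site2
    mode http
    # Placeholder IP: the container serving SUBDOMAIN2
    server web2 10.71.16.11:80 check
```

The key idea is that HAProxy terminates the connection on ports 80/443 and decides, per request, which internal container should answer.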

Prerequisites

  • A registered domain name. We will use a placeholder DOMAIN.TLD where you would use your own domain name instead.
  • An understanding of DNS records, especially A records and how to set those up.
  • Root access to a VPS with a static public IP.
  • Basic understanding of the Linux command line and how to use terminal-based editors like vim or nano. Use nano if you are new to this.

How are websites resolved?

a.k.a. your DNS setup. When you visit a website, say EXAMPLE.COM, your web browser sends a query to DNS servers, such as those run by Google (8.8.8.8), OpenDNS (208.67.222.222), or Cloudflare (1.1.1.1). These servers, part of the Domain Name System (DNS), check their records to see which public IP(s) the name points to, and that IP is sent back to the browser. The browser then makes its requests to this IP address, which responds, typically, by sending over the contents of a webpage. Web pages are conventionally served over port 80 (for HTTP) or port 443 (for HTTPS), so once the browser has an IP address, it sends its requests to those particular ports.

How are we supposed to run multiple websites if only one or two ports are available to us? Well, once we have a public IP for our VPS, we can set up multiple A records pointing different names to the same IP. So, if we want to launch the websites SUBDOMAIN1.DOMAIN.TLD and SUBDOMAIN2.DOMAIN.TLD on a single server, both should point to the same IP address. Later on, we will set up a reverse proxy which will send the traffic for SUBDOMAIN1.DOMAIN.TLD to one container, the traffic for SUBDOMAIN2.DOMAIN.TLD to another container, and so on. The important thing is that we will still be listening only on ports 80 and 443 of our web server, and no other ports, which is exactly what we want.
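In zone-file terms, the multiple A records described above look like the fragment below. The address 203.0.113.10 is a documentation-range placeholder standing in for your VPS's real public IP:

```
; Both names resolve to the same VPS address
SUBDOMAIN1.DOMAIN.TLD.    IN    A    203.0.113.10
SUBDOMAIN2.DOMAIN.TLD.    IN    A    203.0.113.10
```

Most registrars and DNS providers expose the same thing through a web form: create one A record per subdomain, all pointing at the same address.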

Initializing LXD and creating Linux containers

Let’s start with a clean-slate Ubuntu 18.04 LTS server, with no additional packages installed or modifications made. Let’s run the customary update and upgrade on it, to make sure we have the latest packages available to us.
$ sudo apt update
$ sudo apt upgrade

LXD init

LXD is the background service (a daemon) that manages containers, and Linux Containers (LXC) is the containerization technology behind it. Now we can run lxd init, which will ask us several questions and set up the containerization environment for us. It is best to use OpenZFS to store and manage our container-related data, so install the ZFS utilities first.
$ sudo apt install zfsutils-linux
$ sudo lxd init
LXD will ask you a lot of technical questions, and we may not be able to cover all of them in depth. However, let’s stick to a brief description and get our LXC environment up and running.
  • When it comes to LXD clustering, we will use the default option, which is no. Just press <Enter>
  • New Storage Pool? Again, the default option yes.
  • Name of the storage pool? You can give it a reasonable name like lxc_pool, because default is not a meaningful name.
  • Storage backend? Let’s go with zfs, the default option.
  • Say yes to creating a new ZFS pool.
  • With block device, you have an option where you can create a new ZFS pool over an entire block device. If your VPS doesn’t have additional block storage attached to it, go with the default option of no. If you have selected no in the previous step, you would be asked to assign some space in the current file system which LXC will use. Default 15GB is a good starting point for it.
  • MAAS server connection is not required. Enter no.
  • Local Network Bridge is extremely important for what we are going to do. Answer that with a yes.
  • Let the bridge have the default name lxdbr0.
  • IPv4 addresses? Leave that to the default auto as well.
  • IPv6 addresses are strictly optional. For this tutorial, we are going to say none and not use the default value.
  • Say, no to making LXD available over the network.
  • Automatic update of Cached Images? yes of course!
  • You can print the summary of the entire configuration in the last prompt if you want to. Here’s the output of our configuration for reference.
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  managed: false
  name: lxdbr0
  type: ""
storage_pools:
- config:
    size: 15GB
  description: ""
  name: lxc_pool
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: lxc_pool
      type: disk
  name: default
cluster: null
If you are logged in as a non-root user, add that user to the lxd group (replace USER with your username). Note that the new group membership takes effect on your next login.
$ sudo usermod -aG lxd USER
Now that we have LXD up and running, we can start creating our containers. LXC containers are different from Docker containers: you can treat a Docker container as simply a package that ships a single application and its dependencies, whereas an LXC container behaves more like a lightweight virtual machine, running a full Linux userspace of its own.
