homelab

1. Hardware architecture

In a production-grade IT hardware architecture, several specialized appliances are normally deployed. However, homelabs are placed in residential environments where space restrictions can be the limiting factor for hardware selection.

Selecting devices with small form factors is a must to reduce physical space requirements. In addition, power efficiency is mandatory for this kind of device, as it operates 24x7 while normally running low computational loads. Virtualization technologies help in both aspects: space savings and power efficiency.

The selected hardware equipment for the homelab is:

| Device name | Device type | Model | Description | IP address |
| --- | --- | --- | --- | --- |
| router1 | Networking device | MikroTik hAP ax² | Flexible SOHO router running RouterOS | 192.168.1.1 |
| proxmox1 | Virtualization node | Components bought from different vendors | Flexible, power-efficient server running the Proxmox VE hypervisor | 192.168.1.6 |
| ups | UPS unit | PowerWalker VI 1000 STL | Uninterruptible power supply unit | - |
| kvm | KVM extender | Cheap AliExpress model | Keyboard, video, mouse (KVM) extender for managing a remote server locally | - |

In the following sections, each device is described briefly; later chapters describe the solution in more detail.

Network topology

My home network is a traditional SOHO network with a single network device (MikroTik hAP ax²) acting as router, DHCP server, internet gateway, Ethernet switch, Wi-Fi access point and VPN server. There is an external modem (ONT) from the ISP. Most devices connect through wireless interfaces; only 2 devices (the TV and proxmox1) connect using wired Ethernet.

All physical networking is configured in the MikroTik hAP ax² device using the WinBox client, including:
  • setup of 3 different local IP networks

  • PPPoE negotiation with the ISP to get a public IP for the WAN interface

  • VPN server setup based on WireGuard

  • update of the public IP in the Cloudflare DNS service using this custom RouterOS script

The most meaningful details of the physical networks are the following:

| Network address | Network name | Description | Connected devices |
| --- | --- | --- | --- |
| 192.168.1.0/24 | LAN | Main network of the router | Virtualization server, TV and trusted wireless devices connect here |
| 192.168.100.0/24 | VPN | Network for connecting securely to the LAN network from the internet | Trusted devices properly configured in the WireGuard VPN server |
| 10.1.1.0/24 | Guest | Provides internet access only, blocking all internal traffic to the LAN network | Untrusted wireless devices connect here |

Virtual networking, which is created within the virtualization server, is described in the virtual servers chapter. The next diagram only shows physical networking:

Figure: networking devices

Virtualization node

In a residential environment, optimizing the number of specialized hardware appliances is key to reducing required space. Using virtualization servers (a.k.a. hypervisors) makes it possible to run virtual IT infrastructure on a single general-purpose hardware appliance like proxmox1.

Proxmox VE offers a complete virtualization solution ready for professional datacenter management. It is based on KVM kernel virtualization technology and provides many off-the-shelf solutions and best practices.

The Proxmox hypervisor allows the creation of virtual servers (VMs and LXC containers) by sharing the underlying hardware of the virtualization node. The hypervisor manages physical hardware as resource pools that are assigned to virtual servers at creation time. Each virtual server behaves as if its hardware resources were dedicated.

Hardware selection was mainly driven by power efficiency. With a combined TDP (thermal design power) of 15 watts, the CPU (Intel Celeron J5040) and motherboard (ASRock J5040-ITX) were outstanding options.

Since all disks are SSDs (solid-state drives), I/O is not only fast but also power efficient (less than 1 watt of power consumption per disk).

Emergency power supply

Having a reliable power supply for the storage server is key to guaranteeing safe long-term archival of my family media and sensitive documents.

The PowerWalker VI 1000 STL is a monitored UPS (uninterruptible power supply) that keeps the router and the "proxmox1" node powered for nearly 1 hour in the event of a power outage. If the battery is running out of energy, the UPS triggers a graceful shutdown via the USB-B cable that connects it to the "proxmox1" node.
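
The README does not name the monitoring software that drives this shutdown. On a Debian-based hypervisor such as Proxmox VE, one common way to wire up a USB-triggered shutdown is Network UPS Tools (NUT); the sketch below assumes NUT and a USB HID-compatible UPS, and the UPS label `powerwalker` and the password are illustrative:

```bash
# On proxmox1: install Network UPS Tools (assumption: the README does not name the tool used)
apt update && apt install -y nut

# Declare the USB-attached UPS (usbhid-ups covers most USB HID UPS units)
cat >>/etc/nut/ups.conf <<'EOF'
[powerwalker]
    driver = usbhid-ups
    port = auto
EOF

# Run NUT standalone and shut the host down when the battery runs low
sed -i 's/^MODE=.*/MODE=standalone/' /etc/nut/nut.conf
cat >>/etc/nut/upsd.users <<'EOF'
[upsmon]
    password = changeme
    upsmon master
EOF
cat >>/etc/nut/upsmon.conf <<'EOF'
MONITOR powerwalker@localhost 1 upsmon changeme master
SHUTDOWNCMD "/sbin/shutdown -h now"
EOF

systemctl restart nut-server nut-monitor
upsc powerwalker@localhost   # check battery charge and UPS status
```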

KVM extender

There is no easy physical access to the virtualization server when SSH is not available. For core admin chores like UEFI changes, host OS installation or debugging boot failures, a KVM extender is really handy. A keyboard, video and mouse (KVM) extender enables users to work on a remote computer as if they were sitting in front of it.

Figure: KVM extender diagram

Some content of this section is taken from https://video.matrox.com/, which provides a great description of what a KVM extender is and how it works.

Hardware architecture diagram

With all devices described in some detail, the complete hardware architecture is shown below:

Figure: physical architecture

2. Virtualization server (hypervisor)

There is only one virtualization node (proxmox1), where the Proxmox hypervisor is installed. Its hardware specs are the following:

| Host OS name | IP address | Operating System | CPUs | Cores | RAM | OS disk | Data disks |
| --- | --- | --- | --- | --- | --- | --- | --- |
| proxmox1 | 192.168.1.6 | Proxmox VE 7.3 (based on Debian Linux) | 1× Intel Celeron J5040 | 4 | 16 GB | 0.5 TB SSD | 2× 1 TB SSD |

Since Proxmox Virtual Environment (PVE) is a Type 1 hypervisor, it is installed bare metal, directly on the hardware. Installing the Proxmox hypervisor is no harder than any bare-metal Linux installation. Ventoy was used to flash the Proxmox VE ISO file onto a USB stick; proxmox1 was then booted from the USB drive and a standard installation was performed using the KVM extender.
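
A quick sketch of the installation-media step; the USB device name and the ISO file name are illustrative:

```bash
# Install Ventoy on the USB stick (this wipes it), then drop the Proxmox ISO onto it
sh Ventoy2Disk.sh -i /dev/sdX
mount /dev/sdX1 /mnt
cp proxmox-ve_7.3-1.iso /mnt/   # any ISO copied to the Ventoy partition becomes bootable
umount /mnt
```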

OS disk

The Proxmox VE installer provides a simple but professional OS disk layout by default. The Proxmox VE software is installed only on the OS disk (/dev/sdb), reserving the other 2 disks for data storage.

| OS disk partition | LVM LV | Type | Goal |
| --- | --- | --- | --- |
| sdb1 | - | ext2? | GRUB2 OS-independent bootloader partition |
| sdb2 | - | vfat | EFI System Partition (ESP), which makes it possible to boot on EFI systems. Linux kernel images are stored in this partition, mounted at /boot/efi |
| sdb3 | swap | swap | LVM LV where Proxmox VE places the swap space |
| sdb3 | root | ext4 | LVM LV mounted as the root file system (/) of Proxmox |
| sdb3 | data | LVM-thin | LVM thin-provisioning volume used to store vDisks |

The table above only shows partitions and LVM LVs. There is also a single LVM physical volume (PV) on sdb3 and a volume group (VG) called "pve".
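
These defaults can be verified on the running hypervisor with standard LVM tooling:

```bash
# Partitions and mount points of the OS disk
lsblk /dev/sdb

# The single PV (on sdb3), the "pve" VG and its LVs (swap, root, data)
pvs
vgs pve
lvs pve
```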

Data disks

A fault-tolerant long-term storage solution needs to be selected for the two 1 TB SSD data disks (/dev/sda and /dev/sdc). Several storage solutions were considered when designing the storage system.

The first approach was checking fault-tolerant storage backends provided natively by Proxmox VE. There are 2 different storage technologies:

| Technology | Description | Comments |
| --- | --- | --- |
| Ceph | A self-healing, self-managing, shared, reliable and highly scalable storage system | Cluster technology designed for several nodes. Extra administration complexity. Not a simple solution for only 1 node. |
| ZFS | A combined file system and logical volume manager with extensive protection against data corruption, various RAID modes, and fast, cheap snapshots | Memory intensive. The lack of recommended ECC memory was a no-go. |

Since both HCI-native storage technologies supported by Proxmox were discarded, the storage solution was built from scratch in a virtual server. The data disks are not managed by the Proxmox hypervisor, which delegates that task to a virtual machine acting as a storage server.

A virtual machine named "nas" was created with both data disks directly attached to it by enabling disk passthrough at the hypervisor level. With this configuration, the data disks (/dev/sda and /dev/sdc) are used directly neither by the hypervisor nor by other virtual servers.

This virtual machine is based on the open-source NAS server OpenMediaVault (OMV), allowing central management of the storage services. For a detailed description of the long-term fault-tolerant storage design, check the NAS server section.

Network topology

The Proxmox installer detected the physical LAN network (192.168.1.0/24) out of the box, making it easy to set up a fixed IP address for proxmox1 (192.168.1.6).

The virtualization node has only 1 NIC, directly attached to my router. However, Proxmox allows setting up a bridged network configuration, extending the LAN address space to the virtual servers started inside the hypervisor.

Proxmox creates a Linux bridge interface (vmbr0) to which all virtual servers are connected. This bridge is also connected to the physical NIC, reusing the DHCP server and internet gateway of my MikroTik router. Consequently, virtual servers belong to the same IP network (LAN) as the rest of my home devices.
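
This is the standard Proxmox bridged setup generated by the installer in /etc/network/interfaces; a minimal sketch, where the NIC name enp1s0 is illustrative:

```bash
# /etc/network/interfaces on proxmox1 (sketch; the NIC name is illustrative)
cat >/etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.6/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
EOF
```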

3. Virtual servers

Proxmox allows creating 2 types of virtual servers: KVM VMs and LXC containers. This chapter describes the software-defined infrastructure (virtual servers and networks) that was created.

For running the homelab, 2 virtual servers were deployed, acting as application server and storage server:

| Name | Server type | IP addresses | Goal |
| --- | --- | --- | --- |
| nas | Storage server | 192.168.1.5 | Virtual machine that centralizes all shared storage devices, technologies and services (RAID 1, SMB drives, storage management). Based on the open-source NAS server OpenMediaVault (OMV). |
| docker | Application server | 192.168.1.4, 192.168.1.7 | Linux Container (LXC) where all docker containers are executed. Uses SMB shared storage drives served by the storage server. |

After describing the virtual servers in isolation, a network diagram helps to understand how they are connected. To differentiate physical from virtual networking, the former uses black lines and the latter green lines.

The complete network diagram, including physical and virtual networking, can be found here:

Figure: complete network diagram

NAS server

This storage server is used to deploy the open-source NAS server OpenMediaVault (OMV). OMV makes it quite simple to create a storage server on commodity hardware like proxmox1. OMV takes care of all storage-related tasks, such as:
  • managing physical disks

  • creation and monitoring of fault tolerant storage devices (like RAID devices)

  • creation of file systems and SMB shares

  • policy definition and enforcement: users, permissions and quotas

The hardware specs are the following:

| Name | Type | OS | vCPUs | RAM | Storage |
| --- | --- | --- | --- | --- | --- |
| nas | Virtual machine | OpenMediaVault 6.3 (based on Debian 11) | 2 | 3 GB | 1 vDisk (for the OS), 2 SSD physical disks (via disk passthrough) |

Proxmox VE allows creating virtual machines with direct access to physical disks using disk passthrough. OMV detects both data disks as attached SATA disks.
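
A sketch of how such a passthrough is configured from the hypervisor shell; the VM ID (100) and the disk IDs are illustrative:

```bash
# Attach both data disks to the "nas" VM using their stable by-id paths
qm set 100 -scsi1 /dev/disk/by-id/ata-SSD_DATA_1
qm set 100 -scsi2 /dev/disk/by-id/ata-SSD_DATA_2

# Verify the passthrough entries in the VM configuration
qm config 100 | grep scsi
```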

Using the OMV administration web tool, creating a fault-tolerant 1 TB RAID 1 device with 2 SATA disks is quite simple. OMV manages mdadm (Linux software RAID) under the hood, offering a really smooth experience. Since I wanted a file-based storage server (no block-based storage required), an ext4 file system was created over the RAID device using the OMV web GUI.
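
OMV drives all of this from its web GUI; under the hood the result is roughly equivalent to the following mdadm and ext4 commands (the array name /dev/md0 is illustrative):

```bash
# Create a RAID 1 array from the two 1 TB data disks and put ext4 on top of it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdc
mkfs.ext4 -L data /dev/md0

# Check synchronization progress and array health
cat /proc/mdstat
mdadm --detail /dev/md0
```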

The next step was choosing which file-based network protocol to use to give clients access. OMV mainly offers NFS and SMB; the SMB protocol was finally chosen due to its security features.

Using the OMV administration tool again, 3 storage drives were created for external access. Each drive was secured with a user and password. The main features of each SMB drive are listed below:

| Drive name | Protocol | Description | Authorized users |
| --- | --- | --- | --- |
| backups | SMB | Long-term storage of virtual server backup files. When Proxmox backs up the nas VM, NFS storage is frozen, so this VM has a special 2-step backup process. | proxmox |
| documents | SMB | Long-term storage for important documents. This drive is used by the Nextcloud application on the docker server. | docker |
| media | SMB | Long-term storage for family media (photos and videos). This drive is used by the PhotoPrism application on the docker server. | docker |
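
From a client's point of view (for example the docker server), each drive is consumed as a regular CIFS mount. A minimal sketch, where the mount point and credentials are illustrative:

```bash
# Mount the "documents" share exported by the nas server (192.168.1.5)
apt install -y cifs-utils
mkdir -p /mnt/documents
mount -t cifs //192.168.1.5/documents /mnt/documents \
      -o username=docker,password=changeme,vers=3.0
```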

Docker server

Docker is the de facto standard for building, shipping and running containerized applications, not only in the enterprise but also in homelabs. It is no surprise that almost all user-facing applications running in the homelab are deployed as docker containers.

Before describing containers running in the docker server, let’s describe the underlying server specs:

| Name | Server type | Guest OS | vCPUs | RAM | Storage |
| --- | --- | --- | --- | --- | --- |
| docker | LXC container | Proxmox LXC Debian 11 template | 3 | 4 GB | 1 vDisk (docker image storage), external SMB drives |

Unlike the NAS server, the docker server was built on a Linux Container (LXC) instead of a virtual machine. Using Linux Containers has several advantages, like reduced RAM consumption, since the hypervisor and the containers share a single Linux kernel. However, this deployment model has some limitations on how certain tasks are done, and not all software can be deployed inside a Linux Container.

OpenMediaVault is distributed both as an ISO file and as a .deb package. Several errors occurred during the installation of the .deb package in an LXC container; the official documentation recommends installing OMV on bare metal or in virtual machines, not Linux Containers. Due to those errors, OMV was finally deployed in a virtual machine using the ISO file.

On the other hand, the docker packages worked perfectly well inside LXC containers using the Debian 11 template. After running Linux Containers for a while, I can recommend this technology; very few limitations were detected. A complete and isolated docker server using only 80 MB of RAM is great resource utilization.

A docker environment was installed from Debian packages in the LXC container:

- docker.io: Docker engine
- docker-compose: Multi-container docker applications

Additionally, some extra system packages were installed for house-keeping tasks and automations (the installation commands are sketched below):

- qemu-guest-agent: Guest agent for better power management from the hypervisor
- rclone: Off-site backup
- minidlna: Export media content via DLNA to the smart TV
- ssmtp: command-line mail tool redirected to my Gmail account
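
A sketch of the container preparation and package installation. The container ID (101) is illustrative, and enabling nesting is a common requirement for running docker inside an LXC container that the README does not spell out:

```bash
# On the hypervisor: allow nested containers inside the LXC guest
pct set 101 --features nesting=1,keyctl=1

# Inside the "docker" container: docker engine plus the house-keeping packages listed above
apt update
apt install -y docker.io docker-compose \
               qemu-guest-agent rclone minidlna ssmtp

# Quick smoke test of the docker engine
docker run --rm hello-world
```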

4. Software architecture

Once the virtual servers (storage and application) are described, it is time to show what applications and services are available in the homelab. In the following sections, a functional description of the applications and a complete logical architecture diagram are presented.

Applications

All applications exposed to end users are deployed as Docker containers. Most of them are open-source, community-driven applications that can easily be found in docker image repositories like Docker Hub or LinuxServer. There is also one internal development (portfolio manager) that is likewise deployed as a docker container.

The list of docker containers available in the homelab is the following:

| Application | Description | Repo | Image |
| --- | --- | --- | --- |
| Pi-hole | Open-source network ad blocker | Public | Docker Hub |
| Portainer CE | A lightweight service delivery platform for containerized applications | Public | Docker Hub |
| Portfolio manager | Internal development of a personal investing tool based on Spanish mutual funds | Private | Docker Hub |
| Heimdall | Dashboard for all applications and services deployed in the homelab | Public | LinuxServer |
| Checkmk Raw Edition | Open-source infrastructure and application monitoring tool | Public | Docker Hub |
| Nextcloud | On-premise file storage and collaboration suite | Public | Docker Hub |
| Nginx Proxy Manager | Reverse proxy for exposing web services in a secure way, based on Let's Encrypt | Public | Docker Hub |
| Transmission | A fast, easy and free BitTorrent client | Public | LinuxServer |
| PhotoPrism | AI-powered photo app you can run safely at home | Public | Docker Hub |
| MariaDB | Open-source relational database | Public | Docker Hub |
| PostgreSQL | Advanced open-source relational database | Public | Docker Hub |

The above containers are deployed in 2 different docker-compose stacks (homelab.yml and portainer.yml). All docker-related information can be found here.
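
Bringing the stacks up then reduces to two commands on the docker server:

```bash
# Start (or update) both stacks defined in the repository
docker-compose -f homelab.yml up -d
docker-compose -f portainer.yml up -d

# List the running containers of a stack
docker-compose -f homelab.yml ps
```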

Service architecture

By default, each docker container exposes its web application on a different TCP port, providing a very fragmented and uncomfortable browsing experience. In addition, some applications use HTTP and others use self-signed HTTPS. One common best practice for exposing different web applications in a secure and professional way is to set up a reverse proxy.

Nginx Proxy Manager (NPM) is deployed as a docker container and is the only web application that end users can reach directly. Since all other docker containers only have internal IP addresses (10.10.10.0/24), web traffic needs to be forwarded through the proxy. Each docker container gets an FQDN under the thehomelab.site domain. When a request is sent to the proxy (e.g. nextcloud.thehomelab.site), NPM checks its configuration in order to forward the request to Nextcloud's internal IP address. NPM is also responsible for all SSL handshakes and for Let's Encrypt certificate renewals using DNS challenges with Cloudflare.

Using NPM as a reverse proxy has several architectural advantages:
  • There is only 1 HTTPS port (443) open for all web applications exposed from docker. All web traffic is forwarded via that unique port, hiding the internal ports of each docker container.

  • NPM uses a valid SSL certificate from Let’s Encrypt to secure all web applications behind the proxy. Client web browsers therefore see a valid certificate for all of them.

  • Each docker container with a web interface gets an FQDN under the thehomelab.site domain. NPM forwards requests internally to the corresponding docker container based on the FQDN.

  • There is a central point for controlling web traffic, where other best practices can be applied in the future (identity management, quality of service, etc.).

There are 3 relational databases running in the homelab: 1 PostgreSQL server (portfolio manager) and 2 MariaDB servers (PhotoPrism and Nextcloud). The serving ports for these databases are not exposed to the LAN network (192.168.1.0/24); they are only reachable by docker containers on the internal docker network (10.10.10.0/24). This design enhances security by reducing the exposed surface.
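
A minimal sketch of this wiring using plain docker commands; the network and container names are illustrative, and the real deployment uses the docker-compose stacks mentioned above:

```bash
# Internal docker network shared by NPM and the applications
docker network create --subnet 10.10.10.0/24 homelab_net

# NPM is the only container with published ports (80/443 plus its admin UI on 81)
docker run -d --name npm --network homelab_net \
  -p 80:80 -p 443:443 -p 81:81 jc21/nginx-proxy-manager:latest

# Application and database containers publish nothing: they are reachable only through the proxy
docker run -d --name nextcloud --network homelab_net nextcloud:latest
docker run -d --name mariadb --network homelab_net \
  -e MARIADB_ROOT_PASSWORD=changeme mariadb:latest
```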

All of this design is shown in the logical architecture diagram.

Logical architecture diagram

In this section, the main services and batch jobs deployed in the homelab are presented. The diagram includes, from a logical point of view, software running both on the bare-metal infrastructure (hypervisor) and on the virtual servers ("nas" and "docker").

In addition, the logical architecture diagram presents the main external services used by the system. Excluding domain registration, all of them are free to use:

- NameCheap: Domain registrar (thehomelab.site)
- CloudFlare: DNS management
- Let's Encrypt: SSL certificate issuance
- Mega.io: Off-site backup
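
For the off-site backup listed above, rclone ships a native Mega backend; a minimal sketch, where the remote name (mega-backup) and the paths are illustrative:

```bash
# One-time interactive setup of the Mega remote (choose the "mega" storage type)
rclone config

# Push the documents share to Mega; typically run periodically from cron
rclone sync /mnt/documents mega-backup:homelab/documents --progress
```
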
Figure: logical architecture diagram
