This document describes the step-by-step procedure for the initial setup of nc-env on a Fedora Workstation system.
See the project homepage to learn more about nc-env.
| Component | Version |
|---|---|
| Operating System | Fedora Linux 36 (Workstation Edition) |
| LXD | 5.1 |
| vagrant | 2.2.19 |
| vagrant-lxd | 0.5.6 |
Install all the software needed for the project, starting by updating the package repositories:
$ sudo dnf update
The following packages are not strictly needed, but they may make the next operations more convenient, depending on your tooling preferences.
$ sudo dnf install vim
Install snap:
$ sudo dnf install snapd
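On Fedora, the snap command may not be usable right after the installation; a hedged sketch of the usual follow-up steps (the symlink enables classic snap support and is optional for LXD; logging out and back in also refreshes the snap paths):
$ sudo systemctl enable --now snapd.socket
$ sudo ln -s /var/lib/snapd/snap /snap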
Install LXD using snap:
$ sudo snap install lxd
Verify that LXD has been correctly installed:
$ snap list lxd
Name  Version      Rev    Tracking       Publisher   Notes
lxd   5.1-1f6f485  23037  latest/stable  canonical✓  -
While it is recommended to use ZFS as the backend for the LXD storage pools of this project, you can skip this section if you plan to use directory-based LXD storage pools.
If you plan to use ZFS as the backend for the LXD storage pools, you should have a disk or an unformatted partition of 20 GB or more available for that.
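If you are unsure which device to use, you can list the local block devices and check that the chosen one is unformatted (an empty FSTYPE column means no filesystem is present). /dev/vdb is the device used later in this guide; adapt it to your system:
$ lsblk -f /dev/vdb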
The ZFS installation instructions for Fedora can be found at this link.
Verify that it has been correctly installed:
$ sudo zpool --version
zfs-2.1.4-1
zfs-kmod-2.1.4-1
Add the HashiCorp signing key and the official Vagrant repository to the local DNF sources:
$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
Install the vagrant package:
$ sudo dnf install vagrant
Verify that it has been correctly installed:
$ vagrant --version
Vagrant 2.2.19
Vagrant plugins are installed with the plugin subcommand, so let's install the vagrant-lxd plugin by issuing the following command:
$ vagrant plugin install vagrant-lxd
Installing the 'vagrant-lxd' plugin. This can take a few minutes...
Fetching thread_safe-0.3.6.gem
Fetching tzinfo-1.2.9.gem
Fetching minitest-5.15.0.gem
Fetching activesupport-5.2.7.gem
Fetching multipart-post-2.1.1.gem
Fetching faraday-0.17.5.gem
Fetching public_suffix-4.0.7.gem
Fetching addressable-2.8.0.gem
Fetching sawyer-0.9.1.gem
Fetching hyperkit-1.3.0.gem
Fetching vagrant-lxd-0.5.6.gem
Installed the plugin 'vagrant-lxd (0.5.6)'!
Verify that it has been correctly installed:
$ vagrant plugin list
vagrant-lxd (0.5.6, global)
As recommended in the vagrant-lxd documentation (here), to ensure synced folders work as expected, run the following commands:
$ echo root:$(id -u):1 | sudo tee -a /etc/subuid
$ echo root:$(id -g):1 | sudo tee -a /etc/subgid
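You can verify that the mappings have been appended; a hedged example of the expected output, where 1000 is the typical first-user UID/GID on Fedora and yours may differ:
$ tail -n 1 /etc/subuid /etc/subgid
==> /etc/subuid <==
root:1000:1
==> /etc/subgid <==
root:1000:1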
Pick the latest release of the project from here; in this example, v20220421:
$ cd ~/Documents
$ wget https://codeberg.org/pmarini/nc-env/archive/v20220421.tar.gz
$ tar -xzf v20220421.tar.gz
$ cd nc-env/
Go through the LXD initialization process:
The process below assumes you have a free local block device that we can use as a storage pool. In the listing below, the block device is identified as /dev/vdb in the system. If you don't have a free local block device, you can use the dir backend.
Make sure your user is in the lxd group.
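If the user is not yet a member, a minimal sketch to check and fix that (the membership only takes effect in a new login session, which newgrp simulates for the current shell):
$ getent group lxd
$ sudo usermod -aG lxd "$USER"
$ newgrp lxd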
$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: pool01
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes
Path to the existing block device: /dev/vdb
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients: <your password>
Again: <your password>
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Create an LXD profile for the containers created in nc-env:
$ lxc profile create nc-env-profile
$ lxc profile device add nc-env-profile root disk path=/ pool=pool01 size=5GB
$ lxc profile device add nc-env-profile eth0 nic nictype=bridged name=eth0 parent=lxdbr0
$ lxc profile list
+----------------+---------------------+---------+
|      NAME      |     DESCRIPTION     | USED BY |
+----------------+---------------------+---------+
| default        | Default LXD profile | 1       |
+----------------+---------------------+---------+
| nc-env-profile |                     | 0       |
+----------------+---------------------+---------+
Configure the internal LXD DNS server to automatically resolve hostnames of containers in nc-env:
$ lxc network set lxdbr0 dns.domain '<your domain>'
$ lxc network set lxdbr0 dns.mode managed
$ sudo vim /etc/systemd/system/lxd-dns-lxdbr0.service
[Unit]
Description=LXD per-link DNS configuration for lxdbr0
BindsTo=sys-subsystem-net-devices-lxdbr0.device
After=sys-subsystem-net-devices-lxdbr0.device

[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns lxdbr0 <IP address of interface>
ExecStart=/usr/bin/resolvectl domain lxdbr0 '<your domain>'

[Install]
WantedBy=sys-subsystem-net-devices-lxdbr0.device
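The <IP address of interface> placeholder in the unit file is the IPv4 address of the lxdbr0 bridge; you can read it from LXD and strip the /24 suffix. The address shown below matches the network listing later in this guide; yours will likely differ:
$ lxc network get lxdbr0 ipv4.address
10.20.73.1/24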
Now you can reload the systemd daemons and enable and start the service:
$ sudo systemctl daemon-reload
$ sudo systemctl enable lxd-dns-lxdbr0.service
$ sudo systemctl start lxd-dns-lxdbr0.service
$ sudo resolvectl status lxdbr0
Link 3 (lxdbr0)
Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: <IP address of interface>
DNS Servers: <IP address of interface>
DNS Domain: <your domain>
In Fedora, for containers to correctly get their IPv4 address, you should either disable the firewall (the firewalld service) or put the network interface lxdbr0 in the trusted zone.
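If you opt for the trusted-zone approach, a minimal sketch of the commands, assuming the default firewalld setup of Fedora Workstation:
$ sudo firewall-cmd --zone=trusted --change-interface=lxdbr0 --permanent
$ sudo firewall-cmd --reload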
At this point you should be able to start using LXD to run your containers.
Let's first display the storage pools:
$ lxc storage list
+--------+--------+--------+-------------+---------+---------+
|  NAME  | DRIVER | SOURCE | DESCRIPTION | USED BY |  STATE  |
+--------+--------+--------+-------------+---------+---------+
| pool01 | zfs    | pool01 |             | 1       | CREATED |
+--------+--------+--------+-------------+---------+---------+
Then, let's display the networks:
$ lxc network list
+--------+----------+---------+---------------+------+-------------+---------+---------+
|  NAME  |   TYPE   | MANAGED |     IPV4      | IPV6 | DESCRIPTION | USED BY |  STATE  |
+--------+----------+---------+---------------+------+-------------+---------+---------+
| enp1s0 | physical | NO      |               |      |             | 0       |         |
+--------+----------+---------+---------------+------+-------------+---------+---------+
| lxdbr0 | bridge   | YES     | 10.20.73.1/24 | none |             | 1       | CREATED |
+--------+----------+---------+---------------+------+-------------+---------+---------+
Let's start a sample container:
$ lxc launch ubuntu:22.04 c1
Creating c1
Starting c1
Verify its status:
$ lxc ls
+------+---------+---------------------+------+-----------+-----------+
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| c1   | RUNNING | 10.20.73.194 (eth0) |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
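If you configured the per-link DNS service earlier, the container name should now be resolvable from the host; a quick hedged check (substitute the domain you set in dns.domain, and expect an output along these lines):
$ resolvectl query c1.<your domain>
c1.<your domain>: 10.20.73.194 -- link: lxdbr0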
Log into the container and get information about the system:
$ lxc exec c1 bash
root@c1:~# hostnamectl
Static hostname: c1
Icon name: computer-container
Chassis: container
Machine ID: 615754225c3e48b2945deebfdeff4b4a
Boot ID: b71e857de62a4c239cd3d9a274a433e6
Virtualization: lxc
Operating System: Ubuntu 20.04.4 LTS
Kernel: Linux 5.13.0-40-generic
Architecture: x86-64
Check the ZFS storage pool, using the zpool utility:
$ sudo zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool01  19,5G   586M  18,9G        -         -     0%     2%  1.00x    ONLINE  -
Delete the sample container:
$ lxc delete -f c1
Download from the official repository the latest released binary of mkcert, a utility that makes it easy to create locally trusted TLS certificates:
$ pwd
/home/fedora_usr/Documents/nc-env/
$ mkdir my-local-env
$ cd my-local-env
$ wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
Install a needed dependency for mkcert:
$ sudo dnf install nss-tools
Rename the mkcert binary by stripping the version:
$ mv mkcert-v1.4.4-linux-amd64 mkcert
Make it executable:
$ chmod +x mkcert
Run the install command:
$ ./mkcert -install
The local CA is already installed in the system trust store! 👍
The local CA is now installed in the Firefox and/or Chrome/Chromium trust store (requires browser restart)! 🦊
Check the local CAROOT folder by issuing the following command. We will need to make the files in that folder available to the containers as well:
$ ./mkcert -CAROOT
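The rootCA.pem and rootCA-key.pem files that appear in the folder layout later in this guide come from this CAROOT folder. A minimal sketch of copying them, assuming the template's Readme.md asks you to place them in the artifacts folder of your container directory:
$ cp "$(./mkcert -CAROOT)"/rootCA.pem "$(./mkcert -CAROOT)"/rootCA-key.pem <container folder>/artifacts/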
Create the first container with nc-env. As a first container, let's choose the Nextcloud Standalone container, that is, template01-nextcloud-standalone:
$ pwd
/home/fedora_usr/Documents/nc-env/my-local-env
$ cp -r ../templates/template01-nextcloud-standalone nc01
$ cd nc01
Follow the instructions that you find in the file Readme.md.
At the end of the configuration the local folder layout should be similar to the following:
├── artifacts
│   ├── mkcert
│   ├── nc-fulltext-live-indexer.service
│   ├── nc-redirect.conf
│   ├── nextcloud-23.0.3.tar.bz2
│   ├── nextcloud-23.0.3.tar.bz2.md5
│   ├── nextcloud.conf
│   ├── occ
│   ├── redis.conf
│   ├── rootCA-key.pem
│   └── rootCA.pem
├── log
│   └── provision.log
├── provision.sh
├── Readme.md
└── Vagrantfile
If this is your first container created with vagrant-lxd, you'll need to add a certificate to the LXD trust store to allow Vagrant to interact with LXD. To do that, just issue the following command:
$ vagrant up
The LXD provider could not authenticate to the daemon at https://127.0.0.1:8443.
You may need configure LXD to allow requests from this machine. The
easiest way to do this is to add your LXC client certificate to LXD's
list of trusted certificates. This can typically be done with the
following command:
$ lxc config trust add /home/fedora_usr/.vagrant.d/data/lxd/client.crt
You can find more information about configuring LXD at:
https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration
As suggested in the error message, issue the following command (change the name of the user according to your setup):
$ lxc config trust add /home/fedora_usr/.vagrant.d/data/lxd/client.crt
Then, issue the following command:
$ vagrant up > log/provision.log
You may want to follow the provisioning progress with the following command:
$ tail -f log/provision.log
Once the provisioning has ended, you can see that the container has been created:
$ lxc ls
+------+---------+---------------------+------+-----------+-----------+
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| nc01 | RUNNING | 10.20.73.116 (eth0) |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
Assign nc01 the profile nc-env-profile:
$ lxc profile add nc01 nc-env-profile
The same result could also have been achieved by setting the lxd.profiles parameter in the Vagrantfile.
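To confirm the profile is now attached to the container, you can inspect its configuration; a hedged check, as the exact profile list depends on your Vagrantfile settings:
$ lxc config show nc01 | grep -A 2 profiles
profiles:
- default
- nc-env-profile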
You can now start using your brand-new Nextcloud instance by opening the instance URL in your browser. The instance URL and the admin user login credentials are displayed at the end of the log. For example, if you set MACHINE_HOSTNAME=nc01.localenv.com in provision.sh, you will be able to access the instance at the following URL: https://nc01.localenv.com.
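If the hostname does not resolve in your browser (for example, because the domain in provision.sh differs from the dns.domain you set on lxdbr0), a hedged fallback is a static hosts entry, using the container's IP address taken from the lxc ls output above:
$ echo "10.20.73.116 nc01.localenv.com" | sudo tee -a /etc/hosts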
To build up more complex environments please check the list of templates available and pick the ones that fit your needs!