
Initial Setup of nc-env in Ubuntu Desktop 20.04

Introduction

This document describes the step-by-step procedure for the initial setup of nc-env in an Ubuntu Desktop 20.04 system.

See the project homepage to learn more about nc-env.

Software Environment

| Component        | Version                          |
| ---------------- | -------------------------------- |
| Operating System | Ubuntu 20.04.4 LTS (Focal Fossa) |
| LXD              | 5.0.0                            |
| vagrant          | 2.2.19                           |
| vagrant-lxd      | 0.5.6                            |

Procedure

Install all the software needed for the project, starting by updating the package lists:

$ sudo apt update

The following packages are not strictly needed, but they may make the next operations more convenient, depending on your tooling preferences.

$ sudo apt install vim tree

Check the Storage

You should have a spare disk or an unformatted partition of 20 GB or more that can be used as an LXD ZFS-based storage pool. If you don't have one, you can use the directory storage backend and adjust your answers in the "LXD Configuration" section below accordingly.
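
A quick way to list candidate block devices is lsblk (loop devices are excluded here; a disk or partition with an empty FSTYPE and no mountpoint is a candidate):

$ lsblk -e7 -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT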

LXD Installation

Install LXD using snap:

$ sudo snap install lxd
lxd 5.0.0-b0287c1 from Canonical✓ installed

Verify that LXD has been correctly installed:

$ snap list lxd
Name  Version        Rev    Tracking       Publisher   Notes
lxd   5.0.0-b0287c1  22923  latest/stable  canonical✓  -
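
If subsequent commands complain that the daemon is not ready yet, you can explicitly wait for it:

$ sudo lxd waitready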

ZFS Utilities Installation

Install the ZFS package:

$ sudo apt install zfsutils-linux

Verify that it has been correctly installed:

$ sudo zpool --version
zfs-0.8.3-1ubuntu12.13
zfs-kmod-2.0.6-1ubuntu2.1

Vagrant Installation

Install the curl package:

$ sudo apt install curl

Add the signing key and the official Vagrant repository address to the local APT sources:

$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
OK
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
$ sudo apt update

Install the vagrant package:

$ sudo apt install vagrant

Verify that it has been correctly installed:

$ vagrant --version
Vagrant 2.2.19

Vagrant LXD Plugin Installation

Vagrant plugins are installed with the vagrant plugin subcommand, so let's install the vagrant-lxd plugin by issuing the following command:

$ vagrant plugin install vagrant-lxd
Installing the 'vagrant-lxd' plugin. This can take a few minutes...
Fetching thread_safe-0.3.6.gem
Fetching tzinfo-1.2.9.gem
Fetching minitest-5.15.0.gem
Fetching activesupport-5.2.7.gem
Fetching multipart-post-2.1.1.gem
Fetching faraday-0.17.5.gem
Fetching public_suffix-4.0.7.gem
Fetching addressable-2.8.0.gem
Fetching sawyer-0.9.1.gem
Fetching hyperkit-1.3.0.gem
Fetching vagrant-lxd-0.5.6.gem
Installed the plugin 'vagrant-lxd (0.5.6)'!
$ vagrant plugin list
vagrant-lxd (0.5.6, global)

nc-env Installation

Pick the latest release from the project's releases page; in this example, v20220421:

$ wget https://codeberg.org/pmarini/nc-env/archive/v20220421.tar.gz
$ gzip -d v20220421.tar.gz
$ tar -xf v20220421.tar
$ cd nc-env/
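
Alternatively, tar can decompress and extract the archive in a single step:

$ tar -xzf v20220421.tar.gz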

LXD Configuration

Go through the LXD initialization process:


:memo: The process below assumes you have a free local block device to use as the storage pool. If you don't have one, you can use the dir backend instead.

:memo: Make sure your user is a member of the lxd group; a quick way to add it is shown below.
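
If your user is not yet in the lxd group, a minimal way to add it (the change takes effect after logging out and back in, or immediately in a new shell opened with newgrp):

$ sudo usermod -aG lxd "$USER"
$ newgrp lxd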


$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: pool01
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes
Path to the existing block device: /dev/vdb
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 
Port to bind LXD to [default=8443]: 
Trust password for new clients: <your password> 
Again: <your password> 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

Create a LXD profile for the containers created in nc-env:

$ lxc profile create nc-env-profile
$ lxc profile device add nc-env-profile root disk path=/ pool=pool01 size=5GB
$ lxc profile device add nc-env-profile eth0 nic nictype=bridged name=eth0 parent=lxdbr0
$ lxc profile list
+----------------+---------------------+---------+
|      NAME      |     DESCRIPTION     | USED BY |
+----------------+---------------------+---------+
| default        | Default LXD profile | 1       |
+----------------+---------------------+---------+
| nc-env-profile |                     | 0       |
+----------------+---------------------+---------+
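
You can double-check the devices attached to the new profile with:

$ lxc profile show nc-env-profile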

Configure the internal LXD DNS server to automatically resolve hostnames of containers in nc-env:

$ lxc network set lxdbr0 dns.domain '<your domain>'
$ lxc network set lxdbr0 dns.mode managed
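
For example, if you plan to use the localenv.com domain that appears in the conclusion of this document:

$ lxc network set lxdbr0 dns.domain 'localenv.com'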

Then create a systemd unit that points systemd-resolved at the bridge DNS server:

$ sudo vim /etc/systemd/system/lxd-dns-lxdbr0.service
[Unit]
Description=LXD per-link DNS configuration for lxdbr0
BindsTo=sys-subsystem-net-devices-lxdbr0.device
After=sys-subsystem-net-devices-lxdbr0.device

[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns lxdbr0 <IP address of interface>
ExecStart=/usr/bin/resolvectl domain lxdbr0 '<your domain>'

[Install]
WantedBy=sys-subsystem-net-devices-lxdbr0.device
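
To fill in the <IP address of interface> placeholder, you can read the bridge address from LXD; the value is returned in CIDR notation, so drop the network suffix:

$ lxc network get lxdbr0 ipv4.address
10.20.73.1/24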

Now you can reload the systemd daemons and enable and start the service:


:memo: In Debian environments, enable the systemd-resolved service at this point (sudo systemctl enable systemd-resolved.service), then reboot.


$ sudo systemctl daemon-reload

$ sudo systemctl enable lxd-dns-lxdbr0.service

$ sudo systemctl start lxd-dns-lxdbr0.service

$ sudo resolvectl status lxdbr0
Link 3 (lxdbr0)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: <IP address of interface>
       DNS Servers: <IP address of interface>
        DNS Domain: <your domain>

LXD Configuration Checks

At this point you should be able to start using LXD to run your containers.

Let's first display the storage pools:

$ lxc storage list
To start your first container, try: lxc launch ubuntu:20.04
Or for a virtual machine: lxc launch ubuntu:20.04 --vm

+--------+--------+--------+-------------+---------+---------+
|  NAME  | DRIVER | SOURCE | DESCRIPTION | USED BY |  STATE  |
+--------+--------+--------+-------------+---------+---------+
| pool01 | zfs    | pool01 |             | 1       | CREATED |
+--------+--------+--------+-------------+---------+---------+

Then, let's display the networks:

$ lxc network list
+--------+----------+---------+---------------+------+-------------+---------+---------+
|  NAME  |   TYPE   | MANAGED |     IPV4      | IPV6 | DESCRIPTION | USED BY |  STATE  |
+--------+----------+---------+---------------+------+-------------+---------+---------+
| enp1s0 | physical | NO      |               |      |             | 0       |         |
+--------+----------+---------+---------------+------+-------------+---------+---------+
| lxdbr0 | bridge   | YES     | 10.20.73.1/24 | none |             | 1       | CREATED |
+--------+----------+---------+---------------+------+-------------+---------+---------+

Let's start a sample container:

$ lxc launch ubuntu:20.04 c1
Creating c1
Starting c1

Verify its status:

$ lxc ls
+------+---------+---------------------+------+-----------+-----------+
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| c1   | RUNNING | 10.20.73.194 (eth0) |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+
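
Since the DNS configuration is already in place, you can also verify that the container hostname resolves through the bridge DNS server:

$ resolvectl query c1.<your domain>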

Log into the container and get information about the system:

$ lxc exec c1 bash
root@c1:~# hostnamectl
   Static hostname: c1
         Icon name: computer-container
           Chassis: container
        Machine ID: 615754225c3e48b2945deebfdeff4b4a
           Boot ID: b71e857de62a4c239cd3d9a274a433e6
    Virtualization: lxc
  Operating System: Ubuntu 20.04.4 LTS
            Kernel: Linux 5.13.0-40-generic
      Architecture: x86-64

Check the ZFS storage pool, using the zpool utility:

$ sudo zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool01  19,5G   586M  18,9G        -         -     0%     2%  1.00x    ONLINE  -

Delete the sample container:

$ lxc delete -f c1

Container Creation with nc-env

Create the first container with nc-env. As a first container, let's choose the Nextcloud Standalone container, that is, template01-nextcloud-standalone:

$ pwd
/home/ubuntu/nc-env
$ mkdir my-local-env
$ cd my-local-env
$ cp -r ../templates/template01-nextcloud-standalone nc01
$ cd nc01

Follow the instructions that you find in the Readme.md file.

At the end of the configuration the local folder layout should be similar to the following:

├── artifacts
│   ├── mkcert
│   ├── nc-fulltext-live-indexer.service
│   ├── nc-redirect.conf
│   ├── nextcloud-23.0.3.tar.bz2
│   ├── nextcloud-23.0.3.tar.bz2.md5
│   ├── nextcloud.conf
│   ├── occ
│   ├── redis.conf
│   ├── rootCA-key.pem
│   └── rootCA.pem
├── log
│   └── provision.log
├── provision.sh
├── Readme.md
└── Vagrantfile

If this is your first container created with vagrant-lxd, you'll need to add a certificate to the LXD trust store so that Vagrant can interact with the daemon. Just issue the following command and follow the instructions in the resulting error message:

$ vagrant up
The LXD provider could not authenticate to the daemon at https://127.0.0.1:8443.

You may need to configure LXD to allow requests from this machine. The
easiest way to do this is to add your LXC client certificate to LXD's
list of trusted certificates. This can typically be done with the
following command:

    $ lxc config trust add /home/ubuntu/.vagrant.d/data/lxd/client.crt

You can find more information about configuring LXD at:

    https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration

As suggested in the message, issue the following command (adjust the user name according to your setup):

$ lxc config trust add /home/ubuntu/.vagrant.d/data/lxd/client.crt

Then, issue the following command:

$ vagrant up > log/provision.log

:memo: You can safely ignore the warning: ==> default: The host machine does not support LXD synced folders.


You may want to follow the provisioning progress with the following command:

$ tail -f log/provision.log

Once the provisioning has finished, you can see that the container has been created:

$ lxc ls
+------+---------+---------------------+------+-----------+-----------+
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| nc01 | RUNNING | 10.20.73.116 (eth0) |      | CONTAINER | 0         |
+------+---------+---------------------+------+-----------+-----------+

Assign nc01 the profile nc-env-profile:


:memo: The same result can also be achieved by setting the lxd.profiles parameter in the Vagrantfile.


$ lxc profile add nc01 nc-env-profile
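
Verify the assignment; the USED BY counter of nc-env-profile should now read 1:

$ lxc profile list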

Conclusion

You can now start using your brand-new Nextcloud instance by opening the instance URL in your browser. The instance URL and the admin user login credentials are displayed at the end of the log. For example, if you set MACHINE_HOSTNAME=nc01.localenv.com in provision.sh, you will be able to access the instance at https://nc01.localenv.com.
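
As a quick reachability check from the host, assuming MACHINE_HOSTNAME=nc01.localenv.com as in the example (-k skips certificate verification, in case the mkcert root CA is not installed in the host trust store):

$ curl -kI https://nc01.localenv.com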

To build more complex environments, please check the list of available templates and pick the ones that fit your needs!