diff --git a/how-to/How-To-Setup-Nc-Env-In-Fedora-Workstation.md b/how-to/How-To-Setup-Nc-Env-In-Fedora-Workstation.md
new file mode 100644
index 0000000..c234b50
--- /dev/null
+++ b/how-to/How-To-Setup-Nc-Env-In-Fedora-Workstation.md
@@ -0,0 +1,480 @@
+# Initial Setup of nc-env in Fedora Workstation (36)
+
+## Introduction
+
+This document describes the step-by-step procedure for the initial setup of nc-env in a Fedora Workstation system.
+
+Check the [project homepage](https://codeberg.org/pmarini/nc-env) to learn more about nc-env.
+
+## Software Environment
+
+| Component        | Version                               |
+|------------------|---------------------------------------|
+| Operating System | Fedora Linux 36 (Workstation Edition) |
+| LXD              | 5.1                                   |
+| vagrant          | 2.2.19                                |
+| vagrant-lxd      | 0.5.6                                 |
+
+
+
+## Procedure
+
+Install all the software needed for the project, starting with a full update of the installed packages:
+
+```
+$ sudo dnf update
+```
+
+The following packages are not strictly needed, but may make the next operations more convenient, depending on your tooling preferences.
+
+```
+$ sudo dnf install vim
+```
+
+
+### Check the storage
+
+You should have a spare disk or unformatted partition of 20 GB or more that can be used as a ZFS-based LXD storage pool. If you don't have one, you can use the directory storage backend and change your answers in the "LXD Configuration" section below accordingly.
+
+### LXD Installation
+
+Install snapd:
+
+```
+$ sudo dnf install snapd
+```
+
+Install LXD using snap:
+
+```
+$ sudo snap install lxd
+```
+
+Verify that LXD has been correctly installed:
+
+```
+$ snap list lxd
+Name  Version      Rev    Tracking       Publisher   Notes
+lxd   5.1-1f6f485  23037  latest/stable  canonical✓  -
+
+```
+
+
+### ZFS Utilities Installation
+
+---
+
+:memo: While it is recommended to use ZFS as the backend for the LXD storage pools for this project, you can skip this section if you plan to use directory-based LXD storage pools.
+---
+
+
+The ZFS installation instructions for Fedora can be found at [this link](https://openzfs.github.io/openzfs-docs/Getting%20Started/Fedora/index.html).
+
+Verify that it has been correctly installed:
+
+```
+$ sudo zpool --version
+zfs-2.1.4-1
+zfs-kmod-2.1.4-1
+```
+
+### Vagrant Installation
+
+
+Add the HashiCorp signing key and the official Vagrant repository to the local DNF sources:
+
+```
+$ sudo dnf install -y dnf-plugins-core
+
+$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
+```
+
+Install the vagrant package:
+
+```
+$ sudo dnf -y install vagrant
+
+```
+
+Verify that it has been correctly installed:
+
+```
+$ vagrant --version
+Vagrant 2.2.19
+```
+
+
+### Vagrant LXD plugin Installation
+
+Vagrant plugins are installed with the `vagrant plugin` subcommand, so let's install the `vagrant-lxd` plugin by issuing the following command:
+
+```
+$ vagrant plugin install vagrant-lxd
+Installing the 'vagrant-lxd' plugin. This can take a few minutes...
+Fetching thread_safe-0.3.6.gem
+Fetching tzinfo-1.2.9.gem
+Fetching minitest-5.15.0.gem
+Fetching activesupport-5.2.7.gem
+Fetching multipart-post-2.1.1.gem
+Fetching faraday-0.17.5.gem
+Fetching public_suffix-4.0.7.gem
+Fetching addressable-2.8.0.gem
+Fetching sawyer-0.9.1.gem
+Fetching hyperkit-1.3.0.gem
+Fetching vagrant-lxd-0.5.6.gem
+Installed the plugin 'vagrant-lxd (0.5.6)'!
+$ vagrant plugin list +vagrant-lxd (0.5.6, global) +``` + +As recommended in the vagrant-lxd documentation ([here](https://gitlab.com/catalyst-it/devtools/vagrant-lxd#synced-folders)), to ensure synced folders work as expected, run the following commands: + +``` +$ echo root:$(id -u):1 | sudo tee -a /etc/subuid +$ echo root:$(id -g):1 | sudo tee -a /etc/subgid +``` + +### nc-env Installation + +Pick the latest release of the project from [here](https://codeberg.org/pmarini/nc-env/releases), in this example `v20220421`: + +``` +$ cd ~/Documents +$ wget https://codeberg.org/pmarini/nc-env/archive/v20220421.tar.gz +$ tar -xzf v20220421.tar.gz +$ cd nc-env/ +``` + +### LXD Configuration + +Go through the LXD initialization process: + +--- + +:memo: The process below assumes you have a free local block device that we can use as a storage pool. In the below listing the block device is identified as `/dev/vdb` in the system. If you don’t have a free local block device, you can use the `dir` backend. + +--- + +--- + +:memo: Make sure the user is in group `lxd`. + +--- + +``` +$ lxd init +Would you like to use LXD clustering? (yes/no) [default=no]: no +Do you want to configure a new storage pool? (yes/no) [default=yes]: yes +Name of the new storage pool [default=default]: pool01 +Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: zfs +Create a new ZFS pool? (yes/no) [default=yes]: yes +Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes +Path to the existing block device: /dev/vdb +Would you like to connect to a MAAS server? (yes/no) [default=no]: no +Would you like to create a new local network bridge? (yes/no) [default=yes]: yes +What should the new bridge be called? [default=lxdbr0]: +What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: +What IPv6 address should be used? 
(CIDR subnet notation, “auto” or “none”) [default=auto]: none
+Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
+Address to bind LXD to (not including port) [default=all]:
+Port to bind LXD to [default=8443]:
+Trust password for new clients:
+Again:
+Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
+Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
+
+```
+
+Create an LXD profile for the containers created in nc-env:
+
+```
+$ lxc profile create nc-env-profile
+$ lxc profile device add nc-env-profile root disk path=/ pool=pool01 size=5GB
+$ lxc profile device add nc-env-profile eth0 nic nictype=bridged name=eth0 parent=lxdbr0
+$ lxc profile list
++----------------+---------------------+---------+
+|      NAME      |     DESCRIPTION     | USED BY |
++----------------+---------------------+---------+
+| default        | Default LXD profile | 1       |
++----------------+---------------------+---------+
+| nc-env-profile |                     | 0       |
++----------------+---------------------+---------+
+```
+
+
+Configure the internal LXD DNS server to automatically resolve the hostnames of the containers in nc-env:
+
+
+```
+$ lxc network set lxdbr0 dns.domain ''
+$ lxc network set lxdbr0 dns.mode managed
+
+$ sudo vim /etc/systemd/system/lxd-dns-lxdbr0.service
+[Unit]
+Description=LXD per-link DNS configuration for lxdbr0
+BindsTo=sys-subsystem-net-devices-lxdbr0.device
+After=sys-subsystem-net-devices-lxdbr0.device
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/resolvectl dns lxdbr0 10.20.73.1
+ExecStart=/usr/bin/resolvectl domain lxdbr0 ''
+
+[Install]
+WantedBy=sys-subsystem-net-devices-lxdbr0.device
+```
+
+Note that the first `ExecStart` line must include the IPv4 address of the `lxdbr0` bridge (here `10.20.73.1`, the address auto-assigned during `lxd init` above); you can look it up with `lxc network get lxdbr0 ipv4.address`.
+
+Now you can reload the systemd units and enable and start the service:
+
+```
+$ sudo systemctl daemon-reload
+
+$ sudo systemctl enable lxd-dns-lxdbr0.service
+
+$ sudo systemctl start lxd-dns-lxdbr0.service
+
+$ sudo resolvectl status lxdbr0
+Link 3 (lxdbr0)
+    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
+         Protocols: 
-DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
+Current DNS Server:
+       DNS Servers:
+        DNS Domain:
+```
+
+In Fedora, for containers to correctly get their IPv4 address, you should either disable the firewall (the `firewalld` service) or put the network interface `lxdbr0` in the `trusted` zone, e.g. with `sudo firewall-cmd --permanent --zone=trusted --change-interface=lxdbr0` followed by `sudo firewall-cmd --reload`.
+
+
+
+#### LXD Configuration Checks
+
+At this point you should be able to start using LXD to run your containers.
+
+Let's first display the storage pools:
+
+```
+$ lxc storage list
++--------+--------+--------+-------------+---------+---------+
+|  NAME  | DRIVER | SOURCE | DESCRIPTION | USED BY |  STATE  |
++--------+--------+--------+-------------+---------+---------+
+| pool01 | zfs    | pool01 |             | 1       | CREATED |
++--------+--------+--------+-------------+---------+---------+
+```
+
+Then, let's display the networks:
+
+```
+$ lxc network list
++--------+----------+---------+---------------+------+-------------+---------+---------+
+|  NAME  |   TYPE   | MANAGED |     IPV4      | IPV6 | DESCRIPTION | USED BY |  STATE  |
++--------+----------+---------+---------------+------+-------------+---------+---------+
+| enp1s0 | physical | NO      |               |      |             | 0       |         |
++--------+----------+---------+---------------+------+-------------+---------+---------+
+| lxdbr0 | bridge   | YES     | 10.20.73.1/24 | none |             | 1       | CREATED |
++--------+----------+---------+---------------+------+-------------+---------+---------+
+```
+
+Let's start a sample container:
+
+```
+$ lxc launch ubuntu:20.04 c1
+Creating c1
+Starting c1
+```
+
+Verify its status:
+
+```
+$ lxc ls
++------+---------+---------------------+------+-----------+-----------+
+| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
++------+---------+---------------------+------+-----------+-----------+
+| c1   | RUNNING | 10.20.73.194 (eth0) |      | CONTAINER | 0         |
++------+---------+---------------------+------+-----------+-----------+
+```
+
+Log into the container and get information about the system:
+
+```
+$ lxc exec c1 bash
+root@c1:~# hostnamectl
+ Static 
hostname: c1
+       Icon name: computer-container
+         Chassis: container
+      Machine ID: 615754225c3e48b2945deebfdeff4b4a
+         Boot ID: b71e857de62a4c239cd3d9a274a433e6
+  Virtualization: lxc
+Operating System: Ubuntu 20.04.4 LTS
+          Kernel: Linux 5.13.0-40-generic
+    Architecture: x86-64
+```
+
+Check the ZFS storage pool using the `zpool` utility:
+
+
+```
+$ sudo zpool list
+NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+pool01  19,5G   586M  18,9G        -         -     0%     2%  1.00x  ONLINE  -
+```
+
+Delete the sample container:
+
+
+```
+$ lxc delete -f c1
+
+```
+
+## mkcert Installation
+
+Download from [the official repository](https://github.com/FiloSottile/mkcert/releases) the latest released binary of mkcert, a utility that makes it easy to create locally trusted TLS certificates:
+
+```
+$ pwd
+/home/fedora_usr/Documents/nc-env/
+$ mkdir my-local-env
+$ cd my-local-env
+$ wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
+```
+
+Install a dependency needed by mkcert:
+
+```
+$ sudo dnf install nss-tools
+```
+
+Rename the mkcert binary by stripping the version:
+
+```
+$ mv mkcert-v1.4.4-linux-amd64 mkcert
+```
+
+
+Make it executable:
+
+```
+$ chmod +x mkcert
+```
+
+Run the install command:
+
+```
+$ ./mkcert -install
+The local CA is already installed in the system trust store! 👍
+The local CA is now installed in the Firefox and/or Chrome/Chromium trust store (requires browser restart)! 🦊
+```
+
+Check the local CAROOT folder by issuing the following command. We will need to make the files in that folder available to the containers as well:
+
+```
+$ ./mkcert -CAROOT
+```
+
+
+## Containers Creation with nc-env
+
+Create the first container with nc-env. 
For this first container, let's choose the Nextcloud Standalone template, `template01-nextcloud-standalone`:
+
+```
+$ pwd
+/home/fedora_usr/Documents/nc-env/my-local-env
+$ cp -r ../templates/template01-nextcloud-standalone nc01
+$ cd nc01
+```
+
+Follow the instructions in the `Readme.md` file.
+
+At the end of the configuration, the local folder layout should be similar to the following:
+
+
+```
+
+├── artifacts
+│   ├── mkcert
+│   ├── nc-fulltext-live-indexer.service
+│   ├── nc-redirect.conf
+│   ├── nextcloud-23.0.3.tar.bz2
+│   ├── nextcloud-23.0.3.tar.bz2.md5
+│   ├── nextcloud.conf
+│   ├── occ
+│   ├── redis.conf
+│   ├── rootCA-key.pem
+│   └── rootCA.pem
+├── log
+│   └── provision.log
+├── provision.sh
+├── Readme.md
+└── Vagrantfile
+```
+
+
+If this is your first container created with vagrant-lxd, you'll need to add a certificate to the LXD trust store so that Vagrant can interact with the LXD daemon. The first `vagrant up` will fail with a message explaining how to do that:
+
+```
+$ vagrant up
+The LXD provider could not authenticate to the daemon at https://127.0.0.1:8443.
+
+You may need configure LXD to allow requests from this machine. The
+easiest way to do this is to add your LXC client certificate to LXD's
+list of trusted certificates. 
This can typically be done with the
+following command:
+
+    $ lxc config trust add /home/fedora_usr/.vagrant.d/data/lxd/client.crt
+
+You can find more information about configuring LXD at:
+
+    https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration
+```
+
+As suggested in the error message, issue the following command (change the user name according to your setup):
+
+```
+$ lxc config trust add /home/fedora_usr/.vagrant.d/data/lxd/client.crt
+```
+
+Then, start the provisioning, redirecting its output to a log file:
+
+```
+$ vagrant up > log/provision.log
+```
+
+You may want to follow the provisioning progress with the following command:
+
+
+```
+$ tail -f log/provision.log
+```
+
+Once the provisioning has ended, you can see that the container has been created:
+
+```
+$ lxc ls
++------+---------+---------------------+------+-----------+-----------+
+| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
++------+---------+---------------------+------+-----------+-----------+
+| nc01 | RUNNING | 10.20.73.116 (eth0) |      | CONTAINER | 0         |
++------+---------+---------------------+------+-----------+-----------+
+```
+
+Assign `nc01` the profile `nc-env-profile`:
+
+
+---
+
+:memo: The same result can be achieved by setting the `lxd.profiles` parameter in the `Vagrantfile`.
+
+---
+
+```
+$ lxc profile add nc01 nc-env-profile
+```
+
+## Conclusion
+
+You can now start using your brand-new Nextcloud instance by opening the instance URL in your browser. The instance URL and the admin user login credentials are displayed at the end of the log. For example, if you set `MACHINE_HOSTNAME=nc01.localenv.com` in `provision.sh`, you will be able to access the instance at the following URL: `https://nc01.localenv.com`.
+
+To build up more complex environments, please check the list of available templates and pick the ones that fit your needs!
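
As a final sanity check from the host, note that the instance URL follows directly from the `MACHINE_HOSTNAME` variable set in `provision.sh`. A minimal sketch, using the example hostname from this document (adjust it to your own setup):

```shell
# The instance URL is derived from MACHINE_HOSTNAME in provision.sh.
# nc01.localenv.com is the example value used earlier in this document.
MACHINE_HOSTNAME=nc01.localenv.com
INSTANCE_URL="https://${MACHINE_HOSTNAME}"
echo "${INSTANCE_URL}"
```

With the LXD DNS integration configured above, the hostname should resolve from the host, and `curl --silent "${INSTANCE_URL}/status.php"` should return Nextcloud's small JSON status document (including `"installed":true`) once the instance is up.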