diff --git a/README.md b/README.md
index 3e4c031..ba41e7f 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,14 @@
-| :zap: This project is a personal initiative, without any official support by Nextcloud GmbH |
-|----------------------------------------------------------------------------------------------------|
-
### Introduction
nc-env is a tool that enables you to provision isolated Nextcloud test environments on your machine. It is built on [vagrant](https://www.vagrantup.com/) and [LXD](https://linuxcontainers.org/).
+nc-env is developed and maintained by [RCA Systems](https://rcasys.com), a company providing consulting and services for Nextcloud.
+
+### Code repositories
+
+* [RCA Systems official code repository](https://git.rcasys.com/pmarini/nc-env.git)
+* [Codeberg](https://codeberg.org/pmarini/nc-env.git) (mirror)
+
### Features
@@ -38,8 +42,10 @@
| `template09-web-server-node` | Web Server Node (to be used in a cluster) | YES |
| `template10-redis-server` | Redis server (to be used in a cluster) | YES |
| `template11-minio-storage-server` | MinIO Storage Server | YES |
+| `template12-lookup-server` | Nextcloud lookup server | YES |
| `template13-talk-hpb` | Talk High Performance Backend | YES |
| `template14-self-hosted-appstore` | Nextcloud Self-Hosted Appstore | NO |
+| `template15-notify-push-server` | Nextcloud Notify Push Server (a.k.a. Files HPB) | YES |
| `template16-zabbix-server` | Zabbix Monitoring Server | YES |
| `template17-mail-server` | Mox Mail Server | YES |
@@ -64,4 +70,5 @@
### Resources
* Article: [nc-env: A tool to deploy Nextcloud Environments in your laptop](https://propuestas.eslib.re/2022/miscelanea/nc-env-creacion-entornos-nextcloud-local).
-* Video: [This workshop](https://commons.wikimedia.org/wiki/File:EsLibre_2022_P24_-_Pietro_Marini_-_C%C3%B3mo_montar_un_cl%C3%BAster_de_Nextcloud.webm), recorded at esLibre 2022, is focused on how to build a Nextcloud cluster using nc-env. In Spanish.
+* Video: [This workshop](https://commons.wikimedia.org/wiki/File:EsLibre_2022_P24_-_Pietro_Marini_-_C%C3%B3mo_montar_un_cl%C3%BAster_de_Nextcloud.webm), recorded at [esLibre 2022](https://eslib.re/2022/), is focused on how to build a Nextcloud cluster using nc-env. In Spanish.
+* Video: [This workshop](https://commons.wikimedia.org/wiki/File:EsLibre_2023_P58_-_Pietro_Marini_-_nc-env_%E2%80%93_una_herramienta_para_montar_entornos_Nextcloud_en_tu_ordenador.webm), recorded at [esLibre 2023](https://eslib.re/2023/), is a walkthrough on how to set up nc-env on your laptop. In Spanish.
diff --git a/how-to/How-To-Setup-Global-Scale.md b/how-to/How-To-Setup-Global-Scale.md
new file mode 100644
index 0000000..efb81d3
--- /dev/null
+++ b/how-to/How-To-Setup-Global-Scale.md
@@ -0,0 +1,140 @@
+## How to Set Up Global Scale
+
+
+### Introduction
+
+This step-by-step tutorial walks you through a basic Global Scale setup on your machine, with a Lookup Server, a Global Site Selector Node, and two Global Scale Nodes.
+
+To follow this document more easily, the table below lists the container names and their fully qualified domain names. These are just sample names; you can change them, provided you do so consistently in every file where they appear.
+
+| Component | Container Name | Fully Qualified Domain Name |
+| --- | --- | --- |
+| Lookup Server | `gs-lookup` | `gs-lookup.localenv.com` |
+| Global Site Selector Node | `gs-selector` | `gs-selector.localenv.com` |
+| Global Scale Node 1 | `gs-node01` | `gs-node01.localenv.com` |
+| Global Scale Node 2 | `gs-node02` | `gs-node02.localenv.com` |
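+
+Once all four containers are provisioned, the corresponding `/etc/hosts` entries on the host can look like this (sample addresses taken from the `lxc ls` output at the end of this document; use the ones reported in your own provisioning logs):
+
```
10.23.46.172 gs-lookup.localenv.com
10.23.46.188 gs-selector.localenv.com
10.23.46.32  gs-node01.localenv.com
10.23.46.159 gs-node02.localenv.com
```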
+
+### Create a LXD project
+
+Create a folder to use for the Global Scale setup and navigate into it:
+
+```
+mkdir global-scale
+cd global-scale
+```
+
+Create a project in LXD called `global-scale`:
+
+```
+lxc project create global-scale -c features.images=false -c features.profiles=false
+```
+
+### Install the Lookup Server
+
+1. Use template `template12-lookup-server` (follow instructions in `Readme.md`).
+
+2. Move the container to the project `global-scale`:
+
+ ```
+ lxc move gs-lookup --target-project global-scale
+ ```
+
+ Moving the container will shut it down. Restart it.
+
+3. Add the DNS entry to the `/etc/hosts` file. The IP address is reported at the end of the provisioning log.
+
+4. You can run a simple test to check whether the Lookup Server is up and running by issuing the following `wget` request from your host machine:
+ ```
+ wget -O /dev/null https://gs-lookup.localenv.com/index.php/status
+ ```
+ You should get a 200 status code. This test can be repeated from the other components (Global Site Selector Node, Global Scale Nodes) once they are set up, as they require connectivity to the Lookup Server.
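
The check above can also be wrapped in a small helper that fails fast instead of hanging when the server is unreachable (a sketch with a hypothetical helper name; `-T` and `-t` are standard `wget` options, not part of nc-env):

```shell
#!/bin/sh
# Status endpoint of the Lookup Server, as used in the wget test above.
LOOKUP_STATUS_URL="https://gs-lookup.localenv.com/index.php/status"

check_lookup() {
    # -T 2: 2-second timeout, -t 1: single try, so failures are reported quickly
    if wget -q -T 2 -t 1 -O /dev/null "$LOOKUP_STATUS_URL"; then
        echo "lookup server reachable"
    else
        echo "lookup server NOT reachable"
    fi
}

check_lookup
```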
+
+### Install the Global Site Selector Node
+
+1. Use template `template01-nextcloud-standalone` (follow instructions in `Readme.md`).
+
+2. Move the container to the project `global-scale`:
+
+ ```
+ lxc move gs-selector --project default --target-project global-scale
+ ```
+
+ Moving the container will shut it down. Restart it.
+
+3. Add the DNS entry to the `/etc/hosts` file. The IP address is reported at the end of the provisioning log.
+
+4. Connect via ssh to the container and issue the following commands:
+
+ ```
+ sudo -u www-data php /var/www/nextcloud/occ app:enable globalsiteselector
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set lookup_server --value 'https://gs-lookup.localenv.com/index.php'
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gs.enabled --type boolean --value true
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gss.jwt.key --value 'random-key'
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gss.mode --value 'master'
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gss.master.admin 0 --value 'admin'
+ ```
+
+5. At this point you can verify that you are able to log in as `admin` to the Nextcloud Web UI at the following URL: `https://gs-selector.localenv.com/index.php`.
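+
+Since `occ config:system:set` writes to the instance's `config/config.php`, the commands in step 4 should result in entries roughly like the following on the selector (a sketch; exact formatting, key order, and surrounding entries will differ):
+
```
  'lookup_server' => 'https://gs-lookup.localenv.com/index.php',
  'gs.enabled' => true,
  'gss.jwt.key' => 'random-key',
  'gss.mode' => 'master',
  'gss.master.admin' =>
  [
    0 => 'admin',
  ],
```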
+
+
+### Install 2 Global Scale Nodes
+
+1. Use template `template01-nextcloud-standalone` (follow instructions in `Readme.md`).
+
+2. Move the container to the project `global-scale`:
+
+ ```
+ lxc move gs-node01 --target-project global-scale
+ ```
+
+ Moving the container will shut it down. Restart it.
+
+3. Add the DNS entry to the `/etc/hosts` file. The IP address is reported at the end of the provisioning log.
+
+4. Connect via ssh to the container and issue the following commands:
+
+ ```
+ sudo -u www-data php /var/www/nextcloud/occ app:enable globalsiteselector
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set lookup_server --value 'https://gs-lookup.localenv.com/index.php'
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gs.enabled --type boolean --value true
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gss.jwt.key --value 'random-key'
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gss.mode --value 'slave'
+ sudo -u www-data php /var/www/nextcloud/occ config:system:set gss.master.url --value 'https://gs-selector.localenv.com'
+ ```
+
+5. To test that everything is working you can:
+ * create a user on the node by connecting to it directly as `admin`. To bypass the Global Site Selector redirection, use the `direct=1` parameter: `https://gs-node01.localenv.com/index.php/login?direct=1`
+ * log out of the node. The browser should be redirected to `gs-selector.localenv.com`
+ * log in with the user you just created. The user should be allowed in and redirected to node `gs-node01`.
+
+6. Repeat these steps for `gs-node02`.
+
+### How to Start the Global Scale Installation
+
+You can start the Global Scale environment with the following commands:
+
+
+```
+lxc project switch global-scale
+
+lxc start gs-lookup
+
+lxc start gs-selector
+
+lxc start gs-node01
+
+lxc start gs-node02
+
+lxc ls
++-------------+---------+---------------------+---------------------------------------------+-----------+-----------+
+| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
++-------------+---------+---------------------+---------------------------------------------+-----------+-----------+
+| gs-lookup | RUNNING | 10.23.46.172 (eth0) | fd42:fa6c:5eee:12:216:3eff:fe18:bf2e (eth0) | CONTAINER | 0 |
++-------------+---------+---------------------+---------------------------------------------+-----------+-----------+
+| gs-node01 | RUNNING | 10.23.46.32 (eth0) | fd42:fa6c:5eee:12:216:3eff:feb4:d046 (eth0) | CONTAINER | 0 |
++-------------+---------+---------------------+---------------------------------------------+-----------+-----------+
+| gs-node02 | RUNNING | 10.23.46.159 (eth0) | fd42:fa6c:5eee:12:216:3eff:fe27:6f58 (eth0) | CONTAINER | 0 |
++-------------+---------+---------------------+---------------------------------------------+-----------+-----------+
+| gs-selector | RUNNING | 10.23.46.188 (eth0) | fd42:fa6c:5eee:12:216:3eff:fec2:f2d4 (eth0) | CONTAINER | 0 |
++-------------+---------+---------------------+---------------------------------------------+-----------+-----------+
+```
diff --git a/how-to/How-To-Setup-Nc-Env-In-Ubuntu-Desktop.md b/how-to/How-To-Setup-Nc-Env-In-Ubuntu-Desktop.md
index 2474728..911f8b3 100644
--- a/how-to/How-To-Setup-Nc-Env-In-Ubuntu-Desktop.md
+++ b/how-to/How-To-Setup-Nc-Env-In-Ubuntu-Desktop.md
@@ -17,10 +17,10 @@
| Component |Version |
|----------------|----------------------------------|
-|Operating System|Ubuntu 22.04.2 LTS (Jammy Jellyfish)|
-|LXD |5.12 |
-|vagrant |2.3.4 |
-|vagrant-lxd |0.6.0 |
+|Operating System|Ubuntu 22.04.3 LTS (Jammy Jellyfish)|
+|LXD |5.19 |
+|vagrant |2.4.0 |
+|vagrant-lxd |0.7.0 |
## Pre-requisites
diff --git a/templates/template05-elasticsearch/Vagrantfile b/templates/template05-elasticsearch/Vagrantfile
index 9e0c61b..3457d18 100644
--- a/templates/template05-elasticsearch/Vagrantfile
+++ b/templates/template05-elasticsearch/Vagrantfile
@@ -19,7 +19,6 @@
# lxd.nesting = nil
# lxd.privileged = nil
# lxd.ephemeral = false
- # lxd.profiles = ['default']
# lxd.environment = {}
# lxd.config = {}
end
diff --git a/templates/template05-elasticsearch/artifacts/elasticsearch.yml b/templates/template05-elasticsearch/artifacts/elasticsearch.yml
index 539c472..03593fd 100644
--- a/templates/template05-elasticsearch/artifacts/elasticsearch.yml
+++ b/templates/template05-elasticsearch/artifacts/elasticsearch.yml
@@ -1,86 +1,8 @@
-# ======================== Elasticsearch Configuration =========================
-#
-# NOTE: Elasticsearch comes with reasonable defaults for most settings.
-# Before you set out to tweak and tune the configuration, make sure you
-# understand what are you trying to accomplish and the consequences.
-#
-# The primary way of configuring a node is via this file. This template lists
-# the most important settings you may want to configure for a production cluster.
-#
-# Please consult the documentation for further information on configuration options:
-# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
-#
-# ---------------------------------- Cluster -----------------------------------
-#
-# Use a descriptive name for your cluster:
-#
-#cluster.name: my-application
-#
-# ------------------------------------ Node ------------------------------------
-#
-# Use a descriptive name for the node:
-#
-#node.name: node-1
-#
-# Add custom attributes to the node:
-#
-#node.attr.rack: r1
-#
-# ----------------------------------- Paths ------------------------------------
-#
-# Path to directory where to store the data (separate multiple locations by comma):
-#
path.data: /var/lib/elasticsearch
-#
-# Path to log files:
-#
+
path.logs: /var/log/elasticsearch
-#
-# ----------------------------------- Memory -----------------------------------
-#
-# Lock the memory on startup:
-#
-#bootstrap.memory_lock: true
-#
-# Make sure that the heap size is set to about half the memory available
-# on the system and that the owner of the process is allowed to use this
-# limit.
-#
-# Elasticsearch performs poorly when the system is swapping the memory.
-#
-# ---------------------------------- Network -----------------------------------
-#
-# By default Elasticsearch is only accessible on localhost. Set a different
-# address here to expose this node on the network:
-#
+
network.host: 0.0.0.0
-#
-# By default Elasticsearch listens for HTTP traffic on the first free port it
-# finds starting at 9200. Set a specific HTTP port here:
-#
-#http.port: 9200
-#
-# For more information, consult the network module documentation.
-#
-# --------------------------------- Discovery ----------------------------------
-#
-# Pass an initial list of hosts to perform discovery when this node is started:
-# The default list of hosts is ["127.0.0.1", "[::1]"]
-#
-#discovery.seed_hosts: ["host1", "host2"]
-#discovery.seed_hosts: ["localhost"]
-#
-# Bootstrap the cluster using an initial set of master-eligible nodes:
-#
-cluster.initial_master_nodes: ["#MACHINE_HOSTNAME#",]
-#
-# For more information, consult the discovery and cluster formation module documentation.
-#
-# ---------------------------------- Various -----------------------------------
-#
-# Require explicit names when deleting indices:
-#
-#action.destructive_requires_name: true
xpack.security.http.ssl.enabled: true
@@ -100,5 +22,4 @@
authz_exception: true
-
-
+discovery.type: single-node
diff --git a/templates/template05-elasticsearch/provision.sh b/templates/template05-elasticsearch/provision.sh
index 12235fd..244d3c2 100644
--- a/templates/template05-elasticsearch/provision.sh
+++ b/templates/template05-elasticsearch/provision.sh
@@ -66,7 +66,7 @@
systemctl status elasticsearch.service
-elastic_password=`/usr/share/elasticsearch/bin/elasticsearch-reset-password --url https://es-server01.localenv.com:9200 --username elastic --batch --silent --force`
+elastic_password=`/usr/share/elasticsearch/bin/elasticsearch-reset-password --url https://${MACHINE_HOSTNAME}:9200 --username elastic --batch --silent --force`
echo "automatically generated password for user elastic: ${elastic_password}"
diff --git a/templates/template12-lookup-server/Readme.md b/templates/template12-lookup-server/Readme.md
new file mode 100644
index 0000000..18d0fe1
--- /dev/null
+++ b/templates/template12-lookup-server/Readme.md
@@ -0,0 +1,27 @@
+### Lookup server - meant to be deployed in a Global Scale setup
+
+#### Setup
+
+* Assuming that the copy of the template is called `gs-lookup-server`, move to folder `gs-lookup-server`.
+* Check the content of folder `artifacts`
+
+| File name | Description |
+| --- | --- |
+| `config.php` | Main configuration file for the Lookup Server |
+| `lookup-server.conf` | Apache Virtual Host configuration file |
+| `rootCA.pem` | The rootCA previously created on your host machine |
+| `rootCA-key.pem` | The rootCA key previously created on your host machine |
+| `lookup-server-1.1.0.tar.gz` | Release archive of the Lookup Server, downloadable from [here](https://github.com/nextcloud/lookup-server/tags) |
+
+
+* Create folder `log`
+* Open `Vagrantfile` and change the value of the following variables and parameters:
+
+ * `lxd.name` (it is recommended to give the same name as the folder, in this example `gs-lookup-server`)
+ * `MACHINE_HOSTNAME`
+ * `LOOKUP_SERVER_INSTALLER`
+ * `LOOKUP_SERVER_VERSION`
+
+* Run `vagrant up > log/provisioning.log`
+* Make sure your system is able to resolve the domain name that you specified in variable `MACHINE_HOSTNAME`, for example by adding an entry in `/etc/hosts`
+* Start using your environment
diff --git a/templates/template12-lookup-server/Vagrantfile b/templates/template12-lookup-server/Vagrantfile
new file mode 100644
index 0000000..88fb321
--- /dev/null
+++ b/templates/template12-lookup-server/Vagrantfile
@@ -0,0 +1,35 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+
+Vagrant.configure("2") do |config|
+
+ config.vm.box = "isc/forge-clt-ubuntu-22.04"
+
+ config.vm.box_version = "1"
+
+ config.vm.box_check_update = false
+
+ config.vm.provider 'lxd' do |lxd|
+ lxd.api_endpoint = 'https://127.0.0.1:8443'
+ lxd.timeout = 10
+ lxd.name = 'lookup-server01'
+ lxd.project = 'gs-system'
+ lxd.profiles = ['prf-nc-env']
+ # lxd.nesting = nil
+ # lxd.privileged = nil
+ # lxd.ephemeral = false
+ # lxd.environment = {}
+ # lxd.config = {}
+ end
+
+ config.vm.provision "shell" do |s|
+ s.env = {
+ "MACHINE_HOSTNAME" => "lookup-server01.localenv.com",
+ "LOOKUP_SERVER_VERSION" => "1.1.0",
+ "LOOKUP_SERVER_INSTALLER" => "lookup-server-1.1.0.tar.gz"
+ }
+ s.path = "provision.sh"
+ end
+end
+
diff --git a/templates/template12-lookup-server/artifacts/config.php b/templates/template12-lookup-server/artifacts/config.php
new file mode 100644
index 0000000..4ac81d7
--- /dev/null
+++ b/templates/template12-lookup-server/artifacts/config.php
@@ -0,0 +1,70 @@
+<?php
+
+$CONFIG = [
+ 'DB' => [
+  'host' => 'localhost',
+  'db' => '#LOOKUP_DATABASE_NAME#',
+  'user' => '#LOOKUP_DATABASE_USER#',
+  'pass' => '#LOOKUP_DATABASE_USER#',
+ ],
+
+ // error verbose
+ 'ERROR_VERBOSE' => true,
+
+ // logfile
+ 'LOG' => '/tmp/lookup.log',
+
+ // replication logfile
+ 'REPLICATION_LOG' => '/tmp/lookup_replication.log',
+
+ // max user search page. limit the maximum number of pages to avoid scraping.
+ 'MAX_SEARCH_PAGE' => 10,
+
+ // max requests per IP and 10min.
+ 'MAX_REQUESTS' => 10000,
+
+ // credential to read the replication log. IMPORTANT!! SET TO SOMETHING SECURE!!
+ 'REPLICATION_AUTH' => 'foobar',
+
+ // credential to read the slave replication log. Replication slaves are read only and don't get the authkey. IMPORTANT!! SET TO SOMETHING SECURE!!
+ 'SLAVEREPLICATION_AUTH' => 'slavefoobar',
+
+ // the list of remote replication servers that should be queried in the cronjob
+ 'REPLICATION_HOSTS' => [
+ 'https://lookup:slavefoobar@example.com/replication'
+ ],
+
+ // IP blacklist. Useful to block spammers.
+ 'IP_BLACKLIST' => [
+ '333.444.555.',
+ '666.777.888.',
+ ],
+
+ // spam blacklist. Useful to block spammers.
+ 'SPAM_BLACKLIST' => [
+ ],
+
+ // Email sender address
+ 'EMAIL_SENDER' => 'admin@example.com',
+
+ // Public Server Url
+ 'PUBLIC_URL' => 'https://gs-lookup.localenv.com/index.php',
+
+ // does the lookup server run in a global scale setup
+ 'GLOBAL_SCALE' => true,
+
+ // auth token
+ 'AUTH_KEY' => 'random-key',
+
+ // twitter oauth credentials, needed to perform twitter verification
+ 'TWITTER' => [
+ 'CONSUMER_KEY' => '',
+ 'CONSUMER_SECRET' => '',
+ 'ACCESS_TOKEN' => '',
+ 'ACCESS_TOKEN_SECRET' => '',
+ ]
+];
+
diff --git a/templates/template12-lookup-server/artifacts/lookup-server.conf b/templates/template12-lookup-server/artifacts/lookup-server.conf
new file mode 100644
index 0000000..4d466b0
--- /dev/null
+++ b/templates/template12-lookup-server/artifacts/lookup-server.conf
@@ -0,0 +1,20 @@
+<VirtualHost *:443>
+
+ DocumentRoot /var/www/lookup-server/
+
+ ServerName #MACHINE_HOSTNAME#
+
+ SSLEngine on
+
+ SSLCertificateFile /etc/ssl/certs/#MACHINE_HOSTNAME#.pem
+
+ SSLCertificateKeyFile /etc/ssl/private/#MACHINE_HOSTNAME#-key.pem
+
+ <IfModule mod_headers.c>
+
+ Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
+
+ </IfModule>
+
+</VirtualHost>
+
diff --git a/templates/template12-lookup-server/provision.sh b/templates/template12-lookup-server/provision.sh
new file mode 100644
index 0000000..8c31940
--- /dev/null
+++ b/templates/template12-lookup-server/provision.sh
@@ -0,0 +1,118 @@
+#!/bin/bash
+
+timedatectl set-timezone Europe/Madrid
+
+start_time=`date`
+
+echo "provisioning started: ${start_time}"
+
+PHP_VERSION=8.1
+
+NETWORK_INTERFACE=eth0
+
+LOOKUP_DATABASE_NAME=lookup_server_db
+
+LOOKUP_DATABASE_USER=lookup_server_usr
+
+hostnamectl set-hostname ${MACHINE_HOSTNAME}
+
+# Print some information about the container OS
+hostnamectl
+
+# Print some information about the container timezone
+timedatectl
+
+#####################################################################
+## Get the IP address into an environment variable. This command outputs
+## an empty variable if the network interface name is not ${NETWORK_INTERFACE}
+#####################################################################
+ip_address=`ip -4 addr show ${NETWORK_INTERFACE} | grep -oP '(?<=inet\s)\d+(\.\d+){3}'`
+
+if [ "${PHP_VERSION}" == "7.4" ]; then
+
+ apt update
+
+ apt install -y software-properties-common apt-transport-https
+
+ add-apt-repository ppa:ondrej/php -y
+
+fi
+
+## Install the needed packages from apt repositories
+apt update
+
+apt install -y apache2 mariadb-server libapache2-mod-php${PHP_VERSION} php${PHP_VERSION}-mysql unzip wget mkcert make composer php-curl
+
+export CAROOT=/vagrant/artifacts/
+
+mkcert -install
+
+mkcert --cert-file /etc/ssl/certs/${MACHINE_HOSTNAME}.pem --key-file /etc/ssl/private/${MACHINE_HOSTNAME}-key.pem "${MACHINE_HOSTNAME}"
+
+#################
+## Expand the installer archive
+#################
+tar -xf /vagrant/artifacts/${LOOKUP_SERVER_INSTALLER} -C /vagrant/artifacts/
+
+#################
+## Database
+#################
+
+echo "START - Create the database"
+
+mysql -u root -e "create database ${LOOKUP_DATABASE_NAME};"
+
+echo "Create the database -> Create the user"
+mysql -u root -e "create user '${LOOKUP_DATABASE_USER}'@'localhost' identified by '${LOOKUP_DATABASE_USER}'"
+
+echo "Create the database -> Grant to the user all the privileges on the database"
+mysql -u root -e "grant all privileges on ${LOOKUP_DATABASE_NAME}.* to '${LOOKUP_DATABASE_USER}'@'localhost'"
+
+## Putting the name of the database inside the tables creation script
+sed -i 's|`lookup`|`'"${LOOKUP_DATABASE_NAME}"'`|g' /vagrant/artifacts/lookup-server-${LOOKUP_SERVER_VERSION}/mysql.dmp
+
+echo "Create the database -> Execute the DDL script"
+mysql -u${LOOKUP_DATABASE_USER} -p${LOOKUP_DATABASE_USER} < /vagrant/artifacts/lookup-server-${LOOKUP_SERVER_VERSION}/mysql.dmp
+
+echo "END - Create the database"
+
+cd /vagrant/artifacts/lookup-server-${LOOKUP_SERVER_VERSION}/
+
+make release
+
+mv /vagrant/artifacts/lookup-server-${LOOKUP_SERVER_VERSION}/server /var/www/lookup-server
+
+cp /vagrant/artifacts/config.php /var/www/lookup-server/config/
+
+sed -i "s|#LOOKUP_DATABASE_NAME#|${LOOKUP_DATABASE_NAME}|g" /var/www/lookup-server/config/config.php
+
+sed -i "s|#LOOKUP_DATABASE_USER#|${LOOKUP_DATABASE_USER}|g" /var/www/lookup-server/config/config.php
+
+cp /vagrant/artifacts/lookup-server.conf /etc/apache2/sites-available/lookup-server.conf
+
+## Putting the machine hostname in the Apache site configuration file (lookup-server.conf)
+sed -i "s|#MACHINE_HOSTNAME#|${MACHINE_HOSTNAME}|g" /etc/apache2/sites-available/lookup-server.conf
+
+a2ensite lookup-server.conf
+
+a2enmod ssl
+
+systemctl restart apache2
+
+systemctl status apache2
+
+echo "${ip_address} ${MACHINE_HOSTNAME}" >> /etc/hosts
+
+wget -O/tmp/lookup-server-status.txt https://${MACHINE_HOSTNAME}/index.php/status
+
+cat /tmp/lookup-server-status.txt
+
+rm /tmp/lookup-server-status.txt
+
+end_time=`date`
+
+echo "This container has IP (interface: ${NETWORK_INTERFACE}): ${ip_address}"
+
+echo "provisioning started: ${start_time}"
+
+echo "provisioning ended: ${end_time}"
diff --git a/templates/template15-notify-push-server/Readme.md b/templates/template15-notify-push-server/Readme.md
new file mode 100644
index 0000000..945f124
--- /dev/null
+++ b/templates/template15-notify-push-server/Readme.md
@@ -0,0 +1,29 @@
+### Notify Push Server
+
+#### Setup
+
+* Assuming that the copy of the template is called `notify-push-server`, move to folder `notify-push-server`.
+* Check the content of folder `artifacts`
+
+| File name | Description |
+| --- | --- |
+| `notify_push.env` | Environment file for the systemd service |
+| `notify_push.service` | Systemd service definition |
+| `notify_push` | Binary to be downloaded from the [release page](https://github.com/nextcloud/notify_push/releases) and renamed |
+
+* Create folder `log`
+* Open `Vagrantfile` and change the value of the following variables and parameters:
+
+ * `lxd.name` (it is recommended to give the same name as the folder, in this example `notify-push-server`)
+ * `MACHINE_HOSTNAME` (it is recommended to give the same name as the folder, plus the domain, in this example `notify-push-server.localenv.com`)
+ * `NEXTCLOUD_URL`
+ * `DATABASE_URL`. Format: `db_type://db_user:db_password@db_host:db_port/db_name`. Example, with db_type=mysql, db_user=nextcloud_usr, db_password=nextcloud_usr, db_host=localhost, db_port=3306, db_name=nextcloud_db: `mysql://nextcloud_usr:nextcloud_usr@localhost:3306/nextcloud_db`
+ * `REDIS_URL`. Example: redis://localhost:6379
+ * `DATABASE_PREFIX` (optional)
+ * `ALLOW_SELF_SIGNED` (optional)
+ * `PORT` (optional)
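
The `DATABASE_URL` format can be sanity-checked by assembling it from its parts; this sketch uses the sample values above (hypothetical credentials, substitute your own):

```shell
#!/bin/sh
# Sample connection parameters (hypothetical; match your Nextcloud database).
db_type=mysql
db_user=nextcloud_usr
db_password=nextcloud_usr
db_host=localhost
db_port=3306
db_name=nextcloud_db

# Assemble the URL in the db_type://db_user:db_password@db_host:db_port/db_name format
DATABASE_URL="${db_type}://${db_user}:${db_password}@${db_host}:${db_port}/${db_name}"

echo "$DATABASE_URL"
```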
+
+* Run `vagrant up > log/provisioning.log`
+* Make sure your system is able to resolve the domain name that you specified in variable `MACHINE_HOSTNAME`, for example by adding an entry in `/etc/hosts`
+* Start using your environment
+
diff --git a/templates/template15-notify-push-server/Vagrantfile b/templates/template15-notify-push-server/Vagrantfile
new file mode 100644
index 0000000..7850d2d
--- /dev/null
+++ b/templates/template15-notify-push-server/Vagrantfile
@@ -0,0 +1,39 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+
+Vagrant.configure("2") do |config|
+
+ config.vm.box = "isc/forge-clt-ubuntu-22.04"
+
+ config.vm.box_version = "1"
+
+ config.vm.box_check_update = false
+
+ config.vm.provider 'lxd' do |lxd|
+ lxd.api_endpoint = 'https://127.0.0.1:8443'
+ lxd.timeout = 10
+ lxd.name = ''
+ lxd.project = 'default'
+ lxd.profiles = ['default']
+ # lxd.nesting = nil
+ # lxd.privileged = nil
+ # lxd.ephemeral = false
+ # lxd.environment = {}
+ # lxd.config = {}
+ end
+
+ config.vm.provision "shell" do |s|
+ s.env = {
+ "MACHINE_HOSTNAME" => "",
+ "NEXTCLOUD_URL" => "",
+ "DATABASE_URL" => "",
+ "REDIS_URL" => "",
+ "ALLOW_SELF_SIGNED" => "true",
+ "DATABASE_PREFIX" => "oc_",
+ "PORT" => "7867"
+ }
+ s.path = "provision.sh"
+ end
+end
+
diff --git a/templates/template15-notify-push-server/artifacts/notify_push.env b/templates/template15-notify-push-server/artifacts/notify_push.env
new file mode 100644
index 0000000..d181390
--- /dev/null
+++ b/templates/template15-notify-push-server/artifacts/notify_push.env
@@ -0,0 +1,11 @@
+PORT=#PORT#
+
+ALLOW_SELF_SIGNED=#ALLOW_SELF_SIGNED#
+
+NEXTCLOUD_URL=#NEXTCLOUD_URL#
+
+DATABASE_URL=#DATABASE_URL#
+
+DATABASE_PREFIX=#DATABASE_PREFIX#
+
+REDIS_URL=#REDIS_URL#
diff --git a/templates/template15-notify-push-server/artifacts/notify_push.service b/templates/template15-notify-push-server/artifacts/notify_push.service
new file mode 100644
index 0000000..eca6beb
--- /dev/null
+++ b/templates/template15-notify-push-server/artifacts/notify_push.service
@@ -0,0 +1,10 @@
+[Unit]
+Description = Push daemon for Nextcloud clients
+Documentation = https://github.com/nextcloud/notify_push
+
+[Service]
+EnvironmentFile = /etc/notify_push/notify_push.env
+ExecStart = /opt/notify_push/notify_push
+
+[Install]
+WantedBy = multi-user.target
diff --git a/templates/template15-notify-push-server/provision.sh b/templates/template15-notify-push-server/provision.sh
new file mode 100644
index 0000000..7233d1d
--- /dev/null
+++ b/templates/template15-notify-push-server/provision.sh
@@ -0,0 +1,68 @@
+#!/bin/bash
+
+timedatectl set-timezone Europe/Madrid
+
+start_time=`date`
+
+echo "provisioning started: ${start_time}"
+
+NETWORK_INTERFACE=eth0
+
+hostnamectl set-hostname ${MACHINE_HOSTNAME}
+
+# Print some information about the container OS
+hostnamectl
+
+# Print some information about the container timezone
+timedatectl
+
+#####################################################################
+## Get the IP address into an environment variable. This command outputs
+## an empty variable if the network interface name is not ${NETWORK_INTERFACE}
+#####################################################################
+ip_address=`ip -4 addr show ${NETWORK_INTERFACE} | grep -oP '(?<=inet\s)\d+(\.\d+){3}'`
+
+## Copy the notify_push binary in its dedicated folder
+mkdir /opt/notify_push
+
+cp /vagrant/artifacts/notify_push /opt/notify_push/
+
+chmod +x /opt/notify_push/notify_push
+
+## Create the environment file for the notify_push systemd service
+mkdir /etc/notify_push
+
+cp /vagrant/artifacts/notify_push.env /etc/notify_push/
+
+sed -i "s|#NEXTCLOUD_URL#|${NEXTCLOUD_URL}|g" /etc/notify_push/notify_push.env
+
+sed -i "s|#DATABASE_URL#|${DATABASE_URL}|g" /etc/notify_push/notify_push.env
+
+sed -i "s|#REDIS_URL#|${REDIS_URL}|g" /etc/notify_push/notify_push.env
+
+sed -i "s|#PORT#|${PORT}|g" /etc/notify_push/notify_push.env
+
+sed -i "s|#ALLOW_SELF_SIGNED#|${ALLOW_SELF_SIGNED}|g" /etc/notify_push/notify_push.env
+
+sed -i "s|#DATABASE_PREFIX#|${DATABASE_PREFIX}|g" /etc/notify_push/notify_push.env
+
+## Create, enable and start the systemd service
+cp /vagrant/artifacts/notify_push.service /etc/systemd/system/
+
+systemctl daemon-reload
+
+systemctl enable notify_push
+
+systemctl start notify_push
+
+systemctl status notify_push
+
+end_time=`date`
+
+echo "This container has IP (interface: ${NETWORK_INTERFACE}): ${ip_address}"
+
+echo "The notify_push service is listening on port ${PORT} to proxy notifications from ${NEXTCLOUD_URL}"
+
+echo "provisioning started: ${start_time}"
+
+echo "provisioning ended: ${end_time}"