Installing in Air-Gapped Ubuntu Environments
This guide provides a comprehensive procedure for deploying Capacity Private Cloud in isolated networks without external internet access. The installation uses a two-server approach: an online system for downloading components and an offline system for production deployment.
Scope of Installation
This procedure covers installing the following components in an isolated network:
- Runtimes & Orchestration: Docker, Containerd, and Kubernetes
- Networking & Service Mesh: Calico, Linkerd, and Ingress-nginx
- Package Management: Helm
- Infrastructure Services: Docker Private Registry, MongoDB, PostgreSQL, RabbitMQ, and Redis
- Platform Stack: Capacity Private Cloud services, MRCP-API, and MRCP-Client
Environment Requirements
This installation requires a two-server approach:
- Online Server: A Linux system connected to the internet to download and stage all required assets.
- Offline Server: A secured Linux system with no external network access where the production environment is installed.
While this guide is compatible with both Red Hat and Ubuntu, the examples are based on Ubuntu 24.04.4 LTS.
Server Information
| Server Type | Server Name | Server IP |
|---|---|---|
| Online Server | ubuntu-online | 172.18.2.71 |
| Offline Server | ubuntu-offline | 172.18.2.72 |
curl and rsync are installed on both servers.
Online Server Preparation
System Prerequisites
Kubernetes requires specific system settings to manage container networking and memory efficiently.
Disable Swap Space
sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
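The sed expression comments out every fstab line containing "swap". You can dry-run it against a scratch copy first; the entries below are hypothetical:

```shell
# Scratch copy of an fstab with a hypothetical swap entry
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
# Same expression as above, applied to the scratch copy
sed -i '/swap/ s/^\(.*\)$/#\1/g' /tmp/fstab.demo
# The swap line is now commented out; the root entry is untouched
cat /tmp/fstab.demo
```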
Configure Kernel Modules
The following modules are required for the Kubernetes pod network (Calico) to function correctly.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
ip_tables
overlay
br_netfilter
EOF
sudo modprobe ip_tables
sudo modprobe overlay
sudo modprobe br_netfilter
Network & Security Settings
Adjust the system's network filtering and disable the firewall to allow inter-pod communication within the cluster.
# Enable bridged traffic and IP forwarding
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl settings without reboot
sudo sysctl --system
# Disable Firewall and AppArmor
sudo systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl stop apparmor
sudo systemctl disable apparmor
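A malformed line in k8s.conf is easy to miss in the sysctl --system output, so a quick format check on the file before copying it anywhere can help. A sketch against a scratch copy (path hypothetical, keys as above):

```shell
# Scratch copy of the sysctl file written above
cat > /tmp/k8s-sysctl.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Every line must parse as "key = 1"; awk exits non-zero otherwise
awk -F' = ' 'NF != 2 || $2 != "1" { bad = 1 } END { exit bad }' /tmp/k8s-sysctl.conf && echo "sysctl file OK"
```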
Online Asset Staging
Docker and Containerd
Create a directory to store the files. In this example, we save the files in /lumenvox.
mkdir /lumenvox && cd /lumenvox
mkdir docker-offline && cd docker-offline
# Add GPG key
sudo apt update
sudo apt install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Setup the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Run Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
Download the Docker core components from https://download.docker.com/linux/ubuntu/dists
cd docker-offline
# Define the base URL for Ubuntu 24.04 (Noble)
BASE_URL="https://download.docker.com/linux/ubuntu/dists/noble/pool/stable/amd64/"
curl -LO "${BASE_URL}containerd.io_1.7.25-1_amd64.deb"
curl -LO "${BASE_URL}docker-ce_27.5.1-1~ubuntu.24.04~noble_amd64.deb"
curl -LO "${BASE_URL}docker-ce-cli_27.5.1-1~ubuntu.24.04~noble_amd64.deb"
curl -LO "${BASE_URL}docker-buildx-plugin_0.20.0-1~ubuntu.24.04~noble_amd64.deb"
curl -LO "${BASE_URL}docker-compose-plugin_2.32.4-1~ubuntu.24.04~noble_amd64.deb"
Kubernetes
Replace v1.33 in the URL if you require a different version.
Add the Kubernetes repository:
sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Download Packages
Download the Kubernetes binaries and their necessary support tools into a local directory without installing them:
# Create and enter the staging directory
mkdir -p /lumenvox/k8s-offline && cd /lumenvox/k8s-offline
# Download packages and all required dependencies
sudo apt update
apt-get download kubelet kubeadm kubectl kubernetes-cni conntrack socat ebtables
Download Kubernetes Images
To confirm the required image versions, run kubeadm config images list on a running Kubernetes system.
Download the required images for Kubernetes v1.33 and save them as .tar archives:
mkdir -p /lumenvox/k8s-images && cd /lumenvox/k8s-images
docker pull registry.k8s.io/kube-apiserver:v1.33.8
docker save registry.k8s.io/kube-apiserver:v1.33.8 > kube-apiserver:v1.33.8.tar
docker pull registry.k8s.io/kube-controller-manager:v1.33.8
docker save registry.k8s.io/kube-controller-manager:v1.33.8 > kube-controller-manager:v1.33.8.tar
docker pull registry.k8s.io/kube-scheduler:v1.33.8
docker save registry.k8s.io/kube-scheduler:v1.33.8 > kube-scheduler:v1.33.8.tar
docker pull registry.k8s.io/kube-proxy:v1.33.8
docker save registry.k8s.io/kube-proxy:v1.33.8 > kube-proxy:v1.33.8.tar
docker pull registry.k8s.io/coredns/coredns:v1.12.0
docker save registry.k8s.io/coredns/coredns:v1.12.0 > coredns:v1.12.0.tar
docker pull registry.k8s.io/pause:3.10
docker save registry.k8s.io/pause:3.10 > pause:3.10.tar
docker pull registry.k8s.io/etcd:3.5.24-0
docker save registry.k8s.io/etcd:3.5.24-0 > etcd:3.5.24-0.tar
Calico
Download the essential container images for the Calico CNI. These components are critical for establishing the Kubernetes pod network and managing inter-service communication.
mkdir -p /lumenvox/calico-offline && cd /lumenvox/calico-offline
# Download the installation manifest
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# List the image information
grep image: calico.yaml | awk '{print $2}' | sort -u
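The same extraction pipeline can be exercised without the full manifest; the snippet below is a hypothetical manifest fragment used only to show what the grep/awk/sort chain produces:

```shell
# Hypothetical manifest fragment with a duplicated image reference
cat > /tmp/sample-manifest.yaml <<'EOF'
      containers:
        - name: calico-node
          image: docker.io/calico/node:v3.27.0
        - name: install-cni
          image: docker.io/calico/cni:v3.27.0
      initContainers:
        - name: install-cni
          image: docker.io/calico/cni:v3.27.0
EOF
# Prints each unique image reference once, sorted
grep image: /tmp/sample-manifest.yaml | awk '{print $2}' | sort -u
```

sort -u collapses the duplicate cni reference, so each image only needs to be pulled once.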
# Pull the required images and save them as .tar archives
docker pull docker.io/calico/cni:v3.27.0
docker save calico/cni:v3.27.0 > cni:v3.27.0.tar
docker pull docker.io/calico/kube-controllers:v3.27.0
docker save calico/kube-controllers:v3.27.0 > kube-controllers:v3.27.0.tar
docker pull docker.io/calico/node:v3.27.0
docker save calico/node:v3.27.0 > node:v3.27.0.tar
Crictl
Download and install the crictl utility. This tool is required for inspecting and managing your container runtime environment during installation.
mkdir -p /lumenvox/crictl-offline && cd /lumenvox/crictl-offline
curl -LO https://pkgs.k8s.io/core:/stable:/v1.33/deb/amd64/cri-tools_1.33.0-1.1_amd64.deb
Linkerd
Linkerd manages the complex networking between microservices in a Kubernetes environment.
mkdir -p /lumenvox/linkerd-offline && cd /lumenvox/linkerd-offline
curl -O https://assets.lumenvox.com/kubeadm/linkerd.tar
tar -xvf linkerd.tar
Helm
Download and initialize the Helm binary. Helm is the package manager used to install, upgrade, and configure the platform Kubernetes charts.
mkdir -p /lumenvox/helm-offline && cd /lumenvox/helm-offline
curl -O https://get.helm.sh/helm-v3.19.2-linux-amd64.tar.gz
tar -zxvf helm-v3.19.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
Add the Helm Repository
helm repo add lumenvox https://lumenvox.github.io/helm-charts
helm repo update
Download Helm Charts
The following command creates a lumenvox folder in the current directory with all Helm charts:
helm fetch lumenvox/lumenvox --untar
Open the lumenvox/values.yaml file and update the image repository settings to point to your private Docker registry. Ensure that the image tag matches the version you are deploying.
cd /lumenvox/helm-offline/lumenvox
vi values.yaml
Set the repository to your private registry (e.g., my-docker-registry.com:5000) and the appropriate tag (e.g., :7.0).
Alternatively, download the values.yaml from GitHub:
cd /lumenvox
curl -O https://raw.githubusercontent.com/lumenvox/containers-quick-start/master/values.yaml
Platform Images and External Services
Use the following script to pull and archive the platform and external service images. Save this script as download_lv_images.sh in the /lumenvox directory.
#!/bin/bash
IMAGES=(
"lumenvox/admin-portal:7.0"
"lumenvox/archive:7.0"
"lumenvox/asr:7.0"
"lumenvox/cloud-init-tools:7.0"
"lumenvox/configuration:7.0"
"lumenvox/deployment:7.0"
"lumenvox/deployment-portal:7.0"
"lumenvox/file-store:7.0"
"lumenvox/grammar:7.0"
"lumenvox/itn:7.0"
"lumenvox/license:7.0"
"lumenvox/lumenvox-api:7.0"
"lumenvox/management-api:7.0"
"lumenvox/neural-tts:7.0"
"lumenvox/reporting-api:7.0"
"lumenvox/resource:7.0"
"lumenvox/session:7.0"
"lumenvox/storage:7.0"
"lumenvox/vad:7.0"
"lumenvox/cloud-logging-sidecar:7.0"
"lumenvox/mrcp-api:7.0"
"lumenvox/simple_mrcp_client:latest"
"lumenvox/diag-tools:jammy-4.2.0"
"lumenvox/license-reporter-tool:latest"
"docker.io/rabbitmq:4.1.8-management"
"docker.io/redis:8.2.4-alpine"
"docker.io/mongo:8.2"
"docker.io/postgres:17.5"
)
SAVE_DIR="/lumenvox/lv_images-offline"
mkdir -p "$SAVE_DIR"
for IMAGE in "${IMAGES[@]}"; do
echo "Processing: $IMAGE"
if docker pull "$IMAGE"; then
FILE_NAME=$(echo "$IMAGE" | tr '/:' '_')
echo "Saving to $SAVE_DIR/${FILE_NAME}.tar"
docker save -o "$SAVE_DIR/${FILE_NAME}.tar" "$IMAGE"
else
echo "ERROR: Failed to pull $IMAGE. Skipping..."
fi
done
echo "Compressing all images into one bundle..."
tar czvf lv_images-offline.tar.gz -C /lumenvox lv_images-offline
echo "Done! Final bundle: lv_images-offline.tar.gz"
Model Files
Save the following script as download_lv_models.sh in the /lumenvox directory to download the required model files:
#!/bin/bash
DOWNLOAD_DIR="/lumenvox/lv_models-offline"
mkdir -p "$DOWNLOAD_DIR"
URLS=(
"https://assets.lumenvox.com/model-files/asr/asr_decoder_model_en_gb-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_decoder_model_en_us-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_encoder_model_en-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_lang_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_lib_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/dnn/backend_dnn_model_7-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/dnn/backend_dnn_model_p.manifest"
"https://assets.lumenvox.com/model-files/asr/dist_package_model_asr-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/dnn/dist_package_model_en.manifest"
"https://assets.lumenvox.com/model-files/dnn/dist_package_model_itn-7.0.3.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/dist_package_model_neural_tts.manifest"
"https://assets.lumenvox.com/model-files/itn/itn_dnn_model_en.manifest"
"https://assets.lumenvox.com/model-files/asr/multilingual_confidence_model.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/neural_tts_en_us_aurora-8.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/neural_tts_en_us_caspian-8.manifest"
"https://assets.lumenvox.com/model-files/nlu/nlu_model_en.manifest"
"https://assets.lumenvox.com/model-files/tts/tts_base_lang_data.manifest"
"https://assets.lumenvox.com/model-files/asr/1.0.0/asr_lib_model_en_us-1.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/4.1.0/multilingual_confidence_model-4.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/asr_decoder_model_en_gb-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/asr_decoder_model_en_us-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/asr_encoder_model_en-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/dist_package_model_asr-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/asr_lang_model_en_us.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/1.0.0/backend_dnn_model_p-1.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/1.0.3/dist_package_model_en-1.0.3.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/7.0.0/backend_dnn_model_7-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/7.0.3/dist_package_model_itn-7.0.3.tar.gz"
"https://assets.lumenvox.com/model-files/itn/3.0.1/itn_dnn_model_en-3.0.1.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/2.0.0/dist_package_model_neural_tts-2.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/8.1.0/neural_tts_en_us_aurora-8.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/8.1.0/neural_tts_en_us_caspian-8.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/nlu/1.0.4/nlu_model_en-1.0.4.tar.gz"
"https://assets.lumenvox.com/model-files/tts/tts_base_lang_data.tar.gz"
)
for URL in "${URLS[@]}"; do
FILE_NAME=$(basename "$URL")
echo "Downloading $FILE_NAME..."
curl -fLo "${DOWNLOAD_DIR}/${FILE_NAME}" "$URL" || echo "Failed to download $URL"
done
echo "All downloads complete. Files are in: $DOWNLOAD_DIR"
Media Server and External Services
Download the external services, MRCP-API, and MRCP client:
mkdir -p /lumenvox/services-offline && cd /lumenvox/services-offline
git clone https://github.com/lumenvox/mrcp-api.git
git clone https://github.com/lumenvox/mrcp-client.git
git clone https://github.com/lumenvox/external-services.git
cd external-services
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/docker-compose.yaml
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/rabbitmq.conf
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/.env
Ingress-nginx
mkdir -p /lumenvox/ingress-nginx-offline && cd /lumenvox/ingress-nginx-offline
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm fetch ingress-nginx/ingress-nginx --untar
docker pull registry.k8s.io/ingress-nginx/controller:v1.14.1
docker save registry.k8s.io/ingress-nginx/controller:v1.14.1 > controller:v1.14.1.tar
docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2
docker save registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2 > kube-webhook-certgen:v1.5.2.tar
Docker Private Registry
A Docker private registry is a container image server that your organization controls. Instead of pulling and pushing images to a public service like Docker Hub, you store them in your own registry, allowing only authorized users and systems to access them.
Set Up the Registry
cd /lumenvox/docker-offline
docker pull registry:2
docker save registry:2 -o registry.tar
mkdir -p registry/data
cd registry
Create a docker-compose.yaml file to map port 5000 and ensure your images are saved permanently:
sudo tee /lumenvox/docker-offline/registry/docker-compose.yaml <<EOF
services:
registry:
image: registry:2
container_name: private-registry
ports:
- "5000:5000"
volumes:
- ./data:/var/lib/registry
restart: always
EOF
Start the Docker registry:
docker compose up -d
Configure Insecure Registry
Docker, by default, refuses to push to a registry that doesn't use HTTPS. Add the following to /etc/docker/daemon.json:
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries": ["my-docker-registry.com:5000"]
}
EOF
Reload and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
Add an entry for my-docker-registry.com to the /etc/hosts file. Replace the IP address with the actual IP of the online server:
172.18.2.71 my-docker-registry.com
Tag and Push Images
Tag and push ingress-nginx to the local registry:
cd /lumenvox/ingress-nginx-offline
docker tag registry.k8s.io/ingress-nginx/controller:v1.14.1 my-docker-registry.com:5000/controller:v1.14.1
docker push my-docker-registry.com:5000/controller:v1.14.1
docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2 my-docker-registry.com:5000/kube-webhook-certgen:v1.5.2
docker push my-docker-registry.com:5000/kube-webhook-certgen:v1.5.2
Confirm the images are in the private registry:
curl my-docker-registry.com:5000/v2/_catalog
The output should look similar to the following:
{"repositories":["controller","kube-webhook-certgen"]}
Use a script to tag and push all platform images. Save it as load_push_local_registry.sh:
#!/bin/bash
REGISTRY="my-docker-registry.com:5000"
IMAGE_DIR="/lumenvox/lv_images-offline"
REGISTRY="${REGISTRY%/}"
for TAR in "$IMAGE_DIR"/*.tar; do
echo "Processing $TAR..."
IMAGE_FULL_NAME=$(docker load -i "$TAR" | awk '/Loaded image:/ { print $3 }')
if [ -z "$IMAGE_FULL_NAME" ]; then
echo "Error: Failed to extract image name from $TAR"
continue
fi
echo "Found image: $IMAGE_FULL_NAME"
CLEAN_NAME="${IMAGE_FULL_NAME#docker.io/}"
CLEAN_NAME="${CLEAN_NAME#lumenvox/}"
TARGET_IMAGE="${REGISTRY}/${CLEAN_NAME}"
echo "Tagging as: $TARGET_IMAGE"
docker tag "$IMAGE_FULL_NAME" "$TARGET_IMAGE"
echo "Pushing: $TARGET_IMAGE"
docker push "$TARGET_IMAGE"
done
echo "Done."
To list the contents of the registry after pushing all images:
curl my-docker-registry.com:5000/v2/_catalog
The output should now list every repository you pushed.
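The prefix-stripping logic inside load_push_local_registry.sh can be exercised without Docker or a registry. The sketch below applies the same parameter expansions to two sample image names from this guide:

```shell
# Reproduce the tag-rewriting step from load_push_local_registry.sh
REGISTRY="my-docker-registry.com:5000"
for IMAGE_FULL_NAME in docker.io/rabbitmq:4.1.8-management lumenvox/asr:7.0; do
  # Strip the docker.io/ and lumenvox/ prefixes, exactly as the script does
  CLEAN_NAME="${IMAGE_FULL_NAME#docker.io/}"
  CLEAN_NAME="${CLEAN_NAME#lumenvox/}"
  echo "${IMAGE_FULL_NAME} -> ${REGISTRY}/${CLEAN_NAME}"
done
```

Both external-service images (docker.io/...) and platform images (lumenvox/...) therefore land in the registry under a flat, unprefixed repository name.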
Offline Server Preparation
System Prerequisites
Apply the same system prerequisites as the online server:
Disable Swap Space
sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
Configure Kernel Modules
sudo tee /etc/modules-load.d/k8s.conf <<EOF
ip_tables
overlay
br_netfilter
EOF
sudo modprobe ip_tables
sudo modprobe overlay
sudo modprobe br_netfilter
Network & Security Settings
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sudo systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl stop apparmor
sudo systemctl disable apparmor
Modify /etc/hosts
Add entries for the offline server and the private repository (hosted on the online server):
sudo vi /etc/hosts
172.18.2.72 ubuntu-offline
172.18.2.71 my-docker-registry.com
Copy Staging Files from Online Server
Use rsync to synchronize folders and files between servers:
sudo mkdir /lumenvox
rsync -avzP user@remote_host:/lumenvox/* /lumenvox/
Install Docker and Containerd
cd /lumenvox/docker-offline
sudo dpkg -i *.deb
sudo systemctl enable --now docker
sudo systemctl enable --now containerd
Configure Containerd for Private Registry
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo sed -i 's|sandbox_image = "registry.k8s.io/pause:3.8"|sandbox_image = "registry.k8s.io/pause:3.10"|g' /etc/containerd/config.toml
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry\]/,/\[/ s|config_path = .*|config_path = "/etc/containerd/certs.d"|' /etc/containerd/config.toml
sudo mkdir -p /etc/containerd/certs.d/my-docker-registry.com:5000
cat <<EOF | sudo tee /etc/containerd/certs.d/my-docker-registry.com:5000/hosts.toml
server = "http://my-docker-registry.com:5000"
[host."http://my-docker-registry.com:5000"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
EOF
sudo systemctl restart containerd
sudo usermod -aG docker $USER
newgrp docker
Configure Insecure Registries
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries": ["my-docker-registry.com:5000"]
}
EOF
sudo systemctl restart docker
List the contents of the private Docker registry hosted on the online server:
curl my-docker-registry.com:5000/v2/_catalog
The output should match the catalog listing you saw on the online server.
Install Crictl
cd /lumenvox/crictl-offline
sudo dpkg -i cri-tools_1.33.0-1.1_amd64.deb
sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
Install Kubernetes
cd /lumenvox/k8s-offline/
sudo dpkg -i *.deb
sudo systemctl enable --now containerd
Load Kubernetes Images
cd /lumenvox/k8s-images
sudo ctr -n k8s.io images import coredns:v1.12.0.tar
sudo ctr -n k8s.io images import etcd:3.5.24-0.tar
sudo ctr -n k8s.io images import kube-apiserver:v1.33.8.tar
sudo ctr -n k8s.io images import kube-controller-manager:v1.33.8.tar
sudo ctr -n k8s.io images import kube-proxy:v1.33.8.tar
sudo ctr -n k8s.io images import kube-scheduler:v1.33.8.tar
sudo ctr -n k8s.io images import pause:3.10.tar
Initialize the Control Plane
sudo kubeadm init --apiserver-advertise-address=172.18.2.72 --kubernetes-version=v1.33.8
If the control plane initializes successfully (this may take up to 5 minutes), kubeadm reports that the control plane has initialized and prints the commands for setting up kubectl.
Set up kubectl for the user:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Untaint the node (replace <node-name> with your node name):
kubectl get node
kubectl taint node <node-name> node-role.kubernetes.io/control-plane-
A NotReady status is normal at this stage because the Container Network Interface (Calico) has not been installed yet.
Install Calico
cd /lumenvox/calico-offline
sudo ctr -n k8s.io images import kube-controllers:v3.27.0.tar
sudo ctr -n k8s.io images import node:v3.27.0.tar
sudo ctr -n k8s.io images import cni:v3.27.0.tar
kubectl apply -f calico.yaml
Verify the node is now in Ready status:
kubectl get node
The node should now report a Ready status.
Install Linkerd
cd /lumenvox/linkerd-offline
sudo chmod +x linkerd_cli_installer_offline.sh
sudo ctr -n k8s.io images import controller:edge-24.8.2.tar
sudo ctr -n k8s.io images import metrics-api:edge-24.8.2.tar
sudo ctr -n k8s.io images import policy-controller:edge-24.8.2.tar
sudo ctr -n k8s.io images import prometheus:v2.48.1.tar
sudo ctr -n k8s.io images import proxy:edge-24.8.2.tar
sudo ctr -n k8s.io images import proxy-init:v2.4.1.tar
sudo ctr -n k8s.io images import tap:edge-24.8.2.tar
sudo ctr -n k8s.io images import web:edge-24.8.2.tar
./linkerd_cli_installer_offline.sh
export PATH=$PATH:~/.linkerd2/bin
linkerd check --pre
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check
linkerd viz install | kubectl apply -f -
kubectl delete cronjob linkerd-heartbeat -n linkerd
Install Helm
cd /lumenvox/helm-offline
tar -zxvf helm-v3.19.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
Platform and External Services Installation
Create the Namespace
kubectl create ns lumenvox
Configure External Services
Copy the external-services folder and update the docker-compose.yaml to point to your private registry:
cp -r /lumenvox/services-offline/external-services/ ~
cd ~/external-services
vi docker-compose.yaml
Update the image tags:
### mongodb
image: my-docker-registry.com:5000/mongo:8.2
### postgresql
image: my-docker-registry.com:5000/postgres:17.5
### rabbitmq
image: my-docker-registry.com:5000/rabbitmq:4.1.8-management
### redis
image: my-docker-registry.com:5000/redis:8.2.4-alpine
Update the .env file with your credentials:
vi .env
#-------------------------#
# MongoDB Configuration
#-------------------------#
MONGO_INITDB_ROOT_USERNAME=lvuser
MONGO_INITDB_ROOT_PASSWORD=mongo1234
#-------------------------#
# PostgreSQL Configuration
#-------------------------#
POSTGRES_USER=lvuser
POSTGRES_PASSWORD=postgres1234
#-------------------------#
# RabbitMQ Configuration
#-------------------------#
RABBITMQ_USERNAME=lvuser
RABBITMQ_PASSWORD=rabbit1234
#-------------------------#
# Redis Configuration
#-------------------------#
REDIS_PASSWORD=redis1234
Start the external services and verify that they are running:
docker compose up -d
docker ps
Create Kubernetes Secrets
Replace the $PASSWORD placeholders with the actual values from your .env file:
kubectl create secret generic mongodb-existing-secret --from-literal=mongodb-root-password=$MONGO_INITDB_ROOT_PASSWORD -n lumenvox
kubectl create secret generic postgres-existing-secret --from-literal=postgresql-password=$POSTGRES_PASSWORD -n lumenvox
kubectl create secret generic rabbitmq-existing-secret --from-literal=rabbitmq-password=$RABBITMQ_PASSWORD -n lumenvox
kubectl create secret generic redis-existing-secret --from-literal=redis-password=$REDIS_PASSWORD -n lumenvox
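Rather than substituting the passwords by hand, the values can be exported straight from the .env file before running the kubectl commands above. A sketch using a throwaway .env with placeholder credentials:

```shell
# Throwaway .env mirroring the keys used above (placeholder values)
cat > /tmp/demo.env <<'EOF'
MONGO_INITDB_ROOT_PASSWORD=mongo1234
POSTGRES_PASSWORD=postgres1234
RABBITMQ_PASSWORD=rabbit1234
REDIS_PASSWORD=redis1234
EOF
set -a   # auto-export every variable assigned while sourcing
. /tmp/demo.env
set +a
# The $PASSWORD references in the kubectl commands now resolve
echo "$RABBITMQ_PASSWORD"
```

With the real file, replace /tmp/demo.env with ~/external-services/.env so the secrets always match what the containers were started with.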
Configure values.yaml
Edit the values.yaml with your clusterGUID, IP address of the external services, ASR language(s), and TTS voice(s):
cd /lumenvox
vi values.yaml
Create TLS Certificates
Create the certificate key:
openssl genrsa -out server.key 2048
Create the certificate. Ensure the subjectAltName matches the hostnameSuffix in your values.yaml file:
openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650 \
  -addext "subjectAltName = DNS:lumenvox-api.ubuntu12.testmachine.com, \
  DNS:biometric-api.lumenvox-api.ubuntu12.testmachine.com, \
  DNS:management-api.lumenvox-api.ubuntu12.testmachine.com, \
  DNS:reporting-api.lumenvox-api.ubuntu12.testmachine.com, \
  DNS:admin-portal.lumenvox-api.ubuntu12.testmachine.com, \
  DNS:deployment-portal.lumenvox-api.ubuntu12.testmachine.com"
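To confirm the SANs actually made it into the certificate, inspect it with openssl x509. This sketch generates a throwaway key and self-signed certificate with two hypothetical example.com hostnames, then prints only the subjectAltName extension:

```shell
# Throwaway key + self-signed cert with hypothetical SANs
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
openssl req -new -x509 -sha256 -key /tmp/demo.key -out /tmp/demo.crt -days 1 \
  -subj "/CN=lumenvox-api.example.com" \
  -addext "subjectAltName = DNS:lumenvox-api.example.com, DNS:admin-portal.lumenvox-api.example.com"
# Print just the subjectAltName extension; every DNS entry should be listed
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Run the same x509 command against your real server.crt and check that every hostnameSuffix-derived name from values.yaml appears.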
Create the speech-tls-secret:
cd /lumenvox
kubectl create secret tls speech-tls-secret --key server.key --cert server.crt -n lumenvox
Install the Platform
cd /lumenvox
helm install lumenvox helm-offline/lumenvox -n lumenvox -f values.yaml
watch kubectl get po -A
Load Model Files
First, uninstall and reinstall the chart to set the proper permissions on the /data directory:
helm uninstall lumenvox -n lumenvox
helm install lumenvox helm-offline/lumenvox -n lumenvox -f values.yaml
Copy the manifest files to /data/lang/manifests:
cd /lumenvox/lv_models-offline
cp -p *.manifest /data/lang/manifests/
Copy the model archives to /data/lang/downloads:
cd /lumenvox/lv_models-offline
cp -p *.tar.gz /data/lang/downloads/
Restart the deployment:
kubectl rollout restart deployment -n lumenvox
All pods should now be running.
Ingress-nginx Installation
Configure ingress-nginx to pull from your private Docker repository:
cd /lumenvox/ingress-nginx-offline
vi ingress-nginx/values.yaml
Search for "controller" and set:
image: "controller"
repository: "my-docker-registry.com:5000/controller"
tag: "v1.14.1"
digest: null
digestChroot: null
Search for "kube-webhook" and set:
image: "kube-webhook-certgen"
repository: "my-docker-registry.com:5000/kube-webhook-certgen"
tag: "v1.5.2"
digest: null
Search for "hostNetwork" and set it to true.
Install ingress-nginx:
helm upgrade --install ingress-nginx ./ingress-nginx -n ingress-nginx --create-namespace --set controller.hostNetwork=true --version 4.14.1 -f ./ingress-nginx/values.yaml
MRCP-API Installation
Move the mrcp-api to your home directory and configure Docker to pull from your private registry:
cd /lumenvox/services-offline/
cp -r mrcp-api ~
cd ~/mrcp-api/docker/
vi .env
Set DOCKER_REGISTRY=my-docker-registry.com:5000/
Copy the certificate and start the service:
cd ~/mrcp-api/docker
docker compose down
sudo cp /lumenvox/server.crt certs
docker compose up -d
MRCP-Client Installation
Move the mrcp-client to your home directory and configure Docker:
cd /lumenvox/services-offline/
cp -r mrcp-client ~
cd ~/mrcp-client
vi docker-compose.yml
Set the image to my-docker-registry.com:5000/simple_mrcp_client, then start the client:
docker compose up -d
Next Steps
- Creating a deployment: See the Setup via quick start (kubeadm) guide for instructions on accessing the Admin Portal to create a deployment.
- Licensing the server: See Setting up the license reporter tool to license a server in an air-gapped environment.
