Phase 4 — Packaging & Deployment
For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
Goal: Turn Phase 1–3 code into shippable artifacts: a single-file Burrito binary for the agent, a Mix release for the server, a Caddy reverse-proxy config template, systemd unit for the agent, and deployment docs (LXC for server, scp+systemd for agents).
Architecture: Two artifact pipelines.
- Agent: Burrito wraps the OTP release in a single self-extracting binary so Proxmox hosts only need the file and a systemd unit — no Erlang install on the hosts. Because Burrito cross-compiles via Zig, we document a Docker-based build path so developers on any platform can produce Linux binaries reproducibly.
- Server: Standard `mix release`. Runs inside an LXC container on a Proxmox host in the RZ. Caddy fronts it, terminates TLS, and proxies HTTP + WebSockets to 127.0.0.1:4000. Migrations run via a release eval command on boot.
No new tests — packaging is verified by actually building artifacts.
Tech Stack: burrito (single-binary packaging), mix release, Docker (optional, for cross-compile), Caddy (TLS + reverse proxy), systemd.
File Structure
agent/
├── mix.exs modify: add burrito + releases
├── rel/
│ ├── proxmox-monitor-agent.service create (systemd unit)
│ └── env.sh.eex create (runtime env)
├── build/ (gitignored) Burrito output
├── Dockerfile.build create (reproducible linux builds)
└── docs/
└── install.md create (per-host install steps)
server/
├── mix.exs modify: release config
├── rel/
│ ├── env.sh.eex create if missing
│ └── remote.vm.args.eex create if missing
├── lib/server/release.ex modify: add migrate/rollback
├── Dockerfile create (build+runtime, via phx.gen.release)
└── docs/
├── deploy-lxc.md create (setup LXC container)
└── Caddyfile.example create (reverse-proxy template)
docs/
└── deployment-overview.md create (who-builds-what, ports, flow)
Ignored in git: agent/build/, server/_build/, anything the release tooling writes.
Task 1: Agent — Burrito Dep + Mix Config
Files:
- Modify: agent/mix.exs
- Modify: .gitignore
- Step 1: Add `:burrito` dep and release config
Open agent/mix.exs. Replace the whole file with:
defmodule ProxmoxAgent.MixProject do
use Mix.Project
@version "0.1.0"
def project do
[
app: :agent,
version: @version,
elixir: "~> 1.17",
start_permanent: Mix.env() == :prod,
deps: deps(),
elixirc_paths: elixirc_paths(Mix.env()),
releases: releases()
]
end
def application do
[
extra_applications: [:logger, :crypto],
mod: {ProxmoxAgent.Application, []}
]
end
defp deps do
[
{:slipstream, "~> 1.1"},
{:jason, "~> 1.4"},
{:toml, "~> 0.7"},
{:burrito, "~> 1.3"}
]
end
defp elixirc_paths(:test), do: ["lib", "test/support"]
defp elixirc_paths(_), do: ["lib"]
defp releases do
[
agent: [
steps: [:assemble, &Burrito.wrap/1],
burrito: [
targets: [
linux_amd64: [os: :linux, cpu: :x86_64],
linux_arm64: [os: :linux, cpu: :aarch64],
macos: [os: :darwin, cpu: :aarch64]
]
]
]
]
end
end
Rationale for the three targets: linux_amd64 is the canonical Proxmox deploy; linux_arm64 is there so a future Raspberry-Pi-class host works; macos exists only to let developers smoke-test the binary locally.
- Step 2: Extend .gitignore
Open /Users/cabele/claudeprojects/proxmox_monitor/.gitignore and append:
# Burrito build output
/agent/build/
/agent/burrito_out/
- Step 3: Fetch deps and confirm compile
cd /Users/cabele/claudeprojects/proxmox_monitor/agent
mix deps.get 2>&1 | tail -5
mix compile --warnings-as-errors 2>&1 | tail -3
Expected: burrito fetched (plus its deps typed_struct and similar). Compile succeeds. Do not run mix release yet — it requires Zig to be installed and will be covered in Task 7.
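When you do run Burrito builds later (the Docker path in Task 3 or the verification in Task 7), it can be handy to build a single target. If I read Burrito's docs correctly, the BURRITO_TARGET environment variable limits the build to one named target; treat the variable name as an assumption and check the Burrito README if it has changed.
```bash
# Sketch: build only the linux_amd64 target (still requires Zig on the machine).
cd /Users/cabele/claudeprojects/proxmox_monitor/agent
MIX_ENV=prod BURRITO_TARGET=linux_amd64 mix release
ls burrito_out/
```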
- Step 4: Commit
cd /Users/cabele/claudeprojects/proxmox_monitor
git add agent/mix.exs agent/mix.lock .gitignore
git commit -m "feat(agent): burrito dep + release config for linux_amd64/arm64 + macos"
Task 2: Agent — systemd Unit + env.sh.eex
Files:
- Create: agent/rel/proxmox-monitor-agent.service
- Create: agent/rel/env.sh.eex
- Step 1: systemd unit
Create agent/rel/proxmox-monitor-agent.service:
[Unit]
Description=Proxmox Monitor Agent
Documentation=https://github.com/you/proxmox_monitor
After=network-online.target zfs.target
Wants=network-online.target
[Service]
Type=simple
User=root
Environment=AGENT_CONFIG=/etc/proxmox-monitor/agent.toml
# Burrito binaries run the release directly and pass extra argv through to the
# app, so there are no start/stop subcommands; systemd stops it with SIGTERM.
ExecStart=/usr/local/bin/proxmox-monitor-agent
Restart=always
RestartSec=5
# Burrito unpacks into this directory; keep it stable across runs
Environment=BURRITO_CACHE_DIR=/var/cache/proxmox-monitor-agent
# Resource limits
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Rationale:
- Runs as root — required by `zpool status` against degraded pools.
- After=zfs.target so ZFS is ready before the agent tries to read it.
- BURRITO_CACHE_DIR pinned so each restart reuses the unpacked release, avoiding repeated extraction to /tmp.
- Restart=always with 5s backoff — short network blips shouldn't need admin attention.
A quick syntax lint of the unit file is sketched below.
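On any Linux machine with systemd, the unit can be linted before it ever reaches a Proxmox host. On a dev box it will warn that /usr/local/bin/proxmox-monitor-agent is missing, which is expected; what matters is that no syntax errors are reported.
```bash
systemd-analyze verify ./rel/proxmox-monitor-agent.service
```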
- Step 2: release env.sh.eex
Create agent/rel/env.sh.eex:
#!/bin/sh
# Default-logs to journald (stdout) when running under systemd
export RELEASE_COOKIE="${RELEASE_COOKIE:-$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')}"
Rationale: Cookie is only relevant for distribution, but the release runtime wants one set. Generating a per-boot random cookie is fine for a single-node runtime.
- Step 3: Commit
git add agent/rel
git commit -m "feat(agent): systemd unit + release env.sh for root+journald install"
Task 3: Agent — Docker-based Cross-Compile
Files:
- Create: agent/Dockerfile.build
- Create: agent/scripts/build-linux.sh
Burrito's Zig toolchain is painful to install. A Debian-based Docker image produces reproducible Linux artifacts without touching the developer's machine.
- Step 1: Build Dockerfile
Create agent/Dockerfile.build:
# Reproducible Burrito build environment for the agent.
# Produces linux_amd64 + linux_arm64 binaries into /work/agent/burrito_out.
# Keep elixir/OTP in sync with what the project compiles against locally
# (see `elixir --version` — currently Elixir 1.19 on OTP 28).
FROM elixir:1.19-otp-28 AS build
ENV MIX_ENV=prod DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential git ca-certificates curl xz-utils unzip 7zip \
&& rm -rf /var/lib/apt/lists/*
# Zig (Burrito needs it for cross-compile)
ARG ZIG_VERSION=0.13.0
RUN curl -fsSL https://ziglang.org/download/${ZIG_VERSION}/zig-linux-$(uname -m)-${ZIG_VERSION}.tar.xz \
| tar -xJ -C /opt && ln -s /opt/zig-linux-*/zig /usr/local/bin/zig
WORKDIR /work/agent
RUN mix local.hex --force && mix local.rebar --force
# Copy sources last for layer caching
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY lib lib
COPY config config
RUN mix deps.compile
RUN mix release
# Default: print the produced artifacts
CMD ["sh", "-c", "ls -la burrito_out/"]
Notes:
- Image is single-stage; the goal is to produce binaries, not to be a runtime.
- Zig version pinned — Burrito 1.x is sensitive to Zig major changes (an override sketch follows).
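Because ZIG_VERSION is declared as a build ARG, the pin can be overridden at build time without editing the Dockerfile; the version below is purely illustrative.
```bash
# Hypothetical: rebuild the image against a different Zig release.
docker build -f Dockerfile.build --build-arg ZIG_VERSION=0.14.0 \
  -t proxmox-monitor-agent-build:zig-test .
```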
- Step 2: Build helper script
Create agent/scripts/build-linux.sh:
#!/usr/bin/env bash
# Produce Linux Burrito binaries for the agent.
# Usage: ./scripts/build-linux.sh [output_dir]
set -euo pipefail
cd "$(dirname "$0")/.."
OUT="${1:-$(pwd)/dist}"
mkdir -p "$OUT"
IMG="proxmox-monitor-agent-build:latest"
docker build -f Dockerfile.build -t "$IMG" .
# Burrito names artifacts "<release_name>_<target>" (here that would be agent_linux_amd64 etc.);
# prefix them on copy so the files match the proxmox-monitor-agent_* names the docs use.
docker run --rm -v "$OUT":/out "$IMG" \
  sh -c 'for f in burrito_out/*; do cp -v "$f" "/out/proxmox-monitor-${f##*/}"; done'
echo
echo "Binaries written to $OUT:"
ls -la "$OUT"
- Step 3: Make it executable
chmod +x /Users/cabele/claudeprojects/proxmox_monitor/agent/scripts/build-linux.sh
- Step 4: Commit
cd /Users/cabele/claudeprojects/proxmox_monitor
git add agent/Dockerfile.build agent/scripts/build-linux.sh
git commit -m "feat(agent): docker-based cross-compile for linux binaries"
Task 4: Server — mix phx.gen.release
Files:
- Generated by the command: server/lib/server/release.ex (modify), server/Dockerfile, server/rel/env.sh.eex, server/rel/remote.vm.args.eex
- Step 1: Run the generator
cd /Users/cabele/claudeprojects/proxmox_monitor/server
mix phx.gen.release --docker
Expected prompt: "A release was generated. Would you like to overwrite lib/server/release.ex?" — answer N to preserve our existing register_host/1 helper. The --docker flag is what produces the Dockerfile (plus a .dockerignore); the generator also creates rel/env.sh.eex, rel/remote.vm.args.eex, and the rel/overlays/bin/server scripts if absent.
- Step 2: Extend Server.Release with migrate/rollback
Our Server.Release currently only defines register_host/1. Open server/lib/server/release.ex and replace the module body with:
defmodule Server.Release do
@moduledoc "Convenience functions for IEx and release-stage admin tasks."
@app :server
@doc "Create a host and print the plaintext token once."
def register_host(name) do
load_app!()
case Server.Hosts.create_host(name) do
{:ok, {host, token}} ->
IO.puts("Host '#{host.name}' registered (id=#{host.id}).")
IO.puts("TOKEN: #{token}")
IO.puts("Store this token NOW — it will never be shown again.")
{:ok, host, token}
{:error, cs} ->
IO.puts("Failed to register host: #{inspect(cs.errors)}")
{:error, cs}
end
end
@doc "Run pending migrations. Invoke via: bin/server eval 'Server.Release.migrate()'"
def migrate do
load_app!()
for repo <- repos() do
{:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
end
end
@doc "Roll back one step per repo."
def rollback(repo, version) do
load_app!()
{:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
end
defp repos, do: Application.fetch_env!(@app, :ecto_repos)
defp load_app! do
Application.load(@app)
end
end
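For reference, a sketch of how the two helpers are invoked against an assembled release; the repo module name Server.Repo and the version number are placeholders for illustration, and both commands expect the runtime environment variables (DATABASE_PATH, SECRET_KEY_BASE, ...) to be exported just like a normal start.
```bash
# Run all pending migrations (same call the deploy doc's ExecStartPre uses later).
bin/server eval 'Server.Release.migrate()'

# Roll one repo back to a specific migration version (placeholder values).
bin/server eval 'Server.Release.rollback(Server.Repo, 20240101120000)'
```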
- Step 3: Inspect generated env.sh.eex
Open the generated server/rel/env.sh.eex. The default is fine for LXC; add a short guard near the top to be explicit about the cookie source (prevents accidentally relying on a default):
# Inside server/rel/env.sh.eex, near the top:
if [ -z "${RELEASE_COOKIE:-}" ]; then
# For single-node deployments we don't need a stable cookie across
# releases; refresh it per-boot so a stale cookie file cannot be
# used to attach a remote shell.
export RELEASE_COOKIE="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
fi
If the generator already wrote a similar block, leave it alone. Note that a randomized cookie also means bin/server rpc and bin/server remote can only attach to the running node when RELEASE_COOKIE is explicitly set to the same value on both sides; this matters for the register_host step in deploy-lxc.md (see the note there).
- Step 4: Compile + release dry-run
mix compile 2>&1 | tail -3
MIX_ENV=prod DASHBOARD_PASSWORD_HASH='placeholder' mix release --overwrite 2>&1 | tail -10
Expected: Release created at _build/prod/rel/server. The placeholder hash is only to satisfy runtime.exs during build; real deploys set a proper one before start.
- Step 5: Smoke-test the release binary locally
DATABASE_PATH=/tmp/proxmox_monitor_release.db \
PHX_SERVER=false \
SECRET_KEY_BASE="$(mix phx.gen.secret)" \
DASHBOARD_PASSWORD_HASH="$(mix run -e 'IO.puts(Argon2.hash_pwd_salt("devpass"))' 2>&1 | tail -1)" \
_build/prod/rel/server/bin/server eval 'Server.Release.migrate()'
Expected: migrations run, exits 0. Do not start the full server — we only verify the release builds and eval works.
- Step 6: Commit
cd /Users/cabele/claudeprojects/proxmox_monitor
git add server/lib/server/release.ex server/Dockerfile server/rel
git commit -m "feat(server): phoenix release with migrate/rollback helpers"
Task 5: Caddyfile Template
Files:
- Create: server/docs/Caddyfile.example
- Step 1: Write template
Create server/docs/Caddyfile.example:
# /etc/caddy/Caddyfile — Proxmox Monitor reverse-proxy
#
# Replace monitor.example.com with your actual hostname.
# Caddy handles Let's Encrypt automatically when the domain's A record
# points at this host.
monitor.example.com {
# Security headers
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "DENY"
Referrer-Policy "strict-origin-when-cross-origin"
-Server
}
# The Phoenix endpoint handles both HTTP requests and WebSocket upgrades
# on the same port; Caddy's reverse_proxy transparently upgrades /socket.
reverse_proxy 127.0.0.1:4000 {
header_up X-Forwarded-Proto {scheme}
header_up X-Forwarded-For {remote_host}
# Keep WebSocket connections open long enough for the Phoenix heartbeat
# cycle (30s by default).
transport http {
read_timeout 90s
dial_timeout 10s
}
}
# Basic access log
log {
output file /var/log/caddy/monitor.log {
roll_size 10mb
roll_keep 5
}
}
}
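Before committing (and again after editing the hostname on the LXC), the template can be sanity-checked with Caddy's own tooling wherever a caddy binary is installed; a minimal sketch:
```bash
# Syntax-check the template without starting a server; the adapter flag is
# needed because the file is not literally named "Caddyfile".
caddy validate --config server/docs/Caddyfile.example --adapter caddyfile
# Optional: normalize indentation in place.
caddy fmt --overwrite server/docs/Caddyfile.example
```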
- Step 2: Commit
cd /Users/cabele/claudeprojects/proxmox_monitor
git add server/docs/Caddyfile.example
git commit -m "docs(server): Caddyfile template with TLS + WSS reverse-proxy"
Task 6: Deployment Docs
Files:
- Create: server/docs/deploy-lxc.md
- Create: agent/docs/install.md
- Create: docs/deployment-overview.md
- Step 1: Deployment overview
Create docs/deployment-overview.md:
# Deployment Overview
Two artifacts, built independently, deployed independently.
                 ┌─────────────────────────┐
                 │ Server (LXC in RZ)      │
  agents ──WSS─> │  - Phoenix release      │
                 │  - SQLite               │
                 │  - Caddy (TLS)          │
                 └─────────────────────────┘
                             ▲
                             │ ssh
                 ┌─────────────────────────┐
                 │ Operator workstation    │
                 │  - Builds server release│
                 │  - Builds agent binary  │
                 └─────────────────────────┘
                             │ scp
                             ▼
                 ┌─────────────────────────┐
                 │ Proxmox host (any of N) │
                 │  - Burrito agent binary │
                 │  - systemd unit         │
                 └─────────────────────────┘
## What runs where
| Component | Host | Port / Path |
|-----------|------|------------------------------------------|
| Caddy | Server LXC | 443 public, forwards → 127.0.0.1:4000 |
| Phoenix | Server LXC | 127.0.0.1:4000 (HTTP + WS) |
| SQLite | Server LXC | file at $DATABASE_PATH |
| Agent | Proxmox host | no listening ports |
## Secrets the operator must provide
| Variable | Where | How to generate |
|---------------------------|------------|-------------------------------------------------|
| `SECRET_KEY_BASE` | Server env | `mix phx.gen.secret` |
| `DASHBOARD_PASSWORD_HASH` | Server env | `mix run -e 'IO.puts(Argon2.hash_pwd_salt("..."))'` |
| Agent token | Server DB | Admin UI → "Add host" reveals it once |
## Build flow
1. `cd server && MIX_ENV=prod mix release` → produces `_build/prod/rel/server/`
2. `cd agent && ./scripts/build-linux.sh` → produces `dist/proxmox-monitor-agent_linux_amd64`
See `server/docs/deploy-lxc.md` and `agent/docs/install.md` for step-by-step.
- Step 2: LXC server deploy
Create server/docs/deploy-lxc.md:
# Server Deployment (LXC + Caddy)
Target: a Proxmox LXC container running Debian 12 in the RZ, publicly reachable
on port 443 via Caddy. ~1 GB RAM, 2 cores, 10 GB disk covers >20 agents.
## 1. Create the LXC (on the hypervisor)
```bash
pct create 200 \
/var/lib/vz/template/cache/debian-12-standard_12.7-1_amd64.tar.zst \
--hostname proxmox-monitor \
--memory 1024 --cores 2 \
--rootfs local-zfs:10 \
--net0 name=eth0,bridge=vmbr0,ip=dhcp \
--unprivileged 1 --features nesting=0 --onboot 1
pct start 200
pct enter 200
```
## 2. Inside the LXC: base packages
```bash
apt-get update && apt-get install -y \
ca-certificates curl debian-keyring debian-archive-keyring apt-transport-https
# Caddy's apt repo
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | \
gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
> /etc/apt/sources.list.d/caddy-stable.list
apt-get update && apt-get install -y caddy sqlite3
```
## 3. Upload the release
From the operator workstation:
```bash
cd proxmox_monitor/server
MIX_ENV=prod mix release --overwrite
tar -czf server_release.tgz -C _build/prod/rel server
scp server_release.tgz root@<LXC-IP>:/tmp/
```
Back in the LXC:
```bash
mkdir -p /opt/proxmox-monitor
tar -xzf /tmp/server_release.tgz -C /opt/proxmox-monitor
```
## 4. Directories & env file
```bash
install -d -m 0700 /var/lib/proxmox-monitor
# SECRET_KEY_BASE is generated with plain shell tools; `bin/server eval` cannot
# run yet because runtime.exs requires these variables before the release boots.
# After writing the file, replace CHANGE_ME with the Argon2 hash generated on the
# workstation (mix run -e 'IO.puts(Argon2.hash_pwd_salt("your-password"))') using
# an editor; pasting it into this unquoted heredoc would expand the hash's '$'.
cat > /etc/default/proxmox-monitor <<EOF
DATABASE_PATH=/var/lib/proxmox-monitor/monitor.db
SECRET_KEY_BASE=$(head -c 48 /dev/urandom | base64 | tr -d '\n')
DASHBOARD_PASSWORD_HASH=CHANGE_ME
PHX_SERVER=true
PHX_HOST=monitor.example.com
PORT=4000
EOF
chmod 0600 /etc/default/proxmox-monitor
```
## 5. systemd unit
```ini
# /etc/systemd/system/proxmox-monitor.service
[Unit]
Description=Proxmox Monitor Server
After=network-online.target
Wants=network-online.target
[Service]
Type=exec
EnvironmentFile=/etc/default/proxmox-monitor
ExecStartPre=/opt/proxmox-monitor/server/bin/server eval 'Server.Release.migrate()'
ExecStart=/opt/proxmox-monitor/server/bin/server start
ExecStop=/opt/proxmox-monitor/server/bin/server stop
Restart=always
RestartSec=5
User=root
[Install]
WantedBy=multi-user.target
```
```bash
systemctl daemon-reload
systemctl enable --now proxmox-monitor
journalctl -u proxmox-monitor -f # verify it listens on 4000
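# Optional: confirm the release is bound on the loopback port Caddy proxies to.
# (ss ships with iproute2, which the standard Debian template includes.)
ss -tlnp | grep ':4000'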
```
## 6. Caddy
```bash
# The template lives in the repo (server/docs/Caddyfile.example), not in the
# release tarball, so copy it over from the operator workstation first:
#   scp server/docs/Caddyfile.example root@<LXC-IP>:/etc/caddy/Caddyfile
# Then, inside the LXC, edit monitor.example.com to match your real DNS.
nano /etc/caddy/Caddyfile
systemctl reload caddy
```
(If Caddy isn't the one in this LXC, copy the template to wherever Caddy lives.)
## 7. Create the first host
```bash
/opt/proxmox-monitor/server/bin/server rpc 'Server.Release.register_host("pve-host-01")'
```
Copy the printed TOKEN — you'll paste it into the agent config.
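One caveat on bin/server rpc: it attaches to the running node over Erlang distribution, so both sides must present the same RELEASE_COOKIE. With the per-boot random cookie from Task 4, Step 3, that match never happens by accident, so pin a cookie first; a sketch, generating the value on the spot:
```bash
# Pin a stable cookie in the env file, restart so the running node picks it up,
# then export the same value in the shell that runs rpc.
echo "RELEASE_COOKIE=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')" >> /etc/default/proxmox-monitor
systemctl restart proxmox-monitor
export RELEASE_COOKIE="$(grep '^RELEASE_COOKIE=' /etc/default/proxmox-monitor | cut -d= -f2-)"
/opt/proxmox-monitor/server/bin/server rpc 'Server.Release.register_host("pve-host-01")'
```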
## 8. Upgrade flow
```bash
# operator
cd server && MIX_ENV=prod mix release --overwrite
tar -czf server_release.tgz -C _build/prod/rel server
scp server_release.tgz root@<LXC>:/tmp/server_release.tgz
# LXC
systemctl stop proxmox-monitor
tar -xzf /tmp/server_release.tgz -C /opt/proxmox-monitor --overwrite
systemctl start proxmox-monitor # ExecStartPre runs migrate automatically
```
- Step 3: Agent install
Create agent/docs/install.md:
# Agent Install (per Proxmox host)
## Prerequisites on the Proxmox host
- Proxmox VE 8.3+ (OpenZFS 2.3+ for the `-j` flags on `zpool`/`zfs`)
- Root SSH access
- Outbound HTTPS to the monitor server
No Erlang or Elixir needed — the Burrito binary ships its own runtime.
## 1. Build the binary (operator workstation)
```bash
cd proxmox_monitor/agent
./scripts/build-linux.sh # requires Docker
ls dist/
# proxmox-monitor-agent_linux_amd64
# proxmox-monitor-agent_linux_arm64
```
## 2. Register the host in the dashboard
From the dashboard at `https://monitor.example.com/admin/hosts`:
1. "Register a new host" → enter the short name (e.g. `pve-host-01`).
2. Copy the one-time token shown.
## 3. Copy files to the Proxmox host
```bash
HOST=pve-host-01
scp dist/proxmox-monitor-agent_linux_amd64 \
root@$HOST:/usr/local/bin/proxmox-monitor-agent
ssh root@$HOST 'chmod 0755 /usr/local/bin/proxmox-monitor-agent'
# systemd unit (included in the repo)
scp rel/proxmox-monitor-agent.service \
root@$HOST:/etc/systemd/system/
```
## 4. Write the config
On the Proxmox host:
```bash
install -d -m 0700 /etc/proxmox-monitor
cat > /etc/proxmox-monitor/agent.toml <<EOF
server_url = "wss://monitor.example.com/socket/websocket"
token = "<paste-token-from-dashboard>"
host_id = "pve-host-01"
[intervals]
fast_seconds = 30
medium_seconds = 300
slow_seconds = 1800
EOF
chmod 0600 /etc/proxmox-monitor/agent.toml
```
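Optionally, verify outbound reachability before enabling the service; most "agent never connects" cases are DNS or TLS failures that show up here immediately (assumes curl is present on the host):
```bash
# Expect an HTTP status line (even a 404) back from Caddy on the monitor server.
curl -sSI https://monitor.example.com | head -1
```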
## 5. Enable the service
```bash
install -d -m 0700 /var/cache/proxmox-monitor-agent
systemctl daemon-reload
systemctl enable --now proxmox-monitor-agent
journalctl -u proxmox-monitor-agent -f
```
Expected within ~10s:
```
agent: starting with host_id=pve-host-01
reporter: connected, joining host:pve-host-01
reporter: joined host:pve-host-01
```
The host's card on the dashboard should flip to `online`.
## 6. Token rotation
If a token leaks: dashboard → Admin → "Rotate". Copy the new token, update
`/etc/proxmox-monitor/agent.toml` on the affected host, `systemctl restart
proxmox-monitor-agent`. Old token is invalidated immediately.
## 7. Upgrade flow
```bash
# operator
./scripts/build-linux.sh
scp dist/proxmox-monitor-agent_linux_amd64 root@$HOST:/usr/local/bin/proxmox-monitor-agent.new
# on the host
mv /usr/local/bin/proxmox-monitor-agent{.new,}
systemctl restart proxmox-monitor-agent
```
## Troubleshooting
| Symptom | Check |
|------------------------------------------|-----------------------------------------------------------------|
| `enoent` errors for `zpool`/`pvesh` | You're not on a Proxmox host, or binaries aren't in `$PATH`. |
| `handshake_failed: :nxdomain` | DNS for the monitor hostname fails from this host. |
| `unknown_host` rejection on join | Host name in `agent.toml` doesn't match the dashboard entry. |
| `invalid_token` rejection | Token was rotated; paste the new one. |
| Agent reconnects every 30s | Server's WebSocket timeout hit — check Caddy `read_timeout 90s`.|
- Step 4: Commit
cd /Users/cabele/claudeprojects/proxmox_monitor
git add docs/deployment-overview.md server/docs/deploy-lxc.md agent/docs/install.md
git commit -m "docs: deployment overview + LXC server deploy + per-host agent install"
Task 7: Build Verification
This task runs the actual build tooling to prove the artifacts really build.
- Step 1: Check Docker is available for the agent build
docker --version 2>&1 | head -1
- If Docker is present: proceed to Step 2.
- If Docker is absent or non-functional: skip Step 2 but verify Burrito is wired in by compiling:
cd /Users/cabele/claudeprojects/proxmox_monitor/agent
mix compile --warnings-as-errors 2>&1 | tail -3
Expected: clean compile. Real Linux binaries will be built on a host that has Docker.
- Step 2 (only if Docker available): Produce Linux agent binaries
cd /Users/cabele/claudeprojects/proxmox_monitor/agent
./scripts/build-linux.sh
ls -la dist/
Expected: dist/proxmox-monitor-agent_linux_amd64 and ..._linux_arm64, each an ELF executable, roughly 30-60 MB.
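To confirm the artifacts really are Linux executables for both architectures (exact wording of the output varies with the local file(1) version):
```bash
file dist/proxmox-monitor-agent_linux_amd64 dist/proxmox-monitor-agent_linux_arm64
# expect: "ELF 64-bit LSB executable, x86-64, ..." and "... ARM aarch64, ..."
```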
- Step 3: Build the server release
cd /Users/cabele/claudeprojects/proxmox_monitor/server
MIX_ENV=prod DASHBOARD_PASSWORD_HASH='placeholder' mix release --overwrite 2>&1 | tail -10
ls _build/prod/rel/server/bin/
Expected: _build/prod/rel/server/bin/server exists, plus migrate (if generator created one).
- Step 4: Release migration smoke test
TMPDB=/tmp/phase4_migrate.db
rm -f "$TMPDB"
DATABASE_PATH="$TMPDB" \
PHX_SERVER=false \
SECRET_KEY_BASE="$(cd /Users/cabele/claudeprojects/proxmox_monitor/server && mix phx.gen.secret 2>&1 | tail -1)" \
DASHBOARD_PASSWORD_HASH='placeholder' \
/Users/cabele/claudeprojects/proxmox_monitor/server/_build/prod/rel/server/bin/server eval 'Server.Release.migrate()'
Expected: output shows migrations applied. $TMPDB now contains a hosts and metrics table.
sqlite3 /tmp/phase4_migrate.db '.schema' | head -20
rm -f /tmp/phase4_migrate.db
Expected: CREATE TABLE statements for hosts and metrics.
- Step 5: Rollup
No commit — this task is pure verification.
Phase 4 Exit Criteria
- agent/mix.exs defines Burrito release targets and `mix compile` stays clean.
- agent/Dockerfile.build + scripts/build-linux.sh reproducibly produce Linux binaries (verified when Docker is available).
- server/_build/prod/rel/server/bin/server exists after `mix release`, and `server eval 'Server.Release.migrate()'` creates a schema in a fresh SQLite file.
- systemd units for both agent and server live in the repo.
- Caddyfile.example covers TLS + WSS reverse-proxy.
- docs/deployment-overview.md, server/docs/deploy-lxc.md, and agent/docs/install.md walk from zero to "agent reports metrics to dashboard" in a sequence an engineer can follow without asking questions.
Deferred (not in MVP):
- CI pipeline to build and publish artifacts (manual scp for now — fine at N≤20 hosts per the concept).
- Agent self-update (concept calls this out as YAGNI).
- Docker image for the server (we use bare LXC + release, which has lower overhead).