Compare commits

..

40 Commits

Author SHA1 Message Date
Chris
629f48e670 Update ct/seerr.sh 2026-02-15 09:14:19 -05:00
Chris
a0bb6f072b Rename seer-install.sh to seerr-install.sh 2026-02-15 09:07:34 -05:00
Chris
4d9354c911 Rename seer.sh to seerr.sh 2026-02-15 09:07:04 -05:00
Tobias
72fc2d9d36 Delete frontend/public/json/overseerr.json 2026-02-15 13:10:18 +01:00
Tobias
e56acaedef Delete install/overseerr-install.sh 2026-02-15 13:10:09 +01:00
Tobias
303592834a Delete ct/headers/overseerr 2026-02-15 13:09:56 +01:00
Tobias
015688e394 Update jellyseerr.sh 2026-02-15 13:06:57 +01:00
Tobias
03c53252f6 Update jellyseerr.sh 2026-02-15 12:36:09 +01:00
Tobias
d40a0829f5 Delete ct/headers/jellyseerr 2026-02-15 12:29:27 +01:00
Tobias
21b98df1fd Delete frontend/public/json/jellyseerr.json 2026-02-15 12:29:16 +01:00
Tobias
6aee01f1eb Delete install/jellyseerr-install.sh 2026-02-15 12:29:04 +01:00
Tobias
51249d5987 Update overseerr.sh 2026-02-15 12:28:40 +01:00
Tobias
166b3afa77 Create seerr.json 2026-02-15 12:19:33 +01:00
Tobias
55f6049cd1 Create seer-install.sh 2026-02-15 12:17:33 +01:00
Tobias
dcadd3683b Create seer.sh 2026-02-15 12:16:58 +01:00
community-scripts-pr-app[bot]
da915b87f6 chore: update github-versions.json (#11917)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 18:07:36 +00:00
community-scripts-pr-app[bot]
b46dddbd7d Update CHANGELOG.md (#11916)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 15:43:15 +00:00
community-scripts-pr-app[bot]
b3e3ed5fb3 Update CHANGELOG.md (#11915)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 15:42:52 +00:00
failure101
7b767ff58b Feature/lxc update patchmon aware (#11905)
* update-lxcs.sh is now patchmon-agent aware: if the patchmon agent is detected inside the LXC, it is called after updating the container

It is called with the argument "report" to relay the current update status back to the patchmon system.

* Added status message if patchmon agent is found

* whitespace added

* Update tools/pve/update-lxcs.sh

Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>

* Update tools/pve/update-lxcs.sh

Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>

* removed os differentiation, removed use of explicit shell during call of patchmon-agent

* removed os differentiation
removed use of explicit shell during call of patchmon-agent
fixed whitespace

* Delete .git-setup-info

not needed

---------

Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-14 16:42:29 +01:00
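A minimal sketch of the behavior this commit describes (the agent's binary path is an assumption; only the detection-then-report flow and the "report" argument come from the commit message):

```bash
#!/usr/bin/env bash
# Hypothetical helper illustrating the patchmon-aware step in update-lxcs.sh:
# after a container has been updated, check whether a patchmon agent exists
# inside it and, if so, let it report the new update state.
report_to_patchmon() {
  local ctid="$1"
  # Binary path is an assumption; the real script only needs the agent to be detectable.
  if pct exec "$ctid" -- test -x /usr/local/bin/patchmon-agent >/dev/null 2>&1; then
    echo "patchmon agent detected in CT ${ctid}, relaying update status"
    pct exec "$ctid" -- /usr/local/bin/patchmon-agent report || true
  fi
}

# Example: report_to_patchmon 101
```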
community-scripts-pr-app[bot]
79baf4360e Update CHANGELOG.md (#11914)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 15:37:34 +00:00
Tobias
773f3f67b8 core: overwriteable app version (#11753)
* refactor: homepage

* add: overwriteable app version var

* fix wrong commit

* fix: wrong commit

* fix: wrong commit

* Remove APPLICATION_VERSION export

* Change APPLICATION_VERSION to var_appversion
2026-02-14 16:37:13 +01:00
CanbiZ (MickLesk)
9a95d81f17 Strip ANSI and control chars in json_escape & logs
Enhance json_escape to remove ANSI escape sequences (color codes) and other non-printable control characters before escaping backslashes, quotes, newlines, tabs and carriage returns. Also update get_error_text to strip ANSI sequences from tailed logfile output. These changes ensure safe JSON embedding of strings and prevent control characters / terminal color codes from leaking into logs or JSON payloads.
2026-02-14 16:13:05 +01:00
community-scripts-pr-app[bot]
ed9a6d9d4b Update CHANGELOG.md (#11911)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 14:29:19 +00:00
community-scripts-pr-app[bot]
c6005af29d Update CHANGELOG.md (#11910)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 14:28:56 +00:00
CanbiZ (MickLesk)
911f533e6a fix(cluster): validate container IDs cluster-wide across all nodes (#11906)
- Query /cluster/resources via pvesh to check all VMs/CTs on ALL nodes
- Check /etc/pve/nodes/*/qemu-server and /etc/pve/nodes/*/lxc dirs
- Handles pmxcfs sync delays that caused sporadic ID conflicts
- Remove duplicate validate_container_id/get_valid_container_id definitions
- Add max_attempts safeguard to prevent infinite loops
2026-02-14 15:28:47 +01:00
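A rough sketch of the cluster-wide check described above (not the exact helper from build.func; the jq filter and function name are illustrative):

```bash
# Returns 0 if the ID is already used by any VM/CT on any cluster node.
ct_id_in_use() {
  local id="$1"
  # 1) Ask the cluster-wide resource list (covers all nodes, not just the local one).
  if pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
    jq -e --argjson id "$id" 'any(.[]; .vmid == $id)' >/dev/null; then
    return 0
  fi
  # 2) Also check the pmxcfs config dirs of every node, which can lag behind.
  compgen -G "/etc/pve/nodes/*/qemu-server/${id}.conf" >/dev/null ||
    compgen -G "/etc/pve/nodes/*/lxc/${id}.conf" >/dev/null
}
```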
CanbiZ (MickLesk)
cecadf5681 core: improve error reporting with structured error strings and better categorization + output formatting (#11907)
* fix(telemetry): improve error reporting with structured error strings and better categorization

- Add build_error_string() that creates structured format:
  'exit_code=N | description\n---\n<last 20 log lines>'
- Fix categorize_error() to map ALL known exit codes:
  - Added: shell(1,2), proxmox(200-231), service(150-154),
    database(170-193), runtime(243-249), signal(139,141,143)
  - Split timeout from network (28 was in both)
  - Added DPKG(255) to dependency category
- Update all API functions to use build_error_string():
  post_update_to_api, post_update_to_api_extended,
  post_tool_to_api, post_addon_to_api
- Add ensure_log_on_host() calls to on_exit, on_interrupt,
  on_terminate handlers to prevent race condition where
  telemetry reports before container log is pulled to host

* fix(ui): improve error output formatting and remove redundant log paths

- error_handler: Use msg_info/msg_ok/msg_warn for container cleanup
  instead of raw echo with manual ANSI codes
- error_handler: Add icon before 'Remove broken container?' prompt
- error_handler: Indent log output with TAB for visual consistency
- build.func: Use msg_custom for installation log path display
- build.func: Use msg_info → msg_ok for container removal flow
- build.func: Use msg_warn for 'kept for debugging' message
- core.func/vm-core.func: Remove redundant container-internal log
  path display (📋 View full log) since combined log on host is
  the canonical location shown after failure
2026-02-14 15:28:30 +01:00
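For reference, the structured error string can be sketched roughly like this (the format is taken from the commit message; the function body and name are illustrative only):

```bash
# Builds 'exit_code=N | description' followed by a separator and the last
# 20 lines of the relevant log, so the API receives one self-contained string.
build_error_string_sketch() {
  local exit_code="$1" description="$2" logfile="$3"
  printf 'exit_code=%s | %s\n---\n%s' \
    "$exit_code" "$description" \
    "$(tail -n 20 "$logfile" 2>/dev/null)"
}

# Example: build_error_string_sketch 127 "command not found" /tmp/myapp-101-abc.log
```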
CanbiZ (MickLesk)
f762155870 feat(core): merge ProxmoxVED core.func with prompt utilities and unattended mode 2026-02-14 13:48:06 +01:00
community-scripts-pr-app[bot]
867d02d969 Update CHANGELOG.md (#11903)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 12:41:34 +00:00
CanbiZ (MickLesk)
1a9bbdd6e7 core: unified logging system with combined logs (#11761)
* refactor(logging): unified logging system with combined logs

- Add log_msg(), log_section(), strip_ansi() helper functions to core.func
- Extend msg_ok, msg_error, msg_warn, msg_info, msg_custom to write to log file
- Log container settings (default and advanced) to log file
- Combine host creation log and container installation log on failure
- Use app-specific log path: /tmp/{app}-{ctid}-{session}.log
- Add timestamps and section headers in log files
- Improve get_error_text() with combined log fallback chain
- Add ensure_log_on_host() for trap handlers to pull logs before API reporting

* linting
2026-02-14 13:41:09 +01:00
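The pattern behind this refactor, as a condensed sketch (helper names follow the commit message; the bodies and variables such as SESSION_ID are assumptions):

```bash
strip_ansi() { sed 's/\x1b\[[0-9;]*[a-zA-Z]//g'; }

log_msg() {
  local level="$1" text="$2"
  # App-specific combined log: /tmp/{app}-{ctid}-{session}.log (per the commit message)
  local logfile="/tmp/${APP,,}-${CTID}-${SESSION_ID}.log"
  printf '%s [%s] %s\n' "$(date '+%F %T')" "$level" \
    "$(printf '%s' "$text" | strip_ansi)" >>"$logfile"
}

# Each msg_* helper keeps its console output and additionally appends to the log.
msg_ok()    { echo -e " ✔ $1"; log_msg "OK" "$1"; }
msg_error() { echo -e " ✖ $1"; log_msg "ERROR" "$1"; }
```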
community-scripts-pr-app[bot]
3e0db150bd chore: update github-versions.json (#11902)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 12:08:49 +00:00
community-scripts-pr-app[bot]
6b3653627c Update CHANGELOG.md (#11899)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 10:39:34 +00:00
Copilot
733ad75dc1 Disable UniFi script - APT packages no longer available (#11898)
* Initial plan

* Add disabled flag and description to unifi.json

Co-authored-by: MickLesk <47820557+MickLesk@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: MickLesk <47820557+MickLesk@users.noreply.github.com>
2026-02-14 11:39:11 +01:00
community-scripts-pr-app[bot]
60aaaab3e7 chore: update github-versions.json (#11895)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 06:14:54 +00:00
community-scripts-pr-app[bot]
1f735cb31f Update CHANGELOG.md (#11893)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 00:22:13 +00:00
community-scripts-pr-app[bot]
adcbe8dae2 chore: update github-versions.json (#11892)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-14 00:21:47 +00:00
community-scripts-pr-app[bot]
bba520dbbf Update CHANGELOG.md (#11890)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 18:14:34 +00:00
community-scripts-pr-app[bot]
b43963d352 chore: update github-versions.json (#11889)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 18:14:07 +00:00
CanbiZ (MickLesk)
0957a23366 JSON-escape CPU and GPU model strings
Apply json_escape to GPU_MODEL and CPU_MODEL before assigning to gpu_model and cpu_model to ensure values are safe for inclusion in API JSON payloads. Updated in post_to_api, post_to_api_vm, and post_update_to_api; variable declarations were adjusted to call json_escape on the existing environment values (fallbacks unchanged). This prevents raw model strings from breaking the API payload.
2026-02-13 14:18:36 +01:00
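In other words, a small illustration of the change (fallback values and the payload shape here are placeholders):

```bash
# Escape the raw model strings before embedding them in the JSON payload.
cpu_model=$(json_escape "${CPU_MODEL:-unknown}")
gpu_model=$(json_escape "${GPU_MODEL:-none}")
payload=$(printf '{"cpu_model":"%s","gpu_model":"%s"}' "$cpu_model" "$gpu_model")
```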
community-scripts-pr-app[bot]
9f3588dd8d Update CHANGELOG.md (#11886)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 12:50:14 +00:00
CanbiZ (MickLesk)
f23414a1a8 Retry reporting with fallback payloads (#11885)
Enhance post_update_to_api to support a "force" mode and robust retry logic: add a 3rd-arg bypass to duplicate suppression, capture a short error summary, and perform up to three POST attempts (full payload, shortened error payload, minimal payload) with HTTP code checks and small backoffs. Mark POST_UPDATE_DONE on success (or after three attempts) to avoid infinite retries. Also invoke post_update_to_api with the "force" flag from cleanup paths in build.func and error_handler.func so a final status update is attempted after cleanup.
2026-02-13 13:49:49 +01:00
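A condensed sketch of the retry ladder this commit describes (endpoint variable, function name, and payload contents are placeholders):

```bash
post_update_with_fallback() {
  local full="$1" short="$2" minimal="$3"
  local payload http_code attempt=0
  # Try the full payload first, then a shortened error payload, then a minimal one.
  for payload in "$full" "$short" "$minimal"; do
    attempt=$((attempt + 1))
    http_code=$(curl -s -o /dev/null -w '%{http_code}' \
      -X POST -H 'Content-Type: application/json' \
      -d "$payload" "$API_URL")
    if [[ "$http_code" == 2* ]]; then
      POST_UPDATE_DONE=true
      return 0
    fi
    sleep "$attempt" # small backoff before the next, smaller payload
  done
  POST_UPDATE_DONE=true # give up after three attempts to avoid endless retries
  return 1
}
```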
25 changed files with 1766 additions and 832 deletions

View File

@@ -401,6 +401,30 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 </details>
+## 2026-02-14
+### 🚀 Updated Scripts
+- #### ✨ New Features
+- core: overwriteable app version [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11753](https://github.com/community-scripts/ProxmoxVE/pull/11753))
+### 💾 Core
+- #### ✨ New Features
+- core: validate container IDs cluster-wide across all nodes [@MickLesk](https://github.com/MickLesk) ([#11906](https://github.com/community-scripts/ProxmoxVE/pull/11906))
+- core: improve error reporting with structured error strings and better categorization + output formatting [@MickLesk](https://github.com/MickLesk) ([#11907](https://github.com/community-scripts/ProxmoxVE/pull/11907))
+- core: unified logging system with combined logs [@MickLesk](https://github.com/MickLesk) ([#11761](https://github.com/community-scripts/ProxmoxVE/pull/11761))
+### 🧰 Tools
+- lxc-updater: add patchmon aware [@failure101](https://github.com/failure101) ([#11905](https://github.com/community-scripts/ProxmoxVE/pull/11905))
+### ❔ Uncategorized
+- Disable UniFi script - APT packages no longer available [@Copilot](https://github.com/Copilot) ([#11898](https://github.com/community-scripts/ProxmoxVE/pull/11898))
 ## 2026-02-13
 ### 🚀 Updated Scripts
@@ -414,8 +438,14 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 - #### 🔧 Refactor
+- chore(donetick): add config entry for v0.1.73 [@tomfrenzel](https://github.com/tomfrenzel) ([#11872](https://github.com/community-scripts/ProxmoxVE/pull/11872))
 - Refactor: Radicale [@vhsdream](https://github.com/vhsdream) ([#11850](https://github.com/community-scripts/ProxmoxVE/pull/11850))
-- chore(donetick): add config entry for v0.1.73 [@tomfrenzel](https://github.com/tomfrenzel) ([#11872](https://github.com/community-scripts/ProxmoxVE/pull/11872))
+### 💾 Core
+- #### 🔧 Refactor
+- core: retry reporting with fallback payloads [@MickLesk](https://github.com/MickLesk) ([#11885](https://github.com/community-scripts/ProxmoxVE/pull/11885))
 ### 📡 API

View File

@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
-source <(curl -fsSL https://git.community-scripts.org/community-scripts/ProxmoxVE/raw/branch/main/misc/build.func)
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
+# Copyright (c) 2021-2026 tteck
-# Authors: MickLesk (CanbiZ) | Co-Author: remz1337
+# Authors: tteck (tteckster)
 # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
 # Source: https://frigate.video/
@@ -11,7 +11,7 @@ var_cpu="${var_cpu:-4}"
 var_ram="${var_ram:-4096}"
 var_disk="${var_disk:-20}"
 var_os="${var_os:-debian}"
-var_version="${var_version:-12}"
+var_version="${var_version:-11}"
 var_unprivileged="${var_unprivileged:-0}"
 var_gpu="${var_gpu:-yes}"
@@ -21,15 +21,15 @@ color
 catch_errors
 function update_script() {
 header_info
 check_container_storage
 check_container_resources
 if [[ ! -f /etc/systemd/system/frigate.service ]]; then
 msg_error "No ${APP} Installation Found!"
 exit
 fi
 msg_error "To update Frigate, create a new container and transfer your configuration."
 exit
 }
 start

View File

@@ -1,6 +0,0 @@
__ ____
/ /__ / / /_ __________ ___ __________
__ / / _ \/ / / / / / ___/ _ \/ _ \/ ___/ ___/
/ /_/ / __/ / / /_/ (__ ) __/ __/ / / /
\____/\___/_/_/\__, /____/\___/\___/_/ /_/
/____/

View File

@@ -1,6 +0,0 @@
____
/ __ \_ _____ _____________ ___ __________
/ / / / | / / _ \/ ___/ ___/ _ \/ _ \/ ___/ ___/
/ /_/ /| |/ / __/ / (__ ) __/ __/ / / /
\____/ |___/\___/_/ /____/\___/\___/_/ /_/

View File

@@ -29,47 +29,43 @@ function update_script() {
 exit
 fi
-if [ "$(node -v | cut -c2-3)" -ne 22 ]; then
+if [[ -f "/opt/jellyseerr/package.json" ]] && [[ "$(grep -m1 '"version"' /opt/jellyseerr/package.json | awk -F'"' '{print $4}')" == "2.7.3" ]]; then
-msg_info "Updating Node.js Repository"
+echo
-echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_22.x nodistro main" >/etc/apt/sources.list.d/nodesource.list
+echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
-msg_ok "Updating Node.js Repository"
+echo "Jellyseerr v2.7.3 detected."
+echo
+echo "Seerr is the new unified Jellyseerr and Overseerr."
+echo "More info: https://docs.seerr.dev/blog/seerr-release"
+echo
+read -rp "Do you want to migrate to Seerr now? (y/N): " MIGRATE
+echo
+if [[ ! "$MIGRATE" =~ ^[Yy]$ ]]; then
+msg_info "Migration cancelled. Exiting."
+exit 0
+fi
-msg_info "Updating Packages"
+msg_info "Switching update script to Seerr"
-$STD apt-get update
+sed -i 's|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/jellyseerr.sh|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/seerr.sh|g' /usr/bin/update
-$STD apt-get -y upgrade
+msg_ok "Switched update script to Seerr. Running update..."
-msg_ok "Updating Packages"
+exec /usr/bin/update
-fi
-cd /opt/jellyseerr
-output=$(git pull --no-rebase)
-pnpm_current=$(pnpm --version 2>/dev/null)
-pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)
-if [ -z "$pnpm_current" ]; then
-msg_error "pnpm not found. Installing version $pnpm_desired..."
-NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
-elif ! node -e "const semver = require('semver'); process.exit(semver.satisfies('$pnpm_current', '$pnpm_desired') ? 0 : 1)"; then
-msg_error "Updating pnpm from version $pnpm_current to $pnpm_desired..."
-NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
-else
-msg_ok "pnpm is already installed and satisfies version $pnpm_desired."
 fi
 msg_info "Updating Jellyseerr"
+cd /opt/jellyseerr
+systemctl stop jellyseerr
+output=$(git pull --no-rebase)
+pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)
+NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
 if echo "$output" | grep -q "Already up to date."; then
 msg_ok "$APP is already up to date."
 exit
 fi
-systemctl stop jellyseerr
 rm -rf dist .next node_modules
 export CYPRESS_INSTALL_BINARY=0
 cd /opt/jellyseerr
 $STD pnpm install --frozen-lockfile
 export NODE_OPTIONS="--max-old-space-size=3072"
 $STD pnpm build
 cat <<EOF >/etc/systemd/system/jellyseerr.service
 [Unit]
 Description=jellyseerr Service
@@ -85,7 +81,6 @@ ExecStart=/usr/bin/node dist/index.js
 [Install]
 WantedBy=multi-user.target
 EOF
 systemctl daemon-reload
 systemctl start jellyseerr
 msg_ok "Updated Jellyseerr"

View File

@@ -27,6 +27,28 @@ function update_script() {
msg_error "No ${APP} Installation Found!" msg_error "No ${APP} Installation Found!"
exit exit
fi fi
if [[ -f "$HOME/.overseerr" ]] && [[ "$(cat "$HOME/.overseerr")" == "1.34.0" ]]; then
echo
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Overseerr v1.34.0 detected."
echo
echo "Seerr is the new unified Jellyseerr and Overseerr."
echo "More info: https://docs.seerr.dev/blog/seerr-release"
echo
read -rp "Do you want to migrate to Seerr now? (y/N): " MIGRATE
echo
if [[ ! "$MIGRATE" =~ ^[Yy]$ ]]; then
msg_info "Migration cancelled. Exiting."
exit 0
fi
msg_info "Switching update script to Seerr"
sed -i 's|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/overseerr.sh|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/seerr.sh|g' /usr/bin/update
msg_ok "Switched update script to Seerr. Running update..."
exec /usr/bin/update
fi
if check_for_gh_release "overseerr" "sct/overseerr"; then if check_for_gh_release "overseerr" "sct/overseerr"; then
msg_info "Stopping Service" msg_info "Stopping Service"
systemctl stop overseerr systemctl stop overseerr

ct/seerr.sh (new file, 165 lines)
View File

@@ -0,0 +1,165 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: CrazyWolf13
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://docs.seerr.dev/
APP="Seerr"
var_tags="${var_tags:-media}"
var_cpu="${var_cpu:-4}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-12}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/seerr && ! -d /opt/jellyseerr && ! -d /opt/overseerr ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
# Start Migration from Jellyseerr
if [[ -f /etc/systemd/system/jellyseerr.service ]]; then
msg_info "Stopping Jellyseerr"
$STD systemctl stop jellyseerr || true
$STD systemctl disable jellyseerr || true
[ -f /etc/systemd/system/jellyseerr.service ] && rm -f /etc/systemd/system/jellyseerr.service
msg_ok "Stopped Jellyseerr"
msg_info "Creating Backup (Patience)"
tar -czf /opt/jellyseerr_backup_$(date +%Y%m%d_%H%M%S).tar.gz -C /opt jellyseerr
msg_ok "Created Backup"
msg_info "Migrating Jellyseerr to seerr"
[ -d /opt/jellyseerr ] && mv /opt/jellyseerr /opt/seerr
[ -d /etc/jellyseerr ] && mv /etc/jellyseerr /etc/seerr
[ -f /etc/seerr/jellyseerr.conf ] && mv /etc/seerr/jellyseerr.conf /etc/seerr/seerr.conf
cat <<EOF >/etc/systemd/system/seerr.service
[Unit]
Description=Seerr Service
Wants=network-online.target
After=network-online.target
[Service]
EnvironmentFile=/etc/seerr/seerr.conf
Environment=NODE_ENV=production
Type=exec
Restart=on-failure
WorkingDirectory=/opt/seerr
ExecStart=/usr/bin/node dist/index.js
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable -q --now seerr
msg_ok "Migrated Jellyserr to Seerr"
fi
# END Jellyseerr Migration
# Start Migration from Overseerr
if [[ -f /etc/systemd/system/overseerr.service ]]; then
msg_info "Stopping Overseerr"
$STD systemctl stop overseerr || true
$STD systemctl disable overseerr || true
[ -f /etc/systemd/system/overseerr.service ] && rm -f /etc/systemd/system/overseerr.service
msg_ok "Stopped Overseerr"
msg_info "Creating Backup (Patience)"
tar -czf /opt/overseerr_backup_$(date +%Y%m%d_%H%M%S).tar.gz -C /opt overseerr
msg_ok "Created Backup"
msg_info "Migrating Overseerr to seerr"
[ -d /opt/overseerr ] && mv /opt/overseerr /opt/seerr
mkdir -p /etc/seerr
cat <<EOF >/etc/seerr/seerr.conf
## Seerr's default port is 5055, if you want to use both, change this.
## specify on which port to listen
PORT=5055
## specify on which interface to listen, by default seerr listens on all interfaces
#HOST=127.0.0.1
## Uncomment if you want to force Node.js to resolve IPv4 before IPv6 (advanced users only)
# FORCE_IPV4_FIRST=true
EOF
cat <<EOF >/etc/systemd/system/seerr.service
[Unit]
Description=Seerr Service
Wants=network-online.target
After=network-online.target
[Service]
EnvironmentFile=/etc/seerr/seerr.conf
Environment=NODE_ENV=production
Type=exec
Restart=on-failure
WorkingDirectory=/opt/seerr
ExecStart=/usr/bin/node dist/index.js
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable -q --now seerr
msg_ok "Migrated Overseerr to Seerr"
fi
# END Overseerr Migration
if check_for_gh_release "seerr" "seerr-team/seerr"; then
msg_info "Stopping Service"
systemctl stop seerr
msg_ok "Stopped Service"
msg_info "Creating Backup"
cp -a /opt/seerr/config /opt/seerr_backup
msg_ok "Created Backup"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "seerr" "seerr-team/seerr" "tarball"
msg_info "Updating PNPM Version"
pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/seerr/package.json)
NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
msg_ok "Updated PNPM Version"
msg_info "Updating Seerr"
cd /opt/seerr
rm -rf dist .next node_modules
export CYPRESS_INSTALL_BINARY=0
$STD pnpm install --frozen-lockfile
export NODE_OPTIONS="--max-old-space-size=3072"
$STD pnpm build
msg_ok "Updated Seerr"
msg_info "Restoring Backup"
rm -rf /opt/seerr/config
mv /opt/seerr_backup /opt/seerr/config
msg_ok "Restored Backup"
msg_info "Starting Service"
systemctl start seerr
msg_ok "Started Service"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:5055${CL}"

View File

@@ -1,44 +0,0 @@
{
"name": "Frigate",
"slug": "frigate",
"categories": [
15
],
"date_created": "2026-02-13",
"type": "ct",
"updateable": false,
"privileged": false,
"config_path": "/config/config.yml",
"interface_port": 5000,
"documentation": "https://frigate.io/",
"website": "https://frigate.io/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/frigate-light.webp",
"description": "Frigate is a complete and local NVR (Network Video Recorder) with realtime AI object detection for CCTV cameras.",
"install_methods": [
{
"type": "default",
"script": "ct/frigate.sh",
"resources": {
"cpu": 4,
"ram": 4096,
"hdd": 20,
"os": "Debian",
"version": "12"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "SemanticSearch is not pre-installed due to high resource requirements (8+ cores, 16-24GB RAM, GPU recommended). Manual configuration required if needed.",
"type": "info"
},
{
"text": "OpenVino detector may fail on older CPUs (pre-Haswell/AVX2). If you encounter 'Illegal instruction' errors, consider using alternative detectors.",
"type": "warning"
}
]
}

View File

@@ -0,0 +1,44 @@
{
"name": "Frigate",
"slug": "frigate",
"categories": [
15
],
"date_created": "2024-05-02",
"type": "ct",
"updateable": false,
"privileged": true,
"interface_port": 5000,
"documentation": "https://docs.frigate.video/",
"website": "https://frigate.video/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/frigate.webp",
"config_path": "",
"description": "Frigate is an open source NVR built around real-time AI object detection. All processing is performed locally on your own hardware, and your camera feeds never leave your home.",
"install_methods": [
{
"type": "default",
"script": "ct/frigate.sh",
"resources": {
"cpu": 4,
"ram": 4096,
"hdd": 20,
"os": "debian",
"version": "11"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "Discussions (explore more advanced methods): `https://github.com/tteck/Proxmox/discussions/2711`",
"type": "info"
},
{
"text": "go2rtc Interface port:`1984`",
"type": "info"
}
]
}

View File

@@ -1,5 +1,5 @@
{ {
"generated": "2026-02-13T12:11:36Z", "generated": "2026-02-14T18:07:29Z",
"versions": [ "versions": [
{ {
"slug": "2fauth", "slug": "2fauth",
@@ -67,9 +67,9 @@
{ {
"slug": "autobrr", "slug": "autobrr",
"repo": "autobrr/autobrr", "repo": "autobrr/autobrr",
"version": "v1.72.1", "version": "v1.73.0",
"pinned": false, "pinned": false,
"date": "2026-01-30T12:57:58Z" "date": "2026-02-13T16:37:28Z"
}, },
{ {
"slug": "autocaliweb", "slug": "autocaliweb",
@@ -116,9 +116,9 @@
{ {
"slug": "bentopdf", "slug": "bentopdf",
"repo": "alam00000/bentopdf", "repo": "alam00000/bentopdf",
"version": "v2.2.0", "version": "v2.2.1",
"pinned": false, "pinned": false,
"date": "2026-02-09T07:07:40Z" "date": "2026-02-14T16:33:47Z"
}, },
{ {
"slug": "beszel", "slug": "beszel",
@@ -200,9 +200,9 @@
{ {
"slug": "cloudreve", "slug": "cloudreve",
"repo": "cloudreve/cloudreve", "repo": "cloudreve/cloudreve",
"version": "4.13.0", "version": "4.14.0",
"pinned": false, "pinned": false,
"date": "2026-02-05T12:53:24Z" "date": "2026-02-14T06:05:06Z"
}, },
{ {
"slug": "comfyui", "slug": "comfyui",
@@ -284,9 +284,9 @@
{ {
"slug": "domain-locker", "slug": "domain-locker",
"repo": "Lissy93/domain-locker", "repo": "Lissy93/domain-locker",
"version": "v0.1.3", "version": "v0.1.4",
"pinned": false, "pinned": false,
"date": "2026-02-11T10:03:32Z" "date": "2026-02-14T07:41:29Z"
}, },
{ {
"slug": "domain-monitor", "slug": "domain-monitor",
@@ -354,9 +354,9 @@
{ {
"slug": "firefly", "slug": "firefly",
"repo": "firefly-iii/firefly-iii", "repo": "firefly-iii/firefly-iii",
"version": "v6.4.18", "version": "v6.4.20",
"pinned": false, "pinned": false,
"date": "2026-02-08T07:28:00Z" "date": "2026-02-14T12:39:02Z"
}, },
{ {
"slug": "fladder", "slug": "fladder",
@@ -445,9 +445,9 @@
{ {
"slug": "gotify", "slug": "gotify",
"repo": "gotify/server", "repo": "gotify/server",
"version": "v2.8.0", "version": "v2.9.0",
"pinned": false, "pinned": false,
"date": "2026-01-02T11:56:16Z" "date": "2026-02-13T15:22:31Z"
}, },
{ {
"slug": "grist", "slug": "grist",
@@ -508,9 +508,9 @@
{ {
"slug": "homarr", "slug": "homarr",
"repo": "homarr-labs/homarr", "repo": "homarr-labs/homarr",
"version": "v1.53.0", "version": "v1.53.1",
"pinned": false, "pinned": false,
"date": "2026-02-06T19:42:58Z" "date": "2026-02-13T19:47:11Z"
}, },
{ {
"slug": "homebox", "slug": "homebox",
@@ -578,9 +578,9 @@
{ {
"slug": "jackett", "slug": "jackett",
"repo": "Jackett/Jackett", "repo": "Jackett/Jackett",
"version": "v0.24.1103", "version": "v0.24.1113",
"pinned": false, "pinned": false,
"date": "2026-02-13T05:53:23Z" "date": "2026-02-14T17:46:58Z"
}, },
{ {
"slug": "jellystat", "slug": "jellystat",
@@ -823,16 +823,16 @@
{ {
"slug": "metube", "slug": "metube",
"repo": "alexta69/metube", "repo": "alexta69/metube",
"version": "2026.02.12", "version": "2026.02.14",
"pinned": false, "pinned": false,
"date": "2026-02-12T21:05:49Z" "date": "2026-02-14T07:49:11Z"
}, },
{ {
"slug": "miniflux", "slug": "miniflux",
"repo": "miniflux/v2", "repo": "miniflux/v2",
"version": "2.2.16", "version": "2.2.17",
"pinned": false, "pinned": false,
"date": "2026-01-07T03:26:27Z" "date": "2026-02-13T20:30:17Z"
}, },
{ {
"slug": "monica", "slug": "monica",
@@ -1000,7 +1000,7 @@
"repo": "fosrl/pangolin", "repo": "fosrl/pangolin",
"version": "1.15.4", "version": "1.15.4",
"pinned": false, "pinned": false,
"date": "2026-02-13T00:54:02Z" "date": "2026-02-13T23:01:29Z"
}, },
{ {
"slug": "paperless-ai", "slug": "paperless-ai",
@@ -1096,9 +1096,9 @@
{ {
"slug": "pocketbase", "slug": "pocketbase",
"repo": "pocketbase/pocketbase", "repo": "pocketbase/pocketbase",
"version": "v0.36.2", "version": "v0.36.3",
"pinned": false, "pinned": false,
"date": "2026-02-01T08:12:42Z" "date": "2026-02-13T18:38:58Z"
}, },
{ {
"slug": "pocketid", "slug": "pocketid",
@@ -1313,9 +1313,9 @@
{ {
"slug": "semaphore", "slug": "semaphore",
"repo": "semaphoreui/semaphore", "repo": "semaphoreui/semaphore",
"version": "v2.16.51", "version": "v2.17.0",
"pinned": false, "pinned": false,
"date": "2026-01-12T16:26:38Z" "date": "2026-02-13T21:08:30Z"
}, },
{ {
"slug": "shelfmark", "slug": "shelfmark",
@@ -1411,9 +1411,9 @@
{ {
"slug": "tandoor", "slug": "tandoor",
"repo": "TandoorRecipes/recipes", "repo": "TandoorRecipes/recipes",
"version": "2.5.0", "version": "2.5.3",
"pinned": false, "pinned": false,
"date": "2026-02-08T13:23:02Z" "date": "2026-02-14T12:42:14Z"
}, },
{ {
"slug": "tasmoadmin", "slug": "tasmoadmin",
@@ -1467,9 +1467,9 @@
{ {
"slug": "tianji", "slug": "tianji",
"repo": "msgbyte/tianji", "repo": "msgbyte/tianji",
"version": "v1.31.12", "version": "v1.31.13",
"pinned": false, "pinned": false,
"date": "2026-02-12T19:06:14Z" "date": "2026-02-13T16:30:09Z"
}, },
{ {
"slug": "traccar", "slug": "traccar",
@@ -1516,9 +1516,9 @@
{ {
"slug": "tududi", "slug": "tududi",
"repo": "chrisvel/tududi", "repo": "chrisvel/tududi",
"version": "v0.88.4", "version": "v0.88.5",
"pinned": false, "pinned": false,
"date": "2026-01-20T15:11:58Z" "date": "2026-02-13T13:54:14Z"
}, },
{ {
"slug": "tunarr", "slug": "tunarr",
@@ -1565,9 +1565,9 @@
{ {
"slug": "uptimekuma", "slug": "uptimekuma",
"repo": "louislam/uptime-kuma", "repo": "louislam/uptime-kuma",
"version": "2.1.0", "version": "2.1.1",
"pinned": false, "pinned": false,
"date": "2026-02-07T02:31:49Z" "date": "2026-02-13T16:07:33Z"
}, },
{ {
"slug": "vaultwarden", "slug": "vaultwarden",

View File

@@ -1,35 +0,0 @@
{
"name": "Jellyseerr",
"slug": "jellyseerr",
"categories": [
14
],
"date_created": "2024-05-02",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 5055,
"documentation": "https://docs.jellyseerr.dev/",
"website": "https://github.com/Fallenbagel/jellyseerr",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/jellyseerr.webp",
"config_path": "/etc/jellyseerr/jellyseerr.conf",
"description": "Jellyseerr is a free and open source software application for managing requests for your media library. It is a a fork of Overseerr built to bring support for Jellyfin & Emby media servers.",
"install_methods": [
{
"type": "default",
"script": "ct/jellyseerr.sh",
"resources": {
"cpu": 4,
"ram": 4096,
"hdd": 8,
"os": "debian",
"version": "12"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}

View File

@@ -1,35 +0,0 @@
{
"name": "Overseerr",
"slug": "overseerr",
"categories": [
14
],
"date_created": "2024-05-02",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 5055,
"documentation": "https://docs.overseerr.dev/",
"website": "https://overseerr.dev/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/overseerr.webp",
"config_path": "/opt/overseerr/config/settings.json",
"description": "Overseerr is a request management and media discovery tool built to work with your existing Plex ecosystem.",
"install_methods": [
{
"type": "default",
"script": "ct/overseerr.sh",
"resources": {
"cpu": 2,
"ram": 4096,
"hdd": 8,
"os": "debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}

View File

@@ -0,0 +1,35 @@
{
"name": "Seerr",
"slug": "seerr",
"categories": [
13
],
"date_created": "2026-01-19",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 5055,
"documentation": "https://docs.seerr.dev/",
"website": "https://seerr.dev/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/seerr.webp",
"config_path": "/etc/seerr/seerr.conf",
"description": "Open-source media request and discovery manager for Jellyfin, Plex, and Emby. Unified version of Overseerr and Jellyseerr.",
"install_methods": [
{
"type": "default",
"script": "ct/seerr.sh",
"resources": {
"cpu": 4,
"ram": 4096,
"hdd": 12,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}

View File

@@ -14,6 +14,8 @@
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/ubiquiti-unifi.webp", "logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/ubiquiti-unifi.webp",
"config_path": "", "config_path": "",
"description": "UniFi Network Server is a software that helps manage and monitor UniFi networks (Wi-Fi, Ethernet, etc.) by providing an intuitive user interface and advanced features. It allows network administrators to configure, monitor, and upgrade network devices, as well as view network statistics, client devices, and historical events. The aim of the application is to make the management of UniFi networks easier and more efficient.", "description": "UniFi Network Server is a software that helps manage and monitor UniFi networks (Wi-Fi, Ethernet, etc.) by providing an intuitive user interface and advanced features. It allows network administrators to configure, monitor, and upgrade network devices, as well as view network statistics, client devices, and historical events. The aim of the application is to make the management of UniFi networks easier and more efficient.",
"disable": true,
"disable_description": "This script is disabled because UniFi no longer delivers APT packages for Debian systems. The installation relies on APT repositories that are no longer maintained or available. For more details, see: https://github.com/community-scripts/ProxmoxVE/issues/11876",
"install_methods": [ "install_methods": [
{ {
"type": "default", "type": "default",

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env bash #!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG # Copyright (c) 2021-2026 tteck
# Authors: MickLesk (CanbiZ) # Author: tteck (tteckster)
# Co-Authors: remz1337 # Co-Author: remz1337
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://frigate.video/ # Source: https://frigate.video/
@@ -14,221 +14,60 @@ setting_up_container
network_check network_check
update_os update_os
source /etc/os-release msg_info "Installing Dependencies (Patience)"
if [[ "$VERSION_ID" != "12" ]]; then $STD apt-get install -y {git,ca-certificates,automake,build-essential,xz-utils,libtool,ccache,pkg-config,libgtk-3-dev,libavcodec-dev,libavformat-dev,libswscale-dev,libv4l-dev,libxvidcore-dev,libx264-dev,libjpeg-dev,libpng-dev,libtiff-dev,gfortran,openexr,libatlas-base-dev,libssl-dev,libtbb2,libtbb-dev,libdc1394-22-dev,libopenexr-dev,libgstreamer-plugins-base1.0-dev,libgstreamer1.0-dev,gcc,gfortran,libopenblas-dev,liblapack-dev,libusb-1.0-0-dev,jq,moreutils}
msg_error "Frigate requires Debian 12 (Bookworm) due to Python 3.11 dependencies"
exit 1
fi
msg_info "Converting APT sources to DEB822 format"
if [ -f /etc/apt/sources.list ]; then
cat > /etc/apt/sources.list.d/debian.sources <<'EOF'
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm-updates
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://security.debian.org
Suites: bookworm-security
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
mv /etc/apt/sources.list /etc/apt/sources.list.bak
$STD apt update
fi
msg_ok "Converted APT sources"
msg_info "Installing Dependencies"
$STD apt install -y \
xz-utils \
python3 \
python3-dev \
python3-pip \
gcc \
pkg-config \
libhdf5-dev \
build-essential \
automake \
libtool \
ccache \
libusb-1.0-0-dev \
apt-transport-https \
cmake \
git \
libgtk-3-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
gfortran \
openexr \
libssl-dev \
libtbbmalloc2 \
libtbb-dev \
libdc1394-dev \
libopenexr-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer1.0-dev \
tclsh \
libopenblas-dev \
liblapack-dev \
make \
moreutils
msg_ok "Installed Dependencies" msg_ok "Installed Dependencies"
msg_info "Setup Python3"
$STD apt-get install -y {python3,python3-dev,python3-setuptools,python3-distutils,python3-pip}
$STD pip install --upgrade pip
msg_ok "Setup Python3"
NODE_VERSION="22" setup_nodejs
msg_info "Installing go2rtc"
mkdir -p /usr/local/go2rtc/bin
cd /usr/local/go2rtc/bin
curl -fsSL "https://github.com/AlexxIT/go2rtc/releases/latest/download/go2rtc_linux_amd64" -o "go2rtc"
chmod +x go2rtc
$STD ln -svf /usr/local/go2rtc/bin/go2rtc /usr/local/bin/go2rtc
msg_ok "Installed go2rtc"
setup_hwaccel setup_hwaccel
export TARGETARCH="amd64" msg_info "Installing Frigate v0.14.1 (Perseverance)"
export CCACHE_DIR=/root/.ccache cd ~
export CCACHE_MAXSIZE=2G mkdir -p /opt/frigate/models
export APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=DontWarn curl -fsSL "https://github.com/blakeblackshear/frigate/archive/refs/tags/v0.14.1.tar.gz" -o "frigate.tar.gz"
export PIP_BREAK_SYSTEM_PACKAGES=1 tar -xzf frigate.tar.gz -C /opt/frigate --strip-components 1
export NVIDIA_VISIBLE_DEVICES=all rm -rf frigate.tar.gz
export NVIDIA_DRIVER_CAPABILITIES="compute,video,utility" cd /opt/frigate
export TOKENIZERS_PARALLELISM=true $STD pip3 wheel --wheel-dir=/wheels -r /opt/frigate/docker/main/requirements-wheels.txt
export TRANSFORMERS_NO_ADVISORY_WARNINGS=1
export OPENCV_FFMPEG_LOGLEVEL=8
export HAILORT_LOGGER_PATH=NONE
fetch_and_deploy_gh_release "frigate" "blakeblackshear/frigate" "tarball" "v0.16.4" "/opt/frigate"
msg_info "Building Nginx"
$STD bash /opt/frigate/docker/main/build_nginx.sh
sed -e '/s6-notifyoncheck/ s/^#*/#/' -i /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run
ln -sf /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
msg_ok "Built Nginx"
msg_info "Building SQLite Extensions"
$STD bash /opt/frigate/docker/main/build_sqlite_vec.sh
msg_ok "Built SQLite Extensions"
fetch_and_deploy_gh_release "go2rtc" "AlexxIT/go2rtc" "singlefile" "latest" "/usr/local/go2rtc/bin" "go2rtc_linux_amd64"
msg_info "Installing Tempio"
sed -i 's|/rootfs/usr/local|/usr/local|g' /opt/frigate/docker/main/install_tempio.sh
$STD bash /opt/frigate/docker/main/install_tempio.sh
ln -sf /usr/local/tempio/bin/tempio /usr/local/bin/tempio
msg_ok "Installed Tempio"
msg_info "Building libUSB"
fetch_and_deploy_gh_release "libusb" "libusb/libusb" "tarball" "v1.0.26" "/opt/libusb"
cd /opt/libusb
$STD ./bootstrap.sh
$STD ./configure CC='ccache gcc' CCX='ccache g++' --disable-udev --enable-shared
$STD make -j "$(nproc)"
cd /opt/libusb/libusb
mkdir -p /usr/local/lib /usr/local/include/libusb-1.0 /usr/local/lib/pkgconfig
$STD bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la /usr/local/lib
install -c -m 644 libusb.h /usr/local/include/libusb-1.0
cd /opt/libusb/
install -c -m 644 libusb-1.0.pc /usr/local/lib/pkgconfig
ldconfig
msg_ok "Built libUSB"
msg_info "Installing Python Dependencies"
$STD pip3 install -r /opt/frigate/docker/main/requirements.txt
msg_ok "Installed Python Dependencies"
msg_info "Building Python Wheels (Patience)"
mkdir -p /wheels
sed -i 's|^SQLITE3_VERSION=.*|SQLITE3_VERSION="version-3.46.0"|g' /opt/frigate/docker/main/build_pysqlite3.sh
$STD bash /opt/frigate/docker/main/build_pysqlite3.sh
for i in {1..3}; do
$STD pip3 wheel --wheel-dir=/wheels -r /opt/frigate/docker/main/requirements-wheels.txt --default-timeout=300 --retries=3 && break
[[ $i -lt 3 ]] && sleep 10
done
msg_ok "Built Python Wheels"
NODE_VERSION="22" NODE_MODULE="yarn" setup_nodejs
msg_info "Downloading Inference Models"
mkdir -p /models /openvino-model
wget -q -O /edgetpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
wget -q -O /models/cpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite
cp /opt/frigate/labelmap.txt /labelmap.txt
msg_ok "Downloaded Inference Models"
msg_info "Downloading Audio Model"
wget -q -O /tmp/yamnet.tar.gz https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download
$STD tar xzf /tmp/yamnet.tar.gz -C /
mv /1.tflite /cpu_audio_model.tflite
cp /opt/frigate/audio-labelmap.txt /audio-labelmap.txt
rm -f /tmp/yamnet.tar.gz
msg_ok "Downloaded Audio Model"
msg_info "Installing HailoRT Runtime"
$STD bash /opt/frigate/docker/main/install_hailort.sh
cp -a /opt/frigate/docker/main/rootfs/. / cp -a /opt/frigate/docker/main/rootfs/. /
sed -i '/^.*unset DEBIAN_FRONTEND.*$/d' /opt/frigate/docker/main/install_deps.sh export TARGETARCH="amd64"
echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections echo 'libc6 libraries/restart-without-asking boolean true' | debconf-set-selections
echo "libedgetpu1-max libedgetpu/install-confirm-max boolean true" | debconf-set-selections $STD /opt/frigate/docker/main/install_deps.sh
$STD bash /opt/frigate/docker/main/install_deps.sh $STD apt update
$STD ln -svf /usr/lib/btbn-ffmpeg/bin/ffmpeg /usr/local/bin/ffmpeg
$STD ln -svf /usr/lib/btbn-ffmpeg/bin/ffprobe /usr/local/bin/ffprobe
$STD pip3 install -U /wheels/*.whl $STD pip3 install -U /wheels/*.whl
ldconfig ldconfig
msg_ok "Installed HailoRT Runtime"
msg_info "Installing OpenVino"
$STD pip3 install -r /opt/frigate/docker/main/requirements-ov.txt
msg_ok "Installed OpenVino"
msg_info "Building OpenVino Model"
cd /models
wget -q http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
$STD tar -zxf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz --no-same-owner
if python3 /opt/frigate/docker/main/build_ov_model.py 2>&1; then
cp /models/ssdlite_mobilenet_v2.xml /openvino-model/
cp /models/ssdlite_mobilenet_v2.bin /openvino-model/
wget -q https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt -O /openvino-model/coco_91cl_bkgr.txt
sed -i 's/truck/car/g' /openvino-model/coco_91cl_bkgr.txt
msg_ok "Built OpenVino Model"
else
msg_warn "OpenVino build failed (CPU may not support required instructions). Frigate will use CPU model."
fi
msg_info "Building Frigate Application (Patience)"
cd /opt/frigate
$STD pip3 install -r /opt/frigate/docker/main/requirements-dev.txt $STD pip3 install -r /opt/frigate/docker/main/requirements-dev.txt
$STD bash /opt/frigate/.devcontainer/initialize.sh $STD /opt/frigate/.devcontainer/initialize.sh
$STD make version $STD make version
cd /opt/frigate/web cd /opt/frigate/web
$STD npm install $STD npm install
$STD npm run build $STD npm run build
cp -r /opt/frigate/web/dist/* /opt/frigate/web/ cp -r /opt/frigate/web/dist/* /opt/frigate/web/
sed -i '/^s6-svc -O \.$/s/^/#/' /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/frigate/run
msg_ok "Built Frigate Application"
msg_info "Configuring Frigate"
mkdir -p /config /media/frigate
cp -r /opt/frigate/config/. /config cp -r /opt/frigate/config/. /config
sed -i '/^s6-svc -O \.$/s/^/#/' /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/frigate/run
curl -fsSL "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4" -o "/media/frigate/person-bicycle-car-detection.mp4"
echo "tmpfs /tmp/cache tmpfs defaults 0 0" >>/etc/fstab
cat <<EOF >/etc/frigate.env
DEFAULT_FFMPEG_VERSION="7.0"
INCLUDED_FFMPEG_VERSIONS="7.0:5.0"
EOF
cat <<EOF >/config/config.yml cat <<EOF >/config/config.yml
mqtt: mqtt:
enabled: false enabled: false
cameras: cameras:
test: test:
ffmpeg: ffmpeg:
#hwaccel_args: preset-vaapi
inputs: inputs:
- path: /media/frigate/person-bicycle-car-detection.mp4 - path: /media/frigate/person-bicycle-car-detection.mp4
input_args: -re -stream_loop -1 -fflags +genpts input_args: -re -stream_loop -1 -fflags +genpts
@@ -239,42 +78,96 @@ cameras:
height: 1080 height: 1080
width: 1920 width: 1920
fps: 5 fps: 5
auth:
enabled: false
detect:
enabled: false
EOF EOF
ln -sf /config/config.yml /opt/frigate/config/config.yml
if [[ "$CTTYPE" == "0" ]]; then
sed -i -e 's/^kvm:x:104:$/render:x:104:root,frigate/' -e 's/^render:x:105:root$/kvm:x:105:/' /etc/group
else
sed -i -e 's/^kvm:x:104:$/render:x:104:frigate/' -e 's/^render:x:105:$/kvm:x:105:/' /etc/group
fi
echo "tmpfs /tmp/cache tmpfs defaults 0 0" >>/etc/fstab
msg_ok "Installed Frigate"
if grep -q -o -m1 -E 'avx[^ ]*|sse4_2' /proc/cpuinfo; then if grep -q -o -m1 -E 'avx[^ ]*' /proc/cpuinfo; then
msg_ok "AVX Support Detected"
msg_info "Installing Openvino Object Detection Model (Resilience)"
$STD pip install -r /opt/frigate/docker/main/requirements-ov.txt
cd /opt/frigate/models
export ENABLE_ANALYTICS=NO
$STD /usr/local/bin/omz_downloader --name ssdlite_mobilenet_v2 --num_attempts 2
$STD /usr/local/bin/omz_converter --name ssdlite_mobilenet_v2 --precision FP16 --mo /usr/local/bin/mo
cd /
cp -r /opt/frigate/models/public/ssdlite_mobilenet_v2 openvino-model
curl -fsSL "https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt" -o "openvino-model/coco_91cl_bkgr.txt"
sed -i 's/truck/car/g' openvino-model/coco_91cl_bkgr.txt
cat <<EOF >>/config/config.yml cat <<EOF >>/config/config.yml
ffmpeg:
hwaccel_args: auto
detectors: detectors:
detector01: ov:
type: openvino type: openvino
device: CPU
model:
path: /openvino-model/FP16/ssdlite_mobilenet_v2.xml
model: model:
width: 300 width: 300
height: 300 height: 300
input_tensor: nhwc input_tensor: nhwc
input_pixel_format: bgr input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt labelmap_path: /openvino-model/coco_91cl_bkgr.txt
EOF EOF
msg_ok "Installed Openvino Object Detection Model"
else else
cat <<EOF >>/config/config.yml cat <<EOF >>/config/config.yml
ffmpeg:
hwaccel_args: auto
model: model:
path: /cpu_model.tflite path: /cpu_model.tflite
EOF EOF
fi fi
msg_ok "Configured Frigate"
msg_info "Installing Coral Object Detection Model (Patience)"
cd /opt/frigate
export CCACHE_DIR=/root/.ccache
export CCACHE_MAXSIZE=2G
curl -fsSL "https://github.com/libusb/libusb/archive/v1.0.26.zip" -o "v1.0.26.zip"
$STD unzip v1.0.26.zip
rm v1.0.26.zip
cd libusb-1.0.26
$STD ./bootstrap.sh
$STD ./configure --disable-udev --enable-shared
$STD make -j $(nproc --all)
cd /opt/frigate/libusb-1.0.26/libusb
mkdir -p /usr/local/lib
$STD /bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib'
mkdir -p /usr/local/include/libusb-1.0
$STD /usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0'
ldconfig
cd /
curl -fsSL "https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite" -o "edgetpu_model.tflite"
curl -fsSL "https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite" -o "cpu_model.tflite"
cp /opt/frigate/labelmap.txt /labelmap.txt
curl -fsSL "https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download" -o "yamnet-tflite-classification-tflite-v1.tar.gz"
tar xzf yamnet-tflite-classification-tflite-v1.tar.gz
rm -rf yamnet-tflite-classification-tflite-v1.tar.gz
mv 1.tflite cpu_audio_model.tflite
cp /opt/frigate/audio-labelmap.txt /audio-labelmap.txt
mkdir -p /media/frigate
curl -fsSL "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4" -o "/media/frigate/person-bicycle-car-detection.mp4"
msg_ok "Installed Coral Object Detection Model"
msg_info "Building Nginx with Custom Modules"
$STD /opt/frigate/docker/main/build_nginx.sh
sed -e '/s6-notifyoncheck/ s/^#*/#/' -i /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run
ln -sf /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
msg_ok "Built Nginx"
msg_info "Installing Tempio"
sed -i 's|/rootfs/usr/local|/usr/local|g' /opt/frigate/docker/main/install_tempio.sh
$STD /opt/frigate/docker/main/install_tempio.sh
ln -sf /usr/local/tempio/bin/tempio /usr/local/bin/tempio
msg_ok "Installed Tempio"
msg_info "Creating Services" msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/create_directories.service cat <<EOF >/etc/systemd/system/create_directories.service
[Unit] [Unit]
Description=Create necessary directories for Frigate logs Description=Create necessary directories for logs
Before=frigate.service go2rtc.service nginx.service
[Service] [Service]
Type=oneshot Type=oneshot
@@ -283,11 +176,13 @@ ExecStart=/bin/bash -c '/bin/mkdir -p /dev/shm/logs/{frigate,go2rtc,nginx} && /b
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
EOF EOF
systemctl enable -q --now create_directories
sleep 3
cat <<EOF >/etc/systemd/system/go2rtc.service cat <<EOF >/etc/systemd/system/go2rtc.service
[Unit] [Unit]
Description=go2rtc streaming service Description=go2rtc service
After=network.target create_directories.service After=network.target
After=create_directories.service
StartLimitIntervalSec=0 StartLimitIntervalSec=0
[Service] [Service]
@@ -295,8 +190,7 @@ Type=simple
Restart=always Restart=always
RestartSec=1 RestartSec=1
User=root User=root
EnvironmentFile=/etc/frigate.env ExecStartPre=+rm /dev/shm/logs/go2rtc/current
ExecStartPre=+rm -f /dev/shm/logs/go2rtc/current
ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/go2rtc/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '" ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/go2rtc/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '"
StandardOutput=file:/dev/shm/logs/go2rtc/current StandardOutput=file:/dev/shm/logs/go2rtc/current
StandardError=file:/dev/shm/logs/go2rtc/current StandardError=file:/dev/shm/logs/go2rtc/current
@@ -304,11 +198,13 @@ StandardError=file:/dev/shm/logs/go2rtc/current
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
EOF EOF
systemctl enable -q --now go2rtc
sleep 3
cat <<EOF >/etc/systemd/system/frigate.service cat <<EOF >/etc/systemd/system/frigate.service
[Unit] [Unit]
Description=Frigate NVR service Description=Frigate service
After=go2rtc.service create_directories.service After=go2rtc.service
After=create_directories.service
StartLimitIntervalSec=0 StartLimitIntervalSec=0
[Service] [Service]
@@ -316,8 +212,8 @@ Type=simple
Restart=always Restart=always
RestartSec=1 RestartSec=1
User=root User=root
EnvironmentFile=/etc/frigate.env # Environment=PLUS_API_KEY=
ExecStartPre=+rm -f /dev/shm/logs/frigate/current ExecStartPre=+rm /dev/shm/logs/frigate/current
ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/frigate/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '" ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/frigate/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '"
StandardOutput=file:/dev/shm/logs/frigate/current StandardOutput=file:/dev/shm/logs/frigate/current
StandardError=file:/dev/shm/logs/frigate/current StandardError=file:/dev/shm/logs/frigate/current
@@ -325,11 +221,13 @@ StandardError=file:/dev/shm/logs/frigate/current
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
EOF EOF
systemctl enable -q --now frigate
sleep 3
cat <<EOF >/etc/systemd/system/nginx.service cat <<EOF >/etc/systemd/system/nginx.service
[Unit] [Unit]
Description=Nginx reverse proxy for Frigate Description=Nginx service
After=frigate.service create_directories.service After=frigate.service
After=create_directories.service
StartLimitIntervalSec=0 StartLimitIntervalSec=0
[Service] [Service]
@@ -337,7 +235,7 @@ Type=simple
Restart=always Restart=always
RestartSec=1 RestartSec=1
User=root User=root
ExecStartPre=+rm -f /dev/shm/logs/nginx/current ExecStartPre=+rm /dev/shm/logs/nginx/current
ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '" ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '"
StandardOutput=file:/dev/shm/logs/nginx/current StandardOutput=file:/dev/shm/logs/nginx/current
StandardError=file:/dev/shm/logs/nginx/current StandardError=file:/dev/shm/logs/nginx/current
@@ -345,20 +243,8 @@ StandardError=file:/dev/shm/logs/nginx/current
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
EOF EOF
systemctl daemon-reload
systemctl enable -q --now create_directories
sleep 2
systemctl enable -q --now go2rtc
sleep 2
systemctl enable -q --now frigate
sleep 2
systemctl enable -q --now nginx systemctl enable -q --now nginx
msg_ok "Created Services" msg_ok "Configured Services"
msg_info "Cleaning Up"
rm -rf /opt/libusb /wheels /models/*.tar.gz
msg_ok "Cleaned Up"
motd_ssh motd_ssh
customize customize

View File

@@ -1,64 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://docs.jellyseerr.dev/
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt-get install -y \
git \
build-essential
msg_ok "Installed Dependencies"
git clone -q https://github.com/Fallenbagel/jellyseerr.git /opt/jellyseerr
cd /opt/jellyseerr
$STD git checkout main
pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)
NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
msg_info "Installing Jellyseerr (Patience)"
export CYPRESS_INSTALL_BINARY=0
cd /opt/jellyseerr
$STD pnpm install --frozen-lockfile
export NODE_OPTIONS="--max-old-space-size=3072"
$STD pnpm build
mkdir -p /etc/jellyseerr/
cat <<EOF >/etc/jellyseerr/jellyseerr.conf
PORT=5055
# HOST=0.0.0.0
# JELLYFIN_TYPE=emby
EOF
msg_ok "Installed Jellyseerr"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/jellyseerr.service
[Unit]
Description=jellyseerr Service
After=network.target
[Service]
EnvironmentFile=/etc/jellyseerr/jellyseerr.conf
Environment=NODE_ENV=production
Type=exec
WorkingDirectory=/opt/jellyseerr
ExecStart=/usr/bin/node dist/index.js
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now jellyseerr
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc

View File

@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://overseerr.dev/
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
NODE_VERSION="22" NODE_MODULE="yarn@latest" setup_nodejs
fetch_and_deploy_gh_release "overseerr" "sct/overseerr" "tarball"
msg_info "Configuring Overseerr (Patience)"
cd /opt/overseerr
$STD yarn install
$STD yarn build
msg_ok "Configured Overseerr"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/overseerr.service
[Unit]
Description=Overseerr Service
After=network.target
[Service]
Type=exec
WorkingDirectory=/opt/overseerr
ExecStart=/usr/bin/yarn start
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now overseerr
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc

install/seerr-install.sh Normal file

@@ -0,0 +1,67 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: CrazyWolf13
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://docs.seerr.dev/
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt-get install -y build-essential
msg_ok "Installed Dependencies"
fetch_and_deploy_gh_release "seerr" "seerr-team/seerr" "tarball"
pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/seerr/package.json)
NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
msg_info "Installing Seerr (Patience)"
export CYPRESS_INSTALL_BINARY=0
cd /opt/seerr
$STD pnpm install --frozen-lockfile
export NODE_OPTIONS="--max-old-space-size=3072"
$STD pnpm build
mkdir -p /etc/seerr/
cat <<EOF >/etc/seerr/seerr.conf
## Seerr's default port is 5055; if you want to run it alongside an existing Overseerr/Jellyseerr install, change this.
## Specify the port to listen on
PORT=5055
## Specify the interface to listen on; by default, Seerr listens on all interfaces
HOST=0.0.0.0
## Uncomment if you want to force Node.js to resolve IPv4 before IPv6 (advanced users only)
# FORCE_IPV4_FIRST=true
EOF
msg_ok "Installed Seerr"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/seerr.service
[Unit]
Description=Seerr Service
Wants=network-online.target
After=network-online.target
[Service]
EnvironmentFile=/etc/seerr/seerr.conf
Environment=NODE_ENV=production
Type=exec
Restart=on-failure
WorkingDirectory=/opt/seerr
ExecStart=/usr/bin/node dist/index.js
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now seerr
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc
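A quick post-install sanity check could look like the sketch below; the /api/v1/status endpoint is an assumption carried over from Overseerr and may differ in Seerr.
# Hypothetical verification - not part of the install script above
systemctl status seerr --no-pager --lines=5
curl -fsS http://127.0.0.1:5055/api/v1/status || echo "Seerr not reachable yet"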


@@ -237,23 +237,29 @@ explain_exit_code() {
# json_escape() # json_escape()
# #
# - Escapes a string for safe JSON embedding # - Escapes a string for safe JSON embedding
# - Strips ANSI escape sequences and non-printable control characters
# - Handles backslashes, quotes, newlines, tabs, and carriage returns # - Handles backslashes, quotes, newlines, tabs, and carriage returns
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
json_escape() { json_escape() {
local s="$1" local s="$1"
# Strip ANSI escape sequences (color codes etc.)
s=$(printf '%s' "$s" | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g')
s=${s//\\/\\\\} s=${s//\\/\\\\}
s=${s//"/\\"} s=${s//"/\\"/}
s=${s//$'\n'/\\n} s=${s//$'\n'/\\n}
s=${s//$'\r'/} s=${s//$'\r'/}
s=${s//$'\t'/\\t} s=${s//$'\t'/\\t}
echo "$s" # Remove any remaining control characters (0x00-0x1F except those already handled)
s=$(printf '%s' "$s" | tr -d '\000-\010\013\014\016-\037')
printf '%s' "$s"
} }
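A minimal usage sketch of the updated json_escape (input string is illustrative), showing that color codes and control characters no longer leak into the JSON payload:
# Hypothetical example - assumes the json_escape above is sourced
raw=$'\e[31mBuild failed\e[0m\nsee "install.log" for details'
escaped=$(json_escape "$raw")
# escaped -> Build failed\nsee \"install.log\" for details
printf '{"error": "%s"}\n' "$escaped"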
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# get_error_text() # get_error_text()
# #
# - Returns last 20 lines of the active log (INSTALL_LOG or BUILD_LOG) # - Returns last 20 lines of the active log (INSTALL_LOG or BUILD_LOG)
# - Falls back to empty string if no log is available # - Falls back to combined log or BUILD_LOG if primary is not accessible
# - Handles container paths that don't exist on the host
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
get_error_text() { get_error_text() {
local logfile="" local logfile=""
@@ -265,8 +271,50 @@ get_error_text() {
logfile="$BUILD_LOG" logfile="$BUILD_LOG"
fi fi
# If logfile is inside container (e.g. /root/.install-*), try the host copy
if [[ -n "$logfile" && ! -s "$logfile" ]]; then
# Try combined log: /tmp/<app>-<CTID>-<SESSION_ID>.log
if [[ -n "${CTID:-}" && -n "${SESSION_ID:-}" ]]; then
local combined_log="/tmp/${NSAPP:-lxc}-${CTID}-${SESSION_ID}.log"
if [[ -s "$combined_log" ]]; then
logfile="$combined_log"
fi
fi
fi
# Also try BUILD_LOG as fallback if primary log is empty/missing
if [[ -z "$logfile" || ! -s "$logfile" ]] && [[ -n "${BUILD_LOG:-}" && -s "${BUILD_LOG}" ]]; then
logfile="$BUILD_LOG"
fi
if [[ -n "$logfile" && -s "$logfile" ]]; then if [[ -n "$logfile" && -s "$logfile" ]]; then
tail -n 20 "$logfile" 2>/dev/null | sed 's/\r$//' tail -n 20 "$logfile" 2>/dev/null | sed 's/\r$//' | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g'
fi
}
# ------------------------------------------------------------------------------
# build_error_string()
#
# - Builds a structured error string for telemetry reporting
# - Format: "exit_code=<N> | <explanation>\n---\n<last 20 log lines>"
# - If no log lines available, returns just the explanation
# - Arguments:
# * $1: exit_code (numeric)
# * $2: log_text (optional, output from get_error_text)
# - Returns structured error string via stdout
# ------------------------------------------------------------------------------
build_error_string() {
local exit_code="${1:-1}"
local log_text="${2:-}"
local explanation
explanation=$(explain_exit_code "$exit_code")
if [[ -n "$log_text" ]]; then
# Structured format: header + separator + log lines
printf 'exit_code=%s | %s\n---\n%s' "$exit_code" "$explanation" "$log_text"
else
# No log available - just the explanation with exit code
printf 'exit_code=%s | %s' "$exit_code" "$explanation"
fi fi
} }
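Roughly how the new helper is consumed by the reporting functions further below (exit code 127 and the explanation text are illustrative):
# Hypothetical flow - mirrors post_update_to_api / post_tool_to_api below
log_tail=$(get_error_text)                      # last 20 log lines, ANSI stripped
full_error=$(build_error_string 127 "$log_tail")
# full_error -> exit_code=127 | <explanation from explain_exit_code 127>
#               ---
#               <last 20 log lines>
error=$(json_escape "$full_error")              # safe to embed in the JSON payload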
@@ -422,7 +470,8 @@ post_to_api() {
detect_gpu detect_gpu
fi fi
local gpu_vendor="${GPU_VENDOR:-unknown}" local gpu_vendor="${GPU_VENDOR:-unknown}"
local gpu_model="${GPU_MODEL:-}" local gpu_model
gpu_model=$(json_escape "${GPU_MODEL:-}")
local gpu_passthrough="${GPU_PASSTHROUGH:-unknown}" local gpu_passthrough="${GPU_PASSTHROUGH:-unknown}"
# Detect CPU if not already set # Detect CPU if not already set
@@ -430,7 +479,8 @@ post_to_api() {
detect_cpu detect_cpu
fi fi
local cpu_vendor="${CPU_VENDOR:-unknown}" local cpu_vendor="${CPU_VENDOR:-unknown}"
local cpu_model="${CPU_MODEL:-}" local cpu_model
cpu_model=$(json_escape "${CPU_MODEL:-}")
# Detect RAM if not already set # Detect RAM if not already set
if [[ -z "${RAM_SPEED:-}" ]]; then if [[ -z "${RAM_SPEED:-}" ]]; then
@@ -521,7 +571,8 @@ post_to_api_vm() {
detect_gpu detect_gpu
fi fi
local gpu_vendor="${GPU_VENDOR:-unknown}" local gpu_vendor="${GPU_VENDOR:-unknown}"
local gpu_model="${GPU_MODEL:-}" local gpu_model
gpu_model=$(json_escape "${GPU_MODEL:-}")
local gpu_passthrough="${GPU_PASSTHROUGH:-unknown}" local gpu_passthrough="${GPU_PASSTHROUGH:-unknown}"
# Detect CPU if not already set # Detect CPU if not already set
@@ -529,7 +580,8 @@ post_to_api_vm() {
detect_cpu detect_cpu
fi fi
local cpu_vendor="${CPU_VENDOR:-unknown}" local cpu_vendor="${CPU_VENDOR:-unknown}"
local cpu_model="${CPU_MODEL:-}" local cpu_model
cpu_model=$(json_escape "${CPU_MODEL:-}")
# Detect RAM if not already set # Detect RAM if not already set
if [[ -z "${RAM_SPEED:-}" ]]; then if [[ -z "${RAM_SPEED:-}" ]]; then
@@ -592,9 +644,12 @@ post_update_to_api() {
# Silent fail - telemetry should never break scripts # Silent fail - telemetry should never break scripts
command -v curl &>/dev/null || return 0 command -v curl &>/dev/null || return 0
# Prevent duplicate submissions # Support "force" mode (3rd arg) to bypass duplicate check for retries after cleanup
local force="${3:-}"
POST_UPDATE_DONE=${POST_UPDATE_DONE:-false} POST_UPDATE_DONE=${POST_UPDATE_DONE:-false}
[[ "$POST_UPDATE_DONE" == "true" ]] && return 0 if [[ "$POST_UPDATE_DONE" == "true" && "$force" != "force" ]]; then
return 0
fi
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0 [[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0 [[ -z "${RANDOM_UUID:-}" ]] && return 0
@@ -605,12 +660,14 @@ post_update_to_api() {
# Get GPU info (if detected) # Get GPU info (if detected)
local gpu_vendor="${GPU_VENDOR:-unknown}" local gpu_vendor="${GPU_VENDOR:-unknown}"
local gpu_model="${GPU_MODEL:-}" local gpu_model
gpu_model=$(json_escape "${GPU_MODEL:-}")
local gpu_passthrough="${GPU_PASSTHROUGH:-unknown}" local gpu_passthrough="${GPU_PASSTHROUGH:-unknown}"
# Get CPU info (if detected) # Get CPU info (if detected)
local cpu_vendor="${CPU_VENDOR:-unknown}" local cpu_vendor="${CPU_VENDOR:-unknown}"
local cpu_model="${CPU_MODEL:-}" local cpu_model
cpu_model=$(json_escape "${CPU_MODEL:-}")
# Get RAM info (if detected) # Get RAM info (if detected)
local ram_speed="${RAM_SPEED:-}" local ram_speed="${RAM_SPEED:-}"
@@ -632,19 +689,20 @@ post_update_to_api() {
esac esac
# For failed/unknown status, resolve exit code and error description # For failed/unknown status, resolve exit code and error description
local short_error=""
if [[ "$pb_status" == "failed" ]] || [[ "$pb_status" == "unknown" ]]; then if [[ "$pb_status" == "failed" ]] || [[ "$pb_status" == "unknown" ]]; then
if [[ "$raw_exit_code" =~ ^[0-9]+$ ]]; then if [[ "$raw_exit_code" =~ ^[0-9]+$ ]]; then
exit_code="$raw_exit_code" exit_code="$raw_exit_code"
else else
exit_code=1 exit_code=1
fi fi
# Get log lines and build structured error string
local error_text="" local error_text=""
error_text=$(get_error_text) error_text=$(get_error_text)
if [[ -n "$error_text" ]]; then local full_error
error=$(json_escape "$error_text") full_error=$(build_error_string "$exit_code" "$error_text")
else error=$(json_escape "$full_error")
error=$(json_escape "$(explain_exit_code "$exit_code")") short_error=$(json_escape "$(explain_exit_code "$exit_code")")
fi
error_category=$(categorize_error "$exit_code") error_category=$(categorize_error "$exit_code")
[[ -z "$error" ]] && error="Unknown error" [[ -z "$error" ]] && error="Unknown error"
fi fi
@@ -661,8 +719,9 @@ post_update_to_api() {
pve_version=$(pveversion 2>/dev/null | awk -F'[/ ]' '{print $2}') || true pve_version=$(pveversion 2>/dev/null | awk -F'[/ ]' '{print $2}') || true
fi fi
# Full payload including all fields - allows record creation if initial call failed local http_code=""
# The Go service will find the record by random_id and PATCH, or create if not found
# ── Attempt 1: Full payload with complete error text ──
local JSON_PAYLOAD local JSON_PAYLOAD
JSON_PAYLOAD=$( JSON_PAYLOAD=$(
cat <<EOF cat <<EOF
@@ -694,11 +753,80 @@ post_update_to_api() {
EOF EOF
) )
# Fire-and-forget: never block, never fail http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
POST_UPDATE_DONE=true
return 0
fi
# ── Attempt 2: Short error text (no full log) ──
sleep 1
local RETRY_PAYLOAD
RETRY_PAYLOAD=$(
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}",
"ct_type": ${CT_TYPE:-1},
"disk_size": ${DISK_SIZE:-0},
"core_count": ${CORE_COUNT:-0},
"ram_size": ${RAM_SIZE:-0},
"os_type": "${var_os:-}",
"os_version": "${var_version:-}",
"pve_version": "${pve_version}",
"method": "${METHOD:-default}",
"exit_code": ${exit_code},
"error": "${short_error}",
"error_category": "${error_category}",
"install_duration": ${duration},
"cpu_vendor": "${cpu_vendor}",
"cpu_model": "${cpu_model}",
"gpu_vendor": "${gpu_vendor}",
"gpu_model": "${gpu_model}",
"gpu_passthrough": "${gpu_passthrough}",
"ram_speed": "${ram_speed}",
"repo_source": "${REPO_SOURCE}"
}
EOF
)
http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \
-d "$RETRY_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
POST_UPDATE_DONE=true
return 0
fi
# ── Attempt 3: Minimal payload (bare minimum to set status) ──
sleep 2
local MINIMAL_PAYLOAD
MINIMAL_PAYLOAD=$(
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}",
"exit_code": ${exit_code},
"error": "${short_error}",
"error_category": "${error_category}",
"install_duration": ${duration}
}
EOF
)
curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \ curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \ -H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" -o /dev/null 2>&1 || true -d "$MINIMAL_PAYLOAD" -o /dev/null 2>/dev/null || true
# Tried 3 times - mark as done regardless to prevent infinite loops
POST_UPDATE_DONE=true POST_UPDATE_DONE=true
} }
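For context, the new "force" argument is exercised later in the failure/cleanup path of build.func; roughly:
# Hypothetical caller sequence (see the cleanup handler further below)
post_update_to_api "failed" "$install_exit_code"            # tries full, short, then minimal payload
# ... user is prompted, broken container is removed ...
post_update_to_api "failed" "$install_exit_code" "force"    # bypasses the POST_UPDATE_DONE guard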
@@ -716,31 +844,52 @@ EOF
categorize_error() { categorize_error() {
local code="$1" local code="$1"
case "$code" in case "$code" in
# Network errors # Network errors (curl/wget)
6 | 7 | 22 | 28 | 35) echo "network" ;; 6 | 7 | 22 | 35) echo "network" ;;
# Storage errors # Timeout errors
214 | 217 | 219) echo "storage" ;; 28 | 124 | 211) echo "timeout" ;;
# Dependency/Package errors # Storage errors (Proxmox storage)
100 | 101 | 102 | 127 | 160 | 161 | 162) echo "dependency" ;; 214 | 217 | 219 | 224) echo "storage" ;;
# Dependency/Package errors (APT, DPKG, pip, commands)
100 | 101 | 102 | 127 | 160 | 161 | 162 | 255) echo "dependency" ;;
# Permission errors # Permission errors
126 | 152) echo "permission" ;; 126 | 152) echo "permission" ;;
# Timeout errors # Configuration errors (Proxmox config, invalid args)
124 | 28 | 211) echo "timeout" ;; 128 | 203 | 204 | 205 | 206 | 207 | 208) echo "config" ;;
# Configuration errors # Proxmox container/template errors
203 | 204 | 205 | 206 | 207 | 208) echo "config" ;; 200 | 209 | 210 | 212 | 213 | 215 | 216 | 218 | 220 | 221 | 222 | 223 | 225 | 231) echo "proxmox" ;;
# Service/Systemd errors
150 | 151 | 153 | 154) echo "service" ;;
# Database errors (PostgreSQL, MySQL, MongoDB)
170 | 171 | 172 | 173 | 180 | 181 | 182 | 183 | 190 | 191 | 192 | 193) echo "database" ;;
# Node.js / JavaScript runtime errors
243 | 245 | 246 | 247 | 248 | 249) echo "runtime" ;;
# Python environment errors
# (already covered: 160-162 under dependency)
# Aborted by user # Aborted by user
130) echo "aborted" ;; 130) echo "aborted" ;;
# Resource errors (OOM, etc) # Resource errors (OOM, SIGKILL, SIGABRT)
137 | 134) echo "resource" ;; 134 | 137) echo "resource" ;;
# Default # Signal/Process errors (SIGTERM, SIGPIPE, SIGSEGV)
139 | 141 | 143) echo "signal" ;;
# Shell errors (general error, syntax error)
1 | 2) echo "shell" ;;
# Default - truly unknown
*) echo "unknown" ;; *) echo "unknown" ;;
esac esac
} }
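A few spot checks against the expanded mapping above:
categorize_error 28     # -> timeout    (curl operation timed out)
categorize_error 100    # -> dependency (APT/package error)
categorize_error 137    # -> resource   (OOM / SIGKILL)
categorize_error 143    # -> signal     (SIGTERM)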
@@ -803,11 +952,9 @@ post_tool_to_api() {
[[ ! "$exit_code" =~ ^[0-9]+$ ]] && exit_code=1 [[ ! "$exit_code" =~ ^[0-9]+$ ]] && exit_code=1
local error_text="" local error_text=""
error_text=$(get_error_text) error_text=$(get_error_text)
if [[ -n "$error_text" ]]; then local full_error
error=$(json_escape "$error_text") full_error=$(build_error_string "$exit_code" "$error_text")
else error=$(json_escape "$full_error")
error=$(json_escape "$(explain_exit_code "$exit_code")")
fi
error_category=$(categorize_error "$exit_code") error_category=$(categorize_error "$exit_code")
fi fi
@@ -870,11 +1017,9 @@ post_addon_to_api() {
[[ ! "$exit_code" =~ ^[0-9]+$ ]] && exit_code=1 [[ ! "$exit_code" =~ ^[0-9]+$ ]] && exit_code=1
local error_text="" local error_text=""
error_text=$(get_error_text) error_text=$(get_error_text)
if [[ -n "$error_text" ]]; then local full_error
error=$(json_escape "$error_text") full_error=$(build_error_string "$exit_code" "$error_text")
else error=$(json_escape "$full_error")
error=$(json_escape "$(explain_exit_code "$exit_code")")
fi
error_category=$(categorize_error "$exit_code") error_category=$(categorize_error "$exit_code")
fi fi
@@ -969,11 +1114,9 @@ post_update_to_api_extended() {
fi fi
local error_text="" local error_text=""
error_text=$(get_error_text) error_text=$(get_error_text)
if [[ -n "$error_text" ]]; then local full_error
error=$(json_escape "$error_text") full_error=$(build_error_string "$exit_code" "$error_text")
else error=$(json_escape "$full_error")
error=$(json_escape "$(explain_exit_code "$exit_code")")
fi
error_category=$(categorize_error "$exit_code") error_category=$(categorize_error "$exit_code")
[[ -z "$error" ]] && error="Unknown error" [[ -z "$error" ]] && error="Unknown error"
fi fi


@@ -38,15 +38,16 @@
# - Captures app-declared resource defaults (CPU, RAM, Disk) # - Captures app-declared resource defaults (CPU, RAM, Disk)
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
variables() { variables() {
NSAPP=$(echo "${APP,,}" | tr -d ' ') # This function sets the NSAPP variable by converting the value of the APP variable to lowercase and removing any spaces. NSAPP=$(echo "${APP,,}" | tr -d ' ') # This function sets the NSAPP variable by converting the value of the APP variable to lowercase and removing any spaces.
var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP. var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP.
INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern. INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern.
PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase
DIAGNOSTICS="yes" # sets the DIAGNOSTICS variable to "yes", used for the API call. DIAGNOSTICS="yes" # sets the DIAGNOSTICS variable to "yes", used for the API call.
METHOD="default" # sets the METHOD variable to "default", used for the API call. METHOD="default" # sets the METHOD variable to "default", used for the API call.
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable. RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable.
SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log
combined_log="/tmp/install-${SESSION_ID}-combined.log" # Combined log (build + install) for failed installations
CTTYPE="${CTTYPE:-${CT_TYPE:-1}}" CTTYPE="${CTTYPE:-${CT_TYPE:-1}}"
# Parse dev_mode early # Parse dev_mode early
@@ -217,7 +218,7 @@ update_motd_ip() {
local current_os="$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '"') - Version: $(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '"')" local current_os="$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '"') - Version: $(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '"')"
local current_hostname="$(hostname)" local current_hostname="$(hostname)"
local current_ip="$(hostname -I | awk '{print $1}')" local current_ip="$(hostname -I | awk '{print $1}')"
# Update only if values actually changed # Update only if values actually changed
if ! grep -q "OS:.*$current_os" "$PROFILE_FILE" 2>/dev/null; then if ! grep -q "OS:.*$current_os" "$PROFILE_FILE" 2>/dev/null; then
sed -i "s|OS:.*|OS: \${GN}$current_os\${CL}\\\"|" "$PROFILE_FILE" sed -i "s|OS:.*|OS: \${GN}$current_os\${CL}\\\"|" "$PROFILE_FILE"
@@ -276,8 +277,9 @@ install_ssh_keys_into_ct() {
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# validate_container_id() # validate_container_id()
# #
# - Validates if a container ID is available for use # - Validates if a container ID is available for use (CLUSTER-WIDE)
# - Checks if ID is already used by VM or LXC container # - Checks cluster resources via pvesh for VMs/CTs on ALL nodes
# - Falls back to local config file check if pvesh unavailable
# - Checks if ID is used in LVM logical volumes # - Checks if ID is used in LVM logical volumes
# - Returns 0 if ID is available, 1 if already in use # - Returns 0 if ID is available, 1 if already in use
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -289,11 +291,35 @@ validate_container_id() {
return 1 return 1
fi fi
# Check if config file exists for VM or LXC # CLUSTER-WIDE CHECK: Query all VMs/CTs across all nodes
# This catches IDs used on other nodes in the cluster
# NOTE: Works on single-node too - Proxmox always has internal cluster structure
# Falls back gracefully if pvesh unavailable or returns empty
if command -v pvesh &>/dev/null; then
local cluster_ids
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true)
if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then
return 1
fi
fi
# LOCAL FALLBACK: Check if config file exists for VM or LXC
# This handles edge cases where pvesh might not return all info
if [[ -f "/etc/pve/qemu-server/${ctid}.conf" ]] || [[ -f "/etc/pve/lxc/${ctid}.conf" ]]; then if [[ -f "/etc/pve/qemu-server/${ctid}.conf" ]] || [[ -f "/etc/pve/lxc/${ctid}.conf" ]]; then
return 1 return 1
fi fi
# Check ALL nodes in cluster for config files (handles pmxcfs sync delays)
# NOTE: On single-node, /etc/pve/nodes/ contains just the one node - still works
if [[ -d "/etc/pve/nodes" ]]; then
for node_dir in /etc/pve/nodes/*/; do
if [[ -f "${node_dir}qemu-server/${ctid}.conf" ]] || [[ -f "${node_dir}lxc/${ctid}.conf" ]]; then
return 1
fi
done
fi
# Check if ID is used in LVM logical volumes # Check if ID is used in LVM logical volumes
if lvs --noheadings -o lv_name 2>/dev/null | grep -qE "(^|[-_])${ctid}($|[-_])"; then if lvs --noheadings -o lv_name 2>/dev/null | grep -qE "(^|[-_])${ctid}($|[-_])"; then
return 1 return 1
@@ -305,63 +331,30 @@ validate_container_id() {
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# get_valid_container_id() # get_valid_container_id()
# #
# - Returns a valid, unused container ID # - Returns a valid, unused container ID (CLUSTER-AWARE)
# - Uses pvesh /cluster/nextid as starting point (already cluster-aware)
# - If provided ID is valid, returns it # - If provided ID is valid, returns it
# - Otherwise increments from suggested ID until a free one is found # - Otherwise increments until a free one is found across entire cluster
# - Calls validate_container_id() to check availability # - Calls validate_container_id() to check availability
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
get_valid_container_id() { get_valid_container_id() {
local suggested_id="${1:-$(pvesh get /cluster/nextid)}" local suggested_id="${1:-$(pvesh get /cluster/nextid 2>/dev/null || echo 100)}"
while ! validate_container_id "$suggested_id"; do # Ensure we have a valid starting ID
suggested_id=$((suggested_id + 1)) if ! [[ "$suggested_id" =~ ^[0-9]+$ ]]; then
done suggested_id=$(pvesh get /cluster/nextid 2>/dev/null || echo 100)
fi
echo "$suggested_id"
} local max_attempts=1000
local attempts=0
# ------------------------------------------------------------------------------
# validate_container_id()
#
# - Validates if a container ID is available for use
# - Checks if ID is already used by VM or LXC container
# - Checks if ID is used in LVM logical volumes
# - Returns 0 if ID is available, 1 if already in use
# ------------------------------------------------------------------------------
validate_container_id() {
local ctid="$1"
# Check if ID is numeric
if ! [[ "$ctid" =~ ^[0-9]+$ ]]; then
return 1
fi
# Check if config file exists for VM or LXC
if [[ -f "/etc/pve/qemu-server/${ctid}.conf" ]] || [[ -f "/etc/pve/lxc/${ctid}.conf" ]]; then
return 1
fi
# Check if ID is used in LVM logical volumes
if lvs --noheadings -o lv_name 2>/dev/null | grep -qE "(^|[-_])${ctid}($|[-_])"; then
return 1
fi
return 0
}
# ------------------------------------------------------------------------------
# get_valid_container_id()
#
# - Returns a valid, unused container ID
# - If provided ID is valid, returns it
# - Otherwise increments from suggested ID until a free one is found
# - Calls validate_container_id() to check availability
# ------------------------------------------------------------------------------
get_valid_container_id() {
local suggested_id="${1:-$(pvesh get /cluster/nextid)}"
while ! validate_container_id "$suggested_id"; do while ! validate_container_id "$suggested_id"; do
suggested_id=$((suggested_id + 1)) suggested_id=$((suggested_id + 1))
attempts=$((attempts + 1))
if [[ $attempts -ge $max_attempts ]]; then
msg_error "Could not find available container ID after $max_attempts attempts"
exit 1
fi
done done
echo "$suggested_id" echo "$suggested_id"
@@ -385,7 +378,7 @@ validate_hostname() {
# Split by dots and validate each label # Split by dots and validate each label
local IFS='.' local IFS='.'
read -ra labels <<< "$hostname" read -ra labels <<<"$hostname"
for label in "${labels[@]}"; do for label in "${labels[@]}"; do
# Each label: 1-63 chars, alphanumeric, hyphens allowed (not at start/end) # Each label: 1-63 chars, alphanumeric, hyphens allowed (not at start/end)
if [[ -z "$label" ]] || [[ ${#label} -gt 63 ]]; then if [[ -z "$label" ]] || [[ ${#label} -gt 63 ]]; then
@@ -489,7 +482,7 @@ validate_ipv6_address() {
# Check that no segment exceeds 4 hex chars # Check that no segment exceeds 4 hex chars
local IFS=':' local IFS=':'
local -a segments local -a segments
read -ra segments <<< "$addr" read -ra segments <<<"$addr"
for seg in "${segments[@]}"; do for seg in "${segments[@]}"; do
if [[ ${#seg} -gt 4 ]]; then if [[ ${#seg} -gt 4 ]]; then
return 1 return 1
@@ -539,14 +532,14 @@ validate_gateway_in_subnet() {
# Convert IPs to integers # Convert IPs to integers
local IFS='.' local IFS='.'
read -r i1 i2 i3 i4 <<< "$ip" read -r i1 i2 i3 i4 <<<"$ip"
read -r g1 g2 g3 g4 <<< "$gateway" read -r g1 g2 g3 g4 <<<"$gateway"
local ip_int=$(( (i1 << 24) + (i2 << 16) + (i3 << 8) + i4 )) local ip_int=$(((i1 << 24) + (i2 << 16) + (i3 << 8) + i4))
local gw_int=$(( (g1 << 24) + (g2 << 16) + (g3 << 8) + g4 )) local gw_int=$(((g1 << 24) + (g2 << 16) + (g3 << 8) + g4))
# Check if both are in same network # Check if both are in same network
if (( (ip_int & mask) != (gw_int & mask) )); then if (((ip_int & mask) != (gw_int & mask))); then
return 1 return 1
fi fi
@@ -1079,117 +1072,117 @@ load_vars_file() {
# Validate values before setting (skip empty values - they use defaults) # Validate values before setting (skip empty values - they use defaults)
if [[ -n "$var_val" ]]; then if [[ -n "$var_val" ]]; then
case "$var_key" in case "$var_key" in
var_mac) var_mac)
if ! validate_mac_address "$var_val"; then if ! validate_mac_address "$var_val"; then
msg_warn "Invalid MAC address '$var_val' in $file, ignoring" msg_warn "Invalid MAC address '$var_val' in $file, ignoring"
continue
fi
;;
var_vlan)
if ! validate_vlan_tag "$var_val"; then
msg_warn "Invalid VLAN tag '$var_val' in $file (must be 1-4094), ignoring"
continue
fi
;;
var_mtu)
if ! validate_mtu "$var_val"; then
msg_warn "Invalid MTU '$var_val' in $file (must be 576-65535), ignoring"
continue
fi
;;
var_tags)
if ! validate_tags "$var_val"; then
msg_warn "Invalid tags '$var_val' in $file (alphanumeric, -, _, ; only), ignoring"
continue
fi
;;
var_timezone)
if ! validate_timezone "$var_val"; then
msg_warn "Invalid timezone '$var_val' in $file, ignoring"
continue
fi
;;
var_brg)
if ! validate_bridge "$var_val"; then
msg_warn "Bridge '$var_val' not found in $file, ignoring"
continue
fi
;;
var_gateway)
if ! validate_gateway_ip "$var_val"; then
msg_warn "Invalid gateway IP '$var_val' in $file, ignoring"
continue
fi
;;
var_hostname)
if ! validate_hostname "$var_val"; then
msg_warn "Invalid hostname '$var_val' in $file, ignoring"
continue
fi
;;
var_cpu)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1 || var_val > 128)); then
msg_warn "Invalid CPU count '$var_val' in $file (must be 1-128), ignoring"
continue
fi
;;
var_ram)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 256)); then
msg_warn "Invalid RAM '$var_val' in $file (must be >= 256 MiB), ignoring"
continue
fi
;;
var_disk)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1)); then
msg_warn "Invalid disk size '$var_val' in $file (must be >= 1 GB), ignoring"
continue
fi
;;
var_unprivileged)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid unprivileged value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
;;
var_nesting)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid nesting value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
# Warn about potential issues with systemd-based OS when nesting is disabled via vars file
if [[ "$var_val" == "0" && "${var_os:-debian}" != "alpine" ]]; then
msg_warn "Nesting disabled in $file - modern systemd-based distributions may require nesting for proper operation"
fi
;;
var_keyctl)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid keyctl value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
;;
var_net)
# var_net can be: dhcp, static IP/CIDR, or IP range
if [[ "$var_val" != "dhcp" ]]; then
if is_ip_range "$var_val"; then
: # IP range is valid, will be resolved at runtime
elif ! validate_ip_address "$var_val"; then
msg_warn "Invalid network '$var_val' in $file (must be dhcp or IP/CIDR), ignoring"
continue continue
fi fi
;; fi
var_vlan) ;;
if ! validate_vlan_tag "$var_val"; then var_fuse | var_tun | var_gpu | var_ssh | var_verbose | var_protection)
msg_warn "Invalid VLAN tag '$var_val' in $file (must be 1-4094), ignoring" if [[ "$var_val" != "yes" && "$var_val" != "no" ]]; then
continue msg_warn "Invalid boolean '$var_val' for $var_key in $file (must be yes/no), ignoring"
fi continue
;; fi
var_mtu) ;;
if ! validate_mtu "$var_val"; then var_ipv6_method)
msg_warn "Invalid MTU '$var_val' in $file (must be 576-65535), ignoring" if [[ "$var_val" != "auto" && "$var_val" != "dhcp" && "$var_val" != "static" && "$var_val" != "none" ]]; then
continue msg_warn "Invalid IPv6 method '$var_val' in $file (must be auto/dhcp/static/none), ignoring"
fi continue
;; fi
var_tags) ;;
if ! validate_tags "$var_val"; then
msg_warn "Invalid tags '$var_val' in $file (alphanumeric, -, _, ; only), ignoring"
continue
fi
;;
var_timezone)
if ! validate_timezone "$var_val"; then
msg_warn "Invalid timezone '$var_val' in $file, ignoring"
continue
fi
;;
var_brg)
if ! validate_bridge "$var_val"; then
msg_warn "Bridge '$var_val' not found in $file, ignoring"
continue
fi
;;
var_gateway)
if ! validate_gateway_ip "$var_val"; then
msg_warn "Invalid gateway IP '$var_val' in $file, ignoring"
continue
fi
;;
var_hostname)
if ! validate_hostname "$var_val"; then
msg_warn "Invalid hostname '$var_val' in $file, ignoring"
continue
fi
;;
var_cpu)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1 || var_val > 128)); then
msg_warn "Invalid CPU count '$var_val' in $file (must be 1-128), ignoring"
continue
fi
;;
var_ram)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 256)); then
msg_warn "Invalid RAM '$var_val' in $file (must be >= 256 MiB), ignoring"
continue
fi
;;
var_disk)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1)); then
msg_warn "Invalid disk size '$var_val' in $file (must be >= 1 GB), ignoring"
continue
fi
;;
var_unprivileged)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid unprivileged value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
;;
var_nesting)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid nesting value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
# Warn about potential issues with systemd-based OS when nesting is disabled via vars file
if [[ "$var_val" == "0" && "${var_os:-debian}" != "alpine" ]]; then
msg_warn "Nesting disabled in $file - modern systemd-based distributions may require nesting for proper operation"
fi
;;
var_keyctl)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid keyctl value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
;;
var_net)
# var_net can be: dhcp, static IP/CIDR, or IP range
if [[ "$var_val" != "dhcp" ]]; then
if is_ip_range "$var_val"; then
: # IP range is valid, will be resolved at runtime
elif ! validate_ip_address "$var_val"; then
msg_warn "Invalid network '$var_val' in $file (must be dhcp or IP/CIDR), ignoring"
continue
fi
fi
;;
var_fuse|var_tun|var_gpu|var_ssh|var_verbose|var_protection)
if [[ "$var_val" != "yes" && "$var_val" != "no" ]]; then
msg_warn "Invalid boolean '$var_val' for $var_key in $file (must be yes/no), ignoring"
continue
fi
;;
var_ipv6_method)
if [[ "$var_val" != "auto" && "$var_val" != "dhcp" && "$var_val" != "static" && "$var_val" != "none" ]]; then
msg_warn "Invalid IPv6 method '$var_val' in $file (must be auto/dhcp/static/none), ignoring"
continue
fi
;;
esac esac
fi fi
@@ -2764,6 +2757,26 @@ Advanced:
[[ "$APT_CACHER" == "yes" ]] && echo -e "${INFO}${BOLD}${DGN}APT Cacher: ${BGN}$APT_CACHER_IP${CL}" [[ "$APT_CACHER" == "yes" ]] && echo -e "${INFO}${BOLD}${DGN}APT Cacher: ${BGN}$APT_CACHER_IP${CL}"
echo -e "${SEARCH}${BOLD}${DGN}Verbose Mode: ${BGN}$VERBOSE${CL}" echo -e "${SEARCH}${BOLD}${DGN}Verbose Mode: ${BGN}$VERBOSE${CL}"
echo -e "${CREATING}${BOLD}${RD}Creating an LXC of ${APP} using the above advanced settings${CL}" echo -e "${CREATING}${BOLD}${RD}Creating an LXC of ${APP} using the above advanced settings${CL}"
# Log settings to file
log_section "CONTAINER SETTINGS (ADVANCED) - ${APP}"
log_msg "Application: ${APP}"
log_msg "PVE Version: ${PVEVERSION} (Kernel: ${KERNEL_VERSION})"
log_msg "Operating System: $var_os ($var_version)"
log_msg "Container Type: $([ "$CT_TYPE" == "1" ] && echo "Unprivileged" || echo "Privileged")"
log_msg "Container ID: $CT_ID"
log_msg "Hostname: $HN"
log_msg "Disk Size: ${DISK_SIZE} GB"
log_msg "CPU Cores: $CORE_COUNT"
log_msg "RAM Size: ${RAM_SIZE} MiB"
log_msg "Bridge: $BRG"
log_msg "IPv4: $NET"
log_msg "IPv6: $IPV6_METHOD"
log_msg "FUSE Support: ${ENABLE_FUSE:-no}"
log_msg "Nesting: $([ "${ENABLE_NESTING:-1}" == "1" ] && echo "Enabled" || echo "Disabled")"
log_msg "GPU Passthrough: ${ENABLE_GPU:-no}"
log_msg "Verbose Mode: $VERBOSE"
log_msg "Session ID: ${SESSION_ID}"
} }
# ============================================================================== # ==============================================================================
@@ -2871,6 +2884,7 @@ diagnostics_menu() {
# - Prints summary of default values (ID, OS, type, disk, RAM, CPU, etc.) # - Prints summary of default values (ID, OS, type, disk, RAM, CPU, etc.)
# - Uses icons and formatting for readability # - Uses icons and formatting for readability
# - Convert CT_TYPE to description # - Convert CT_TYPE to description
# - Also logs settings to log file for debugging
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
echo_default() { echo_default() {
CT_TYPE_DESC="Unprivileged" CT_TYPE_DESC="Unprivileged"
@@ -2892,6 +2906,20 @@ echo_default() {
fi fi
echo -e "${CREATING}${BOLD}${BL}Creating a ${APP} LXC using the above default settings${CL}" echo -e "${CREATING}${BOLD}${BL}Creating a ${APP} LXC using the above default settings${CL}"
echo -e " " echo -e " "
# Log settings to file
log_section "CONTAINER SETTINGS - ${APP}"
log_msg "Application: ${APP}"
log_msg "PVE Version: ${PVEVERSION} (Kernel: ${KERNEL_VERSION})"
log_msg "Container ID: ${CT_ID}"
log_msg "Operating System: $var_os ($var_version)"
log_msg "Container Type: $CT_TYPE_DESC"
log_msg "Disk Size: ${DISK_SIZE} GB"
log_msg "CPU Cores: ${CORE_COUNT}"
log_msg "RAM Size: ${RAM_SIZE} MiB"
[[ -n "${var_gpu:-}" && "${var_gpu}" == "yes" ]] && log_msg "GPU Passthrough: Enabled"
[[ "$VERBOSE" == "yes" ]] && log_msg "Verbose Mode: Enabled"
log_msg "Session ID: ${SESSION_ID}"
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -4049,30 +4077,55 @@ EOF'
# Copy install log from container BEFORE API call so get_error_text() can read it # Copy install log from container BEFORE API call so get_error_text() can read it
local build_log_copied=false local build_log_copied=false
local install_log_copied=false local install_log_copied=false
local host_install_log="/tmp/install-lxc-${CTID}-${SESSION_ID}.log" local combined_log="/tmp/${NSAPP:-lxc}-${CTID}-${SESSION_ID}.log"
if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
# Copy BUILD_LOG (creation log) if it exists # Create combined log with header
{
echo "================================================================================"
echo "COMBINED INSTALLATION LOG - ${APP:-LXC}"
echo "Container ID: ${CTID}"
echo "Session ID: ${SESSION_ID}"
echo "Timestamp: $(date '+%Y-%m-%d %H:%M:%S')"
echo "================================================================================"
echo ""
} >"$combined_log"
# Append BUILD_LOG (host-side creation log) if it exists
if [[ -f "${BUILD_LOG}" ]]; then if [[ -f "${BUILD_LOG}" ]]; then
cp "${BUILD_LOG}" "/tmp/create-lxc-${CTID}-${SESSION_ID}.log" 2>/dev/null && build_log_copied=true {
echo "================================================================================"
echo "PHASE 1: CONTAINER CREATION (Host)"
echo "================================================================================"
cat "${BUILD_LOG}"
echo ""
} >>"$combined_log"
build_log_copied=true
fi fi
# Copy INSTALL_LOG from container to host # Copy and append INSTALL_LOG from container
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$host_install_log" 2>/dev/null; then local temp_install_log="/tmp/.install-temp-${SESSION_ID}.log"
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_install_log" 2>/dev/null; then
{
echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)"
echo "================================================================================"
cat "$temp_install_log"
echo ""
} >>"$combined_log"
rm -f "$temp_install_log"
install_log_copied=true install_log_copied=true
# Point INSTALL_LOG to host copy so get_error_text() finds it # Point INSTALL_LOG to combined log so get_error_text() finds it
INSTALL_LOG="$host_install_log" INSTALL_LOG="$combined_log"
fi fi
fi fi
# Report failure to telemetry API (now with log available on host) # Report failure to telemetry API (now with log available on host)
post_update_to_api "failed" "$install_exit_code" post_update_to_api "failed" "$install_exit_code"
# Show available logs # Show combined log location
if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
echo "" msg_custom "📋" "${YW}" "Installation log: ${combined_log}"
[[ "$build_log_copied" == true ]] && echo -e "${GN}${CL} Container creation log: ${BL}/tmp/create-lxc-${CTID}-${SESSION_ID}.log${CL}"
[[ "$install_log_copied" == true ]] && echo -e "${GN}${CL} Installation log: ${BL}${host_install_log}${CL}"
fi fi
# Dev mode: Keep container or open breakpoint shell # Dev mode: Keep container or open breakpoint shell
@@ -4095,19 +4148,21 @@ EOF'
exit $install_exit_code exit $install_exit_code
fi fi
# Prompt user for cleanup with 60s timeout (plain echo - no msg_info to avoid spinner) # Prompt user for cleanup with 60s timeout
echo "" echo ""
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}" echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
if read -t 60 -r response; then if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
# Remove container # Remove container
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID}${CL}" echo ""
msg_info "Removing container ${CTID}"
pct stop "$CTID" &>/dev/null || true pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}" msg_ok "Container ${CTID} removed"
elif [[ "$response" =~ ^[Nn]$ ]]; then elif [[ "$response" =~ ^[Nn]$ ]]; then
echo -e "\n${TAB}${YW}Container ${CTID} kept for debugging${CL}" echo ""
msg_warn "Container ${CTID} kept for debugging"
# Dev mode: Setup MOTD/SSH for debugging access to broken container # Dev mode: Setup MOTD/SSH for debugging access to broken container
if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then
@@ -4123,13 +4178,17 @@ EOF'
fi fi
else else
# Timeout - auto-remove # Timeout - auto-remove
echo -e "\n${YW}No response - auto-removing container${CL}" echo ""
echo -e "${TAB}${HOLD}${YW}Removing container ${CTID}${CL}" msg_info "No response - removing container ${CTID}"
pct stop "$CTID" &>/dev/null || true pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}" msg_ok "Container ${CTID} removed"
fi fi
# Force one final status update attempt after cleanup
# This ensures status is updated even if the first attempt failed (e.g., HTTP 400)
post_update_to_api "failed" "$install_exit_code" "force"
exit $install_exit_code exit $install_exit_code
fi fi
} }
@@ -5133,6 +5192,61 @@ EOF
# SECTION 10: ERROR HANDLING & EXIT TRAPS # SECTION 10: ERROR HANDLING & EXIT TRAPS
# ============================================================================== # ==============================================================================
# ------------------------------------------------------------------------------
# ensure_log_on_host()
#
# - Ensures INSTALL_LOG points to a readable file on the host
# - If INSTALL_LOG points to a container path (e.g. /root/.install-*),
# tries to pull it from the container and create a combined log
# - This allows get_error_text() to find actual error output for telemetry
# ------------------------------------------------------------------------------
ensure_log_on_host() {
# Already readable on host? Nothing to do.
[[ -n "${INSTALL_LOG:-}" && -s "${INSTALL_LOG}" ]] && return 0
# Try pulling from container and creating combined log
if [[ -n "${CTID:-}" && -n "${SESSION_ID:-}" ]] && command -v pct &>/dev/null; then
local combined_log="/tmp/${NSAPP:-lxc}-${CTID}-${SESSION_ID}.log"
if [[ ! -s "$combined_log" ]]; then
# Create combined log
{
echo "================================================================================"
echo "COMBINED INSTALLATION LOG - ${APP:-LXC}"
echo "Container ID: ${CTID}"
echo "Session ID: ${SESSION_ID}"
echo "Timestamp: $(date '+%Y-%m-%d %H:%M:%S')"
echo "================================================================================"
echo ""
} >"$combined_log" 2>/dev/null || return 0
# Append BUILD_LOG if it exists
if [[ -f "${BUILD_LOG:-}" ]]; then
{
echo "================================================================================"
echo "PHASE 1: CONTAINER CREATION (Host)"
echo "================================================================================"
cat "${BUILD_LOG}"
echo ""
} >>"$combined_log"
fi
# Pull INSTALL_LOG from container
local temp_log="/tmp/.install-temp-${SESSION_ID}.log"
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_log" 2>/dev/null; then
{
echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)"
echo "================================================================================"
cat "$temp_log"
echo ""
} >>"$combined_log"
rm -f "$temp_log"
fi
fi
if [[ -s "$combined_log" ]]; then
INSTALL_LOG="$combined_log"
fi
fi
}
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# api_exit_script() # api_exit_script()
# #
@@ -5145,6 +5259,7 @@ EOF
api_exit_script() { api_exit_script() {
exit_code=$? exit_code=$?
if [ $exit_code -ne 0 ]; then if [ $exit_code -ne 0 ]; then
ensure_log_on_host
post_update_to_api "failed" "$exit_code" post_update_to_api "failed" "$exit_code"
fi fi
} }
@@ -5152,6 +5267,6 @@ api_exit_script() {
if command -v pveversion >/dev/null 2>&1; then if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT trap 'api_exit_script' EXIT
fi fi
trap 'post_update_to_api "failed" "$?"' ERR trap 'ensure_log_on_host; post_update_to_api "failed" "$?"' ERR
trap 'post_update_to_api "failed" "130"' SIGINT trap 'ensure_log_on_host; post_update_to_api "failed" "130"' SIGINT
trap 'post_update_to_api "failed" "143"' SIGTERM trap 'ensure_log_on_host; post_update_to_api "failed" "143"' SIGTERM


@@ -115,7 +115,7 @@ icons() {
BRIDGE="${TAB}🌉${TAB}${CL}" BRIDGE="${TAB}🌉${TAB}${CL}"
NETWORK="${TAB}📡${TAB}${CL}" NETWORK="${TAB}📡${TAB}${CL}"
GATEWAY="${TAB}🌐${TAB}${CL}" GATEWAY="${TAB}🌐${TAB}${CL}"
DISABLEIPV6="${TAB}🚫${TAB}${CL}" ICON_DISABLEIPV6="${TAB}🚫${TAB}${CL}"
DEFAULT="${TAB}⚙️${TAB}${CL}" DEFAULT="${TAB}⚙️${TAB}${CL}"
MACADDRESS="${TAB}🔗${TAB}${CL}" MACADDRESS="${TAB}🔗${TAB}${CL}"
VLANTAG="${TAB}🏷️${TAB}${CL}" VLANTAG="${TAB}🏷️${TAB}${CL}"
@@ -413,6 +413,69 @@ get_active_logfile() {
# Legacy compatibility: SILENT_LOGFILE points to active log # Legacy compatibility: SILENT_LOGFILE points to active log
SILENT_LOGFILE="$(get_active_logfile)" SILENT_LOGFILE="$(get_active_logfile)"
# ------------------------------------------------------------------------------
# strip_ansi()
#
# - Removes ANSI escape sequences from input text
# - Used to clean colored output for log files
# - Handles both piped input and arguments
# ------------------------------------------------------------------------------
strip_ansi() {
if [[ $# -gt 0 ]]; then
echo -e "$*" | sed 's/\x1b\[[0-9;]*m//g; s/\x1b\[[0-9;]*[a-zA-Z]//g'
else
sed 's/\x1b\[[0-9;]*m//g; s/\x1b\[[0-9;]*[a-zA-Z]//g'
fi
}
# ------------------------------------------------------------------------------
# log_msg()
#
# - Writes message to active log file without ANSI codes
# - Adds timestamp prefix for log correlation
# - Creates log file if it doesn't exist
# - Arguments: message text (can include ANSI codes, will be stripped)
# ------------------------------------------------------------------------------
log_msg() {
local msg="$*"
local logfile
logfile="$(get_active_logfile)"
[[ -z "$msg" ]] && return
[[ -z "$logfile" ]] && return
# Ensure log directory exists
mkdir -p "$(dirname "$logfile")" 2>/dev/null || true
# Strip ANSI codes and write with timestamp
local clean_msg
clean_msg=$(strip_ansi "$msg")
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $clean_msg" >>"$logfile"
}
# ------------------------------------------------------------------------------
# log_section()
#
# - Writes a section header to the log file
# - Used for separating different phases of installation
# - Arguments: section name
# ------------------------------------------------------------------------------
log_section() {
local section="$1"
local logfile
logfile="$(get_active_logfile)"
[[ -z "$logfile" ]] && return
mkdir -p "$(dirname "$logfile")" 2>/dev/null || true
{
echo ""
echo "================================================================================"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $section"
echo "================================================================================"
} >>"$logfile"
}
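These helpers are what echo_default and the msg_* wrappers call further below; a minimal sketch of direct use (values are illustrative):
# Hypothetical direct usage
log_section "CONTAINER SETTINGS - ${APP}"
log_msg "${GN}Disk Size: 8 GB${CL}"    # written with timestamp, ANSI codes stripped
tail -n 20 "$BUILD_LOG" | strip_ansi   # strip_ansi also works as a filter on stdin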
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# silent() # silent()
# #
@@ -459,15 +522,9 @@ silent() {
msg_custom "→" "${YWB}" "${cmd}" msg_custom "→" "${YWB}" "${cmd}"
if [[ -s "$logfile" ]]; then if [[ -s "$logfile" ]]; then
local log_lines=$(wc -l <"$logfile") echo -e "\n${TAB}--- Last 10 lines of log ---"
echo "--- Last 10 lines of silent log ---"
tail -n 10 "$logfile" tail -n 10 "$logfile"
echo "-----------------------------------" echo -e "${TAB}-----------------------------------\n"
# Show how to view full log if there are more lines
if [[ $log_lines -gt 10 ]]; then
msg_custom "📋" "${YW}" "View full log (${log_lines} lines): ${logfile}"
fi
fi fi
exit "$rc" exit "$rc"
@@ -488,7 +545,7 @@ spinner() {
local i=0 local i=0
while true; do while true; do
local index=$((i++ % ${#chars[@]})) local index=$((i++ % ${#chars[@]}))
printf "\r\033[2K%s %b" "${CS_YWB}${TAB}${chars[$index]}${TAB}${CS_CL}" "${CS_YWB}${msg}${CS_CL}" printf "\r\033[2K%s %b" "${CS_YWB}${chars[$index]}${CS_CL}" "${CS_YWB}${msg}${CS_CL}"
sleep 0.1 sleep 0.1
done done
} }
@@ -555,6 +612,9 @@ msg_info() {
[[ -n "${MSG_INFO_SHOWN["$msg"]+x}" ]] && return [[ -n "${MSG_INFO_SHOWN["$msg"]+x}" ]] && return
MSG_INFO_SHOWN["$msg"]=1 MSG_INFO_SHOWN["$msg"]=1
# Log to file
log_msg "[INFO] $msg"
stop_spinner stop_spinner
SPINNER_MSG="$msg" SPINNER_MSG="$msg"
@@ -598,7 +658,10 @@ msg_ok() {
stop_spinner stop_spinner
clear_line clear_line
echo -e "$CM ${GN}${msg}${CL}" echo -e "$CM ${GN}${msg}${CL}"
unset MSG_INFO_SHOWN["$msg"] log_msg "[OK] $msg"
local sanitized_msg
sanitized_msg=$(printf '%s' "$msg" | sed 's/\x1b\[[0-9;]*m//g; s/[^a-zA-Z0-9_]/_/g')
unset 'MSG_INFO_SHOWN['"$sanitized_msg"']' 2>/dev/null || true
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -613,6 +676,7 @@ msg_error() {
stop_spinner stop_spinner
local msg="$1" local msg="$1"
echo -e "${BFR:-}${CROSS:-✖️} ${RD}${msg}${CL}" >&2 echo -e "${BFR:-}${CROSS:-✖️} ${RD}${msg}${CL}" >&2
log_msg "[ERROR] $msg"
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -627,6 +691,7 @@ msg_warn() {
stop_spinner stop_spinner
local msg="$1" local msg="$1"
echo -e "${BFR:-}${INFO:-} ${YWB}${msg}${CL}" >&2 echo -e "${BFR:-}${INFO:-} ${YWB}${msg}${CL}" >&2
log_msg "[WARN] $msg"
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -644,6 +709,7 @@ msg_custom() {
[[ -z "$msg" ]] && return [[ -z "$msg" ]] && return
stop_spinner stop_spinner
echo -e "${BFR:-} ${symbol} ${color}${msg}${CL:-\e[0m}" echo -e "${BFR:-} ${symbol} ${color}${msg}${CL:-\e[0m}"
log_msg "$msg"
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -808,6 +874,562 @@ is_verbose_mode() {
[[ "$verbose" != "no" || ! -t 2 ]] [[ "$verbose" != "no" || ! -t 2 ]]
} }
# ------------------------------------------------------------------------------
# is_unattended()
#
# - Detects if script is running in unattended/non-interactive mode
# - Checks MODE variable first (primary method)
# - Falls back to legacy flags (PHS_SILENT, var_unattended)
# - Returns 0 (true) if unattended, 1 (false) otherwise
# - Used by prompt functions to auto-apply defaults
#
# Modes that are unattended:
# - default (1) : Use script defaults, no prompts
# - mydefaults (3) : Use user's default.vars, no prompts
# - appdefaults (4) : Use app-specific defaults, no prompts
#
# Modes that are interactive:
# - advanced (2) : Full wizard with all options
#
# Note: Even in advanced mode, install scripts run unattended because
# all values are already collected during the wizard phase.
# ------------------------------------------------------------------------------
is_unattended() {
# Primary: Check MODE variable (case-insensitive)
local mode="${MODE:-${mode:-}}"
mode="${mode,,}" # lowercase
case "$mode" in
default | 1)
return 0
;;
mydefaults | userdefaults | 3)
return 0
;;
appdefaults | 4)
return 0
;;
advanced | 2)
# Advanced mode is interactive ONLY during wizard
# Inside container (install scripts), it should be unattended
# Check if we're inside a container (no pveversion command)
if ! command -v pveversion &>/dev/null; then
# We're inside the container - all values already collected
return 0
fi
# On host during wizard - interactive
return 1
;;
esac
# Legacy fallbacks for compatibility
[[ "${PHS_SILENT:-0}" == "1" ]] && return 0
[[ "${var_unattended:-}" =~ ^(yes|true|1)$ ]] && return 0
[[ "${UNATTENDED:-}" =~ ^(yes|true|1)$ ]] && return 0
# No TTY available = unattended
[[ ! -t 0 ]] && return 0
# Default: interactive
return 1
}
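A hedged sketch of how an install script might branch on it (variable name and the generated password are illustrative):
# Hypothetical guard in an install script
if is_unattended; then
  ADMIN_PASS=$(openssl rand -base64 18)                       # no prompt, generate a value
else
  ADMIN_PASS=$(prompt_input "Enter admin password:" "$(openssl rand -base64 18)" 60)
fi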
# ------------------------------------------------------------------------------
# show_missing_values_warning()
#
# - Displays a summary of required values that used fallback defaults
# - Should be called at the end of install scripts
# - Only shows warning if MISSING_REQUIRED_VALUES array has entries
# - Provides clear guidance on what needs manual configuration
#
# Global:
# MISSING_REQUIRED_VALUES - Array of variable names that need configuration
#
# Example:
# # At end of install script:
# show_missing_values_warning
# ------------------------------------------------------------------------------
show_missing_values_warning() {
if [[ ${#MISSING_REQUIRED_VALUES[@]} -gt 0 ]]; then
echo ""
echo -e "${YW}╔════════════════════════════════════════════════════════════╗${CL}"
echo -e "${YW}║ ⚠️ MANUAL CONFIGURATION REQUIRED ║${CL}"
echo -e "${YW}╠════════════════════════════════════════════════════════════╣${CL}"
echo -e "${YW}║ The following values were not provided and need to be ║${CL}"
echo -e "${YW}║ configured manually for the service to work properly: ║${CL}"
echo -e "${YW}╟────────────────────────────────────────────────────────────╢${CL}"
for val in "${MISSING_REQUIRED_VALUES[@]}"; do
printf "${YW}${CL} • %-56s ${YW}${CL}\n" "$val"
done
echo -e "${YW}╟────────────────────────────────────────────────────────────╢${CL}"
echo -e "${YW}║ Check the service configuration files or environment ║${CL}"
echo -e "${YW}║ variables and update the placeholder values. ║${CL}"
echo -e "${YW}╚════════════════════════════════════════════════════════════╝${CL}"
echo ""
return 1
fi
return 0
}
# ------------------------------------------------------------------------------
# prompt_confirm()
#
# - Prompts user for yes/no confirmation with timeout and unattended support
# - In unattended mode: immediately returns default value
# - In interactive mode: waits for user input with configurable timeout
# - After timeout: auto-applies default value
#
# Arguments:
# $1 - Prompt message (required)
# $2 - Default value: "y" or "n" (optional, default: "n")
# $3 - Timeout in seconds (optional, default: 60)
#
# Returns:
# 0 - User confirmed (yes)
# 1 - User declined (no) or timeout with default "n"
#
# Example:
# if prompt_confirm "Proceed with installation?" "y" 30; then
# echo "Installing..."
# fi
#
# # Unattended: prompt_confirm will use default without waiting
# var_unattended=yes
# prompt_confirm "Delete files?" "n" && echo "Deleting" || echo "Skipped"
# ------------------------------------------------------------------------------
prompt_confirm() {
local message="${1:-Confirm?}"
local default="${2:-n}"
local timeout="${3:-60}"
local response
# Normalize default to lowercase
default="${default,,}"
[[ "$default" != "y" ]] && default="n"
# Build prompt hint
local hint
if [[ "$default" == "y" ]]; then
hint="[Y/n]"
else
hint="[y/N]"
fi
# Unattended mode: apply default immediately
if is_unattended; then
if [[ "$default" == "y" ]]; then
return 0
else
return 1
fi
fi
# Check if running in a TTY
if [[ ! -t 0 ]]; then
# Not a TTY, use default
if [[ "$default" == "y" ]]; then
return 0
else
return 1
fi
fi
# Interactive prompt with timeout
echo -en "${YW}${message} ${hint} (auto-${default} in ${timeout}s): ${CL}"
if read -t "$timeout" -r response; then
# User provided input
response="${response,,}" # lowercase
case "$response" in
y | yes)
return 0
;;
n | no)
return 1
;;
"")
# Empty response, use default
if [[ "$default" == "y" ]]; then
return 0
else
return 1
fi
;;
*)
# Invalid input, use default
echo -e "${YW}Invalid response, using default: ${default}${CL}"
if [[ "$default" == "y" ]]; then
return 0
else
return 1
fi
;;
esac
else
# Timeout occurred
echo "" # Newline after timeout
echo -e "${YW}Timeout - auto-selecting: ${default}${CL}"
if [[ "$default" == "y" ]]; then
return 0
else
return 1
fi
fi
}
# ------------------------------------------------------------------------------
# prompt_input()
#
# - Prompts user for text input with timeout and unattended support
# - In unattended mode: immediately returns default value
# - In interactive mode: waits for user input with configurable timeout
# - After timeout: auto-applies default value
#
# Arguments:
# $1 - Prompt message (required)
# $2 - Default value (optional, default: "")
# $3 - Timeout in seconds (optional, default: 60)
#
# Output:
# Prints the user input or default value to stdout
#
# Example:
# username=$(prompt_input "Enter username:" "admin" 30)
# echo "Using username: $username"
#
# # With validation
# while true; do
# port=$(prompt_input "Enter port:" "8080" 30)
# [[ "$port" =~ ^[0-9]+$ ]] && break
# echo "Invalid port number"
# done
# ------------------------------------------------------------------------------
prompt_input() {
local message="${1:-Enter value:}"
local default="${2:-}"
local timeout="${3:-60}"
local response
# Build display default hint
local hint=""
[[ -n "$default" ]] && hint=" (default: ${default})"
# Unattended mode: return default immediately
if is_unattended; then
echo "$default"
return 0
fi
# Check if running in a TTY
if [[ ! -t 0 ]]; then
# Not a TTY, use default
echo "$default"
return 0
fi
# Interactive prompt with timeout
echo -en "${YW}${message}${hint} (auto-default in ${timeout}s): ${CL}" >&2
if read -t "$timeout" -r response; then
# User provided input (or pressed Enter for empty)
if [[ -n "$response" ]]; then
echo "$response"
else
echo "$default"
fi
else
# Timeout occurred
echo "" >&2 # Newline after timeout
echo -e "${YW}Timeout - using default: ${default}${CL}" >&2
echo "$default"
fi
}
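# ------------------------------------------------------------------------------
# Usage sketch (illustrative; paths are placeholders): pair prompt_input with a
# sensible default so unattended runs still produce a working configuration.
#
#   data_dir=$(prompt_input "Data directory:" "/opt/myapp/data" 30)
#   mkdir -p "$data_dir"
#   sed -i "s|^DATA_DIR=.*|DATA_DIR=${data_dir}|" /opt/myapp/.env
# ------------------------------------------------------------------------------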
# ------------------------------------------------------------------------------
# prompt_input_required()
#
# - Prompts user for REQUIRED text input with fallback support
# - In unattended mode: Uses fallback value if no env var set (with warning)
# - In interactive mode: loops until user provides non-empty input
# - Tracks missing required values for end-of-script summary
#
# Arguments:
# $1 - Prompt message (required)
# $2 - Fallback/example value for unattended mode (optional)
# $3 - Timeout in seconds (optional, default: 120)
# $4 - Environment variable name hint for error messages (optional)
#
# Output:
# Prints the user input or fallback value to stdout
#
# Returns:
# 0 - Always; on timeout or repeated empty input the fallback is used and
# the field is recorded in MISSING_REQUIRED_VALUES
#
# Global:
# MISSING_REQUIRED_VALUES - Array tracking fields that used fallbacks
#
# Example:
# # With fallback - script continues even in unattended mode
# token=$(prompt_input_required "Enter API Token:" "YOUR_TOKEN_HERE" 60 "var_api_token")
#
# # Check at end of script if any values need manual configuration
# if [[ ${#MISSING_REQUIRED_VALUES[@]} -gt 0 ]]; then
# msg_warn "Please configure: ${MISSING_REQUIRED_VALUES[*]}"
# fi
# ------------------------------------------------------------------------------
# Global array to track missing required values
declare -g -a MISSING_REQUIRED_VALUES=()
prompt_input_required() {
local message="${1:-Enter required value:}"
local fallback="${2:-CHANGE_ME}"
local timeout="${3:-120}"
local env_var_hint="${4:-}"
local response=""
# Check if value is already set via environment variable (if hint provided)
if [[ -n "$env_var_hint" ]]; then
local env_value="${!env_var_hint:-}"
if [[ -n "$env_value" ]]; then
echo "$env_value"
return 0
fi
fi
# Unattended mode: use fallback with warning
if is_unattended; then
if [[ -n "$env_var_hint" ]]; then
echo -e "${YW}⚠ Required value '${env_var_hint}' not set - using fallback: ${fallback}${CL}" >&2
MISSING_REQUIRED_VALUES+=("$env_var_hint")
else
echo -e "${YW}⚠ Required value not provided - using fallback: ${fallback}${CL}" >&2
MISSING_REQUIRED_VALUES+=("(unnamed)")
fi
echo "$fallback"
return 0
fi
# Check if running in a TTY
if [[ ! -t 0 ]]; then
echo -e "${YW}⚠ Not interactive - using fallback: ${fallback}${CL}" >&2
MISSING_REQUIRED_VALUES+=("${env_var_hint:-unnamed}")
echo "$fallback"
return 0
fi
# Interactive prompt - loop until non-empty input or use fallback on timeout
local attempts=0
while [[ -z "$response" ]]; do
attempts=$((attempts + 1))
if [[ $attempts -gt 3 ]]; then
echo -e "${YW}Too many empty inputs - using fallback: ${fallback}${CL}" >&2
MISSING_REQUIRED_VALUES+=("${env_var_hint:-manual_input}")
echo "$fallback"
return 0
fi
echo -en "${YW}${message} (required, timeout ${timeout}s): ${CL}" >&2
if read -t "$timeout" -r response; then
if [[ -z "$response" ]]; then
echo -e "${YW}This field is required. Please enter a value. (attempt ${attempts}/3)${CL}" >&2
fi
else
# Timeout occurred - use fallback
echo "" >&2
echo -e "${YW}Timeout - using fallback value: ${fallback}${CL}" >&2
MISSING_REQUIRED_VALUES+=("${env_var_hint:-timeout}")
echo "$fallback"
return 0
fi
done
echo "$response"
}
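# ------------------------------------------------------------------------------
# Usage sketch (illustrative; variable name, URL and script name are
# placeholders): when the environment variable named in $4 is already set,
# prompt_input_required returns it without prompting, so unattended installs
# can pre-seed required values.
#
#   # Pre-seed before an unattended run:
#   #   var_unattended=yes var_webhook_url="https://example.com/hook" bash install.sh
#   webhook=$(prompt_input_required "Webhook URL:" "https://example.com/hook" 60 "var_webhook_url")
# ------------------------------------------------------------------------------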
# ------------------------------------------------------------------------------
# prompt_select()
#
# - Prompts user to select from a list of options with timeout support
# - In unattended mode: immediately returns default selection
# - In interactive mode: displays numbered menu and waits for choice
# - After timeout: auto-applies default selection
#
# Arguments:
# $1 - Prompt message (required)
# $2 - Default option number, 1-based (optional, default: 1)
# $3 - Timeout in seconds (optional, default: 60)
# $4+ - Options to display (required, at least 2)
#
# Output:
# Prints the selected option value to stdout
#
# Returns:
# 0 - Success
# 1 - No options provided or invalid state
#
# Example:
# choice=$(prompt_select "Select database:" 1 30 "PostgreSQL" "MySQL" "SQLite")
# echo "Selected: $choice"
#
# # With array
# options=("Option A" "Option B" "Option C")
# selected=$(prompt_select "Choose:" 2 60 "${options[@]}")
# ------------------------------------------------------------------------------
prompt_select() {
local message="${1:-Select option:}"
local default="${2:-1}"
local timeout="${3:-60}"
shift 3
local options=("$@")
local num_options=${#options[@]}
# Validate options
if [[ $num_options -eq 0 ]]; then
echo "" >&2
return 1
fi
# Validate default
if [[ ! "$default" =~ ^[0-9]+$ ]] || [[ "$default" -lt 1 ]] || [[ "$default" -gt "$num_options" ]]; then
default=1
fi
# Unattended mode: return default immediately
if is_unattended; then
echo "${options[$((default - 1))]}"
return 0
fi
# Check if running in a TTY
if [[ ! -t 0 ]]; then
echo "${options[$((default - 1))]}"
return 0
fi
# Display menu
echo -e "${YW}${message}${CL}" >&2
local i
for i in "${!options[@]}"; do
local num=$((i + 1))
if [[ $num -eq $default ]]; then
echo -e " ${GN}${num})${CL} ${options[$i]} ${YW}(default)${CL}" >&2
else
echo -e " ${GN}${num})${CL} ${options[$i]}" >&2
fi
done
# Interactive prompt with timeout
echo -en "${YW}Select [1-${num_options}] (auto-select ${default} in ${timeout}s): ${CL}" >&2
local response
if read -t "$timeout" -r response; then
if [[ -z "$response" ]]; then
# Empty response, use default
echo "${options[$((default - 1))]}"
elif [[ "$response" =~ ^[0-9]+$ ]] && [[ "$response" -ge 1 ]] && [[ "$response" -le "$num_options" ]]; then
# Valid selection
echo "${options[$((response - 1))]}"
else
# Invalid input, use default
echo -e "${YW}Invalid selection, using default: ${options[$((default - 1))]}${CL}" >&2
echo "${options[$((default - 1))]}"
fi
else
# Timeout occurred
echo "" >&2 # Newline after timeout
echo -e "${YW}Timeout - auto-selecting: ${options[$((default - 1))]}${CL}" >&2
echo "${options[$((default - 1))]}"
fi
}
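# ------------------------------------------------------------------------------
# Usage sketch (illustrative; option names are placeholders): prompt_select
# prints the chosen label, so map it back to an internal value with a case
# statement.
#
#   db_choice=$(prompt_select "Select database backend:" 1 30 "PostgreSQL" "SQLite")
#   case "$db_choice" in
#     PostgreSQL) DB_TYPE="postgres" ;;
#     SQLite) DB_TYPE="sqlite" ;;
#   esac
# ------------------------------------------------------------------------------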
# ------------------------------------------------------------------------------
# prompt_password()
#
# - Prompts user for password input with hidden characters
# - In unattended mode: returns default or generates random password
# - Supports auto-generation of secure passwords
# - After timeout: generates random password if allowed
#
# Arguments:
# $1 - Prompt message (required)
# $2 - Default value or "generate" for auto-generation (optional)
# $3 - Timeout in seconds (optional, default: 60)
# $4 - Minimum length for validation (optional, default: 0 = no minimum)
#
# Output:
# Prints the password to stdout
#
# Example:
# password=$(prompt_password "Enter password:" "generate" 30 8)
# echo "Password set"
#
# # Require user input (no default)
# db_pass=$(prompt_password "Database password:" "" 60 12)
# ------------------------------------------------------------------------------
prompt_password() {
local message="${1:-Enter password:}"
local default="${2:-}"
local timeout="${3:-60}"
local min_length="${4:-0}"
local response
# Generate random password if requested
local generated=""
if [[ "$default" == "generate" ]]; then
generated=$(openssl rand -base64 16 2>/dev/null | tr -dc 'a-zA-Z0-9' | head -c 16)
[[ -z "$generated" ]] && generated=$(head /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 16)
default="$generated"
fi
# Unattended mode: return default immediately
if is_unattended; then
echo "$default"
return 0
fi
# Check if running in a TTY
if [[ ! -t 0 ]]; then
echo "$default"
return 0
fi
# Build hint
local hint=""
if [[ -n "$generated" ]]; then
hint=" (Enter for auto-generated)"
elif [[ -n "$default" ]]; then
hint=" (Enter for default)"
fi
[[ "$min_length" -gt 0 ]] && hint="${hint} [min ${min_length} chars]"
# Interactive prompt with timeout (silent input)
echo -en "${YW}${message}${hint} (timeout ${timeout}s): ${CL}" >&2
if read -t "$timeout" -rs response; then
echo "" >&2 # Newline after hidden input
if [[ -n "$response" ]]; then
# Validate minimum length
if [[ "$min_length" -gt 0 ]] && [[ ${#response} -lt "$min_length" ]]; then
echo -e "${YW}Password too short (min ${min_length}), using default${CL}" >&2
echo "$default"
else
echo "$response"
fi
else
echo "$default"
fi
else
# Timeout occurred
echo "" >&2 # Newline after timeout
echo -e "${YW}Timeout - using generated password${CL}" >&2
echo "$default"
fi
}
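# ------------------------------------------------------------------------------
# Usage sketch (illustrative; the credentials file path is a placeholder): with
# "generate" as the default, unattended runs receive a random password; persist
# it so the user can retrieve it later.
#
#   admin_pass=$(prompt_password "Admin password:" "generate" 60 12)
#   {
#     echo "Application Credentials"
#     echo "Admin Password: ${admin_pass}"
#   } >>~/myapp.creds
# ------------------------------------------------------------------------------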
# ==============================================================================
# SECTION 6: CLEANUP & MAINTENANCE
# ==============================================================================
@@ -896,15 +1518,13 @@ check_or_create_swap() {
msg_error "No active swap detected"
-read -p "Do you want to create a swap file? [y/N]: " create_swap
-create_swap="${create_swap,,}" # to lowercase
-if [[ "$create_swap" != "y" && "$create_swap" != "yes" ]]; then
+if ! prompt_confirm "Do you want to create a swap file?" "n" 60; then
msg_info "Skipping swap file creation"
return 1
fi
-read -p "Enter swap size in MB (e.g., 2048 for 2GB): " swap_size_mb
+local swap_size_mb
+swap_size_mb=$(prompt_input "Enter swap size in MB (e.g., 2048 for 2GB):" "2048" 60)
if ! [[ "$swap_size_mb" =~ ^[0-9]+$ ]]; then
msg_error "Invalid size input. Aborting."
return 1


@@ -175,9 +175,9 @@ error_handler() {
fi
if [[ -n "$active_log" && -s "$active_log" ]]; then
-echo "--- Last 20 lines of silent log ---"
+echo -e "\n${TAB}--- Last 20 lines of log ---"
tail -n 20 "$active_log"
-echo "-----------------------------------"
+echo -e "${TAB}-----------------------------------\n"
# Detect context: Container (INSTALL_LOG set + /root exists) vs Host (BUILD_LOG)
if [[ -n "${INSTALL_LOG:-}" && -d /root ]]; then
@@ -204,23 +204,56 @@ error_handler() {
fi
echo ""
-echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
+if declare -f msg_custom >/dev/null 2>&1; then
+echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
+else
+echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
+fi
if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
-echo -e "\n${YW}Removing container ${CTID}${CL}"
+echo ""
+if declare -f msg_info >/dev/null 2>&1; then
+msg_info "Removing container ${CTID}"
+else
+echo -e "${YW}Removing container ${CTID}${CL}"
+fi
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
-echo -e "${GN}${CL} Container ${CTID} removed"
+if declare -f msg_ok >/dev/null 2>&1; then
+msg_ok "Container ${CTID} removed"
+else
+echo -e "${GN}${CL} Container ${CTID} removed"
+fi
elif [[ "$response" =~ ^[Nn]$ ]]; then
-echo -e "\n${YW}Container ${CTID} kept for debugging${CL}"
+echo ""
+if declare -f msg_warn >/dev/null 2>&1; then
+msg_warn "Container ${CTID} kept for debugging"
+else
+echo -e "${YW}Container ${CTID} kept for debugging${CL}"
+fi
fi
else
# Timeout - auto-remove
-echo -e "\n${YW}No response - auto-removing container${CL}"
+echo ""
+if declare -f msg_info >/dev/null 2>&1; then
+msg_info "No response - removing container ${CTID}"
+else
+echo -e "${YW}No response - removing container ${CTID}${CL}"
+fi
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
-echo -e "${GN}${CL} Container ${CTID} removed"
+if declare -f msg_ok >/dev/null 2>&1; then
+msg_ok "Container ${CTID} removed"
+else
+echo -e "${GN}${CL} Container ${CTID} removed"
+fi
+fi
+# Force one final status update attempt after cleanup
+# This ensures status is updated even if the first attempt failed (e.g., HTTP 400)
+if declare -f post_update_to_api &>/dev/null; then
+post_update_to_api "failed" "$exit_code" "force"
fi
fi
fi
@@ -248,6 +281,10 @@ on_exit() {
# post_to_api was called ("installing" sent) but post_update_to_api was never called
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
+# Ensure log is accessible on host before reporting
+if declare -f ensure_log_on_host >/dev/null 2>&1; then
+ensure_log_on_host
+fi
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
@@ -267,6 +304,10 @@ on_exit() {
# - Exits with code 130 (128 + SIGINT=2)
# ------------------------------------------------------------------------------
on_interrupt() {
+# Ensure log is accessible on host before reporting
+if declare -f ensure_log_on_host >/dev/null 2>&1; then
+ensure_log_on_host
+fi
# Report interruption to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "130"
@@ -288,6 +329,10 @@ on_interrupt() {
# - Triggered by external process termination
# ------------------------------------------------------------------------------
on_terminate() {
+# Ensure log is accessible on host before reporting
+if declare -f ensure_log_on_host >/dev/null 2>&1; then
+ensure_log_on_host
+fi
# Report termination to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "143"


@@ -1913,7 +1913,7 @@ function fetch_and_deploy_codeberg_release() {
local app="$1"
local repo="$2"
local mode="${3:-tarball}" # tarball | binary | prebuild | singlefile | tag
-local version="${4:-latest}"
+local version="${var_appversion:-${4:-latest}}"
local target="${5:-/opt/$app}"
local asset_pattern="${6:-}"
@@ -2443,7 +2443,7 @@ function fetch_and_deploy_gh_release() {
local app="$1"
local repo="$2"
local mode="${3:-tarball}" # tarball | binary | prebuild | singlefile
-local version="${4:-latest}"
+local version="${var_appversion:-${4:-latest}}"
local target="${5:-/opt/$app}"
local asset_pattern="${6:-}"


@@ -207,15 +207,9 @@ silent() {
msg_custom "→" "${YWB}" "${cmd}"
if [[ -s "$logfile" ]]; then
-local log_lines=$(wc -l <"$logfile")
-echo "--- Last 10 lines of log ---"
+echo -e "\n${TAB}--- Last 10 lines of log ---"
tail -n 10 "$logfile"
-echo "----------------------------"
+echo -e "${TAB}----------------------------\n"
-# Show how to view full log if there are more lines
-if [[ $log_lines -gt 10 ]]; then
-msg_custom "📋" "${YW}" "View full log (${log_lines} lines): ${logfile}"
-fi
fi
exit "$rc"


@@ -110,6 +110,11 @@ for container in $(pct list | awk '{if(NR>1) print $1}'); do
container_hostname=$(pct exec "$container" hostname)
containers_needing_reboot+=("$container ($container_hostname)")
fi
+# check if patchmon agent is present in container and run a report if found
+if pct exec "$container" -- [ -e "/usr/local/bin/patchmon-agent" ]; then
+echo -e "${BL}[Info]${GN} patchmon-agent found in ${BL} $container ${CL}, triggering report. \n"
+pct exec "$container" -- "/usr/local/bin/patchmon-agent" "report"
+fi
fi
done
wait