Compare commits


34 Commits

Author SHA1 Message Date
github-actions[bot]
62a66433f2 Update CHANGELOG.md 2026-02-17 15:36:39 +00:00
CanbiZ (MickLesk)
3ce3c6f613 tools/pve: add data analytics / formatting / linting (#12034)
* core: add progress; fix exit status

Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.

* Safer tools.func load and improved error handling

Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.

* tools: add data init and auto-reporting to tools and pve section

Introduce telemetry helpers in misc/api.func: _telemetry_report_exit (reports success/failure via post_tool_to_api/post_addon_to_api) and init_tool_telemetry (reads DIAGNOSTICS, starts install timer and installs an EXIT trap to auto-report). Integrate telemetry into many tools/addon and tools/pve scripts by sourcing the remote api.func and calling init_tool_telemetry (guarded with declare -f). Also apply a minor arithmetic formatting tweak in misc/build.func for RECOVERY_ATTEMPT.
2026-02-17 16:36:20 +01:00
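A minimal sketch of the guarded telemetry integration described in #12034 above; the sourced URL follows the repository layout, and the tool name/type are only examples.

#!/usr/bin/env bash
# Sketch: pull in the remote api.func, but never let telemetry break the script.
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Guard with declare -f so the call is simply skipped if api.func failed to load.
if declare -f init_tool_telemetry >/dev/null 2>&1; then
  init_tool_telemetry "post-pve-install" "tool"   # example name/type; addon scripts pass "addon"
fi
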
community-scripts-pr-app[bot]
97652792be Update CHANGELOG.md (#12031)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 12:25:42 +00:00
CanbiZ (MickLesk)
f07f2cb04e core: error-handler improvements | better exit_code handling | better tools.func source check (#12019)
* core: add progress; fix exit status

Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.

* Safer tools.func load and improved error handling

Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.
2026-02-17 13:25:17 +01:00
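A simplified sketch of the ERR-trap change described in #12019 above; ensure_log_on_host and post_update_to_api are stubbed here as placeholders for the real build.func/api.func helpers.

#!/usr/bin/env bash
ensure_log_on_host() { :; }   # placeholder: the real helper copies the install log to the host
post_update_to_api() { :; }   # placeholder: the real helper posts status "$1" with exit code "$2"
on_error() {
  local _ERR_CODE=$?          # capture the failing command's exit code before anything else runs
  ensure_log_on_host || true  # may succeed and would otherwise reset $? to 0
  post_update_to_api "failed" "$_ERR_CODE"
}
trap on_error ERR
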
community-scripts-pr-app[bot]
5f73f9d5e6 chore: update github-versions.json (#12030)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 12:14:28 +00:00
community-scripts-pr-app[bot]
0183ae0fff Update CHANGELOG.md (#12029)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:16:34 +00:00
CanbiZ (MickLesk)
32d1937a74 Refactor: centralize systemd service creation (#12025)
Introduce create_service() to generate the immich-proxy systemd unit and run systemctl daemon-reload. Replace duplicated heredoc service blocks in install with a call to create_service, and invoke create_service during update before starting the service. Adjust unit WorkingDirectory to ${INSTALL_PATH}/app and ExecStart to run dist/index.js.
2026-02-17 12:16:09 +01:00
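A hedged sketch of the create_service() helper described in #12025 above; WorkingDirectory, ExecStart and the daemon-reload follow the commit text, while the remaining unit fields and the node binary path are assumptions.

create_service() {
  # INSTALL_PATH is expected to be set by the surrounding install/update script.
  cat <<EOF >/etc/systemd/system/immich-proxy.service
[Unit]
Description=Immich Public Proxy
After=network.target

[Service]
Type=simple
WorkingDirectory=${INSTALL_PATH}/app
ExecStart=/usr/bin/node dist/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  systemctl daemon-reload
}
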
community-scripts-pr-app[bot]
0a7bd20b06 Update CHANGELOG.md (#12028)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:15:14 +00:00
CanbiZ (MickLesk)
c9ecb1ccca core: smart recovery for failed installs | extend exit_codes (#11221)
* feat(build.func): smart error recovery menu for failed installations

Replace simple Y/n removal prompt with interactive recovery menu:

- Option 1: Remove container and exit (default, auto after 60s timeout)
- Option 2: Keep container for debugging
- Option 3: Retry installation with verbose mode enabled
- Option 4: Retry with 1.5x RAM and +1 CPU core (OOM errors only)

Improvements:
- Detect OOM errors (exit codes 137, 243) and offer resource increase
- Show human-readable error explanation using explain_exit_code()
- Recursive rebuild preserves ALL settings from advanced/app.vars/default.vars
- Settings preserved: Network (IP, Gateway, VLAN, MTU, Bridge), Features
  (Nesting, FUSE, TUN, GPU), Storage, SSH keys, Tags, Hostname, etc.
- Show rebuild summary before retry (old→new CTID, resources, network)
- New container ID generated automatically for rebuilds

This helps users recover from transient failures without re-running
the entire script manually.

* fix(api.func): fix duplicate exit codes and add missing error codes

Exit code fixes:
- Remove duplicate definitions for codes 243, 254 (Node.js vs DB)
- Reassign MySQL/MariaDB to 240-242, 244 (was 241-244)
- Reassign MongoDB to 250-253 (was 251-254)

New exit codes added (based on GitHub issues analysis):
- 6: curl couldn't resolve host (DNS failure)
- 7: curl failed to connect (network unreachable)
- 22: curl HTTP error (404, 429 rate limit, 500)
- 28: curl timeout (very common in download failures)
- 35: curl SSL error
- 102: APT lock held by another process
- 124: Command timeout
- 141: SIGPIPE (broken pipe)

Also update OOM detection to include exit code 134 (SIGABRT)
which is commonly seen in Node.js heap overflow issues.

Fixes based on analysis of ~500 GitHub issues.

* fix(exit-codes): sync error_handler.func and api.func with conflict-free code ranges

- Add curl error codes (6, 7, 22, 28, 35)
- Add APT lock code (102), timeout (124), signals (134, 141)
- Move Python codes: 210-212 → 160-162 (avoid Proxmox conflict)
- Move PostgreSQL codes: 231-234 → 170-173
- Move MySQL/MariaDB codes: 241-244 → 180-183
- Move MongoDB codes: 251-254 → 190-193
- Keep Node.js at 243-249, Proxmox at 200-231
- Both files now synchronized with identical mappings

* feat(exit-codes): add systemd and build error codes (150-154)

- 150: Systemd service failed to start
- 151: Systemd service unit not found
- 152: Permission denied (EACCES)
- 153: Build/compile failed (make/gcc/cmake)
- 154: Node.js native addon build failed (node-gyp)

Based on issue analysis: 57 service failures, 25 build failures, 22 node-gyp issues

* fix(build): restore smart recovery and add OOM/DNS retry paths

* feat(build): APT in-place repair, exit 1 subclassification, new exit codes

- Add APT/DPKG in-place recovery: detects exit 100/101/102/255 and exit 1
  with APT log patterns, offers to repair dpkg state and re-run install
  script without destroying the container
- Add exit 1 subclassification: analyzes combined log to identify root
  cause (APT, OOM, network, command-not-found) and routes to appropriate
  recovery option
- Add exit 10 hint: shows privileged mode / nesting suggestion
- Add exit 127 hint: extracts missing command name from logs
- Refactor recovery menu: use named option variables (APT_OPTION,
  OOM_OPTION, DNS_OPTION) instead of hardcoded option numbers, supports
  up to 6 dynamic options cleanly
- Map missing exit codes in api.func: curl 27/36/45/47/55, signals
  129 (SIGHUP) / 131 (SIGQUIT), npm 239

* feat(api+build): map 25 more exit codes, add SIGHUP trap, network/perm hints

api.func:
- Map 25+ new exit codes that were showing as 'Unknown' in telemetry:
  curl: 3, 16, 18, 24, 26, 32-34, 39, 44, 46, 48, 51, 52, 57, 59, 61,
  63, 79, 92, 95; signals: 125, 132, 144, 146
- Update code 8 description (FTP + apk untrusted key)
- Update header comment with full supported ranges

build.func:
- Add SIGHUP trap: reports 'failed/129' to API when terminal is closed,
  should significantly reduce the 2841 stuck 'installing' records
- Add exit 52 (empty reply) and 57 (poll error) to network issue
  detection for DNS override recovery option
- Add exit 125/126 hint: suggests privileged mode for permission errors

* fix: sync error_handler fallback, Alpine APK repair, retry limit

error_handler.func:
- Sync fallback explain_exit_code() with api.func: add 25+ codes that
  were missing (curl 16/18/24/26/27/32-34/36/39/44-48/51/52/55/57/59/
  61/63/79/92/95, signals 125/129/131/132/144/146, npm 239, code 3/8)
- Ensures consistent error descriptions even when api.func isn't loaded

build.func:
- Alpine APK repair: detect var_os=alpine and run 'apk fix && apk
  cache clean && apk update' instead of apt-get/dpkg commands
- Show 'Repair APK state' instead of 'APT/DPKG' in menu for Alpine
- Retry safety counter: OOM x2 retry limited to max 2 attempts
  (prevents infinite RAM doubling via RECOVERY_ATTEMPT env var)
- Show attempt count in rebuild summary

* fix(build): preserve exit code in ERR trap to prevent false exit_code=0

The ERR trap called ensure_log_on_host before post_update_to_api,
which reset $? to 0 (success). This caused ~15-20 records/day to be
reported as 'failed' with exit_code=0 instead of the actual error code.

Root cause chain:
1. Command fails with exit code N → ERR trap fires ($? = N)
2. ensure_log_on_host succeeds → $? becomes 0
3. post_update_to_api 'failed' '$?' → sends 'failed/0' (wrong!)
4. POST_UPDATE_DONE=true → EXIT trap skips the correct code

Fix: capture $? into _ERR_CODE before ensure_log_on_host runs.

* Implement telemetry settings and repo source detection

Add telemetry configuration and repository source detection function.
2026-02-17 12:14:46 +01:00
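A reduced sketch of the OOM branch of the recovery menu described in #11221 above; variable names are illustrative and the real build.func logic handles many more cases (APT repair, DNS retry, verbose retry, etc.).

is_oom_exit() {
  # OOM-style exits per the commit: 134 (SIGABRT), 137 (SIGKILL), 243 (Node.js heap)
  case "$1" in
    134 | 137 | 243) return 0 ;;
    *) return 1 ;;
  esac
}
offer_oom_retry() {
  local exit_code="$1"
  is_oom_exit "$exit_code" || return 0
  if (( ${RECOVERY_ATTEMPT:-0} >= 2 )); then   # retry safety counter from the commit text
    echo "Maximum OOM retries reached; keeping container for debugging."
    return 0
  fi
  RAM_SIZE=$((RAM_SIZE * 3 / 2))               # 1.5x RAM, as offered by the recovery menu
  CORE_COUNT=$((CORE_COUNT + 1))               # +1 CPU core
  export RECOVERY_ATTEMPT=$((${RECOVERY_ATTEMPT:-0} + 1))
  echo "Retrying with ${RAM_SIZE} MiB RAM and ${CORE_COUNT} cores (attempt ${RECOVERY_ATTEMPT})."
}
# Example: RAM_SIZE=2048 CORE_COUNT=2 offer_oom_retry 137
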
community-scripts-pr-app[bot]
d274a269b5 Update .app files (#12022)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 10:45:09 +01:00
community-scripts-pr-app[bot]
cbee9d64b5 Update CHANGELOG.md (#12024)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:45 +00:00
community-scripts-pr-app[bot]
ffcda217e3 Update CHANGELOG.md (#12023)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:25 +00:00
community-scripts-pr-app[bot]
438d5d6b94 Update date in json (#12021)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:19 +00:00
push-app-to-main[bot]
104366bc64 Databasus (#12018)
* Add databasus (ct)

* Update databasus.sh

* Update databasus-install.sh

* Fix backup and restore paths for Databasus config

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-17 10:40:58 +01:00
community-scripts-pr-app[bot]
9dab79f8ca Update CHANGELOG.md (#12017)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:08:35 +00:00
CanbiZ (MickLesk)
2dddeaf966 Call get_lxc_ip in start() before updates (#12015) 2026-02-17 09:08:09 +01:00
community-scripts-pr-app[bot]
fae06a3a58 Update CHANGELOG.md (#12016)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:07:22 +00:00
Tobias
137272c354 fix: pterodactyl-panel add symlink (#11997) 2026-02-17 09:06:59 +01:00
community-scripts-pr-app[bot]
52a9e23401 chore: update github-versions.json (#12013)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 06:22:15 +00:00
community-scripts-pr-app[bot]
c2333de180 Update CHANGELOG.md (#12007)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:47 +00:00
community-scripts-pr-app[bot]
ad8974894b chore: update github-versions.json (#12006)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:21 +00:00
community-scripts-pr-app[bot]
38af4be5ba Update CHANGELOG.md (#12005)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 21:38:03 +00:00
Chris
80ae1f34fa Opencloud: Pin version to 5.1.0 (#12004) 2026-02-16 22:37:35 +01:00
community-scripts-pr-app[bot]
06bc6e20d5 chore: update github-versions.json (#12001)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 18:13:41 +00:00
community-scripts-pr-app[bot]
4418e72856 Update CHANGELOG.md (#11999)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 16:14:29 +00:00
CanbiZ (MickLesk)
896714e06f core/vm's: ensure script state is sent on script exit (#11991)
* Ensure API update is sent on script exit

Add exit-time telemetry handling across scripts to avoid orphaned "installing" records. Introduce local exit_code capture in api_exit_script and cleanup handlers and, when POST_TO_API_DONE is true but POST_UPDATE_DONE is not, post a final status (marking failures on non-zero exit codes, or marking done/failed in VM cleanups based on exit code). Changes touch misc/build.func, misc/vm-core.func and various vm/*-vm.sh cleanup functions to reliably send post_update_to_api on normal or early exits.

* Update api.func

* fix(telemetry): add missing exit codes to explain_exit_code()

- Add curl error codes: 4, 5, 8, 23, 25, 30, 56, 78
- Add code 10: Docker/privileged mode required (used in ~15 scripts)
- Add code 75: Temporary failure (retry later)
- Add BSD sysexits.h codes: 64-77
- Sync error_handler.func fallback with canonical api.func
2026-02-16 17:14:00 +01:00
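A hedged sketch of the exit-time handler described in #11991 above; post_update_to_api is stubbed and the flag names follow the commit text.

post_update_to_api() { :; }           # placeholder for the real api.func helper
api_exit_script() {
  local exit_code=$?                  # capture before any other command can overwrite it
  if [[ "${POST_TO_API_DONE:-false}" == "true" && "${POST_UPDATE_DONE:-false}" != "true" ]]; then
    if (( exit_code == 0 )); then
      post_update_to_api "done" "0"
    else
      post_update_to_api "failed" "$exit_code"
    fi
    POST_UPDATE_DONE=true
  fi
}
trap api_exit_script EXIT
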
community-scripts-pr-app[bot]
96389a02cb chore: update github-versions.json (#11996)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 12:14:25 +00:00
community-scripts-pr-app[bot]
a4e6286260 Update .app files (#11993)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-16 12:49:30 +01:00
community-scripts-pr-app[bot]
a6617cc6a1 Update CHANGELOG.md (#11995)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 11:49:19 +00:00
community-scripts-pr-app[bot]
f1377e6cb0 Update CHANGELOG.md (#11994)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 11:49:03 +00:00
community-scripts-pr-app[bot]
56cff01240 Update date in json (#11992)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-16 11:48:56 +00:00
push-app-to-main[bot]
26ba17c8c3 RomM (#11987)
* Add romm (ct)

* Update romm.sh

* Update romm-install.sh

* Revise author line in romm.sh

Updated author attribution format in romm.sh

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
2026-02-16 12:48:37 +01:00
community-scripts-pr-app[bot]
2bd4b063d9 Update CHANGELOG.md (#11990)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 10:47:03 +00:00
Slaviša Arežina
40bd7dc366 Fix sed command for DB_FILE configuration (#11988) 2026-02-16 11:46:37 +01:00
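To illustrate the corrected sed expressions from #11988 in isolation (the sample input line and the DB_LOCATION value are hypothetical; the real invocation in tududi-install.sh appears in the diff below):

DB_LOCATION=/opt/tududi/db
printf '# DB_FILE=/path/to/db.sqlite3\n' |
  sed -e '/^# DB_FILE=/s/^# //' \
      -e "s|^DB_FILE=.*|DB_FILE=${DB_LOCATION}/production.sqlite3|"
# prints: DB_FILE=/opt/tududi/db/production.sqlite3
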
84 changed files with 2125 additions and 447 deletions

View File

@@ -404,22 +404,60 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-16
## 2026-02-17
### 🆕 New Scripts
- LinkDing ([#11976](https://github.com/community-scripts/ProxmoxVE/pull/11976))
- Databasus ([#12018](https://github.com/community-scripts/ProxmoxVE/pull/12018))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- fix: pterodactyl-panel add symlink [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11997](https://github.com/community-scripts/ProxmoxVE/pull/11997))
### 💾 Core
- #### 🐞 Bug Fixes
- core: call get_lxc_ip in start() before updates [@MickLesk](https://github.com/MickLesk) ([#12015](https://github.com/community-scripts/ProxmoxVE/pull/12015))
- #### ✨ New Features
- tools/pve: add data analytics / formatting / linting [@MickLesk](https://github.com/MickLesk) ([#12034](https://github.com/community-scripts/ProxmoxVE/pull/12034))
- core: smart recovery for failed installs | extend exit_codes [@MickLesk](https://github.com/MickLesk) ([#11221](https://github.com/community-scripts/ProxmoxVE/pull/11221))
- #### 🔧 Refactor
- core: error-handler improvements | better exit_code handling | better tools.func source check [@MickLesk](https://github.com/MickLesk) ([#12019](https://github.com/community-scripts/ProxmoxVE/pull/12019))
### 🧰 Tools
- #### 🔧 Refactor
- Immich Public Proxy: centralize and fix systemd service creation [@MickLesk](https://github.com/MickLesk) ([#12025](https://github.com/community-scripts/ProxmoxVE/pull/12025))
## 2026-02-16
### 🆕 New Scripts
- RomM ([#11987](https://github.com/community-scripts/ProxmoxVE/pull/11987))
- LinkDing ([#11976](https://github.com/community-scripts/ProxmoxVE/pull/11976))
### 🚀 Updated Scripts
- Opencloud: Pin version to 5.1.0 [@vhsdream](https://github.com/vhsdream) ([#12004](https://github.com/community-scripts/ProxmoxVE/pull/12004))
- #### 🐞 Bug Fixes
- Tududi: Fix sed command for DB_FILE configuration [@tremor021](https://github.com/tremor021) ([#11988](https://github.com/community-scripts/ProxmoxVE/pull/11988))
- slskd: fix exit position [@MickLesk](https://github.com/MickLesk) ([#11963](https://github.com/community-scripts/ProxmoxVE/pull/11963))
- cryptpad: restore config earlier and run onlyoffice upgrade [@MickLesk](https://github.com/MickLesk) ([#11964](https://github.com/community-scripts/ProxmoxVE/pull/11964))
- jellyseerr/overseerr: Migrate update script to Seerr; prompt rerun [@MickLesk](https://github.com/MickLesk) ([#11965](https://github.com/community-scripts/ProxmoxVE/pull/11965))
- #### 🔧 Refactor
- core/vm's: ensure script state is sent on script exit [@MickLesk](https://github.com/MickLesk) ([#11991](https://github.com/community-scripts/ProxmoxVE/pull/11991))
- Vaultwarden: export VW_VERSION as version number [@MickLesk](https://github.com/MickLesk) ([#11966](https://github.com/community-scripts/ProxmoxVE/pull/11966))
- Zabbix: Improve zabbix-agent service detection [@MickLesk](https://github.com/MickLesk) ([#11968](https://github.com/community-scripts/ProxmoxVE/pull/11968))
@@ -433,7 +471,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
- core: remove duplicate error handler from alpine-install.func [@MickLesk](https://github.com/MickLesk) ([#11971](https://github.com/community-scripts/ProxmoxVE/pull/11971))
### 📚 Documentation
### 📂 Github
- github: add "website" label if "json" changed [@MickLesk](https://github.com/MickLesk) ([#11975](https://github.com/community-scripts/ProxmoxVE/pull/11975))
@@ -441,12 +479,9 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
- #### 📝 Script Information
- Update Wishlist LXC webpage to include reverse proxy info [@summoningpixels](https://github.com/summoningpixels) ([#11973](https://github.com/community-scripts/ProxmoxVE/pull/11973))
- Update OpenCloud LXC webpage to include services ports [@summoningpixels](https://github.com/summoningpixels) ([#11969](https://github.com/community-scripts/ProxmoxVE/pull/11969))
### ❔ Uncategorized
- Update Wishlist LXC webpage to include reverse proxy info [@summoningpixels](https://github.com/summoningpixels) ([#11973](https://github.com/community-scripts/ProxmoxVE/pull/11973))
## 2026-02-15
### 🆕 New Scripts

78
ct/databasus.sh Normal file
View File

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
APP="Databasus"
var_tags="${var_tags:-backup;postgresql;database}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /opt/databasus/databasus ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "databasus" "databasus/databasus"; then
msg_info "Stopping Databasus"
$STD systemctl stop databasus
msg_ok "Stopped Databasus"
msg_info "Backing up Configuration"
cp /opt/databasus/.env /opt/databasus.env.bak
msg_ok "Backed up Configuration"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Updating Databasus"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod download
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations /opt/databasus/
chown -R postgres:postgres /opt/databasus
msg_ok "Updated Databasus"
msg_info "Restoring Configuration"
cp /opt/databasus.env.bak /opt/databasus/.env
rm -f /opt/databasus.env.bak
chown postgres:postgres /opt/databasus/.env
msg_ok "Restored Configuration"
msg_info "Starting Databasus"
$STD systemctl start databasus
msg_ok "Started Databasus"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

6
ct/headers/databasus Normal file
View File

@@ -0,0 +1,6 @@
____ __ __
/ __ \____ _/ /_____ _/ /_ ____ ________ _______
/ / / / __ `/ __/ __ `/ __ \/ __ `/ ___/ / / / ___/
/ /_/ / /_/ / /_/ /_/ / /_/ / /_/ (__ ) /_/ (__ )
/_____/\__,_/\__/\__,_/_.___/\__,_/____/\__,_/____/

6
ct/headers/romm Normal file
View File

@@ -0,0 +1,6 @@
____ __ ___
/ __ \____ ____ ___ / |/ /
/ /_/ / __ \/ __ `__ \/ /|_/ /
/ _, _/ /_/ / / / / / / / / /
/_/ |_|\____/_/ /_/ /_/_/ /_/

View File

@@ -29,7 +29,7 @@ function update_script() {
exit
fi
RELEASE="v5.0.2"
RELEASE="v5.1.0"
if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
msg_info "Stopping services"
systemctl stop opencloud opencloud-wopi

View File

@@ -71,6 +71,7 @@ EOF
$STD php artisan migrate --seed --force --no-interaction
chown -R www-data:www-data /opt/pterodactyl-panel/*
chmod -R 755 /opt/pterodactyl-panel/storage /opt/pterodactyl-panel/bootstrap/cache/
ln -s /opt/pterodactyl-panel /var/www/pterodactyl
rm -rf "/opt/pterodactyl-panel/panel.tar.gz"
echo "${RELEASE}" >/opt/${APP}_version.txt
msg_ok "Updated $APP to v${RELEASE}"

74
ct/romm.sh Normal file
View File

@@ -0,0 +1,74 @@
#!/usr/bin/env bash
source <(curl -s https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ) | DevelopmentCats | AlphaLawless
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://romm.app
APP="RomM"
var_tags="${var_tags:-emulation}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-20}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/romm ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "romm" "rommapp/romm"; then
msg_info "Stopping Services"
systemctl stop romm-backend romm-worker romm-scheduler romm-watcher
msg_ok "Stopped Services"
msg_info "Backing up configuration"
cp /opt/romm/.env /opt/romm/.env.backup
msg_ok "Backed up configuration"
fetch_and_deploy_gh_release "romm" "rommapp/romm" "tarball" "latest" "/opt/romm"
msg_info "Updating ROMM"
cp /opt/romm/.env.backup /opt/romm/.env
cd /opt/romm
$STD uv sync --all-extras
cd /opt/romm/backend
$STD uv run alembic upgrade head
cd /opt/romm/frontend
$STD npm install
$STD npm run build
# Merge static assets into dist folder
cp -rf /opt/romm/frontend/assets/* /opt/romm/frontend/dist/assets/
mkdir -p /opt/romm/frontend/dist/assets/romm
ln -sfn /var/lib/romm/resources /opt/romm/frontend/dist/assets/romm/resources
ln -sfn /var/lib/romm/assets /opt/romm/frontend/dist/assets/romm/assets
msg_ok "Updated ROMM"
msg_info "Starting Services"
systemctl start romm-backend romm-worker romm-scheduler romm-watcher
msg_ok "Started Services"
msg_ok "Updated successfully"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

View File

@@ -0,0 +1,44 @@
{
"name": "Databasus",
"slug": "databasus",
"categories": [
7
],
"date_created": "2026-02-17",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 80,
"documentation": "https://github.com/databasus/databasus",
"website": "https://github.com/databasus/databasus",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/databasus.webp",
"config_path": "/opt/databasus/.env",
"description": "Free, open source and self-hosted solution for automated PostgreSQL backups. With multiple storage options, notifications, scheduling, and a beautiful web interface for managing database backups across multiple PostgreSQL instances.",
"install_methods": [
{
"type": "default",
"script": "ct/databasus.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 8,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": "admin@localhost",
"password": "See /root/databasus.creds"
},
"notes": [
{
"text": "Supports PostgreSQL versions 12-18 with cloud and self-hosted instances",
"type": "info"
},
{
"text": "Features: Scheduled backups, multiple storage providers, notifications, encryption",
"type": "info"
}
]
}

View File

@@ -1,5 +1,5 @@
{
"generated": "2026-02-16T06:25:10Z",
"generated": "2026-02-17T12:14:18Z",
"versions": [
{
"slug": "2fauth",
@@ -158,9 +158,9 @@
{
"slug": "bookstack",
"repo": "BookStackApp/BookStack",
"version": "v25.12.3",
"version": "v25.12.4",
"pinned": false,
"date": "2026-01-29T15:29:25Z"
"date": "2026-02-17T11:44:48Z"
},
{
"slug": "byparr",
@@ -193,9 +193,9 @@
{
"slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr",
"version": "v2.6.2",
"version": "v2.6.3",
"pinned": false,
"date": "2026-02-15T02:15:19Z"
"date": "2026-02-16T22:41:25Z"
},
{
"slug": "cloudreve",
@@ -207,9 +207,9 @@
{
"slug": "comfyui",
"repo": "comfyanonymous/ComfyUI",
"version": "v0.13.0",
"version": "v0.14.0",
"pinned": false,
"date": "2026-02-10T20:27:38Z"
"date": "2026-02-17T06:29:10Z"
},
{
"slug": "commafeed",
@@ -253,6 +253,13 @@
"pinned": false,
"date": "2026-02-11T15:39:05Z"
},
{
"slug": "databasus",
"repo": "databasus/databasus",
"version": "v3.12.2",
"pinned": false,
"date": "2026-02-14T22:28:59Z"
},
{
"slug": "dawarich",
"repo": "Freika/dawarich",
@@ -550,16 +557,16 @@
{
"slug": "huntarr",
"repo": "plexguide/Huntarr.io",
"version": "9.2.4.1",
"version": "9.3.0",
"pinned": false,
"date": "2026-02-12T22:17:47Z"
"date": "2026-02-17T06:34:38Z"
},
{
"slug": "immich-public-proxy",
"repo": "alangrainger/immich-public-proxy",
"version": "v1.15.1",
"version": "v1.15.3",
"pinned": false,
"date": "2026-01-26T08:04:27Z"
"date": "2026-02-16T22:54:27Z"
},
{
"slug": "inspircd",
@@ -578,16 +585,16 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.60",
"version": "v5.12.62",
"pinned": false,
"date": "2026-02-15T00:11:31Z"
"date": "2026-02-17T03:23:48Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1124",
"version": "v0.24.1140",
"pinned": false,
"date": "2026-02-15T05:54:22Z"
"date": "2026-02-17T05:54:25Z"
},
{
"slug": "jellystat",
@@ -704,9 +711,9 @@
{
"slug": "librenms",
"repo": "librenms/librenms",
"version": "26.1.1",
"version": "26.2.0",
"pinned": false,
"date": "2026-01-12T23:26:02Z"
"date": "2026-02-16T12:15:13Z"
},
{
"slug": "librespeed-rust",
@@ -729,12 +736,19 @@
"pinned": false,
"date": "2025-11-16T22:40:18Z"
},
{
"slug": "linkding",
"repo": "sissbruecker/linkding",
"version": "v1.45.0",
"pinned": false,
"date": "2026-01-06T20:31:04Z"
},
{
"slug": "linkstack",
"repo": "linkstackorg/linkstack",
"version": "v4.8.5",
"version": "v4.8.4",
"pinned": false,
"date": "2026-01-26T18:46:52Z"
"date": "2024-12-10T15:14:34Z"
},
{
"slug": "linkwarden",
@@ -774,9 +788,9 @@
{
"slug": "mail-archiver",
"repo": "s1t5/mail-archiver",
"version": "2602.1",
"version": "2602.2",
"pinned": false,
"date": "2026-02-11T06:23:11Z"
"date": "2026-02-17T09:46:52Z"
},
{
"slug": "managemydamnlife",
@@ -795,9 +809,9 @@
{
"slug": "mealie",
"repo": "mealie-recipes/mealie",
"version": "v3.10.2",
"version": "v3.11.0",
"pinned": false,
"date": "2026-02-04T23:32:32Z"
"date": "2026-02-17T04:13:35Z"
},
{
"slug": "mediamanager",
@@ -949,9 +963,9 @@
{
"slug": "opencloud",
"repo": "opencloud-eu/opencloud",
"version": "v5.0.2",
"version": "v5.1.0",
"pinned": true,
"date": "2026-02-05T16:29:01Z"
"date": "2026-02-16T15:04:28Z"
},
{
"slug": "opengist",
@@ -963,9 +977,9 @@
{
"slug": "ots",
"repo": "Luzifer/ots",
"version": "v1.21.0",
"version": "v1.21.1",
"pinned": false,
"date": "2026-01-19T23:21:29Z"
"date": "2026-02-16T12:12:23Z"
},
{
"slug": "outline",
@@ -1012,23 +1026,23 @@
{
"slug": "paperless-gpt",
"repo": "icereed/paperless-gpt",
"version": "v0.24.0",
"version": "v0.25.0",
"pinned": false,
"date": "2026-01-14T21:28:09Z"
"date": "2026-02-16T08:31:48Z"
},
{
"slug": "paperless-ngx",
"repo": "paperless-ngx/paperless-ngx",
"version": "v2.20.6",
"version": "v2.20.7",
"pinned": false,
"date": "2026-01-31T07:30:27Z"
"date": "2026-02-16T16:52:23Z"
},
{
"slug": "patchmon",
"repo": "PatchMon/PatchMon",
"version": "v1.4.0",
"version": "v1.4.1",
"pinned": false,
"date": "2026-02-13T10:39:03Z"
"date": "2026-02-16T18:00:13Z"
},
{
"slug": "paymenter",
@@ -1096,9 +1110,9 @@
{
"slug": "pocketbase",
"repo": "pocketbase/pocketbase",
"version": "v0.36.3",
"version": "v0.36.4",
"pinned": false,
"date": "2026-02-13T18:38:58Z"
"date": "2026-02-17T08:02:51Z"
},
{
"slug": "pocketid",
@@ -1268,6 +1282,13 @@
"pinned": false,
"date": "2025-03-28T13:00:23Z"
},
{
"slug": "romm",
"repo": "RetroAchievements/RALibretro",
"version": "1.8.2",
"pinned": false,
"date": "2026-01-23T17:03:31Z"
},
{
"slug": "rustdeskserver",
"repo": "rustdesk/rustdesk-server",
@@ -1320,9 +1341,9 @@
{
"slug": "semaphore",
"repo": "semaphoreui/semaphore",
"version": "v2.17.0",
"version": "v2.17.2",
"pinned": false,
"date": "2026-02-13T21:08:30Z"
"date": "2026-02-16T10:27:40Z"
},
{
"slug": "shelfmark",
@@ -1348,9 +1369,9 @@
{
"slug": "slskd",
"repo": "slskd/slskd",
"version": "0.24.3",
"version": "0.24.4",
"pinned": false,
"date": "2026-01-15T14:40:15Z"
"date": "2026-02-16T16:50:17Z"
},
{
"slug": "snipeit",
@@ -1397,9 +1418,9 @@
{
"slug": "stirling-pdf",
"repo": "Stirling-Tools/Stirling-PDF",
"version": "v2.4.6",
"version": "v2.5.0",
"pinned": false,
"date": "2026-02-12T00:01:19Z"
"date": "2026-02-16T22:58:17Z"
},
{
"slug": "streamlink-webui",
@@ -1467,9 +1488,9 @@
{
"slug": "threadfin",
"repo": "threadfin/threadfin",
"version": "1.2.37",
"version": "delete",
"pinned": false,
"date": "2025-09-11T16:13:41Z"
"date": ""
},
{
"slug": "tianji",
@@ -1530,9 +1551,9 @@
{
"slug": "tunarr",
"repo": "chrisbenincasa/tunarr",
"version": "v1.1.12",
"version": "v1.1.13",
"pinned": false,
"date": "2026-02-03T20:19:00Z"
"date": "2026-02-16T16:16:17Z"
},
{
"slug": "uhf",
@@ -1614,9 +1635,9 @@
{
"slug": "wanderer",
"repo": "meilisearch/meilisearch",
"version": "v1.35.0",
"version": "v1.35.1",
"pinned": false,
"date": "2026-02-02T09:57:03Z"
"date": "2026-02-16T17:01:17Z"
},
{
"slug": "warracker",
@@ -1726,9 +1747,9 @@
{
"slug": "zitadel",
"repo": "zitadel/zitadel",
"version": "v4.10.1",
"version": "v4.11.0",
"pinned": false,
"date": "2026-01-30T06:52:53Z"
"date": "2026-02-16T09:48:38Z"
},
{
"slug": "zoraxy",

View File

@@ -0,0 +1,35 @@
{
"name": "RomM",
"slug": "romm",
"categories": [
24
],
"date_created": "2026-02-16",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 80,
"documentation": "https://docs.romm.app/latest/",
"website": "https://romm.app/",
"config_path": "/opt/romm/.env",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/romm.webp",
"description": "RomM (ROM Manager) allows you to scan, enrich, browse and play your game collection with a clean and responsive interface. Support for multiple platforms, various naming schemes, and custom tags.",
"install_methods": [
{
"type": "default",
"script": "ct/romm.sh",
"resources": {
"cpu": 2,
"ram": 4096,
"hdd": 20,
"os": "debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}

View File

@@ -0,0 +1,171 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
nginx \
valkey
msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
setup_go
NODE_VERSION="24" setup_nodejs
fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Building Databasus (Patience)"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod tidy
$STD go mod download
$STD go install github.com/swaggo/swag/cmd/swag@latest
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
mkdir -p /databasus-data/{pgdata,temp,backups,data,logs}
mkdir -p /opt/databasus/ui/build
mkdir -p /opt/databasus/migrations
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations/* /opt/databasus/migrations/
chown -R postgres:postgres /databasus-data
msg_ok "Built Databasus"
msg_info "Configuring Databasus"
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install goose for migrations
$STD go install github.com/pressly/goose/v3/cmd/goose@latest
ln -sf /root/go/bin/goose /usr/local/bin/goose
cat <<EOF >/opt/databasus/.env
# Environment
ENV_MODE=production
# Server
SERVER_PORT=4005
SERVER_HOST=0.0.0.0
# Database
DATABASE_DSN=host=localhost user=postgres password=postgres dbname=databasus port=5432 sslmode=disable
DATABASE_URL=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
# Migrations
GOOSE_DRIVER=postgres
GOOSE_DBSTRING=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
GOOSE_MIGRATION_DIR=/opt/databasus/migrations
# Valkey (Redis-compatible cache)
VALKEY_HOST=localhost
VALKEY_PORT=6379
# Security
JWT_SECRET=${JWT_SECRET}
ENCRYPTION_KEY=${ENCRYPTION_KEY}
# Paths
DATA_DIR=/databasus-data/data
BACKUP_DIR=/databasus-data/backups
LOG_DIR=/databasus-data/logs
EOF
chown postgres:postgres /opt/databasus/.env
chmod 600 /opt/databasus/.env
msg_ok "Configured Databasus"
msg_info "Configuring Valkey"
cat <<EOF >/etc/valkey/valkey.conf
port 6379
bind 127.0.0.1
protected-mode yes
save ""
maxmemory 256mb
maxmemory-policy allkeys-lru
EOF
systemctl enable -q --now valkey-server
systemctl restart valkey-server
msg_ok "Configured Valkey"
msg_info "Creating Database"
# Configure PostgreSQL to allow local password auth for databasus
PG_HBA="/etc/postgresql/17/main/pg_hba.conf"
if ! grep -q "databasus" "$PG_HBA"; then
sed -i '/^local\s*all\s*all/i local databasus postgres trust' "$PG_HBA"
sed -i '/^host\s*all\s*all\s*127/i host databasus postgres 127.0.0.1/32 trust' "$PG_HBA"
systemctl reload postgresql
fi
$STD sudo -u postgres psql -c "CREATE DATABASE databasus;" 2>/dev/null || true
$STD sudo -u postgres psql -c "ALTER USER postgres WITH SUPERUSER CREATEROLE CREATEDB;" 2>/dev/null || true
msg_ok "Created Database"
msg_info "Creating Databasus Service"
cat <<EOF >/etc/systemd/system/databasus.service
[Unit]
Description=Databasus - Database Backup Management
After=network.target postgresql.service valkey.service
Requires=postgresql.service valkey.service
[Service]
Type=simple
WorkingDirectory=/opt/databasus
EnvironmentFile=/opt/databasus/.env
ExecStart=/opt/databasus/databasus
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$STD systemctl daemon-reload
$STD systemctl enable -q --now databasus
msg_ok "Created Databasus Service"
msg_info "Configuring Nginx"
cat <<EOF >/etc/nginx/sites-available/databasus
server {
listen 80;
server_name _;
location / {
proxy_pass http://127.0.0.1:4005;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_cache_bypass \$http_upgrade;
proxy_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
EOF
ln -sf /etc/nginx/sites-available/databasus /etc/nginx/sites-enabled/databasus
rm -f /etc/nginx/sites-enabled/default
$STD nginx -t
$STD systemctl enable -q --now nginx
$STD systemctl reload nginx
msg_ok "Configured Nginx"
motd_ssh
customize
cleanup_lxc

View File

@@ -64,7 +64,7 @@ $STD sudo -u cool coolconfig set-admin-password --user=admin --password="$COOLPA
echo "$COOLPASS" >~/.coolpass
msg_ok "Installed Collabora Online"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.0.2" "/usr/bin" "opencloud-*-linux-amd64"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.1.0" "/usr/bin" "opencloud-*-linux-amd64"
msg_info "Configuring OpenCloud"
DATA_DIR="/var/lib/opencloud"

View File

@@ -80,6 +80,7 @@ $STD php artisan p:user:make --no-interaction --admin=1 --email "$ADMIN_EMAIL" -
echo "* * * * * php /opt/pterodactyl-panel/artisan schedule:run >> /dev/null 2>&1" | crontab -u www-data -
chown -R www-data:www-data /opt/pterodactyl-panel/*
chmod -R 755 /opt/pterodactyl-panel/storage/* /opt/pterodactyl-panel/bootstrap/cache/
ln -s /opt/pterodactyl-panel /var/www/pterodactyl
{
echo ""
echo "pterodactyl Admin Username: admin"

344
install/romm-install.sh Normal file
View File

@@ -0,0 +1,344 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ) | DevelopmentCats | AlphaLawless
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://romm.app
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
acl \
git \
build-essential \
libssl-dev \
libffi-dev \
libmagic-dev \
python3-dev \
python3-pip \
python3-venv \
libmariadb3 \
libmariadb-dev \
libpq-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
zlib1g-dev \
liblzma-dev \
libncurses5-dev \
libncursesw5-dev \
redis-server \
redis-tools \
p7zip-full \
tzdata \
nginx
msg_ok "Installed Dependencies"
PYTHON_VERSION="3.13" setup_uv
NODE_VERSION="22" setup_nodejs
setup_mariadb
MARIADB_DB_NAME="romm" MARIADB_DB_USER="romm" setup_mariadb_db
msg_info "Creating directories"
mkdir -p /opt/romm \
/var/lib/romm/config \
/var/lib/romm/resources \
/var/lib/romm/assets/{saves,states,screenshots} \
/var/lib/romm/library/roms \
/var/lib/romm/library/bios
msg_ok "Created directories"
msg_info "Creating configuration file"
cat <<'EOF' >/var/lib/romm/config/config.yml
# RomM Configuration File
# Documentation: https://docs.romm.app/latest/Getting-Started/Configuration-File/
# Only uncomment the lines you want to use/modify
# exclude:
# platforms:
# - excluded_folder_a
# roms:
# single_file:
# extensions:
# - xml
# - txt
# names:
# - '._*'
# - '*.nfo'
# multi_file:
# names:
# - downloaded_media
# - media
# system:
# platforms:
# gc: ngc
# ps1: psx
# The folder name where your roms are located (relative to library path)
# filesystem:
# roms_folder: 'roms'
# scan:
# priority:
# metadata:
# - "igdb"
# - "moby"
# - "ss"
# - "ra"
# artwork:
# - "igdb"
# - "moby"
# - "ss"
# region:
# - "us"
# - "eu"
# - "jp"
# language:
# - "en"
# media:
# - box2d
# - box3d
# - screenshot
# - manual
# emulatorjs:
# debug: false
# cache_limit: null
EOF
chmod 644 /var/lib/romm/config/config.yml
msg_ok "Created configuration file"
fetch_and_deploy_gh_release "RAHasher" "RetroAchievements/RALibretro" "prebuild" "latest" "/opt/RALibretro" "RAHasher-x64-Linux-*.zip"
cp /opt/RALibretro/RAHasher /usr/bin/RAHasher
chmod +x /usr/bin/RAHasher
fetch_and_deploy_gh_release "romm" "rommapp/romm"
msg_info "Creating environment file"
sed -i 's/^supervised no/supervised systemd/' /etc/redis/redis.conf
systemctl restart redis-server
systemctl enable -q --now redis-server
AUTH_SECRET_KEY=$(openssl rand -hex 32)
cat <<EOF >/opt/romm/.env
ROMM_BASE_PATH=/var/lib/romm
ROMM_CONFIG_PATH=/var/lib/romm/config/config.yml
WEB_CONCURRENCY=4
DB_HOST=127.0.0.1
DB_PORT=3306
DB_NAME=$MARIADB_DB_NAME
DB_USER=$MARIADB_DB_USER
DB_PASSWD=$MARIADB_DB_PASS
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
ROMM_AUTH_SECRET_KEY=$AUTH_SECRET_KEY
DISABLE_DOWNLOAD_ENDPOINT_AUTH=false
DISABLE_CSRF_PROTECTION=false
ENABLE_RESCAN_ON_FILESYSTEM_CHANGE=true
RESCAN_ON_FILESYSTEM_CHANGE_DELAY=5
ENABLE_SCHEDULED_RESCAN=true
SCHEDULED_RESCAN_CRON=0 3 * * *
ENABLE_SCHEDULED_UPDATE_SWITCH_TITLEDB=true
SCHEDULED_UPDATE_SWITCH_TITLEDB_CRON=0 4 * * *
LOGLEVEL=INFO
EOF
chmod 600 /opt/romm/.env
msg_ok "Created environment file"
msg_info "Setting up RomM Backend"
cd /opt/romm
export UV_CONCURRENT_DOWNLOADS=1
$STD uv sync --all-extras
cd /opt/romm/backend
$STD uv run alembic upgrade head
msg_ok "Set up RomM Backend"
msg_info "Setting up RomM Frontend"
cd /opt/romm/frontend
$STD npm install
$STD npm run build
cp -rf /opt/romm/frontend/assets/* /opt/romm/frontend/dist/assets/
mkdir -p /opt/romm/frontend/dist/assets/romm
ln -sfn /var/lib/romm/resources /opt/romm/frontend/dist/assets/romm/resources
ln -sfn /var/lib/romm/assets /opt/romm/frontend/dist/assets/romm/assets
msg_ok "Set up RomM Frontend"
msg_info "Configuring Nginx"
cat <<'EOF' >/etc/nginx/sites-available/romm
upstream romm_backend {
server 127.0.0.1:5000;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name _;
root /opt/romm/frontend/dist;
client_max_body_size 0;
# Frontend SPA
location / {
try_files $uri $uri/ /index.html;
}
# Static assets
location /assets {
alias /opt/romm/frontend/dist/assets;
try_files $uri $uri/ =404;
expires 1y;
add_header Cache-Control "public, immutable";
}
# EmulatorJS player - requires COOP/COEP headers for SharedArrayBuffer
location ~ ^/rom/.*/ejs$ {
add_header Cross-Origin-Embedder-Policy "require-corp";
add_header Cross-Origin-Opener-Policy "same-origin";
try_files $uri /index.html;
}
# Backend API
location /api {
proxy_pass http://romm_backend;
proxy_buffering off;
proxy_request_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# WebSocket and Netplay
location ~ ^/(ws|netplay) {
proxy_pass http://romm_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_read_timeout 86400;
}
# OpenAPI docs
location = /openapi.json {
proxy_pass http://romm_backend;
}
# Internal library file serving
location /library/ {
internal;
alias /var/lib/romm/library/;
}
}
EOF
rm -f /etc/nginx/sites-enabled/default
ln -sf /etc/nginx/sites-available/romm /etc/nginx/sites-enabled/romm
systemctl restart nginx
systemctl enable -q --now nginx
msg_ok "Configured Nginx"
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/romm-backend.service
[Unit]
Description=RomM Backend
After=network.target mariadb.service redis-server.service
Requires=mariadb.service redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm"
ExecStart=/opt/romm/.venv/bin/python main.py
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/romm-worker.service
[Unit]
Description=RomM RQ Worker
After=network.target mariadb.service redis-server.service romm-backend.service
Requires=mariadb.service redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm/backend"
ExecStart=/opt/romm/.venv/bin/rq worker --path /opt/romm/backend --url redis://127.0.0.1:6379/0 high default low
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/romm-scheduler.service
[Unit]
Description=RomM RQ Scheduler
After=network.target mariadb.service redis-server.service romm-backend.service
Requires=mariadb.service redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm/backend"
Environment="RQ_REDIS_HOST=127.0.0.1"
Environment="RQ_REDIS_PORT=6379"
ExecStart=/opt/romm/.venv/bin/rqscheduler --path /opt/romm/backend
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/romm-watcher.service
[Unit]
Description=RomM Filesystem Watcher
After=network.target romm-backend.service
Requires=romm-backend.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm/backend"
ExecStart=/opt/romm/.venv/bin/watchfiles --target-type command '/opt/romm/.venv/bin/python watcher.py' /var/lib/romm/library
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now romm-backend romm-worker romm-scheduler romm-watcher
msg_ok "Created Services"
motd_ssh
customize
cleanup_lxc

View File

@@ -38,8 +38,8 @@ SECRET="$(openssl rand -hex 64)"
sed -e '/^NODE_ENV=/s/=.*$/=production/' \
-e 's/^TUDUDI_USER/# TUDUDI_USER/g' \
-e "/_SECRET=/s/=.*$/=${SECRET}/" \
-e "/^# DB_FILE/s/^# //; \
\|DB_FILE|s|/path.*$|${DB_LOCATION}/production.sqlite3|" \
-e '/^# DB_FILE=/s/^# //' \
-e "s|^DB_FILE=.*|DB_FILE=${DB_LOCATION}/production.sqlite3|" \
-e "/^# TUDUDI_ALLOWED/s/^# //; \
\|_ORIGINS=|s|=.*$|=<your tududi IP or FDQN>|" \
-e "/^# TUDUDI_UPLOAD/s/^# //; \

View File

@@ -14,6 +14,25 @@ catch_errors
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
# This function enables IPv6 if it's not disabled and sets verbose mode
verb_ip6() {
set_std_mode # Set STD mode based on VERBOSE
@@ -53,6 +72,7 @@ setting_up_container() {
fi
msg_ok "Set up Container OS"
msg_ok "Network Connected: ${BL}$(ip addr show | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | tail -n1)${CL}"
post_progress_to_api
}
# This function checks the network connection by pinging a known IP address and prompts the user to continue if the internet is not connected
@@ -85,8 +105,18 @@ network_check() {
update_os() {
msg_info "Updating Container OS"
$STD apk -U upgrade
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
msg_ok "Updated Container OS"
post_progress_to_api
}
# This function modifies the message of the day (motd) and SSH settings

View File

@@ -117,16 +117,17 @@ detect_repo_source
# - Canonical source of truth for ALL exit code mappings
# - Used by both api.func (telemetry) and error_handler.func (error display)
# - Supports:
# * Generic/Shell errors (1, 2, 124, 126-130, 134, 137, 139, 141, 143)
# * curl/wget errors (6, 7, 22, 28, 35)
# * Generic/Shell errors (1-3, 10, 124-132, 134, 137, 139, 141, 143-146)
# * curl/wget errors (4-8, 16, 18, 22-28, 30, 32-36, 39, 44-48, 51-52, 55-57, 59, 61, 63, 75, 78-79, 92, 95)
# * Package manager errors (APT, DPKG: 100-102, 255)
# * BSD sysexits (64-78)
# * Systemd/Service errors (150-154)
# * Python/pip/uv errors (160-162)
# * PostgreSQL errors (170-173)
# * MySQL/MariaDB errors (180-183)
# * MongoDB errors (190-193)
# * Proxmox custom codes (200-231)
# * Node.js/npm errors (243, 245-249)
# * Node.js/npm errors (239, 243, 245-249)
# - Returns description string for given exit code
# ------------------------------------------------------------------------------
explain_exit_code() {
@@ -135,30 +136,87 @@ explain_exit_code() {
# --- Generic / Shell ---
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
# --- curl / wget errors (commonly seen in downloads) ---
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
# --- Package manager / APT / DPKG ---
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
# --- BSD sysexits.h (64-78) ---
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
# --- Common shell/system errors ---
124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
# --- Systemd / Service errors (150-154) ---
150) echo "Systemd: Service failed to start" ;;
@@ -166,7 +224,6 @@ explain_exit_code() {
152) echo "Permission denied (EACCES)" ;;
153) echo "Build/compile failed (make/gcc/cmake)" ;;
154) echo "Node.js: Native addon build failed (node-gyp)" ;;
# --- Python / pip / uv (160-162) ---
160) echo "Python: Virtualenv / uv environment missing or broken" ;;
161) echo "Python: Dependency resolution failed" ;;
@@ -217,7 +274,8 @@ explain_exit_code() {
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
# --- Node.js / npm / pnpm / yarn (243-249) ---
# --- Node.js / npm / pnpm / yarn (239-249) ---
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
@@ -624,6 +682,8 @@ EOF
curl -fsS -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" &>/dev/null || true
POST_TO_API_DONE=true
}
# ------------------------------------------------------------------------------
@@ -848,6 +908,9 @@ categorize_error() {
# Network errors (curl/wget)
6 | 7 | 22 | 35) echo "network" ;;
# Docker / Privileged mode required
10) echo "config" ;;
# Timeout errors
28 | 124 | 211) echo "timeout" ;;
@@ -922,6 +985,62 @@ get_install_duration() {
echo $((now - INSTALL_START_TIME))
}
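# Example (illustrative sketch): pairing the timer helpers. This assumes
# start_install_timer records INSTALL_START_TIME, as the arithmetic above implies;
# the duration can then be attached to whatever final report is sent.
#   start_install_timer
#   ... long-running install steps ...
#   duration="$(get_install_duration)"   # elapsed seconds since the timer started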
# ------------------------------------------------------------------------------
# _telemetry_report_exit()
#
# - Internal handler called by EXIT trap set in init_tool_telemetry()
# - Determines success/failure from exit code and reports via appropriate API
# - Arguments:
# * $1: exit_code from the script
# ------------------------------------------------------------------------------
_telemetry_report_exit() {
local ec="${1:-0}"
local status="success"
[[ "$ec" -ne 0 ]] && status="failed"
# Lazy name resolution: use explicit name, fall back to $APP, then "unknown"
local name="${TELEMETRY_TOOL_NAME:-${APP:-unknown}}"
if [[ "${TELEMETRY_TOOL_TYPE:-tool}" == "addon" ]]; then
post_addon_to_api "$name" "$status" "$ec"
else
post_tool_to_api "$name" "$status" "$ec"
fi
}
# ------------------------------------------------------------------------------
# init_tool_telemetry()
#
# - One-line telemetry setup for tools/addon scripts
# - Reads DIAGNOSTICS from /usr/local/community-scripts/diagnostics
# - Starts install timer for duration tracking
# - Sets EXIT trap to automatically report success/failure on script exit
# - Arguments:
# * $1: tool_name (optional, falls back to $APP at exit time)
# * $2: type ("tool" for PVE host scripts, "addon" for container addons)
# - Usage:
# source <(curl -fsSL .../misc/api.func) 2>/dev/null || true
# init_tool_telemetry "post-pve-install" "tool"
# init_tool_telemetry "" "addon" # uses $APP at exit time
# ------------------------------------------------------------------------------
init_tool_telemetry() {
local name="${1:-}"
local type="${2:-tool}"
[[ -n "$name" ]] && TELEMETRY_TOOL_NAME="$name"
TELEMETRY_TOOL_TYPE="$type"
# Read diagnostics opt-in/opt-out
if [[ -f /usr/local/community-scripts/diagnostics ]]; then
DIAGNOSTICS=$(grep -i "^DIAGNOSTICS=" /usr/local/community-scripts/diagnostics 2>/dev/null | awk -F'=' '{print $2}') || true
fi
start_install_timer
# EXIT trap: automatically report telemetry when script ends
trap '_telemetry_report_exit "$?"' EXIT
}
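# ------------------------------------------------------------------------------
# Example (illustrative): the opt-in file read above is assumed to contain
# simple KEY=value lines, matching the grep/awk parsing:
#
#   # /usr/local/community-scripts/diagnostics
#   DIAGNOSTICS=yes
#
# With any other value (or no file at all), DIAGNOSTICS stays unset/"no" and the
# post_*_to_api helpers are expected to skip the upload, so the EXIT trap
# becomes a harmless no-op.
# ------------------------------------------------------------------------------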
# ------------------------------------------------------------------------------
# post_tool_to_api()
#

View File

@@ -297,7 +297,7 @@ validate_container_id() {
# Falls back gracefully if pvesh unavailable or returns empty
if command -v pvesh &>/dev/null; then
local cluster_ids
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true)
if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then
return 1
@@ -3427,6 +3427,7 @@ start() {
VERBOSE="no"
set_std_mode
ensure_profile_loaded
get_lxc_ip
update_script
update_motd_ip
cleanup_lxc
@@ -3454,6 +3455,7 @@ start() {
;;
esac
ensure_profile_loaded
get_lxc_ip
update_script
update_motd_ip
cleanup_lxc
@@ -4038,6 +4040,13 @@ EOF'
msg_ok "Customized LXC Container"
# Optional DNS override for retry scenarios (inside LXC, never on host)
if [[ "${DNS_RETRY_OVERRIDE:-false}" == "true" ]]; then
msg_info "Applying DNS retry override in LXC (8.8.8.8, 1.1.1.1)"
pct exec "$CTID" -- bash -c "printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' >/etc/resolv.conf" >/dev/null 2>&1 || true
msg_ok "DNS override applied in LXC"
fi
# Install SSH keys
install_ssh_keys_into_ct
@@ -4150,32 +4159,322 @@ EOF'
# Prompt user for cleanup with 60s timeout
echo ""
echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
# Detect error type for smart recovery options
local is_oom=false
local is_network_issue=false
local is_apt_issue=false
local is_cmd_not_found=false
local error_explanation=""
if declare -f explain_exit_code >/dev/null 2>&1; then
error_explanation="$(explain_exit_code "$install_exit_code")"
fi
# OOM detection: exit codes 134 (SIGABRT/heap), 137 (SIGKILL/OOM), 243 (Node.js heap)
if [[ $install_exit_code -eq 134 || $install_exit_code -eq 137 || $install_exit_code -eq 243 ]]; then
is_oom=true
fi
# APT/DPKG detection: exit codes 100-102 (APT), 255 (DPKG with log evidence)
case "$install_exit_code" in
100 | 101 | 102) is_apt_issue=true ;;
255)
if [[ -f "$combined_log" ]] && grep -qiE 'dpkg|apt-get|apt\.conf|broken packages|unmet dependencies|E: Sub-process|E: Failed' "$combined_log"; then
is_apt_issue=true
fi
;;
esac
# Command not found detection
if [[ $install_exit_code -eq 127 ]]; then
is_cmd_not_found=true
fi
# Network-related detection (curl/apt/git fetch failures and transient network issues)
case "$install_exit_code" in
6 | 7 | 22 | 28 | 35 | 52 | 56 | 57 | 75 | 78) is_network_issue=true ;;
100)
# APT can fail due to network (Failed to fetch)
if [[ -f "$combined_log" ]] && grep -qiE 'Failed to fetch|Could not resolve|Connection failed|Network is unreachable|Temporary failure resolving' "$combined_log"; then
is_network_issue=true
fi
;;
128)
if [[ -f "$combined_log" ]] && grep -qiE 'RPC failed|early EOF|fetch-pack|HTTP/2 stream|Could not resolve host|Temporary failure resolving|Failed to fetch|Connection reset|Network is unreachable' "$combined_log"; then
is_network_issue=true
fi
;;
esac
# Exit 1 subclassification: analyze logs to identify actual root cause
# Many exit 1 errors are actually APT, OOM, network, or command-not-found issues
if [[ $install_exit_code -eq 1 && -f "$combined_log" ]]; then
if grep -qiE 'E: Unable to|E: Package|E: Failed to fetch|dpkg.*error|broken packages|unmet dependencies|dpkg --configure -a' "$combined_log"; then
is_apt_issue=true
fi
if grep -qiE 'Cannot allocate memory|Out of memory|oom-killer|Killed process|JavaScript heap' "$combined_log"; then
is_oom=true
fi
if grep -qiE 'Could not resolve|DNS|Connection refused|Network is unreachable|No route to host|Temporary failure resolving|Failed to fetch' "$combined_log"; then
is_network_issue=true
fi
if grep -qiE ': command not found|No such file or directory.*/s?bin/' "$combined_log"; then
is_cmd_not_found=true
fi
fi
# Show error explanation if available
if [[ -n "$error_explanation" ]]; then
echo -e "${TAB}${RD}Error: ${error_explanation}${CL}"
echo ""
fi
# Show specific hints for known error types
if [[ $install_exit_code -eq 10 ]]; then
echo -e "${TAB}${INFO} This error usually means the container needs ${GN}privileged${CL} mode or Docker/nesting support."
echo -e "${TAB}${INFO} Recreate with: Advanced Install → Container Type: ${GN}Privileged${CL}"
echo ""
fi
if [[ $install_exit_code -eq 125 || $install_exit_code -eq 126 ]]; then
echo -e "${TAB}${INFO} The command exists but cannot be executed. This may be a ${GN}permission${CL} issue."
echo -e "${TAB}${INFO} If using Docker, ensure the container is ${GN}privileged${CL} or has correct permissions."
echo ""
fi
if [[ "$is_cmd_not_found" == true ]]; then
local missing_cmd=""
if [[ -f "$combined_log" ]]; then
missing_cmd=$(grep -oiE '[a-zA-Z0-9_.-]+: command not found' "$combined_log" | tail -1 | sed 's/: command not found//')
fi
if [[ -n "$missing_cmd" ]]; then
echo -e "${TAB}${INFO} Missing command: ${GN}${missing_cmd}${CL}"
fi
echo ""
fi
# Build recovery menu based on error type
echo -e "${YW}What would you like to do?${CL}"
echo ""
echo -e " ${GN}1)${CL} Remove container and exit"
echo -e " ${GN}2)${CL} Keep container for debugging"
echo -e " ${GN}3)${CL} Retry with verbose mode (full rebuild)"
local next_option=4
local APT_OPTION="" OOM_OPTION="" DNS_OPTION=""
if [[ "$is_apt_issue" == true ]]; then
if [[ "$var_os" == "alpine" ]]; then
echo -e " ${GN}${next_option})${CL} Repair APK state and re-run install (in-place)"
else
echo -e " ${GN}${next_option})${CL} Repair APT/DPKG state and re-run install (in-place)"
fi
APT_OPTION=$next_option
next_option=$((next_option + 1))
fi
if [[ "$is_oom" == true ]]; then
local recovery_attempt="${RECOVERY_ATTEMPT:-0}"
if [[ $recovery_attempt -lt 2 ]]; then
local new_ram=$((RAM_SIZE * 2))
local new_cpu=$((CORE_COUNT * 2))
echo -e " ${GN}${next_option})${CL} Retry with more resources (RAM: ${RAM_SIZE}${new_ram} MiB, CPU: ${CORE_COUNT}${new_cpu} cores)"
OOM_OPTION=$next_option
next_option=$((next_option + 1))
else
echo -e " ${DGN}-)${CL} ${DGN}OOM retry exhausted (already retried ${recovery_attempt}x)${CL}"
fi
fi
if [[ "$is_network_issue" == true ]]; then
echo -e " ${GN}${next_option})${CL} Retry with DNS override in LXC (8.8.8.8 / 1.1.1.1)"
DNS_OPTION=$next_option
next_option=$((next_option + 1))
fi
local max_option=$((next_option - 1))
echo ""
echo -en "${YW}Select option [1-${max_option}] (default: 1, auto-remove in 60s): ${CL}"
if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
case "${response:-1}" in
1)
# Remove container
echo ""
msg_info "Removing container ${CTID}"
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID}${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
msg_ok "Container ${CTID} removed"
elif [[ "$response" =~ ^[Nn]$ ]]; then
echo ""
msg_warn "Container ${CTID} kept for debugging"
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
;;
2)
echo -e "\n${TAB}${YW}Container ${CTID} kept for debugging${CL}"
# Dev mode: Setup MOTD/SSH for debugging access to broken container
if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then
echo -e "${TAB}${HOLD}${DGN}Setting up MOTD and SSH for debugging...${CL}"
if pct exec "$CTID" -- bash -c "
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/install.func)
declare -f motd_ssh >/dev/null 2>&1 && motd_ssh || true
" >/dev/null 2>&1; then
local ct_ip=$(pct exec "$CTID" ip a s dev eth0 2>/dev/null | awk '/inet / {print $2}' | cut -d/ -f1)
echo -e "${BFR}${CM}${GN}MOTD/SSH ready - SSH into container: ssh root@${ct_ip}${CL}"
fi
fi
fi
exit $install_exit_code
;;
3)
# Retry with verbose mode (full rebuild)
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
# Get new container ID
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export VERBOSE="yes"
export var_verbose="yes"
# Show rebuild summary
echo -e "${YW}Rebuilding with preserved settings:${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
# Re-run build_container
build_container
return $?
;;
*)
# Handle dynamic smart recovery options via named option variables
local handled=false
if [[ -n "${APT_OPTION}" && "${response}" == "${APT_OPTION}" ]]; then
# Package manager in-place repair: fix broken state and re-run install script
handled=true
if [[ "$var_os" == "alpine" ]]; then
echo -e "\n${TAB}${HOLD}${YW}Repairing APK state in container ${CTID}...${CL}"
pct exec "$CTID" -- ash -c "
apk fix 2>/dev/null || true
apk cache clean 2>/dev/null || true
apk update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APK state repaired in container ${CTID}${CL}"
else
echo -e "\n${TAB}${HOLD}${YW}Repairing APT/DPKG state in container ${CTID}...${CL}"
pct exec "$CTID" -- bash -c "
DEBIAN_FRONTEND=noninteractive dpkg --configure -a 2>/dev/null || true
apt-get -f install -y 2>/dev/null || true
apt-get clean 2>/dev/null
apt-get update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APT/DPKG state repaired in container ${CTID}${CL}"
fi
echo ""
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Re-running installation in existing container ${CTID}:${CL}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Re-running installation script..."
# Re-run install script in existing container (don't destroy/recreate)
set +Eeuo pipefail
trap - ERR
lxc-attach -n "$CTID" -- bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/install/${var_install}.sh)"
local apt_retry_exit=$?
set -Eeuo pipefail
trap 'error_handler' ERR
# Check for error flag from retry
local apt_retry_code=0
if [[ -n "${SESSION_ID:-}" ]]; then
local retry_error_flag="/root/.install-${SESSION_ID}.failed"
if pct exec "$CTID" -- test -f "$retry_error_flag" 2>/dev/null; then
apt_retry_code=$(pct exec "$CTID" -- cat "$retry_error_flag" 2>/dev/null || echo "1")
pct exec "$CTID" -- rm -f "$retry_error_flag" 2>/dev/null || true
fi
fi
if [[ $apt_retry_code -eq 0 && $apt_retry_exit -ne 0 ]]; then
apt_retry_code=$apt_retry_exit
fi
if [[ $apt_retry_code -eq 0 ]]; then
msg_ok "Installation completed successfully after APT repair!"
post_update_to_api "done" "0" "force"
return 0
else
msg_error "Installation still failed after APT repair (exit code: ${apt_retry_code})"
install_exit_code=$apt_retry_code
fi
fi
if [[ -n "${OOM_OPTION}" && "${response}" == "${OOM_OPTION}" ]]; then
# Retry with doubled resources
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with more resources...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
local old_ram="$RAM_SIZE"
local old_cpu="$CORE_COUNT"
export CTID=$(get_valid_container_id "$CTID")
export RAM_SIZE=$((RAM_SIZE * 2))
export CORE_COUNT=$((CORE_COUNT * 2))
export var_ram="$RAM_SIZE"
export var_cpu="$CORE_COUNT"
export VERBOSE="yes"
export var_verbose="yes"
export RECOVERY_ATTEMPT=$((${RECOVERY_ATTEMPT:-0} + 1))
echo -e "${YW}Rebuilding with increased resources (attempt ${RECOVERY_ATTEMPT}/2):${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " RAM: ${old_ram}${GN}${RAM_SIZE}${CL} MiB (x2)"
echo -e " CPU: ${old_cpu}${GN}${CORE_COUNT}${CL} cores (x2)"
echo -e " Disk: ${DISK_SIZE} GB | Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ -n "${DNS_OPTION}" && "${response}" == "${DNS_OPTION}" ]]; then
# Retry with DNS override in LXC
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with DNS override...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export DNS_RETRY_OVERRIDE="true"
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Rebuilding with DNS override in LXC:${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " DNS: ${GN}8.8.8.8, 1.1.1.1${CL} (inside LXC only)"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ "$handled" == false ]]; then
echo -e "\n${TAB}${YW}Invalid option. Container ${CTID} kept.${CL}"
exit $install_exit_code
fi
;;
esac
else
# Timeout - auto-remove
echo ""
@@ -5253,20 +5552,27 @@ ensure_log_on_host() {
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - For non-zero exit codes: posts "failed" status
# - For zero exit codes where post_update_to_api was never called:
# catches orphaned "installing" records (e.g., script exited cleanly
# but description() was never reached)
# ------------------------------------------------------------------------------
api_exit_script() {
local exit_code=$?
if [ $exit_code -ne 0 ]; then
ensure_log_on_host
post_update_to_api "failed" "$exit_code"
elif [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
# Script exited with 0 but never sent a completion status
# exit_code=0 is never an error — report as success
post_update_to_api "done" "0"
fi
}
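# Illustrative lifecycle of the two flags (a sketch; assumes the initial
# telemetry POST sets POST_TO_API_DONE=true, as shown in api.func above, and
# that post_update_to_api sets POST_UPDATE_DONE once a final status is sent):
#
#   post_to_api                      # record created      -> POST_TO_API_DONE=true
#   ...install runs...
#   post_update_to_api "done" "0"    # normal completion   -> POST_UPDATE_DONE=true
#
#   api_exit_script then reconciles:
#     exit != 0                        -> ensure_log_on_host + "failed" "<code>"
#     exit 0, completion already sent  -> nothing to do
#     exit 0, no completion sent       -> "done" "0" (closes orphaned record)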
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'ensure_log_on_host; post_update_to_api "failed" "$?"' ERR
trap 'local _ec=$?; if [[ $_ec -ne 0 ]]; then ensure_log_on_host; post_update_to_api "failed" "$_ec"; fi' ERR
trap 'ensure_log_on_host; post_update_to_api "failed" "129"; exit 129' SIGHUP
trap 'ensure_log_on_host; post_update_to_api "failed" "130"; exit 130' SIGINT
trap 'ensure_log_on_host; post_update_to_api "failed" "143"; exit 143' SIGTERM

View File

@@ -37,24 +37,79 @@ if ! declare -f explain_exit_code &>/dev/null; then
case "$code" in
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
150) echo "Systemd: Service failed to start" ;;
151) echo "Systemd: Service unit not found" ;;
152) echo "Permission denied (EACCES)" ;;
@@ -100,6 +155,7 @@ if ! declare -f explain_exit_code &>/dev/null; then
224) echo "Proxmox: PBS storage is for backups only" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;

View File

@@ -40,6 +40,25 @@ catch_errors
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
# ==============================================================================
# SECTION 2: NETWORK & CONNECTIVITY
# ==============================================================================
@@ -103,6 +122,7 @@ setting_up_container() {
msg_ok "Set up Container OS"
#msg_custom "${CM}" "${GN}" "Network Connected: ${BL}$(hostname -I)"
msg_ok "Network Connected: ${BL}$(hostname -I)"
post_progress_to_api
}
# ------------------------------------------------------------------------------
@@ -206,8 +226,18 @@ EOF
$STD apt-get -o Dpkg::Options::="--force-confold" -y dist-upgrade
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
msg_ok "Updated Container OS"
post_progress_to_api
local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
}
# ==============================================================================

View File

@@ -529,9 +529,21 @@ cleanup_vmid() {
}
cleanup() {
local exit_code=$?
if [[ "$(dirs -p | wc -l)" -gt 1 ]]; then
popd >/dev/null || true
fi
# Report final telemetry status if post_to_api_vm was called but no update was sent
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
# Exited cleanly but description()/success was never called — shouldn't happen
post_update_to_api "failed" "1"
fi
fi
fi
}
check_root() {

View File

@@ -19,6 +19,11 @@ EOF
}
header_info
set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-netbird-lxc" "tool"
while true; do
read -p "This will add NetBird to an existing LXC Container ONLY. Proceed(y/n)?" yn
case $yn in

View File

@@ -23,6 +23,10 @@ function msg_info() { echo -e " \e[1;36m➤\e[0m $1"; }
function msg_ok() { echo -e " \e[1;32m✔\e[0m $1"; }
function msg_error() { echo -e " \e[1;31m✖\e[0m $1"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-tailscale-lxc" "tool"
header_info
if ! command -v pveversion &>/dev/null; then

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=8080
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# HEADER

View File

@@ -42,6 +42,11 @@ function msg() {
local TEXT="$1"
echo -e "$TEXT"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "all-templates" "tool"
function validate_container_id() {
local ctid="$1"
# Check if ID is numeric

View File

@@ -28,6 +28,11 @@ HOLD="-"
CM="${GN}${CL}"
APP="Coder Code Server"
hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "coder-code-server" "addon"
set -o errexit
set -o errtrace
set -o nounset

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

View File

@@ -17,6 +17,11 @@ HOLD="-"
CM="${GN}${CL}"
APP="CrowdSec"
hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "crowdsec" "addon"
set -o errexit
set -o errtrace
set -o nounset

View File

@@ -32,6 +32,10 @@ DEFAULT_PORT=8080
SRC_DIR="/"
TMP_BIN="/tmp/filebrowser.$$"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "filebrowser-quantum" "addon"
# Get primary IP
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)

View File

@@ -5,8 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
function header_info {
clear
cat <<"EOF"
_______ __ ____
/ ____(_) /__ / __ )_________ _ __________ _____
/ /_ / / / _ \/ __ / ___/ __ \ | /| / / ___/ _ \/ ___/
@@ -29,6 +29,10 @@ INSTALL_PATH="/usr/local/bin/filebrowser"
DB_PATH="/usr/local/community-scripts/filebrowser.db"
DEFAULT_PORT=8080
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "filebrowser" "addon"
# Get first non-loopback IP & Detect primary network interface dynamically
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
@@ -38,65 +42,65 @@ IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n
# Detect OS
if [[ -f "/etc/alpine-release" ]]; then
OS="Alpine"
SERVICE_PATH="/etc/init.d/filebrowser"
PKG_MANAGER="apk add --no-cache"
OS="Alpine"
SERVICE_PATH="/etc/init.d/filebrowser"
PKG_MANAGER="apk add --no-cache"
elif [[ -f "/etc/debian_version" ]]; then
OS="Debian"
SERVICE_PATH="/etc/systemd/system/filebrowser.service"
PKG_MANAGER="apt-get install -y"
OS="Debian"
SERVICE_PATH="/etc/systemd/system/filebrowser.service"
PKG_MANAGER="apt-get install -y"
else
echo -e "${CROSS} Unsupported OS detected. Exiting."
exit 1
echo -e "${CROSS} Unsupported OS detected. Exiting."
exit 1
fi
header_info
function msg_info() {
local msg="$1"
echo -e "${INFO} ${YW}${msg}...${CL}"
local msg="$1"
echo -e "${INFO} ${YW}${msg}...${CL}"
}
function msg_ok() {
local msg="$1"
echo -e "${CM} ${GN}${msg}${CL}"
local msg="$1"
echo -e "${CM} ${GN}${msg}${CL}"
}
function msg_error() {
local msg="$1"
echo -e "${CROSS} ${RD}${msg}${CL}"
local msg="$1"
echo -e "${CROSS} ${RD}${msg}${CL}"
}
if [ -f "$INSTALL_PATH" ]; then
echo -e "${YW}⚠️ ${APP} is already installed.${CL}"
read -r -p "Would you like to uninstall ${APP}? (y/N): " uninstall_prompt
if [[ "${uninstall_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Uninstalling ${APP}"
if [[ "$OS" == "Debian" ]]; then
systemctl disable --now filebrowser.service &>/dev/null
rm -f "$SERVICE_PATH"
else
rc-service filebrowser stop &>/dev/null
rc-update del filebrowser &>/dev/null
rm -f "$SERVICE_PATH"
fi
rm -f "$INSTALL_PATH" "$DB_PATH"
msg_ok "${APP} has been uninstalled."
exit 0
fi
read -r -p "Would you like to update ${APP}? (y/N): " update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Updated ${APP}"
exit 0
echo -e "${YW}⚠️ ${APP} is already installed.${CL}"
read -r -p "Would you like to uninstall ${APP}? (y/N): " uninstall_prompt
if [[ "${uninstall_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Uninstalling ${APP}"
if [[ "$OS" == "Debian" ]]; then
systemctl disable --now filebrowser.service &>/dev/null
rm -f "$SERVICE_PATH"
else
echo -e "${YW}⚠️ Update skipped. Exiting.${CL}"
exit 0
rc-service filebrowser stop &>/dev/null
rc-update del filebrowser &>/dev/null
rm -f "$SERVICE_PATH"
fi
rm -f "$INSTALL_PATH" "$DB_PATH"
msg_ok "${APP} has been uninstalled."
exit 0
fi
read -r -p "Would you like to update ${APP}? (y/N): " update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Updated ${APP}"
exit 0
else
echo -e "${YW}⚠️ Update skipped. Exiting.${CL}"
exit 0
fi
fi
echo -e "${YW}⚠️ ${APP} is not installed.${CL}"
@@ -105,43 +109,43 @@ PORT=${PORT:-$DEFAULT_PORT}
read -r -p "Would you like to install ${APP}? (y/n): " install_prompt
if [[ "${install_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Installing ${APP} on ${OS}"
$PKG_MANAGER wget tar curl &>/dev/null
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Installed ${APP}"
msg_info "Installing ${APP} on ${OS}"
$PKG_MANAGER wget tar curl &>/dev/null
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Installed ${APP}"
msg_info "Creating FileBrowser directory"
mkdir -p /usr/local/community-scripts
chown root:root /usr/local/community-scripts
chmod 755 /usr/local/community-scripts
touch "$DB_PATH"
chown root:root "$DB_PATH"
chmod 644 "$DB_PATH"
msg_ok "Directory created successfully"
msg_info "Creating FileBrowser directory"
mkdir -p /usr/local/community-scripts
chown root:root /usr/local/community-scripts
chmod 755 /usr/local/community-scripts
touch "$DB_PATH"
chown root:root "$DB_PATH"
chmod 644 "$DB_PATH"
msg_ok "Directory created successfully"
read -r -p "Would you like to use No Authentication? (y/N): " auth_prompt
if [[ "${auth_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Configuring No Authentication"
cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config init --auth.method=noauth &>/dev/null
filebrowser config set --auth.method=noauth &>/dev/null
filebrowser users add ID 1 --perm.admin &>/dev/null
msg_ok "No Authentication configured"
else
msg_info "Setting up default authentication"
cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser users add admin helper-scripts.com --perm.admin --database "$DB_PATH" &>/dev/null
msg_ok "Default authentication configured (admin:helper-scripts.com)"
fi
read -r -p "Would you like to use No Authentication? (y/N): " auth_prompt
if [[ "${auth_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Configuring No Authentication"
cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config init --auth.method=noauth &>/dev/null
filebrowser config set --auth.method=noauth &>/dev/null
filebrowser users add ID 1 --perm.admin &>/dev/null
msg_ok "No Authentication configured"
else
msg_info "Setting up default authentication"
cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser users add admin helper-scripts.com --perm.admin --database "$DB_PATH" &>/dev/null
msg_ok "Default authentication configured (admin:helper-scripts.com)"
fi
msg_info "Creating service"
if [[ "$OS" == "Debian" ]]; then
cat <<EOF >"$SERVICE_PATH"
msg_info "Creating service"
if [[ "$OS" == "Debian" ]]; then
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Filebrowser
After=network-online.target
@@ -157,9 +161,9 @@ Restart=always
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now filebrowser
else
cat <<EOF >"$SERVICE_PATH"
#!/sbin/openrc-run
command="/usr/local/bin/filebrowser"
@@ -172,14 +176,14 @@ depend() {
need net
}
EOF
chmod +x "$SERVICE_PATH"
rc-update add filebrowser default &>/dev/null
rc-service filebrowser start &>/dev/null
fi
msg_ok "Service created successfully"
chmod +x "$SERVICE_PATH"
rc-update add filebrowser default &>/dev/null
rc-service filebrowser start &>/dev/null
fi
msg_ok "Service created successfully"
echo -e "${CM} ${GN}${APP} is reachable at: ${BL}http://$IP:$PORT${CL}"
echo -e "${CM} ${GN}${APP} is reachable at: ${BL}http://$IP:$PORT${CL}"
else
echo -e "${YW}⚠️ Installation skipped. Exiting.${CL}"
exit 0
echo -e "${YW}⚠️ Installation skipped. Exiting.${CL}"
exit 0
fi

View File

@@ -30,6 +30,10 @@ function msg_info() { echo -e "${INFO} ${YW}$1...${CL}"; }
function msg_ok() { echo -e "${CM} ${GN}$1${CL}"; }
function msg_error() { echo -e "${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "glances" "addon"
get_lxc_ip() {
if command -v hostname >/dev/null 2>&1 && hostname -I 2>/dev/null; then
hostname -I | awk '{print $1}'

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# HEADER
@@ -104,6 +106,10 @@ function update() {
$STD npm run build
msg_ok "Built ${APP}"
msg_info "Updating service"
create_service
msg_ok "Updated service"
msg_info "Starting service"
systemctl start immich-proxy
msg_ok "Started service"
@@ -112,6 +118,27 @@ function update() {
fi
}
function create_service() {
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}/app
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/dist/index.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
}
# ==============================================================================
# INSTALL
# ==============================================================================
@@ -173,23 +200,7 @@ EOF
msg_ok "Created configuration"
msg_info "Creating service"
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/server.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
create_service
systemctl enable -q --now immich-proxy
msg_ok "Created and started service"

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# HEADER

View File

@@ -26,6 +26,11 @@ BFR="\\r\\033[K"
HOLD="-"
CM="${GN}${CL}"
silent() { "$@" >/dev/null 2>&1; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "netdata" "tool"
set -e
header_info
echo "Loading..."

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

View File

@@ -27,6 +27,11 @@ HOLD="-"
CM="${GN}${CL}"
APP="OliveTin"
hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "olivetin" "addon"
set -e
header_info

View File

@@ -29,6 +29,10 @@ APP="phpMyAdmin"
INSTALL_DIR_DEBIAN="/var/www/html/phpMyAdmin"
INSTALL_DIR_ALPINE="/usr/share/phpmyadmin"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "phpmyadmin" "addon"
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
[[ -z "$IP" ]] && IP=$(hostname -I | awk '{print $1}')

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

View File

@@ -8,11 +8,13 @@
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

View File

@@ -28,6 +28,11 @@ function msg_error() {
local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pyenv" "addon"
if command -v pveversion >/dev/null 2>&1; then
msg_error "Can't Install on Proxmox "
exit

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

View File

@@ -36,6 +36,10 @@ msg_ok() {
echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "webmin" "addon"
header_info
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Webmin Installer" --yesno "This Will Install Webmin on this LXC Container. Proceed?" 10 58

View File

@@ -31,6 +31,10 @@ HOLD=" "
CM="${GN}${CL} "
CROSS="${RD}${CL} "
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-iptag" "tool"
# Stop any running spinner
stop_spinner() {
if [ -n "$SPINNER_PID" ] && kill -0 "$SPINNER_PID" 2>/dev/null; then

View File

@@ -22,6 +22,10 @@ CM='\xE2\x9C\x94\033'
GN="\033[1;92m"
CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "clean-lxcs" "tool"
header_info
echo "Loading..."

View File

@@ -5,8 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
function header_info {
clear
cat <<"EOF"
____ ________ ____ __ __ __ _ ____ ___
/ __ \_________ _ ______ ___ ____ _ __ / ____/ /__ ____ _____ / __ \_________ / /_ ____ _____ ___ ____/ / / /| | / / |/ /____
/ /_/ / ___/ __ \| |/_/ __ `__ \/ __ \| |/_/ / / / / _ \/ __ `/ __ \ / / / / ___/ __ \/ __ \/ __ `/ __ \/ _ \/ __ / / / | | / / /|_/ / ___/
@@ -16,62 +16,66 @@ function header_info {
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "clean-orphaned-lvm" "tool"
# Function to check for orphaned LVM volumes
function find_orphaned_lvm {
echo -e "\n🔍 Scanning for orphaned LVM volumes...\n"
echo -e "\n🔍 Scanning for orphaned LVM volumes...\n"
orphaned_volumes=()
while read -r lv vg size seg_type; do
# Exclude system-critical LVs and Ceph OSDs
if [[ "$lv" == "data" || "$lv" == "root" || "$lv" == "swap" || "$lv" =~ ^osd-block- ]]; then
continue
fi
# Exclude thin pools (any name)
if [[ "$seg_type" == "thin-pool" ]]; then
continue
fi
container_id=$(echo "$lv" | grep -oE "[0-9]+" | head -1)
# Check if the ID exists as a VM or LXC container
if [ -f "/etc/pve/lxc/${container_id}.conf" ] || [ -f "/etc/pve/qemu-server/${container_id}.conf" ]; then
continue
fi
container_id=$(echo "$lv" | grep -oE "[0-9]+" | head -1)
# Check if the ID exists as a VM or LXC container
if [ -f "/etc/pve/lxc/${container_id}.conf" ] || [ -f "/etc/pve/qemu-server/${container_id}.conf" ]; then
continue
fi
orphaned_volumes+=("$lv" "$vg" "$size")
done < <(lvs --noheadings -o lv_name,vg_name,lv_size,seg_type --separator ' ' 2>/dev/null | awk '{print $1, $2, $3, $4}')
orphaned_volumes+=("$lv" "$vg" "$size")
done < <(lvs --noheadings -o lv_name,vg_name,lv_size,seg_type --separator ' ' 2>/dev/null | awk '{print $1, $2, $3, $4}')
# Display orphaned volumes
echo -e "❗ The following orphaned LVM volumes were found:\n"
printf "%-25s %-10s %-10s\n" "LV Name" "VG" "Size"
printf "%-25s %-10s %-10s\n" "-------------------------" "----------" "----------"
for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do
printf "%-25s %-10s %-10s\n" "${orphaned_volumes[i]}" "${orphaned_volumes[i + 1]}" "${orphaned_volumes[i + 2]}"
done
echo ""
}
# Function to delete selected volumes
function delete_orphaned_lvm {
for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do
lv="${orphaned_volumes[i]}"
vg="${orphaned_volumes[i + 1]}"
size="${orphaned_volumes[i + 2]}"
read -p "❓ Do you want to delete $lv (VG: $vg, Size: $size)? [y/N]: " confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
echo -e "🗑️ Deleting $lv from $vg..."
lvremove -f "$vg/$lv"
if [ $? -eq 0 ]; then
echo -e "✅ Successfully deleted $lv.\n"
else
echo -e "❌ Failed to delete $lv.\n"
fi
else
echo -e "⚠️ Skipping $lv.\n"
fi
done
read -p "❓ Do you want to delete $lv (VG: $vg, Size: $size)? [y/N]: " confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
echo -e "🗑️ Deleting $lv from $vg..."
lvremove -f "$vg/$lv"
if [ $? -eq 0 ]; then
echo -e "✅ Successfully deleted $lv.\n"
else
echo -e "❌ Failed to delete $lv.\n"
fi
else
echo -e "⚠️ Skipping $lv.\n"
fi
done
}
# Run script

View File

@@ -7,8 +7,8 @@
clear
if command -v pveversion >/dev/null 2>&1; then
echo -e "⚠️ Can't Run from the Proxmox Shell"
exit
echo -e "⚠️ Can't Run from the Proxmox Shell"
exit
fi
YW=$(echo "\033[33m")
BL=$(echo "\033[36m")
@@ -23,16 +23,16 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}"
APP="Home Assistant Container"
while true; do
read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
clear
function header_info {
cat <<"EOF"
cat <<"EOF"
__ __ ___ _ __ __
/ / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_
/ /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/
@@ -44,35 +44,39 @@ EOF
header_info
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "container-restore" "tool"
function msg_info() {
local msg="$1"
echo -ne " ${HOLD} ${YW}${msg}..."
local msg="$1"
echo -ne " ${HOLD} ${YW}${msg}..."
}
function msg_ok() {
local msg="$1"
echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
local msg="$1"
echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
}
function msg_error() {
local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
if [ -z "$(ls -A /var/lib/docker/volumes/hass_config/_data/backups/)" ]; then
msg_error "No backups found! \n"
exit 1
msg_error "No backups found! \n"
exit 1
fi
DIR=/var/lib/docker/volumes/hass_config/_data/restore
if [ -d "$DIR" ]; then
msg_ok "Restore Directory Exists."
msg_ok "Restore Directory Exists."
else
mkdir -p /var/lib/docker/volumes/hass_config/_data/restore
msg_ok "Created Restore Directory."
fi
cd /var/lib/docker/volumes/hass_config/_data/backups/
PS3="Please enter your choice: "
files="$(ls -A .)"
select filename in ${files}; do
msg_ok "You selected ${BL}${filename}${CL}"
break
msg_ok "You selected ${BL}${filename}${CL}"
break
done
msg_info "Stopping Home Assistant"
docker stop homeassistant &>/dev/null

View File

@@ -7,8 +7,8 @@
clear
if command -v pveversion >/dev/null 2>&1; then
echo -e "⚠️ Can't Run from the Proxmox Shell"
exit
echo -e "⚠️ Can't Run from the Proxmox Shell"
exit
fi
YW=$(echo "\033[33m")
BL=$(echo "\033[36m")
@@ -23,16 +23,16 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}"
APP="Home Assistant Core"
while true; do
read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
clear
function header_info {
cat <<"EOF"
cat <<"EOF"
__ __ ___ _ __ __ ______
/ / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_ / ____/___ ________
/ /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/ / / / __ \/ ___/ _ \
@@ -44,35 +44,39 @@ EOF
header_info
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "core-restore" "tool"
function msg_info() {
local msg="$1"
echo -ne " ${HOLD} ${YW}${msg}..."
local msg="$1"
echo -ne " ${HOLD} ${YW}${msg}..."
}
function msg_ok() {
local msg="$1"
echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
local msg="$1"
echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
}
function msg_error() {
local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
if [ -z "$(ls -A /root/.homeassistant/backups/)" ]; then
msg_error "No backups found! \n"
exit 1
msg_error "No backups found! \n"
exit 1
fi
DIR=/root/.homeassistant/restore
if [ -d "$DIR" ]; then
msg_ok "Restore Directory Exists."
msg_ok "Restore Directory Exists."
else
mkdir -p /root/.homeassistant/restore
msg_ok "Created Restore Directory."
fi
cd /root/.homeassistant/backups/
PS3="Please enter your choice: "
files="$(ls -A .)"
select filename in ${files}; do
msg_ok "You selected ${BL}${filename}${CL}"
break
msg_ok "You selected ${BL}${filename}${CL}"
break
done
msg_info "Stopping Home Assistant"
sudo service homeassistant stop

View File

@@ -22,6 +22,11 @@ RD=$(echo "\033[01;31m")
CM='\xE2\x9C\x94\033'
GN=$(echo "\033[1;92m")
CL=$(echo "\033[m")
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "execute-lxcs" "tool"
header_info
echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Execute" --yesno "This will execute a command inside selected LXC Containers. Proceed?" 10 58
@@ -40,7 +45,6 @@ if [ $? -ne 0 ]; then
exit
fi
read -r -p "Enter here command for inside the containers: " custom_command
header_info
@@ -50,12 +54,11 @@ function execute_in() {
container=$1
name=$(pct exec "$container" hostname)
echo -e "${BL}[Info]${GN} Execute inside${BL} ${name}${GN} with output: ${CL}"
if ! pct exec "$container" -- bash -c "command ${custom_command} >/dev/null 2>&1"
then
echo -e "${BL}[Info]${GN} Skipping ${name} ${RD}$container has no command: ${custom_command}"
else
pct exec "$container" -- bash -c "${custom_command}" | tee
fi
if ! pct exec "$container" -- bash -c "command ${custom_command} >/dev/null 2>&1"; then
echo -e "${BL}[Info]${GN} Skipping ${name} ${RD}$container has no command: ${custom_command}"
else
pct exec "$container" -- bash -c "${custom_command}" | tee
fi
}
for container in $(pct list | awk '{if(NR>1) print $1}'); do

View File

@@ -15,6 +15,11 @@ function header_info {
/___/ /_/ /_/
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "frigate-support" "tool"
header_info
while true; do
read -p "This will Prepare a LXC Container for Frigate. Proceed (y/n)?" yn

View File

@@ -19,6 +19,10 @@ RD="\033[01;31m"
GN="\033[1;92m"
CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "fstrim" "tool"
LOGFILE="/var/log/fstrim.log"
touch "$LOGFILE"
chmod 600 "$LOGFILE"

View File

@@ -16,6 +16,10 @@ function header_info {
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "host-backup" "tool"
# Function to perform backup
function perform_backup {
local BACKUP_PATH

View File

@@ -29,6 +29,11 @@ BFR="\\r\\033[K"
HOLD="-"
CM="${GN}${CL}"
set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "hw-acceleration" "tool"
header_info
echo "Loading..."
function msg_info() {

View File

@@ -22,6 +22,10 @@ GN="\033[1;92m"
RD="\033[01;31m"
CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "kernel-clean" "tool"
# Detect current kernel
current_kernel=$(uname -r)
available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print $2}' | grep -v "$current_kernel" | sort -V)

View File

@@ -25,6 +25,11 @@ HOLD="-"
CM="${GN}${CL}"
current_kernel=$(uname -r)
available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print substr($2, 16, length($2)-22)}')
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "kernel-pin" "tool"
header_info
function msg_info() {

View File

@@ -38,6 +38,10 @@ CL=$(echo "\033[m")
TAB=" "
CM="${TAB}✔️${TAB}${CL}"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "lxc-delete" "tool"
header_info
echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Deletion" --yesno "This will delete LXC containers. Proceed?" 10 58

View File

@@ -29,6 +29,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "microcode" "tool"
header_info
current_microcode=$(journalctl -k | grep -i 'microcode: Current revision:' | grep -oP 'Current revision: \K0x[0-9a-f]+')
[ -z "$current_microcode" ] && current_microcode="Not found."

View File

@@ -15,6 +15,10 @@ cat <<"EOF"
EOF
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "monitor-all" "tool"
add() {
echo -e "\n IMPORTANT: Tag-Based Monitoring Enabled"
echo "Only VMs and containers with the tag 'mon-restart' will be automatically restarted by this service."
@@ -28,9 +32,9 @@ add() {
while true; do
read -p "This script will add Monitor All to Proxmox VE. Proceed (y/n)? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
@@ -175,5 +179,8 @@ CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "Monitor-All f
case $CHOICE in
"Add") add ;;
"Remove") remove ;;
*) echo "Exiting..."; exit 0 ;;
*)
echo "Exiting..."
exit 0
;;
esac

View File

@@ -19,8 +19,8 @@ INFO="${TAB}${TAB}${CL}"
WARN="${TAB}⚠️${TAB}${CL}"
function header_info {
clear
cat <<"EOF"
_ ____________ ____ __________ ___ ____ _ __ __
/ | / / _/ ____/ / __ \/ __/ __/ /___ ____ _____/ (_)___ ____ _ / __ \(_)________ _/ /_ / /__ _____
@@ -33,6 +33,10 @@ Enhanced version supporting both e1000e and e1000 drivers
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "nic-offloading-fix" "tool"
header_info
function msg_info() { echo -e "${INFO} ${YW}${1}...${CL}"; }
@@ -42,15 +46,18 @@ function msg_warn() { echo -e "${WARN} ${YWB}${1}"; }
# Check for root privileges
if [ "$(id -u)" -ne 0 ]; then
msg_error "Error: This script must be run as root."
exit 1
msg_error "Error: This script must be run as root."
exit 1
fi
if ! command -v ethtool >/dev/null 2>&1; then
msg_info "Installing ethtool"
apt-get update &>/dev/null
apt-get install -y ethtool &>/dev/null || { msg_error "Failed to install ethtool. Exiting."; exit 1; }
msg_ok "ethtool installed successfully"
msg_info "Installing ethtool"
apt-get update &>/dev/null
apt-get install -y ethtool &>/dev/null || {
msg_error "Failed to install ethtool. Exiting."
exit 1
}
msg_ok "ethtool installed successfully"
fi
# Get list of network interfaces using Intel e1000e or e1000 drivers
@@ -60,95 +67,95 @@ COUNT=0
msg_info "Searching for Intel e1000e and e1000 interfaces"
for device in /sys/class/net/*; do
interface="$(basename "$device")" # or adjust the rest of the usages below, as mostly you'll use the path anyway
# Skip loopback interface and virtual interfaces
if [[ "$interface" != "lo" ]] && [[ ! "$interface" =~ ^(tap|fwbr|veth|vmbr|bonding_masters) ]]; then
# Check if the interface uses the e1000e or e1000 driver
driver=$(basename $(readlink -f /sys/class/net/$interface/device/driver 2>/dev/null) 2>/dev/null)
if [[ "$driver" == "e1000e" ]] || [[ "$driver" == "e1000" ]]; then
# Get MAC address for additional identification
mac=$(cat /sys/class/net/$interface/address 2>/dev/null)
INTERFACES+=("$interface" "Intel $driver NIC ($mac)")
((COUNT++))
fi
interface="$(basename "$device")" # or adjust the rest of the usages below, as mostly you'll use the path anyway
# Skip loopback interface and virtual interfaces
if [[ "$interface" != "lo" ]] && [[ ! "$interface" =~ ^(tap|fwbr|veth|vmbr|bonding_masters) ]]; then
# Check if the interface uses the e1000e or e1000 driver
driver=$(basename $(readlink -f /sys/class/net/$interface/device/driver 2>/dev/null) 2>/dev/null)
if [[ "$driver" == "e1000e" ]] || [[ "$driver" == "e1000" ]]; then
# Get MAC address for additional identification
mac=$(cat /sys/class/net/$interface/address 2>/dev/null)
INTERFACES+=("$interface" "Intel $driver NIC ($mac)")
((COUNT++))
fi
fi
done
# Check if any Intel e1000e/e1000 interfaces were found
if [ ${#INTERFACES[@]} -eq 0 ]; then
whiptail --title "Error" --msgbox "No Intel e1000e or e1000 network interfaces found!" 10 60
msg_error "No Intel e1000e or e1000 network interfaces found! Exiting."
exit 1
whiptail --title "Error" --msgbox "No Intel e1000e or e1000 network interfaces found!" 10 60
msg_error "No Intel e1000e or e1000 network interfaces found! Exiting."
exit 1
fi
msg_ok "Found ${BL}$COUNT${GN} Intel e1000e/e1000 interfaces"
# Create a checklist for interface selection with all interfaces initially checked
INTERFACES_CHECKLIST=()
for ((i = 0; i < ${#INTERFACES[@]}; i += 2)); do
INTERFACES_CHECKLIST+=("${INTERFACES[i]}" "${INTERFACES[i + 1]}" "ON")
done
# Show interface selection checklist
SELECTED_INTERFACES=$(whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Network Interfaces" \
--separate-output --checklist "Select Intel e1000e/e1000 network interfaces\n(Space to toggle, Enter to confirm):" 15 80 6 \
"${INTERFACES_CHECKLIST[@]}" 3>&1 1>&2 2>&3)
exitstatus=$?
if [ $exitstatus != 0 ]; then
msg_info "User canceled. Exiting."
exit 0
msg_info "User canceled. Exiting."
exit 0
fi
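The 3>&1 1>&2 2>&3 redirection above is the usual trick for capturing whiptail output: whiptail writes the selection to stderr, so the descriptors are swapped to route it into the command substitution while the dialog still reaches the terminal. A generic sketch:

# Capture a whiptail choice into a variable
CHOICE=$(whiptail --menu "Pick one" 12 40 2 "a" "first" "b" "second" 3>&1 1>&2 2>&3)
echo "You picked: $CHOICE"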
# Check if any interfaces were selected
if [ -z "$SELECTED_INTERFACES" ]; then
msg_error "No interfaces selected. Exiting."
exit 0
msg_error "No interfaces selected. Exiting."
exit 0
fi
# Convert the selected interfaces into an array
readarray -t INTERFACE_ARRAY <<<"$SELECTED_INTERFACES"
# Show the number of selected interfaces
INTERFACE_COUNT=${#INTERFACE_ARRAY[@]}
# Print selected interfaces with their driver types
for iface in "${INTERFACE_ARRAY[@]}"; do
driver=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
msg_ok "Selected interface: ${BL}$iface${GN} (${BL}$driver${GN})"
done
# Ask for confirmation with the list of selected interfaces
CONFIRMATION_MSG="You have selected the following interface(s):\n\n"
for iface in "${INTERFACE_ARRAY[@]}"; do
SPEED=$(cat /sys/class/net/$iface/speed 2>/dev/null || echo "Unknown")
MAC=$(cat /sys/class/net/$iface/address 2>/dev/null)
DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
CONFIRMATION_MSG+="- $iface (Driver: $DRIVER, MAC: $MAC, Speed: ${SPEED}Mbps)\n"
done
CONFIRMATION_MSG+="\nThis will create systemd service(s) to disable offloading features.\n\nProceed?"
if ! whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Confirmation" \
--yesno "$CONFIRMATION_MSG" 20 80; then
msg_info "User canceled. Exiting."
exit 0
--yesno "$CONFIRMATION_MSG" 20 80; then
msg_info "User canceled. Exiting."
exit 0
fi
# Loop through all selected interfaces and create services for each
for SELECTED_INTERFACE in "${INTERFACE_ARRAY[@]}"; do
# Get the driver type for this specific interface
DRIVER=$(basename $(readlink -f /sys/class/net/$SELECTED_INTERFACE/device/driver 2>/dev/null) 2>/dev/null)
# Create service name for this interface
SERVICE_NAME="disable-nic-offload-$SELECTED_INTERFACE.service"
SERVICE_PATH="/etc/systemd/system/$SERVICE_NAME"
# Create the service file with driver-specific optimizations
msg_info "Creating systemd service for interface: ${BL}$SELECTED_INTERFACE${YW} (${BL}$DRIVER${YW})"
# Start with the common part of the service file
cat >"$SERVICE_PATH" <<EOF
[Unit]
Description=Disable NIC offloading for Intel $DRIVER interface $SELECTED_INTERFACE
After=network.target
@@ -163,45 +170,49 @@ RemainAfterExit=true
WantedBy=multi-user.target
EOF
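The middle of the heredoc (the [Service] section) is not shown in this hunk. A minimal sketch of what such a unit could look like, assuming ethtool -K is what actually disables the offload features; the unit type, ExecStart line, feature list, and interface name here are illustrative, not taken from the script:

[Unit]
Description=Disable NIC offloading for Intel e1000e interface eth0
After=network.target

[Service]
Type=oneshot
# Hypothetical command; the real script generates this per interface/driver
ExecStart=/usr/sbin/ethtool -K eth0 tso off gso off gro off
RemainAfterExit=true

[Install]
WantedBy=multi-user.target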
# Check if service file was created successfully
if [ ! -f "$SERVICE_PATH" ]; then
whiptail --title "Error" --msgbox "Failed to create service file for $SELECTED_INTERFACE!" 10 50
msg_error "Failed to create service file for $SELECTED_INTERFACE! Skipping to next interface."
continue
fi
# Configure this service
{
echo "25"
sleep 0.2
# Reload systemd to recognize the new service
systemctl daemon-reload
echo "50"
sleep 0.2
# Start the service
systemctl start "$SERVICE_NAME"
echo "75"
sleep 0.2
# Enable the service to start on boot
systemctl enable "$SERVICE_NAME"
echo "100"
sleep 0.2
} | whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --gauge "Configuring service for $SELECTED_INTERFACE..." 10 80 0
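For reference, whiptail --gauge simply reads successive percentage values from stdin and advances the bar accordingly, which is why the brace group above interleaves echo with the systemctl calls. A self-contained example:

# A trivial gauge: jumps to 50%, then 100%
{ echo 50; sleep 1; echo 100; sleep 1; } | whiptail --gauge "Working..." 6 50 0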
# Individual service status
if systemctl is-active --quiet "$SERVICE_NAME"; then
SERVICE_STATUS="Active"
else
SERVICE_STATUS="Inactive"
fi
if systemctl is-enabled --quiet "$SERVICE_NAME"; then
BOOT_STATUS="Enabled"
else
BOOT_STATUS="Disabled"
fi
# Show individual service results
msg_ok "Service for ${BL}$SELECTED_INTERFACE${GN} (${BL}$DRIVER${GN}) created and enabled!"
msg_info "${TAB}Service: ${BL}$SERVICE_NAME${YW}"
msg_info "${TAB}Status: ${BL}$SERVICE_STATUS${YW}"
msg_info "${TAB}Start on boot: ${BL}$BOOT_STATUS${YW}"
done
# Prepare summary of all interfaces
@@ -209,22 +220,22 @@ SUMMARY_MSG="Services created successfully!\n\n"
SUMMARY_MSG+="Configured Interfaces:\n"
for iface in "${INTERFACE_ARRAY[@]}"; do
SERVICE_NAME="disable-nic-offload-$iface.service"
DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
if systemctl is-active --quiet "$SERVICE_NAME"; then
SVC_STATUS="Active"
else
SVC_STATUS="Inactive"
fi
SERVICE_NAME="disable-nic-offload-$iface.service"
DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
if systemctl is-enabled --quiet "$SERVICE_NAME"; then
BOOT_SVC_STATUS="Enabled"
else
BOOT_SVC_STATUS="Disabled"
fi
if systemctl is-active --quiet "$SERVICE_NAME"; then
SVC_STATUS="Active"
else
SVC_STATUS="Inactive"
fi
SUMMARY_MSG+="- $iface ($DRIVER): $SVC_STATUS, Boot: $BOOT_SVC_STATUS\n"
if systemctl is-enabled --quiet "$SERVICE_NAME"; then
BOOT_SVC_STATUS="Enabled"
else
BOOT_SVC_STATUS="Disabled"
fi
SUMMARY_MSG+="- $iface ($DRIVER): $SVC_STATUS, Boot: $BOOT_SVC_STATUS\n"
done
# Show summary results
@@ -236,8 +247,8 @@ msg_ok "Intel e1000e/e1000 optimization complete for ${#INTERFACE_ARRAY[@]} inte
echo ""
msg_info "Verification commands:"
for iface in "${INTERFACE_ARRAY[@]}"; do
echo -e "${TAB}${BL}ethtool -k $iface${CL} ${YW}# Check offloading status${CL}"
echo -e "${TAB}${BL}systemctl status disable-nic-offload-$iface.service${CL} ${YW}# Check service status${CL}"
echo -e "${TAB}${BL}ethtool -k $iface${CL} ${YW}# Check offloading status${CL}"
echo -e "${TAB}${BL}systemctl status disable-nic-offload-$iface.service${CL} ${YW}# Check service status${CL}"
done
exit 0

View File

@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs3-upgrade" "tool"
start_routines() {
header_info
CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 2 BACKUP" --menu "\nMake a backup of /etc/proxmox-backup to ensure that in the worst case, any relevant configuration can be recovered?" 14 58 2 \

View File

@@ -32,6 +32,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs4-upgrade" "tool"
start_routines() {
header_info
CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 3 BACKUP" --menu \

View File

@@ -28,6 +28,10 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}"
msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs-microcode" "tool"
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }

View File

@@ -32,6 +32,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "post-pbs-install" "tool"
# ---- helpers ----
get_pbs_codename() {
awk -F'=' '/^VERSION_CODENAME=/{print $2}' /etc/os-release

View File

@@ -43,6 +43,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "post-pmg-install" "tool"
if ! grep -q "Proxmox Mail Gateway" /etc/issue 2>/dev/null; then
msg_error "This script is only intended for Proxmox Mail Gateway"
exit 1

View File

@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "post-pve-install" "tool"
get_pve_version() {
local pve_ver
pve_ver="$(pveversion | awk -F'/' '{print $2}' | awk -F'-' '{print $1}')"

View File

@@ -11,7 +11,9 @@ if ! command -v curl >/dev/null 2>&1; then
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://git.community-scripts.org/community-scripts/ProxmoxVE/raw/branch/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
load_functions
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pve-privilege-converter" "tool"
set -euo pipefail
shopt -s inherit_errexit nullglob

View File

@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pve8-upgrade" "tool"
start_routines() {
header_info

View File

@@ -5,6 +5,11 @@
# License: MIT
# https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "scaling-governor" "tool"
header_info() {
clear
cat <<EOF

View File

@@ -5,6 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/refs/heads/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "update-apps" "tool"
# =============================================================================
# CONFIGURATION VARIABLES
@@ -98,14 +100,14 @@ EOF
# Handle command line arguments
case "${1:-}" in
--help | -h)
print_usage
exit 0
;;
--export-config)
export_config_json
exit 0
;;
esac
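A usage sketch for these flags (the script filename is illustrative; the flags themselves come from the case statement above):

bash update-apps.sh --help            # print the built-in usage text and exit
bash update-apps.sh --export-config   # dump the configuration via export_config_json and exit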
# =============================================================================
@@ -202,40 +204,40 @@ msg_ok "Loaded ${#menu_items[@]} containers"
# Determine container selection based on var_container
if [[ -n "$var_container" ]]; then
case "$var_container" in
all)
# Select all containers with matching tags
CHOICE=""
for ((i = 0; i < ${#menu_items[@]}; i += 3)); do
CHOICE="$CHOICE ${menu_items[$i]}"
done
CHOICE=$(echo "$CHOICE" | xargs)
;;
all_running)
# Select only running containers with matching tags
CHOICE=""
for ((i = 0; i < ${#menu_items[@]}; i += 3)); do
cid="${menu_items[$i]}"
if pct status "$cid" 2>/dev/null | grep -q "running"; then
CHOICE="$CHOICE $cid"
fi
done
CHOICE=$(echo "$CHOICE" | xargs)
;;
all_stopped)
# Select only stopped containers with matching tags
CHOICE=""
for ((i = 0; i < ${#menu_items[@]}; i += 3)); do
cid="${menu_items[$i]}"
if pct status "$cid" 2>/dev/null | grep -q "stopped"; then
CHOICE="$CHOICE $cid"
fi
done
CHOICE=$(echo "$CHOICE" | xargs)
;;
*)
# Assume comma-separated list of container IDs
CHOICE=$(echo "$var_container" | tr ',' ' ')
;;
esac
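For reference, var_container accepts all, all_running, all_stopped, or a comma-separated list of container IDs. A hypothetical way of supplying it (how the variable is normally set is not shown in this hunk):

var_container="101,105" bash update-apps.sh      # only containers 101 and 105
var_container="all_running" bash update-apps.sh  # every running, tag-matching container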
if [[ -z "$CHOICE" ]]; then

View File

@@ -24,6 +24,11 @@ RD=$(echo "\033[01;31m")
CM='\xE2\x9C\x94\033'
GN=$(echo "\033[1;92m")
CL=$(echo "\033[m")
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "update-lxcs" "tool"
header_info
echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Updater" --yesno "This Will Update LXC Containers. Proceed?" 10 58

View File

@@ -23,6 +23,10 @@ RD=$(echo "\033[01;31m")
GN=$(echo "\033[1;92m")
CL=$(echo "\033[m")
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "update-repo" "tool"
header_info
echo "Loading..."
NODE=$(hostname)

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}
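The reworked cleanup() only reports a final status when an initial record was actually sent (POST_TO_API_DONE) and no final status has gone out yet (POST_UPDATE_DONE), and it distinguishes success from failure via the captured exit code. A minimal sketch of the wiring this relies on, assuming cleanup() is registered as an EXIT trap as in these VM scripts:

TEMP_DIR=$(mktemp -d)
pushd "$TEMP_DIR" >/dev/null
trap cleanup EXIT
# post_to_api_vm (api.func) is expected to set POST_TO_API_DONE=true once the
# "installing" record exists; post_update_to_api presumably sets
# POST_UPDATE_DONE=true after the final report, so cleanup() cannot double-post.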

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -104,8 +104,16 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
# Only send telemetry if post_to_api_vm was called (installing status was sent)
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -101,8 +101,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -105,7 +105,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -79,8 +79,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -101,8 +101,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -109,8 +109,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -97,7 +97,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,7 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -99,7 +99,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -99,8 +99,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}