Compare commits

...

60 Commits

Author SHA1 Message Date
CanbiZ (MickLesk)
8864e9aa9d Merge main - resolve conflict in build.func (keep improved ERR trap with exit_code check, add SIGHUP trap) 2026-02-17 12:20:43 +01:00
community-scripts-pr-app[bot]
0183ae0fff Update CHANGELOG.md (#12029)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:16:34 +00:00
CanbiZ (MickLesk)
32d1937a74 Refactor: centralize systemd service creation (#12025)
Introduce create_service() to generate the immich-proxy systemd unit and run systemctl daemon-reload. Replace duplicated heredoc service blocks in install with a call to create_service, and invoke create_service during update before starting the service. Adjust unit WorkingDirectory to ${INSTALL_PATH}/app and ExecStart to run dist/index.js.
2026-02-17 12:16:09 +01:00
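A minimal sketch of the create_service() helper described in the commit above. The unit name, WorkingDirectory and ExecStart target come from the commit message; Description, After=, Restart= and the node binary path are assumptions, and the actual function in the repository may differ.

create_service() {
  cat <<EOF >/etc/systemd/system/immich-proxy.service
[Unit]
Description=Immich Public Proxy
After=network.target

[Service]
WorkingDirectory=${INSTALL_PATH}/app
ExecStart=/usr/bin/node dist/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  systemctl daemon-reload
}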
community-scripts-pr-app[bot]
0a7bd20b06 Update CHANGELOG.md (#12028)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:15:14 +00:00
CanbiZ (MickLesk)
c9ecb1ccca core: smart recovery for failed installs | extend exit_codes (#11221)
* feat(build.func): smart error recovery menu for failed installations

Replace simple Y/n removal prompt with interactive recovery menu:

- Option 1: Remove container and exit (default, auto after 60s timeout)
- Option 2: Keep container for debugging
- Option 3: Retry installation with verbose mode enabled
- Option 4: Retry with 1.5x RAM and +1 CPU core (OOM errors only)

Improvements:
- Detect OOM errors (exit codes 137, 243) and offer resource increase
- Show human-readable error explanation using explain_exit_code()
- Recursive rebuild preserves ALL settings from advanced/app.vars/default.vars
- Settings preserved: Network (IP, Gateway, VLAN, MTU, Bridge), Features
  (Nesting, FUSE, TUN, GPU), Storage, SSH keys, Tags, Hostname, etc.
- Show rebuild summary before retry (old→new CTID, resources, network)
- New container ID generated automatically for rebuilds

This helps users recover from transient failures without re-running
the entire script manually.

* fix(api.func): fix duplicate exit codes and add missing error codes

Exit code fixes:
- Remove duplicate definitions for codes 243, 254 (Node.js vs DB)
- Reassign MySQL/MariaDB to 240-242, 244 (was 241-244)
- Reassign MongoDB to 250-253 (was 251-254)

New exit codes added (based on GitHub issues analysis):
- 6: curl couldn't resolve host (DNS failure)
- 7: curl failed to connect (network unreachable)
- 22: curl HTTP error (404, 429 rate limit, 500)
- 28: curl timeout (very common in download failures)
- 35: curl SSL error
- 102: APT lock held by another process
- 124: Command timeout
- 141: SIGPIPE (broken pipe)

Also update OOM detection to include exit code 134 (SIGABRT)
which is commonly seen in Node.js heap overflow issues.

Fixes based on analysis of ~500 GitHub issues.

* fix(exit-codes): sync error_handler.func and api.func with conflict-free code ranges

- Add curl error codes (6, 7, 22, 28, 35)
- Add APT lock code (102), timeout (124), signals (134, 141)
- Move Python codes: 210-212 → 160-162 (avoid Proxmox conflict)
- Move PostgreSQL codes: 231-234 → 170-173
- Move MySQL/MariaDB codes: 241-244 → 180-183
- Move MongoDB codes: 251-254 → 190-193
- Keep Node.js at 243-249, Proxmox at 200-231
- Both files now synchronized with identical mappings

* feat(exit-codes): add systemd and build error codes (150-154)

- 150: Systemd service failed to start
- 151: Systemd service unit not found
- 152: Permission denied (EACCES)
- 153: Build/compile failed (make/gcc/cmake)
- 154: Node.js native addon build failed (node-gyp)

Based on issue analysis: 57 service failures, 25 build failures, 22 node-gyp issues

* fix(build): restore smart recovery and add OOM/DNS retry paths

* feat(build): APT in-place repair, exit 1 subclassification, new exit codes

- Add APT/DPKG in-place recovery: detects exit 100/101/102/255 and exit 1
  with APT log patterns, offers to repair dpkg state and re-run install
  script without destroying the container
- Add exit 1 subclassification: analyzes combined log to identify root
  cause (APT, OOM, network, command-not-found) and routes to appropriate
  recovery option
- Add exit 10 hint: shows privileged mode / nesting suggestion
- Add exit 127 hint: extracts missing command name from logs
- Refactor recovery menu: use named option variables (APT_OPTION,
  OOM_OPTION, DNS_OPTION) instead of hardcoded option numbers, supports
  up to 6 dynamic options cleanly
- Map missing exit codes in api.func: curl 27/36/45/47/55, signals
  129 (SIGHUP) / 131 (SIGQUIT), npm 239

* feat(api+build): map 25 more exit codes, add SIGHUP trap, network/perm hints

api.func:
- Map 25+ new exit codes that were showing as 'Unknown' in telemetry:
  curl: 3, 16, 18, 24, 26, 32-34, 39, 44, 46, 48, 51, 52, 57, 59, 61,
  63, 79, 92, 95; signals: 125, 132, 144, 146
- Update code 8 description (FTP + apk untrusted key)
- Update header comment with full supported ranges

build.func:
- Add SIGHUP trap: reports 'failed/129' to the API when the terminal is closed;
  this should significantly reduce the 2841 stuck 'installing' records

- Add exit 52 (empty reply) and 57 (poll error) to network issue
  detection for DNS override recovery option
- Add exit 125/126 hint: suggests privileged mode for permission errors

* fix: sync error_handler fallback, Alpine APK repair, retry limit

error_handler.func:
- Sync fallback explain_exit_code() with api.func: add 25+ codes that
  were missing (curl 16/18/24/26/27/32-34/36/39/44-48/51/52/55/57/59/
  61/63/79/92/95, signals 125/129/131/132/144/146, npm 239, code 3/8)
- Ensures consistent error descriptions even when api.func isn't loaded

build.func:
- Alpine APK repair: detect var_os=alpine and run 'apk fix && apk
  cache clean && apk update' instead of apt-get/dpkg commands
- Show 'Repair APK state' instead of 'APT/DPKG' in menu for Alpine
- Retry safety counter: OOM x2 retry limited to max 2 attempts
  (prevents infinite RAM doubling via RECOVERY_ATTEMPT env var)
- Show attempt count in rebuild summary

* fix(build): preserve exit code in ERR trap to prevent false exit_code=0

The ERR trap called ensure_log_on_host before post_update_to_api,
which reset $? to 0 (success). This caused ~15-20 records/day to be
reported as 'failed' with exit_code=0 instead of the actual error code.

Root cause chain:
1. Command fails with exit code N → ERR trap fires ($? = N)
2. ensure_log_on_host succeeds → $? becomes 0
3. post_update_to_api 'failed' "$?" → sends 'failed/0' (wrong!)
4. POST_UPDATE_DONE=true → EXIT trap skips the correct code

Fix: capture $? into _ERR_CODE before ensure_log_on_host runs.

* Implement telemetry settings and repo source detection

Add telemetry configuration and repository source detection function.
2026-02-17 12:14:46 +01:00
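A minimal sketch of the ERR-trap ordering fix from the last fix bullet above: capture the exit code before any other command can reset $?. The helper names ensure_log_on_host and post_update_to_api are taken from the commit message; the handler name and everything else here is assumed, and the real trap in build.func carries more state.

on_error() {
  local _ERR_CODE=$?                      # capture first: any successful command below resets $?
  ensure_log_on_host || true              # previously ran before the capture and clobbered $?
  post_update_to_api "failed" "$_ERR_CODE"
  POST_UPDATE_DONE=true                   # so the EXIT trap does not report a second status
}
trap 'on_error' ERR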
community-scripts-pr-app[bot]
d274a269b5 Update .app files (#12022)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 10:45:09 +01:00
community-scripts-pr-app[bot]
cbee9d64b5 Update CHANGELOG.md (#12024)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:45 +00:00
community-scripts-pr-app[bot]
ffcda217e3 Update CHANGELOG.md (#12023)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:25 +00:00
community-scripts-pr-app[bot]
438d5d6b94 Update date in json (#12021)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:19 +00:00
push-app-to-main[bot]
104366bc64 Databasus (#12018)
* Add databasus (ct)

* Update databasus.sh

* Update databasus-install.sh

* Fix backup and restore paths for Databasus config

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-17 10:40:58 +01:00
CanbiZ (MickLesk)
16a0329af3 Safer tools.func load and improved error handling
Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.
2026-02-17 09:50:49 +01:00
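A sketch of the curl -> variable -> source pattern described above, including the function-presence check; the URL variable and error messages are placeholders, not the repository's actual wording.

tools_src="$(curl -fsSL "$TOOLS_FUNC_URL")" || {
  echo "Failed to download tools.func" >&2
  exit 1
}
source /dev/stdin <<<"$tools_src"
if ! declare -f fetch_and_deploy_gh_release >/dev/null; then
  echo "tools.func loaded but expected functions are missing" >&2
  exit 1
fi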
community-scripts-pr-app[bot]
9dab79f8ca Update CHANGELOG.md (#12017)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:08:35 +00:00
CanbiZ (MickLesk)
2dddeaf966 Call get_lxc_ip in start() before updates (#12015) 2026-02-17 09:08:09 +01:00
community-scripts-pr-app[bot]
fae06a3a58 Update CHANGELOG.md (#12016)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:07:22 +00:00
Tobias
137272c354 fix: pterodactyl-panel add symlink (#11997) 2026-02-17 09:06:59 +01:00
CanbiZ (MickLesk)
50effb6d86 core: add progress; fix exit status
Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.
2026-02-17 09:02:05 +01:00
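A sketch of what the fire-and-forget post_progress_to_api() ping might look like. The DIAGNOSTICS/RANDOM_UUID gating, the 5-second curl timeout, and the "configuring" status come from the commit message; the endpoint (API_URL) and JSON payload shape are assumptions.

post_progress_to_api() {
  [[ "${DIAGNOSTICS:-no}" == "yes" && -n "${RANDOM_UUID:-}" ]] || return 0
  # short timeout, output and errors discarded: a failed ping never blocks the install
  curl -m 5 -sS -X POST "${API_URL}/progress" \
    -H "Content-Type: application/json" \
    -d "{\"uuid\": \"${RANDOM_UUID}\", \"status\": \"configuring\"}" \
    &>/dev/null || true
}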
community-scripts-pr-app[bot]
52a9e23401 chore: update github-versions.json (#12013)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 06:22:15 +00:00
community-scripts-pr-app[bot]
c2333de180 Update CHANGELOG.md (#12007)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:47 +00:00
community-scripts-pr-app[bot]
ad8974894b chore: update github-versions.json (#12006)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:21 +00:00
community-scripts-pr-app[bot]
38af4be5ba Update CHANGELOG.md (#12005)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 21:38:03 +00:00
Chris
80ae1f34fa Opencloud: Pin version to 5.1.0 (#12004) 2026-02-16 22:37:35 +01:00
community-scripts-pr-app[bot]
06bc6e20d5 chore: update github-versions.json (#12001)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 18:13:41 +00:00
community-scripts-pr-app[bot]
4418e72856 Update CHANGELOG.md (#11999)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 16:14:29 +00:00
CanbiZ (MickLesk)
896714e06f core/vm's: ensure script state is sent on script exit (#11991)
* Ensure API update is sent on script exit

Add exit-time telemetry handling across scripts to avoid orphaned "installing" records. Introduce local exit_code capture in api_exit_script and in the cleanup handlers; when POST_TO_API_DONE is true but POST_UPDATE_DONE is not, post a final status (marking failures on non-zero exit codes, or marking done/failed in VM cleanups based on the exit code). Changes touch misc/build.func, misc/vm-core.func and various vm/*-vm.sh cleanup functions to reliably send post_update_to_api on normal or early exits.

* Update api.func

* fix(telemetry): add missing exit codes to explain_exit_code()

- Add curl error codes: 4, 5, 8, 23, 25, 30, 56, 78
- Add code 10: Docker/privileged mode required (used in ~15 scripts)
- Add code 75: Temporary failure (retry later)
- Add BSD sysexits.h codes: 64-77
- Sync error_handler.func fallback with canonical api.func
2026-02-16 17:14:00 +01:00
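A sketch of the exit-time guard described in the first bullet above, assuming the flag names from the commit message; the done-on-zero branch reflects the later api_exit_script adjustment mentioned further down, and the VM cleanup variants additionally decide done/failed from the exit code.

api_exit_script() {
  local exit_code=$?                     # captured locally, as the commit describes
  if [[ "${POST_TO_API_DONE:-false}" == "true" && "${POST_UPDATE_DONE:-false}" != "true" ]]; then
    if [[ $exit_code -ne 0 ]]; then
      post_update_to_api "failed" "$exit_code"
    else
      post_update_to_api "done" "0"
    fi
    POST_UPDATE_DONE=true
  fi
}
trap api_exit_script EXIT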
community-scripts-pr-app[bot]
96389a02cb chore: update github-versions.json (#11996)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 12:14:25 +00:00
community-scripts-pr-app[bot]
a4e6286260 Update .app files (#11993)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-16 12:49:30 +01:00
community-scripts-pr-app[bot]
a6617cc6a1 Update CHANGELOG.md (#11995)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 11:49:19 +00:00
community-scripts-pr-app[bot]
f1377e6cb0 Update CHANGELOG.md (#11994)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 11:49:03 +00:00
community-scripts-pr-app[bot]
56cff01240 Update date in json (#11992)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-16 11:48:56 +00:00
push-app-to-main[bot]
26ba17c8c3 RomM (#11987)
* Add romm (ct)

* Update romm.sh

* Update romm-install.sh

* Revise author line in romm.sh

Updated author attribution format in romm.sh

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
2026-02-16 12:48:37 +01:00
community-scripts-pr-app[bot]
2bd4b063d9 Update CHANGELOG.md (#11990)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 10:47:03 +00:00
Slaviša Arežina
40bd7dc366 Fix sed command for DB_FILE configuration (#11988) 2026-02-16 11:46:37 +01:00
community-scripts-pr-app[bot]
a81ebcb16c Update CHANGELOG.md (#11986)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:54:09 +00:00
community-scripts-pr-app[bot]
cebdbcc35d Update CHANGELOG.md (#11985)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:53:53 +00:00
CanbiZ (MickLesk)
42475ed4f6 slskd: fix exit position (#11963) 2026-02-16 10:53:41 +01:00
community-scripts-pr-app[bot]
11eba0093f Update CHANGELOG.md (#11984)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:53:29 +00:00
CanbiZ (MickLesk)
f6e535c7b7 cryptpad: restore config earlier and run onlyoffice upgrade (#11964) 2026-02-16 10:53:21 +01:00
community-scripts-pr-app[bot]
58329f99ea Update CHANGELOG.md (#11983)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:53:04 +00:00
CanbiZ (MickLesk)
558220fb0e Vaultwarden: export VW_VERSION as version number (#11966) 2026-02-16 10:52:56 +01:00
community-scripts-pr-app[bot]
61aee12a82 Update CHANGELOG.md (#11982)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:52:40 +00:00
CanbiZ (MickLesk)
215c441129 Improve Zabbix agent service detection (#11968) 2026-02-16 10:52:34 +01:00
community-scripts-pr-app[bot]
32afe0c2e4 Update CHANGELOG.md (#11981)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:52:18 +00:00
summoningpixels
73ee5f8f19 Update Wishlist LXC webpage to include reverse proxy info (#11973) 2026-02-16 10:52:09 +01:00
CanbiZ (MickLesk)
34db7c652f github: add "website" label if "json" changed (#11975) 2026-02-16 10:51:49 +01:00
community-scripts-pr-app[bot]
c5c6e660ba Update CHANGELOG.md (#11980)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:30:31 +00:00
CanbiZ (MickLesk)
ae8dd5ba36 tools.func: persist /usr/local/bin in shell PATHs (#11970) 2026-02-16 10:30:05 +01:00
community-scripts-pr-app[bot]
c975b25ad5 Update .app files (#11978)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-16 10:15:43 +01:00
community-scripts-pr-app[bot]
4e3ee020e4 Update CHANGELOG.md (#11979)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 09:14:49 +00:00
push-app-to-main[bot]
90ce773247 LinkDing (#11976)
* Add linkding (ct)

* Update messages for LinkDing in script

* Update date_created to 2026-02-16

* Update linkding-install.sh

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
2026-02-16 10:14:24 +01:00
CanbiZ (MickLesk)
704f8d7e10 hotfix pipefail issue alpine-teamspeak
Replace the final '| head -1' in both the install and ct scripts with awk 'NR==1' to pick the first matching TeamSpeak release line. Because awk reads the pipe to EOF instead of exiting after the first line, the upstream sed no longer receives SIGPIPE and the pipeline works under 'set -o pipefail', so the ct script's previous temporary toggling of pipefail was also removed, simplifying the command and improving compatibility in minimal environments.
2026-02-16 08:57:06 +01:00
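An illustrative comparison of the two pipelines (the URL and sed pattern are the ones used in the scripts; this is not the exact diff, which appears further below): under set -o pipefail, head -1 exits after one line, the writer can then die from SIGPIPE, and the whole pipeline reports failure, while awk 'NR==1' keeps reading to EOF.

set -o pipefail
URL="https://teamspeak.com/en/downloads/#server"
# old: head exits early; sed may be killed by SIGPIPE and the pipeline fails with 141
RELEASE=$(curl -fsSL "$URL" | sed -n 's/.*teamspeak3-server_linux_amd64-\([0-9.]*[0-9]\).*/\1/p' | head -1)
# new: awk prints only the first match but reads the whole stream, so nothing upstream gets SIGPIPE
RELEASE=$(curl -fsSL "$URL" | sed -n 's/.*teamspeak3-server_linux_amd64-\([0-9.]*[0-9]\).*/\1/p' | awk 'NR==1')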
community-scripts-pr-app[bot]
d7fbbbde0f Update CHANGELOG.md (#11974)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 07:51:33 +00:00
CanbiZ (MickLesk)
2edd2ffaf8 fix: remove duplicate error handler from alpine-install.func (#11971)
- Remove legacy error_handler(), on_exit(), on_interrupt(), on_terminate() and set/trap definitions from alpine-install.func (already provided by error_handler.func which is sourced on line 10)

- The local error_handler() required positional args, but catch_errors() sets the trap as 'error_handler' (without args), causing an unbound-variable error with set -u (nounset)

- error_handler.func uses default values, which are set -u safe

- Also align legacy trap in install.func network_check() to standard format

Fixes #11929
2026-02-16 08:51:05 +01:00
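A nounset-safe handler shape matching the description above: default expansions instead of required positional parameters, so the bare 'error_handler' trap no longer trips set -u. This is a sketch, not the code in error_handler.func.

error_handler() {
  local exit_code="${1:-$?}"                        # works whether or not args are passed
  local line_number="${2:-${BASH_LINENO[0]:-unknown}}"
  local command="${3:-${BASH_COMMAND:-unknown}}"
  echo "Error ${exit_code} in line ${line_number}: ${command}" >&2
}
trap 'error_handler' ERR                            # no args passed, yet safe under set -u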
community-scripts-pr-app[bot]
cba1a0bb6b Update CHANGELOG.md (#11972)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 07:50:08 +00:00
summoningpixels
97bc69452d Update OpenCloud LXC webpage to include services ports for reverse proxy users (#11969)
Included it directly in the reverse proxy warning note
2026-02-16 08:49:42 +01:00
community-scripts-pr-app[bot]
4257954cfa Update CHANGELOG.md (#11967)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 07:29:32 +00:00
CanbiZ (MickLesk)
39354352ff Migrate update script to Seerr; prompt rerun (#11965)
Update ct/jellyseerr.sh and ct/overseerr.sh to switch the container update handler to the Seerr script. The here-doc now uses a single-quoted EOF to avoid shell expansion and includes an explicit shebang for the generated /usr/bin/update. Instead of auto-executing the new update script, the code now informs the user to run 'update' again and exits (overseerr exits with 0). Also includes minor whitespace cleanup (removed trailing spaces on cd lines). This prevents unexpected immediate execution and ensures the generated script runs with the intended shell.
2026-02-16 08:29:04 +01:00
community-scripts-pr-app[bot]
652920ee49 chore: update github-versions.json (#11962)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 06:25:17 +00:00
community-scripts-pr-app[bot]
057bdefcc0 Update CHANGELOG.md (#11957)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 00:22:32 +00:00
community-scripts-pr-app[bot]
74b2a29d37 chore: update github-versions.json (#11956)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 00:22:10 +00:00
Tobias
607e868328 fix: url (#11954)
* fix: url

* Update overseerr.sh
2026-02-15 23:09:16 +01:00
55 changed files with 1926 additions and 203 deletions

3
.github/workflows/autolabeler.yml generated vendored
View File

@@ -100,7 +100,8 @@ jobs:
// If it's an update script PR with json changes and a content label, skip adding website/json
// The PR should be categorized as update script with the content label
if (!(hasUpdateScript && hasJson && hasContentLabel)) {
labelsToAdd.add(hasJson ? "json" : "website");
labelsToAdd.add("website");
if (hasJson) labelsToAdd.add("json");
}
}

View File

@@ -404,6 +404,79 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-17
### 🆕 New Scripts
- Databasus ([#12018](https://github.com/community-scripts/ProxmoxVE/pull/12018))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- fix: pterodactyl-panel add symlink [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11997](https://github.com/community-scripts/ProxmoxVE/pull/11997))
### 💾 Core
- #### 🐞 Bug Fixes
- core: call get_lxc_ip in start() before updates [@MickLesk](https://github.com/MickLesk) ([#12015](https://github.com/community-scripts/ProxmoxVE/pull/12015))
- #### ✨ New Features
- core: smart recovery for failed installs | extend exit_codes [@MickLesk](https://github.com/MickLesk) ([#11221](https://github.com/community-scripts/ProxmoxVE/pull/11221))
### 🧰 Tools
- #### 🔧 Refactor
- Immich Public Proxy: centralize and fix systemd service creation [@MickLesk](https://github.com/MickLesk) ([#12025](https://github.com/community-scripts/ProxmoxVE/pull/12025))
## 2026-02-16
### 🆕 New Scripts
- RomM ([#11987](https://github.com/community-scripts/ProxmoxVE/pull/11987))
- LinkDing ([#11976](https://github.com/community-scripts/ProxmoxVE/pull/11976))
### 🚀 Updated Scripts
- Opencloud: Pin version to 5.1.0 [@vhsdream](https://github.com/vhsdream) ([#12004](https://github.com/community-scripts/ProxmoxVE/pull/12004))
- #### 🐞 Bug Fixes
- Tududi: Fix sed command for DB_FILE configuration [@tremor021](https://github.com/tremor021) ([#11988](https://github.com/community-scripts/ProxmoxVE/pull/11988))
- slskd: fix exit position [@MickLesk](https://github.com/MickLesk) ([#11963](https://github.com/community-scripts/ProxmoxVE/pull/11963))
- cryptpad: restore config earlier and run onlyoffice upgrade [@MickLesk](https://github.com/MickLesk) ([#11964](https://github.com/community-scripts/ProxmoxVE/pull/11964))
- jellyseerr/overseerr: Migrate update script to Seerr; prompt rerun [@MickLesk](https://github.com/MickLesk) ([#11965](https://github.com/community-scripts/ProxmoxVE/pull/11965))
- #### 🔧 Refactor
- core/vm's: ensure script state is sent on script exit [@MickLesk](https://github.com/MickLesk) ([#11991](https://github.com/community-scripts/ProxmoxVE/pull/11991))
- Vaultwarden: export VW_VERSION as version number [@MickLesk](https://github.com/MickLesk) ([#11966](https://github.com/community-scripts/ProxmoxVE/pull/11966))
- Zabbix: Improve zabbix-agent service detection [@MickLesk](https://github.com/MickLesk) ([#11968](https://github.com/community-scripts/ProxmoxVE/pull/11968))
### 💾 Core
- #### ✨ New Features
- tools.func: ensure /usr/local/bin PATH persists for pct enter sessions [@MickLesk](https://github.com/MickLesk) ([#11970](https://github.com/community-scripts/ProxmoxVE/pull/11970))
- #### 🔧 Refactor
- core: remove duplicate error handler from alpine-install.func [@MickLesk](https://github.com/MickLesk) ([#11971](https://github.com/community-scripts/ProxmoxVE/pull/11971))
### 📂 Github
- github: add "website" label if "json" changed [@MickLesk](https://github.com/MickLesk) ([#11975](https://github.com/community-scripts/ProxmoxVE/pull/11975))
### 🌐 Website
- #### 📝 Script Information
- Update Wishlist LXC webpage to include reverse proxy info [@summoningpixels](https://github.com/summoningpixels) ([#11973](https://github.com/community-scripts/ProxmoxVE/pull/11973))
- Update OpenCloud LXC webpage to include services ports [@summoningpixels](https://github.com/summoningpixels) ([#11969](https://github.com/community-scripts/ProxmoxVE/pull/11969))
## 2026-02-15
### 🆕 New Scripts

View File

@@ -27,7 +27,7 @@ function update_script() {
exit
fi
set +o pipefail && RELEASE=$(curl -fsSL https://teamspeak.com/en/downloads/#server | sed -n 's/.*teamspeak3-server_linux_amd64-\([0-9.]*[0-9]\).*/\1/p' | head -1) && set -o pipefail
RELEASE=$(curl -fsSL https://teamspeak.com/en/downloads/#server | sed -n 's/.*teamspeak3-server_linux_amd64-\([0-9.]*[0-9]\).*/\1/p' | awk 'NR==1')
if [ "${RELEASE}" != "$(cat ~/.teamspeak-server)" ] || [ ! -f ~/.teamspeak-server ]; then
msg_info "Updating ${APP} LXC"

View File

@@ -39,17 +39,20 @@ function update_script() {
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "cryptpad" "cryptpad/cryptpad" "tarball"
msg_info "Restoring configuration"
mv /opt/config.js /opt/cryptpad/config/
msg_ok "Configuration restored"
msg_info "Updating CryptaPad"
cd /opt/cryptpad
$STD npm ci
$STD npm run install:components
if [ -f "/opt/cryptpad/install-onlyoffice.sh" ]; then
$STD bash /opt/cryptpad/install-onlyoffice.sh --accept-license
fi
$STD npm run build
msg_ok "Updated CryptaPad"
msg_info "Restoring configuration"
mv /opt/config.js /opt/cryptpad/config/
msg_ok "Configuration restored"
msg_info "Starting Service"
systemctl start cryptpad
msg_ok "Started Service"

78
ct/databasus.sh Normal file
View File

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
APP="Databasus"
var_tags="${var_tags:-backup;postgresql;database}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /opt/databasus/databasus ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "databasus" "databasus/databasus"; then
msg_info "Stopping Databasus"
$STD systemctl stop databasus
msg_ok "Stopped Databasus"
msg_info "Backing up Configuration"
cp /opt/databasus/.env /opt/databasus.env.bak
msg_ok "Backed up Configuration"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Updating Databasus"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod download
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations /opt/databasus/
chown -R postgres:postgres /opt/databasus
msg_ok "Updated Databasus"
msg_info "Restoring Configuration"
cp /opt/databasus.env.bak /opt/databasus/.env
rm -f /opt/databasus.env.bak
chown postgres:postgres /opt/databasus/.env
msg_ok "Restored Configuration"
msg_info "Starting Databasus"
$STD systemctl start databasus
msg_ok "Started Databasus"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

6
ct/headers/databasus Normal file
View File

@@ -0,0 +1,6 @@
____ __ __
/ __ \____ _/ /_____ _/ /_ ____ ________ _______
/ / / / __ `/ __/ __ `/ __ \/ __ `/ ___/ / / / ___/
/ /_/ / /_/ / /_/ /_/ / /_/ / /_/ (__ ) /_/ (__ )
/_____/\__,_/\__/\__,_/_.___/\__,_/____/\__,_/____/

6
ct/headers/linkding Normal file
View File

@@ -0,0 +1,6 @@
___ __ ___
/ (_)___ / /______/ (_)___ ____ _
/ / / __ \/ //_/ __ / / __ \/ __ `/
/ / / / / / ,< / /_/ / / / / / /_/ /
/_/_/_/ /_/_/|_|\__,_/_/_/ /_/\__, /
/____/

6
ct/headers/romm Normal file
View File

@@ -0,0 +1,6 @@
____ __ ___
/ __ \____ ____ ___ / |/ /
/ /_/ / __ \/ __ `__ \/ /|_/ /
/ _, _/ /_/ / / / / / / / / /
/_/ |_|\____/_/ /_/ /_/_/ /_/

View File

@@ -45,13 +45,18 @@ function update_script() {
fi
msg_info "Switching update script to Seerr"
sed -i 's|https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/jellyseerr.sh|https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/seerr.sh|g' /usr/bin/update
msg_ok "Switched update script to Seerr. Running update..."
exec /usr/bin/update
cat <<'EOF' >/usr/bin/update
#!/usr/bin/env bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/seerr.sh)"
EOF
chmod +x /usr/bin/update
msg_ok "Switched update script to Seerr"
msg_warn "Please type 'update' again to complete the migration"
exit
fi
msg_info "Updating Jellyseerr"
cd /opt/jellyseerr
cd /opt/jellyseerr
systemctl stop jellyseerr
output=$(git pull --no-rebase)
pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)
@@ -62,7 +67,7 @@ function update_script() {
fi
rm -rf dist .next node_modules
export CYPRESS_INSTALL_BINARY=0
cd /opt/jellyseerr
cd /opt/jellyseerr
$STD pnpm install --frozen-lockfile
export NODE_OPTIONS="--max-old-space-size=3072"
$STD pnpm build

79
ct/linkding.sh Normal file
View File

@@ -0,0 +1,79 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (MickLesk)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://linkding.link/
APP="linkding"
var_tags="${var_tags:-bookmarks;management}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-1024}"
var_disk="${var_disk:-4}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/linkding ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "linkding" "sissbruecker/linkding"; then
msg_info "Stopping Services"
systemctl stop nginx linkding linkding-tasks
msg_ok "Stopped Services"
msg_info "Backing up Data"
cp -r /opt/linkding/data /opt/linkding_data_backup
cp /opt/linkding/.env /opt/linkding_env_backup
msg_ok "Backed up Data"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "linkding" "sissbruecker/linkding"
msg_info "Restoring Data"
cp -r /opt/linkding_data_backup/. /opt/linkding/data
cp /opt/linkding_env_backup /opt/linkding/.env
rm -rf /opt/linkding_data_backup /opt/linkding_env_backup
ln -sf /usr/lib/x86_64-linux-gnu/mod_icu.so /opt/linkding/libicu.so
msg_ok "Restored Data"
msg_info "Updating LinkDing"
cd /opt/linkding
rm -f bookmarks/settings/dev.py
touch bookmarks/settings/custom.py
$STD npm ci
$STD npm run build
$STD uv sync --no-dev --frozen
$STD uv pip install gunicorn
set -a && source /opt/linkding/.env && set +a
$STD /opt/linkding/.venv/bin/python manage.py migrate
$STD /opt/linkding/.venv/bin/python manage.py collectstatic --no-input
msg_ok "Updated LinkDing"
msg_info "Starting Services"
systemctl start nginx linkding linkding-tasks
msg_ok "Started Services"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:9090${CL}"

View File

@@ -29,7 +29,7 @@ function update_script() {
exit
fi
RELEASE="v5.0.2"
RELEASE="v5.1.0"
if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
msg_info "Stopping services"
systemctl stop opencloud opencloud-wopi

View File

@@ -44,9 +44,14 @@ function update_script() {
fi
msg_info "Switching update script to Seerr"
sed -i 's|https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/overseerr.sh|https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/seerr.sh|g' /usr/bin/update
msg_ok "Switched update script to Seerr. Running update..."
exec /usr/bin/update
cat <<'EOF' >/usr/bin/update
#!/usr/bin/env bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/seerr.sh)"
EOF
chmod +x /usr/bin/update
msg_ok "Switched update script to Seerr"
msg_warn "Please type 'update' again to complete the migration"
exit 0
fi
if check_for_gh_release "overseerr" "sct/overseerr"; then

View File

@@ -71,6 +71,7 @@ EOF
$STD php artisan migrate --seed --force --no-interaction
chown -R www-data:www-data /opt/pterodactyl-panel/*
chmod -R 755 /opt/pterodactyl-panel/storage /opt/pterodactyl-panel/bootstrap/cache/
ln -s /opt/pterodactyl-panel /var/www/pterodactyl
rm -rf "/opt/pterodactyl-panel/panel.tar.gz"
echo "${RELEASE}" >/opt/${APP}_version.txt
msg_ok "Updated $APP to v${RELEASE}"

74
ct/romm.sh Normal file
View File

@@ -0,0 +1,74 @@
#!/usr/bin/env bash
source <(curl -s https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ) | DevelopmentCats | AlphaLawless
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://romm.app
APP="RomM"
var_tags="${var_tags:-emulation}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-20}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/romm ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "romm" "rommapp/romm"; then
msg_info "Stopping Services"
systemctl stop romm-backend romm-worker romm-scheduler romm-watcher
msg_ok "Stopped Services"
msg_info "Backing up configuration"
cp /opt/romm/.env /opt/romm/.env.backup
msg_ok "Backed up configuration"
fetch_and_deploy_gh_release "romm" "rommapp/romm" "tarball" "latest" "/opt/romm"
msg_info "Updating ROMM"
cp /opt/romm/.env.backup /opt/romm/.env
cd /opt/romm
$STD uv sync --all-extras
cd /opt/romm/backend
$STD uv run alembic upgrade head
cd /opt/romm/frontend
$STD npm install
$STD npm run build
# Merge static assets into dist folder
cp -rf /opt/romm/frontend/assets/* /opt/romm/frontend/dist/assets/
mkdir -p /opt/romm/frontend/dist/assets/romm
ln -sfn /var/lib/romm/resources /opt/romm/frontend/dist/assets/romm/resources
ln -sfn /var/lib/romm/assets /opt/romm/frontend/dist/assets/romm/assets
msg_ok "Updated ROMM"
msg_info "Starting Services"
systemctl start romm-backend romm-worker romm-scheduler romm-watcher
msg_ok "Started Services"
msg_ok "Updated successfully"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

View File

@@ -83,6 +83,7 @@ function update_script() {
msg_ok "Started Soularr Timer"
msg_ok "Updated Soularr successfully!"
fi
exit
}
start

View File

@@ -45,6 +45,8 @@ function update_script() {
msg_info "Updating VaultWarden to $VAULT (Patience)"
cd /tmp/vaultwarden-src
VW_VERSION="$VAULT"
export VW_VERSION
$STD cargo build --features "sqlite,mysql,postgresql" --release
if [[ -f /usr/bin/vaultwarden ]]; then
cp target/release/vaultwarden /usr/bin/

View File

@@ -35,15 +35,18 @@ function update_script() {
exit
fi
if systemctl list-unit-files | grep -q zabbix-agent2.service; then
if systemctl cat zabbix-agent2.service &>/dev/null; then
AGENT_SERVICE="zabbix-agent2"
else
elif systemctl cat zabbix-agent.service &>/dev/null; then
AGENT_SERVICE="zabbix-agent"
else
AGENT_SERVICE=""
msg_warn "No Zabbix Agent service found, skipping agent actions"
fi
msg_info "Stopping Services"
systemctl stop zabbix-server
systemctl stop "$AGENT_SERVICE"
[[ -n "$AGENT_SERVICE" ]] && systemctl stop "$AGENT_SERVICE"
msg_ok "Stopped Services"
read -rp "Choose Zabbix version [1] 7.0 LTS [2] 7.4 (Latest Stable) [3] Latest available (default: 2): " ZABBIX_CHOICE
@@ -83,13 +86,13 @@ function update_script() {
$STD apt install --only-upgrade zabbix-server-pgsql zabbix-frontend-php php8.4-pgsql
if [ "$AGENT_SERVICE" = "zabbix-agent2" ]; then
if [[ "$AGENT_SERVICE" == "zabbix-agent2" ]]; then
$STD apt install --only-upgrade zabbix-agent2 zabbix-agent2-plugin-postgresql
if [ -f /etc/zabbix/zabbix_agent2.d/plugins.d/nvidia.conf ]; then
sed -i 's|^Plugins.NVIDIA.System.Path=.*|# Plugins.NVIDIA.System.Path=/usr/libexec/zabbix/zabbix-agent2-plugin-nvidia-gpu|' \
/etc/zabbix/zabbix_agent2.d/plugins.d/nvidia.conf
fi
else
elif [[ "$AGENT_SERVICE" == "zabbix-agent" ]]; then
$STD apt install --only-upgrade zabbix-agent
fi
@@ -105,7 +108,7 @@ function update_script() {
msg_info "Starting Services"
systemctl start zabbix-server
systemctl start "$AGENT_SERVICE"
[[ -n "$AGENT_SERVICE" ]] && systemctl start "$AGENT_SERVICE"
systemctl restart apache2
msg_ok "Started Services"
msg_ok "Updated successfully!"

View File

@@ -0,0 +1,44 @@
{
"name": "Databasus",
"slug": "databasus",
"categories": [
7
],
"date_created": "2026-02-17",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 80,
"documentation": "https://github.com/databasus/databasus",
"website": "https://github.com/databasus/databasus",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/databasus.webp",
"config_path": "/opt/databasus/.env",
"description": "Free, open source and self-hosted solution for automated PostgreSQL backups. With multiple storage options, notifications, scheduling, and a beautiful web interface for managing database backups across multiple PostgreSQL instances.",
"install_methods": [
{
"type": "default",
"script": "ct/databasus.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 8,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": "admin@localhost",
"password": "See /root/databasus.creds"
},
"notes": [
{
"text": "Supports PostgreSQL versions 12-18 with cloud and self-hosted instances",
"type": "info"
},
{
"text": "Features: Scheduled backups, multiple storage providers, notifications, encryption",
"type": "info"
}
]
}

View File

@@ -1,5 +1,5 @@
{
"generated": "2026-02-15T18:07:51Z",
"generated": "2026-02-17T06:22:06Z",
"versions": [
{
"slug": "2fauth",
@@ -193,9 +193,9 @@
{
"slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr",
"version": "v2.6.2",
"version": "v2.6.3",
"pinned": false,
"date": "2026-02-15T02:15:19Z"
"date": "2026-02-16T22:41:25Z"
},
{
"slug": "cloudreve",
@@ -256,9 +256,9 @@
{
"slug": "dawarich",
"repo": "Freika/dawarich",
"version": "1.1.0",
"version": "1.2.0",
"pinned": false,
"date": "2026-02-08T14:42:45Z"
"date": "2026-02-15T22:33:56Z"
},
{
"slug": "discopanel",
@@ -316,6 +316,13 @@
"pinned": false,
"date": "2026-01-06T12:05:40Z"
},
{
"slug": "ebusd",
"repo": "john30/ebusd",
"version": "26.1",
"pinned": false,
"date": "2026-02-09T06:09:24Z"
},
{
"slug": "elementsynapse",
"repo": "etkecc/synapse-admin",
@@ -354,9 +361,9 @@
{
"slug": "firefly",
"repo": "firefly-iii/firefly-iii",
"version": "v6.4.21",
"version": "v6.4.22",
"pinned": false,
"date": "2026-02-14T19:40:46Z"
"date": "2026-02-15T18:43:08Z"
},
{
"slug": "fladder",
@@ -550,9 +557,9 @@
{
"slug": "immich-public-proxy",
"repo": "alangrainger/immich-public-proxy",
"version": "v1.15.1",
"version": "v1.15.3",
"pinned": false,
"date": "2026-01-26T08:04:27Z"
"date": "2026-02-16T22:54:27Z"
},
{
"slug": "inspircd",
@@ -571,16 +578,16 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.60",
"version": "v5.12.62",
"pinned": false,
"date": "2026-02-15T00:11:31Z"
"date": "2026-02-17T03:23:48Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1124",
"version": "v0.24.1140",
"pinned": false,
"date": "2026-02-15T05:54:22Z"
"date": "2026-02-17T05:54:25Z"
},
{
"slug": "jellystat",
@@ -634,9 +641,9 @@
{
"slug": "kimai",
"repo": "kimai/kimai",
"version": "2.48.0",
"version": "2.49.0",
"pinned": false,
"date": "2026-01-31T18:10:59Z"
"date": "2026-02-15T20:40:19Z"
},
{
"slug": "kitchenowl",
@@ -697,9 +704,9 @@
{
"slug": "librenms",
"repo": "librenms/librenms",
"version": "26.1.1",
"version": "26.2.0",
"pinned": false,
"date": "2026-01-12T23:26:02Z"
"date": "2026-02-16T12:15:13Z"
},
{
"slug": "librespeed-rust",
@@ -722,12 +729,19 @@
"pinned": false,
"date": "2025-11-16T22:40:18Z"
},
{
"slug": "linkding",
"repo": "sissbruecker/linkding",
"version": "v1.45.0",
"pinned": false,
"date": "2026-01-06T20:31:04Z"
},
{
"slug": "linkstack",
"repo": "linkstackorg/linkstack",
"version": "v4.8.5",
"version": "v4.8.4",
"pinned": false,
"date": "2026-01-26T18:46:52Z"
"date": "2024-12-10T15:14:34Z"
},
{
"slug": "linkwarden",
@@ -788,9 +802,9 @@
{
"slug": "mealie",
"repo": "mealie-recipes/mealie",
"version": "v3.10.2",
"version": "v3.11.0",
"pinned": false,
"date": "2026-02-04T23:32:32Z"
"date": "2026-02-17T04:13:35Z"
},
{
"slug": "mediamanager",
@@ -942,9 +956,9 @@
{
"slug": "opencloud",
"repo": "opencloud-eu/opencloud",
"version": "v5.0.2",
"version": "v5.1.0",
"pinned": true,
"date": "2026-02-05T16:29:01Z"
"date": "2026-02-16T15:04:28Z"
},
{
"slug": "opengist",
@@ -956,9 +970,9 @@
{
"slug": "ots",
"repo": "Luzifer/ots",
"version": "v1.21.0",
"version": "v1.21.1",
"pinned": false,
"date": "2026-01-19T23:21:29Z"
"date": "2026-02-16T12:12:23Z"
},
{
"slug": "outline",
@@ -1005,23 +1019,23 @@
{
"slug": "paperless-gpt",
"repo": "icereed/paperless-gpt",
"version": "v0.24.0",
"version": "v0.25.0",
"pinned": false,
"date": "2026-01-14T21:28:09Z"
"date": "2026-02-16T08:31:48Z"
},
{
"slug": "paperless-ngx",
"repo": "paperless-ngx/paperless-ngx",
"version": "v2.20.6",
"version": "v2.20.7",
"pinned": false,
"date": "2026-01-31T07:30:27Z"
"date": "2026-02-16T16:52:23Z"
},
{
"slug": "patchmon",
"repo": "PatchMon/PatchMon",
"version": "v1.4.0",
"version": "v1.4.1",
"pinned": false,
"date": "2026-02-13T10:39:03Z"
"date": "2026-02-16T18:00:13Z"
},
{
"slug": "paymenter",
@@ -1261,6 +1275,13 @@
"pinned": false,
"date": "2025-03-28T13:00:23Z"
},
{
"slug": "romm",
"repo": "RetroAchievements/RALibretro",
"version": "1.8.2",
"pinned": false,
"date": "2026-01-23T17:03:31Z"
},
{
"slug": "rustdeskserver",
"repo": "rustdesk/rustdesk-server",
@@ -1313,9 +1334,9 @@
{
"slug": "semaphore",
"repo": "semaphoreui/semaphore",
"version": "v2.17.0",
"version": "v2.17.2",
"pinned": false,
"date": "2026-02-13T21:08:30Z"
"date": "2026-02-16T10:27:40Z"
},
{
"slug": "shelfmark",
@@ -1341,9 +1362,9 @@
{
"slug": "slskd",
"repo": "slskd/slskd",
"version": "0.24.3",
"version": "0.24.4",
"pinned": false,
"date": "2026-01-15T14:40:15Z"
"date": "2026-02-16T16:50:17Z"
},
{
"slug": "snipeit",
@@ -1390,9 +1411,9 @@
{
"slug": "stirling-pdf",
"repo": "Stirling-Tools/Stirling-PDF",
"version": "v2.4.6",
"version": "v2.5.0",
"pinned": false,
"date": "2026-02-12T00:01:19Z"
"date": "2026-02-16T22:58:17Z"
},
{
"slug": "streamlink-webui",
@@ -1425,9 +1446,9 @@
{
"slug": "tautulli",
"repo": "Tautulli/Tautulli",
"version": "v2.16.0",
"version": "v2.16.1",
"pinned": false,
"date": "2025-09-09T01:05:45Z"
"date": "2026-02-15T20:40:37Z"
},
{
"slug": "teddycloud",
@@ -1481,9 +1502,9 @@
{
"slug": "tracearr",
"repo": "connorgallopo/Tracearr",
"version": "v1.4.17",
"version": "v1.4.18",
"pinned": false,
"date": "2026-02-11T01:33:21Z"
"date": "2026-02-15T19:55:40Z"
},
{
"slug": "tracktor",
@@ -1523,9 +1544,9 @@
{
"slug": "tunarr",
"repo": "chrisbenincasa/tunarr",
"version": "v1.1.12",
"version": "v1.1.13",
"pinned": false,
"date": "2026-02-03T20:19:00Z"
"date": "2026-02-16T16:16:17Z"
},
{
"slug": "uhf",
@@ -1607,9 +1628,9 @@
{
"slug": "wanderer",
"repo": "meilisearch/meilisearch",
"version": "v1.35.0",
"version": "v1.35.1",
"pinned": false,
"date": "2026-02-02T09:57:03Z"
"date": "2026-02-16T17:01:17Z"
},
{
"slug": "warracker",
@@ -1719,9 +1740,9 @@
{
"slug": "zitadel",
"repo": "zitadel/zitadel",
"version": "v4.10.1",
"version": "v4.11.0",
"pinned": false,
"date": "2026-01-30T06:52:53Z"
"date": "2026-02-16T09:48:38Z"
},
{
"slug": "zoraxy",

View File

@@ -0,0 +1,40 @@
{
"name": "linkding",
"slug": "linkding",
"categories": [
12
],
"date_created": "2026-02-16",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 9090,
"documentation": "https://linkding.link/",
"website": "https://linkding.link/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/linkding.webp",
"config_path": "/opt/linkding/.env",
"description": "linkding is a self-hosted bookmark manager that is designed to be minimal, fast, and easy to set up. It features a clean UI, tag-based organization, bulk editing, Markdown notes, read it later functionality, sharing, REST API, and browser extensions for Firefox and Chrome.",
"install_methods": [
{
"type": "default",
"script": "ct/linkding.sh",
"resources": {
"cpu": 2,
"ram": 1024,
"hdd": 4,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": "admin",
"password": null
},
"notes": [
{
"text": "Admin credentials are stored in /opt/linkding/.env",
"type": "info"
}
]
}

View File

@@ -33,7 +33,7 @@
},
"notes": [
{
"text": "Valid TLS certificates and fully-qualified domain names behind a reverse proxy (Caddy) for 3 services - OpenCloud, Collabora, and WOPI are **REQUIRED**",
"text": "Valid TLS certificates and fully-qualified domain names behind a reverse proxy (Caddy) for 3 services - OpenCloud (port: 9200), Collabora (port: 9980), and WOPI (port: 9300) are **REQUIRED**",
"type": "warning"
},
{

View File

@@ -0,0 +1,35 @@
{
"name": "RomM",
"slug": "romm",
"categories": [
24
],
"date_created": "2026-02-16",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 80,
"documentation": "https://docs.romm.app/latest/",
"website": "https://romm.app/",
"config_path": "/opt/romm/.env",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/romm.webp",
"description": "RomM (ROM Manager) allows you to scan, enrich, browse and play your game collection with a clean and responsive interface. Support for multiple platforms, various naming schemes, and custom tags.",
"install_methods": [
{
"type": "default",
"script": "ct/romm.sh",
"resources": {
"cpu": 2,
"ram": 4096,
"hdd": 20,
"os": "debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}

View File

@@ -1,35 +1,40 @@
{
"name": "Wishlist",
"slug": "wishlist",
"categories": [
12
],
"date_created": "2026-02-04",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 3280,
"documentation": "https://github.com/cmintey/wishlist/blob/main/README.md#getting-started",
"config_path": "/opt/wishlist/.env",
"website": "https://github.com/cmintey/wishlist",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/cmintey-wishlist.webp",
"description": "Wishlist is a self-hosted wishlist application that you can share with your friends and family. You no longer have to wonder what to get your family for the holidays, simply check their wishlist and claim any available item!",
"install_methods": [
{
"type": "default",
"script": "ct/wishlist.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 5,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
"name": "Wishlist",
"slug": "wishlist",
"categories": [
12
],
"date_created": "2026-02-04",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 3280,
"documentation": "https://github.com/cmintey/wishlist/blob/main/README.md#getting-started",
"website": "https://github.com/cmintey/wishlist",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/cmintey-wishlist.webp",
"config_path": "/opt/wishlist/.env",
"description": "Wishlist is a self-hosted wishlist application that you can share with your friends and family. You no longer have to wonder what to get your family for the holidays, simply check their wishlist and claim any available item!",
"install_methods": [
{
"type": "default",
"script": "ct/wishlist.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 5,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "When using a reverse proxy with this script, please edit the`ORIGIN` value in `/opt/wishlist/.env` to point to your new URL, otherwise creating an admin account or logging in will not work.",
"type": "info"
}
]
}

View File

@@ -20,7 +20,7 @@ $STD apk add --no-cache \
libc6-compat
msg_ok "Installed dependencies"
RELEASE=$(curl -fsSL https://teamspeak.com/en/downloads/#server | sed -n 's/.*teamspeak3-server_linux_amd64-\([0-9.]*[0-9]\).*/\1/p' | head -1)
RELEASE=$(curl -fsSL https://teamspeak.com/en/downloads/#server | sed -n 's/.*teamspeak3-server_linux_amd64-\([0-9.]*[0-9]\).*/\1/p' | awk 'NR==1')
msg_info "Installing Teamspeak Server v${RELEASE}"
mkdir -p /opt/teamspeak-server
cd /opt/teamspeak-server

View File

@@ -26,13 +26,13 @@ msg_info "Setup CryptPad"
cd /opt/cryptpad
$STD npm ci
$STD npm run install:components
$STD npm run build
cp config/config.example.js config/config.js
sed -i "51s/localhost/${LOCAL_IP}/g" /opt/cryptpad/config/config.js
sed -i "80s#//httpAddress: 'localhost'#httpAddress: '0.0.0.0'#g" /opt/cryptpad/config/config.js
if [[ "$onlyoffice" =~ ^[Yy]$ ]]; then
$STD bash -c "./install-onlyoffice.sh --accept-license"
fi
cp config/config.example.js config/config.js
sed -i "51s/localhost/${LOCAL_IP}/g" /opt/cryptpad/config/config.js
sed -i "80s#//httpAddress: 'localhost'#httpAddress: '0.0.0.0'#g" /opt/cryptpad/config/config.js
$STD npm run build
msg_ok "Setup CryptPad"
msg_info "Creating Service"

View File

@@ -0,0 +1,171 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
nginx \
valkey
msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
setup_go
NODE_VERSION="24" setup_nodejs
fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Building Databasus (Patience)"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod tidy
$STD go mod download
$STD go install github.com/swaggo/swag/cmd/swag@latest
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
mkdir -p /databasus-data/{pgdata,temp,backups,data,logs}
mkdir -p /opt/databasus/ui/build
mkdir -p /opt/databasus/migrations
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations/* /opt/databasus/migrations/
chown -R postgres:postgres /databasus-data
msg_ok "Built Databasus"
msg_info "Configuring Databasus"
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install goose for migrations
$STD go install github.com/pressly/goose/v3/cmd/goose@latest
ln -sf /root/go/bin/goose /usr/local/bin/goose
cat <<EOF >/opt/databasus/.env
# Environment
ENV_MODE=production
# Server
SERVER_PORT=4005
SERVER_HOST=0.0.0.0
# Database
DATABASE_DSN=host=localhost user=postgres password=postgres dbname=databasus port=5432 sslmode=disable
DATABASE_URL=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
# Migrations
GOOSE_DRIVER=postgres
GOOSE_DBSTRING=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
GOOSE_MIGRATION_DIR=/opt/databasus/migrations
# Valkey (Redis-compatible cache)
VALKEY_HOST=localhost
VALKEY_PORT=6379
# Security
JWT_SECRET=${JWT_SECRET}
ENCRYPTION_KEY=${ENCRYPTION_KEY}
# Paths
DATA_DIR=/databasus-data/data
BACKUP_DIR=/databasus-data/backups
LOG_DIR=/databasus-data/logs
EOF
chown postgres:postgres /opt/databasus/.env
chmod 600 /opt/databasus/.env
msg_ok "Configured Databasus"
msg_info "Configuring Valkey"
cat <<EOF >/etc/valkey/valkey.conf
port 6379
bind 127.0.0.1
protected-mode yes
save ""
maxmemory 256mb
maxmemory-policy allkeys-lru
EOF
systemctl enable -q --now valkey-server
systemctl restart valkey-server
msg_ok "Configured Valkey"
msg_info "Creating Database"
# Configure PostgreSQL to allow local password auth for databasus
PG_HBA="/etc/postgresql/17/main/pg_hba.conf"
if ! grep -q "databasus" "$PG_HBA"; then
sed -i '/^local\s*all\s*all/i local databasus postgres trust' "$PG_HBA"
sed -i '/^host\s*all\s*all\s*127/i host databasus postgres 127.0.0.1/32 trust' "$PG_HBA"
systemctl reload postgresql
fi
$STD sudo -u postgres psql -c "CREATE DATABASE databasus;" 2>/dev/null || true
$STD sudo -u postgres psql -c "ALTER USER postgres WITH SUPERUSER CREATEROLE CREATEDB;" 2>/dev/null || true
msg_ok "Created Database"
msg_info "Creating Databasus Service"
cat <<EOF >/etc/systemd/system/databasus.service
[Unit]
Description=Databasus - Database Backup Management
After=network.target postgresql.service valkey.service
Requires=postgresql.service valkey.service
[Service]
Type=simple
WorkingDirectory=/opt/databasus
EnvironmentFile=/opt/databasus/.env
ExecStart=/opt/databasus/databasus
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$STD systemctl daemon-reload
$STD systemctl enable -q --now databasus
msg_ok "Created Databasus Service"
msg_info "Configuring Nginx"
cat <<EOF >/etc/nginx/sites-available/databasus
server {
listen 80;
server_name _;
location / {
proxy_pass http://127.0.0.1:4005;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_cache_bypass \$http_upgrade;
proxy_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
EOF
ln -sf /etc/nginx/sites-available/databasus /etc/nginx/sites-enabled/databasus
rm -f /etc/nginx/sites-enabled/default
$STD nginx -t
$STD systemctl enable -q --now nginx
$STD systemctl reload nginx
msg_ok "Configured Nginx"
motd_ssh
customize
cleanup_lxc

126
install/linkding-install.sh Normal file
View File

@@ -0,0 +1,126 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (MickLesk)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://linkding.link/
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
build-essential \
pkg-config \
python3-dev \
nginx \
libpq-dev \
libicu-dev \
libsqlite3-dev \
libffi-dev
msg_ok "Installed Dependencies"
NODE_VERSION="22" setup_nodejs
setup_uv
fetch_and_deploy_gh_release "linkding" "sissbruecker/linkding"
msg_info "Building Frontend"
cd /opt/linkding
$STD npm ci
$STD npm run build
ln -sf /usr/lib/x86_64-linux-gnu/mod_icu.so /opt/linkding/libicu.so
msg_ok "Built Frontend"
msg_info "Setting up LinkDing"
rm -f bookmarks/settings/dev.py
touch bookmarks/settings/custom.py
$STD uv sync --no-dev --frozen
$STD uv pip install gunicorn
mkdir -p data/{favicons,previews,assets}
ADMIN_PASS=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | cut -c1-13)
cat <<EOF >/opt/linkding/.env
LD_SUPERUSER_NAME=admin
LD_SUPERUSER_PASSWORD=${ADMIN_PASS}
LD_CSRF_TRUSTED_ORIGINS=http://${LOCAL_IP}:9090
EOF
set -a && source /opt/linkding/.env && set +a
$STD /opt/linkding/.venv/bin/python manage.py generate_secret_key
$STD /opt/linkding/.venv/bin/python manage.py migrate
$STD /opt/linkding/.venv/bin/python manage.py enable_wal
$STD /opt/linkding/.venv/bin/python manage.py create_initial_superuser
$STD /opt/linkding/.venv/bin/python manage.py collectstatic --no-input
msg_ok "Set up LinkDing"
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/linkding.service
[Unit]
Description=linkding Bookmark Manager
After=network.target
[Service]
User=root
WorkingDirectory=/opt/linkding
EnvironmentFile=/opt/linkding/.env
ExecStart=/opt/linkding/.venv/bin/gunicorn \
--bind 127.0.0.1:8000 \
--workers 3 \
--threads 2 \
--timeout 120 \
bookmarks.wsgi:application
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/linkding-tasks.service
[Unit]
Description=linkding Background Tasks
After=network.target
[Service]
User=root
WorkingDirectory=/opt/linkding
EnvironmentFile=/opt/linkding/.env
ExecStart=/opt/linkding/.venv/bin/python manage.py run_huey
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<'EOF' >/etc/nginx/sites-available/linkding
server {
listen 9090;
server_name _;
client_max_body_size 20M;
location /static/ {
alias /opt/linkding/static/;
expires 30d;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
}
}
EOF
$STD rm -f /etc/nginx/sites-enabled/default
$STD ln -sf /etc/nginx/sites-available/linkding /etc/nginx/sites-enabled/linkding
systemctl enable -q --now nginx linkding linkding-tasks
systemctl restart nginx
msg_ok "Created Services"
motd_ssh
customize
cleanup_lxc
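For reference, a minimal post-install check of the two units and the proxy port used above (a sketch; the service names and the 9090 listener come from the script):

systemctl is-active linkding linkding-tasks nginx
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9090/
journalctl -u linkding -n 20 --no-pager   # gunicorn startup lines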

View File

@@ -64,7 +64,7 @@ $STD sudo -u cool coolconfig set-admin-password --user=admin --password="$COOLPA
echo "$COOLPASS" >~/.coolpass
msg_ok "Installed Collabora Online"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.0.2" "/usr/bin" "opencloud-*-linux-amd64"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.1.0" "/usr/bin" "opencloud-*-linux-amd64"
msg_info "Configuring OpenCloud"
DATA_DIR="/var/lib/opencloud"

View File

@@ -80,6 +80,7 @@ $STD php artisan p:user:make --no-interaction --admin=1 --email "$ADMIN_EMAIL" -
echo "* * * * * php /opt/pterodactyl-panel/artisan schedule:run >> /dev/null 2>&1" | crontab -u www-data -
chown -R www-data:www-data /opt/pterodactyl-panel/*
chmod -R 755 /opt/pterodactyl-panel/storage/* /opt/pterodactyl-panel/bootstrap/cache/
ln -s /opt/pterodactyl-panel /var/www/pterodactyl
{
echo ""
echo "pterodactyl Admin Username: admin"

344
install/romm-install.sh Normal file
View File

@@ -0,0 +1,344 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ) | DevelopmentCats | AlphaLawless
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://romm.app
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
acl \
git \
build-essential \
libssl-dev \
libffi-dev \
libmagic-dev \
python3-dev \
python3-pip \
python3-venv \
libmariadb3 \
libmariadb-dev \
libpq-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
zlib1g-dev \
liblzma-dev \
libncurses5-dev \
libncursesw5-dev \
redis-server \
redis-tools \
p7zip-full \
tzdata \
nginx
msg_ok "Installed Dependencies"
PYTHON_VERSION="3.13" setup_uv
NODE_VERSION="22" setup_nodejs
setup_mariadb
MARIADB_DB_NAME="romm" MARIADB_DB_USER="romm" setup_mariadb_db
msg_info "Creating directories"
mkdir -p /opt/romm \
/var/lib/romm/config \
/var/lib/romm/resources \
/var/lib/romm/assets/{saves,states,screenshots} \
/var/lib/romm/library/roms \
/var/lib/romm/library/bios
msg_ok "Created directories"
msg_info "Creating configuration file"
cat <<'EOF' >/var/lib/romm/config/config.yml
# RomM Configuration File
# Documentation: https://docs.romm.app/latest/Getting-Started/Configuration-File/
# Only uncomment the lines you want to use/modify
# exclude:
# platforms:
# - excluded_folder_a
# roms:
# single_file:
# extensions:
# - xml
# - txt
# names:
# - '._*'
# - '*.nfo'
# multi_file:
# names:
# - downloaded_media
# - media
# system:
# platforms:
# gc: ngc
# ps1: psx
# The folder name where your roms are located (relative to library path)
# filesystem:
# roms_folder: 'roms'
# scan:
# priority:
# metadata:
# - "igdb"
# - "moby"
# - "ss"
# - "ra"
# artwork:
# - "igdb"
# - "moby"
# - "ss"
# region:
# - "us"
# - "eu"
# - "jp"
# language:
# - "en"
# media:
# - box2d
# - box3d
# - screenshot
# - manual
# emulatorjs:
# debug: false
# cache_limit: null
EOF
chmod 644 /var/lib/romm/config/config.yml
msg_ok "Created configuration file"
fetch_and_deploy_gh_release "RAHasher" "RetroAchievements/RALibretro" "prebuild" "latest" "/opt/RALibretro" "RAHasher-x64-Linux-*.zip"
cp /opt/RALibretro/RAHasher /usr/bin/RAHasher
chmod +x /usr/bin/RAHasher
fetch_and_deploy_gh_release "romm" "rommapp/romm"
msg_info "Creating environment file"
sed -i 's/^supervised no/supervised systemd/' /etc/redis/redis.conf
systemctl restart redis-server
systemctl enable -q --now redis-server
AUTH_SECRET_KEY=$(openssl rand -hex 32)
cat <<EOF >/opt/romm/.env
ROMM_BASE_PATH=/var/lib/romm
ROMM_CONFIG_PATH=/var/lib/romm/config/config.yml
WEB_CONCURRENCY=4
DB_HOST=127.0.0.1
DB_PORT=3306
DB_NAME=$MARIADB_DB_NAME
DB_USER=$MARIADB_DB_USER
DB_PASSWD=$MARIADB_DB_PASS
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
ROMM_AUTH_SECRET_KEY=$AUTH_SECRET_KEY
DISABLE_DOWNLOAD_ENDPOINT_AUTH=false
DISABLE_CSRF_PROTECTION=false
ENABLE_RESCAN_ON_FILESYSTEM_CHANGE=true
RESCAN_ON_FILESYSTEM_CHANGE_DELAY=5
ENABLE_SCHEDULED_RESCAN=true
SCHEDULED_RESCAN_CRON=0 3 * * *
ENABLE_SCHEDULED_UPDATE_SWITCH_TITLEDB=true
SCHEDULED_UPDATE_SWITCH_TITLEDB_CRON=0 4 * * *
LOGLEVEL=INFO
EOF
chmod 600 /opt/romm/.env
msg_ok "Created environment file"
msg_info "Setting up RomM Backend"
cd /opt/romm
export UV_CONCURRENT_DOWNLOADS=1
$STD uv sync --all-extras
cd /opt/romm/backend
$STD uv run alembic upgrade head
msg_ok "Set up RomM Backend"
msg_info "Setting up RomM Frontend"
cd /opt/romm/frontend
$STD npm install
$STD npm run build
cp -rf /opt/romm/frontend/assets/* /opt/romm/frontend/dist/assets/
mkdir -p /opt/romm/frontend/dist/assets/romm
ln -sfn /var/lib/romm/resources /opt/romm/frontend/dist/assets/romm/resources
ln -sfn /var/lib/romm/assets /opt/romm/frontend/dist/assets/romm/assets
msg_ok "Set up RomM Frontend"
msg_info "Configuring Nginx"
cat <<'EOF' >/etc/nginx/sites-available/romm
upstream romm_backend {
server 127.0.0.1:5000;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name _;
root /opt/romm/frontend/dist;
client_max_body_size 0;
# Frontend SPA
location / {
try_files $uri $uri/ /index.html;
}
# Static assets
location /assets {
alias /opt/romm/frontend/dist/assets;
try_files $uri $uri/ =404;
expires 1y;
add_header Cache-Control "public, immutable";
}
# EmulatorJS player - requires COOP/COEP headers for SharedArrayBuffer
location ~ ^/rom/.*/ejs$ {
add_header Cross-Origin-Embedder-Policy "require-corp";
add_header Cross-Origin-Opener-Policy "same-origin";
try_files $uri /index.html;
}
# Backend API
location /api {
proxy_pass http://romm_backend;
proxy_buffering off;
proxy_request_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# WebSocket and Netplay
location ~ ^/(ws|netplay) {
proxy_pass http://romm_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_read_timeout 86400;
}
# OpenAPI docs
location = /openapi.json {
proxy_pass http://romm_backend;
}
# Internal library file serving
location /library/ {
internal;
alias /var/lib/romm/library/;
}
}
EOF
rm -f /etc/nginx/sites-enabled/default
ln -sf /etc/nginx/sites-available/romm /etc/nginx/sites-enabled/romm
systemctl restart nginx
systemctl enable -q --now nginx
msg_ok "Configured Nginx"
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/romm-backend.service
[Unit]
Description=RomM Backend
After=network.target mariadb.service redis-server.service
Requires=mariadb.service redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm"
ExecStart=/opt/romm/.venv/bin/python main.py
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/romm-worker.service
[Unit]
Description=RomM RQ Worker
After=network.target mariadb.service redis-server.service romm-backend.service
Requires=mariadb.service redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm/backend"
ExecStart=/opt/romm/.venv/bin/rq worker --path /opt/romm/backend --url redis://127.0.0.1:6379/0 high default low
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/romm-scheduler.service
[Unit]
Description=RomM RQ Scheduler
After=network.target mariadb.service redis-server.service romm-backend.service
Requires=mariadb.service redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm/backend"
Environment="RQ_REDIS_HOST=127.0.0.1"
Environment="RQ_REDIS_PORT=6379"
ExecStart=/opt/romm/.venv/bin/rqscheduler --path /opt/romm/backend
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/romm-watcher.service
[Unit]
Description=RomM Filesystem Watcher
After=network.target romm-backend.service
Requires=romm-backend.service
[Service]
Type=simple
WorkingDirectory=/opt/romm/backend
EnvironmentFile=/opt/romm/.env
Environment="PYTHONPATH=/opt/romm/backend"
ExecStart=/opt/romm/.venv/bin/watchfiles --target-type command '/opt/romm/.venv/bin/python watcher.py' /var/lib/romm/library
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now romm-backend romm-worker romm-scheduler romm-watcher
msg_ok "Created Services"
motd_ssh
customize
cleanup_lxc
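A short verification sketch for the four units and the nginx site above (assumption: an HTTP 200 on / is enough to confirm the built frontend is being served):

systemctl is-active romm-backend romm-worker romm-scheduler romm-watcher nginx
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/
journalctl -u romm-backend -n 20 --no-pager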

View File

@@ -38,8 +38,8 @@ SECRET="$(openssl rand -hex 64)"
sed -e '/^NODE_ENV=/s/=.*$/=production/' \
-e 's/^TUDUDI_USER/# TUDUDI_USER/g' \
-e "/_SECRET=/s/=.*$/=${SECRET}/" \
-e "/^# DB_FILE/s/^# //; \
\|DB_FILE|s|/path.*$|${DB_LOCATION}/production.sqlite3|" \
-e '/^# DB_FILE=/s/^# //' \
-e "s|^DB_FILE=.*|DB_FILE=${DB_LOCATION}/production.sqlite3|" \
-e "/^# TUDUDI_ALLOWED/s/^# //; \
\|_ORIGINS=|s|=.*$|=<your tududi IP or FDQN>|" \
-e "/^# TUDUDI_UPLOAD/s/^# //; \

View File

@@ -29,6 +29,8 @@ fetch_and_deploy_gh_release "vaultwarden" "dani-garcia/vaultwarden" "tarball" "l
msg_info "Building Vaultwarden (Patience)"
cd /tmp/vaultwarden-src
VW_VERSION=$(get_latest_github_release "dani-garcia/vaultwarden")
export VW_VERSION
$STD cargo build --features "sqlite,mysql,postgresql" --release
msg_ok "Built Vaultwarden"

View File

@@ -14,6 +14,25 @@ catch_errors
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
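The function is driven entirely by inherited variables; a manual invocation sketch with placeholder values (normally DIAGNOSTICS, RANDOM_UUID and app come from build.func):

DIAGNOSTICS=no post_progress_to_api                                              # returns immediately, nothing sent
DIAGNOSTICS=yes RANDOM_UUID=1234-abcd app=alpine-example post_progress_to_api    # placeholder values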
# This function enables IPv6 if it's not disabled and sets verbose mode
verb_ip6() {
set_std_mode # Set STD mode based on VERBOSE
@@ -34,42 +53,6 @@ EOF
fi
}
set -Eeuo pipefail
trap 'error_handler $? $LINENO "$BASH_COMMAND"' ERR
trap on_exit EXIT
trap on_interrupt INT
trap on_terminate TERM
error_handler() {
local exit_code="$1"
local line_number="$2"
local command="$3"
if [[ "$exit_code" -eq 0 ]]; then
return 0
fi
printf "\e[?25h"
echo -e "\n${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}\n"
exit "$exit_code"
}
on_exit() {
local exit_code="$?"
[[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
exit "$exit_code"
}
on_interrupt() {
echo -e "\n${RD}Interrupted by user (SIGINT)${CL}"
exit 130
}
on_terminate() {
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}"
exit 143
}
# This function sets up the Container OS by generating the locale, setting the timezone, and checking the network connection
setting_up_container() {
msg_info "Setting up Container OS"
@@ -89,6 +72,7 @@ setting_up_container() {
fi
msg_ok "Set up Container OS"
msg_ok "Network Connected: ${BL}$(ip addr show | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | tail -n1)${CL}"
post_progress_to_api
}
# This function checks the network connection by pinging a known IP address and prompts the user to continue if the internet is not connected
@@ -121,8 +105,18 @@ network_check() {
update_os() {
msg_info "Updating Container OS"
$STD apk -U upgrade
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
msg_ok "Updated Container OS"
post_progress_to_api
}
# This function modifies the message of the day (motd) and SSH settings

View File

@@ -34,11 +34,19 @@ net_resolves() {
}
ensure_usr_local_bin_persist() {
# Login shells: /etc/profile.d/
local PROFILE_FILE="/etc/profile.d/10-localbin.sh"
if [ ! -f "$PROFILE_FILE" ]; then
echo 'case ":$PATH:" in *:/usr/local/bin:*) ;; *) export PATH="/usr/local/bin:$PATH";; esac' >"$PROFILE_FILE"
chmod +x "$PROFILE_FILE"
fi
# Non-login shells (pct enter): /root/.profile and /root/.bashrc
for rc_file in /root/.profile /root/.bashrc; do
if [ -f "$rc_file" ] && ! grep -q '/usr/local/bin' "$rc_file"; then
echo 'export PATH="/usr/local/bin:$PATH"' >>"$rc_file"
fi
done
}
download_with_progress() {

View File

@@ -117,16 +117,17 @@ detect_repo_source
# - Canonical source of truth for ALL exit code mappings
# - Used by both api.func (telemetry) and error_handler.func (error display)
# - Supports:
# * Generic/Shell errors (1, 2, 124, 126-130, 134, 137, 139, 141, 143)
# * curl/wget errors (6, 7, 22, 28, 35)
# * Generic/Shell errors (1-3, 10, 124-132, 134, 137, 139, 141, 143-146)
# * curl/wget errors (4-8, 16, 18, 22-28, 30, 32-36, 39, 44-48, 51-52, 55-57, 59, 61, 63, 75, 78-79, 92, 95)
# * Package manager errors (APT, DPKG: 100-102, 255)
# * BSD sysexits (64-78)
# * Systemd/Service errors (150-154)
# * Python/pip/uv errors (160-162)
# * PostgreSQL errors (170-173)
# * MySQL/MariaDB errors (180-183)
# * MongoDB errors (190-193)
# * Proxmox custom codes (200-231)
# * Node.js/npm errors (243, 245-249)
# * Node.js/npm errors (239, 243, 245-249)
# - Returns description string for given exit code
# ------------------------------------------------------------------------------
explain_exit_code() {
@@ -135,30 +136,87 @@ explain_exit_code() {
# --- Generic / Shell ---
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
# --- curl / wget errors (commonly seen in downloads) ---
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
# --- Package manager / APT / DPKG ---
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
# --- BSD sysexits.h (64-78) ---
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
# --- Common shell/system errors ---
124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
# --- Systemd / Service errors (150-154) ---
150) echo "Systemd: Service failed to start" ;;
@@ -166,7 +224,6 @@ explain_exit_code() {
152) echo "Permission denied (EACCES)" ;;
153) echo "Build/compile failed (make/gcc/cmake)" ;;
154) echo "Node.js: Native addon build failed (node-gyp)" ;;
# --- Python / pip / uv (160-162) ---
160) echo "Python: Virtualenv / uv environment missing or broken" ;;
161) echo "Python: Dependency resolution failed" ;;
@@ -217,7 +274,8 @@ explain_exit_code() {
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
# --- Node.js / npm / pnpm / yarn (243-249) ---
# --- Node.js / npm / pnpm / yarn (239-249) ---
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
@@ -624,6 +682,8 @@ EOF
curl -fsS -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" &>/dev/null || true
POST_TO_API_DONE=true
}
# ------------------------------------------------------------------------------
@@ -848,6 +908,9 @@ categorize_error() {
# Network errors (curl/wget)
6 | 7 | 22 | 35) echo "network" ;;
# Docker / Privileged mode required
10) echo "config" ;;
# Timeout errors
28 | 124 | 211) echo "timeout" ;;
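Together the two helpers feed both the telemetry payload and the recovery menu. A usage sketch, assuming api.func is sourced (137 is just an example code):

ec=137
echo "Exit ${ec}: $(explain_exit_code "$ec") [$(categorize_error "$ec")]"
# explain_exit_code 137 -> "Killed (SIGKILL / Out of memory?)"; the bracketed part is whatever category the code maps to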

View File

@@ -297,7 +297,7 @@ validate_container_id() {
# Falls back gracefully if pvesh unavailable or returns empty
if command -v pvesh &>/dev/null; then
local cluster_ids
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true)
if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then
return 1
@@ -3427,6 +3427,7 @@ start() {
VERBOSE="no"
set_std_mode
ensure_profile_loaded
get_lxc_ip
update_script
update_motd_ip
cleanup_lxc
@@ -3454,6 +3455,7 @@ start() {
;;
esac
ensure_profile_loaded
get_lxc_ip
update_script
update_motd_ip
cleanup_lxc
@@ -4038,6 +4040,13 @@ EOF'
msg_ok "Customized LXC Container"
# Optional DNS override for retry scenarios (inside LXC, never on host)
if [[ "${DNS_RETRY_OVERRIDE:-false}" == "true" ]]; then
msg_info "Applying DNS retry override in LXC (8.8.8.8, 1.1.1.1)"
pct exec "$CTID" -- bash -c "printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' >/etc/resolv.conf" >/dev/null 2>&1 || true
msg_ok "DNS override applied in LXC"
fi
# Install SSH keys
install_ssh_keys_into_ct
@@ -4150,32 +4159,322 @@ EOF'
# Prompt user for cleanup with 60s timeout
echo ""
echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
# Detect error type for smart recovery options
local is_oom=false
local is_network_issue=false
local is_apt_issue=false
local is_cmd_not_found=false
local error_explanation=""
if declare -f explain_exit_code >/dev/null 2>&1; then
error_explanation="$(explain_exit_code "$install_exit_code")"
fi
# OOM detection: exit codes 134 (SIGABRT/heap), 137 (SIGKILL/OOM), 243 (Node.js heap)
if [[ $install_exit_code -eq 134 || $install_exit_code -eq 137 || $install_exit_code -eq 243 ]]; then
is_oom=true
fi
# APT/DPKG detection: exit codes 100-102 (APT), 255 (DPKG with log evidence)
case "$install_exit_code" in
100 | 101 | 102) is_apt_issue=true ;;
255)
if [[ -f "$combined_log" ]] && grep -qiE 'dpkg|apt-get|apt\.conf|broken packages|unmet dependencies|E: Sub-process|E: Failed' "$combined_log"; then
is_apt_issue=true
fi
;;
esac
# Command not found detection
if [[ $install_exit_code -eq 127 ]]; then
is_cmd_not_found=true
fi
# Network-related detection (curl/apt/git fetch failures and transient network issues)
case "$install_exit_code" in
6 | 7 | 22 | 28 | 35 | 52 | 56 | 57 | 75 | 78) is_network_issue=true ;;
100)
# APT can fail due to network (Failed to fetch)
if [[ -f "$combined_log" ]] && grep -qiE 'Failed to fetch|Could not resolve|Connection failed|Network is unreachable|Temporary failure resolving' "$combined_log"; then
is_network_issue=true
fi
;;
128)
if [[ -f "$combined_log" ]] && grep -qiE 'RPC failed|early EOF|fetch-pack|HTTP/2 stream|Could not resolve host|Temporary failure resolving|Failed to fetch|Connection reset|Network is unreachable' "$combined_log"; then
is_network_issue=true
fi
;;
esac
# Exit 1 subclassification: analyze logs to identify actual root cause
# Many exit 1 errors are actually APT, OOM, network, or command-not-found issues
if [[ $install_exit_code -eq 1 && -f "$combined_log" ]]; then
if grep -qiE 'E: Unable to|E: Package|E: Failed to fetch|dpkg.*error|broken packages|unmet dependencies|dpkg --configure -a' "$combined_log"; then
is_apt_issue=true
fi
if grep -qiE 'Cannot allocate memory|Out of memory|oom-killer|Killed process|JavaScript heap' "$combined_log"; then
is_oom=true
fi
if grep -qiE 'Could not resolve|DNS|Connection refused|Network is unreachable|No route to host|Temporary failure resolving|Failed to fetch' "$combined_log"; then
is_network_issue=true
fi
if grep -qiE ': command not found|No such file or directory.*/s?bin/' "$combined_log"; then
is_cmd_not_found=true
fi
fi
# Show error explanation if available
if [[ -n "$error_explanation" ]]; then
echo -e "${TAB}${RD}Error: ${error_explanation}${CL}"
echo ""
fi
# Show specific hints for known error types
if [[ $install_exit_code -eq 10 ]]; then
echo -e "${TAB}${INFO} This error usually means the container needs ${GN}privileged${CL} mode or Docker/nesting support."
echo -e "${TAB}${INFO} Recreate with: Advanced Install → Container Type: ${GN}Privileged${CL}"
echo ""
fi
if [[ $install_exit_code -eq 125 || $install_exit_code -eq 126 ]]; then
echo -e "${TAB}${INFO} The command exists but cannot be executed. This may be a ${GN}permission${CL} issue."
echo -e "${TAB}${INFO} If using Docker, ensure the container is ${GN}privileged${CL} or has correct permissions."
echo ""
fi
if [[ "$is_cmd_not_found" == true ]]; then
local missing_cmd=""
if [[ -f "$combined_log" ]]; then
missing_cmd=$(grep -oiE '[a-zA-Z0-9_.-]+: command not found' "$combined_log" | tail -1 | sed 's/: command not found//')
fi
if [[ -n "$missing_cmd" ]]; then
echo -e "${TAB}${INFO} Missing command: ${GN}${missing_cmd}${CL}"
fi
echo ""
fi
# Build recovery menu based on error type
echo -e "${YW}What would you like to do?${CL}"
echo ""
echo -e " ${GN}1)${CL} Remove container and exit"
echo -e " ${GN}2)${CL} Keep container for debugging"
echo -e " ${GN}3)${CL} Retry with verbose mode (full rebuild)"
local next_option=4
local APT_OPTION="" OOM_OPTION="" DNS_OPTION=""
if [[ "$is_apt_issue" == true ]]; then
if [[ "$var_os" == "alpine" ]]; then
echo -e " ${GN}${next_option})${CL} Repair APK state and re-run install (in-place)"
else
echo -e " ${GN}${next_option})${CL} Repair APT/DPKG state and re-run install (in-place)"
fi
APT_OPTION=$next_option
next_option=$((next_option + 1))
fi
if [[ "$is_oom" == true ]]; then
local recovery_attempt="${RECOVERY_ATTEMPT:-0}"
if [[ $recovery_attempt -lt 2 ]]; then
local new_ram=$((RAM_SIZE * 2))
local new_cpu=$((CORE_COUNT * 2))
echo -e " ${GN}${next_option})${CL} Retry with more resources (RAM: ${RAM_SIZE}${new_ram} MiB, CPU: ${CORE_COUNT}${new_cpu} cores)"
OOM_OPTION=$next_option
next_option=$((next_option + 1))
else
echo -e " ${DGN}-)${CL} ${DGN}OOM retry exhausted (already retried ${recovery_attempt}x)${CL}"
fi
fi
if [[ "$is_network_issue" == true ]]; then
echo -e " ${GN}${next_option})${CL} Retry with DNS override in LXC (8.8.8.8 / 1.1.1.1)"
DNS_OPTION=$next_option
next_option=$((next_option + 1))
fi
local max_option=$((next_option - 1))
echo ""
echo -en "${YW}Select option [1-${max_option}] (default: 1, auto-remove in 60s): ${CL}"
if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
case "${response:-1}" in
1)
# Remove container
echo ""
msg_info "Removing container ${CTID}"
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID}${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
msg_ok "Container ${CTID} removed"
elif [[ "$response" =~ ^[Nn]$ ]]; then
echo ""
msg_warn "Container ${CTID} kept for debugging"
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
;;
2)
echo -e "\n${TAB}${YW}Container ${CTID} kept for debugging${CL}"
# Dev mode: Setup MOTD/SSH for debugging access to broken container
if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then
echo -e "${TAB}${HOLD}${DGN}Setting up MOTD and SSH for debugging...${CL}"
if pct exec "$CTID" -- bash -c "
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/install.func)
declare -f motd_ssh >/dev/null 2>&1 && motd_ssh || true
" >/dev/null 2>&1; then
local ct_ip=$(pct exec "$CTID" ip a s dev eth0 2>/dev/null | awk '/inet / {print $2}' | cut -d/ -f1)
echo -e "${BFR}${CM}${GN}MOTD/SSH ready - SSH into container: ssh root@${ct_ip}${CL}"
fi
fi
exit $install_exit_code
;;
3)
# Retry with verbose mode (full rebuild)
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
# Get new container ID
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export VERBOSE="yes"
export var_verbose="yes"
# Show rebuild summary
echo -e "${YW}Rebuilding with preserved settings:${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
# Re-run build_container
build_container
return $?
;;
*)
# Handle dynamic smart recovery options via named option variables
local handled=false
if [[ -n "${APT_OPTION}" && "${response}" == "${APT_OPTION}" ]]; then
# Package manager in-place repair: fix broken state and re-run install script
handled=true
if [[ "$var_os" == "alpine" ]]; then
echo -e "\n${TAB}${HOLD}${YW}Repairing APK state in container ${CTID}...${CL}"
pct exec "$CTID" -- ash -c "
apk fix 2>/dev/null || true
apk cache clean 2>/dev/null || true
apk update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APK state repaired in container ${CTID}${CL}"
else
echo -e "\n${TAB}${HOLD}${YW}Repairing APT/DPKG state in container ${CTID}...${CL}"
pct exec "$CTID" -- bash -c "
DEBIAN_FRONTEND=noninteractive dpkg --configure -a 2>/dev/null || true
apt-get -f install -y 2>/dev/null || true
apt-get clean 2>/dev/null
apt-get update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APT/DPKG state repaired in container ${CTID}${CL}"
fi
echo ""
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Re-running installation in existing container ${CTID}:${CL}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Re-running installation script..."
# Re-run install script in existing container (don't destroy/recreate)
set +Eeuo pipefail
trap - ERR
lxc-attach -n "$CTID" -- bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/install/${var_install}.sh)"
local apt_retry_exit=$?
set -Eeuo pipefail
trap 'error_handler' ERR
# Check for error flag from retry
local apt_retry_code=0
if [[ -n "${SESSION_ID:-}" ]]; then
local retry_error_flag="/root/.install-${SESSION_ID}.failed"
if pct exec "$CTID" -- test -f "$retry_error_flag" 2>/dev/null; then
apt_retry_code=$(pct exec "$CTID" -- cat "$retry_error_flag" 2>/dev/null || echo "1")
pct exec "$CTID" -- rm -f "$retry_error_flag" 2>/dev/null || true
fi
fi
if [[ $apt_retry_code -eq 0 && $apt_retry_exit -ne 0 ]]; then
apt_retry_code=$apt_retry_exit
fi
if [[ $apt_retry_code -eq 0 ]]; then
msg_ok "Installation completed successfully after APT repair!"
post_update_to_api "done" "0" "force"
return 0
else
msg_error "Installation still failed after APT repair (exit code: ${apt_retry_code})"
install_exit_code=$apt_retry_code
fi
fi
if [[ -n "${OOM_OPTION}" && "${response}" == "${OOM_OPTION}" ]]; then
# Retry with doubled resources
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with more resources...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
local old_ram="$RAM_SIZE"
local old_cpu="$CORE_COUNT"
export CTID=$(get_valid_container_id "$CTID")
export RAM_SIZE=$((RAM_SIZE * 2))
export CORE_COUNT=$((CORE_COUNT * 2))
export var_ram="$RAM_SIZE"
export var_cpu="$CORE_COUNT"
export VERBOSE="yes"
export var_verbose="yes"
export RECOVERY_ATTEMPT=$(( ${RECOVERY_ATTEMPT:-0} + 1 ))
echo -e "${YW}Rebuilding with increased resources (attempt ${RECOVERY_ATTEMPT}/2):${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " RAM: ${old_ram}${GN}${RAM_SIZE}${CL} MiB (x2)"
echo -e " CPU: ${old_cpu}${GN}${CORE_COUNT}${CL} cores (x2)"
echo -e " Disk: ${DISK_SIZE} GB | Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ -n "${DNS_OPTION}" && "${response}" == "${DNS_OPTION}" ]]; then
# Retry with DNS override in LXC
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with DNS override...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export DNS_RETRY_OVERRIDE="true"
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Rebuilding with DNS override in LXC:${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " DNS: ${GN}8.8.8.8, 1.1.1.1${CL} (inside LXC only)"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ "$handled" == false ]]; then
echo -e "\n${TAB}${YW}Invalid option. Container ${CTID} kept.${CL}"
exit $install_exit_code
fi
;;
esac
else
# Timeout - auto-remove
echo ""
@@ -5253,20 +5552,27 @@ ensure_log_on_host() {
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - Posts failure status with exit code to API (error description resolved automatically)
# - Only executes on non-zero exit codes
# - For non-zero exit codes: posts "failed" status
# - For zero exit codes where post_update_to_api was never called:
# catches orphaned "installing" records (e.g., script exited cleanly
# but description() was never reached)
# ------------------------------------------------------------------------------
api_exit_script() {
exit_code=$?
local exit_code=$?
if [ $exit_code -ne 0 ]; then
ensure_log_on_host
post_update_to_api "failed" "$exit_code"
elif [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
# Script exited with 0 but never sent a completion status
# exit_code=0 is never an error — report as success
post_update_to_api "done" "0"
fi
}
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'ensure_log_on_host; post_update_to_api "failed" "$?"' ERR
trap 'local _ec=$?; if [[ $_ec -ne 0 ]]; then ensure_log_on_host; post_update_to_api "failed" "$_ec"; fi' ERR
trap 'ensure_log_on_host; post_update_to_api "failed" "129"; exit 129' SIGHUP
trap 'ensure_log_on_host; post_update_to_api "failed" "130"; exit 130' SIGINT
trap 'ensure_log_on_host; post_update_to_api "failed" "143"; exit 143' SIGTERM

View File

@@ -37,24 +37,79 @@ if ! declare -f explain_exit_code &>/dev/null; then
case "$code" in
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
150) echo "Systemd: Service failed to start" ;;
151) echo "Systemd: Service unit not found" ;;
152) echo "Permission denied (EACCES)" ;;
@@ -100,6 +155,7 @@ if ! declare -f explain_exit_code &>/dev/null; then
224) echo "Proxmox: PBS storage is for backups only" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;

View File

@@ -40,6 +40,25 @@ catch_errors
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
# ==============================================================================
# SECTION 2: NETWORK & CONNECTIVITY
# ==============================================================================
@@ -103,6 +122,7 @@ setting_up_container() {
msg_ok "Set up Container OS"
#msg_custom "${CM}" "${GN}" "Network Connected: ${BL}$(hostname -I)"
msg_ok "Network Connected: ${BL}$(hostname -I)"
post_progress_to_api
}
# ------------------------------------------------------------------------------
@@ -172,7 +192,7 @@ network_check() {
fi
set -e
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
trap 'error_handler' ERR
}
# ==============================================================================
@@ -206,8 +226,18 @@ EOF
$STD apt-get -o Dpkg::Options::="--force-confold" -y dist-upgrade
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
msg_ok "Updated Container OS"
post_progress_to_api
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
}
# ==============================================================================

View File

@@ -1851,16 +1851,26 @@ function download_with_progress() {
# Ensures /usr/local/bin is permanently in system PATH.
#
# Description:
# - Adds to /etc/profile.d if not present
# - Adds to /etc/profile.d for login shells (SSH, noVNC)
# - Adds to /root/.bashrc for non-login shells (pct enter)
# ------------------------------------------------------------------------------
function ensure_usr_local_bin_persist() {
local PROFILE_FILE="/etc/profile.d/custom_path.sh"
# Skip on Proxmox host
command -v pveversion &>/dev/null && return
if [[ ! -f "$PROFILE_FILE" ]] && ! command -v pveversion &>/dev/null; then
# Login shells: /etc/profile.d/
local PROFILE_FILE="/etc/profile.d/custom_path.sh"
if [[ ! -f "$PROFILE_FILE" ]]; then
echo 'export PATH="/usr/local/bin:$PATH"' >"$PROFILE_FILE"
chmod +x "$PROFILE_FILE"
fi
# Non-login shells (pct enter): /root/.bashrc
local BASHRC="/root/.bashrc"
if [[ -f "$BASHRC" ]] && ! grep -q '/usr/local/bin' "$BASHRC"; then
echo 'export PATH="/usr/local/bin:$PATH"' >>"$BASHRC"
fi
}
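The split matters because pct enter spawns a shell that never reads /etc/profile.d. A verification sketch (the container ID 100 is a placeholder):

pct exec 100 -- bash -lc 'echo $PATH'   # login shell, reads /etc/profile.d/custom_path.sh
pct exec 100 -- bash -ic 'echo $PATH'   # interactive non-login shell, reads /root/.bashrc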
# ------------------------------------------------------------------------------

View File

@@ -529,9 +529,21 @@ cleanup_vmid() {
}
cleanup() {
local exit_code=$?
if [[ "$(dirs -p | wc -l)" -gt 1 ]]; then
popd >/dev/null || true
fi
# Report final telemetry status if post_to_api_vm was called but no update was sent
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
# Exited cleanly but description()/success was never called — shouldn't happen
post_update_to_api "failed" "1"
fi
fi
fi
}
check_root() {

View File

@@ -104,6 +104,10 @@ function update() {
$STD npm run build
msg_ok "Built ${APP}"
msg_info "Updating service"
create_service
msg_ok "Updated service"
msg_info "Starting service"
systemctl start immich-proxy
msg_ok "Started service"
@@ -112,6 +116,27 @@ function update() {
fi
}
function create_service() {
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}/app
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/dist/index.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
}
# ==============================================================================
# INSTALL
# ==============================================================================
@@ -173,23 +198,7 @@ EOF
msg_ok "Created configuration"
msg_info "Creating service"
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/server.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
create_service
systemctl enable -q --now immich-proxy
msg_ok "Created and started service"

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -104,8 +104,16 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
# Only send telemetry if post_to_api_vm was called (installing status was sent)
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -101,8 +101,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,8 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -105,7 +105,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -79,8 +79,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -101,8 +101,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -109,8 +109,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -97,7 +97,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -100,7 +100,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -99,7 +99,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}

View File

@@ -99,8 +99,15 @@ function cleanup_vmid() {
}
function cleanup() {
local exit_code=$?
popd >/dev/null
post_update_to_api "done" "none"
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -eq 0 ]]; then
post_update_to_api "done" "none"
else
post_update_to_api "failed" "$exit_code"
fi
fi
rm -rf $TEMP_DIR
}