Compare commits


44 Commits

Author SHA1 Message Date
github-actions[bot]
829159133b chore: update github-versions.json 2026-02-19 00:22:37 +00:00
CanbiZ (MickLesk)
b6a4e6a2a6 OPNsense: add disk space check | increase disk space (#12058)
* Fix: Add disk space checking for OPNsense VM FreeBSD image decompression

- Add check_disk_space() function to verify available storage
- Check for 20GB before download and 15GB before decompression
- Provide clear error messages showing available vs required space
- Add proper error handling for unxz decompression failures
- Clean up compressed .xz file after decompression to save space
- Add progress messages for download and decompression steps

Fixes an issue where the script fails at line 611 with 'No space left on device'
when the /tmp directory lacks sufficient space for the ~10-15GB decompressed image.

* Increase OPNsense VM disk size from 10GB to 20GB

- Provides more space for system updates, logs, and package installations
- 20GB is a more appropriate size for OPNsense production use
2026-02-18 22:06:04 +01:00
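
A minimal sketch of the disk space guard described above (function name, thresholds and paths follow the commit message; the exact implementation in the OPNsense VM script may differ):

check_disk_space() {
  local required_gb="$1" target_dir="${2:-/tmp}"
  local available_gb
  # df -BG reports whole gigabytes; --output=avail keeps only the "Avail" column
  available_gb=$(df -BG --output=avail "$target_dir" | tail -n1 | tr -dc '0-9')
  if ((available_gb < required_gb)); then
    echo "Insufficient space in ${target_dir}: ${available_gb}GB available, ${required_gb}GB required" >&2
    return 1
  fi
}

# 20GB before download, 15GB before decompression (per the commit message)
check_disk_space 20 /tmp || exit 1
# curl -fL -o /tmp/opnsense.img.xz "$IMAGE_URL"   # hypothetical download step
check_disk_space 15 /tmp || exit 1
# unxz /tmp/opnsense.img.xz || { echo "Decompression failed" >&2; exit 1; }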
community-scripts-pr-app[bot]
96c056ea4e chore: update github-versions.json (#12067)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 18:17:12 +00:00
CanbiZ (MickLesk)
491081ffbf Add post_progress_to_api lightweight telemetry ping
Introduce post_progress_to_api() in misc/api.func — a non-blocking, fire-and-forget curl ping (gated by DIAGNOSTICS and RANDOM_UUID) that updates telemetry status to "configuring". Wire this progress ping into multiple scripts (alpine-install.func, install.func, build.func, core.func) at key milestones (container start, network ready, customization, creation, cleanup) and replace/deduplicate some earlier post_to_api calls. Also update error_handler.func to always report failures immediately via post_update_to_api to ensure failures are captured even before/after container lifecycle.
2026-02-18 16:19:19 +01:00
community-scripts-pr-app[bot]
1123fdca14 Update CHANGELOG.md (#12064)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 12:53:37 +00:00
Chris
a3a383361d [Fix] PatchMon: use SERVER_PORT in Nginx config if set in env (#12053)
- This PR will add the ability to change the PatchMon listen port in the
Nginx config during update, if the `SERVER_PORT` env var is set in
`/opt/patchmon/backend.env` and if the port is not 443
- If not set, or if set and the port is 443, then no changes are made to
the listen port in the Nginx config
2026-02-18 13:53:08 +01:00
CanbiZ (MickLesk)
6cc8877852 Add timeouts and prioritize telemetry on exit
Prevent hangs when pulling logs from containers by wrapping pct pull calls with timeout (8s) and running ensure_log_on_host under timeout (10s). Always send telemetry (post_update_to_api) before attempting best-effort log collection so status is reported even if log retrieval blocks. Update EXIT/ERR/SIGHUP/SIGINT/SIGTERM traps and consolidate error/interrupt handlers to use the new timeouted log collection. Changes in misc/build.func and misc/error_handler.func.
2026-02-18 13:14:59 +01:00
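
A simplified sketch of that ordering (stubbed telemetry function for illustration; the real logic lives in misc/build.func and misc/error_handler.func):

CTID="${CTID:-105}"                                   # hypothetical container ID

post_update_to_api() {                                # stub standing in for misc/api.func
  echo "telemetry: status=$1 exit_code=$2"
}

collect_log_with_timeout() {
  # timeout(1) kills pct pull after 8s if the container no longer responds
  timeout 8 pct pull "$CTID" /root/install.log "/tmp/install-${CTID}.log" 2>/dev/null || true
}

on_failure() {
  local exit_code=$?
  post_update_to_api "failed" "$exit_code" || true    # telemetry first, never skipped
  collect_log_with_timeout                            # then best-effort, bounded log pull
}
trap on_failure ERR SIGHUP SIGINT SIGTERM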
community-scripts-pr-app[bot]
845b89f975 chore: update github-versions.json (#12061)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 12:13:39 +00:00
community-scripts-pr-app[bot]
be26dc33dd Update CHANGELOG.md (#12057)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 09:24:29 +00:00
CanbiZ (MickLesk)
b439960222 core: Execution ID & Telemetry Improvements (#12041)
* fix: send telemetry BEFORE log collection in signal handlers

- Swap ensure_log_on_host/post_update_to_api order in on_interrupt, on_terminate, api_exit_script, and inline SIGHUP/SIGINT/SIGTERM traps
- For signal exits (>128): send telemetry immediately, then best-effort log collection
- Add 2>/dev/null || true to all I/O in signal handlers to prevent SIGPIPE
- Fix on_exit: exit_code=0 now reports 'done' instead of 'failed 1'
- Root cause: pct pull hanging on dying containers blocked telemetry updates, leaving 595+ records stuck in 'installing' daily

* feat: add execution_id to all telemetry payloads

- Generate EXECUTION_ID from RANDOM_UUID in variables()
- Export EXECUTION_ID to container environment
- Add execution_id field to all 8 API payloads in api.func
- Add execution_id to post_progress_to_api in install.func and alpine-install.func
- Fallback to RANDOM_UUID when EXECUTION_ID not set (backward compat)

* fix: correct telemetry type values for PVE and addon scripts

- PVE scripts (tools/pve/*): change type 'tool' -> 'pve'
- Addon scripts (tools/addon/*): fix 4 scripts that wrongly used 'tool' -> 'addon'
  (netdata, add-tailscale-lxc, add-netbird-lxc, all-templates)
- api.func: post_tool_to_api sends type='pve', default fallback 'pve'
- Aligns with PocketBase categories: lxc, vm, pve, addon

* fix: persist diagnostics opt-in inside containers for addon telemetry

- install.func + alpine-install.func: create /usr/local/community-scripts/diagnostics
  inside the container when DIAGNOSTICS=yes (from build.func export)
- Enables addon scripts running later inside containers to find the opt-in
- Update init_tool_telemetry default type from 'tool' to 'pve'

* refactor: clean up diagnostics/telemetry opt-in system

- diagnostics_check(): deduplicate heredoc (was 2x 22 lines), improve whiptail
  text with clear what/what-not collected, add telemetry + privacy links
- diagnostics_menu(): better UX with current status, clear enable/disable
  buttons, note about existing containers
- variables(): change DIAGNOSTICS default from 'yes' to 'no' (safe: no
  telemetry before user consents via diagnostics_check)
- install.func + alpine-install.func: persist BOTH yes AND no in container
  so opt-out is explicit (not just missing file = no)
- Fix typo 'menue' -> 'menu' in config file comments

* fix: no pre-selection in telemetry dialog, link to telemetry-service README

- Add --defaultno so 'No, opt out' is focused by default (user must Tab to Yes)
- Change privacy link from discussions/1836 to telemetry-service#privacy--compliance

* fix: use radiolist for telemetry dialog (no pre-selection)

- Replace --yesno with --radiolist: user must actively SPACE-select an option
- Both options start as OFF (no pre-selection)
- Cancel/Exit defaults to 'no' (opt-out)

* simplify: inline telemetry dialog text like other whiptail dialogs

* improve: telemetry dialog with more detail, link to PRIVACY.md

- Add what we collect / don't collect sections back to dialog
- Link to telemetry-service/docs/PRIVACY.md instead of README anchor
- Update config file comment with same link
2026-02-18 10:24:06 +01:00
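
A minimal sketch of the radiolist variant described above (dialog text, sizes and tags are illustrative, not the exact build.func wording; the UUID generation method shown is an assumption). Cancel or an empty selection falls back to opting out:

CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "Diagnostics / Telemetry" \
  --radiolist "Send anonymous installation diagnostics? Use SPACE to select, ENTER to confirm." \
  15 70 2 \
  "yes" "Enable anonymous diagnostics" OFF \
  "no" "Opt out (no telemetry)" OFF \
  3>&1 1>&2 2>&3) || CHOICE="no"
DIAGNOSTICS="${CHOICE:-no}"

RANDOM_UUID="${RANDOM_UUID:-$(cat /proc/sys/kernel/random/uuid)}"   # assumed generation method
EXECUTION_ID="${EXECUTION_ID:-$RANDOM_UUID}"                        # fallback for backward compatibility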
community-scripts-pr-app[bot]
b4a5d28957 chore: update github-versions.json (#12054)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 06:25:12 +00:00
community-scripts-pr-app[bot]
eaa69d58be Update CHANGELOG.md (#12052)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 00:22:33 +00:00
community-scripts-pr-app[bot]
3ffff334a0 chore: update github-versions.json (#12051)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 00:22:12 +00:00
community-scripts-pr-app[bot]
38f04f4dcc Update CHANGELOG.md (#12048)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 20:14:36 +00:00
Andreas Abeck
fdbcee3a93 fix according to issue #12045 (#12047) 2026-02-17 21:14:12 +01:00
community-scripts-pr-app[bot]
ce11ba8f27 Update CHANGELOG.md (#12046)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 19:18:03 +00:00
Chris
43a0a078f5 [Hotfix] Cleanuparr: backup config before update (#12039) 2026-02-17 20:17:22 +01:00
community-scripts-pr-app[bot]
646dabf0f0 chore: update github-versions.json (#12043)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 18:20:21 +00:00
community-scripts-pr-app[bot]
2582c1f63b Update CHANGELOG.md (#12040)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 15:36:48 +00:00
CanbiZ (MickLesk)
3ce3c6f613 tools/pve: add data analytics / formatting / linting (#12034)
* core: add progress; fix exit status

Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.

* Safer tools.func load and improved error handling

Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.

* tools: add data init and auto-reporting to tools and pve section

Introduce telemetry helpers in misc/api.func: _telemetry_report_exit (reports success/failure via post_tool_to_api/post_addon_to_api) and init_tool_telemetry (reads DIAGNOSTICS, starts install timer and installs an EXIT trap to auto-report). Integrate telemetry into many tools/addon and tools/pve scripts by sourcing the remote api.func and calling init_tool_telemetry (guarded with declare -f). Also apply a minor arithmetic formatting tweak in misc/build.func for RECOVERY_ATTEMPT.
2026-02-17 16:36:20 +01:00
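
The integration pattern from the last bullet, as a rough sketch of what a tools/pve or tools/addon script might do (assuming api.func exposes init_tool_telemetry as described; the guard keeps the tool working even if the download fails):

source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) || true
if declare -f init_tool_telemetry >/dev/null 2>&1; then
  # reads DIAGNOSTICS, starts the install timer and installs an EXIT trap
  # that auto-reports success or failure (default type 'pve' per #12041)
  init_tool_telemetry
fi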
community-scripts-pr-app[bot]
97652792be Update CHANGELOG.md (#12031)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 12:25:42 +00:00
CanbiZ (MickLesk)
f07f2cb04e core: error-handler improvements | better exit_code handling | better tools.func source check (#12019)
* core: add progress; fix exit status

Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.

* Safer tools.func load and improved error handling

Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.
2026-02-17 13:25:17 +01:00
community-scripts-pr-app[bot]
5f73f9d5e6 chore: update github-versions.json (#12030)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 12:14:28 +00:00
community-scripts-pr-app[bot]
0183ae0fff Update CHANGELOG.md (#12029)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:16:34 +00:00
CanbiZ (MickLesk)
32d1937a74 Refactor: centralize systemd service creation (#12025)
Introduce create_service() to generate the immich-proxy systemd unit and run systemctl daemon-reload. Replace duplicated heredoc service blocks in install with a call to create_service, and invoke create_service during update before starting the service. Adjust unit WorkingDirectory to ${INSTALL_PATH}/app and ExecStart to run dist/index.js.
2026-02-17 12:16:09 +01:00
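
A sketch of what such a centralized helper can look like (unit name, node path and the INSTALL_PATH default are assumptions, not necessarily the exact unit shipped by the script):

INSTALL_PATH="${INSTALL_PATH:-/opt/immich-public-proxy}"

create_service() {
  # single source of truth for the unit, shared by install and update paths
  cat <<EOF >/etc/systemd/system/immich-proxy.service
[Unit]
Description=Immich Public Proxy
After=network.target

[Service]
Type=simple
WorkingDirectory=${INSTALL_PATH}/app
ExecStart=/usr/bin/node dist/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  systemctl daemon-reload
}

create_service
systemctl enable -q --now immich-proxy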
community-scripts-pr-app[bot]
0a7bd20b06 Update CHANGELOG.md (#12028)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:15:14 +00:00
CanbiZ (MickLesk)
c9ecb1ccca core: smart recovery for failed installs | extend exit_codes (#11221)
* feat(build.func): smart error recovery menu for failed installations

Replace simple Y/n removal prompt with interactive recovery menu:

- Option 1: Remove container and exit (default, auto after 60s timeout)
- Option 2: Keep container for debugging
- Option 3: Retry installation with verbose mode enabled
- Option 4: Retry with 1.5x RAM and +1 CPU core (OOM errors only)

Improvements:
- Detect OOM errors (exit codes 137, 243) and offer resource increase
- Show human-readable error explanation using explain_exit_code()
- Recursive rebuild preserves ALL settings from advanced/app.vars/default.vars
- Settings preserved: Network (IP, Gateway, VLAN, MTU, Bridge), Features
  (Nesting, FUSE, TUN, GPU), Storage, SSH keys, Tags, Hostname, etc.
- Show rebuild summary before retry (old→new CTID, resources, network)
- New container ID generated automatically for rebuilds

This helps users recover from transient failures without re-running
the entire script manually.

* fix(api.func): fix duplicate exit codes and add missing error codes

Exit code fixes:
- Remove duplicate definitions for codes 243, 254 (Node.js vs DB)
- Reassign MySQL/MariaDB to 240-242, 244 (was 241-244)
- Reassign MongoDB to 250-253 (was 251-254)

New exit codes added (based on GitHub issues analysis):
- 6: curl couldn't resolve host (DNS failure)
- 7: curl failed to connect (network unreachable)
- 22: curl HTTP error (404, 429 rate limit, 500)
- 28: curl timeout (very common in download failures)
- 35: curl SSL error
- 102: APT lock held by another process
- 124: Command timeout
- 141: SIGPIPE (broken pipe)

Also update OOM detection to include exit code 134 (SIGABRT)
which is commonly seen in Node.js heap overflow issues.

Fixes based on analysis of ~500 GitHub issues.

* fix(exit-codes): sync error_handler.func and api.func with conflict-free code ranges

- Add curl error codes (6, 7, 22, 28, 35)
- Add APT lock code (102), timeout (124), signals (134, 141)
- Move Python codes: 210-212 → 160-162 (avoid Proxmox conflict)
- Move PostgreSQL codes: 231-234 → 170-173
- Move MySQL/MariaDB codes: 241-244 → 180-183
- Move MongoDB codes: 251-254 → 190-193
- Keep Node.js at 243-249, Proxmox at 200-231
- Both files now synchronized with identical mappings

* feat(exit-codes): add systemd and build error codes (150-154)

- 150: Systemd service failed to start
- 151: Systemd service unit not found
- 152: Permission denied (EACCES)
- 153: Build/compile failed (make/gcc/cmake)
- 154: Node.js native addon build failed (node-gyp)

Based on issue analysis: 57 service failures, 25 build failures, 22 node-gyp issues

* fix(build): restore smart recovery and add OOM/DNS retry paths

* feat(build): APT in-place repair, exit 1 subclassification, new exit codes

- Add APT/DPKG in-place recovery: detects exit 100/101/102/255 and exit 1
  with APT log patterns, offers to repair dpkg state and re-run install
  script without destroying the container
- Add exit 1 subclassification: analyzes combined log to identify root
  cause (APT, OOM, network, command-not-found) and routes to appropriate
  recovery option
- Add exit 10 hint: shows privileged mode / nesting suggestion
- Add exit 127 hint: extracts missing command name from logs
- Refactor recovery menu: use named option variables (APT_OPTION,
  OOM_OPTION, DNS_OPTION) instead of hardcoded option numbers, supports
  up to 6 dynamic options cleanly
- Map missing exit codes in api.func: curl 27/36/45/47/55, signals
  129 (SIGHUP) / 131 (SIGQUIT), npm 239

* feat(api+build): map 25 more exit codes, add SIGHUP trap, network/perm hints

api.func:
- Map 25+ new exit codes that were showing as 'Unknown' in telemetry:
  curl: 3, 16, 18, 24, 26, 32-34, 39, 44, 46, 48, 51, 52, 57, 59, 61,
  63, 79, 92, 95; signals: 125, 132, 144, 146
- Update code 8 description (FTP + apk untrusted key)
- Update header comment with full supported ranges

build.func:
- Add SIGHUP trap: reports 'failed/129' to API when the terminal is closed;
  this should significantly reduce the 2841 stuck 'installing' records
- Add exit 52 (empty reply) and 57 (poll error) to network issue
  detection for DNS override recovery option
- Add exit 125/126 hint: suggests privileged mode for permission errors

* fix: sync error_handler fallback, Alpine APK repair, retry limit

error_handler.func:
- Sync fallback explain_exit_code() with api.func: add 25+ codes that
  were missing (curl 16/18/24/26/27/32-34/36/39/44-48/51/52/55/57/59/
  61/63/79/92/95, signals 125/129/131/132/144/146, npm 239, code 3/8)
- Ensures consistent error descriptions even when api.func isn't loaded

build.func:
- Alpine APK repair: detect var_os=alpine and run 'apk fix && apk
  cache clean && apk update' instead of apt-get/dpkg commands
- Show 'Repair APK state' instead of 'APT/DPKG' in menu for Alpine
- Retry safety counter: OOM x2 retry limited to max 2 attempts
  (prevents infinite RAM doubling via RECOVERY_ATTEMPT env var)
- Show attempt count in rebuild summary

* fix(build): preserve exit code in ERR trap to prevent false exit_code=0

The ERR trap called ensure_log_on_host before post_update_to_api,
which reset $? to 0 (success). This caused ~15-20 records/day to be
reported as 'failed' with exit_code=0 instead of the actual error code.

Root cause chain:
1. Command fails with exit code N → ERR trap fires ($? = N)
2. ensure_log_on_host succeeds → $? becomes 0
3. post_update_to_api 'failed' '$?' → sends 'failed/0' (wrong!)
4. POST_UPDATE_DONE=true → EXIT trap skips the correct code

Fix: capture $? into _ERR_CODE before ensure_log_on_host runs.

* Implement telemetry settings and repo source detection

Add telemetry configuration and repository source detection function.
2026-02-17 12:14:46 +01:00
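
The ERR-trap fix from the 'preserve exit code' bullet reduces to capturing $? before anything else runs; a minimal, self-contained sketch with stubbed helpers:

post_update_to_api() { echo "telemetry: status=$1 exit_code=$2"; }   # stub for illustration
ensure_log_on_host() { true; }                                       # stub: succeeds and would reset $?

on_error() {
  local _ERR_CODE=$?            # capture the real exit code FIRST
  ensure_log_on_host            # may succeed; without the capture it clobbers $?
  post_update_to_api "failed" "$_ERR_CODE"
  POST_UPDATE_DONE=true         # lets the EXIT trap skip a duplicate report
}
trap on_error ERR

false                           # demo: fires the ERR trap with _ERR_CODE=1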
community-scripts-pr-app[bot]
d274a269b5 Update .app files (#12022)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 10:45:09 +01:00
community-scripts-pr-app[bot]
cbee9d64b5 Update CHANGELOG.md (#12024)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:45 +00:00
community-scripts-pr-app[bot]
ffcda217e3 Update CHANGELOG.md (#12023)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:25 +00:00
community-scripts-pr-app[bot]
438d5d6b94 Update date in json (#12021)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:19 +00:00
push-app-to-main[bot]
104366bc64 Databasus (#12018)
* Add databasus (ct)

* Update databasus.sh

* Update databasus-install.sh

* Fix backup and restore paths for Databasus config

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-17 10:40:58 +01:00
community-scripts-pr-app[bot]
9dab79f8ca Update CHANGELOG.md (#12017)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:08:35 +00:00
CanbiZ (MickLesk)
2dddeaf966 Call get_lxc_ip in start() before updates (#12015) 2026-02-17 09:08:09 +01:00
community-scripts-pr-app[bot]
fae06a3a58 Update CHANGELOG.md (#12016)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:07:22 +00:00
Tobias
137272c354 fix: pterodactyl-panel add symlink (#11997) 2026-02-17 09:06:59 +01:00
community-scripts-pr-app[bot]
52a9e23401 chore: update github-versions.json (#12013)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 06:22:15 +00:00
community-scripts-pr-app[bot]
c2333de180 Update CHANGELOG.md (#12007)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:47 +00:00
community-scripts-pr-app[bot]
ad8974894b chore: update github-versions.json (#12006)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:21 +00:00
community-scripts-pr-app[bot]
38af4be5ba Update CHANGELOG.md (#12005)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 21:38:03 +00:00
Chris
80ae1f34fa Opencloud: Pin version to 5.1.0 (#12004) 2026-02-16 22:37:35 +01:00
community-scripts-pr-app[bot]
06bc6e20d5 chore: update github-versions.json (#12001)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 18:13:41 +00:00
community-scripts-pr-app[bot]
4418e72856 Update CHANGELOG.md (#11999)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 16:14:29 +00:00
CanbiZ (MickLesk)
896714e06f core/vm's: ensure script state is sent on script exit (#11991)
* Ensure API update is sent on script exit

Add exit-time telemetry handling across scripts to avoid orphaned "installing" records. Introduce local exit_code capture in api_exit_script and cleanup handlers and, when POST_TO_API_DONE is true but POST_UPDATE_DONE is not, post a final status (marking failures on non-zero exit codes, or marking done/failed in VM cleanups based on exit code). Changes touch misc/build.func, misc/vm-core.func and various vm/*-vm.sh cleanup functions to reliably send post_update_to_api on normal or early exits.

* Update api.func

* fix(telemetry): add missing exit codes to explain_exit_code()

- Add curl error codes: 4, 5, 8, 23, 25, 30, 56, 78
- Add code 10: Docker/privileged mode required (used in ~15 scripts)
- Add code 75: Temporary failure (retry later)
- Add BSD sysexits.h codes: 64-77
- Sync error_handler.func fallback with canonical api.func
2026-02-16 17:14:00 +01:00
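
A simplified sketch of that exit-time handling (flag names follow the commit message; the real api_exit_script in misc/build.func does more):

post_update_to_api() { echo "telemetry: status=$1 exit_code=$2"; }   # stub for illustration

api_exit_script() {
  local exit_code=$?                      # capture before any other command overwrites it
  if [[ "${POST_TO_API_DONE:-false}" == "true" && "${POST_UPDATE_DONE:-false}" != "true" ]]; then
    if ((exit_code == 0)); then
      post_update_to_api "done" "0"       # normal exit that never posted a completion status
    else
      post_update_to_api "failed" "$exit_code"
    fi
    POST_UPDATE_DONE=true
  fi
}
trap api_exit_script EXIT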
82 changed files with 1922 additions and 573 deletions

View File

@@ -404,6 +404,58 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 </details>
+## 2026-02-18
+### 🚀 Updated Scripts
+- #### 💥 Breaking Changes
+- [Fix] PatchMon: use `SERVER_PORT` in Nginx config if set in env [@vhsdream](https://github.com/vhsdream) ([#12053](https://github.com/community-scripts/ProxmoxVE/pull/12053))
+### 💾 Core
+- #### ✨ New Features
+- core: Execution ID & Telemetry Improvements [@MickLesk](https://github.com/MickLesk) ([#12041](https://github.com/community-scripts/ProxmoxVE/pull/12041))
+## 2026-02-17
+### 🆕 New Scripts
+- Databasus ([#12018](https://github.com/community-scripts/ProxmoxVE/pull/12018))
+### 🚀 Updated Scripts
+- #### 🐞 Bug Fixes
+- [Hotfix] Cleanuparr: backup config before update [@vhsdream](https://github.com/vhsdream) ([#12039](https://github.com/community-scripts/ProxmoxVE/pull/12039))
+- fix: pterodactyl-panel add symlink [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11997](https://github.com/community-scripts/ProxmoxVE/pull/11997))
+### 💾 Core
+- #### 🐞 Bug Fixes
+- core: call get_lxc_ip in start() before updates [@MickLesk](https://github.com/MickLesk) ([#12015](https://github.com/community-scripts/ProxmoxVE/pull/12015))
+- #### ✨ New Features
+- tools/pve: add data analytics / formatting / linting [@MickLesk](https://github.com/MickLesk) ([#12034](https://github.com/community-scripts/ProxmoxVE/pull/12034))
+- core: smart recovery for failed installs | extend exit_codes [@MickLesk](https://github.com/MickLesk) ([#11221](https://github.com/community-scripts/ProxmoxVE/pull/11221))
+- #### 🔧 Refactor
+- core: error-handler improvements | better exit_code handling | better tools.func source check [@MickLesk](https://github.com/MickLesk) ([#12019](https://github.com/community-scripts/ProxmoxVE/pull/12019))
+### 🧰 Tools
+- #### 🔧 Refactor
+- Immich Public Proxy: centralize and fix systemd service creation [@MickLesk](https://github.com/MickLesk) ([#12025](https://github.com/community-scripts/ProxmoxVE/pull/12025))
+### 📚 Documentation
+- fix contribution/setup-fork [@andreasabeck](https://github.com/andreasabeck) ([#12047](https://github.com/community-scripts/ProxmoxVE/pull/12047))
 ## 2026-02-16
 ### 🆕 New Scripts
@@ -413,6 +465,8 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 ### 🚀 Updated Scripts
+- Opencloud: Pin version to 5.1.0 [@vhsdream](https://github.com/vhsdream) ([#12004](https://github.com/community-scripts/ProxmoxVE/pull/12004))
 - #### 🐞 Bug Fixes
 - Tududi: Fix sed command for DB_FILE configuration [@tremor021](https://github.com/tremor021) ([#11988](https://github.com/community-scripts/ProxmoxVE/pull/11988))
@@ -422,6 +476,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 - #### 🔧 Refactor
+- core/vm's: ensure script state is sent on script exit [@MickLesk](https://github.com/MickLesk) ([#11991](https://github.com/community-scripts/ProxmoxVE/pull/11991))
 - Vaultwarden: export VW_VERSION as version number [@MickLesk](https://github.com/MickLesk) ([#11966](https://github.com/community-scripts/ProxmoxVE/pull/11966))
 - Zabbix: Improve zabbix-agent service detection [@MickLesk](https://github.com/MickLesk) ([#11968](https://github.com/community-scripts/ProxmoxVE/pull/11968))

View File

@@ -32,8 +32,17 @@ function update_script() {
 systemctl stop cleanuparr
 msg_ok "Stopped Service"
+msg_info "Backing up config"
+cp -r /opt/cleanuparr/config /opt/cleanuparr_config_backup
+msg_ok "Backed up config"
 CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Cleanuparr" "Cleanuparr/Cleanuparr" "prebuild" "latest" "/opt/cleanuparr" "*linux-amd64.zip"
+msg_info "Restoring config"
+[[ -d /opt/cleanuparr/config ]] && rm -rf /opt/cleanuparr/config
+mv /opt/cleanuparr_config_backup /opt/cleanuparr/config
+msg_ok "Restored config"
 msg_info "Starting Service"
 systemctl start cleanuparr
 msg_ok "Started Service"

ct/databasus.sh (new file, 78 lines)
View File

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
APP="Databasus"
var_tags="${var_tags:-backup;postgresql;database}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /opt/databasus/databasus ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "databasus" "databasus/databasus"; then
msg_info "Stopping Databasus"
$STD systemctl stop databasus
msg_ok "Stopped Databasus"
msg_info "Backing up Configuration"
cp /opt/databasus/.env /opt/databasus.env.bak
msg_ok "Backed up Configuration"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Updating Databasus"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod download
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations /opt/databasus/
chown -R postgres:postgres /opt/databasus
msg_ok "Updated Databasus"
msg_info "Restoring Configuration"
cp /opt/databasus.env.bak /opt/databasus/.env
rm -f /opt/databasus.env.bak
chown postgres:postgres /opt/databasus/.env
msg_ok "Restored Configuration"
msg_info "Starting Databasus"
$STD systemctl start databasus
msg_ok "Started Databasus"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

ct/headers/databasus (new file, 6 lines)
View File

@@ -0,0 +1,6 @@
____ __ __
/ __ \____ _/ /_____ _/ /_ ____ ________ _______
/ / / / __ `/ __/ __ `/ __ \/ __ `/ ___/ / / / ___/
/ /_/ / /_/ / /_/ /_/ / /_/ / /_/ (__ ) /_/ (__ )
/_____/\__,_/\__/\__,_/_.___/\__,_/____/\__,_/____/

View File

@@ -29,7 +29,7 @@ function update_script() {
 exit
 fi
-RELEASE="v5.0.2"
+RELEASE="v5.1.0"
 if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
 msg_info "Stopping services"
 systemctl stop opencloud opencloud-wopi

View File

@@ -46,6 +46,7 @@ function update_script() {
 VERSION=$(get_latest_github_release "PatchMon/PatchMon")
 PROTO="$(sed -n '/SERVER_PROTOCOL/s/[^=]*=//p' /opt/backend.env)"
 HOST="$(sed -n '/SERVER_HOST/s/[^=]*=//p' /opt/backend.env)"
+SERVER_PORT="$(sed -n '/SERVER_PORT/s/[^=]*=//p' /opt/backend.env)"
 [[ "${PROTO:-http}" == "http" ]] && PORT=":3001"
 sed -i 's/PORT=3399/PORT=3001/' /opt/backend.env
 sed -i -e "s/VERSION=.*/VERSION=$VERSION/" \
@@ -66,6 +67,9 @@ function update_script() {
 -e '\|try_files |i\ root /opt/patchmon/frontend/dist;' \
 -e 's|alias.*|alias /opt/patchmon/frontend/dist/assets;|' \
 -e '\|expires 1y|i\ root /opt/patchmon/frontend/dist;' /etc/nginx/sites-available/patchmon.conf
+if [[ -n "$SERVER_PORT" ]] && [[ "$SERVER_PORT" != "443" ]]; then
+sed -i "s/listen [[:digit:]]/listen ${SERVER_PORT};/" /etc/nginx/sites-available/patchmon.conf
+fi
 ln -sf /etc/nginx/sites-available/patchmon.conf /etc/nginx/sites-enabled/
 rm -f /etc/nginx/sites-enabled/default
 $STD nginx -t

View File

@@ -71,6 +71,7 @@ EOF
 $STD php artisan migrate --seed --force --no-interaction
 chown -R www-data:www-data /opt/pterodactyl-panel/*
 chmod -R 755 /opt/pterodactyl-panel/storage /opt/pterodactyl-panel/bootstrap/cache/
+ln -s /opt/pterodactyl-panel /var/www/pterodactyl
 rm -rf "/opt/pterodactyl-panel/panel.tar.gz"
 echo "${RELEASE}" >/opt/${APP}_version.txt
 msg_ok "Updated $APP to v${RELEASE}"

View File

@@ -134,7 +134,7 @@ update_links() {
 # Find all files containing the old repo reference
 while IFS= read -r file; do
 # Count occurrences
-local count=$(grep -c "github.com/$old_repo/$old_name" "$file" 2>/dev/null || echo 0)
+local count=$(grep -E -c "(github.com|githubusercontent.com)/$old_repo/$old_name" "$file" 2>/dev/null || echo 0)
 if [[ $count -gt 0 ]]; then
 # Backup original
@@ -143,16 +143,16 @@ update_links() {
 # Replace links - use different sed syntax for BSD/macOS vs GNU sed
 if sed --version &>/dev/null 2>&1; then
 # GNU sed
-sed -i "s|github.com/$old_repo/$old_name|github.com/$new_owner/$new_repo|g" "$file"
+sed -E -i "s@(github.com|githubusercontent.com)/$old_repo/$old_name@\\1/$new_owner/$new_repo@g" "$file"
 else
 # BSD sed (macOS)
-sed -i '' "s|github.com/$old_repo/$old_name|github.com/$new_owner/$new_repo|g" "$file"
+sed -E -i '' "s@(github.com|githubusercontent.com)/$old_repo/$old_name@\\1/$new_owner/$new_repo@g" "$file"
 fi
 ((files_updated++))
 print_success "Updated $file ($count links)"
 fi
-done < <(find "$search_path" -type f \( -name "*.md" -o -name "*.sh" -o -name "*.func" -o -name "*.json" \) -not -path "*/.git/*" 2>/dev/null | xargs grep -l "github.com/$old_repo/$old_name" 2>/dev/null)
+done < <(find "$search_path" -type f \( -name "*.md" -o -name "*.sh" -o -name "*.func" -o -name "*.json" \) -not -path "*/.git/*" 2>/dev/null | xargs grep -E -l "(github.com|githubusercontent.com)/$old_repo/$old_name" 2>/dev/null)
 return $files_updated
 }

View File

@@ -0,0 +1,44 @@
{
"name": "Databasus",
"slug": "databasus",
"categories": [
7
],
"date_created": "2026-02-17",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 80,
"documentation": "https://github.com/databasus/databasus",
"website": "https://github.com/databasus/databasus",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/databasus.webp",
"config_path": "/opt/databasus/.env",
"description": "Free, open source and self-hosted solution for automated PostgreSQL backups. With multiple storage options, notifications, scheduling, and a beautiful web interface for managing database backups across multiple PostgreSQL instances.",
"install_methods": [
{
"type": "default",
"script": "ct/databasus.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 8,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": "admin@localhost",
"password": "See /root/databasus.creds"
},
"notes": [
{
"text": "Supports PostgreSQL versions 12-18 with cloud and self-hosted instances",
"type": "info"
},
{
"text": "Features: Scheduled backups, multiple storage providers, notifications, encryption",
"type": "info"
}
]
}

View File

@@ -1,5 +1,5 @@
{ {
"generated": "2026-02-16T12:14:16Z", "generated": "2026-02-19T00:22:36Z",
"versions": [ "versions": [
{ {
"slug": "2fauth", "slug": "2fauth",
@@ -158,9 +158,9 @@
{ {
"slug": "bookstack", "slug": "bookstack",
"repo": "BookStackApp/BookStack", "repo": "BookStackApp/BookStack",
"version": "v25.12.3", "version": "v25.12.6",
"pinned": false, "pinned": false,
"date": "2026-01-29T15:29:25Z" "date": "2026-02-18T19:53:07Z"
}, },
{ {
"slug": "byparr", "slug": "byparr",
@@ -193,9 +193,9 @@
{ {
"slug": "cleanuparr", "slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr", "repo": "Cleanuparr/Cleanuparr",
"version": "v2.6.2", "version": "v2.6.3",
"pinned": false, "pinned": false,
"date": "2026-02-15T02:15:19Z" "date": "2026-02-16T22:41:25Z"
}, },
{ {
"slug": "cloudreve", "slug": "cloudreve",
@@ -207,9 +207,9 @@
{ {
"slug": "comfyui", "slug": "comfyui",
"repo": "comfyanonymous/ComfyUI", "repo": "comfyanonymous/ComfyUI",
"version": "v0.13.0", "version": "v0.14.2",
"pinned": false, "pinned": false,
"date": "2026-02-10T20:27:38Z" "date": "2026-02-18T06:12:02Z"
}, },
{ {
"slug": "commafeed", "slug": "commafeed",
@@ -221,9 +221,9 @@
{ {
"slug": "configarr", "slug": "configarr",
"repo": "raydak-labs/configarr", "repo": "raydak-labs/configarr",
"version": "v1.20.0", "version": "v1.21.0",
"pinned": false, "pinned": false,
"date": "2026-01-10T21:25:47Z" "date": "2026-02-17T22:59:07Z"
}, },
{ {
"slug": "convertx", "slug": "convertx",
@@ -253,6 +253,13 @@
"pinned": false, "pinned": false,
"date": "2026-02-11T15:39:05Z" "date": "2026-02-11T15:39:05Z"
}, },
{
"slug": "databasus",
"repo": "databasus/databasus",
"version": "v3.14.1",
"pinned": false,
"date": "2026-02-18T10:43:45Z"
},
{ {
"slug": "dawarich", "slug": "dawarich",
"repo": "Freika/dawarich", "repo": "Freika/dawarich",
@@ -263,9 +270,9 @@
{ {
"slug": "discopanel", "slug": "discopanel",
"repo": "nickheyer/discopanel", "repo": "nickheyer/discopanel",
"version": "v1.0.36", "version": "v1.0.37",
"pinned": false, "pinned": false,
"date": "2026-02-09T21:15:44Z" "date": "2026-02-18T08:53:43Z"
}, },
{ {
"slug": "dispatcharr", "slug": "dispatcharr",
@@ -410,9 +417,9 @@
{ {
"slug": "ghostfolio", "slug": "ghostfolio",
"repo": "ghostfolio/ghostfolio", "repo": "ghostfolio/ghostfolio",
"version": "2.239.0", "version": "2.240.0",
"pinned": false, "pinned": false,
"date": "2026-02-15T09:51:16Z" "date": "2026-02-18T20:08:56Z"
}, },
{ {
"slug": "gitea", "slug": "gitea",
@@ -550,16 +557,16 @@
{ {
"slug": "huntarr", "slug": "huntarr",
"repo": "plexguide/Huntarr.io", "repo": "plexguide/Huntarr.io",
"version": "9.2.4.1", "version": "9.3.5",
"pinned": false, "pinned": false,
"date": "2026-02-12T22:17:47Z" "date": "2026-02-18T16:25:07Z"
}, },
{ {
"slug": "immich-public-proxy", "slug": "immich-public-proxy",
"repo": "alangrainger/immich-public-proxy", "repo": "alangrainger/immich-public-proxy",
"version": "v1.15.2", "version": "v1.15.3",
"pinned": false, "pinned": false,
"date": "2026-02-16T08:54:59Z" "date": "2026-02-16T22:54:27Z"
}, },
{ {
"slug": "inspircd", "slug": "inspircd",
@@ -578,16 +585,16 @@
{ {
"slug": "invoiceninja", "slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja", "repo": "invoiceninja/invoiceninja",
"version": "v5.12.60", "version": "v5.12.64",
"pinned": false, "pinned": false,
"date": "2026-02-15T00:11:31Z" "date": "2026-02-18T07:59:44Z"
}, },
{ {
"slug": "jackett", "slug": "jackett",
"repo": "Jackett/Jackett", "repo": "Jackett/Jackett",
"version": "v0.24.1127", "version": "v0.24.1147",
"pinned": false, "pinned": false,
"date": "2026-02-16T08:43:41Z" "date": "2026-02-18T05:54:19Z"
}, },
{ {
"slug": "jellystat", "slug": "jellystat",
@@ -697,16 +704,16 @@
{ {
"slug": "leantime", "slug": "leantime",
"repo": "Leantime/leantime", "repo": "Leantime/leantime",
"version": "v3.6.2", "version": "v3.7.0",
"pinned": false, "pinned": false,
"date": "2026-01-29T16:37:00Z" "date": "2026-02-18T00:02:31Z"
}, },
{ {
"slug": "librenms", "slug": "librenms",
"repo": "librenms/librenms", "repo": "librenms/librenms",
"version": "26.1.1", "version": "26.2.0",
"pinned": false, "pinned": false,
"date": "2026-01-12T23:26:02Z" "date": "2026-02-16T12:15:13Z"
}, },
{ {
"slug": "librespeed-rust", "slug": "librespeed-rust",
@@ -739,9 +746,9 @@
{ {
"slug": "linkstack", "slug": "linkstack",
"repo": "linkstackorg/linkstack", "repo": "linkstackorg/linkstack",
"version": "v4.8.5", "version": "v4.8.6",
"pinned": false, "pinned": false,
"date": "2026-01-26T18:46:52Z" "date": "2026-02-17T16:53:47Z"
}, },
{ {
"slug": "linkwarden", "slug": "linkwarden",
@@ -781,9 +788,9 @@
{ {
"slug": "mail-archiver", "slug": "mail-archiver",
"repo": "s1t5/mail-archiver", "repo": "s1t5/mail-archiver",
"version": "2602.1", "version": "2602.2",
"pinned": false, "pinned": false,
"date": "2026-02-11T06:23:11Z" "date": "2026-02-17T09:46:52Z"
}, },
{ {
"slug": "managemydamnlife", "slug": "managemydamnlife",
@@ -802,9 +809,9 @@
{ {
"slug": "mealie", "slug": "mealie",
"repo": "mealie-recipes/mealie", "repo": "mealie-recipes/mealie",
"version": "v3.10.2", "version": "v3.11.0",
"pinned": false, "pinned": false,
"date": "2026-02-04T23:32:32Z" "date": "2026-02-17T04:13:35Z"
}, },
{ {
"slug": "mediamanager", "slug": "mediamanager",
@@ -886,9 +893,9 @@
{ {
"slug": "netbox", "slug": "netbox",
"repo": "netbox-community/netbox", "repo": "netbox-community/netbox",
"version": "v4.5.2", "version": "v4.5.3",
"pinned": false, "pinned": false,
"date": "2026-02-03T13:54:26Z" "date": "2026-02-17T15:39:18Z"
}, },
{ {
"slug": "nextcloud-exporter", "slug": "nextcloud-exporter",
@@ -956,9 +963,9 @@
{ {
"slug": "opencloud", "slug": "opencloud",
"repo": "opencloud-eu/opencloud", "repo": "opencloud-eu/opencloud",
"version": "v5.0.2", "version": "v5.1.0",
"pinned": true, "pinned": true,
"date": "2026-02-05T16:29:01Z" "date": "2026-02-16T15:04:28Z"
}, },
{ {
"slug": "opengist", "slug": "opengist",
@@ -1026,16 +1033,16 @@
{ {
"slug": "paperless-ngx", "slug": "paperless-ngx",
"repo": "paperless-ngx/paperless-ngx", "repo": "paperless-ngx/paperless-ngx",
"version": "v2.20.6", "version": "v2.20.7",
"pinned": false, "pinned": false,
"date": "2026-01-31T07:30:27Z" "date": "2026-02-16T16:52:23Z"
}, },
{ {
"slug": "patchmon", "slug": "patchmon",
"repo": "PatchMon/PatchMon", "repo": "PatchMon/PatchMon",
"version": "v1.4.0", "version": "v1.4.1",
"pinned": false, "pinned": false,
"date": "2026-02-13T10:39:03Z" "date": "2026-02-16T18:00:13Z"
}, },
{ {
"slug": "paymenter", "slug": "paymenter",
@@ -1054,9 +1061,9 @@
{ {
"slug": "pelican-panel", "slug": "pelican-panel",
"repo": "pelican-dev/panel", "repo": "pelican-dev/panel",
"version": "v1.0.0-beta32", "version": "v1.0.0-beta33",
"pinned": false, "pinned": false,
"date": "2026-02-09T22:15:44Z" "date": "2026-02-18T21:37:11Z"
}, },
{ {
"slug": "pelican-wings", "slug": "pelican-wings",
@@ -1089,9 +1096,9 @@
{ {
"slug": "planka", "slug": "planka",
"repo": "plankanban/planka", "repo": "plankanban/planka",
"version": "v2.0.0", "version": "v2.0.1",
"pinned": false, "pinned": false,
"date": "2026-02-11T13:50:10Z" "date": "2026-02-17T15:26:55Z"
}, },
{ {
"slug": "plant-it", "slug": "plant-it",
@@ -1103,9 +1110,9 @@
{ {
"slug": "pocketbase", "slug": "pocketbase",
"repo": "pocketbase/pocketbase", "repo": "pocketbase/pocketbase",
"version": "v0.36.3", "version": "v0.36.4",
"pinned": false, "pinned": false,
"date": "2026-02-13T18:38:58Z" "date": "2026-02-17T08:02:51Z"
}, },
{ {
"slug": "pocketid", "slug": "pocketid",
@@ -1180,9 +1187,9 @@
{ {
"slug": "pulse", "slug": "pulse",
"repo": "rcourtman/Pulse", "repo": "rcourtman/Pulse",
"version": "v5.1.9", "version": "v5.1.10",
"pinned": false, "pinned": false,
"date": "2026-02-11T15:34:40Z" "date": "2026-02-18T14:00:51Z"
}, },
{ {
"slug": "pve-scripts-local", "slug": "pve-scripts-local",
@@ -1236,9 +1243,9 @@
{ {
"slug": "rclone", "slug": "rclone",
"repo": "rclone/rclone", "repo": "rclone/rclone",
"version": "v1.73.0", "version": "v1.73.1",
"pinned": false, "pinned": false,
"date": "2026-01-30T22:12:03Z" "date": "2026-02-17T18:27:21Z"
}, },
{ {
"slug": "rdtclient", "slug": "rdtclient",
@@ -1306,9 +1313,9 @@
{ {
"slug": "scanopy", "slug": "scanopy",
"repo": "scanopy/scanopy", "repo": "scanopy/scanopy",
"version": "v0.14.4", "version": "v0.14.6",
"pinned": false, "pinned": false,
"date": "2026-02-10T03:57:28Z" "date": "2026-02-18T16:54:14Z"
}, },
{ {
"slug": "scraparr", "slug": "scraparr",
@@ -1334,9 +1341,9 @@
{ {
"slug": "semaphore", "slug": "semaphore",
"repo": "semaphoreui/semaphore", "repo": "semaphoreui/semaphore",
"version": "v2.17.2", "version": "v2.17.8",
"pinned": false, "pinned": false,
"date": "2026-02-16T10:27:40Z" "date": "2026-02-18T19:46:43Z"
}, },
{ {
"slug": "shelfmark", "slug": "shelfmark",
@@ -1362,9 +1369,9 @@
{ {
"slug": "slskd", "slug": "slskd",
"repo": "slskd/slskd", "repo": "slskd/slskd",
"version": "0.24.3", "version": "0.24.4",
"pinned": false, "pinned": false,
"date": "2026-01-15T14:40:15Z" "date": "2026-02-16T16:50:17Z"
}, },
{ {
"slug": "snipeit", "slug": "snipeit",
@@ -1411,9 +1418,9 @@
{ {
"slug": "stirling-pdf", "slug": "stirling-pdf",
"repo": "Stirling-Tools/Stirling-PDF", "repo": "Stirling-Tools/Stirling-PDF",
"version": "v2.4.6", "version": "v2.5.1",
"pinned": false, "pinned": false,
"date": "2026-02-12T00:01:19Z" "date": "2026-02-18T11:05:34Z"
}, },
{ {
"slug": "streamlink-webui", "slug": "streamlink-webui",
@@ -1544,9 +1551,9 @@
{ {
"slug": "tunarr", "slug": "tunarr",
"repo": "chrisbenincasa/tunarr", "repo": "chrisbenincasa/tunarr",
"version": "v1.1.12", "version": "v1.1.14",
"pinned": false, "pinned": false,
"date": "2026-02-03T20:19:00Z" "date": "2026-02-17T18:26:17Z"
}, },
{ {
"slug": "uhf", "slug": "uhf",
@@ -1600,9 +1607,9 @@
{ {
"slug": "victoriametrics", "slug": "victoriametrics",
"repo": "VictoriaMetrics/VictoriaMetrics", "repo": "VictoriaMetrics/VictoriaMetrics",
"version": "v1.135.0", "version": "v1.136.0",
"pinned": false, "pinned": false,
"date": "2026-02-02T14:20:15Z" "date": "2026-02-16T13:17:50Z"
}, },
{ {
"slug": "vikunja", "slug": "vikunja",
@@ -1628,9 +1635,9 @@
{ {
"slug": "wanderer", "slug": "wanderer",
"repo": "meilisearch/meilisearch", "repo": "meilisearch/meilisearch",
"version": "v1.35.0", "version": "v1.35.1",
"pinned": false, "pinned": false,
"date": "2026-02-02T09:57:03Z" "date": "2026-02-16T17:01:17Z"
}, },
{ {
"slug": "warracker", "slug": "warracker",
@@ -1719,9 +1726,9 @@
{ {
"slug": "yubal", "slug": "yubal",
"repo": "guillevc/yubal", "repo": "guillevc/yubal",
"version": "v0.6.0", "version": "v0.6.1",
"pinned": false, "pinned": false,
"date": "2026-02-15T17:47:56Z" "date": "2026-02-18T23:24:16Z"
}, },
{ {
"slug": "zigbee2mqtt", "slug": "zigbee2mqtt",

View File

@@ -0,0 +1,171 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
nginx \
valkey
msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
setup_go
NODE_VERSION="24" setup_nodejs
fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Building Databasus (Patience)"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod tidy
$STD go mod download
$STD go install github.com/swaggo/swag/cmd/swag@latest
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
mkdir -p /databasus-data/{pgdata,temp,backups,data,logs}
mkdir -p /opt/databasus/ui/build
mkdir -p /opt/databasus/migrations
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations/* /opt/databasus/migrations/
chown -R postgres:postgres /databasus-data
msg_ok "Built Databasus"
msg_info "Configuring Databasus"
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install goose for migrations
$STD go install github.com/pressly/goose/v3/cmd/goose@latest
ln -sf /root/go/bin/goose /usr/local/bin/goose
cat <<EOF >/opt/databasus/.env
# Environment
ENV_MODE=production
# Server
SERVER_PORT=4005
SERVER_HOST=0.0.0.0
# Database
DATABASE_DSN=host=localhost user=postgres password=postgres dbname=databasus port=5432 sslmode=disable
DATABASE_URL=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
# Migrations
GOOSE_DRIVER=postgres
GOOSE_DBSTRING=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
GOOSE_MIGRATION_DIR=/opt/databasus/migrations
# Valkey (Redis-compatible cache)
VALKEY_HOST=localhost
VALKEY_PORT=6379
# Security
JWT_SECRET=${JWT_SECRET}
ENCRYPTION_KEY=${ENCRYPTION_KEY}
# Paths
DATA_DIR=/databasus-data/data
BACKUP_DIR=/databasus-data/backups
LOG_DIR=/databasus-data/logs
EOF
chown postgres:postgres /opt/databasus/.env
chmod 600 /opt/databasus/.env
msg_ok "Configured Databasus"
msg_info "Configuring Valkey"
cat <<EOF >/etc/valkey/valkey.conf
port 6379
bind 127.0.0.1
protected-mode yes
save ""
maxmemory 256mb
maxmemory-policy allkeys-lru
EOF
systemctl enable -q --now valkey-server
systemctl restart valkey-server
msg_ok "Configured Valkey"
msg_info "Creating Database"
# Configure PostgreSQL to allow local password auth for databasus
PG_HBA="/etc/postgresql/17/main/pg_hba.conf"
if ! grep -q "databasus" "$PG_HBA"; then
sed -i '/^local\s*all\s*all/i local databasus postgres trust' "$PG_HBA"
sed -i '/^host\s*all\s*all\s*127/i host databasus postgres 127.0.0.1/32 trust' "$PG_HBA"
systemctl reload postgresql
fi
$STD sudo -u postgres psql -c "CREATE DATABASE databasus;" 2>/dev/null || true
$STD sudo -u postgres psql -c "ALTER USER postgres WITH SUPERUSER CREATEROLE CREATEDB;" 2>/dev/null || true
msg_ok "Created Database"
msg_info "Creating Databasus Service"
cat <<EOF >/etc/systemd/system/databasus.service
[Unit]
Description=Databasus - Database Backup Management
After=network.target postgresql.service valkey.service
Requires=postgresql.service valkey.service
[Service]
Type=simple
WorkingDirectory=/opt/databasus
EnvironmentFile=/opt/databasus/.env
ExecStart=/opt/databasus/databasus
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$STD systemctl daemon-reload
$STD systemctl enable -q --now databasus
msg_ok "Created Databasus Service"
msg_info "Configuring Nginx"
cat <<EOF >/etc/nginx/sites-available/databasus
server {
listen 80;
server_name _;
location / {
proxy_pass http://127.0.0.1:4005;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_cache_bypass \$http_upgrade;
proxy_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
EOF
ln -sf /etc/nginx/sites-available/databasus /etc/nginx/sites-enabled/databasus
rm -f /etc/nginx/sites-enabled/default
$STD nginx -t
$STD systemctl enable -q --now nginx
$STD systemctl reload nginx
msg_ok "Configured Nginx"
motd_ssh
customize
cleanup_lxc

View File

@@ -64,7 +64,7 @@ $STD sudo -u cool coolconfig set-admin-password --user=admin --password="$COOLPA
echo "$COOLPASS" >~/.coolpass echo "$COOLPASS" >~/.coolpass
msg_ok "Installed Collabora Online" msg_ok "Installed Collabora Online"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.0.2" "/usr/bin" "opencloud-*-linux-amd64" fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.1.0" "/usr/bin" "opencloud-*-linux-amd64"
msg_info "Configuring OpenCloud" msg_info "Configuring OpenCloud"
DATA_DIR="/var/lib/opencloud" DATA_DIR="/var/lib/opencloud"

View File

@@ -11,9 +11,35 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
 load_functions
 catch_errors
+# Persist diagnostics setting inside container (exported from build.func)
+# so addon scripts running later can find the user's choice
+if [[ ! -f /usr/local/community-scripts/diagnostics ]]; then
+mkdir -p /usr/local/community-scripts
+echo "DIAGNOSTICS=${DIAGNOSTICS:-no}" >/usr/local/community-scripts/diagnostics
+fi
 # Get LXC IP address (must be called INSIDE container, after network is up)
 get_lxc_ip
+# ------------------------------------------------------------------------------
+# post_progress_to_api()
+#
+# - Lightweight progress ping from inside the container
+# - Updates the existing telemetry record status from "installing" to "configuring"
+# - Signals that the installation is actively progressing (not stuck)
+# - Fire-and-forget: never blocks or fails the script
+# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
+# ------------------------------------------------------------------------------
+post_progress_to_api() {
+command -v curl &>/dev/null || return 0
+[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
+[[ -z "${RANDOM_UUID:-}" ]] && return 0
+curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
+-H "Content-Type: application/json" \
+-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
+}
 # This function enables IPv6 if it's not disabled and sets verbose mode
 verb_ip6() {
 set_std_mode # Set STD mode based on VERBOSE
@@ -53,6 +79,7 @@ setting_up_container() {
 fi
 msg_ok "Set up Container OS"
 msg_ok "Network Connected: ${BL}$(ip addr show | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | tail -n1)${CL}"
+post_progress_to_api
 }
 # This function checks the network connection by pinging a known IP address and prompts the user to continue if the internet is not connected
@@ -85,8 +112,18 @@ network_check() {
 update_os() {
 msg_info "Updating Container OS"
 $STD apk -U upgrade
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
+local tools_content
+tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
+msg_error "Failed to download tools.func"
+exit 6
+}
+source /dev/stdin <<<"$tools_content"
+if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
+msg_error "tools.func loaded but incomplete — missing expected functions"
+exit 6
+fi
 msg_ok "Updated Container OS"
+post_progress_to_api
 }
 # This function modifies the message of the day (motd) and SSH settings
@@ -111,6 +148,7 @@ motd_ssh() {
 # Start the sshd service
 $STD /etc/init.d/sshd start
 fi
+post_progress_to_api
 }
 # Validate Timezone for some LXC's
@@ -147,5 +185,5 @@ EOF
 echo "bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/${app}.sh)\"" >/usr/bin/update
 chmod +x /usr/bin/update
+post_progress_to_api
 }

View File

@@ -117,16 +117,17 @@ detect_repo_source
# - Canonical source of truth for ALL exit code mappings # - Canonical source of truth for ALL exit code mappings
# - Used by both api.func (telemetry) and error_handler.func (error display) # - Used by both api.func (telemetry) and error_handler.func (error display)
# - Supports: # - Supports:
# * Generic/Shell errors (1, 2, 124, 126-130, 134, 137, 139, 141, 143) # * Generic/Shell errors (1-3, 10, 124-132, 134, 137, 139, 141, 143-146)
# * curl/wget errors (6, 7, 22, 28, 35) # * curl/wget errors (4-8, 16, 18, 22-28, 30, 32-36, 39, 44-48, 51-52, 55-57, 59, 61, 63, 75, 78-79, 92, 95)
# * Package manager errors (APT, DPKG: 100-102, 255) # * Package manager errors (APT, DPKG: 100-102, 255)
# * BSD sysexits (64-78)
# * Systemd/Service errors (150-154) # * Systemd/Service errors (150-154)
# * Python/pip/uv errors (160-162) # * Python/pip/uv errors (160-162)
# * PostgreSQL errors (170-173) # * PostgreSQL errors (170-173)
# * MySQL/MariaDB errors (180-183) # * MySQL/MariaDB errors (180-183)
# * MongoDB errors (190-193) # * MongoDB errors (190-193)
# * Proxmox custom codes (200-231) # * Proxmox custom codes (200-231)
#   * Node.js/npm errors (239, 243, 245-249)
# - Returns description string for given exit code # - Returns description string for given exit code
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
explain_exit_code() { explain_exit_code() {
@@ -135,30 +136,87 @@ explain_exit_code() {
# --- Generic / Shell --- # --- Generic / Shell ---
1) echo "General error / Operation not permitted" ;; 1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;; 2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
# --- curl / wget errors (commonly seen in downloads) --- # --- curl / wget errors (commonly seen in downloads) ---
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;; 6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;; 7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;; 22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;; 28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;; 35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
# --- Package manager / APT / DPKG --- # --- Package manager / APT / DPKG ---
100) echo "APT: Package manager error (broken packages / dependency problems)" ;; 100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;; 101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;; 102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
# --- BSD sysexits.h (64-78) ---
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
# --- Common shell/system errors --- # --- Common shell/system errors ---
124) echo "Command timed out (timeout command)" ;; 124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;; 126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;; 127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;; 128) echo "Invalid argument to exit" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;; 130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;; 134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;; 137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;; 139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;; 141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;; 143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
# --- Systemd / Service errors (150-154) --- # --- Systemd / Service errors (150-154) ---
150) echo "Systemd: Service failed to start" ;; 150) echo "Systemd: Service failed to start" ;;
@@ -166,7 +224,6 @@ explain_exit_code() {
152) echo "Permission denied (EACCES)" ;; 152) echo "Permission denied (EACCES)" ;;
153) echo "Build/compile failed (make/gcc/cmake)" ;; 153) echo "Build/compile failed (make/gcc/cmake)" ;;
154) echo "Node.js: Native addon build failed (node-gyp)" ;; 154) echo "Node.js: Native addon build failed (node-gyp)" ;;
# --- Python / pip / uv (160-162) --- # --- Python / pip / uv (160-162) ---
160) echo "Python: Virtualenv / uv environment missing or broken" ;; 160) echo "Python: Virtualenv / uv environment missing or broken" ;;
161) echo "Python: Dependency resolution failed" ;; 161) echo "Python: Dependency resolution failed" ;;
@@ -217,7 +274,8 @@ explain_exit_code() {
225) echo "Proxmox: No template available for OS/Version" ;; 225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;; 231) echo "Proxmox: LXC stack upgrade failed" ;;
# --- Node.js / npm / pnpm / yarn (239-249) ---
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;; 243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;; 245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;; 246) echo "Node.js: Internal JavaScript Parse Error" ;;
@@ -494,6 +552,7 @@ post_to_api() {
cat <<EOF cat <<EOF
{ {
"random_id": "${RANDOM_UUID}", "random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "lxc", "type": "lxc",
"nsapp": "${NSAPP:-unknown}", "nsapp": "${NSAPP:-unknown}",
"status": "installing", "status": "installing",
@@ -598,6 +657,7 @@ post_to_api_vm() {
cat <<EOF cat <<EOF
{ {
"random_id": "${RANDOM_UUID}", "random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "vm", "type": "vm",
"nsapp": "${NSAPP:-unknown}", "nsapp": "${NSAPP:-unknown}",
"status": "installing", "status": "installing",
@@ -624,6 +684,31 @@ EOF
curl -fsS -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \ curl -fsS -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \ -H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" &>/dev/null || true -d "$JSON_PAYLOAD" &>/dev/null || true
POST_TO_API_DONE=true
}
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from host or container
# - Updates the existing telemetry record status to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# - Can be called multiple times safely
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
local app_name="${NSAPP:-${app:-unknown}}"
local telemetry_type="${TELEMETRY_TYPE:-lxc}"
curl -fsS -m 5 -X POST "${TELEMETRY_URL:-https://telemetry.community-scripts.org/telemetry}" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"${telemetry_type}\",\"nsapp\":\"${app_name}\",\"status\":\"configuring\"}" &>/dev/null || true
} }
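Elsewhere in this changeset the ping is called defensively, since api.func may not be loaded in every context; a short sketch of that guard:

# Safe call pattern: a no-op if the function was never sourced, and the function
# itself returns early unless DIAGNOSTICS=yes and RANDOM_UUID is set.
if declare -f post_progress_to_api >/dev/null 2>&1; then
  post_progress_to_api
fi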
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -728,6 +813,7 @@ post_update_to_api() {
cat <<EOF cat <<EOF
{ {
"random_id": "${RANDOM_UUID}", "random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}", "type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}", "nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}", "status": "${pb_status}",
@@ -770,6 +856,7 @@ EOF
cat <<EOF cat <<EOF
{ {
"random_id": "${RANDOM_UUID}", "random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}", "type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}", "nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}", "status": "${pb_status}",
@@ -812,6 +899,7 @@ EOF
cat <<EOF cat <<EOF
{ {
"random_id": "${RANDOM_UUID}", "random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}", "type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}", "nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}", "status": "${pb_status}",
@@ -848,6 +936,9 @@ categorize_error() {
# Network errors (curl/wget) # Network errors (curl/wget)
6 | 7 | 22 | 35) echo "network" ;; 6 | 7 | 22 | 35) echo "network" ;;
# Docker / Privileged mode required
10) echo "config" ;;
# Timeout errors # Timeout errors
28 | 124 | 211) echo "timeout" ;; 28 | 124 | 211) echo "timeout" ;;
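A small sketch of how explain_exit_code() and categorize_error() can be combined by a caller, assuming both are sourced from api.func; the wrapper name describe_failure is illustrative, and the bracketed category depends on the mapping, which is not fully shown here:

describe_failure() {
  local ec="$1"
  echo "exit ${ec} [$(categorize_error "$ec")]: $(explain_exit_code "$ec")"
}
# describe_failure 137  ->  exit 137 [<category>]: Killed (SIGKILL / Out of memory?)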
@@ -922,6 +1013,63 @@ get_install_duration() {
echo $((now - INSTALL_START_TIME)) echo $((now - INSTALL_START_TIME))
} }
# ------------------------------------------------------------------------------
# _telemetry_report_exit()
#
# - Internal handler called by EXIT trap set in init_tool_telemetry()
# - Determines success/failure from exit code and reports via appropriate API
# - Arguments:
# * $1: exit_code from the script
# ------------------------------------------------------------------------------
_telemetry_report_exit() {
local ec="${1:-0}"
local status="success"
[[ "$ec" -ne 0 ]] && status="failed"
# Lazy name resolution: use explicit name, fall back to $APP, then "unknown"
local name="${TELEMETRY_TOOL_NAME:-${APP:-unknown}}"
if [[ "${TELEMETRY_TOOL_TYPE:-pve}" == "addon" ]]; then
post_addon_to_api "$name" "$status" "$ec"
else
post_tool_to_api "$name" "$status" "$ec"
fi
}
# ------------------------------------------------------------------------------
# init_tool_telemetry()
#
# - One-line telemetry setup for tools/addon scripts
# - Reads DIAGNOSTICS from /usr/local/community-scripts/diagnostics
# (persisted on PVE host during first build, and inside containers by install.func)
# - Starts install timer for duration tracking
# - Sets EXIT trap to automatically report success/failure on script exit
# - Arguments:
# * $1: tool_name (optional, falls back to $APP at exit time)
# * $2: type ("pve" for PVE host scripts, "addon" for container addons)
# - Usage:
# source <(curl -fsSL .../misc/api.func) 2>/dev/null || true
# init_tool_telemetry "post-pve-install" "pve"
# init_tool_telemetry "" "addon" # uses $APP at exit time
# ------------------------------------------------------------------------------
init_tool_telemetry() {
local name="${1:-}"
local type="${2:-pve}"
[[ -n "$name" ]] && TELEMETRY_TOOL_NAME="$name"
TELEMETRY_TOOL_TYPE="$type"
# Read diagnostics opt-in/opt-out
if [[ -f /usr/local/community-scripts/diagnostics ]]; then
DIAGNOSTICS=$(grep -i "^DIAGNOSTICS=" /usr/local/community-scripts/diagnostics 2>/dev/null | awk -F'=' '{print $2}') || true
fi
start_install_timer
# EXIT trap: automatically report telemetry when script ends
trap '_telemetry_report_exit "$?"' EXIT
}
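Putting the usage comment above into a complete, hypothetical tool script skeleton:

#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
init_tool_telemetry "post-pve-install" "pve"   # or: init_tool_telemetry "" "addon"

# ... tool logic here ...
# On any exit, the EXIT trap runs _telemetry_report_exit "$?", which posts
# success/failed via post_tool_to_api or post_addon_to_api.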
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# post_tool_to_api() # post_tool_to_api()
# #
@@ -969,7 +1117,8 @@ post_tool_to_api() {
cat <<EOF cat <<EOF
{ {
"random_id": "${uuid}", "random_id": "${uuid}",
"type": "tool", "execution_id": "${EXECUTION_ID:-${uuid}}",
"type": "pve",
"nsapp": "${tool_name}", "nsapp": "${tool_name}",
"status": "${status}", "status": "${status}",
"exit_code": ${exit_code}, "exit_code": ${exit_code},
@@ -1036,6 +1185,7 @@ post_addon_to_api() {
cat <<EOF cat <<EOF
{ {
"random_id": "${uuid}", "random_id": "${uuid}",
"execution_id": "${EXECUTION_ID:-${uuid}}",
"type": "addon", "type": "addon",
"nsapp": "${addon_name}", "nsapp": "${addon_name}",
"status": "${status}", "status": "${status}",
@@ -1127,6 +1277,7 @@ post_update_to_api_extended() {
cat <<EOF cat <<EOF
{ {
"random_id": "${RANDOM_UUID}", "random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}", "type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}", "nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}", "status": "${pb_status}",

View File

@@ -42,9 +42,10 @@ variables() {
var_install="${NSAPP}-install"                         # sets the var_install variable by appending "-install" to the value of NSAPP.
INTEGER='^[0-9]+([.][0-9]+)?$'                         # it defines the INTEGER regular expression pattern.
PVEHOST_NAME=$(hostname)                               # gets the Proxmox Hostname and sets it to Uppercase
DIAGNOSTICS="no"                                       # Safe default: no telemetry until user consents via diagnostics_check()
METHOD="default"                                       # sets the METHOD variable to "default", used for the API call.
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"      # generates a random UUID and sets it to the RANDOM_UUID variable.
EXECUTION_ID="${RANDOM_UUID}"                          # Unique execution ID for telemetry record identification (unique-indexed in PocketBase)
SESSION_ID="${RANDOM_UUID:0:8}"                        # Short session ID (first 8 chars of UUID) for log files
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log"          # Host-side container creation log
combined_log="/tmp/install-${SESSION_ID}-combined.log" # Combined log (build + install) for failed installations
@@ -297,7 +298,7 @@ validate_container_id() {
# Falls back gracefully if pvesh unavailable or returns empty # Falls back gracefully if pvesh unavailable or returns empty
if command -v pvesh &>/dev/null; then if command -v pvesh &>/dev/null; then
local cluster_ids local cluster_ids
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null | cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true) grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true)
if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then
return 1 return 1
@@ -2787,93 +2788,85 @@ Advanced:
# diagnostics_check()
#
# - Ensures diagnostics config file exists at /usr/local/community-scripts/diagnostics
# - Asks user whether to send anonymous diagnostic data (first run only)
# - Saves DIAGNOSTICS=yes/no in the config file
# - Reads current diagnostics setting from existing file
# - Sets global DIAGNOSTICS variable for API telemetry opt-in/out
# ------------------------------------------------------------------------------
diagnostics_check() {
  local config_dir="/usr/local/community-scripts"
  local config_file="${config_dir}/diagnostics"
  mkdir -p "$config_dir"

  if [[ -f "$config_file" ]]; then
    DIAGNOSTICS=$(awk -F '=' '/^DIAGNOSTICS/ {print $2}' "$config_file") || true
    DIAGNOSTICS="${DIAGNOSTICS:-no}"
    return
  fi

  local result
  result=$(whiptail --backtitle "Proxmox VE Helper Scripts" \
    --title "TELEMETRY & DIAGNOSTICS" \
    --ok-button "Confirm" --cancel-button "Exit" \
    --radiolist "\nHelp improve Community-Scripts by sharing anonymous data.\n\nWhat we collect:\n - Container resources (CPU, RAM, disk), OS & PVE version\n - Application name, install method and status\n\nWhat we DON'T collect:\n - No IP addresses, hostnames, or personal data\n\nYou can change this anytime in the Settings menu.\nPrivacy: https://github.com/community-scripts/telemetry-service/blob/main/docs/PRIVACY.md\n\nUse SPACE to select, ENTER to confirm." 22 76 2 \
    "yes" "Yes, share anonymous data" OFF \
    "no" "No, opt out" OFF \
    3>&1 1>&2 2>&3) || result="no"

  DIAGNOSTICS="${result:-no}"

  cat <<EOF >"$config_file"
DIAGNOSTICS=${DIAGNOSTICS}

# Community-Scripts Telemetry Configuration
# https://telemetry.community-scripts.org
#
# This file stores your telemetry preference.
# Set DIAGNOSTICS=yes to share anonymous installation data.
# Set DIAGNOSTICS=no to disable telemetry.
#
# You can also change this via the Settings menu during installation.
#
# Data collected (when enabled):
#   disk_size, core_count, ram_size, os_type, os_version,
#   nsapp, method, pve_version, status, exit_code
#
# No personal data (IPs, hostnames, passwords) is ever collected.
# Privacy: https://github.com/community-scripts/telemetry-service/blob/main/docs/PRIVACY.md
EOF
}

diagnostics_menu() {
  local current="${DIAGNOSTICS:-no}"
  local status_text="DISABLED"
  [[ "$current" == "yes" ]] && status_text="ENABLED"

  local dialog_text=(
    "Telemetry is currently: ${status_text}\n\n"
    "Anonymous data helps us improve scripts and track issues.\n"
    "No personal data is ever collected.\n\n"
    "More info: https://telemetry.community-scripts.org\n\n"
    "Do you want to ${current:+change this setting}?"
  )

  if [[ "$current" == "yes" ]]; then
    if whiptail --backtitle "Proxmox VE Helper Scripts" \
      --title "TELEMETRY SETTINGS" \
      --yesno "${dialog_text[*]}" 14 64 \
      --yes-button "Disable" --no-button "Keep enabled"; then
      DIAGNOSTICS="no"
      sed -i 's/^DIAGNOSTICS=.*/DIAGNOSTICS=no/' /usr/local/community-scripts/diagnostics
      whiptail --msgbox "Telemetry disabled.\n\nNote: Existing containers keep their current setting.\nNew containers will inherit this choice." 10 58
    fi
  else
    if whiptail --backtitle "Proxmox VE Helper Scripts" \
      --title "TELEMETRY SETTINGS" \
      --yesno "${dialog_text[*]}" 14 64 \
      --yes-button "Enable" --no-button "Keep disabled"; then
      DIAGNOSTICS="yes"
      sed -i 's/^DIAGNOSTICS=.*/DIAGNOSTICS=yes/' /usr/local/community-scripts/diagnostics
      whiptail --msgbox "Telemetry enabled.\n\nNote: Existing containers keep their current setting.\nNew containers will inherit this choice." 10 58
    fi
  fi
}
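For reference, a minimal sketch of how other scripts in this changeset read the persisted preference back (api.func greps the file, diagnostics_check uses awk; both reduce to the same idea, and the function name here is illustrative):

read_diagnostics_pref() {
  local f="/usr/local/community-scripts/diagnostics"
  local value=""
  [[ -f "$f" ]] && value=$(awk -F '=' '/^DIAGNOSTICS/ {print $2; exit}' "$f")
  echo "${value:-no}"
}
DIAGNOSTICS="$(read_diagnostics_pref)"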
@@ -3427,6 +3420,7 @@ start() {
VERBOSE="no" VERBOSE="no"
set_std_mode set_std_mode
ensure_profile_loaded ensure_profile_loaded
get_lxc_ip
update_script update_script
update_motd_ip update_motd_ip
cleanup_lxc cleanup_lxc
@@ -3454,6 +3448,7 @@ start() {
;; ;;
esac esac
ensure_profile_loaded ensure_profile_loaded
get_lxc_ip
update_script update_script
update_motd_ip update_motd_ip
cleanup_lxc cleanup_lxc
@@ -3559,6 +3554,7 @@ build_container() {
# Core exports for install.func # Core exports for install.func
export DIAGNOSTICS="$DIAGNOSTICS" export DIAGNOSTICS="$DIAGNOSTICS"
export RANDOM_UUID="$RANDOM_UUID" export RANDOM_UUID="$RANDOM_UUID"
export EXECUTION_ID="$EXECUTION_ID"
export SESSION_ID="$SESSION_ID" export SESSION_ID="$SESSION_ID"
export CACHER="$APT_CACHER" export CACHER="$APT_CACHER"
export CACHER_IP="$APT_CACHER_IP" export CACHER_IP="$APT_CACHER_IP"
@@ -3664,9 +3660,6 @@ $PCT_OPTIONS_STRING"
exit 214 exit 214
fi fi
msg_ok "Storage space validated" msg_ok "Storage space validated"
# Report installation start to API (early - captures failed installs too)
post_to_api
fi fi
create_lxc_container || exit $? create_lxc_container || exit $?
@@ -3912,6 +3905,7 @@ EOF
for i in {1..10}; do for i in {1..10}; do
if pct status "$CTID" | grep -q "status: running"; then if pct status "$CTID" | grep -q "status: running"; then
msg_ok "Started LXC Container" msg_ok "Started LXC Container"
post_progress_to_api # Signal container is running
break break
fi fi
sleep 1 sleep 1
@@ -3966,6 +3960,7 @@ EOF
echo -e "${YW}Container may have limited internet access. Installation will continue...${CL}" echo -e "${YW}Container may have limited internet access. Installation will continue...${CL}"
else else
msg_ok "Network in LXC is reachable (ping)" msg_ok "Network in LXC is reachable (ping)"
post_progress_to_api # Signal network is ready
fi fi
fi fi
# Function to get correct GID inside container # Function to get correct GID inside container
@@ -4037,6 +4032,14 @@ EOF'
fi fi
msg_ok "Customized LXC Container" msg_ok "Customized LXC Container"
post_progress_to_api # Signal ready for app installation
# Optional DNS override for retry scenarios (inside LXC, never on host)
if [[ "${DNS_RETRY_OVERRIDE:-false}" == "true" ]]; then
msg_info "Applying DNS retry override in LXC (8.8.8.8, 1.1.1.1)"
pct exec "$CTID" -- bash -c "printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' >/etc/resolv.conf" >/dev/null 2>&1 || true
msg_ok "DNS override applied in LXC"
fi
# Install SSH keys # Install SSH keys
install_ssh_keys_into_ct install_ssh_keys_into_ct
@@ -4103,9 +4106,9 @@ EOF'
build_log_copied=true build_log_copied=true
fi fi
# Copy and append INSTALL_LOG from container (with timeout to prevent hangs)
local temp_install_log="/tmp/.install-temp-${SESSION_ID}.log"
if timeout 8 pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_install_log" 2>/dev/null; then
{ {
echo "================================================================================" echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)" echo "PHASE 2: APPLICATION INSTALLATION (Container)"
@@ -4150,32 +4153,322 @@ EOF'
# Prompt user for cleanup with 60s timeout # Prompt user for cleanup with 60s timeout
echo "" echo ""
echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
# Detect error type for smart recovery options
local is_oom=false
local is_network_issue=false
local is_apt_issue=false
local is_cmd_not_found=false
local error_explanation=""
if declare -f explain_exit_code >/dev/null 2>&1; then
error_explanation="$(explain_exit_code "$install_exit_code")"
fi
# OOM detection: exit codes 134 (SIGABRT/heap), 137 (SIGKILL/OOM), 243 (Node.js heap)
if [[ $install_exit_code -eq 134 || $install_exit_code -eq 137 || $install_exit_code -eq 243 ]]; then
is_oom=true
fi
# APT/DPKG detection: exit codes 100-102 (APT), 255 (DPKG with log evidence)
case "$install_exit_code" in
100 | 101 | 102) is_apt_issue=true ;;
255)
if [[ -f "$combined_log" ]] && grep -qiE 'dpkg|apt-get|apt\.conf|broken packages|unmet dependencies|E: Sub-process|E: Failed' "$combined_log"; then
is_apt_issue=true
fi
;;
esac
# Command not found detection
if [[ $install_exit_code -eq 127 ]]; then
is_cmd_not_found=true
fi
# Network-related detection (curl/apt/git fetch failures and transient network issues)
case "$install_exit_code" in
6 | 7 | 22 | 28 | 35 | 52 | 56 | 57 | 75 | 78) is_network_issue=true ;;
100)
# APT can fail due to network (Failed to fetch)
if [[ -f "$combined_log" ]] && grep -qiE 'Failed to fetch|Could not resolve|Connection failed|Network is unreachable|Temporary failure resolving' "$combined_log"; then
is_network_issue=true
fi
;;
128)
if [[ -f "$combined_log" ]] && grep -qiE 'RPC failed|early EOF|fetch-pack|HTTP/2 stream|Could not resolve host|Temporary failure resolving|Failed to fetch|Connection reset|Network is unreachable' "$combined_log"; then
is_network_issue=true
fi
;;
esac
# Exit 1 subclassification: analyze logs to identify actual root cause
# Many exit 1 errors are actually APT, OOM, network, or command-not-found issues
if [[ $install_exit_code -eq 1 && -f "$combined_log" ]]; then
if grep -qiE 'E: Unable to|E: Package|E: Failed to fetch|dpkg.*error|broken packages|unmet dependencies|dpkg --configure -a' "$combined_log"; then
is_apt_issue=true
fi
if grep -qiE 'Cannot allocate memory|Out of memory|oom-killer|Killed process|JavaScript heap' "$combined_log"; then
is_oom=true
fi
if grep -qiE 'Could not resolve|DNS|Connection refused|Network is unreachable|No route to host|Temporary failure resolving|Failed to fetch' "$combined_log"; then
is_network_issue=true
fi
if grep -qiE ': command not found|No such file or directory.*/s?bin/' "$combined_log"; then
is_cmd_not_found=true
fi
fi
# Show error explanation if available
if [[ -n "$error_explanation" ]]; then
echo -e "${TAB}${RD}Error: ${error_explanation}${CL}"
echo ""
fi
# Show specific hints for known error types
if [[ $install_exit_code -eq 10 ]]; then
echo -e "${TAB}${INFO} This error usually means the container needs ${GN}privileged${CL} mode or Docker/nesting support."
echo -e "${TAB}${INFO} Recreate with: Advanced Install → Container Type: ${GN}Privileged${CL}"
echo ""
fi
if [[ $install_exit_code -eq 125 || $install_exit_code -eq 126 ]]; then
echo -e "${TAB}${INFO} The command exists but cannot be executed. This may be a ${GN}permission${CL} issue."
echo -e "${TAB}${INFO} If using Docker, ensure the container is ${GN}privileged${CL} or has correct permissions."
echo ""
fi
if [[ "$is_cmd_not_found" == true ]]; then
local missing_cmd=""
if [[ -f "$combined_log" ]]; then
missing_cmd=$(grep -oiE '[a-zA-Z0-9_.-]+: command not found' "$combined_log" | tail -1 | sed 's/: command not found//')
fi
if [[ -n "$missing_cmd" ]]; then
echo -e "${TAB}${INFO} Missing command: ${GN}${missing_cmd}${CL}"
fi
echo ""
fi
# Build recovery menu based on error type
echo -e "${YW}What would you like to do?${CL}"
echo ""
echo -e " ${GN}1)${CL} Remove container and exit"
echo -e " ${GN}2)${CL} Keep container for debugging"
echo -e " ${GN}3)${CL} Retry with verbose mode (full rebuild)"
local next_option=4
local APT_OPTION="" OOM_OPTION="" DNS_OPTION=""
if [[ "$is_apt_issue" == true ]]; then
if [[ "$var_os" == "alpine" ]]; then
echo -e " ${GN}${next_option})${CL} Repair APK state and re-run install (in-place)"
else
echo -e " ${GN}${next_option})${CL} Repair APT/DPKG state and re-run install (in-place)"
fi
APT_OPTION=$next_option
next_option=$((next_option + 1))
fi
if [[ "$is_oom" == true ]]; then
local recovery_attempt="${RECOVERY_ATTEMPT:-0}"
if [[ $recovery_attempt -lt 2 ]]; then
local new_ram=$((RAM_SIZE * 2))
local new_cpu=$((CORE_COUNT * 2))
echo -e " ${GN}${next_option})${CL} Retry with more resources (RAM: ${RAM_SIZE}${new_ram} MiB, CPU: ${CORE_COUNT}${new_cpu} cores)"
OOM_OPTION=$next_option
next_option=$((next_option + 1))
else
echo -e " ${DGN}-)${CL} ${DGN}OOM retry exhausted (already retried ${recovery_attempt}x)${CL}"
fi
fi
if [[ "$is_network_issue" == true ]]; then
echo -e " ${GN}${next_option})${CL} Retry with DNS override in LXC (8.8.8.8 / 1.1.1.1)"
DNS_OPTION=$next_option
next_option=$((next_option + 1))
fi
local max_option=$((next_option - 1))
echo ""
echo -en "${YW}Select option [1-${max_option}] (default: 1, auto-remove in 60s): ${CL}"
if read -t 60 -r response; then
  case "${response:-1}" in
  1)
    # Remove container
    echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID}${CL}"
    pct stop "$CTID" &>/dev/null || true
    pct destroy "$CTID" &>/dev/null || true
    echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
    ;;
  2)
    echo -e "\n${TAB}${YW}Container ${CTID} kept for debugging${CL}"
    # Dev mode: Setup MOTD/SSH for debugging access to broken container
    if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then
      echo -e "${TAB}${HOLD}${DGN}Setting up MOTD and SSH for debugging...${CL}"
      if pct exec "$CTID" -- bash -c "
        source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/install.func)
        declare -f motd_ssh >/dev/null 2>&1 && motd_ssh || true
      " >/dev/null 2>&1; then
        local ct_ip=$(pct exec "$CTID" ip a s dev eth0 2>/dev/null | awk '/inet / {print $2}' | cut -d/ -f1)
        echo -e "${BFR}${CM}${GN}MOTD/SSH ready - SSH into container: ssh root@${ct_ip}${CL}"
      fi
    fi
    exit $install_exit_code
;;
3)
# Retry with verbose mode (full rebuild)
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
# Get new container ID
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export VERBOSE="yes"
export var_verbose="yes"
# Show rebuild summary
echo -e "${YW}Rebuilding with preserved settings:${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
# Re-run build_container
build_container
return $?
;;
*)
# Handle dynamic smart recovery options via named option variables
local handled=false
if [[ -n "${APT_OPTION}" && "${response}" == "${APT_OPTION}" ]]; then
# Package manager in-place repair: fix broken state and re-run install script
handled=true
if [[ "$var_os" == "alpine" ]]; then
echo -e "\n${TAB}${HOLD}${YW}Repairing APK state in container ${CTID}...${CL}"
pct exec "$CTID" -- ash -c "
apk fix 2>/dev/null || true
apk cache clean 2>/dev/null || true
apk update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APK state repaired in container ${CTID}${CL}"
else
echo -e "\n${TAB}${HOLD}${YW}Repairing APT/DPKG state in container ${CTID}...${CL}"
pct exec "$CTID" -- bash -c "
DEBIAN_FRONTEND=noninteractive dpkg --configure -a 2>/dev/null || true
apt-get -f install -y 2>/dev/null || true
apt-get clean 2>/dev/null
apt-get update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APT/DPKG state repaired in container ${CTID}${CL}"
fi
echo ""
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Re-running installation in existing container ${CTID}:${CL}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Re-running installation script..."
# Re-run install script in existing container (don't destroy/recreate)
set +Eeuo pipefail
trap - ERR
lxc-attach -n "$CTID" -- bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/install/${var_install}.sh)"
local apt_retry_exit=$?
set -Eeuo pipefail
trap 'error_handler' ERR
# Check for error flag from retry
local apt_retry_code=0
if [[ -n "${SESSION_ID:-}" ]]; then
local retry_error_flag="/root/.install-${SESSION_ID}.failed"
if pct exec "$CTID" -- test -f "$retry_error_flag" 2>/dev/null; then
apt_retry_code=$(pct exec "$CTID" -- cat "$retry_error_flag" 2>/dev/null || echo "1")
pct exec "$CTID" -- rm -f "$retry_error_flag" 2>/dev/null || true
fi
fi
if [[ $apt_retry_code -eq 0 && $apt_retry_exit -ne 0 ]]; then
apt_retry_code=$apt_retry_exit
fi
if [[ $apt_retry_code -eq 0 ]]; then
msg_ok "Installation completed successfully after APT repair!"
post_update_to_api "done" "0" "force"
return 0
else
msg_error "Installation still failed after APT repair (exit code: ${apt_retry_code})"
install_exit_code=$apt_retry_code
fi
fi
if [[ -n "${OOM_OPTION}" && "${response}" == "${OOM_OPTION}" ]]; then
# Retry with doubled resources
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with more resources...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
local old_ram="$RAM_SIZE"
local old_cpu="$CORE_COUNT"
export CTID=$(get_valid_container_id "$CTID")
export RAM_SIZE=$((RAM_SIZE * 2))
export CORE_COUNT=$((CORE_COUNT * 2))
export var_ram="$RAM_SIZE"
export var_cpu="$CORE_COUNT"
export VERBOSE="yes"
export var_verbose="yes"
export RECOVERY_ATTEMPT=$((${RECOVERY_ATTEMPT:-0} + 1))
echo -e "${YW}Rebuilding with increased resources (attempt ${RECOVERY_ATTEMPT}/2):${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " RAM: ${old_ram}${GN}${RAM_SIZE}${CL} MiB (x2)"
echo -e " CPU: ${old_cpu}${GN}${CORE_COUNT}${CL} cores (x2)"
echo -e " Disk: ${DISK_SIZE} GB | Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ -n "${DNS_OPTION}" && "${response}" == "${DNS_OPTION}" ]]; then
# Retry with DNS override in LXC
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with DNS override...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export DNS_RETRY_OVERRIDE="true"
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Rebuilding with DNS override in LXC:${CL}"
echo -e " Container ID: ${old_ctid}${CTID}"
echo -e " DNS: ${GN}8.8.8.8, 1.1.1.1${CL} (inside LXC only)"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ "$handled" == false ]]; then
echo -e "\n${TAB}${YW}Invalid option. Container ${CTID} kept.${CL}"
exit $install_exit_code
fi
;;
esac
else else
# Timeout - auto-remove # Timeout - auto-remove
echo "" echo ""
@@ -4595,6 +4888,9 @@ create_lxc_container() {
exit 206 exit 206
fi fi
# Report installation start to API early - captures failures in storage/template/create
post_to_api
# Storage capability check # Storage capability check
check_storage_support "rootdir" || { check_storage_support "rootdir" || {
msg_error "No valid storage found for 'rootdir' [Container]" msg_error "No valid storage found for 'rootdir' [Container]"
@@ -5124,9 +5420,7 @@ create_lxc_container() {
} }
msg_ok "LXC Container ${BL}$CTID${CL} ${GN}was successfully created." msg_ok "LXC Container ${BL}$CTID${CL} ${GN}was successfully created."
post_progress_to_api # Signal container creation complete
# Report container creation to API
post_to_api
} }
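Taken together with the build.func hunks above, the reporting order over one container build now looks roughly like this (a condensed sketch; the labels are descriptive, not literal code):

# create_lxc_container():  post_to_api            -> "installing" (sent early, so storage/
#                                                    template/creation failures are captured)
# container created:       post_progress_to_api   -> "configuring"
# container started:       post_progress_to_api
# network reachable:       post_progress_to_api
# customization finished:  post_progress_to_api
# script end / trap:       post_update_to_api     -> "done" or "failed" plus exit code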
# ============================================================================== # ==============================================================================
@@ -5199,6 +5493,7 @@ EOF
# - If INSTALL_LOG points to a container path (e.g. /root/.install-*), # - If INSTALL_LOG points to a container path (e.g. /root/.install-*),
# tries to pull it from the container and create a combined log # tries to pull it from the container and create a combined log
# - This allows get_error_text() to find actual error output for telemetry # - This allows get_error_text() to find actual error output for telemetry
# - Uses timeout on pct pull to prevent hangs on dead/unresponsive containers
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
ensure_log_on_host() { ensure_log_on_host() {
# Already readable on host? Nothing to do. # Already readable on host? Nothing to do.
@@ -5228,9 +5523,9 @@ ensure_log_on_host() {
echo "" echo ""
} >>"$combined_log" } >>"$combined_log"
fi fi
# Pull INSTALL_LOG from container (with timeout to prevent hangs on dead containers)
local temp_log="/tmp/.install-temp-${SESSION_ID}.log"
if timeout 8 pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_log" 2>/dev/null; then
{ {
echo "================================================================================" echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)" echo "PHASE 2: APPLICATION INSTALLATION (Container)"
@@ -5253,20 +5548,34 @@ ensure_log_on_host() {
# - Exit trap handler for reporting to API telemetry # - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to PocketBase using centralized error descriptions # - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages # - Uses explain_exit_code() from api.func for consistent error messages
# - ALWAYS sends telemetry FIRST before log collection to prevent pct pull
#   hangs from blocking status updates (container may be dead/unresponsive)
# - For non-zero exit codes: posts "failed" status
# - For zero exit codes where post_update_to_api was never called:
# catches orphaned "installing" records (e.g., script exited cleanly
# but description() was never reached)
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
api_exit_script() {
  local exit_code=$?
  if [ $exit_code -ne 0 ]; then
    # ALWAYS send telemetry FIRST - ensure status is reported even if
    # ensure_log_on_host hangs (e.g. pct pull on dead container)
post_update_to_api "failed" "$exit_code" 2>/dev/null || true
# Best-effort log collection with timeout (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
fi
elif [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
# Script exited with 0 but never sent a completion status
# exit_code=0 is never an error — report as success
post_update_to_api "done" "0"
  fi
}
if command -v pveversion >/dev/null 2>&1; then
  trap 'api_exit_script' EXIT
fi
trap '_ec=$?; if [[ $_ec -ne 0 ]]; then post_update_to_api "failed" "$_ec" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; fi' ERR
trap 'post_update_to_api "failed" "129" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; exit 129' SIGHUP
trap 'post_update_to_api "failed" "130" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; exit 130' SIGINT
trap 'post_update_to_api "failed" "143" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; exit 143' SIGTERM

View File

@@ -1496,6 +1496,11 @@ cleanup_lxc() {
fi fi
msg_ok "Cleaned" msg_ok "Cleaned"
# Send progress ping if available (defined in install.func)
if declare -f post_progress_to_api &>/dev/null; then
post_progress_to_api
fi
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------

View File

@@ -37,24 +37,79 @@ if ! declare -f explain_exit_code &>/dev/null; then
case "$code" in case "$code" in
1) echo "General error / Operation not permitted" ;; 1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;; 2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;; 6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;; 7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;; 22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;; 28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;; 35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
100) echo "APT: Package manager error (broken packages / dependency problems)" ;; 100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;; 101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;; 102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
124) echo "Command timed out (timeout command)" ;; 124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;; 126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;; 127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;; 128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;; 129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;; 134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;; 137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;; 139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;; 141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;; 143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
150) echo "Systemd: Service failed to start" ;; 150) echo "Systemd: Service failed to start" ;;
151) echo "Systemd: Service unit not found" ;; 151) echo "Systemd: Service unit not found" ;;
152) echo "Permission denied (EACCES)" ;; 152) echo "Permission denied (EACCES)" ;;
@@ -100,6 +155,7 @@ if ! declare -f explain_exit_code &>/dev/null; then
224) echo "Proxmox: PBS storage is for backups only" ;; 224) echo "Proxmox: PBS storage is for backups only" ;;
225) echo "Proxmox: No template available for OS/Version" ;; 225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;; 231) echo "Proxmox: LXC stack upgrade failed" ;;
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;; 243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;; 245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;; 246) echo "Node.js: Internal JavaScript Parse Error" ;;
@@ -148,6 +204,12 @@ error_handler() {
printf "\e[?25h" printf "\e[?25h"
# ALWAYS report failure to API immediately - don't wait for container checks
# This ensures we capture failures that occur before/after container exists
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code" 2>/dev/null || true
fi
# Use msg_error if available, fallback to echo # Use msg_error if available, fallback to echo
if declare -f msg_error >/dev/null 2>&1; then if declare -f msg_error >/dev/null 2>&1; then
msg_error "in line ${line_number}: exit code ${exit_code} (${explanation}): while executing command ${command}" msg_error "in line ${line_number}: exit code ${exit_code} (${explanation}): while executing command ${command}"
@@ -198,11 +260,6 @@ error_handler() {
# Offer to remove container if it exists (build errors after container creation) # Offer to remove container if it exists (build errors after container creation)
if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then
# Report failure to API before container cleanup
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code"
fi
echo "" echo ""
if declare -f msg_custom >/dev/null 2>&1; then if declare -f msg_custom >/dev/null 2>&1; then
echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}" echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
@@ -273,6 +330,8 @@ error_handler() {
# - Cleans up lock files if lockfile variable is set # - Cleans up lock files if lockfile variable is set
# - Exits with captured exit code # - Exits with captured exit code
# - Always runs on script termination (success or failure) # - Always runs on script termination (success or failure)
# - For signal exits (>128): sends telemetry FIRST before log collection
# to prevent pct pull hangs from blocking status updates
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
on_exit() { on_exit() {
local exit_code=$? local exit_code=$?
@@ -281,14 +340,17 @@ on_exit() {
# post_to_api was called ("installing" sent) but post_update_to_api was never called # post_to_api was called ("installing" sent) but post_update_to_api was never called
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then if declare -f post_update_to_api >/dev/null 2>&1; then
      # ALWAYS send telemetry FIRST - ensure status is reported even if
      # ensure_log_on_host hangs (e.g. pct pull on dead/unresponsive container)
      if [[ $exit_code -ne 0 ]]; then
        post_update_to_api "failed" "$exit_code" 2>/dev/null || true
      else
        # exit_code=0 is never an error — report as success
        post_update_to_api "done" "0" 2>/dev/null || true
      fi
      # Best-effort log collection with timeout (non-critical after telemetry is sent)
      if declare -f ensure_log_on_host >/dev/null 2>&1; then
        timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
      fi
    fi
  fi
@@ -300,22 +362,26 @@ on_exit() {
# on_interrupt() # on_interrupt()
# #
# - SIGINT (Ctrl+C) trap handler # - SIGINT (Ctrl+C) trap handler
# - Reports to telemetry FIRST (time-critical: container may be dying)
# - Displays "Interrupted by user" message # - Displays "Interrupted by user" message
# - Exits with code 130 (128 + SIGINT=2) # - Exits with code 130 (128 + SIGINT=2)
# - Output redirected to /dev/null fallback to prevent SIGPIPE on closed terminals
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
on_interrupt() { on_interrupt() {
  # CRITICAL: Send telemetry FIRST before any cleanup or output
  # If ensure_log_on_host hangs (e.g. pct pull on dying container),
  # the status update would never be sent, leaving records stuck in "installing"
  if declare -f post_update_to_api >/dev/null 2>&1; then
    post_update_to_api "failed" "130" 2>/dev/null || true
  fi
  # Best-effort log collection with timeout (non-critical after telemetry is sent)
  if declare -f ensure_log_on_host >/dev/null 2>&1; then
    timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
  fi
  if declare -f msg_error >/dev/null 2>&1; then
    msg_error "Interrupted by user (SIGINT)" 2>/dev/null || true
  else
    echo -e "\n${RD}Interrupted by user (SIGINT)${CL}" 2>/dev/null || true
fi fi
exit 130 exit 130
} }
@@ -324,23 +390,27 @@ on_interrupt() {
# on_terminate() # on_terminate()
# #
# - SIGTERM trap handler # - SIGTERM trap handler
# - Reports to telemetry FIRST (time-critical: process being killed)
# - Displays "Terminated by signal" message # - Displays "Terminated by signal" message
# - Exits with code 143 (128 + SIGTERM=15) # - Exits with code 143 (128 + SIGTERM=15)
# - Triggered by external process termination # - Triggered by external process termination
# - Output falls back to /dev/null to prevent SIGPIPE on closed terminals
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
on_terminate() { on_terminate() {
# Ensure log is accessible on host before reporting # CRITICAL: Send telemetry FIRST before any cleanup or output
if declare -f ensure_log_on_host >/dev/null 2>&1; then # Same rationale as on_interrupt: ensure status gets reported even if
ensure_log_on_host # ensure_log_on_host hangs or terminal is already closed
fi
# Report termination to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "143" post_update_to_api "failed" "143" 2>/dev/null || true
fi
# Best-effort log collection with timeout (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
fi fi
if declare -f msg_error >/dev/null 2>&1; then if declare -f msg_error >/dev/null 2>&1; then
msg_error "Terminated by signal (SIGTERM)" msg_error "Terminated by signal (SIGTERM)" 2>/dev/null || true
else else
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}" echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}" 2>/dev/null || true
fi fi
exit 143 exit 143
} }
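Taken together, the handlers cover errors, interrupts, termination, and normal exit, each reporting telemetry before any potentially blocking work. A sketch of the registration they assume, shown here only for orientation (the exact trap lines in the repo may differ):

  set -Eeuo pipefail
  trap 'error_handler' ERR     # immediate failure report (error_handler above)
  trap on_exit EXIT            # fallback telemetry + lock cleanup on any exit
  trap on_interrupt INT        # Ctrl+C: report "failed 130" first, then exit 130
  trap on_terminate TERM       # external kill: report "failed 143" first, then exit 143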

View File

@@ -37,9 +37,35 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
load_functions load_functions
catch_errors catch_errors
# Persist diagnostics setting inside container (exported from build.func)
# so addon scripts running later can find the user's choice
if [[ ! -f /usr/local/community-scripts/diagnostics ]]; then
mkdir -p /usr/local/community-scripts
echo "DIAGNOSTICS=${DIAGNOSTICS:-no}" >/usr/local/community-scripts/diagnostics
fi
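Persisting the choice means later tooling inside the container no longer depends on the exported variable. A hypothetical consumer, reading the value back from the same file (path taken from the block above):

  if [[ -z "${DIAGNOSTICS:-}" && -f /usr/local/community-scripts/diagnostics ]]; then
    source /usr/local/community-scripts/diagnostics   # file contains DIAGNOSTICS=yes|no
  fi
  echo "Diagnostics enabled: ${DIAGNOSTICS:-no}"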
# Get LXC IP address (must be called INSIDE container, after network is up) # Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
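Because of the gating, the ping is cheap to sprinkle at every milestone: with diagnostics off (or no UUID) it returns immediately without touching the network. A quick illustration; the UUID and app name below are placeholders, and uuidgen is assumed available:

  DIAGNOSTICS=no
  post_progress_to_api                      # no-op: returns 0, no request sent
  DIAGNOSTICS=yes RANDOM_UUID="$(uuidgen)" app="demo"
  post_progress_to_api                      # posts status "configuring" for this placeholder run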
# ============================================================================== # ==============================================================================
# SECTION 2: NETWORK & CONNECTIVITY # SECTION 2: NETWORK & CONNECTIVITY
# ============================================================================== # ==============================================================================
@@ -103,6 +129,7 @@ setting_up_container() {
msg_ok "Set up Container OS" msg_ok "Set up Container OS"
#msg_custom "${CM}" "${GN}" "Network Connected: ${BL}$(hostname -I)" #msg_custom "${CM}" "${GN}" "Network Connected: ${BL}$(hostname -I)"
msg_ok "Network Connected: ${BL}$(hostname -I)" msg_ok "Network Connected: ${BL}$(hostname -I)"
post_progress_to_api
} }
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
@@ -206,8 +233,18 @@ EOF
$STD apt-get -o Dpkg::Options::="--force-confold" -y dist-upgrade $STD apt-get -o Dpkg::Options::="--force-confold" -y dist-upgrade
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
msg_ok "Updated Container OS" msg_ok "Updated Container OS"
post_progress_to_api
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
} }
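The download, validate, then source guard replaces the bare source <(curl ...) call, so a truncated or failed download aborts cleanly instead of half-defining helpers. A hypothetical reusable form of the same guard; the helper name and return code are illustrative, not part of tools.func:

  source_remote_func() {
    local url="$1" probe="$2" content
    content=$(curl -fsSL "$url") || { msg_error "Failed to download ${url##*/}"; return 6; }
    source /dev/stdin <<<"$content"
    if ! declare -f "$probe" >/dev/null 2>&1; then
      msg_error "${url##*/} loaded but incomplete (missing ${probe})"
      return 6
    fi
  }
  # e.g. source_remote_func "https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func" fetch_and_deploy_gh_release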
# ============================================================================== # ==============================================================================
@@ -248,6 +285,7 @@ motd_ssh() {
sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/g" /etc/ssh/sshd_config sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
systemctl restart sshd systemctl restart sshd
fi fi
post_progress_to_api
} }
# ============================================================================== # ==============================================================================
@@ -286,4 +324,5 @@ EOF
chmod 700 /root/.ssh chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys chmod 600 /root/.ssh/authorized_keys
fi fi
post_progress_to_api
} }

View File

@@ -529,9 +529,21 @@ cleanup_vmid() {
} }
cleanup() { cleanup() {
local exit_code=$?
if [[ "$(dirs -p | wc -l)" -gt 1 ]]; then if [[ "$(dirs -p | wc -l)" -gt 1 ]]; then
popd >/dev/null || true popd >/dev/null || true
fi fi
# Report final telemetry status if post_to_api_vm was called but no update was sent
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
# Exited cleanly but description()/success was never called — shouldn't happen
post_update_to_api "failed" "1"
fi
fi
fi
} }
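cleanup() only fires the fallback when the "installing" record was opened but never closed, which depends on two flags set by the API helpers. A stub sketch of that handshake (the real functions live in api.func; the bodies here are illustrative):

  post_to_api_vm()     { POST_TO_API_DONE=true; }                       # record opened ("installing")
  post_update_to_api() { POST_UPDATE_DONE=true; echo "telemetry: $1 ($2)"; }
  # If the script dies anywhere between these two calls, the EXIT trap runs cleanup(),
  # which sees POST_TO_API_DONE=true with POST_UPDATE_DONE unset and reports "failed <exit code>",
  # so the record never stays stuck in "installing".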
check_root() { check_root() {

View File

@@ -19,6 +19,11 @@ EOF
} }
header_info header_info
set -e set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-netbird-lxc" "addon"
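The same two-line pattern recurs in every addon below: the source is silenced and guarded, so a missing network or a broken api.func degrades to "no telemetry" rather than a failed script. A quick demonstration of the failure path; the URL here is deliberately unreachable and purely illustrative:

  source <(curl -fsSL https://unreachable.invalid/api.func) 2>/dev/null || true
  declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-netbird-lxc" "addon"
  echo "addon continues without telemetry"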
while true; do while true; do
read -p "This will add NetBird to an existing LXC Container ONLY. Proceed(y/n)?" yn read -p "This will add NetBird to an existing LXC Container ONLY. Proceed(y/n)?" yn
case $yn in case $yn in

View File

@@ -23,6 +23,10 @@ function msg_info() { echo -e " \e[1;36m➤\e[0m $1"; }
function msg_ok() { echo -e " \e[1;32m✔\e[0m $1"; } function msg_ok() { echo -e " \e[1;32m✔\e[0m $1"; }
function msg_error() { echo -e " \e[1;31m✖\e[0m $1"; } function msg_error() { echo -e " \e[1;31m✖\e[0m $1"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-tailscale-lxc" "addon"
header_info header_info
if ! command -v pveversion &>/dev/null; then if ! command -v pveversion &>/dev/null; then

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=8080
# Initialize all core functions (colors, formatting, icons, STD mode) # Initialize all core functions (colors, formatting, icons, STD mode)
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# HEADER # HEADER

View File

@@ -42,6 +42,11 @@ function msg() {
local TEXT="$1" local TEXT="$1"
echo -e "$TEXT" echo -e "$TEXT"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "all-templates" "addon"
function validate_container_id() { function validate_container_id() {
local ctid="$1" local ctid="$1"
# Check if ID is numeric # Check if ID is numeric

View File

@@ -28,6 +28,11 @@ HOLD="-"
CM="${GN}${CL}" CM="${GN}${CL}"
APP="Coder Code Server" APP="Coder Code Server"
hostname="$(hostname)" hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "coder-code-server" "addon"
set -o errexit set -o errexit
set -o errtrace set -o errtrace
set -o nounset set -o nounset

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
trap 'error_handler' ERR trap 'error_handler' ERR
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# CONFIGURATION # CONFIGURATION

View File

@@ -17,6 +17,11 @@ HOLD="-"
CM="${GN}${CL}" CM="${GN}${CL}"
APP="CrowdSec" APP="CrowdSec"
hostname="$(hostname)" hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "crowdsec" "addon"
set -o errexit set -o errexit
set -o errtrace set -o errtrace
set -o nounset set -o nounset

View File

@@ -32,6 +32,10 @@ DEFAULT_PORT=8080
SRC_DIR="/" SRC_DIR="/"
TMP_BIN="/tmp/filebrowser.$$" TMP_BIN="/tmp/filebrowser.$$"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "filebrowser-quantum" "addon"
# Get primary IP # Get primary IP
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}') IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1) IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)

View File

@@ -5,8 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
function header_info { function header_info {
clear clear
cat <<"EOF" cat <<"EOF"
_______ __ ____ _______ __ ____
/ ____(_) /__ / __ )_________ _ __________ _____ / ____(_) /__ / __ )_________ _ __________ _____
/ /_ / / / _ \/ __ / ___/ __ \ | /| / / ___/ _ \/ ___/ / /_ / / / _ \/ __ / ___/ __ \ | /| / / ___/ _ \/ ___/
@@ -29,6 +29,10 @@ INSTALL_PATH="/usr/local/bin/filebrowser"
DB_PATH="/usr/local/community-scripts/filebrowser.db" DB_PATH="/usr/local/community-scripts/filebrowser.db"
DEFAULT_PORT=8080 DEFAULT_PORT=8080
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "filebrowser" "addon"
# Get first non-loopback IP & Detect primary network interface dynamically # Get first non-loopback IP & Detect primary network interface dynamically
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}') IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1) IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
@@ -38,65 +42,65 @@ IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n
# Detect OS # Detect OS
if [[ -f "/etc/alpine-release" ]]; then if [[ -f "/etc/alpine-release" ]]; then
OS="Alpine" OS="Alpine"
SERVICE_PATH="/etc/init.d/filebrowser" SERVICE_PATH="/etc/init.d/filebrowser"
PKG_MANAGER="apk add --no-cache" PKG_MANAGER="apk add --no-cache"
elif [[ -f "/etc/debian_version" ]]; then elif [[ -f "/etc/debian_version" ]]; then
OS="Debian" OS="Debian"
SERVICE_PATH="/etc/systemd/system/filebrowser.service" SERVICE_PATH="/etc/systemd/system/filebrowser.service"
PKG_MANAGER="apt-get install -y" PKG_MANAGER="apt-get install -y"
else else
echo -e "${CROSS} Unsupported OS detected. Exiting." echo -e "${CROSS} Unsupported OS detected. Exiting."
exit 1 exit 1
fi fi
header_info header_info
function msg_info() { function msg_info() {
local msg="$1" local msg="$1"
echo -e "${INFO} ${YW}${msg}...${CL}" echo -e "${INFO} ${YW}${msg}...${CL}"
} }
function msg_ok() { function msg_ok() {
local msg="$1" local msg="$1"
echo -e "${CM} ${GN}${msg}${CL}" echo -e "${CM} ${GN}${msg}${CL}"
} }
function msg_error() { function msg_error() {
local msg="$1" local msg="$1"
echo -e "${CROSS} ${RD}${msg}${CL}" echo -e "${CROSS} ${RD}${msg}${CL}"
} }
if [ -f "$INSTALL_PATH" ]; then if [ -f "$INSTALL_PATH" ]; then
echo -e "${YW}⚠️ ${APP} is already installed.${CL}" echo -e "${YW}⚠️ ${APP} is already installed.${CL}"
read -r -p "Would you like to uninstall ${APP}? (y/N): " uninstall_prompt read -r -p "Would you like to uninstall ${APP}? (y/N): " uninstall_prompt
if [[ "${uninstall_prompt,,}" =~ ^(y|yes)$ ]]; then if [[ "${uninstall_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Uninstalling ${APP}" msg_info "Uninstalling ${APP}"
if [[ "$OS" == "Debian" ]]; then if [[ "$OS" == "Debian" ]]; then
systemctl disable --now filebrowser.service &>/dev/null systemctl disable --now filebrowser.service &>/dev/null
rm -f "$SERVICE_PATH" rm -f "$SERVICE_PATH"
else
rc-service filebrowser stop &>/dev/null
rc-update del filebrowser &>/dev/null
rm -f "$SERVICE_PATH"
fi
rm -f "$INSTALL_PATH" "$DB_PATH"
msg_ok "${APP} has been uninstalled."
exit 0
fi
read -r -p "Would you like to update ${APP}? (y/N): " update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Updated ${APP}"
exit 0
else else
echo -e "${YW}⚠️ Update skipped. Exiting.${CL}" rc-service filebrowser stop &>/dev/null
exit 0 rc-update del filebrowser &>/dev/null
rm -f "$SERVICE_PATH"
fi fi
rm -f "$INSTALL_PATH" "$DB_PATH"
msg_ok "${APP} has been uninstalled."
exit 0
fi
read -r -p "Would you like to update ${APP}? (y/N): " update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Updated ${APP}"
exit 0
else
echo -e "${YW}⚠️ Update skipped. Exiting.${CL}"
exit 0
fi
fi fi
echo -e "${YW}⚠️ ${APP} is not installed.${CL}" echo -e "${YW}⚠️ ${APP} is not installed.${CL}"
@@ -105,43 +109,43 @@ PORT=${PORT:-$DEFAULT_PORT}
read -r -p "Would you like to install ${APP}? (y/n): " install_prompt read -r -p "Would you like to install ${APP}? (y/n): " install_prompt
if [[ "${install_prompt,,}" =~ ^(y|yes)$ ]]; then if [[ "${install_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Installing ${APP} on ${OS}" msg_info "Installing ${APP} on ${OS}"
$PKG_MANAGER wget tar curl &>/dev/null $PKG_MANAGER wget tar curl &>/dev/null
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH" chmod +x "$INSTALL_PATH"
msg_ok "Installed ${APP}" msg_ok "Installed ${APP}"
msg_info "Creating FileBrowser directory" msg_info "Creating FileBrowser directory"
mkdir -p /usr/local/community-scripts mkdir -p /usr/local/community-scripts
chown root:root /usr/local/community-scripts chown root:root /usr/local/community-scripts
chmod 755 /usr/local/community-scripts chmod 755 /usr/local/community-scripts
touch "$DB_PATH" touch "$DB_PATH"
chown root:root "$DB_PATH" chown root:root "$DB_PATH"
chmod 644 "$DB_PATH" chmod 644 "$DB_PATH"
msg_ok "Directory created successfully" msg_ok "Directory created successfully"
read -r -p "Would you like to use No Authentication? (y/N): " auth_prompt read -r -p "Would you like to use No Authentication? (y/N): " auth_prompt
if [[ "${auth_prompt,,}" =~ ^(y|yes)$ ]]; then if [[ "${auth_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Configuring No Authentication" msg_info "Configuring No Authentication"
cd /usr/local/community-scripts cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config init --auth.method=noauth &>/dev/null filebrowser config init --auth.method=noauth &>/dev/null
filebrowser config set --auth.method=noauth &>/dev/null filebrowser config set --auth.method=noauth &>/dev/null
filebrowser users add ID 1 --perm.admin &>/dev/null filebrowser users add ID 1 --perm.admin &>/dev/null
msg_ok "No Authentication configured" msg_ok "No Authentication configured"
else else
msg_info "Setting up default authentication" msg_info "Setting up default authentication"
cd /usr/local/community-scripts cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser users add admin helper-scripts.com --perm.admin --database "$DB_PATH" &>/dev/null filebrowser users add admin helper-scripts.com --perm.admin --database "$DB_PATH" &>/dev/null
msg_ok "Default authentication configured (admin:helper-scripts.com)" msg_ok "Default authentication configured (admin:helper-scripts.com)"
fi fi
msg_info "Creating service" msg_info "Creating service"
if [[ "$OS" == "Debian" ]]; then if [[ "$OS" == "Debian" ]]; then
cat <<EOF >"$SERVICE_PATH" cat <<EOF >"$SERVICE_PATH"
[Unit] [Unit]
Description=Filebrowser Description=Filebrowser
After=network-online.target After=network-online.target
@@ -157,9 +161,9 @@ Restart=always
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
EOF EOF
systemctl enable -q --now filebrowser systemctl enable -q --now filebrowser
else else
cat <<EOF >"$SERVICE_PATH" cat <<EOF >"$SERVICE_PATH"
#!/sbin/openrc-run #!/sbin/openrc-run
command="/usr/local/bin/filebrowser" command="/usr/local/bin/filebrowser"
@@ -172,14 +176,14 @@ depend() {
need net need net
} }
EOF EOF
chmod +x "$SERVICE_PATH" chmod +x "$SERVICE_PATH"
rc-update add filebrowser default &>/dev/null rc-update add filebrowser default &>/dev/null
rc-service filebrowser start &>/dev/null rc-service filebrowser start &>/dev/null
fi fi
msg_ok "Service created successfully" msg_ok "Service created successfully"
echo -e "${CM} ${GN}${APP} is reachable at: ${BL}http://$IP:$PORT${CL}" echo -e "${CM} ${GN}${APP} is reachable at: ${BL}http://$IP:$PORT${CL}"
else else
echo -e "${YW}⚠️ Installation skipped. Exiting.${CL}" echo -e "${YW}⚠️ Installation skipped. Exiting.${CL}"
exit 0 exit 0
fi fi

View File

@@ -30,6 +30,10 @@ function msg_info() { echo -e "${INFO} ${YW}$1...${CL}"; }
function msg_ok() { echo -e "${CM} ${GN}$1${CL}"; } function msg_ok() { echo -e "${CM} ${GN}$1${CL}"; }
function msg_error() { echo -e "${CROSS} ${RD}$1${CL}"; } function msg_error() { echo -e "${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "glances" "addon"
get_lxc_ip() { get_lxc_ip() {
if command -v hostname >/dev/null 2>&1 && hostname -I 2>/dev/null; then if command -v hostname >/dev/null 2>&1 && hostname -I 2>/dev/null; then
hostname -I | awk '{print $1}' hostname -I | awk '{print $1}'

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, STD mode) # Initialize all core functions (colors, formatting, icons, STD mode)
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# HEADER # HEADER
@@ -104,6 +106,10 @@ function update() {
$STD npm run build $STD npm run build
msg_ok "Built ${APP}" msg_ok "Built ${APP}"
msg_info "Updating service"
create_service
msg_ok "Updated service"
msg_info "Starting service" msg_info "Starting service"
systemctl start immich-proxy systemctl start immich-proxy
msg_ok "Started service" msg_ok "Started service"
@@ -112,6 +118,27 @@ function update() {
fi fi
} }
function create_service() {
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}/app
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/dist/index.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
}
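Extracting the unit into create_service() lets the update path rewrite the service file (new WorkingDirectory and the dist/index.js entry point) before restarting. A quick way to confirm the refreshed unit after an update; this check is an assumption, not part of the script:

  systemctl cat immich-proxy | grep -E 'WorkingDirectory|ExecStart'   # expect ${INSTALL_PATH}/app and dist/index.js
  systemctl restart immich-proxy && systemctl is-active immich-proxy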
# ============================================================================== # ==============================================================================
# INSTALL # INSTALL
# ============================================================================== # ==============================================================================
@@ -173,23 +200,7 @@ EOF
msg_ok "Created configuration" msg_ok "Created configuration"
msg_info "Creating service" msg_info "Creating service"
cat <<EOF >"$SERVICE_PATH" create_service
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/server.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now immich-proxy systemctl enable -q --now immich-proxy
msg_ok "Created and started service" msg_ok "Created and started service"

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, STD mode) # Initialize all core functions (colors, formatting, icons, STD mode)
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# HEADER # HEADER

View File

@@ -26,6 +26,11 @@ BFR="\\r\\033[K"
HOLD="-" HOLD="-"
CM="${GN}${CL}" CM="${GN}${CL}"
silent() { "$@" >/dev/null 2>&1; } silent() { "$@" >/dev/null 2>&1; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "netdata" "addon"
set -e set -e
header_info header_info
echo "Loading..." echo "Loading..."

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
trap 'error_handler' ERR trap 'error_handler' ERR
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# CONFIGURATION # CONFIGURATION

View File

@@ -27,6 +27,11 @@ HOLD="-"
CM="${GN}${CL}" CM="${GN}${CL}"
APP="OliveTin" APP="OliveTin"
hostname="$(hostname)" hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "olivetin" "addon"
set -e set -e
header_info header_info

View File

@@ -29,6 +29,10 @@ APP="phpMyAdmin"
INSTALL_DIR_DEBIAN="/var/www/html/phpMyAdmin" INSTALL_DIR_DEBIAN="/var/www/html/phpMyAdmin"
INSTALL_DIR_ALPINE="/usr/share/phpmyadmin" INSTALL_DIR_ALPINE="/usr/share/phpmyadmin"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "phpmyadmin" "addon"
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}') IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1) IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
[[ -z "$IP" ]] && IP=$(hostname -I | awk '{print $1}') [[ -z "$IP" ]] && IP=$(hostname -I | awk '{print $1}')

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
trap 'error_handler' ERR trap 'error_handler' ERR
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# CONFIGURATION # CONFIGURATION

View File

@@ -8,11 +8,13 @@
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
trap 'error_handler' ERR trap 'error_handler' ERR
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# CONFIGURATION # CONFIGURATION

View File

@@ -28,6 +28,11 @@ function msg_error() {
local msg="$1" local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pyenv" "addon"
if command -v pveversion >/dev/null 2>&1; then if command -v pveversion >/dev/null 2>&1; then
msg_error "Can't Install on Proxmox " msg_error "Can't Install on Proxmox "
exit exit

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func) source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling # Enable error handling
set -Eeuo pipefail set -Eeuo pipefail
trap 'error_handler' ERR trap 'error_handler' ERR
load_functions load_functions
init_tool_telemetry "" "addon"
# ============================================================================== # ==============================================================================
# CONFIGURATION # CONFIGURATION

View File

@@ -36,6 +36,10 @@ msg_ok() {
echo -e "${BFR} ${CM} ${GN}${msg}${CL}" echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "webmin" "addon"
header_info header_info
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Webmin Installer" --yesno "This Will Install Webmin on this LXC Container. Proceed?" 10 58 whiptail --backtitle "Proxmox VE Helper Scripts" --title "Webmin Installer" --yesno "This Will Install Webmin on this LXC Container. Proceed?" 10 58

View File

@@ -31,6 +31,10 @@ HOLD=" "
CM="${GN}${CL} " CM="${GN}${CL} "
CROSS="${RD}${CL} " CROSS="${RD}${CL} "
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-iptag" "pve"
# Stop any running spinner # Stop any running spinner
stop_spinner() { stop_spinner() {
if [ -n "$SPINNER_PID" ] && kill -0 "$SPINNER_PID" 2>/dev/null; then if [ -n "$SPINNER_PID" ] && kill -0 "$SPINNER_PID" 2>/dev/null; then

View File

@@ -22,6 +22,10 @@ CM='\xE2\x9C\x94\033'
GN="\033[1;92m" GN="\033[1;92m"
CL="\033[m" CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "clean-lxcs" "pve"
header_info header_info
echo "Loading..." echo "Loading..."

View File

@@ -5,8 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
function header_info { function header_info {
clear clear
cat <<"EOF" cat <<"EOF"
____ ________ ____ __ __ __ _ ____ ___ ____ ________ ____ __ __ __ _ ____ ___
/ __ \_________ _ ______ ___ ____ _ __ / ____/ /__ ____ _____ / __ \_________ / /_ ____ _____ ___ ____/ / / /| | / / |/ /____ / __ \_________ _ ______ ___ ____ _ __ / ____/ /__ ____ _____ / __ \_________ / /_ ____ _____ ___ ____/ / / /| | / / |/ /____
/ /_/ / ___/ __ \| |/_/ __ `__ \/ __ \| |/_/ / / / / _ \/ __ `/ __ \ / / / / ___/ __ \/ __ \/ __ `/ __ \/ _ \/ __ / / / | | / / /|_/ / ___/ / /_/ / ___/ __ \| |/_/ __ `__ \/ __ \| |/_/ / / / / _ \/ __ `/ __ \ / / / / ___/ __ \/ __ \/ __ `/ __ \/ _ \/ __ / / / | | / / /|_/ / ___/
@@ -16,62 +16,66 @@ function header_info {
EOF EOF
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "clean-orphaned-lvm" "pve"
# Function to check for orphaned LVM volumes # Function to check for orphaned LVM volumes
function find_orphaned_lvm { function find_orphaned_lvm {
echo -e "\n🔍 Scanning for orphaned LVM volumes...\n" echo -e "\n🔍 Scanning for orphaned LVM volumes...\n"
orphaned_volumes=() orphaned_volumes=()
while read -r lv vg size seg_type; do while read -r lv vg size seg_type; do
# Exclude system-critical LVs and Ceph OSDs # Exclude system-critical LVs and Ceph OSDs
if [[ "$lv" == "data" || "$lv" == "root" || "$lv" == "swap" || "$lv" =~ ^osd-block- ]]; then if [[ "$lv" == "data" || "$lv" == "root" || "$lv" == "swap" || "$lv" =~ ^osd-block- ]]; then
continue continue
fi fi
# Exclude thin pools (any name) # Exclude thin pools (any name)
if [[ "$seg_type" == "thin-pool" ]]; then if [[ "$seg_type" == "thin-pool" ]]; then
continue continue
fi fi
container_id=$(echo "$lv" | grep -oE "[0-9]+" | head -1) container_id=$(echo "$lv" | grep -oE "[0-9]+" | head -1)
# Check if the ID exists as a VM or LXC container # Check if the ID exists as a VM or LXC container
if [ -f "/etc/pve/lxc/${container_id}.conf" ] || [ -f "/etc/pve/qemu-server/${container_id}.conf" ]; then if [ -f "/etc/pve/lxc/${container_id}.conf" ] || [ -f "/etc/pve/qemu-server/${container_id}.conf" ]; then
continue continue
fi fi
orphaned_volumes+=("$lv" "$vg" "$size") orphaned_volumes+=("$lv" "$vg" "$size")
done < <(lvs --noheadings -o lv_name,vg_name,lv_size,seg_type --separator ' ' 2>/dev/null | awk '{print $1, $2, $3, $4}') done < <(lvs --noheadings -o lv_name,vg_name,lv_size,seg_type --separator ' ' 2>/dev/null | awk '{print $1, $2, $3, $4}')
# Display orphaned volumes # Display orphaned volumes
echo -e "❗ The following orphaned LVM volumes were found:\n" echo -e "❗ The following orphaned LVM volumes were found:\n"
printf "%-25s %-10s %-10s\n" "LV Name" "VG" "Size" printf "%-25s %-10s %-10s\n" "LV Name" "VG" "Size"
printf "%-25s %-10s %-10s\n" "-------------------------" "----------" "----------" printf "%-25s %-10s %-10s\n" "-------------------------" "----------" "----------"
for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do
printf "%-25s %-10s %-10s\n" "${orphaned_volumes[i]}" "${orphaned_volumes[i + 1]}" "${orphaned_volumes[i + 2]}" printf "%-25s %-10s %-10s\n" "${orphaned_volumes[i]}" "${orphaned_volumes[i + 1]}" "${orphaned_volumes[i + 2]}"
done done
echo "" echo ""
} }
# Function to delete selected volumes # Function to delete selected volumes
function delete_orphaned_lvm { function delete_orphaned_lvm {
for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do
lv="${orphaned_volumes[i]}" lv="${orphaned_volumes[i]}"
vg="${orphaned_volumes[i + 1]}" vg="${orphaned_volumes[i + 1]}"
size="${orphaned_volumes[i + 2]}" size="${orphaned_volumes[i + 2]}"
read -p "❓ Do you want to delete $lv (VG: $vg, Size: $size)? [y/N]: " confirm read -p "❓ Do you want to delete $lv (VG: $vg, Size: $size)? [y/N]: " confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then if [[ "$confirm" =~ ^[Yy]$ ]]; then
echo -e "🗑️ Deleting $lv from $vg..." echo -e "🗑️ Deleting $lv from $vg..."
lvremove -f "$vg/$lv" lvremove -f "$vg/$lv"
if [ $? -eq 0 ]; then if [ $? -eq 0 ]; then
echo -e "✅ Successfully deleted $lv.\n" echo -e "✅ Successfully deleted $lv.\n"
else else
echo -e "❌ Failed to delete $lv.\n" echo -e "❌ Failed to delete $lv.\n"
fi fi
else else
echo -e "⚠️ Skipping $lv.\n" echo -e "⚠️ Skipping $lv.\n"
fi fi
done done
} }
# Run script # Run script

View File

@@ -7,8 +7,8 @@
clear clear
if command -v pveversion >/dev/null 2>&1; then if command -v pveversion >/dev/null 2>&1; then
echo -e "⚠️ Can't Run from the Proxmox Shell" echo -e "⚠️ Can't Run from the Proxmox Shell"
exit exit
fi fi
YW=$(echo "\033[33m") YW=$(echo "\033[33m")
BL=$(echo "\033[36m") BL=$(echo "\033[36m")
@@ -23,16 +23,16 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}" CROSS="${RD}${CL}"
APP="Home Assistant Container" APP="Home Assistant Container"
while true; do while true; do
read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
case $yn in case $yn in
[Yy]*) break ;; [Yy]*) break ;;
[Nn]*) exit ;; [Nn]*) exit ;;
*) echo "Please answer yes or no." ;; *) echo "Please answer yes or no." ;;
esac esac
done done
clear clear
function header_info { function header_info {
cat <<"EOF" cat <<"EOF"
__ __ ___ _ __ __ __ __ ___ _ __ __
/ / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_ / / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_
/ /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/ / /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/
@@ -44,35 +44,39 @@ EOF
header_info header_info
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "container-restore" "pve"
function msg_info() { function msg_info() {
local msg="$1" local msg="$1"
echo -ne " ${HOLD} ${YW}${msg}..." echo -ne " ${HOLD} ${YW}${msg}..."
} }
function msg_ok() { function msg_ok() {
local msg="$1" local msg="$1"
echo -e "${BFR} ${CM} ${GN}${msg}${CL}" echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
} }
function msg_error() { function msg_error() {
local msg="$1" local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
if [ -z "$(ls -A /var/lib/docker/volumes/hass_config/_data/backups/)" ]; then if [ -z "$(ls -A /var/lib/docker/volumes/hass_config/_data/backups/)" ]; then
msg_error "No backups found! \n" msg_error "No backups found! \n"
exit 1 exit 1
fi fi
DIR=/var/lib/docker/volumes/hass_config/_data/restore DIR=/var/lib/docker/volumes/hass_config/_data/restore
if [ -d "$DIR" ]; then if [ -d "$DIR" ]; then
msg_ok "Restore Directory Exists." msg_ok "Restore Directory Exists."
else else
mkdir -p /var/lib/docker/volumes/hass_config/_data/restore mkdir -p /var/lib/docker/volumes/hass_config/_data/restore
msg_ok "Created Restore Directory." msg_ok "Created Restore Directory."
fi fi
cd /var/lib/docker/volumes/hass_config/_data/backups/ cd /var/lib/docker/volumes/hass_config/_data/backups/
PS3="Please enter your choice: " PS3="Please enter your choice: "
files="$(ls -A .)" files="$(ls -A .)"
select filename in ${files}; do select filename in ${files}; do
msg_ok "You selected ${BL}${filename}${CL}" msg_ok "You selected ${BL}${filename}${CL}"
break break
done done
msg_info "Stopping Home Assistant" msg_info "Stopping Home Assistant"
docker stop homeassistant &>/dev/null docker stop homeassistant &>/dev/null

View File

@@ -7,8 +7,8 @@
clear clear
if command -v pveversion >/dev/null 2>&1; then if command -v pveversion >/dev/null 2>&1; then
echo -e "⚠️ Can't Run from the Proxmox Shell" echo -e "⚠️ Can't Run from the Proxmox Shell"
exit exit
fi fi
YW=$(echo "\033[33m") YW=$(echo "\033[33m")
BL=$(echo "\033[36m") BL=$(echo "\033[36m")
@@ -23,16 +23,16 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}" CROSS="${RD}${CL}"
APP="Home Assistant Core" APP="Home Assistant Core"
while true; do while true; do
read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
case $yn in case $yn in
[Yy]*) break ;; [Yy]*) break ;;
[Nn]*) exit ;; [Nn]*) exit ;;
*) echo "Please answer yes or no." ;; *) echo "Please answer yes or no." ;;
esac esac
done done
clear clear
function header_info { function header_info {
cat <<"EOF" cat <<"EOF"
__ __ ___ _ __ __ ______ __ __ ___ _ __ __ ______
/ / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_ / ____/___ ________ / / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_ / ____/___ ________
/ /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/ / / / __ \/ ___/ _ \ / /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/ / / / __ \/ ___/ _ \
@@ -44,35 +44,39 @@ EOF
header_info header_info
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "core-restore" "pve"
function msg_info() { function msg_info() {
local msg="$1" local msg="$1"
echo -ne " ${HOLD} ${YW}${msg}..." echo -ne " ${HOLD} ${YW}${msg}..."
} }
function msg_ok() { function msg_ok() {
local msg="$1" local msg="$1"
echo -e "${BFR} ${CM} ${GN}${msg}${CL}" echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
} }
function msg_error() { function msg_error() {
local msg="$1" local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
if [ -z "$(ls -A /root/.homeassistant/backups/)" ]; then if [ -z "$(ls -A /root/.homeassistant/backups/)" ]; then
msg_error "No backups found! \n" msg_error "No backups found! \n"
exit 1 exit 1
fi fi
DIR=/root/.homeassistant/restore DIR=/root/.homeassistant/restore
if [ -d "$DIR" ]; then if [ -d "$DIR" ]; then
msg_ok "Restore Directory Exists." msg_ok "Restore Directory Exists."
else else
mkdir -p /root/.homeassistant/restore mkdir -p /root/.homeassistant/restore
msg_ok "Created Restore Directory." msg_ok "Created Restore Directory."
fi fi
cd /root/.homeassistant/backups/ cd /root/.homeassistant/backups/
PS3="Please enter your choice: " PS3="Please enter your choice: "
files="$(ls -A .)" files="$(ls -A .)"
select filename in ${files}; do select filename in ${files}; do
msg_ok "You selected ${BL}${filename}${CL}" msg_ok "You selected ${BL}${filename}${CL}"
break break
done done
msg_info "Stopping Home Assistant" msg_info "Stopping Home Assistant"
sudo service homeassistant stop sudo service homeassistant stop

View File

@@ -22,6 +22,11 @@ RD=$(echo "\033[01;31m")
CM='\xE2\x9C\x94\033' CM='\xE2\x9C\x94\033'
GN=$(echo "\033[1;92m") GN=$(echo "\033[1;92m")
CL=$(echo "\033[m") CL=$(echo "\033[m")
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "execute-lxcs" "pve"
header_info header_info
echo "Loading..." echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Execute" --yesno "This will execute a command inside selected LXC Containers. Proceed?" 10 58 whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Execute" --yesno "This will execute a command inside selected LXC Containers. Proceed?" 10 58
@@ -40,7 +45,6 @@ if [ $? -ne 0 ]; then
exit exit
fi fi
read -r -p "Enter here command for inside the containers: " custom_command read -r -p "Enter here command for inside the containers: " custom_command
header_info header_info
@@ -50,12 +54,11 @@ function execute_in() {
container=$1 container=$1
name=$(pct exec "$container" hostname) name=$(pct exec "$container" hostname)
echo -e "${BL}[Info]${GN} Execute inside${BL} ${name}${GN} with output: ${CL}" echo -e "${BL}[Info]${GN} Execute inside${BL} ${name}${GN} with output: ${CL}"
if ! pct exec "$container" -- bash -c "command ${custom_command} >/dev/null 2>&1" if ! pct exec "$container" -- bash -c "command ${custom_command} >/dev/null 2>&1"; then
then echo -e "${BL}[Info]${GN} Skipping ${name} ${RD}$container has no command: ${custom_command}"
echo -e "${BL}[Info]${GN} Skipping ${name} ${RD}$container has no command: ${custom_command}" else
else pct exec "$container" -- bash -c "${custom_command}" | tee
pct exec "$container" -- bash -c "${custom_command}" | tee fi
fi
} }
for container in $(pct list | awk '{if(NR>1) print $1}'); do for container in $(pct list | awk '{if(NR>1) print $1}'); do

View File

@@ -15,6 +15,11 @@ function header_info {
/___/ /_/ /_/ /___/ /_/ /_/
EOF EOF
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "frigate-support" "pve"
header_info header_info
while true; do while true; do
read -p "This will Prepare a LXC Container for Frigate. Proceed (y/n)?" yn read -p "This will Prepare a LXC Container for Frigate. Proceed (y/n)?" yn

View File

@@ -19,6 +19,10 @@ RD="\033[01;31m"
GN="\033[1;92m" GN="\033[1;92m"
CL="\033[m" CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "fstrim" "pve"
LOGFILE="/var/log/fstrim.log" LOGFILE="/var/log/fstrim.log"
touch "$LOGFILE" touch "$LOGFILE"
chmod 600 "$LOGFILE" chmod 600 "$LOGFILE"

View File

@@ -16,6 +16,10 @@ function header_info {
EOF EOF
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "host-backup" "pve"
# Function to perform backup # Function to perform backup
function perform_backup { function perform_backup {
local BACKUP_PATH local BACKUP_PATH

View File

@@ -29,6 +29,11 @@ BFR="\\r\\033[K"
HOLD="-" HOLD="-"
CM="${GN}${CL}" CM="${GN}${CL}"
set -e set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "hw-acceleration" "pve"
header_info header_info
echo "Loading..." echo "Loading..."
function msg_info() { function msg_info() {

View File

@@ -22,6 +22,10 @@ GN="\033[1;92m"
RD="\033[01;31m" RD="\033[01;31m"
CL="\033[m" CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "kernel-clean" "pve"
# Detect current kernel # Detect current kernel
current_kernel=$(uname -r) current_kernel=$(uname -r)
available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print $2}' | grep -v "$current_kernel" | sort -V) available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print $2}' | grep -v "$current_kernel" | sort -V)

View File

@@ -25,6 +25,11 @@ HOLD="-"
CM="${GN}${CL}" CM="${GN}${CL}"
current_kernel=$(uname -r) current_kernel=$(uname -r)
available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print substr($2, 16, length($2)-22)}') available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print substr($2, 16, length($2)-22)}')
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "kernel-pin" "pve"
header_info header_info
function msg_info() { function msg_info() {

View File

@@ -38,6 +38,10 @@ CL=$(echo "\033[m")
TAB=" " TAB=" "
CM="${TAB}✔️${TAB}${CL}" CM="${TAB}✔️${TAB}${CL}"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "lxc-delete" "pve"
header_info header_info
echo "Loading..." echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Deletion" --yesno "This will delete LXC containers. Proceed?" 10 58 whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Deletion" --yesno "This will delete LXC containers. Proceed?" 10 58

View File

@@ -29,6 +29,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; } msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; } msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "microcode" "pve"
header_info header_info
current_microcode=$(journalctl -k | grep -i 'microcode: Current revision:' | grep -oP 'Current revision: \K0x[0-9a-f]+') current_microcode=$(journalctl -k | grep -i 'microcode: Current revision:' | grep -oP 'Current revision: \K0x[0-9a-f]+')
[ -z "$current_microcode" ] && current_microcode="Not found." [ -z "$current_microcode" ] && current_microcode="Not found."

View File

@@ -15,6 +15,10 @@ cat <<"EOF"
EOF EOF
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "monitor-all" "pve"
add() { add() {
echo -e "\n IMPORTANT: Tag-Based Monitoring Enabled" echo -e "\n IMPORTANT: Tag-Based Monitoring Enabled"
echo "Only VMs and containers with the tag 'mon-restart' will be automatically restarted by this service." echo "Only VMs and containers with the tag 'mon-restart' will be automatically restarted by this service."
@@ -28,9 +32,9 @@ add() {
while true; do while true; do
read -p "This script will add Monitor All to Proxmox VE. Proceed (y/n)? " yn read -p "This script will add Monitor All to Proxmox VE. Proceed (y/n)? " yn
case $yn in case $yn in
[Yy]*) break ;; [Yy]*) break ;;
[Nn]*) exit ;; [Nn]*) exit ;;
*) echo "Please answer yes or no." ;; *) echo "Please answer yes or no." ;;
esac esac
done done
@@ -175,5 +179,8 @@ CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "Monitor-All f
case $CHOICE in case $CHOICE in
"Add") add ;; "Add") add ;;
"Remove") remove ;; "Remove") remove ;;
*) echo "Exiting..."; exit 0 ;; *)
echo "Exiting..."
exit 0
;;
esac esac

View File

@@ -19,8 +19,8 @@ INFO="${TAB}${TAB}${CL}"
WARN="${TAB}⚠️${TAB}${CL}" WARN="${TAB}⚠️${TAB}${CL}"
function header_info { function header_info {
clear clear
cat <<"EOF" cat <<"EOF"
_ ____________ ____ __________ ___ ____ _ __ __ _ ____________ ____ __________ ___ ____ _ __ __
/ | / / _/ ____/ / __ \/ __/ __/ /___ ____ _____/ (_)___ ____ _ / __ \(_)________ _/ /_ / /__ _____ / | / / _/ ____/ / __ \/ __/ __/ /___ ____ _____/ (_)___ ____ _ / __ \(_)________ _/ /_ / /__ _____
@@ -33,6 +33,10 @@ Enhanced version supporting both e1000e and e1000 drivers
EOF EOF
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "nic-offloading-fix" "pve"
header_info header_info
function msg_info() { echo -e "${INFO} ${YW}${1}...${CL}"; } function msg_info() { echo -e "${INFO} ${YW}${1}...${CL}"; }
@@ -42,15 +46,18 @@ function msg_warn() { echo -e "${WARN} ${YWB}${1}"; }
# Check for root privileges
if [ "$(id -u)" -ne 0 ]; then
  msg_error "Error: This script must be run as root."
  exit 1
fi

if ! command -v ethtool >/dev/null 2>&1; then
  msg_info "Installing ethtool"
  apt-get update &>/dev/null
-  apt-get install -y ethtool &>/dev/null || { msg_error "Failed to install ethtool. Exiting."; exit 1; }
-  msg_ok "ethtool installed successfully"
+  apt-get install -y ethtool &>/dev/null || {
+    msg_error "Failed to install ethtool. Exiting."
+    exit 1
+  }
+  msg_ok "ethtool installed successfully"
fi
# Get list of network interfaces using Intel e1000e or e1000 drivers # Get list of network interfaces using Intel e1000e or e1000 drivers
@@ -60,95 +67,95 @@ COUNT=0
msg_info "Searching for Intel e1000e and e1000 interfaces" msg_info "Searching for Intel e1000e and e1000 interfaces"
for device in /sys/class/net/*; do for device in /sys/class/net/*; do
interface="$(basename "$device")" # or adjust the rest of the usages below, as mostly you'll use the path anyway interface="$(basename "$device")" # or adjust the rest of the usages below, as mostly you'll use the path anyway
# Skip loopback interface and virtual interfaces # Skip loopback interface and virtual interfaces
if [[ "$interface" != "lo" ]] && [[ ! "$interface" =~ ^(tap|fwbr|veth|vmbr|bonding_masters) ]]; then if [[ "$interface" != "lo" ]] && [[ ! "$interface" =~ ^(tap|fwbr|veth|vmbr|bonding_masters) ]]; then
# Check if the interface uses the e1000e or e1000 driver # Check if the interface uses the e1000e or e1000 driver
driver=$(basename $(readlink -f /sys/class/net/$interface/device/driver 2>/dev/null) 2>/dev/null) driver=$(basename $(readlink -f /sys/class/net/$interface/device/driver 2>/dev/null) 2>/dev/null)
if [[ "$driver" == "e1000e" ]] || [[ "$driver" == "e1000" ]]; then if [[ "$driver" == "e1000e" ]] || [[ "$driver" == "e1000" ]]; then
# Get MAC address for additional identification # Get MAC address for additional identification
mac=$(cat /sys/class/net/$interface/address 2>/dev/null) mac=$(cat /sys/class/net/$interface/address 2>/dev/null)
INTERFACES+=("$interface" "Intel $driver NIC ($mac)") INTERFACES+=("$interface" "Intel $driver NIC ($mac)")
((COUNT++)) ((COUNT++))
fi
fi fi
fi
done done
# Check if any Intel e1000e/e1000 interfaces were found # Check if any Intel e1000e/e1000 interfaces were found
if [ ${#INTERFACES[@]} -eq 0 ]; then if [ ${#INTERFACES[@]} -eq 0 ]; then
whiptail --title "Error" --msgbox "No Intel e1000e or e1000 network interfaces found!" 10 60 whiptail --title "Error" --msgbox "No Intel e1000e or e1000 network interfaces found!" 10 60
msg_error "No Intel e1000e or e1000 network interfaces found! Exiting." msg_error "No Intel e1000e or e1000 network interfaces found! Exiting."
exit 1 exit 1
fi fi
msg_ok "Found ${BL}$COUNT${GN} Intel e1000e/e1000 interfaces" msg_ok "Found ${BL}$COUNT${GN} Intel e1000e/e1000 interfaces"
# Create a checklist for interface selection with all interfaces initially checked # Create a checklist for interface selection with all interfaces initially checked
INTERFACES_CHECKLIST=() INTERFACES_CHECKLIST=()
for ((i=0; i<${#INTERFACES[@]}; i+=2)); do for ((i = 0; i < ${#INTERFACES[@]}; i += 2)); do
INTERFACES_CHECKLIST+=("${INTERFACES[i]}" "${INTERFACES[i+1]}" "ON") INTERFACES_CHECKLIST+=("${INTERFACES[i]}" "${INTERFACES[i + 1]}" "ON")
done done
# Show interface selection checklist # Show interface selection checklist
SELECTED_INTERFACES=$(whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Network Interfaces" \ SELECTED_INTERFACES=$(whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Network Interfaces" \
--separate-output --checklist "Select Intel e1000e/e1000 network interfaces\n(Space to toggle, Enter to confirm):" 15 80 6 \ --separate-output --checklist "Select Intel e1000e/e1000 network interfaces\n(Space to toggle, Enter to confirm):" 15 80 6 \
"${INTERFACES_CHECKLIST[@]}" 3>&1 1>&2 2>&3) "${INTERFACES_CHECKLIST[@]}" 3>&1 1>&2 2>&3)
exitstatus=$? exitstatus=$?
if [ $exitstatus != 0 ]; then if [ $exitstatus != 0 ]; then
msg_info "User canceled. Exiting." msg_info "User canceled. Exiting."
exit 0 exit 0
fi fi
# Check if any interfaces were selected # Check if any interfaces were selected
if [ -z "$SELECTED_INTERFACES" ]; then if [ -z "$SELECTED_INTERFACES" ]; then
msg_error "No interfaces selected. Exiting." msg_error "No interfaces selected. Exiting."
exit 0 exit 0
fi fi
# Convert the selected interfaces into an array # Convert the selected interfaces into an array
readarray -t INTERFACE_ARRAY <<< "$SELECTED_INTERFACES" readarray -t INTERFACE_ARRAY <<<"$SELECTED_INTERFACES"
# Show the number of selected interfaces # Show the number of selected interfaces
INTERFACE_COUNT=${#INTERFACE_ARRAY[@]} INTERFACE_COUNT=${#INTERFACE_ARRAY[@]}
# Print selected interfaces with their driver types # Print selected interfaces with their driver types
for iface in "${INTERFACE_ARRAY[@]}"; do for iface in "${INTERFACE_ARRAY[@]}"; do
driver=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null) driver=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
msg_ok "Selected interface: ${BL}$iface${GN} (${BL}$driver${GN})" msg_ok "Selected interface: ${BL}$iface${GN} (${BL}$driver${GN})"
done done
# Ask for confirmation with the list of selected interfaces # Ask for confirmation with the list of selected interfaces
CONFIRMATION_MSG="You have selected the following interface(s):\n\n" CONFIRMATION_MSG="You have selected the following interface(s):\n\n"
for iface in "${INTERFACE_ARRAY[@]}"; do for iface in "${INTERFACE_ARRAY[@]}"; do
SPEED=$(cat /sys/class/net/$iface/speed 2>/dev/null || echo "Unknown") SPEED=$(cat /sys/class/net/$iface/speed 2>/dev/null || echo "Unknown")
MAC=$(cat /sys/class/net/$iface/address 2>/dev/null) MAC=$(cat /sys/class/net/$iface/address 2>/dev/null)
DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null) DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
CONFIRMATION_MSG+="- $iface (Driver: $DRIVER, MAC: $MAC, Speed: ${SPEED}Mbps)\n" CONFIRMATION_MSG+="- $iface (Driver: $DRIVER, MAC: $MAC, Speed: ${SPEED}Mbps)\n"
done done
CONFIRMATION_MSG+="\nThis will create systemd service(s) to disable offloading features.\n\nProceed?" CONFIRMATION_MSG+="\nThis will create systemd service(s) to disable offloading features.\n\nProceed?"
if ! whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Confirmation" \ if ! whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Confirmation" \
--yesno "$CONFIRMATION_MSG" 20 80; then --yesno "$CONFIRMATION_MSG" 20 80; then
msg_info "User canceled. Exiting." msg_info "User canceled. Exiting."
exit 0 exit 0
fi fi
# Loop through all selected interfaces and create services for each # Loop through all selected interfaces and create services for each
for SELECTED_INTERFACE in "${INTERFACE_ARRAY[@]}"; do for SELECTED_INTERFACE in "${INTERFACE_ARRAY[@]}"; do
# Get the driver type for this specific interface # Get the driver type for this specific interface
DRIVER=$(basename $(readlink -f /sys/class/net/$SELECTED_INTERFACE/device/driver 2>/dev/null) 2>/dev/null) DRIVER=$(basename $(readlink -f /sys/class/net/$SELECTED_INTERFACE/device/driver 2>/dev/null) 2>/dev/null)
# Create service name for this interface # Create service name for this interface
SERVICE_NAME="disable-nic-offload-$SELECTED_INTERFACE.service" SERVICE_NAME="disable-nic-offload-$SELECTED_INTERFACE.service"
SERVICE_PATH="/etc/systemd/system/$SERVICE_NAME" SERVICE_PATH="/etc/systemd/system/$SERVICE_NAME"
# Create the service file with driver-specific optimizations # Create the service file with driver-specific optimizations
msg_info "Creating systemd service for interface: ${BL}$SELECTED_INTERFACE${YW} (${BL}$DRIVER${YW})" msg_info "Creating systemd service for interface: ${BL}$SELECTED_INTERFACE${YW} (${BL}$DRIVER${YW})"
# Start with the common part of the service file # Start with the common part of the service file
cat > "$SERVICE_PATH" << EOF cat >"$SERVICE_PATH" <<EOF
[Unit] [Unit]
Description=Disable NIC offloading for Intel $DRIVER interface $SELECTED_INTERFACE Description=Disable NIC offloading for Intel $DRIVER interface $SELECTED_INTERFACE
After=network.target After=network.target
@@ -163,45 +170,49 @@ RemainAfterExit=true
WantedBy=multi-user.target
EOF

  # Check if service file was created successfully
  if [ ! -f "$SERVICE_PATH" ]; then
    whiptail --title "Error" --msgbox "Failed to create service file for $SELECTED_INTERFACE!" 10 50
    msg_error "Failed to create service file for $SELECTED_INTERFACE! Skipping to next interface."
    continue
  fi

  # Configure this service
  {
-    echo "25"; sleep 0.2
+    echo "25"
+    sleep 0.2
    # Reload systemd to recognize the new service
    systemctl daemon-reload
-    echo "50"; sleep 0.2
+    echo "50"
+    sleep 0.2
    # Start the service
    systemctl start "$SERVICE_NAME"
-    echo "75"; sleep 0.2
+    echo "75"
+    sleep 0.2
    # Enable the service to start on boot
    systemctl enable "$SERVICE_NAME"
-    echo "100"; sleep 0.2
+    echo "100"
+    sleep 0.2
  } | whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --gauge "Configuring service for $SELECTED_INTERFACE..." 10 80 0

  # Individual service status
  if systemctl is-active --quiet "$SERVICE_NAME"; then
    SERVICE_STATUS="Active"
  else
    SERVICE_STATUS="Inactive"
  fi
  if systemctl is-enabled --quiet "$SERVICE_NAME"; then
    BOOT_STATUS="Enabled"
  else
    BOOT_STATUS="Disabled"
  fi

  # Show individual service results
  msg_ok "Service for ${BL}$SELECTED_INTERFACE${GN} (${BL}$DRIVER${GN}) created and enabled!"
  msg_info "${TAB}Service: ${BL}$SERVICE_NAME${YW}"
  msg_info "${TAB}Status: ${BL}$SERVICE_STATUS${YW}"
  msg_info "${TAB}Start on boot: ${BL}$BOOT_STATUS${YW}"
done

# Prepare summary of all interfaces
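
The progress dialog above works by piping bare percentage values into whiptail's gauge widget, which redraws the bar for each number it reads from stdin. A minimal standalone sketch of that pattern:

# Minimal sketch of the gauge pattern: whiptail --gauge <text> <height> <width> <initial-%>
# reads whole-number percentages from stdin and updates the bar as they arrive.
{
  echo 25; sleep 1
  echo 50; sleep 1
  echo 100
} | whiptail --gauge "Working..." 6 50 0
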
@@ -209,22 +220,22 @@ SUMMARY_MSG="Services created successfully!\n\n"
SUMMARY_MSG+="Configured Interfaces:\n" SUMMARY_MSG+="Configured Interfaces:\n"
for iface in "${INTERFACE_ARRAY[@]}"; do for iface in "${INTERFACE_ARRAY[@]}"; do
SERVICE_NAME="disable-nic-offload-$iface.service" SERVICE_NAME="disable-nic-offload-$iface.service"
DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null) DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
if systemctl is-active --quiet "$SERVICE_NAME"; then
SVC_STATUS="Active"
else
SVC_STATUS="Inactive"
fi
if systemctl is-enabled --quiet "$SERVICE_NAME"; then if systemctl is-active --quiet "$SERVICE_NAME"; then
BOOT_SVC_STATUS="Enabled" SVC_STATUS="Active"
else else
BOOT_SVC_STATUS="Disabled" SVC_STATUS="Inactive"
fi fi
SUMMARY_MSG+="- $iface ($DRIVER): $SVC_STATUS, Boot: $BOOT_SVC_STATUS\n" if systemctl is-enabled --quiet "$SERVICE_NAME"; then
BOOT_SVC_STATUS="Enabled"
else
BOOT_SVC_STATUS="Disabled"
fi
SUMMARY_MSG+="- $iface ($DRIVER): $SVC_STATUS, Boot: $BOOT_SVC_STATUS\n"
done done
# Show summary results # Show summary results
@@ -236,8 +247,8 @@ msg_ok "Intel e1000e/e1000 optimization complete for ${#INTERFACE_ARRAY[@]} inte
echo "" echo ""
msg_info "Verification commands:" msg_info "Verification commands:"
for iface in "${INTERFACE_ARRAY[@]}"; do for iface in "${INTERFACE_ARRAY[@]}"; do
echo -e "${TAB}${BL}ethtool -k $iface${CL} ${YW}# Check offloading status${CL}" echo -e "${TAB}${BL}ethtool -k $iface${CL} ${YW}# Check offloading status${CL}"
echo -e "${TAB}${BL}systemctl status disable-nic-offload-$iface.service${CL} ${YW}# Check service status${CL}" echo -e "${TAB}${BL}systemctl status disable-nic-offload-$iface.service${CL} ${YW}# Check service status${CL}"
done done
exit 0 exit 0


@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs3-upgrade" "pve"
start_routines() { start_routines() {
header_info header_info
CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 2 BACKUP" --menu "\nMake a backup of /etc/proxmox-backup to ensure that in the worst case, any relevant configuration can be recovered?" 14 58 2 \ CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 2 BACKUP" --menu "\nMake a backup of /etc/proxmox-backup to ensure that in the worst case, any relevant configuration can be recovered?" 14 58 2 \


@@ -32,6 +32,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
+# Telemetry
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs4-upgrade" "pve"

start_routines() {
  header_info
  CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 3 BACKUP" --menu \


@@ -28,6 +28,10 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}" CROSS="${RD}${CL}"
msg_info() { echo -ne " ${HOLD} ${YW}$1..."; } msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs-microcode" "pve"
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; } msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; } msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }


@@ -32,6 +32,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
+# Telemetry
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "post-pbs-install" "pve"

# ---- helpers ----
get_pbs_codename() {
  awk -F'=' '/^VERSION_CODENAME=/{print $2}' /etc/os-release


@@ -43,6 +43,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "post-pmg-install" "pve"
if ! grep -q "Proxmox Mail Gateway" /etc/issue 2>/dev/null; then if ! grep -q "Proxmox Mail Gateway" /etc/issue 2>/dev/null; then
msg_error "This script is only intended for Proxmox Mail Gateway" msg_error "This script is only intended for Proxmox Mail Gateway"
exit 1 exit 1


@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "post-pve-install" "pve"
get_pve_version() { get_pve_version() {
local pve_ver local pve_ver
pve_ver="$(pveversion | awk -F'/' '{print $2}' | awk -F'-' '{print $1}')" pve_ver="$(pveversion | awk -F'/' '{print $2}' | awk -F'-' '{print $1}')"


@@ -11,7 +11,9 @@ if ! command -v curl >/dev/null 2>&1; then
  apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://git.community-scripts.org/community-scripts/ProxmoxVE/raw/branch/main/misc/core.func)
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
load_functions
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pve-privilege-converter" "pve"

set -euo pipefail
shopt -s inherit_errexit nullglob


@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}" echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
} }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pve8-upgrade" "pve"
start_routines() { start_routines() {
header_info header_info


@@ -5,6 +5,11 @@
# License: MIT
# https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
set -e

+# Telemetry
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "scaling-governor" "pve"

header_info() {
  clear
  cat <<EOF


@@ -5,6 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/refs/heads/main/misc/core.func)
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "update-apps" "pve"

# =============================================================================
# CONFIGURATION VARIABLES
@@ -98,14 +100,14 @@ EOF
# Handle command line arguments
case "${1:-}" in
-  --help|-h)
+  --help | -h)
    print_usage
    exit 0
    ;;
  --export-config)
    export_config_json
    exit 0
    ;;
esac

# =============================================================================
@@ -202,40 +204,40 @@ msg_ok "Loaded ${#menu_items[@]} containers"
# Determine container selection based on var_container
if [[ -n "$var_container" ]]; then
  case "$var_container" in
    all)
      # Select all containers with matching tags
      CHOICE=""
-      for ((i=0; i<${#menu_items[@]}; i+=3)); do
+      for ((i = 0; i < ${#menu_items[@]}; i += 3)); do
        CHOICE="$CHOICE ${menu_items[$i]}"
      done
      CHOICE=$(echo "$CHOICE" | xargs)
      ;;
    all_running)
      # Select only running containers with matching tags
      CHOICE=""
-      for ((i=0; i<${#menu_items[@]}; i+=3)); do
+      for ((i = 0; i < ${#menu_items[@]}; i += 3)); do
        cid="${menu_items[$i]}"
        if pct status "$cid" 2>/dev/null | grep -q "running"; then
          CHOICE="$CHOICE $cid"
        fi
      done
      CHOICE=$(echo "$CHOICE" | xargs)
      ;;
    all_stopped)
      # Select only stopped containers with matching tags
      CHOICE=""
-      for ((i=0; i<${#menu_items[@]}; i+=3)); do
+      for ((i = 0; i < ${#menu_items[@]}; i += 3)); do
        cid="${menu_items[$i]}"
        if pct status "$cid" 2>/dev/null | grep -q "stopped"; then
          CHOICE="$CHOICE $cid"
        fi
      done
      CHOICE=$(echo "$CHOICE" | xargs)
      ;;
    *)
      # Assume comma-separated list of container IDs
      CHOICE=$(echo "$var_container" | tr ',' ' ')
      ;;
  esac

  if [[ -z "$CHOICE" ]]; then


@@ -24,6 +24,11 @@ RD=$(echo "\033[01;31m")
CM='\xE2\x9C\x94\033'
GN=$(echo "\033[1;92m")
CL=$(echo "\033[m")
+# Telemetry
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "update-lxcs" "pve"

header_info
echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Updater" --yesno "This Will Update LXC Containers. Proceed?" 10 58


@@ -23,6 +23,10 @@ RD=$(echo "\033[01;31m")
GN=$(echo "\033[1;92m")
CL=$(echo "\033[m")
+# Telemetry
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
+declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "update-repo" "pve"

header_info
echo "Loading..."
NODE=$(hostname)


@@ -100,8 +100,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}
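
The same guarded cleanup() is applied to each VM script below. As the diff shows, a final status is only reported when an initial status was posted earlier (POST_TO_API_DONE) and no final status has gone out yet (POST_UPDATE_DONE), and a non-zero exit is reported as "failed" together with the exit code. A minimal sketch of how such a handler is typically wired up (the trap line is an assumption, not visible in these hunks):

# Sketch: cleanup() runs on every exit path, so $? is captured first thing.
# Assumption (not shown in these hunks): the scripts register it via an EXIT trap.
trap cleanup EXIT
# With that in place, any failing command ends up as
#   post_update_to_api "failed" "<exit code>"
# while a clean run still reports
#   post_update_to_api "done" "none"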


@@ -100,8 +100,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -100,8 +100,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -104,8 +104,16 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  # Only send telemetry if post_to_api_vm was called (installing status was sent)
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -101,8 +101,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -100,8 +100,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -105,7 +105,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -79,11 +79,29 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}

+function check_disk_space() {
+  local path="$1"
+  local required_gb="$2"
+  local available_kb=$(df -k "$path" | awk 'NR==2 {print $4}')
+  local available_gb=$((available_kb / 1024 / 1024))
+  if [ $available_gb -lt $required_gb ]; then
+    return 1
+  fi
+  return 0
+}

TEMP_DIR=$(mktemp -d)
pushd $TEMP_DIR >/dev/null
function send_line_to_vm() {
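
The check_disk_space() helper added above reads `df -k`: on the second output line, field 4 is the available space in 1K blocks, and two integer divisions turn it into whole GiB before the comparison (like the helper itself, this assumes df prints each filesystem on a single line). A quick way to see the value it works from:

# Sketch: the number check_disk_space compares against, printed directly
# ("/tmp" is just an example path; the script passes its $TEMP_DIR).
df -k /tmp | awk 'NR==2 {printf "available: %d GiB\n", $4 / 1024 / 1024}'
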
@@ -598,11 +616,41 @@ if [ -z "$URL" ]; then
  exit 1
fi
msg_ok "Download URL: ${CL}${BL}${URL}${CL}"

+# Check available disk space (require at least 20GB for safety)
+if ! check_disk_space "$TEMP_DIR" 20; then
+  AVAILABLE_GB=$(df -h "$TEMP_DIR" | awk 'NR==2 {print $4}')
+  msg_error "Insufficient disk space in temporary directory ($TEMP_DIR)."
+  msg_error "Available: ${AVAILABLE_GB}, Required: ~20GB for FreeBSD image decompression."
+  msg_error "Please free up space or ensure /tmp has sufficient storage."
+  exit 1
+fi
+
+msg_info "Downloading FreeBSD Image"
curl -f#SL -o "$(basename "$URL")" "$URL"
echo -en "\e[1A\e[0K"
+msg_ok "Downloaded ${CL}${BL}$(basename "$URL")${CL}"
+
+# Check disk space again before decompression
+if ! check_disk_space "$TEMP_DIR" 15; then
+  AVAILABLE_GB=$(df -h "$TEMP_DIR" | awk 'NR==2 {print $4}')
+  msg_error "Insufficient disk space for decompression."
+  msg_error "Available: ${AVAILABLE_GB}, Required: ~15GB for decompressed image."
+  exit 1
+fi
+
+msg_info "Decompressing FreeBSD Image (this may take a few minutes)"
FILE=FreeBSD.qcow2
-unxz -cv $(basename $URL) >${FILE}
-msg_ok "Downloaded ${CL}${BL}${FILE}${CL}"
+if ! unxz -cv $(basename $URL) >${FILE}; then
+  msg_error "Failed to decompress FreeBSD image."
+  msg_error "This is usually caused by insufficient disk space."
+  df -h "$TEMP_DIR"
+  exit 1
+fi
+# Remove the compressed file to save space
+rm -f "$(basename "$URL")"
+msg_ok "Decompressed ${CL}${BL}${FILE}${CL}"

STORAGE_TYPE=$(pvesm status -storage $STORAGE | awk 'NR>1 {print $2}')
case $STORAGE_TYPE in
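
An optional pre-check, not part of this change: xz can report the uncompressed size of the archive before anything is extracted, which makes the ~15GB estimate above verifiable up front.

# Optional sketch (not in the diff): show compressed vs. uncompressed size of the
# downloaded image before committing disk space to unxz.
xz --list "$(basename "$URL")"
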
@@ -642,7 +690,7 @@ qm set $VMID \
  -boot order=scsi0 \
  -serial0 socket \
  -tags community-script >/dev/null
-qm resize $VMID scsi0 10G >/dev/null
+qm resize $VMID scsi0 20G >/dev/null
DESCRIPTION=$(
  cat <<EOF
<div align='center'>


@@ -101,8 +101,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -109,8 +109,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -97,7 +97,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -100,7 +100,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -99,7 +99,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}


@@ -99,8 +99,15 @@ function cleanup_vmid() {
}

function cleanup() {
+  local exit_code=$?
  popd >/dev/null
-  post_update_to_api "done" "none"
+  if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
+    if [[ $exit_code -eq 0 ]]; then
+      post_update_to_api "done" "none"
+    else
+      post_update_to_api "failed" "$exit_code"
+    fi
+  fi
  rm -rf $TEMP_DIR
}