Compare commits


146 Commits

Author SHA1 Message Date
CanbiZ (MickLesk)
a13caec262 allow Frigate's Intel media packages to overwrite files from system GPU driver packages 2026-02-23 13:25:20 +01:00
community-scripts-pr-app[bot]
df80c849f2 Update CHANGELOG.md (#12209)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 12:22:33 +00:00
CanbiZ (MickLesk)
69de9fa57e Improve error handling and logging for LXC builds (#12208)
- Prevent host-side error_handler from being triggered during in-container install/recovery by delaying the re-enabling of set -Eeuo pipefail and the ERR trap in misc/build.func until after install/recovery completes; add explanatory comments.
- Update misc/error_handler.func to fall back to BUILD_LOG if the container-internal log path is unavailable, show the last 20 log lines when present, refine container-vs-host detection (check the INSTALL_LOG file and /root), copy INSTALL_LOG into /root and write a .failed flag with the exit code for host-side detection, and ensure the full-log output and container-removal prompt are shown appropriately in host context.
- Tweak misc/core.func silent() output to include a "Full log" path and adjust formatting.
2026-02-23 13:22:09 +01:00
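The delayed re-enable described in that commit can be reduced to roughly this shape (function and message names here are illustrative, not the actual misc/build.func code):

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- not the actual misc/build.func code.
# Strict mode and the host ERR trap are suspended while the in-container
# phase runs, and re-enabled only after install/recovery completes.
set -Eeuo pipefail
trap 'echo "host error_handler fired (line $LINENO)" >&2' ERR

run_in_container_phase() {
  set +Eeuo pipefail   # suspend strict mode for the in-container phase
  trap - ERR           # and the host-side ERR trap
  "$@"
  local rc=$?
  set -Eeuo pipefail   # re-enable only after install/recovery completes
  trap 'echo "host error_handler fired (line $LINENO)" >&2' ERR
  return "$rc"
}

# a failing in-container step no longer trips the host trap
run_in_container_phase false || echo "in-container step failed, host trap untouched"
```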
community-scripts-pr-app[bot]
695a6ce29b chore: update github-versions.json (#12206)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 12:14:28 +00:00
CanbiZ (MickLesk)
1976a1715c bookworm arc support 2026-02-23 13:08:39 +01:00
CanbiZ (MickLesk)
6ba22c82d7 Add Signed-By to Debian APT source entries
Update misc/tools.func to include Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg on multiple Debian repository blocks (bullseye, bookworm, trixie/sid and their -security entries) for non-free and non-free-firmware components. This ensures APT uses the specified keyring to verify repository metadata and improves reproducible/secure apt configuration.
2026-02-23 12:59:42 +01:00
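For reference, a keyring-pinned stanza of the kind the commit describes might look like this in deb822 form (the real blocks live in misc/tools.func; suite and component list here are illustrative):

```
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm
Components: main non-free non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```

With Signed-By present, APT verifies that repository's metadata only against the named keyring rather than the system-wide trusted set.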
community-scripts-pr-app[bot]
1a14b19fa0 Update CHANGELOG.md (#12204)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 11:14:58 +00:00
community-scripts-pr-app[bot]
9e6b1f0e12 Update CHANGELOG.md (#12203)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 11:14:44 +00:00
community-scripts-pr-app[bot]
103982fcdb Update date in json (#12202)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-23 11:14:35 +00:00
CanbiZ (MickLesk)
e315e0b17e Frigate v16.4 (#11887)
* Update frigate.sh

* Add Frigate metadata and overhaul installer

- Add frontend metadata for Frigate (frontend/public/json/frigate.json) and remove the old .bak metadata file.
- Perform a major refactor of install/frigate-install.sh: the installer now targets Debian 12 (Bookworm), converts APT sources to deb822, installs and builds required dependencies (Python wheels, libusb, OpenVINO, Tempio, Nginx, sqlite extensions), configures hardware acceleration and GPU access, fetches and deploys Frigate and go2rtc releases, and prepares inference/audio models.
- Improve systemd service units (dependencies, env-file usage, safer log-file handling, create_directories service); services are enabled/started, with cleanup steps added.
- Update copyright/authorship, various runtime environment exports, and the default Frigate config (ffmpeg hwaccel, detector selection, disabled auth/detect in the default config).

* Update frigate.json

* frigate: update metadata and installer

- Update frontend metadata: config path, interface port, and an expanded description.
- Modernize the install script: switch apt-get to apt, streamline the dependency list (remove wget/jq/unzip), replace inline hardware-acceleration/GPU group tweaks with setup_hwaccel, and pin the Frigate release to v0.16.4 for reproducible installs.
- The libusb fetch/build now uses fetch_and_deploy_gh_release with adjusted paths; clean up removed temporary files.

* add std
2026-02-23 12:14:17 +01:00
community-scripts-pr-app[bot]
2a537f0772 chore: update github-versions.json (#12199)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 06:26:21 +00:00
community-scripts-pr-app[bot]
9d7da517f3 Update CHANGELOG.md (#12198)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 00:22:42 +00:00
community-scripts-pr-app[bot]
c1248cca14 chore: update github-versions.json (#12197)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-23 00:22:19 +00:00
community-scripts-pr-app[bot]
a83cd9a80e Update CHANGELOG.md (#12196)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 20:44:06 +00:00
community-scripts-pr-app[bot]
203c1f2f48 Update CHANGELOG.md (#12195)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 20:43:54 +00:00
CanbiZ (MickLesk)
287fe28741 Refactor & Bump to v2: Plex (#12179) 2026-02-22 21:43:40 +01:00
community-scripts-pr-app[bot]
d868b6d885 Update CHANGELOG.md (#12194)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 20:43:31 +00:00
community-scripts-pr-app[bot]
3995aa02da Update CHANGELOG.md (#12193)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 20:43:16 +00:00
Copilot
2b44ff289f fix: Apache Guacamole - bump to Temurin JDK 17 to resolve Debian 13 (Trixie) install failure (#12161) 2026-02-22 21:43:05 +01:00
community-scripts-pr-app[bot]
47757d4d7d Update CHANGELOG.md (#12192)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 20:42:55 +00:00
CanbiZ (MickLesk)
ee90bfb458 Docker-VM: add error handling for virt-customize finalization (#12127) 2026-02-22 21:42:34 +01:00
MickLesk
3e22aaa4bb qf libgirepository1.0 2026-02-22 19:30:50 +01:00
community-scripts-pr-app[bot]
618133ffca chore: update github-versions.json (#12191)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 18:08:42 +00:00
community-scripts-pr-app[bot]
ded8a95567 Update CHANGELOG.md (#12190)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 17:56:07 +00:00
Chris
85502e7c43 [Fix] Sure: add Sidekiq service (#12186) 2026-02-22 18:55:41 +01:00
community-scripts-pr-app[bot]
a8bec558f9 Update .app files (#12188)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-22 17:14:03 +01:00
community-scripts-pr-app[bot]
5c7934a71e Update CHANGELOG.md (#12187)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 16:08:17 +00:00
Tobias
adf6a03067 karakeep: bump to node 24 (#12183) 2026-02-22 11:07:52 -05:00
community-scripts-pr-app[bot]
f4cf671694 chore: update github-versions.json (#12182)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 12:08:50 +00:00
community-scripts-pr-app[bot]
2f364d2fca Update CHANGELOG.md (#12181)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:54:52 +00:00
community-scripts-pr-app[bot]
88008b8735 Update CHANGELOG.md (#12180)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:54:41 +00:00
CanbiZ (MickLesk)
07eae3a06f calibre-web: Update logo URL (#12178) 2026-02-22 11:54:28 +01:00
CanbiZ (MickLesk)
171d830c22 fix(tools): add GitHub API rate-limit detection and GITHUB_TOKEN support (#12176)
- check_for_gh_release: add Authorization header when GITHUB_TOKEN is set,
  detect HTTP 403 and show actionable rate-limit hint
- fetch_and_deploy_gh_release: improve retry loop with specific 403 handling,
  exponential backoff, and token export hint on rate-limit failure
2026-02-22 11:54:21 +01:00
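The token-plus-403 handling in that commit follows a common pattern; a minimal sketch (the helper name `gh_api_get` and the retry count are illustrative, not the actual check_for_gh_release code):

```shell
#!/usr/bin/env bash
# Illustrative sketch: authenticated GitHub API call with rate-limit hint.
gh_api_get() {
  local url=$1 attempt http_code
  local -a auth=()
  # attach the token only when the caller has exported one
  [ -n "${GITHUB_TOKEN:-}" ] && auth=(-H "Authorization: Bearer $GITHUB_TOKEN")
  for attempt in 1 2 3; do
    http_code=$(curl -fsSL -o /dev/null -w '%{http_code}' "${auth[@]}" "$url") || true
    case $http_code in
      200) return 0 ;;
      403)
        echo "GitHub API rate limit hit (HTTP 403)." >&2
        echo "Hint: export GITHUB_TOKEN=<token> to raise the limit." >&2
        sleep $((2 ** attempt)) ;;   # exponential backoff: 2s, 4s, 8s
      *) return 1 ;;
    esac
  done
  return 1
}
```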
MickLesk
4e8421c080 Merge branch 'main' of https://github.com/community-scripts/ProxmoxVE 2026-02-22 11:40:10 +01:00
MickLesk
0c92f40d7a name of cronmaster 2026-02-22 11:40:06 +01:00
community-scripts-pr-app[bot]
104971ada3 Update CHANGELOG.md (#12177)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:38:30 +00:00
Joerg Heinemann
9db0ff6d81 Update package management commands in clean-lxcs.sh (#12166)
Replace 'apt-get' with 'apt' for package management commands.
2026-02-22 11:38:10 +01:00
MickLesk
b05f0fb059 switch type of cronmaster to addon 2026-02-22 11:37:23 +01:00
community-scripts-pr-app[bot]
345e7f741b Update .app files (#12171)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-22 11:34:28 +01:00
community-scripts-pr-app[bot]
452f3bdc6a Update CHANGELOG.md (#12175)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:32:55 +00:00
community-scripts-pr-app[bot]
d287b5f848 Update CHANGELOG.md (#12174)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:32:39 +00:00
community-scripts-pr-app[bot]
80a435fc9f Update date in json (#12173)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:32:34 +00:00
push-app-to-main[bot]
2e32ea3c52 CR*NMASTER (#12065)
* Add cronmaster (addon)

* minor fixes

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: MickLesk <mickey.leskowitz@gmail.com>
2026-02-22 11:32:18 +01:00
community-scripts-pr-app[bot]
0753521739 Update CHANGELOG.md (#12172)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:27:16 +00:00
community-scripts-pr-app[bot]
e93949a3d2 Update CHANGELOG.md (#12170)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:26:56 +00:00
community-scripts-pr-app[bot]
c7dcedc23c Update date in json (#12169)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-22 10:26:52 +00:00
push-app-to-main[bot]
2407a633e7 Gramps-Web (#12157)
* Add gramps-web (ct)

* -

* minor fixes in install

* minor fixes

* remove empty line

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: MickLesk <mickey.leskowitz@gmail.com>
2026-02-22 11:26:36 +01:00
community-scripts-pr-app[bot]
c0b8d25b66 chore: update github-versions.json (#12165)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 06:17:08 +00:00
community-scripts-pr-app[bot]
b0112f83e9 chore: update github-versions.json (#12164)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 00:21:41 +00:00
community-scripts-pr-app[bot]
f0ed8db337 Update CHANGELOG.md (#12163)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 00:05:00 +00:00
community-scripts-pr-app[bot]
0eaaac7dea Archive old changelog entries (#12162)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-22 00:04:39 +00:00
community-scripts-pr-app[bot]
802cbdd22b chore: update github-versions.json (#12156)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 18:07:24 +00:00
community-scripts-pr-app[bot]
4f9490184b Update CHANGELOG.md (#12155)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 18:03:45 +00:00
Slaviša Arežina
b5288692cd Update documentation URL in mediamanager.json (#12154) 2026-02-21 19:03:22 +01:00
community-scripts-pr-app[bot]
8cc2c7c025 Update CHANGELOG.md (#12150)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 13:48:47 +00:00
CanbiZ (MickLesk)
85c5d54a12 fix(pangolin): restore config before db migration, use drizzle-kit push (#12130)
npm run db:push ran drizzle migrate (CREATE TABLE) on an empty directory
before the config backup was restored, creating a fresh DB that got
immediately overwritten by the old backup. The old DB was missing new
columns like resources.postAuthPath (added in 1.15.4).

Replace with drizzle-kit push after config restore, which introspects the
existing DB and applies schema diffs (ALTER TABLE) instead.

Ref: #12068
2026-02-21 14:48:22 +01:00
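The bug and fix boil down to ordering; a stand-in sketch (the function bodies are placeholders for "restore the config backup" and "npx drizzle-kit push" in /opt/pangolin):

```shell
#!/usr/bin/env bash
# Stand-in functions: the real steps restore the backed-up config/DB and
# then run `npx drizzle-kit push`, which introspects the live schema.
steps=()
restore_config() { steps+=("restore"); }  # put the old DB back first
push_schema()    { steps+=("push"); }     # apply schema diffs (ALTER TABLE)

# Before the fix, the migrate step ran first on an empty directory
# (CREATE TABLE), and restoring the backup then overwrote the fresh DB,
# losing new columns such as resources.postAuthPath.
restore_config
push_schema
echo "${steps[*]}"
```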
community-scripts-pr-app[bot]
1273778dc2 Update CHANGELOG.md (#12149)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 12:09:00 +00:00
community-scripts-pr-app[bot]
2a7fa5addb chore: update github-versions.json (#12148)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 12:08:33 +00:00
community-scripts-pr-app[bot]
116888201c Update CHANGELOG.md (#12145)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 08:10:48 +00:00
Daniel Bates
a3d9ea7530 Fix #12066: PLANKA: fix msg_info/msg_ok inconsistencies in update script (#12143)
Co-authored-by: Your Name <your-email@example.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 09:10:22 +01:00
community-scripts-pr-app[bot]
903e63d758 chore: update github-versions.json (#12144)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 06:13:39 +00:00
community-scripts-pr-app[bot]
901b90f956 Update CHANGELOG.md (#12139)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 00:21:15 +00:00
community-scripts-pr-app[bot]
8795cfb03f chore: update github-versions.json (#12138)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-21 00:20:50 +00:00
community-scripts-pr-app[bot]
ae834cb39e Update CHANGELOG.md (#12136)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 21:58:16 +00:00
CanbiZ (MickLesk)
7dbd1fdc36 recyclarr: adjust paths for v8.0 breaking changes (#12129)
Recyclarr v8.0 moved local includes from configs/ to a dedicated
includes/ directory. The install script now creates both dirs
upfront. The update script migrates existing include subdirs
from configs/ to includes/ after the binary update and also
handles detection for both old (recyclarr.yml) and new
(configs/) config layouts.

See https://recyclarr.dev/guide/upgrade-guide/v8.0

Ref #12109
2026-02-20 22:57:56 +01:00
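The migration described above can be sketched as a small helper (directory names come from the commit message; the base path and demo are illustrative):

```shell
#!/usr/bin/env bash
# Move include subdirectories from configs/ to the dedicated includes/
# directory that recyclarr v8.0 expects. Both dirs are created upfront.
migrate_includes() {
  local base=$1 d
  mkdir -p "$base/configs" "$base/includes"
  for d in "$base"/configs/*/; do
    [ -d "$d" ] || continue         # skip when configs/ has no subdirs
    mv "$d" "$base/includes/"
  done
}

# demo on a throwaway directory instead of a real recyclarr home
demo=$(mktemp -d)
mkdir -p "$demo/configs/custom-formats"
migrate_includes "$demo"
ls "$demo/includes"
```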
community-scripts-pr-app[bot]
300f8c80e8 Update CHANGELOG.md (#12135)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 21:53:38 +00:00
CanbiZ (MickLesk)
d969969c2d planka: migrate data paths to new v2 directory structure (#12128)
Planka v2.0.1 moved all uploads to a unified data directory:
- public/{favicons,user-avatars,background-images} -> data/protected/
- private/attachments -> data/private/

The update script was still restoring files to the old locations,
which made attachments, avatars and backgrounds disappear after
updating from rc.4 to 2.0.1.

Changes:
- update_script: backup handles both old and new layouts
- update_script: restore now uses new v2 paths
- install script: create v2 data directories on fresh install

Ref #12066
2026-02-20 22:53:15 +01:00
community-scripts-pr-app[bot]
f6558953bc Update CHANGELOG.md (#12133)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 21:52:50 +00:00
CanbiZ (MickLesk)
b9f4e6c8bd fix(zammad): fix Elasticsearch JVM config and add daemon-reload (#12125)
The JVM heap options were appended with >> which left the old
defaults in place. Use sed to replace them instead.
Also add the missing systemctl daemon-reload that ES requires
after package installation before the service can be started.

Closes #12111
2026-02-20 22:52:28 +01:00
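The append-vs-replace distinction is easy to demonstrate (the temp file stands in for Elasticsearch's jvm.options; the heap values are illustrative):

```shell
#!/usr/bin/env bash
# The temp file stands in for /etc/elasticsearch/jvm.options.
jvm_opts=$(mktemp)
printf -- '-Xms1g\n-Xmx1g\n' >"$jvm_opts"

# Appending with >> would leave the old 1g defaults in place alongside the
# new values; replacing them with sed keeps exactly one setting per option.
sed -i -e 's/^-Xms.*/-Xms2g/' -e 's/^-Xmx.*/-Xmx2g/' "$jvm_opts"
cat "$jvm_opts"

# After installing the ES package, the unit also needs:
#   systemctl daemon-reload
```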
community-scripts-pr-app[bot]
54031b56d6 Update CHANGELOG.md (#12132)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 21:51:48 +00:00
CanbiZ (MickLesk)
c68b9b6a40 Huntarr: add build-essential for native pip dependencies (#12126)
* fix(huntarr): add build-essential for native pip dependencies

Some Python packages in requirements.txt need a C++ compiler for
native extensions. Without build-essential the uv pip install
step fails with 'command c++ not found'.

Closes #12117

* Change apt-get to apt for installing dependencies

* Ensure git dependency before updating Huntarr

Added dependency check for git before fetching and deploying the GitHub release.

* drunk
2026-02-20 22:51:26 +01:00
community-scripts-pr-app[bot]
f4c10ed75d Update CHANGELOG.md (#12131)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 20:56:12 +00:00
Chris
097ee57598 [Hotfix] Dokploy: fix update (#12116) 2026-02-20 21:55:51 +01:00
community-scripts-pr-app[bot]
120aefbdfe Update .app files (#12120)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-20 21:06:41 +01:00
community-scripts-pr-app[bot]
fc612ad369 Update CHANGELOG.md (#12123)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 20:06:08 +00:00
community-scripts-pr-app[bot]
c061434dd7 Update date in json (#12122)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-20 20:06:01 +00:00
community-scripts-pr-app[bot]
7b01ed0a90 Update CHANGELOG.md (#12121)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 20:05:58 +00:00
push-app-to-main[bot]
de94a3b0c8 Sure (#12114)
* Add sure (ct)

* Update service messages in sure.sh script

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: Chris <punk.sand7393@fastmail.com>
2026-02-20 21:05:42 +01:00
community-scripts-pr-app[bot]
f25a150fa1 Update CHANGELOG.md (#12119)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 20:05:33 +00:00
community-scripts-pr-app[bot]
fd44c98235 Update date in json (#12118)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-20 20:05:30 +00:00
push-app-to-main[bot]
c754d0c369 Calibre-Web (#12115)
* Add calibre-web (ct)

* Fix casing in Calibre-Web references

* Correct case for Calibre-Web in installation script

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: Chris <punk.sand7393@fastmail.com>
2026-02-20 21:05:09 +01:00
community-scripts-pr-app[bot]
bc5af1a536 chore: update github-versions.json (#12113)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 18:14:34 +00:00
community-scripts-pr-app[bot]
53f89258a4 chore: update github-versions.json (#12108)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 12:12:26 +00:00
community-scripts-pr-app[bot]
97302789d9 Update CHANGELOG.md (#12105)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 08:12:52 +00:00
Rik
f3b38e2ba2 fix broken link to dawarich documentation (#12103) 2026-02-20 09:12:21 +01:00
community-scripts-pr-app[bot]
eff0963023 chore: update github-versions.json (#12102)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 06:20:06 +00:00
community-scripts-pr-app[bot]
d42f5b75cb Update CHANGELOG.md (#12101)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 00:20:08 +00:00
community-scripts-pr-app[bot]
27c02dda99 chore: update github-versions.json (#12100)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-20 00:19:44 +00:00
community-scripts-pr-app[bot]
c3bac57055 chore: update github-versions.json (#12095)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 18:19:55 +00:00
community-scripts-pr-app[bot]
9fd11feb1f Update CHANGELOG.md (#12093)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 15:15:43 +00:00
Tobias
32f3748a0e add: patchmon breaking change msg (#12075)
* add: patchmon breaking change msg

* Update ct/patchmon.sh

* Fix sed command for environment variable updates

* Fix protocol variable usage in patchmon.sh

---------

Co-authored-by: Chris <punk.sand7393@fastmail.com>
2026-02-19 16:15:10 +01:00
community-scripts-pr-app[bot]
f4547f10b6 Update CHANGELOG.md (#12091)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 15:00:10 +00:00
Slaviša Arežina
a3a6cbb6a6 Fixes (#12089) 2026-02-19 15:59:39 +01:00
community-scripts-pr-app[bot]
1a6ca67375 Update CHANGELOG.md (#12090)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 14:19:33 +00:00
juronja
401bd6b55b slug fix (#12088) 2026-02-19 15:19:04 +01:00
community-scripts-pr-app[bot]
a40da3c452 chore: update github-versions.json (#12085)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 12:14:57 +00:00
community-scripts-pr-app[bot]
5c48506629 Update CHANGELOG.md (#12078)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 07:47:33 +00:00
community-scripts-pr-app[bot]
45a8f744f1 Update CHANGELOG.md (#12077)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 07:47:07 +00:00
community-scripts-pr-app[bot]
54c66c5af4 Update date in json (#12076)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-19 07:47:03 +00:00
push-app-to-main[bot]
0fab65f0cf Add truenas-vm (vm) (#12059)
Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
2026-02-19 08:46:45 +01:00
community-scripts-pr-app[bot]
31482e7489 chore: update github-versions.json (#12074)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 06:23:51 +00:00
community-scripts-pr-app[bot]
d97f9a0bce Update CHANGELOG.md (#12070)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 00:23:09 +00:00
community-scripts-pr-app[bot]
9b2275c980 chore: update github-versions.json (#12069)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-19 00:22:45 +00:00
CanbiZ (MickLesk)
b6a4e6a2a6 OPNSense: add disk space check | increase disk space (#12058)
* Fix: Add disk space checking for OPNsense VM FreeBSD image decompression

- Add check_disk_space() function to verify available storage
- Check for 20GB before download and 15GB before decompression
- Provide clear error messages showing available vs required space
- Add proper error handling for unxz decompression failures
- Clean up compressed .xz file after decompression to save space
- Add progress messages for download and decompression steps

Fixes issue where script fails at line 611 with 'No space left on device'
when /tmp directory lacks sufficient space for ~10-15GB decompressed image.

* Increase OPNsense VM disk size from 10GB to 20GB

- Provides more space for system updates, logs, and package installations
- 20GB is a more appropriate size for OPNsense production use
2026-02-18 22:06:04 +01:00
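A guard of the shape described above might look like this (the function name comes from the commit message; thresholds and wording are illustrative):

```shell
#!/usr/bin/env bash
# Fail early with a clear available-vs-required message instead of dying
# mid-decompression with "No space left on device".
check_disk_space() {
  local path=$1 required_gb=$2 avail_gb
  avail_gb=$(df -BG --output=avail "$path" | tail -n1 | tr -dc '0-9')
  if [ "$avail_gb" -lt "$required_gb" ]; then
    echo "Need ${required_gb}GB free on $path, only ${avail_gb}GB available" >&2
    return 1
  fi
}

# e.g. require 20GB before download and 15GB before decompression;
# 0 here keeps the demo independent of the machine it runs on
check_disk_space /tmp 0 && echo "enough space for the demo threshold"
```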
community-scripts-pr-app[bot]
96c056ea4e chore: update github-versions.json (#12067)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 18:17:12 +00:00
CanbiZ (MickLesk)
491081ffbf Add post_progress_to_api lightweight telemetry ping
- Introduce post_progress_to_api() in misc/api.func: a non-blocking, fire-and-forget curl ping (gated by DIAGNOSTICS and RANDOM_UUID) that updates telemetry status to "configuring".
- Wire this progress ping into multiple scripts (alpine-install.func, install.func, build.func, core.func) at key milestones (container start, network ready, customization, creation, cleanup) and replace/deduplicate some earlier post_to_api calls.
- Update error_handler.func to always report failures immediately via post_update_to_api, so failures are captured even before/after the container lifecycle.
2026-02-18 16:19:19 +01:00
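A fire-and-forget ping of this shape might look like the following (the endpoint URL and payload are placeholders, not the real community-scripts API):

```shell
#!/usr/bin/env bash
# Endpoint URL and payload are placeholders, not the real API.
post_progress_to_api() {
  [ "${DIAGNOSTICS:-no}" = "yes" ] || return 0   # gated by the opt-in
  [ -n "${RANDOM_UUID:-}" ] || return 0          # and by an existing record id
  # -m 5 bounds the request; errors are swallowed so the installer never
  # waits on, or fails because of, telemetry
  curl -m 5 -fsS -X POST "https://example.invalid/telemetry/$RANDOM_UUID" \
    -H 'Content-Type: application/json' \
    -d '{"status":"configuring"}' >/dev/null 2>&1 || true
}
```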
community-scripts-pr-app[bot]
1123fdca14 Update CHANGELOG.md (#12064)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 12:53:37 +00:00
Chris
a3a383361d [Fix] PatchMon: use SERVER_PORT in Nginx config if set in env (#12053)
- This PR will add the ability to change the PatchMon listen port in the
Nginx config during update, if the `SERVER_PORT` env var is set in
`/opt/patchmon/backend.env` and if the port is not 443
- If not set, or if set and the port is 443, then no changes are made to
the listen port in the Nginx config
2026-02-18 13:53:08 +01:00
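The conditional rewrite described in those bullets might be sketched like this (the SERVER_PORT key comes from the commit message; the Nginx config path and listen-line shape are assumptions):

```shell
#!/usr/bin/env bash
# Only touch the listen directive when SERVER_PORT is set and is not 443.
update_listen_port() {
  local env_file=$1 nginx_conf=$2 port
  port=$(sed -n 's/^SERVER_PORT=//p' "$env_file")
  [ -n "$port" ] && [ "$port" != "443" ] || return 0   # leave config untouched
  sed -i "s/listen [0-9]\+;/listen $port;/" "$nginx_conf"
}

# demo on temp files standing in for /opt/patchmon/backend.env and the vhost
envf=$(mktemp) && echo 'SERVER_PORT=8443' >"$envf"
conf=$(mktemp) && echo 'listen 443;' >"$conf"
update_listen_port "$envf" "$conf"
cat "$conf"
```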
CanbiZ (MickLesk)
6cc8877852 Add timeouts and prioritize telemetry on exit
- Prevent hangs when pulling logs from containers by wrapping pct pull calls with an 8-second timeout and running ensure_log_on_host under a 10-second timeout.
- Always send telemetry (post_update_to_api) before attempting best-effort log collection, so status is reported even if log retrieval blocks.
- Update the EXIT/ERR/SIGHUP/SIGINT/SIGTERM traps and consolidate the error/interrupt handlers to use the new timeout-guarded log collection.
- Changes are in misc/build.func and misc/error_handler.func.
2026-02-18 13:14:59 +01:00
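The timeout wrapping can be sketched as a small helper (pct is Proxmox's container CLI; the helper name and paths are illustrative):

```shell
#!/usr/bin/env bash
# timeout(1) bounds a pull that can hang indefinitely on a dying container;
# failures are swallowed because log collection is strictly best-effort and
# telemetry has already been sent by the time this runs.
pull_log_with_timeout() {
  local ctid=$1 src=$2 dst=$3
  timeout 8 pct pull "$ctid" "$src" "$dst" 2>/dev/null || true
}
```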
community-scripts-pr-app[bot]
845b89f975 chore: update github-versions.json (#12061)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 12:13:39 +00:00
community-scripts-pr-app[bot]
be26dc33dd Update CHANGELOG.md (#12057)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 09:24:29 +00:00
CanbiZ (MickLesk)
b439960222 core: Execution ID & Telemetry Improvements (#12041)
* fix: send telemetry BEFORE log collection in signal handlers

- Swap ensure_log_on_host/post_update_to_api order in on_interrupt, on_terminate, api_exit_script, and inline SIGHUP/SIGINT/SIGTERM traps
- For signal exits (>128): send telemetry immediately, then best-effort log collection
- Add 2>/dev/null || true to all I/O in signal handlers to prevent SIGPIPE
- Fix on_exit: exit_code=0 now reports 'done' instead of 'failed 1'
- Root cause: pct pull hangs on dying containers blocked telemetry updates, leaving 595+ records stuck in 'installing' daily

* feat: add execution_id to all telemetry payloads

- Generate EXECUTION_ID from RANDOM_UUID in variables()
- Export EXECUTION_ID to container environment
- Add execution_id field to all 8 API payloads in api.func
- Add execution_id to post_progress_to_api in install.func and alpine-install.func
- Fallback to RANDOM_UUID when EXECUTION_ID not set (backward compat)

* fix: correct telemetry type values for PVE and addon scripts

- PVE scripts (tools/pve/*): change type 'tool' -> 'pve'
- Addon scripts (tools/addon/*): fix 4 scripts that wrongly used 'tool' -> 'addon'
  (netdata, add-tailscale-lxc, add-netbird-lxc, all-templates)
- api.func: post_tool_to_api sends type='pve', default fallback 'pve'
- Aligns with PocketBase categories: lxc, vm, pve, addon

* fix: persist diagnostics opt-in inside containers for addon telemetry

- install.func + alpine-install.func: create /usr/local/community-scripts/diagnostics
  inside the container when DIAGNOSTICS=yes (from build.func export)
- Enables addon scripts running later inside containers to find the opt-in
- Update init_tool_telemetry default type from 'tool' to 'pve'

* refactor: clean up diagnostics/telemetry opt-in system

- diagnostics_check(): deduplicate heredoc (was 2x 22 lines), improve whiptail
  text with clear what/what-not collected, add telemetry + privacy links
- diagnostics_menu(): better UX with current status, clear enable/disable
  buttons, note about existing containers
- variables(): change DIAGNOSTICS default from 'yes' to 'no' (safe: no
  telemetry before user consents via diagnostics_check)
- install.func + alpine-install.func: persist BOTH yes AND no in container
  so opt-out is explicit (not just missing file = no)
- Fix typo 'menue' -> 'menu' in config file comments

* fix: no pre-selection in telemetry dialog, link to telemetry-service README

- Add --defaultno so 'No, opt out' is focused by default (user must Tab to Yes)
- Change privacy link from discussions/1836 to telemetry-service#privacy--compliance

* fix: use radiolist for telemetry dialog (no pre-selection)

- Replace --yesno with --radiolist: user must actively SPACE-select an option
- Both options start as OFF (no pre-selection)
- Cancel/Exit defaults to 'no' (opt-out)

* simplify: inline telemetry dialog text like other whiptail dialogs

* improve: telemetry dialog with more detail, link to PRIVACY.md

- Add what we collect / don't collect sections back to dialog
- Link to telemetry-service/docs/PRIVACY.md instead of README anchor
- Update config file comment with same link
2026-02-18 10:24:06 +01:00
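The execution-id wiring from the second bullet group can be sketched as follows (variable names come from the commit message; the kernel UUID source and the helper are illustrative):

```shell
#!/usr/bin/env bash
# Derive EXECUTION_ID from RANDOM_UUID once, export it so scripts running
# inside the container see the same id, and fall back at send time for
# older callers that predate the variable (backward compat).
RANDOM_UUID=${RANDOM_UUID:-$(cat /proc/sys/kernel/random/uuid)}
EXECUTION_ID=${EXECUTION_ID:-$RANDOM_UUID}   # generated once in variables()
export EXECUTION_ID

payload_execution_id() {
  printf '%s' "${EXECUTION_ID:-$RANDOM_UUID}"
}
```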
community-scripts-pr-app[bot]
b4a5d28957 chore: update github-versions.json (#12054)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 06:25:12 +00:00
community-scripts-pr-app[bot]
eaa69d58be Update CHANGELOG.md (#12052)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 00:22:33 +00:00
community-scripts-pr-app[bot]
3ffff334a0 chore: update github-versions.json (#12051)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-18 00:22:12 +00:00
community-scripts-pr-app[bot]
38f04f4dcc Update CHANGELOG.md (#12048)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 20:14:36 +00:00
Andreas Abeck
fdbcee3a93 fix according to issue #12045 (#12047) 2026-02-17 21:14:12 +01:00
community-scripts-pr-app[bot]
ce11ba8f27 Update CHANGELOG.md (#12046)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 19:18:03 +00:00
Chris
43a0a078f5 [Hotfix] Cleanuparr: backup config before update (#12039) 2026-02-17 20:17:22 +01:00
community-scripts-pr-app[bot]
646dabf0f0 chore: update github-versions.json (#12043)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 18:20:21 +00:00
community-scripts-pr-app[bot]
2582c1f63b Update CHANGELOG.md (#12040)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 15:36:48 +00:00
CanbiZ (MickLesk)
3ce3c6f613 tools/pve: add data analytics / formatting / linting (#12034)
* core: add progress; fix exit status

Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.

* Safer tools.func load and improved error handling

Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.

* tools: add data init and auto-reporting to tools and pve section

Introduce telemetry helpers in misc/api.func: _telemetry_report_exit (reports success/failure via post_tool_to_api/post_addon_to_api) and init_tool_telemetry (reads DIAGNOSTICS, starts install timer and installs an EXIT trap to auto-report). Integrate telemetry into many tools/addon and tools/pve scripts by sourcing the remote api.func and calling init_tool_telemetry (guarded with declare -f). Also apply a minor arithmetic formatting tweak in misc/build.func for RECOVERY_ATTEMPT.
2026-02-17 16:36:20 +01:00
community-scripts-pr-app[bot]
97652792be Update CHANGELOG.md (#12031)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 12:25:42 +00:00
CanbiZ (MickLesk)
f07f2cb04e core: error-handler improvements | better exit_code handling | better tools.func source check (#12019)
* core: add progress; fix exit status

Introduce post_progress_to_api() in alpine-install.func and install.func to send a lightweight, fire-and-forget telemetry ping (HTTP POST) that updates an existing telemetry record to "configuring" when DIAGNOSTICS=yes and RANDOM_UUID is set. The function is non-blocking (curl -m 5, errors ignored) and is invoked during container setup and after OS updates to signal active progress. Also adjust api_exit_script() in build.func to report success (post_update_to_api "done" "0") for cases where the script exited normally but a completion status wasn't posted, instead of reporting failure.

* Safer tools.func load and improved error handling

Replace process-substitution sourcing of tools.func with an explicit curl -> variable -> source via /dev/stdin, adding failure messages and a check that expected functions (e.g. fetch_and_deploy_gh_release) are present (misc/alpine-install.func, misc/install.func). Add categorize_error mapping for exit code 10 -> "config" (misc/api.func). Tweak build.func: minor pipeline formatting and change the ERR trap to capture the actual exit code and only call ensure_log_on_host/post_update on non-zero exits, preventing erroneous failure reporting.
2026-02-17 13:25:17 +01:00
community-scripts-pr-app[bot]
5f73f9d5e6 chore: update github-versions.json (#12030)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 12:14:28 +00:00
community-scripts-pr-app[bot]
0183ae0fff Update CHANGELOG.md (#12029)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:16:34 +00:00
CanbiZ (MickLesk)
32d1937a74 Refactor: centralize systemd service creation (#12025)
Introduce create_service() to generate the immich-proxy systemd unit and run systemctl daemon-reload. Replace duplicated heredoc service blocks in install with a call to create_service, and invoke create_service during update before starting the service. Adjust unit WorkingDirectory to ${INSTALL_PATH}/app and ExecStart to run dist/index.js.
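A centralized unit generator in roughly this shape replaces the duplicated heredocs (WorkingDirectory and ExecStart per the commit message; the other unit lines, the node path, and the UNIT_DIR override are illustrative assumptions):

```shell
# Sketch of create_service(): one heredoc, written once, called from both
# install and update. UNIT_DIR is only here so the sketch can be run
# outside /etc/systemd/system.
create_service() {
  local unit="${UNIT_DIR:-/etc/systemd/system}/immich-proxy.service"
  cat <<EOF >"$unit"
[Unit]
Description=Immich Public Proxy
After=network.target

[Service]
WorkingDirectory=${INSTALL_PATH}/app
ExecStart=/usr/bin/node dist/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  systemctl daemon-reload
}
```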
2026-02-17 12:16:09 +01:00
community-scripts-pr-app[bot]
0a7bd20b06 Update CHANGELOG.md (#12028)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 11:15:14 +00:00
CanbiZ (MickLesk)
c9ecb1ccca core: smart recovery for failed installs | extend exit_codes (#11221)
* feat(build.func): smart error recovery menu for failed installations

Replace simple Y/n removal prompt with interactive recovery menu:

- Option 1: Remove container and exit (default, auto after 60s timeout)
- Option 2: Keep container for debugging
- Option 3: Retry installation with verbose mode enabled
- Option 4: Retry with 1.5x RAM and +1 CPU core (OOM errors only)

Improvements:
- Detect OOM errors (exit codes 137, 243) and offer resource increase
- Show human-readable error explanation using explain_exit_code()
- Recursive rebuild preserves ALL settings from advanced/app.vars/default.vars
- Settings preserved: Network (IP, Gateway, VLAN, MTU, Bridge), Features
  (Nesting, FUSE, TUN, GPU), Storage, SSH keys, Tags, Hostname, etc.
- Show rebuild summary before retry (old→new CTID, resources, network)
- New container ID generated automatically for rebuilds

This helps users recover from transient failures without re-running
the entire script manually.
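The prompt flow above can be sketched as follows (option wording from the commit message; the real menu is richer, and the OOM option only appears for OOM exit codes):

```shell
# Sketch of the recovery prompt: read with a 60s timeout, falling back
# to the default "remove" option on timeout or closed stdin.
show_recovery_menu() {
  echo "Installation failed. Choose a recovery option:"
  echo "  1) Remove container and exit (default after 60s)"
  echo "  2) Keep container for debugging"
  echo "  3) Retry installation with verbose mode"
  local choice=""
  # Timeout or EOF makes read fail; choice stays empty -> default 1.
  read -r -t 60 choice || true
  echo "selected: ${choice:-1}"
}
```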

* fix(api.func): fix duplicate exit codes and add missing error codes

Exit code fixes:
- Remove duplicate definitions for codes 243, 254 (Node.js vs DB)
- Reassign MySQL/MariaDB to 240-242, 244 (was 241-244)
- Reassign MongoDB to 250-253 (was 251-254)

New exit codes added (based on GitHub issues analysis):
- 6: curl couldn't resolve host (DNS failure)
- 7: curl failed to connect (network unreachable)
- 22: curl HTTP error (404, 429 rate limit, 500)
- 28: curl timeout (very common in download failures)
- 35: curl SSL error
- 102: APT lock held by another process
- 124: Command timeout
- 141: SIGPIPE (broken pipe)

Also update OOM detection to include exit code 134 (SIGABRT)
which is commonly seen in Node.js heap overflow issues.

Fixes based on analysis of ~500 GitHub issues.
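An explain_exit_code-style lookup for these codes amounts to a case statement (codes from the list above; the wording here is paraphrased, not the project's exact strings):

```shell
# Abbreviated sketch of a human-readable exit-code mapping.
explain_exit_code() {
  case "$1" in
    6)   echo "curl: could not resolve host (DNS failure)" ;;
    7)   echo "curl: failed to connect (network unreachable)" ;;
    22)  echo "curl: HTTP error (404 / 429 rate limit / 500)" ;;
    28)  echo "curl: operation timed out" ;;
    35)  echo "curl: SSL error" ;;
    102) echo "APT lock held by another process" ;;
    124) echo "command timeout" ;;
    134) echo "SIGABRT (e.g. Node.js heap overflow)" ;;
    137) echo "SIGKILL (likely OOM)" ;;
    141) echo "SIGPIPE (broken pipe)" ;;
    *)   echo "unknown exit code: $1" ;;
  esac
}
```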

* fix(exit-codes): sync error_handler.func and api.func with conflict-free code ranges

- Add curl error codes (6, 7, 22, 28, 35)
- Add APT lock code (102), timeout (124), signals (134, 141)
- Move Python codes: 210-212 → 160-162 (avoid Proxmox conflict)
- Move PostgreSQL codes: 231-234 → 170-173
- Move MySQL/MariaDB codes: 241-244 → 180-183
- Move MongoDB codes: 251-254 → 190-193
- Keep Node.js at 243-249, Proxmox at 200-231
- Both files now synchronized with identical mappings

* feat(exit-codes): add systemd and build error codes (150-154)

- 150: Systemd service failed to start
- 151: Systemd service unit not found
- 152: Permission denied (EACCES)
- 153: Build/compile failed (make/gcc/cmake)
- 154: Node.js native addon build failed (node-gyp)

Based on issue analysis: 57 service failures, 25 build failures, 22 node-gyp issues

* fix(build): restore smart recovery and add OOM/DNS retry paths

* feat(build): APT in-place repair, exit 1 subclassification, new exit codes

- Add APT/DPKG in-place recovery: detects exit 100/101/102/255 and exit 1
  with APT log patterns, offers to repair dpkg state and re-run install
  script without destroying the container
- Add exit 1 subclassification: analyzes combined log to identify root
  cause (APT, OOM, network, command-not-found) and routes to appropriate
  recovery option
- Add exit 10 hint: shows privileged mode / nesting suggestion
- Add exit 127 hint: extracts missing command name from logs
- Refactor recovery menu: use named option variables (APT_OPTION,
  OOM_OPTION, DNS_OPTION) instead of hardcoded option numbers, supports
  up to 6 dynamic options cleanly
- Map missing exit codes in api.func: curl 27/36/45/47/55, signals
  129 (SIGHUP) / 131 (SIGQUIT), npm 239
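The exit-1 subclassification described above boils down to pattern-matching the combined log (the grep patterns here are illustrative stand-ins, not the project's real ones):

```shell
# Sketch: scan the install log and route exit code 1 to a recovery
# category (apt repair, OOM retry, DNS override, or missing command).
classify_exit1() {
  local log="$1"
  if grep -qiE 'dpkg was interrupted|could not get lock|apt-get' "$log"; then
    echo apt
  elif grep -qiE 'out of memory|oom-kill|killed process' "$log"; then
    echo oom
  elif grep -qiE 'could not resolve host|temporary failure in name' "$log"; then
    echo network
  elif grep -qiE 'command not found' "$log"; then
    echo command-not-found
  else
    echo unknown
  fi
}
```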

* feat(api+build): map 25 more exit codes, add SIGHUP trap, network/perm hints

api.func:
- Map 25+ new exit codes that were showing as 'Unknown' in telemetry:
  curl: 3, 16, 18, 24, 26, 32-34, 39, 44, 46, 48, 51, 52, 57, 59, 61,
  63, 79, 92, 95; signals: 125, 132, 144, 146
- Update code 8 description (FTP + apk untrusted key)
- Update header comment with full supported ranges

build.func:
- Add SIGHUP trap: reports 'failed/129' to API when terminal is closed,
  should significantly reduce the 2841 stuck 'installing' records
- Add exit 52 (empty reply) and 57 (poll error) to network issue
  detection for DNS override recovery option
- Add exit 125/126 hint: suggests privileged mode for permission errors
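The SIGHUP trap is a one-liner in spirit (post_update_to_api is the reporter named in the commit message; its body here is a stand-in):

```shell
# Sketch: if the terminal closes mid-install, report failed/129 before
# dying instead of leaving the telemetry record stuck at "installing".
post_update_to_api() { echo "api: $1/$2"; }   # stand-in reporter
trap 'post_update_to_api "failed" "129"; exit 129' HUP
```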

* fix: sync error_handler fallback, Alpine APK repair, retry limit

error_handler.func:
- Sync fallback explain_exit_code() with api.func: add 25+ codes that
  were missing (curl 16/18/24/26/27/32-34/36/39/44-48/51/52/55/57/59/
  61/63/79/92/95, signals 125/129/131/132/144/146, npm 239, code 3/8)
- Ensures consistent error descriptions even when api.func isn't loaded

build.func:
- Alpine APK repair: detect var_os=alpine and run 'apk fix && apk
  cache clean && apk update' instead of apt-get/dpkg commands
- Show 'Repair APK state' instead of 'APT/DPKG' in menu for Alpine
- Retry safety counter: OOM x2 retry limited to max 2 attempts
  (prevents infinite RAM doubling via RECOVERY_ATTEMPT env var)
- Show attempt count in rebuild summary

* fix(build): preserve exit code in ERR trap to prevent false exit_code=0

The ERR trap called ensure_log_on_host before post_update_to_api,
which reset $? to 0 (success). This caused ~15-20 records/day to be
reported as 'failed' with exit_code=0 instead of the actual error code.

Root cause chain:
1. Command fails with exit code N → ERR trap fires ($? = N)
2. ensure_log_on_host succeeds → $? becomes 0
3. post_update_to_api 'failed' "$?" → sends 'failed/0' (wrong!)
4. POST_UPDATE_DONE=true → EXIT trap skips the correct code

Fix: capture $? into _ERR_CODE before ensure_log_on_host runs.
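The fix amounts to this ordering inside the trap handler (helper bodies are stand-ins; the capture-first line is the point):

```shell
# Sketch of the ERR-trap ordering fix: $? must be saved before any
# helper runs, because a succeeding helper overwrites it with 0.
ensure_log_on_host() { :; }                      # stand-in: always succeeds
post_update_to_api() { echo "report: $1/$2"; }   # stand-in reporter
on_error() {
  local _ERR_CODE=$?   # capture the failing command's status FIRST
  ensure_log_on_host   # succeeding here must not clobber the code
  post_update_to_api "failed" "$_ERR_CODE"
}
trap on_error ERR
```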

* Implement telemetry settings and repo source detection

Add telemetry configuration and repository source detection function.
2026-02-17 12:14:46 +01:00
community-scripts-pr-app[bot]
d274a269b5 Update .app files (#12022)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 10:45:09 +01:00
community-scripts-pr-app[bot]
cbee9d64b5 Update CHANGELOG.md (#12024)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:45 +00:00
community-scripts-pr-app[bot]
ffcda217e3 Update CHANGELOG.md (#12023)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:25 +00:00
community-scripts-pr-app[bot]
438d5d6b94 Update date in json (#12021)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-17 09:41:19 +00:00
push-app-to-main[bot]
104366bc64 Databasus (#12018)
* Add databasus (ct)

* Update databasus.sh

* Update databasus-install.sh

* Fix backup and restore paths for Databasus config

---------

Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-17 10:40:58 +01:00
community-scripts-pr-app[bot]
9dab79f8ca Update CHANGELOG.md (#12017)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:08:35 +00:00
CanbiZ (MickLesk)
2dddeaf966 Call get_lxc_ip in start() before updates (#12015) 2026-02-17 09:08:09 +01:00
community-scripts-pr-app[bot]
fae06a3a58 Update CHANGELOG.md (#12016)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 08:07:22 +00:00
Tobias
137272c354 fix: pterodactyl-panel add symlink (#11997) 2026-02-17 09:06:59 +01:00
community-scripts-pr-app[bot]
52a9e23401 chore: update github-versions.json (#12013)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 06:22:15 +00:00
community-scripts-pr-app[bot]
c2333de180 Update CHANGELOG.md (#12007)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:47 +00:00
community-scripts-pr-app[bot]
ad8974894b chore: update github-versions.json (#12006)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-17 00:22:21 +00:00
community-scripts-pr-app[bot]
38af4be5ba Update CHANGELOG.md (#12005)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 21:38:03 +00:00
Chris
80ae1f34fa Opencloud: Pin version to 5.1.0 (#12004) 2026-02-16 22:37:35 +01:00
community-scripts-pr-app[bot]
06bc6e20d5 chore: update github-versions.json (#12001)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 18:13:41 +00:00
community-scripts-pr-app[bot]
4418e72856 Update CHANGELOG.md (#11999)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-16 16:14:29 +00:00
CanbiZ (MickLesk)
896714e06f core/vm's: ensure script state is sent on script exit (#11991)
* Ensure API update is sent on script exit

Add exit-time telemetry handling across scripts to avoid orphaned "installing" records. Introduce local exit_code capture in api_exit_script and cleanup handlers and, when POST_TO_API_DONE is true but POST_UPDATE_DONE is not, post a final status (marking failures on non-zero exit codes, or marking done/failed in VM cleanups based on exit code). Changes touch misc/build.func, misc/vm-core.func and various vm/*-vm.sh cleanup functions to reliably send post_update_to_api on normal or early exits.

* Update api.func

* fix(telemetry): add missing exit codes to explain_exit_code()

- Add curl error codes: 4, 5, 8, 23, 25, 30, 56, 78
- Add code 10: Docker/privileged mode required (used in ~15 scripts)
- Add code 75: Temporary failure (retry later)
- Add BSD sysexits.h codes: 64-77
- Sync error_handler.func fallback with canonical api.func
2026-02-16 17:14:00 +01:00
125 changed files with 4591 additions and 1198 deletions

.github/changelogs/2026/02.md

@@ -1,3 +1,182 @@
## 2026-02-21
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Pangolin: restore config before db migration, use drizzle-kit push [@MickLesk](https://github.com/MickLesk) ([#12130](https://github.com/community-scripts/ProxmoxVE/pull/12130))
- PLANKA: fix msg's [@danielalanbates](https://github.com/danielalanbates) ([#12143](https://github.com/community-scripts/ProxmoxVE/pull/12143))
### 🌐 Website
- #### 📝 Script Information
- MediaManager: Update documentation URL [@tremor021](https://github.com/tremor021) ([#12154](https://github.com/community-scripts/ProxmoxVE/pull/12154))
## 2026-02-20
### 🆕 New Scripts
- Sure ([#12114](https://github.com/community-scripts/ProxmoxVE/pull/12114))
- Calibre-Web ([#12115](https://github.com/community-scripts/ProxmoxVE/pull/12115))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Zammad: fix Elasticsearch JVM config and add daemon-reload [@MickLesk](https://github.com/MickLesk) ([#12125](https://github.com/community-scripts/ProxmoxVE/pull/12125))
- Huntarr: add build-essential for native pip dependencies [@MickLesk](https://github.com/MickLesk) ([#12126](https://github.com/community-scripts/ProxmoxVE/pull/12126))
- Dokploy: fix update function [@vhsdream](https://github.com/vhsdream) ([#12116](https://github.com/community-scripts/ProxmoxVE/pull/12116))
- #### 💥 Breaking Changes
- recyclarr: adjust paths for v8.0 breaking changes [@MickLesk](https://github.com/MickLesk) ([#12129](https://github.com/community-scripts/ProxmoxVE/pull/12129))
- #### 🔧 Refactor
- Planka: migrate data paths to new v2 directory structure [@MickLesk](https://github.com/MickLesk) ([#12128](https://github.com/community-scripts/ProxmoxVE/pull/12128))
### 🌐 Website
- #### 📝 Script Information
- fix broken link to dawarich documentation [@RiX012](https://github.com/RiX012) ([#12103](https://github.com/community-scripts/ProxmoxVE/pull/12103))
## 2026-02-19
### 🆕 New Scripts
- TrueNAS-VM ([#12059](https://github.com/community-scripts/ProxmoxVE/pull/12059))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- add: patchmon breaking change msg [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12075](https://github.com/community-scripts/ProxmoxVE/pull/12075))
- LibreNMS: Various fixes [@tremor021](https://github.com/tremor021) ([#12089](https://github.com/community-scripts/ProxmoxVE/pull/12089))
### 🌐 Website
- #### 📝 Script Information
- truenas-vm: slug fix for source code link [@juronja](https://github.com/juronja) ([#12088](https://github.com/community-scripts/ProxmoxVE/pull/12088))
## 2026-02-18
### 🚀 Updated Scripts
- #### 💥 Breaking Changes
- [Fix] PatchMon: use `SERVER_PORT` in Nginx config if set in env [@vhsdream](https://github.com/vhsdream) ([#12053](https://github.com/community-scripts/ProxmoxVE/pull/12053))
### 💾 Core
- #### ✨ New Features
- core: Execution ID & Telemetry Improvements [@MickLesk](https://github.com/MickLesk) ([#12041](https://github.com/community-scripts/ProxmoxVE/pull/12041))
## 2026-02-17
### 🆕 New Scripts
- Databasus ([#12018](https://github.com/community-scripts/ProxmoxVE/pull/12018))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- [Hotfix] Cleanuparr: backup config before update [@vhsdream](https://github.com/vhsdream) ([#12039](https://github.com/community-scripts/ProxmoxVE/pull/12039))
- fix: pterodactyl-panel add symlink [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11997](https://github.com/community-scripts/ProxmoxVE/pull/11997))
### 💾 Core
- #### 🐞 Bug Fixes
- core: call get_lxc_ip in start() before updates [@MickLesk](https://github.com/MickLesk) ([#12015](https://github.com/community-scripts/ProxmoxVE/pull/12015))
- #### ✨ New Features
- tools/pve: add data analytics / formatting / linting [@MickLesk](https://github.com/MickLesk) ([#12034](https://github.com/community-scripts/ProxmoxVE/pull/12034))
- core: smart recovery for failed installs | extend exit_codes [@MickLesk](https://github.com/MickLesk) ([#11221](https://github.com/community-scripts/ProxmoxVE/pull/11221))
- #### 🔧 Refactor
- core: error-handler improvements | better exit_code handling | better tools.func source check [@MickLesk](https://github.com/MickLesk) ([#12019](https://github.com/community-scripts/ProxmoxVE/pull/12019))
### 🧰 Tools
- #### 🔧 Refactor
- Immich Public Proxy: centralize and fix systemd service creation [@MickLesk](https://github.com/MickLesk) ([#12025](https://github.com/community-scripts/ProxmoxVE/pull/12025))
### 📚 Documentation
- fix contribution/setup-fork [@andreasabeck](https://github.com/andreasabeck) ([#12047](https://github.com/community-scripts/ProxmoxVE/pull/12047))
## 2026-02-16
### 🆕 New Scripts
- RomM ([#11987](https://github.com/community-scripts/ProxmoxVE/pull/11987))
- LinkDing ([#11976](https://github.com/community-scripts/ProxmoxVE/pull/11976))
### 🚀 Updated Scripts
- Opencloud: Pin version to 5.1.0 [@vhsdream](https://github.com/vhsdream) ([#12004](https://github.com/community-scripts/ProxmoxVE/pull/12004))
- #### 🐞 Bug Fixes
- Tududi: Fix sed command for DB_FILE configuration [@tremor021](https://github.com/tremor021) ([#11988](https://github.com/community-scripts/ProxmoxVE/pull/11988))
- slskd: fix exit position [@MickLesk](https://github.com/MickLesk) ([#11963](https://github.com/community-scripts/ProxmoxVE/pull/11963))
- cryptpad: restore config earlier and run onlyoffice upgrade [@MickLesk](https://github.com/MickLesk) ([#11964](https://github.com/community-scripts/ProxmoxVE/pull/11964))
- jellyseerr/overseerr: Migrate update script to Seerr; prompt rerun [@MickLesk](https://github.com/MickLesk) ([#11965](https://github.com/community-scripts/ProxmoxVE/pull/11965))
- #### 🔧 Refactor
- core/vm's: ensure script state is sent on script exit [@MickLesk](https://github.com/MickLesk) ([#11991](https://github.com/community-scripts/ProxmoxVE/pull/11991))
- Vaultwarden: export VW_VERSION as version number [@MickLesk](https://github.com/MickLesk) ([#11966](https://github.com/community-scripts/ProxmoxVE/pull/11966))
- Zabbix: Improve zabbix-agent service detection [@MickLesk](https://github.com/MickLesk) ([#11968](https://github.com/community-scripts/ProxmoxVE/pull/11968))
### 💾 Core
- #### ✨ New Features
- tools.func: ensure /usr/local/bin PATH persists for pct enter sessions [@MickLesk](https://github.com/MickLesk) ([#11970](https://github.com/community-scripts/ProxmoxVE/pull/11970))
- #### 🔧 Refactor
- core: remove duplicate error handler from alpine-install.func [@MickLesk](https://github.com/MickLesk) ([#11971](https://github.com/community-scripts/ProxmoxVE/pull/11971))
### 📂 Github
- github: add "website" label if "json" changed [@MickLesk](https://github.com/MickLesk) ([#11975](https://github.com/community-scripts/ProxmoxVE/pull/11975))
### 🌐 Website
- #### 📝 Script Information
- Update Wishlist LXC webpage to include reverse proxy info [@summoningpixels](https://github.com/summoningpixels) ([#11973](https://github.com/community-scripts/ProxmoxVE/pull/11973))
- Update OpenCloud LXC webpage to include services ports [@summoningpixels](https://github.com/summoningpixels) ([#11969](https://github.com/community-scripts/ProxmoxVE/pull/11969))
## 2026-02-15
### 🆕 New Scripts
- ebusd ([#11942](https://github.com/community-scripts/ProxmoxVE/pull/11942))
- add: seer script and migrations [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11930](https://github.com/community-scripts/ProxmoxVE/pull/11930))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Fix seerr URL in jellyseerr script [@lucacome](https://github.com/lucacome) ([#11951](https://github.com/community-scripts/ProxmoxVE/pull/11951))
- Fix jellyseer and overseer script replacement [@lucacome](https://github.com/lucacome) ([#11949](https://github.com/community-scripts/ProxmoxVE/pull/11949))
- Tautulli: Add setuptools < 81 [@tremor021](https://github.com/tremor021) ([#11943](https://github.com/community-scripts/ProxmoxVE/pull/11943))
- #### 💥 Breaking Changes
- Refactor: Patchmon [@vhsdream](https://github.com/vhsdream) ([#11888](https://github.com/community-scripts/ProxmoxVE/pull/11888))
## 2026-02-14
### 🚀 Updated Scripts


@@ -18,6 +18,9 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
<details>
<summary><h2>📜 History</h2></summary>
@@ -27,7 +30,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
<details>
<summary><h4>February (14 entries)</h4></summary>
<summary><h4>February (21 entries)</h4></summary>
[View February 2026 Changelog](.github/changelogs/2026/02.md)
@@ -404,6 +407,173 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-23
### 🆕 New Scripts
- Frigate v16.4 [@MickLesk](https://github.com/MickLesk) ([#11887](https://github.com/community-scripts/ProxmoxVE/pull/11887))
### 💾 Core
- #### 🔧 Refactor
- core: Improve error handling and logging for LXC builds [@MickLesk](https://github.com/MickLesk) ([#12208](https://github.com/community-scripts/ProxmoxVE/pull/12208))
## 2026-02-22
### 🆕 New Scripts
- Gramps-Web ([#12157](https://github.com/community-scripts/ProxmoxVE/pull/12157))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- fix: Apache Guacamole - bump to Temurin JDK 17 to resolve Debian 13 (Trixie) install failure [@Copilot](https://github.com/Copilot) ([#12161](https://github.com/community-scripts/ProxmoxVE/pull/12161))
- Docker-VM: add error handling for virt-customize finalization [@MickLesk](https://github.com/MickLesk) ([#12127](https://github.com/community-scripts/ProxmoxVE/pull/12127))
- [Fix] Sure: add Sidekiq service [@vhsdream](https://github.com/vhsdream) ([#12186](https://github.com/community-scripts/ProxmoxVE/pull/12186))
- #### ✨ New Features
- Refactor & Bump to v2: Plex [@MickLesk](https://github.com/MickLesk) ([#12179](https://github.com/community-scripts/ProxmoxVE/pull/12179))
- #### 🔧 Refactor
- karakeep: bump to node 24 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12183](https://github.com/community-scripts/ProxmoxVE/pull/12183))
### 💾 Core
- #### ✨ New Features
- tools.func: add GitHub API rate-limit detection and GITHUB_TOKEN support [@MickLesk](https://github.com/MickLesk) ([#12176](https://github.com/community-scripts/ProxmoxVE/pull/12176))
### 🧰 Tools
- CR*NMASTER ([#12065](https://github.com/community-scripts/ProxmoxVE/pull/12065))
- #### 🔧 Refactor
- Update package management commands in clean-lxcs.sh [@heinemannj](https://github.com/heinemannj) ([#12166](https://github.com/community-scripts/ProxmoxVE/pull/12166))
### ❔ Uncategorized
- calibre-web: Update logo URL [@MickLesk](https://github.com/MickLesk) ([#12178](https://github.com/community-scripts/ProxmoxVE/pull/12178))
## 2026-02-21
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Pangolin: restore config before db migration, use drizzle-kit push [@MickLesk](https://github.com/MickLesk) ([#12130](https://github.com/community-scripts/ProxmoxVE/pull/12130))
- PLANKA: fix msg's [@danielalanbates](https://github.com/danielalanbates) ([#12143](https://github.com/community-scripts/ProxmoxVE/pull/12143))
### 🌐 Website
- #### 📝 Script Information
- MediaManager: Update documentation URL [@tremor021](https://github.com/tremor021) ([#12154](https://github.com/community-scripts/ProxmoxVE/pull/12154))
## 2026-02-20
### 🆕 New Scripts
- Sure ([#12114](https://github.com/community-scripts/ProxmoxVE/pull/12114))
- Calibre-Web ([#12115](https://github.com/community-scripts/ProxmoxVE/pull/12115))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Zammad: fix Elasticsearch JVM config and add daemon-reload [@MickLesk](https://github.com/MickLesk) ([#12125](https://github.com/community-scripts/ProxmoxVE/pull/12125))
- Huntarr: add build-essential for native pip dependencies [@MickLesk](https://github.com/MickLesk) ([#12126](https://github.com/community-scripts/ProxmoxVE/pull/12126))
- Dokploy: fix update function [@vhsdream](https://github.com/vhsdream) ([#12116](https://github.com/community-scripts/ProxmoxVE/pull/12116))
- #### 💥 Breaking Changes
- recyclarr: adjust paths for v8.0 breaking changes [@MickLesk](https://github.com/MickLesk) ([#12129](https://github.com/community-scripts/ProxmoxVE/pull/12129))
- #### 🔧 Refactor
- Planka: migrate data paths to new v2 directory structure [@MickLesk](https://github.com/MickLesk) ([#12128](https://github.com/community-scripts/ProxmoxVE/pull/12128))
### 🌐 Website
- #### 📝 Script Information
- fix broken link to dawarich documentation [@RiX012](https://github.com/RiX012) ([#12103](https://github.com/community-scripts/ProxmoxVE/pull/12103))
## 2026-02-19
### 🆕 New Scripts
- TrueNAS-VM ([#12059](https://github.com/community-scripts/ProxmoxVE/pull/12059))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- add: patchmon breaking change msg [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12075](https://github.com/community-scripts/ProxmoxVE/pull/12075))
- LibreNMS: Various fixes [@tremor021](https://github.com/tremor021) ([#12089](https://github.com/community-scripts/ProxmoxVE/pull/12089))
### 🌐 Website
- #### 📝 Script Information
- truenas-vm: slug fix for source code link [@juronja](https://github.com/juronja) ([#12088](https://github.com/community-scripts/ProxmoxVE/pull/12088))
## 2026-02-18
### 🚀 Updated Scripts
- #### 💥 Breaking Changes
- [Fix] PatchMon: use `SERVER_PORT` in Nginx config if set in env [@vhsdream](https://github.com/vhsdream) ([#12053](https://github.com/community-scripts/ProxmoxVE/pull/12053))
### 💾 Core
- #### ✨ New Features
- core: Execution ID & Telemetry Improvements [@MickLesk](https://github.com/MickLesk) ([#12041](https://github.com/community-scripts/ProxmoxVE/pull/12041))
## 2026-02-17
### 🆕 New Scripts
- Databasus ([#12018](https://github.com/community-scripts/ProxmoxVE/pull/12018))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- [Hotfix] Cleanuparr: backup config before update [@vhsdream](https://github.com/vhsdream) ([#12039](https://github.com/community-scripts/ProxmoxVE/pull/12039))
- fix: pterodactyl-panel add symlink [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11997](https://github.com/community-scripts/ProxmoxVE/pull/11997))
### 💾 Core
- #### 🐞 Bug Fixes
- core: call get_lxc_ip in start() before updates [@MickLesk](https://github.com/MickLesk) ([#12015](https://github.com/community-scripts/ProxmoxVE/pull/12015))
- #### ✨ New Features
- tools/pve: add data analytics / formatting / linting [@MickLesk](https://github.com/MickLesk) ([#12034](https://github.com/community-scripts/ProxmoxVE/pull/12034))
- core: smart recovery for failed installs | extend exit_codes [@MickLesk](https://github.com/MickLesk) ([#11221](https://github.com/community-scripts/ProxmoxVE/pull/11221))
- #### 🔧 Refactor
- core: error-handler improvements | better exit_code handling | better tools.func source check [@MickLesk](https://github.com/MickLesk) ([#12019](https://github.com/community-scripts/ProxmoxVE/pull/12019))
### 🧰 Tools
- #### 🔧 Refactor
- Immich Public Proxy: centralize and fix systemd service creation [@MickLesk](https://github.com/MickLesk) ([#12025](https://github.com/community-scripts/ProxmoxVE/pull/12025))
### 📚 Documentation
- fix contribution/setup-fork [@andreasabeck](https://github.com/andreasabeck) ([#12047](https://github.com/community-scripts/ProxmoxVE/pull/12047))
## 2026-02-16
### 🆕 New Scripts
@@ -413,6 +583,8 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
### 🚀 Updated Scripts
- Opencloud: Pin version to 5.1.0 [@vhsdream](https://github.com/vhsdream) ([#12004](https://github.com/community-scripts/ProxmoxVE/pull/12004))
- #### 🐞 Bug Fixes
- Tududi: Fix sed command for DB_FILE configuration [@tremor021](https://github.com/tremor021) ([#11988](https://github.com/community-scripts/ProxmoxVE/pull/11988))
@@ -422,6 +594,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
- #### 🔧 Refactor
- core/vm's: ensure script state is sent on script exit [@MickLesk](https://github.com/MickLesk) ([#11991](https://github.com/community-scripts/ProxmoxVE/pull/11991))
- Vaultwarden: export VW_VERSION as version number [@MickLesk](https://github.com/MickLesk) ([#11966](https://github.com/community-scripts/ProxmoxVE/pull/11966))
- Zabbix: Improve zabbix-agent service detection [@MickLesk](https://github.com/MickLesk) ([#11968](https://github.com/community-scripts/ProxmoxVE/pull/11968))
@@ -1210,210 +1383,4 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
- #### ✨ New Features
- core: add IPv6 fallback support to get_current_ip functions | add check for SSH_KEYS_FILE in user_defaults [@MickLesk](https://github.com/MickLesk) ([#11067](https://github.com/community-scripts/ProxmoxVE/pull/11067))
## 2026-01-22
### 🆕 New Scripts
- Loki | Alpine-Loki ([#11048](https://github.com/community-scripts/ProxmoxVE/pull/11048))
### 🚀 Updated Scripts
- Immich: Increase RAM to 6GB [@vhsdream](https://github.com/vhsdream) ([#10965](https://github.com/community-scripts/ProxmoxVE/pull/10965))
- #### 🐞 Bug Fixes
- Jotty: Increase default disk size from 6 to 8 [@tremor021](https://github.com/tremor021) ([#11056](https://github.com/community-scripts/ProxmoxVE/pull/11056))
- Fix tags in several scripts [@s4dmach1ne](https://github.com/s4dmach1ne) ([#11050](https://github.com/community-scripts/ProxmoxVE/pull/11050))
### 💾 Core
- #### ✨ New Features
- tools: use distro packages for MariaDB by default [@MickLesk](https://github.com/MickLesk) ([#11049](https://github.com/community-scripts/ProxmoxVE/pull/11049))
## 2026-01-21
### 🆕 New Scripts
- Byparr ([#11039](https://github.com/community-scripts/ProxmoxVE/pull/11039))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- fix: Snipe-IT update missing all user uploads (#11032) [@ruanmed](https://github.com/ruanmed) ([#11033](https://github.com/community-scripts/ProxmoxVE/pull/11033))
- yubal: fix for v0.2 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11006](https://github.com/community-scripts/ProxmoxVE/pull/11006))
- Joplin-Server: use yarn workspaces focus for faster builds [@MickLesk](https://github.com/MickLesk) ([#11027](https://github.com/community-scripts/ProxmoxVE/pull/11027))
### 💾 Core
- #### ✨ New Features
- tools: add ubuntu PHP repository setup [@MickLesk](https://github.com/MickLesk) ([#11034](https://github.com/community-scripts/ProxmoxVE/pull/11034))
- #### 🔧 Refactor
- core: allow empty tags & improve template search [@MickLesk](https://github.com/MickLesk) ([#11020](https://github.com/community-scripts/ProxmoxVE/pull/11020))
### 🌐 Website
- #### 📝 Script Information
- Joplin Server: Set disable flag to true in joplin-server.json [@tremor021](https://github.com/tremor021) ([#11008](https://github.com/community-scripts/ProxmoxVE/pull/11008))
## 2026-01-20
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- dolibarr: switch mirror [@MickLesk](https://github.com/MickLesk) ([#11004](https://github.com/community-scripts/ProxmoxVE/pull/11004))
- checkmk: reorder base function [@MickLesk](https://github.com/MickLesk) ([#10990](https://github.com/community-scripts/ProxmoxVE/pull/10990))
- Homepage: preserve config directory during updates [@MickLesk](https://github.com/MickLesk) ([#10993](https://github.com/community-scripts/ProxmoxVE/pull/10993))
- DiscoPanel: add go for update build process [@miausalvaje](https://github.com/miausalvaje) ([#10991](https://github.com/community-scripts/ProxmoxVE/pull/10991))
### 💾 Core
- #### ✨ New Features
- core: add retry logic for template lock in LXC container creation [@MickLesk](https://github.com/MickLesk) ([#11002](https://github.com/community-scripts/ProxmoxVE/pull/11002))
- core: implement ensure_profile_loaded function [@MickLesk](https://github.com/MickLesk) ([#10999](https://github.com/community-scripts/ProxmoxVE/pull/10999))
- core: add input validations for several functions [@MickLesk](https://github.com/MickLesk) ([#10995](https://github.com/community-scripts/ProxmoxVE/pull/10995))
## 2026-01-19
### 🆕 New Scripts
- yubal ([#10955](https://github.com/community-scripts/ProxmoxVE/pull/10955))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Apache-Guacamole: move jdbc cleanup after schema upgrade [@MickLesk](https://github.com/MickLesk) ([#10974](https://github.com/community-scripts/ProxmoxVE/pull/10974))
- Outline: prevent corepack interactive prompt blocking installation [@MickLesk](https://github.com/MickLesk) ([#10973](https://github.com/community-scripts/ProxmoxVE/pull/10973))
- firefly: prevent nested storage directories during update (#10967) [@MickLesk](https://github.com/MickLesk) ([#10972](https://github.com/community-scripts/ProxmoxVE/pull/10972))
- PeaNUT: change default port [@vhsdream](https://github.com/vhsdream) ([#10962](https://github.com/community-scripts/ProxmoxVE/pull/10962))
- Update/splunk enterprise [@rcastley](https://github.com/rcastley) ([#10949](https://github.com/community-scripts/ProxmoxVE/pull/10949))
- #### ✨ New Features
- Pangolin: use dynamic badger plugin version [@MickLesk](https://github.com/MickLesk) ([#10975](https://github.com/community-scripts/ProxmoxVE/pull/10975))
- Tautulli: add version detection and add proper update script [@MickLesk](https://github.com/MickLesk) ([#10976](https://github.com/community-scripts/ProxmoxVE/pull/10976))
- #### 🔧 Refactor
- Refactor: Remove custom IP fetching in scripts [@tremor021](https://github.com/tremor021) ([#10954](https://github.com/community-scripts/ProxmoxVE/pull/10954))
- Refactor: Homepage [@tremor021](https://github.com/tremor021) ([#10950](https://github.com/community-scripts/ProxmoxVE/pull/10950))
- Refactor: hev-socks5-server [@tremor021](https://github.com/tremor021) ([#10945](https://github.com/community-scripts/ProxmoxVE/pull/10945))
### 🗑️ Deleted Scripts
- Remove: phpIPAM [@MickLesk](https://github.com/MickLesk) ([#10939](https://github.com/community-scripts/ProxmoxVE/pull/10939))
### 💾 Core
- #### ✨ New Features
- core: add RFC 1123/952 compliant hostname/FQDN validation [@MickLesk](https://github.com/MickLesk) ([#10977](https://github.com/community-scripts/ProxmoxVE/pull/10977))
- [core]: Make LXC IP a global variable [@tremor021](https://github.com/tremor021) ([#10951](https://github.com/community-scripts/ProxmoxVE/pull/10951))
### 🧰 Tools
- #### 🔧 Refactor
- Refactor: copyparty [@MickLesk](https://github.com/MickLesk) ([#10941](https://github.com/community-scripts/ProxmoxVE/pull/10941))
## 2026-01-18
### 🆕 New Scripts
- Termix ([#10887](https://github.com/community-scripts/ProxmoxVE/pull/10887))
- ThingsBoard ([#10904](https://github.com/community-scripts/ProxmoxVE/pull/10904))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Fix Patchmon install script (escaping) [@christiaangoossens](https://github.com/christiaangoossens) ([#10920](https://github.com/community-scripts/ProxmoxVE/pull/10920))
- refactor: peanut entrypoint [@CrazyWolf13](https://github.com/CrazyWolf13) ([#10902](https://github.com/community-scripts/ProxmoxVE/pull/10902))
- #### 💥 Breaking Changes
- Update Patchmon default Nginx config (IPv6 and correct scheme) [@christiaangoossens](https://github.com/christiaangoossens) ([#10917](https://github.com/community-scripts/ProxmoxVE/pull/10917))
- #### 🔧 Refactor
- Refactor: FluidCalendar [@tremor021](https://github.com/tremor021) ([#10928](https://github.com/community-scripts/ProxmoxVE/pull/10928))
### 🗑️ Deleted Scripts
- Remove iVentoy script [@tremor021](https://github.com/tremor021) ([#10924](https://github.com/community-scripts/ProxmoxVE/pull/10924))
### 💾 Core
- #### ✨ New Features
- core: improve password handling and validation logic [@MickLesk](https://github.com/MickLesk) ([#10925](https://github.com/community-scripts/ProxmoxVE/pull/10925))
- #### 🔧 Refactor
- hwaccel: improve NVIDIA version matching and GPU selection UI [@MickLesk](https://github.com/MickLesk) ([#10901](https://github.com/community-scripts/ProxmoxVE/pull/10901))
### 📂 Github
- Fix typo in the New Script request template [@tremor021](https://github.com/tremor021) ([#10891](https://github.com/community-scripts/ProxmoxVE/pull/10891))
### 🌐 Website
- #### 🐞 Bug Fixes
- fix: preserve newest scripts pagination [@jgrubiox](https://github.com/jgrubiox) ([#10882](https://github.com/community-scripts/ProxmoxVE/pull/10882))
### ❔ Uncategorized
- Update qui.json [@GalaxyCatD3v](https://github.com/GalaxyCatD3v) ([#10896](https://github.com/community-scripts/ProxmoxVE/pull/10896))
## 2026-01-17
### 🆕 New Scripts
- TRIP ([#10864](https://github.com/community-scripts/ProxmoxVE/pull/10864))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- fix sonarqube update version info (#10870) [@Karlito83](https://github.com/Karlito83) ([#10871](https://github.com/community-scripts/ProxmoxVE/pull/10871))
- WGDashboard: Update repo URL [@tremor021](https://github.com/tremor021) ([#10872](https://github.com/community-scripts/ProxmoxVE/pull/10872))
### 🌐 Website
- #### 📝 Script Information
- Disable Palmer [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#10889](https://github.com/community-scripts/ProxmoxVE/pull/10889))
## 2026-01-16
### 🆕 New Scripts
- Flatnotes ([#10857](https://github.com/community-scripts/ProxmoxVE/pull/10857))
- Unifi OS Server ([#10856](https://github.com/community-scripts/ProxmoxVE/pull/10856))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Jotty: increase RAM; cap heap size at 3GB during build [@vhsdream](https://github.com/vhsdream) ([#10868](https://github.com/community-scripts/ProxmoxVE/pull/10868))
- SnowShare: Increase default resources [@TuroYT](https://github.com/TuroYT) ([#10865](https://github.com/community-scripts/ProxmoxVE/pull/10865))
- postgresql: name of sources file fixed (update check) [@JamborJan](https://github.com/JamborJan) ([#10854](https://github.com/community-scripts/ProxmoxVE/pull/10854))
- immich: use dpkg-query to get intel-opencl-icd version [@MickLesk](https://github.com/MickLesk) ([#10848](https://github.com/community-scripts/ProxmoxVE/pull/10848))
- domain-monitor: fix: cron user [@CrazyWolf13](https://github.com/CrazyWolf13) ([#10846](https://github.com/community-scripts/ProxmoxVE/pull/10846))
- pihole/unbound: create unbound config before apt install to prevent port conflicts [@MickLesk](https://github.com/MickLesk) ([#10839](https://github.com/community-scripts/ProxmoxVE/pull/10839))
- zammad: use ln -sf to avoid failure when symlink exists [@MickLesk](https://github.com/MickLesk) ([#10840](https://github.com/community-scripts/ProxmoxVE/pull/10840))
### ❔ Uncategorized
- qui: fix: category [@CrazyWolf13](https://github.com/CrazyWolf13) ([#10847](https://github.com/community-scripts/ProxmoxVE/pull/10847))
- core: add IPv6 fallback support to get_current_ip functions | add check for SSH_KEYS_FILE in user_defaults [@MickLesk](https://github.com/MickLesk) ([#11067](https://github.com/community-scripts/ProxmoxVE/pull/11067))


@@ -51,7 +51,7 @@ function update_script() {
exit
fi
JAVA_VERSION="11" setup_java
JAVA_VERSION="17" setup_java
msg_info "Stopping Services"
systemctl stop guacd tomcat

ct/calibre-web.sh (new file, 73 lines)

@@ -0,0 +1,73 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: mikolaj92
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/janeczku/calibre-web
APP="calibre-web"
var_tags="${var_tags:-media;books}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/calibre-web ]]; then
msg_error "No Calibre-Web Installation Found!"
exit
fi
if check_for_gh_release "Calibre-Web" "janeczku/calibre-web"; then
msg_info "Stopping Service"
systemctl stop calibre-web
msg_ok "Stopped Service"
msg_info "Backing up Data"
cp -r /opt/calibre-web/app.db /opt/app.db_backup
cp -r /opt/calibre-web/data /opt/data_backup
msg_ok "Backed up Data"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Calibre-Web" "janeczku/calibre-web" "prebuild" "latest" "/opt/calibre-web" "calibre-web*.tar.gz"
setup_uv
msg_info "Installing Dependencies"
cd /opt/calibre-web
$STD uv venv
$STD uv pip install --python /opt/calibre-web/.venv/bin/python --no-cache-dir --upgrade pip setuptools wheel
$STD uv pip install --python /opt/calibre-web/.venv/bin/python --no-cache-dir -r requirements.txt
msg_ok "Installed Dependencies"
msg_info "Restoring Data"
cp /opt/app.db_backup /opt/calibre-web/app.db 2>/dev/null
cp -r /opt/data_backup /opt/calibre-web/data 2>/dev/null
rm -rf /opt/app.db_backup /opt/data_backup
msg_ok "Restored Data"
msg_info "Starting Service"
systemctl start calibre-web
msg_ok "Started Service"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8083${CL}"


@@ -32,8 +32,17 @@ function update_script() {
systemctl stop cleanuparr
msg_ok "Stopped Service"
msg_info "Backing up config"
cp -r /opt/cleanuparr/config /opt/cleanuparr_config_backup
msg_ok "Backed up config"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Cleanuparr" "Cleanuparr/Cleanuparr" "prebuild" "latest" "/opt/cleanuparr" "*linux-amd64.zip"
msg_info "Restoring config"
[[ -d /opt/cleanuparr/config ]] && rm -rf /opt/cleanuparr/config
mv /opt/cleanuparr_config_backup /opt/cleanuparr/config
msg_ok "Restored config"
msg_info "Starting Service"
systemctl start cleanuparr
msg_ok "Started Service"

ct/databasus.sh (new file, 78 lines)

@@ -0,0 +1,78 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
APP="Databasus"
var_tags="${var_tags:-backup;postgresql;database}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /opt/databasus/databasus ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "databasus" "databasus/databasus"; then
msg_info "Stopping Databasus"
$STD systemctl stop databasus
msg_ok "Stopped Databasus"
msg_info "Backing up Configuration"
cp /opt/databasus/.env /opt/databasus.env.bak
msg_ok "Backed up Configuration"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Updating Databasus"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod download
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations /opt/databasus/
chown -R postgres:postgres /opt/databasus
msg_ok "Updated Databasus"
msg_info "Restoring Configuration"
cp /opt/databasus.env.bak /opt/databasus/.env
rm -f /opt/databasus.env.bak
chown postgres:postgres /opt/databasus/.env
msg_ok "Restored Configuration"
msg_info "Starting Databasus"
$STD systemctl start databasus
msg_ok "Started Databasus"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"


@@ -30,7 +30,7 @@ function update_script() {
fi
msg_info "Updating Dokploy"
$STD bash <(curl -sSL https://dokploy.com/install.sh)
curl -sSL https://dokploy.com/install.sh | $STD bash -s update
msg_ok "Updated Dokploy"
msg_ok "Updated successfully!"
exit
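The hunk above swaps `bash <(curl …)` for `curl … | $STD bash -s update`. With `-s`, bash reads the script from stdin and forwards the trailing arguments to it as positional parameters, which is how a piped installer can receive a subcommand such as `update`. A minimal, self-contained sketch (the inline script here is illustrative, not Dokploy's installer):

```shell
# bash -s reads the script from stdin and passes the trailing
# arguments to it as $1, $2, ...
echo 'echo "mode=$1"' | bash -s update
# prints: mode=update
```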


@@ -1,7 +1,7 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 tteck
# Authors: tteck (tteckster)
source <(curl -fsSL https://git.community-scripts.org/community-scripts/ProxmoxVE/raw/branch/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Authors: MickLesk (CanbiZ) | Co-Author: remz1337
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://frigate.video/
@@ -11,7 +11,7 @@ var_cpu="${var_cpu:-4}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-20}"
var_os="${var_os:-debian}"
var_version="${var_version:-11}"
var_version="${var_version:-12}"
var_unprivileged="${var_unprivileged:-0}"
var_gpu="${var_gpu:-yes}"
@@ -21,15 +21,15 @@ color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /etc/systemd/system/frigate.service ]]; then
msg_error "No ${APP} Installation Found!"
header_info
check_container_storage
check_container_resources
if [[ ! -f /etc/systemd/system/frigate.service ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
msg_error "To update Frigate, create a new container and transfer your configuration."
exit
fi
msg_error "To update Frigate, create a new container and transfer your configuration."
exit
}
start

ct/gramps-web.sh (new file, 93 lines)

@@ -0,0 +1,93 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.grampsweb.org/
APP="gramps-web"
var_tags="${var_tags:-genealogy;family;collaboration}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-20}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/gramps-web-api ]] || [[ ! -d /opt/gramps-web/frontend ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
PYTHON_VERSION="3.12" setup_uv
NODE_VERSION="22" setup_nodejs
if check_for_gh_release "gramps-web-api" "gramps-project/gramps-web-api"; then
msg_info "Stopping Service"
systemctl stop gramps-web
msg_ok "Stopped Service"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "gramps-web-api" "gramps-project/gramps-web-api" "tarball" "latest" "/opt/gramps-web-api"
msg_info "Updating Gramps Web API"
$STD uv venv -c -p python3.12 /opt/gramps-web/venv
source /opt/gramps-web/venv/bin/activate
$STD uv pip install --no-cache-dir --upgrade pip setuptools wheel
$STD uv pip install --no-cache-dir gunicorn
$STD uv pip install --no-cache-dir /opt/gramps-web-api
msg_ok "Updated Gramps Web API"
msg_info "Applying Database Migration"
cd /opt/gramps-web-api
GRAMPS_API_CONFIG=/opt/gramps-web/config/config.cfg \
ALEMBIC_CONFIG=/opt/gramps-web-api/alembic.ini \
GRAMPSHOME=/opt/gramps-web/data/gramps \
GRAMPS_DATABASE_PATH=/opt/gramps-web/data/gramps/grampsdb \
$STD /opt/gramps-web/venv/bin/python3 -m gramps_webapi user migrate
msg_ok "Applied Database Migration"
msg_info "Starting Service"
systemctl start gramps-web
msg_ok "Started Service"
fi
if check_for_gh_release "gramps-web" "gramps-project/gramps-web"; then
msg_info "Stopping Service"
systemctl stop gramps-web
msg_ok "Stopped Service"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "gramps-web" "gramps-project/gramps-web" "tarball" "latest" "/opt/gramps-web/frontend"
msg_info "Updating Gramps Web Frontend"
cd /opt/gramps-web/frontend
export COREPACK_ENABLE_DOWNLOAD_PROMPT=0
$STD corepack enable
$STD npm install
$STD npm run build
msg_ok "Updated Gramps Web Frontend"
msg_info "Starting Service"
systemctl start gramps-web
msg_ok "Started Service"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:5000${CL}"

ct/headers/calibre-web (new file, 6 lines)

@@ -0,0 +1,6 @@
___ __ __
_________ _/ (_) /_ ________ _ _____ / /_
/ ___/ __ `/ / / __ \/ ___/ _ \_____| | /| / / _ \/ __ \
/ /__/ /_/ / / / /_/ / / / __/_____/ |/ |/ / __/ /_/ /
\___/\__,_/_/_/_.___/_/ \___/ |__/|__/\___/_.___/

ct/headers/databasus (new file, 6 lines)

@@ -0,0 +1,6 @@
____ __ __
/ __ \____ _/ /_____ _/ /_ ____ ________ _______
/ / / / __ `/ __/ __ `/ __ \/ __ `/ ___/ / / / ___/
/ /_/ / /_/ / /_/ /_/ / /_/ / /_/ (__ ) /_/ (__ )
/_____/\__,_/\__/\__,_/_.___/\__,_/____/\__,_/____/

ct/headers/gramps-web (new file, 6 lines)

@@ -0,0 +1,6 @@
__
____ __________ _____ ___ ____ _____ _ _____ / /_
/ __ `/ ___/ __ `/ __ `__ \/ __ \/ ___/____| | /| / / _ \/ __ \
/ /_/ / / / /_/ / / / / / / /_/ (__ )_____/ |/ |/ / __/ /_/ /
\__, /_/ \__,_/_/ /_/ /_/ .___/____/ |__/|__/\___/_.___/
/____/ /_/

ct/headers/sure (new file, 6 lines)

@@ -0,0 +1,6 @@
_____
/ ___/__ __________
\__ \/ / / / ___/ _ \
___/ / /_/ / / / __/
/____/\__,_/_/ \___/


@@ -35,7 +35,8 @@ function update_script() {
msg_info "Stopping Service"
systemctl stop huntarr
msg_ok "Stopped Service"
ensure_dependencies build-essential
fetch_and_deploy_gh_release "huntarr" "plexguide/Huntarr.io" "tarball"
msg_info "Updating Huntarr"


@@ -60,7 +60,7 @@ function update_script() {
$STD corepack disable
fi
MODULE_VERSION="$(jq -r '.packageManager | split("@")[1]' /opt/karakeep/package.json)"
NODE_VERSION="22" NODE_MODULE="pnpm@${MODULE_VERSION}" setup_nodejs
NODE_VERSION="24" NODE_MODULE="pnpm@${MODULE_VERSION}" setup_nodejs
setup_meilisearch
msg_info "Updating Karakeep"


@@ -29,7 +29,7 @@ function update_script() {
exit
fi
RELEASE="v5.0.2"
RELEASE="v5.1.0"
if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
msg_info "Stopping services"
systemctl stop opencloud opencloud-wopi


@@ -51,7 +51,6 @@ function update_script() {
$STD npm run db:generate
$STD npm run build
$STD npm run build:cli
$STD npm run db:push
cp -R .next/standalone ./
chmod +x ./dist/cli.mjs
cp server/db/names.json ./dist/names.json
@@ -64,6 +63,11 @@ function update_script() {
rm -f /opt/pangolin_config_backup.tar.gz
msg_ok "Restored config"
msg_info "Running database migrations"
cd /opt/pangolin
ENVIRONMENT=prod $STD npx drizzle-kit push --config drizzle.sqlite.config.ts
msg_ok "Ran database migrations"
msg_info "Updating Badger plugin version"
BADGER_VERSION=$(get_latest_github_release "fosrl/badger" "false")
sed -i "s/version: \"v[0-9.]*\"/version: \"$BADGER_VERSION\"/g" /opt/pangolin/config/traefik/traefik_config.yml


@@ -29,6 +29,13 @@ function update_script() {
exit
fi
if ! grep -q "PORT=3001" /opt/patchmon/backend/.env; then
msg_warn "⚠️ The next PatchMon update will include breaking changes (port changes)."
msg_warn "See details here: https://github.com/community-scripts/ProxmoxVE/pull/11888"
msg_warn "Press Enter to continue with the update, or Ctrl+C to abort..."
read -r
fi
NODE_VERSION="24" setup_nodejs
if check_for_gh_release "PatchMon" "PatchMon/PatchMon"; then
msg_info "Stopping Service"
@@ -46,7 +53,8 @@ function update_script() {
VERSION=$(get_latest_github_release "PatchMon/PatchMon")
PROTO="$(sed -n '/SERVER_PROTOCOL/s/[^=]*=//p' /opt/backend.env)"
HOST="$(sed -n '/SERVER_HOST/s/[^=]*=//p' /opt/backend.env)"
[[ "${PROTO:-http}" == "http" ]] && PORT=":3001"
SERVER_PORT="$(sed -n '/SERVER_PORT/s/[^=]*=//p' /opt/backend.env)"
[[ "$PROTO" == "http" ]] && PORT=":3001"
sed -i 's/PORT=3399/PORT=3001/' /opt/backend.env
sed -i -e "s/VERSION=.*/VERSION=$VERSION/" \
-e "\|VITE_API_URL=|s|http.*|${PROTO:-http}://${HOST:-$LOCAL_IP}${PORT:-}/api/v1|" /opt/frontend.env
@@ -66,6 +74,9 @@ function update_script() {
-e '\|try_files |i\ root /opt/patchmon/frontend/dist;' \
-e 's|alias.*|alias /opt/patchmon/frontend/dist/assets;|' \
-e '\|expires 1y|i\ root /opt/patchmon/frontend/dist;' /etc/nginx/sites-available/patchmon.conf
if [[ -n "$SERVER_PORT" ]] && [[ "$SERVER_PORT" != "443" ]]; then
sed -i "s/listen [[:digit:]].*/listen ${SERVER_PORT};/" /etc/nginx/sites-available/patchmon.conf
fi
ln -sf /etc/nginx/sites-available/patchmon.conf /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
$STD nginx -t


@@ -31,16 +31,24 @@ function update_script() {
if check_for_gh_release "planka" "plankanban/planka"; then
msg_info "Stopping Service"
systemctl stop planka
msg_info "Stopped Service"
msg_ok "Stopped Service"
msg_info "Backing up data"
BK="/opt/planka-backup"
mkdir -p "$BK"/{favicons,user-avatars,background-images,attachments}
[ -f /opt/planka/.env ] && mv /opt/planka/.env "$BK"/
[ -d /opt/planka/public/favicons ] && cp -a /opt/planka/public/favicons/. "$BK/favicons/"
[ -d /opt/planka/public/user-avatars ] && cp -a /opt/planka/public/user-avatars/. "$BK/user-avatars/"
[ -d /opt/planka/public/background-images ] && cp -a /opt/planka/public/background-images/. "$BK/background-images/"
[ -d /opt/planka/private/attachments ] && cp -a /opt/planka/private/attachments/. "$BK/attachments/"
# Support both old (pre-v2) and new (v2) directory layouts
if [ -d /opt/planka/data/protected ]; then
[ -d /opt/planka/data/protected/favicons ] && cp -a /opt/planka/data/protected/favicons/. "$BK/favicons/"
[ -d /opt/planka/data/protected/user-avatars ] && cp -a /opt/planka/data/protected/user-avatars/. "$BK/user-avatars/"
[ -d /opt/planka/data/protected/background-images ] && cp -a /opt/planka/data/protected/background-images/. "$BK/background-images/"
[ -d /opt/planka/data/private/attachments ] && cp -a /opt/planka/data/private/attachments/. "$BK/attachments/"
else
[ -d /opt/planka/public/favicons ] && cp -a /opt/planka/public/favicons/. "$BK/favicons/"
[ -d /opt/planka/public/user-avatars ] && cp -a /opt/planka/public/user-avatars/. "$BK/user-avatars/"
[ -d /opt/planka/public/background-images ] && cp -a /opt/planka/public/background-images/. "$BK/background-images/"
[ -d /opt/planka/private/attachments ] && cp -a /opt/planka/private/attachments/. "$BK/attachments/"
fi
rm -rf /opt/planka
msg_ok "Backed up data"
@@ -53,15 +61,16 @@ function update_script() {
msg_info "Restoring data"
[ -f "$BK/.env" ] && mv "$BK/.env" /opt/planka/.env
mkdir -p /opt/planka/public/{favicons,user-avatars,background-images} /opt/planka/private/attachments
[ -d "$BK/favicons" ] && cp -a "$BK/favicons/." /opt/planka/public/favicons/
[ -d "$BK/user-avatars" ] && cp -a "$BK/user-avatars/." /opt/planka/public/user-avatars/
[ -d "$BK/background-images" ] && cp -a "$BK/background-images/." /opt/planka/public/background-images/
[ -d "$BK/attachments" ] && cp -a "$BK/attachments/." /opt/planka/private/attachments/
# Planka v2 uses unified data directory structure
mkdir -p /opt/planka/data/protected/{favicons,user-avatars,background-images} /opt/planka/data/private/attachments
[ -d "$BK/favicons" ] && cp -a "$BK/favicons/." /opt/planka/data/protected/favicons/
[ -d "$BK/user-avatars" ] && cp -a "$BK/user-avatars/." /opt/planka/data/protected/user-avatars/
[ -d "$BK/background-images" ] && cp -a "$BK/background-images/." /opt/planka/data/protected/background-images/
[ -d "$BK/attachments" ] && cp -a "$BK/attachments/." /opt/planka/data/private/attachments/
rm -rf "$BK"
msg_ok "Restored data"
msg_ok "Migrate Database"
msg_info "Migrate Database"
cd /opt/planka
$STD npm run db:upgrade
$STD npm run db:migrate


@@ -1,7 +1,7 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: tteck (tteckster) | MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.plex.tv/
@@ -24,28 +24,54 @@ function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /etc/apt/sources.list.d/plexmediaserver.list ]] &&
[[ ! -f /etc/apt/sources.list.d/plexmediaserver.sources ]]; then
if ! dpkg -l plexmediaserver &>/dev/null; then
msg_error "No ${APP} Installation Found!"
exit
fi
UPD=$(msg_menu "Plex Update Options" \
"1" "Update LXC" \
"2" "Install plexupdate")
if [ "$UPD" == "1" ]; then
msg_info "Updating ${APP} LXC"
$STD apt update
$STD apt -y upgrade
msg_ok "Updated ${APP} LXC"
msg_ok "Updated successfully!"
exit
# Migrate from old repository to new one if needed
if [[ -f /etc/apt/sources.list.d/plexmediaserver.sources ]]; then
local current_uri
current_uri=$(grep -oP '(?<=URIs: ).*' /etc/apt/sources.list.d/plexmediaserver.sources 2>/dev/null || true)
if [[ "$current_uri" == *"downloads.plex.tv/repo/deb"* ]]; then
msg_info "Migrating to new Plex repository"
rm -f /etc/apt/sources.list.d/plexmediaserver.sources
rm -f /usr/share/keyrings/PlexSign.asc
setup_deb822_repo \
"plexmediaserver" \
"https://downloads.plex.tv/plex-keys/PlexSign.v2.key" \
"https://repo.plex.tv/deb/" \
"public" \
"main"
msg_ok "Migrated to new Plex repository"
fi
elif [[ -f /etc/apt/sources.list.d/plexmediaserver.list ]]; then
msg_info "Migrating to new Plex repository (deb822)"
rm -f /etc/apt/sources.list.d/plexmediaserver.list
rm -f /etc/apt/sources.list.d/plex*
rm -f /usr/share/keyrings/PlexSign.asc
setup_deb822_repo \
"plexmediaserver" \
"https://downloads.plex.tv/plex-keys/PlexSign.v2.key" \
"https://repo.plex.tv/deb/" \
"public" \
"main"
msg_ok "Migrated to new Plex repository (deb822)"
fi
if [ "$UPD" == "2" ]; then
set +e
bash -c "$(curl -fsSL https://raw.githubusercontent.com/mrworf/plexupdate/master/extras/installer.sh)"
msg_ok "Updated successfully!"
exit
if [[ -f /usr/local/bin/plexupdate ]] || [[ -d /opt/plexupdate ]]; then
msg_info "Removing legacy plexupdate"
rm -rf /opt/plexupdate /usr/local/bin/plexupdate
crontab -l 2>/dev/null | grep -v plexupdate | crontab - 2>/dev/null || true
msg_ok "Removed legacy plexupdate"
fi
msg_info "Updating Plex Media Server"
$STD apt update
$STD apt install -y plexmediaserver
msg_ok "Updated Plex Media Server"
msg_ok "Updated successfully!"
exit
}
start
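The migration above calls `setup_deb822_repo` with a key URL, repo URI, suite (`public`), and component (`main`). For orientation, a deb822-style `.sources` file built from those arguments would look roughly like this — field names follow Debian's deb822 source format; the keyring path is an assumption, since the helper's output location isn't shown in this diff:

```text
# Hypothetical /etc/apt/sources.list.d/plexmediaserver.sources
Types: deb
URIs: https://repo.plex.tv/deb/
Suites: public
Components: main
Signed-By: /usr/share/keyrings/plexmediaserver.gpg
```

Unlike the legacy one-line `.list` format, deb822 ties the repository to an explicit keyring via `Signed-By`, which is why the old `.list` file and `PlexSign.asc` key are removed first.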


@@ -71,6 +71,7 @@ EOF
$STD php artisan migrate --seed --force --no-interaction
chown -R www-data:www-data /opt/pterodactyl-panel/*
chmod -R 755 /opt/pterodactyl-panel/storage /opt/pterodactyl-panel/bootstrap/cache/
ln -s /opt/pterodactyl-panel /var/www/pterodactyl
rm -rf "/opt/pterodactyl-panel/panel.tar.gz"
echo "${RELEASE}" >/opt/${APP}_version.txt
msg_ok "Updated $APP to v${RELEASE}"


@@ -23,7 +23,7 @@ function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /root/.config/recyclarr/recyclarr.yml ]]; then
if [[ ! -f /root/.config/recyclarr/recyclarr.yml ]] && [[ ! -d /root/.config/recyclarr/configs ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
@@ -33,6 +33,20 @@ function update_script() {
fetch_and_deploy_gh_release "recyclarr" "recyclarr/recyclarr" "prebuild" "latest" "/usr/local/bin" "recyclarr-linux-x64.tar.xz"
# Migrate includes from configs/ to includes/ (recyclarr v8)
RECYCLARR_DIR="/root/.config/recyclarr"
mkdir -p "$RECYCLARR_DIR/includes"
if [[ -d "$RECYCLARR_DIR/configs" ]]; then
for item in "$RECYCLARR_DIR/configs"/*/; do
[[ -d "$item" ]] || continue
dir_name=$(basename "$item")
# Only move subdirs that look like include dirs (not the configs themselves)
if [[ "$dir_name" != "configs" ]] && [[ ! -d "$RECYCLARR_DIR/includes/$dir_name" ]]; then
mv "$item" "$RECYCLARR_DIR/includes/"
fi
done
fi
msg_ok "Updated successfully!"
fi
exit

ct/sure.sh (new file, 97 lines)

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: vhsdream
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://sure.am
APP="Sure"
var_tags="${var_tags:-finance}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-6}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/sure ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "Sure" "we-promise/sure"; then
if [[ ! -f /etc/systemd/system/sure-worker.service ]]; then
cat <<EOF >/etc/systemd/system/sure-worker.service
[Unit]
Description=Sure Background Worker (Sidekiq)
After=network.target redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/sure
Environment=RAILS_ENV=production
Environment=BUNDLE_DEPLOYMENT=1
Environment=BUNDLE_WITHOUT=development
Environment=PATH=/root/.rbenv/shims:/root/.rbenv/bin:/usr/bin:/usr/local/bin:/sbin:/bin
EnvironmentFile=/etc/sure/.env
ExecStart=/opt/sure/bin/bundle exec sidekiq -e production
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q sure-worker
msg_info "Stopping Service"
$STD systemctl stop sure
msg_ok "Stopped Service"
else
msg_info "Stopping services"
$STD systemctl stop sure-worker sure
msg_ok "Stopped services"
fi
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Sure" "we-promise/sure" "tarball" "latest" "/opt/sure"
RUBY_VERSION="$(cat /opt/sure/.ruby-version)" RUBY_INSTALL_RAILS=false setup_ruby
msg_info "Updating Sure"
source ~/.profile
cd /opt/sure
export RAILS_ENV=production
export BUNDLE_DEPLOYMENT=1
export BUNDLE_WITHOUT=development
$STD ./bin/bundle install
$STD ./bin/bundle exec bootsnap precompile --gemfile -j 0
$STD ./bin/bundle exec bootsnap precompile -j 0 app/ lib/
export SECRET_KEY_BASE_DUMMY=1 && $STD ./bin/rails assets:precompile
unset SECRET_KEY_BASE_DUMMY
msg_ok "Updated Sure"
msg_info "Starting Services"
$STD systemctl start sure sure-worker
msg_ok "Started Services"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"


@@ -134,7 +134,7 @@ update_links() {
# Find all files containing the old repo reference
while IFS= read -r file; do
# Count occurrences
local count=$(grep -c "github.com/$old_repo/$old_name" "$file" 2>/dev/null || echo 0)
local count=$(grep -E -c "(github.com|githubusercontent.com)/$old_repo/$old_name" "$file" 2>/dev/null || echo 0)
if [[ $count -gt 0 ]]; then
# Backup original
@@ -143,16 +143,16 @@ update_links() {
# Replace links - use different sed syntax for BSD/macOS vs GNU sed
if sed --version &>/dev/null 2>&1; then
# GNU sed
sed -i "s|github.com/$old_repo/$old_name|github.com/$new_owner/$new_repo|g" "$file"
sed -E -i "s@(github.com|githubusercontent.com)/$old_repo/$old_name@\\1/$new_owner/$new_repo@g" "$file"
else
# BSD sed (macOS)
sed -i '' "s|github.com/$old_repo/$old_name|github.com/$new_owner/$new_repo|g" "$file"
sed -E -i '' "s@(github.com|githubusercontent.com)/$old_repo/$old_name@\\1/$new_owner/$new_repo@g" "$file"
fi
((files_updated++))
print_success "Updated $file ($count links)"
fi
done < <(find "$search_path" -type f \( -name "*.md" -o -name "*.sh" -o -name "*.func" -o -name "*.json" \) -not -path "*/.git/*" 2>/dev/null | xargs grep -l "github.com/$old_repo/$old_name" 2>/dev/null)
done < <(find "$search_path" -type f \( -name "*.md" -o -name "*.sh" -o -name "*.func" -o -name "*.json" \) -not -path "*/.git/*" 2>/dev/null | xargs grep -E -l "(github.com|githubusercontent.com)/$old_repo/$old_name" 2>/dev/null)
return $files_updated
}
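The portable-`sed` branch in `update_links()` above (GNU vs BSD/macOS `-i` handling) can be exercised in isolation. The repo names below are hypothetical placeholders, not values from the script:

```shell
# Portable in-place replacement of repo links, mirroring update_links().
# "old-owner/old-repo" and "new-owner/new-repo" are placeholder values.
tmpfile=$(mktemp)
echo "see github.com/old-owner/old-repo and raw.githubusercontent.com/old-owner/old-repo" >"$tmpfile"

expr='s@(github.com|githubusercontent.com)/old-owner/old-repo@\1/new-owner/new-repo@g'
if sed --version >/dev/null 2>&1; then
  sed -E -i "$expr" "$tmpfile"    # GNU sed: -i needs no separate suffix argument
else
  sed -E -i '' "$expr" "$tmpfile" # BSD/macOS sed: -i requires an explicit (empty) suffix
fi

result=$(cat "$tmpfile")
rm -f "$tmpfile"
echo "$result"
```

On GNU systems the first branch runs, and both the `github.com` and `raw.githubusercontent.com` links end up pointing at `new-owner/new-repo`, since the `\1` backreference preserves whichever host matched.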

View File

@@ -0,0 +1,44 @@
{
"name": "Calibre-Web",
"slug": "calibre-web",
"categories": [
4
],
"date_created": "2026-02-20",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 8083,
"documentation": "https://github.com/janeczku/calibre-web/wiki",
"website": "https://github.com/janeczku/calibre-web",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/calibre-web.webp",
"config_path": "/opt/calibre-web/app.db",
"description": "Web app for browsing, reading and downloading eBooks from a Calibre database. Provides an attractive interface with mobile support, user management, and eBook conversion capabilities.",
"install_methods": [
{
"type": "default",
"script": "ct/calibre-web.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 8,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "No credentials are set by this script. Complete setup and create credentials in the first-run wizard.",
"type": "info"
},
{
"text": "Upload your Calibre library metadata.db during first setup wizard.",
"type": "info"
}
]
}

View File

@@ -0,0 +1,44 @@
{
"name": "CR*NMASTER",
"slug": "cronmaster",
"categories": [
1
],
"date_created": "2026-02-22",
"type": "addon",
"updateable": true,
"privileged": false,
"interface_port": 3000,
"documentation": "https://github.com/fccview/cronmaster",
"website": "https://github.com/fccview/cronmaster",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/cr-nmaster.webp",
"config_path": "/opt/cronmaster/.env",
"description": "Self-hosted cron job scheduler with web UI, live logs, auth and prebuilt binaries provided upstream.",
"install_methods": [
{
"type": "default",
"script": "tools/addon/cronmaster.sh",
"resources": {
"cpu": null,
"ram": null,
"hdd": null,
"os": null,
"version": null
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "Credentials are saved to: /root/cronmaster.creds",
"type": "info"
},
{
"text": "Update with: update_cronmaster",
"type": "info"
}
]
}

View File

@@ -0,0 +1,44 @@
{
"name": "Databasus",
"slug": "databasus",
"categories": [
7
],
"date_created": "2026-02-17",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 80,
"documentation": "https://github.com/databasus/databasus",
"website": "https://github.com/databasus/databasus",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/databasus.webp",
"config_path": "/opt/databasus/.env",
"description": "Free, open source and self-hosted solution for automated PostgreSQL backups. With multiple storage options, notifications, scheduling, and a beautiful web interface for managing database backups across multiple PostgreSQL instances.",
"install_methods": [
{
"type": "default",
"script": "ct/databasus.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 8,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": "admin@localhost",
"password": "See /root/databasus.creds"
},
"notes": [
{
"text": "Supports PostgreSQL versions 12-18 with cloud and self-hosted instances",
"type": "info"
},
{
"text": "Features: Scheduled backups, multiple storage providers, notifications, encryption",
"type": "info"
}
]
}

View File

@@ -9,7 +9,7 @@
"updateable": true,
"privileged": false,
"interface_port": 3000,
"documentation": "https://dawarich.app/docs",
"documentation": "https://dawarich.app/docs/intro",
"website": "https://dawarich.app/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/dawarich.webp",
"config_path": "/opt/dawarich/.env",

View File

@@ -0,0 +1,44 @@
{
"name": "Frigate",
"slug": "frigate",
"categories": [
15
],
"date_created": "2026-02-23",
"type": "ct",
"updateable": false,
"privileged": false,
"config_path": "/config/config.yml",
"interface_port": 5000,
"documentation": "https://frigate.io/",
"website": "https://frigate.io/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/frigate-light.webp",
"description": "Frigate is a complete and local NVR (Network Video Recorder) with realtime AI object detection for CCTV cameras.",
"install_methods": [
{
"type": "default",
"script": "ct/frigate.sh",
"resources": {
"cpu": 4,
"ram": 4096,
"hdd": 20,
"os": "Debian",
"version": "12"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "SemanticSearch is not pre-installed due to high resource requirements (8+ cores, 16-24GB RAM, GPU recommended). Manual configuration required if needed.",
"type": "info"
},
{
"text": "OpenVino detector may fail on older CPUs (pre-Haswell/AVX2). If you encounter 'Illegal instruction' errors, consider using alternative detectors.",
"type": "warning"
}
]
}

View File

@@ -1,44 +0,0 @@
{
"name": "Frigate",
"slug": "frigate",
"categories": [
15
],
"date_created": "2024-05-02",
"type": "ct",
"updateable": false,
"privileged": true,
"interface_port": 5000,
"documentation": "https://docs.frigate.video/",
"website": "https://frigate.video/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/frigate.webp",
"config_path": "",
"description": "Frigate is an open source NVR built around real-time AI object detection. All processing is performed locally on your own hardware, and your camera feeds never leave your home.",
"install_methods": [
{
"type": "default",
"script": "ct/frigate.sh",
"resources": {
"cpu": 4,
"ram": 4096,
"hdd": 20,
"os": "debian",
"version": "11"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "Discussions (explore more advanced methods): `https://github.com/tteck/Proxmox/discussions/2711`",
"type": "info"
},
{
"text": "go2rtc Interface port:`1984`",
"type": "info"
}
]
}

View File

@@ -1,5 +1,5 @@
{
"generated": "2026-02-16T12:14:16Z",
"generated": "2026-02-23T12:14:19Z",
"versions": [
{
"slug": "2fauth",
@@ -11,9 +11,9 @@
{
"slug": "adguard",
"repo": "AdguardTeam/AdGuardHome",
"version": "v0.107.71",
"version": "v0.107.72",
"pinned": false,
"date": "2025-12-08T14:34:55Z"
"date": "2026-02-19T15:37:49Z"
},
{
"slug": "adguardhome-sync",
@@ -39,9 +39,9 @@
{
"slug": "ampache",
"repo": "ampache/ampache",
"version": "7.8.0",
"version": "7.9.0",
"pinned": false,
"date": "2025-12-22T04:23:45Z"
"date": "2026-02-19T07:01:25Z"
},
{
"slug": "argus",
@@ -88,9 +88,9 @@
{
"slug": "backrest",
"repo": "garethgeorge/backrest",
"version": "v1.11.2",
"version": "v1.12.0",
"pinned": false,
"date": "2026-01-27T06:27:56Z"
"date": "2026-02-22T06:49:49Z"
},
{
"slug": "baikal",
@@ -116,16 +116,16 @@
{
"slug": "bentopdf",
"repo": "alam00000/bentopdf",
"version": "v2.2.1",
"version": "v2.3.1",
"pinned": false,
"date": "2026-02-14T16:33:47Z"
"date": "2026-02-21T09:04:27Z"
},
{
"slug": "beszel",
"repo": "henrygd/beszel",
"version": "v0.18.3",
"version": "v0.18.4",
"pinned": false,
"date": "2026-02-01T19:02:42Z"
"date": "2026-02-20T21:10:52Z"
},
{
"slug": "bichon",
@@ -158,9 +158,9 @@
{
"slug": "bookstack",
"repo": "BookStackApp/BookStack",
"version": "v25.12.3",
"version": "v25.12.7",
"pinned": false,
"date": "2026-01-29T15:29:25Z"
"date": "2026-02-19T23:36:55Z"
},
{
"slug": "byparr",
@@ -183,19 +183,26 @@
"pinned": false,
"date": "2025-07-29T16:39:18Z"
},
{
"slug": "calibre-web",
"repo": "janeczku/calibre-web",
"version": "0.6.26",
"pinned": false,
"date": "2026-02-06T21:17:44Z"
},
{
"slug": "checkmate",
"repo": "bluewave-labs/Checkmate",
"version": "v3.3",
"version": "v3.4.0",
"pinned": false,
"date": "2026-01-28T14:25:25Z"
"date": "2026-02-20T21:08:55Z"
},
{
"slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr",
"version": "v2.6.2",
"version": "v2.7.1",
"pinned": false,
"date": "2026-02-15T02:15:19Z"
"date": "2026-02-23T09:58:13Z"
},
{
"slug": "cloudreve",
@@ -207,23 +214,23 @@
{
"slug": "comfyui",
"repo": "comfyanonymous/ComfyUI",
"version": "v0.13.0",
"version": "v0.14.2",
"pinned": false,
"date": "2026-02-10T20:27:38Z"
"date": "2026-02-18T06:12:02Z"
},
{
"slug": "commafeed",
"repo": "Athou/commafeed",
"version": "6.2.0",
"version": "7.0.0",
"pinned": false,
"date": "2026-02-09T19:44:58Z"
"date": "2026-02-21T21:54:15Z"
},
{
"slug": "configarr",
"repo": "raydak-labs/configarr",
"version": "v1.20.0",
"version": "v1.22.0",
"pinned": false,
"date": "2026-01-10T21:25:47Z"
"date": "2026-02-20T21:55:46Z"
},
{
"slug": "convertx",
@@ -246,6 +253,13 @@
"pinned": false,
"date": "2026-02-11T17:11:46Z"
},
{
"slug": "cronmaster",
"repo": "fccview/cronmaster",
"version": "2.1.0",
"pinned": false,
"date": "2026-02-11T19:29:11Z"
},
{
"slug": "cryptpad",
"repo": "cryptpad/cryptpad",
@@ -253,6 +267,13 @@
"pinned": false,
"date": "2026-02-11T15:39:05Z"
},
{
"slug": "databasus",
"repo": "databasus/databasus",
"version": "v3.16.2",
"pinned": false,
"date": "2026-02-22T21:10:12Z"
},
{
"slug": "dawarich",
"repo": "Freika/dawarich",
@@ -263,9 +284,9 @@
{
"slug": "discopanel",
"repo": "nickheyer/discopanel",
"version": "v1.0.36",
"version": "v1.0.37",
"pinned": false,
"date": "2026-02-09T21:15:44Z"
"date": "2026-02-18T08:53:43Z"
},
{
"slug": "dispatcharr",
@@ -305,9 +326,9 @@
{
"slug": "drawio",
"repo": "jgraph/drawio",
"version": "v29.3.6",
"version": "v29.5.2",
"pinned": false,
"date": "2026-01-28T18:25:02Z"
"date": "2026-02-22T10:36:14Z"
},
{
"slug": "duplicati",
@@ -361,16 +382,16 @@
{
"slug": "firefly",
"repo": "firefly-iii/firefly-iii",
"version": "v6.4.22",
"version": "v6.4.23",
"pinned": false,
"date": "2026-02-15T18:43:08Z"
"date": "2026-02-20T07:02:05Z"
},
{
"slug": "fladder",
"repo": "DonutWare/Fladder",
"version": "v0.9.0",
"version": "v0.10.1",
"pinned": false,
"date": "2026-01-05T17:30:07Z"
"date": "2026-02-21T12:45:53Z"
},
{
"slug": "flaresolverr",
@@ -400,19 +421,26 @@
"pinned": false,
"date": "2026-01-25T18:20:14Z"
},
{
"slug": "frigate",
"repo": "blakeblackshear/frigate",
"version": "v0.16.4",
"pinned": true,
"date": "2026-01-29T00:42:14Z"
},
{
"slug": "gatus",
"repo": "TwiN/gatus",
"version": "v5.34.0",
"version": "v5.35.0",
"pinned": false,
"date": "2026-01-03T03:12:12Z"
"date": "2026-02-20T20:58:03Z"
},
{
"slug": "ghostfolio",
"repo": "ghostfolio/ghostfolio",
"version": "2.239.0",
"version": "2.242.0",
"pinned": false,
"date": "2026-02-15T09:51:16Z"
"date": "2026-02-22T10:01:44Z"
},
{
"slug": "gitea",
@@ -456,6 +484,13 @@
"pinned": false,
"date": "2026-02-13T15:22:31Z"
},
{
"slug": "gramps-web",
"repo": "gramps-project/gramps-web-api",
"version": "v3.7.1.1",
"pinned": false,
"date": "2026-01-30T09:15:46Z"
},
{
"slug": "grist",
"repo": "gristlabs/grist-core",
@@ -515,9 +550,9 @@
{
"slug": "homarr",
"repo": "homarr-labs/homarr",
"version": "v1.53.1",
"version": "v1.53.2",
"pinned": false,
"date": "2026-02-13T19:47:11Z"
"date": "2026-02-20T19:41:55Z"
},
{
"slug": "homebox",
@@ -550,16 +585,16 @@
{
"slug": "huntarr",
"repo": "plexguide/Huntarr.io",
"version": "9.2.4.1",
"version": "9.4.1",
"pinned": false,
"date": "2026-02-12T22:17:47Z"
"date": "2026-02-23T08:46:37Z"
},
{
"slug": "immich-public-proxy",
"repo": "alangrainger/immich-public-proxy",
"version": "v1.15.2",
"version": "v1.15.3",
"pinned": false,
"date": "2026-02-16T08:54:59Z"
"date": "2026-02-16T22:54:27Z"
},
{
"slug": "inspircd",
@@ -578,16 +613,16 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.60",
"version": "v5.12.65",
"pinned": false,
"date": "2026-02-15T00:11:31Z"
"date": "2026-02-21T01:03:52Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1127",
"version": "v0.24.1184",
"pinned": false,
"date": "2026-02-16T08:43:41Z"
"date": "2026-02-23T05:55:36Z"
},
{
"slug": "jellystat",
@@ -634,9 +669,9 @@
{
"slug": "keycloak",
"repo": "keycloak/keycloak",
"version": "26.5.3",
"version": "26.5.4",
"pinned": false,
"date": "2026-02-10T07:30:08Z"
"date": "2026-02-20T09:19:45Z"
},
{
"slug": "kimai",
@@ -697,16 +732,16 @@
{
"slug": "leantime",
"repo": "Leantime/leantime",
"version": "v3.6.2",
"version": "v3.7.1",
"pinned": false,
"date": "2026-01-29T16:37:00Z"
"date": "2026-02-22T01:25:16Z"
},
{
"slug": "librenms",
"repo": "librenms/librenms",
"version": "26.1.1",
"version": "26.2.0",
"pinned": false,
"date": "2026-01-12T23:26:02Z"
"date": "2026-02-16T12:15:13Z"
},
{
"slug": "librespeed-rust",
@@ -718,9 +753,9 @@
{
"slug": "libretranslate",
"repo": "LibreTranslate/LibreTranslate",
"version": "v1.9.0",
"version": "v1.9.3",
"pinned": false,
"date": "2026-02-10T19:05:48Z"
"date": "2026-02-21T19:08:33Z"
},
{
"slug": "lidarr",
@@ -739,9 +774,9 @@
{
"slug": "linkstack",
"repo": "linkstackorg/linkstack",
"version": "v4.8.5",
"version": "v4.8.6",
"pinned": false,
"date": "2026-01-26T18:46:52Z"
"date": "2026-02-17T16:53:47Z"
},
{
"slug": "linkwarden",
@@ -781,9 +816,9 @@
{
"slug": "mail-archiver",
"repo": "s1t5/mail-archiver",
"version": "2602.1",
"version": "2602.3",
"pinned": false,
"date": "2026-02-11T06:23:11Z"
"date": "2026-02-22T20:24:18Z"
},
{
"slug": "managemydamnlife",
@@ -802,9 +837,9 @@
{
"slug": "mealie",
"repo": "mealie-recipes/mealie",
"version": "v3.10.2",
"version": "v3.11.0",
"pinned": false,
"date": "2026-02-04T23:32:32Z"
"date": "2026-02-17T04:13:35Z"
},
{
"slug": "mediamanager",
@@ -816,9 +851,9 @@
{
"slug": "mediamtx",
"repo": "bluenviron/mediamtx",
"version": "v1.16.1",
"version": "v1.16.2",
"pinned": false,
"date": "2026-02-07T18:58:24Z"
"date": "2026-02-22T17:31:41Z"
},
{
"slug": "meilisearch",
@@ -837,9 +872,9 @@
{
"slug": "metube",
"repo": "alexta69/metube",
"version": "2026.02.14",
"version": "2026.02.22",
"pinned": false,
"date": "2026-02-14T07:49:11Z"
"date": "2026-02-22T00:58:45Z"
},
{
"slug": "miniflux",
@@ -886,9 +921,9 @@
{
"slug": "netbox",
"repo": "netbox-community/netbox",
"version": "v4.5.2",
"version": "v4.5.3",
"pinned": false,
"date": "2026-02-03T13:54:26Z"
"date": "2026-02-17T15:39:18Z"
},
{
"slug": "nextcloud-exporter",
@@ -956,9 +991,9 @@
{
"slug": "opencloud",
"repo": "opencloud-eu/opencloud",
"version": "v5.0.2",
"version": "v5.1.0",
"pinned": true,
"date": "2026-02-05T16:29:01Z"
"date": "2026-02-16T15:04:28Z"
},
{
"slug": "opengist",
@@ -970,9 +1005,9 @@
{
"slug": "ots",
"repo": "Luzifer/ots",
"version": "v1.21.1",
"version": "v1.21.2",
"pinned": false,
"date": "2026-02-16T12:12:23Z"
"date": "2026-02-20T16:20:03Z"
},
{
"slug": "outline",
@@ -1026,16 +1061,16 @@
{
"slug": "paperless-ngx",
"repo": "paperless-ngx/paperless-ngx",
"version": "v2.20.6",
"version": "v2.20.8",
"pinned": false,
"date": "2026-01-31T07:30:27Z"
"date": "2026-02-22T01:40:54Z"
},
{
"slug": "patchmon",
"repo": "PatchMon/PatchMon",
"version": "v1.4.0",
"version": "v1.4.2",
"pinned": false,
"date": "2026-02-13T10:39:03Z"
"date": "2026-02-20T13:15:31Z"
},
{
"slug": "paymenter",
@@ -1054,9 +1089,9 @@
{
"slug": "pelican-panel",
"repo": "pelican-dev/panel",
"version": "v1.0.0-beta32",
"version": "v1.0.0-beta33",
"pinned": false,
"date": "2026-02-09T22:15:44Z"
"date": "2026-02-18T21:37:11Z"
},
{
"slug": "pelican-wings",
@@ -1089,9 +1124,9 @@
{
"slug": "planka",
"repo": "plankanban/planka",
"version": "v2.0.0",
"version": "v2.0.1",
"pinned": false,
"date": "2026-02-11T13:50:10Z"
"date": "2026-02-17T15:26:55Z"
},
{
"slug": "plant-it",
@@ -1103,9 +1138,9 @@
{
"slug": "pocketbase",
"repo": "pocketbase/pocketbase",
"version": "v0.36.3",
"version": "v0.36.5",
"pinned": false,
"date": "2026-02-13T18:38:58Z"
"date": "2026-02-21T11:45:32Z"
},
{
"slug": "pocketid",
@@ -1180,9 +1215,9 @@
{
"slug": "pulse",
"repo": "rcourtman/Pulse",
"version": "v5.1.9",
"version": "v5.1.13",
"pinned": false,
"date": "2026-02-11T15:34:40Z"
"date": "2026-02-22T12:40:41Z"
},
{
"slug": "pve-scripts-local",
@@ -1201,23 +1236,23 @@
{
"slug": "qbittorrent-exporter",
"repo": "martabal/qbittorrent-exporter",
"version": "v1.13.2",
"version": "v1.13.3",
"pinned": false,
"date": "2025-12-13T22:59:03Z"
"date": "2026-02-22T13:01:42Z"
},
{
"slug": "qdrant",
"repo": "qdrant/qdrant",
"version": "v1.16.3",
"version": "v1.17.0",
"pinned": false,
"date": "2025-12-19T17:45:42Z"
"date": "2026-02-20T11:11:50Z"
},
{
"slug": "qui",
"repo": "autobrr/qui",
"version": "v1.13.1",
"version": "v1.14.0",
"pinned": false,
"date": "2026-01-28T20:12:50Z"
"date": "2026-02-21T22:23:46Z"
},
{
"slug": "radarr",
@@ -1236,16 +1271,16 @@
{
"slug": "rclone",
"repo": "rclone/rclone",
"version": "v1.73.0",
"version": "v1.73.1",
"pinned": false,
"date": "2026-01-30T22:12:03Z"
"date": "2026-02-17T18:27:21Z"
},
{
"slug": "rdtclient",
"repo": "rogerfar/rdt-client",
"version": "v2.0.120",
"version": "v2.0.123",
"pinned": false,
"date": "2026-02-12T02:53:51Z"
"date": "2026-02-21T23:08:13Z"
},
{
"slug": "reactive-resume",
@@ -1257,9 +1292,9 @@
{
"slug": "recyclarr",
"repo": "recyclarr/recyclarr",
"version": "v7.5.2",
"version": "v8.2.1",
"pinned": false,
"date": "2025-11-30T22:08:46Z"
"date": "2026-02-22T19:19:49Z"
},
{
"slug": "reitti",
@@ -1306,9 +1341,9 @@
{
"slug": "scanopy",
"repo": "scanopy/scanopy",
"version": "v0.14.4",
"version": "v0.14.7",
"pinned": false,
"date": "2026-02-10T03:57:28Z"
"date": "2026-02-23T01:36:44Z"
},
{
"slug": "scraparr",
@@ -1334,9 +1369,9 @@
{
"slug": "semaphore",
"repo": "semaphoreui/semaphore",
"version": "v2.17.2",
"version": "v2.17.12",
"pinned": false,
"date": "2026-02-16T10:27:40Z"
"date": "2026-02-20T09:16:50Z"
},
{
"slug": "shelfmark",
@@ -1348,9 +1383,9 @@
{
"slug": "signoz",
"repo": "SigNoz/signoz-otel-collector",
"version": "v0.142.0",
"version": "v0.142.1",
"pinned": false,
"date": "2026-02-06T11:40:30Z"
"date": "2026-02-21T18:04:07Z"
},
{
"slug": "silverbullet",
@@ -1362,9 +1397,9 @@
{
"slug": "slskd",
"repo": "slskd/slskd",
"version": "0.24.3",
"version": "0.24.4",
"pinned": false,
"date": "2026-01-15T14:40:15Z"
"date": "2026-02-16T16:50:17Z"
},
{
"slug": "snipeit",
@@ -1376,9 +1411,9 @@
{
"slug": "snowshare",
"repo": "TuroYT/snowshare",
"version": "v1.3.5",
"version": "v1.3.6",
"pinned": false,
"date": "2026-02-11T10:24:51Z"
"date": "2026-02-19T11:06:29Z"
},
{
"slug": "sonarr",
@@ -1390,9 +1425,9 @@
{
"slug": "speedtest-tracker",
"repo": "alexjustesen/speedtest-tracker",
"version": "v1.13.9",
"version": "v1.13.10",
"pinned": false,
"date": "2026-02-08T21:48:49Z"
"date": "2026-02-20T03:14:47Z"
},
{
"slug": "spoolman",
@@ -1411,9 +1446,9 @@
{
"slug": "stirling-pdf",
"repo": "Stirling-Tools/Stirling-PDF",
"version": "v2.4.6",
"version": "v2.5.2",
"pinned": false,
"date": "2026-02-12T00:01:19Z"
"date": "2026-02-20T23:20:20Z"
},
{
"slug": "streamlink-webui",
@@ -1429,6 +1464,13 @@
"pinned": false,
"date": "2025-09-19T22:23:28Z"
},
{
"slug": "sure",
"repo": "we-promise/sure",
"version": "chart-v0.6.8-alpha.13",
"pinned": false,
"date": "2026-02-20T11:15:15Z"
},
{
"slug": "tandoor",
"repo": "TandoorRecipes/recipes",
@@ -1495,9 +1537,9 @@
{
"slug": "traccar",
"repo": "traccar/traccar",
"version": "v6.11.1",
"version": "v6.12.1",
"pinned": false,
"date": "2025-12-07T19:19:08Z"
"date": "2026-02-22T18:47:37Z"
},
{
"slug": "tracearr",
@@ -1544,9 +1586,9 @@
{
"slug": "tunarr",
"repo": "chrisbenincasa/tunarr",
"version": "v1.1.12",
"version": "v1.1.15",
"pinned": false,
"date": "2026-02-03T20:19:00Z"
"date": "2026-02-19T23:51:17Z"
},
{
"slug": "uhf",
@@ -1586,9 +1628,9 @@
{
"slug": "uptimekuma",
"repo": "louislam/uptime-kuma",
"version": "2.1.1",
"version": "2.1.3",
"pinned": false,
"date": "2026-02-13T16:07:33Z"
"date": "2026-02-19T05:37:30Z"
},
{
"slug": "vaultwarden",
@@ -1600,9 +1642,9 @@
{
"slug": "victoriametrics",
"repo": "VictoriaMetrics/VictoriaMetrics",
"version": "v1.135.0",
"version": "v1.136.0",
"pinned": false,
"date": "2026-02-02T14:20:15Z"
"date": "2026-02-16T13:17:50Z"
},
{
"slug": "vikunja",
@@ -1628,9 +1670,9 @@
{
"slug": "wanderer",
"repo": "meilisearch/meilisearch",
"version": "v1.35.0",
"version": "v1.36.0",
"pinned": false,
"date": "2026-02-02T09:57:03Z"
"date": "2026-02-23T08:13:32Z"
},
{
"slug": "warracker",
@@ -1719,9 +1761,9 @@
{
"slug": "yubal",
"repo": "guillevc/yubal",
"version": "v0.6.0",
"version": "v0.6.1",
"pinned": false,
"date": "2026-02-15T17:47:56Z"
"date": "2026-02-18T23:24:16Z"
},
{
"slug": "zigbee2mqtt",
@@ -1754,9 +1796,9 @@
{
"slug": "zwave-js-ui",
"repo": "zwave-js/zwave-js-ui",
"version": "v11.11.0",
"version": "v11.12.0",
"pinned": false,
"date": "2026-02-03T13:13:05Z"
"date": "2026-02-19T10:14:19Z"
}
]
}

View File

@@ -0,0 +1,44 @@
{
"name": "Gramps Web",
"slug": "gramps-web",
"categories": [
12
],
"date_created": "2026-02-22",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 5000,
"documentation": "https://www.grampsweb.org/install_setup/setup/",
"website": "https://www.grampsweb.org/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/gramps.webp",
"config_path": "/opt/gramps-web/config/config.cfg",
"description": "Gramps Web is a collaborative genealogy platform for browsing, editing and sharing family trees through a modern web interface.",
"install_methods": [
{
"type": "default",
"script": "ct/gramps-web.sh",
"resources": {
"cpu": 2,
"ram": 4096,
"hdd": 20,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "On first access, create the owner account via the built-in onboarding wizard.",
"type": "info"
},
{
"text": "The initial deployment compiles the frontend and can take several minutes.",
"type": "warning"
}
]
}

View File

@@ -10,7 +10,7 @@
"updateable": true,
"privileged": false,
"interface_port": 8000,
"documentation": "https://maxdorninger.github.io/MediaManager/introduction.html",
"documentation": "https://maxdorninger.github.io/MediaManager/latest/",
"config_path": "/opt/mm/config/config.toml",
"website": "https://github.com/maxdorninger/MediaManager",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/mediamanager.webp",

View File

@@ -0,0 +1,35 @@
{
"name": "Sure",
"slug": "sure",
"categories": [
23
],
"date_created": "2026-02-20",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 3000,
"documentation": "https://github.com/we-promise/sure",
"website": "https://sure.am",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/sure-finance.webp",
"config_path": "/etc/sure/.env",
"description": "The personal finance app for everyone. NOT affiliated with or endorsed by Maybe Finance Inc..",
"install_methods": [
{
"type": "default",
"script": "ct/sure.sh",
"resources": {
"cpu": 2,
"ram": 2048,
"hdd": 6,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}

View File

@@ -0,0 +1,40 @@
{
"name": "TrueNAS Community Edition",
"slug": "truenas-vm",
"categories": [
2
],
"date_created": "2026-02-19",
"type": "vm",
"updateable": false,
"privileged": false,
"interface_port": null,
"documentation": "https://www.truenas.com/docs/",
"website": "https://www.truenas.com/truenas-community-edition/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/truenas-core.webp",
"config_path": "",
"description": "TrueNAS Community Edition is the world's most deployed storage software. Free, flexible and build on OpenZFS with Docker.",
"install_methods": [
{
"type": "default",
"script": "vm/truenas-vm.sh",
"resources": {
"cpu": 2,
"ram": 8192,
"hdd": 16,
"os": "Debian",
"version": null
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "Once the script finishes, proceed with the OS installation via the console. For more details, please refer to this discussion: `https://github.com/community-scripts/ProxmoxVE/discussions/11344`",
"type": "info"
}
]
}

View File

@@ -36,7 +36,7 @@ $STD apt install -y \
libavformat-dev
msg_ok "Installed Dependencies"
JAVA_VERSION="11" setup_java
JAVA_VERSION="17" setup_java
setup_mariadb
MARIADB_DB_NAME="guacamole_db" MARIADB_DB_USER="guacamole_user" setup_mariadb_db

View File

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: mikolaj92
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/janeczku/calibre-web
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
build-essential \
python3 \
python3-dev \
libldap2-dev \
libsasl2-dev \
libssl-dev \
imagemagick \
libpango-1.0-0 \
libharfbuzz0b \
libpangoft2-1.0-0 \
fonts-liberation
msg_ok "Installed Dependencies"
msg_info "Installing Calibre (for eBook conversion)"
$STD apt install -y calibre
msg_ok "Installed Calibre"
fetch_and_deploy_gh_release "Calibre-Web" "janeczku/calibre-web" "prebuild" "latest" "/opt/calibre-web" "calibre-web*.tar.gz"
setup_uv
msg_info "Installing Python Dependencies"
cd /opt/calibre-web
$STD uv venv
$STD uv pip install --python /opt/calibre-web/.venv/bin/python --no-cache-dir --upgrade pip setuptools wheel
$STD uv pip install --python /opt/calibre-web/.venv/bin/python --no-cache-dir -r requirements.txt
msg_ok "Installed Python Dependencies"
msg_info "Creating Service"
mkdir -p /opt/calibre-web/data
cat <<EOF >/etc/systemd/system/calibre-web.service
[Unit]
Description=Calibre-Web Service
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/opt/calibre-web
ExecStart=/opt/calibre-web/.venv/bin/python /opt/calibre-web/cps.py
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now calibre-web
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc
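The service files in these installers are generated with heredocs; a minimal standalone sketch of that pattern, written to a temp file here rather than `/etc/systemd/system`:

```shell
# Generate a minimal systemd unit via heredoc, as the installer above does.
# A real script writes to /etc/systemd/system and runs systemctl enable -q --now.
unit=$(mktemp)
cat <<EOF >"$unit"
[Unit]
Description=Demo Service
After=network.target

[Service]
Type=simple
ExecStart=/bin/true
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sections=$(grep -c '^\[' "$unit")  # count the [Unit]/[Service]/[Install] headers
rm -f "$unit"
echo "$sections"
```

Because the heredoc delimiter is unquoted, shell variables would expand inside the body, which is how the installers inject generated secrets and paths into unit and env files.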

View File

@@ -0,0 +1,171 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/databasus/databasus
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
nginx \
valkey
msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
setup_go
NODE_VERSION="24" setup_nodejs
fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Building Databasus (Patience)"
cd /opt/databasus/frontend
$STD npm ci
$STD npm run build
cd /opt/databasus/backend
$STD go mod tidy
$STD go mod download
$STD go install github.com/swaggo/swag/cmd/swag@latest
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
mkdir -p /databasus-data/{pgdata,temp,backups,data,logs}
mkdir -p /opt/databasus/ui/build
mkdir -p /opt/databasus/migrations
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations/* /opt/databasus/migrations/
chown -R postgres:postgres /databasus-data
msg_ok "Built Databasus"
msg_info "Configuring Databasus"
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install goose for migrations
$STD go install github.com/pressly/goose/v3/cmd/goose@latest
ln -sf /root/go/bin/goose /usr/local/bin/goose
cat <<EOF >/opt/databasus/.env
# Environment
ENV_MODE=production
# Server
SERVER_PORT=4005
SERVER_HOST=0.0.0.0
# Database
DATABASE_DSN=host=localhost user=postgres password=postgres dbname=databasus port=5432 sslmode=disable
DATABASE_URL=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
# Migrations
GOOSE_DRIVER=postgres
GOOSE_DBSTRING=postgres://postgres:postgres@localhost:5432/databasus?sslmode=disable
GOOSE_MIGRATION_DIR=/opt/databasus/migrations
# Valkey (Redis-compatible cache)
VALKEY_HOST=localhost
VALKEY_PORT=6379
# Security
JWT_SECRET=${JWT_SECRET}
ENCRYPTION_KEY=${ENCRYPTION_KEY}
# Paths
DATA_DIR=/databasus-data/data
BACKUP_DIR=/databasus-data/backups
LOG_DIR=/databasus-data/logs
EOF
chown postgres:postgres /opt/databasus/.env
chmod 600 /opt/databasus/.env
msg_ok "Configured Databasus"
msg_info "Configuring Valkey"
cat <<EOF >/etc/valkey/valkey.conf
port 6379
bind 127.0.0.1
protected-mode yes
save ""
maxmemory 256mb
maxmemory-policy allkeys-lru
EOF
systemctl enable -q --now valkey-server
systemctl restart valkey-server
msg_ok "Configured Valkey"
msg_info "Creating Database"
# Configure PostgreSQL to allow local password auth for databasus
PG_HBA="/etc/postgresql/17/main/pg_hba.conf"
if ! grep -q "databasus" "$PG_HBA"; then
sed -i '/^local\s*all\s*all/i local databasus postgres trust' "$PG_HBA"
sed -i '/^host\s*all\s*all\s*127/i host databasus postgres 127.0.0.1/32 trust' "$PG_HBA"
systemctl reload postgresql
fi
$STD sudo -u postgres psql -c "CREATE DATABASE databasus;" 2>/dev/null || true
$STD sudo -u postgres psql -c "ALTER USER postgres WITH SUPERUSER CREATEROLE CREATEDB;" 2>/dev/null || true
msg_ok "Created Database"
msg_info "Creating Databasus Service"
cat <<EOF >/etc/systemd/system/databasus.service
[Unit]
Description=Databasus - Database Backup Management
After=network.target postgresql.service valkey-server.service
Requires=postgresql.service valkey-server.service
[Service]
Type=simple
WorkingDirectory=/opt/databasus
EnvironmentFile=/opt/databasus/.env
ExecStart=/opt/databasus/databasus
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$STD systemctl daemon-reload
$STD systemctl enable -q --now databasus
msg_ok "Created Databasus Service"
msg_info "Configuring Nginx"
cat <<EOF >/etc/nginx/sites-available/databasus
server {
listen 80;
server_name _;
location / {
proxy_pass http://127.0.0.1:4005;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_cache_bypass \$http_upgrade;
proxy_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
EOF
ln -sf /etc/nginx/sites-available/databasus /etc/nginx/sites-enabled/databasus
rm -f /etc/nginx/sites-enabled/default
$STD nginx -t
$STD systemctl enable -q --now nginx
$STD systemctl reload nginx
msg_ok "Configured Nginx"
motd_ssh
customize
cleanup_lxc
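The `pg_hba.conf` edit in this installer inserts trust rules ahead of the catch-all entries using `sed`'s insert (`i`) address command; the same mechanics can be checked against a throwaway sample file (contents hypothetical):

```shell
# Insert a per-database rule before the generic "local all all" line,
# as the installer does for databasus; the file here is a disposable sample.
hba=$(mktemp)
printf 'local   all   all   peer\nhost    all   all   127.0.0.1/32   md5\n' >"$hba"
if ! grep -q "databasus" "$hba"; then
  # GNU sed: insert the given text before each line matching the address
  sed -i '/^local\s*all\s*all/i local databasus postgres trust' "$hba"
fi
first=$(head -n1 "$hba")
rm -f "$hba"
echo "$first"
```

The `grep -q` guard keeps the edit idempotent, so re-running the installer does not stack duplicate rules. Note that `\s` in the address is a GNU sed extension.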

View File

@@ -1,8 +1,8 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# Co-Author: remz1337
# Copyright (c) 2021-2026 community-scripts ORG
# Authors: MickLesk (CanbiZ)
# Co-Authors: remz1337
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://frigate.video/
@@ -14,60 +14,224 @@ setting_up_container
network_check
update_os
msg_info "Installing Dependencies (Patience)"
$STD apt-get install -y {git,ca-certificates,automake,build-essential,xz-utils,libtool,ccache,pkg-config,libgtk-3-dev,libavcodec-dev,libavformat-dev,libswscale-dev,libv4l-dev,libxvidcore-dev,libx264-dev,libjpeg-dev,libpng-dev,libtiff-dev,gfortran,openexr,libatlas-base-dev,libssl-dev,libtbb2,libtbb-dev,libdc1394-22-dev,libopenexr-dev,libgstreamer-plugins-base1.0-dev,libgstreamer1.0-dev,gcc,gfortran,libopenblas-dev,liblapack-dev,libusb-1.0-0-dev,jq,moreutils}
source /etc/os-release
if [[ "$VERSION_ID" != "12" ]]; then
msg_error "Frigate requires Debian 12 (Bookworm) due to Python 3.11 dependencies"
exit 1
fi
msg_info "Converting APT sources to DEB822 format"
if [ -f /etc/apt/sources.list ]; then
cat >/etc/apt/sources.list.d/debian.sources <<'EOF'
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm-updates
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://security.debian.org
Suites: bookworm-security
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
mv /etc/apt/sources.list /etc/apt/sources.list.bak
$STD apt update
fi
msg_ok "Converted APT sources"
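The conversion above replaces one-line sources.list entries with DEB822 stanzas. As a sketch, this is the mapping for a single repository (output path illustrative; the check only confirms each required field appears exactly once):

```shell
#!/usr/bin/env bash
# Sketch: DEB822 equivalent of a classic one-line APT source.
# One "Types/URIs/Suites/Components" stanza replaces each
# "deb URL suite components" line.
out=$(mktemp)

# Classic form:
#   deb http://deb.debian.org/debian bookworm main contrib
cat <<'EOF' >"$out"
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF

# Every field APT requires is present exactly once.
for field in Types URIs Suites Components Signed-By; do
  [ "$(grep -c "^${field}:" "$out")" -eq 1 ]
done
```

The Signed-By field pins the keyring used to verify this repository's metadata, which is the point of the keyring additions in this change set.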
msg_info "Installing Dependencies"
$STD apt install -y \
xz-utils \
python3 \
python3-dev \
python3-pip \
gcc \
pkg-config \
libhdf5-dev \
build-essential \
automake \
libtool \
ccache \
libusb-1.0-0-dev \
apt-transport-https \
cmake \
git \
libgtk-3-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
gfortran \
openexr \
libssl-dev \
libtbbmalloc2 \
libtbb-dev \
libdc1394-dev \
libopenexr-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer1.0-dev \
tclsh \
libopenblas-dev \
liblapack-dev \
make \
moreutils
msg_ok "Installed Dependencies"
msg_info "Setup Python3"
$STD apt-get install -y {python3,python3-dev,python3-setuptools,python3-distutils,python3-pip}
$STD pip install --upgrade pip
msg_ok "Setup Python3"
NODE_VERSION="22" setup_nodejs
msg_info "Installing go2rtc"
mkdir -p /usr/local/go2rtc/bin
cd /usr/local/go2rtc/bin
curl -fsSL "https://github.com/AlexxIT/go2rtc/releases/latest/download/go2rtc_linux_amd64" -o "go2rtc"
chmod +x go2rtc
$STD ln -svf /usr/local/go2rtc/bin/go2rtc /usr/local/bin/go2rtc
msg_ok "Installed go2rtc"
setup_hwaccel
msg_info "Installing Frigate v0.14.1 (Perseverance)"
cd ~
mkdir -p /opt/frigate/models
curl -fsSL "https://github.com/blakeblackshear/frigate/archive/refs/tags/v0.14.1.tar.gz" -o "frigate.tar.gz"
tar -xzf frigate.tar.gz -C /opt/frigate --strip-components 1
rm -rf frigate.tar.gz
cd /opt/frigate
$STD pip3 wheel --wheel-dir=/wheels -r /opt/frigate/docker/main/requirements-wheels.txt
cp -a /opt/frigate/docker/main/rootfs/. /
export TARGETARCH="amd64"
echo 'libc6 libraries/restart-without-asking boolean true' | debconf-set-selections
$STD /opt/frigate/docker/main/install_deps.sh
$STD apt update
$STD ln -svf /usr/lib/btbn-ffmpeg/bin/ffmpeg /usr/local/bin/ffmpeg
$STD ln -svf /usr/lib/btbn-ffmpeg/bin/ffprobe /usr/local/bin/ffprobe
export CCACHE_DIR=/root/.ccache
export CCACHE_MAXSIZE=2G
export APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=DontWarn
export PIP_BREAK_SYSTEM_PACKAGES=1
export NVIDIA_VISIBLE_DEVICES=all
export NVIDIA_DRIVER_CAPABILITIES="compute,video,utility"
export TOKENIZERS_PARALLELISM=true
export TRANSFORMERS_NO_ADVISORY_WARNINGS=1
export OPENCV_FFMPEG_LOGLEVEL=8
export HAILORT_LOGGER_PATH=NONE
fetch_and_deploy_gh_release "frigate" "blakeblackshear/frigate" "tarball" "v0.16.4" "/opt/frigate"
msg_info "Building Nginx"
$STD bash /opt/frigate/docker/main/build_nginx.sh
sed -e '/s6-notifyoncheck/ s/^#*/#/' -i /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run
ln -sf /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
msg_ok "Built Nginx"
msg_info "Building SQLite Extensions"
$STD bash /opt/frigate/docker/main/build_sqlite_vec.sh
msg_ok "Built SQLite Extensions"
fetch_and_deploy_gh_release "go2rtc" "AlexxIT/go2rtc" "singlefile" "latest" "/usr/local/go2rtc/bin" "go2rtc_linux_amd64"
msg_info "Installing Tempio"
sed -i 's|/rootfs/usr/local|/usr/local|g' /opt/frigate/docker/main/install_tempio.sh
$STD bash /opt/frigate/docker/main/install_tempio.sh
ln -sf /usr/local/tempio/bin/tempio /usr/local/bin/tempio
msg_ok "Installed Tempio"
msg_info "Building libUSB"
fetch_and_deploy_gh_release "libusb" "libusb/libusb" "tarball" "v1.0.26" "/opt/libusb"
cd /opt/libusb
$STD ./bootstrap.sh
$STD ./configure CC='ccache gcc' CXX='ccache g++' --disable-udev --enable-shared
$STD make -j "$(nproc)"
cd /opt/libusb/libusb
mkdir -p /usr/local/lib /usr/local/include/libusb-1.0 /usr/local/lib/pkgconfig
$STD bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la /usr/local/lib
install -c -m 644 libusb.h /usr/local/include/libusb-1.0
cd /opt/libusb/
install -c -m 644 libusb-1.0.pc /usr/local/lib/pkgconfig
ldconfig
msg_ok "Built libUSB"
msg_info "Installing Python Dependencies"
$STD pip3 install -r /opt/frigate/docker/main/requirements.txt
msg_ok "Installed Python Dependencies"
msg_info "Building Python Wheels (Patience)"
mkdir -p /wheels
sed -i 's|^SQLITE3_VERSION=.*|SQLITE3_VERSION="version-3.46.0"|g' /opt/frigate/docker/main/build_pysqlite3.sh
$STD bash /opt/frigate/docker/main/build_pysqlite3.sh
for i in {1..3}; do
$STD pip3 wheel --wheel-dir=/wheels -r /opt/frigate/docker/main/requirements-wheels.txt --default-timeout=300 --retries=3 && break
[[ $i -lt 3 ]] && sleep 10
done
msg_ok "Built Python Wheels"
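The loop above retries the wheel build up to three times with a ten-second pause between failures. The same pattern as a standalone helper (retry is a hypothetical name, not something the script defines):

```shell
#!/usr/bin/env bash
# Sketch: retry-with-pause, generalized from the wheel-build loop.
# retry ATTEMPTS DELAY CMD... runs CMD until it succeeds or
# ATTEMPTS is exhausted, sleeping DELAY seconds between tries.
retry() {
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    ((i < attempts)) && sleep "$delay"
  done
  return 1
}

# Demo: a command that succeeds on its third invocation.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
retry 3 0 flaky
echo "succeeded after $tries attempts"
```

Keeping the `&& break` form inline, as the script does, avoids defining a function but duplicates the attempt counting at every call site.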
NODE_VERSION="22" NODE_MODULE="yarn" setup_nodejs
msg_info "Downloading Inference Models"
mkdir -p /models /openvino-model
wget -q -O /edgetpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
wget -q -O /models/cpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite
cp /opt/frigate/labelmap.txt /labelmap.txt
msg_ok "Downloaded Inference Models"
msg_info "Downloading Audio Model"
wget -q -O /tmp/yamnet.tar.gz https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download
$STD tar xzf /tmp/yamnet.tar.gz -C /
mv /1.tflite /cpu_audio_model.tflite
cp /opt/frigate/audio-labelmap.txt /audio-labelmap.txt
rm -f /tmp/yamnet.tar.gz
msg_ok "Downloaded Audio Model"
msg_info "Installing HailoRT Runtime"
$STD bash /opt/frigate/docker/main/install_hailort.sh
cp -a /opt/frigate/docker/main/rootfs/. /
sed -i '/^.*unset DEBIAN_FRONTEND.*$/d' /opt/frigate/docker/main/install_deps.sh
echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections
echo "libedgetpu1-max libedgetpu/install-confirm-max boolean true" | debconf-set-selections
# Allow Frigate's Intel media packages to overwrite files from system GPU driver packages
echo 'force-overwrite' >/etc/dpkg/dpkg.cfg.d/force-overwrite
$STD bash /opt/frigate/docker/main/install_deps.sh
rm -f /etc/dpkg/dpkg.cfg.d/force-overwrite
$STD pip3 install -U /wheels/*.whl
ldconfig
msg_ok "Installed HailoRT Runtime"
msg_info "Installing OpenVino"
$STD pip3 install -r /opt/frigate/docker/main/requirements-ov.txt
msg_ok "Installed OpenVino"
msg_info "Building OpenVino Model"
cd /models
wget -q http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
$STD tar -zxf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz --no-same-owner
if $STD python3 /opt/frigate/docker/main/build_ov_model.py; then
cp /models/ssdlite_mobilenet_v2.xml /openvino-model/
cp /models/ssdlite_mobilenet_v2.bin /openvino-model/
wget -q https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt -O /openvino-model/coco_91cl_bkgr.txt
sed -i 's/truck/car/g' /openvino-model/coco_91cl_bkgr.txt
msg_ok "Built OpenVino Model"
else
msg_warn "OpenVino build failed (CPU may not support required instructions). Frigate will use CPU model."
fi
msg_info "Building Frigate Application (Patience)"
cd /opt/frigate
$STD pip3 install -r /opt/frigate/docker/main/requirements-dev.txt
$STD /opt/frigate/.devcontainer/initialize.sh
$STD bash /opt/frigate/.devcontainer/initialize.sh
$STD make version
cd /opt/frigate/web
$STD npm install
$STD npm run build
cp -r /opt/frigate/web/dist/* /opt/frigate/web/
cp -r /opt/frigate/config/. /config
sed -i '/^s6-svc -O \.$/s/^/#/' /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/frigate/run
msg_ok "Built Frigate Application"
msg_info "Configuring Frigate"
mkdir -p /config /media/frigate
cp -r /opt/frigate/config/. /config
curl -fsSL "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4" -o "/media/frigate/person-bicycle-car-detection.mp4"
echo "tmpfs /tmp/cache tmpfs defaults 0 0" >>/etc/fstab
cat <<EOF >/etc/frigate.env
DEFAULT_FFMPEG_VERSION="7.0"
INCLUDED_FFMPEG_VERSIONS="7.0:5.0"
EOF
cat <<EOF >/config/config.yml
mqtt:
enabled: false
cameras:
test:
ffmpeg:
#hwaccel_args: preset-vaapi
inputs:
- path: /media/frigate/person-bicycle-car-detection.mp4
input_args: -re -stream_loop -1 -fflags +genpts
@@ -78,96 +242,42 @@ cameras:
height: 1080
width: 1920
fps: 5
auth:
enabled: false
detect:
enabled: false
EOF
ln -sf /config/config.yml /opt/frigate/config/config.yml
if [[ "$CTTYPE" == "0" ]]; then
sed -i -e 's/^kvm:x:104:$/render:x:104:root,frigate/' -e 's/^render:x:105:root$/kvm:x:105:/' /etc/group
else
sed -i -e 's/^kvm:x:104:$/render:x:104:frigate/' -e 's/^render:x:105:$/kvm:x:105:/' /etc/group
fi
echo "tmpfs /tmp/cache tmpfs defaults 0 0" >>/etc/fstab
msg_ok "Installed Frigate"
if grep -q -o -m1 -E 'avx[^ ]*' /proc/cpuinfo; then
msg_ok "AVX Support Detected"
msg_info "Installing Openvino Object Detection Model (Resilience)"
$STD pip install -r /opt/frigate/docker/main/requirements-ov.txt
cd /opt/frigate/models
export ENABLE_ANALYTICS=NO
$STD /usr/local/bin/omz_downloader --name ssdlite_mobilenet_v2 --num_attempts 2
$STD /usr/local/bin/omz_converter --name ssdlite_mobilenet_v2 --precision FP16 --mo /usr/local/bin/mo
cd /
cp -r /opt/frigate/models/public/ssdlite_mobilenet_v2 openvino-model
curl -fsSL "https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt" -o "openvino-model/coco_91cl_bkgr.txt"
sed -i 's/truck/car/g' openvino-model/coco_91cl_bkgr.txt
if grep -q -o -m1 -E 'avx[^ ]*|sse4_2' /proc/cpuinfo; then
cat <<EOF >>/config/config.yml
ffmpeg:
hwaccel_args: auto
detectors:
ov:
detector01:
type: openvino
device: CPU
model:
path: /openvino-model/FP16/ssdlite_mobilenet_v2.xml
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
EOF
msg_ok "Installed Openvino Object Detection Model"
else
cat <<EOF >>/config/config.yml
ffmpeg:
hwaccel_args: auto
model:
path: /cpu_model.tflite
EOF
fi
msg_info "Installing Coral Object Detection Model (Patience)"
cd /opt/frigate
export CCACHE_DIR=/root/.ccache
export CCACHE_MAXSIZE=2G
curl -fsSL "https://github.com/libusb/libusb/archive/v1.0.26.zip" -o "v1.0.26.zip"
$STD unzip v1.0.26.zip
rm v1.0.26.zip
cd libusb-1.0.26
$STD ./bootstrap.sh
$STD ./configure --disable-udev --enable-shared
$STD make -j $(nproc --all)
cd /opt/frigate/libusb-1.0.26/libusb
mkdir -p /usr/local/lib
$STD /bin/bash ../libtool --mode=install /usr/bin/install -c libusb-1.0.la '/usr/local/lib'
mkdir -p /usr/local/include/libusb-1.0
$STD /usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0'
ldconfig
cd /
curl -fsSL "https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite" -o "edgetpu_model.tflite"
curl -fsSL "https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite" -o "cpu_model.tflite"
cp /opt/frigate/labelmap.txt /labelmap.txt
curl -fsSL "https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download" -o "yamnet-tflite-classification-tflite-v1.tar.gz"
tar xzf yamnet-tflite-classification-tflite-v1.tar.gz
rm -rf yamnet-tflite-classification-tflite-v1.tar.gz
mv 1.tflite cpu_audio_model.tflite
cp /opt/frigate/audio-labelmap.txt /audio-labelmap.txt
mkdir -p /media/frigate
curl -fsSL "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4" -o "/media/frigate/person-bicycle-car-detection.mp4"
msg_ok "Installed Coral Object Detection Model"
msg_info "Building Nginx with Custom Modules"
$STD /opt/frigate/docker/main/build_nginx.sh
sed -e '/s6-notifyoncheck/ s/^#*/#/' -i /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run
ln -sf /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
msg_ok "Built Nginx"
msg_info "Installing Tempio"
sed -i 's|/rootfs/usr/local|/usr/local|g' /opt/frigate/docker/main/install_tempio.sh
$STD /opt/frigate/docker/main/install_tempio.sh
ln -sf /usr/local/tempio/bin/tempio /usr/local/bin/tempio
msg_ok "Installed Tempio"
msg_ok "Configured Frigate"
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/create_directories.service
[Unit]
Description=Create necessary directories for logs
Description=Create necessary directories for Frigate logs
Before=frigate.service go2rtc.service nginx.service
[Service]
Type=oneshot
@@ -176,13 +286,11 @@ ExecStart=/bin/bash -c '/bin/mkdir -p /dev/shm/logs/{frigate,go2rtc,nginx} && /b
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now create_directories
sleep 3
cat <<EOF >/etc/systemd/system/go2rtc.service
[Unit]
Description=go2rtc service
After=network.target
After=create_directories.service
Description=go2rtc streaming service
After=network.target create_directories.service
StartLimitIntervalSec=0
[Service]
@@ -190,7 +298,8 @@ Type=simple
Restart=always
RestartSec=1
User=root
ExecStartPre=+rm /dev/shm/logs/go2rtc/current
EnvironmentFile=/etc/frigate.env
ExecStartPre=+rm -f /dev/shm/logs/go2rtc/current
ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/go2rtc/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '"
StandardOutput=file:/dev/shm/logs/go2rtc/current
StandardError=file:/dev/shm/logs/go2rtc/current
@@ -198,13 +307,11 @@ StandardError=file:/dev/shm/logs/go2rtc/current
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now go2rtc
sleep 3
cat <<EOF >/etc/systemd/system/frigate.service
[Unit]
Description=Frigate service
After=go2rtc.service
After=create_directories.service
Description=Frigate NVR service
After=go2rtc.service create_directories.service
StartLimitIntervalSec=0
[Service]
@@ -212,8 +319,8 @@ Type=simple
Restart=always
RestartSec=1
User=root
# Environment=PLUS_API_KEY=
ExecStartPre=+rm /dev/shm/logs/frigate/current
EnvironmentFile=/etc/frigate.env
ExecStartPre=+rm -f /dev/shm/logs/frigate/current
ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/frigate/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '"
StandardOutput=file:/dev/shm/logs/frigate/current
StandardError=file:/dev/shm/logs/frigate/current
@@ -221,13 +328,11 @@ StandardError=file:/dev/shm/logs/frigate/current
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now frigate
sleep 3
cat <<EOF >/etc/systemd/system/nginx.service
[Unit]
Description=Nginx service
After=frigate.service
After=create_directories.service
Description=Nginx reverse proxy for Frigate
After=frigate.service create_directories.service
StartLimitIntervalSec=0
[Service]
@@ -235,7 +340,7 @@ Type=simple
Restart=always
RestartSec=1
User=root
ExecStartPre=+rm /dev/shm/logs/nginx/current
ExecStartPre=+rm -f /dev/shm/logs/nginx/current
ExecStart=/bin/bash -c "bash /opt/frigate/docker/main/rootfs/etc/s6-overlay/s6-rc.d/nginx/run 2> >(/usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S ' >&2) | /usr/bin/ts '%%Y-%%m-%%d %%H:%%M:%%.S '"
StandardOutput=file:/dev/shm/logs/nginx/current
StandardError=file:/dev/shm/logs/nginx/current
@@ -243,8 +348,20 @@ StandardError=file:/dev/shm/logs/nginx/current
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable -q --now create_directories
sleep 2
systemctl enable -q --now go2rtc
sleep 2
systemctl enable -q --now frigate
sleep 2
systemctl enable -q --now nginx
msg_ok "Configured Services"
msg_ok "Created Services"
msg_info "Cleaning Up"
rm -rf /opt/libusb /wheels /models/*.tar.gz
msg_ok "Cleaned Up"
motd_ssh
customize


@@ -0,0 +1,118 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.grampsweb.org/
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
appstream \
build-essential \
ffmpeg \
gettext \
gobject-introspection \
gir1.2-gexiv2-0.10 \
gir1.2-gtk-3.0 \
gir1.2-osmgpsmap-1.0 \
gir1.2-pango-1.0 \
git \
graphviz \
libcairo2-dev \
libgirepository1.0-dev \
libglib2.0-dev \
libicu-dev \
libopencv-dev \
pkg-config \
poppler-utils \
python3-dev \
tesseract-ocr
msg_ok "Installed Dependencies"
PYTHON_VERSION="3.12" setup_uv
NODE_VERSION="22" setup_nodejs
fetch_and_deploy_gh_release "gramps-web-api" "gramps-project/gramps-web-api" "tarball" "latest" "/opt/gramps-web-api"
fetch_and_deploy_gh_release "gramps-web" "gramps-project/gramps-web" "tarball" "latest" "/opt/gramps-web/frontend"
msg_info "Setting up Gramps Web"
mkdir -p \
/opt/gramps-web/config \
/opt/gramps-web/data/cache/export \
/opt/gramps-web/data/cache/persistent \
/opt/gramps-web/data/cache/report \
/opt/gramps-web/data/cache/request \
/opt/gramps-web/data/cache/thumbnail \
/opt/gramps-web/data/gramps/grampsdb \
/opt/gramps-web/data/indexdir \
/opt/gramps-web/data/media \
/opt/gramps-web/data/users
SECRET_KEY="$(openssl rand -hex 32)"
cat <<EOF >/opt/gramps-web/config/config.cfg
TREE="Gramps Web"
SECRET_KEY="${SECRET_KEY}"
BASE_URL="http://${LOCAL_IP}:5000"
USER_DB_URI="sqlite:////opt/gramps-web/data/users/users.sqlite"
SEARCH_INDEX_DB_URI="sqlite:////opt/gramps-web/data/indexdir/search_index.db"
MEDIA_BASE_DIR="/opt/gramps-web/data/media"
STATIC_PATH="/opt/gramps-web/frontend/dist"
THUMBNAIL_CACHE_CONFIG={"CACHE_TYPE":"FileSystemCache","CACHE_DIR":"/opt/gramps-web/data/cache/thumbnail","CACHE_THRESHOLD":1000,"CACHE_DEFAULT_TIMEOUT":0}
REQUEST_CACHE_CONFIG={"CACHE_TYPE":"FileSystemCache","CACHE_DIR":"/opt/gramps-web/data/cache/request","CACHE_THRESHOLD":1000,"CACHE_DEFAULT_TIMEOUT":0}
PERSISTENT_CACHE_CONFIG={"CACHE_TYPE":"FileSystemCache","CACHE_DIR":"/opt/gramps-web/data/cache/persistent","CACHE_THRESHOLD":0,"CACHE_DEFAULT_TIMEOUT":0}
REPORT_DIR="/opt/gramps-web/data/cache/report"
EXPORT_DIR="/opt/gramps-web/data/cache/export"
EOF
$STD uv venv -c -p python3.12 /opt/gramps-web/venv
source /opt/gramps-web/venv/bin/activate
$STD uv pip install --no-cache-dir --upgrade pip setuptools wheel
$STD uv pip install --no-cache-dir gunicorn
$STD uv pip install --no-cache-dir /opt/gramps-web-api
cd /opt/gramps-web/frontend
export COREPACK_ENABLE_DOWNLOAD_PROMPT=0
$STD corepack enable
$STD npm install
$STD npm run build
cd /opt/gramps-web-api
GRAMPS_API_CONFIG=/opt/gramps-web/config/config.cfg \
ALEMBIC_CONFIG=/opt/gramps-web-api/alembic.ini \
GRAMPSHOME=/opt/gramps-web/data/gramps \
GRAMPS_DATABASE_PATH=/opt/gramps-web/data/gramps/grampsdb \
$STD /opt/gramps-web/venv/bin/python3 -m gramps_webapi user migrate
msg_ok "Set up Gramps Web"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/gramps-web.service
[Unit]
Description=Gramps Web Service
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/opt/gramps-web-api
Environment=GRAMPS_API_CONFIG=/opt/gramps-web/config/config.cfg
Environment=GRAMPSHOME=/opt/gramps-web/data/gramps
Environment=GRAMPS_DATABASE_PATH=/opt/gramps-web/data/gramps/grampsdb
Environment=PATH=/opt/gramps-web/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ExecStart=/opt/gramps-web/venv/bin/gunicorn -w 2 -b 0.0.0.0:5000 gramps_webapi.wsgi:app --timeout 120 --limit-request-line 8190
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now gramps-web
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc


@@ -13,6 +13,10 @@ setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y build-essential
msg_ok "Installed Dependencies"
PYTHON_VERSION="3.12" setup_uv
fetch_and_deploy_gh_release "huntarr" "plexguide/Huntarr.io" "tarball"


@@ -30,7 +30,7 @@ setup_meilisearch
fetch_and_deploy_gh_release "karakeep" "karakeep-app/karakeep" "tarball"
cd /opt/karakeep
MODULE_VERSION="$(jq -r '.packageManager | split("@")[1]' /opt/karakeep/package.json)"
NODE_VERSION="22" NODE_MODULE="pnpm@${MODULE_VERSION}" setup_nodejs
NODE_VERSION="24" NODE_MODULE="pnpm@${MODULE_VERSION}" setup_nodejs
msg_info "Installing karakeep"
export PUPPETEER_SKIP_DOWNLOAD="true"


@@ -35,10 +35,12 @@ $STD apt install -y \
python3-redis \
python3-setuptools \
python3-systemd \
python3-pip
python3-pip \
python3-psutil \
python3-command-runner
msg_ok "Installed Python Dependencies"
PHP_VERSION="8.4" PHP_FPM="YES" PHP_MODULE="snmp" setup_php
PHP_VERSION="8.4" PHP_FPM="YES" PHP_MODULE="cli,snmp,gmp" setup_php
setup_mariadb
setup_composer
PYTHON_VERSION="3.13" setup_uv
@@ -78,7 +80,7 @@ sed -i "s/listen = \/run\/php\/php8.4-fpm.sock/listen = \/run\/php-fpm-librenms.
msg_ok "Configured PHP-FPM"
msg_info "Configure Nginx"
cat >/etc/nginx/sites-enabled/librenms <<'EOF'
cat <<EOF >/etc/nginx/sites-enabled/librenms
server {
listen 80;
server_name ${LOCAL_IP};
@@ -89,7 +91,7 @@ server {
gzip on;
gzip_types text/css application/javascript text/javascript application/x-javascript image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
location / {
try_files $uri $uri/ /index.php?$query_string;
try_files \$uri \$uri/ /index.php?\$query_string;
}
location ~ [^/]\.php(/|$) {
fastcgi_pass unix:/run/php-fpm-librenms.sock;


@@ -64,7 +64,7 @@ $STD sudo -u cool coolconfig set-admin-password --user=admin --password="$COOLPA
echo "$COOLPASS" >~/.coolpass
msg_ok "Installed Collabora Online"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.0.2" "/usr/bin" "opencloud-*-linux-amd64"
fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "v5.1.0" "/usr/bin" "opencloud-*-linux-amd64"
msg_info "Configuring OpenCloud"
DATA_DIR="/var/lib/opencloud"


@@ -47,8 +47,8 @@ sed -i -e "s|DATABASE_URL=.*|DATABASE_URL=\"postgresql://$PG_DB_USER:$PG_DB_PASS
-e "/JWT_SECRET/s/[=$].*/=$JWT_SECRET/" \
-e "\|CORS_ORIGIN|s|localhost|$LOCAL_IP|" \
-e "/PORT=3001/aSERVER_PROTOCOL=http \\
SERVER_HOST=$LOCAL_IP \\
SERVER_PORT=3000" \
SERVER_HOST=$LOCAL_IP \\
SERVER_PORT=3000" \
-e '/_ENV=production/aTRUST_PROXY=1' \
-e '/REDIS_USER=.*/,+1d' /opt/patchmon/backend/.env


@@ -50,6 +50,7 @@ cp .env.sample .env
sed -i "s#http://localhost:1337#http://$LOCAL_IP:1337#g" /opt/planka/.env
sed -i "s#postgres@localhost#planka:$DB_PASS@localhost#g" /opt/planka/.env
sed -i "s#notsecretkey#$SECRET_KEY#g" /opt/planka/.env
mkdir -p /opt/planka/data/protected/{favicons,user-avatars,background-images} /opt/planka/data/private/attachments
$STD npm run db:init
msg_ok "Configured PLANKA"


@@ -1,7 +1,7 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: tteck (tteckster) | MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.plex.tv/
@@ -16,19 +16,16 @@ update_os
setup_hwaccel
msg_info "Setting Up Plex Media Server Repository"
curl -fsSL https://downloads.plex.tv/plex-keys/PlexSign.key | tee /usr/share/keyrings/PlexSign.asc >/dev/null
cat <<EOF >/etc/apt/sources.list.d/plexmediaserver.sources
Types: deb
URIs: https://downloads.plex.tv/repo/deb/
Suites: public
Components: main
Signed-By: /usr/share/keyrings/PlexSign.asc
EOF
setup_deb822_repo \
"plexmediaserver" \
"https://downloads.plex.tv/plex-keys/PlexSign.v2.key" \
"https://repo.plex.tv/deb/" \
"public" \
"main"
msg_ok "Set Up Plex Media Server Repository"
msg_info "Installing Plex Media Server"
$STD apt update
$STD apt -o Dpkg::Options::="--force-confold" install -y plexmediaserver
$STD apt install -y plexmediaserver
if [[ "$CTTYPE" == "0" ]]; then
sed -i -e 's/^ssl-cert:x:104:plex$/render:x:104:root,plex/' -e 's/^render:x:108:root$/ssl-cert:x:108:plex/' /etc/group
else


@@ -20,7 +20,7 @@ msg_ok "Installed Dependencies"
fetch_and_deploy_gh_release "recyclarr" "recyclarr/recyclarr" "prebuild" "latest" "/usr/local/bin" "recyclarr-linux-x64.tar.xz"
msg_info "Configuring Recyclarr"
mkdir -p /root/.config/recyclarr
mkdir -p /root/.config/recyclarr/{configs,includes}
$STD recyclarr config create
msg_ok "Configured Recyclarr"

install/sure-install.sh (new file, 110 lines)

@@ -0,0 +1,110 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: vhsdream
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://sure.am
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
build-essential \
redis-server \
pkg-config \
libpq-dev \
libvips
msg_ok "Installed Dependencies"
fetch_and_deploy_gh_release "Sure" "we-promise/sure" "tarball" "latest" "/opt/sure"
PG_VERSION="$(sed -n '/postgres:/s/[^[:digit:]]*//p' /opt/sure/compose.example.yml)" setup_postgresql
PG_DB_NAME=sure_production PG_DB_USER=sure_user setup_postgresql_db
RUBY_VERSION="$(cat /opt/sure/.ruby-version)" RUBY_INSTALL_RAILS=false setup_ruby
msg_info "Building Sure"
cd /opt/sure
export RAILS_ENV=production
export BUNDLE_DEPLOYMENT=1
export BUNDLE_WITHOUT=development
$STD ./bin/bundle install
$STD ./bin/bundle exec bootsnap precompile --gemfile -j 0
$STD ./bin/bundle exec bootsnap precompile -j 0 app/ lib/
export SECRET_KEY_BASE_DUMMY=1 && $STD ./bin/rails assets:precompile
unset SECRET_KEY_BASE_DUMMY
msg_ok "Built Sure"
msg_info "Configuring Sure"
KEY="$(openssl rand -hex 64)"
mkdir -p /etc/sure
mv /opt/sure/.env.example /etc/sure/.env
sed -i -e "/^SECRET_KEY_BASE=/s/secret-value/${KEY}/" \
-e 's/_KEY_BASE=.*$/&\n\nRAILS_FORCE_SSL=false \
\
# Change to true when using a reverse proxy \
RAILS_ASSUME_SSL=false/' \
-e "/POSTGRES_PASSWORD=/s/postgres/${PG_DB_PASS}/" \
-e "/POSTGRES_USER=/s/postgres/${PG_DB_USER}\\
POSTGRES_DB=${PG_DB_NAME}/" \
-e "s|^APP_DOMAIN=|&${LOCAL_IP}|" /etc/sure/.env
msg_ok "Configured Sure"
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/sure.service
[Unit]
Description=Sure Service
After=network.target
[Service]
Type=simple
WorkingDirectory=/opt/sure
Environment=RAILS_ENV=production
Environment=BUNDLE_DEPLOYMENT=1
Environment=BUNDLE_WITHOUT=development
Environment=PATH=/root/.rbenv/shims:/root/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
EnvironmentFile=/etc/sure/.env
ExecStartPre=/opt/sure/bin/rails db:prepare
ExecStart=/opt/sure/bin/rails server
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/sure-worker.service
[Unit]
Description=Sure Background Worker (Sidekiq)
After=network.target redis-server.service
[Service]
Type=simple
WorkingDirectory=/opt/sure
Environment=RAILS_ENV=production
Environment=BUNDLE_DEPLOYMENT=1
Environment=BUNDLE_WITHOUT=development
Environment=PATH=/root/.rbenv/shims:/root/.rbenv/bin:/usr/bin:/usr/local/bin:/sbin:/bin
EnvironmentFile=/etc/sure/.env
ExecStart=/opt/sure/bin/bundle exec sidekiq -e production
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$STD systemctl enable -q --now sure sure-worker
msg_ok "Created Services"
motd_ssh
customize
cleanup_lxc


@@ -28,9 +28,10 @@ setup_deb822_repo \
"stable" \
"main"
$STD apt install -y elasticsearch
echo "-Xms2g" >>/etc/elasticsearch/jvm.options
echo "-Xmx2g" >>/etc/elasticsearch/jvm.options
sed -i 's/^-Xms.*/-Xms2g/' /etc/elasticsearch/jvm.options
sed -i 's/^-Xmx.*/-Xmx2g/' /etc/elasticsearch/jvm.options
$STD /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment -b
systemctl daemon-reload
systemctl enable -q elasticsearch
systemctl restart -q elasticsearch
msg_ok "Setup Elasticsearch"
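The change above swaps the old echo-appends for sed substitutions: appending -Xms/-Xmx a second time would leave duplicate heap flags in jvm.options, whereas replacing the existing line is idempotent on re-runs. A minimal sketch (temp file stands in for /etc/elasticsearch/jvm.options):

```shell
#!/usr/bin/env bash
# Sketch: why heap settings are set with sed instead of appended.
# Substituting the existing -Xms line gives the same result no
# matter how many times it runs; echo >> would stack duplicates.
jvm=$(mktemp)
printf '%s\n' '-Xms1g' '-Xmx1g' >"$jvm"

# Run the substitution twice; the file is unchanged by the repeat.
sed -i 's/^-Xms.*/-Xms2g/' "$jvm"
sed -i 's/^-Xms.*/-Xms2g/' "$jvm"

[ "$(grep -c '^-Xms' "$jvm")" -eq 1 ]
grep -q '^-Xms2g$' "$jvm"
```

Duplicate JVM flags are not fatal (the last one wins), but they make the effective heap size hard to audit, which is what the sed form avoids.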


@@ -11,9 +11,35 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
load_functions
catch_errors
# Persist diagnostics setting inside container (exported from build.func)
# so addon scripts running later can find the user's choice
if [[ ! -f /usr/local/community-scripts/diagnostics ]]; then
mkdir -p /usr/local/community-scripts
echo "DIAGNOSTICS=${DIAGNOSTICS:-no}" >/usr/local/community-scripts/diagnostics
fi
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
# This function enables IPv6 if it's not disabled and sets verbose mode
verb_ip6() {
set_std_mode # Set STD mode based on VERBOSE
@@ -53,6 +79,7 @@ setting_up_container() {
fi
msg_ok "Set up Container OS"
msg_ok "Network Connected: ${BL}$(ip addr show | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | tail -n1)${CL}"
post_progress_to_api
}
# This function checks the network connection by pinging a known IP address and prompts the user to continue if the internet is not connected
@@ -85,8 +112,18 @@ network_check() {
update_os() {
msg_info "Updating Container OS"
$STD apk -U upgrade
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
msg_ok "Updated Container OS"
post_progress_to_api
}
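update_os above downloads tools.func into a variable, sources it, and then probes for a known function (fetch_and_deploy_gh_release) before continuing, since a truncated download can still source cleanly while defining only part of the file. The validate-after-source step in isolation (source_and_verify is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# Sketch: source a library file, then confirm a known function
# actually got defined; return the same exit code (6) the script
# uses for download failures.
source_and_verify() {
  local file=$1 probe=$2
  source "$file" || return 6
  declare -f "$probe" >/dev/null 2>&1 || return 6
}
```

Probing one representative function is cheap; a stricter variant could check every function the caller depends on.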
# This function modifies the message of the day (motd) and SSH settings
@@ -111,6 +148,7 @@ motd_ssh() {
# Start the sshd service
$STD /etc/init.d/sshd start
fi
post_progress_to_api
}
# Validate Timezone for some LXC's
@@ -147,5 +185,5 @@ EOF
echo "bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/${app}.sh)\"" >/usr/bin/update
chmod +x /usr/bin/update
post_progress_to_api
}


@@ -117,16 +117,17 @@ detect_repo_source
# - Canonical source of truth for ALL exit code mappings
# - Used by both api.func (telemetry) and error_handler.func (error display)
# - Supports:
# * Generic/Shell errors (1, 2, 124, 126-130, 134, 137, 139, 141, 143)
# * curl/wget errors (6, 7, 22, 28, 35)
# * Generic/Shell errors (1-3, 10, 124-132, 134, 137, 139, 141, 143-146)
# * curl/wget errors (4-8, 16, 18, 22-28, 30, 32-36, 39, 44-48, 51-52, 55-57, 59, 61, 63, 75, 78-79, 92, 95)
# * Package manager errors (APT, DPKG: 100-102, 255)
# * BSD sysexits (64-78)
# * Systemd/Service errors (150-154)
# * Python/pip/uv errors (160-162)
# * PostgreSQL errors (170-173)
# * MySQL/MariaDB errors (180-183)
# * MongoDB errors (190-193)
# * Proxmox custom codes (200-231)
# * Node.js/npm errors (243, 245-249)
# * Node.js/npm errors (239, 243, 245-249)
# - Returns description string for given exit code
# ------------------------------------------------------------------------------
explain_exit_code() {
@@ -135,30 +136,87 @@ explain_exit_code() {
# --- Generic / Shell ---
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
# --- curl / wget errors (commonly seen in downloads) ---
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
# --- Package manager / APT / DPKG ---
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
# --- BSD sysexits.h (64-78) ---
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
# --- Common shell/system errors ---
124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGCONT)" ;;
# --- Systemd / Service errors (150-154) ---
150) echo "Systemd: Service failed to start" ;;
@@ -166,7 +224,6 @@ explain_exit_code() {
152) echo "Permission denied (EACCES)" ;;
153) echo "Build/compile failed (make/gcc/cmake)" ;;
154) echo "Node.js: Native addon build failed (node-gyp)" ;;
# --- Python / pip / uv (160-162) ---
160) echo "Python: Virtualenv / uv environment missing or broken" ;;
161) echo "Python: Dependency resolution failed" ;;
@@ -217,7 +274,8 @@ explain_exit_code() {
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
# --- Node.js / npm / pnpm / yarn (239-249) ---
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
@@ -494,6 +552,7 @@ post_to_api() {
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "lxc",
"nsapp": "${NSAPP:-unknown}",
"status": "installing",
@@ -598,6 +657,7 @@ post_to_api_vm() {
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "vm",
"nsapp": "${NSAPP:-unknown}",
"status": "installing",
@@ -624,6 +684,31 @@ EOF
curl -fsS -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" &>/dev/null || true
POST_TO_API_DONE=true
}
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from host or container
# - Updates the existing telemetry record status to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# - Can be called multiple times safely
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
local app_name="${NSAPP:-${app:-unknown}}"
local telemetry_type="${TELEMETRY_TYPE:-lxc}"
curl -fsS -m 5 -X POST "${TELEMETRY_URL:-https://telemetry.community-scripts.org/telemetry}" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"${telemetry_type}\",\"nsapp\":\"${app_name}\",\"status\":\"configuring\"}" &>/dev/null || true
}
# ------------------------------------------------------------------------------
@@ -728,6 +813,7 @@ post_update_to_api() {
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}",
@@ -770,6 +856,7 @@ EOF
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}",
@@ -812,6 +899,7 @@ EOF
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}",
@@ -848,6 +936,9 @@ categorize_error() {
# Network errors (curl/wget)
6 | 7 | 22 | 35) echo "network" ;;
# Docker / Privileged mode required
10) echo "config" ;;
# Timeout errors
28 | 124 | 211) echo "timeout" ;;
@@ -922,6 +1013,63 @@ get_install_duration() {
echo $((now - INSTALL_START_TIME))
}
# ------------------------------------------------------------------------------
# _telemetry_report_exit()
#
# - Internal handler called by EXIT trap set in init_tool_telemetry()
# - Determines success/failure from exit code and reports via appropriate API
# - Arguments:
# * $1: exit_code from the script
# ------------------------------------------------------------------------------
_telemetry_report_exit() {
local ec="${1:-0}"
local status="success"
[[ "$ec" -ne 0 ]] && status="failed"
# Lazy name resolution: use explicit name, fall back to $APP, then "unknown"
local name="${TELEMETRY_TOOL_NAME:-${APP:-unknown}}"
if [[ "${TELEMETRY_TOOL_TYPE:-pve}" == "addon" ]]; then
post_addon_to_api "$name" "$status" "$ec"
else
post_tool_to_api "$name" "$status" "$ec"
fi
}
# ------------------------------------------------------------------------------
# init_tool_telemetry()
#
# - One-line telemetry setup for tools/addon scripts
# - Reads DIAGNOSTICS from /usr/local/community-scripts/diagnostics
# (persisted on PVE host during first build, and inside containers by install.func)
# - Starts install timer for duration tracking
# - Sets EXIT trap to automatically report success/failure on script exit
# - Arguments:
# * $1: tool_name (optional, falls back to $APP at exit time)
# * $2: type ("pve" for PVE host scripts, "addon" for container addons)
# - Usage:
# source <(curl -fsSL .../misc/api.func) 2>/dev/null || true
# init_tool_telemetry "post-pve-install" "pve"
# init_tool_telemetry "" "addon" # uses $APP at exit time
# ------------------------------------------------------------------------------
init_tool_telemetry() {
local name="${1:-}"
local type="${2:-pve}"
[[ -n "$name" ]] && TELEMETRY_TOOL_NAME="$name"
TELEMETRY_TOOL_TYPE="$type"
# Read diagnostics opt-in/opt-out
if [[ -f /usr/local/community-scripts/diagnostics ]]; then
DIAGNOSTICS=$(grep -i "^DIAGNOSTICS=" /usr/local/community-scripts/diagnostics 2>/dev/null | awk -F'=' '{print $2}') || true
fi
start_install_timer
# EXIT trap: automatically report telemetry when script ends
trap '_telemetry_report_exit "$?"' EXIT
}
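The EXIT-trap reporting wired up by init_tool_telemetry() is a standard bash idiom: the trap captures `$?` at script exit and hands it to a handler that maps it to a status. A minimal standalone sketch (handler name illustrative, output simplified to a plain echo instead of an API call):

```shell
# Minimal sketch of the trap-on-EXIT reporting idiom used above.
report_exit() {
  local ec="${1:-0}" status="success"
  [ "$ec" -ne 0 ] && status="failed"
  echo "status=${status} exit_code=${ec}"
}
# In a real script, register once near the top:
#   trap 'report_exit "$?"' EXIT
```

Because the trap runs on every exit path (normal return, `exit N`, or a `set -e` abort), success and failure are reported without any per-call bookkeeping.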
# ------------------------------------------------------------------------------
# post_tool_to_api()
#
@@ -969,7 +1117,8 @@ post_tool_to_api() {
cat <<EOF
{
"random_id": "${uuid}",
"execution_id": "${EXECUTION_ID:-${uuid}}",
"type": "pve",
"nsapp": "${tool_name}",
"status": "${status}",
"exit_code": ${exit_code},
@@ -1036,6 +1185,7 @@ post_addon_to_api() {
cat <<EOF
{
"random_id": "${uuid}",
"execution_id": "${EXECUTION_ID:-${uuid}}",
"type": "addon",
"nsapp": "${addon_name}",
"status": "${status}",
@@ -1127,6 +1277,7 @@ post_update_to_api_extended() {
cat <<EOF
{
"random_id": "${RANDOM_UUID}",
"execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
"type": "${TELEMETRY_TYPE:-lxc}",
"nsapp": "${NSAPP:-unknown}",
"status": "${pb_status}",

View File

@@ -42,9 +42,10 @@ variables() {
var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP.
INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern.
PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase
DIAGNOSTICS="no" # Safe default: no telemetry until user consents via diagnostics_check()
METHOD="default" # sets the METHOD variable to "default", used for the API call.
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable.
EXECUTION_ID="${RANDOM_UUID}" # Unique execution ID for telemetry record identification (unique-indexed in PocketBase)
SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log
combined_log="/tmp/install-${SESSION_ID}-combined.log" # Combined log (build + install) for failed installations
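The derived identifiers above are plain bash substring and interpolation operations; with an illustrative UUID value:

```shell
# Illustrative values only — shows how SESSION_ID and the log paths derive
# from RANDOM_UUID via bash substring expansion.
RANDOM_UUID="123e4567-e89b-12d3-a456-426614174000"
SESSION_ID="${RANDOM_UUID:0:8}"                        # first 8 chars
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log"
combined_log="/tmp/install-${SESSION_ID}-combined.log"
```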
@@ -297,7 +298,7 @@ validate_container_id() {
# Falls back gracefully if pvesh unavailable or returns empty
if command -v pvesh &>/dev/null; then
local cluster_ids
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true)
if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then
return 1
@@ -2787,93 +2788,85 @@ Advanced:
# diagnostics_check()
#
# - Ensures diagnostics config file exists at /usr/local/community-scripts/diagnostics
# - Asks user whether to send anonymous diagnostic data (first run only)
# - Saves DIAGNOSTICS=yes/no in the config file
# - Reads current diagnostics setting from existing file
# - Sets global DIAGNOSTICS variable for API telemetry opt-in/out
# ------------------------------------------------------------------------------
diagnostics_check() {
local config_dir="/usr/local/community-scripts"
local config_file="${config_dir}/diagnostics"
mkdir -p "$config_dir"
if [[ -f "$config_file" ]]; then
DIAGNOSTICS=$(awk -F '=' '/^DIAGNOSTICS/ {print $2}' "$config_file") || true
DIAGNOSTICS="${DIAGNOSTICS:-no}"
return
fi
local result
result=$(whiptail --backtitle "Proxmox VE Helper Scripts" \
--title "TELEMETRY & DIAGNOSTICS" \
--ok-button "Confirm" --cancel-button "Exit" \
--radiolist "\nHelp improve Community-Scripts by sharing anonymous data.\n\nWhat we collect:\n - Container resources (CPU, RAM, disk), OS & PVE version\n - Application name, install method and status\n\nWhat we DON'T collect:\n - No IP addresses, hostnames, or personal data\n\nYou can change this anytime in the Settings menu.\nPrivacy: https://github.com/community-scripts/telemetry-service/blob/main/docs/PRIVACY.md\n\nUse SPACE to select, ENTER to confirm." 22 76 2 \
"yes" "Yes, share anonymous data" OFF \
"no" "No, opt out" OFF \
3>&1 1>&2 2>&3) || result="no"
DIAGNOSTICS="${result:-no}"
cat <<EOF >"$config_file"
DIAGNOSTICS=${DIAGNOSTICS}
# Community-Scripts Telemetry Configuration
# https://telemetry.community-scripts.org
#
# This file stores your telemetry preference.
# Set DIAGNOSTICS=yes to share anonymous installation data.
# Set DIAGNOSTICS=no to disable telemetry.
#
# You can also change this via the Settings menu during installation.
#
# Data collected (when enabled):
# disk_size, core_count, ram_size, os_type, os_version,
# nsapp, method, pve_version, status, exit_code
#
# No personal data (IPs, hostnames, passwords) is ever collected.
# Privacy: https://github.com/community-scripts/telemetry-service/blob/main/docs/PRIVACY.md
EOF
}
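The early-return branch of diagnostics_check() — read `DIAGNOSTICS=` from the file if it exists, otherwise fall back to the safe default — is a pattern worth isolating. A sketch with an illustrative helper name:

```shell
# Sketch of the read-with-safe-default pattern above (helper name is
# illustrative; the real code inlines this in diagnostics_check).
read_diag_setting() {
  local file="$1" value=""
  if [ -f "$file" ]; then
    value=$(awk -F '=' '/^DIAGNOSTICS/ {print $2}' "$file")
  fi
  echo "${value:-no}"   # absent or empty file => telemetry stays off
}
```

Defaulting to "no" on any read failure keeps telemetry strictly opt-in.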
diagnostics_menu() {
local current="${DIAGNOSTICS:-no}"
local status_text="DISABLED"
[[ "$current" == "yes" ]] && status_text="ENABLED"
local dialog_text=(
"Telemetry is currently: ${status_text}\n\n"
"Anonymous data helps us improve scripts and track issues.\n"
"No personal data is ever collected.\n\n"
"More info: https://telemetry.community-scripts.org\n\n"
"Do you want to ${current:+change this setting}?"
)
if [[ "$current" == "yes" ]]; then
if whiptail --backtitle "Proxmox VE Helper Scripts" \
--title "TELEMETRY SETTINGS" \
--yesno "${dialog_text[*]}" 14 64 \
--yes-button "Disable" --no-button "Keep enabled"; then
DIAGNOSTICS="no"
sed -i 's/^DIAGNOSTICS=.*/DIAGNOSTICS=no/' /usr/local/community-scripts/diagnostics
whiptail --msgbox "Telemetry disabled.\n\nNote: Existing containers keep their current setting.\nNew containers will inherit this choice." 10 58
fi
else
if whiptail --backtitle "Proxmox VE Helper Scripts" \
--title "TELEMETRY SETTINGS" \
--yesno "${dialog_text[*]}" 14 64 \
--yes-button "Enable" --no-button "Keep disabled"; then
DIAGNOSTICS="yes"
sed -i 's/^DIAGNOSTICS=.*/DIAGNOSTICS=yes/' /usr/local/community-scripts/diagnostics
whiptail --msgbox "Telemetry enabled.\n\nNote: Existing containers keep their current setting.\nNew containers will inherit this choice." 10 58
fi
fi
}
@@ -3427,6 +3420,7 @@ start() {
VERBOSE="no"
set_std_mode
ensure_profile_loaded
get_lxc_ip
update_script
update_motd_ip
cleanup_lxc
@@ -3454,6 +3448,7 @@ start() {
;;
esac
ensure_profile_loaded
get_lxc_ip
update_script
update_motd_ip
cleanup_lxc
@@ -3559,6 +3554,7 @@ build_container() {
# Core exports for install.func
export DIAGNOSTICS="$DIAGNOSTICS"
export RANDOM_UUID="$RANDOM_UUID"
export EXECUTION_ID="$EXECUTION_ID"
export SESSION_ID="$SESSION_ID"
export CACHER="$APT_CACHER"
export CACHER_IP="$APT_CACHER_IP"
@@ -3664,9 +3660,6 @@ $PCT_OPTIONS_STRING"
exit 214
fi
msg_ok "Storage space validated"
fi
create_lxc_container || exit $?
@@ -3912,6 +3905,7 @@ EOF
for i in {1..10}; do
if pct status "$CTID" | grep -q "status: running"; then
msg_ok "Started LXC Container"
post_progress_to_api # Signal container is running
break
fi
sleep 1
@@ -3966,6 +3960,7 @@ EOF
echo -e "${YW}Container may have limited internet access. Installation will continue...${CL}"
else
msg_ok "Network in LXC is reachable (ping)"
post_progress_to_api # Signal network is ready
fi
fi
# Function to get correct GID inside container
@@ -4037,6 +4032,14 @@ EOF'
fi
msg_ok "Customized LXC Container"
post_progress_to_api # Signal ready for app installation
# Optional DNS override for retry scenarios (inside LXC, never on host)
if [[ "${DNS_RETRY_OVERRIDE:-false}" == "true" ]]; then
msg_info "Applying DNS retry override in LXC (8.8.8.8, 1.1.1.1)"
pct exec "$CTID" -- bash -c "printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' >/etc/resolv.conf" >/dev/null 2>&1 || true
msg_ok "DNS override applied in LXC"
fi
# Install SSH keys
install_ssh_keys_into_ct
@@ -4052,8 +4055,9 @@ EOF'
lxc-attach -n "$CTID" -- bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/install/${var_install}.sh)"
local lxc_exit=$?
# Keep error handling DISABLED during failure detection and recovery
# Re-enabling it here would cause any pct exec/pull failure to trigger
# error_handler() on the host, bypassing the recovery menu entirely
# Check for error flag file in container (more reliable than lxc-attach exit code)
local install_exit_code=0
@@ -4103,9 +4107,9 @@ EOF'
build_log_copied=true
fi
# Copy and append INSTALL_LOG from container
# Copy and append INSTALL_LOG from container (with timeout to prevent hangs)
local temp_install_log="/tmp/.install-temp-${SESSION_ID}.log"
if timeout 8 pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_install_log" 2>/dev/null; then
{
echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)"
@@ -4150,32 +4154,322 @@ EOF'
# Detect error type for smart recovery options
local is_oom=false
local is_network_issue=false
local is_apt_issue=false
local is_cmd_not_found=false
local error_explanation=""
if declare -f explain_exit_code >/dev/null 2>&1; then
error_explanation="$(explain_exit_code "$install_exit_code")"
fi
# OOM detection: exit codes 134 (SIGABRT/heap), 137 (SIGKILL/OOM), 243 (Node.js heap)
if [[ $install_exit_code -eq 134 || $install_exit_code -eq 137 || $install_exit_code -eq 243 ]]; then
is_oom=true
fi
# APT/DPKG detection: exit codes 100-102 (APT), 255 (DPKG with log evidence)
case "$install_exit_code" in
100 | 101 | 102) is_apt_issue=true ;;
255)
if [[ -f "$combined_log" ]] && grep -qiE 'dpkg|apt-get|apt\.conf|broken packages|unmet dependencies|E: Sub-process|E: Failed' "$combined_log"; then
is_apt_issue=true
fi
;;
esac
# Command not found detection
if [[ $install_exit_code -eq 127 ]]; then
is_cmd_not_found=true
fi
# Network-related detection (curl/apt/git fetch failures and transient network issues)
case "$install_exit_code" in
6 | 7 | 22 | 28 | 35 | 52 | 56 | 57 | 75 | 78) is_network_issue=true ;;
100)
# APT can fail due to network (Failed to fetch)
if [[ -f "$combined_log" ]] && grep -qiE 'Failed to fetch|Could not resolve|Connection failed|Network is unreachable|Temporary failure resolving' "$combined_log"; then
is_network_issue=true
fi
;;
128)
if [[ -f "$combined_log" ]] && grep -qiE 'RPC failed|early EOF|fetch-pack|HTTP/2 stream|Could not resolve host|Temporary failure resolving|Failed to fetch|Connection reset|Network is unreachable' "$combined_log"; then
is_network_issue=true
fi
;;
esac
# Exit 1 subclassification: analyze logs to identify actual root cause
# Many exit 1 errors are actually APT, OOM, network, or command-not-found issues
if [[ $install_exit_code -eq 1 && -f "$combined_log" ]]; then
if grep -qiE 'E: Unable to|E: Package|E: Failed to fetch|dpkg.*error|broken packages|unmet dependencies|dpkg --configure -a' "$combined_log"; then
is_apt_issue=true
fi
if grep -qiE 'Cannot allocate memory|Out of memory|oom-killer|Killed process|JavaScript heap' "$combined_log"; then
is_oom=true
fi
if grep -qiE 'Could not resolve|DNS|Connection refused|Network is unreachable|No route to host|Temporary failure resolving|Failed to fetch' "$combined_log"; then
is_network_issue=true
fi
if grep -qiE ': command not found|No such file or directory.*/s?bin/' "$combined_log"; then
is_cmd_not_found=true
fi
fi
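The log-based subclassification above boils down to pattern-matching the combined log against known failure signatures. A self-contained sketch (function name illustrative; patterns are a subset of the checks above, tested in priority order):

```shell
# Sketch: classify a failure from its log text. Patterns mirror a subset
# of the greps above; first match wins.
classify_log() {
  local log_text="$1"
  if grep -qiE 'unmet dependencies|dpkg.*error|E: Unable to' <<<"$log_text"; then
    echo "apt"
  elif grep -qiE 'Out of memory|oom-killer|JavaScript heap' <<<"$log_text"; then
    echo "oom"
  elif grep -qiE 'Could not resolve|Network is unreachable|Failed to fetch' <<<"$log_text"; then
    echo "network"
  else
    echo "unknown"
  fi
}
```

Ordering matters: an APT "Failed to fetch" line would match both the apt and network signatures, so the branch order decides which recovery option is offered first.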
# Show error explanation if available
if [[ -n "$error_explanation" ]]; then
echo -e "${TAB}${RD}Error: ${error_explanation}${CL}"
echo ""
fi
# Show specific hints for known error types
if [[ $install_exit_code -eq 10 ]]; then
echo -e "${TAB}${INFO} This error usually means the container needs ${GN}privileged${CL} mode or Docker/nesting support."
echo -e "${TAB}${INFO} Recreate with: Advanced Install → Container Type: ${GN}Privileged${CL}"
echo ""
fi
if [[ $install_exit_code -eq 125 || $install_exit_code -eq 126 ]]; then
echo -e "${TAB}${INFO} The command exists but cannot be executed. This may be a ${GN}permission${CL} issue."
echo -e "${TAB}${INFO} If using Docker, ensure the container is ${GN}privileged${CL} or has correct permissions."
echo ""
fi
if [[ "$is_cmd_not_found" == true ]]; then
local missing_cmd=""
if [[ -f "$combined_log" ]]; then
missing_cmd=$(grep -oiE '[a-zA-Z0-9_.-]+: command not found' "$combined_log" | tail -1 | sed 's/: command not found//')
fi
if [[ -n "$missing_cmd" ]]; then
echo -e "${TAB}${INFO} Missing command: ${GN}${missing_cmd}${CL}"
fi
echo ""
fi
# Build recovery menu based on error type
echo -e "${YW}What would you like to do?${CL}"
echo ""
echo -e " ${GN}1)${CL} Remove container and exit"
echo -e " ${GN}2)${CL} Keep container for debugging"
echo -e " ${GN}3)${CL} Retry with verbose mode (full rebuild)"
local next_option=4
local APT_OPTION="" OOM_OPTION="" DNS_OPTION=""
if [[ "$is_apt_issue" == true ]]; then
if [[ "$var_os" == "alpine" ]]; then
echo -e " ${GN}${next_option})${CL} Repair APK state and re-run install (in-place)"
else
echo -e " ${GN}${next_option})${CL} Repair APT/DPKG state and re-run install (in-place)"
fi
APT_OPTION=$next_option
next_option=$((next_option + 1))
fi
if [[ "$is_oom" == true ]]; then
local recovery_attempt="${RECOVERY_ATTEMPT:-0}"
if [[ $recovery_attempt -lt 2 ]]; then
local new_ram=$((RAM_SIZE * 2))
local new_cpu=$((CORE_COUNT * 2))
echo -e " ${GN}${next_option})${CL} Retry with more resources (RAM: ${RAM_SIZE} → ${new_ram} MiB, CPU: ${CORE_COUNT} → ${new_cpu} cores)"
OOM_OPTION=$next_option
next_option=$((next_option + 1))
else
echo -e " ${DGN}-)${CL} ${DGN}OOM retry exhausted (already retried ${recovery_attempt}x)${CL}"
fi
fi
if [[ "$is_network_issue" == true ]]; then
echo -e " ${GN}${next_option})${CL} Retry with DNS override in LXC (8.8.8.8 / 1.1.1.1)"
DNS_OPTION=$next_option
next_option=$((next_option + 1))
fi
local max_option=$((next_option - 1))
echo ""
echo -en "${YW}Select option [1-${max_option}] (default: 1, auto-remove in 60s): ${CL}"
if read -t 60 -r response; then
case "${response:-1}" in
1)
# Remove container
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID}${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
;;
2)
echo -e "\n${TAB}${YW}Container ${CTID} kept for debugging${CL}"
# Dev mode: Setup MOTD/SSH for debugging access to broken container
if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then
echo -e "${TAB}${HOLD}${DGN}Setting up MOTD and SSH for debugging...${CL}"
if pct exec "$CTID" -- bash -c "
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/install.func)
declare -f motd_ssh >/dev/null 2>&1 && motd_ssh || true
" >/dev/null 2>&1; then
local ct_ip=$(pct exec "$CTID" ip a s dev eth0 2>/dev/null | awk '/inet / {print $2}' | cut -d/ -f1)
echo -e "${BFR}${CM}${GN}MOTD/SSH ready - SSH into container: ssh root@${ct_ip}${CL}"
fi
fi
fi
exit $install_exit_code
;;
3)
# Retry with verbose mode (full rebuild)
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
# Get new container ID
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export VERBOSE="yes"
export var_verbose="yes"
# Show rebuild summary
echo -e "${YW}Rebuilding with preserved settings:${CL}"
echo -e " Container ID: ${old_ctid} → ${CTID}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
# Re-run build_container
build_container
return $?
;;
*)
# Handle dynamic smart recovery options via named option variables
local handled=false
if [[ -n "${APT_OPTION}" && "${response}" == "${APT_OPTION}" ]]; then
# Package manager in-place repair: fix broken state and re-run install script
handled=true
if [[ "$var_os" == "alpine" ]]; then
echo -e "\n${TAB}${HOLD}${YW}Repairing APK state in container ${CTID}...${CL}"
pct exec "$CTID" -- ash -c "
apk fix 2>/dev/null || true
apk cache clean 2>/dev/null || true
apk update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APK state repaired in container ${CTID}${CL}"
else
echo -e "\n${TAB}${HOLD}${YW}Repairing APT/DPKG state in container ${CTID}...${CL}"
pct exec "$CTID" -- bash -c "
DEBIAN_FRONTEND=noninteractive dpkg --configure -a 2>/dev/null || true
apt-get -f install -y 2>/dev/null || true
apt-get clean 2>/dev/null
apt-get update 2>/dev/null || true
" >/dev/null 2>&1 || true
echo -e "${BFR}${CM}${GN}APT/DPKG state repaired in container ${CTID}${CL}"
fi
echo ""
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Re-running installation in existing container ${CTID}:${CL}"
echo -e " RAM: ${RAM_SIZE} MiB | CPU: ${CORE_COUNT} cores | Disk: ${DISK_SIZE} GB"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Re-running installation script..."
# Re-run install script in existing container (don't destroy/recreate)
set +Eeuo pipefail
trap - ERR
lxc-attach -n "$CTID" -- bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/install/${var_install}.sh)"
local apt_retry_exit=$?
set -Eeuo pipefail
trap 'error_handler' ERR
# Check for error flag from retry
local apt_retry_code=0
if [[ -n "${SESSION_ID:-}" ]]; then
local retry_error_flag="/root/.install-${SESSION_ID}.failed"
if pct exec "$CTID" -- test -f "$retry_error_flag" 2>/dev/null; then
apt_retry_code=$(pct exec "$CTID" -- cat "$retry_error_flag" 2>/dev/null || echo "1")
pct exec "$CTID" -- rm -f "$retry_error_flag" 2>/dev/null || true
fi
fi
if [[ $apt_retry_code -eq 0 && $apt_retry_exit -ne 0 ]]; then
apt_retry_code=$apt_retry_exit
fi
if [[ $apt_retry_code -eq 0 ]]; then
msg_ok "Installation completed successfully after APT repair!"
post_update_to_api "done" "0" "force"
return 0
else
msg_error "Installation still failed after APT repair (exit code: ${apt_retry_code})"
install_exit_code=$apt_retry_code
fi
fi
if [[ -n "${OOM_OPTION}" && "${response}" == "${OOM_OPTION}" ]]; then
# Retry with doubled resources
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with more resources...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
local old_ram="$RAM_SIZE"
local old_cpu="$CORE_COUNT"
export CTID=$(get_valid_container_id "$CTID")
export RAM_SIZE=$((RAM_SIZE * 2))
export CORE_COUNT=$((CORE_COUNT * 2))
export var_ram="$RAM_SIZE"
export var_cpu="$CORE_COUNT"
export VERBOSE="yes"
export var_verbose="yes"
export RECOVERY_ATTEMPT=$((${RECOVERY_ATTEMPT:-0} + 1))
echo -e "${YW}Rebuilding with increased resources (attempt ${RECOVERY_ATTEMPT}/2):${CL}"
echo -e " Container ID: ${old_ctid} → ${CTID}"
echo -e " RAM: ${old_ram} → ${GN}${RAM_SIZE}${CL} MiB (x2)"
echo -e " CPU: ${old_cpu} → ${GN}${CORE_COUNT}${CL} cores (x2)"
echo -e " Disk: ${DISK_SIZE} GB | Network: ${NET:-dhcp} | Bridge: ${BRG:-vmbr0}"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ -n "${DNS_OPTION}" && "${response}" == "${DNS_OPTION}" ]]; then
# Retry with DNS override in LXC
handled=true
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID} for rebuild with DNS override...${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
echo ""
local old_ctid="$CTID"
export CTID=$(get_valid_container_id "$CTID")
export DNS_RETRY_OVERRIDE="true"
export VERBOSE="yes"
export var_verbose="yes"
echo -e "${YW}Rebuilding with DNS override in LXC:${CL}"
echo -e " Container ID: ${old_ctid} → ${CTID}"
echo -e " DNS: ${GN}8.8.8.8, 1.1.1.1${CL} (inside LXC only)"
echo -e " Verbose: ${GN}enabled${CL}"
echo ""
msg_info "Restarting installation..."
build_container
return $?
fi
if [[ "$handled" == false ]]; then
echo -e "\n${TAB}${YW}Invalid option. Container ${CTID} kept.${CL}"
exit $install_exit_code
fi
;;
esac
else
# Timeout - auto-remove
echo ""
@@ -4191,6 +4485,10 @@ EOF'
exit $install_exit_code
fi
# Re-enable error handling after successful install or recovery menu completion
set -Eeuo pipefail
trap 'error_handler' ERR
}
destroy_lxc() {
@@ -4595,6 +4893,9 @@ create_lxc_container() {
exit 206
fi
# Report installation start to API early - captures failures in storage/template/create
post_to_api
# Storage capability check
check_storage_support "rootdir" || {
msg_error "No valid storage found for 'rootdir' [Container]"
@@ -5124,9 +5425,7 @@ create_lxc_container() {
}
msg_ok "LXC Container ${BL}$CTID${CL} ${GN}was successfully created."
# Report container creation to API
post_to_api
post_progress_to_api # Signal container creation complete
}
# ==============================================================================
@@ -5199,6 +5498,7 @@ EOF
# - If INSTALL_LOG points to a container path (e.g. /root/.install-*),
# tries to pull it from the container and create a combined log
# - This allows get_error_text() to find actual error output for telemetry
# - Uses timeout on pct pull to prevent hangs on dead/unresponsive containers
# ------------------------------------------------------------------------------
ensure_log_on_host() {
# Already readable on host? Nothing to do.
@@ -5228,9 +5528,9 @@ ensure_log_on_host() {
echo ""
} >>"$combined_log"
fi
# Pull INSTALL_LOG from container
# Pull INSTALL_LOG from container (with timeout to prevent hangs on dead containers)
local temp_log="/tmp/.install-temp-${SESSION_ID}.log"
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_log" 2>/dev/null; then
if timeout 8 pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_log" 2>/dev/null; then
{
echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)"
@@ -5253,20 +5553,34 @@ ensure_log_on_host() {
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - Posts failure status with exit code to API (error description resolved automatically)
# - Only executes on non-zero exit codes
# - ALWAYS sends telemetry FIRST before log collection to prevent pct pull
# hangs from blocking status updates (container may be dead/unresponsive)
# - For non-zero exit codes: posts "failed" status
# - For zero exit codes where post_update_to_api was never called:
# catches orphaned "installing" records (e.g., script exited cleanly
# but description() was never reached)
# ------------------------------------------------------------------------------
api_exit_script() {
exit_code=$?
local exit_code=$?
if [ $exit_code -ne 0 ]; then
ensure_log_on_host
post_update_to_api "failed" "$exit_code"
# ALWAYS send telemetry FIRST - ensure status is reported even if
# ensure_log_on_host hangs (e.g. pct pull on dead container)
post_update_to_api "failed" "$exit_code" 2>/dev/null || true
# Best-effort log collection with timeout (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
fi
elif [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
# Script exited with 0 but never sent a completion status
# exit_code=0 is never an error — report as success
post_update_to_api "done" "0"
fi
}
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'ensure_log_on_host; post_update_to_api "failed" "$?"' ERR
trap 'ensure_log_on_host; post_update_to_api "failed" "130"; exit 130' SIGINT
trap 'ensure_log_on_host; post_update_to_api "failed" "143"; exit 143' SIGTERM
trap '_ec=$?; if [[ $_ec -ne 0 ]]; then post_update_to_api "failed" "$_ec" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; fi' ERR
trap 'post_update_to_api "failed" "129" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; exit 129' SIGHUP
trap 'post_update_to_api "failed" "130" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; exit 130' SIGINT
trap 'post_update_to_api "failed" "143" 2>/dev/null || true; timeout 10 bash -c "ensure_log_on_host" 2>/dev/null || true; exit 143' SIGTERM
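The traps above all follow the same shape: report status first, then attempt log collection under a timeout so a hung `pct pull` cannot block the status update. A minimal standalone sketch of that ordering, with hypothetical `report_status`/`collect_logs` stand-ins for `post_update_to_api`/`ensure_log_on_host`:

```shell
# report_status / collect_logs are hypothetical stand-ins for
# post_update_to_api / ensure_log_on_host.
report_status() { echo "status=$1 code=$2"; }
collect_logs() { echo "logs collected"; }

handle_failure() {
  local ec=$1
  # 1) Telemetry first: a hung log pull below can no longer lose this update
  report_status "failed" "$ec" 2>/dev/null || true
  # 2) Log collection second, bounded so a dead container cannot hang us;
  #    inject the function definition so the child shell can see it
  timeout 5 bash -c "$(declare -f collect_logs); collect_logs" 2>/dev/null || true
}

handle_failure 143
# prints: status=failed code=143
#         logs collected
```

Note the `declare -f` injection: a plain `bash -c "collect_logs"` would not find the function in the child shell unless it had been exported with `export -f`.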


@@ -524,7 +524,8 @@ silent() {
if [[ -s "$logfile" ]]; then
echo -e "\n${TAB}--- Last 10 lines of log ---"
tail -n 10 "$logfile"
echo -e "${TAB}-----------------------------------\n"
echo -e "${TAB}-----------------------------------"
echo -e "${TAB}📋 Full log: ${logfile}\n"
fi
exit "$rc"
@@ -1496,6 +1497,11 @@ cleanup_lxc() {
fi
msg_ok "Cleaned"
# Send progress ping if available (defined in install.func)
if declare -f post_progress_to_api &>/dev/null; then
post_progress_to_api
fi
}
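The `declare -f` guard used above is a common way to call an optional hook only when it has actually been sourced. A self-contained sketch (function names are illustrative, not from the repo):

```shell
optional_hook() { echo "hook ran"; }  # pretend this was sourced from install.func

run_if_defined() {
  # Invoke "$1" only when a function by that name exists; otherwise no-op
  if declare -f "$1" &>/dev/null; then
    "$1"
  fi
}

run_if_defined optional_hook   # prints: hook ran
run_if_defined missing_hook    # silently skipped
```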
# ------------------------------------------------------------------------------


@@ -37,24 +37,79 @@ if ! declare -f explain_exit_code &>/dev/null; then
case "$code" in
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
3) echo "General syntax or argument error" ;;
10) echo "Docker / privileged mode required (unsupported environment)" ;;
4) echo "curl: Feature not supported or protocol error" ;;
5) echo "curl: Could not resolve proxy" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
8) echo "curl: Server reply error (FTP/SFTP or apk untrusted key)" ;;
16) echo "curl: HTTP/2 framing layer error" ;;
18) echo "curl: Partial file (transfer not completed)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
23) echo "curl: Write error (disk full or permissions)" ;;
24) echo "curl: Write to local file failed" ;;
25) echo "curl: Upload failed" ;;
26) echo "curl: Read error on local file (I/O)" ;;
27) echo "curl: Out of memory (memory allocation failed)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
30) echo "curl: FTP port command failed" ;;
32) echo "curl: FTP SIZE command failed" ;;
33) echo "curl: HTTP range error" ;;
34) echo "curl: HTTP post error" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
36) echo "curl: FTP bad download resume" ;;
39) echo "curl: LDAP search failed" ;;
44) echo "curl: Internal error (bad function call order)" ;;
45) echo "curl: Interface error (failed to bind to specified interface)" ;;
46) echo "curl: Bad password entered" ;;
47) echo "curl: Too many redirects" ;;
48) echo "curl: Unknown command line option specified" ;;
51) echo "curl: SSL peer certificate or SSH host key verification failed" ;;
52) echo "curl: Empty reply from server (got nothing)" ;;
55) echo "curl: Failed sending network data" ;;
56) echo "curl: Receive error (connection reset by peer)" ;;
57) echo "curl: Unrecoverable poll/select error (system I/O failure)" ;;
59) echo "curl: Couldn't use specified SSL cipher" ;;
61) echo "curl: Bad/unrecognized transfer encoding" ;;
63) echo "curl: Maximum file size exceeded" ;;
75) echo "Temporary failure (retry later)" ;;
78) echo "curl: Remote file not found (404 on FTP/file)" ;;
79) echo "curl: SSH session error (key exchange/auth failed)" ;;
92) echo "curl: HTTP/2 stream error (protocol violation)" ;;
95) echo "curl: HTTP/3 layer error" ;;
64) echo "Usage error (wrong arguments)" ;;
65) echo "Data format error (bad input data)" ;;
66) echo "Input file not found (cannot open input)" ;;
67) echo "User not found (addressee unknown)" ;;
68) echo "Host not found (hostname unknown)" ;;
69) echo "Service unavailable" ;;
70) echo "Internal software error" ;;
71) echo "System error (OS-level failure)" ;;
72) echo "Critical OS file missing" ;;
73) echo "Cannot create output file" ;;
74) echo "I/O error" ;;
76) echo "Remote protocol error" ;;
77) echo "Permission denied" ;;
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
124) echo "Command timed out (timeout command)" ;;
125) echo "Command failed to start (Docker daemon or execution error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
129) echo "Killed by SIGHUP (terminal closed / hangup)" ;;
130) echo "Aborted by user (SIGINT)" ;;
131) echo "Killed by SIGQUIT (core dumped)" ;;
132) echo "Killed by SIGILL (illegal CPU instruction)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
144) echo "Killed by signal 16 (SIGUSR1 / SIGSTKFLT)" ;;
146) echo "Killed by signal 18 (SIGTSTP)" ;;
150) echo "Systemd: Service failed to start" ;;
151) echo "Systemd: Service unit not found" ;;
152) echo "Permission denied (EACCES)" ;;
@@ -100,6 +155,7 @@ if ! declare -f explain_exit_code &>/dev/null; then
224) echo "Proxmox: PBS storage is for backups only" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
239) echo "npm/Node.js: Unexpected runtime error or dependency failure" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
@@ -148,6 +204,12 @@ error_handler() {
printf "\e[?25h"
# ALWAYS report failure to API immediately - don't wait for container checks
# This ensures we capture failures that occur before/after container exists
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code" 2>/dev/null || true
fi
# Use msg_error if available, fallback to echo
if declare -f msg_error >/dev/null 2>&1; then
msg_error "in line ${line_number}: exit code ${exit_code} (${explanation}): while executing command ${command}"
@@ -174,72 +236,54 @@ error_handler() {
active_log="$SILENT_LOGFILE"
fi
# If active_log points to a container-internal path that doesn't exist on host,
# fall back to BUILD_LOG (host-side log)
if [[ -n "$active_log" && ! -s "$active_log" && -n "${BUILD_LOG:-}" && -s "${BUILD_LOG}" ]]; then
active_log="$BUILD_LOG"
fi
# Show last log lines if available
if [[ -n "$active_log" && -s "$active_log" ]]; then
echo -e "\n${TAB}--- Last 20 lines of log ---"
tail -n 20 "$active_log"
echo -e "${TAB}-----------------------------------\n"
fi
# Detect context: Container (INSTALL_LOG set + /root exists) vs Host (BUILD_LOG)
if [[ -n "${INSTALL_LOG:-}" && -d /root ]]; then
# CONTAINER CONTEXT: Copy log and create flag file for host
local container_log="/root/.install-${SESSION_ID:-error}.log"
cp "$active_log" "$container_log" 2>/dev/null || true
# Detect context: Container (INSTALL_LOG set + inside container /root) vs Host
if [[ -n "${INSTALL_LOG:-}" && -f "${INSTALL_LOG:-}" && -d /root ]]; then
# CONTAINER CONTEXT: Copy log and create flag file for host
local container_log="/root/.install-${SESSION_ID:-error}.log"
cp "${INSTALL_LOG}" "$container_log" 2>/dev/null || true
# Create error flag file with exit code for host detection
echo "$exit_code" >"/root/.install-${SESSION_ID:-error}.failed" 2>/dev/null || true
# Log path is shown by host as combined log - no need to show container path
else
# HOST CONTEXT: Show local log path and offer container cleanup
# Create error flag file with exit code for host detection
echo "$exit_code" >"/root/.install-${SESSION_ID:-error}.failed" 2>/dev/null || true
# Log path is shown by host as combined log - no need to show container path
else
# HOST CONTEXT: Show local log path and offer container cleanup
if [[ -n "$active_log" && -s "$active_log" ]]; then
if declare -f msg_custom >/dev/null 2>&1; then
msg_custom "📋" "${YW}" "Full log: ${active_log}"
else
echo -e "${YW}Full log:${CL} ${BL}${active_log}${CL}"
fi
fi
# Offer to remove container if it exists (build errors after container creation)
if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then
# Report failure to API before container cleanup
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code"
fi
# Offer to remove container if it exists (build errors after container creation)
if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then
echo ""
if declare -f msg_custom >/dev/null 2>&1; then
echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
else
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
fi
echo ""
if declare -f msg_custom >/dev/null 2>&1; then
echo -en "${TAB}${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
else
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
fi
if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
echo ""
if declare -f msg_info >/dev/null 2>&1; then
msg_info "Removing container ${CTID}"
else
echo -e "${YW}Removing container ${CTID}${CL}"
fi
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
if declare -f msg_ok >/dev/null 2>&1; then
msg_ok "Container ${CTID} removed"
else
echo -e "${GN}${CL} Container ${CTID} removed"
fi
elif [[ "$response" =~ ^[Nn]$ ]]; then
echo ""
if declare -f msg_warn >/dev/null 2>&1; then
msg_warn "Container ${CTID} kept for debugging"
else
echo -e "${YW}Container ${CTID} kept for debugging${CL}"
fi
fi
else
# Timeout - auto-remove
if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
echo ""
if declare -f msg_info >/dev/null 2>&1; then
msg_info "No response - removing container ${CTID}"
msg_info "Removing container ${CTID}"
else
echo -e "${YW}No response - removing container ${CTID}${CL}"
echo -e "${YW}Removing container ${CTID}${CL}"
fi
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
@@ -248,13 +292,35 @@ error_handler() {
else
echo -e "${GN}${CL} Container ${CTID} removed"
fi
elif [[ "$response" =~ ^[Nn]$ ]]; then
echo ""
if declare -f msg_warn >/dev/null 2>&1; then
msg_warn "Container ${CTID} kept for debugging"
else
echo -e "${YW}Container ${CTID} kept for debugging${CL}"
fi
fi
else
# Timeout - auto-remove
echo ""
if declare -f msg_info >/dev/null 2>&1; then
msg_info "No response - removing container ${CTID}"
else
echo -e "${YW}No response - removing container ${CTID}${CL}"
fi
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
if declare -f msg_ok >/dev/null 2>&1; then
msg_ok "Container ${CTID} removed"
else
echo -e "${GN}${CL} Container ${CTID} removed"
fi
fi
# Force one final status update attempt after cleanup
# This ensures status is updated even if the first attempt failed (e.g., HTTP 400)
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code" "force"
fi
# Force one final status update attempt after cleanup
# This ensures status is updated even if the first attempt failed (e.g., HTTP 400)
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code" "force"
fi
fi
fi
@@ -273,6 +339,8 @@ error_handler() {
# - Cleans up lock files if lockfile variable is set
# - Exits with captured exit code
# - Always runs on script termination (success or failure)
# - For signal exits (>128): sends telemetry FIRST before log collection
# to prevent pct pull hangs from blocking status updates
# ------------------------------------------------------------------------------
on_exit() {
local exit_code=$?
@@ -281,14 +349,17 @@ on_exit() {
# post_to_api was called ("installing" sent) but post_update_to_api was never called
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
# Ensure log is accessible on host before reporting
if declare -f ensure_log_on_host >/dev/null 2>&1; then
ensure_log_on_host
fi
# ALWAYS send telemetry FIRST - ensure status is reported even if
# ensure_log_on_host hangs (e.g. pct pull on dead/unresponsive container)
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
post_update_to_api "failed" "$exit_code" 2>/dev/null || true
else
post_update_to_api "failed" "1"
# exit_code=0 is never an error — report as success
post_update_to_api "done" "0" 2>/dev/null || true
fi
# Best-effort log collection with timeout (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
fi
fi
fi
@@ -300,22 +371,26 @@ on_exit() {
# on_interrupt()
#
# - SIGINT (Ctrl+C) trap handler
# - Reports to telemetry FIRST (time-critical: container may be dying)
# - Displays "Interrupted by user" message
# - Exits with code 130 (128 + SIGINT=2)
# - Output redirected to /dev/null fallback to prevent SIGPIPE on closed terminals
# ------------------------------------------------------------------------------
on_interrupt() {
# Ensure log is accessible on host before reporting
if declare -f ensure_log_on_host >/dev/null 2>&1; then
ensure_log_on_host
fi
# Report interruption to telemetry API (prevents stuck "installing" records)
# CRITICAL: Send telemetry FIRST before any cleanup or output
# If ensure_log_on_host hangs (e.g. pct pull on dying container),
# the status update would never be sent, leaving records stuck in "installing"
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "130"
post_update_to_api "failed" "130" 2>/dev/null || true
fi
# Best-effort log collection with timeout (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
fi
if declare -f msg_error >/dev/null 2>&1; then
msg_error "Interrupted by user (SIGINT)"
msg_error "Interrupted by user (SIGINT)" 2>/dev/null || true
else
echo -e "\n${RD}Interrupted by user (SIGINT)${CL}"
echo -e "\n${RD}Interrupted by user (SIGINT)${CL}" 2>/dev/null || true
fi
exit 130
}
@@ -324,23 +399,27 @@ on_interrupt() {
# on_terminate()
#
# - SIGTERM trap handler
# - Reports to telemetry FIRST (time-critical: process being killed)
# - Displays "Terminated by signal" message
# - Exits with code 143 (128 + SIGTERM=15)
# - Triggered by external process termination
# - Output redirected to /dev/null fallback to prevent SIGPIPE on closed terminals
# ------------------------------------------------------------------------------
on_terminate() {
# Ensure log is accessible on host before reporting
if declare -f ensure_log_on_host >/dev/null 2>&1; then
ensure_log_on_host
fi
# Report termination to telemetry API (prevents stuck "installing" records)
# CRITICAL: Send telemetry FIRST before any cleanup or output
# Same rationale as on_interrupt: ensure status gets reported even if
# ensure_log_on_host hangs or terminal is already closed
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "143"
post_update_to_api "failed" "143" 2>/dev/null || true
fi
# Best-effort log collection with timeout (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
timeout 10 bash -c 'ensure_log_on_host' 2>/dev/null || true
fi
if declare -f msg_error >/dev/null 2>&1; then
msg_error "Terminated by signal (SIGTERM)"
msg_error "Terminated by signal (SIGTERM)" 2>/dev/null || true
else
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}"
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}" 2>/dev/null || true
fi
exit 143
}


@@ -37,9 +37,35 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
load_functions
catch_errors
# Persist diagnostics setting inside container (exported from build.func)
# so addon scripts running later can find the user's choice
if [[ ! -f /usr/local/community-scripts/diagnostics ]]; then
mkdir -p /usr/local/community-scripts
echo "DIAGNOSTICS=${DIAGNOSTICS:-no}" >/usr/local/community-scripts/diagnostics
fi
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ------------------------------------------------------------------------------
# post_progress_to_api()
#
# - Lightweight progress ping from inside the container
# - Updates the existing telemetry record status from "installing" to "configuring"
# - Signals that the installation is actively progressing (not stuck)
# - Fire-and-forget: never blocks or fails the script
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
# ------------------------------------------------------------------------------
post_progress_to_api() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
curl -fsS -m 5 -X POST "https://telemetry.community-scripts.org/telemetry" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"lxc\",\"nsapp\":\"${app:-unknown}\",\"status\":\"configuring\"}" &>/dev/null || true
}
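The fire-and-forget shape of `post_progress_to_api` (short `-m` timeout, silenced output, trailing `|| true`) generalizes to any non-critical ping. A sketch against a hypothetical endpoint:

```shell
# Hypothetical endpoint; the point is the guard rails, not the URL:
# bounded by -m, output silenced, and never allowed to fail the caller.
ping_telemetry() {
  command -v curl &>/dev/null || return 0   # no curl: silently skip
  curl -fsS -m 5 -X POST "https://telemetry.example.invalid/ping" \
    -H "Content-Type: application/json" \
    -d '{"status":"configuring"}' &>/dev/null || true
}

ping_telemetry
echo "caller continues, exit status: $?"
# prints: caller continues, exit status: 0
```

Even though the request itself fails here, the function always returns 0, so a caller running under `set -e` is never taken down by a telemetry hiccup.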
# ==============================================================================
# SECTION 2: NETWORK & CONNECTIVITY
# ==============================================================================
@@ -103,6 +129,7 @@ setting_up_container() {
msg_ok "Set up Container OS"
#msg_custom "${CM}" "${GN}" "Network Connected: ${BL}$(hostname -I)"
msg_ok "Network Connected: ${BL}$(hostname -I)"
post_progress_to_api
}
# ------------------------------------------------------------------------------
@@ -206,8 +233,18 @@ EOF
$STD apt-get -o Dpkg::Options::="--force-confold" -y dist-upgrade
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
msg_ok "Updated Container OS"
post_progress_to_api
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
local tools_content
tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
msg_error "Failed to download tools.func"
exit 6
}
source /dev/stdin <<<"$tools_content"
if ! declare -f fetch_and_deploy_gh_release >/dev/null 2>&1; then
msg_error "tools.func loaded but incomplete — missing expected functions"
exit 6
fi
}
# ==============================================================================
@@ -248,6 +285,7 @@ motd_ssh() {
sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
systemctl restart sshd
fi
post_progress_to_api
}
# ==============================================================================
@@ -286,4 +324,5 @@ EOF
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
fi
post_progress_to_api
}


@@ -1306,7 +1306,7 @@ setup_deb822_repo() {
if grep -q "BEGIN PGP" "$tmp_gpg" 2>/dev/null; then
# ASCII-armored — dearmor to binary
gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" < "$tmp_gpg" || {
gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" <"$tmp_gpg" || {
msg_error "Failed to dearmor GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
@@ -1567,31 +1567,54 @@ check_for_gh_release() {
ensure_dependencies jq
# Build auth header if token is available
local header_args=()
[[ -n "${GITHUB_TOKEN:-}" ]] && header_args=(-H "Authorization: Bearer $GITHUB_TOKEN")
# Try /latest endpoint for non-pinned versions (most efficient)
local releases_json=""
local releases_json="" http_code=""
if [[ -z "$pinned_version_in" ]]; then
releases_json=$(curl -fsSL --max-time 20 \
http_code=$(curl -sSL --max-time 20 -w "%{http_code}" -o /tmp/gh_check.json \
-H 'Accept: application/vnd.github+json' \
-H 'X-GitHub-Api-Version: 2022-11-28' \
"https://api.github.com/repos/${source}/releases/latest" 2>/dev/null)
"${header_args[@]}" \
"https://api.github.com/repos/${source}/releases/latest" 2>/dev/null) || true
if [[ $? -eq 0 ]] && [[ -n "$releases_json" ]]; then
# Wrap single release in array for consistent processing
releases_json="[$releases_json]"
if [[ "$http_code" == "200" ]] && [[ -s /tmp/gh_check.json ]]; then
releases_json="[$(</tmp/gh_check.json)]"
elif [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
rm -f /tmp/gh_check.json
return 1
fi
rm -f /tmp/gh_check.json
fi
# If no releases yet (pinned version OR /latest failed), fetch up to 100
if [[ -z "$releases_json" ]]; then
# Fetch releases and exclude drafts/prereleases
releases_json=$(curl -fsSL --max-time 20 \
http_code=$(curl -sSL --max-time 20 -w "%{http_code}" -o /tmp/gh_check.json \
-H 'Accept: application/vnd.github+json' \
-H 'X-GitHub-Api-Version: 2022-11-28' \
"https://api.github.com/repos/${source}/releases?per_page=100") || {
msg_error "Unable to fetch releases for ${app}"
"${header_args[@]}" \
"https://api.github.com/repos/${source}/releases?per_page=100" 2>/dev/null) || true
if [[ "$http_code" == "200" ]] && [[ -s /tmp/gh_check.json ]]; then
releases_json=$(</tmp/gh_check.json)
elif [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
rm -f /tmp/gh_check.json
return 1
}
else
msg_error "Unable to fetch releases for ${app} (HTTP ${http_code})"
rm -f /tmp/gh_check.json
return 1
fi
rm -f /tmp/gh_check.json
fi
mapfile -t raw_tags < <(jq -r '.[] | select(.draft==false and .prerelease==false) | .tag_name' <<<"$releases_json")
@@ -2410,7 +2433,11 @@ _gh_scan_older_releases() {
# Check with explicit pattern first, then arch heuristic, then any .deb
if [[ -n "$asset_pattern" ]]; then
has_match=$(echo "$releases_list" | jq -r --arg pat "$asset_pattern" ".[$i].assets[].name" | while read -r name; do
case "$name" in $asset_pattern) echo true; break ;; esac
case "$name" in $asset_pattern)
echo true
break
;;
esac
done)
fi
if [[ "$has_match" != "true" ]]; then
@@ -2422,7 +2449,11 @@ _gh_scan_older_releases() {
elif [[ "$mode" == "prebuild" || "$mode" == "singlefile" ]]; then
has_match=$(echo "$releases_list" | jq -r ".[$i].assets[].name" | while read -r name; do
case "$name" in $asset_pattern) echo true; break ;; esac
case "$name" in $asset_pattern)
echo true
break
;;
esac
done)
fi
@@ -2481,25 +2512,36 @@ function fetch_and_deploy_gh_release() {
return 1
fi
local max_retries=3 retry_delay=2 attempt=1 success=false resp http_code
local max_retries=3 retry_delay=2 attempt=1 success=false http_code
while ((attempt <= max_retries)); do
resp=$(curl $api_timeout -fsSL -w "%{http_code}" -o /tmp/gh_rel.json "${header[@]}" "$api_url") && success=true && break
sleep "$retry_delay"
http_code=$(curl $api_timeout -sSL -w "%{http_code}" -o /tmp/gh_rel.json "${header[@]}" "$api_url" 2>/dev/null) || true
if [[ "$http_code" == "200" ]]; then
success=true
break
elif [[ "$http_code" == "403" ]]; then
if ((attempt < max_retries)); then
msg_warn "GitHub API rate limit hit, retrying in ${retry_delay}s... (attempt $attempt/$max_retries)"
sleep "$retry_delay"
retry_delay=$((retry_delay * 2))
fi
else
sleep "$retry_delay"
fi
((attempt++))
done
if ! $success; then
msg_error "Failed to fetch release metadata from $api_url after $max_retries attempts"
if [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
else
msg_error "Failed to fetch release metadata from $api_url after $max_retries attempts (HTTP $http_code)"
fi
return 1
fi
http_code="${resp:(-3)}"
[[ "$http_code" != "200" ]] && {
msg_error "GitHub API returned HTTP $http_code"
return 1
}
local json tag_name
json=$(</tmp/gh_rel.json)
tag_name=$(echo "$json" | jq -r '.tag_name // .name // empty')
@@ -2599,12 +2641,19 @@ function fetch_and_deploy_gh_release() {
assets=$(echo "$json" | jq -r '.assets[].browser_download_url')
if [[ -n "$asset_pattern" ]]; then
for u in $assets; do
case "${u##*/}" in $asset_pattern) url_match="$u"; break ;; esac
case "${u##*/}" in $asset_pattern)
url_match="$u"
break
;;
esac
done
fi
if [[ -z "$url_match" ]]; then
for u in $assets; do
if [[ "$u" =~ ($arch|amd64|x86_64|aarch64|arm64).*\.deb$ ]]; then url_match="$u"; break; fi
if [[ "$u" =~ ($arch|amd64|x86_64|aarch64|arm64).*\.deb$ ]]; then
url_match="$u"
break
fi
done
fi
if [[ -z "$url_match" ]]; then
@@ -2673,7 +2722,11 @@ function fetch_and_deploy_gh_release() {
msg_info "Fetching GitHub release: $app ($version)"
for u in $(echo "$json" | jq -r '.assets[].browser_download_url'); do
filename_candidate="${u##*/}"
case "$filename_candidate" in $pattern) asset_url="$u"; break ;; esac
case "$filename_candidate" in $pattern)
asset_url="$u"
break
;;
esac
done
fi
fi
@@ -2785,7 +2838,11 @@ function fetch_and_deploy_gh_release() {
msg_info "Fetching GitHub release: $app ($version)"
for u in $(echo "$json" | jq -r '.assets[].browser_download_url'); do
filename_candidate="${u##*/}"
case "$filename_candidate" in $pattern) asset_url="$u"; break ;; esac
case "$filename_candidate" in $pattern)
asset_url="$u"
break
;;
esac
done
fi
fi
@@ -3635,20 +3692,22 @@ _setup_intel_arc() {
# Add non-free repos
_add_debian_nonfree "$os_codename"
# Arc requires latest drivers - fetch from GitHub
# Order matters: libigdgmm first (dependency), then IGC, then compute-runtime
msg_info "Fetching Intel compute-runtime for Arc support"
# For Trixie/Sid: Fetch latest drivers from GitHub (Debian repo packages may be too old or missing)
# For Bookworm: Use repo packages (GitHub latest requires libstdc++6 >= 13.1, unavailable on Bookworm)
if [[ "$os_codename" == "trixie" || "$os_codename" == "sid" ]]; then
msg_info "Fetching Intel compute-runtime from GitHub for Arc support"
# libigdgmm - bundled in compute-runtime releases (Debian version often too old)
fetch_and_deploy_gh_release "libigdgmm12" "intel/compute-runtime" "binary" "latest" "" "libigdgmm12_*_amd64.deb" || true
# libigdgmm - bundled in compute-runtime releases
fetch_and_deploy_gh_release "libigdgmm12" "intel/compute-runtime" "binary" "latest" "" "libigdgmm12_*_amd64.deb" || true
# Intel Graphics Compiler (note: packages have -2 suffix)
fetch_and_deploy_gh_release "intel-igc-core" "intel/intel-graphics-compiler" "binary" "latest" "" "intel-igc-core-2_*_amd64.deb" || true
fetch_and_deploy_gh_release "intel-igc-opencl" "intel/intel-graphics-compiler" "binary" "latest" "" "intel-igc-opencl-2_*_amd64.deb" || true
# Compute Runtime (depends on IGC and gmmlib)
fetch_and_deploy_gh_release "intel-opencl-icd" "intel/compute-runtime" "binary" "latest" "" "intel-opencl-icd_*_amd64.deb" || true
fetch_and_deploy_gh_release "intel-level-zero-gpu" "intel/compute-runtime" "binary" "latest" "" "libze-intel-gpu1_*_amd64.deb" || true
fi
$STD apt -y install \
intel-media-va-driver-non-free \
@@ -3657,6 +3716,9 @@ _setup_intel_arc() {
libmfx-gen1.2 \
vainfo \
intel-gpu-tools 2>/dev/null || msg_warn "Some Intel Arc packages failed"
# Bookworm has compatible versions of these packages in repos
[[ "$os_codename" == "bookworm" ]] && $STD apt -y install intel-opencl-icd libigdgmm12 2>/dev/null || true
fi
msg_ok "Intel Arc GPU configured"
@@ -4028,6 +4090,7 @@ Types: deb
URIs: http://deb.debian.org/debian
Suites: bullseye bullseye-updates
Components: non-free
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
;;
bookworm)
@@ -4036,6 +4099,7 @@ Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: non-free non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
;;
trixie | sid)
@@ -4044,11 +4108,13 @@ Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: non-free non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://deb.debian.org/debian-security
Suites: trixie-security
Components: non-free non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
;;
esac
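The stanzas above follow the deb822 `.sources` format, where a `Signed-By` field pins the keyring APT uses to verify repository metadata. A sketch that writes one such stanza to a temporary file so its shape can be checked (the target path is illustrative; the real script writes under `/etc/apt/sources.list.d/`):

```shell
#!/usr/bin/env bash
# Write a deb822-style APT source with an explicit Signed-By keyring,
# matching the shape of the stanzas generated above (temp path for demo).
tmpfile="$(mktemp)"
cat <<EOF >"$tmpfile"
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: non-free non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
# A deb822 stanza is a block of "Field: value" lines; Signed-By must
# name a keyring file readable by apt.
grep -c '^Signed-By:' "$tmpfile"
```

Without `Signed-By`, APT falls back to the trusted keyrings in `/etc/apt/trusted.gpg.d/`; pinning the keyring per source is the stricter, reproducible configuration the commit message describes.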
@@ -4071,11 +4137,13 @@ Types: deb
URIs: http://deb.debian.org/debian
Suites: bullseye bullseye-updates
Components: non-free
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://deb.debian.org/debian-security
Suites: bullseye-security
Components: non-free
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
;;
bookworm)
@@ -4084,11 +4152,13 @@ Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://deb.debian.org/debian-security
Suites: bookworm-security
Components: non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
;;
trixie | sid)
@@ -4097,11 +4167,13 @@ Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
URIs: http://deb.debian.org/debian-security
Suites: trixie-security
Components: non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
EOF
;;
esac
@@ -4609,8 +4681,8 @@ EOF
# First, check if there's an old/broken repository that needs cleanup
if [[ -f /etc/apt/sources.list.d/mariadb.sources ]] || [[ -f /etc/apt/sources.list.d/mariadb.list ]]; then
local OLD_REPO_VERSION=""
OLD_REPO_VERSION=$(grep -oP 'repo/\K[0-9]+\.[0-9]+(\.[0-9]+)?' /etc/apt/sources.list.d/mariadb.sources 2>/dev/null ||
grep -oP 'repo/\K[0-9]+\.[0-9]+(\.[0-9]+)?' /etc/apt/sources.list.d/mariadb.list 2>/dev/null || echo "")
# Check if old repo points to a different version
if [[ -n "$OLD_REPO_VERSION" ]] && [[ "${OLD_REPO_VERSION%.*}" != "${MARIADB_VERSION%.*}" ]]; then
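The cleanup check above pulls the version segment after `repo/` with PCRE's `\K` match-reset, then compares major versions by stripping the last dotted component with `${var%.*}`. A self-contained sketch against a canned sources line (sample content and target version are invented):

```shell
#!/usr/bin/env bash
# Extract a MariaDB version from a repo URL and compare it to a target,
# as the old-repo cleanup check does (sample line is hypothetical).
sample='URIs: https://dlm.mariadb.com/repo/10.11/debian'
# \K discards everything matched so far, so only the version is printed.
OLD_REPO_VERSION="$(grep -oP 'repo/\K[0-9]+\.[0-9]+(\.[0-9]+)?' <<<"$sample" || echo "")"
MARIADB_VERSION="11.4.2"
changed="no"
# ${var%.*} drops the last dotted component: 10.11 -> 10, 11.4.2 -> 11.4
if [[ -n "$OLD_REPO_VERSION" ]] && [[ "${OLD_REPO_VERSION%.*}" != "${MARIADB_VERSION%.*}" ]]; then
  changed="yes"
fi
echo "$OLD_REPO_VERSION $changed"
```

Note that `\K` requires GNU grep's `-P` (PCRE) mode; plain `-E` has no equivalent.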
@@ -5510,7 +5582,7 @@ EOF
# Try to install each package individually
for pkg in $MODULE_LIST; do
[[ "$pkg" == "php${PHP_VERSION}" ]] && continue # Already installed
$STD apt install -y "$pkg" 2>/dev/null || {
msg_warn "Could not install $pkg - continuing without it"
}
@@ -6120,14 +6192,14 @@ function setup_meilisearch() {
local MAX_WAIT=120
local WAITED=0
local TASK_RESULT=""
while [[ $WAITED -lt $MAX_WAIT ]]; do
TASK_RESULT=$(curl -s "http://${MEILI_HOST}:${MEILI_PORT}/tasks/${TASK_UID}" \
-H "Authorization: Bearer ${MEILI_MASTER_KEY}" 2>/dev/null) || true
local TASK_STATUS
TASK_STATUS=$(echo "$TASK_RESULT" | grep -oP '"status":\s*"\K[^"]+' || true)
if [[ "$TASK_STATUS" == "succeeded" ]]; then
# Extract dumpUid from the completed task details
DUMP_UID=$(echo "$TASK_RESULT" | grep -oP '"dumpUid":\s*"\K[^"]+' || true)
@@ -6165,7 +6237,7 @@ function setup_meilisearch() {
local MEILI_DB_PATH
MEILI_DB_PATH=$(grep -E "^db_path\s*=" /etc/meilisearch.toml 2>/dev/null | sed 's/.*=\s*"\(.*\)"/\1/' | tr -d ' ')
MEILI_DB_PATH="${MEILI_DB_PATH:-/var/lib/meilisearch/data}"
if [[ -d "$MEILI_DB_PATH" ]] && [[ -n "$(ls -A "$MEILI_DB_PATH" 2>/dev/null)" ]]; then
local BACKUP_PATH="${MEILI_DB_PATH}.backup.$(date +%Y%m%d%H%M%S)"
msg_warn "Backing up MeiliSearch data to ${BACKUP_PATH}"
@@ -6193,12 +6265,12 @@ function setup_meilisearch() {
local DUMP_FILE="${MEILI_DUMP_DIR}/${DUMP_UID}.dump"
if [[ -f "$DUMP_FILE" ]]; then
msg_info "Importing dump: ${DUMP_FILE}"
# Start meilisearch with --import-dump flag
# This is a one-time import that happens during startup
/usr/bin/meilisearch --config-file-path /etc/meilisearch.toml --import-dump "$DUMP_FILE" &
local MEILI_PID=$!
# Wait for meilisearch to become healthy (import happens during startup)
msg_info "Waiting for MeiliSearch to import and start..."
local MAX_WAIT=300
@@ -6216,14 +6288,14 @@ function setup_meilisearch() {
sleep 3
WAITED=$((WAITED + 3))
done
# Stop the manual process
kill $MEILI_PID 2>/dev/null || true
sleep 2
# Start via systemd for proper management
systemctl start meilisearch
if systemctl is-active --quiet meilisearch; then
msg_ok "MeiliSearch migrated successfully"
else
@@ -6311,14 +6383,14 @@ EOF
MEILISEARCH_API_KEY=""
for i in {1..10}; do
MEILISEARCH_API_KEY=$(curl -s -X GET "http://${MEILISEARCH_HOST}:${MEILISEARCH_PORT}/keys" \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}" 2>/dev/null |
grep -o '"key":"[^"]*"' | head -n 1 | sed 's/"key":"//;s/"//') || true
[[ -n "$MEILISEARCH_API_KEY" ]] && break
sleep 2
done
MEILISEARCH_API_KEY_UID=$(curl -s -X GET "http://${MEILISEARCH_HOST}:${MEILISEARCH_PORT}/keys" \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}" 2>/dev/null |
grep -o '"uid":"[^"]*"' | head -n 1 | sed 's/"uid":"//;s/"//') || true
export MEILISEARCH_API_KEY
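The key extraction above scrapes JSON with `grep`/`sed` rather than requiring `jq`; a standalone sketch of the same pipeline on a canned `/keys`-style response (the JSON body is invented):

```shell
#!/usr/bin/env bash
# Pull the first "key" and "uid" values out of a Meilisearch-style
# /keys response using grep/sed only (sample JSON is hypothetical).
response='{"results":[{"uid":"abc-123","key":"s3cr3t"},{"uid":"def-456","key":"other"}]}'
# grep -o prints each match on its own line; head keeps the first,
# sed strips the field name and the trailing quote.
API_KEY="$(grep -o '"key":"[^"]*"' <<<"$response" | head -n 1 | sed 's/"key":"//;s/"//')"
API_UID="$(grep -o '"uid":"[^"]*"' <<<"$response" | head -n 1 | sed 's/"uid":"//;s/"//')"
echo "$API_KEY $API_UID"
```

This approach is fragile against reordered or escaped JSON; it trades robustness for not depending on `jq` inside a minimal container.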
@@ -7104,9 +7176,9 @@ function fetch_and_deploy_from_url() {
# Auto-detect archive type using file description
local file_desc
file_desc=$(file -b "$tmpdir/$filename")
local archive_type="unknown"
if [[ "$file_desc" =~ gzip.*compressed|gzip\ compressed\ data ]]; then
archive_type="tar"
elif [[ "$file_desc" =~ Zip.*archive|ZIP\ archive ]]; then

View File

@@ -529,9 +529,21 @@ cleanup_vmid() {
}
cleanup() {
local exit_code=$?
if [[ "$(dirs -p | wc -l)" -gt 1 ]]; then
popd >/dev/null || true
fi
# Report final telemetry status if post_to_api_vm was called but no update was sent
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
# Exited cleanly but description()/success was never called — shouldn't happen
post_update_to_api "failed" "1"
fi
fi
fi
}
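The `dirs -p | wc -l` guard in `cleanup()` only pops the directory stack when a `pushd` is actually outstanding, so the trap never fails on an empty stack; a minimal sketch:

```shell
#!/usr/bin/env bash
# Only popd when the directory stack has more than one entry,
# mirroring the guard in cleanup() above.
pushd /tmp >/dev/null
if [[ "$(dirs -p | wc -l)" -gt 1 ]]; then
  popd >/dev/null || true
fi
# After the guarded popd, only the original directory remains.
depth="$(dirs -p | wc -l)"
echo "$depth"
```

`dirs -p` prints one stack entry per line (the current directory is always entry zero), so a count greater than one means at least one `pushd` has not yet been popped.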
check_root() {

View File

@@ -19,6 +19,11 @@ EOF
}
header_info
set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-netbird-lxc" "addon"
while true; do
read -p "This will add NetBird to an existing LXC Container ONLY. Proceed(y/n)?" yn
case $yn in

View File

@@ -23,6 +23,10 @@ function msg_info() { echo -e " \e[1;36m➤\e[0m $1"; }
function msg_ok() { echo -e " \e[1;32m✔\e[0m $1"; }
function msg_error() { echo -e " \e[1;31m✖\e[0m $1"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-tailscale-lxc" "addon"
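The `declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry …` pattern used throughout these addon scripts calls the function only if sourcing the optional `api.func` actually succeeded; a sketch of the guard (function names are illustrative):

```shell
#!/usr/bin/env bash
# Call a function only when it is defined, so a failed `source` of an
# optional helper file degrades gracefully (names are hypothetical).
maybe_call() {
  # declare -f succeeds only if $1 names a defined function
  declare -f "$1" &>/dev/null && "$1"
}
greet() { echo "hello"; }
greeting="$(maybe_call greet)"
missing="$(maybe_call no_such_function || echo "skipped")"
echo "$greeting $missing"
```

Because the source line ends in `|| true` and the call is guarded, telemetry failures can never abort an otherwise working install.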
header_info
if ! command -v pveversion &>/dev/null; then

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=8080
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# HEADER

View File

@@ -42,6 +42,11 @@ function msg() {
local TEXT="$1"
echo -e "$TEXT"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "all-templates" "addon"
function validate_container_id() {
local ctid="$1"
# Check if ID is numeric

View File

@@ -28,6 +28,11 @@ HOLD="-"
CM="${GN}${CL}"
APP="Coder Code Server"
hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "coder-code-server" "addon"
set -o errexit
set -o errtrace
set -o nounset

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

tools/addon/cronmaster.sh Normal file
View File

@@ -0,0 +1,229 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/fccview/cronmaster
if ! command -v curl &>/dev/null; then
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION
# ==============================================================================
APP="CronMaster"
APP_TYPE="addon"
INSTALL_PATH="/opt/cronmaster"
CONFIG_PATH="/opt/cronmaster/.env"
SERVICE_PATH="/etc/systemd/system/cronmaster.service"
DEFAULT_PORT=3000
# ==============================================================================
# HEADER
# ==============================================================================
function header_info {
clear
cat <<"EOF"
______ __ ___ __
/ ____/________ ____ / |/ /___ ______/ /____ _____
/ / / ___/ __ \/ __ \/ /|_/ / __ `/ ___/ __/ _ \/ ___/
/ /___/ / / /_/ / / / / / / / /_/ (__ ) /_/ __/ /
\____/_/ \____/_/ /_/_/ /_/\__,_/____/\__/\___/_/
EOF
}
# ==============================================================================
# OS DETECTION
# ==============================================================================
if ! grep -qE 'ID=debian|ID=ubuntu' /etc/os-release 2>/dev/null; then
echo -e "${CROSS} Unsupported OS detected. This script only supports Debian and Ubuntu."
exit 1
fi
# ==============================================================================
# UNINSTALL
# ==============================================================================
function uninstall() {
msg_info "Uninstalling ${APP}"
systemctl disable --now cronmaster.service &>/dev/null || true
rm -f "$SERVICE_PATH"
rm -rf "$INSTALL_PATH"
rm -f "/usr/local/bin/update_cronmaster"
rm -f "$HOME/.cronmaster"
rm -f "/root/cronmaster.creds"
msg_ok "${APP} has been uninstalled"
}
# ==============================================================================
# UPDATE
# ==============================================================================
function update() {
if check_for_gh_release "cronmaster" "fccview/cronmaster"; then
msg_info "Stopping service"
systemctl stop cronmaster.service &>/dev/null || true
msg_ok "Stopped service"
msg_info "Backing up configuration"
cp "$CONFIG_PATH" /tmp/cronmaster.env.bak 2>/dev/null || true
msg_ok "Backed up configuration"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "cronmaster" "fccview/cronmaster" "prebuild" "latest" "$INSTALL_PATH" "cronmaster_*_prebuild.tar.gz"
msg_info "Restoring configuration"
cp /tmp/cronmaster.env.bak "$CONFIG_PATH" 2>/dev/null || true
rm -f /tmp/cronmaster.env.bak
msg_ok "Restored configuration"
msg_info "Starting service"
systemctl start cronmaster
msg_ok "Started service"
msg_ok "Updated successfully"
exit
fi
}
# ==============================================================================
# INSTALL
# ==============================================================================
function install() {
# Setup Node.js (only installs if not present or different version)
if command -v node &>/dev/null; then
msg_ok "Node.js already installed ($(node -v))"
else
NODE_VERSION="22" setup_nodejs
fi
fetch_and_deploy_gh_release "cronmaster" "fccview/cronmaster" "prebuild" "latest" "$INSTALL_PATH" "cronmaster_*_prebuild.tar.gz"
local AUTH_PASS
AUTH_PASS="$(openssl rand -base64 18 | cut -c1-13)"
msg_info "Creating configuration"
cat <<EOF >"$CONFIG_PATH"
NODE_ENV=production
AUTH_PASSWORD=${AUTH_PASS}
PORT=${DEFAULT_PORT}
HOSTNAME=0.0.0.0
NEXT_TELEMETRY_DISABLED=1
EOF
chmod 600 "$CONFIG_PATH"
msg_ok "Created configuration"
msg_info "Creating service"
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=CronMaster Service
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}
EnvironmentFile=${CONFIG_PATH}
ExecStart=/usr/bin/node ${INSTALL_PATH}/server.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now cronmaster
msg_ok "Created and started service"
# Create update script
msg_info "Creating update script"
ensure_usr_local_bin_persist
cat <<EOF >/usr/local/bin/update_cronmaster
#!/usr/bin/env bash
# CronMaster Update Script
type=update bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/addon/cronmaster.sh)"
EOF
chmod +x /usr/local/bin/update_cronmaster
msg_ok "Created update script (/usr/local/bin/update_cronmaster)"
# Save credentials
local CREDS_FILE="/root/cronmaster.creds"
cat <<EOF >"$CREDS_FILE"
CronMaster Credentials
======================
Password: ${AUTH_PASS}
Web UI: http://${LOCAL_IP}:${DEFAULT_PORT}
EOF
echo ""
msg_ok "${APP} is reachable at: ${BL}http://${LOCAL_IP}:${DEFAULT_PORT}${CL}"
msg_ok "Credentials saved to: ${BL}${CREDS_FILE}${CL}"
echo ""
}
# ==============================================================================
# MAIN
# ==============================================================================
header_info
ensure_usr_local_bin_persist
get_lxc_ip
# Handle type=update (called from update script)
if [[ "${type:-}" == "update" ]]; then
if [[ -d "$INSTALL_PATH" ]]; then
update
else
msg_error "${APP} is not installed. Nothing to update."
exit 1
fi
exit 0
fi
# Check if already installed
if [[ -d "$INSTALL_PATH" && -n "$(ls -A "$INSTALL_PATH" 2>/dev/null)" ]]; then
msg_warn "${APP} is already installed."
echo ""
echo -n "${TAB}Uninstall ${APP}? (y/N): "
read -r uninstall_prompt
if [[ "${uninstall_prompt,,}" =~ ^(y|yes)$ ]]; then
uninstall
exit 0
fi
echo -n "${TAB}Update ${APP}? (y/N): "
read -r update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
update
exit 0
fi
msg_warn "No action selected. Exiting."
exit 0
fi
# Fresh installation
msg_warn "${APP} is not installed."
echo ""
echo -e "${TAB}${INFO} This will install:"
echo -e "${TAB} - Node.js 22"
echo -e "${TAB} - CronMaster (prebuild)"
echo ""
echo -n "${TAB}Install ${APP}? (y/N): "
read -r install_prompt
if [[ "${install_prompt,,}" =~ ^(y|yes)$ ]]; then
install
else
msg_warn "Installation cancelled. Exiting."
exit 0
fi
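The prompts in cronmaster.sh (and the other addons) normalize answers with bash's `${var,,}` lowercase expansion before matching against `^(y|yes)$`; a standalone sketch:

```shell
#!/usr/bin/env bash
# Normalize a yes/no answer to lowercase and match it the way the
# install/update/uninstall prompts above do.
answer="YES" # stands in for what `read -r` would capture
confirmed="no"
# ${answer,,} lowercases the whole value before the regex match
if [[ "${answer,,}" =~ ^(y|yes)$ ]]; then
  confirmed="yes"
fi
echo "$confirmed"
```

The anchored regex means stray input such as "yep" or "y " is treated as a decline, which keeps the default path non-destructive.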

View File

@@ -17,6 +17,11 @@ HOLD="-"
CM="${GN}${CL}"
APP="CrowdSec"
hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "crowdsec" "addon"
set -o errexit
set -o errtrace
set -o nounset

View File

@@ -32,6 +32,10 @@ DEFAULT_PORT=8080
SRC_DIR="/"
TMP_BIN="/tmp/filebrowser.$$"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "filebrowser-quantum" "addon"
# Get primary IP
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
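The detection above parses `ip -4 route` and `ip -4 addr` output with awk; a sketch of the same field extraction against canned output, since the live commands need a configured network (the sample text is invented):

```shell
#!/usr/bin/env bash
# Extract the default interface and its IPv4 address from sample
# `ip` output, mirroring the detection above (output text is canned).
route_out='default via 192.168.1.1 dev eth0 proto dhcp metric 100'
addr_out='    inet 192.168.1.50/24 brd 192.168.1.255 scope global eth0'
# Field 5 of the default route line is the interface name.
IFACE="$(awk '/default/ {print $5; exit}' <<<"$route_out")"
# Field 2 of the inet line is addr/prefix; cut drops the prefix length.
IP="$(awk '/inet / {print $2}' <<<"$addr_out" | cut -d/ -f1 | head -n 1)"
echo "$IFACE $IP"
```

The scripts also fall back to `hostname -I | awk '{print $1}'` when this parse yields nothing, which covers containers without a default route.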

View File

@@ -5,8 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
function header_info {
clear
cat <<"EOF"
_______ __ ____
/ ____(_) /__ / __ )_________ _ __________ _____
/ /_ / / / _ \/ __ / ___/ __ \ | /| / / ___/ _ \/ ___/
@@ -29,6 +29,10 @@ INSTALL_PATH="/usr/local/bin/filebrowser"
DB_PATH="/usr/local/community-scripts/filebrowser.db"
DEFAULT_PORT=8080
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "filebrowser" "addon"
# Get first non-loopback IP & Detect primary network interface dynamically
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
@@ -38,65 +42,65 @@ IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n
# Detect OS
if [[ -f "/etc/alpine-release" ]]; then
OS="Alpine"
SERVICE_PATH="/etc/init.d/filebrowser"
PKG_MANAGER="apk add --no-cache"
elif [[ -f "/etc/debian_version" ]]; then
OS="Debian"
SERVICE_PATH="/etc/systemd/system/filebrowser.service"
PKG_MANAGER="apt-get install -y"
else
echo -e "${CROSS} Unsupported OS detected. Exiting."
exit 1
fi
header_info
function msg_info() {
local msg="$1"
echo -e "${INFO} ${YW}${msg}...${CL}"
}
function msg_ok() {
local msg="$1"
echo -e "${CM} ${GN}${msg}${CL}"
}
function msg_error() {
local msg="$1"
echo -e "${CROSS} ${RD}${msg}${CL}"
}
if [ -f "$INSTALL_PATH" ]; then
echo -e "${YW}⚠️ ${APP} is already installed.${CL}"
read -r -p "Would you like to uninstall ${APP}? (y/N): " uninstall_prompt
if [[ "${uninstall_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Uninstalling ${APP}"
if [[ "$OS" == "Debian" ]]; then
systemctl disable --now filebrowser.service &>/dev/null
rm -f "$SERVICE_PATH"
else
rc-service filebrowser stop &>/dev/null
rc-update del filebrowser &>/dev/null
rm -f "$SERVICE_PATH"
fi
rm -f "$INSTALL_PATH" "$DB_PATH"
msg_ok "${APP} has been uninstalled."
exit 0
fi
read -r -p "Would you like to update ${APP}? (y/N): " update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Updated ${APP}"
exit 0
else
echo -e "${YW}⚠️ Update skipped. Exiting.${CL}"
exit 0
fi
fi
echo -e "${YW}⚠️ ${APP} is not installed.${CL}"
@@ -105,43 +109,43 @@ PORT=${PORT:-$DEFAULT_PORT}
read -r -p "Would you like to install ${APP}? (y/n): " install_prompt
if [[ "${install_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Installing ${APP} on ${OS}"
$PKG_MANAGER wget tar curl &>/dev/null
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Installed ${APP}"
msg_info "Creating FileBrowser directory"
mkdir -p /usr/local/community-scripts
chown root:root /usr/local/community-scripts
chmod 755 /usr/local/community-scripts
touch "$DB_PATH"
chown root:root "$DB_PATH"
chmod 644 "$DB_PATH"
msg_ok "Directory created successfully"
read -r -p "Would you like to use No Authentication? (y/N): " auth_prompt
if [[ "${auth_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Configuring No Authentication"
cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config init --auth.method=noauth &>/dev/null
filebrowser config set --auth.method=noauth &>/dev/null
filebrowser users add ID 1 --perm.admin &>/dev/null
msg_ok "No Authentication configured"
else
msg_info "Setting up default authentication"
cd /usr/local/community-scripts
filebrowser config init -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser config set -a '0.0.0.0' -p "$PORT" -d "$DB_PATH" &>/dev/null
filebrowser users add admin helper-scripts.com --perm.admin --database "$DB_PATH" &>/dev/null
msg_ok "Default authentication configured (admin:helper-scripts.com)"
fi
msg_info "Creating service"
if [[ "$OS" == "Debian" ]]; then
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Filebrowser
After=network-online.target
@@ -157,9 +161,9 @@ Restart=always
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now filebrowser
else
cat <<EOF >"$SERVICE_PATH"
#!/sbin/openrc-run
command="/usr/local/bin/filebrowser"
@@ -172,14 +176,14 @@ depend() {
need net
}
EOF
chmod +x "$SERVICE_PATH"
rc-update add filebrowser default &>/dev/null
rc-service filebrowser start &>/dev/null
fi
msg_ok "Service created successfully"
echo -e "${CM} ${GN}${APP} is reachable at: ${BL}http://$IP:$PORT${CL}"
else
echo -e "${YW}⚠️ Installation skipped. Exiting.${CL}"
exit 0
fi

View File

@@ -30,6 +30,10 @@ function msg_info() { echo -e "${INFO} ${YW}$1...${CL}"; }
function msg_ok() { echo -e "${CM} ${GN}$1${CL}"; }
function msg_error() { echo -e "${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "glances" "addon"
get_lxc_ip() {
if command -v hostname >/dev/null 2>&1 && hostname -I 2>/dev/null; then
hostname -I | awk '{print $1}'

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# HEADER
@@ -104,6 +106,10 @@ function update() {
$STD npm run build
msg_ok "Built ${APP}"
msg_info "Updating service"
create_service
msg_ok "Updated service"
msg_info "Starting service"
systemctl start immich-proxy
msg_ok "Started service"
@@ -112,6 +118,27 @@ function update() {
fi
}
function create_service() {
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}/app
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/dist/index.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
}
# ==============================================================================
# INSTALL
# ==============================================================================
@@ -173,23 +200,7 @@ EOF
msg_ok "Created configuration"
msg_info "Creating service"
cat <<EOF >"$SERVICE_PATH"
[Unit]
Description=Immich Public Proxy
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=${INSTALL_PATH}
EnvironmentFile=${CONFIG_PATH}/.env
ExecStart=/usr/bin/node ${INSTALL_PATH}/app/server.js
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
create_service
systemctl enable -q --now immich-proxy
msg_ok "Created and started service"

View File

@@ -13,6 +13,7 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
@@ -29,6 +30,7 @@ DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# HEADER

View File

@@ -26,6 +26,11 @@ BFR="\\r\\033[K"
HOLD="-"
CM="${GN}${CL}"
silent() { "$@" >/dev/null 2>&1; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "netdata" "addon"
set -e
header_info
echo "Loading..."

View File

@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION

View File

@@ -27,6 +27,11 @@ HOLD="-"
CM="${GN}${CL}"
APP="OliveTin"
hostname="$(hostname)"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "olivetin" "addon"
set -e
header_info


@@ -29,6 +29,10 @@ APP="phpMyAdmin"
INSTALL_DIR_DEBIAN="/var/www/html/phpMyAdmin"
INSTALL_DIR_ALPINE="/usr/share/phpmyadmin"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "phpmyadmin" "addon"
IFACE=$(ip -4 route | awk '/default/ {print $5; exit}')
IP=$(ip -4 addr show "$IFACE" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
[[ -z "$IP" ]] && IP=$(hostname -I | awk '{print $1}')
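The three lines above derive the primary IPv4 address purely from `ip` output. The same parsing can be exercised against canned output (sample addresses invented), which also documents which awk fields are being picked:

```shell
#!/usr/bin/env bash
# Same awk/cut parsing as the phpMyAdmin add-on, run on canned `ip` output
# so it can be tested without a live network stack. Sample values invented.

route_output='default via 192.168.1.1 dev eth0 proto dhcp metric 100'
addr_output='    inet 192.168.1.50/24 brd 192.168.1.255 scope global eth0'

# Field 5 of the default-route line is the outgoing device.
iface=$(printf '%s\n' "$route_output" | awk '/default/ {print $5; exit}')
# Field 2 of the inet line is addr/prefix; cut strips the prefix length.
ip=$(printf '%s\n' "$addr_output" | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)

echo "interface=$iface ip=$ip"
```

The `hostname -I` fallback in the script covers hosts where the default route carries no usable address.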


@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION


@@ -8,11 +8,13 @@
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION


@@ -28,6 +28,11 @@ function msg_error() {
local msg="$1"
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pyenv" "addon"
if command -v pveversion >/dev/null 2>&1; then
msg_error "Can't Install on Proxmox "
exit


@@ -13,11 +13,13 @@ fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
# Enable error handling
set -Eeuo pipefail
trap 'error_handler' ERR
load_functions
init_tool_telemetry "" "addon"
# ==============================================================================
# CONFIGURATION


@@ -36,6 +36,10 @@ msg_ok() {
echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "webmin" "addon"
header_info
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Webmin Installer" --yesno "This Will Install Webmin on this LXC Container. Proceed?" 10 58

tools/headers/cronmaster Normal file

@@ -0,0 +1,6 @@
______ __ ___ __
/ ____/________ ____ / |/ /___ ______/ /____ _____
/ / / ___/ __ \/ __ \/ /|_/ / __ `/ ___/ __/ _ \/ ___/
/ /___/ / / /_/ / / / / / / / /_/ (__ ) /_/ __/ /
\____/_/ \____/_/ /_/_/ /_/\__,_/____/\__/\___/_/


@@ -31,6 +31,10 @@ HOLD=" "
CM="${GN}${CL} "
CROSS="${RD}${CL} "
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "add-iptag" "pve"
# Stop any running spinner
stop_spinner() {
if [ -n "$SPINNER_PID" ] && kill -0 "$SPINNER_PID" 2>/dev/null; then


@@ -12,6 +12,7 @@ function header_info() {
/ / / / _ \/ __ `/ __ \ / / | / /
/ /___/ / __/ /_/ / / / / / /___/ / /___
\____/_/\___/\__,_/_/ /_/ /_____/_/|_\____/
EOF
}
@@ -22,6 +23,10 @@ CM='\xE2\x9C\x94\033'
GN="\033[1;92m"
CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "clean-lxcs" "pve"
header_info
echo "Loading..."
@@ -70,10 +75,10 @@ function run_lxc_clean() {
find /var/cache -type f -delete 2>/dev/null
find /var/log -type f -delete 2>/dev/null
find /tmp -mindepth 1 -delete 2>/dev/null
apt -y --purge autoremove
apt -y autoclean
rm -rf /var/lib/apt/lists/*
apt update
fi
'
}


@@ -5,8 +5,8 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
function header_info {
  clear
  cat <<"EOF"
____ ________ ____ __ __ __ _ ____ ___
/ __ \_________ _ ______ ___ ____ _ __ / ____/ /__ ____ _____ / __ \_________ / /_ ____ _____ ___ ____/ / / /| | / / |/ /____
/ /_/ / ___/ __ \| |/_/ __ `__ \/ __ \| |/_/ / / / / _ \/ __ `/ __ \ / / / / ___/ __ \/ __ \/ __ `/ __ \/ _ \/ __ / / / | | / / /|_/ / ___/
@@ -16,62 +16,66 @@ function header_info {
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "clean-orphaned-lvm" "pve"
# Function to check for orphaned LVM volumes
function find_orphaned_lvm {
  echo -e "\n🔍 Scanning for orphaned LVM volumes...\n"
  orphaned_volumes=()
  while read -r lv vg size seg_type; do
    # Exclude system-critical LVs and Ceph OSDs
    if [[ "$lv" == "data" || "$lv" == "root" || "$lv" == "swap" || "$lv" =~ ^osd-block- ]]; then
      continue
    fi
    # Exclude thin pools (any name)
    if [[ "$seg_type" == "thin-pool" ]]; then
      continue
    fi
    container_id=$(echo "$lv" | grep -oE "[0-9]+" | head -1)
    # Check if the ID exists as a VM or LXC container
    if [ -f "/etc/pve/lxc/${container_id}.conf" ] || [ -f "/etc/pve/qemu-server/${container_id}.conf" ]; then
      continue
    fi
    orphaned_volumes+=("$lv" "$vg" "$size")
  done < <(lvs --noheadings -o lv_name,vg_name,lv_size,seg_type --separator ' ' 2>/dev/null | awk '{print $1, $2, $3, $4}')
  # Display orphaned volumes
  echo -e "❗ The following orphaned LVM volumes were found:\n"
  printf "%-25s %-10s %-10s\n" "LV Name" "VG" "Size"
  printf "%-25s %-10s %-10s\n" "-------------------------" "----------" "----------"
  for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do
    printf "%-25s %-10s %-10s\n" "${orphaned_volumes[i]}" "${orphaned_volumes[i + 1]}" "${orphaned_volumes[i + 2]}"
  done
  echo ""
}
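The skip rules inside find_orphaned_lvm can be factored into a standalone predicate and exercised without lvs or a Proxmox host. CONF_DIR below is a sketch-only override so the /etc/pve config lookups can be pointed at a temp directory:

```shell
#!/usr/bin/env bash
# The exclusion rules from find_orphaned_lvm as a standalone predicate.
# Returns 0 when the LV must be skipped. CONF_DIR is a sketch-only
# override for the /etc/pve lookups the real script performs.

CONF_DIR="${CONF_DIR:-/etc/pve}"

should_skip_lv() {
  local lv="$1" seg_type="$2" container_id
  # System-critical LVs and Ceph OSD block devices are never touched.
  if [[ "$lv" == "data" || "$lv" == "root" || "$lv" == "swap" || "$lv" =~ ^osd-block- ]]; then
    return 0
  fi
  # Thin pools are storage pools, not guest disks.
  if [[ "$seg_type" == "thin-pool" ]]; then
    return 0
  fi
  # An LV whose numeric ID matches an existing CT/VM config is in use.
  container_id=$(grep -oE "[0-9]+" <<<"$lv" | head -1)
  if [[ -f "$CONF_DIR/lxc/${container_id}.conf" || -f "$CONF_DIR/qemu-server/${container_id}.conf" ]]; then
    return 0
  fi
  return 1
}

should_skip_lv "swap" "linear" && echo "swap: skipped"
should_skip_lv "vm-100-disk-0" "linear" || echo "vm-100-disk-0: orphan candidate"
```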
# Function to delete selected volumes
function delete_orphaned_lvm {
  for ((i = 0; i < ${#orphaned_volumes[@]}; i += 3)); do
    lv="${orphaned_volumes[i]}"
    vg="${orphaned_volumes[i + 1]}"
    size="${orphaned_volumes[i + 2]}"
    read -p "❓ Do you want to delete $lv (VG: $vg, Size: $size)? [y/N]: " confirm
    if [[ "$confirm" =~ ^[Yy]$ ]]; then
      echo -e "🗑️ Deleting $lv from $vg..."
      lvremove -f "$vg/$lv"
      if [ $? -eq 0 ]; then
        echo -e "✅ Successfully deleted $lv.\n"
      else
        echo -e "❌ Failed to delete $lv.\n"
      fi
    else
      echo -e "⚠️ Skipping $lv.\n"
    fi
  done
}
# Run script


@@ -7,8 +7,8 @@
clear
if command -v pveversion >/dev/null 2>&1; then
  echo -e "⚠️ Can't Run from the Proxmox Shell"
  exit
fi
YW=$(echo "\033[33m")
BL=$(echo "\033[36m")
@@ -23,16 +23,16 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}"
APP="Home Assistant Container"
while true; do
  read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
  case $yn in
  [Yy]*) break ;;
  [Nn]*) exit ;;
  *) echo "Please answer yes or no." ;;
  esac
done
clear
function header_info {
  cat <<"EOF"
__ __ ___ _ __ __
/ / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_
/ /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/
@@ -44,35 +44,39 @@ EOF
header_info
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "container-restore" "pve"
function msg_info() {
  local msg="$1"
  echo -ne " ${HOLD} ${YW}${msg}..."
}
function msg_ok() {
  local msg="$1"
  echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
}
function msg_error() {
  local msg="$1"
  echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
if [ -z "$(ls -A /var/lib/docker/volumes/hass_config/_data/backups/)" ]; then
  msg_error "No backups found! \n"
  exit 1
fi
DIR=/var/lib/docker/volumes/hass_config/_data/restore
if [ -d "$DIR" ]; then
  msg_ok "Restore Directory Exists."
else
  mkdir -p /var/lib/docker/volumes/hass_config/_data/restore
  msg_ok "Created Restore Directory."
fi
cd /var/lib/docker/volumes/hass_config/_data/backups/
PS3="Please enter your choice: "
files="$(ls -A .)"
select filename in ${files}; do
  msg_ok "You selected ${BL}${filename}${CL}"
  break
done
msg_info "Stopping Home Assistant"
docker stop homeassistant &>/dev/null


@@ -7,8 +7,8 @@
clear
if command -v pveversion >/dev/null 2>&1; then
  echo -e "⚠️ Can't Run from the Proxmox Shell"
  exit
fi
YW=$(echo "\033[33m")
BL=$(echo "\033[36m")
@@ -23,16 +23,16 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}"
APP="Home Assistant Core"
while true; do
  read -p "This will restore ${APP} from a backup. Proceed(y/n)?" yn
  case $yn in
  [Yy]*) break ;;
  [Nn]*) exit ;;
  *) echo "Please answer yes or no." ;;
  esac
done
clear
function header_info {
  cat <<"EOF"
__ __ ___ _ __ __ ______
/ / / /___ ____ ___ ___ / | __________(_)____/ /_____ _____ / /_ / ____/___ ________
/ /_/ / __ \/ __ `__ \/ _ \ / /| | / ___/ ___/ / ___/ __/ __ `/ __ \/ __/ / / / __ \/ ___/ _ \
@@ -44,35 +44,39 @@ EOF
header_info
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "core-restore" "pve"
function msg_info() {
  local msg="$1"
  echo -ne " ${HOLD} ${YW}${msg}..."
}
function msg_ok() {
  local msg="$1"
  echo -e "${BFR} ${CM} ${GN}${msg}${CL}"
}
function msg_error() {
  local msg="$1"
  echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
if [ -z "$(ls -A /root/.homeassistant/backups/)" ]; then
  msg_error "No backups found! \n"
  exit 1
fi
DIR=/root/.homeassistant/restore
if [ -d "$DIR" ]; then
  msg_ok "Restore Directory Exists."
else
  mkdir -p /root/.homeassistant/restore
  msg_ok "Created Restore Directory."
fi
cd /root/.homeassistant/backups/
PS3="Please enter your choice: "
files="$(ls -A .)"
select filename in ${files}; do
  msg_ok "You selected ${BL}${filename}${CL}"
  break
done
msg_info "Stopping Home Assistant"
sudo service homeassistant stop


@@ -22,6 +22,11 @@ RD=$(echo "\033[01;31m")
CM='\xE2\x9C\x94\033'
GN=$(echo "\033[1;92m")
CL=$(echo "\033[m")
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "execute-lxcs" "pve"
header_info
echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Execute" --yesno "This will execute a command inside selected LXC Containers. Proceed?" 10 58
@@ -40,7 +45,6 @@ if [ $? -ne 0 ]; then
exit
fi
read -r -p "Enter here command for inside the containers: " custom_command
header_info
@@ -50,12 +54,11 @@ function execute_in() {
container=$1
name=$(pct exec "$container" hostname)
echo -e "${BL}[Info]${GN} Execute inside${BL} ${name}${GN} with output: ${CL}"
  if ! pct exec "$container" -- bash -c "command ${custom_command} >/dev/null 2>&1"; then
    echo -e "${BL}[Info]${GN} Skipping ${name} ${RD}$container has no command: ${custom_command}"
  else
    pct exec "$container" -- bash -c "${custom_command}" | tee
  fi
}
for container in $(pct list | awk '{if(NR>1) print $1}'); do


@@ -15,6 +15,11 @@ function header_info {
/___/ /_/ /_/
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "frigate-support" "pve"
header_info
while true; do
read -p "This will Prepare a LXC Container for Frigate. Proceed (y/n)?" yn


@@ -19,6 +19,10 @@ RD="\033[01;31m"
GN="\033[1;92m"
CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "fstrim" "pve"
LOGFILE="/var/log/fstrim.log"
touch "$LOGFILE"
chmod 600 "$LOGFILE"


@@ -16,6 +16,10 @@ function header_info {
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "host-backup" "pve"
# Function to perform backup
function perform_backup {
local BACKUP_PATH


@@ -29,6 +29,11 @@ BFR="\\r\\033[K"
HOLD="-"
CM="${GN}${CL}"
set -e
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "hw-acceleration" "pve"
header_info
echo "Loading..."
function msg_info() {


@@ -22,6 +22,10 @@ GN="\033[1;92m"
RD="\033[01;31m"
CL="\033[m"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "kernel-clean" "pve"
# Detect current kernel
current_kernel=$(uname -r)
available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print $2}' | grep -v "$current_kernel" | sort -V)
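The detection above filters `dpkg --list` for *-pve kernel packages, drops the running kernel, and version-sorts the rest. The same pipeline can be checked against canned dpkg output (package names below are invented samples):

```shell
#!/usr/bin/env bash
# The kernel-selection pipeline from kernel-clean, applied to canned
# `dpkg --list` output so the filtering and version sort can be verified
# on any machine. Package names below are invented samples.

current_kernel="6.8.12-4-pve"
dpkg_output='ii  proxmox-kernel-6.8.12-4-pve  6.8.12-4  amd64  kernel image
ii  proxmox-kernel-6.8.12-2-pve  6.8.12-2  amd64  kernel image
ii  proxmox-kernel-6.8.4-2-pve   6.8.4-2   amd64  kernel image
ii  vim                          9.0       amd64  editor'

# Keep kernel-*-pve packages, drop the running kernel, oldest first.
available_kernels=$(printf '%s\n' "$dpkg_output" |
  grep 'kernel-.*-pve' | awk '{print $2}' |
  grep -v "$current_kernel" | sort -V)

echo "$available_kernels"
```

`sort -V` orders 6.8.4 before 6.8.12, which plain lexical sort would get wrong.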


@@ -25,6 +25,11 @@ HOLD="-"
CM="${GN}${CL}"
current_kernel=$(uname -r)
available_kernels=$(dpkg --list | grep 'kernel-.*-pve' | awk '{print substr($2, 16, length($2)-22)}')
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "kernel-pin" "pve"
header_info
function msg_info() {


@@ -38,6 +38,10 @@ CL=$(echo "\033[m")
TAB=" "
CM="${TAB}✔️${TAB}${CL}"
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "lxc-delete" "pve"
header_info
echo "Loading..."
whiptail --backtitle "Proxmox VE Helper Scripts" --title "Proxmox VE LXC Deletion" --yesno "This will delete LXC containers. Proceed?" 10 58


@@ -29,6 +29,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "microcode" "pve"
header_info
current_microcode=$(journalctl -k | grep -i 'microcode: Current revision:' | grep -oP 'Current revision: \K0x[0-9a-f]+')
[ -z "$current_microcode" ] && current_microcode="Not found."
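The `grep -oP ... \K` extraction above can be checked against a canned journal line (revision value invented); `\K` makes GNU grep discard the matched prefix so only the hex revision is captured:

```shell
#!/usr/bin/env bash
# The microcode-revision extraction from the script above, run against a
# canned kernel log line (revision value invented) instead of journalctl.

log_line='kernel: microcode: Current revision: 0x000000f4'
current_microcode=$(printf '%s\n' "$log_line" |
  grep -i 'microcode: Current revision:' |
  grep -oP 'Current revision: \K0x[0-9a-f]+')
if [ -z "$current_microcode" ]; then current_microcode="Not found."; fi
echo "$current_microcode"
```

Note that `-P` (PCRE) is a GNU grep extension, which is a safe assumption on Proxmox hosts.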


@@ -15,6 +15,10 @@ cat <<"EOF"
EOF
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "monitor-all" "pve"
add() {
echo -e "\n IMPORTANT: Tag-Based Monitoring Enabled"
echo "Only VMs and containers with the tag 'mon-restart' will be automatically restarted by this service."
@@ -28,9 +32,9 @@ add() {
while true; do
read -p "This script will add Monitor All to Proxmox VE. Proceed (y/n)? " yn
case $yn in
  [Yy]*) break ;;
  [Nn]*) exit ;;
  *) echo "Please answer yes or no." ;;
esac
done
@@ -175,5 +179,8 @@ CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "Monitor-All f
case $CHOICE in
"Add") add ;;
"Remove") remove ;;
*)
  echo "Exiting..."
  exit 0
  ;;
esac


@@ -19,8 +19,8 @@ INFO="${TAB}${TAB}${CL}"
WARN="${TAB}⚠️${TAB}${CL}"
function header_info {
  clear
  cat <<"EOF"
_ ____________ ____ __________ ___ ____ _ __ __
/ | / / _/ ____/ / __ \/ __/ __/ /___ ____ _____/ (_)___ ____ _ / __ \(_)________ _/ /_ / /__ _____
@@ -33,6 +33,10 @@ Enhanced version supporting both e1000e and e1000 drivers
EOF
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "nic-offloading-fix" "pve"
header_info
function msg_info() { echo -e "${INFO} ${YW}${1}...${CL}"; }
@@ -42,15 +46,18 @@ function msg_warn() { echo -e "${WARN} ${YWB}${1}"; }
# Check for root privileges
if [ "$(id -u)" -ne 0 ]; then
  msg_error "Error: This script must be run as root."
  exit 1
fi
if ! command -v ethtool >/dev/null 2>&1; then
  msg_info "Installing ethtool"
  apt-get update &>/dev/null
  apt-get install -y ethtool &>/dev/null || {
    msg_error "Failed to install ethtool. Exiting."
    exit 1
  }
  msg_ok "ethtool installed successfully"
fi
# Get list of network interfaces using Intel e1000e or e1000 drivers
@@ -60,95 +67,95 @@ COUNT=0
msg_info "Searching for Intel e1000e and e1000 interfaces"
for device in /sys/class/net/*; do
  interface="$(basename "$device")" # or adjust the rest of the usages below, as mostly you'll use the path anyway
  # Skip loopback interface and virtual interfaces
  if [[ "$interface" != "lo" ]] && [[ ! "$interface" =~ ^(tap|fwbr|veth|vmbr|bonding_masters) ]]; then
    # Check if the interface uses the e1000e or e1000 driver
    driver=$(basename $(readlink -f /sys/class/net/$interface/device/driver 2>/dev/null) 2>/dev/null)
    if [[ "$driver" == "e1000e" ]] || [[ "$driver" == "e1000" ]]; then
      # Get MAC address for additional identification
      mac=$(cat /sys/class/net/$interface/address 2>/dev/null)
      INTERFACES+=("$interface" "Intel $driver NIC ($mac)")
      ((COUNT++))
    fi
  fi
done
# Check if any Intel e1000e/e1000 interfaces were found
if [ ${#INTERFACES[@]} -eq 0 ]; then
  whiptail --title "Error" --msgbox "No Intel e1000e or e1000 network interfaces found!" 10 60
  msg_error "No Intel e1000e or e1000 network interfaces found! Exiting."
  exit 1
fi
msg_ok "Found ${BL}$COUNT${GN} Intel e1000e/e1000 interfaces"
# Create a checklist for interface selection with all interfaces initially checked
INTERFACES_CHECKLIST=()
for ((i = 0; i < ${#INTERFACES[@]}; i += 2)); do
  INTERFACES_CHECKLIST+=("${INTERFACES[i]}" "${INTERFACES[i + 1]}" "ON")
done
# Show interface selection checklist
SELECTED_INTERFACES=$(whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Network Interfaces" \
  --separate-output --checklist "Select Intel e1000e/e1000 network interfaces\n(Space to toggle, Enter to confirm):" 15 80 6 \
  "${INTERFACES_CHECKLIST[@]}" 3>&1 1>&2 2>&3)
exitstatus=$?
if [ $exitstatus != 0 ]; then
  msg_info "User canceled. Exiting."
  exit 0
fi
# Check if any interfaces were selected
if [ -z "$SELECTED_INTERFACES" ]; then
  msg_error "No interfaces selected. Exiting."
  exit 0
fi
# Convert the selected interfaces into an array
readarray -t INTERFACE_ARRAY <<<"$SELECTED_INTERFACES"
# Show the number of selected interfaces
INTERFACE_COUNT=${#INTERFACE_ARRAY[@]}
# Print selected interfaces with their driver types
for iface in "${INTERFACE_ARRAY[@]}"; do
  driver=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
  msg_ok "Selected interface: ${BL}$iface${GN} (${BL}$driver${GN})"
done
# Ask for confirmation with the list of selected interfaces
CONFIRMATION_MSG="You have selected the following interface(s):\n\n"
for iface in "${INTERFACE_ARRAY[@]}"; do
  SPEED=$(cat /sys/class/net/$iface/speed 2>/dev/null || echo "Unknown")
  MAC=$(cat /sys/class/net/$iface/address 2>/dev/null)
  DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
  CONFIRMATION_MSG+="- $iface (Driver: $DRIVER, MAC: $MAC, Speed: ${SPEED}Mbps)\n"
done
CONFIRMATION_MSG+="\nThis will create systemd service(s) to disable offloading features.\n\nProceed?"
if ! whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --title "Confirmation" \
  --yesno "$CONFIRMATION_MSG" 20 80; then
  msg_info "User canceled. Exiting."
  exit 0
fi
# Loop through all selected interfaces and create services for each
for SELECTED_INTERFACE in "${INTERFACE_ARRAY[@]}"; do
  # Get the driver type for this specific interface
  DRIVER=$(basename $(readlink -f /sys/class/net/$SELECTED_INTERFACE/device/driver 2>/dev/null) 2>/dev/null)
  # Create service name for this interface
  SERVICE_NAME="disable-nic-offload-$SELECTED_INTERFACE.service"
  SERVICE_PATH="/etc/systemd/system/$SERVICE_NAME"
  # Create the service file with driver-specific optimizations
  msg_info "Creating systemd service for interface: ${BL}$SELECTED_INTERFACE${YW} (${BL}$DRIVER${YW})"
  # Start with the common part of the service file
  cat >"$SERVICE_PATH" <<EOF
[Unit]
Description=Disable NIC offloading for Intel $DRIVER interface $SELECTED_INTERFACE
After=network.target
@@ -163,45 +170,49 @@ RemainAfterExit=true
WantedBy=multi-user.target
EOF
  # Check if service file was created successfully
  if [ ! -f "$SERVICE_PATH" ]; then
    whiptail --title "Error" --msgbox "Failed to create service file for $SELECTED_INTERFACE!" 10 50
    msg_error "Failed to create service file for $SELECTED_INTERFACE! Skipping to next interface."
    continue
  fi
  # Configure this service
  {
    echo "25"
    sleep 0.2
    # Reload systemd to recognize the new service
    systemctl daemon-reload
    echo "50"
    sleep 0.2
    # Start the service
    systemctl start "$SERVICE_NAME"
    echo "75"
    sleep 0.2
    # Enable the service to start on boot
    systemctl enable "$SERVICE_NAME"
    echo "100"
    sleep 0.2
  } | whiptail --backtitle "Intel e1000e/e1000 NIC Offloading Disabler" --gauge "Configuring service for $SELECTED_INTERFACE..." 10 80 0
  # Individual service status
  if systemctl is-active --quiet "$SERVICE_NAME"; then
    SERVICE_STATUS="Active"
  else
    SERVICE_STATUS="Inactive"
  fi
  if systemctl is-enabled --quiet "$SERVICE_NAME"; then
    BOOT_STATUS="Enabled"
  else
    BOOT_STATUS="Disabled"
  fi
  # Show individual service results
  msg_ok "Service for ${BL}$SELECTED_INTERFACE${GN} (${BL}$DRIVER${GN}) created and enabled!"
  msg_info "${TAB}Service: ${BL}$SERVICE_NAME${YW}"
  msg_info "${TAB}Status: ${BL}$SERVICE_STATUS${YW}"
  msg_info "${TAB}Start on boot: ${BL}$BOOT_STATUS${YW}"
done
# Prepare summary of all interfaces
@@ -209,22 +220,22 @@ SUMMARY_MSG="Services created successfully!\n\n"
SUMMARY_MSG+="Configured Interfaces:\n"
for iface in "${INTERFACE_ARRAY[@]}"; do
  SERVICE_NAME="disable-nic-offload-$iface.service"
  DRIVER=$(basename $(readlink -f /sys/class/net/$iface/device/driver 2>/dev/null) 2>/dev/null)
  if systemctl is-active --quiet "$SERVICE_NAME"; then
    SVC_STATUS="Active"
  else
    SVC_STATUS="Inactive"
  fi
  if systemctl is-enabled --quiet "$SERVICE_NAME"; then
    BOOT_SVC_STATUS="Enabled"
  else
    BOOT_SVC_STATUS="Disabled"
  fi
  SUMMARY_MSG+="- $iface ($DRIVER): $SVC_STATUS, Boot: $BOOT_SVC_STATUS\n"
done
# Show summary results
@@ -236,8 +247,8 @@ msg_ok "Intel e1000e/e1000 optimization complete for ${#INTERFACE_ARRAY[@]} inte
echo ""
msg_info "Verification commands:"
for iface in "${INTERFACE_ARRAY[@]}"; do
  echo -e "${TAB}${BL}ethtool -k $iface${CL} ${YW}# Check offloading status${CL}"
  echo -e "${TAB}${BL}systemctl status disable-nic-offload-$iface.service${CL} ${YW}# Check service status${CL}"
done
exit 0
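The per-interface loop above boils down to templating one oneshot service per NIC. A testable sketch that writes to a temp directory instead of /etc/systemd/system and skips systemctl; the ExecStart line is an assumption, since the heredoc body between the [Unit] section and WantedBy is not fully shown in this diff:

```shell
#!/usr/bin/env bash
# Minimal, testable version of the unit-file generation loop. Writes to a
# temp dir, does not call systemctl. The ExecStart invocation is a
# plausible sketch; the real script's ethtool flags may differ.
set -euo pipefail

unit_dir=$(mktemp -d)

write_offload_unit() {
  local iface="$1" driver="$2"
  local path="$unit_dir/disable-nic-offload-$iface.service"
  cat >"$path" <<EOF
[Unit]
Description=Disable NIC offloading for Intel $driver interface $iface
After=network.target

[Service]
Type=oneshot
# Assumed ethtool invocation; the real script's flags may differ.
ExecStart=/usr/sbin/ethtool -K $iface gso off gro off tso off
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
EOF
  echo "$path"
}

unit=$(write_offload_unit "eth0" "e1000e")
grep -q 'Description=Disable NIC offloading for Intel e1000e interface eth0' "$unit" && echo "unit written: $unit"
```

Type=oneshot with RemainAfterExit=true keeps the unit "active" after the one-shot ethtool run, which is what the status checks in the script rely on.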


@@ -44,6 +44,10 @@ msg_error() {
echo -e "${BFR} ${CROSS} ${RD}${msg}${CL}"
}
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs3-upgrade" "pve"
start_routines() {
header_info
CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 2 BACKUP" --menu "\nMake a backup of /etc/proxmox-backup to ensure that in the worst case, any relevant configuration can be recovered?" 14 58 2 \


@@ -32,6 +32,10 @@ msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs4-upgrade" "pve"
start_routines() {
header_info
CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "PBS 3 BACKUP" --menu \


@@ -28,6 +28,10 @@ CM="${GN}✓${CL}"
CROSS="${RD}${CL}"
msg_info() { echo -ne " ${HOLD} ${YW}$1..."; }
# Telemetry
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func) 2>/dev/null || true
declare -f init_tool_telemetry &>/dev/null && init_tool_telemetry "pbs-microcode" "pve"
msg_ok() { echo -e "${BFR} ${CM} ${GN}$1${CL}"; }
msg_error() { echo -e "${BFR} ${CROSS} ${RD}$1${CL}"; }

Some files were not shown because too many files have changed in this diff Show More