Compare commits


92 Commits

Author SHA1 Message Date
CanbiZ (MickLesk)
7ba4e5dbc9 fix(tools): auto-detect binary vs armored GPG keys in setup_deb822_repo
The UniFi GPG key at dl.ui.com/unifi/unifi-repo.gpg is already in binary
format. setup_deb822_repo unconditionally ran gpg --dearmor which expects
ASCII-armored input, corrupting binary keys and causing apt to fail with
'Unable to locate package unifi'.

setup_deb822_repo now downloads the key to a temp file first and uses
the file command to detect whether it is already a binary PGP/GPG key.
Binary keys are copied directly; armored keys are dearmored as before.

This also reverts unifi-install.sh back to using setup_deb822_repo for
consistency with all other install scripts.
2026-02-12 16:47:06 +01:00
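The detection step described in the commit above can be sketched as follows. The helper names and keyring paths here are illustrative; the real `setup_deb822_repo` uses `file(1)` on the temp download, while this sketch keys off the ASCII armor header, which is equivalent for deciding whether `--dearmor` is needed:

```shell
# Illustrative sketch (function names and paths assumed, not from the repo).
# Armored keys start with an ASCII header; binary keys do not, and running
# gpg --dearmor on an already-binary key corrupts it.
detect_key_format() {
  if grep -q -- '-----BEGIN PGP PUBLIC KEY BLOCK-----' "$1" 2>/dev/null; then
    echo armored # needs gpg --dearmor before apt can use it
  else
    echo binary  # copy as-is into the keyring directory
  fi
}

install_repo_key() { # $1 = key URL, $2 = destination keyring file
  key_tmp="$(mktemp)"
  curl -fsSL "$1" -o "$key_tmp"
  if [ "$(detect_key_format "$key_tmp")" = armored ]; then
    gpg --dearmor -o "$2" "$key_tmp"
  else
    install -m 0644 "$key_tmp" "$2"
  fi
  rm -f "$key_tmp"
}
```

With this in place the UniFi key (already binary) takes the copy branch instead of being corrupted by an unconditional dearmor.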
community-scripts-pr-app[bot]
79fd0d1dda Update CHANGELOG.md (#11839)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 13:51:22 +00:00
community-scripts-pr-app[bot]
280778d53b Update CHANGELOG.md (#11838)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 13:51:13 +00:00
CanbiZ (MickLesk)
1c3a3107f1 Deluge: add python3-setuptools as dep (#11833)
* fix(deluge): add python3-setuptools for pkg_resources on Python 3.13

* Ensure dependencies before updating Deluge
2026-02-12 14:50:56 +01:00
CanbiZ (MickLesk)
ee2c3a20ee fix(dispatcharr): migrate from requirements.txt to uv sync (pyproject.toml) (#11831) 2026-02-12 14:50:42 +01:00
CanbiZ (MickLesk)
5ee4f4e34b fix(api): prevent duplicate post_to_api submissions (#11836)
Add POST_TO_API_DONE idempotency guard to post_to_api() to prevent
the same telemetry record from being submitted twice with the same
RANDOM_UUID. This mirrors the existing POST_UPDATE_DONE pattern in
post_update_to_api().

post_to_api() is called twice in build.func:
- After storage validation (inside CONTAINER_STORAGE check)
- After create_lxc_container() completes

When both execute, the second call fails with a random_id uniqueness
violation on PocketBase, generating server-side errors.
2026-02-12 13:45:32 +01:00
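The guard pattern from the commit above is small enough to show in full. The variable and function names come from the commit message; the body is a stand-in, since the real `post_to_api()` builds and sends the telemetry payload:

```shell
# Idempotency guard: only the first invocation in a run actually posts,
# so the second build.func call cannot trigger a random_id uniqueness
# violation on the backend. POSTS is a stand-in for the real HTTP POST.
post_to_api() {
  [ -n "${POST_TO_API_DONE:-}" ] && return 0
  POST_TO_API_DONE=1
  POSTS=$((${POSTS:-0} + 1))
}
```

This mirrors the existing `POST_UPDATE_DONE` pattern: the flag lives in the shell session, so both call sites in build.func can stay in place unchanged.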
community-scripts-pr-app[bot]
4b0e893bf1 Update CHANGELOG.md (#11834)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 12:26:51 +00:00
CanbiZ (MickLesk)
3676157a7c Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection (#11810)
* fix(debian-13-vm): disable systemd-firstboot and add autologin via virt-customize

Newer Debian 13 (Trixie) nocloud images (Jan 29+) have systemd-firstboot
enabled, which blocks the boot process by prompting for timezone and root
password on the serial console. The noVNC console stalls after auditd checks.

Fix by using virt-customize (like docker-vm.sh) to:
- Disable systemd-firstboot.service and mask it
- Pre-seed timezone to Etc/UTC
- Configure autologin on ttyS0 and tty1 for nocloud images
- Set hostname and clear machine-id for unique IDs
- Install libguestfs-tools if not present

Fixes #11807

* feat(debian-13-vm): add cloud-init selection dialog for default and advanced settings

Add select_cloud_init() function consistent with docker-vm.sh that prompts
the user to choose between Cloud-Init (genericcloud image) and nocloud
(with auto-login) in both default and advanced settings.

Previously, default settings hardcoded CLOUD_INIT=no without asking.
2026-02-12 13:26:20 +01:00
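The virt-customize steps listed above can be sketched as an argument list. The flags (`--run-command`, `--timezone`, `--hostname`, `--truncate`) are real virt-customize options, but the exact commands and ordering here are assumptions; nothing is executed in this sketch:

```shell
# Print the virt-customize arguments for the first-boot fixes, one per line.
# $1 = hostname. The real script would pass these to:
#   virt-customize -q -a "$FILE" <args...>
firstboot_args() {
  printf '%s\n' \
    --run-command 'systemctl mask systemd-firstboot.service' \
    --timezone 'Etc/UTC' \
    --hostname "$1" \
    --truncate '/etc/machine-id'
}
```

Masking `systemd-firstboot.service` removes the serial-console timezone/root-password prompt that blocked boot, and truncating `/etc/machine-id` makes each cloned VM regenerate a unique ID on first start.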
CanbiZ (MickLesk)
e437e50882 Merge branch 'main' of https://github.com/community-scripts/ProxmoxVE 2026-02-12 13:25:49 +01:00
CanbiZ (MickLesk)
6e9a94b46d quickfix: missing fields for db 2026-02-12 13:25:42 +01:00
community-scripts-pr-app[bot]
137ae6775e chore: update github-versions.json (#11832)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 12:15:25 +00:00
community-scripts-pr-app[bot]
dd8c998d43 Update CHANGELOG.md (#11827)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 11:48:41 +00:00
Slaviša Arežina
b215bac01d Update database generation command in install script (#11825) 2026-02-12 12:48:16 +01:00
community-scripts-pr-app[bot]
c3cd9df12f Update CHANGELOG.md (#11826)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 10:55:38 +00:00
CanbiZ (MickLesk)
406d53ea2f core: remove old Go API and extend misc/api.func with new backend (#11822)
* Remove Go API and extend misc/api.func

Delete the Go-based API (api/main.go, api/go.mod, api/go.sum, api/.env.example) and significantly enhance misc/api.func. The shell telemetry file now includes telemetry configuration, repo source detection, GPU/CPU/RAM detection, expanded explain_exit_code mappings, and refactored post_to_api/post_to_api_vm to send non-blocking telemetry to telemetry.community-scripts.org while respecting DIAGNOSTICS/DEV_MODE and adding richer metadata (cpu/gpu/ram/repo_source). Also updates header/author info and improves privacy/robustness and error handling.

* Start install timer and refine error reporting

Call start_install_timer during build startup and overhaul exit/error reporting.

Changes:
- Invoke start_install_timer early in misc/build.func to track install duration.
- Update api_exit_script comments to reference PocketBase/api.func and adjust ERR/SIGINT/SIGTERM traps to post numeric exit codes (use $? / 130 / 143) instead of command strings.
- Replace the previous explain_exit_code implementation with a conditional fallback: only define explain_exit_code if not already provided (api.func is the canonical source). Expanded and reorganized exit code mappings (curl, timeout, systemd, Node/Python/Postgres/MySQL/MongoDB, Proxmox, etc.).
- In error_handler: stop echoing the container log path (host shows combined log), and post a "failed" update to the API with the exit code before offering container cleanup.

Rationale: these changes make telemetry more consistent and robust (numeric codes), provide a safe fallback for exit descriptions when api.func isn't loaded, and ensure failures are reported to the API prior to any automatic cleanup.

* Report install start/failure to telemetry API

Add telemetry hooks in misc/build.func: call post_to_api at installation start to capture early or immediately-failing installs, and call post_update_to_api with status "failed" and the install exit code when a container installation fails. This improves visibility into install failures for monitoring/telemetry.
2026-02-12 11:55:13 +01:00
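The conditional-fallback pattern described in the second bullet list above can be sketched like this. The exit-code texts are illustrative, not the script's actual wording, and the mapping is abbreviated:

```shell
# Define explain_exit_code only if api.func (the canonical source) has not
# already provided it; the fallback covers the common numeric codes that
# the ERR/SIGINT/SIGTERM traps now post ($?, 130, 143).
if ! type explain_exit_code >/dev/null 2>&1; then
  explain_exit_code() {
    case "$1" in
      127) echo "command not found" ;;
      130) echo "terminated by SIGINT (128+2)" ;;
      143) echo "terminated by SIGTERM (128+15)" ;;
      *)   echo "exit code $1" ;;
    esac
  }
fi
```

Guarding the definition this way keeps api.func authoritative while making sure a human-readable description exists even when only build.func is loaded.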
community-scripts-pr-app[bot]
c8b278f26f chore: update github-versions.json (#11821)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 06:25:48 +00:00
community-scripts-pr-app[bot]
a6f0d7233e Update CHANGELOG.md (#11818)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 00:21:24 +00:00
community-scripts-pr-app[bot]
079a436286 chore: update github-versions.json (#11817)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 00:21:01 +00:00
community-scripts-pr-app[bot]
1c2ed6ff10 Update CHANGELOG.md (#11813)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 21:30:42 +00:00
Chris
c1d7f23a17 [Feature] OpenCloud: support PosixFS Collaborative Mode (#11806)
* [Feature] OpenCloud: support PosixFS Collaborative Mode

* Use ensure_dependencies in opencloud.sh
2026-02-11 22:30:09 +01:00
community-scripts-pr-app[bot]
fdbe48badb Update CHANGELOG.md (#11812)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 21:05:59 +00:00
CanbiZ (MickLesk)
d09dd0b664 dispatcharr: include port 9191 in success-message (#11808) 2026-02-11 22:05:32 +01:00
community-scripts-pr-app[bot]
cba6717469 Update CHANGELOG.md (#11809)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 19:48:18 +00:00
Tom Frenzel
0a12acf6bd fix: make donetick 0.1.71 compatible (#11804) 2026-02-11 20:47:52 +01:00
community-scripts-pr-app[bot]
4e4defa236 chore: update github-versions.json (#11805)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 18:22:35 +00:00
community-scripts-pr-app[bot]
c15f69712f Update CHANGELOG.md (#11802)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 15:39:16 +00:00
ls-root
b53a731c42 fix(core): respect EDITOR variable for config editing (#11693)
- Replace hardcoded nano with ${EDITOR:-nano} for file editing
- Default to nano if no EDITOR environment variable is set
2026-02-11 16:38:46 +01:00
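The fix above is a one-line parameter-expansion default; a minimal sketch (function names are illustrative):

```shell
# Resolve the editor: honor $EDITOR if set and non-empty, else fall back to nano.
editor_cmd() { printf '%s\n' "${EDITOR:-nano}"; }

# Open a config file with the resolved editor.
config_edit() { "${EDITOR:-nano}" "$1"; }
```

`${EDITOR:-nano}` expands to `$EDITOR` when it is set and non-empty, and to `nano` otherwise, so users of vim, micro, etc. get their preferred editor without any configuration change for the default case.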
community-scripts-pr-app[bot]
ddfe9166a1 Update .app files (#11793)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-11 13:33:31 +01:00
community-scripts-pr-app[bot]
1b1c84ad4f Update CHANGELOG.md (#11797)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 12:33:15 +00:00
CanbiZ (MickLesk)
db69c7b0f8 fix(kasm): Support new version URL format without hash suffix (#11787)
Kasm changed their release URL format starting with v1.18.1:
- Old format: kasm_release_1.18.0.09f70a.tar.gz (with hash)
- New format: kasm_release_1.18.1.tar.gz (without hash)

The script now tries both detection methods:
1. First tries to find URL with hash suffix (old format)
2. Falls back to detecting version from service_images URLs and
   constructing the new URL format

This fixes the update detection for Kasm v1.18.1 and future versions.

Fixes #11785
2026-02-11 13:32:48 +01:00
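A hypothetical helper for the version-parsing half of the fix above, handling both filename styles from the commit message (the real script also scrapes the download page for the URLs themselves):

```shell
# Extract the release version from a Kasm tarball URL, whether or not the
# filename carries the old hex hash suffix.
#   kasm_release_1.18.0.09f70a.tar.gz -> 1.18.0   (old format)
#   kasm_release_1.18.1.tar.gz        -> 1.18.1   (new format)
kasm_version_from_url() {
  basename "$1" |
    sed -E 's/^kasm_release_([0-9]+\.[0-9]+\.[0-9]+)(\.[0-9a-f]+)?\.tar\.gz$/\1/'
}
```

Making the hash group optional in one pattern means the update check keeps working for v1.18.1 and later without a second code path for the filename itself.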
community-scripts-pr-app[bot]
53b3b4bf9f chore: update github-versions.json (#11796)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 12:17:32 +00:00
community-scripts-pr-app[bot]
8fadcc0130 Update CHANGELOG.md (#11794)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 11:27:00 +00:00
community-scripts-pr-app[bot]
5aff8dc2f1 Update CHANGELOG.md (#11792)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 11:26:39 +00:00
community-scripts-pr-app[bot]
e7ed841361 Update date in json (#11791)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-11 11:26:32 +00:00
push-app-to-main[bot]
7e49c222e5 Draw.io (#11788)
Co-authored-by: push-app-to-main[bot] <203845782+push-app-to-main[bot]@users.noreply.github.com>
2026-02-11 12:26:14 +01:00
community-scripts-pr-app[bot]
9f31012598 Update CHANGELOG.md (#11786)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 10:52:47 +00:00
Slaviša Arežina
811062f958 remove Torch (#11783) 2026-02-11 11:52:20 +01:00
community-scripts-pr-app[bot]
893b0bfb4a Update CHANGELOG.md (#11784)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 09:34:25 +00:00
Romain PINSOLLE
f34f994560 Snowshare: fix update script (#11726)
* Snowshare: fix update error

* Implement upload backup and restore in update script

Added backup and restore functionality for uploads during the Snowshare update process.

---------

Co-authored-by: CanbiZ (MickLesk) <47820557+MickLesk@users.noreply.github.com>
2026-02-11 10:33:55 +01:00
community-scripts-pr-app[bot]
216b389635 chore: update github-versions.json (#11781)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 06:26:01 +00:00
community-scripts-pr-app[bot]
d062baf8c9 Update CHANGELOG.md (#11780)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 06:25:27 +00:00
Tiago Noronha
e09e244c3d Fix formatting in kutt.json notes section (#11774) 2026-02-11 07:25:02 +01:00
community-scripts-pr-app[bot]
2645f4cf4d Update CHANGELOG.md (#11779)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 00:26:43 +00:00
community-scripts-pr-app[bot]
a0b55b6934 chore: update github-versions.json (#11778)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 00:26:14 +00:00
community-scripts-pr-app[bot]
b263dc25fe Update CHANGELOG.md (#11777)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 22:05:48 +00:00
Chris
ac308c931e Refactor: Slskd & Soularr (#11674) 2026-02-10 23:04:56 +01:00
Chris
a16dfb6d82 Immich: Pin version to 2.5.6 (#11775) 2026-02-10 22:43:15 +01:00
community-scripts-pr-app[bot]
63e9bc3729 Update CHANGELOG.md (#11773)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 19:55:15 +00:00
Slaviša Arežina
3735f9251b Fix setuptools (#11772) 2026-02-10 20:54:30 +01:00
community-scripts-pr-app[bot]
fc2559c702 Update CHANGELOG.md (#11770)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 18:31:37 +00:00
carlosmaroot
5f2d463408 feat: enhance backup notes with guest name and improve whitespace handling and content line matching in container parsing. (#11752) 2026-02-10 19:30:54 +01:00
community-scripts-pr-app[bot]
69e0dc6968 chore: update github-versions.json (#11769)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 18:25:11 +00:00
community-scripts-pr-app[bot]
fccb8a923a Update CHANGELOG.md (#11766)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 14:27:13 +00:00
CanbiZ (MickLesk)
53dbb9d705 fix(elementsynapse): prevent systemd invoke failure during apt install in LXC (#11758) 2026-02-10 15:26:22 +01:00
community-scripts-pr-app[bot]
236c5296b8 Update CHANGELOG.md (#11764)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 14:19:46 +00:00
CanbiZ (MickLesk)
76c7e3a67f fix(workflow): include addon scripts in github-versions extraction (#11757) 2026-02-10 15:19:09 +01:00
community-scripts-pr-app[bot]
4dbb139c60 Update CHANGELOG.md (#11759)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 12:33:09 +00:00
BirdMakingStuff
c581704fdd Snowshare: fix typo in config file path on website (#11754)
* fix: Fixed typo in snowshare systemd service

* Revert "fix: Fixed typo in snowshare systemd service"

This reverts commit cce1449caa.

* update config_path of snowshare.json instead

* fix formatting issue
2026-02-10 13:32:42 +01:00
community-scripts-pr-app[bot]
e0641d1573 chore: update github-versions.json (#11756)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 12:19:10 +00:00
community-scripts-pr-app[bot]
058e860f6d Update .app files (#11751)
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2026-02-10 08:27:06 +01:00
community-scripts-pr-app[bot]
2b87aa6d52 Update CHANGELOG.md (#11750)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 07:24:02 +00:00
push-app-to-main[bot]
9640caf7bc paperless-exporter (#11737) 2026-02-10 08:23:33 +01:00
community-scripts-pr-app[bot]
fc044a73ca chore: update github-versions.json (#11749)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 06:27:33 +00:00
community-scripts-pr-app[bot]
c97ccbe9bb Update CHANGELOG.md (#11747)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 00:26:38 +00:00
community-scripts-pr-app[bot]
5f69d8c315 chore: update github-versions.json (#11746)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-10 00:26:07 +00:00
community-scripts-pr-app[bot]
e8cc7ce8ff chore: update github-versions.json (#11741)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 18:20:11 +00:00
CanbiZ (MickLesk)
1357a6f26e fix(msg_menu): redirect menu display to /dev/tty to prevent capture in command substitution 2026-02-09 17:12:56 +01:00
community-scripts-pr-app[bot]
d2da5af858 Update CHANGELOG.md (#11736)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 14:54:50 +00:00
CanbiZ (MickLesk)
14755d5efe fix: add --clear to uv venv calls for uv 0.10 compatibility (#11723)
uv 0.10 requires the --clear flag to overwrite existing virtual environments.
Without it, update scripts fail when the venv already exists.

Affected: 13 ct/ update scripts, 25 install/ scripts, glances addon
2026-02-09 15:54:22 +01:00
community-scripts-pr-app[bot]
927c3a7c48 Update CHANGELOG.md (#11735)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 14:53:53 +00:00
CanbiZ (MickLesk)
9ed365e9eb fix(koillection): add missing setup_composer in update script (#11734) 2026-02-09 15:53:21 +01:00
community-scripts-pr-app[bot]
d5cdfc7405 Update CHANGELOG.md (#11732)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 13:16:31 +00:00
Slaviša Arežina
3d2bc05092 Refactor: FileFlows (#11108)
* Refactor

* Apply suggestion from @tremor021

* Apply suggestion from @tremor021
2026-02-09 14:16:04 +01:00
CanbiZ (MickLesk)
a330afde03 fix(umlautadaptarr): use release appsettings.json instead of hardcoded copy (#11725)
The install script overwrote the correct appsettings.json shipped in the
release archive with a hardcoded copy that was missing newer required
fields (ApiKey, ProxyPort, EnableChangedTitleCache) and had structural
differences (Lidarr/Readarr as arrays instead of objects), causing the
service to fail on startup.

- Remove hardcoded appsettings.json from install script (release archive
  already ships the correct version)
- Backup and restore appsettings.json during updates to preserve user
  configuration

Closes #11665
2026-02-09 14:15:29 +01:00
community-scripts-pr-app[bot]
f19bc7722b Update CHANGELOG.md (#11731)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 13:12:46 +00:00
CanbiZ (MickLesk)
03571fb26c Refactor: wger (#11722)
* Refactor wger installation script for new services

Updated installation script for wger, changing dependencies and configuration for PostgreSQL, Gunicorn, and Celery. Adjusted paths and service configurations for better compatibility.

* Fix license URL in wger-install.sh

* Fix resource defaults and enhance update process

Updated default resource values and improved backup and restore process for wger installation.

* add json

* Update ct/wger.sh

Co-authored-by: Slaviša Arežina <58952836+tremor021@users.noreply.github.com>

* Update wger.sh

* Update install/wger-install.sh

Co-authored-by: Slaviša Arežina <58952836+tremor021@users.noreply.github.com>

---------

Co-authored-by: Slaviša Arežina <58952836+tremor021@users.noreply.github.com>
2026-02-09 14:12:22 +01:00
community-scripts-pr-app[bot]
c58ca1a70a Update CHANGELOG.md (#11730)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 13:12:15 +00:00
CanbiZ (MickLesk)
157e69b365 fix(addons): ensure curl is installed before use in all addon scripts (#11718) 2026-02-09 14:11:51 +01:00
community-scripts-pr-app[bot]
ee3e53a1a2 Update CHANGELOG.md (#11729)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 13:11:35 +00:00
CanbiZ (MickLesk)
c2cb89ddc5 fix(netbird): add systemd ordering to start after Docker (#11716)
When Docker is installed in the same LXC, Docker sets the FORWARD chain
policy to DROP on startup. If Netbird starts before Docker finishes
initializing its iptables rules, Docker overrides the Netbird routing
rules, causing traffic routing to fail despite the tunnel being up.

Add a systemd drop-in override that ensures netbird.service starts after
docker.service (only if Docker is installed). This prevents the race
condition and ensures correct iptables ordering after reboot.

Closes #11354
2026-02-09 14:11:07 +01:00
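The drop-in described above can be sketched as below. The commit only guarantees an ordering dependency, so the override contains just `After=`; the drop-in filename is assumed, and the directory is a parameter here so the sketch is testable without root (the real path would be `/etc/systemd/system/netbird.service.d`):

```shell
# Write a systemd drop-in so netbird.service starts only after docker.service
# has finished setting up its iptables rules. $1 = drop-in directory.
write_netbird_override() {
  mkdir -p "$1"
  cat >"$1/10-after-docker.conf" <<'EOF'
[Unit]
After=docker.service
EOF
}

# In the install script this would be gated on Docker being present, e.g.:
#   command -v docker >/dev/null 2>&1 &&
#     write_netbird_override /etc/systemd/system/netbird.service.d &&
#     systemctl daemon-reload
```

`After=` alone does not pull Docker in; it only delays Netbird when both units are scheduled, which is exactly the race the commit fixes.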
community-scripts-pr-app[bot]
d21df736fd chore: update github-versions.json (#11727)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 12:17:04 +00:00
CanbiZ (MickLesk)
59e27bfc8a hotfix nginxui remove jwt 2026-02-09 11:39:46 +01:00
community-scripts-pr-app[bot]
43d76b6ea7 Update CHANGELOG.md (#11721)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 10:05:55 +00:00
CanbiZ (MickLesk)
7715a02f05 remove whiptail from update scripts for unattended update support (#11712)
* Simplify Alpine update scripts to run upgrade

Remove interactive whiptail menus, loops and newt dependency checks from ct/alpine-docker.sh, ct/alpine-zigbee2mqtt.sh, and ct/alpine.sh. Each update_script now simply calls header_info, runs $STD apk -U upgrade, displays a success message and exits, simplifying and automating the update flow.

* feat(update-scripts): replace whiptail with msg_menu for unattended updates

Remove all whiptail dialogs from ct update_script() functions and replace
with msg_menu() - a lightweight read-based menu that supports:
- PHS_SILENT=1: auto-selects first (default) option for unattended mode
- Interactive: numbered menu with 10s timeout and default fallback

Converted scripts (whiptail menu → msg_menu):
- plex.sh, npmplus.sh, cronicle.sh, meilisearch.sh, node-red.sh
- homeassistant.sh, podman-homeassistant.sh
- vaultwarden.sh, alpine-vaultwarden.sh
- loki.sh, alpine-loki.sh
- alpine-grafana.sh, alpine-redis.sh, alpine-valkey.sh
- alpine-nextcloud.sh

Simplified scripts (removed unnecessary whiptail for single-action updates):
- alpine.sh, alpine-docker.sh, alpine-zigbee2mqtt.sh

Special handling:
- gitea-mirror.sh: replaced yesno/msgbox with read -rp confirmations,
  exit 75 in silent mode for major version upgrades requiring interaction
- vaultwarden.sh/alpine-vaultwarden.sh: passwordbox replaced with
  read -r -s -p, skipped in silent mode with warning
- nginxproxymanager.sh: exit 1 → exit 75 for disabled script

Infrastructure:
- Added msg_menu() helper to misc/build.func
- Added exit code 75 handling in update-apps.sh (skip, not fail)

Closes #11620

* refactor(update-scripts): remove menus where sequential updates suffice

- alpine-nextcloud: add apk upgrade as the update action (was missing)
- meilisearch: run meilisearch + UI updates sequentially (like bar-assistant)
- npmplus: run alpine upgrade + docker pull sequentially, no menu
- vaultwarden: update VaultWarden + Web-Vault sequentially, remove admin
  token option (interactive-only, not suitable for unattended updates)
- alpine-vaultwarden: just run apk upgrade, remove admin token menu
2026-02-09 11:05:31 +01:00
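A minimal sketch of the `msg_menu()` helper described above, with its behavior inferred from the commit message; the real implementation in misc/build.func may differ in details:

```shell
# msg_menu TITLE OPTION... -> prints the chosen option on stdout.
# The first option is the default: PHS_SILENT=1 auto-selects it, and the
# interactive path falls back to it after a 10-second timeout.
msg_menu() {
  title="$1"; shift
  default="$1"
  if [ "${PHS_SILENT:-0}" = "1" ]; then
    echo "$default" # unattended mode: take the default without prompting
    return 0
  fi
  echo "$title" >&2
  i=1
  for opt in "$@"; do
    echo "  $i) $opt" >&2
    i=$((i + 1))
  done
  printf 'Select [1]: ' >&2
  if read -r -t 10 choice && [ -n "$choice" ]; then
    shift $((choice - 1))
    echo "$1"
  else
    echo "$default" # timeout or empty input -> default fallback
  fi
}
```

Because the menu prompt goes to stderr and only the selection to stdout, the helper stays safe inside command substitution, unlike whiptail.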
community-scripts-pr-app[bot]
c29dfa7a29 Update CHANGELOG.md (#11720)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 10:04:52 +00:00
CanbiZ (MickLesk)
1439d98ec4 fix(hwaccel): add libmfx-gen1.2 to Intel Arc setup for QSV support (#11707)
Add missing libmfx-gen1.2 package to _setup_intel_arc() for both
Ubuntu and Debian. Without this package, QSV hardware acceleration
fails with 'Error creating a MFX session: -9'.

The package was already present in _setup_intel_modern() (Gen9+)
but missing from the Arc GPU path.

Closes #11701
2026-02-09 11:04:22 +01:00
community-scripts-pr-app[bot]
44ad9370b6 Update CHANGELOG.md (#11719)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 09:53:04 +00:00
CanbiZ (MickLesk)
4e96bb664f Nginx-UI: better User Handling | ACME (#11715)
* fix(nginx-ui): remove admin user hack, use setup wizard instead

The previous install script started nginx-ui for 3 seconds, stopped it,
and ran reset-password to create an admin user. This caused:
- Race condition: the internal setup wizard could trigger during the brief
  start window, conflicting with the reset-password approach
- Admin users unable to login after the setup wizard fired
- Settings lockup due to overloaded app.ini with hardcoded nginx paths
  that conflict with UI-managed settings

Changes:
- Remove start/stop/reset-password hack from install script
- Simplify app.ini to match upstream defaults (minimal config)
- Let users complete the natural setup wizard on first visit
- Update JSON: remove default credentials, add setup wizard note
- Add setup wizard hint to CT script output

The setup wizard properly handles admin account creation and ACME email
configuration, which are both needed for full functionality.

Ref: ProxmoxVED#1408

* Update nginx-ui.sh
2026-02-09 10:52:33 +01:00
community-scripts-pr-app[bot]
57c9308326 Update CHANGELOG.md (#11717)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 09:34:08 +00:00
CanbiZ (MickLesk)
f07c221ca4 NginxProxymanager: use better-sqlite3 (#11708)
* fix(nginxproxymanager): update for better-sqlite3 and setup wizard

NPM switched to better-sqlite3 as the knex database client.
The old sqlite3 client causes 'Internal Error' on user creation.

- Update production.json in install/update to use better-sqlite3
- Add sed patch in update script to fix existing production.json
- Add useNullAsDefault: true to match upstream knex config
- Remove default credentials from JSON (NPM now uses setup wizard)
- Add note about setup wizard for first-time users

Ref: NginxProxyManager/nginx-proxy-manager@0b2fa82
Closes #11681

* Update nginxproxymanager.json

* Remove disabled script error messages

Removed error messages related to OpenResty APT repository issues.
2026-02-09 10:33:44 +01:00
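The in-place patch for existing installs described above amounts to a single sed substitution; a hypothetical equivalent (the production.json path and key layout are assumptions):

```shell
# Switch the knex client in an existing production.json from sqlite3 to
# better-sqlite3. The pattern only matches the old value, so re-running it
# on an already-patched file is a no-op (the real update also ensures
# useNullAsDefault: true to match upstream knex config).
patch_knex_client() { # $1 = path to production.json
  sed -i 's/"client": *"sqlite3"/"client": "better-sqlite3"/' "$1"
}
```

Usage would be something like `patch_knex_client /app/config/production.json` before restarting the service.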
community-scripts-pr-app[bot]
63097e3b64 Update CHANGELOG.md (#11714)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-09 09:03:51 +00:00
Slaviša Arežina
632a39f56e Add warning to website (#11711) 2026-02-09 10:03:25 +01:00
107 changed files with 2153 additions and 1471 deletions

View File

@@ -89,9 +89,15 @@ jobs:
   slug=$(jq -r '.slug // empty' "$json_file" 2>/dev/null)
   [[ -z "$slug" ]] && continue
-  # Find corresponding install script
-  install_script="install/${slug}-install.sh"
-  [[ ! -f "$install_script" ]] && continue
+  # Find corresponding script (install script or addon script)
+  install_script=""
+  if [[ -f "install/${slug}-install.sh" ]]; then
+    install_script="install/${slug}-install.sh"
+  elif [[ -f "tools/addon/${slug}.sh" ]]; then
+    install_script="tools/addon/${slug}.sh"
+  else
+    continue
+  fi
   # Look for fetch_and_deploy_gh_release calls
   # Pattern: fetch_and_deploy_gh_release "app" "owner/repo" ["mode"] ["version"]

View File

@@ -401,14 +401,129 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-12
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Deluge: add python3-setuptools as dep [@MickLesk](https://github.com/MickLesk) ([#11833](https://github.com/community-scripts/ProxmoxVE/pull/11833))
- Dispatcharr: migrate to uv sync [@MickLesk](https://github.com/MickLesk) ([#11831](https://github.com/community-scripts/ProxmoxVE/pull/11831))
- Pangolin: Update database generation command in install script [@tremor021](https://github.com/tremor021) ([#11825](https://github.com/community-scripts/ProxmoxVE/pull/11825))
- #### ✨ New Features
- Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection [@MickLesk](https://github.com/MickLesk) ([#11810](https://github.com/community-scripts/ProxmoxVE/pull/11810))
### 💾 Core
- #### ✨ New Features
- core: remove old Go API and extend misc/api.func with new backend [@MickLesk](https://github.com/MickLesk) ([#11822](https://github.com/community-scripts/ProxmoxVE/pull/11822))
## 2026-02-11
### 🆕 New Scripts
- Draw.io ([#11788](https://github.com/community-scripts/ProxmoxVE/pull/11788))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- dispatcharr: include port 9191 in success-message [@MickLesk](https://github.com/MickLesk) ([#11808](https://github.com/community-scripts/ProxmoxVE/pull/11808))
- fix: make donetick 0.1.71 compatible [@tomfrenzel](https://github.com/tomfrenzel) ([#11804](https://github.com/community-scripts/ProxmoxVE/pull/11804))
- Kasm: Support new version URL format without hash suffix [@MickLesk](https://github.com/MickLesk) ([#11787](https://github.com/community-scripts/ProxmoxVE/pull/11787))
- LibreTranslate: Remove Torch [@tremor021](https://github.com/tremor021) ([#11783](https://github.com/community-scripts/ProxmoxVE/pull/11783))
- Snowshare: fix update script [@TuroYT](https://github.com/TuroYT) ([#11726](https://github.com/community-scripts/ProxmoxVE/pull/11726))
- #### ✨ New Features
- [Feature] OpenCloud: support PosixFS Collaborative Mode [@vhsdream](https://github.com/vhsdream) ([#11806](https://github.com/community-scripts/ProxmoxVE/pull/11806))
### 💾 Core
- #### 🔧 Refactor
- core: respect EDITOR variable for config editing [@ls-root](https://github.com/ls-root) ([#11693](https://github.com/community-scripts/ProxmoxVE/pull/11693))
### 📚 Documentation
- Fix formatting in kutt.json notes section [@tiagodenoronha](https://github.com/tiagodenoronha) ([#11774](https://github.com/community-scripts/ProxmoxVE/pull/11774))
## 2026-02-10
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Immich: Pin version to 2.5.6 [@vhsdream](https://github.com/vhsdream) ([#11775](https://github.com/community-scripts/ProxmoxVE/pull/11775))
- Libretranslate: Fix setuptools [@tremor021](https://github.com/tremor021) ([#11772](https://github.com/community-scripts/ProxmoxVE/pull/11772))
- Element Synapse: prevent systemd invoke failure during apt install [@MickLesk](https://github.com/MickLesk) ([#11758](https://github.com/community-scripts/ProxmoxVE/pull/11758))
- #### ✨ New Features
- Refactor: Slskd & Soularr [@vhsdream](https://github.com/vhsdream) ([#11674](https://github.com/community-scripts/ProxmoxVE/pull/11674))
### 🗑️ Deleted Scripts
- move paperless-exporter from LXC to addon ([#11737](https://github.com/community-scripts/ProxmoxVE/pull/11737))
### 🧰 Tools
- #### 🐞 Bug Fixes
- feat: improve storage parsing & add guestname [@carlosmaroot](https://github.com/carlosmaroot) ([#11752](https://github.com/community-scripts/ProxmoxVE/pull/11752))
### 📂 Github
- Github-Version Workflow: include addon scripts in extraction [@MickLesk](https://github.com/MickLesk) ([#11757](https://github.com/community-scripts/ProxmoxVE/pull/11757))
### 🌐 Website
- #### 📝 Script Information
- Snowshare: fix typo in config file path on website [@BirdMakingStuff](https://github.com/BirdMakingStuff) ([#11754](https://github.com/community-scripts/ProxmoxVE/pull/11754))
## 2026-02-09
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- tracearr: prepare for next stable release [@durzo](https://github.com/durzo) ([#11673](https://github.com/community-scripts/ProxmoxVE/pull/11673))
- several scripts: add --clear to uv venv calls for uv 0.10 compatibility [@MickLesk](https://github.com/MickLesk) ([#11723](https://github.com/community-scripts/ProxmoxVE/pull/11723))
- Koillection: ensure setup_composer is in update script [@MickLesk](https://github.com/MickLesk) ([#11734](https://github.com/community-scripts/ProxmoxVE/pull/11734))
- PeaNUT: symlink server.js after update [@vhsdream](https://github.com/vhsdream) ([#11696](https://github.com/community-scripts/ProxmoxVE/pull/11696))
- Umlautadaptarr: use release appsettings.json instead of hardcoded copy [@MickLesk](https://github.com/MickLesk) ([#11725](https://github.com/community-scripts/ProxmoxVE/pull/11725))
- #### ✨ New Features
- remove whiptail from update scripts for unattended update support [@MickLesk](https://github.com/MickLesk) ([#11712](https://github.com/community-scripts/ProxmoxVE/pull/11712))
- #### 🔧 Refactor
- Refactor: FileFlows [@tremor021](https://github.com/tremor021) ([#11108](https://github.com/community-scripts/ProxmoxVE/pull/11108))
- Refactor: wger [@MickLesk](https://github.com/MickLesk) ([#11722](https://github.com/community-scripts/ProxmoxVE/pull/11722))
- Nginx-UI: better User Handling | ACME [@MickLesk](https://github.com/MickLesk) ([#11715](https://github.com/community-scripts/ProxmoxVE/pull/11715))
- NginxProxymanager: use better-sqlite3 [@MickLesk](https://github.com/MickLesk) ([#11708](https://github.com/community-scripts/ProxmoxVE/pull/11708))
### 💾 Core
- #### 🔧 Refactor
- hwaccel: add libmfx-gen1.2 to Intel Arc setup for QSV support [@MickLesk](https://github.com/MickLesk) ([#11707](https://github.com/community-scripts/ProxmoxVE/pull/11707))
### 🧰 Tools
- #### 🐞 Bug Fixes
- addons: ensure curl is installed before use [@MickLesk](https://github.com/MickLesk) ([#11718](https://github.com/community-scripts/ProxmoxVE/pull/11718))
- Netbird (addon): add systemd ordering to start after Docker [@MickLesk](https://github.com/MickLesk) ([#11716](https://github.com/community-scripts/ProxmoxVE/pull/11716))
### ❔ Uncategorized
- Bichon: Update website [@tremor021](https://github.com/tremor021) ([#11711](https://github.com/community-scripts/ProxmoxVE/pull/11711))
## 2026-02-08


@@ -1,5 +0,0 @@
MONGO_USER=
MONGO_PASSWORD=
MONGO_IP=
MONGO_PORT=
MONGO_DATABASE=


@@ -1,23 +0,0 @@
module proxmox-api
go 1.24.0
require (
github.com/gorilla/mux v1.8.1
github.com/joho/godotenv v1.5.1
github.com/rs/cors v1.11.1
go.mongodb.org/mongo-driver v1.17.2
)
require (
github.com/golang/snappy v0.0.4 // indirect
github.com/klauspost/compress v1.16.7 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/text v0.31.0 // indirect
)


@@ -1,56 +0,0 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
github.com/klauspost/compress v1.16.7/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.mongodb.org/mongo-driver v1.17.2 h1:gvZyk8352qSfzyZ2UMWcpDpMSGEr1eqE4T793SqyhzM=
go.mongodb.org/mongo-driver v1.17.2/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=


@@ -1,450 +0,0 @@
// Copyright (c) 2021-2026 community-scripts ORG
// Author: Michel Roegl-Brunner (michelroegl-brunner)
// License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net/http"
"os"
"strconv"
"time"
"github.com/gorilla/mux"
"github.com/joho/godotenv"
"github.com/rs/cors"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
var client *mongo.Client
var collection *mongo.Collection
func loadEnv() {
if err := godotenv.Load(); err != nil {
log.Fatal("Error loading .env file")
}
}
// DataModel represents a single document in MongoDB
type DataModel struct {
ID primitive.ObjectID `json:"id" bson:"_id,omitempty"`
CT_TYPE uint `json:"ct_type" bson:"ct_type"`
DISK_SIZE float32 `json:"disk_size" bson:"disk_size"`
CORE_COUNT uint `json:"core_count" bson:"core_count"`
RAM_SIZE uint `json:"ram_size" bson:"ram_size"`
OS_TYPE string `json:"os_type" bson:"os_type"`
OS_VERSION string `json:"os_version" bson:"os_version"`
DISABLEIP6 string `json:"disableip6" bson:"disableip6"`
NSAPP string `json:"nsapp" bson:"nsapp"`
METHOD string `json:"method" bson:"method"`
CreatedAt time.Time `json:"created_at" bson:"created_at"`
PVEVERSION string `json:"pve_version" bson:"pve_version"`
STATUS string `json:"status" bson:"status"`
RANDOM_ID string `json:"random_id" bson:"random_id"`
TYPE string `json:"type" bson:"type"`
ERROR string `json:"error" bson:"error"`
}
type StatusModel struct {
RANDOM_ID string `json:"random_id" bson:"random_id"`
ERROR string `json:"error" bson:"error"`
STATUS string `json:"status" bson:"status"`
}
type CountResponse struct {
TotalEntries int64 `json:"total_entries"`
StatusCount map[string]int64 `json:"status_count"`
NSAPPCount map[string]int64 `json:"nsapp_count"`
}
// ConnectDatabase initializes the MongoDB connection
func ConnectDatabase() {
loadEnv()
mongoURI := fmt.Sprintf("mongodb://%s:%s@%s:%s",
os.Getenv("MONGO_USER"),
os.Getenv("MONGO_PASSWORD"),
os.Getenv("MONGO_IP"),
os.Getenv("MONGO_PORT"))
database := os.Getenv("MONGO_DATABASE")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var err error
client, err = mongo.Connect(ctx, options.Client().ApplyURI(mongoURI))
if err != nil {
log.Fatal("Failed to connect to MongoDB!", err)
}
collection = client.Database(database).Collection("data_models")
fmt.Println("Connected to MongoDB on", os.Getenv("MONGO_IP"))
}
// UploadJSON handles API requests and stores data as a document in MongoDB
func UploadJSON(w http.ResponseWriter, r *http.Request) {
var input DataModel
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
input.CreatedAt = time.Now()
_, err := collection.InsertOne(context.Background(), input)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
log.Println("Received data:", input)
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(map[string]string{"message": "Data saved successfully"})
}
// UpdateStatus updates the status of a record based on RANDOM_ID
func UpdateStatus(w http.ResponseWriter, r *http.Request) {
var input StatusModel
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
filter := bson.M{"random_id": input.RANDOM_ID}
update := bson.M{"$set": bson.M{"status": input.STATUS, "error": input.ERROR}}
_, err := collection.UpdateOne(context.Background(), filter, update)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
log.Println("Updated data:", input)
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(map[string]string{"message": "Record updated successfully"})
}
// GetDataJSON fetches all data from MongoDB
func GetDataJSON(w http.ResponseWriter, r *http.Request) {
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetPaginatedData(w http.ResponseWriter, r *http.Request) {
page, _ := strconv.Atoi(r.URL.Query().Get("page"))
limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
if page < 1 {
page = 1
}
if limit < 1 {
limit = 10
}
skip := (page - 1) * limit
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
findOpts := options.Find().SetSkip(int64(skip)).SetLimit(int64(limit))
cursor, err := collection.Find(ctx, bson.M{}, findOpts)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetSummary(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
totalCount, err := collection.CountDocuments(ctx, bson.M{})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
statusCount := make(map[string]int64)
nsappCount := make(map[string]int64)
pipeline := []bson.M{
{"$group": bson.M{"_id": "$status", "count": bson.M{"$sum": 1}}},
}
cursor, err := collection.Aggregate(ctx, pipeline)
if err == nil {
for cursor.Next(ctx) {
var result struct {
ID string `bson:"_id"`
Count int64 `bson:"count"`
}
if err := cursor.Decode(&result); err == nil {
statusCount[result.ID] = result.Count
}
}
}
pipeline = []bson.M{
{"$group": bson.M{"_id": "$nsapp", "count": bson.M{"$sum": 1}}},
}
cursor, err = collection.Aggregate(ctx, pipeline)
if err == nil {
for cursor.Next(ctx) {
var result struct {
ID string `bson:"_id"`
Count int64 `bson:"count"`
}
if err := cursor.Decode(&result); err == nil {
nsappCount[result.ID] = result.Count
}
}
}
response := CountResponse{
TotalEntries: totalCount,
StatusCount: statusCount,
NSAPPCount: nsappCount,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
func GetByNsapp(w http.ResponseWriter, r *http.Request) {
nsapp := r.URL.Query().Get("nsapp")
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"nsapp": nsapp})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetByDateRange(w http.ResponseWriter, r *http.Request) {
startDate := r.URL.Query().Get("start_date")
endDate := r.URL.Query().Get("end_date")
if startDate == "" || endDate == "" {
http.Error(w, "Both start_date and end_date are required", http.StatusBadRequest)
return
}
start, err := time.Parse("2006-01-02T15:04:05.999999+00:00", startDate+"T00:00:00+00:00")
if err != nil {
http.Error(w, "Invalid start_date format", http.StatusBadRequest)
return
}
end, err := time.Parse("2006-01-02T15:04:05.999999+00:00", endDate+"T23:59:59+00:00")
if err != nil {
http.Error(w, "Invalid end_date format", http.StatusBadRequest)
return
}
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{
"created_at": bson.M{
"$gte": start,
"$lte": end,
},
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetByStatus(w http.ResponseWriter, r *http.Request) {
status := r.URL.Query().Get("status")
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"status": status})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetByOS(w http.ResponseWriter, r *http.Request) {
osType := r.URL.Query().Get("os_type")
osVersion := r.URL.Query().Get("os_version")
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"os_type": osType, "os_version": osVersion})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetErrors(w http.ResponseWriter, r *http.Request) {
errorCount := make(map[string]int)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"error": bson.M{"$ne": ""}})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if record.ERROR != "" {
errorCount[record.ERROR]++
}
}
type ErrorCountResponse struct {
Error string `json:"error"`
Count int `json:"count"`
}
var errorCounts []ErrorCountResponse
for errMsg, count := range errorCount {
errorCounts = append(errorCounts, ErrorCountResponse{
Error: errMsg,
Count: count,
})
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(struct {
ErrorCounts []ErrorCountResponse `json:"error_counts"`
}{
ErrorCounts: errorCounts,
})
}
func main() {
ConnectDatabase()
router := mux.NewRouter()
router.HandleFunc("/upload", UploadJSON).Methods("POST")
router.HandleFunc("/upload/updatestatus", UpdateStatus).Methods("POST")
router.HandleFunc("/data/json", GetDataJSON).Methods("GET")
router.HandleFunc("/data/paginated", GetPaginatedData).Methods("GET")
router.HandleFunc("/data/summary", GetSummary).Methods("GET")
router.HandleFunc("/data/nsapp", GetByNsapp).Methods("GET")
router.HandleFunc("/data/date", GetByDateRange).Methods("GET")
router.HandleFunc("/data/status", GetByStatus).Methods("GET")
router.HandleFunc("/data/os", GetByOS).Methods("GET")
router.HandleFunc("/data/errors", GetErrors).Methods("GET")
c := cors.New(cors.Options{
AllowedOrigins: []string{"*"},
AllowedMethods: []string{"GET", "POST"},
AllowedHeaders: []string{"Content-Type", "Authorization"},
AllowCredentials: true,
})
handler := c.Handler(router)
fmt.Println("Server running on port 8080")
log.Fatal(http.ListenAndServe(":8080", handler))
}


@@ -51,7 +51,7 @@ function update_script() {
cp -r /opt/adventurelog-backup/backend/server/media /opt/adventurelog/backend/server/media
cd /opt/adventurelog/backend/server
if [[ ! -x .venv/bin/python ]]; then
$STD uv venv .venv
$STD uv venv --clear .venv
$STD .venv/bin/python -m ensurepip --upgrade
fi
$STD .venv/bin/python -m pip install --upgrade pip


@@ -44,7 +44,7 @@ function update_script() {
msg_info "Updating Autocaliweb"
cd "$INSTALL_DIR"
if [[ ! -d "$VIRTUAL_ENV" ]]; then
$STD uv venv "$VIRTUAL_ENV"
$STD uv venv --clear "$VIRTUAL_ENV"
fi
$STD uv sync --all-extras --active
cd "$INSTALL_DIR"/koreader/plugins


@@ -40,7 +40,7 @@ function update_script() {
chmod 775 /opt/bazarr /var/lib/bazarr/
# Always ensure venv exists
if [[ ! -d /opt/bazarr/venv/ ]]; then
$STD uv venv /opt/bazarr/venv --python 3.12
$STD uv venv --clear /opt/bazarr/venv --python 3.12
fi
# Always check and fix service file if needed


@@ -28,6 +28,7 @@ function update_script() {
exit
fi
msg_info "Updating Deluge"
ensure_dependencies python3-setuptools
$STD apt update
$STD pip3 install deluge[all] --upgrade
msg_ok "Updated Deluge"


@@ -103,8 +103,8 @@ function update_script() {
cd /opt/dispatcharr
rm -rf .venv
$STD uv venv
$STD uv pip install -r requirements.txt --index-strategy unsafe-best-match
$STD uv venv --clear
$STD uv sync
$STD uv pip install gunicorn gevent celery redis daphne
msg_ok "Updated Dispatcharr Backend"
@@ -144,4 +144,4 @@ description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:9191${CL}"


@@ -35,13 +35,14 @@ function update_script() {
msg_ok "Stopped Service"
msg_info "Backing Up Configurations"
mv /opt/donetick/config/selfhosted.yml /opt/donetick/donetick.db /opt
mv /opt/donetick/config/selfhosted.yaml /opt/donetick/donetick.db /opt
msg_ok "Backed Up Configurations"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "donetick" "donetick/donetick" "prebuild" "latest" "/opt/donetick" "donetick_Linux_x86_64.tar.gz"
msg_info "Restoring Configurations"
mv /opt/selfhosted.yml /opt/donetick/config
mv /opt/selfhosted.yaml /opt/donetick/config
sed -i '/capacitor:\/\/localhost/d' /opt/donetick/config/selfhosted.yaml
mv /opt/donetick.db /opt/donetick
msg_ok "Restored Configurations"

ct/drawio.sh (new file, 58 lines)

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: Slaviša Arežina (tremor021)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.drawio.com/
APP="DrawIO"
var_tags="${var_tags:-diagrams}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-4}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /var/lib/tomcat11/webapps/draw.war ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "drawio" "jgraph/drawio"; then
msg_info "Stopping service"
systemctl stop tomcat11
msg_ok "Service stopped"
msg_info "Updating Debian LXC"
$STD apt update
$STD apt upgrade -y
msg_ok "Updated Debian LXC"
USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "drawio" "jgraph/drawio" "singlefile" "latest" "/var/lib/tomcat11/webapps" "draw.war"
msg_info "Starting service"
systemctl start tomcat11
msg_ok "Service started"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8080/draw${CL}"


@@ -61,7 +61,7 @@ function update_script() {
msg_info "Updating Backend"
cd /opt/endurain/backend
$STD poetry export -f requirements.txt --output requirements.txt --without-hashes
$STD uv venv
$STD uv venv --clear
$STD uv pip install -r requirements.txt
msg_ok "Backend Updated"


@@ -42,7 +42,7 @@ function update_script() {
rm -rf "$VENV_PATH"
mkdir -p /opt/esphome
cd /opt/esphome
$STD uv venv "$VENV_PATH"
$STD uv venv --clear "$VENV_PATH"
$STD "$VENV_PATH/bin/python" -m ensurepip --upgrade
$STD "$VENV_PATH/bin/python" -m pip install --upgrade pip
$STD "$VENV_PATH/bin/python" -m pip install esphome tornado esptool


@@ -37,17 +37,12 @@ function update_script() {
msg_info "Stopped Service"
msg_info "Creating Backup"
ls /opt/*.tar.gz &>/dev/null && rm -f /opt/*.tar.gz
backup_filename="/opt/${APP}_backup_$(date +%F).tar.gz"
tar -czf "$backup_filename" -C /opt/fileflows Data
msg_ok "Backup Created"
msg_info "Updating $APP to latest version"
temp_file=$(mktemp)
curl -fsSL https://fileflows.com/downloads/zip -o "$temp_file"
$STD unzip -o -d /opt/fileflows "$temp_file"
rm -rf "$temp_file"
rm -rf "$backup_filename"
msg_ok "Updated $APP to latest version"
fetch_and_deploy_from_url "https://fileflows.com/downloads/zip" "/opt/fileflows"
msg_info "Starting Service"
systemctl start fileflows

ct/headers/drawio (new file, 6 lines)

@@ -0,0 +1,6 @@
____ ________
/ __ \_________ __ __/ _/ __ \
/ / / / ___/ __ `/ | /| / // // / / /
/ /_/ / / / /_/ /| |/ |/ // // /_/ /
/_____/_/ \__,_/ |__/|__/___/\____/


@@ -1,6 +0,0 @@
____ __ __ ____ __ _ _________ __ ______ __
/ __ \_________ ____ ___ ___ / /_/ /_ ___ __ _______ / __ \____ _____ ___ _____/ /__ __________ / | / / ____/ |/ / / ____/ ______ ____ _____/ /____ _____
/ /_/ / ___/ __ \/ __ `__ \/ _ \/ __/ __ \/ _ \/ / / / ___/_____/ /_/ / __ `/ __ \/ _ \/ ___/ / _ \/ ___/ ___/_____/ |/ / / __ | /_____/ __/ | |/_/ __ \/ __ \/ ___/ __/ _ \/ ___/
/ ____/ / / /_/ / / / / / / __/ /_/ / / / __/ /_/ (__ )_____/ ____/ /_/ / /_/ / __/ / / / __(__ |__ )_____/ /| / /_/ // /_____/ /____> </ /_/ / /_/ / / / /_/ __/ /
/_/ /_/ \____/_/ /_/ /_/\___/\__/_/ /_/\___/\__,_/____/ /_/ \__,_/ .___/\___/_/ /_/\___/____/____/ /_/ |_/\____//_/|_| /_____/_/|_/ .___/\____/_/ \__/\___/_/
/_/ /_/


@@ -105,7 +105,7 @@ EOF
msg_ok "Image-processing libraries up to date"
fi
RELEASE="2.5.5"
RELEASE="2.5.6"
if check_for_gh_release "Immich" "immich-app/immich" "${RELEASE}"; then
if [[ $(cat ~/.immich) > "2.5.1" ]]; then
msg_info "Enabling Maintenance Mode"


@@ -34,7 +34,7 @@ function update_script() {
PYTHON_VERSION="3.12" setup_uv
mkdir -p "$INSTALL_DIR"
cd "$INSTALL_DIR"
$STD uv venv .venv
$STD uv venv --clear .venv
$STD "$VENV_PYTHON" -m ensurepip --upgrade
$STD "$VENV_PYTHON" -m pip install --upgrade pip
$STD "$VENV_PYTHON" -m pip install jupyter


@@ -34,10 +34,19 @@ function update_script() {
CURRENT_VERSION=$(readlink -f /opt/kasm/current | awk -F'/' '{print $4}')
KASM_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_[0-9]+\.[0-9]+\.[0-9]+\.[a-z0-9]+\.tar\.gz' | head -n 1)
if [[ -z "$KASM_URL" ]]; then
SERVICE_IMAGE_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_service_images_amd64_[0-9]+\.[0-9]+\.[0-9]+\.tar\.gz' | head -n 1)
if [[ -n "$SERVICE_IMAGE_URL" ]]; then
KASM_VERSION=$(echo "$SERVICE_IMAGE_URL" | sed -E 's/.*kasm_release_service_images_amd64_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
KASM_URL="https://kasm-static-content.s3.amazonaws.com/kasm_release_${KASM_VERSION}.tar.gz"
fi
else
KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
fi
if [[ -z "$KASM_URL" ]] || [[ -z "$KASM_VERSION" ]]; then
msg_error "Unable to detect latest Kasm release URL."
exit 1
fi
KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
msg_info "Checked for new version"
msg_info "Removing outdated docker-compose plugin"
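The Kasm hunk above adds a fallback: when the main release URL is not found, it extracts the version from the service-images URL instead and rebuilds the release URL from it. A rough Go sketch of that extraction, equivalent to the two `sed -E` calls (the URLs and version below are hypothetical, chosen only to exercise the pattern):

```go
package main

import (
	"fmt"
	"regexp"
)

// kasmVersion mirrors the sed -E extraction in the script: pull the
// x.y.z version out of either flavor of kasm_release_* download URL.
// Returns "" when the URL does not match.
func kasmVersion(url string) string {
	re := regexp.MustCompile(`kasm_release_(?:service_images_amd64_)?([0-9]+\.[0-9]+\.[0-9]+)`)
	m := re.FindStringSubmatch(url)
	if m == nil {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(kasmVersion("https://kasm-static-content.s3.amazonaws.com/kasm_release_1.16.1.abc123.tar.gz"))
	fmt.Println(kasmVersion("https://kasm-static-content.s3.amazonaws.com/kasm_release_service_images_amd64_1.16.1.tar.gz"))

	// Rebuilding the main release URL from the detected version,
	// as the script's fallback branch does:
	v := "1.16.1"
	fmt.Println("https://kasm-static-content.s3.amazonaws.com/kasm_release_" + v + ".tar.gz")
}
```

Either way the script lands on both a URL and a version, which is what the added guard (`[[ -z "$KASM_URL" ]] || [[ -z "$KASM_VERSION" ]]`) verifies before proceeding.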


@@ -33,7 +33,8 @@ function update_script() {
msg_ok "Stopped Service"
PHP_VERSION="8.5" PHP_APACHE="YES" setup_php
setup_composer
msg_info "Creating a backup"
mv /opt/koillection/ /opt/koillection-backup
msg_ok "Backup created"


@@ -28,12 +28,6 @@ function update_script() {
exit
fi
msg_error "This script is currently disabled due to an external issue with the OpenResty APT repository."
msg_error "The repository's GPG key uses SHA-1 signatures, which are no longer accepted by Debian as of February 1, 2026."
msg_error "The issue is tracked in openresty/openresty#1097"
msg_error "For more details, see: https://github.com/community-scripts/ProxmoxVE/issues/11406"
exit 1
if [[ $(grep -E '^VERSION_ID=' /etc/os-release) == *"12"* ]]; then
msg_error "Wrong Debian version detected!"
msg_error "Please create a snapshot first. You must upgrade your LXC to Debian Trixie before updating. Visit: https://github.com/community-scripts/ProxmoxVE/discussions/7489"
@@ -145,15 +139,17 @@ function update_script() {
"database": {
"engine": "knex-native",
"knex": {
"client": "sqlite3",
"client": "better-sqlite3",
"connection": {
"filename": "/data/database.sqlite"
}
},
"useNullAsDefault": true
}
}
}
EOF
fi
sed -i 's/"client": "sqlite3"/"client": "better-sqlite3"/' /app/config/production.json
cd /app
$STD yarn install --network-timeout 600000
msg_ok "Initialized Backend"


@@ -30,7 +30,7 @@ function update_script() {
fi
RELEASE="v5.0.2"
if check_for_gh_release "opencloud" "opencloud-eu/opencloud" "${RELEASE}"; then
if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
msg_info "Stopping services"
systemctl stop opencloud opencloud-wopi
msg_ok "Stopped services"
@@ -38,9 +38,21 @@ function update_script() {
msg_info "Updating packages"
$STD apt-get update
$STD apt-get dist-upgrade -y
ensure_dependencies "inotify-tools"
msg_ok "Updated packages"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "${RELEASE}" "/usr/bin" "opencloud-*-linux-amd64"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "OpenCloud" "opencloud-eu/opencloud" "singlefile" "${RELEASE}" "/usr/bin" "opencloud-*-linux-amd64"
if ! grep -q 'POSIX_WATCH' /etc/opencloud/opencloud.env; then
sed -i '/^## External/i ## Uncomment below to enable PosixFS Collaborative Mode\
## Increase inotify watch/instance limits on your PVE host:\
### sysctl -w fs.inotify.max_user_watches=1048576\
### sysctl -w fs.inotify.max_user_instances=1024\
# STORAGE_USERS_POSIX_ENABLE_COLLABORATION=true\
# STORAGE_USERS_POSIX_WATCH_TYPE=inotifywait\
# STORAGE_USERS_POSIX_WATCH_FS=true\
# STORAGE_USERS_POSIX_WATCH_PATH=<path-to-storage-or-bind-mount>' /etc/opencloud/opencloud.env
fi
msg_info "Starting services"
systemctl start opencloud opencloud-wopi


@@ -1,52 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: Andy Grunwald (andygrunwald)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/hansmi/prometheus-paperless-exporter
APP="Prometheus-Paperless-NGX-Exporter"
var_tags="${var_tags:-monitoring;alerting}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-256}"
var_disk="${var_disk:-2}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -f /etc/systemd/system/prometheus-paperless-ngx-exporter.service ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "prom-paperless-exp" "hansmi/prometheus-paperless-exporter"; then
msg_info "Stopping Service"
systemctl stop prometheus-paperless-ngx-exporter
msg_ok "Stopped Service"
fetch_and_deploy_gh_release "prom-paperless-exp" "hansmi/prometheus-paperless-exporter" "binary"
msg_info "Starting Service"
systemctl start prometheus-paperless-ngx-exporter
msg_ok "Started Service"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8081/metrics${CL}"


@@ -41,7 +41,7 @@ function update_script() {
rm -rf "$PVE_VENV_PATH"
mkdir -p /opt/prometheus-pve-exporter
cd /opt/prometheus-pve-exporter
$STD uv venv "$PVE_VENV_PATH"
$STD uv venv --clear "$PVE_VENV_PATH"
$STD "$PVE_VENV_PATH/bin/python" -m ensurepip --upgrade
$STD "$PVE_VENV_PATH/bin/python" -m pip install --upgrade pip
$STD "$PVE_VENV_PATH/bin/python" -m pip install prometheus-pve-exporter


@@ -41,7 +41,7 @@ function update_script() {
# Always ensure venv exists
if [[ ! -d /opt/sabnzbd/venv ]]; then
msg_info "Migrating SABnzbd to uv virtual environment"
$STD uv venv /opt/sabnzbd/venv
$STD uv venv --clear /opt/sabnzbd/venv
msg_ok "Created uv venv at /opt/sabnzbd/venv"
fi


@@ -38,7 +38,7 @@ function update_script() {
msg_info "Updating Scraparr"
cd /opt/scraparr
$STD uv venv /opt/scraparr/.venv
$STD uv venv --clear /opt/scraparr/.venv
$STD /opt/scraparr/.venv/bin/python -m ensurepip --upgrade
$STD /opt/scraparr/.venv/bin/python -m pip install --upgrade pip
$STD /opt/scraparr/.venv/bin/python -m pip install -r /opt/scraparr/src/scraparr/requirements.txt


@@ -3,7 +3,7 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
# Copyright (c) 2021-2026 community-scripts ORG
# Author: vhsdream
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/slskd/slskd, https://soularr.net
# Source: https://github.com/slskd/slskd, https://github.com/mrusse/soularr
APP="slskd"
var_tags="${var_tags:-arr;p2p}"
@@ -24,50 +24,65 @@ function update_script() {
check_container_storage
check_container_resources
if [[ ! -d /opt/slskd ]] || [[ ! -d /opt/soularr ]]; then
msg_error "No ${APP} Installation Found!"
if [[ ! -d /opt/slskd ]]; then
msg_error "No Slskd Installation Found!"
exit
fi
RELEASE=$(curl -s https://api.github.com/repos/slskd/slskd/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3) }')
if [[ "${RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]] || [[ ! -f /opt/${APP}_version.txt ]]; then
msg_info "Stopping Service"
systemctl stop slskd soularr.timer soularr.service
msg_info "Stopped Service"
if check_for_gh_release "Slskd" "slskd/slskd"; then
msg_info "Stopping Service(s)"
systemctl stop slskd
[[ -f /etc/systemd/system/soularr.service ]] && systemctl stop soularr.timer soularr.service
msg_ok "Stopped Service(s)"
msg_info "Updating $APP to v${RELEASE}"
tmp_file=$(mktemp)
curl -fsSL "https://github.com/slskd/slskd/releases/download/${RELEASE}/slskd-${RELEASE}-linux-x64.zip" -o $tmp_file
$STD unzip -oj $tmp_file slskd -d /opt/${APP}
echo "${RELEASE}" >/opt/${APP}_version.txt
msg_ok "Updated $APP to v${RELEASE}"
msg_info "Backing up config"
cp /opt/slskd/config/slskd.yml /opt/slskd.yml.bak
msg_ok "Backed up config"
msg_info "Starting Service"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Slskd" "slskd/slskd" "prebuild" "latest" "/opt/slskd" "slskd-*-linux-x64.zip"
msg_info "Restoring config"
mv /opt/slskd.yml.bak /opt/slskd/config/slskd.yml
msg_ok "Restored config"
msg_info "Starting Service(s)"
systemctl start slskd
msg_ok "Started Service"
rm -rf $tmp_file
else
msg_ok "No ${APP} update required. ${APP} is already at v${RELEASE}"
[[ -f /etc/systemd/system/soularr.service ]] && systemctl start soularr.timer
msg_ok "Started Service(s)"
msg_ok "Updated Slskd successfully!"
fi
msg_info "Updating Soularr"
cp /opt/soularr/config.ini /opt/config.ini.bak
cp /opt/soularr/run.sh /opt/run.sh.bak
cd /tmp
rm -rf /opt/soularr
curl -fsSL -o main.zip https://github.com/mrusse/soularr/archive/refs/heads/main.zip
$STD unzip main.zip
mv soularr-main /opt/soularr
cd /opt/soularr
$STD pip install -r requirements.txt
mv /opt/config.ini.bak /opt/soularr/config.ini
mv /opt/run.sh.bak /opt/soularr/run.sh
rm -rf /tmp/main.zip
msg_ok "Updated soularr"
[[ -d /opt/soularr ]] && if check_for_gh_release "Soularr" "mrusse/soularr"; then
if systemctl is-active soularr.timer >/dev/null; then
msg_info "Stopping Timer and Service"
systemctl stop soularr.timer soularr.service
msg_ok "Stopped Timer and Service"
fi
msg_info "Starting soularr timer"
systemctl start soularr.timer
msg_ok "Started soularr timer"
exit
msg_info "Backing up Soularr config"
cp /opt/soularr/config.ini /opt/soularr_config.ini.bak
cp /opt/soularr/run.sh /opt/soularr_run.sh.bak
msg_ok "Backed up Soularr config"
PYTHON_VERSION="3.11" setup_uv
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Soularr" "mrusse/soularr" "tarball" "latest" "/opt/soularr"
msg_info "Updating Soularr"
cd /opt/soularr
$STD uv venv -c venv
$STD source venv/bin/activate
$STD uv pip install -r requirements.txt
deactivate
msg_ok "Updated Soularr"
msg_info "Restoring Soularr config"
mv /opt/soularr_config.ini.bak /opt/soularr/config.ini
mv /opt/soularr_run.sh.bak /opt/soularr/run.sh
msg_ok "Restored Soularr config"
msg_info "Starting Soularr Timer"
systemctl restart soularr.timer
msg_ok "Started Soularr Timer"
msg_ok "Updated Soularr successfully!"
fi
}
start

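The removed slskd lines show the old release check: scrape GitHub's releases API with grep/awk and compare against `/opt/${APP}_version.txt`. `check_for_gh_release` replaces that, but the awk idiom is worth seeing offline — it strips the leading quote and the trailing `",` from the JSON value (sample response is hard-coded below; no network call):

```shell
# Fixed stand-in for `curl -s https://api.github.com/repos/.../releases/latest`
api_response='{
  "tag_name": "v0.24.3",
  "prerelease": false
}'
# substr($2, 2, length($2)-3): drop the opening quote, the closing quote and the comma
RELEASE=$(printf '%s\n' "$api_response" | grep '"tag_name"' | awk '{print substr($2, 2, length($2)-3)}')
echo "$RELEASE"   # → v0.24.3
```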

@@ -33,7 +33,15 @@ function update_script() {
systemctl stop snowshare
msg_ok "Stopped Service"
fetch_and_deploy_gh_release "snowshare" "TuroYT/snowshare" "tarball"
msg_info "Backing up uploads"
[ -d /opt/snowshare/uploads ] && cp -a /opt/snowshare/uploads /opt/.snowshare_uploads_backup
msg_ok "Uploads backed up"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "snowshare" "TuroYT/snowshare" "tarball"
msg_info "Restoring uploads"
[ -d /opt/.snowshare_uploads_backup ] && rm -rf /opt/snowshare/uploads && cp -a /opt/.snowshare_uploads_backup /opt/snowshare/uploads
msg_ok "Uploads restored"
msg_info "Updating Snowshare"
cd /opt/snowshare

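The SnowShare hunk wraps `CLEAN_INSTALL=1 fetch_and_deploy_gh_release` with a backup/restore of the uploads directory, since a clean redeploy wipes `/opt/snowshare`. The same pattern, exercised on scratch paths (all paths here are illustrative):

```shell
app=$(mktemp -d)/snowshare
mkdir -p "$app/uploads"
echo "user-data" > "$app/uploads/file.txt"

# back up before the clean redeploy wipes the app directory
backup="${app%/*}/.snowshare_uploads_backup"
[ -d "$app/uploads" ] && cp -a "$app/uploads" "$backup"

rm -rf "$app" && mkdir -p "$app"   # simulate CLEAN_INSTALL replacing /opt/snowshare

# restore afterwards
[ -d "$backup" ] && rm -rf "$app/uploads" && cp -a "$backup" "$app/uploads"
cat "$app/uploads/file.txt"        # user-data survived the redeploy
```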

@@ -39,7 +39,7 @@ function update_script() {
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "streamlink-webui" "CrazyWolf13/streamlink-webui" "tarball"
msg_info "Updating streamlink-webui"
$STD uv venv /opt/streamlink-webui/backend/src/.venv
$STD uv venv --clear /opt/streamlink-webui/backend/src/.venv
source /opt/streamlink-webui/backend/src/.venv/bin/activate
$STD uv pip install -r /opt/streamlink-webui/backend/src/requirements.txt --python=/opt/streamlink-webui/backend/src/.venv
cd /opt/streamlink-webui/frontend/src


@@ -50,7 +50,7 @@ function update_script() {
cp -r /opt/tandoor.bak/{config,api,mediafiles,staticfiles} /opt/tandoor/
mv /opt/tandoor.bak/.env /opt/tandoor/.env
cd /opt/tandoor
$STD uv venv .venv --python=python3
$STD uv venv --clear .venv --python=python3
$STD uv pip install -r requirements.txt --python .venv/bin/python
cd /opt/tandoor/vue3
$STD yarn install


@@ -33,7 +33,9 @@ function update_script() {
systemctl stop umlautadaptarr
msg_ok "Stopped Service"
cp /opt/UmlautAdaptarr/appsettings.json /opt/UmlautAdaptarr/appsettings.json.bak
fetch_and_deploy_gh_release "UmlautAdaptarr" "PCJones/Umlautadaptarr" "prebuild" "latest" "/opt/UmlautAdaptarr" "linux-x64.zip"
cp /opt/UmlautAdaptarr/appsettings.json.bak /opt/UmlautAdaptarr/appsettings.json
msg_info "Starting Service"
systemctl start umlautadaptarr


@@ -39,7 +39,7 @@ function update_script() {
msg_info "Updating Warracker"
cd /opt/warracker/backend
$STD uv venv .venv
$STD uv venv --clear .venv
$STD source .venv/bin/activate
$STD uv pip install -r requirements.txt
msg_ok "Updated Warracker"


@@ -7,9 +7,9 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
APP="wger"
var_tags="${var_tags:-management;fitness}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-1024}"
var_disk="${var_disk:-6}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
@@ -23,38 +23,44 @@ function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /home/wger ]]; then
if [[ ! -d /opt/wger ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
RELEASE=$(curl -fsSL https://api.github.com/repos/wger-project/wger/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3)}')
if [[ "${RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]] || [[ ! -f /opt/${APP}_version.txt ]]; then
if check_for_gh_release "wger" "wger-project/wger"; then
msg_info "Stopping Service"
systemctl stop wger
systemctl stop redis-server nginx celery celery-beat wger
msg_ok "Stopped Service"
msg_info "Updating $APP to v${RELEASE}"
temp_file=$(mktemp)
curl -fsSL "https://github.com/wger-project/wger/archive/refs/tags/$RELEASE.tar.gz" -o "$temp_file"
tar xzf "$temp_file"
cp -rf wger-"$RELEASE"/* /home/wger/src
cd /home/wger/src
$STD pip install -r requirements_prod.txt --ignore-installed
$STD pip install -e .
$STD python3 manage.py migrate
$STD python3 manage.py collectstatic --no-input
$STD yarn install
$STD yarn build:css:sass
rm -rf "$temp_file"
echo "${RELEASE}" >/opt/${APP}_version.txt
msg_ok "Updated $APP to v${RELEASE}"
msg_info "Backing up Data"
cp -r /opt/wger/media /opt/wger_media_backup
cp /opt/wger/.env /opt/wger_env_backup
msg_ok "Backed up Data"
msg_info "Starting Service"
systemctl start wger
msg_ok "Started Service"
msg_ok "Updated successfully!"
else
msg_ok "No update required. ${APP} is already at v${RELEASE}"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "wger" "wger-project/wger" "tarball"
msg_info "Restoring Data"
cp -r /opt/wger_media_backup/. /opt/wger/media
cp /opt/wger_env_backup /opt/wger/.env
rm -rf /opt/wger_media_backup /opt/wger_env_backup
msg_ok "Restored Data"
msg_info "Updating wger"
cd /opt/wger
set -a && source /opt/wger/.env && set +a
export DJANGO_SETTINGS_MODULE=settings.main
$STD uv pip install .
$STD uv run python manage.py migrate
$STD uv run python manage.py collectstatic --no-input
msg_ok "Updated wger"
msg_info "Starting Services"
systemctl start redis-server nginx celery celery-beat wger
msg_ok "Started Services"
msg_ok "Updated Successfully"
fi
exit
}
@@ -63,7 +69,7 @@ start
build_container
description
msg_ok "Completed successfully!\n"
msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"

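The new wger update path loads `/opt/wger/.env` with `set -a && source /opt/wger/.env && set +a`. `set -a` marks every variable assigned while it is active for export, which is what lets the subsequent `uv run python manage.py ...` calls see the settings. A minimal demo with a throwaway env file (variable names are illustrative):

```shell
envfile=$(mktemp)
printf 'DJANGO_SETTINGS_MODULE=settings.main\nSITE_URL=http://localhost\n' > "$envfile"
set -a              # auto-export every variable assigned from here on
. "$envfile"
set +a              # back to normal assignment semantics
printenv SITE_URL   # → http://localhost (visible to child processes)
```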

@@ -35,6 +35,10 @@
{
"text": "The Disk space initially allocated by the script is only a placeholder, as we can't know how much space you will ever need. You should increase it to match your workload.",
"type": "info"
},
{
"text": "Please copy your `BICHON_ENCRYPT_PASSWORD` from `/opt/bichon/bichon.env` to a safe place.",
"type": "warning"
}
]
}


@@ -0,0 +1,35 @@
{
"name": "Draw.IO",
"slug": "drawio",
"categories": [
12
],
"date_created": "2026-02-11",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 8080,
"documentation": "https://www.drawio.com/doc/",
"website": "https://www.drawio.com/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/draw-io.webp",
"config_path": "",
"description": "draw.io is a configurable diagramming and whiteboarding application, jointly owned and developed by draw.io Ltd (previously named JGraph) and draw.io AG.",
"install_methods": [
{
"type": "default",
"script": "ct/drawio.sh",
"resources": {
"cpu": 1,
"ram": 2048,
"hdd": 4,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}


@@ -1,5 +1,5 @@
{
"generated": "2026-02-09T06:27:10Z",
"generated": "2026-02-12T12:15:15Z",
"versions": [
{
"slug": "2fauth",
@@ -15,6 +15,13 @@
"pinned": false,
"date": "2025-12-08T14:34:55Z"
},
{
"slug": "adguardhome-sync",
"repo": "bakito/adguardhome-sync",
"version": "v0.8.2",
"pinned": false,
"date": "2025-10-24T17:13:47Z"
},
{
"slug": "adventurelog",
"repo": "seanmorley15/adventurelog",
@@ -109,9 +116,9 @@
{
"slug": "bentopdf",
"repo": "alam00000/bentopdf",
"version": "v2.1.0",
"version": "v2.2.0",
"pinned": false,
"date": "2026-02-02T14:30:55Z"
"date": "2026-02-09T07:07:40Z"
},
{
"slug": "beszel",
@@ -200,16 +207,16 @@
{
"slug": "comfyui",
"repo": "comfyanonymous/ComfyUI",
"version": "v0.12.3",
"version": "v0.13.0",
"pinned": false,
"date": "2026-02-05T07:04:07Z"
"date": "2026-02-10T20:27:38Z"
},
{
"slug": "commafeed",
"repo": "Athou/commafeed",
"version": "6.1.1",
"version": "6.2.0",
"pinned": false,
"date": "2026-01-26T15:14:16Z"
"date": "2026-02-09T19:44:58Z"
},
{
"slug": "configarr",
@@ -235,16 +242,16 @@
{
"slug": "cronicle",
"repo": "jhuckaby/Cronicle",
"version": "v0.9.105",
"version": "v0.9.106",
"pinned": false,
"date": "2026-02-05T18:16:11Z"
"date": "2026-02-11T17:11:46Z"
},
{
"slug": "cryptpad",
"repo": "cryptpad/cryptpad",
"version": "2025.9.0",
"version": "2026.2.0",
"pinned": false,
"date": "2025-10-22T10:06:29Z"
"date": "2026-02-11T15:39:05Z"
},
{
"slug": "dawarich",
@@ -256,44 +263,51 @@
{
"slug": "discopanel",
"repo": "nickheyer/discopanel",
"version": "v1.0.35",
"version": "v1.0.36",
"pinned": false,
"date": "2026-02-02T05:20:12Z"
"date": "2026-02-09T21:15:44Z"
},
{
"slug": "dispatcharr",
"repo": "Dispatcharr/Dispatcharr",
"version": "v0.18.1",
"version": "v0.19.0",
"pinned": false,
"date": "2026-01-27T17:09:11Z"
"date": "2026-02-10T21:18:10Z"
},
{
"slug": "docmost",
"repo": "docmost/docmost",
"version": "v0.25.2",
"version": "v0.25.3",
"pinned": false,
"date": "2026-02-06T19:50:55Z"
"date": "2026-02-10T02:58:23Z"
},
{
"slug": "domain-locker",
"repo": "Lissy93/domain-locker",
"version": "v0.1.2",
"version": "v0.1.3",
"pinned": false,
"date": "2025-11-14T22:08:23Z"
"date": "2026-02-11T10:03:32Z"
},
{
"slug": "domain-monitor",
"repo": "Hosteroid/domain-monitor",
"version": "v1.1.1",
"version": "v1.1.3",
"pinned": false,
"date": "2025-11-18T11:32:30Z"
"date": "2026-02-11T15:48:18Z"
},
{
"slug": "donetick",
"repo": "donetick/donetick",
"version": "v0.1.64",
"version": "v0.1.71",
"pinned": false,
"date": "2025-10-03T05:18:24Z"
"date": "2026-02-11T06:01:13Z"
},
{
"slug": "drawio",
"repo": "jgraph/drawio",
"version": "v29.3.6",
"pinned": false,
"date": "2026-01-28T18:25:02Z"
},
{
"slug": "duplicati",
@@ -319,9 +333,9 @@
{
"slug": "endurain",
"repo": "endurain-project/endurain",
"version": "v0.17.3",
"version": "v0.17.4",
"pinned": false,
"date": "2026-01-23T22:02:05Z"
"date": "2026-02-11T04:54:22Z"
},
{
"slug": "ersatztv",
@@ -529,9 +543,16 @@
{
"slug": "huntarr",
"repo": "plexguide/Huntarr.io",
"version": "9.2.3",
"version": "9.2.4",
"pinned": false,
"date": "2026-02-07T04:44:20Z"
"date": "2026-02-12T08:31:23Z"
},
{
"slug": "immich-public-proxy",
"repo": "alangrainger/immich-public-proxy",
"version": "v1.15.1",
"pinned": false,
"date": "2026-01-26T08:04:27Z"
},
{
"slug": "inspircd",
@@ -550,16 +571,23 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.55",
"version": "v5.12.57",
"pinned": false,
"date": "2026-02-05T01:06:15Z"
"date": "2026-02-11T23:08:56Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1074",
"version": "v0.24.1098",
"pinned": false,
"date": "2026-02-09T06:01:19Z"
"date": "2026-02-12T05:56:25Z"
},
{
"slug": "jellystat",
"repo": "CyferShepard/Jellystat",
"version": "V1.1.8",
"pinned": false,
"date": "2026-02-08T08:15:00Z"
},
{
"slug": "joplin-server",
@@ -571,9 +599,9 @@
{
"slug": "jotty",
"repo": "fccview/jotty",
"version": "1.19.1",
"version": "1.20.0",
"pinned": false,
"date": "2026-01-26T21:30:39Z"
"date": "2026-02-12T09:23:30Z"
},
{
"slug": "kapowarr",
@@ -599,9 +627,9 @@
{
"slug": "keycloak",
"repo": "keycloak/keycloak",
"version": "26.5.2",
"version": "26.5.3",
"pinned": false,
"date": "2026-01-23T14:26:58Z"
"date": "2026-02-10T07:30:08Z"
},
{
"slug": "kimai",
@@ -634,9 +662,9 @@
{
"slug": "kometa",
"repo": "Kometa-Team/Kometa",
"version": "v2.2.2",
"version": "v2.3.0",
"pinned": false,
"date": "2025-10-06T21:31:07Z"
"date": "2026-02-09T21:26:56Z"
},
{
"slug": "komga",
@@ -683,9 +711,9 @@
{
"slug": "libretranslate",
"repo": "LibreTranslate/LibreTranslate",
"version": "v1.8.4",
"version": "v1.9.0",
"pinned": false,
"date": "2026-02-02T17:45:16Z"
"date": "2026-02-10T19:05:48Z"
},
{
"slug": "lidarr",
@@ -718,9 +746,9 @@
{
"slug": "lubelogger",
"repo": "hargata/lubelog",
"version": "v1.5.8",
"version": "v1.6.0",
"pinned": false,
"date": "2026-01-26T18:18:03Z"
"date": "2026-02-10T20:16:32Z"
},
{
"slug": "mafl",
@@ -739,9 +767,9 @@
{
"slug": "mail-archiver",
"repo": "s1t5/mail-archiver",
"version": "2601.3",
"version": "2602.1",
"pinned": false,
"date": "2026-01-25T12:52:24Z"
"date": "2026-02-11T06:23:11Z"
},
{
"slug": "managemydamnlife",
@@ -753,9 +781,9 @@
{
"slug": "manyfold",
"repo": "manyfold3d/manyfold",
"version": "v0.132.0",
"version": "v0.132.1",
"pinned": false,
"date": "2026-01-29T13:53:21Z"
"date": "2026-02-09T22:02:28Z"
},
{
"slug": "mealie",
@@ -767,9 +795,9 @@
{
"slug": "mediamanager",
"repo": "maxdorninger/MediaManager",
"version": "v1.12.2",
"version": "v1.12.3",
"pinned": false,
"date": "2026-02-08T19:18:29Z"
"date": "2026-02-11T16:45:40Z"
},
{
"slug": "mediamtx",
@@ -816,9 +844,9 @@
{
"slug": "myip",
"repo": "jason5ng32/MyIP",
"version": "v5.2.0",
"version": "v5.2.1",
"pinned": false,
"date": "2026-01-05T05:56:57Z"
"date": "2026-02-10T07:38:47Z"
},
{
"slug": "mylar3",
@@ -837,9 +865,9 @@
{
"slug": "navidrome",
"repo": "navidrome/navidrome",
"version": "v0.60.2",
"version": "v0.60.3",
"pinned": false,
"date": "2026-02-07T19:42:33Z"
"date": "2026-02-10T23:55:04Z"
},
{
"slug": "netbox",
@@ -848,6 +876,13 @@
"pinned": false,
"date": "2026-02-03T13:54:26Z"
},
{
"slug": "nextcloud-exporter",
"repo": "xperimental/nextcloud-exporter",
"version": "v0.9.0",
"pinned": false,
"date": "2025-10-12T20:03:10Z"
},
{
"slug": "nginx-ui",
"repo": "0xJacky/nginx-ui",
@@ -963,9 +998,9 @@
{
"slug": "pangolin",
"repo": "fosrl/pangolin",
"version": "1.15.2",
"version": "1.15.3",
"pinned": false,
"date": "2026-02-05T19:23:58Z"
"date": "2026-02-12T06:10:19Z"
},
{
"slug": "paperless-ai",
@@ -1012,16 +1047,16 @@
{
"slug": "pelican-panel",
"repo": "pelican-dev/panel",
"version": "v1.0.0-beta31",
"version": "v1.0.0-beta32",
"pinned": false,
"date": "2026-01-18T22:43:24Z"
"date": "2026-02-09T22:15:44Z"
},
{
"slug": "pelican-wings",
"repo": "pelican-dev/wings",
"version": "v1.0.0-beta22",
"version": "v1.0.0-beta23",
"pinned": false,
"date": "2026-01-18T22:38:36Z"
"date": "2026-02-09T22:10:26Z"
},
{
"slug": "pf2etools",
@@ -1037,12 +1072,19 @@
"pinned": false,
"date": "2025-12-01T05:07:31Z"
},
{
"slug": "pihole-exporter",
"repo": "eko/pihole-exporter",
"version": "v1.2.0",
"pinned": false,
"date": "2025-07-29T19:15:37Z"
},
{
"slug": "planka",
"repo": "plankanban/planka",
"version": "v2.0.0-rc.4",
"version": "v2.0.0",
"pinned": false,
"date": "2025-09-04T12:41:17Z"
"date": "2026-02-11T13:50:10Z"
},
{
"slug": "plant-it",
@@ -1089,9 +1131,9 @@
{
"slug": "prometheus-alertmanager",
"repo": "prometheus/alertmanager",
"version": "v0.31.0",
"version": "v0.31.1",
"pinned": false,
"date": "2026-02-02T13:34:15Z"
"date": "2026-02-11T21:28:26Z"
},
{
"slug": "prometheus-blackbox-exporter",
@@ -1131,9 +1173,9 @@
{
"slug": "pulse",
"repo": "rcourtman/Pulse",
"version": "v5.1.5",
"version": "v5.1.9",
"pinned": false,
"date": "2026-02-08T12:19:53Z"
"date": "2026-02-11T15:34:40Z"
},
{
"slug": "pve-scripts-local",
@@ -1149,6 +1191,13 @@
"pinned": false,
"date": "2025-11-19T23:54:34Z"
},
{
"slug": "qbittorrent-exporter",
"repo": "martabal/qbittorrent-exporter",
"version": "v1.13.2",
"pinned": false,
"date": "2025-12-13T22:59:03Z"
},
{
"slug": "qdrant",
"repo": "qdrant/qdrant",
@@ -1180,9 +1229,9 @@
{
"slug": "rdtclient",
"repo": "rogerfar/rdt-client",
"version": "v2.0.119",
"version": "v2.0.120",
"pinned": false,
"date": "2025-10-13T23:15:11Z"
"date": "2026-02-12T02:53:51Z"
},
{
"slug": "reactive-resume",
@@ -1236,16 +1285,16 @@
{
"slug": "scanopy",
"repo": "scanopy/scanopy",
"version": "v0.14.3",
"version": "v0.14.4",
"pinned": false,
"date": "2026-02-04T01:41:01Z"
"date": "2026-02-10T03:57:28Z"
},
{
"slug": "scraparr",
"repo": "thecfu/scraparr",
"version": "v2.2.5",
"version": "v3.0.1",
"pinned": false,
"date": "2025-10-07T12:34:31Z"
"date": "2026-02-11T17:42:23Z"
},
{
"slug": "seelf",
@@ -1282,6 +1331,13 @@
"pinned": false,
"date": "2026-01-16T12:08:28Z"
},
{
"slug": "slskd",
"repo": "slskd/slskd",
"version": "0.24.3",
"pinned": false,
"date": "2026-01-15T14:40:15Z"
},
{
"slug": "snipeit",
"repo": "grokability/snipe-it",
@@ -1292,9 +1348,9 @@
{
"slug": "snowshare",
"repo": "TuroYT/snowshare",
"version": "v1.2.12",
"version": "v1.3.5",
"pinned": false,
"date": "2026-01-30T13:35:56Z"
"date": "2026-02-11T10:24:51Z"
},
{
"slug": "sonarr",
@@ -1327,9 +1383,9 @@
{
"slug": "stirling-pdf",
"repo": "Stirling-Tools/Stirling-PDF",
"version": "v2.4.5",
"version": "v2.4.6",
"pinned": false,
"date": "2026-02-06T23:12:20Z"
"date": "2026-02-12T00:01:19Z"
},
{
"slug": "streamlink-webui",
@@ -1418,9 +1474,9 @@
{
"slug": "tracearr",
"repo": "connorgallopo/Tracearr",
"version": "v1.4.12",
"version": "v1.4.17",
"pinned": false,
"date": "2026-01-28T23:29:37Z"
"date": "2026-02-11T01:33:21Z"
},
{
"slug": "tracktor",
@@ -1432,9 +1488,9 @@
{
"slug": "traefik",
"repo": "traefik/traefik",
"version": "v3.6.7",
"version": "v3.6.8",
"pinned": false,
"date": "2026-01-14T14:11:45Z"
"date": "2026-02-11T16:44:37Z"
},
{
"slug": "trilium",
@@ -1446,9 +1502,9 @@
{
"slug": "trip",
"repo": "itskovacs/TRIP",
"version": "1.39.0",
"version": "1.40.0",
"pinned": false,
"date": "2026-02-07T16:59:51Z"
"date": "2026-02-10T20:12:53Z"
},
{
"slug": "tududi",
@@ -1509,9 +1565,9 @@
{
"slug": "vaultwarden",
"repo": "dani-garcia/vaultwarden",
"version": "1.35.2",
"version": "1.35.3",
"pinned": false,
"date": "2026-01-09T18:37:04Z"
"date": "2026-02-10T20:37:03Z"
},
{
"slug": "victoriametrics",
@@ -1523,9 +1579,9 @@
{
"slug": "vikunja",
"repo": "go-vikunja/vikunja",
"version": "v1.0.0",
"version": "v1.1.0",
"pinned": false,
"date": "2026-01-28T11:12:59Z"
"date": "2026-02-09T10:34:29Z"
},
{
"slug": "wallabag",
@@ -1537,9 +1593,9 @@
{
"slug": "wallos",
"repo": "ellite/Wallos",
"version": "v4.6.0",
"version": "v4.6.1",
"pinned": false,
"date": "2025-12-20T15:57:51Z"
"date": "2026-02-10T21:06:46Z"
},
{
"slug": "wanderer",
@@ -1572,9 +1628,9 @@
{
"slug": "wavelog",
"repo": "wavelog/wavelog",
"version": "2.2.2",
"version": "2.3",
"pinned": false,
"date": "2025-12-31T16:53:34Z"
"date": "2026-02-11T15:46:40Z"
},
{
"slug": "wealthfolio",
@@ -1590,19 +1646,26 @@
"pinned": false,
"date": "2025-11-11T14:30:28Z"
},
{
"slug": "wger",
"repo": "wger-project/wger",
"version": "2.4",
"pinned": false,
"date": "2026-01-18T12:12:02Z"
},
{
"slug": "wikijs",
"repo": "requarks/wiki",
"version": "v2.5.311",
"version": "v2.5.312",
"pinned": false,
"date": "2026-01-08T09:50:00Z"
"date": "2026-02-12T02:45:22Z"
},
{
"slug": "wishlist",
"repo": "cmintey/wishlist",
"version": "v0.59.0",
"version": "v0.60.0",
"pinned": false,
"date": "2026-01-19T16:42:14Z"
"date": "2026-02-10T04:05:26Z"
},
{
"slug": "wizarr",
@@ -1628,9 +1691,9 @@
{
"slug": "yubal",
"repo": "guillevc/yubal",
"version": "v0.4.2",
"version": "v0.5.0",
"pinned": false,
"date": "2026-02-08T21:35:13Z"
"date": "2026-02-09T22:11:32Z"
},
{
"slug": "zigbee2mqtt",
@@ -1642,9 +1705,9 @@
{
"slug": "zipline",
"repo": "diced/zipline",
"version": "v4.4.1",
"version": "v4.4.2",
"pinned": false,
"date": "2026-01-20T01:29:01Z"
"date": "2026-02-11T04:58:54Z"
},
{
"slug": "zitadel",


@@ -33,7 +33,7 @@
},
"notes": [
{
"text": "Kutt needs so be served with an SSL certificate for its login to work. During install, you will be prompted to choose if you want to have Caddy installed for SSL termination or if you want to use your own reverse proxy (in that case point your reverse porxy to port 3000).",
"text": "Kutt needs to be served with an SSL certificate for its login to work. During install, you will be prompted to choose if you want to have Caddy installed for SSL termination or if you want to use your own reverse proxy (in that case point your reverse proxy to port 3000).",
"type": "info"
}
]


@@ -28,10 +28,14 @@
}
],
"default_credentials": {
"username": "admin",
"username": null,
"password": null
},
"notes": [
{
"text": "On first visit, the setup wizard will guide you to create an admin account and configure ACME email.",
"type": "warning"
},
{
"text": "Nginx runs on ports 80/443, Nginx UI management interface on port 9000.",
"type": "info"
@@ -39,10 +43,6 @@
{
"text": "SSL certificates can be managed automatically with Let's Encrypt integration.",
"type": "info"
},
{
"text": "Initial Login data: `cat ~/nginx-ui.creds`",
"type": "info"
}
]
}


@@ -30,10 +30,14 @@
}
],
"default_credentials": {
"username": "admin@example.com",
"password": "changeme"
"username": null,
"password": null
},
"notes": [
{
"text": "On first launch, a setup wizard will guide you through creating an admin account. There are no default credentials.",
"type": "info"
},
{
"text": "You can install the specific Certbot plugin you prefer, or run /app/scripts/install-certbot-plugins within the Nginx Proxy Manager (NPM) LXC shell to install many common plugins. Important: This script does not install all Certbot plugins, as some require additional, external system dependencies (like specific packages for certain DNS providers). These external dependencies must be manually installed within the LXC container before you can successfully install and use the corresponding Certbot plugin. Consult the plugin's documentation for required packages.",
"type": "info"


@@ -1,44 +1,35 @@
{
"name": "Prometheus Paperless NGX Exporter",
"slug": "prometheus-paperless-ngx-exporter",
"categories": [
9
],
"date_created": "2025-02-07",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 8081,
"documentation": null,
"website": "https://github.com/hansmi/prometheus-paperless-exporter",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/paperless-ngx.webp",
"config_path": "",
"description": "Prometheus metrics exporter for Paperless-NGX, a document management system transforming physical documents into a searchable online archive. The exporter relies on Paperless' REST API.",
"install_methods": [
{
"type": "default",
"script": "ct/prometheus-paperless-ngx-exporter.sh",
"resources": {
"cpu": 1,
"ram": 256,
"hdd": 2,
"os": "debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "Please adjust the Paperless URL in the systemd unit file: /etc/systemd/system/prometheus-paperless-ngx-exporter.service",
"type": "info"
},
{
"text": "Please adjust the Paperless authentication token in the configuration file: /etc/prometheus-paperless-ngx-exporter/paperless_auth_token_file",
"type": "info"
}
]
"name": "Prometheus Paperless NGX Exporter",
"slug": "prometheus-paperless-ngx-exporter",
"categories": [
9
],
"date_created": "2025-02-07",
"type": "addon",
"updateable": true,
"privileged": false,
"interface_port": 8081,
"documentation": "https://github.com/hansmi/prometheus-paperless-exporter",
"website": "https://github.com/hansmi/prometheus-paperless-exporter",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/paperless-ngx.webp",
"config_path": "/etc/prometheus-paperless-ngx-exporter/config.env",
"description": "Prometheus metrics exporter for Paperless-NGX, a document management system transforming physical documents into a searchable online archive. The exporter relies on Paperless' REST API.",
"install_methods": [
{
"type": "default",
"script": "tools/addon/prometheus-paperless-ngx-exporter.sh",
"resources": {
"cpu": null,
"ram": null,
"hdd": null,
"os": null,
"version": null
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": []
}


@@ -1,5 +1,5 @@
{
"name": "slskd",
"name": "Slskd",
"slug": "slskd",
"categories": [
11
@@ -35,10 +35,6 @@
{
"text": "See /opt/slskd/config/slskd.yml to add your Soulseek credentials",
"type": "info"
},
{
"text": "This LXC includes Soularr; it needs to be configured (/opt/soularr/config.ini) before it will work",
"type": "info"
}
]
}


@@ -10,7 +10,7 @@
"privileged": false,
"interface_port": 3000,
"documentation": "https://github.com/TuroYT/snowshare",
"config_path": "/opt/snowshare/.env",
"config_path": "/opt/snowshare.env",
"website": "https://github.com/TuroYT/snowshare",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/png/snowshare.png",
"description": "A modern, secure file and link sharing platform built with Next.js, Prisma, and NextAuth. Share URLs, code snippets, and files with customizable expiration, privacy, and QR codes.",


@@ -19,9 +19,9 @@
"type": "default",
"script": "ct/wger.sh",
"resources": {
"cpu": 1,
"ram": 1024,
"hdd": 6,
"cpu": 2,
"ram": 2048,
"hdd": 8,
"os": "debian",
"version": "13"
}
@@ -33,7 +33,7 @@
},
"notes": [
{
"text": "Enable proxy support by uncommenting this line in `/home/wger/src/settings.py` and pointing it to your URL: `# CSRF_TRUSTED_ORIGINS = ['http://127.0.0.1', 'https://my.domain.example.com']`, then restart the service `systemctl restart wger`.",
"text": "This LXC also runs Celery and Redis to synchronize workouts and ingredients",
"type": "info"
}
]


@@ -58,7 +58,7 @@ DISABLE_REGISTRATION=False
EOF
cd /opt/adventurelog/backend/server
mkdir -p /opt/adventurelog/backend/server/media
$STD uv venv /opt/adventurelog/backend/server/.venv
$STD uv venv --clear /opt/adventurelog/backend/server/.venv
$STD /opt/adventurelog/backend/server/.venv/bin/python -m ensurepip --upgrade
$STD /opt/adventurelog/backend/server/.venv/bin/python -m pip install --upgrade pip
$STD /opt/adventurelog/backend/server/.venv/bin/python -m pip install -r requirements.txt


@@ -77,7 +77,7 @@ echo "${KEPUB_VERSION#v}" >"$INSTALL_DIR"/KEPUBIFY_RELEASE
sed 's/^/v/' ~/.autocaliweb >"$INSTALL_DIR"/ACW_RELEASE
cd "$INSTALL_DIR"
$STD uv venv "$VIRTUAL_ENV"
$STD uv venv --clear "$VIRTUAL_ENV"
$STD uv sync --all-extras --active
cat <<EOF >./dirs.json
{


@@ -29,7 +29,7 @@ fetch_and_deploy_gh_release "babybuddy" "babybuddy/babybuddy" "tarball"
msg_info "Installing Babybuddy"
mkdir -p /opt/data
cd /opt/babybuddy
$STD uv venv .venv
$STD uv venv --clear .venv
$STD source .venv/bin/activate
$STD uv pip install -r requirements.txt
cp babybuddy/settings/production.example.py babybuddy/settings/production.py


@@ -20,7 +20,7 @@ msg_info "Installing Bazarr"
mkdir -p /var/lib/bazarr/
chmod 775 /opt/bazarr /var/lib/bazarr/
sed -i.bak 's/--only-binary=Pillow//g' /opt/bazarr/requirements.txt
$STD uv venv /opt/bazarr/venv --python 3.12
$STD uv venv --clear /opt/bazarr/venv --python 3.12
$STD uv pip install -r /opt/bazarr/requirements.txt --python /opt/bazarr/venv/bin/python3
msg_ok "Installed Bazarr"

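The Bazarr installer's `sed -i.bak 's/--only-binary=Pillow//g'` edits requirements.txt in place while leaving the original behind as requirements.txt.bak (GNU sed syntax). On a scratch file:

```shell
tmp=$(mktemp -d)
printf 'Pillow==10.0.0 --only-binary=Pillow\n' > "$tmp/requirements.txt"
sed -i.bak 's/--only-binary=Pillow//g' "$tmp/requirements.txt"
cat "$tmp/requirements.txt"   # the --only-binary pin is gone; the package line remains
ls "$tmp"                     # requirements.txt plus the untouched requirements.txt.bak
```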

@@ -36,7 +36,7 @@ PYTHON_VERSION="3.12" setup_uv
fetch_and_deploy_gh_release "ComfyUI" "comfyanonymous/ComfyUI" "tarball" "latest" "/opt/ComfyUI"
msg_info "Python dependencies"
$STD uv venv "/opt/ComfyUI/venv"
$STD uv venv --clear "/opt/ComfyUI/venv"
if [[ "${comfyui_gpu_type,,}" == "nvidia" ]]; then
pytorch_url="https://download.pytorch.org/whl/cu130"


@@ -16,7 +16,8 @@ update_os
msg_info "Installing Dependencies"
$STD apt install -y \
python3-pip \
python3-libtorrent
python3-libtorrent \
python3-setuptools
msg_ok "Installed Dependencies"
msg_info "Installing Deluge"


@@ -36,8 +36,8 @@ fetch_and_deploy_gh_release "dispatcharr" "Dispatcharr/Dispatcharr" "tarball"
msg_info "Installing Python Dependencies with uv"
cd /opt/dispatcharr
$STD uv venv
$STD uv pip install -r requirements.txt --index-strategy unsafe-best-match
$STD uv venv --clear
$STD uv sync
$STD uv pip install gunicorn gevent celery redis daphne
msg_ok "Installed Python Dependencies"
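Moving Dispatcharr from `pip install -r requirements.txt` to `uv sync` means the dependency list is now read from a `pyproject.toml` shipped with the release tarball. A hypothetical minimal file of that shape (the entries below are illustrative, not Dispatcharr's real metadata or dependency list):

```toml
[project]
name = "dispatcharr"            # hypothetical; actual metadata lives upstream
version = "0.19.0"
requires-python = ">=3.11"
dependencies = [
  "django>=4.2",                # illustrative entries only
  "redis",
]
```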

install/drawio-install.sh Normal file

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: Slaviša Arežina (tremor021)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.drawio.com/
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
setup_hwaccel
msg_info "Installing Dependencies"
$STD apt install -y tomcat11
msg_ok "Installed Dependencies"
USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "drawio" "jgraph/drawio" "singlefile" "latest" "/var/lib/tomcat11/webapps" "draw.war"
motd_ssh
customize
cleanup_lxc


@@ -31,8 +31,10 @@ setup_deb822_repo "matrix-org" \
"main"
echo "matrix-synapse-py3 matrix-synapse/server-name string $servername" | debconf-set-selections
echo "matrix-synapse-py3 matrix-synapse/report-stats boolean false" | debconf-set-selections
echo "exit 101" >/usr/sbin/policy-rc.d
chmod +x /usr/sbin/policy-rc.d
$STD apt install matrix-synapse-py3 -y
systemctl stop matrix-synapse
rm -f /usr/sbin/policy-rc.d
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/matrix-synapse/homeserver.yaml
sed -i 's/'\''::1'\'', //g' /etc/matrix-synapse/homeserver.yaml
SECRET=$(openssl rand -hex 32)


@@ -86,7 +86,7 @@ $STD uv tool update-shell
export PATH="/root/.local/bin:$PATH"
$STD poetry self add poetry-plugin-export
$STD poetry export -f requirements.txt --output requirements.txt --without-hashes
$STD uv venv
$STD uv venv --clear
$STD uv pip install -r requirements.txt
msg_ok "Setup Backend"


@@ -23,7 +23,7 @@ msg_info "Setting up Virtual Environment"
mkdir -p /opt/esphome
mkdir -p /root/config
cd /opt/esphome
$STD uv venv /opt/esphome/.venv
$STD uv venv --clear /opt/esphome/.venv
$STD /opt/esphome/.venv/bin/python -m ensurepip --upgrade
$STD /opt/esphome/.venv/bin/python -m pip install --upgrade pip
$STD /opt/esphome/.venv/bin/python -m pip install esphome tornado esptool


@@ -15,31 +15,30 @@ network_check
update_os
msg_info "Installing Dependencies"
$STD apt-get install -y \
$STD apt install -y \
ffmpeg \
jq \
imagemagick
msg_ok "Installed Dependencies"
setup_hwaccel
msg_info "Installing ASP.NET Core Runtime"
curl -fsSL https://packages.microsoft.com/config/debian/13/packages-microsoft-prod.deb -o packages-microsoft-prod.deb
$STD dpkg -i packages-microsoft-prod.deb
rm -rf packages-microsoft-prod.deb
$STD apt-get update
$STD apt-get install -y aspnetcore-runtime-8.0
setup_deb822_repo \
"microsoft" \
"https://packages.microsoft.com/keys/microsoft-2025.asc" \
"https://packages.microsoft.com/debian/13/prod/" \
"trixie"
$STD apt install -y aspnetcore-runtime-8.0
msg_ok "Installed ASP.NET Core Runtime"
fetch_and_deploy_from_url "https://fileflows.com/downloads/zip" "/opt/fileflows"
msg_info "Setup FileFlows"
$STD ln -svf /usr/bin/ffmpeg /usr/local/bin/ffmpeg
$STD ln -svf /usr/bin/ffprobe /usr/local/bin/ffprobe
temp_file=$(mktemp)
curl -fsSL https://fileflows.com/downloads/zip -o "$temp_file"
$STD unzip -d /opt/fileflows "$temp_file"
$STD bash -c "cd /opt/fileflows/Server && dotnet FileFlows.Server.dll --systemd install --root true"
cd /opt/fileflows/Server
dotnet FileFlows.Server.dll --systemd install --root true
systemctl enable -q --now fileflows
rm -f "$temp_file"
msg_ok "Setup FileFlows"
motd_ssh


@@ -17,7 +17,7 @@ PYTHON_VERSION="3.12" setup_uv
fetch_and_deploy_gh_release "huntarr" "plexguide/Huntarr.io" "tarball"
msg_info "Configure Huntarr"
$STD uv venv /opt/huntarr/.venv
$STD uv venv --clear /opt/huntarr/.venv
$STD uv pip install --python /opt/huntarr/.venv/bin/python -r /opt/huntarr/requirements.txt
msg_ok "Configured Huntrarr"


@@ -289,7 +289,7 @@ ML_DIR="${APP_DIR}/machine-learning"
GEO_DIR="${INSTALL_DIR}/geodata"
mkdir -p {"${APP_DIR}","${UPLOAD_DIR}","${GEO_DIR}","${INSTALL_DIR}"/cache}
fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.5.5" "$SRC_DIR"
fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.5.6" "$SRC_DIR"
PNPM_VERSION="$(jq -r '.packageManager | split("@")[1]' ${SRC_DIR}/package.json)"
NODE_VERSION="24" NODE_MODULE="pnpm@${PNPM_VERSION}" setup_nodejs


@@ -18,7 +18,7 @@ PYTHON_VERSION="3.12" setup_uv
msg_info "Installing Jupyter"
mkdir -p /opt/jupyter
cd /opt/jupyter
$STD uv venv /opt/jupyter/.venv
$STD uv venv --clear /opt/jupyter/.venv
$STD /opt/jupyter/.venv/bin/python -m ensurepip --upgrade
$STD /opt/jupyter/.venv/bin/python -m pip install --upgrade pip
$STD /opt/jupyter/.venv/bin/python -m pip install jupyter


@@ -22,7 +22,7 @@ fetch_and_deploy_gh_release "kapowarr" "Casvt/Kapowarr" "tarball"
msg_info "Setup Kapowarr"
cd /opt/kapowarr
$STD uv venv .venv
$STD uv venv --clear .venv
$STD source .venv/bin/activate
$STD uv pip install --upgrade pip
$STD uv pip install --no-cache-dir -r requirements.txt


@@ -20,10 +20,19 @@ msg_ok "Installed Docker"
msg_info "Detecting latest Kasm Workspaces release"
KASM_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_[0-9]+\.[0-9]+\.[0-9]+\.[a-z0-9]+\.tar\.gz' | head -n 1)
if [[ -z "$KASM_URL" ]]; then
SERVICE_IMAGE_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_service_images_amd64_[0-9]+\.[0-9]+\.[0-9]+\.tar\.gz' | head -n 1)
if [[ -n "$SERVICE_IMAGE_URL" ]]; then
KASM_VERSION=$(echo "$SERVICE_IMAGE_URL" | sed -E 's/.*kasm_release_service_images_amd64_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
KASM_URL="https://kasm-static-content.s3.amazonaws.com/kasm_release_${KASM_VERSION}.tar.gz"
fi
else
KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
fi
if [[ -z "$KASM_URL" ]] || [[ -z "$KASM_VERSION" ]]; then
msg_error "Unable to detect latest Kasm release URL."
exit 1
fi
KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
msg_ok "Detected Kasm Workspaces version $KASM_VERSION"
msg_warn "WARNING: This script will run an external installer from a third-party source (https://www.kasmweb.com/)."
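The fallback above ultimately rests on a single `sed` substitution to pull the version out of the release URL; the pattern can be checked in isolation (the URL below is illustrative, not an actual Kasm release artifact):

```shell
#!/usr/bin/env bash
# Standalone check of the version-extraction sed pattern used above.
# The URL is a made-up example in the same shape as the real artifact URLs.
url="https://kasm-static-content.s3.amazonaws.com/kasm_release_1.16.1.tar.gz"
version=$(echo "$url" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
echo "$version"
```

Because `sed -E` prints the whole (substituted) line when the pattern matches, a non-matching URL would fall through unchanged, which is why the script also guards against an empty `KASM_VERSION`.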


@@ -50,7 +50,7 @@ $STD useradd librenms -d /opt/librenms -M -r -s "$(which bash)"
mkdir -p /opt/librenms/{rrd,logs,bootstrap/cache,storage,html}
cd /opt/librenms
APP_KEY=$(openssl rand -base64 40 | tr -dc 'a-zA-Z0-9')
$STD uv venv .venv
$STD uv venv --clear .venv
$STD source .venv/bin/activate
$STD uv pip install -r requirements.txt
cat <<EOF >/opt/librenms/.env


@@ -37,18 +37,13 @@ PYTHON_VERSION="3.12" setup_uv
fetch_and_deploy_gh_release "libretranslate" "LibreTranslate/LibreTranslate" "tarball"
msg_info "Setup LibreTranslate (Patience)"
TORCH_VERSION=$(grep -Eo '"torch ==[0-9]+\.[0-9]+\.[0-9]+' /opt/libretranslate/pyproject.toml |
tail -n1 | sed 's/.*==//')
if [[ -z "$TORCH_VERSION" ]]; then
TORCH_VERSION="2.5.0"
fi
cd /opt/libretranslate
$STD uv venv .venv --python 3.12
$STD uv venv --clear .venv --python 3.12
$STD source .venv/bin/activate
$STD uv pip install --upgrade pip setuptools
$STD uv pip install --upgrade pip
$STD uv pip install "setuptools<81"
$STD uv pip install Babel==2.12.1
$STD .venv/bin/python scripts/compile_locales.py
$STD uv pip install "torch==${TORCH_VERSION}" --extra-index-url https://download.pytorch.org/whl/cpu
$STD uv pip install "numpy<2"
$STD uv pip install .
$STD uv pip install libretranslate


@@ -42,7 +42,7 @@ msg_ok "Set up PostgreSQL"
msg_info "Setting up Virtual Environment"
mkdir -p /opt/litellm
cd /opt/litellm
$STD uv venv /opt/litellm/.venv
$STD uv venv --clear /opt/litellm/.venv
$STD /opt/litellm/.venv/bin/python -m ensurepip --upgrade
$STD /opt/litellm/.venv/bin/python -m pip install --upgrade pip
$STD /opt/litellm/.venv/bin/python -m pip install litellm[proxy] prisma


@@ -29,7 +29,7 @@ fetch_and_deploy_gh_release "mylar3" "mylar3/mylar3" "tarball"
msg_info "Installing ${APPLICATION}"
mkdir -p /opt/mylar3-data
$STD uv venv /opt/mylar3/.venv
$STD uv venv --clear /opt/mylar3/.venv
$STD /opt/mylar3/.venv/bin/python -m ensurepip --upgrade
$STD /opt/mylar3/.venv/bin/python -m pip install --upgrade pip
$STD /opt/mylar3/.venv/bin/python -m pip install --no-cache-dir -r /opt/mylar3/requirements.txt


@@ -30,29 +30,19 @@ msg_ok "Installed Nginx UI"
msg_info "Configuring Nginx UI"
mkdir -p /usr/local/etc/nginx-ui
cat <<EOF >/usr/local/etc/nginx-ui/app.ini
[server]
HttpHost = 0.0.0.0
HttpPort = 9000
RunMode = release
JwtSecret = $(openssl rand -hex 32)
[nginx]
AccessLogPath = /var/log/nginx/access.log
ErrorLogPath = /var/log/nginx/error.log
ConfigDir = /etc/nginx
PIDPath = /run/nginx.pid
TestConfigCmd = nginx -t
ReloadCmd = nginx -s reload
RestartCmd = systemctl restart nginx
[app]
PageSize = 10
[server]
Host = 0.0.0.0
Port = 9000
RunMode = release
[cert]
Email =
CADir =
RenewalInterval = 7
RecursiveNameservers =
HTTPChallengePort = 9180
[terminal]
StartCmd = login
EOF
msg_ok "Configured Nginx UI"
@@ -78,17 +68,6 @@ EOF
systemctl daemon-reload
msg_ok "Created Service"
msg_info "Creating Initial Admin User"
systemctl start nginx-ui
sleep 3
systemctl stop nginx-ui
sleep 1
/usr/local/bin/nginx-ui reset-password --config /usr/local/etc/nginx-ui/app.ini &>/tmp/nginx-ui-reset.log || true
ADMIN_PASS=$(grep -oP 'Password: \K\S+' /tmp/nginx-ui-reset.log || echo "admin")
echo -e "Nginx-UI Credentials\nUsername: admin\nPassword: $ADMIN_PASS" >~/nginx-ui.creds
rm -f /tmp/nginx-ui-reset.log
msg_ok "Created Initial Admin User"
msg_info "Starting Service"
systemctl enable -q --now nginx-ui
rm -rf /etc/nginx/sites-enabled/default


@@ -130,10 +130,11 @@ if [ ! -f /app/config/production.json ]; then
"database": {
"engine": "knex-native",
"knex": {
"client": "sqlite3",
"client": "better-sqlite3",
"connection": {
"filename": "/data/database.sqlite"
}
},
"useNullAsDefault": true
}
}
}


@@ -38,6 +38,10 @@ for server in "${servers[@]}"; do
fi
done
msg_info "Installing dependencies"
$STD apt install -y inotify-tools
msg_ok "Installed dependencies"
msg_info "Installing Collabora Online"
curl -fsSL https://collaboraoffice.com/downloads/gpg/collaboraonline-release-keyring.gpg -o /etc/apt/keyrings/collaboraonline-release-keyring.gpg
cat <<EOF >/etc/apt/sources.list.d/colloboraonline.sources
@@ -148,8 +152,15 @@ COLLABORATION_JWT_SECRET=
# FRONTEND_FULL_TEXT_SEARCH_ENABLED=true
# SEARCH_EXTRACTOR_TIKA_TIKA_URL=<your-tika-url>
## External storage test - Only NFS v4.2+ is supported
## User files
## Uncomment below to enable PosixFS Collaborative Mode
## Increase inotify watch/instance limits on your PVE host:
### sysctl -w fs.inotify.max_user_watches=1048576
### sysctl -w fs.inotify.max_user_instances=1024
# STORAGE_USERS_POSIX_ENABLE_COLLABORATION=true
# STORAGE_USERS_POSIX_WATCH_TYPE=inotifywait
# STORAGE_USERS_POSIX_WATCH_FS=true
# STORAGE_USERS_POSIX_WATCH_PATH=<path-to-storage-or-bind-mount>
## User files location - experimental - use at your own risk! - ZFS, NFS v4.2+ supported - CIFS/SMB not supported
# STORAGE_USERS_POSIX_ROOT=<path-to-your-bind_mount>
EOF


@@ -36,7 +36,7 @@ $STD npm ci
$STD npm run set:sqlite
$STD npm run set:oss
rm -rf server/private
$STD npm run db:generate
$STD npm run db:sqlite:generate
$STD npm run build
$STD npm run build:cli
cp -R .next/standalone ./


@@ -1,47 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: Andy Grunwald (andygrunwald)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/hansmi/prometheus-paperless-exporter
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
fetch_and_deploy_gh_release "prom-paperless-exp" "hansmi/prometheus-paperless-exporter" "binary"
msg_info "Configuring Prometheus Paperless NGX Exporter"
mkdir -p /etc/prometheus-paperless-ngx-exporter
echo "SECRET_AUTH_TOKEN" >/etc/prometheus-paperless-ngx-exporter/paperless_auth_token_file
msg_ok "Configured Prometheus Paperless NGX Exporter"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/prometheus-paperless-ngx-exporter.service
[Unit]
Description=Prometheus Paperless NGX Exporter
Wants=network-online.target
After=network-online.target
[Service]
User=root
Restart=always
Type=simple
ExecStart=/usr/bin/prometheus-paperless-exporter \
--paperless_url=http://paperless.example.org \
--paperless_auth_token_file=/etc/prometheus-paperless-ngx-exporter/paperless_auth_token_file
ExecReload=/bin/kill -HUP \$MAINPID
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now prometheus-paperless-ngx-exporter
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc


@@ -19,7 +19,7 @@ msg_info "Installing Prometheus Proxmox VE Exporter"
mkdir -p /opt/prometheus-pve-exporter
cd /opt/prometheus-pve-exporter
$STD uv venv /opt/prometheus-pve-exporter/.venv
$STD uv venv --clear /opt/prometheus-pve-exporter/.venv
$STD /opt/prometheus-pve-exporter/.venv/bin/python -m ensurepip --upgrade
$STD /opt/prometheus-pve-exporter/.venv/bin/python -m pip install --upgrade pip
$STD /opt/prometheus-pve-exporter/.venv/bin/python -m pip install prometheus-pve-exporter


@@ -36,7 +36,7 @@ msg_ok "Setup Unrar"
fetch_and_deploy_gh_release "sabnzbd-org" "sabnzbd/sabnzbd" "prebuild" "latest" "/opt/sabnzbd" "SABnzbd-*-src.tar.gz"
msg_info "Installing SABnzbd"
$STD uv venv /opt/sabnzbd/venv
$STD uv venv --clear /opt/sabnzbd/venv
$STD uv pip install -r /opt/sabnzbd/requirements.txt --python=/opt/sabnzbd/venv/bin/python
msg_ok "Installed SABnzbd"


@@ -18,7 +18,7 @@ fetch_and_deploy_gh_release "scrappar" "thecfu/scraparr" "tarball" "latest" "/op
msg_info "Installing Scraparr"
cd /opt/scraparr
$STD uv venv /opt/scraparr/.venv
$STD uv venv --clear /opt/scraparr/.venv
$STD /opt/scraparr/.venv/bin/python -m ensurepip --upgrade
$STD /opt/scraparr/.venv/bin/python -m pip install --upgrade pip
$STD /opt/scraparr/.venv/bin/python -m pip install -r /opt/scraparr/src/scraparr/requirements.txt


@@ -131,7 +131,7 @@ msg_ok "Built Shelfmark frontend"
msg_info "Configuring Shelfmark"
cd /opt/shelfmark
$STD uv venv ./venv
$STD uv venv --clear ./venv
$STD source ./venv/bin/activate
$STD uv pip install -r ./requirements-base.txt
[[ "$DEPLOYMENT_TYPE" == "1" ]] && $STD uv pip install -r ./requirements-shelfmark.txt


@@ -3,7 +3,7 @@
# Copyright (c) 2021-2026 community-scripts ORG
# Author: vhsdream
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/slskd/slskd/, https://soularr.net
# Source: https://github.com/slskd/slskd/, https://github.com/mrusse/soularr
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
@@ -13,71 +13,71 @@ setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
python3-pip
msg_ok "Installed Dependencies"
fetch_and_deploy_gh_release "Slskd" "slskd/slskd" "prebuild" "latest" "/opt/slskd" "slskd-*-linux-x64.zip"
msg_info "Setup ${APPLICATION}"
tmp_file=$(mktemp)
RELEASE=$(curl -s https://api.github.com/repos/slskd/slskd/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3) }')
curl -fsSL "https://github.com/slskd/slskd/releases/download/${RELEASE}/slskd-${RELEASE}-linux-x64.zip" -o $tmp_file
$STD unzip $tmp_file -d /opt/${APPLICATION}
echo "${RELEASE}" >/opt/${APPLICATION}_version.txt
msg_info "Configuring Slskd"
JWT_KEY=$(openssl rand -base64 44)
SLSKD_API_KEY=$(openssl rand -base64 44)
cp /opt/${APPLICATION}/config/slskd.example.yml /opt/${APPLICATION}/config/slskd.yml
cp /opt/slskd/config/slskd.example.yml /opt/slskd/config/slskd.yml
sed -i \
-e "\|web:|,\|cidr|s|^#||" \
-e "\|https:|,\|5031|s|false|true|" \
-e '/web:/,/cidr/s/^# //' \
-e '/https:/,/port: 5031/s/false/true/' \
-e '/port: 5030/,/socket/s/,.*$//' \
-e '/content_path:/,/authentication/s/false/true/' \
-e "\|api_keys|,\|cidr|s|<some.*$|$SLSKD_API_KEY|; \
s|role: readonly|role: readwrite|; \
s|0.0.0.0/0,::/0|& # Replace this with your subnet|" \
-e "\|soulseek|,\|write_queue|s|^#||" \
-e "\|jwt:|,\|ttl|s|key: ~|key: $JWT_KEY|" \
-e "s|^ picture|# picture|" \
/opt/${APPLICATION}/config/slskd.yml
msg_ok "Setup ${APPLICATION}"
-e '/soulseek/,/write_queue/s/^# //' \
-e 's/^.*picture/#&/' /opt/slskd/config/slskd.yml
msg_ok "Configured Slskd"
msg_info "Installing Soularr"
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
cd /tmp
curl -fsSL -o main.zip https://github.com/mrusse/soularr/archive/refs/heads/main.zip
$STD unzip main.zip
mv soularr-main /opt/soularr
cd /opt/soularr
$STD pip install -r requirements.txt
sed -i \
-e "\|[Slskd]|,\|host_url|s|yourslskdapikeygoeshere|$SLSKD_API_KEY|" \
-e "/host_url/s/slskd/localhost/" \
/opt/soularr/config.ini
sed -i \
-e "/#This\|#Default\|INTERVAL/{N;d;}" \
-e "/while\|#Pass/d" \
-e "\|python|s|app|opt/soularr|; s|python|python3|" \
-e "/dt/,+2d" \
/opt/soularr/run.sh
sed -i -E "/(soularr.py)/s/.{5}$//; /if/,/fi/s/.{4}//" /opt/soularr/run.sh
chmod +x /opt/soularr/run.sh
msg_ok "Installed Soularr"
read -rp "${TAB3}Do you want to install Soularr? y/N " soularr
if [[ ${soularr,,} =~ ^(y|yes)$ ]]; then
PYTHON_VERSION="3.11" setup_uv
fetch_and_deploy_gh_release "Soularr" "mrusse/soularr" "tarball" "latest" "/opt/soularr"
cd /opt/soularr
$STD uv venv venv
$STD source venv/bin/activate
$STD uv pip install -r requirements.txt
sed -i \
-e "\|[Slskd]|,\|host_url|s|yourslskdapikeygoeshere|$SLSKD_API_KEY|" \
-e "/host_url/s/slskd/localhost/" \
/opt/soularr/config.ini
cat <<EOF >/opt/soularr/run.sh
#!/usr/bin/env bash
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/${APPLICATION}.service
if ps aux | grep "[s]oularr.py" >/dev/null; then
echo "Soularr is already running. Exiting..."
exit 1
else
source /opt/soularr/venv/bin/activate
uv run python3 -u /opt/soularr/soularr.py --config-dir /opt/soularr
fi
EOF
chmod +x /opt/soularr/run.sh
deactivate
msg_ok "Installed Soularr"
fi
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/slskd.service
[Unit]
Description=${APPLICATION} Service
Description=Slskd Service
After=network.target
Wants=network.target
[Service]
WorkingDirectory=/opt/${APPLICATION}
ExecStart=/opt/${APPLICATION}/slskd --config /opt/${APPLICATION}/config/slskd.yml
WorkingDirectory=/opt/slskd
ExecStart=/opt/slskd/slskd --config /opt/slskd/config/slskd.yml
Restart=always
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/soularr.timer
if [[ -d /opt/soularr ]]; then
cat <<EOF >/etc/systemd/system/soularr.timer
[Unit]
Description=Soularr service timer
RefuseManualStart=no
@@ -85,15 +85,15 @@ RefuseManualStop=no
[Timer]
Persistent=true
# run every 5 minutes
OnCalendar=*-*-* *:0/5:00
# run every 10 minutes
OnCalendar=*-*-* *:0/10:00
Unit=soularr.service
[Install]
WantedBy=timers.target
EOF
cat <<EOF >/etc/systemd/system/soularr.service
cat <<EOF >/etc/systemd/system/soularr.service
[Unit]
Description=Soularr service
After=network.target slskd.service
@@ -106,10 +106,9 @@ ExecStart=/bin/bash -c /opt/soularr/run.sh
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now ${APPLICATION}
systemctl enable -q soularr.timer
rm -rf $tmp_file
rm -rf /tmp/main.zip
msg_warn "Add your Lidarr API key to Soularr in '/opt/soularr/config.ini', then run 'systemctl enable --now soularr.timer'"
fi
systemctl enable -q --now slskd
msg_ok "Created Services"
motd_ssh
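The config rewrites above lean on sed's `/start/,/end/` range addressing (e.g. `/web:/,/cidr/s/^# //`), which applies a substitution only to lines between two marker patterns. A minimal standalone demonstration, with a made-up sample config:

```shell
#!/usr/bin/env bash
# Demonstrates sed range addressing: uncomment only the lines between
# the 'web:' and 'cidr' markers, leaving later comments untouched.
# The sample YAML below is invented for illustration.
out=$(sed '/web:/,/cidr/s/^# //' <<'SAMPLE'
web:
#   port: 5030
#   cidr: 0.0.0.0/0
# picture: stays-commented
SAMPLE
)
echo "$out"
```

The range closes on the first line matching the end pattern, so the `picture` line after it keeps its comment marker — the same mechanism the hunk relies on to scope each edit to one section of `slskd.yml`.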


@@ -61,7 +61,7 @@ msg_ok "Installed LibreOffice Components"
msg_info "Installing Python Dependencies"
mkdir -p /tmp/stirling-pdf
$STD uv venv /opt/.venv
$STD uv venv --clear /opt/.venv
export PATH="/opt/.venv/bin:$PATH"
source /opt/.venv/bin/activate
$STD uv pip install --upgrade pip


@@ -22,7 +22,7 @@ fetch_and_deploy_gh_release "streamlink-webui" "CrazyWolf13/streamlink-webui" "t
msg_info "Setup ${APPLICATION}"
mkdir -p "/opt/${APPLICATION}-download"
$STD uv venv /opt/"${APPLICATION}"/backend/src/.venv
$STD uv venv --clear /opt/"${APPLICATION}"/backend/src/.venv
source /opt/"${APPLICATION}"/backend/src/.venv/bin/activate
$STD uv pip install -r /opt/streamlink-webui/backend/src/requirements.txt --python=/opt/"${APPLICATION}"/backend/src/.venv
cd /opt/"${APPLICATION}"/frontend/src


@@ -40,7 +40,7 @@ SECRET_KEY=$(openssl rand -base64 45 | sed 's/\//\\\//g')
msg_info "Setup Tandoor"
mkdir -p /opt/tandoor/{config,api,mediafiles,staticfiles}
cd /opt/tandoor
$STD uv venv .venv --python=python3
$STD uv venv --clear .venv --python=python3
$STD uv pip install -r requirements.txt --python .venv/bin/python
cd /opt/tandoor/vue3
$STD yarn install


@@ -25,7 +25,7 @@ cd /opt/Tautulli
TAUTULLI_VERSION=$(get_latest_github_release "Tautulli/Tautulli" "false")
echo "${TAUTULLI_VERSION}" >/opt/Tautulli/version.txt
echo "master" >/opt/Tautulli/branch.txt
$STD uv venv
$STD uv venv --clear
$STD source /opt/Tautulli/.venv/bin/activate
$STD uv pip install -r requirements.txt
$STD uv pip install pyopenssl


@@ -30,7 +30,7 @@ msg_ok "Built Frontend"
msg_info "Setting up Backend"
cd /opt/trip/backend
$STD uv venv /opt/trip/.venv
$STD uv venv --clear /opt/trip/.venv
$STD uv pip install --python /opt/trip/.venv/bin/python -r trip/requirements.txt
msg_ok "Set up Backend"


@@ -27,68 +27,6 @@ msg_ok "Installed Dependencies"
fetch_and_deploy_gh_release "UmlautAdaptarr" "PCJones/Umlautadaptarr" "prebuild" "latest" "/opt/UmlautAdaptarr" "linux-x64.zip"
msg_info "Setting up UmlautAdaptarr"
cat <<EOF >/opt/UmlautAdaptarr/appsettings.json
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
},
"Console": {
"TimestampFormat": "yyyy-MM-dd HH:mm:ss::"
}
},
"AllowedHosts": "*",
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://[::]:5005"
}
}
},
"Settings": {
"UserAgent": "UmlautAdaptarr/1.0",
"UmlautAdaptarrApiHost": "https://umlautadaptarr.pcjones.de/api/v1",
"IndexerRequestsCacheDurationInMinutes": 12
},
"Sonarr": [
{
"Enabled": false,
"Name": "Sonarr",
"Host": "http://192.168.1.100:8989",
"ApiKey": "dein_sonarr_api_key"
}
],
"Radarr": [
{
"Enabled": false,
"Name": "Radarr",
"Host": "http://192.168.1.101:7878",
"ApiKey": "dein_radarr_api_key"
}
],
"Lidarr": [
{
"Enabled": false,
"Host": "http://192.168.1.102:8686",
"ApiKey": "dein_lidarr_api_key"
},
],
"Readarr": [
{
"Enabled": false,
"Host": "http://192.168.1.103:8787",
"ApiKey": "dein_readarr_api_key"
},
],
"IpLeakTest": {
"Enabled": false
}
}
EOF
msg_ok "Setup UmlautAdaptarr"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/umlautadaptarr.service
[Unit]


@@ -49,7 +49,7 @@ fetch_and_deploy_gh_release "warracker" "sassanix/Warracker" "tarball" "latest"
msg_info "Installing Warracker"
cd /opt/warracker/backend
$STD uv venv .venv
$STD uv venv --clear .venv
$STD source .venv/bin/activate
$STD uv pip install -r requirements.txt
mv /opt/warracker/env.example /opt/.env


@@ -15,92 +15,167 @@ update_os
msg_info "Installing Dependencies"
$STD apt install -y \
git \
apache2 \
libapache2-mod-wsgi-py3
build-essential \
nginx \
redis-server \
libpq-dev
msg_ok "Installed Dependencies"
msg_info "Installing Python"
$STD apt install -y python3-pip
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
msg_ok "Installed Python"
NODE_VERSION="22" NODE_MODULE="yarn,sass" setup_nodejs
NODE_VERSION="22" NODE_MODULE="sass" setup_nodejs
setup_uv
PG_VERSION="16" setup_postgresql
PG_DB_NAME="wger" PG_DB_USER="wger" setup_postgresql_db
fetch_and_deploy_gh_release "wger" "wger-project/wger" "tarball"
msg_info "Setting up wger"
$STD adduser wger --disabled-password --gecos ""
mkdir /home/wger/db
touch /home/wger/db/database.sqlite
chown :www-data -R /home/wger/db
chmod g+w /home/wger/db /home/wger/db/database.sqlite
mkdir /home/wger/{static,media}
chmod o+w /home/wger/media
temp_dir=$(mktemp -d)
cd "$temp_dir"
RELEASE=$(curl -fsSL https://api.github.com/repos/wger-project/wger/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3)}')
curl -fsSL "https://github.com/wger-project/wger/archive/refs/tags/$RELEASE.tar.gz" -o "$RELEASE.tar.gz"
tar xzf "$RELEASE".tar.gz
mv wger-"$RELEASE" /home/wger/src
cd /home/wger/src
$STD pip install -r requirements_prod.txt --ignore-installed
$STD pip install -e .
$STD wger create-settings --database-path /home/wger/db/database.sqlite
sed -i "s#home/wger/src/media#home/wger/media#g" /home/wger/src/settings.py
sed -i "/MEDIA_ROOT = '\/home\/wger\/media'/a STATIC_ROOT = '/home/wger/static'" /home/wger/src/settings.py
$STD wger bootstrap
$STD python3 manage.py collectstatic
rm -rf "$temp_dir"
echo "${RELEASE}" >/opt/wger_version.txt
msg_ok "Finished setting up wger"
mkdir -p /opt/wger/{static,media}
chmod o+w /opt/wger/media
cd /opt/wger
$STD corepack enable
$STD npm install
$STD npm run build:css:sass
$STD uv venv
$STD uv pip install . --group docker
SECRET_KEY=$(openssl rand -base64 40)
cat <<EOF >/opt/wger/.env
DJANGO_SETTINGS_MODULE=settings.main
PYTHONPATH=/opt/wger
msg_info "Creating Service"
cat <<EOF >/etc/apache2/sites-available/wger.conf
<Directory /home/wger/src>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
DJANGO_DB_ENGINE=django.db.backends.postgresql
DJANGO_DB_DATABASE=${PG_DB_NAME}
DJANGO_DB_USER=${PG_DB_USER}
DJANGO_DB_PASSWORD=${PG_DB_PASS}
DJANGO_DB_HOST=localhost
DJANGO_DB_PORT=5432
DATABASE_URL=postgresql://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}
<VirtualHost *:80>
WSGIApplicationGroup %{GLOBAL}
WSGIDaemonProcess wger python-path=/home/wger/src python-home=/home/wger
WSGIProcessGroup wger
WSGIScriptAlias / /home/wger/src/wger/wsgi.py
WSGIPassAuthorization On
DJANGO_MEDIA_ROOT=/opt/wger/media
DJANGO_STATIC_ROOT=/opt/wger/static
DJANGO_STATIC_URL=/static/
Alias /static/ /home/wger/static/
<Directory /home/wger/static>
Require all granted
</Directory>
ALLOWED_HOSTS=${LOCAL_IP},localhost,127.0.0.1
CSRF_TRUSTED_ORIGINS=http://${LOCAL_IP}:3000
Alias /media/ /home/wger/media/
<Directory /home/wger/media>
Require all granted
</Directory>
USE_X_FORWARDED_HOST=True
SECURE_PROXY_SSL_HEADER=HTTP_X_FORWARDED_PROTO,http
ErrorLog /var/log/apache2/wger-error.log
CustomLog /var/log/apache2/wger-access.log combined
</VirtualHost>
DJANGO_CACHE_BACKEND=django_redis.cache.RedisCache
DJANGO_CACHE_LOCATION=redis://127.0.0.1:6379/1
DJANGO_CACHE_TIMEOUT=300
DJANGO_CACHE_CLIENT_CLASS=django_redis.client.DefaultClient
AXES_CACHE_ALIAS=default
USE_CELERY=True
CELERY_BROKER=redis://127.0.0.1:6379/2
CELERY_BACKEND=redis://127.0.0.1:6379/2
SITE_URL=http://${LOCAL_IP}:3000
SECRET_KEY=${SECRET_KEY}
EOF
$STD a2dissite 000-default.conf
$STD a2ensite wger
systemctl restart apache2
set -a && source /opt/wger/.env && set +a
$STD uv run wger bootstrap
$STD uv run python manage.py collectstatic --no-input
cat <<EOF | uv run python manage.py shell
from django.contrib.auth import get_user_model
User = get_user_model()
user, created = User.objects.get_or_create(
username="admin",
defaults={"email": "admin@localhost"},
)
if created:
user.set_password("${PG_DB_PASS}")
user.is_superuser = True
user.is_staff = True
user.save()
EOF
msg_ok "Set up wger"
msg_info "Creating Config and Services"
cat <<EOF >/etc/systemd/system/wger.service
[Unit]
Description=wger Service
Description=wger Gunicorn
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/wger start -a 0.0.0.0 -p 3000
WorkingDirectory=/opt/wger
EnvironmentFile=/opt/wger/.env
ExecStart=/opt/wger/.venv/bin/gunicorn \
--bind 127.0.0.1:8000 \
--workers 3 \
--threads 2 \
--timeout 120 \
wger.wsgi:application
Restart=always
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now wger
msg_ok "Created Service"
cat <<EOF >/etc/systemd/system/celery.service
[Unit]
Description=wger Celery Worker
After=network.target redis-server.service
Requires=redis-server.service
[Service]
WorkingDirectory=/opt/wger
EnvironmentFile=/opt/wger/.env
ExecStart=/opt/wger/.venv/bin/celery -A wger worker -l info
Restart=always
[Install]
WantedBy=multi-user.target
EOF
mkdir -p /var/lib/wger/celery
chmod 700 /var/lib/wger/celery
cat <<EOF >/etc/systemd/system/celery-beat.service
[Unit]
Description=wger Celery Beat
After=network.target redis-server.service
Requires=redis-server.service
[Service]
WorkingDirectory=/opt/wger
EnvironmentFile=/opt/wger/.env
ExecStart=/opt/wger/.venv/bin/celery -A wger beat -l info \
--schedule /var/lib/wger/celery/celerybeat-schedule
Restart=always
[Install]
WantedBy=multi-user.target
EOF
cat <<'EOF' >/etc/nginx/sites-available/wger
server {
listen 3000;
server_name _;
client_max_body_size 20M;
location /static/ {
alias /opt/wger/static/;
expires 30d;
}
location /media/ {
alias /opt/wger/media/;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
}
}
EOF
$STD rm -f /etc/nginx/sites-enabled/default
$STD ln -sf /etc/nginx/sites-available/wger /etc/nginx/sites-enabled/wger
systemctl enable -q --now redis-server nginx wger celery celery-beat
systemctl restart nginx
msg_ok "Created Config and Services"
motd_ssh
customize

File diff suppressed because it is too large


@@ -3078,10 +3078,10 @@ settings_menu() {
case "$choice" in
1) diagnostics_menu ;;
2) nano /usr/local/community-scripts/default.vars ;;
2) ${EDITOR:-nano} /usr/local/community-scripts/default.vars ;;
3)
if [ -f "$(get_app_defaults_path)" ]; then
nano "$(get_app_defaults_path)"
${EDITOR:-nano} "$(get_app_defaults_path)"
else
# Back was selected (no app.vars available)
return
@@ -3351,19 +3351,21 @@ msg_menu() {
return 0
fi
# Display menu
echo ""
msg_custom "📋" "${BL}" "${title}"
echo ""
for i in "${!tags[@]}"; do
local marker=" "
[[ $i -eq 0 ]] && marker="* "
printf "${TAB3}${marker}%s) %s\n" "${tags[$i]}" "${descs[$i]}"
done
echo ""
# Display menu to /dev/tty so it doesn't get captured by command substitution
{
echo ""
msg_custom "📋" "${BL}" "${title}"
echo ""
for i in "${!tags[@]}"; do
local marker=" "
[[ $i -eq 0 ]] && marker="* "
printf "${TAB3}${marker}%s) %s\n" "${tags[$i]}" "${descs[$i]}"
done
echo ""
} >/dev/tty
local selection=""
read -r -t 10 -p "${TAB3}Select [default=${default_tag}, timeout 10s]: " selection || true
read -r -t 10 -p "${TAB3}Select [default=${default_tag}, timeout 10s]: " selection </dev/tty >/dev/tty || true
# Validate selection
if [[ -n "$selection" ]]; then
@@ -3634,6 +3636,9 @@ $PCT_OPTIONS_STRING"
exit 214
fi
msg_ok "Storage space validated"
# Report installation start to API (early - captures failed installs too)
post_to_api
fi
create_lxc_container || exit $?
@@ -4008,6 +4013,9 @@ EOF'
# Install SSH keys
install_ssh_keys_into_ct
# Start timer for duration tracking
start_install_timer
# Run application installer
# Disable error trap - container errors are handled internally via flag file
set +Eeuo pipefail # Disable ALL error handling temporarily
@@ -4038,6 +4046,9 @@ EOF'
if [[ $install_exit_code -ne 0 ]]; then
msg_error "Installation failed in container ${CTID} (exit code: ${install_exit_code})"
# Report failure to telemetry API
post_update_to_api "failed" "$install_exit_code"
# Copy both logs from container before potential deletion
local build_log_copied=false
local install_log_copied=false
@@ -5121,9 +5132,9 @@ EOF
# api_exit_script()
#
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to API using centralized error descriptions
# - Uses explain_exit_code() from error_handler.func for consistent error messages
# - Posts failure status with exit code to API (error description added automatically)
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - Posts failure status with exit code to API (error description resolved automatically)
# - Only executes on non-zero exit codes
# ------------------------------------------------------------------------------
api_exit_script() {
@@ -5136,6 +5147,6 @@ api_exit_script() {
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'post_update_to_api "failed" "$BASH_COMMAND"' ERR
trap 'post_update_to_api "failed" "INTERRUPTED"' SIGINT
trap 'post_update_to_api "failed" "TERMINATED"' SIGTERM
trap 'post_update_to_api "failed" "$?"' ERR
trap 'post_update_to_api "failed" "130"' SIGINT
trap 'post_update_to_api "failed" "143"' SIGTERM
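The rewritten traps forward numeric statuses instead of command text: inside an `ERR` trap, `$?` holds the failing command's exit code at the moment the trap fires. A small sketch of that behavior, where `report` is a stand-in for the real `post_update_to_api`:

```shell
#!/usr/bin/env bash
# Sketch: an ERR trap observes the failing command's status via $?.
# report() is only an illustrative stand-in, not the real API call.
out=$(
  report() { echo "reported: failed $1"; }
  trap 'report "$?"' ERR
  (exit 7) # a simple command failing with status 7 fires the ERR trap
  echo "script continues"
)
echo "$out"
```

Since the trap body runs before anything else touches `$?`, the reported code matches the failure exactly; `130` and `143` in the SIGINT/SIGTERM traps are the conventional `128 + signal` values for those signals.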


@@ -27,100 +27,90 @@
# ------------------------------------------------------------------------------
# explain_exit_code()
#
# - Maps numeric exit codes to human-readable error descriptions
# - Supports:
# * Generic/Shell errors (1, 2, 126, 127, 128, 130, 137, 139, 143)
# * Package manager errors (APT, DPKG: 100, 101, 255)
# * Node.js/npm errors (243-249, 254)
# * Python/pip/uv errors (210-212)
# * PostgreSQL errors (231-234)
# * MySQL/MariaDB errors (241-244)
# * MongoDB errors (251-254)
# * Proxmox custom codes (200-231)
# - Returns description string for given exit code
# - Canonical version is defined in api.func (sourced before this file)
# - This section only provides a fallback if api.func was not loaded
# - See api.func SECTION 1 for the authoritative exit code mappings
# ------------------------------------------------------------------------------
explain_exit_code() {
local code="$1"
case "$code" in
# --- Generic / Shell ---
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
143) echo "Terminated (SIGTERM)" ;;
# --- Package manager / APT / DPKG ---
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
255) echo "DPKG: Fatal internal error" ;;
# --- Node.js / npm / pnpm / yarn ---
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
247) echo "Node.js: Fatal internal error" ;;
248) echo "Node.js: Invalid C++ addon / N-API failure" ;;
249) echo "Node.js: Inspector error" ;;
254) echo "npm/pnpm/yarn: Unknown fatal error" ;;
# --- Python / pip / uv ---
210) echo "Python: Virtualenv / uv environment missing or broken" ;;
211) echo "Python: Dependency resolution failed" ;;
212) echo "Python: Installation aborted (permissions or EXTERNALLY-MANAGED)" ;;
# --- PostgreSQL ---
231) echo "PostgreSQL: Connection failed (server not running / wrong socket)" ;;
232) echo "PostgreSQL: Authentication failed (bad user/password)" ;;
233) echo "PostgreSQL: Database does not exist" ;;
234) echo "PostgreSQL: Fatal error in query / syntax" ;;
# --- MySQL / MariaDB ---
241) echo "MySQL/MariaDB: Connection failed (server not running / wrong socket)" ;;
242) echo "MySQL/MariaDB: Authentication failed (bad user/password)" ;;
243) echo "MySQL/MariaDB: Database does not exist" ;;
244) echo "MySQL/MariaDB: Fatal error in query / syntax" ;;
# --- MongoDB ---
251) echo "MongoDB: Connection failed (server not running)" ;;
252) echo "MongoDB: Authentication failed (bad user/password)" ;;
253) echo "MongoDB: Database not found" ;;
254) echo "MongoDB: Fatal query error" ;;
# --- Proxmox Custom Codes ---
200) echo "Proxmox: Failed to create lock file" ;;
203) echo "Proxmox: Missing CTID variable" ;;
204) echo "Proxmox: Missing PCT_OSTYPE variable" ;;
205) echo "Proxmox: Invalid CTID (<100)" ;;
206) echo "Proxmox: CTID already in use" ;;
207) echo "Proxmox: Password contains unescaped special characters" ;;
208) echo "Proxmox: Invalid configuration (DNS/MAC/Network format)" ;;
209) echo "Proxmox: Container creation failed" ;;
210) echo "Proxmox: Cluster not quorate" ;;
211) echo "Proxmox: Timeout waiting for template lock" ;;
212) echo "Proxmox: Storage type 'iscsidirect' does not support containers (VMs only)" ;;
213) echo "Proxmox: Storage type does not support 'rootdir' content" ;;
214) echo "Proxmox: Not enough storage space" ;;
215) echo "Proxmox: Container created but not listed (ghost state)" ;;
216) echo "Proxmox: RootFS entry missing in config" ;;
217) echo "Proxmox: Storage not accessible" ;;
219) echo "Proxmox: CephFS does not support containers - use RBD" ;;
224) echo "Proxmox: PBS storage is for backups only" ;;
218) echo "Proxmox: Template file corrupted or incomplete" ;;
220) echo "Proxmox: Unable to resolve template path" ;;
221) echo "Proxmox: Template file not readable" ;;
222) echo "Proxmox: Template download failed" ;;
223) echo "Proxmox: Template not available after download" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
# --- Default ---
*) echo "Unknown error" ;;
esac
}
if ! declare -f explain_exit_code &>/dev/null; then
explain_exit_code() {
local code="$1"
case "$code" in
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
124) echo "Command timed out (timeout command)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
150) echo "Systemd: Service failed to start" ;;
151) echo "Systemd: Service unit not found" ;;
152) echo "Permission denied (EACCES)" ;;
153) echo "Build/compile failed (make/gcc/cmake)" ;;
154) echo "Node.js: Native addon build failed (node-gyp)" ;;
160) echo "Python: Virtualenv / uv environment missing or broken" ;;
161) echo "Python: Dependency resolution failed" ;;
162) echo "Python: Installation aborted (permissions or EXTERNALLY-MANAGED)" ;;
170) echo "PostgreSQL: Connection failed (server not running / wrong socket)" ;;
171) echo "PostgreSQL: Authentication failed (bad user/password)" ;;
172) echo "PostgreSQL: Database does not exist" ;;
173) echo "PostgreSQL: Fatal error in query / syntax" ;;
180) echo "MySQL/MariaDB: Connection failed (server not running / wrong socket)" ;;
181) echo "MySQL/MariaDB: Authentication failed (bad user/password)" ;;
182) echo "MySQL/MariaDB: Database does not exist" ;;
183) echo "MySQL/MariaDB: Fatal error in query / syntax" ;;
190) echo "MongoDB: Connection failed (server not running)" ;;
191) echo "MongoDB: Authentication failed (bad user/password)" ;;
192) echo "MongoDB: Database not found" ;;
193) echo "MongoDB: Fatal query error" ;;
200) echo "Proxmox: Failed to create lock file" ;;
203) echo "Proxmox: Missing CTID variable" ;;
204) echo "Proxmox: Missing PCT_OSTYPE variable" ;;
205) echo "Proxmox: Invalid CTID (<100)" ;;
206) echo "Proxmox: CTID already in use" ;;
207) echo "Proxmox: Password contains unescaped special characters" ;;
208) echo "Proxmox: Invalid configuration (DNS/MAC/Network format)" ;;
209) echo "Proxmox: Container creation failed" ;;
210) echo "Proxmox: Cluster not quorate" ;;
211) echo "Proxmox: Timeout waiting for template lock" ;;
212) echo "Proxmox: Storage type 'iscsidirect' does not support containers (VMs only)" ;;
213) echo "Proxmox: Storage type does not support 'rootdir' content" ;;
214) echo "Proxmox: Not enough storage space" ;;
215) echo "Proxmox: Container created but not listed (ghost state)" ;;
216) echo "Proxmox: RootFS entry missing in config" ;;
217) echo "Proxmox: Storage not accessible" ;;
218) echo "Proxmox: Template file corrupted or incomplete" ;;
219) echo "Proxmox: CephFS does not support containers - use RBD" ;;
220) echo "Proxmox: Unable to resolve template path" ;;
221) echo "Proxmox: Template file not readable" ;;
222) echo "Proxmox: Template download failed" ;;
223) echo "Proxmox: Template not available after download" ;;
224) echo "Proxmox: PBS storage is for backups only" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
247) echo "Node.js: Fatal internal error" ;;
248) echo "Node.js: Invalid C++ addon / N-API failure" ;;
249) echo "npm/pnpm/yarn: Unknown fatal error" ;;
255) echo "DPKG: Fatal internal error" ;;
*) echo "Unknown error" ;;
esac
}
fi
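The `declare -f` guard above is the standard way to define a fallback only when the canonical function (here sourced from api.func) is absent. A self-contained sketch of the pattern, with an abbreviated code table standing in for the full mapping:

```bash
#!/usr/bin/env bash
# Define explain_exit_code only if a canonical version was not already
# sourced; this abbreviated mapping stands in for the full table.
if ! declare -f explain_exit_code &>/dev/null; then
  explain_exit_code() {
    case "$1" in
      127) echo "Command not found" ;;
      130) echo "Terminated by Ctrl+C (SIGINT)" ;;
      *) echo "Unknown error" ;;
    esac
  }
fi

explain_exit_code 127   # -> Command not found
explain_exit_code 99    # -> Unknown error
```

Because the guard checks for a *function* rather than a sourcing flag, it works no matter which file defined the canonical version first.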
# ==============================================================================
# SECTION 2: ERROR HANDLERS
@@ -197,12 +187,7 @@ error_handler() {
# Create error flag file with exit code for host detection
echo "$exit_code" >"/root/.install-${SESSION_ID:-error}.failed" 2>/dev/null || true
if declare -f msg_custom >/dev/null 2>&1; then
msg_custom "📋" "${YW}" "Log saved to: ${container_log}"
else
echo -e "${YW}Log saved to:${CL} ${BL}${container_log}${CL}"
fi
# Log path is shown by host as combined log - no need to show container path
else
# HOST CONTEXT: Show local log path and offer container cleanup
if declare -f msg_custom >/dev/null 2>&1; then
@@ -213,6 +198,11 @@ error_handler() {
# Offer to remove container if it exists (build errors after container creation)
if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then
# Report failure to API before container cleanup
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code"
fi
echo ""
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"

View File

@@ -1294,12 +1294,32 @@ setup_deb822_repo() {
return 1
}
# Import GPG
curl -fsSL "$gpg_url" | gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" || {
msg_error "Failed to import GPG key for ${name}"
# Import GPG key (auto-detect binary vs ASCII-armored format)
local tmp_gpg
tmp_gpg=$(mktemp) || return 1
curl -fsSL "$gpg_url" -o "$tmp_gpg" || {
msg_error "Failed to download GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
if file "$tmp_gpg" | grep -qi 'PGP\|GPG\|public key'; then
# Already in binary GPG format — copy directly
cp "$tmp_gpg" "/etc/apt/keyrings/${name}.gpg" || {
msg_error "Failed to install GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
else
# ASCII-armored — dearmor to binary
gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" < "$tmp_gpg" || {
msg_error "Failed to dearmor GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
fi
rm -f "$tmp_gpg"
# Write deb822
{
echo "Types: deb"
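The committed code leans on file(1)'s description of the downloaded key. As an illustrative alternative (not the committed logic), armored keys can also be recognized by their mandatory armor header, since ASCII-armored OpenPGP data must begin with `-----BEGIN PGP PUBLIC KEY BLOCK-----`:

```bash
#!/usr/bin/env bash
# Illustrative variant of the binary-vs-armored check: ASCII-armored keys
# must start with the armor header, so inspecting the first bytes suffices.
key_format() {
  if head -c 64 "$1" | grep -q 'BEGIN PGP PUBLIC KEY BLOCK'; then
    echo armored   # needs gpg --dearmor before apt can use it
  else
    echo binary    # can be copied into /etc/apt/keyrings as-is
  fi
}

tmp=$(mktemp)
printf -- '-----BEGIN PGP PUBLIC KEY BLOCK-----\n' >"$tmp"
key_format "$tmp"   # -> armored
rm -f "$tmp"
```

Either way, the end state is the same: a binary keyring file under `/etc/apt/keyrings/` that the deb822 `Signed-By:` field can point at.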
@@ -3595,6 +3615,7 @@ _setup_intel_arc() {
$STD apt -y install \
intel-media-va-driver-non-free \
intel-opencl-icd \
libmfx-gen1.2 \
vainfo \
intel-gpu-tools 2>/dev/null || msg_warn "Some Intel Arc packages failed"
@@ -3621,6 +3642,7 @@ _setup_intel_arc() {
intel-media-va-driver-non-free \
ocl-icd-libopencl1 \
libvpl2 \
libmfx-gen1.2 \
vainfo \
intel-gpu-tools 2>/dev/null || msg_warn "Some Intel Arc packages failed"
fi

View File

@@ -79,11 +79,24 @@ EOF
header_info
msg "Installing NetBird..."
pct exec "$CTID" -- bash -c '
if ! command -v curl &>/dev/null; then
apt-get update -qq
apt-get install -y curl >/dev/null
fi
apt install -y ca-certificates gpg &>/dev/null
curl -fsSL "https://pkgs.netbird.io/debian/public.key" | gpg --dearmor >/usr/share/keyrings/netbird-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/netbird-archive-keyring.gpg] https://pkgs.netbird.io/debian stable main" >/etc/apt/sources.list.d/netbird.list
apt-get update &>/dev/null
apt-get install -y netbird-ui &>/dev/null
if systemctl list-unit-files docker.service &>/dev/null; then
mkdir -p /etc/systemd/system/netbird.service.d
cat <<OVERRIDE >/etc/systemd/system/netbird.service.d/after-docker.conf
[Unit]
After=docker.service
Wants=docker.service
OVERRIDE
systemctl daemon-reload
fi
'
msg "\e[1;32m ✔ Installed NetBird.\e[0m"
sleep 2
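The override written above orders netbird after docker without touching the packaged unit file. A parameterized sketch of that drop-in pattern (the `root` prefix and helper name are illustrative, so the sketch can target a scratch directory instead of `/`):

```bash
#!/usr/bin/env bash
# Sketch of the systemd drop-in pattern: extend a unit's [Unit] ordering via
# /etc/systemd/system/<unit>.d/ instead of editing the shipped unit file.
add_unit_dropin() {
  local root="$1" unit="$2" dep="$3"
  mkdir -p "${root}/etc/systemd/system/${unit}.d"
  printf '[Unit]\nAfter=%s\nWants=%s\n' "$dep" "$dep" \
    >"${root}/etc/systemd/system/${unit}.d/after-docker.conf"
}

root=$(mktemp -d)   # scratch prefix; on a real system the prefix is empty
add_unit_dropin "$root" netbird.service docker.service
cat "$root/etc/systemd/system/netbird.service.d/after-docker.conf"
# On a real system, follow up with: systemctl daemon-reload
```

Drop-ins survive package upgrades, which is why they are preferred over editing the unit shipped by the netbird package.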

View File

@@ -89,6 +89,12 @@ if ! dig +short pkgs.tailscale.com | grep -qvE "^127\.|^0\.0\.0\.0$"; then
echo "nameserver 1.1.1.1" >"$ORIG_RESOLV"
fi
if ! command -v curl &>/dev/null; then
echo "[INFO] curl not found, installing..."
apt-get update -qq
apt-get install -y curl >/dev/null
fi
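This install-only-if-missing check recurs across the scripts in this diff. Factored out, it looks like the sketch below (the `ensure_cmd` helper name and the `dnsutils` example are illustrative, and the helper assumes a Debian-based system):

```bash
#!/usr/bin/env bash
# Hypothetical helper capturing the recurring "ensure curl exists" pattern:
# install a package only when its command is missing, keeping reruns cheap.
ensure_cmd() {
  local cmd="$1" pkg="${2:-$1}"   # package name defaults to the command name
  if ! command -v "$cmd" &>/dev/null; then
    apt-get update -qq
    apt-get install -y "$pkg" >/dev/null
  fi
}

ensure_cmd curl          # package and command share a name
ensure_cmd dig dnsutils  # they differ for dig
```

Because `command -v` short-circuits the install, the helper is idempotent and adds no apt traffic on systems that already have the tool.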
curl -fsSL https://pkgs.tailscale.com/stable/${ID}/${VER}.noarmor.gpg \
| tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null

View File

@@ -5,6 +5,11 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/bakito/adguardhome-sync
if ! command -v curl &>/dev/null; then
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)

View File

@@ -5,6 +5,11 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/9001/copyparty
if ! command -v curl &>/dev/null; then
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)

View File

@@ -110,6 +110,7 @@ if [[ -f "$INSTALL_PATH" ]]; then
read -r update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL https://github.com/gtsteffaniak/filebrowser/releases/latest/download/linux-amd64-filebrowser -o "$TMP_BIN"
chmod +x "$TMP_BIN"
mv -f "$TMP_BIN" /usr/local/bin/filebrowser

View File

@@ -88,6 +88,7 @@ if [ -f "$INSTALL_PATH" ]; then
read -r -p "Would you like to update ${APP}? (y/N): " update_prompt
if [[ "${update_prompt,,}" =~ ^(y|yes)$ ]]; then
msg_info "Updating ${APP}"
if ! command -v curl &>/dev/null; then $PKG_MANAGER curl &>/dev/null; fi
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/latest/download/linux-amd64-filebrowser.tar.gz" | tar -xzv -C /usr/local/bin &>/dev/null
chmod +x "$INSTALL_PATH"
msg_ok "Updated ${APP}"

View File

@@ -44,7 +44,7 @@ IP=$(get_lxc_ip)
install_glances_debian() {
msg_info "Installing dependencies"
apt-get update >/dev/null 2>&1
apt-get install -y gcc lm-sensors wireless-tools >/dev/null 2>&1
apt-get install -y gcc lm-sensors wireless-tools curl >/dev/null 2>&1
msg_ok "Installed dependencies"
msg_info "Setting up Python + uv"
@@ -56,7 +56,7 @@ install_glances_debian() {
cd /opt
mkdir -p glances
cd glances
uv venv
uv venv --clear
source .venv/bin/activate >/dev/null 2>&1
uv pip install --upgrade pip wheel setuptools >/dev/null 2>&1
uv pip install "glances[web]" >/dev/null 2>&1
@@ -114,7 +114,7 @@ install_glances_alpine() {
apk update >/dev/null 2>&1
$STD apk add --no-cache \
gcc musl-dev linux-headers python3-dev \
python3 py3-pip py3-virtualenv lm-sensors wireless-tools >/dev/null 2>&1
python3 py3-pip py3-virtualenv lm-sensors wireless-tools curl >/dev/null 2>&1
msg_ok "Installed dependencies"
msg_info "Setting up Python + uv"
@@ -126,7 +126,7 @@ install_glances_alpine() {
cd /opt
mkdir -p glances
cd glances
uv venv
uv venv --clear
source .venv/bin/activate
uv pip install --upgrade pip wheel setuptools >/dev/null 2>&1
uv pip install "glances[web]" >/dev/null 2>&1

View File

@@ -5,6 +5,11 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/alangrainger/immich-public-proxy
if ! command -v curl &>/dev/null; then
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)

View File

@@ -5,6 +5,11 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/CyferShepard/Jellystat
if ! command -v curl &>/dev/null; then
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)

View File

@@ -5,6 +5,11 @@
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/xperimental/nextcloud-exporter
if ! command -v curl &>/dev/null; then
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)

View File

@@ -51,6 +51,10 @@ function msg_ok() {
}
msg_info "Installing ${APP}"
if ! command -v curl &>/dev/null; then
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
curl -fsSL "https://github.com/OliveTin/OliveTin/releases/latest/download/OliveTin_linux_amd64.deb" -o "$(basename "https://github.com/OliveTin/OliveTin/releases/latest/download/OliveTin_linux_amd64.deb")"
dpkg -i OliveTin_linux_amd64.deb &>/dev/null
systemctl enable --now OliveTin &>/dev/null

View File

@@ -57,6 +57,10 @@ function msg_ok() { echo -e "${CM} ${GN}${1}${CL}"; }
function msg_error() { echo -e "${CROSS} ${RD}${1}${CL}"; }
function check_internet() {
if ! command -v curl &>/dev/null; then
apt-get update >/dev/null 2>&1
apt-get install -y curl >/dev/null 2>&1
fi
msg_info "Checking Internet connectivity to GitHub"
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" https://github.com)
if [[ "$HTTP_CODE" -ge 200 && "$HTTP_CODE" -lt 400 ]]; then
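The probe above treats any 2xx/3xx status as "reachable". Isolated into a predicate, the same check looks like this sketch (the `--max-time` guard and function name are illustrative additions, not part of the committed code):

```bash
#!/usr/bin/env bash
# Sketch of the connectivity probe: fetch only the HTTP status code and
# accept 2xx/3xx responses as proof the host is reachable.
is_reachable() {
  local code
  code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$1")
  [[ "$code" -ge 200 && "$code" -lt 400 ]]
}

if is_reachable https://github.com; then echo online; else echo offline; fi
```

Note that curl prints `000` when the connection itself fails, which the lower bound of the range check correctly rejects.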

Some files were not shown because too many files have changed in this diff.