Compare commits

...

53 Commits

Author SHA1 Message Date
CanbiZ (MickLesk)
a270879be2 error-handler: Implement json_escape and enhance error handling
Added json_escape function for safe JSON embedding and updated error handling to include user abort messages.
2026-02-13 10:38:25 +01:00
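A minimal sketch of what such a pure-bash json_escape helper could look like; the function name comes from the commit message, while the escaping rules shown here are assumptions:

json_escape() {
  # Escape the characters that would break a JSON string literal.
  local s="$1"
  s=${s//\\/\\\\}    # backslashes first, so later escapes are not doubled
  s=${s//\"/\\\"}    # double quotes
  s=${s//$'\n'/\\n}  # newlines
  s=${s//$'\r'/\\r}  # carriage returns
  s=${s//$'\t'/\\t}  # tabs
  printf '%s' "$s"
}

# Example: embed an arbitrary error message safely in a JSON payload.
payload="{\"error\":\"$(json_escape "$error_message")\"}"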
CanbiZ (MickLesk)
3156e8e363 downgrade openwebui 2026-02-13 10:12:04 +01:00
CanbiZ (MickLesk)
60ebdc97a5 fix unifi gpg 2026-02-13 09:24:13 +01:00
community-scripts-pr-app[bot]
20ec369338 Update CHANGELOG.md (#11871)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 08:18:50 +00:00
Chris
4907a906c3 Refactor: Radicale (#11850)
* Refactor: Radicale

* Create explicit config at `/etc/radicale/config`

* grammar

---------

Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-13 09:18:29 +01:00
community-scripts-pr-app[bot]
27e3a4301e Update CHANGELOG.md (#11870)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 08:16:37 +00:00
CanbiZ (MickLesk)
43fb75f2b4 Switch sqlite-specific db scripts to generic (#11868)
Replace npm script calls to db:sqlite:generate and db:sqlite:push with db:generate and db:push in ct/pangolin.sh and install/pangolin-install.sh. This makes the build/install steps use the generic DB task names for consistency across update and install workflows.
2026-02-13 09:16:13 +01:00
community-scripts-pr-app[bot]
899d0e4baa Update CHANGELOG.md (#11869)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 08:11:20 +00:00
CanbiZ (MickLesk)
85584b105d SQLServer-2025: add PVE9/Kernel 6.x incompatibility warning (#11829)
* docs(sqlserver2025): add PVE9/Kernel 6.x incompatibility warning

* Update warning note for SQL Server SQLPAL compatibility

* Update frontend/public/json/sqlserver2025.json

Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>

---------

Co-authored-by: Tobias <96661824+CrazyWolf13@users.noreply.github.com>
2026-02-13 09:10:50 +01:00
CanbiZ (MickLesk)
3fe6f50414 hotfix unifi wrong url 2026-02-13 08:50:41 +01:00
community-scripts-pr-app[bot]
724a066aed chore: update github-versions.json (#11867)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 06:23:08 +00:00
community-scripts-pr-app[bot]
cd6e8ecbbe Update CHANGELOG.md (#11866)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 06:17:38 +00:00
Chris
8083c0c0e1 [Hotfix] Jotty: Copy contents of config backup into /opt/jotty/config (#11864) 2026-02-13 07:17:08 +01:00
community-scripts-pr-app[bot]
29836f35ed Update CHANGELOG.md (#11863)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 00:24:04 +00:00
community-scripts-pr-app[bot]
17d3d4297c chore: update github-versions.json (#11862)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 00:23:36 +00:00
MickLesk
2b921736e6 fix(tools.func): fix GPG key format detection in setup_deb822_repo
The previous logic using 'file | grep PGP' was inverted — both
ASCII-armored and binary GPG keys matched the pattern, causing
ASCII-armored keys to be copied directly instead of being
dearmored. This resulted in APT failing with NO_PUBKEY errors
on Debian 12 (bookworm).

Use 'grep BEGIN PGP' to reliably detect ASCII-armored keys and
dearmor them, otherwise copy binary keys directly.
2026-02-12 22:30:34 +01:00
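A sketch of the detection logic this message describes, assuming a downloaded key at $tmp_key and a target path $keyring (both hypothetical names). ASCII-armored keys always contain the literal text "BEGIN PGP", which binary keys effectively never do, so this test cannot match both formats:

if grep -q "BEGIN PGP" "$tmp_key"; then
  gpg --dearmor -o "$keyring" "$tmp_key"  # ASCII-armored: convert to binary keyring
else
  cp "$tmp_key" "$keyring"                # already binary: copy as-is
fi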
MickLesk
ddabe81dd8 fix(tools.func): set GPG keyring files to 644 permissions
gpg --dearmor creates files with restrictive permissions (600),
which prevents Debian 13's sqv signature verifier from reading
the keyring files. This causes apt update to fail with
'Permission denied' errors for all repositories using custom
GPG keys (adoptium, pgdg, pdm, etc.).

Set chmod 644 after creating .gpg files in both setup_deb822_repo()
and the MongoDB GPG key import in manage_tool_repository().
2026-02-12 22:24:24 +01:00
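The fix itself is a one-liner after each dearmor step; a sketch assuming a key under /etc/apt/keyrings (path hypothetical):

gpg --dearmor -o /etc/apt/keyrings/example.gpg /tmp/example.asc
chmod 644 /etc/apt/keyrings/example.gpg  # gpg creates the file 600; sqv needs it world-readable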
community-scripts-pr-app[bot]
19c5671d3f Update CHANGELOG.md (#11854)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:06:26 +00:00
CanbiZ (MickLesk)
2326520d17 Archlinux-VM: fix LVM/LVM-thin storage and improve error reporting | VMs: add correct exit_code for analytics (#11842)
* fix(archlinux-vm): fix LVM/LVM-thin storage and improve error reporting

- Add catch-all (*) case for storage types (LVM, LVM-thin, zfspool)
  Previously only nfs/dir/cifs and btrfs were handled, leaving
  DISK_EXT, DISK_REF, and DISK_IMPORT unset on LVM/LVM-thin storage
- Fix error_handler to send numeric exit_code to API instead of
  bash command text (which caused 'Unknown error' in telemetry)
- Replace fragile pvesm alloc for EFI disk with Proxmox-managed
  :0,efitype=4m (consistent with docker-vm.sh)
- Modernize disk import: auto-detect qm disk import vs qm importdisk,
  parse output to get correct disk reference instead of guessing names
- Use --format flag (double dash) consistent with modern Proxmox API
- Remove unused FORMAT variable (EFI type now always set correctly)
- Remove fragile eval-based disk name construction

* fix(vm): fix LVM/LVM-thin storage and error reporting for all VM scripts

- Add catch-all (*) case to storage type detection in all VM scripts
  that were missing it (debian-vm, debian-13-vm, ubuntu2204/2404/2504,
  nextcloud-vm, owncloud-vm, opnsense-vm, pimox-haos-vm)
- Add catch-all to mikrotik-routeros (had zfspool but not lvm/lvmthin)
- Fix error_handler in ALL 14 VM scripts to send numeric exit_code
  to post_update_to_api instead of bash command text, which caused
  'Unknown error' in telemetry because the API expects a number
2026-02-12 20:06:02 +01:00
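A sketch of the catch-all pattern described above, using the variable names from the commit message; the values assigned in each branch are assumptions:

case "$STORAGE_TYPE" in
nfs | dir | cifs)
  DISK_EXT=".qcow2"
  DISK_REF="${VMID}/"
  DISK_IMPORT="-format qcow2"
  ;;
btrfs)
  DISK_EXT=".raw"
  DISK_REF="${VMID}/"
  DISK_IMPORT="-format raw"
  ;;
*)
  # LVM, LVM-thin, zfspool and any future storage type: block-backed volumes,
  # so no file extension or path prefix. Previously these variables stayed unset here.
  DISK_EXT=""
  DISK_REF=""
  DISK_IMPORT="-format raw"
  ;;
esac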
community-scripts-pr-app[bot]
7964d39e32 Update CHANGELOG.md (#11853)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:05:55 +00:00
CanbiZ (MickLesk)
f7cf7c8adc fix(telemetry): prevent stuck 'installing' status in error_handler.func (#11845)
The catch_errors() function in CT scripts overrides the API telemetry
traps set by build.func. This caused on_exit, on_interrupt, and
on_terminate to never call post_update_to_api, leaving telemetry
records permanently stuck on 'installing'.

Changes:
- on_exit: Report orphaned 'installing' records on ANY exit where
  post_to_api was called but post_update_to_api was not
- on_interrupt: Call post_update_to_api('failed', '130') before exit
- on_terminate: Call post_update_to_api('failed', '143') before exit

All calls are guarded by POST_UPDATE_DONE flag to prevent duplicates.
2026-02-12 20:05:27 +01:00
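A sketch of the guarded handler pattern the commit describes; the POST_UPDATE_DONE flag and the status/exit-code values come from the message, the rest is assumed:

on_interrupt() {
  if [[ -z "${POST_UPDATE_DONE:-}" ]]; then
    post_update_to_api "failed" "130"  # 128 + SIGINT(2)
    POST_UPDATE_DONE=true
  fi
  exit 130
}
on_terminate() {
  if [[ -z "${POST_UPDATE_DONE:-}" ]]; then
    post_update_to_api "failed" "143"  # 128 + SIGTERM(15)
    POST_UPDATE_DONE=true
  fi
  exit 143
}
trap on_interrupt INT
trap on_terminate TERM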
community-scripts-pr-app[bot]
744191cb84 Update CHANGELOG.md (#11852)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:05:20 +00:00
CanbiZ (MickLesk)
291ed4c5ad fix(emqx): increase disk to 6GB and add optional MQ disable prompt (#11844)
EMQX 6.1+ preallocates significant disk space for the MQ feature,
causing high CPU/disk usage on small containers (emqx/emqx#16649).

- Increase default disk from 4GB to 6GB
- Add read -rp prompt during install to optionally disable MQ feature
  via mq.enable=false in emqx.conf (reduces disk/CPU overhead)
- Setting is in install script (not CT script) per reviewer feedback

Co-authored-by: sim-san <sim-san@users.noreply.github.com>
2026-02-12 20:04:47 +01:00
community-scripts-pr-app[bot]
f9612c5aba Update CHANGELOG.md (#11851)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:04:18 +00:00
CanbiZ (MickLesk)
403a839ac0 fix(tools): auto-detect binary vs armored GPG keys in setup_deb822_repo (#11841)
The UniFi GPG key at dl.ui.com/unifi/unifi-repo.gpg is already in binary
format. setup_deb822_repo unconditionally ran gpg --dearmor which expects
ASCII-armored input, corrupting binary keys and causing apt to fail with
'Unable to locate package unifi'.

setup_deb822_repo now downloads the key to a temp file first and uses
the file command to detect whether it is already a binary PGP/GPG key.
Binary keys are copied directly; armored keys are dearmored as before.

This also reverts unifi-install.sh back to using setup_deb822_repo for
consistency with all other install scripts.
2026-02-12 20:03:33 +01:00
community-scripts-pr-app[bot]
41c89413ef chore: update github-versions.json (#11848)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 18:21:07 +00:00
community-scripts-pr-app[bot]
fa11528a7b Update CHANGELOG.md (#11847)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 18:12:53 +00:00
CanbiZ (MickLesk)
2a03c86384 Tailscale: fix DNS check and keyrings directory issues (#11837)
* fix(tailscale-addon): fix DNS check and keyrings directory issues

- Source /etc/os-release instead of grepping it, so quoted values are handled properly
- Use VERSION_CODENAME variable instead of VER for correct URL
- Add fallback DNS resolution methods (host, nslookup, getent) when dig is missing
- Create /usr/share/keyrings directory if it doesn't exist
- Skip DNS check gracefully when no DNS tools are available

Fixes installation failures with 'dig: command not found' and
'No such file or directory' for keyrings path

* Update tools/addon/add-tailscale-lxc.sh

Co-authored-by: Chris <punk.sand7393@fastmail.com>

* Update tools/addon/add-tailscale-lxc.sh

Co-authored-by: Chris <punk.sand7393@fastmail.com>

---------

Co-authored-by: Chris <punk.sand7393@fastmail.com>
2026-02-12 19:12:23 +01:00
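A sketch of the fallback chain from the first bullet list; the resolve_host wrapper is a hypothetical name, and the exact output parsing may differ from the script:

resolve_host() {
  local name="$1"
  if command -v dig >/dev/null 2>&1; then
    dig +short "$name" | head -n1
  elif command -v host >/dev/null 2>&1; then
    host -t A "$name" | awk '/has address/ {print $4; exit}'
  elif command -v nslookup >/dev/null 2>&1; then
    nslookup "$name" | awk '/^Address/ && !/#/ {print $2; exit}'
  elif command -v getent >/dev/null 2>&1; then
    getent hosts "$name" | awk '{print $1; exit}'
  else
    return 0  # no DNS tools available: skip the check gracefully
  fi
}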
community-scripts-pr-app[bot]
57b4e10b93 Update CHANGELOG.md (#11846)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 17:40:58 +00:00
Stefan Tomas
4b22c7cc2d Increased the Grafana container default disk size. (#11840)
Co-authored-by: Stefan Tomas <stefan.tomas@proton.me>
2026-02-12 18:40:31 +01:00
community-scripts-pr-app[bot]
79fd0d1dda Update CHANGELOG.md (#11839)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 13:51:22 +00:00
community-scripts-pr-app[bot]
280778d53b Update CHANGELOG.md (#11838)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 13:51:13 +00:00
CanbiZ (MickLesk)
1c3a3107f1 Deluge: add python3-setuptools as dep (#11833)
* fix(deluge): add python3-setuptools for pkg_resources on Python 3.13

* Ensure dependencies before updating Deluge
2026-02-12 14:50:56 +01:00
CanbiZ (MickLesk)
ee2c3a20ee fix(dispatcharr): migrate from requirements.txt to uv sync (pyproject.toml) (#11831) 2026-02-12 14:50:42 +01:00
CanbiZ (MickLesk)
5ee4f4e34b fix(api): prevent duplicate post_to_api submissions (#11836)
Add POST_TO_API_DONE idempotency guard to post_to_api() to prevent
the same telemetry record from being submitted twice with the same
RANDOM_UUID. This mirrors the existing POST_UPDATE_DONE pattern in
post_update_to_api().

post_to_api() is called twice in build.func:
- After storage validation (inside CONTAINER_STORAGE check)
- After create_lxc_container() completes

When both execute, the second call fails with a random_id uniqueness
violation on PocketBase, generating server-side errors.
2026-02-12 13:45:32 +01:00
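A sketch of the guard, mirroring the POST_UPDATE_DONE pattern the commit cites; the endpoint and payload variables are placeholders:

post_to_api() {
  # Idempotency guard: the same RANDOM_UUID must be submitted at most once per run.
  [[ "${POST_TO_API_DONE:-}" == "true" ]] && return 0
  curl -fsS -m 5 -X POST "$API_URL" \
    -H "Content-Type: application/json" \
    -d "$payload" >/dev/null 2>&1 || true  # non-blocking: telemetry must never fail the install
  POST_TO_API_DONE=true
}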
community-scripts-pr-app[bot]
4b0e893bf1 Update CHANGELOG.md (#11834)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 12:26:51 +00:00
CanbiZ (MickLesk)
3676157a7c Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection (#11810)
* fix(debian-13-vm): disable systemd-firstboot and add autologin via virt-customize

Newer Debian 13 (Trixie) nocloud images (Jan 29+) have systemd-firstboot
enabled, which blocks the boot process by prompting for timezone and root
password on the serial console. The noVNC console stalls after auditd checks.

Fix by using virt-customize (like docker-vm.sh) to:
- Disable systemd-firstboot.service and mask it
- Pre-seed timezone to Etc/UTC
- Configure autologin on ttyS0 and tty1 for nocloud images
- Set hostname and clear machine-id for unique IDs
- Install libguestfs-tools if not present

Fixes #11807

* feat(debian-13-vm): add cloud-init selection dialog for default and advanced settings

Add select_cloud_init() function consistent with docker-vm.sh that prompts
the user to choose between Cloud-Init (genericcloud image) and nocloud
(with auto-login) in both default and advanced settings.

Previously, default settings hardcoded CLOUD_INIT=no without asking.
2026-02-12 13:26:20 +01:00
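A sketch of the virt-customize invocation the first commit describes; the image variable is a placeholder and the ttyS0/tty1 autologin drop-ins are omitted for brevity:

virt-customize -a "$DISK_IMAGE" \
  --run-command "systemctl disable systemd-firstboot.service && systemctl mask systemd-firstboot.service" \
  --timezone "Etc/UTC" \
  --hostname "$HOSTNAME" \
  --run-command "truncate -s 0 /etc/machine-id"  # cleared so each clone gets a unique ID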
CanbiZ (MickLesk)
e437e50882 Merge branch 'main' of https://github.com/community-scripts/ProxmoxVE 2026-02-12 13:25:49 +01:00
CanbiZ (MickLesk)
6e9a94b46d quickfix: missing fields for db 2026-02-12 13:25:42 +01:00
community-scripts-pr-app[bot]
137ae6775e chore: update github-versions.json (#11832)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 12:15:25 +00:00
community-scripts-pr-app[bot]
dd8c998d43 Update CHANGELOG.md (#11827)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 11:48:41 +00:00
Slaviša Arežina
b215bac01d Update database generation command in install script (#11825) 2026-02-12 12:48:16 +01:00
community-scripts-pr-app[bot]
c3cd9df12f Update CHANGELOG.md (#11826)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 10:55:38 +00:00
CanbiZ (MickLesk)
406d53ea2f core: remove old Go API and extend misc/api.func with new backend (#11822)
* Remove Go API and extend misc/api.func

Delete the Go-based API (api/main.go, api/go.mod, api/go.sum, api/.env.example) and significantly enhance misc/api.func. The shell telemetry file now includes telemetry configuration, repo source detection, GPU/CPU/RAM detection, expanded explain_exit_code mappings, and refactored post_to_api/post_to_api_vm to send non-blocking telemetry to telemetry.community-scripts.org while respecting DIAGNOSTICS/DEV_MODE and adding richer metadata (cpu/gpu/ram/repo_source). Also updates header/author info and improves privacy/robustness and error handling.

* Start install timer and refine error reporting

Call start_install_timer during build startup and overhaul exit/error reporting.

Changes:
- Invoke start_install_timer early in misc/build.func to track install duration.
- Update api_exit_script comments to reference PocketBase/api.func and adjust ERR/SIGINT/SIGTERM traps to post numeric exit codes (use $? / 130 / 143) instead of command strings.
- Replace the previous explain_exit_code implementation with a conditional fallback: only define explain_exit_code if not already provided (api.func is the canonical source). Expanded and reorganized exit code mappings (curl, timeout, systemd, Node/Python/Postgres/MySQL/MongoDB, Proxmox, etc.).
- In error_handler: stop echoing the container log path (host shows combined log), and post a "failed" update to the API with the exit code before offering container cleanup.

Rationale: these changes make telemetry more consistent and robust (numeric codes), provide a safe fallback for exit descriptions when api.func isn't loaded, and ensure failures are reported to the API prior to any automatic cleanup.

* Report install start/failure to telemetry API

Add telemetry hooks in misc/build.func: call post_to_api at installation start to capture early or immediately-failing installs, and call post_update_to_api with status "failed" and the install exit code when a container installation fails. This improves visibility into install failures for monitoring/telemetry.
2026-02-12 11:55:13 +01:00
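The "conditional fallback" in the second bullet amounts to guarding the function definition; a minimal sketch, with the mappings reduced to a subset that appears in the diff further down:

if ! declare -f explain_exit_code >/dev/null 2>&1; then
  # api.func (the canonical source) was not loaded; provide a local fallback.
  explain_exit_code() {
    case "$1" in
    127) echo "Command not found" ;;
    130) echo "Terminated by Ctrl+C (SIGINT)" ;;
    143) echo "Terminated (SIGTERM)" ;;
    *) echo "Unknown error" ;;
    esac
  }
fi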
community-scripts-pr-app[bot]
c8b278f26f chore: update github-versions.json (#11821)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 06:25:48 +00:00
community-scripts-pr-app[bot]
a6f0d7233e Update CHANGELOG.md (#11818)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 00:21:24 +00:00
community-scripts-pr-app[bot]
079a436286 chore: update github-versions.json (#11817)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 00:21:01 +00:00
community-scripts-pr-app[bot]
1c2ed6ff10 Update CHANGELOG.md (#11813)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 21:30:42 +00:00
Chris
c1d7f23a17 [Feature] OpenCloud: support PosixFS Collaborative Mode (#11806)
* [Feature] OpenCloud: support PosixFS Collaborative Mode

* Use ensure_dependencies in opencloud.sh
2026-02-11 22:30:09 +01:00
community-scripts-pr-app[bot]
fdbe48badb Update CHANGELOG.md (#11812)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 21:05:59 +00:00
CanbiZ (MickLesk)
d09dd0b664 dispatcharr: include port 9191 in success-message (#11808) 2026-02-11 22:05:32 +01:00
community-scripts-pr-app[bot]
cba6717469 Update CHANGELOG.md (#11809)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-11 19:48:18 +00:00
Tom Frenzel
0a12acf6bd fix: make donetick 0.1.71 compatible (#11804) 2026-02-11 20:47:52 +01:00
49 changed files with 1443 additions and 950 deletions

View File

@@ -401,6 +401,59 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-13
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Pangolin: switch sqlite-specific db scripts back to generic [@MickLesk](https://github.com/MickLesk) ([#11868](https://github.com/community-scripts/ProxmoxVE/pull/11868))
- [Hotfix] Jotty: Copy contents of config backup into /opt/jotty/config [@vhsdream](https://github.com/vhsdream) ([#11864](https://github.com/community-scripts/ProxmoxVE/pull/11864))
- #### 🔧 Refactor
- Refactor: Radicale [@vhsdream](https://github.com/vhsdream) ([#11850](https://github.com/community-scripts/ProxmoxVE/pull/11850))
### 🌐 Website
- #### 📝 Script Information
- SQLServer-2025: add PVE9/Kernel 6.x incompatibility warning [@MickLesk](https://github.com/MickLesk) ([#11829](https://github.com/community-scripts/ProxmoxVE/pull/11829))
## 2026-02-12
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- EMQX: increase disk to 6GB and add optional MQ disable prompt [@MickLesk](https://github.com/MickLesk) ([#11844](https://github.com/community-scripts/ProxmoxVE/pull/11844))
- Increased the Grafana container default disk size. [@shtefko](https://github.com/shtefko) ([#11840](https://github.com/community-scripts/ProxmoxVE/pull/11840))
- Pangolin: Update database generation command in install script [@tremor021](https://github.com/tremor021) ([#11825](https://github.com/community-scripts/ProxmoxVE/pull/11825))
- Deluge: add python3-setuptools as dep [@MickLesk](https://github.com/MickLesk) ([#11833](https://github.com/community-scripts/ProxmoxVE/pull/11833))
- Dispatcharr: migrate to uv sync [@MickLesk](https://github.com/MickLesk) ([#11831](https://github.com/community-scripts/ProxmoxVE/pull/11831))
- #### ✨ New Features
- Archlinux-VM: fix LVM/LVM-thin storage and improve error reporting | VMs: add correct exit_code for analytics [@MickLesk](https://github.com/MickLesk) ([#11842](https://github.com/community-scripts/ProxmoxVE/pull/11842))
- Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection [@MickLesk](https://github.com/MickLesk) ([#11810](https://github.com/community-scripts/ProxmoxVE/pull/11810))
### 💾 Core
- #### ✨ New Features
- tools.func: auto-detect binary vs armored GPG keys in setup_deb822_repo [@MickLesk](https://github.com/MickLesk) ([#11841](https://github.com/community-scripts/ProxmoxVE/pull/11841))
- core: remove old Go API and extend misc/api.func with new backend [@MickLesk](https://github.com/MickLesk) ([#11822](https://github.com/community-scripts/ProxmoxVE/pull/11822))
- #### 🔧 Refactor
- error_handler: prevent stuck 'installing' status [@MickLesk](https://github.com/MickLesk) ([#11845](https://github.com/community-scripts/ProxmoxVE/pull/11845))
### 🧰 Tools
- #### 🐞 Bug Fixes
- Tailscale: fix DNS check and keyrings directory issues [@MickLesk](https://github.com/MickLesk) ([#11837](https://github.com/community-scripts/ProxmoxVE/pull/11837))
## 2026-02-11
### 🆕 New Scripts
@@ -411,10 +464,16 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
- #### 🐞 Bug Fixes
- dispatcharr: include port 9191 in success-message [@MickLesk](https://github.com/MickLesk) ([#11808](https://github.com/community-scripts/ProxmoxVE/pull/11808))
- fix: make donetick 0.1.71 compatible [@tomfrenzel](https://github.com/tomfrenzel) ([#11804](https://github.com/community-scripts/ProxmoxVE/pull/11804))
- Kasm: Support new version URL format without hash suffix [@MickLesk](https://github.com/MickLesk) ([#11787](https://github.com/community-scripts/ProxmoxVE/pull/11787))
- LibreTranslate: Remove Torch [@tremor021](https://github.com/tremor021) ([#11783](https://github.com/community-scripts/ProxmoxVE/pull/11783))
- Snowshare: fix update script [@TuroYT](https://github.com/TuroYT) ([#11726](https://github.com/community-scripts/ProxmoxVE/pull/11726))
- #### ✨ New Features
- [Feature] OpenCloud: support PosixFS Collaborative Mode [@vhsdream](https://github.com/vhsdream) ([#11806](https://github.com/community-scripts/ProxmoxVE/pull/11806))
### 💾 Core
- #### 🔧 Refactor

View File

@@ -1,5 +0,0 @@
MONGO_USER=
MONGO_PASSWORD=
MONGO_IP=
MONGO_PORT=
MONGO_DATABASE=

View File

@@ -1,23 +0,0 @@
module proxmox-api
go 1.24.0
require (
github.com/gorilla/mux v1.8.1
github.com/joho/godotenv v1.5.1
github.com/rs/cors v1.11.1
go.mongodb.org/mongo-driver v1.17.2
)
require (
github.com/golang/snappy v0.0.4 // indirect
github.com/klauspost/compress v1.16.7 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/text v0.31.0 // indirect
)

View File

@@ -1,56 +0,0 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
github.com/klauspost/compress v1.16.7/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.mongodb.org/mongo-driver v1.17.2 h1:gvZyk8352qSfzyZ2UMWcpDpMSGEr1eqE4T793SqyhzM=
go.mongodb.org/mongo-driver v1.17.2/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -1,450 +0,0 @@
// Copyright (c) 2021-2026 community-scripts ORG
// Author: Michel Roegl-Brunner (michelroegl-brunner)
// License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"net/http"
"os"
"strconv"
"time"
"github.com/gorilla/mux"
"github.com/joho/godotenv"
"github.com/rs/cors"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
var client *mongo.Client
var collection *mongo.Collection
func loadEnv() {
if err := godotenv.Load(); err != nil {
log.Fatal("Error loading .env file")
}
}
// DataModel represents a single document in MongoDB
type DataModel struct {
ID primitive.ObjectID `json:"id" bson:"_id,omitempty"`
CT_TYPE uint `json:"ct_type" bson:"ct_type"`
DISK_SIZE float32 `json:"disk_size" bson:"disk_size"`
CORE_COUNT uint `json:"core_count" bson:"core_count"`
RAM_SIZE uint `json:"ram_size" bson:"ram_size"`
OS_TYPE string `json:"os_type" bson:"os_type"`
OS_VERSION string `json:"os_version" bson:"os_version"`
DISABLEIP6 string `json:"disableip6" bson:"disableip6"`
NSAPP string `json:"nsapp" bson:"nsapp"`
METHOD string `json:"method" bson:"method"`
CreatedAt time.Time `json:"created_at" bson:"created_at"`
PVEVERSION string `json:"pve_version" bson:"pve_version"`
STATUS string `json:"status" bson:"status"`
RANDOM_ID string `json:"random_id" bson:"random_id"`
TYPE string `json:"type" bson:"type"`
ERROR string `json:"error" bson:"error"`
}
type StatusModel struct {
RANDOM_ID string `json:"random_id" bson:"random_id"`
ERROR string `json:"error" bson:"error"`
STATUS string `json:"status" bson:"status"`
}
type CountResponse struct {
TotalEntries int64 `json:"total_entries"`
StatusCount map[string]int64 `json:"status_count"`
NSAPPCount map[string]int64 `json:"nsapp_count"`
}
// ConnectDatabase initializes the MongoDB connection
func ConnectDatabase() {
loadEnv()
mongoURI := fmt.Sprintf("mongodb://%s:%s@%s:%s",
os.Getenv("MONGO_USER"),
os.Getenv("MONGO_PASSWORD"),
os.Getenv("MONGO_IP"),
os.Getenv("MONGO_PORT"))
database := os.Getenv("MONGO_DATABASE")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var err error
client, err = mongo.Connect(ctx, options.Client().ApplyURI(mongoURI))
if err != nil {
log.Fatal("Failed to connect to MongoDB!", err)
}
collection = client.Database(database).Collection("data_models")
fmt.Println("Connected to MongoDB on 10.10.10.18")
}
// UploadJSON handles API requests and stores data as a document in MongoDB
func UploadJSON(w http.ResponseWriter, r *http.Request) {
var input DataModel
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
input.CreatedAt = time.Now()
_, err := collection.InsertOne(context.Background(), input)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
log.Println("Received data:", input)
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(map[string]string{"message": "Data saved successfully"})
}
// UpdateStatus updates the status of a record based on RANDOM_ID
func UpdateStatus(w http.ResponseWriter, r *http.Request) {
var input StatusModel
if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
filter := bson.M{"random_id": input.RANDOM_ID}
update := bson.M{"$set": bson.M{"status": input.STATUS, "error": input.ERROR}}
_, err := collection.UpdateOne(context.Background(), filter, update)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
log.Println("Updated data:", input)
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(map[string]string{"message": "Record updated successfully"})
}
// GetDataJSON fetches all data from MongoDB
func GetDataJSON(w http.ResponseWriter, r *http.Request) {
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetPaginatedData(w http.ResponseWriter, r *http.Request) {
page, _ := strconv.Atoi(r.URL.Query().Get("page"))
limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
if page < 1 {
page = 1
}
if limit < 1 {
limit = 10
}
skip := (page - 1) * limit
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
options := options.Find().SetSkip(int64(skip)).SetLimit(int64(limit))
cursor, err := collection.Find(ctx, bson.M{}, options)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetSummary(w http.ResponseWriter, r *http.Request) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
totalCount, err := collection.CountDocuments(ctx, bson.M{})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
statusCount := make(map[string]int64)
nsappCount := make(map[string]int64)
pipeline := []bson.M{
{"$group": bson.M{"_id": "$status", "count": bson.M{"$sum": 1}}},
}
cursor, err := collection.Aggregate(ctx, pipeline)
if err == nil {
for cursor.Next(ctx) {
var result struct {
ID string `bson:"_id"`
Count int64 `bson:"count"`
}
if err := cursor.Decode(&result); err == nil {
statusCount[result.ID] = result.Count
}
}
}
pipeline = []bson.M{
{"$group": bson.M{"_id": "$nsapp", "count": bson.M{"$sum": 1}}},
}
cursor, err = collection.Aggregate(ctx, pipeline)
if err == nil {
for cursor.Next(ctx) {
var result struct {
ID string `bson:"_id"`
Count int64 `bson:"count"`
}
if err := cursor.Decode(&result); err == nil {
nsappCount[result.ID] = result.Count
}
}
}
response := CountResponse{
TotalEntries: totalCount,
StatusCount: statusCount,
NSAPPCount: nsappCount,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(response)
}
func GetByNsapp(w http.ResponseWriter, r *http.Request) {
nsapp := r.URL.Query().Get("nsapp")
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"nsapp": nsapp})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetByDateRange(w http.ResponseWriter, r *http.Request) {
startDate := r.URL.Query().Get("start_date")
endDate := r.URL.Query().Get("end_date")
if startDate == "" || endDate == "" {
http.Error(w, "Both start_date and end_date are required", http.StatusBadRequest)
return
}
start, err := time.Parse("2006-01-02T15:04:05.999999+00:00", startDate+"T00:00:00+00:00")
if err != nil {
http.Error(w, "Invalid start_date format", http.StatusBadRequest)
return
}
end, err := time.Parse("2006-01-02T15:04:05.999999+00:00", endDate+"T23:59:59+00:00")
if err != nil {
http.Error(w, "Invalid end_date format", http.StatusBadRequest)
return
}
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{
"created_at": bson.M{
"$gte": start,
"$lte": end,
},
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetByStatus(w http.ResponseWriter, r *http.Request) {
status := r.URL.Query().Get("status")
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"status": status})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetByOS(w http.ResponseWriter, r *http.Request) {
osType := r.URL.Query().Get("os_type")
osVersion := r.URL.Query().Get("os_version")
var records []DataModel
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"os_type": osType, "os_version": osVersion})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
records = append(records, record)
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(records)
}
func GetErrors(w http.ResponseWriter, r *http.Request) {
errorCount := make(map[string]int)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
cursor, err := collection.Find(ctx, bson.M{"error": bson.M{"$ne": ""}})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
var record DataModel
if err := cursor.Decode(&record); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if record.ERROR != "" {
errorCount[record.ERROR]++
}
}
type ErrorCountResponse struct {
Error string `json:"error"`
Count int `json:"count"`
}
var errorCounts []ErrorCountResponse
for err, count := range errorCount {
errorCounts = append(errorCounts, ErrorCountResponse{
Error: err,
Count: count,
})
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(struct {
ErrorCounts []ErrorCountResponse `json:"error_counts"`
}{
ErrorCounts: errorCounts,
})
}
func main() {
ConnectDatabase()
router := mux.NewRouter()
router.HandleFunc("/upload", UploadJSON).Methods("POST")
router.HandleFunc("/upload/updatestatus", UpdateStatus).Methods("POST")
router.HandleFunc("/data/json", GetDataJSON).Methods("GET")
router.HandleFunc("/data/paginated", GetPaginatedData).Methods("GET")
router.HandleFunc("/data/summary", GetSummary).Methods("GET")
router.HandleFunc("/data/nsapp", GetByNsapp).Methods("GET")
router.HandleFunc("/data/date", GetByDateRange).Methods("GET")
router.HandleFunc("/data/status", GetByStatus).Methods("GET")
router.HandleFunc("/data/os", GetByOS).Methods("GET")
router.HandleFunc("/data/errors", GetErrors).Methods("GET")
c := cors.New(cors.Options{
AllowedOrigins: []string{"*"},
AllowedMethods: []string{"GET", "POST"},
AllowedHeaders: []string{"Content-Type", "Authorization"},
AllowCredentials: true,
})
handler := c.Handler(router)
fmt.Println("Server running on port 8080")
log.Fatal(http.ListenAndServe(":8080", handler))
}

View File

@@ -9,7 +9,7 @@ APP="Alpine-Grafana"
var_tags="${var_tags:-alpine;monitoring}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-256}"
var_disk="${var_disk:-1}"
var_disk="${var_disk:-2}"
var_os="${var_os:-alpine}"
var_version="${var_version:-3.23}"
var_unprivileged="${var_unprivileged:-1}"

View File

@@ -28,6 +28,7 @@ function update_script() {
exit
fi
msg_info "Updating Deluge"
ensure_dependencies python3-setuptools
$STD apt update
$STD pip3 install deluge[all] --upgrade
msg_ok "Updated Deluge"

View File

@@ -104,7 +104,7 @@ function update_script() {
cd /opt/dispatcharr
rm -rf .venv
$STD uv venv --clear
-$STD uv pip install -r requirements.txt --index-strategy unsafe-best-match
+$STD uv sync
$STD uv pip install gunicorn gevent celery redis daphne
msg_ok "Updated Dispatcharr Backend"
@@ -144,4 +144,4 @@ description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:9191${CL}"

View File

@@ -35,13 +35,14 @@ function update_script() {
msg_ok "Stopped Service"
msg_info "Backing Up Configurations"
-mv /opt/donetick/config/selfhosted.yml /opt/donetick/donetick.db /opt
+mv /opt/donetick/config/selfhosted.yaml /opt/donetick/donetick.db /opt
msg_ok "Backed Up Configurations"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "donetick" "donetick/donetick" "prebuild" "latest" "/opt/donetick" "donetick_Linux_x86_64.tar.gz"
msg_info "Restoring Configurations"
-mv /opt/selfhosted.yml /opt/donetick/config
+mv /opt/selfhosted.yaml /opt/donetick/config
+sed -i '/capacitor:\/\/localhost/d' /opt/donetick/config/selfhosted.yaml
mv /opt/donetick.db /opt/donetick
msg_ok "Restored Configurations"

View File

@@ -9,7 +9,7 @@ APP="EMQX"
var_tags="${var_tags:-mqtt}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-1024}"
var_disk="${var_disk:-4}"
var_disk="${var_disk:-6}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

View File

@@ -9,7 +9,7 @@ APP="Grafana"
var_tags="${var_tags:-monitoring;visualization}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-512}"
var_disk="${var_disk:-2}"
var_disk="${var_disk:-4}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

View File

@@ -46,7 +46,7 @@ function update_script() {
msg_info "Restoring configuration & data"
mv /opt/app.env /opt/jotty/.env
[[ -d /opt/data ]] && mv /opt/data /opt/jotty/data
-[[ -d /opt/jotty/config ]] && mv /opt/config/* /opt/jotty/config
+[[ -d /opt/jotty/config ]] && cp -a /opt/config/* /opt/jotty/config && rm -rf /opt/config
msg_ok "Restored configuration & data"
msg_info "Starting Service"

View File

@@ -30,7 +30,7 @@ function update_script() {
fi
RELEASE="v5.0.2"
if check_for_gh_release "opencloud" "opencloud-eu/opencloud" "${RELEASE}"; then
if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
msg_info "Stopping services"
systemctl stop opencloud opencloud-wopi
msg_ok "Stopped services"
@@ -38,9 +38,21 @@ function update_script() {
msg_info "Updating packages"
$STD apt-get update
$STD apt-get dist-upgrade -y
ensure_dependencies "inotify-tools"
msg_ok "Updated packages"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "${RELEASE}" "/usr/bin" "opencloud-*-linux-amd64"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "OpenCloud" "opencloud-eu/opencloud" "singlefile" "${RELEASE}" "/usr/bin" "opencloud-*-linux-amd64"
if ! grep -q 'POSIX_WATCH' /etc/opencloud/opencloud.env; then
sed -i '/^## External/i ## Uncomment below to enable PosixFS Collaborative Mode\
## Increase inotify watch/instance limits on your PVE host:\
### sysctl -w fs.inotify.max_user_watches=1048576\
### sysctl -w fs.inotify.max_user_instances=1024\
# STORAGE_USERS_POSIX_ENABLE_COLLABORATION=true\
# STORAGE_USERS_POSIX_WATCH_TYPE=inotifywait\
# STORAGE_USERS_POSIX_WATCH_FS=true\
# STORAGE_USERS_POSIX_WATCH_PATH=<path-to-storage-or-bind-mount>' /etc/opencloud/opencloud.env
fi
msg_info "Starting services"
systemctl start opencloud opencloud-wopi

View File

@@ -43,8 +43,8 @@ function update_script() {
msg_ok "Removed legacy installation"
msg_info "Installing uv-based Open-WebUI"
PYTHON_VERSION="3.12" setup_uv
$STD uv tool install --python 3.12 open-webui[all]
PYTHON_VERSION="3.11" setup_uv
$STD uv tool install --python 3.11 open-webui[all]
msg_ok "Installed uv-based Open-WebUI"
msg_info "Restoring data"

View File

@@ -51,7 +51,7 @@ function update_script() {
$STD npm run db:generate
$STD npm run build
$STD npm run build:cli
-$STD npm run db:sqlite:push
+$STD npm run db:push
cp -R .next/standalone ./
chmod +x ./dist/cli.mjs
cp server/db/names.json ./dist/names.json

View File

@@ -28,16 +28,55 @@ function update_script() {
exit
fi
msg_info "Updating ${APP}"
$STD python3 -m venv /opt/radicale
source /opt/radicale/bin/activate
$STD python3 -m pip install --upgrade https://github.com/Kozea/Radicale/archive/master.tar.gz
msg_ok "Updated ${APP}"
if check_for_gh_release "Radicale" "Kozea/Radicale"; then
msg_info "Stopping service"
systemctl stop radicale
msg_ok "Stopped service"
msg_info "Starting Service"
systemctl enable -q --now radicale
msg_ok "Started Service"
msg_ok "Updated successfully!"
msg_info "Backing up users file"
cp /opt/radicale/users /opt/radicale_users_backup
msg_ok "Backed up users file"
PYTHON_VERSION="3.13" setup_uv
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Radicale" "Kozea/Radicale" "tarball" "latest" "/opt/radicale"
msg_info "Restoring users file"
rm -f /opt/radicale/users
mv /opt/radicale_users_backup /opt/radicale/users
msg_ok "Restored users file"
if grep -q 'start.sh' /etc/systemd/system/radicale.service; then
sed -i -e '/^Description/i[Unit]' \
-e '\|^ExecStart|iWorkingDirectory=/opt/radicale' \
-e 's|^ExecStart=.*|ExecStart=/usr/local/bin/uv run -m radicale --config /etc/radicale/config|' /etc/systemd/system/radicale.service
systemctl daemon-reload
fi
if [[ ! -f /etc/radicale/config ]]; then
msg_info "Migrating to config file (/etc/radicale/config)"
mkdir -p /etc/radicale
cat <<EOF >/etc/radicale/config
[server]
hosts = 0.0.0.0:5232
[auth]
type = htpasswd
htpasswd_filename = /opt/radicale/users
htpasswd_encryption = sha512
[storage]
type = multifilesystem
filesystem_folder = /var/lib/radicale/collections
[web]
type = internal
EOF
msg_ok "Migrated to config (/etc/radicale/config)"
fi
msg_info "Starting service"
systemctl start radicale
msg_ok "Started service"
msg_ok "Updated Successfully!"
fi
exit
}

View File

@@ -21,7 +21,7 @@
"resources": {
"cpu": 2,
"ram": 1024,
"hdd": 4,
"hdd": 6,
"os": "debian",
"version": "13"
}

View File

@@ -1,5 +1,5 @@
{
"generated": "2026-02-11T18:22:25Z",
"generated": "2026-02-13T06:23:00Z",
"versions": [
{
"slug": "2fauth",
@@ -193,9 +193,9 @@
{
"slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr",
"version": "v2.5.1",
"version": "v2.6.0",
"pinned": false,
"date": "2026-01-11T00:46:17Z"
"date": "2026-02-13T00:14:21Z"
},
{
"slug": "cloudreve",
@@ -298,9 +298,9 @@
{
"slug": "donetick",
"repo": "donetick/donetick",
"version": "v0.1.71",
"version": "v0.1.73",
"pinned": false,
"date": "2026-02-11T06:01:13Z"
"date": "2026-02-12T23:42:30Z"
},
{
"slug": "drawio",
@@ -403,9 +403,9 @@
{
"slug": "ghostfolio",
"repo": "ghostfolio/ghostfolio",
"version": "2.237.0",
"version": "2.238.0",
"pinned": false,
"date": "2026-02-08T13:59:53Z"
"date": "2026-02-12T18:28:55Z"
},
{
"slug": "gitea",
@@ -543,9 +543,9 @@
{
"slug": "huntarr",
"repo": "plexguide/Huntarr.io",
"version": "9.2.3",
"version": "9.2.4.1",
"pinned": false,
"date": "2026-02-07T04:44:20Z"
"date": "2026-02-12T22:17:47Z"
},
{
"slug": "immich-public-proxy",
@@ -571,16 +571,16 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.55",
"version": "v5.12.59",
"pinned": false,
"date": "2026-02-05T01:06:15Z"
"date": "2026-02-13T02:26:13Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1094",
"version": "v0.24.1103",
"pinned": false,
"date": "2026-02-11T06:01:16Z"
"date": "2026-02-13T05:53:23Z"
},
{
"slug": "jellystat",
@@ -599,9 +599,9 @@
{
"slug": "jotty",
"repo": "fccview/jotty",
"version": "1.19.1",
"version": "1.20.0",
"pinned": false,
"date": "2026-01-26T21:30:39Z"
"date": "2026-02-12T09:23:30Z"
},
{
"slug": "kapowarr",
@@ -823,9 +823,9 @@
{
"slug": "metube",
"repo": "alexta69/metube",
"version": "2026.02.08",
"version": "2026.02.12",
"pinned": false,
"date": "2026-02-08T17:01:37Z"
"date": "2026-02-12T21:05:49Z"
},
{
"slug": "miniflux",
@@ -998,9 +998,9 @@
{
"slug": "pangolin",
"repo": "fosrl/pangolin",
"version": "1.15.2",
"version": "1.15.4",
"pinned": false,
"date": "2026-02-05T19:23:58Z"
"date": "2026-02-13T00:54:02Z"
},
{
"slug": "paperless-ai",
@@ -1131,9 +1131,9 @@
{
"slug": "prometheus-alertmanager",
"repo": "prometheus/alertmanager",
"version": "v0.31.0",
"version": "v0.31.1",
"pinned": false,
"date": "2026-02-02T13:34:15Z"
"date": "2026-02-11T21:28:26Z"
},
{
"slug": "prometheus-blackbox-exporter",
@@ -1229,9 +1229,9 @@
{
"slug": "rdtclient",
"repo": "rogerfar/rdt-client",
"version": "v2.0.119",
"version": "v2.0.120",
"pinned": false,
"date": "2025-10-13T23:15:11Z"
"date": "2026-02-12T02:53:51Z"
},
{
"slug": "reactive-resume",
@@ -1292,9 +1292,9 @@
{
"slug": "scraparr",
"repo": "thecfu/scraparr",
"version": "v3.0.1",
"version": "v3.0.3",
"pinned": false,
"date": "2026-02-11T17:42:23Z"
"date": "2026-02-12T14:20:56Z"
},
{
"slug": "seelf",
@@ -1383,9 +1383,9 @@
{
"slug": "stirling-pdf",
"repo": "Stirling-Tools/Stirling-PDF",
"version": "v2.4.5",
"version": "v2.4.6",
"pinned": false,
"date": "2026-02-06T23:12:20Z"
"date": "2026-02-12T00:01:19Z"
},
{
"slug": "streamlink-webui",
@@ -1432,9 +1432,9 @@
{
"slug": "termix",
"repo": "Termix-SSH/Termix",
"version": "release-1.11.0-tag",
"version": "release-1.11.1-tag",
"pinned": false,
"date": "2026-01-25T02:09:52Z"
"date": "2026-02-13T04:49:16Z"
},
{
"slug": "the-lounge",
@@ -1460,9 +1460,9 @@
{
"slug": "tianji",
"repo": "msgbyte/tianji",
"version": "v1.31.10",
"version": "v1.31.12",
"pinned": false,
"date": "2026-02-04T17:21:04Z"
"date": "2026-02-12T19:06:14Z"
},
{
"slug": "traccar",
@@ -1551,9 +1551,9 @@
{
"slug": "upsnap",
"repo": "seriousm4x/UpSnap",
"version": "5.2.7",
"version": "5.2.8",
"pinned": false,
"date": "2026-01-07T23:48:00Z"
"date": "2026-02-13T00:02:37Z"
},
{
"slug": "uptimekuma",
@@ -1656,9 +1656,9 @@
{
"slug": "wikijs",
"repo": "requarks/wiki",
"version": "v2.5.311",
"version": "v2.5.312",
"pinned": false,
"date": "2026-01-08T09:50:00Z"
"date": "2026-02-12T02:45:22Z"
},
{
"slug": "wishlist",

View File

@@ -21,7 +21,7 @@
"resources": {
"cpu": 1,
"ram": 512,
"hdd": 2,
"hdd": 4,
"os": "debian",
"version": "13"
}
@@ -32,7 +32,7 @@
"resources": {
"cpu": 1,
"ram": 256,
"hdd": 1,
"hdd": 2,
"os": "alpine",
"version": "3.23"
}

View File

@@ -12,7 +12,7 @@
"documentation": "https://radicale.org/master.html#documentation-1",
"website": "https://radicale.org/",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/radicale.webp",
"config_path": "/etc/radicale/config or ~/.config/radicale/config",
"config_path": "/etc/radicale/config",
"description": "Radicale is a small but powerful CalDAV (calendars, to-do lists) and CardDAV (contacts)",
"install_methods": [
{

View File

@@ -32,6 +32,10 @@
"password": null
},
"notes": [
{
"text": "SQL Server (2025) SQLPAL is incompatible with Proxmox VE 9 (Kernel 6.12+) in LXC containers. Use a VM instead or the SQL-Server 2022 LXC.",
"type": "warning"
},
{
"text": "If you choose not to run the installation setup, execute: `/opt/mssql/bin/mssql-conf setup` in LXC shell.",
"type": "info"

View File

@@ -16,7 +16,8 @@ update_os
msg_info "Installing Dependencies"
$STD apt install -y \
python3-pip \
-python3-libtorrent
+python3-libtorrent \
+python3-setuptools
msg_ok "Installed Dependencies"
msg_info "Installing Deluge"

View File

@@ -37,7 +37,7 @@ fetch_and_deploy_gh_release "dispatcharr" "Dispatcharr/Dispatcharr" "tarball"
msg_info "Installing Python Dependencies with uv"
cd /opt/dispatcharr
$STD uv venv --clear
-$STD uv pip install -r requirements.txt --index-strategy unsafe-best-match
+$STD uv sync
$STD uv pip install gunicorn gevent celery redis daphne
msg_ok "Installed Python Dependencies"

View File

@@ -38,6 +38,18 @@ rm -f "$DEB_FILE"
echo "$LATEST_VERSION" >~/.emqx
msg_ok "Installed EMQX"
read -r -p "${TAB3}Would you like to disable the EMQX MQ feature? (reduces disk/CPU usage) <y/N> " prompt
if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
msg_info "Disabling EMQX MQ feature"
mkdir -p /etc/emqx
if ! grep -q "^mq.enable" /etc/emqx/emqx.conf 2>/dev/null; then
echo "mq.enable = false" >>/etc/emqx/emqx.conf
else
sed -i 's/^mq.enable.*/mq.enable = false/' /etc/emqx/emqx.conf
fi
msg_ok "Disabled EMQX MQ feature"
fi
msg_info "Starting EMQX service"
$STD systemctl enable -q --now emqx
msg_ok "Enabled EMQX service"

View File

@@ -38,6 +38,10 @@ for server in "${servers[@]}"; do
fi
done
msg_info "Installing dependencies"
$STD apt install -y inotify-tools
msg_ok "Installed dependencies"
msg_info "Installing Collabora Online"
curl -fsSL https://collaboraoffice.com/downloads/gpg/collaboraonline-release-keyring.gpg -o /etc/apt/keyrings/collaboraonline-release-keyring.gpg
cat <<EOF >/etc/apt/sources.list.d/colloboraonline.sources
@@ -148,8 +152,15 @@ COLLABORATION_JWT_SECRET=
# FRONTEND_FULL_TEXT_SEARCH_ENABLED=true
# SEARCH_EXTRACTOR_TIKA_TIKA_URL=<your-tika-url>
## External storage test - Only NFS v4.2+ is supported
## User files
## Uncomment below to enable PosixFS Collaborative Mode
## Increase inotify watch/instance limits on your PVE host:
### sysctl -w fs.inotify.max_user_watches=1048576
### sysctl -w fs.inotify.max_user_instances=1024
# STORAGE_USERS_POSIX_ENABLE_COLLABORATION=true
# STORAGE_USERS_POSIX_WATCH_TYPE=inotifywait
# STORAGE_USERS_POSIX_WATCH_FS=true
# STORAGE_USERS_POSIX_WATCH_PATH=<path-to-storage-or-bind-mount>
## User files location - experimental - use at your own risk! - ZFS, NFS v4.2+ supported - CIFS/SMB not supported
# STORAGE_USERS_POSIX_ROOT=<path-to-your-bind_mount>
EOF

View File

@@ -21,10 +21,10 @@ msg_ok "Installed Dependencies"
setup_hwaccel
PYTHON_VERSION="3.12" setup_uv
PYTHON_VERSION="3.11" setup_uv
msg_info "Installing Open WebUI"
$STD uv tool install --python 3.12 open-webui[all]
$STD uv tool install --python 3.11 open-webui[all]
msg_ok "Installed Open WebUI"
read -r -p "${TAB3}Would you like to add Ollama? <y/N> " prompt

View File

@@ -178,7 +178,7 @@ http:
servers:
- url: "http://$LOCAL_IP:3000"
EOF
-$STD npm run db:sqlite:push
+$STD npm run db:push
. /etc/os-release
if [ "$VERSION_CODENAME" = "trixie" ]; then

View File

@@ -14,42 +14,51 @@ network_check
update_os
msg_info "Installing Dependencies"
-$STD apt install -y \
-apache2-utils \
-python3-pip \
-python3-venv
+$STD apt install -y apache2-utils
msg_ok "Installed Dependencies"
PYTHON_VERSION="3.13" setup_uv
fetch_and_deploy_gh_release "Radicale" "Kozea/Radicale" "tarball" "latest" "/opt/radicale"
msg_info "Setting up Radicale"
python3 -m venv /opt/radicale
source /opt/radicale/bin/activate
$STD python3 -m pip install --upgrade https://github.com/Kozea/Radicale/archive/master.tar.gz
cd /opt/radicale
RNDPASS=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | head -c13)
-$STD htpasswd -c -b -5 /opt/radicale/users admin $RNDPASS
+$STD htpasswd -c -b -5 /opt/radicale/users admin "$RNDPASS"
{
echo "Radicale Credentials"
echo "Admin User: admin"
echo "Admin Password: $RNDPASS"
} >>~/radicale.creds
msg_ok "Done setting up Radicale"
msg_info "Setup Service"
+mkdir -p /etc/radicale
+cat <<EOF >/etc/radicale/config
+[server]
+hosts = 0.0.0.0:5232
-cat <<EOF >/opt/radicale/start.sh
-#!/usr/bin/env bash
-source /opt/radicale/bin/activate
-python3 -m radicale --storage-filesystem-folder=/var/lib/radicale/collections --hosts 0.0.0.0:5232 --auth-type htpasswd --auth-htpasswd-filename /opt/radicale/users --auth-htpasswd-encryption sha512
+[auth]
+type = htpasswd
+htpasswd_filename = /opt/radicale/users
+htpasswd_encryption = sha512
+[storage]
+type = multifilesystem
+filesystem_folder = /var/lib/radicale/collections
+[web]
+type = internal
EOF
msg_ok "Set up Radicale"
-chmod +x /opt/radicale/start.sh
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/radicale.service
+[Unit]
Description=A simple CalDAV (calendar) and CardDAV (contact) server
After=network.target
Requires=network.target
[Service]
-ExecStart=/opt/radicale/start.sh
+WorkingDirectory=/opt/radicale
+ExecStart=/usr/local/bin/uv run -m radicale --config /etc/radicale/config
Restart=on-failure
# User=radicale
# Deny other users access to the calendar data

View File

@@ -15,16 +15,18 @@ update_os
msg_info "Installing Dependencies"
$STD apt install -y apt-transport-https
curl -fsSL "https://dl.ui.com/unifi/unifi-repo.gpg" -o "/usr/share/keyrings/unifi-repo.gpg"
cat <<EOF | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.sources >/dev/null
Types: deb
URIs: https://www.ui.com/downloads/unifi/debian
Suites: stable
Components: ubiquiti
Architectures: amd64
Signed-By: /usr/share/keyrings/unifi-repo.gpg
EOF
$STD apt update
msg_ok "Installed Dependencies"
setup_deb822_repo \
"unifi" \
"https://dl.ui.com/unifi/unifi-repo.gpg" \
"https://www.ui.com/downloads/unifi/debian" \
"stable" \
"ubiquiti" \
"amd64"
JAVA_VERSION="21" setup_java
if lscpu | grep -q 'avx'; then

File diff suppressed because it is too large

View File

@@ -3636,6 +3636,9 @@ $PCT_OPTIONS_STRING"
exit 214
fi
msg_ok "Storage space validated"
# Report installation start to API (early - captures failed installs too)
post_to_api
fi
create_lxc_container || exit $?
@@ -4010,6 +4013,9 @@ EOF'
# Install SSH keys
install_ssh_keys_into_ct
# Start timer for duration tracking
start_install_timer
# Run application installer
# Disable error trap - container errors are handled internally via flag file
set +Eeuo pipefail # Disable ALL error handling temporarily
@@ -4040,6 +4046,9 @@ EOF'
if [[ $install_exit_code -ne 0 ]]; then
msg_error "Installation failed in container ${CTID} (exit code: ${install_exit_code})"
# Report failure to telemetry API
post_update_to_api "failed" "$install_exit_code"
# Copy both logs from container before potential deletion
local build_log_copied=false
local install_log_copied=false
@@ -5123,9 +5132,9 @@ EOF
# api_exit_script()
#
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to API using centralized error descriptions
# - Uses explain_exit_code() from error_handler.func for consistent error messages
# - Posts failure status with exit code to API (error description added automatically)
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - Posts failure status with exit code to API (error description resolved automatically)
# - Only executes on non-zero exit codes
# ------------------------------------------------------------------------------
api_exit_script() {
@@ -5138,6 +5147,6 @@ api_exit_script() {
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'post_update_to_api "failed" "$BASH_COMMAND"' ERR
trap 'post_update_to_api "failed" "INTERRUPTED"' SIGINT
trap 'post_update_to_api "failed" "TERMINATED"' SIGTERM
trap 'post_update_to_api "failed" "$?"' ERR
trap 'post_update_to_api "failed" "130"' SIGINT
trap 'post_update_to_api "failed" "143"' SIGTERM

View File

@@ -27,100 +27,90 @@
# ------------------------------------------------------------------------------
# explain_exit_code()
#
# - Maps numeric exit codes to human-readable error descriptions
# - Supports:
# * Generic/Shell errors (1, 2, 126, 127, 128, 130, 137, 139, 143)
# * Package manager errors (APT, DPKG: 100, 101, 255)
# * Node.js/npm errors (243-249, 254)
# * Python/pip/uv errors (210-212)
# * PostgreSQL errors (231-234)
# * MySQL/MariaDB errors (241-244)
# * MongoDB errors (251-254)
# * Proxmox custom codes (200-231)
# - Returns description string for given exit code
# - Canonical version is defined in api.func (sourced before this file)
# - This section only provides a fallback if api.func was not loaded
# - See api.func SECTION 1 for the authoritative exit code mappings
# ------------------------------------------------------------------------------
explain_exit_code() {
local code="$1"
case "$code" in
# --- Generic / Shell ---
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
143) echo "Terminated (SIGTERM)" ;;
# --- Package manager / APT / DPKG ---
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
255) echo "DPKG: Fatal internal error" ;;
# --- Node.js / npm / pnpm / yarn ---
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
247) echo "Node.js: Fatal internal error" ;;
248) echo "Node.js: Invalid C++ addon / N-API failure" ;;
249) echo "Node.js: Inspector error" ;;
254) echo "npm/pnpm/yarn: Unknown fatal error" ;;
# --- Python / pip / uv ---
210) echo "Python: Virtualenv / uv environment missing or broken" ;;
211) echo "Python: Dependency resolution failed" ;;
212) echo "Python: Installation aborted (permissions or EXTERNALLY-MANAGED)" ;;
# --- PostgreSQL ---
231) echo "PostgreSQL: Connection failed (server not running / wrong socket)" ;;
232) echo "PostgreSQL: Authentication failed (bad user/password)" ;;
233) echo "PostgreSQL: Database does not exist" ;;
234) echo "PostgreSQL: Fatal error in query / syntax" ;;
# --- MySQL / MariaDB ---
241) echo "MySQL/MariaDB: Connection failed (server not running / wrong socket)" ;;
242) echo "MySQL/MariaDB: Authentication failed (bad user/password)" ;;
243) echo "MySQL/MariaDB: Database does not exist" ;;
244) echo "MySQL/MariaDB: Fatal error in query / syntax" ;;
# --- MongoDB ---
251) echo "MongoDB: Connection failed (server not running)" ;;
252) echo "MongoDB: Authentication failed (bad user/password)" ;;
253) echo "MongoDB: Database not found" ;;
254) echo "MongoDB: Fatal query error" ;;
# --- Proxmox Custom Codes ---
200) echo "Proxmox: Failed to create lock file" ;;
203) echo "Proxmox: Missing CTID variable" ;;
204) echo "Proxmox: Missing PCT_OSTYPE variable" ;;
205) echo "Proxmox: Invalid CTID (<100)" ;;
206) echo "Proxmox: CTID already in use" ;;
207) echo "Proxmox: Password contains unescaped special characters" ;;
208) echo "Proxmox: Invalid configuration (DNS/MAC/Network format)" ;;
209) echo "Proxmox: Container creation failed" ;;
210) echo "Proxmox: Cluster not quorate" ;;
211) echo "Proxmox: Timeout waiting for template lock" ;;
212) echo "Proxmox: Storage type 'iscsidirect' does not support containers (VMs only)" ;;
213) echo "Proxmox: Storage type does not support 'rootdir' content" ;;
214) echo "Proxmox: Not enough storage space" ;;
215) echo "Proxmox: Container created but not listed (ghost state)" ;;
216) echo "Proxmox: RootFS entry missing in config" ;;
217) echo "Proxmox: Storage not accessible" ;;
219) echo "Proxmox: CephFS does not support containers - use RBD" ;;
224) echo "Proxmox: PBS storage is for backups only" ;;
218) echo "Proxmox: Template file corrupted or incomplete" ;;
220) echo "Proxmox: Unable to resolve template path" ;;
221) echo "Proxmox: Template file not readable" ;;
222) echo "Proxmox: Template download failed" ;;
223) echo "Proxmox: Template not available after download" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
# --- Default ---
*) echo "Unknown error" ;;
esac
}
if ! declare -f explain_exit_code &>/dev/null; then
explain_exit_code() {
local code="$1"
case "$code" in
1) echo "General error / Operation not permitted" ;;
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
6) echo "curl: DNS resolution failed (could not resolve host)" ;;
7) echo "curl: Failed to connect (network unreachable / host down)" ;;
22) echo "curl: HTTP error returned (404, 429, 500+)" ;;
28) echo "curl: Operation timeout (network slow or server not responding)" ;;
35) echo "curl: SSL/TLS handshake failed (certificate error)" ;;
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
102) echo "APT: Lock held by another process (dpkg/apt still running)" ;;
124) echo "Command timed out (timeout command)" ;;
126) echo "Command invoked cannot execute (permission problem?)" ;;
127) echo "Command not found" ;;
128) echo "Invalid argument to exit" ;;
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
134) echo "Process aborted (SIGABRT - possibly Node.js heap overflow)" ;;
137) echo "Killed (SIGKILL / Out of memory?)" ;;
139) echo "Segmentation fault (core dumped)" ;;
141) echo "Broken pipe (SIGPIPE - output closed prematurely)" ;;
143) echo "Terminated (SIGTERM)" ;;
150) echo "Systemd: Service failed to start" ;;
151) echo "Systemd: Service unit not found" ;;
152) echo "Permission denied (EACCES)" ;;
153) echo "Build/compile failed (make/gcc/cmake)" ;;
154) echo "Node.js: Native addon build failed (node-gyp)" ;;
160) echo "Python: Virtualenv / uv environment missing or broken" ;;
161) echo "Python: Dependency resolution failed" ;;
162) echo "Python: Installation aborted (permissions or EXTERNALLY-MANAGED)" ;;
170) echo "PostgreSQL: Connection failed (server not running / wrong socket)" ;;
171) echo "PostgreSQL: Authentication failed (bad user/password)" ;;
172) echo "PostgreSQL: Database does not exist" ;;
173) echo "PostgreSQL: Fatal error in query / syntax" ;;
180) echo "MySQL/MariaDB: Connection failed (server not running / wrong socket)" ;;
181) echo "MySQL/MariaDB: Authentication failed (bad user/password)" ;;
182) echo "MySQL/MariaDB: Database does not exist" ;;
183) echo "MySQL/MariaDB: Fatal error in query / syntax" ;;
190) echo "MongoDB: Connection failed (server not running)" ;;
191) echo "MongoDB: Authentication failed (bad user/password)" ;;
192) echo "MongoDB: Database not found" ;;
193) echo "MongoDB: Fatal query error" ;;
200) echo "Proxmox: Failed to create lock file" ;;
203) echo "Proxmox: Missing CTID variable" ;;
204) echo "Proxmox: Missing PCT_OSTYPE variable" ;;
205) echo "Proxmox: Invalid CTID (<100)" ;;
206) echo "Proxmox: CTID already in use" ;;
207) echo "Proxmox: Password contains unescaped special characters" ;;
208) echo "Proxmox: Invalid configuration (DNS/MAC/Network format)" ;;
209) echo "Proxmox: Container creation failed" ;;
210) echo "Proxmox: Cluster not quorate" ;;
211) echo "Proxmox: Timeout waiting for template lock" ;;
212) echo "Proxmox: Storage type 'iscsidirect' does not support containers (VMs only)" ;;
213) echo "Proxmox: Storage type does not support 'rootdir' content" ;;
214) echo "Proxmox: Not enough storage space" ;;
215) echo "Proxmox: Container created but not listed (ghost state)" ;;
216) echo "Proxmox: RootFS entry missing in config" ;;
217) echo "Proxmox: Storage not accessible" ;;
218) echo "Proxmox: Template file corrupted or incomplete" ;;
219) echo "Proxmox: CephFS does not support containers - use RBD" ;;
220) echo "Proxmox: Unable to resolve template path" ;;
221) echo "Proxmox: Template file not readable" ;;
222) echo "Proxmox: Template download failed" ;;
223) echo "Proxmox: Template not available after download" ;;
224) echo "Proxmox: PBS storage is for backups only" ;;
225) echo "Proxmox: No template available for OS/Version" ;;
231) echo "Proxmox: LXC stack upgrade failed" ;;
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
245) echo "Node.js: Invalid command-line option" ;;
246) echo "Node.js: Internal JavaScript Parse Error" ;;
247) echo "Node.js: Fatal internal error" ;;
248) echo "Node.js: Invalid C++ addon / N-API failure" ;;
249) echo "npm/pnpm/yarn: Unknown fatal error" ;;
255) echo "DPKG: Fatal internal error" ;;
*) echo "Unknown error" ;;
esac
}
fi
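Usage sketch for the fallback (outputs follow directly from the case arms above):
explain_exit_code 127   # -> Command not found
explain_exit_code 214   # -> Proxmox: Not enough storage space
explain_exit_code 999   # -> Unknown error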
# ==============================================================================
# SECTION 2: ERROR HANDLERS
@@ -197,12 +187,7 @@ error_handler() {
# Create error flag file with exit code for host detection
echo "$exit_code" >"/root/.install-${SESSION_ID:-error}.failed" 2>/dev/null || true
if declare -f msg_custom >/dev/null 2>&1; then
msg_custom "📋" "${YW}" "Log saved to: ${container_log}"
else
echo -e "${YW}Log saved to:${CL} ${BL}${container_log}${CL}"
fi
# The host already prints the combined log path - no need to repeat the container path
else
# HOST CONTEXT: Show local log path and offer container cleanup
if declare -f msg_custom >/dev/null 2>&1; then
@@ -213,6 +198,11 @@ error_handler() {
# Offer to remove container if it exists (build errors after container creation)
if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then
# Report failure to API before container cleanup
if declare -f post_update_to_api &>/dev/null; then
post_update_to_api "failed" "$exit_code"
fi
echo ""
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
@@ -253,6 +243,18 @@ error_handler() {
# ------------------------------------------------------------------------------
on_exit() {
local exit_code=$?
# Report orphaned "installing" records to telemetry API
# Catches ALL exit paths: errors (non-zero), signals, AND clean exits where
# post_to_api was called ("installing" sent) but post_update_to_api was never called
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
post_update_to_api "failed" "1"
fi
fi
fi
[[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
exit "$exit_code"
}
@@ -265,6 +267,10 @@ on_exit() {
# - Exits with code 130 (128 + SIGINT=2)
# ------------------------------------------------------------------------------
on_interrupt() {
# Report interruption to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "130"
fi
if declare -f msg_error >/dev/null 2>&1; then
msg_error "Interrupted by user (SIGINT)"
else
@@ -282,6 +288,10 @@ on_interrupt() {
# - Triggered by external process termination
# ------------------------------------------------------------------------------
on_terminate() {
# Report termination to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "143"
fi
if declare -f msg_error >/dev/null 2>&1; then
msg_error "Terminated by signal (SIGTERM)"
else
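The hardcoded 130/143 values follow the POSIX 128+N convention for death by signal, which a scratch shell confirms:
bash -c 'kill -INT  $$'; echo $?   # 130 = 128 + 2  (SIGINT)
bash -c 'kill -TERM $$'; echo $?   # 143 = 128 + 15 (SIGTERM)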

View File

@@ -465,6 +465,7 @@ manage_tool_repository() {
msg_error "Failed to download MongoDB GPG key"
return 1
fi
chmod 644 "/etc/apt/keyrings/mongodb-server-${version}.gpg"
# Setup repository
local distro_codename
@@ -1294,12 +1295,33 @@ setup_deb822_repo() {
return 1
}
# Import GPG
curl -fsSL "$gpg_url" | gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" || {
msg_error "Failed to import GPG key for ${name}"
# Import GPG key (auto-detect binary vs ASCII-armored format)
local tmp_gpg
tmp_gpg=$(mktemp) || return 1
curl -fsSL "$gpg_url" -o "$tmp_gpg" || {
msg_error "Failed to download GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
if grep -q "BEGIN PGP" "$tmp_gpg" 2>/dev/null; then
# ASCII-armored — dearmor to binary
gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" < "$tmp_gpg" || {
msg_error "Failed to dearmor GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
else
# Already in binary GPG format — copy directly
cp "$tmp_gpg" "/etc/apt/keyrings/${name}.gpg" || {
msg_error "Failed to install GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
fi
rm -f "$tmp_gpg"
chmod 644 "/etc/apt/keyrings/${name}.gpg"
# Write deb822
{
echo "Types: deb"

View File

@@ -75,14 +75,37 @@ pct exec "$CTID" -- bash -c '
set -e
export DEBIAN_FRONTEND=noninteractive
ID=$(grep "^ID=" /etc/os-release | cut -d"=" -f2)
VER=$(grep "^VERSION_CODENAME=" /etc/os-release | cut -d"=" -f2)
# Source os-release properly (handles quoted values)
source /etc/os-release
# fallback if DNS is poisoned or blocked
# Fallback if DNS is poisoned or blocked
ORIG_RESOLV="/etc/resolv.conf"
BACKUP_RESOLV="/tmp/resolv.conf.backup"
if ! dig +short pkgs.tailscale.com | grep -qvE "^127\.|^0\.0\.0\.0$"; then
# Check DNS resolution using multiple methods (dig may not be installed)
dns_check_failed=true
if command -v dig &>/dev/null; then
if dig +short pkgs.tailscale.com 2>/dev/null | grep -qvE "^127\.|^0\.0\.0\.0$|^$"; then
dns_check_failed=false
fi
elif command -v host &>/dev/null; then
if host pkgs.tailscale.com 2>/dev/null | grep -q "has address"; then
dns_check_failed=false
fi
elif command -v nslookup &>/dev/null; then
if nslookup pkgs.tailscale.com 2>/dev/null | grep -q "Address:"; then
dns_check_failed=false
fi
elif command -v getent &>/dev/null; then
if getent hosts pkgs.tailscale.com &>/dev/null; then
dns_check_failed=false
fi
else
# No DNS tools available; assume DNS works and let the curl download surface any failure
dns_check_failed=false
fi
if $dns_check_failed; then
echo "[INFO] DNS resolution for pkgs.tailscale.com failed (blocked or redirected)."
echo "[INFO] Temporarily overriding /etc/resolv.conf with Cloudflare DNS (1.1.1.1)"
cp "$ORIG_RESOLV" "$BACKUP_RESOLV"
@@ -92,17 +115,22 @@ fi
if ! command -v curl &>/dev/null; then
echo "[INFO] curl not found, installing..."
apt-get update -qq
apt-get install -y curl >/dev/null
apt update -qq
apt install -y curl >/dev/null
fi
curl -fsSL https://pkgs.tailscale.com/stable/${ID}/${VER}.noarmor.gpg \
# Ensure keyrings directory exists
mkdir -p /usr/share/keyrings
curl -fsSL "https://pkgs.tailscale.com/stable/${ID}/${VERSION_CODENAME}.noarmor.gpg" \
| tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/${ID} ${VER} main" \
echo "deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/${ID} ${VERSION_CODENAME} main" \
>/etc/apt/sources.list.d/tailscale.list
apt-get update -qq
apt-get install -y tailscale >/dev/null
apt update -qq
apt install -y tailscale >/dev/null
if [[ -f /tmp/resolv.conf.backup ]]; then
echo "[INFO] Restoring original /etc/resolv.conf"

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -203,7 +203,6 @@ function exit-script() {
function default_settings() {
VMID=$(get_valid_nextid)
FORMAT=",efitype=4m"
MACHINE=""
DISK_SIZE="4G"
DISK_CACHE=""
@@ -259,11 +258,9 @@ function advanced_settings() {
3>&1 1>&2 2>&3); then
if [ "$MACH" = q35 ]; then
echo -e "${CONTAINERTYPE}${BOLD}${DGN}Machine Type: ${BGN}$MACH${CL}"
FORMAT=""
MACHINE=" -machine q35"
else
echo -e "${CONTAINERTYPE}${BOLD}${DGN}Machine Type: ${BGN}$MACH${CL}"
FORMAT=",efitype=4m"
MACHINE=""
fi
else
@@ -476,31 +473,45 @@ case $STORAGE_TYPE in
nfs | dir | cifs)
DISK_EXT=".qcow2"
DISK_REF="$VMID/"
DISK_IMPORT="-format qcow2"
DISK_IMPORT="--format qcow2"
THIN=""
;;
btrfs)
DISK_EXT=".raw"
DISK_REF="$VMID/"
DISK_IMPORT="-format raw"
FORMAT=",efitype=4m"
DISK_IMPORT="--format raw"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="--format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"
eval DISK"${i}"=vm-"${VMID}"-disk-"${i}"${DISK_EXT:-}
eval DISK"${i}"_REF="${STORAGE}":"${DISK_REF:-}"${!disk}
done
msg_info "Creating a Arch Linux VM"
qm create $VMID -agent 1${MACHINE} -tablet 0 -localtime 1 -bios ovmf${CPU_TYPE} -cores $CORE_COUNT -memory $RAM_SIZE \
-name $HN -tags community-script -net0 virtio,bridge=$BRG,macaddr=$MAC$VLAN$MTU -onboot 1 -ostype l26 -scsihw virtio-scsi-pci
pvesm alloc $STORAGE $VMID $DISK0 4M 1>&/dev/null
qm importdisk $VMID ${FILE} $STORAGE ${DISK_IMPORT:-} 1>&/dev/null
if qm disk import --help >/dev/null 2>&1; then
IMPORT_CMD=(qm disk import)
else
IMPORT_CMD=(qm importdisk)
fi
IMPORT_OUT="$("${IMPORT_CMD[@]}" "$VMID" "${FILE}" "$STORAGE" ${DISK_IMPORT:-} 2>&1 || true)"
DISK_REF_IMPORTED="$(printf '%s\n' "$IMPORT_OUT" | sed -n "s/.*successfully imported disk '\([^']\+\)'.*/\1/p" | tr -d "\r\"'")"
[[ -z "$DISK_REF_IMPORTED" ]] && DISK_REF_IMPORTED="$(pvesm list "$STORAGE" | awk -v id="$VMID" '$5 ~ ("vm-"id"-disk-") {print $1":"$5}' | sort | tail -n1)"
[[ -z "$DISK_REF_IMPORTED" ]] && {
msg_error "Unable to determine imported disk reference."
echo "$IMPORT_OUT"
exit 1
}
msg_ok "Imported disk (${CL}${BL}${DISK_REF_IMPORTED}${CL})"
qm set $VMID \
-efidisk0 ${DISK0_REF}${FORMAT} \
-scsi0 ${DISK1_REF},${DISK_CACHE}${THIN}size=${DISK_SIZE} \
-efidisk0 ${STORAGE}:0,efitype=4m \
-scsi0 ${DISK_REF_IMPORTED},${DISK_CACHE}${THIN%,} \
-ide2 ${STORAGE}:cloudinit \
-boot order=scsi0 \
-serial0 socket >/dev/null
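A post-create sanity check, as a sketch (variables as set above): the imported disk should now be attached as scsi0 and the EFI vars disk as efidisk0:
qm config "$VMID" | grep -E '^(efidisk0|scsi0|boot):'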

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -201,6 +201,17 @@ function exit-script() {
exit
}
function select_cloud_init() {
if (whiptail --backtitle "Proxmox VE Helper Scripts" --title "CLOUD-INIT" \
--yesno "Enable Cloud-Init for VM configuration?\n\nCloud-Init allows automatic configuration of:\n- User accounts and passwords\n- SSH keys\n- Network settings (DHCP/Static)\n- DNS configuration\n\nYou can also configure these settings later in Proxmox UI.\n\nNote: Without Cloud-Init, the nocloud image will be used with console auto-login." --defaultno 18 68); then
CLOUD_INIT="yes"
echo -e "${CLOUD}${BOLD}${DGN}Cloud-Init: ${BGN}yes${CL}"
else
CLOUD_INIT="no"
echo -e "${CLOUD}${BOLD}${DGN}Cloud-Init: ${BGN}no${CL}"
fi
}
function default_settings() {
VMID=$(get_valid_nextid)
FORMAT=",efitype=4m"
@@ -216,7 +227,6 @@ function default_settings() {
VLAN=""
MTU=""
START_VM="yes"
CLOUD_INIT="no"
METHOD="default"
echo -e "${CONTAINERID}${BOLD}${DGN}Virtual Machine ID: ${BGN}${VMID}${CL}"
echo -e "${CONTAINERTYPE}${BOLD}${DGN}Machine Type: ${BGN}i440fx${CL}"
@@ -230,7 +240,7 @@ function default_settings() {
echo -e "${MACADDRESS}${BOLD}${DGN}MAC Address: ${BGN}${MAC}${CL}"
echo -e "${VLANTAG}${BOLD}${DGN}VLAN: ${BGN}Default${CL}"
echo -e "${DEFAULT}${BOLD}${DGN}Interface MTU Size: ${BGN}Default${CL}"
echo -e "${CLOUD}${BOLD}${DGN}Configure Cloud-init: ${BGN}no${CL}"
select_cloud_init
echo -e "${GATEWAY}${BOLD}${DGN}Start VM when completed: ${BGN}yes${CL}"
echo -e "${CREATING}${BOLD}${DGN}Creating a Debian 13 VM using the above default settings${CL}"
}
@@ -400,13 +410,7 @@ function advanced_settings() {
exit-script
fi
if (whiptail --backtitle "Proxmox VE Helper Scripts" --title "CLOUD-INIT" --yesno "Configure the VM with Cloud-init?" --defaultno 10 58); then
echo -e "${CLOUD}${BOLD}${DGN}Configure Cloud-init: ${BGN}yes${CL}"
CLOUD_INIT="yes"
else
echo -e "${CLOUD}${BOLD}${DGN}Configure Cloud-init: ${BGN}no${CL}"
CLOUD_INIT="no"
fi
select_cloud_init
if (whiptail --backtitle "Proxmox VE Helper Scripts" --title "START VIRTUAL MACHINE" --yesno "Start VM when completed?" 10 58); then
echo -e "${GATEWAY}${BOLD}${DGN}Start VM when completed: ${BGN}yes${CL}"
@@ -473,6 +477,17 @@ else
fi
msg_ok "Using ${CL}${BL}$STORAGE${CL} ${GN}for Storage Location."
msg_ok "Virtual Machine ID is ${CL}${BL}$VMID${CL}."
# ==============================================================================
# PREREQUISITES
# ==============================================================================
if ! command -v virt-customize &>/dev/null; then
msg_info "Installing libguestfs-tools"
apt-get update >/dev/null 2>&1
apt-get install -y libguestfs-tools >/dev/null 2>&1
msg_ok "Installed libguestfs-tools"
fi
msg_info "Retrieving the URL for the Debian 13 Qcow2 Disk Image"
if [ "$CLOUD_INIT" == "yes" ]; then
URL=https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-amd64.qcow2
@@ -486,6 +501,50 @@ echo -en "\e[1A\e[0K"
FILE=$(basename $URL)
msg_ok "Downloaded ${CL}${BL}${FILE}${CL}"
# ==============================================================================
# IMAGE CUSTOMIZATION
# ==============================================================================
msg_info "Customizing ${FILE} image"
WORK_FILE=$(mktemp --suffix=.qcow2)
cp "$FILE" "$WORK_FILE"
# Set hostname
virt-customize -q -a "$WORK_FILE" --hostname "${HN}" >/dev/null 2>&1
# Prepare for unique machine-id on first boot
virt-customize -q -a "$WORK_FILE" --run-command "truncate -s 0 /etc/machine-id" >/dev/null 2>&1
virt-customize -q -a "$WORK_FILE" --run-command "rm -f /var/lib/dbus/machine-id" >/dev/null 2>&1
# Disable systemd-firstboot to prevent interactive prompts blocking the console
virt-customize -q -a "$WORK_FILE" --run-command "systemctl disable systemd-firstboot.service 2>/dev/null; rm -f /etc/systemd/system/sysinit.target.wants/systemd-firstboot.service; ln -sf /dev/null /etc/systemd/system/systemd-firstboot.service" >/dev/null 2>&1 || true
# Pre-seed firstboot settings so it won't prompt even if triggered
virt-customize -q -a "$WORK_FILE" --run-command "echo 'Etc/UTC' > /etc/timezone && ln -sf /usr/share/zoneinfo/Etc/UTC /etc/localtime" >/dev/null 2>&1 || true
virt-customize -q -a "$WORK_FILE" --run-command "touch /etc/locale.conf" >/dev/null 2>&1 || true
if [ "$CLOUD_INIT" == "yes" ]; then
# Cloud-Init handles SSH and login
virt-customize -q -a "$WORK_FILE" --run-command "sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config" >/dev/null 2>&1 || true
virt-customize -q -a "$WORK_FILE" --run-command "sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config" >/dev/null 2>&1 || true
else
# Configure auto-login on serial console (ttyS0) and virtual console (tty1)
virt-customize -q -a "$WORK_FILE" --run-command "mkdir -p /etc/systemd/system/serial-getty@ttyS0.service.d" >/dev/null 2>&1 || true
virt-customize -q -a "$WORK_FILE" --run-command 'cat > /etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf << EOF
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear %I \$TERM
EOF' >/dev/null 2>&1 || true
virt-customize -q -a "$WORK_FILE" --run-command "mkdir -p /etc/systemd/system/getty@tty1.service.d" >/dev/null 2>&1 || true
virt-customize -q -a "$WORK_FILE" --run-command 'cat > /etc/systemd/system/getty@tty1.service.d/autologin.conf << EOF
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear %I \$TERM
EOF' >/dev/null 2>&1 || true
fi
msg_ok "Customized image"
STORAGE_TYPE=$(pvesm status -storage "$STORAGE" | awk 'NR>1 {print $2}')
case $STORAGE_TYPE in
nfs | dir)
@@ -501,6 +560,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"
@@ -512,7 +576,7 @@ msg_info "Creating a Debian 13 VM"
qm create $VMID -agent 1${MACHINE} -tablet 0 -localtime 1 -bios ovmf${CPU_TYPE} -cores $CORE_COUNT -memory $RAM_SIZE \
-name $HN -tags community-script -net0 virtio,bridge=$BRG,macaddr=$MAC$VLAN$MTU -onboot 1 -ostype l26 -scsihw virtio-scsi-pci
pvesm alloc $STORAGE $VMID $DISK0 4M 1>&/dev/null
qm importdisk $VMID ${FILE} $STORAGE ${DISK_IMPORT:-} 1>&/dev/null
qm importdisk $VMID ${WORK_FILE} $STORAGE ${DISK_IMPORT:-} 1>&/dev/null
if [ "$CLOUD_INIT" == "yes" ]; then
qm set $VMID \
-efidisk0 ${DISK0_REF}${FORMAT} \
@@ -527,6 +591,10 @@ else
-boot order=scsi0 \
-serial0 socket >/dev/null
fi
# Clean up work file
rm -f "$WORK_FILE"
DESCRIPTION=$(
cat <<EOF
<div align='center'>

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -501,6 +501,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -45,7 +45,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}

View File

@@ -74,7 +74,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}

View File

@@ -71,7 +71,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -566,6 +566,11 @@ zfspool)
DISK_REF=""
DISK_IMPORT="-format raw"
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
DISK_VAR="vm-${VMID}-disk-0${DISK_EXT:-}"

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -487,6 +487,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1,2}; do
disk="DISK$i"

View File

@@ -74,7 +74,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid

View File

@@ -48,7 +48,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -619,6 +619,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -71,7 +71,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -500,6 +500,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1,2}; do
disk="DISK$i"

View File

@@ -79,7 +79,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -402,6 +402,11 @@ nfs | dir)
DISK_REF="$VMID/"
DISK_IMPORT="-format qcow2"
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -66,7 +66,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -482,6 +482,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -69,7 +69,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -484,6 +484,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -68,7 +68,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -483,6 +483,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -69,7 +69,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}