Mirror of https://github.com/community-scripts/ProxmoxVE.git, synced 2026-02-16 10:13:26 +01:00

Compare commits: 1 commit on tremor021-.../fix/uv-ven

| Author | SHA1 | Date |
|---|---|---|
|  | 588f459a0d |  |
.github/changelogs/2026/02.md (generated, vendored): 216 lines changed
@@ -1,219 +1,3 @@

## 2026-02-14

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Increase disk allocation for OpenWebUI and Ollama to prevent installation failures [@Copilot](https://github.com/Copilot) ([#11920](https://github.com/community-scripts/ProxmoxVE/pull/11920))

### 💾 Core
- #### 🐞 Bug Fixes
- core: handle missing RAM speed in nested VMs [@MickLesk](https://github.com/MickLesk) ([#11913](https://github.com/community-scripts/ProxmoxVE/pull/11913))
- #### ✨ New Features
- core: overwriteable app version [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11753](https://github.com/community-scripts/ProxmoxVE/pull/11753))
- core: validate container IDs cluster-wide across all nodes [@MickLesk](https://github.com/MickLesk) ([#11906](https://github.com/community-scripts/ProxmoxVE/pull/11906))
- core: improve error reporting with structured error strings and better categorization + output formatting [@MickLesk](https://github.com/MickLesk) ([#11907](https://github.com/community-scripts/ProxmoxVE/pull/11907))
- core: unified logging system with combined logs [@MickLesk](https://github.com/MickLesk) ([#11761](https://github.com/community-scripts/ProxmoxVE/pull/11761))

### 🧰 Tools
- lxc-updater: add patchmon aware [@failure101](https://github.com/failure101) ([#11905](https://github.com/community-scripts/ProxmoxVE/pull/11905))

### 🌐 Website
- #### 📝 Script Information
- Disable UniFi script - APT packages no longer available [@Copilot](https://github.com/Copilot) ([#11898](https://github.com/community-scripts/ProxmoxVE/pull/11898))

## 2026-02-13

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- OpenWebUI: pin numba constraint [@MickLesk](https://github.com/MickLesk) ([#11874](https://github.com/community-scripts/ProxmoxVE/pull/11874))
- Planka: add migrate step to update function [@ZimmermannLeon](https://github.com/ZimmermannLeon) ([#11877](https://github.com/community-scripts/ProxmoxVE/pull/11877))
- Pangolin: switch sqlite-specific back to generic [@MickLesk](https://github.com/MickLesk) ([#11868](https://github.com/community-scripts/ProxmoxVE/pull/11868))
- [Hotfix] Jotty: Copy contents of config backup into /opt/jotty/config [@vhsdream](https://github.com/vhsdream) ([#11864](https://github.com/community-scripts/ProxmoxVE/pull/11864))
- #### 🔧 Refactor
- Refactor: Radicale [@vhsdream](https://github.com/vhsdream) ([#11850](https://github.com/community-scripts/ProxmoxVE/pull/11850))
- chore(donetick): add config entry for v0.1.73 [@tomfrenzel](https://github.com/tomfrenzel) ([#11872](https://github.com/community-scripts/ProxmoxVE/pull/11872))

### 💾 Core
- #### 🔧 Refactor
- core: retry reporting with fallback payloads [@MickLesk](https://github.com/MickLesk) ([#11885](https://github.com/community-scripts/ProxmoxVE/pull/11885))

### 📡 API
- #### ✨ New Features
- error-handler: Implement json_escape and enhance error handling [@MickLesk](https://github.com/MickLesk) ([#11875](https://github.com/community-scripts/ProxmoxVE/pull/11875))

### 🌐 Website
- #### 📝 Script Information
- SQLServer-2025: add PVE9/Kernel 6.x incompatibility warning [@MickLesk](https://github.com/MickLesk) ([#11829](https://github.com/community-scripts/ProxmoxVE/pull/11829))

## 2026-02-12

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- EMQX: increase disk to 6GB and add optional MQ disable prompt [@MickLesk](https://github.com/MickLesk) ([#11844](https://github.com/community-scripts/ProxmoxVE/pull/11844))
- Increased the Grafana container default disk size. [@shtefko](https://github.com/shtefko) ([#11840](https://github.com/community-scripts/ProxmoxVE/pull/11840))
- Pangolin: Update database generation command in install script [@tremor021](https://github.com/tremor021) ([#11825](https://github.com/community-scripts/ProxmoxVE/pull/11825))
- Deluge: add python3-setuptools as dep [@MickLesk](https://github.com/MickLesk) ([#11833](https://github.com/community-scripts/ProxmoxVE/pull/11833))
- Dispatcharr: migrate to uv sync [@MickLesk](https://github.com/MickLesk) ([#11831](https://github.com/community-scripts/ProxmoxVE/pull/11831))
- #### ✨ New Features
- Archlinux-VM: fix LVM/LVM-thin storage and improve error reporting | VM's add correct exit_code for analytics [@MickLesk](https://github.com/MickLesk) ([#11842](https://github.com/community-scripts/ProxmoxVE/pull/11842))
- Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection [@MickLesk](https://github.com/MickLesk) ([#11810](https://github.com/community-scripts/ProxmoxVE/pull/11810))

### 💾 Core
- #### ✨ New Features
- tools.func: auto-detect binary vs armored GPG keys in setup_deb822_repo [@MickLesk](https://github.com/MickLesk) ([#11841](https://github.com/community-scripts/ProxmoxVE/pull/11841))
- core: remove old Go API and extend misc/api.func with new backend [@MickLesk](https://github.com/MickLesk) ([#11822](https://github.com/community-scripts/ProxmoxVE/pull/11822))
- #### 🔧 Refactor
- error_handler: prevent stuck 'installing' status [@MickLesk](https://github.com/MickLesk) ([#11845](https://github.com/community-scripts/ProxmoxVE/pull/11845))

### 🧰 Tools
- #### 🐞 Bug Fixes
- Tailscale: fix DNS check and keyrings directory issues [@MickLesk](https://github.com/MickLesk) ([#11837](https://github.com/community-scripts/ProxmoxVE/pull/11837))

## 2026-02-11

### 🆕 New Scripts
- Draw.io ([#11788](https://github.com/community-scripts/ProxmoxVE/pull/11788))

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- dispatcharr: include port 9191 in success-message [@MickLesk](https://github.com/MickLesk) ([#11808](https://github.com/community-scripts/ProxmoxVE/pull/11808))
- fix: make donetick 0.1.71 compatible [@tomfrenzel](https://github.com/tomfrenzel) ([#11804](https://github.com/community-scripts/ProxmoxVE/pull/11804))
- Kasm: Support new version URL format without hash suffix [@MickLesk](https://github.com/MickLesk) ([#11787](https://github.com/community-scripts/ProxmoxVE/pull/11787))
- LibreTranslate: Remove Torch [@tremor021](https://github.com/tremor021) ([#11783](https://github.com/community-scripts/ProxmoxVE/pull/11783))
- Snowshare: fix update script [@TuroYT](https://github.com/TuroYT) ([#11726](https://github.com/community-scripts/ProxmoxVE/pull/11726))
- #### ✨ New Features
- [Feature] OpenCloud: support PosixFS Collaborative Mode [@vhsdream](https://github.com/vhsdream) ([#11806](https://github.com/community-scripts/ProxmoxVE/pull/11806))

### 💾 Core
- #### 🔧 Refactor
- core: respect EDITOR variable for config editing [@ls-root](https://github.com/ls-root) ([#11693](https://github.com/community-scripts/ProxmoxVE/pull/11693))

### 📚 Documentation
- Fix formatting in kutt.json notes section [@tiagodenoronha](https://github.com/tiagodenoronha) ([#11774](https://github.com/community-scripts/ProxmoxVE/pull/11774))

## 2026-02-10

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- Immich: Pin version to 2.5.6 [@vhsdream](https://github.com/vhsdream) ([#11775](https://github.com/community-scripts/ProxmoxVE/pull/11775))
- Libretranslate: Fix setuptools [@tremor021](https://github.com/tremor021) ([#11772](https://github.com/community-scripts/ProxmoxVE/pull/11772))
- Element Synapse: prevent systemd invoke failure during apt install [@MickLesk](https://github.com/MickLesk) ([#11758](https://github.com/community-scripts/ProxmoxVE/pull/11758))
- #### ✨ New Features
- Refactor: Slskd & Soularr [@vhsdream](https://github.com/vhsdream) ([#11674](https://github.com/community-scripts/ProxmoxVE/pull/11674))

### 🗑️ Deleted Scripts
- move paperless-exporter from LXC to addon ([#11737](https://github.com/community-scripts/ProxmoxVE/pull/11737))

### 🧰 Tools
- #### 🐞 Bug Fixes
- feat: improve storage parsing & add guestname [@carlosmaroot](https://github.com/carlosmaroot) ([#11752](https://github.com/community-scripts/ProxmoxVE/pull/11752))

### 📂 Github
- Github-Version Workflow: include addon scripts in extraction [@MickLesk](https://github.com/MickLesk) ([#11757](https://github.com/community-scripts/ProxmoxVE/pull/11757))

### 🌐 Website
- #### 📝 Script Information
- Snowshare: fix typo in config file path on website [@BirdMakingStuff](https://github.com/BirdMakingStuff) ([#11754](https://github.com/community-scripts/ProxmoxVE/pull/11754))

## 2026-02-09

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- several scripts: add --clear to uv venv calls for uv 0.10 compatibility [@MickLesk](https://github.com/MickLesk) ([#11723](https://github.com/community-scripts/ProxmoxVE/pull/11723))
- Koillection: ensure setup_composer is in update script [@MickLesk](https://github.com/MickLesk) ([#11734](https://github.com/community-scripts/ProxmoxVE/pull/11734))
- PeaNUT: symlink server.js after update [@vhsdream](https://github.com/vhsdream) ([#11696](https://github.com/community-scripts/ProxmoxVE/pull/11696))
- Umlautadaptarr: use release appsettings.json instead of hardcoded copy [@MickLesk](https://github.com/MickLesk) ([#11725](https://github.com/community-scripts/ProxmoxVE/pull/11725))
- tracearr: prepare for next stable release [@durzo](https://github.com/durzo) ([#11673](https://github.com/community-scripts/ProxmoxVE/pull/11673))
- #### ✨ New Features
- remove whiptail from update scripts for unattended update support [@MickLesk](https://github.com/MickLesk) ([#11712](https://github.com/community-scripts/ProxmoxVE/pull/11712))
- #### 🔧 Refactor
- Refactor: FileFlows [@tremor021](https://github.com/tremor021) ([#11108](https://github.com/community-scripts/ProxmoxVE/pull/11108))
- Refactor: wger [@MickLesk](https://github.com/MickLesk) ([#11722](https://github.com/community-scripts/ProxmoxVE/pull/11722))
- Nginx-UI: better User Handling | ACME [@MickLesk](https://github.com/MickLesk) ([#11715](https://github.com/community-scripts/ProxmoxVE/pull/11715))
- NginxProxymanager: use better-sqlite3 [@MickLesk](https://github.com/MickLesk) ([#11708](https://github.com/community-scripts/ProxmoxVE/pull/11708))

### 💾 Core
- #### 🔧 Refactor
- hwaccel: add libmfx-gen1.2 to Intel Arc setup for QSV support [@MickLesk](https://github.com/MickLesk) ([#11707](https://github.com/community-scripts/ProxmoxVE/pull/11707))

### 🧰 Tools
- #### 🐞 Bug Fixes
- addons: ensure curl is installed before use [@MickLesk](https://github.com/MickLesk) ([#11718](https://github.com/community-scripts/ProxmoxVE/pull/11718))
- Netbird (addon): add systemd ordering to start after Docker [@MickLesk](https://github.com/MickLesk) ([#11716](https://github.com/community-scripts/ProxmoxVE/pull/11716))

### ❔ Uncategorized
- Bichon: Update website [@tremor021](https://github.com/tremor021) ([#11711](https://github.com/community-scripts/ProxmoxVE/pull/11711))

## 2026-02-08

### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- feat(healthchecks): add sendalerts service [@Mika56](https://github.com/Mika56) ([#11694](https://github.com/community-scripts/ProxmoxVE/pull/11694))
- ComfyUI: Dynamic Fetch PyTorch Versions [@MickLesk](https://github.com/MickLesk) ([#11657](https://github.com/community-scripts/ProxmoxVE/pull/11657))
- #### 💥 Breaking Changes
- Semaphore: switch from Debian to Ubuntu 24.04 [@MickLesk](https://github.com/MickLesk) ([#11670](https://github.com/community-scripts/ProxmoxVE/pull/11670))

## 2026-02-07

### 🆕 New Scripts
.github/workflows/update-versions-github.yml (generated, vendored): 12 lines changed
@@ -89,15 +89,9 @@ jobs:
slug=$(jq -r '.slug // empty' "$json_file" 2>/dev/null)
[[ -z "$slug" ]] && continue

# Find corresponding script (install script or addon script)
install_script=""
if [[ -f "install/${slug}-install.sh" ]]; then
  install_script="install/${slug}-install.sh"
elif [[ -f "tools/addon/${slug}.sh" ]]; then
  install_script="tools/addon/${slug}.sh"
else
  continue
fi
# Find corresponding install script
install_script="install/${slug}-install.sh"
[[ ! -f "$install_script" ]] && continue

# Look for fetch_and_deploy_gh_release calls
# Pattern: fetch_and_deploy_gh_release "app" "owner/repo" ["mode"] ["version"]
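For context, the step above maps a JSON slug to its script and then scans that script for `fetch_and_deploy_gh_release` calls. A minimal standalone sketch of that extraction logic is shown below; the `frontend/public/json` path and the `grep` pattern are illustrative assumptions, not the workflow's exact implementation.

    #!/usr/bin/env bash
    # Hypothetical sketch: resolve slug -> script, then pull "owner/repo" from
    # fetch_and_deploy_gh_release calls. Paths and the regex are assumptions.
    for json_file in frontend/public/json/*.json; do   # assumed location of the app JSON files
      slug=$(jq -r '.slug // empty' "$json_file" 2>/dev/null)
      [[ -z "$slug" ]] && continue

      if [[ -f "install/${slug}-install.sh" ]]; then
        script="install/${slug}-install.sh"
      elif [[ -f "tools/addon/${slug}.sh" ]]; then
        script="tools/addon/${slug}.sh"
      else
        continue
      fi

      # The second quoted argument of fetch_and_deploy_gh_release is the GitHub repo.
      repo=$(grep -oP 'fetch_and_deploy_gh_release\s+"[^"]+"\s+"\K[^"]+' "$script" | head -n1)
      [[ -n "$repo" ]] && echo "${slug}: ${repo}"
    done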
CHANGELOG.md: 365 lines changed
@@ -15,9 +15,6 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit

<details>
<summary><h2>📜 History</h2></summary>

@@ -27,7 +24,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit

<details>
<summary><h4>February (14 entries)</h4></summary>
<summary><h4>February (7 entries)</h4></summary>

[View February 2026 Changelog](.github/changelogs/2026/02.md)

@@ -404,217 +401,19 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit

</details>

## 2026-02-15

### 🆕 New Scripts
- add: seer script and migrations [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11930](https://github.com/community-scripts/ProxmoxVE/pull/11930))

### 🚀 Updated Scripts
- #### 💥 Breaking Changes
- Refactor: Patchmon [@vhsdream](https://github.com/vhsdream) ([#11888](https://github.com/community-scripts/ProxmoxVE/pull/11888))
(The rest of this hunk repeats the 2026-02-14 through 2026-02-09 changelog entries already shown above in the .github/changelogs/2026/02.md diff.)
@@ -1367,4 +1166,162 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
|
||||
|
||||
### ❔ Uncategorized
|
||||
|
||||
- qui: fix: category [@CrazyWolf13](https://github.com/CrazyWolf13) ([#10847](https://github.com/community-scripts/ProxmoxVE/pull/10847))
|
||||
- qui: fix: category [@CrazyWolf13](https://github.com/CrazyWolf13) ([#10847](https://github.com/community-scripts/ProxmoxVE/pull/10847))
|
||||
|
||||
## 2026-01-15
|
||||
|
||||
### 🆕 New Scripts
|
||||
|
||||
- Qui ([#10829](https://github.com/community-scripts/ProxmoxVE/pull/10829))
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- Refactor: FreshRSS + Bump to Debian 13 [@MickLesk](https://github.com/MickLesk) ([#10824](https://github.com/community-scripts/ProxmoxVE/pull/10824))
|
||||
|
||||
## 2026-01-14
|
||||
|
||||
### 🆕 New Scripts
|
||||
|
||||
- Kutt ([#10812](https://github.com/community-scripts/ProxmoxVE/pull/10812))
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- Switch Ollama install to .tar.zst and add zstd dependency [@MickLesk](https://github.com/MickLesk) ([#10814](https://github.com/community-scripts/ProxmoxVE/pull/10814))
|
||||
- Immich: Install libde265-dev from Debian Testing [@vhsdream](https://github.com/vhsdream) ([#10810](https://github.com/community-scripts/ProxmoxVE/pull/10810))
|
||||
- nginxproxymanager: allow updates now the build is fixed [@durzo](https://github.com/durzo) ([#10796](https://github.com/community-scripts/ProxmoxVE/pull/10796))
|
||||
- Fixed Apache Guacamole installer [@horvatbenjamin](https://github.com/horvatbenjamin) ([#10798](https://github.com/community-scripts/ProxmoxVE/pull/10798))
|
||||
|
||||
### 💾 Core
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- core: Improve NVIDIA GPU setup (5000x Series) [@MickLesk](https://github.com/MickLesk) ([#10807](https://github.com/community-scripts/ProxmoxVE/pull/10807))
|
||||
|
||||
### 🧰 Tools
|
||||
|
||||
- Fix whiptail dialog hanging in Proxmox web console [@comk22](https://github.com/comk22) ([#10794](https://github.com/community-scripts/ProxmoxVE/pull/10794))
|
||||
|
||||
### 🌐 Website
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- Add search filtering to CommandDialog for improved script search functionality [@BramSuurdje](https://github.com/BramSuurdje) ([#10800](https://github.com/community-scripts/ProxmoxVE/pull/10800))
|
||||
|
||||
## 2026-01-13
|
||||
|
||||
### 🆕 New Scripts
|
||||
|
||||
- Investbrain ([#10774](https://github.com/community-scripts/ProxmoxVE/pull/10774))
|
||||
- Fladder ([#10768](https://github.com/community-scripts/ProxmoxVE/pull/10768))
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- Immich: Fix Intel version check; install legacy Intel packages during new install [@vhsdream](https://github.com/vhsdream) ([#10787](https://github.com/community-scripts/ProxmoxVE/pull/10787))
|
||||
- Openwrt: Remove default VLAN for LAN [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#10782](https://github.com/community-scripts/ProxmoxVE/pull/10782))
|
||||
- Refactor: Joplin Server [@tremor021](https://github.com/tremor021) ([#10769](https://github.com/community-scripts/ProxmoxVE/pull/10769))
|
||||
- Fix Zammad nginx configuration causing installation failure [@Copilot](https://github.com/Copilot) ([#10757](https://github.com/community-scripts/ProxmoxVE/pull/10757))
|
||||
|
||||
- #### 🔧 Refactor
|
||||
|
||||
- Backrest: Bump to Trixie [@tremor021](https://github.com/tremor021) ([#10758](https://github.com/community-scripts/ProxmoxVE/pull/10758))
|
||||
- Refactor: Caddy [@tremor021](https://github.com/tremor021) ([#10759](https://github.com/community-scripts/ProxmoxVE/pull/10759))
|
||||
- Refactor: Leantime [@tremor021](https://github.com/tremor021) ([#10760](https://github.com/community-scripts/ProxmoxVE/pull/10760))
|
||||
|
||||
### 🧰 Tools
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- update_lxcs.sh: Add the option to skip stopped LXC [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#10783](https://github.com/community-scripts/ProxmoxVE/pull/10783))
|
||||
|
||||
## 2026-01-12
|
||||
|
||||
### 🆕 New Scripts
|
||||
|
||||
- Jellystat ([#10628](https://github.com/community-scripts/ProxmoxVE/pull/10628))
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- InfluxSB: fix If / fi [@chrnie](https://github.com/chrnie) ([#10753](https://github.com/community-scripts/ProxmoxVE/pull/10753))
|
||||
- Cockpit: Downgrade to Debian 12 Bookworm (45Drives Issue) [@MickLesk](https://github.com/MickLesk) ([#10717](https://github.com/community-scripts/ProxmoxVE/pull/10717))
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- InfluxDB: add setup for influxdb v3 [@victorlap](https://github.com/victorlap) ([#10736](https://github.com/community-scripts/ProxmoxVE/pull/10736))
|
||||
- Apache Guacamole: add schema upgrades and extension updates [@MickLesk](https://github.com/MickLesk) ([#10746](https://github.com/community-scripts/ProxmoxVE/pull/10746))
|
||||
- Apache Tomcat: update support and refactor install script + debian 13 [@MickLesk](https://github.com/MickLesk) ([#10739](https://github.com/community-scripts/ProxmoxVE/pull/10739))
|
||||
- Apache Guacamole: Function Bump + update_script [@MickLesk](https://github.com/MickLesk) ([#10728](https://github.com/community-scripts/ProxmoxVE/pull/10728))
|
||||
- Apache CouchDB: bump to debian 13 and add update support [@MickLesk](https://github.com/MickLesk) ([#10721](https://github.com/community-scripts/ProxmoxVE/pull/10721))
|
||||
- Apache Cassandra: bump to debian 13 and add update support [@MickLesk](https://github.com/MickLesk) ([#10720](https://github.com/community-scripts/ProxmoxVE/pull/10720))
|
||||
|
||||
- #### 🔧 Refactor
|
||||
|
||||
- Refactor: Booklore [@MickLesk](https://github.com/MickLesk) ([#10742](https://github.com/community-scripts/ProxmoxVE/pull/10742))
|
||||
- Bump Argus to Debian 13 [@MickLesk](https://github.com/MickLesk) ([#10718](https://github.com/community-scripts/ProxmoxVE/pull/10718))
|
||||
- Refactor Docker/Dockge & Bump to Debian 13 [@MickLesk](https://github.com/MickLesk) ([#10719](https://github.com/community-scripts/ProxmoxVE/pull/10719))
|
||||
|
||||
### 💾 Core
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- core: remove duplicated pve_version in advanced installs [@MickLesk](https://github.com/MickLesk) ([#10743](https://github.com/community-scripts/ProxmoxVE/pull/10743))
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- core: add storage validation & fix GB/MB display [@MickLesk](https://github.com/MickLesk) ([#10745](https://github.com/community-scripts/ProxmoxVE/pull/10745))
|
||||
- core: validate container ID before pct create to prevent failures [@MickLesk](https://github.com/MickLesk) ([#10729](https://github.com/community-scripts/ProxmoxVE/pull/10729))
|
||||
|
||||
- #### 🔧 Refactor
|
||||
|
||||
- Enforce non-interactive apt mode in DB setup scripts [@MickLesk](https://github.com/MickLesk) ([#10714](https://github.com/community-scripts/ProxmoxVE/pull/10714))
|
||||
|
||||
## 2026-01-11
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- Fix Invoice Ninja Error 500 by restoring file ownership after artisan commands [@Copilot](https://github.com/Copilot) ([#10709](https://github.com/community-scripts/ProxmoxVE/pull/10709))
|
||||
|
||||
- #### 🔧 Refactor
|
||||
|
||||
- Refactor: Infisical [@tremor021](https://github.com/tremor021) ([#10693](https://github.com/community-scripts/ProxmoxVE/pull/10693))
|
||||
- Refactor: HortusFox [@tremor021](https://github.com/tremor021) ([#10697](https://github.com/community-scripts/ProxmoxVE/pull/10697))
|
||||
- Refactor: Homer [@tremor021](https://github.com/tremor021) ([#10698](https://github.com/community-scripts/ProxmoxVE/pull/10698))
|
||||
|
||||
## 2026-01-10
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- [Endurain] Increase default RAM from 2048 to 4096 [@FutureCow](https://github.com/FutureCow) ([#10690](https://github.com/community-scripts/ProxmoxVE/pull/10690))
|
||||
|
||||
### 💾 Core
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- tools.func: hwaccel - make beignet-opencl-icd optional for legacy Intel GPUs [@MickLesk](https://github.com/MickLesk) ([#10677](https://github.com/community-scripts/ProxmoxVE/pull/10677))
|
||||
|
||||
## 2026-01-09
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- Jenkins: Fix application repository setup [@tremor021](https://github.com/tremor021) ([#10671](https://github.com/community-scripts/ProxmoxVE/pull/10671))
|
||||
- deCONZ: Fix sources check in update script [@tremor021](https://github.com/tremor021) ([#10664](https://github.com/community-scripts/ProxmoxVE/pull/10664))
|
||||
- Remove '--cpu' option from ExecStart command [@sethgregory](https://github.com/sethgregory) ([#10659](https://github.com/community-scripts/ProxmoxVE/pull/10659))
|
||||
|
||||
### 💾 Core
|
||||
|
||||
- #### 💥 Breaking Changes
|
||||
|
||||
- fix: setup_mariadb hangs on [@CrazyWolf13](https://github.com/CrazyWolf13) ([#10672](https://github.com/community-scripts/ProxmoxVE/pull/10672))
|
||||
api/.env.example (normal file): 5 lines
@@ -0,0 +1,5 @@
MONGO_USER=
MONGO_PASSWORD=
MONGO_IP=
MONGO_PORT=
MONGO_DATABASE=
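These variables are read at startup by the API's `loadEnv()` / `godotenv.Load()` call in `api/main.go` below and assembled into a MongoDB connection string. A filled-in `.env` might look like the following sketch; every value here is a made-up placeholder for illustration, not a real deployment setting.

    # Hypothetical example values only; substitute your own MongoDB credentials.
    MONGO_USER=apiuser
    MONGO_PASSWORD=change-me
    MONGO_IP=127.0.0.1
    MONGO_PORT=27017
    MONGO_DATABASE=proxmox_stats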
api/go.mod (normal file): 23 lines
@@ -0,0 +1,23 @@
module proxmox-api

go 1.24.0

require (
    github.com/gorilla/mux v1.8.1
    github.com/joho/godotenv v1.5.1
    github.com/rs/cors v1.11.1
    go.mongodb.org/mongo-driver v1.17.2
)

require (
    github.com/golang/snappy v0.0.4 // indirect
    github.com/klauspost/compress v1.16.7 // indirect
    github.com/montanaflynn/stats v0.7.1 // indirect
    github.com/xdg-go/pbkdf2 v1.0.0 // indirect
    github.com/xdg-go/scram v1.1.2 // indirect
    github.com/xdg-go/stringprep v1.0.4 // indirect
    github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
    golang.org/x/crypto v0.45.0 // indirect
    golang.org/x/sync v0.18.0 // indirect
    golang.org/x/text v0.31.0 // indirect
)
api/go.sum (normal file): 56 lines
@@ -0,0 +1,56 @@
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
|
||||
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
|
||||
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
|
||||
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
|
||||
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
|
||||
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
|
||||
github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
|
||||
github.com/klauspost/compress v1.16.7/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
|
||||
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
|
||||
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
|
||||
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
|
||||
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
|
||||
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
|
||||
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
|
||||
github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
|
||||
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
|
||||
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
|
||||
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
|
||||
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
|
||||
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
|
||||
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
||||
go.mongodb.org/mongo-driver v1.17.2 h1:gvZyk8352qSfzyZ2UMWcpDpMSGEr1eqE4T793SqyhzM=
|
||||
go.mongodb.org/mongo-driver v1.17.2/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
|
||||
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
|
||||
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
|
||||
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
|
||||
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
api/main.go (normal file): 450 lines
@@ -0,0 +1,450 @@
// Copyright (c) 2021-2026 community-scripts ORG
// Author: Michel Roegl-Brunner (michelroegl-brunner)
// License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
    "strconv"
    "time"

    "github.com/gorilla/mux"
    "github.com/joho/godotenv"
    "github.com/rs/cors"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

var client *mongo.Client
var collection *mongo.Collection

func loadEnv() {
    if err := godotenv.Load(); err != nil {
        log.Fatal("Error loading .env file")
    }
}

// DataModel represents a single document in MongoDB
type DataModel struct {
    ID         primitive.ObjectID `json:"id" bson:"_id,omitempty"`
    CT_TYPE    uint               `json:"ct_type" bson:"ct_type"`
    DISK_SIZE  float32            `json:"disk_size" bson:"disk_size"`
    CORE_COUNT uint               `json:"core_count" bson:"core_count"`
    RAM_SIZE   uint               `json:"ram_size" bson:"ram_size"`
    OS_TYPE    string             `json:"os_type" bson:"os_type"`
    OS_VERSION string             `json:"os_version" bson:"os_version"`
    DISABLEIP6 string             `json:"disableip6" bson:"disableip6"`
    NSAPP      string             `json:"nsapp" bson:"nsapp"`
    METHOD     string             `json:"method" bson:"method"`
    CreatedAt  time.Time          `json:"created_at" bson:"created_at"`
    PVEVERSION string             `json:"pve_version" bson:"pve_version"`
    STATUS     string             `json:"status" bson:"status"`
    RANDOM_ID  string             `json:"random_id" bson:"random_id"`
    TYPE       string             `json:"type" bson:"type"`
    ERROR      string             `json:"error" bson:"error"`
}

type StatusModel struct {
    RANDOM_ID string `json:"random_id" bson:"random_id"`
    ERROR     string `json:"error" bson:"error"`
    STATUS    string `json:"status" bson:"status"`
}

type CountResponse struct {
    TotalEntries int64            `json:"total_entries"`
    StatusCount  map[string]int64 `json:"status_count"`
    NSAPPCount   map[string]int64 `json:"nsapp_count"`
}

// ConnectDatabase initializes the MongoDB connection
func ConnectDatabase() {
    loadEnv()

    mongoURI := fmt.Sprintf("mongodb://%s:%s@%s:%s",
        os.Getenv("MONGO_USER"),
        os.Getenv("MONGO_PASSWORD"),
        os.Getenv("MONGO_IP"),
        os.Getenv("MONGO_PORT"))

    database := os.Getenv("MONGO_DATABASE")
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    var err error
    client, err = mongo.Connect(ctx, options.Client().ApplyURI(mongoURI))
    if err != nil {
        log.Fatal("Failed to connect to MongoDB!", err)
    }
    collection = client.Database(database).Collection("data_models")
    fmt.Println("Connected to MongoDB on 10.10.10.18")
}

// UploadJSON handles API requests and stores data as a document in MongoDB
func UploadJSON(w http.ResponseWriter, r *http.Request) {
    var input DataModel

    if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    input.CreatedAt = time.Now()

    _, err := collection.InsertOne(context.Background(), input)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    log.Println("Received data:", input)
    w.WriteHeader(http.StatusCreated)
    json.NewEncoder(w).Encode(map[string]string{"message": "Data saved successfully"})
}

// UpdateStatus updates the status of a record based on RANDOM_ID
func UpdateStatus(w http.ResponseWriter, r *http.Request) {
    var input StatusModel

    if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    filter := bson.M{"random_id": input.RANDOM_ID}
    update := bson.M{"$set": bson.M{"status": input.STATUS, "error": input.ERROR}}

    _, err := collection.UpdateOne(context.Background(), filter, update)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    log.Println("Updated data:", input)
    w.WriteHeader(http.StatusOK)
    json.NewEncoder(w).Encode(map[string]string{"message": "Record updated successfully"})
}

// GetDataJSON fetches all data from MongoDB
func GetDataJSON(w http.ResponseWriter, r *http.Request) {
    var records []DataModel
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    cursor, err := collection.Find(ctx, bson.M{})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        records = append(records, record)
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(records)
}

func GetPaginatedData(w http.ResponseWriter, r *http.Request) {
    page, _ := strconv.Atoi(r.URL.Query().Get("page"))
    limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
    if page < 1 {
        page = 1
    }
    if limit < 1 {
        limit = 10
    }
    skip := (page - 1) * limit
    var records []DataModel
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    options := options.Find().SetSkip(int64(skip)).SetLimit(int64(limit))
    cursor, err := collection.Find(ctx, bson.M{}, options)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        records = append(records, record)
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(records)
}

func GetSummary(w http.ResponseWriter, r *http.Request) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    totalCount, err := collection.CountDocuments(ctx, bson.M{})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    statusCount := make(map[string]int64)
    nsappCount := make(map[string]int64)

    pipeline := []bson.M{
        {"$group": bson.M{"_id": "$status", "count": bson.M{"$sum": 1}}},
    }
    cursor, err := collection.Aggregate(ctx, pipeline)
    if err == nil {
        for cursor.Next(ctx) {
            var result struct {
                ID    string `bson:"_id"`
                Count int64  `bson:"count"`
            }
            if err := cursor.Decode(&result); err == nil {
                statusCount[result.ID] = result.Count
            }
        }
    }

    pipeline = []bson.M{
        {"$group": bson.M{"_id": "$nsapp", "count": bson.M{"$sum": 1}}},
    }
    cursor, err = collection.Aggregate(ctx, pipeline)
    if err == nil {
        for cursor.Next(ctx) {
            var result struct {
                ID    string `bson:"_id"`
                Count int64  `bson:"count"`
            }
            if err := cursor.Decode(&result); err == nil {
                nsappCount[result.ID] = result.Count
            }
        }
    }

    response := CountResponse{
        TotalEntries: totalCount,
        StatusCount:  statusCount,
        NSAPPCount:   nsappCount,
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func GetByNsapp(w http.ResponseWriter, r *http.Request) {
    nsapp := r.URL.Query().Get("nsapp")
    var records []DataModel
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    cursor, err := collection.Find(ctx, bson.M{"nsapp": nsapp})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        records = append(records, record)
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(records)
}

func GetByDateRange(w http.ResponseWriter, r *http.Request) {

    startDate := r.URL.Query().Get("start_date")
    endDate := r.URL.Query().Get("end_date")

    if startDate == "" || endDate == "" {
        http.Error(w, "Both start_date and end_date are required", http.StatusBadRequest)
        return
    }

    start, err := time.Parse("2006-01-02T15:04:05.999999+00:00", startDate+"T00:00:00+00:00")
    if err != nil {
        http.Error(w, "Invalid start_date format", http.StatusBadRequest)
        return
    }

    end, err := time.Parse("2006-01-02T15:04:05.999999+00:00", endDate+"T23:59:59+00:00")
    if err != nil {
        http.Error(w, "Invalid end_date format", http.StatusBadRequest)
        return
    }

    var records []DataModel
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    cursor, err := collection.Find(ctx, bson.M{
        "created_at": bson.M{
            "$gte": start,
            "$lte": end,
        },
    })
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        records = append(records, record)
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(records)
}

func GetByStatus(w http.ResponseWriter, r *http.Request) {
    status := r.URL.Query().Get("status")
    var records []DataModel
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    cursor, err := collection.Find(ctx, bson.M{"status": status})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        records = append(records, record)
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(records)
}

func GetByOS(w http.ResponseWriter, r *http.Request) {
    osType := r.URL.Query().Get("os_type")
    osVersion := r.URL.Query().Get("os_version")
    var records []DataModel
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    cursor, err := collection.Find(ctx, bson.M{"os_type": osType, "os_version": osVersion})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        records = append(records, record)
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(records)
}

func GetErrors(w http.ResponseWriter, r *http.Request) {
    errorCount := make(map[string]int)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    cursor, err := collection.Find(ctx, bson.M{"error": bson.M{"$ne": ""}})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer cursor.Close(ctx)

    for cursor.Next(ctx) {
        var record DataModel
        if err := cursor.Decode(&record); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        if record.ERROR != "" {
            errorCount[record.ERROR]++
        }
    }

    type ErrorCountResponse struct {
        Error string `json:"error"`
        Count int    `json:"count"`
    }

    var errorCounts []ErrorCountResponse
    for err, count := range errorCount {
        errorCounts = append(errorCounts, ErrorCountResponse{
            Error: err,
            Count: count,
        })
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(struct {
        ErrorCounts []ErrorCountResponse `json:"error_counts"`
    }{
        ErrorCounts: errorCounts,
    })
}

func main() {
    ConnectDatabase()

    router := mux.NewRouter()
    router.HandleFunc("/upload", UploadJSON).Methods("POST")
    router.HandleFunc("/upload/updatestatus", UpdateStatus).Methods("POST")
    router.HandleFunc("/data/json", GetDataJSON).Methods("GET")
    router.HandleFunc("/data/paginated", GetPaginatedData).Methods("GET")
    router.HandleFunc("/data/summary", GetSummary).Methods("GET")
    router.HandleFunc("/data/nsapp", GetByNsapp).Methods("GET")
    router.HandleFunc("/data/date", GetByDateRange).Methods("GET")
    router.HandleFunc("/data/status", GetByStatus).Methods("GET")
    router.HandleFunc("/data/os", GetByOS).Methods("GET")
    router.HandleFunc("/data/errors", GetErrors).Methods("GET")

    c := cors.New(cors.Options{
        AllowedOrigins:   []string{"*"},
        AllowedMethods:   []string{"GET", "POST"},
        AllowedHeaders:   []string{"Content-Type", "Authorization"},
        AllowCredentials: true,
    })

    handler := c.Handler(router)

    fmt.Println("Server running on port 8080")
    log.Fatal(http.ListenAndServe(":8080", handler))
}
@@ -20,9 +20,28 @@ color
catch_errors

function update_script() {
  header_info
  $STD apk -U upgrade
  msg_ok "Updated successfully!"
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --menu "Select option" 11 58 1 \
        "1" "Check for Docker Updates" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      $STD apk -U upgrade
      msg_ok "Updated successfully!"
      exit
      ;;
    esac
  done
  exit 0
}

@@ -9,7 +9,7 @@ APP="Alpine-Grafana"
|
||||
var_tags="${var_tags:-alpine;monitoring}"
|
||||
var_cpu="${var_cpu:-1}"
|
||||
var_ram="${var_ram:-256}"
|
||||
var_disk="${var_disk:-2}"
|
||||
var_disk="${var_disk:-1}"
|
||||
var_os="${var_os:-alpine}"
|
||||
var_version="${var_version:-3.23}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
@@ -20,32 +20,43 @@ color
catch_errors

function update_script() {
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  LXCIP=$(ip a s dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1)

  CHOICE=$(msg_menu "Grafana Update Options" \
    "1" "Check for Grafana Updates" \
    "2" "Allow 0.0.0.0 for listening" \
    "3" "Allow only ${LXCIP} for listening")

  case $CHOICE in
  1)
    $STD apk -U upgrade
    msg_ok "Updated successfully!"
    exit
    ;;
  2)
    sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=0.0.0.0/g" /etc/conf.d/grafana
    service grafana restart
    msg_ok "Allowed listening on all interfaces!"
    exit
    ;;
  3)
    sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=$LXCIP/g" /etc/conf.d/grafana
    service grafana restart
    msg_ok "Allowed listening only on ${LXCIP}!"
    exit
    ;;
  esac
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --menu "Select option" 11 58 3 \
        "1" "Check for Grafana Updates" \
        "2" "Allow 0.0.0.0 for listening" \
        "3" "Allow only ${LXCIP} for listening" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      $STD apk -U upgrade
      msg_ok "Updated successfully!"
      exit
      ;;
    2)
      sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=0.0.0.0/g" /etc/conf.d/grafana
      service grafana restart
      msg_ok "Allowed listening on all interfaces!"
      exit
      ;;
    3)
      sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=$LXCIP/g" /etc/conf.d/grafana
      service grafana restart
      msg_ok "Allowed listening only on ${LXCIP}!"
      exit
      ;;
    esac
  done
  exit 0
}

@@ -20,32 +20,43 @@ color
catch_errors

function update_script() {
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  LXCIP=$(ip a s dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1)

  CHOICE=$(msg_menu "Loki Update Options" \
    "1" "Check for Loki Updates" \
    "2" "Allow 0.0.0.0 for listening" \
    "3" "Allow only ${LXCIP} for listening")

  case $CHOICE in
  1)
    $STD apk -U upgrade
    msg_ok "Updated successfully!"
    exit
    ;;
  2)
    sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=0.0.0.0/g" /etc/conf.d/loki
    service loki restart
    msg_ok "Allowed listening on all interfaces!"
    exit
    ;;
  3)
    sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=$LXCIP/g" /etc/conf.d/loki
    service loki restart
    msg_ok "Allowed listening only on ${LXCIP}!"
    exit
    ;;
  esac
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --menu "Select option" 11 58 3 \
        "1" "Check for Loki Updates" \
        "2" "Allow 0.0.0.0 for listening" \
        "3" "Allow only ${LXCIP} for listening" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      $STD apk -U upgrade
      msg_ok "Updated successfully!"
      exit
      ;;
    2)
      sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=0.0.0.0/g" /etc/conf.d/loki
      service loki restart
      msg_ok "Allowed listening on all interfaces!"
      exit
      ;;
    3)
      sed -i -e "s/cfg:server.http_addr=.*/cfg:server.http_addr=$LXCIP/g" /etc/conf.d/loki
      service loki restart
      msg_ok "Allowed listening only on ${LXCIP}!"
      exit
      ;;
    esac
  done
  exit 0
}

@@ -24,31 +24,33 @@ function update_script() {
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  CHOICE=$(msg_menu "Nextcloud Options" \
    "1" "Update Alpine Packages" \
    "2" "Nextcloud Login Credentials" \
    "3" "Renew Self-signed Certificate")

  case $CHOICE in
  1)
    msg_info "Updating Alpine Packages"
    $STD apk -U upgrade
    msg_ok "Updated Alpine Packages"
    msg_ok "Updated successfully!"
    exit
    ;;
  2)
    cat nextcloud.creds
    exit
    ;;
  3)
    openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/ssl/private/nextcloud-selfsigned.key -out /etc/ssl/certs/nextcloud-selfsigned.crt -subj "/C=US/O=Nextcloud/OU=Domain Control Validated/CN=nextcloud.local" >/dev/null 2>&1
    rc-service nginx restart
    msg_ok "Renewed self-signed certificate"
    exit
    ;;
  esac
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  while true; do
    CHOICE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 3 \
      "1" "Nextcloud Login Credentials" ON \
      "2" "Renew Self-signed Certificate" OFF \
      3>&1 1>&2 2>&3)
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      cat nextcloud.creds
      exit
      ;;
    2)
      openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/ssl/private/nextcloud-selfsigned.key -out /etc/ssl/certs/nextcloud-selfsigned.crt -subj "/C=US/O=Nextcloud/OU=Domain Control Validated/CN=nextcloud.local" >/dev/null 2>&1
      rc-service nginx restart
      exit
      ;;
    esac
  done
  exit 0
}

start

@@ -20,36 +20,47 @@ color
catch_errors

function update_script() {
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  LXCIP=$(ip a s dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1)

  CHOICE=$(msg_menu "Redis Management" \
    "1" "Update Redis" \
    "2" "Allow 0.0.0.0 for listening" \
    "3" "Allow only ${LXCIP} for listening")

  case $CHOICE in
  1)
    msg_info "Updating Redis"
    apk update && apk upgrade redis
    rc-service redis restart
    msg_ok "Updated successfully!"
    exit
    ;;
  2)
    msg_info "Setting Redis to listen on all interfaces"
    sed -i 's/^bind .*/bind 0.0.0.0/' /etc/redis.conf
    rc-service redis restart
    msg_ok "Redis now listens on all interfaces!"
    exit
    ;;
  3)
    msg_info "Setting Redis to listen only on ${LXCIP}"
    sed -i "s/^bind .*/bind ${LXCIP}/" /etc/redis.conf
    rc-service redis restart
    msg_ok "Redis now listens only on ${LXCIP}!"
    exit
    ;;
  esac
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "Redis Management" --menu "Select option" 11 58 3 \
        "1" "Update Redis" \
        "2" "Allow 0.0.0.0 for listening" \
        "3" "Allow only ${LXCIP} for listening" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      msg_info "Updating Redis"
      apk update && apk upgrade redis
      rc-service redis restart
      msg_ok "Updated successfully!"
      exit
      ;;
    2)
      msg_info "Setting Redis to listen on all interfaces"
      sed -i 's/^bind .*/bind 0.0.0.0/' /etc/redis.conf
      rc-service redis restart
      msg_ok "Redis now listens on all interfaces!"
      exit
      ;;
    3)
      msg_info "Setting Redis to listen only on ${LXCIP}"
      sed -i "s/^bind .*/bind ${LXCIP}/" /etc/redis.conf
      rc-service redis restart
      msg_ok "Redis now listens only on ${LXCIP}!"
      exit
      ;;
    esac
  done
}

start

@@ -20,37 +20,48 @@ color
catch_errors

function update_script() {
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  LXCIP=$(ip a s dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1)

  CHOICE=$(msg_menu "Valkey Management" \
    "1" "Update Valkey" \
    "2" "Allow 0.0.0.0 for listening" \
    "3" "Allow only ${LXCIP} for listening")

  case $CHOICE in
  1)
    msg_info "Updating Valkey"
    apk update && apk upgrade valkey
    rc-service valkey restart
    msg_ok "Updated Valkey"
    msg_ok "Updated successfully!"
    exit
    ;;
  2)
    msg_info "Setting Valkey to listen on all interfaces"
    sed -i 's/^bind .*/bind 0.0.0.0/' /etc/valkey/valkey.conf
    rc-service valkey restart
    msg_ok "Valkey now listens on all interfaces!"
    exit
    ;;
  3)
    msg_info "Setting Valkey to listen only on ${LXCIP}"
    sed -i "s/^bind .*/bind ${LXCIP}/" /etc/valkey/valkey.conf
    rc-service valkey restart
    msg_ok "Valkey now listens only on ${LXCIP}!"
    exit
    ;;
  esac
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "Valkey Management" --menu "Select option" 11 58 3 \
        "1" "Update Valkey" \
        "2" "Allow 0.0.0.0 for listening" \
        "3" "Allow only ${LXCIP} for listening" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      msg_info "Updating Valkey"
      apk update && apk upgrade valkey
      rc-service valkey restart
      msg_ok "Updated Valkey"
      msg_ok "Updated successfully!"
      exit
      ;;
    2)
      msg_info "Setting Valkey to listen on all interfaces"
      sed -i 's/^bind .*/bind 0.0.0.0/' /etc/valkey/valkey.conf
      rc-service valkey restart
      msg_ok "Valkey now listens on all interfaces!"
      exit
      ;;
    3)
      msg_info "Setting Valkey to listen only on ${LXCIP}"
      sed -i "s/^bind .*/bind ${LXCIP}/" /etc/valkey/valkey.conf
      rc-service valkey restart
      msg_ok "Valkey now listens only on ${LXCIP}!"
      exit
      ;;
    esac
  done
}

start

@@ -20,38 +20,45 @@ color
catch_errors

function update_script() {
  CHOICE=$(msg_menu "Vaultwarden Update Options" \
    "1" "Update Vaultwarden" \
    "2" "Reset ADMIN_TOKEN")

  case $CHOICE in
  1)
    $STD apk -U upgrade
    rc-service vaultwarden restart -q
    msg_ok "Updated successfully!"
    exit
    ;;
  2)
    if [[ "${PHS_SILENT:-0}" == "1" ]]; then
      msg_warn "Reset ADMIN_TOKEN requires interactive mode, skipping."
      exit
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --menu "Select option" 11 58 2 \
        "1" "Update Vaultwarden" \
        "2" "Reset ADMIN_TOKEN" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    read -r -s -p "Setup your ADMIN_TOKEN (make it strong): " NEWTOKEN
    echo ""
    if [[ -n "$NEWTOKEN" ]]; then
      if ! command -v argon2 >/dev/null 2>&1; then apk add argon2 &>/dev/null; fi
      TOKEN=$(echo -n "${NEWTOKEN}" | argon2 "$(openssl rand -base64 32)" -e -id -k 19456 -t 2 -p 1)
      if [[ ! -f /var/lib/vaultwarden/config.json ]]; then
        sed -i "s|export ADMIN_TOKEN=.*|export ADMIN_TOKEN='${TOKEN}'|" /etc/conf.d/vaultwarden
      else
        sed -i "s|\"admin_token\": .*|\"admin_token\": \"${TOKEN}\",|" /var/lib/vaultwarden/config.json
      fi
    header_info
    case $CHOICE in
    1)
      $STD apk -U upgrade
      rc-service vaultwarden restart -q
      msg_ok "Admin token updated"
    fi
    exit
    ;;
    esac
      msg_ok "Updated successfully!"
      exit
      ;;
    2)
      if NEWTOKEN=$(whiptail --backtitle "Proxmox VE Helper Scripts" --passwordbox "Setup your ADMIN_TOKEN (make it strong)" 10 58 3>&1 1>&2 2>&3); then
        if [[ -z "$NEWTOKEN" ]]; then exit-script; fi
        if ! command -v argon2 >/dev/null 2>&1; then apk add argon2 &>/dev/null; fi
        TOKEN=$(echo -n "${NEWTOKEN}" | argon2 "$(openssl rand -base64 32)" -e -id -k 19456 -t 2 -p 1)
        if [[ ! -f /var/lib/vaultwarden/config.json ]]; then
          sed -i "s|export ADMIN_TOKEN=.*|export ADMIN_TOKEN='${TOKEN}'|" /etc/conf.d/vaultwarden
        else
          sed -i "s|\"admin_token\": .*|\"admin_token\": \"${TOKEN}\",|" /var/lib/vaultwarden/config.json
        fi
        rc-service vaultwarden restart -q
      fi
      clear
      exit
      ;;
    esac
  done
}

start

@@ -20,10 +20,28 @@ color
catch_errors

function update_script() {
  header_info
  $STD apk -U upgrade
  msg_ok "Updated successfully!"
  exit 0
  if ! apk -e info newt >/dev/null 2>&1; then
    apk add -q newt
  fi
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --menu "Select option" 11 58 1 \
        "1" "Check for Zigbee2MQTT Updates" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      $STD apk -U upgrade
      msg_ok "Updated successfully!"
      exit
      ;;
    esac
  done
}

start

14 ct/alpine.sh
@@ -20,10 +20,18 @@ color
catch_errors

function update_script() {
  UPD=$(
    whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 1 \
      "1" "Check for Alpine Updates" ON \
      3>&1 1>&2 2>&3
  )

  header_info
  $STD apk -U upgrade
  msg_ok "Updated successfully!"
  exit 0
  if [ "$UPD" == "1" ]; then
    $STD apk -U upgrade
    msg_ok "Updated successfully!"
    exit 0
  fi
}

start

@@ -23,9 +23,10 @@ function update_script() {
  header_info
  check_container_storage
  check_container_resources
  UPD=$(msg_menu "Cronicle Update Options" \
    "1" "Update ${APP}" \
    "2" "Install ${APP} Worker")
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 2 \
    "1" "Update ${APP}" ON \
    "2" "Install ${APP} Worker" OFF \
    3>&1 1>&2 2>&3)

  if [ "$UPD" == "1" ]; then
    if [[ ! -d /opt/cronicle ]]; then

@@ -28,7 +28,6 @@ function update_script() {
    exit
  fi
  msg_info "Updating Deluge"
  ensure_dependencies python3-setuptools
  $STD apt update
  $STD pip3 install deluge[all] --upgrade
  msg_ok "Updated Deluge"

@@ -104,7 +104,7 @@ function update_script() {
    cd /opt/dispatcharr
    rm -rf .venv
    $STD uv venv --clear
    $STD uv sync
    $STD uv pip install -r requirements.txt --index-strategy unsafe-best-match
    $STD uv pip install gunicorn gevent celery redis daphne
    msg_ok "Updated Dispatcharr Backend"

@@ -144,4 +144,4 @@ description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:9191${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

@@ -35,15 +35,13 @@ function update_script() {
    msg_ok "Stopped Service"

    msg_info "Backing Up Configurations"
    mv /opt/donetick/config/selfhosted.yaml /opt/donetick/donetick.db /opt
    mv /opt/donetick/config/selfhosted.yml /opt/donetick/donetick.db /opt
    msg_ok "Backed Up Configurations"

    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "donetick" "donetick/donetick" "prebuild" "latest" "/opt/donetick" "donetick_Linux_x86_64.tar.gz"

    msg_info "Restoring Configurations"
    mv /opt/selfhosted.yaml /opt/donetick/config
    grep -q 'http://localhost"$' /opt/donetick/config/selfhosted.yaml || sed -i '/https:\/\/localhost"$/a\ - "http://localhost"' /opt/donetick/config/selfhosted.yaml
    grep -q 'capacitor://localhost' /opt/donetick/config/selfhosted.yaml || sed -i '/http:\/\/localhost"$/a\ - "capacitor://localhost"' /opt/donetick/config/selfhosted.yaml
    mv /opt/selfhosted.yml /opt/donetick/config
    mv /opt/donetick.db /opt/donetick
    msg_ok "Restored Configurations"

58 ct/drawio.sh
@@ -1,58 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: Slaviša Arežina (tremor021)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.drawio.com/

APP="DrawIO"
var_tags="${var_tags:-diagrams}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-4}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources
  if [[ ! -f /var/lib/tomcat11/webapps/draw.war ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  if check_for_gh_release "drawio" "jgraph/drawio"; then
    msg_info "Stopping service"
    systemctl stop tomcat11
    msg_ok "Service stopped"

    msg_info "Updating Debian LXC"
    $STD apt update
    $STD apt upgrade -y
    msg_ok "Updated Debian LXC"

    USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "drawio" "jgraph/drawio" "singlefile" "latest" "/var/lib/tomcat11/webapps" "draw.war"

    msg_info "Starting service"
    systemctl start tomcat11
    msg_ok "Service started"
    msg_ok "Updated successfully!"
  fi
  exit
}

start
build_container
description

msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8080/draw${CL}"
@@ -9,7 +9,7 @@ APP="EMQX"
|
||||
var_tags="${var_tags:-mqtt}"
|
||||
var_cpu="${var_cpu:-2}"
|
||||
var_ram="${var_ram:-1024}"
|
||||
var_disk="${var_disk:-6}"
|
||||
var_disk="${var_disk:-4}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
|
||||
@@ -37,12 +37,17 @@ function update_script() {
|
||||
msg_info "Stopped Service"
|
||||
|
||||
msg_info "Creating Backup"
|
||||
ls /opt/*.tar.gz &>/dev/null && rm -f /opt/*.tar.gz
|
||||
backup_filename="/opt/${APP}_backup_$(date +%F).tar.gz"
|
||||
tar -czf "$backup_filename" -C /opt/fileflows Data
|
||||
msg_ok "Backup Created"
|
||||
|
||||
fetch_and_deploy_from_url "https://fileflows.com/downloads/zip" "/opt/fileflows"
|
||||
msg_info "Updating $APP to latest version"
|
||||
temp_file=$(mktemp)
|
||||
curl -fsSL https://fileflows.com/downloads/zip -o "$temp_file"
|
||||
$STD unzip -o -d /opt/fileflows "$temp_file"
|
||||
rm -rf "$temp_file"
|
||||
rm -rf "$backup_filename"
|
||||
msg_ok "Updated $APP to latest version"
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl start fileflows
|
||||
|
||||
@@ -31,27 +31,18 @@ function update_script() {

  APP_VERSION=$(grep -o '"version": *"[^"]*"' /opt/gitea-mirror/package.json | cut -d'"' -f4)
  if [[ $APP_VERSION =~ ^2\. ]]; then
    if [[ "${PHS_SILENT:-0}" == "1" ]]; then
      msg_warn "Version $APP_VERSION detected. Major version upgrade requires interactive confirmation, skipping."
      exit 75
    fi
    msg_warn "WARNING: Version $APP_VERSION detected!"
    msg_warn "Updating from version 2.x will CLEAR ALL CONFIGURATION."
    msg_warn "This includes: API tokens, User settings, Repository configurations, All custom settings"
    echo ""
    read -r -p "Do you want to continue? (y/N): " CONFIRM
    if [[ ! "$CONFIRM" =~ ^[Yy]$ ]]; then
    if ! whiptail --backtitle "Gitea Mirror Update" --title "⚠️ VERSION 2.x DETECTED" --yesno \
      "WARNING: Version $APP_VERSION detected!\n\nUpdating from version 2.x will CLEAR ALL CONFIGURATION.\n\nThis includes:\n• API tokens\n• User settings\n• Repository configurations\n• All custom settings\n\nDo you want to continue with the update process?" 15 70 --defaultno; then
      exit 0
    fi
    msg_warn "FINAL WARNING: This update WILL clear all configuration!"
    msg_warn "Please ensure you have backed up API tokens, custom configurations, and repository settings."
    echo ""
    read -r -p "Final confirmation - proceed? (y/N): " CONFIRM2
    if [[ ! "$CONFIRM2" =~ ^[Yy]$ ]]; then
      msg_info "Update cancelled. Please backup your configuration before proceeding."

    if ! whiptail --backtitle "Gitea Mirror Update" --title "⚠️ FINAL CONFIRMATION" --yesno \
      "FINAL WARNING: This update WILL clear all configuration!\n\nBEFORE PROCEEDING, please:\n\n• Copy API tokens to a safe location\n• Backup any custom configurations\n• Note down repository settings\n\nThis action CANNOT be undone!" 18 70 --defaultno; then
      whiptail --backtitle "Gitea Mirror Update" --title "Update Cancelled" --msgbox "Update process cancelled. Please backup your configuration before proceeding." 8 60
      exit 0
    fi
    msg_info "Proceeding with version $APP_VERSION update. All configuration will be cleared as warned."
    whiptail --backtitle "Gitea Mirror Update" --title "Proceeding with Update" --msgbox \
      "Proceeding with version $APP_VERSION update.\n\nAll configuration will be cleared as warned." 8 50
    rm -rf /opt/gitea-mirror
  fi

@@ -9,7 +9,7 @@ APP="Grafana"
|
||||
var_tags="${var_tags:-monitoring;visualization}"
|
||||
var_cpu="${var_cpu:-1}"
|
||||
var_ram="${var_ram:-512}"
|
||||
var_disk="${var_disk:-4}"
|
||||
var_disk="${var_disk:-2}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
|
||||
@@ -1,6 +0,0 @@
|
||||
____ ________
|
||||
/ __ \_________ __ __/ _/ __ \
|
||||
/ / / / ___/ __ `/ | /| / // // / / /
|
||||
/ /_/ / / / /_/ /| |/ |/ // // /_/ /
|
||||
/_____/_/ \__,_/ |__/|__/___/\____/
|
||||
|
||||
6
ct/headers/prometheus-paperless-ngx-exporter
Normal file
6
ct/headers/prometheus-paperless-ngx-exporter
Normal file
@@ -0,0 +1,6 @@
|
||||
____ __ __ ____ __ _ _________ __ ______ __
|
||||
/ __ \_________ ____ ___ ___ / /_/ /_ ___ __ _______ / __ \____ _____ ___ _____/ /__ __________ / | / / ____/ |/ / / ____/ ______ ____ _____/ /____ _____
|
||||
/ /_/ / ___/ __ \/ __ `__ \/ _ \/ __/ __ \/ _ \/ / / / ___/_____/ /_/ / __ `/ __ \/ _ \/ ___/ / _ \/ ___/ ___/_____/ |/ / / __ | /_____/ __/ | |/_/ __ \/ __ \/ ___/ __/ _ \/ ___/
|
||||
/ ____/ / / /_/ / / / / / / __/ /_/ / / / __/ /_/ (__ )_____/ ____/ /_/ / /_/ / __/ / / / __(__ |__ )_____/ /| / /_/ // /_____/ /____> </ /_/ / /_/ / / / /_/ __/ /
|
||||
/_/ /_/ \____/_/ /_/ /_/\___/\__/_/ /_/\___/\__,_/____/ /_/ \__,_/ .___/\___/_/ /_/\___/____/____/ /_/ |_/\____//_/|_| /_____/_/|_/ .___/\____/_/ \__/\___/_/
|
||||
/_/ /_/
|
||||
@@ -1,6 +0,0 @@
|
||||
_____
|
||||
/ ___/___ ___ __________
|
||||
\__ \/ _ \/ _ \/ ___/ ___/
|
||||
___/ / __/ __/ / / /
|
||||
/____/\___/\___/_/ /_/
|
||||
|
||||
@@ -27,11 +27,12 @@ function update_script() {
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  UPD=$(msg_menu "Home Assistant Update Options" \
    "1" "Update ALL Containers" \
    "2" "Remove ALL Unused Images" \
    "3" "Install HACS" \
    "4" "Install FileBrowser")
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "UPDATE" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 4 \
    "1" "Update ALL Containers" ON \
    "2" "Remove ALL Unused Images" OFF \
    "3" "Install HACS" OFF \
    "4" "Install FileBrowser" OFF \
    3>&1 1>&2 2>&3)

  if [ "$UPD" == "1" ]; then
    msg_info "Updating All Containers"

@@ -105,7 +105,7 @@ EOF
    msg_ok "Image-processing libraries up to date"
  fi

  RELEASE="2.5.6"
  RELEASE="2.5.5"
  if check_for_gh_release "Immich" "immich-app/immich" "${RELEASE}"; then
    if [[ $(cat ~/.immich) > "2.5.1" ]]; then
      msg_info "Enabling Maintenance Mode"

@@ -29,43 +29,47 @@ function update_script() {
    exit
  fi

  if [[ -f "/opt/jellyseerr/package.json" ]] && [[ "$(grep -m1 '"version"' /opt/jellyseerr/package.json | awk -F'"' '{print $4}')" == "2.7.3" ]]; then
    echo
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "Jellyseerr v2.7.3 detected."
    echo
    echo "Seerr is the new unified Jellyseerr and Overseerr."
    echo "More info: https://docs.seerr.dev/blog/seerr-release"
    echo
    read -rp "Do you want to migrate to Seerr now? (y/N): " MIGRATE
    echo
    if [[ ! "$MIGRATE" =~ ^[Yy]$ ]]; then
      msg_info "Migration cancelled. Exiting."
      exit 0
    fi
  if [ "$(node -v | cut -c2-3)" -ne 22 ]; then
    msg_info "Updating Node.js Repository"
    echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_22.x nodistro main" >/etc/apt/sources.list.d/nodesource.list
    msg_ok "Updating Node.js Repository"

    msg_info "Switching update script to Seerr"
    sed -i 's|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/jellyseerr.sh|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/seerr.sh|g' /usr/bin/update
    msg_ok "Switched update script to Seerr. Running update..."
    exec /usr/bin/update
    msg_info "Updating Packages"
    $STD apt-get update
    $STD apt-get -y upgrade
    msg_ok "Updating Packages"
  fi

  cd /opt/jellyseerr
  output=$(git pull --no-rebase)

  pnpm_current=$(pnpm --version 2>/dev/null)
  pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)

  if [ -z "$pnpm_current" ]; then
    msg_error "pnpm not found. Installing version $pnpm_desired..."
    NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
  elif ! node -e "const semver = require('semver'); process.exit(semver.satisfies('$pnpm_current', '$pnpm_desired') ? 0 : 1)"; then
    msg_error "Updating pnpm from version $pnpm_current to $pnpm_desired..."
    NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
  else
    msg_ok "pnpm is already installed and satisfies version $pnpm_desired."
  fi

  msg_info "Updating Jellyseerr"
  cd /opt/jellyseerr
  systemctl stop jellyseerr
  output=$(git pull --no-rebase)
  pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)
  NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
  if echo "$output" | grep -q "Already up to date."; then
    msg_ok "$APP is already up to date."
    exit
  fi

  systemctl stop jellyseerr
  rm -rf dist .next node_modules
  export CYPRESS_INSTALL_BINARY=0
  cd /opt/jellyseerr
  $STD pnpm install --frozen-lockfile
  export NODE_OPTIONS="--max-old-space-size=3072"
  $STD pnpm build

  cat <<EOF >/etc/systemd/system/jellyseerr.service
[Unit]
Description=jellyseerr Service
@@ -81,6 +85,7 @@ ExecStart=/usr/bin/node dist/index.js
[Install]
WantedBy=multi-user.target
EOF

  systemctl daemon-reload
  systemctl start jellyseerr
  msg_ok "Updated Jellyseerr"

@@ -46,7 +46,7 @@ function update_script() {
    msg_info "Restoring configuration & data"
    mv /opt/app.env /opt/jotty/.env
    [[ -d /opt/data ]] && mv /opt/data /opt/jotty/data
    [[ -d /opt/jotty/config ]] && cp -a /opt/config/* /opt/jotty/config && rm -rf /opt/config
    [[ -d /opt/jotty/config ]] && mv /opt/config/* /opt/jotty/config
    msg_ok "Restored configuration & data"

    msg_info "Starting Service"

11 ct/kasm.sh
@@ -34,19 +34,10 @@ function update_script() {
  CURRENT_VERSION=$(readlink -f /opt/kasm/current | awk -F'/' '{print $4}')
  KASM_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_[0-9]+\.[0-9]+\.[0-9]+\.[a-z0-9]+\.tar\.gz' | head -n 1)
  if [[ -z "$KASM_URL" ]]; then
    SERVICE_IMAGE_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_service_images_amd64_[0-9]+\.[0-9]+\.[0-9]+\.tar\.gz' | head -n 1)
    if [[ -n "$SERVICE_IMAGE_URL" ]]; then
      KASM_VERSION=$(echo "$SERVICE_IMAGE_URL" | sed -E 's/.*kasm_release_service_images_amd64_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
      KASM_URL="https://kasm-static-content.s3.amazonaws.com/kasm_release_${KASM_VERSION}.tar.gz"
    fi
  else
    KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
  fi

  if [[ -z "$KASM_URL" ]] || [[ -z "$KASM_VERSION" ]]; then
    msg_error "Unable to detect latest Kasm release URL."
    exit 1
  fi
  KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
  msg_info "Checked for new version"

  msg_info "Removing outdated docker-compose plugin"

@@ -33,8 +33,7 @@ function update_script() {
  msg_ok "Stopped Service"

  PHP_VERSION="8.5" PHP_APACHE="YES" setup_php
  setup_composer

  msg_info "Creating a backup"
  mv /opt/koillection/ /opt/koillection-backup
  msg_ok "Backup created"

23 ct/loki.sh
@@ -29,13 +29,21 @@ function update_script() {
    exit 1
  fi

  CHOICE=$(msg_menu "Loki Update Options" \
    "1" "Update Loki & Promtail" \
    "2" "Allow 0.0.0.0 for listening" \
    "3" "Allow only ${LOCAL_IP} for listening")

  case $CHOICE in
  1)
  while true; do
    CHOICE=$(
      whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --menu "Select option" 11 58 3 \
        "1" "Update Loki & Promtail" \
        "2" "Allow 0.0.0.0 for listening" \
        "3" "Allow only ${LOCAL_IP} for listening" 3>&2 2>&1 1>&3
    )
    exit_status=$?
    if [ $exit_status == 1 ]; then
      clear
      exit-script
    fi
    header_info
    case $CHOICE in
    1)
      msg_info "Stopping Loki"
      systemctl stop loki
      if systemctl is-active --quiet promtail 2>/dev/null || dpkg -s promtail >/dev/null 2>&1; then
@@ -77,6 +85,7 @@ function update_script() {
      exit
      ;;
    esac
  done
  exit 0
}

@@ -24,9 +24,21 @@ function update_script() {
  check_container_storage
  check_container_resources

  setup_meilisearch
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "Meilisearch Update" --radiolist --cancel-button Exit-Script "Spacebar = Select" 10 58 2 \
    "1" "Update Meilisearch" ON \
    "2" "Update Meilisearch-UI" OFF \
    3>&1 1>&2 2>&3)

  if [[ -d /opt/meilisearch-ui ]]; then
  if [ "$UPD" == "1" ]; then
    setup_meilisearch
    exit
  fi

  if [ "$UPD" == "2" ]; then
    if [[ ! -d /opt/meilisearch-ui ]]; then
      msg_error "No Meilisearch-UI Installation Found!"
      exit
    fi
    if check_for_gh_release "meilisearch-ui" "riccox/meilisearch-ui"; then
      msg_info "Stopping Meilisearch-UI"
      systemctl stop meilisearch-ui
@@ -46,11 +58,10 @@ function update_script() {
      msg_info "Starting Meilisearch-UI"
      systemctl start meilisearch-ui
      msg_ok "Started Meilisearch-UI"
      msg_ok "Updated successfully!"
    fi
    exit
  fi

  msg_ok "Updated successfully!"
  exit
}

start

@@ -27,9 +27,10 @@ function update_script() {
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  UPD=$(msg_menu "Node-Red Update Options" \
    "1" "Update ${APP}" \
    "2" "Install Themes")
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 2 \
    "1" "Update ${APP}" ON \
    "2" "Install Themes" OFF \
    3>&1 1>&2 2>&3)
  if [ "$UPD" == "1" ]; then
    NODE_VERSION="22" setup_nodejs

@@ -48,31 +49,32 @@ function update_script() {
    exit
  fi
  if [ "$UPD" == "2" ]; then
    THEME=$(msg_menu "Node-Red Themes" \
      "midnight-red" "Midnight Red (default)" \
      "aurora" "Aurora" \
      "cobalt2" "Cobalt2" \
      "dark" "Dark" \
      "dracula" "Dracula" \
      "espresso-libre" "Espresso Libre" \
      "github-dark" "GitHub Dark" \
      "github-dark-default" "GitHub Dark Default" \
      "github-dark-dimmed" "GitHub Dark Dimmed" \
      "monoindustrial" "Monoindustrial" \
      "monokai" "Monokai" \
      "monokai-dimmed" "Monokai Dimmed" \
      "noctis" "Noctis" \
      "oceanic-next" "Oceanic Next" \
      "oled" "OLED" \
      "one-dark-pro" "One Dark Pro" \
      "one-dark-pro-darker" "One Dark Pro Darker" \
      "solarized-dark" "Solarized Dark" \
      "solarized-light" "Solarized Light" \
      "tokyo-night" "Tokyo Night" \
      "tokyo-night-light" "Tokyo Night Light" \
      "tokyo-night-storm" "Tokyo Night Storm" \
      "totallyinformation" "TotallyInformation" \
      "zenburn" "Zenburn")
    THEME=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "NODE-RED THEMES" --radiolist --cancel-button Exit-Script "Choose Theme" 15 58 6 \
      "aurora" "" OFF \
      "cobalt2" "" OFF \
      "dark" "" OFF \
      "dracula" "" OFF \
      "espresso-libre" "" OFF \
      "github-dark" "" OFF \
      "github-dark-default" "" OFF \
      "github-dark-dimmed" "" OFF \
      "midnight-red" "" ON \
      "monoindustrial" "" OFF \
      "monokai" "" OFF \
      "monokai-dimmed" "" OFF \
      "noctis" "" OFF \
      "oceanic-next" "" OFF \
      "oled" "" OFF \
      "one-dark-pro" "" OFF \
      "one-dark-pro-darker" "" OFF \
      "solarized-dark" "" OFF \
      "solarized-light" "" OFF \
      "tokyo-night" "" OFF \
      "tokyo-night-light" "" OFF \
      "tokyo-night-storm" "" OFF \
      "totallyinformation" "" OFF \
      "zenburn" "" OFF \
      3>&1 1>&2 2>&3)
    header_info
    msg_info "Installing ${THEME} Theme"
    cd /root/.node-red

@@ -20,21 +20,32 @@ color
catch_errors

function update_script() {
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "UPDATE MODE" --radiolist --cancel-button Exit-Script "Spacebar = Select" 14 60 2 \
    "1" "Check for Alpine Updates" OFF \
    "2" "Update NPMplus Docker Container" ON \
    3>&1 1>&2 2>&3)

  header_info "$APP"

  msg_info "Updating Alpine OS"
  $STD apk -U upgrade
  msg_ok "System updated"

  msg_info "Pulling latest NPMplus container image"
  cd /opt
  $STD docker compose pull
  msg_info "Recreating container"
  $STD docker compose up -d
  msg_ok "Updated NPMplus container"

  msg_ok "Updated successfully!"
  exit
  case "$UPD" in
  "1")
    msg_info "Updating Alpine OS"
    $STD apk -U upgrade
    msg_ok "System updated"
    exit
    ;;
  "2")
    msg_info "Updating NPMplus Container"
    cd /opt
    msg_info "Pulling latest container image"
    $STD docker compose pull
    msg_info "Recreating container"
    $STD docker compose up -d
    msg_ok "Updated successfully!"
    exit
    ;;
  esac
  exit 0
}

start

@@ -9,7 +9,7 @@ APP="Ollama"
|
||||
var_tags="${var_tags:-ai}"
|
||||
var_cpu="${var_cpu:-4}"
|
||||
var_ram="${var_ram:-4096}"
|
||||
var_disk="${var_disk:-40}"
|
||||
var_disk="${var_disk:-35}"
|
||||
var_os="${var_os:-ubuntu}"
|
||||
var_version="${var_version:-24.04}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
@@ -30,7 +30,7 @@ function update_script() {
|
||||
fi
|
||||
|
||||
RELEASE="v5.0.2"
|
||||
if check_for_gh_release "OpenCloud" "opencloud-eu/opencloud" "${RELEASE}"; then
|
||||
if check_for_gh_release "opencloud" "opencloud-eu/opencloud" "${RELEASE}"; then
|
||||
msg_info "Stopping services"
|
||||
systemctl stop opencloud opencloud-wopi
|
||||
msg_ok "Stopped services"
|
||||
@@ -38,21 +38,9 @@ function update_script() {
|
||||
msg_info "Updating packages"
|
||||
$STD apt-get update
|
||||
$STD apt-get dist-upgrade -y
|
||||
ensure_dependencies "inotify-tools"
|
||||
msg_ok "Updated packages"
|
||||
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "OpenCloud" "opencloud-eu/opencloud" "singlefile" "${RELEASE}" "/usr/bin" "opencloud-*-linux-amd64"
|
||||
|
||||
if ! grep -q 'POSIX_WATCH' /etc/opencloud/opencloud.env; then
|
||||
sed -i '/^## External/i ## Uncomment below to enable PosixFS Collaborative Mode\
|
||||
## Increase inotify watch/instance limits on your PVE host:\
|
||||
### sysctl -w fs.inotify.max_user_watches=1048576\
|
||||
### sysctl -w fs.inotify.max_user_instances=1024\
|
||||
# STORAGE_USERS_POSIX_ENABLE_COLLABORATION=true\
|
||||
# STORAGE_USERS_POSIX_WATCH_TYPE=inotifywait\
|
||||
# STORAGE_USERS_POSIX_WATCH_FS=true\
|
||||
# STORAGE_USERS_POSIX_WATCH_PATH=<path-to-storage-or-bind-mount>' /etc/opencloud/opencloud.env
|
||||
fi
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "opencloud" "opencloud-eu/opencloud" "singlefile" "${RELEASE}" "/usr/bin" "opencloud-*-linux-amd64"
|
||||
|
||||
msg_info "Starting services"
|
||||
systemctl start opencloud opencloud-wopi
|
||||
|
||||
@@ -9,7 +9,7 @@ APP="Open WebUI"
|
||||
var_tags="${var_tags:-ai;interface}"
|
||||
var_cpu="${var_cpu:-4}"
|
||||
var_ram="${var_ram:-8192}"
|
||||
var_disk="${var_disk:-50}"
|
||||
var_disk="${var_disk:-25}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
@@ -44,7 +44,7 @@ function update_script() {
|
||||
|
||||
msg_info "Installing uv-based Open-WebUI"
|
||||
PYTHON_VERSION="3.12" setup_uv
|
||||
$STD uv tool install --python 3.12 --constraint <(echo "numba>=0.60") open-webui[all]
|
||||
$STD uv tool install --python 3.12 open-webui[all]
|
||||
msg_ok "Installed uv-based Open-WebUI"
|
||||
|
||||
msg_info "Restoring data"
|
||||
@@ -126,7 +126,7 @@ EOF
|
||||
|
||||
msg_info "Updating Open WebUI via uv"
|
||||
PYTHON_VERSION="3.12" setup_uv
|
||||
$STD uv tool install --force --python 3.12 --constraint <(echo "numba>=0.60") open-webui[all]
|
||||
$STD uv tool upgrade --python 3.12 open-webui[all]
|
||||
systemctl restart open-webui
|
||||
msg_ok "Updated Open WebUI"
|
||||
msg_ok "Updated successfully!"
|
||||
|
||||
@@ -27,28 +27,6 @@ function update_script() {
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  if [[ -f "$HOME/.overseerr" ]] && [[ "$(cat "$HOME/.overseerr")" == "1.34.0" ]]; then
    echo
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "Overseerr v1.34.0 detected."
    echo
    echo "Seerr is the new unified Jellyseerr and Overseerr."
    echo "More info: https://docs.seerr.dev/blog/seerr-release"
    echo
    read -rp "Do you want to migrate to Seerr now? (y/N): " MIGRATE
    echo
    if [[ ! "$MIGRATE" =~ ^[Yy]$ ]]; then
      msg_info "Migration cancelled. Exiting."
      exit 0
    fi

    msg_info "Switching update script to Seerr"
    sed -i 's|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/overseerr.sh|https://github.com/community-scripts/ProxmoxVE/raw/main/ct/seerr.sh|g' /usr/bin/update
    msg_ok "Switched update script to Seerr. Running update..."
    exec /usr/bin/update
  fi

  if check_for_gh_release "overseerr" "sct/overseerr"; then
    msg_info "Stopping Service"
    systemctl stop overseerr

@@ -51,7 +51,7 @@ function update_script() {
    $STD npm run db:generate
    $STD npm run build
    $STD npm run build:cli
    $STD npm run db:push
    $STD npm run db:sqlite:push
    cp -R .next/standalone ./
    chmod +x ./dist/cli.mjs
    cp server/db/names.json ./dist/names.json

@@ -43,40 +43,22 @@ function update_script() {
    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "PatchMon" "PatchMon/PatchMon" "tarball" "latest" "/opt/patchmon"

    msg_info "Updating PatchMon"
    VERSION=$(get_latest_github_release "PatchMon/PatchMon")
    PROTO="$(sed -n '/SERVER_PROTOCOL/s/[^=]*=//p' /opt/backend.env)"
    HOST="$(sed -n '/SERVER_HOST/s/[^=]*=//p' /opt/backend.env)"
    [[ "${PROTO:-http}" == "http" ]] && PORT=":3001"
    sed -i 's/PORT=3399/PORT=3001/' /opt/backend.env
    sed -i -e "s/VERSION=.*/VERSION=$VERSION/" \
      -e "\|VITE_API_URL=|s|http.*|${PROTO:-http}://${HOST:-$LOCAL_IP}${PORT:-}/api/v1|" /opt/frontend.env
    export NODE_ENV=production
    cd /opt/patchmon
    export NODE_ENV=production
    $STD npm install --no-audit --no-fund --no-save --ignore-scripts
    cd /opt/patchmon/backend
    $STD npm install --no-audit --no-fund --no-save --ignore-scripts
    cd /opt/patchmon/frontend
    mv /opt/frontend.env /opt/patchmon/frontend/.env
    $STD npm install --no-audit --no-fund --no-save --ignore-scripts --include=dev
    $STD npm install --include=dev --no-audit --no-fund --no-save --ignore-scripts
    $STD npm run build
    cd /opt/patchmon/backend
    mv /opt/backend.env /opt/patchmon/backend/.env
    $STD npm run db:generate
    mv /opt/frontend.env /opt/patchmon/frontend/.env
    $STD npx prisma migrate deploy
    cp /opt/patchmon/docker/nginx.conf.template /etc/nginx/sites-available/patchmon.conf
    sed -i -e 's|proxy_pass .*|proxy_pass http://127.0.0.1:3001;|' \
      -e '\|try_files |i\ root /opt/patchmon/frontend/dist;' \
      -e 's|alias.*|alias /opt/patchmon/frontend/dist/assets;|' \
      -e '\|expires 1y|i\ root /opt/patchmon/frontend/dist;' /etc/nginx/sites-available/patchmon.conf
    ln -sf /etc/nginx/sites-available/patchmon.conf /etc/nginx/sites-enabled/
    rm -f /etc/nginx/sites-enabled/default
    $STD nginx -t
    systemctl restart nginx
    $STD npx prisma generate
    msg_ok "Updated PatchMon"

    msg_info "Starting Service"
    if grep -q '/usr/bin/node' /etc/systemd/system/patchmon-server.service; then
      sed -i 's|ExecStart=.*|ExecStart=/usr/bin/npm run start|' /etc/systemd/system/patchmon-server.service
      systemctl daemon-reload
    fi
    systemctl start patchmon-server
    msg_ok "Started Service"
    msg_ok "Updated successfully!"
@@ -91,4 +73,4 @@ description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"

@@ -61,12 +61,6 @@ function update_script() {
    rm -rf "$BK"
    msg_ok "Restored data"

    msg_ok "Migrate Database"
    cd /opt/planka
    $STD npm run db:upgrade
    $STD npm run db:migrate
    msg_ok "Migrated Database"

    msg_info "Starting Service"
    systemctl start planka
    msg_ok "Started Service"

@@ -29,9 +29,10 @@ function update_script() {
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  UPD=$(msg_menu "Plex Update Options" \
    "1" "Update LXC" \
    "2" "Install plexupdate")
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --radiolist --cancel-button Exit-Script "Spacebar = Select \nplexupdate info >> https://github.com/mrworf/plexupdate" 10 59 2 \
    "1" "Update LXC" ON \
    "2" "Install plexupdate" OFF \
    3>&1 1>&2 2>&3)
  if [ "$UPD" == "1" ]; then
    msg_info "Updating ${APP} LXC"
    $STD apt update

@@ -27,11 +27,12 @@ function update_script() {
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  UPD=$(msg_menu "Home Assistant Update Options" \
    "1" "Update system and containers" \
    "2" "Install HACS" \
    "3" "Install FileBrowser" \
    "4" "Remove ALL Unused Images")
  UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "UPDATE" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 4 \
    "1" "Update system and containers" ON \
    "2" "Install HACS" OFF \
    "3" "Install FileBrowser" OFF \
    "4" "Remove ALL Unused Images" OFF \
    3>&1 1>&2 2>&3)

  if [ "$UPD" == "1" ]; then
    msg_info "Updating ${APP} LXC"

52 ct/prometheus-paperless-ngx-exporter.sh Executable file
@@ -0,0 +1,52 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: Andy Grunwald (andygrunwald)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/hansmi/prometheus-paperless-exporter

APP="Prometheus-Paperless-NGX-Exporter"
var_tags="${var_tags:-monitoring;alerting}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-256}"
var_disk="${var_disk:-2}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources
  if [[ ! -f /etc/systemd/system/prometheus-paperless-ngx-exporter.service ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  if check_for_gh_release "prom-paperless-exp" "hansmi/prometheus-paperless-exporter"; then
    msg_info "Stopping Service"
    systemctl stop prometheus-paperless-ngx-exporter
    msg_ok "Stopped Service"

    fetch_and_deploy_gh_release "prom-paperless-exp" "hansmi/prometheus-paperless-exporter" "binary"

    msg_info "Starting Service"
    systemctl start prometheus-paperless-ngx-exporter
    msg_ok "Started Service"
    msg_ok "Updated successfully!"
  fi
  exit
}

start
build_container
description

msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8081/metrics${CL}"
@@ -28,55 +28,16 @@ function update_script() {
    exit
  fi

  if check_for_gh_release "Radicale" "Kozea/Radicale"; then
    msg_info "Stopping service"
    systemctl stop radicale
    msg_ok "Stopped service"
  msg_info "Updating ${APP}"
  $STD python3 -m venv /opt/radicale
  source /opt/radicale/bin/activate
  $STD python3 -m pip install --upgrade https://github.com/Kozea/Radicale/archive/master.tar.gz
  msg_ok "Updated ${APP}"

    msg_info "Backing up users file"
    cp /opt/radicale/users /opt/radicale_users_backup
    msg_ok "Backed up users file"

    PYTHON_VERSION="3.13" setup_uv
    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Radicale" "Kozea/Radicale" "tarball" "latest" "/opt/radicale"

    msg_info "Restoring users file"
    rm -f /opt/radicale/users
    mv /opt/radicale_users_backup /opt/radicale/users
    msg_ok "Restored users file"

    if grep -q 'start.sh' /etc/systemd/system/radicale.service; then
      sed -i -e '/^Description/i[Unit]' \
        -e '\|^ExecStart|iWorkingDirectory=/opt/radicale' \
        -e 's|^ExecStart=.*|ExecStart=/usr/local/bin/uv run -m radicale --config /etc/radicale/config|' /etc/systemd/system/radicale.service
      systemctl daemon-reload
    fi
    if [[ ! -f /etc/radicale/config ]]; then
      msg_info "Migrating to config file (/etc/radicale/config)"
      mkdir -p /etc/radicale
      cat <<EOF >/etc/radicale/config
[server]
hosts = 0.0.0.0:5232

[auth]
type = htpasswd
htpasswd_filename = /opt/radicale/users
htpasswd_encryption = sha512

[storage]
type = multifilesystem
filesystem_folder = /var/lib/radicale/collections

[web]
type = internal
EOF
      msg_ok "Migrated to config (/etc/radicale/config)"
    fi
    msg_info "Starting service"
    systemctl start radicale
    msg_ok "Started service"
    msg_ok "Updated Successfully!"
  fi
  msg_info "Starting Service"
  systemctl enable -q --now radicale
  msg_ok "Started Service"
  msg_ok "Updated successfully!"
  exit
}

165 ct/seerr.sh
@@ -1,165 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: CrazyWolf13
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://docs.seerr.dev/

APP="Seerr"
var_tags="${var_tags:-media}"
var_cpu="${var_cpu:-4}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-12}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources

  if [[ ! -d /opt/seerr && ! -d /opt/jellyseerr && ! -d /opt/overseerr ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  # Start Migration from Jellyseerr
  if [[ -f /etc/systemd/system/jellyseerr.service ]]; then
    msg_info "Stopping Jellyseerr"
    $STD systemctl stop jellyseerr || true
    $STD systemctl disable jellyseerr || true
    [ -f /etc/systemd/system/jellyseerr.service ] && rm -f /etc/systemd/system/jellyseerr.service
    msg_ok "Stopped Jellyseerr"

    msg_info "Creating Backup (Patience)"
    tar -czf /opt/jellyseerr_backup_$(date +%Y%m%d_%H%M%S).tar.gz -C /opt jellyseerr
    msg_ok "Created Backup"

    msg_info "Migrating Jellyseerr to seerr"
    [ -d /opt/jellyseerr ] && mv /opt/jellyseerr /opt/seerr
    [ -d /etc/jellyseerr ] && mv /etc/jellyseerr /etc/seerr
    [ -f /etc/seerr/jellyseerr.conf ] && mv /etc/seerr/jellyseerr.conf /etc/seerr/seerr.conf
    cat <<EOF >/etc/systemd/system/seerr.service
[Unit]
Description=Seerr Service
Wants=network-online.target
After=network-online.target

[Service]
EnvironmentFile=/etc/seerr/seerr.conf
Environment=NODE_ENV=production
Type=exec
Restart=on-failure
WorkingDirectory=/opt/seerr
ExecStart=/usr/bin/node dist/index.js

[Install]
WantedBy=multi-user.target
EOF
    systemctl daemon-reload
    systemctl enable -q --now seerr
    msg_ok "Migrated Jellyserr to Seerr"
  fi
  # END Jellyseerr Migration

  # Start Migration from Overseerr
  if [[ -f /etc/systemd/system/overseerr.service ]]; then
    msg_info "Stopping Overseerr"
    $STD systemctl stop overseerr || true
    $STD systemctl disable overseerr || true
    [ -f /etc/systemd/system/overseerr.service ] && rm -f /etc/systemd/system/overseerr.service
    msg_ok "Stopped Overseerr"

    msg_info "Creating Backup (Patience)"
    tar -czf /opt/overseerr_backup_$(date +%Y%m%d_%H%M%S).tar.gz -C /opt overseerr
    msg_ok "Created Backup"

    msg_info "Migrating Overseerr to seerr"
    [ -d /opt/overseerr ] && mv /opt/overseerr /opt/seerr
    mkdir -p /etc/seerr
    cat <<EOF >/etc/seerr/seerr.conf
## Seerr's default port is 5055, if you want to use both, change this.
## specify on which port to listen
PORT=5055

## specify on which interface to listen, by default seerr listens on all interfaces
#HOST=127.0.0.1

## Uncomment if you want to force Node.js to resolve IPv4 before IPv6 (advanced users only)
# FORCE_IPV4_FIRST=true
EOF
    cat <<EOF >/etc/systemd/system/seerr.service
[Unit]
Description=Seerr Service
Wants=network-online.target
After=network-online.target

[Service]
EnvironmentFile=/etc/seerr/seerr.conf
Environment=NODE_ENV=production
Type=exec
Restart=on-failure
WorkingDirectory=/opt/seerr
ExecStart=/usr/bin/node dist/index.js

[Install]
WantedBy=multi-user.target
EOF
    systemctl daemon-reload
    systemctl enable -q --now seerr
    msg_ok "Migrated Overseerr to Seerr"
  fi
  # END Overseerr Migration

  if check_for_gh_release "seerr" "seerr-team/seerr"; then
    msg_info "Stopping Service"
    systemctl stop seerr
    msg_ok "Stopped Service"

    msg_info "Creating Backup"
    cp -a /opt/seerr/config /opt/seerr_backup
    msg_ok "Created Backup"

    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "seerr" "seerr-team/seerr" "tarball"

    msg_info "Updating PNPM Version"
    pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/seerr/package.json)
    NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
    msg_ok "Updated PNPM Version"

    msg_info "Updating Seerr"
    cd /opt/seerr
    rm -rf dist .next node_modules
    export CYPRESS_INSTALL_BINARY=0
    $STD pnpm install --frozen-lockfile
    export NODE_OPTIONS="--max-old-space-size=3072"
    $STD pnpm build
    msg_ok "Updated Seerr"

    msg_info "Restoring Backup"
    rm -rf /opt/seerr/config
    mv /opt/seerr_backup /opt/seerr/config
    msg_ok "Restored Backup"

    msg_info "Starting Service"
    systemctl start seerr
    msg_ok "Started Service"
    msg_ok "Updated successfully!"
  fi
  exit
}

start
build_container
description

msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:5055${CL}"
ct/slskd.sh (89 lines changed)
@@ -3,7 +3,7 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
|
||||
# Copyright (c) 2021-2026 community-scripts ORG
|
||||
# Author: vhsdream
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/slskd/slskd, https://github.com/mrusse/soularr
|
||||
# Source: https://github.com/slskd/slskd, https://soularr.net
|
||||
|
||||
APP="slskd"
|
||||
var_tags="${var_tags:-arr;p2p}"
|
||||
@@ -24,65 +24,50 @@ function update_script() {
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
|
||||
if [[ ! -d /opt/slskd ]]; then
|
||||
msg_error "No Slskd Installation Found!"
|
||||
if [[ ! -d /opt/slskd ]] || [[ ! -d /opt/soularr ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
|
||||
if check_for_gh_release "Slskd" "slskd/slskd"; then
|
||||
msg_info "Stopping Service(s)"
|
||||
systemctl stop slskd
|
||||
[[ -f /etc/systemd/system/soularr.service ]] && systemctl stop soularr.timer soularr.service
|
||||
msg_ok "Stopped Service(s)"
|
||||
RELEASE=$(curl -s https://api.github.com/repos/slskd/slskd/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3) }')
|
||||
if [[ "${RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]] || [[ ! -f /opt/${APP}_version.txt ]]; then
|
||||
msg_info "Stopping Service"
|
||||
systemctl stop slskd soularr.timer soularr.service
|
||||
msg_info "Stopped Service"
|
||||
|
||||
msg_info "Backing up config"
|
||||
cp /opt/slskd/config/slskd.yml /opt/slskd.yml.bak
|
||||
msg_ok "Backed up config"
|
||||
msg_info "Updating $APP to v${RELEASE}"
|
||||
tmp_file=$(mktemp)
|
||||
curl -fsSL "https://github.com/slskd/slskd/releases/download/${RELEASE}/slskd-${RELEASE}-linux-x64.zip" -o $tmp_file
|
||||
$STD unzip -oj $tmp_file slskd -d /opt/${APP}
|
||||
echo "${RELEASE}" >/opt/${APP}_version.txt
|
||||
msg_ok "Updated $APP to v${RELEASE}"
|
||||
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Slskd" "slskd/slskd" "prebuild" "latest" "/opt/slskd" "slskd-*-linux-x64.zip"
|
||||
|
||||
msg_info "Restoring config"
|
||||
mv /opt/slskd.yml.bak /opt/slskd/config/slskd.yml
|
||||
msg_ok "Restored config"
|
||||
|
||||
msg_info "Starting Service(s)"
|
||||
msg_info "Starting Service"
|
||||
systemctl start slskd
|
||||
[[ -f /etc/systemd/system/soularr.service ]] && systemctl start soularr.timer
|
||||
msg_ok "Started Service(s)"
|
||||
msg_ok "Updated Slskd successfully!"
|
||||
msg_ok "Started Service"
|
||||
rm -rf $tmp_file
|
||||
else
|
||||
msg_ok "No ${APP} update required. ${APP} is already at v${RELEASE}"
|
||||
fi
|
||||
[[ -d /opt/soularr ]] && if check_for_gh_release "Soularr" "mrusse/soularr"; then
|
||||
if systemctl is-active soularr.timer >/dev/null; then
|
||||
msg_info "Stopping Timer and Service"
|
||||
systemctl stop soularr.timer soularr.service
|
||||
msg_ok "Stopped Timer and Service"
|
||||
fi
|
||||
msg_info "Updating Soularr"
|
||||
cp /opt/soularr/config.ini /opt/config.ini.bak
|
||||
cp /opt/soularr/run.sh /opt/run.sh.bak
|
||||
cd /tmp
|
||||
rm -rf /opt/soularr
|
||||
curl -fsSL -o main.zip https://github.com/mrusse/soularr/archive/refs/heads/main.zip
|
||||
$STD unzip main.zip
|
||||
mv soularr-main /opt/soularr
|
||||
cd /opt/soularr
|
||||
$STD pip install -r requirements.txt
|
||||
mv /opt/config.ini.bak /opt/soularr/config.ini
|
||||
mv /opt/run.sh.bak /opt/soularr/run.sh
|
||||
rm -rf /tmp/main.zip
|
||||
msg_ok "Updated soularr"
|
||||
|
||||
msg_info "Backing up Soularr config"
|
||||
cp /opt/soularr/config.ini /opt/soularr_config.ini.bak
|
||||
cp /opt/soularr/run.sh /opt/soularr_run.sh.bak
|
||||
msg_ok "Backed up Soularr config"
|
||||
|
||||
PYTHON_VERSION="3.11" setup_uv
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Soularr" "mrusse/soularr" "tarball" "latest" "/opt/soularr"
|
||||
msg_info "Updating Soularr"
|
||||
cd /opt/soularr
|
||||
$STD uv venv -c venv
|
||||
$STD source venv/bin/activate
|
||||
$STD uv pip install -r requirements.txt
|
||||
deactivate
|
||||
msg_ok "Updated Soularr"
|
||||
|
||||
msg_info "Restoring Soularr config"
|
||||
mv /opt/soularr_config.ini.bak /opt/soularr/config.ini
|
||||
mv /opt/soularr_run.sh.bak /opt/soularr/run.sh
|
||||
msg_ok "Restored Soularr config"
|
||||
|
||||
msg_info "Starting Soularr Timer"
|
||||
systemctl restart soularr.timer
|
||||
msg_ok "Started Soularr Timer"
|
||||
msg_ok "Updated Soularr successfully!"
|
||||
fi
|
||||
msg_info "Starting soularr timer"
|
||||
systemctl start soularr.timer
|
||||
msg_ok "Started soularr timer"
|
||||
exit
|
||||
}
|
||||
|
||||
start
|
||||
|
||||
@@ -33,15 +33,7 @@ function update_script() {
|
||||
systemctl stop snowshare
|
||||
msg_ok "Stopped Service"
|
||||
|
||||
msg_info "Backing up uploads"
|
||||
[ -d /opt/snowshare/uploads ] && cp -a /opt/snowshare/uploads /opt/.snowshare_uploads_backup
|
||||
msg_ok "Uploads backed up"
|
||||
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "snowshare" "TuroYT/snowshare" "tarball"
|
||||
|
||||
msg_info "Restoring uploads"
|
||||
[ -d /opt/.snowshare_uploads_backup ] && rm -rf /opt/snowshare/uploads && cp -a /opt/.snowshare_uploads_backup /opt/snowshare/uploads
|
||||
msg_ok "Uploads restored"
|
||||
fetch_and_deploy_gh_release "snowshare" "TuroYT/snowshare" "tarball"
|
||||
|
||||
msg_info "Updating Snowshare"
|
||||
cd /opt/snowshare
|
||||
|
||||
@@ -33,9 +33,7 @@ function update_script() {
|
||||
systemctl stop umlautadaptarr
|
||||
msg_ok "Stopped Service"
|
||||
|
||||
cp /opt/UmlautAdaptarr/appsettings.json /opt/UmlautAdaptarr/appsettings.json.bak
|
||||
fetch_and_deploy_gh_release "UmlautAdaptarr" "PCJones/Umlautadaptarr" "prebuild" "latest" "/opt/UmlautAdaptarr" "linux-x64.zip"
|
||||
cp /opt/UmlautAdaptarr/appsettings.json.bak /opt/UmlautAdaptarr/appsettings.json
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl start umlautadaptarr
|
||||
|
||||
@@ -31,9 +31,11 @@ function update_script() {
|
||||
VAULT=$(get_latest_github_release "dani-garcia/vaultwarden")
|
||||
WVRELEASE=$(get_latest_github_release "dani-garcia/bw_web_builds")
|
||||
|
||||
UPD=$(msg_menu "Vaultwarden Update Options" \
|
||||
"1" "Update VaultWarden + Web-Vault" \
|
||||
"2" "Set Admin Token")
|
||||
UPD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "SUPPORT" --radiolist --cancel-button Exit-Script "Spacebar = Select" 11 58 3 \
|
||||
"1" "VaultWarden $VAULT" ON \
|
||||
"2" "Web-Vault $WVRELEASE" OFF \
|
||||
"3" "Set Admin Token" OFF \
|
||||
3>&1 1>&2 2>&3)
|
||||
|
||||
if [ "$UPD" == "1" ]; then
|
||||
if check_for_gh_release "vaultwarden" "dani-garcia/vaultwarden"; then
|
||||
@@ -57,10 +59,14 @@ function update_script() {
|
||||
msg_info "Starting Service"
|
||||
systemctl start vaultwarden
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Updated successfully!"
|
||||
else
|
||||
msg_ok "VaultWarden is already up-to-date"
|
||||
fi
|
||||
exit
|
||||
fi
|
||||
|
||||
if [ "$UPD" == "2" ]; then
|
||||
if check_for_gh_release "vaultwarden_webvault" "dani-garcia/bw_web_builds"; then
|
||||
msg_info "Stopping Service"
|
||||
systemctl stop vaultwarden
|
||||
@@ -78,22 +84,16 @@ function update_script() {
|
||||
msg_info "Starting Service"
|
||||
systemctl start vaultwarden
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Updated successfully!"
|
||||
else
|
||||
msg_ok "Web-Vault is already up-to-date"
|
||||
fi
|
||||
|
||||
msg_ok "Updated successfully!"
|
||||
exit
|
||||
fi
|
||||
|
||||
if [ "$UPD" == "2" ]; then
|
||||
if [[ "${PHS_SILENT:-0}" == "1" ]]; then
|
||||
msg_warn "Set Admin Token requires interactive mode, skipping."
|
||||
exit
|
||||
fi
|
||||
read -r -s -p "Set the ADMIN_TOKEN: " NEWTOKEN
|
||||
echo ""
|
||||
if [[ -n "$NEWTOKEN" ]]; then
|
||||
if [ "$UPD" == "3" ]; then
|
||||
if NEWTOKEN=$(whiptail --backtitle "Proxmox VE Helper Scripts" --passwordbox "Set the ADMIN_TOKEN" 10 58 3>&1 1>&2 2>&3); then
|
||||
if [[ -z "$NEWTOKEN" ]]; then exit; fi
|
||||
ensure_dependencies argon2
|
||||
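# Hash the entered token with argon2 against a random base64 salt and keep only the encoded hash (-e) as ADMIN_TOKEN, so the plaintext token never lands in .env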
TOKEN=$(echo -n "${NEWTOKEN}" | argon2 "$(openssl rand -base64 32)" -t 2 -m 16 -p 4 -l 64 -e)
|
||||
sed -i "s|ADMIN_TOKEN=.*|ADMIN_TOKEN='${TOKEN}'|" /opt/vaultwarden/.env
|
||||
|
||||
ct/wger.sh (64 lines changed)
@@ -7,9 +7,9 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
|
||||
|
||||
APP="wger"
|
||||
var_tags="${var_tags:-management;fitness}"
|
||||
var_cpu="${var_cpu:-2}"
|
||||
var_ram="${var_ram:-2048}"
|
||||
var_disk="${var_disk:-8}"
|
||||
var_cpu="${var_cpu:-1}"
|
||||
var_ram="${var_ram:-1024}"
|
||||
var_disk="${var_disk:-6}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
@@ -23,44 +23,38 @@ function update_script() {
|
||||
header_info
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
|
||||
if [[ ! -d /opt/wger ]]; then
|
||||
if [[ ! -d /home/wger ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
|
||||
if check_for_gh_release "wger" "wger-project/wger"; then
|
||||
RELEASE=$(curl -fsSL https://api.github.com/repos/wger-project/wger/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3)}')
|
||||
if [[ "${RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]] || [[ ! -f /opt/${APP}_version.txt ]]; then
|
||||
msg_info "Stopping Service"
|
||||
systemctl stop redis-server nginx celery celery-beat wger
|
||||
systemctl stop wger
|
||||
msg_ok "Stopped Service"
|
||||
|
||||
msg_info "Backing up Data"
|
||||
cp -r /opt/wger/media /opt/wger_media_backup
|
||||
cp /opt/wger/.env /opt/wger_env_backup
|
||||
msg_ok "Backed up Data"
|
||||
msg_info "Updating $APP to v${RELEASE}"
|
||||
temp_file=$(mktemp)
|
||||
curl -fsSL "https://github.com/wger-project/wger/archive/refs/tags/$RELEASE.tar.gz" -o "$temp_file"
|
||||
tar xzf "$temp_file"
|
||||
cp -rf wger-"$RELEASE"/* /home/wger/src
|
||||
cd /home/wger/src
|
||||
$STD pip install -r requirements_prod.txt --ignore-installed
|
||||
$STD pip install -e .
|
||||
$STD python3 manage.py migrate
|
||||
$STD python3 manage.py collectstatic --no-input
|
||||
$STD yarn install
|
||||
$STD yarn build:css:sass
|
||||
rm -rf "$temp_file"
|
||||
echo "${RELEASE}" >/opt/${APP}_version.txt
|
||||
msg_ok "Updated $APP to v${RELEASE}"
|
||||
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "wger" "wger-project/wger" "tarball"
|
||||
|
||||
msg_info "Restoring Data"
|
||||
cp -r /opt/wger_media_backup/. /opt/wger/media
|
||||
cp /opt/wger_env_backup /opt/wger/.env
|
||||
rm -rf /opt/wger_media_backup /opt/wger_env_backup
|
||||
|
||||
msg_ok "Restored Data"
|
||||
|
||||
msg_info "Updating wger"
|
||||
cd /opt/wger
|
||||
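# set -a auto-exports every variable sourced from .env so the uv/manage.py steps below inherit them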
set -a && source /opt/wger/.env && set +a
|
||||
export DJANGO_SETTINGS_MODULE=settings.main
|
||||
$STD uv pip install .
|
||||
$STD uv run python manage.py migrate
|
||||
$STD uv run python manage.py collectstatic --no-input
|
||||
msg_ok "Updated wger"
|
||||
|
||||
msg_info "Starting Services"
|
||||
systemctl start redis-server nginx celery celery-beat wger
|
||||
msg_ok "Started Services"
|
||||
msg_ok "Updated Successfully"
|
||||
msg_info "Starting Service"
|
||||
systemctl start wger
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Updated successfully!"
|
||||
else
|
||||
msg_ok "No update required. ${APP} is already at v${RELEASE}"
|
||||
fi
|
||||
exit
|
||||
}
|
||||
@@ -69,7 +63,7 @@ start
|
||||
build_container
|
||||
description
|
||||
|
||||
msg_ok "Completed Successfully!\n"
|
||||
msg_ok "Completed successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"
|
||||
|
||||
@@ -1,35 +0,0 @@
|
||||
{
|
||||
"name": "Draw.IO",
|
||||
"slug": "drawio",
|
||||
"categories": [
|
||||
12
|
||||
],
|
||||
"date_created": "2026-02-11",
|
||||
"type": "ct",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 8080,
|
||||
"documentation": "https://www.drawio.com/doc/",
|
||||
"website": "https://www.drawio.com/",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/draw-io.webp",
|
||||
"config_path": "",
|
||||
"description": "draw.io is a configurable diagramming and whiteboarding application, jointly owned and developed by draw.io Ltd (previously named JGraph) and draw.io AG.",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
"script": "ct/drawio.sh",
|
||||
"resources": {
|
||||
"cpu": 1,
|
||||
"ram": 2048,
|
||||
"hdd": 4,
|
||||
"os": "Debian",
|
||||
"version": "13"
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"password": null
|
||||
},
|
||||
"notes": []
|
||||
}
|
||||
@@ -21,7 +21,7 @@
|
||||
"resources": {
|
||||
"cpu": 2,
|
||||
"ram": 1024,
|
||||
"hdd": 6,
|
||||
"hdd": 4,
|
||||
"os": "debian",
|
||||
"version": "13"
|
||||
}
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
{
|
||||
"generated": "2026-02-15T18:07:51Z",
|
||||
"generated": "2026-02-09T06:27:10Z",
|
||||
"versions": [
|
||||
{
|
||||
"slug": "2fauth",
|
||||
@@ -15,13 +15,6 @@
|
||||
"pinned": false,
|
||||
"date": "2025-12-08T14:34:55Z"
|
||||
},
|
||||
{
|
||||
"slug": "adguardhome-sync",
|
||||
"repo": "bakito/adguardhome-sync",
|
||||
"version": "v0.8.2",
|
||||
"pinned": false,
|
||||
"date": "2025-10-24T17:13:47Z"
|
||||
},
|
||||
{
|
||||
"slug": "adventurelog",
|
||||
"repo": "seanmorley15/adventurelog",
|
||||
@@ -67,9 +60,9 @@
|
||||
{
|
||||
"slug": "autobrr",
|
||||
"repo": "autobrr/autobrr",
|
||||
"version": "v1.73.0",
|
||||
"version": "v1.72.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T16:37:28Z"
|
||||
"date": "2026-01-30T12:57:58Z"
|
||||
},
|
||||
{
|
||||
"slug": "autocaliweb",
|
||||
@@ -116,9 +109,9 @@
|
||||
{
|
||||
"slug": "bentopdf",
|
||||
"repo": "alam00000/bentopdf",
|
||||
"version": "v2.2.1",
|
||||
"version": "v2.1.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T16:33:47Z"
|
||||
"date": "2026-02-02T14:30:55Z"
|
||||
},
|
||||
{
|
||||
"slug": "beszel",
|
||||
@@ -193,30 +186,30 @@
|
||||
{
|
||||
"slug": "cleanuparr",
|
||||
"repo": "Cleanuparr/Cleanuparr",
|
||||
"version": "v2.6.2",
|
||||
"version": "v2.5.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T02:15:19Z"
|
||||
"date": "2026-01-11T00:46:17Z"
|
||||
},
|
||||
{
|
||||
"slug": "cloudreve",
|
||||
"repo": "cloudreve/cloudreve",
|
||||
"version": "4.14.1",
|
||||
"version": "4.13.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T01:39:24Z"
|
||||
"date": "2026-02-05T12:53:24Z"
|
||||
},
|
||||
{
|
||||
"slug": "comfyui",
|
||||
"repo": "comfyanonymous/ComfyUI",
|
||||
"version": "v0.13.0",
|
||||
"version": "v0.12.3",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T20:27:38Z"
|
||||
"date": "2026-02-05T07:04:07Z"
|
||||
},
|
||||
{
|
||||
"slug": "commafeed",
|
||||
"repo": "Athou/commafeed",
|
||||
"version": "6.2.0",
|
||||
"version": "6.1.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-09T19:44:58Z"
|
||||
"date": "2026-01-26T15:14:16Z"
|
||||
},
|
||||
{
|
||||
"slug": "configarr",
|
||||
@@ -242,16 +235,16 @@
|
||||
{
|
||||
"slug": "cronicle",
|
||||
"repo": "jhuckaby/Cronicle",
|
||||
"version": "v0.9.106",
|
||||
"version": "v0.9.105",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T17:11:46Z"
|
||||
"date": "2026-02-05T18:16:11Z"
|
||||
},
|
||||
{
|
||||
"slug": "cryptpad",
|
||||
"repo": "cryptpad/cryptpad",
|
||||
"version": "2026.2.0",
|
||||
"version": "2025.9.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T15:39:05Z"
|
||||
"date": "2025-10-22T10:06:29Z"
|
||||
},
|
||||
{
|
||||
"slug": "dawarich",
|
||||
@@ -263,51 +256,44 @@
|
||||
{
|
||||
"slug": "discopanel",
|
||||
"repo": "nickheyer/discopanel",
|
||||
"version": "v1.0.36",
|
||||
"version": "v1.0.35",
|
||||
"pinned": false,
|
||||
"date": "2026-02-09T21:15:44Z"
|
||||
"date": "2026-02-02T05:20:12Z"
|
||||
},
|
||||
{
|
||||
"slug": "dispatcharr",
|
||||
"repo": "Dispatcharr/Dispatcharr",
|
||||
"version": "v0.19.0",
|
||||
"version": "v0.18.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T21:18:10Z"
|
||||
"date": "2026-01-27T17:09:11Z"
|
||||
},
|
||||
{
|
||||
"slug": "docmost",
|
||||
"repo": "docmost/docmost",
|
||||
"version": "v0.25.3",
|
||||
"version": "v0.25.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T02:58:23Z"
|
||||
"date": "2026-02-06T19:50:55Z"
|
||||
},
|
||||
{
|
||||
"slug": "domain-locker",
|
||||
"repo": "Lissy93/domain-locker",
|
||||
"version": "v0.1.4",
|
||||
"version": "v0.1.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T07:41:29Z"
|
||||
"date": "2025-11-14T22:08:23Z"
|
||||
},
|
||||
{
|
||||
"slug": "domain-monitor",
|
||||
"repo": "Hosteroid/domain-monitor",
|
||||
"version": "v1.1.3",
|
||||
"version": "v1.1.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T15:48:18Z"
|
||||
"date": "2025-11-18T11:32:30Z"
|
||||
},
|
||||
{
|
||||
"slug": "donetick",
|
||||
"repo": "donetick/donetick",
|
||||
"version": "v0.1.74",
|
||||
"version": "v0.1.64",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T23:21:45Z"
|
||||
},
|
||||
{
|
||||
"slug": "drawio",
|
||||
"repo": "jgraph/drawio",
|
||||
"version": "v29.3.6",
|
||||
"pinned": false,
|
||||
"date": "2026-01-28T18:25:02Z"
|
||||
"date": "2025-10-03T05:18:24Z"
|
||||
},
|
||||
{
|
||||
"slug": "duplicati",
|
||||
@@ -333,9 +319,9 @@
|
||||
{
|
||||
"slug": "endurain",
|
||||
"repo": "endurain-project/endurain",
|
||||
"version": "v0.17.4",
|
||||
"version": "v0.17.3",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T04:54:22Z"
|
||||
"date": "2026-01-23T22:02:05Z"
|
||||
},
|
||||
{
|
||||
"slug": "ersatztv",
|
||||
@@ -354,9 +340,9 @@
|
||||
{
|
||||
"slug": "firefly",
|
||||
"repo": "firefly-iii/firefly-iii",
|
||||
"version": "v6.4.21",
|
||||
"version": "v6.4.18",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T19:40:46Z"
|
||||
"date": "2026-02-08T07:28:00Z"
|
||||
},
|
||||
{
|
||||
"slug": "fladder",
|
||||
@@ -403,9 +389,9 @@
|
||||
{
|
||||
"slug": "ghostfolio",
|
||||
"repo": "ghostfolio/ghostfolio",
|
||||
"version": "2.239.0",
|
||||
"version": "2.237.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T09:51:16Z"
|
||||
"date": "2026-02-08T13:59:53Z"
|
||||
},
|
||||
{
|
||||
"slug": "gitea",
|
||||
@@ -445,9 +431,9 @@
|
||||
{
|
||||
"slug": "gotify",
|
||||
"repo": "gotify/server",
|
||||
"version": "v2.9.0",
|
||||
"version": "v2.8.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T15:22:31Z"
|
||||
"date": "2026-01-02T11:56:16Z"
|
||||
},
|
||||
{
|
||||
"slug": "grist",
|
||||
@@ -508,9 +494,9 @@
|
||||
{
|
||||
"slug": "homarr",
|
||||
"repo": "homarr-labs/homarr",
|
||||
"version": "v1.53.1",
|
||||
"version": "v1.53.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T19:47:11Z"
|
||||
"date": "2026-02-06T19:42:58Z"
|
||||
},
|
||||
{
|
||||
"slug": "homebox",
|
||||
@@ -543,16 +529,9 @@
|
||||
{
|
||||
"slug": "huntarr",
|
||||
"repo": "plexguide/Huntarr.io",
|
||||
"version": "9.2.4.1",
|
||||
"version": "9.2.3",
|
||||
"pinned": false,
|
||||
"date": "2026-02-12T22:17:47Z"
|
||||
},
|
||||
{
|
||||
"slug": "immich-public-proxy",
|
||||
"repo": "alangrainger/immich-public-proxy",
|
||||
"version": "v1.15.1",
|
||||
"pinned": false,
|
||||
"date": "2026-01-26T08:04:27Z"
|
||||
"date": "2026-02-07T04:44:20Z"
|
||||
},
|
||||
{
|
||||
"slug": "inspircd",
|
||||
@@ -571,23 +550,16 @@
|
||||
{
|
||||
"slug": "invoiceninja",
|
||||
"repo": "invoiceninja/invoiceninja",
|
||||
"version": "v5.12.60",
|
||||
"version": "v5.12.55",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T00:11:31Z"
|
||||
"date": "2026-02-05T01:06:15Z"
|
||||
},
|
||||
{
|
||||
"slug": "jackett",
|
||||
"repo": "Jackett/Jackett",
|
||||
"version": "v0.24.1124",
|
||||
"version": "v0.24.1074",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T05:54:22Z"
|
||||
},
|
||||
{
|
||||
"slug": "jellystat",
|
||||
"repo": "CyferShepard/Jellystat",
|
||||
"version": "V1.1.8",
|
||||
"pinned": false,
|
||||
"date": "2026-02-08T08:15:00Z"
|
||||
"date": "2026-02-09T06:01:19Z"
|
||||
},
|
||||
{
|
||||
"slug": "joplin-server",
|
||||
@@ -599,9 +571,9 @@
|
||||
{
|
||||
"slug": "jotty",
|
||||
"repo": "fccview/jotty",
|
||||
"version": "1.20.0",
|
||||
"version": "1.19.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-12T09:23:30Z"
|
||||
"date": "2026-01-26T21:30:39Z"
|
||||
},
|
||||
{
|
||||
"slug": "kapowarr",
|
||||
@@ -627,9 +599,9 @@
|
||||
{
|
||||
"slug": "keycloak",
|
||||
"repo": "keycloak/keycloak",
|
||||
"version": "26.5.3",
|
||||
"version": "26.5.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T07:30:08Z"
|
||||
"date": "2026-01-23T14:26:58Z"
|
||||
},
|
||||
{
|
||||
"slug": "kimai",
|
||||
@@ -662,9 +634,9 @@
|
||||
{
|
||||
"slug": "kometa",
|
||||
"repo": "Kometa-Team/Kometa",
|
||||
"version": "v2.3.0",
|
||||
"version": "v2.2.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-09T21:26:56Z"
|
||||
"date": "2025-10-06T21:31:07Z"
|
||||
},
|
||||
{
|
||||
"slug": "komga",
|
||||
@@ -711,9 +683,9 @@
|
||||
{
|
||||
"slug": "libretranslate",
|
||||
"repo": "LibreTranslate/LibreTranslate",
|
||||
"version": "v1.9.0",
|
||||
"version": "v1.8.4",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T19:05:48Z"
|
||||
"date": "2026-02-02T17:45:16Z"
|
||||
},
|
||||
{
|
||||
"slug": "lidarr",
|
||||
@@ -746,9 +718,9 @@
|
||||
{
|
||||
"slug": "lubelogger",
|
||||
"repo": "hargata/lubelog",
|
||||
"version": "v1.6.0",
|
||||
"version": "v1.5.8",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T20:16:32Z"
|
||||
"date": "2026-01-26T18:18:03Z"
|
||||
},
|
||||
{
|
||||
"slug": "mafl",
|
||||
@@ -767,9 +739,9 @@
|
||||
{
|
||||
"slug": "mail-archiver",
|
||||
"repo": "s1t5/mail-archiver",
|
||||
"version": "2602.1",
|
||||
"version": "2601.3",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T06:23:11Z"
|
||||
"date": "2026-01-25T12:52:24Z"
|
||||
},
|
||||
{
|
||||
"slug": "managemydamnlife",
|
||||
@@ -781,9 +753,9 @@
|
||||
{
|
||||
"slug": "manyfold",
|
||||
"repo": "manyfold3d/manyfold",
|
||||
"version": "v0.132.1",
|
||||
"version": "v0.132.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-09T22:02:28Z"
|
||||
"date": "2026-01-29T13:53:21Z"
|
||||
},
|
||||
{
|
||||
"slug": "mealie",
|
||||
@@ -795,9 +767,9 @@
|
||||
{
|
||||
"slug": "mediamanager",
|
||||
"repo": "maxdorninger/MediaManager",
|
||||
"version": "v1.12.3",
|
||||
"version": "v1.12.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T16:45:40Z"
|
||||
"date": "2026-02-08T19:18:29Z"
|
||||
},
|
||||
{
|
||||
"slug": "mediamtx",
|
||||
@@ -823,16 +795,16 @@
|
||||
{
|
||||
"slug": "metube",
|
||||
"repo": "alexta69/metube",
|
||||
"version": "2026.02.14",
|
||||
"version": "2026.02.08",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T07:49:11Z"
|
||||
"date": "2026-02-08T17:01:37Z"
|
||||
},
|
||||
{
|
||||
"slug": "miniflux",
|
||||
"repo": "miniflux/v2",
|
||||
"version": "2.2.17",
|
||||
"version": "2.2.16",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T20:30:17Z"
|
||||
"date": "2026-01-07T03:26:27Z"
|
||||
},
|
||||
{
|
||||
"slug": "monica",
|
||||
@@ -844,9 +816,9 @@
|
||||
{
|
||||
"slug": "myip",
|
||||
"repo": "jason5ng32/MyIP",
|
||||
"version": "v5.2.1",
|
||||
"version": "v5.2.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T07:38:47Z"
|
||||
"date": "2026-01-05T05:56:57Z"
|
||||
},
|
||||
{
|
||||
"slug": "mylar3",
|
||||
@@ -865,9 +837,9 @@
|
||||
{
|
||||
"slug": "navidrome",
|
||||
"repo": "navidrome/navidrome",
|
||||
"version": "v0.60.3",
|
||||
"version": "v0.60.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T23:55:04Z"
|
||||
"date": "2026-02-07T19:42:33Z"
|
||||
},
|
||||
{
|
||||
"slug": "netbox",
|
||||
@@ -876,19 +848,12 @@
|
||||
"pinned": false,
|
||||
"date": "2026-02-03T13:54:26Z"
|
||||
},
|
||||
{
|
||||
"slug": "nextcloud-exporter",
|
||||
"repo": "xperimental/nextcloud-exporter",
|
||||
"version": "v0.9.0",
|
||||
"pinned": false,
|
||||
"date": "2025-10-12T20:03:10Z"
|
||||
},
|
||||
{
|
||||
"slug": "nginx-ui",
|
||||
"repo": "0xJacky/nginx-ui",
|
||||
"version": "v2.3.3",
|
||||
"version": "v2.3.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T00:58:14Z"
|
||||
"date": "2025-12-09T09:47:15Z"
|
||||
},
|
||||
{
|
||||
"slug": "nightscout",
|
||||
@@ -963,9 +928,16 @@
|
||||
{
|
||||
"slug": "outline",
|
||||
"repo": "outline/outline",
|
||||
"version": "v1.5.0",
|
||||
"version": "v1.4.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T18:04:16Z"
|
||||
"date": "2026-01-27T23:43:03Z"
|
||||
},
|
||||
{
|
||||
"slug": "overseerr",
|
||||
"repo": "sct/overseerr",
|
||||
"version": "v1.34.0",
|
||||
"pinned": false,
|
||||
"date": "2025-03-26T08:48:34Z"
|
||||
},
|
||||
{
|
||||
"slug": "owncast",
|
||||
@@ -991,9 +963,9 @@
|
||||
{
|
||||
"slug": "pangolin",
|
||||
"repo": "fosrl/pangolin",
|
||||
"version": "1.15.4",
|
||||
"version": "1.15.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T23:01:29Z"
|
||||
"date": "2026-02-05T19:23:58Z"
|
||||
},
|
||||
{
|
||||
"slug": "paperless-ai",
|
||||
@@ -1019,9 +991,9 @@
|
||||
{
|
||||
"slug": "patchmon",
|
||||
"repo": "PatchMon/PatchMon",
|
||||
"version": "v1.4.0",
|
||||
"version": "v1.3.7",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T10:39:03Z"
|
||||
"date": "2025-12-25T11:08:14Z"
|
||||
},
|
||||
{
|
||||
"slug": "paymenter",
|
||||
@@ -1040,16 +1012,16 @@
|
||||
{
|
||||
"slug": "pelican-panel",
|
||||
"repo": "pelican-dev/panel",
|
||||
"version": "v1.0.0-beta32",
|
||||
"version": "v1.0.0-beta31",
|
||||
"pinned": false,
|
||||
"date": "2026-02-09T22:15:44Z"
|
||||
"date": "2026-01-18T22:43:24Z"
|
||||
},
|
||||
{
|
||||
"slug": "pelican-wings",
|
||||
"repo": "pelican-dev/wings",
|
||||
"version": "v1.0.0-beta24",
|
||||
"version": "v1.0.0-beta22",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T16:09:56Z"
|
||||
"date": "2026-01-18T22:38:36Z"
|
||||
},
|
||||
{
|
||||
"slug": "pf2etools",
|
||||
@@ -1065,19 +1037,12 @@
|
||||
"pinned": false,
|
||||
"date": "2025-12-01T05:07:31Z"
|
||||
},
|
||||
{
|
||||
"slug": "pihole-exporter",
|
||||
"repo": "eko/pihole-exporter",
|
||||
"version": "v1.2.0",
|
||||
"pinned": false,
|
||||
"date": "2025-07-29T19:15:37Z"
|
||||
},
|
||||
{
|
||||
"slug": "planka",
|
||||
"repo": "plankanban/planka",
|
||||
"version": "v2.0.0",
|
||||
"version": "v2.0.0-rc.4",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T13:50:10Z"
|
||||
"date": "2025-09-04T12:41:17Z"
|
||||
},
|
||||
{
|
||||
"slug": "plant-it",
|
||||
@@ -1089,9 +1054,9 @@
|
||||
{
|
||||
"slug": "pocketbase",
|
||||
"repo": "pocketbase/pocketbase",
|
||||
"version": "v0.36.3",
|
||||
"version": "v0.36.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T18:38:58Z"
|
||||
"date": "2026-02-01T08:12:42Z"
|
||||
},
|
||||
{
|
||||
"slug": "pocketid",
|
||||
@@ -1124,9 +1089,9 @@
|
||||
{
|
||||
"slug": "prometheus-alertmanager",
|
||||
"repo": "prometheus/alertmanager",
|
||||
"version": "v0.31.1",
|
||||
"version": "v0.31.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T21:28:26Z"
|
||||
"date": "2026-02-02T13:34:15Z"
|
||||
},
|
||||
{
|
||||
"slug": "prometheus-blackbox-exporter",
|
||||
@@ -1166,9 +1131,9 @@
|
||||
{
|
||||
"slug": "pulse",
|
||||
"repo": "rcourtman/Pulse",
|
||||
"version": "v5.1.9",
|
||||
"version": "v5.1.5",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T15:34:40Z"
|
||||
"date": "2026-02-08T12:19:53Z"
|
||||
},
|
||||
{
|
||||
"slug": "pve-scripts-local",
|
||||
@@ -1184,13 +1149,6 @@
|
||||
"pinned": false,
|
||||
"date": "2025-11-19T23:54:34Z"
|
||||
},
|
||||
{
|
||||
"slug": "qbittorrent-exporter",
|
||||
"repo": "martabal/qbittorrent-exporter",
|
||||
"version": "v1.13.2",
|
||||
"pinned": false,
|
||||
"date": "2025-12-13T22:59:03Z"
|
||||
},
|
||||
{
|
||||
"slug": "qdrant",
|
||||
"repo": "qdrant/qdrant",
|
||||
@@ -1212,13 +1170,6 @@
|
||||
"pinned": false,
|
||||
"date": "2025-11-16T22:39:01Z"
|
||||
},
|
||||
{
|
||||
"slug": "radicale",
|
||||
"repo": "Kozea/Radicale",
|
||||
"version": "v3.6.0",
|
||||
"pinned": false,
|
||||
"date": "2026-01-10T06:56:46Z"
|
||||
},
|
||||
{
|
||||
"slug": "rclone",
|
||||
"repo": "rclone/rclone",
|
||||
@@ -1229,9 +1180,9 @@
|
||||
{
|
||||
"slug": "rdtclient",
|
||||
"repo": "rogerfar/rdt-client",
|
||||
"version": "v2.0.120",
|
||||
"version": "v2.0.119",
|
||||
"pinned": false,
|
||||
"date": "2026-02-12T02:53:51Z"
|
||||
"date": "2025-10-13T23:15:11Z"
|
||||
},
|
||||
{
|
||||
"slug": "reactive-resume",
|
||||
@@ -1285,16 +1236,16 @@
|
||||
{
|
||||
"slug": "scanopy",
|
||||
"repo": "scanopy/scanopy",
|
||||
"version": "v0.14.4",
|
||||
"version": "v0.14.3",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T03:57:28Z"
|
||||
"date": "2026-02-04T01:41:01Z"
|
||||
},
|
||||
{
|
||||
"slug": "scraparr",
|
||||
"repo": "thecfu/scraparr",
|
||||
"version": "v3.0.3",
|
||||
"version": "v2.2.5",
|
||||
"pinned": false,
|
||||
"date": "2026-02-12T14:20:56Z"
|
||||
"date": "2025-10-07T12:34:31Z"
|
||||
},
|
||||
{
|
||||
"slug": "seelf",
|
||||
@@ -1303,19 +1254,12 @@
|
||||
"pinned": false,
|
||||
"date": "2025-03-08T10:49:04Z"
|
||||
},
|
||||
{
|
||||
"slug": "seerr",
|
||||
"repo": "seerr-team/seerr",
|
||||
"version": "v3.0.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T19:30:24Z"
|
||||
},
|
||||
{
|
||||
"slug": "semaphore",
|
||||
"repo": "semaphoreui/semaphore",
|
||||
"version": "v2.17.0",
|
||||
"version": "v2.16.51",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T21:08:30Z"
|
||||
"date": "2026-01-12T16:26:38Z"
|
||||
},
|
||||
{
|
||||
"slug": "shelfmark",
|
||||
@@ -1338,13 +1282,6 @@
|
||||
"pinned": false,
|
||||
"date": "2026-01-16T12:08:28Z"
|
||||
},
|
||||
{
|
||||
"slug": "slskd",
|
||||
"repo": "slskd/slskd",
|
||||
"version": "0.24.3",
|
||||
"pinned": false,
|
||||
"date": "2026-01-15T14:40:15Z"
|
||||
},
|
||||
{
|
||||
"slug": "snipeit",
|
||||
"repo": "grokability/snipe-it",
|
||||
@@ -1355,9 +1292,9 @@
|
||||
{
|
||||
"slug": "snowshare",
|
||||
"repo": "TuroYT/snowshare",
|
||||
"version": "v1.3.5",
|
||||
"version": "v1.2.12",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T10:24:51Z"
|
||||
"date": "2026-01-30T13:35:56Z"
|
||||
},
|
||||
{
|
||||
"slug": "sonarr",
|
||||
@@ -1390,9 +1327,9 @@
|
||||
{
|
||||
"slug": "stirling-pdf",
|
||||
"repo": "Stirling-Tools/Stirling-PDF",
|
||||
"version": "v2.4.6",
|
||||
"version": "v2.4.5",
|
||||
"pinned": false,
|
||||
"date": "2026-02-12T00:01:19Z"
|
||||
"date": "2026-02-06T23:12:20Z"
|
||||
},
|
||||
{
|
||||
"slug": "streamlink-webui",
|
||||
@@ -1411,9 +1348,9 @@
|
||||
{
|
||||
"slug": "tandoor",
|
||||
"repo": "TandoorRecipes/recipes",
|
||||
"version": "2.5.3",
|
||||
"version": "2.5.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-14T12:42:14Z"
|
||||
"date": "2026-02-08T13:23:02Z"
|
||||
},
|
||||
{
|
||||
"slug": "tasmoadmin",
|
||||
@@ -1439,9 +1376,9 @@
|
||||
{
|
||||
"slug": "termix",
|
||||
"repo": "Termix-SSH/Termix",
|
||||
"version": "release-1.11.1-tag",
|
||||
"version": "release-1.11.0-tag",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T04:49:16Z"
|
||||
"date": "2026-01-25T02:09:52Z"
|
||||
},
|
||||
{
|
||||
"slug": "the-lounge",
|
||||
@@ -1467,9 +1404,9 @@
|
||||
{
|
||||
"slug": "tianji",
|
||||
"repo": "msgbyte/tianji",
|
||||
"version": "v1.31.13",
|
||||
"version": "v1.31.10",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T16:30:09Z"
|
||||
"date": "2026-02-04T17:21:04Z"
|
||||
},
|
||||
{
|
||||
"slug": "traccar",
|
||||
@@ -1481,9 +1418,9 @@
|
||||
{
|
||||
"slug": "tracearr",
|
||||
"repo": "connorgallopo/Tracearr",
|
||||
"version": "v1.4.17",
|
||||
"version": "v1.4.12",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T01:33:21Z"
|
||||
"date": "2026-01-28T23:29:37Z"
|
||||
},
|
||||
{
|
||||
"slug": "tracktor",
|
||||
@@ -1495,9 +1432,9 @@
|
||||
{
|
||||
"slug": "traefik",
|
||||
"repo": "traefik/traefik",
|
||||
"version": "v3.6.8",
|
||||
"version": "v3.6.7",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T16:44:37Z"
|
||||
"date": "2026-01-14T14:11:45Z"
|
||||
},
|
||||
{
|
||||
"slug": "trilium",
|
||||
@@ -1509,16 +1446,16 @@
|
||||
{
|
||||
"slug": "trip",
|
||||
"repo": "itskovacs/TRIP",
|
||||
"version": "1.40.0",
|
||||
"version": "1.39.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T20:12:53Z"
|
||||
"date": "2026-02-07T16:59:51Z"
|
||||
},
|
||||
{
|
||||
"slug": "tududi",
|
||||
"repo": "chrisvel/tududi",
|
||||
"version": "v0.88.5",
|
||||
"version": "v0.88.4",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T13:54:14Z"
|
||||
"date": "2026-01-20T15:11:58Z"
|
||||
},
|
||||
{
|
||||
"slug": "tunarr",
|
||||
@@ -1558,23 +1495,23 @@
|
||||
{
|
||||
"slug": "upsnap",
|
||||
"repo": "seriousm4x/UpSnap",
|
||||
"version": "5.2.8",
|
||||
"version": "5.2.7",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T00:02:37Z"
|
||||
"date": "2026-01-07T23:48:00Z"
|
||||
},
|
||||
{
|
||||
"slug": "uptimekuma",
|
||||
"repo": "louislam/uptime-kuma",
|
||||
"version": "2.1.1",
|
||||
"version": "2.1.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-13T16:07:33Z"
|
||||
"date": "2026-02-07T02:31:49Z"
|
||||
},
|
||||
{
|
||||
"slug": "vaultwarden",
|
||||
"repo": "dani-garcia/vaultwarden",
|
||||
"version": "1.35.3",
|
||||
"version": "1.35.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T20:37:03Z"
|
||||
"date": "2026-01-09T18:37:04Z"
|
||||
},
|
||||
{
|
||||
"slug": "victoriametrics",
|
||||
@@ -1586,9 +1523,9 @@
|
||||
{
|
||||
"slug": "vikunja",
|
||||
"repo": "go-vikunja/vikunja",
|
||||
"version": "v1.1.0",
|
||||
"version": "v1.0.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-09T10:34:29Z"
|
||||
"date": "2026-01-28T11:12:59Z"
|
||||
},
|
||||
{
|
||||
"slug": "wallabag",
|
||||
@@ -1600,9 +1537,9 @@
|
||||
{
|
||||
"slug": "wallos",
|
||||
"repo": "ellite/Wallos",
|
||||
"version": "v4.6.1",
|
||||
"version": "v4.6.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T21:06:46Z"
|
||||
"date": "2025-12-20T15:57:51Z"
|
||||
},
|
||||
{
|
||||
"slug": "wanderer",
|
||||
@@ -1635,9 +1572,9 @@
|
||||
{
|
||||
"slug": "wavelog",
|
||||
"repo": "wavelog/wavelog",
|
||||
"version": "2.3",
|
||||
"version": "2.2.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T15:46:40Z"
|
||||
"date": "2025-12-31T16:53:34Z"
|
||||
},
|
||||
{
|
||||
"slug": "wealthfolio",
|
||||
@@ -1653,26 +1590,19 @@
|
||||
"pinned": false,
|
||||
"date": "2025-11-11T14:30:28Z"
|
||||
},
|
||||
{
|
||||
"slug": "wger",
|
||||
"repo": "wger-project/wger",
|
||||
"version": "2.4",
|
||||
"pinned": false,
|
||||
"date": "2026-01-18T12:12:02Z"
|
||||
},
|
||||
{
|
||||
"slug": "wikijs",
|
||||
"repo": "requarks/wiki",
|
||||
"version": "v2.5.312",
|
||||
"version": "v2.5.311",
|
||||
"pinned": false,
|
||||
"date": "2026-02-12T02:45:22Z"
|
||||
"date": "2026-01-08T09:50:00Z"
|
||||
},
|
||||
{
|
||||
"slug": "wishlist",
|
||||
"repo": "cmintey/wishlist",
|
||||
"version": "v0.60.0",
|
||||
"version": "v0.59.0",
|
||||
"pinned": false,
|
||||
"date": "2026-02-10T04:05:26Z"
|
||||
"date": "2026-01-19T16:42:14Z"
|
||||
},
|
||||
{
|
||||
"slug": "wizarr",
|
||||
@@ -1698,9 +1628,9 @@
|
||||
{
|
||||
"slug": "yubal",
|
||||
"repo": "guillevc/yubal",
|
||||
"version": "v0.6.0",
|
||||
"version": "v0.4.2",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T17:47:56Z"
|
||||
"date": "2026-02-08T21:35:13Z"
|
||||
},
|
||||
{
|
||||
"slug": "zigbee2mqtt",
|
||||
@@ -1712,9 +1642,9 @@
|
||||
{
|
||||
"slug": "zipline",
|
||||
"repo": "diced/zipline",
|
||||
"version": "v4.4.2",
|
||||
"version": "v4.4.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-11T04:58:54Z"
|
||||
"date": "2026-01-20T01:29:01Z"
|
||||
},
|
||||
{
|
||||
"slug": "zitadel",
|
||||
@@ -1726,9 +1656,9 @@
|
||||
{
|
||||
"slug": "zoraxy",
|
||||
"repo": "tobychui/zoraxy",
|
||||
"version": "v3.3.2-rc1",
|
||||
"version": "v3.3.1",
|
||||
"pinned": false,
|
||||
"date": "2026-02-15T02:16:17Z"
|
||||
"date": "2026-01-28T13:52:02Z"
|
||||
},
|
||||
{
|
||||
"slug": "zwave-js-ui",
|
||||
|
||||
@@ -21,7 +21,7 @@
|
||||
"resources": {
|
||||
"cpu": 1,
|
||||
"ram": 512,
|
||||
"hdd": 4,
|
||||
"hdd": 2,
|
||||
"os": "debian",
|
||||
"version": "13"
|
||||
}
|
||||
@@ -32,7 +32,7 @@
|
||||
"resources": {
|
||||
"cpu": 1,
|
||||
"ram": 256,
|
||||
"hdd": 2,
|
||||
"hdd": 1,
|
||||
"os": "alpine",
|
||||
"version": "3.23"
|
||||
}
|
||||
|
||||
frontend/public/json/jellyseerr.json (new file, 35 lines)
@@ -0,0 +1,35 @@
|
||||
{
|
||||
"name": "Jellyseerr",
|
||||
"slug": "jellyseerr",
|
||||
"categories": [
|
||||
14
|
||||
],
|
||||
"date_created": "2024-05-02",
|
||||
"type": "ct",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 5055,
|
||||
"documentation": "https://docs.jellyseerr.dev/",
|
||||
"website": "https://github.com/Fallenbagel/jellyseerr",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/jellyseerr.webp",
|
||||
"config_path": "/etc/jellyseerr/jellyseerr.conf",
|
||||
"description": "Jellyseerr is a free and open source software application for managing requests for your media library. It is a a fork of Overseerr built to bring support for Jellyfin & Emby media servers.",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
"script": "ct/jellyseerr.sh",
|
||||
"resources": {
|
||||
"cpu": 4,
|
||||
"ram": 4096,
|
||||
"hdd": 8,
|
||||
"os": "debian",
|
||||
"version": "12"
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"password": null
|
||||
},
|
||||
"notes": []
|
||||
}
|
||||
@@ -33,7 +33,7 @@
|
||||
},
|
||||
"notes": [
|
||||
{
|
||||
"text": "Kutt needs so be served with an SSL certificate for its login to work. During install, you will be prompted to choose if you want to have Caddy installed for SSL termination or if you want to use your own reverse proxy (in that case point your reverse proxy to port 3000).",
|
||||
"text": "Kutt needs so be served with an SSL certificate for its login to work. During install, you will be prompted to choose if you want to have Caddy installed for SSL termination or if you want to use your own reverse proxy (in that case point your reverse porxy to port 3000).",
|
||||
"type": "info"
|
||||
}
|
||||
]
|
||||
|
||||
@@ -28,14 +28,10 @@
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"username": "admin",
|
||||
"password": null
|
||||
},
|
||||
"notes": [
|
||||
{
|
||||
"text": "On first visit, the setup wizard will guide you to create an admin account and configure ACME email.",
|
||||
"type": "warning"
|
||||
},
|
||||
{
|
||||
"text": "Nginx runs on ports 80/443, Nginx UI management interface on port 9000.",
|
||||
"type": "info"
|
||||
@@ -43,6 +39,10 @@
|
||||
{
|
||||
"text": "SSL certificates can be managed automatically with Let's Encrypt integration.",
|
||||
"type": "info"
|
||||
},
|
||||
{
|
||||
"text": "Initial Login data: `cat ~/nginx-ui.creds`",
|
||||
"type": "info"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -21,7 +21,7 @@
|
||||
"resources": {
|
||||
"cpu": 4,
|
||||
"ram": 4096,
|
||||
"hdd": 40,
|
||||
"hdd": 35,
|
||||
"os": "Ubuntu",
|
||||
"version": "24.04"
|
||||
}
|
||||
|
||||
@@ -21,7 +21,7 @@
|
||||
"resources": {
|
||||
"cpu": 4,
|
||||
"ram": 8192,
|
||||
"hdd": 50,
|
||||
"hdd": 25,
|
||||
"os": "debian",
|
||||
"version": "13"
|
||||
}
|
||||
|
||||
frontend/public/json/overseerr.json (new file, 35 lines)
@@ -0,0 +1,35 @@
|
||||
{
|
||||
"name": "Overseerr",
|
||||
"slug": "overseerr",
|
||||
"categories": [
|
||||
14
|
||||
],
|
||||
"date_created": "2024-05-02",
|
||||
"type": "ct",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 5055,
|
||||
"documentation": "https://docs.overseerr.dev/",
|
||||
"website": "https://overseerr.dev/",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/overseerr.webp",
|
||||
"config_path": "/opt/overseerr/config/settings.json",
|
||||
"description": "Overseerr is a request management and media discovery tool built to work with your existing Plex ecosystem.",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
"script": "ct/overseerr.sh",
|
||||
"resources": {
|
||||
"cpu": 2,
|
||||
"ram": 4096,
|
||||
"hdd": 8,
|
||||
"os": "debian",
|
||||
"version": "13"
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"password": null
|
||||
},
|
||||
"notes": []
|
||||
}
|
||||
@@ -8,7 +8,7 @@
|
||||
"type": "ct",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 3000,
|
||||
"interface_port": 3399,
|
||||
"documentation": "https://docs.patchmon.net",
|
||||
"website": "https://patchmon.net",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/patchmon.webp",
|
||||
|
||||
@@ -1,35 +1,44 @@
|
||||
{
|
||||
"name": "Prometheus Paperless NGX Exporter",
|
||||
"slug": "prometheus-paperless-ngx-exporter",
|
||||
"categories": [
|
||||
9
|
||||
],
|
||||
"date_created": "2025-02-07",
|
||||
"type": "addon",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 8081,
|
||||
"documentation": "https://github.com/hansmi/prometheus-paperless-exporter",
|
||||
"website": "https://github.com/hansmi/prometheus-paperless-exporter",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/paperless-ngx.webp",
|
||||
"config_path": "/etc/prometheus-paperless-ngx-exporter/config.env",
|
||||
"description": "Prometheus metrics exporter for Paperless-NGX, a document management system transforming physical documents into a searchable online archive. The exporter relies on Paperless' REST API.",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
"script": "tools/addon/prometheus-paperless-ngx-exporter.sh",
|
||||
"resources": {
|
||||
"cpu": null,
|
||||
"ram": null,
|
||||
"hdd": null,
|
||||
"os": null,
|
||||
"version": null
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"password": null
|
||||
},
|
||||
"notes": []
|
||||
"name": "Prometheus Paperless NGX Exporter",
|
||||
"slug": "prometheus-paperless-ngx-exporter",
|
||||
"categories": [
|
||||
9
|
||||
],
|
||||
"date_created": "2025-02-07",
|
||||
"type": "ct",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 8081,
|
||||
"documentation": null,
|
||||
"website": "https://github.com/hansmi/prometheus-paperless-exporter",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/paperless-ngx.webp",
|
||||
"config_path": "",
|
||||
"description": "Prometheus metrics exporter for Paperless-NGX, a document management system transforming physical documents into a searchable online archive. The exporter relies on Paperless' REST API.",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
"script": "ct/prometheus-paperless-ngx-exporter.sh",
|
||||
"resources": {
|
||||
"cpu": 1,
|
||||
"ram": 256,
|
||||
"hdd": 2,
|
||||
"os": "debian",
|
||||
"version": "13"
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"password": null
|
||||
},
|
||||
"notes": [
|
||||
{
|
||||
"text": "Please adjust the Paperless URL in the systemd unit file: /etc/systemd/system/prometheus-paperless-ngx-exporter.service",
|
||||
"type": "info"
|
||||
},
|
||||
{
|
||||
"text": "Please adjust the Paperless authentication token in the configuration file: /etc/prometheus-paperless-ngx-exporter/paperless_auth_token_file",
|
||||
"type": "info"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -12,7 +12,7 @@
|
||||
"documentation": "https://radicale.org/master.html#documentation-1",
|
||||
"website": "https://radicale.org/",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/radicale.webp",
|
||||
"config_path": "/etc/radicale/config",
|
||||
"config_path": "/etc/radicale/config or ~/.config/radicale/config",
|
||||
"description": "Radicale is a small but powerful CalDAV (calendars, to-do lists) and CardDAV (contacts)",
|
||||
"install_methods": [
|
||||
{
|
||||
|
||||
@@ -1,35 +0,0 @@
|
||||
{
|
||||
"name": "Seerr",
|
||||
"slug": "seerr",
|
||||
"categories": [
|
||||
13
|
||||
],
|
||||
"date_created": "2026-02-15",
|
||||
"type": "ct",
|
||||
"updateable": true,
|
||||
"privileged": false,
|
||||
"interface_port": 5055,
|
||||
"documentation": "https://docs.seerr.dev/",
|
||||
"website": "https://seerr.dev/",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/seerr.webp",
|
||||
"config_path": "/etc/seerr/seerr.conf",
|
||||
"description": "Open-source media request and discovery manager for Jellyfin, Plex, and Emby. Unified version of Overseerr and Jellyseerr.",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
"script": "ct/seerr.sh",
|
||||
"resources": {
|
||||
"cpu": 4,
|
||||
"ram": 4096,
|
||||
"hdd": 12,
|
||||
"os": "Debian",
|
||||
"version": "13"
|
||||
}
|
||||
}
|
||||
],
|
||||
"default_credentials": {
|
||||
"username": null,
|
||||
"password": null
|
||||
},
|
||||
"notes": []
|
||||
}
|
||||
@@ -1,5 +1,5 @@
|
||||
{
|
||||
"name": "Slskd",
|
||||
"name": "slskd",
|
||||
"slug": "slskd",
|
||||
"categories": [
|
||||
11
|
||||
@@ -35,6 +35,10 @@
|
||||
{
|
||||
"text": "See /opt/slskd/config/slskd.yml to add your Soulseek credentials",
|
||||
"type": "info"
|
||||
},
|
||||
{
|
||||
"text": "This LXC includes Soularr; it needs to be configured (/opt/soularr/config.ini) before it will work",
|
||||
"type": "info"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@@ -10,7 +10,7 @@
|
||||
"privileged": false,
|
||||
"interface_port": 3000,
|
||||
"documentation": "https://github.com/TuroYT/snowshare",
|
||||
"config_path": "/opt/snowshare.env",
|
||||
"config_path": "/opt/snowshare/.env",
|
||||
"website": "https://github.com/TuroYT/snowshare",
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/png/snowshare.png",
|
||||
"description": "A modern, secure file and link sharing platform built with Next.js, Prisma, and NextAuth. Share URLs, code snippets, and files with customizable expiration, privacy, and QR codes.",
|
||||
|
||||
@@ -32,10 +32,6 @@
|
||||
"password": null
|
||||
},
|
||||
"notes": [
|
||||
{
|
||||
"text": "SQL Server (2025) SQLPAL is incompatible with Proxmox VE 9 (Kernel 6.12+) in LXC containers. Use a VM instead or the SQL-Server 2022 LXC.",
|
||||
"type": "warning"
|
||||
},
|
||||
{
|
||||
"text": "If you choose not to run the installation setup, execute: `/opt/mssql/bin/mssql-conf setup` in LXC shell.",
|
||||
"type": "info"
|
||||
|
||||
@@ -14,8 +14,6 @@
|
||||
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/ubiquiti-unifi.webp",
|
||||
"config_path": "",
|
||||
"description": "UniFi Network Server is a software that helps manage and monitor UniFi networks (Wi-Fi, Ethernet, etc.) by providing an intuitive user interface and advanced features. It allows network administrators to configure, monitor, and upgrade network devices, as well as view network statistics, client devices, and historical events. The aim of the application is to make the management of UniFi networks easier and more efficient.",
|
||||
"disable": true,
|
||||
"disable_description": "This script is disabled because UniFi no longer delivers APT packages for Debian systems. The installation relies on APT repositories that are no longer maintained or available. For more details, see: https://github.com/community-scripts/ProxmoxVE/issues/11876",
|
||||
"install_methods": [
|
||||
{
|
||||
"type": "default",
|
||||
|
||||
@@ -19,9 +19,9 @@
|
||||
"type": "default",
|
||||
"script": "ct/wger.sh",
|
||||
"resources": {
|
||||
"cpu": 2,
|
||||
"ram": 2048,
|
||||
"hdd": 8,
|
||||
"cpu": 1,
|
||||
"ram": 1024,
|
||||
"hdd": 6,
|
||||
"os": "debian",
|
||||
"version": "13"
|
||||
}
|
||||
@@ -33,7 +33,7 @@
|
||||
},
|
||||
"notes": [
|
||||
{
|
||||
"text": "This LXC also runs Celery and Redis to synchronize workouts and ingredients",
|
||||
"text": "Enable proxy support by uncommenting this line in `/home/wger/src/settings.py` and pointing it to your URL: `# CSRF_TRUSTED_ORIGINS = ['http://127.0.0.1', 'https://my.domain.example.com']`, then restart the service `systemctl restart wger`.",
|
||||
"type": "info"
|
||||
}
|
||||
]
|
||||
|
||||
@@ -16,8 +16,7 @@ update_os
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y \
|
||||
python3-pip \
|
||||
python3-libtorrent \
|
||||
python3-setuptools
|
||||
python3-libtorrent
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
msg_info "Installing Deluge"
|
||||
|
||||
@@ -37,7 +37,7 @@ fetch_and_deploy_gh_release "dispatcharr" "Dispatcharr/Dispatcharr" "tarball"
|
||||
msg_info "Installing Python Dependencies with uv"
|
||||
cd /opt/dispatcharr
|
||||
$STD uv venv --clear
|
||||
$STD uv sync
|
||||
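# --index-strategy unsafe-best-match lets uv consider every configured package index and pick the best matching version, instead of stopping at the first index that carries a package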
$STD uv pip install -r requirements.txt --index-strategy unsafe-best-match
|
||||
$STD uv pip install gunicorn gevent celery redis daphne
|
||||
msg_ok "Installed Python Dependencies"
|
||||
|
||||
|
||||
@@ -1,25 +0,0 @@
|
||||
#!/usr/bin/env bash

# Copyright (c) 2021-2026 community-scripts ORG
# Author: Slaviša Arežina (tremor021)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.drawio.com/

source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
setup_hwaccel

msg_info "Installing Dependencies"
$STD apt install -y tomcat11
msg_ok "Installed Dependencies"

USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "drawio" "jgraph/drawio" "singlefile" "latest" "/var/lib/tomcat11/webapps" "draw.war"

motd_ssh
customize
cleanup_lxc
|
||||
@@ -31,10 +31,8 @@ setup_deb822_repo "matrix-org" \
|
||||
"main"
|
||||
echo "matrix-synapse-py3 matrix-synapse/server-name string $servername" | debconf-set-selections
|
||||
echo "matrix-synapse-py3 matrix-synapse/report-stats boolean false" | debconf-set-selections
|
||||
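# A policy-rc.d that exits 101 keeps apt from auto-starting matrix-synapse during package installation; it is deleted again right after the install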
echo "exit 101" >/usr/sbin/policy-rc.d
|
||||
chmod +x /usr/sbin/policy-rc.d
|
||||
$STD apt install matrix-synapse-py3 -y
|
||||
rm -f /usr/sbin/policy-rc.d
|
||||
systemctl stop matrix-synapse
|
||||
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/matrix-synapse/homeserver.yaml
|
||||
sed -i 's/'\''::1'\'', //g' /etc/matrix-synapse/homeserver.yaml
|
||||
SECRET=$(openssl rand -hex 32)
|
||||
|
||||
@@ -38,18 +38,6 @@ rm -f "$DEB_FILE"
|
||||
echo "$LATEST_VERSION" >~/.emqx
|
||||
msg_ok "Installed EMQX"
|
||||
|
||||
read -r -p "${TAB3}Would you like to disable the EMQX MQ feature? (reduces disk/CPU usage) <y/N> " prompt
|
||||
if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
|
||||
msg_info "Disabling EMQX MQ feature"
|
||||
mkdir -p /etc/emqx
|
||||
if ! grep -q "^mq.enable" /etc/emqx/emqx.conf 2>/dev/null; then
|
||||
echo "mq.enable = false" >>/etc/emqx/emqx.conf
|
||||
else
|
||||
sed -i 's/^mq.enable.*/mq.enable = false/' /etc/emqx/emqx.conf
|
||||
fi
|
||||
msg_ok "Disabled EMQX MQ feature"
|
||||
fi
|
||||
|
||||
msg_info "Starting EMQX service"
|
||||
$STD systemctl enable -q --now emqx
|
||||
msg_ok "Enabled EMQX service"
|
||||
|
||||
@@ -15,30 +15,31 @@ network_check
|
||||
update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y \
|
||||
$STD apt-get install -y \
|
||||
ffmpeg \
|
||||
jq \
|
||||
imagemagick
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
setup_hwaccel
|
||||
|
||||
msg_info "Installing ASP.NET Core Runtime"
|
||||
setup_deb822_repo \
|
||||
"microsoft" \
|
||||
"https://packages.microsoft.com/keys/microsoft-2025.asc" \
|
||||
"https://packages.microsoft.com/debian/13/prod/" \
|
||||
"trixie"
|
||||
$STD apt install -y aspnetcore-runtime-8.0
|
||||
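# Register the Microsoft APT repository via the packages-microsoft-prod.deb bootstrap package, then pull the ASP.NET Core 8.0 runtime from it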
curl -fsSL https://packages.microsoft.com/config/debian/13/packages-microsoft-prod.deb -o packages-microsoft-prod.deb
|
||||
$STD dpkg -i packages-microsoft-prod.deb
|
||||
rm -rf packages-microsoft-prod.deb
|
||||
$STD apt-get update
|
||||
$STD apt-get install -y aspnetcore-runtime-8.0
|
||||
msg_ok "Installed ASP.NET Core Runtime"
|
||||
|
||||
fetch_and_deploy_from_url "https://fileflows.com/downloads/zip" "/opt/fileflows"
|
||||
|
||||
msg_info "Setup FileFlows"
|
||||
$STD ln -svf /usr/bin/ffmpeg /usr/local/bin/ffmpeg
|
||||
$STD ln -svf /usr/bin/ffprobe /usr/local/bin/ffprobe
|
||||
cd /opt/fileflows/Server
|
||||
dotnet FileFlows.Server.dll --systemd install --root true
|
||||
temp_file=$(mktemp)
|
||||
curl -fsSL https://fileflows.com/downloads/zip -o "$temp_file"
|
||||
$STD unzip -d /opt/fileflows "$temp_file"
|
||||
$STD bash -c "cd /opt/fileflows/Server && dotnet FileFlows.Server.dll --systemd install --root true"
|
||||
systemctl enable -q --now fileflows
|
||||
rm -f "$temp_file"
|
||||
msg_ok "Setup FileFlows"
|
||||
|
||||
motd_ssh
|
||||
|
||||
@@ -289,7 +289,7 @@ ML_DIR="${APP_DIR}/machine-learning"
|
||||
GEO_DIR="${INSTALL_DIR}/geodata"
|
||||
mkdir -p {"${APP_DIR}","${UPLOAD_DIR}","${GEO_DIR}","${INSTALL_DIR}"/cache}
|
||||
|
||||
fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.5.6" "$SRC_DIR"
|
||||
fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.5.5" "$SRC_DIR"
|
||||
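# Read the pnpm version pinned in Immich's package.json ("packageManager": "pnpm@<version>") so the matching pnpm release is installed alongside Node 24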
PNPM_VERSION="$(jq -r '.packageManager | split("@")[1]' ${SRC_DIR}/package.json)"
|
||||
NODE_VERSION="24" NODE_MODULE="pnpm@${PNPM_VERSION}" setup_nodejs
|
||||
|
||||
|
||||
install/jellyseerr-install.sh (new file, 64 lines)
@@ -0,0 +1,64 @@
|
||||
#!/usr/bin/env bash

# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://docs.jellyseerr.dev/

source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os

msg_info "Installing Dependencies"
$STD apt-get install -y \
  git \
  build-essential
msg_ok "Installed Dependencies"

git clone -q https://github.com/Fallenbagel/jellyseerr.git /opt/jellyseerr
cd /opt/jellyseerr
$STD git checkout main

pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/jellyseerr/package.json)
NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs

msg_info "Installing Jellyseerr (Patience)"
export CYPRESS_INSTALL_BINARY=0
cd /opt/jellyseerr
$STD pnpm install --frozen-lockfile
export NODE_OPTIONS="--max-old-space-size=3072"
$STD pnpm build
mkdir -p /etc/jellyseerr/
cat <<EOF >/etc/jellyseerr/jellyseerr.conf
PORT=5055
# HOST=0.0.0.0
# JELLYFIN_TYPE=emby
EOF
msg_ok "Installed Jellyseerr"

msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/jellyseerr.service
[Unit]
Description=jellyseerr Service
After=network.target

[Service]
EnvironmentFile=/etc/jellyseerr/jellyseerr.conf
Environment=NODE_ENV=production
Type=exec
WorkingDirectory=/opt/jellyseerr
ExecStart=/usr/bin/node dist/index.js

[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now jellyseerr
msg_ok "Created Service"

motd_ssh
customize
cleanup_lxc
|
||||
@@ -20,19 +20,10 @@ msg_ok "Installed Docker"
|
||||
msg_info "Detecting latest Kasm Workspaces release"
|
||||
KASM_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_[0-9]+\.[0-9]+\.[0-9]+\.[a-z0-9]+\.tar\.gz' | head -n 1)
|
||||
if [[ -z "$KASM_URL" ]]; then
|
||||
SERVICE_IMAGE_URL=$(curl -fsSL "https://www.kasm.com/downloads" | tr '\n' ' ' | grep -oE 'https://kasm-static-content[^"]*kasm_release_service_images_amd64_[0-9]+\.[0-9]+\.[0-9]+\.tar\.gz' | head -n 1)
|
||||
if [[ -n "$SERVICE_IMAGE_URL" ]]; then
|
||||
KASM_VERSION=$(echo "$SERVICE_IMAGE_URL" | sed -E 's/.*kasm_release_service_images_amd64_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
|
||||
KASM_URL="https://kasm-static-content.s3.amazonaws.com/kasm_release_${KASM_VERSION}.tar.gz"
|
||||
fi
|
||||
else
|
||||
KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
|
||||
fi
|
||||
|
||||
if [[ -z "$KASM_URL" ]] || [[ -z "$KASM_VERSION" ]]; then
|
||||
msg_error "Unable to detect latest Kasm release URL."
|
||||
exit 1
|
||||
fi
|
||||
KASM_VERSION=$(echo "$KASM_URL" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
|
||||
msg_ok "Detected Kasm Workspaces version $KASM_VERSION"
|
||||
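The version number is recovered from whichever URL was found by stripping everything around the x.y.z part; trying the same sed expression on a made-up release URL shows what it returns (version 1.16.1 chosen only for illustration):
echo "https://kasm-static-content.s3.amazonaws.com/kasm_release_1.16.1.tar.gz" | sed -E 's/.*kasm_release_([0-9]+\.[0-9]+\.[0-9]+).*/\1/'
# prints: 1.16.1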
|
||||
msg_warn "WARNING: This script will run an external installer from a third-party source (https://www.kasmweb.com/)."
|
||||
|
||||
@@ -37,13 +37,18 @@ PYTHON_VERSION="3.12" setup_uv
|
||||
fetch_and_deploy_gh_release "libretranslate" "LibreTranslate/LibreTranslate" "tarball"
|
||||
|
||||
msg_info "Setup LibreTranslate (Patience)"
|
||||
TORCH_VERSION=$(grep -Eo '"torch ==[0-9]+\.[0-9]+\.[0-9]+' /opt/libretranslate/pyproject.toml |
|
||||
tail -n1 | sed 's/.*==//')
|
||||
if [[ -z "$TORCH_VERSION" ]]; then
|
||||
TORCH_VERSION="2.5.0"
|
||||
fi
|
||||
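As a quick illustration of what the pipeline above extracts, a pin such as "torch ==2.5.1" in pyproject.toml (version number purely illustrative) reduces to just the version string:
# sample input run through the same pattern as above
echo '"torch ==2.5.1",' | grep -Eo '"torch ==[0-9]+\.[0-9]+\.[0-9]+' | sed 's/.*==//'
# prints: 2.5.1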
cd /opt/libretranslate
|
||||
$STD uv venv --clear .venv --python 3.12
|
||||
$STD source .venv/bin/activate
|
||||
$STD uv pip install --upgrade pip
|
||||
$STD uv pip install "setuptools<81"
|
||||
$STD uv pip install --upgrade pip setuptools
|
||||
$STD uv pip install Babel==2.12.1
|
||||
$STD .venv/bin/python scripts/compile_locales.py
|
||||
$STD uv pip install "torch==${TORCH_VERSION}" --extra-index-url https://download.pytorch.org/whl/cpu
|
||||
$STD uv pip install "numpy<2"
|
||||
$STD uv pip install .
|
||||
$STD uv pip install libretranslate
|
||||
|
||||
@@ -30,19 +30,29 @@ msg_ok "Installed Nginx UI"
|
||||
msg_info "Configuring Nginx UI"
|
||||
mkdir -p /usr/local/etc/nginx-ui
|
||||
cat <<EOF >/usr/local/etc/nginx-ui/app.ini
|
||||
[server]
|
||||
HttpHost = 0.0.0.0
|
||||
HttpPort = 9000
|
||||
RunMode = release
|
||||
JwtSecret = $(openssl rand -hex 32)
|
||||
|
||||
[nginx]
|
||||
AccessLogPath = /var/log/nginx/access.log
|
||||
ErrorLogPath = /var/log/nginx/error.log
|
||||
ConfigDir = /etc/nginx
|
||||
PIDPath = /run/nginx.pid
|
||||
TestConfigCmd = nginx -t
|
||||
ReloadCmd = nginx -s reload
|
||||
RestartCmd = systemctl restart nginx
|
||||
|
||||
[app]
|
||||
PageSize = 10
|
||||
|
||||
[server]
|
||||
Host = 0.0.0.0
|
||||
Port = 9000
|
||||
RunMode = release
|
||||
|
||||
[cert]
|
||||
HTTPChallengePort = 9180
|
||||
|
||||
[terminal]
|
||||
StartCmd = login
|
||||
Email =
|
||||
CADir =
|
||||
RenewalInterval = 7
|
||||
RecursiveNameservers =
|
||||
EOF
|
||||
msg_ok "Configured Nginx UI"
|
||||
|
||||
@@ -68,6 +78,17 @@ EOF
|
||||
systemctl daemon-reload
|
||||
msg_ok "Created Service"
|
||||
|
||||
msg_info "Creating Initial Admin User"
|
||||
systemctl start nginx-ui
|
||||
sleep 3
|
||||
systemctl stop nginx-ui
|
||||
sleep 1
|
||||
/usr/local/bin/nginx-ui reset-password --config /usr/local/etc/nginx-ui/app.ini &>/tmp/nginx-ui-reset.log || true
|
||||
ADMIN_PASS=$(grep -oP 'Password: \K\S+' /tmp/nginx-ui-reset.log || echo "admin")
|
||||
echo -e "Nginx-UI Credentials\nUsername: admin\nPassword: $ADMIN_PASS" >~/nginx-ui.creds
|
||||
rm -f /tmp/nginx-ui-reset.log
|
||||
msg_ok "Created Initial Admin User"
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl enable -q --now nginx-ui
|
||||
rm -rf /etc/nginx/sites-enabled/default
|
||||
|
||||
@@ -38,10 +38,6 @@ for server in "${servers[@]}"; do
|
||||
fi
|
||||
done
|
||||
|
||||
msg_info "Installing dependencies"
|
||||
$STD apt install -y inotify-tools
|
||||
msg_ok "Installed dependencies"
|
||||
|
||||
msg_info "Installing Collabora Online"
|
||||
curl -fsSL https://collaboraoffice.com/downloads/gpg/collaboraonline-release-keyring.gpg -o /etc/apt/keyrings/collaboraonline-release-keyring.gpg
|
||||
cat <<EOF >/etc/apt/sources.list.d/colloboraonline.sources
|
||||
@@ -152,15 +148,8 @@ COLLABORATION_JWT_SECRET=
|
||||
# FRONTEND_FULL_TEXT_SEARCH_ENABLED=true
|
||||
# SEARCH_EXTRACTOR_TIKA_TIKA_URL=<your-tika-url>
|
||||
|
||||
## Uncomment below to enable PosixFS Collaborative Mode
|
||||
## Increase inotify watch/instance limits on your PVE host:
|
||||
### sysctl -w fs.inotify.max_user_watches=1048576
|
||||
### sysctl -w fs.inotify.max_user_instances=1024
|
||||
# STORAGE_USERS_POSIX_ENABLE_COLLABORATION=true
|
||||
# STORAGE_USERS_POSIX_WATCH_TYPE=inotifywait
|
||||
# STORAGE_USERS_POSIX_WATCH_FS=true
|
||||
# STORAGE_USERS_POSIX_WATCH_PATH=<path-to-storage-or-bind-mount>
|
||||
## User files location - experimental - use at your own risk! - ZFS, NFS v4.2+ supported - CIFS/SMB not supported
|
||||
## External storage test - Only NFS v4.2+ is supported
|
||||
## User files
|
||||
# STORAGE_USERS_POSIX_ROOT=<path-to-your-bind_mount>
|
||||
EOF
|
||||
|
||||
|
||||
@@ -24,7 +24,7 @@ setup_hwaccel
|
||||
PYTHON_VERSION="3.12" setup_uv
|
||||
|
||||
msg_info "Installing Open WebUI"
|
||||
$STD uv tool install --python 3.12 --constraint <(echo "numba>=0.60") open-webui[all]
|
||||
$STD uv tool install --python 3.12 open-webui[all]
|
||||
msg_ok "Installed Open WebUI"
|
||||
|
||||
read -r -p "${TAB3}Would you like to add Ollama? <y/N> " prompt
|
||||
|
||||
44 install/overseerr-install.sh (Normal file)
@@ -0,0 +1,44 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright (c) 2021-2026 tteck
|
||||
# Author: tteck (tteckster)
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://overseerr.dev/
|
||||
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
verb_ip6
|
||||
catch_errors
|
||||
setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
|
||||
NODE_VERSION="22" NODE_MODULE="yarn@latest" setup_nodejs
|
||||
fetch_and_deploy_gh_release "overseerr" "sct/overseerr" "tarball"
|
||||
|
||||
msg_info "Configuring Overseerr (Patience)"
|
||||
cd /opt/overseerr
|
||||
$STD yarn install
|
||||
$STD yarn build
|
||||
msg_ok "Configured Overseerr"
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/overseerr.service
|
||||
[Unit]
|
||||
Description=Overseerr Service
|
||||
After=network.target
|
||||
|
||||
[Service]
|
||||
Type=exec
|
||||
WorkingDirectory=/opt/overseerr
|
||||
ExecStart=/usr/bin/yarn start
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
systemctl enable -q --now overseerr
|
||||
msg_ok "Created Service"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
cleanup_lxc
|
||||
@@ -178,7 +178,7 @@ http:
|
||||
servers:
|
||||
- url: "http://$LOCAL_IP:3000"
|
||||
EOF
|
||||
$STD npm run db:push
|
||||
$STD npm run db:sqlite:push
|
||||
|
||||
. /etc/os-release
|
||||
if [ "$VERSION_CODENAME" = "trixie" ]; then
|
||||
|
||||
@@ -27,42 +27,162 @@ PG_DB_NAME="patchmon_db" PG_DB_USER="patchmon_usr" setup_postgresql_db
|
||||
fetch_and_deploy_gh_release "PatchMon" "PatchMon/PatchMon" "tarball" "latest" "/opt/patchmon"
|
||||
|
||||
msg_info "Configuring PatchMon"
|
||||
VERSION=$(get_latest_github_release "PatchMon/PatchMon")
|
||||
export NODE_ENV=production
|
||||
cd /opt/patchmon
|
||||
export NODE_ENV=production
|
||||
$STD npm install --no-audit --no-fund --no-save --ignore-scripts
|
||||
cd /opt/patchmon/backend
|
||||
$STD npm install --no-audit --no-fund --no-save --ignore-scripts
|
||||
|
||||
cd /opt/patchmon/frontend
|
||||
cat <<EOF >./.env
|
||||
VITE_API_URL=http://${LOCAL_IP}:3001/api/v1
|
||||
VITE_APP_NAME=PatchMon
|
||||
VITE_APP_VERSION=${VERSION}
|
||||
EOF
|
||||
$STD npm install --no-audit --no-fund --no-save --ignore-scripts --include=dev
|
||||
$STD npm install --include=dev --no-audit --no-fund --no-save --ignore-scripts
|
||||
$STD npm run build
|
||||
JWT_SECRET="$(openssl rand -base64 64 | tr -d "=+/" | cut -c1-50)"
|
||||
cat <<EOF >/opt/patchmon/backend/.env
|
||||
# Database Configuration
|
||||
DATABASE_URL="postgresql://$PG_DB_USER:$PG_DB_PASS@localhost:5432/$PG_DB_NAME"
|
||||
PM_DB_CONN_MAX_ATTEMPTS=30
|
||||
PM_DB_CONN_WAIT_INTERVAL=2
|
||||
|
||||
JWT_SECRET="$(openssl rand -hex 64)"
|
||||
mv /opt/patchmon/backend/env.example /opt/patchmon/backend/.env
|
||||
sed -i -e "s|DATABASE_URL=.*|DATABASE_URL=\"postgresql://$PG_DB_USER:$PG_DB_PASS@localhost:5432/$PG_DB_NAME\"|" \
|
||||
-e "/JWT_SECRET/s/[=$].*/=$JWT_SECRET/" \
|
||||
-e "\|CORS_ORIGIN|s|localhost|$LOCAL_IP|" \
|
||||
-e "/PORT=3001/aSERVER_PROTOCOL=http \\
|
||||
SERVER_HOST=$LOCAL_IP \\
|
||||
SERVER_PORT=3000" \
|
||||
-e '/_ENV=production/aTRUST_PROXY=1' \
|
||||
-e '/REDIS_USER=.*/,+1d' /opt/patchmon/backend/.env
|
||||
# JWT Configuration
|
||||
JWT_SECRET="$JWT_SECRET"
|
||||
JWT_EXPIRES_IN=1h
|
||||
JWT_REFRESH_EXPIRES_IN=7d
|
||||
|
||||
# Server Configuration
|
||||
PORT=3399
|
||||
NODE_ENV=production
|
||||
|
||||
# API Configuration
|
||||
API_VERSION=v1
|
||||
|
||||
# CORS Configuration
|
||||
CORS_ORIGIN="http://$LOCAL_IP"
|
||||
|
||||
# Session Configuration
|
||||
SESSION_INACTIVITY_TIMEOUT_MINUTES=30
|
||||
|
||||
# User Configuration
|
||||
DEFAULT_USER_ROLE=user
|
||||
|
||||
# Rate Limiting (times in milliseconds)
|
||||
RATE_LIMIT_WINDOW_MS=900000
|
||||
RATE_LIMIT_MAX=5000
|
||||
AUTH_RATE_LIMIT_WINDOW_MS=600000
|
||||
AUTH_RATE_LIMIT_MAX=500
|
||||
AGENT_RATE_LIMIT_WINDOW_MS=60000
|
||||
AGENT_RATE_LIMIT_MAX=1000
|
||||
|
||||
# Redis Configuration
|
||||
REDIS_HOST=localhost
|
||||
REDIS_PORT=6379
|
||||
|
||||
# Logging
|
||||
LOG_LEVEL=info
|
||||
ENABLE_LOGGING=true
|
||||
|
||||
# TFA Configuration
|
||||
TFA_REMEMBER_ME_EXPIRES_IN=30d
|
||||
TFA_MAX_REMEMBER_SESSIONS=5
|
||||
TFA_SUSPICIOUS_ACTIVITY_THRESHOLD=3
|
||||
EOF
|
||||
|
||||
cat <<EOF >/opt/patchmon/frontend/.env
|
||||
VITE_API_URL=http://$LOCAL_IP/api/v1
|
||||
VITE_APP_NAME=PatchMon
|
||||
VITE_APP_VERSION=1.3.0
|
||||
EOF
|
||||
|
||||
cd /opt/patchmon/backend
|
||||
$STD npm run db:generate
|
||||
$STD npx prisma migrate deploy
|
||||
$STD npx prisma generate
|
||||
msg_ok "Configured PatchMon"
|
||||
|
||||
msg_info "Configuring Nginx"
|
||||
cp /opt/patchmon/docker/nginx.conf.template /etc/nginx/sites-available/patchmon.conf
|
||||
sed -i -e 's|proxy_pass .*|proxy_pass http://127.0.0.1:3001;|' \
|
||||
-e '\|try_files |i\ root /opt/patchmon/frontend/dist;' \
|
||||
-e 's|alias.*|alias /opt/patchmon/frontend/dist/assets;|' \
|
||||
-e '\|expires 1y|i\ root /opt/patchmon/frontend/dist;' /etc/nginx/sites-available/patchmon.conf
|
||||
cat <<EOF >/etc/nginx/sites-available/patchmon.conf
|
||||
map \$http_x_forwarded_proto \$proxy_corrected_scheme {
|
||||
default \$scheme; # Fallback to Nginx's actual connection scheme if no X-Forwarded-Proto header was set
|
||||
https https; # If X-Forwarded-Proto is 'https', use 'https'
|
||||
http http; # If X-Forwarded-Proto is 'http', use 'http'
|
||||
}
|
||||
|
||||
server {
|
||||
# Listen on both IPv4 and IPv6 (with all hostnames)
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
# Security headers
|
||||
add_header X-Frame-Options DENY always;
|
||||
add_header X-Content-Type-Options nosniff always;
|
||||
add_header X-XSS-Protection "1; mode=block" always;
|
||||
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
|
||||
|
||||
# Frontend
|
||||
location / {
|
||||
root /opt/patchmon/frontend/dist;
|
||||
try_files \$uri \$uri/ /index.html;
|
||||
}
|
||||
|
||||
# Bull Board proxy
|
||||
location /bullboard {
|
||||
proxy_pass http://127.0.0.1:3399;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade \$http_upgrade;
|
||||
proxy_set_header Connection 'upgrade';
|
||||
proxy_set_header Host \$host;
|
||||
proxy_set_header X-Real-IP \$remote_addr;
|
||||
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto \$proxy_corrected_scheme;
|
||||
proxy_set_header X-Forwarded-Host \$host;
|
||||
proxy_set_header Cookie \$http_cookie;
|
||||
proxy_cache_bypass \$http_upgrade;
|
||||
proxy_read_timeout 300s;
|
||||
proxy_connect_timeout 75s;
|
||||
|
||||
# Enable cookie passthrough
|
||||
proxy_pass_header Set-Cookie;
|
||||
proxy_cookie_path / /;
|
||||
|
||||
# Preserve original client IP
|
||||
proxy_set_header X-Original-Forwarded-For \$http_x_forwarded_for;
|
||||
if (\$request_method = 'OPTIONS') {
|
||||
return 204;
|
||||
}
|
||||
}
|
||||
|
||||
# API proxy
|
||||
location /api/ {
|
||||
proxy_pass http://127.0.0.1:3399;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade \$http_upgrade;
|
||||
proxy_set_header Connection 'upgrade';
|
||||
proxy_set_header Host \$host;
|
||||
proxy_set_header X-Real-IP \$remote_addr;
|
||||
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto \$proxy_corrected_scheme;
|
||||
proxy_cache_bypass \$http_upgrade;
|
||||
proxy_read_timeout 300s;
|
||||
proxy_connect_timeout 75s;
|
||||
|
||||
# Preserve original client IP
|
||||
proxy_set_header X-Original-Forwarded-For \$http_x_forwarded_for;
|
||||
if (\$request_method = 'OPTIONS') {
|
||||
return 204;
|
||||
}
|
||||
}
|
||||
|
||||
# Static assets caching (exclude Bull Board assets)
|
||||
location ~* ^/(?!bullboard).*\.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
|
||||
root /opt/patchmon/frontend/dist;
|
||||
expires 1y;
|
||||
add_header Cache-Control "public, immutable";
|
||||
}
|
||||
|
||||
# Health check endpoint
|
||||
location /health {
|
||||
proxy_pass http://127.0.0.1:3399/health;
|
||||
access_log off;
|
||||
}
|
||||
}
|
||||
EOF
|
||||
ln -sf /etc/nginx/sites-available/patchmon.conf /etc/nginx/sites-enabled/
|
||||
rm -f /etc/nginx/sites-enabled/default
|
||||
$STD nginx -t
|
||||
@@ -78,7 +198,7 @@ After=network.target postgresql.service
|
||||
[Service]
|
||||
Type=simple
|
||||
WorkingDirectory=/opt/patchmon/backend
|
||||
ExecStart=/usr/bin/npm run start
|
||||
ExecStart=/usr/bin/node src/server.js
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
Environment=NODE_ENV=production
|
||||
@@ -95,6 +215,57 @@ EOF
|
||||
systemctl enable -q --now patchmon-server
|
||||
msg_ok "Created and started service"
|
||||
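Once nginx and patchmon-server are both up, the proxy path defined above can be sanity-checked by hand from inside the container; this assumes the backend actually serves /health and is only a manual spot check, not part of the installer:
# through nginx (port 80 forwards /health to 127.0.0.1:3399/health per the config above)
curl -fsS http://127.0.0.1/health
# directly against the backend
curl -fsS http://127.0.0.1:3399/health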
|
||||
msg_info "Updating settings"
|
||||
cat <<EOF >/opt/patchmon/backend/update-settings.js
|
||||
const { PrismaClient } = require('@prisma/client');
|
||||
const { v4: uuidv4 } = require('uuid');
|
||||
const prisma = new PrismaClient();
|
||||
|
||||
async function updateSettings() {
|
||||
try {
|
||||
const existingSettings = await prisma.settings.findFirst();
|
||||
|
||||
const settingsData = {
|
||||
id: uuidv4(),
|
||||
server_url: 'http://$LOCAL_IP',
|
||||
server_protocol: 'http',
|
||||
server_host: '$LOCAL_IP',
|
||||
server_port: 3399,
|
||||
update_interval: 60,
|
||||
auto_update: true,
|
||||
signup_enabled: false,
|
||||
ignore_ssl_self_signed: false,
|
||||
updated_at: new Date()
|
||||
};
|
||||
|
||||
if (existingSettings) {
|
||||
// Update existing settings
|
||||
await prisma.settings.update({
|
||||
where: { id: existingSettings.id },
|
||||
data: settingsData
|
||||
});
|
||||
} else {
|
||||
// Create new settings record
|
||||
await prisma.settings.create({
|
||||
data: settingsData
|
||||
});
|
||||
}
|
||||
|
||||
console.log('✅ Database settings updated successfully');
|
||||
} catch (error) {
|
||||
console.error('❌ Error updating settings:', error.message);
|
||||
process.exit(1);
|
||||
} finally {
|
||||
await prisma.\$disconnect();
|
||||
}
|
||||
}
|
||||
|
||||
updateSettings();
|
||||
EOF
|
||||
cd /opt/patchmon/backend
|
||||
$STD node update-settings.js
|
||||
msg_ok "Settings updated successfully"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
cleanup_lxc
|
||||
|
||||
47 install/prometheus-paperless-ngx-exporter-install.sh (Executable file)
@@ -0,0 +1,47 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright (c) 2021-2026 community-scripts ORG
|
||||
# Author: Andy Grunwald (andygrunwald)
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/hansmi/prometheus-paperless-exporter
|
||||
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
verb_ip6
|
||||
catch_errors
|
||||
setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
|
||||
fetch_and_deploy_gh_release "prom-paperless-exp" "hansmi/prometheus-paperless-exporter" "binary"
|
||||
|
||||
msg_info "Configuring Prometheus Paperless NGX Exporter"
|
||||
mkdir -p /etc/prometheus-paperless-ngx-exporter
|
||||
echo "SECRET_AUTH_TOKEN" >/etc/prometheus-paperless-ngx-exporter/paperless_auth_token_file
|
||||
msg_ok "Configured Prometheus Paperless NGX Exporter"
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/prometheus-paperless-ngx-exporter.service
|
||||
[Unit]
|
||||
Description=Prometheus Paperless NGX Exporter
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
|
||||
[Service]
|
||||
User=root
|
||||
Restart=always
|
||||
Type=simple
|
||||
ExecStart=/usr/bin/prometheus-paperless-exporter \
|
||||
--paperless_url=http://paperless.example.org \
|
||||
--paperless_auth_token_file=/etc/prometheus-paperless-ngx-exporter/paperless_auth_token_file
|
||||
ExecReload=/bin/kill -HUP \$MAINPID
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
systemctl enable -q --now prometheus-paperless-ngx-exporter
|
||||
msg_ok "Created Service"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
cleanup_lxc
|
||||
@@ -14,51 +14,42 @@ network_check
|
||||
update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y apache2-utils
|
||||
$STD apt install -y \
|
||||
apache2-utils \
|
||||
python3-pip \
|
||||
python3-venv
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
PYTHON_VERSION="3.13" setup_uv
|
||||
fetch_and_deploy_gh_release "Radicale" "Kozea/Radicale" "tarball" "latest" "/opt/radicale"
|
||||
|
||||
msg_info "Setting up Radicale"
|
||||
cd /opt/radicale
|
||||
python3 -m venv /opt/radicale
|
||||
source /opt/radicale/bin/activate
|
||||
$STD python3 -m pip install --upgrade https://github.com/Kozea/Radicale/archive/master.tar.gz
|
||||
RNDPASS=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | head -c13)
|
||||
$STD htpasswd -c -b -5 /opt/radicale/users admin "$RNDPASS"
|
||||
$STD htpasswd -c -b -5 /opt/radicale/users admin $RNDPASS
|
||||
{
|
||||
echo "Radicale Credentials"
|
||||
echo "Admin User: admin"
|
||||
echo "Admin Password: $RNDPASS"
|
||||
} >>~/radicale.creds
|
||||
msg_ok "Done setting up Radicale"
|
||||
|
||||
mkdir -p /etc/radicale
|
||||
cat <<EOF >/etc/radicale/config
|
||||
[server]
|
||||
hosts = 0.0.0.0:5232
|
||||
msg_info "Setup Service"
|
||||
|
||||
[auth]
|
||||
type = htpasswd
|
||||
htpasswd_filename = /opt/radicale/users
|
||||
htpasswd_encryption = sha512
|
||||
|
||||
[storage]
|
||||
type = multifilesystem
|
||||
filesystem_folder = /var/lib/radicale/collections
|
||||
|
||||
[web]
|
||||
type = internal
|
||||
cat <<EOF >/opt/radicale/start.sh
|
||||
#!/usr/bin/env bash
|
||||
source /opt/radicale/bin/activate
|
||||
python3 -m radicale --storage-filesystem-folder=/var/lib/radicale/collections --hosts 0.0.0.0:5232 --auth-type htpasswd --auth-htpasswd-filename /opt/radicale/users --auth-htpasswd-encryption sha512
|
||||
EOF
|
||||
msg_ok "Set up Radicale"
|
||||
|
||||
msg_info "Creating Service"
|
||||
chmod +x /opt/radicale/start.sh
|
||||
|
||||
cat <<EOF >/etc/systemd/system/radicale.service
|
||||
[Unit]
|
||||
Description=A simple CalDAV (calendar) and CardDAV (contact) server
|
||||
After=network.target
|
||||
Requires=network.target
|
||||
|
||||
[Service]
|
||||
WorkingDirectory=/opt/radicale
|
||||
ExecStart=/usr/local/bin/uv run -m radicale --config /etc/radicale/config
|
||||
ExecStart=/opt/radicale/start.sh
|
||||
Restart=on-failure
|
||||
# User=radicale
|
||||
# Deny other users access to the calendar data
|
||||
|
||||
@@ -1,67 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright (c) 2021-2026 community-scripts ORG
|
||||
# Author: CrazyWolf13
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://docs.seerr.dev/
|
||||
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
verb_ip6
|
||||
catch_errors
|
||||
setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt-get install -y build-essential
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
fetch_and_deploy_gh_release "seerr" "seerr-team/seerr" "tarball"
|
||||
pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/seerr/package.json)
|
||||
NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
|
||||
|
||||
msg_info "Installing Seerr (Patience)"
|
||||
export CYPRESS_INSTALL_BINARY=0
|
||||
cd /opt/seerr
|
||||
$STD pnpm install --frozen-lockfile
|
||||
export NODE_OPTIONS="--max-old-space-size=3072"
|
||||
$STD pnpm build
|
||||
mkdir -p /etc/seerr/
|
||||
cat <<EOF >/etc/seerr/seerr.conf
|
||||
## Seerr's default port is 5055, if you want to use both, change this.
|
||||
## specify on which port to listen
|
||||
PORT=5055
|
||||
|
||||
## specify on which interface to listen, by default seerr listens on all interfaces
|
||||
HOST=0.0.0.0
|
||||
|
||||
## Uncomment if you want to force Node.js to resolve IPv4 before IPv6 (advanced users only)
|
||||
# FORCE_IPV4_FIRST=true
|
||||
EOF
|
||||
msg_ok "Installed Seerr"
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/seerr.service
|
||||
[Unit]
|
||||
Description=Seerr Service
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
|
||||
[Service]
|
||||
EnvironmentFile=/etc/seerr/seerr.conf
|
||||
Environment=NODE_ENV=production
|
||||
Type=exec
|
||||
Restart=on-failure
|
||||
WorkingDirectory=/opt/seerr
|
||||
ExecStart=/usr/bin/node dist/index.js
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
systemctl enable -q --now seerr
|
||||
msg_ok "Created Service"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
cleanup_lxc
|
||||
@@ -3,7 +3,7 @@
|
||||
# Copyright (c) 2021-2026 community-scripts ORG
|
||||
# Author: vhsdream
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/slskd/slskd/, https://github.com/mrusse/soularr
|
||||
# Source: https://github.com/slskd/slskd/, https://soularr.net
|
||||
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
@@ -13,71 +13,71 @@ setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
|
||||
fetch_and_deploy_gh_release "Slskd" "slskd/slskd" "prebuild" "latest" "/opt/slskd" "slskd-*-linux-x64.zip"
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y \
|
||||
python3-pip
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
msg_info "Configuring Slskd"
|
||||
msg_info "Setup ${APPLICATION}"
|
||||
tmp_file=$(mktemp)
|
||||
RELEASE=$(curl -s https://api.github.com/repos/slskd/slskd/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3) }')
|
||||
curl -fsSL "https://github.com/slskd/slskd/releases/download/${RELEASE}/slskd-${RELEASE}-linux-x64.zip" -o $tmp_file
|
||||
$STD unzip $tmp_file -d /opt/${APPLICATION}
|
||||
echo "${RELEASE}" >/opt/${APPLICATION}_version.txt
|
||||
JWT_KEY=$(openssl rand -base64 44)
|
||||
SLSKD_API_KEY=$(openssl rand -base64 44)
|
||||
cp /opt/slskd/config/slskd.example.yml /opt/slskd/config/slskd.yml
|
||||
cp /opt/${APPLICATION}/config/slskd.example.yml /opt/${APPLICATION}/config/slskd.yml
|
||||
sed -i \
|
||||
-e '/web:/,/cidr/s/^# //' \
|
||||
-e '/https:/,/port: 5031/s/false/true/' \
|
||||
-e '/port: 5030/,/socket/s/,.*$//' \
|
||||
-e '/content_path:/,/authentication/s/false/true/' \
|
||||
-e "\|web:|,\|cidr|s|^#||" \
|
||||
-e "\|https:|,\|5031|s|false|true|" \
|
||||
-e "\|api_keys|,\|cidr|s|<some.*$|$SLSKD_API_KEY|; \
|
||||
s|role: readonly|role: readwrite|; \
|
||||
s|0.0.0.0/0,::/0|& # Replace this with your subnet|" \
|
||||
-e "\|soulseek|,\|write_queue|s|^#||" \
|
||||
-e "\|jwt:|,\|ttl|s|key: ~|key: $JWT_KEY|" \
|
||||
-e '/soulseek/,/write_queue/s/^# //' \
|
||||
-e 's/^.*picture/#&/' /opt/slskd/config/slskd.yml
|
||||
msg_ok "Configured Slskd"
|
||||
-e "s|^ picture|# picture|" \
|
||||
/opt/${APPLICATION}/config/slskd.yml
|
||||
msg_ok "Setup ${APPLICATION}"
|
||||
|
||||
read -rp "${TAB3}Do you want to install Soularr? y/N " soularr
|
||||
if [[ ${soularr,,} =~ ^(y|yes)$ ]]; then
|
||||
PYTHON_VERSION="3.11" setup_uv
|
||||
fetch_and_deploy_gh_release "Soularr" "mrusse/soularr" "tarball" "latest" "/opt/soularr"
|
||||
cd /opt/soularr
|
||||
$STD uv venv venv
|
||||
$STD source venv/bin/activate
|
||||
$STD uv pip install -r requirements.txt
|
||||
sed -i \
|
||||
-e "\|[Slskd]|,\|host_url|s|yourslskdapikeygoeshere|$SLSKD_API_KEY|" \
|
||||
-e "/host_url/s/slskd/localhost/" \
|
||||
/opt/soularr/config.ini
|
||||
cat <<EOF >/opt/soularr/run.sh
|
||||
#!/usr/bin/env bash
|
||||
msg_info "Installing Soularr"
|
||||
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
|
||||
cd /tmp
|
||||
curl -fsSL -o main.zip https://github.com/mrusse/soularr/archive/refs/heads/main.zip
|
||||
$STD unzip main.zip
|
||||
mv soularr-main /opt/soularr
|
||||
cd /opt/soularr
|
||||
$STD pip install -r requirements.txt
|
||||
sed -i \
|
||||
-e "\|[Slskd]|,\|host_url|s|yourslskdapikeygoeshere|$SLSKD_API_KEY|" \
|
||||
-e "/host_url/s/slskd/localhost/" \
|
||||
/opt/soularr/config.ini
|
||||
sed -i \
|
||||
-e "/#This\|#Default\|INTERVAL/{N;d;}" \
|
||||
-e "/while\|#Pass/d" \
|
||||
-e "\|python|s|app|opt/soularr|; s|python|python3|" \
|
||||
-e "/dt/,+2d" \
|
||||
/opt/soularr/run.sh
|
||||
sed -i -E "/(soularr.py)/s/.{5}$//; /if/,/fi/s/.{4}//" /opt/soularr/run.sh
|
||||
chmod +x /opt/soularr/run.sh
|
||||
msg_ok "Installed Soularr"
|
||||
|
||||
if ps aux | grep "[s]oularr.py" >/dev/null; then
|
||||
echo "Soularr is already running. Exiting..."
|
||||
exit 1
|
||||
else
|
||||
source /opt/soularr/venv/bin/activate
|
||||
uv run python3 -u /opt/soularr/soularr.py --config-dir /opt/soularr
|
||||
fi
|
||||
EOF
|
||||
chmod +x /opt/soularr/run.sh
|
||||
deactivate
|
||||
msg_ok "Installed Soularr"
|
||||
fi
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/slskd.service
|
||||
msg_info "Creating Services"
|
||||
cat <<EOF >/etc/systemd/system/${APPLICATION}.service
|
||||
[Unit]
|
||||
Description=Slskd Service
|
||||
Description=${APPLICATION} Service
|
||||
After=network.target
|
||||
Wants=network.target
|
||||
|
||||
[Service]
|
||||
WorkingDirectory=/opt/slskd
|
||||
ExecStart=/opt/slskd/slskd --config /opt/slskd/config/slskd.yml
|
||||
WorkingDirectory=/opt/${APPLICATION}
|
||||
ExecStart=/opt/${APPLICATION}/slskd --config /opt/${APPLICATION}/config/slskd.yml
|
||||
Restart=always
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
|
||||
if [[ -d /opt/soularr ]]; then
|
||||
cat <<EOF >/etc/systemd/system/soularr.timer
|
||||
cat <<EOF >/etc/systemd/system/soularr.timer
|
||||
[Unit]
|
||||
Description=Soularr service timer
|
||||
RefuseManualStart=no
|
||||
@@ -85,15 +85,15 @@ RefuseManualStop=no
|
||||
|
||||
[Timer]
|
||||
Persistent=true
|
||||
# run every 10 minutes
|
||||
OnCalendar=*-*-* *:0/10:00
|
||||
# run every 5 minutes
|
||||
OnCalendar=*-*-* *:0/5:00
|
||||
Unit=soularr.service
|
||||
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
EOF
|
||||
|
||||
cat <<EOF >/etc/systemd/system/soularr.service
|
||||
cat <<EOF >/etc/systemd/system/soularr.service
|
||||
[Unit]
|
||||
Description=Soularr service
|
||||
After=network.target slskd.service
|
||||
@@ -106,9 +106,10 @@ ExecStart=/bin/bash -c /opt/soularr/run.sh
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
msg_warn "Add your Lidarr API key to Soularr in '/opt/soularr/config.ini', then run 'systemctl enable --now soularr.timer'"
|
||||
fi
|
||||
systemctl enable -q --now slskd
|
||||
systemctl enable -q --now ${APPLICATION}
|
||||
systemctl enable -q soularr.timer
|
||||
rm -rf $tmp_file
|
||||
rm -rf /tmp/main.zip
|
||||
msg_ok "Created Services"
|
||||
|
||||
motd_ssh
|
||||
|
||||
@@ -29,7 +29,6 @@ $STD uv venv --clear
|
||||
$STD source /opt/Tautulli/.venv/bin/activate
|
||||
$STD uv pip install -r requirements.txt
|
||||
$STD uv pip install pyopenssl
|
||||
$STD uv pip install "setuptools<81"
|
||||
msg_ok "Installed Tautulli"
|
||||
|
||||
msg_info "Creating Service"
|
||||
|
||||
@@ -27,6 +27,68 @@ msg_ok "Installed Dependencies"
|
||||
|
||||
fetch_and_deploy_gh_release "UmlautAdaptarr" "PCJones/Umlautadaptarr" "prebuild" "latest" "/opt/UmlautAdaptarr" "linux-x64.zip"
|
||||
|
||||
msg_info "Setting up UmlautAdaptarr"
|
||||
cat <<EOF >/opt/UmlautAdaptarr/appsettings.json
|
||||
{
|
||||
"Logging": {
|
||||
"LogLevel": {
|
||||
"Default": "Information",
|
||||
"Microsoft.AspNetCore": "Warning"
|
||||
},
|
||||
"Console": {
|
||||
"TimestampFormat": "yyyy-MM-dd HH:mm:ss::"
|
||||
}
|
||||
},
|
||||
"AllowedHosts": "*",
|
||||
"Kestrel": {
|
||||
"Endpoints": {
|
||||
"Http": {
|
||||
"Url": "http://[::]:5005"
|
||||
}
|
||||
}
|
||||
},
|
||||
"Settings": {
|
||||
"UserAgent": "UmlautAdaptarr/1.0",
|
||||
"UmlautAdaptarrApiHost": "https://umlautadaptarr.pcjones.de/api/v1",
|
||||
"IndexerRequestsCacheDurationInMinutes": 12
|
||||
},
|
||||
"Sonarr": [
|
||||
{
|
||||
"Enabled": false,
|
||||
"Name": "Sonarr",
|
||||
"Host": "http://192.168.1.100:8989",
|
||||
"ApiKey": "dein_sonarr_api_key"
|
||||
}
|
||||
],
|
||||
"Radarr": [
|
||||
{
|
||||
"Enabled": false,
|
||||
"Name": "Radarr",
|
||||
"Host": "http://192.168.1.101:7878",
|
||||
"ApiKey": "dein_radarr_api_key"
|
||||
}
|
||||
],
|
||||
"Lidarr": [
|
||||
{
|
||||
"Enabled": false,
|
||||
"Host": "http://192.168.1.102:8686",
|
||||
"ApiKey": "dein_lidarr_api_key"
|
||||
}
|
||||
],
|
||||
"Readarr": [
|
||||
{
|
||||
"Enabled": false,
|
||||
"Host": "http://192.168.1.103:8787",
|
||||
"ApiKey": "dein_readarr_api_key"
|
||||
}
|
||||
],
|
||||
"IpLeakTest": {
|
||||
"Enabled": false
|
||||
}
|
||||
}
|
||||
EOF
|
||||
msg_ok "Setup UmlautAdaptarr"
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/umlautadaptarr.service
|
||||
[Unit]
|
||||
|
||||
@@ -15,18 +15,16 @@ update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y apt-transport-https
|
||||
curl -fsSL "https://dl.ui.com/unifi/unifi-repo.gpg" -o "/usr/share/keyrings/unifi-repo.gpg"
|
||||
cat <<EOF | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.sources >/dev/null
|
||||
Types: deb
|
||||
URIs: https://www.ui.com/downloads/unifi/debian
|
||||
Suites: stable
|
||||
Components: ubiquiti
|
||||
Architectures: amd64
|
||||
Signed-By: /usr/share/keyrings/unifi-repo.gpg
|
||||
EOF
|
||||
$STD apt update
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
setup_deb822_repo \
|
||||
"unifi" \
|
||||
"https://dl.ui.com/unifi/unifi-repo.gpg" \
|
||||
"https://www.ui.com/downloads/unifi/debian" \
|
||||
"stable" \
|
||||
"ubiquiti" \
|
||||
"amd64"
|
||||
|
||||
JAVA_VERSION="21" setup_java
|
||||
|
||||
if lscpu | grep -q 'avx'; then
|
||||
|
||||
@@ -15,167 +15,92 @@ update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y \
|
||||
build-essential \
|
||||
nginx \
|
||||
redis-server \
|
||||
libpq-dev
|
||||
git \
|
||||
apache2 \
|
||||
libapache2-mod-wsgi-py3
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
NODE_VERSION="22" NODE_MODULE="sass" setup_nodejs
|
||||
setup_uv
|
||||
PG_VERSION="16" setup_postgresql
|
||||
PG_DB_NAME="wger" PG_DB_USER="wger" setup_postgresql_db
|
||||
fetch_and_deploy_gh_release "wger" "wger-project/wger" "tarball"
|
||||
msg_info "Installing Python"
|
||||
$STD apt install -y python3-pip
|
||||
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
|
||||
msg_ok "Installed Python"
|
||||
|
||||
NODE_VERSION="22" NODE_MODULE="yarn,sass" setup_nodejs
|
||||
|
||||
msg_info "Setting up wger"
|
||||
mkdir -p /opt/wger/{static,media}
|
||||
chmod o+w /opt/wger/media
|
||||
cd /opt/wger
|
||||
$STD corepack enable
|
||||
$STD npm install
|
||||
$STD npm run build:css:sass
|
||||
$STD uv venv
|
||||
$STD uv pip install . --group docker
|
||||
SECRET_KEY=$(openssl rand -base64 40)
|
||||
cat <<EOF >/opt/wger/.env
|
||||
DJANGO_SETTINGS_MODULE=settings.main
|
||||
PYTHONPATH=/opt/wger
|
||||
$STD adduser wger --disabled-password --gecos ""
|
||||
mkdir /home/wger/db
|
||||
touch /home/wger/db/database.sqlite
|
||||
chown :www-data -R /home/wger/db
|
||||
chmod g+w /home/wger/db /home/wger/db/database.sqlite
|
||||
mkdir /home/wger/{static,media}
|
||||
chmod o+w /home/wger/media
|
||||
temp_dir=$(mktemp -d)
|
||||
cd "$temp_dir"
|
||||
RELEASE=$(curl -fsSL https://api.github.com/repos/wger-project/wger/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3)}')
|
||||
curl -fsSL "https://github.com/wger-project/wger/archive/refs/tags/$RELEASE.tar.gz" -o "$RELEASE.tar.gz"
|
||||
tar xzf "$RELEASE".tar.gz
|
||||
mv wger-"$RELEASE" /home/wger/src
|
||||
cd /home/wger/src
|
||||
$STD pip install -r requirements_prod.txt --ignore-installed
|
||||
$STD pip install -e .
|
||||
$STD wger create-settings --database-path /home/wger/db/database.sqlite
|
||||
sed -i "s#home/wger/src/media#home/wger/media#g" /home/wger/src/settings.py
|
||||
sed -i "/MEDIA_ROOT = '\/home\/wger\/media'/a STATIC_ROOT = '/home/wger/static'" /home/wger/src/settings.py
|
||||
$STD wger bootstrap
|
||||
$STD python3 manage.py collectstatic
|
||||
rm -rf "$temp_dir"
|
||||
echo "${RELEASE}" >/opt/wger_version.txt
|
||||
msg_ok "Finished setting up wger"
|
||||
|
||||
DJANGO_DB_ENGINE=django.db.backends.postgresql
|
||||
DJANGO_DB_DATABASE=${PG_DB_NAME}
|
||||
DJANGO_DB_USER=${PG_DB_USER}
|
||||
DJANGO_DB_PASSWORD=${PG_DB_PASS}
|
||||
DJANGO_DB_HOST=localhost
|
||||
DJANGO_DB_PORT=5432
|
||||
DATABASE_URL=postgresql://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/apache2/sites-available/wger.conf
|
||||
<Directory /home/wger/src>
|
||||
<Files wsgi.py>
|
||||
Require all granted
|
||||
</Files>
|
||||
</Directory>
|
||||
|
||||
DJANGO_MEDIA_ROOT=/opt/wger/media
|
||||
DJANGO_STATIC_ROOT=/opt/wger/static
|
||||
DJANGO_STATIC_URL=/static/
|
||||
<VirtualHost *:80>
|
||||
WSGIApplicationGroup %{GLOBAL}
|
||||
WSGIDaemonProcess wger python-path=/home/wger/src python-home=/home/wger
|
||||
WSGIProcessGroup wger
|
||||
WSGIScriptAlias / /home/wger/src/wger/wsgi.py
|
||||
WSGIPassAuthorization On
|
||||
|
||||
ALLOWED_HOSTS=${LOCAL_IP},localhost,127.0.0.1
|
||||
CSRF_TRUSTED_ORIGINS=http://${LOCAL_IP}:3000
|
||||
Alias /static/ /home/wger/static/
|
||||
<Directory /home/wger/static>
|
||||
Require all granted
|
||||
</Directory>
|
||||
|
||||
USE_X_FORWARDED_HOST=True
|
||||
SECURE_PROXY_SSL_HEADER=HTTP_X_FORWARDED_PROTO,http
|
||||
Alias /media/ /home/wger/media/
|
||||
<Directory /home/wger/media>
|
||||
Require all granted
|
||||
</Directory>
|
||||
|
||||
DJANGO_CACHE_BACKEND=django_redis.cache.RedisCache
|
||||
DJANGO_CACHE_LOCATION=redis://127.0.0.1:6379/1
|
||||
DJANGO_CACHE_TIMEOUT=300
|
||||
DJANGO_CACHE_CLIENT_CLASS=django_redis.client.DefaultClient
|
||||
AXES_CACHE_ALIAS=default
|
||||
|
||||
USE_CELERY=True
|
||||
CELERY_BROKER=redis://127.0.0.1:6379/2
|
||||
CELERY_BACKEND=redis://127.0.0.1:6379/2
|
||||
|
||||
SITE_URL=http://${LOCAL_IP}:3000
|
||||
SECRET_KEY=${SECRET_KEY}
|
||||
ErrorLog /var/log/apache2/wger-error.log
|
||||
CustomLog /var/log/apache2/wger-access.log combined
|
||||
</VirtualHost>
|
||||
EOF
|
||||
set -a && source /opt/wger/.env && set +a
|
||||
$STD uv run wger bootstrap
|
||||
$STD uv run python manage.py collectstatic --no-input
|
||||
cat <<EOF | uv run python manage.py shell
|
||||
from django.contrib.auth import get_user_model
|
||||
User = get_user_model()
|
||||
|
||||
user, created = User.objects.get_or_create(
|
||||
username="admin",
|
||||
defaults={"email": "admin@localhost"},
|
||||
)
|
||||
|
||||
if created:
|
||||
user.set_password("${PG_DB_PASS}")
|
||||
user.is_superuser = True
|
||||
user.is_staff = True
|
||||
user.save()
|
||||
EOF
|
||||
msg_ok "Set up wger"
|
||||
msg_info "Creating Config and Services"
|
||||
$STD a2dissite 000-default.conf
|
||||
$STD a2ensite wger
|
||||
systemctl restart apache2
|
||||
cat <<EOF >/etc/systemd/system/wger.service
|
||||
[Unit]
|
||||
Description=wger Gunicorn
|
||||
Description=wger Service
|
||||
After=network.target
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
User=root
|
||||
WorkingDirectory=/opt/wger
|
||||
EnvironmentFile=/opt/wger/.env
|
||||
ExecStart=/opt/wger/.venv/bin/gunicorn \
|
||||
--bind 127.0.0.1:8000 \
|
||||
--workers 3 \
|
||||
--threads 2 \
|
||||
--timeout 120 \
|
||||
wger.wsgi:application
|
||||
ExecStart=/usr/local/bin/wger start -a 0.0.0.0 -p 3000
|
||||
Restart=always
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
cat <<EOF >/etc/systemd/system/celery.service
|
||||
[Unit]
|
||||
Description=wger Celery Worker
|
||||
After=network.target redis-server.service
|
||||
Requires=redis-server.service
|
||||
|
||||
[Service]
|
||||
WorkingDirectory=/opt/wger
|
||||
EnvironmentFile=/opt/wger/.env
|
||||
ExecStart=/opt/wger/.venv/bin/celery -A wger worker -l info
|
||||
Restart=always
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
|
||||
mkdir -p /var/lib/wger/celery
|
||||
chmod 700 /var/lib/wger/celery
|
||||
cat <<EOF >/etc/systemd/system/celery-beat.service
|
||||
[Unit]
|
||||
Description=wger Celery Beat
|
||||
After=network.target redis-server.service
|
||||
Requires=redis-server.service
|
||||
|
||||
[Service]
|
||||
WorkingDirectory=/opt/wger
|
||||
EnvironmentFile=/opt/wger/.env
|
||||
ExecStart=/opt/wger/.venv/bin/celery -A wger beat -l info \
|
||||
--schedule /var/lib/wger/celery/celerybeat-schedule
|
||||
Restart=always
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
cat <<'EOF' >/etc/nginx/sites-available/wger
|
||||
server {
|
||||
listen 3000;
|
||||
server_name _;
|
||||
|
||||
client_max_body_size 20M;
|
||||
|
||||
location /static/ {
|
||||
alias /opt/wger/static/;
|
||||
expires 30d;
|
||||
}
|
||||
|
||||
location /media/ {
|
||||
alias /opt/wger/media/;
|
||||
}
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:8000;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_redirect off;
|
||||
}
|
||||
}
|
||||
EOF
|
||||
$STD rm -f /etc/nginx/sites-enabled/default
|
||||
$STD ln -sf /etc/nginx/sites-available/wger /etc/nginx/sites-enabled/wger
|
||||
systemctl enable -q --now redis-server nginx wger celery celery-beat
|
||||
systemctl restart nginx
|
||||
msg_ok "Created Config and Services"
|
||||
systemctl enable -q --now wger
|
||||
msg_ok "Created Service"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
|
||||
1163 misc/api.func (file diff suppressed because it is too large)
601 misc/build.func
@@ -38,16 +38,15 @@
|
||||
# - Captures app-declared resource defaults (CPU, RAM, Disk)
|
||||
# ------------------------------------------------------------------------------
|
||||
variables() {
|
||||
NSAPP=$(echo "${APP,,}" | tr -d ' ') # This function sets the NSAPP variable by converting the value of the APP variable to lowercase and removing any spaces.
|
||||
var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP.
|
||||
INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern.
|
||||
PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase
|
||||
DIAGNOSTICS="yes" # sets the DIAGNOSTICS variable to "yes", used for the API call.
|
||||
METHOD="default" # sets the METHOD variable to "default", used for the API call.
|
||||
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable.
|
||||
SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files
|
||||
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log
|
||||
combined_log="/tmp/install-${SESSION_ID}-combined.log" # Combined log (build + install) for failed installations
|
||||
NSAPP=$(echo "${APP,,}" | tr -d ' ') # This function sets the NSAPP variable by converting the value of the APP variable to lowercase and removing any spaces.
|
||||
var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP.
|
||||
INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern.
|
||||
PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase
|
||||
DIAGNOSTICS="yes" # sets the DIAGNOSTICS variable to "yes", used for the API call.
|
||||
METHOD="default" # sets the METHOD variable to "default", used for the API call.
|
||||
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable.
|
||||
SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files
|
||||
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log
|
||||
CTTYPE="${CTTYPE:-${CT_TYPE:-1}}"
|
||||
|
||||
# Parse dev_mode early
|
||||
@@ -218,7 +217,7 @@ update_motd_ip() {
|
||||
local current_os="$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '"') - Version: $(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '"')"
|
||||
local current_hostname="$(hostname)"
|
||||
local current_ip="$(hostname -I | awk '{print $1}')"
|
||||
|
||||
|
||||
# Update only if values actually changed
|
||||
if ! grep -q "OS:.*$current_os" "$PROFILE_FILE" 2>/dev/null; then
|
||||
sed -i "s|OS:.*|OS: \${GN}$current_os\${CL}\\\"|" "$PROFILE_FILE"
|
||||
@@ -277,9 +276,8 @@ install_ssh_keys_into_ct() {
|
||||
# ------------------------------------------------------------------------------
|
||||
# validate_container_id()
|
||||
#
|
||||
# - Validates if a container ID is available for use (CLUSTER-WIDE)
|
||||
# - Checks cluster resources via pvesh for VMs/CTs on ALL nodes
|
||||
# - Falls back to local config file check if pvesh unavailable
|
||||
# - Validates if a container ID is available for use
|
||||
# - Checks if ID is already used by VM or LXC container
|
||||
# - Checks if ID is used in LVM logical volumes
|
||||
# - Returns 0 if ID is available, 1 if already in use
|
||||
# ------------------------------------------------------------------------------
|
||||
@@ -291,35 +289,11 @@ validate_container_id() {
|
||||
return 1
|
||||
fi
|
||||
|
||||
# CLUSTER-WIDE CHECK: Query all VMs/CTs across all nodes
|
||||
# This catches IDs used on other nodes in the cluster
|
||||
# NOTE: Works on single-node too - Proxmox always has internal cluster structure
|
||||
# Falls back gracefully if pvesh unavailable or returns empty
|
||||
if command -v pvesh &>/dev/null; then
|
||||
local cluster_ids
|
||||
cluster_ids=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null |
|
||||
grep -oP '"vmid":\s*\K[0-9]+' 2>/dev/null || true)
|
||||
if [[ -n "$cluster_ids" ]] && echo "$cluster_ids" | grep -qw "$ctid"; then
|
||||
return 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# LOCAL FALLBACK: Check if config file exists for VM or LXC
|
||||
# This handles edge cases where pvesh might not return all info
|
||||
# Check if config file exists for VM or LXC
|
||||
if [[ -f "/etc/pve/qemu-server/${ctid}.conf" ]] || [[ -f "/etc/pve/lxc/${ctid}.conf" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check ALL nodes in cluster for config files (handles pmxcfs sync delays)
|
||||
# NOTE: On single-node, /etc/pve/nodes/ contains just the one node - still works
|
||||
if [[ -d "/etc/pve/nodes" ]]; then
|
||||
for node_dir in /etc/pve/nodes/*/; do
|
||||
if [[ -f "${node_dir}qemu-server/${ctid}.conf" ]] || [[ -f "${node_dir}lxc/${ctid}.conf" ]]; then
|
||||
return 1
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
# Check if ID is used in LVM logical volumes
|
||||
if lvs --noheadings -o lv_name 2>/dev/null | grep -qE "(^|[-_])${ctid}($|[-_])"; then
|
||||
return 1
|
||||
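The cluster-wide check above boils down to asking pvesh which VMIDs are already taken anywhere in the cluster; the same query can be run by hand to inspect the result (jq, if installed, is used here only for readability and is not required by the function itself):
# list every VMID currently known to the cluster (VMs and LXCs on all nodes)
pvesh get /cluster/resources --type vm --output-format json | jq -r '.[].vmid' | sort -n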
@@ -331,30 +305,63 @@ validate_container_id() {
|
||||
# ------------------------------------------------------------------------------
|
||||
# get_valid_container_id()
|
||||
#
|
||||
# - Returns a valid, unused container ID (CLUSTER-AWARE)
|
||||
# - Uses pvesh /cluster/nextid as starting point (already cluster-aware)
|
||||
# - Returns a valid, unused container ID
|
||||
# - If provided ID is valid, returns it
|
||||
# - Otherwise increments until a free one is found across entire cluster
|
||||
# - Otherwise increments from suggested ID until a free one is found
|
||||
# - Calls validate_container_id() to check availability
|
||||
# ------------------------------------------------------------------------------
|
||||
get_valid_container_id() {
|
||||
local suggested_id="${1:-$(pvesh get /cluster/nextid 2>/dev/null || echo 100)}"
|
||||
|
||||
# Ensure we have a valid starting ID
|
||||
if ! [[ "$suggested_id" =~ ^[0-9]+$ ]]; then
|
||||
suggested_id=$(pvesh get /cluster/nextid 2>/dev/null || echo 100)
|
||||
fi
|
||||
|
||||
local max_attempts=1000
|
||||
local attempts=0
|
||||
local suggested_id="${1:-$(pvesh get /cluster/nextid)}"
|
||||
|
||||
while ! validate_container_id "$suggested_id"; do
|
||||
suggested_id=$((suggested_id + 1))
|
||||
done
|
||||
|
||||
echo "$suggested_id"
|
||||
}
|
||||
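A minimal usage sketch, assuming build.func has been sourced so both helpers are defined; the starting ID 105 is just an example:
# try 105 first; if it is taken, the function walks forward to the next free ID
CT_ID=$(get_valid_container_id 105)
echo "Using container ID: $CT_ID"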
|
||||
# ------------------------------------------------------------------------------
|
||||
# validate_container_id()
|
||||
#
|
||||
# - Validates if a container ID is available for use
|
||||
# - Checks if ID is already used by VM or LXC container
|
||||
# - Checks if ID is used in LVM logical volumes
|
||||
# - Returns 0 if ID is available, 1 if already in use
|
||||
# ------------------------------------------------------------------------------
|
||||
validate_container_id() {
|
||||
local ctid="$1"
|
||||
|
||||
# Check if ID is numeric
|
||||
if ! [[ "$ctid" =~ ^[0-9]+$ ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if config file exists for VM or LXC
|
||||
if [[ -f "/etc/pve/qemu-server/${ctid}.conf" ]] || [[ -f "/etc/pve/lxc/${ctid}.conf" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Check if ID is used in LVM logical volumes
|
||||
if lvs --noheadings -o lv_name 2>/dev/null | grep -qE "(^|[-_])${ctid}($|[-_])"; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# get_valid_container_id()
|
||||
#
|
||||
# - Returns a valid, unused container ID
|
||||
# - If provided ID is valid, returns it
|
||||
# - Otherwise increments from suggested ID until a free one is found
|
||||
# - Calls validate_container_id() to check availability
|
||||
# ------------------------------------------------------------------------------
|
||||
get_valid_container_id() {
|
||||
local suggested_id="${1:-$(pvesh get /cluster/nextid)}"
|
||||
|
||||
while ! validate_container_id "$suggested_id"; do
|
||||
suggested_id=$((suggested_id + 1))
|
||||
attempts=$((attempts + 1))
|
||||
if [[ $attempts -ge $max_attempts ]]; then
|
||||
msg_error "Could not find available container ID after $max_attempts attempts"
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
echo "$suggested_id"
|
||||
@@ -378,7 +385,7 @@ validate_hostname() {
|
||||
|
||||
# Split by dots and validate each label
|
||||
local IFS='.'
|
||||
read -ra labels <<<"$hostname"
|
||||
read -ra labels <<< "$hostname"
|
||||
for label in "${labels[@]}"; do
|
||||
# Each label: 1-63 chars, alphanumeric, hyphens allowed (not at start/end)
|
||||
if [[ -z "$label" ]] || [[ ${#label} -gt 63 ]]; then
|
||||
@@ -482,7 +489,7 @@ validate_ipv6_address() {
|
||||
# Check that no segment exceeds 4 hex chars
|
||||
local IFS=':'
|
||||
local -a segments
|
||||
read -ra segments <<<"$addr"
|
||||
read -ra segments <<< "$addr"
|
||||
for seg in "${segments[@]}"; do
|
||||
if [[ ${#seg} -gt 4 ]]; then
|
||||
return 1
|
||||
@@ -532,14 +539,14 @@ validate_gateway_in_subnet() {
|
||||
|
||||
# Convert IPs to integers
|
||||
local IFS='.'
|
||||
read -r i1 i2 i3 i4 <<<"$ip"
|
||||
read -r g1 g2 g3 g4 <<<"$gateway"
|
||||
read -r i1 i2 i3 i4 <<< "$ip"
|
||||
read -r g1 g2 g3 g4 <<< "$gateway"
|
||||
|
||||
local ip_int=$(((i1 << 24) + (i2 << 16) + (i3 << 8) + i4))
|
||||
local gw_int=$(((g1 << 24) + (g2 << 16) + (g3 << 8) + g4))
|
||||
local ip_int=$(( (i1 << 24) + (i2 << 16) + (i3 << 8) + i4 ))
|
||||
local gw_int=$(( (g1 << 24) + (g2 << 16) + (g3 << 8) + g4 ))
|
||||
|
||||
# Check if both are in same network
|
||||
if (((ip_int & mask) != (gw_int & mask))); then
|
||||
if (( (ip_int & mask) != (gw_int & mask) )); then
|
||||
return 1
|
||||
fi
|
||||
|
||||
@@ -1072,117 +1079,117 @@ load_vars_file() {
|
||||
# Validate values before setting (skip empty values - they use defaults)
|
||||
if [[ -n "$var_val" ]]; then
|
||||
case "$var_key" in
|
||||
var_mac)
|
||||
if ! validate_mac_address "$var_val"; then
|
||||
msg_warn "Invalid MAC address '$var_val' in $file, ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_vlan)
|
||||
if ! validate_vlan_tag "$var_val"; then
|
||||
msg_warn "Invalid VLAN tag '$var_val' in $file (must be 1-4094), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_mtu)
|
||||
if ! validate_mtu "$var_val"; then
|
||||
msg_warn "Invalid MTU '$var_val' in $file (must be 576-65535), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_tags)
|
||||
if ! validate_tags "$var_val"; then
|
||||
msg_warn "Invalid tags '$var_val' in $file (alphanumeric, -, _, ; only), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_timezone)
|
||||
if ! validate_timezone "$var_val"; then
|
||||
msg_warn "Invalid timezone '$var_val' in $file, ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_brg)
|
||||
if ! validate_bridge "$var_val"; then
|
||||
msg_warn "Bridge '$var_val' not found in $file, ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_gateway)
|
||||
if ! validate_gateway_ip "$var_val"; then
|
||||
msg_warn "Invalid gateway IP '$var_val' in $file, ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_hostname)
|
||||
if ! validate_hostname "$var_val"; then
|
||||
msg_warn "Invalid hostname '$var_val' in $file, ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_cpu)
|
||||
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1 || var_val > 128)); then
|
||||
msg_warn "Invalid CPU count '$var_val' in $file (must be 1-128), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_ram)
|
||||
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 256)); then
|
||||
msg_warn "Invalid RAM '$var_val' in $file (must be >= 256 MiB), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_disk)
|
||||
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1)); then
|
||||
msg_warn "Invalid disk size '$var_val' in $file (must be >= 1 GB), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_unprivileged)
|
||||
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
|
||||
msg_warn "Invalid unprivileged value '$var_val' in $file (must be 0 or 1), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_nesting)
|
||||
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
|
||||
msg_warn "Invalid nesting value '$var_val' in $file (must be 0 or 1), ignoring"
|
||||
continue
|
||||
fi
|
||||
# Warn about potential issues with systemd-based OS when nesting is disabled via vars file
|
||||
if [[ "$var_val" == "0" && "${var_os:-debian}" != "alpine" ]]; then
|
||||
msg_warn "Nesting disabled in $file - modern systemd-based distributions may require nesting for proper operation"
|
||||
fi
|
||||
;;
|
||||
var_keyctl)
|
||||
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
|
||||
msg_warn "Invalid keyctl value '$var_val' in $file (must be 0 or 1), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_net)
|
||||
# var_net can be: dhcp, static IP/CIDR, or IP range
|
||||
if [[ "$var_val" != "dhcp" ]]; then
|
||||
if is_ip_range "$var_val"; then
|
||||
: # IP range is valid, will be resolved at runtime
|
||||
elif ! validate_ip_address "$var_val"; then
|
||||
msg_warn "Invalid network '$var_val' in $file (must be dhcp or IP/CIDR), ignoring"
|
||||
var_mac)
|
||||
if ! validate_mac_address "$var_val"; then
|
||||
msg_warn "Invalid MAC address '$var_val' in $file, ignoring"
|
||||
continue
|
||||
fi
|
||||
fi
|
||||
;;
|
||||
var_fuse | var_tun | var_gpu | var_ssh | var_verbose | var_protection)
|
||||
if [[ "$var_val" != "yes" && "$var_val" != "no" ]]; then
|
||||
msg_warn "Invalid boolean '$var_val' for $var_key in $file (must be yes/no), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
var_ipv6_method)
|
||||
if [[ "$var_val" != "auto" && "$var_val" != "dhcp" && "$var_val" != "static" && "$var_val" != "none" ]]; then
|
||||
msg_warn "Invalid IPv6 method '$var_val' in $file (must be auto/dhcp/static/none), ignoring"
|
||||
continue
|
||||
fi
|
||||
;;
|
||||
;;
|
||||
var_vlan)
if ! validate_vlan_tag "$var_val"; then
msg_warn "Invalid VLAN tag '$var_val' in $file (must be 1-4094), ignoring"
continue
fi
;;
var_mtu)
if ! validate_mtu "$var_val"; then
msg_warn "Invalid MTU '$var_val' in $file (must be 576-65535), ignoring"
continue
fi
;;
var_tags)
if ! validate_tags "$var_val"; then
msg_warn "Invalid tags '$var_val' in $file (alphanumeric, -, _, ; only), ignoring"
continue
fi
;;
var_timezone)
if ! validate_timezone "$var_val"; then
msg_warn "Invalid timezone '$var_val' in $file, ignoring"
continue
fi
;;
var_brg)
if ! validate_bridge "$var_val"; then
msg_warn "Bridge '$var_val' not found in $file, ignoring"
continue
fi
;;
var_gateway)
if ! validate_gateway_ip "$var_val"; then
msg_warn "Invalid gateway IP '$var_val' in $file, ignoring"
continue
fi
;;
var_hostname)
if ! validate_hostname "$var_val"; then
msg_warn "Invalid hostname '$var_val' in $file, ignoring"
continue
fi
;;
var_cpu)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1 || var_val > 128)); then
msg_warn "Invalid CPU count '$var_val' in $file (must be 1-128), ignoring"
continue
fi
;;
var_ram)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 256)); then
msg_warn "Invalid RAM '$var_val' in $file (must be >= 256 MiB), ignoring"
continue
fi
;;
var_disk)
if ! [[ "$var_val" =~ ^[0-9]+$ ]] || ((var_val < 1)); then
msg_warn "Invalid disk size '$var_val' in $file (must be >= 1 GB), ignoring"
continue
fi
;;
var_unprivileged)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid unprivileged value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
;;
var_nesting)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid nesting value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
# Warn about potential issues with systemd-based OS when nesting is disabled via vars file
if [[ "$var_val" == "0" && "${var_os:-debian}" != "alpine" ]]; then
msg_warn "Nesting disabled in $file - modern systemd-based distributions may require nesting for proper operation"
fi
;;
var_keyctl)
if [[ "$var_val" != "0" && "$var_val" != "1" ]]; then
msg_warn "Invalid keyctl value '$var_val' in $file (must be 0 or 1), ignoring"
continue
fi
;;
var_net)
# var_net can be: dhcp, static IP/CIDR, or IP range
if [[ "$var_val" != "dhcp" ]]; then
if is_ip_range "$var_val"; then
: # IP range is valid, will be resolved at runtime
elif ! validate_ip_address "$var_val"; then
msg_warn "Invalid network '$var_val' in $file (must be dhcp or IP/CIDR), ignoring"
continue
fi
fi
;;
var_fuse|var_tun|var_gpu|var_ssh|var_verbose|var_protection)
if [[ "$var_val" != "yes" && "$var_val" != "no" ]]; then
msg_warn "Invalid boolean '$var_val' for $var_key in $file (must be yes/no), ignoring"
continue
fi
;;
var_ipv6_method)
if [[ "$var_val" != "auto" && "$var_val" != "dhcp" && "$var_val" != "static" && "$var_val" != "none" ]]; then
msg_warn "Invalid IPv6 method '$var_val' in $file (must be auto/dhcp/static/none), ignoring"
continue
fi
;;
esac
fi

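# Illustrative example (not part of the diff): a minimal vars file that would pass the
# validation above. The keys correspond to the case arms; all values shown here are
# hypothetical, and the final word on what is accepted rests with is_ip_range() and the
# validate_*() helpers at runtime.
#
#   # /usr/local/community-scripts/default.vars (sample values)
#   var_os=debian
#   var_unprivileged=1
#   var_nesting=1
#   var_cpu=2
#   var_ram=2048
#   var_disk=8
#   var_net=dhcp                # or e.g. 192.168.1.50/24
#   var_gateway=192.168.1.1
#   var_brg=vmbr0
#   var_vlan=20
#   var_mtu=1500
#   var_tags=community-script;homelab
#   var_timezone=Europe/Berlin
#   var_ssh=yes
#   var_verbose=no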
@@ -2757,26 +2764,6 @@ Advanced:
[[ "$APT_CACHER" == "yes" ]] && echo -e "${INFO}${BOLD}${DGN}APT Cacher: ${BGN}$APT_CACHER_IP${CL}"
echo -e "${SEARCH}${BOLD}${DGN}Verbose Mode: ${BGN}$VERBOSE${CL}"
echo -e "${CREATING}${BOLD}${RD}Creating an LXC of ${APP} using the above advanced settings${CL}"

# Log settings to file
log_section "CONTAINER SETTINGS (ADVANCED) - ${APP}"
log_msg "Application: ${APP}"
log_msg "PVE Version: ${PVEVERSION} (Kernel: ${KERNEL_VERSION})"
log_msg "Operating System: $var_os ($var_version)"
log_msg "Container Type: $([ "$CT_TYPE" == "1" ] && echo "Unprivileged" || echo "Privileged")"
log_msg "Container ID: $CT_ID"
log_msg "Hostname: $HN"
log_msg "Disk Size: ${DISK_SIZE} GB"
log_msg "CPU Cores: $CORE_COUNT"
log_msg "RAM Size: ${RAM_SIZE} MiB"
log_msg "Bridge: $BRG"
log_msg "IPv4: $NET"
log_msg "IPv6: $IPV6_METHOD"
log_msg "FUSE Support: ${ENABLE_FUSE:-no}"
log_msg "Nesting: $([ "${ENABLE_NESTING:-1}" == "1" ] && echo "Enabled" || echo "Disabled")"
log_msg "GPU Passthrough: ${ENABLE_GPU:-no}"
log_msg "Verbose Mode: $VERBOSE"
log_msg "Session ID: ${SESSION_ID}"
}

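# Illustrative only (not part of the diff): roughly what the log_section/log_msg calls
# above record in the session log. The exact header layout produced by log_section() is
# defined elsewhere, and the values shown are hypothetical sample data:
#
#   CONTAINER SETTINGS (ADVANCED) - Ollama
#   Application: Ollama
#   Container ID: 105
#   Disk Size: 20 GB
#   CPU Cores: 4
#   RAM Size: 4096 MiB
#   Session ID: 20260214-abc123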
# ==============================================================================
@@ -2884,7 +2871,6 @@ diagnostics_menu() {
# - Prints summary of default values (ID, OS, type, disk, RAM, CPU, etc.)
# - Uses icons and formatting for readability
# - Convert CT_TYPE to description
# - Also logs settings to log file for debugging
# ------------------------------------------------------------------------------
echo_default() {
CT_TYPE_DESC="Unprivileged"
@@ -2906,20 +2892,6 @@ echo_default() {
fi
echo -e "${CREATING}${BOLD}${BL}Creating a ${APP} LXC using the above default settings${CL}"
echo -e " "

# Log settings to file
log_section "CONTAINER SETTINGS - ${APP}"
log_msg "Application: ${APP}"
log_msg "PVE Version: ${PVEVERSION} (Kernel: ${KERNEL_VERSION})"
log_msg "Container ID: ${CT_ID}"
log_msg "Operating System: $var_os ($var_version)"
log_msg "Container Type: $CT_TYPE_DESC"
log_msg "Disk Size: ${DISK_SIZE} GB"
log_msg "CPU Cores: ${CORE_COUNT}"
log_msg "RAM Size: ${RAM_SIZE} MiB"
[[ -n "${var_gpu:-}" && "${var_gpu}" == "yes" ]] && log_msg "GPU Passthrough: Enabled"
[[ "$VERBOSE" == "yes" ]] && log_msg "Verbose Mode: Enabled"
log_msg "Session ID: ${SESSION_ID}"
}

# ------------------------------------------------------------------------------
@@ -3106,10 +3078,10 @@ settings_menu() {

case "$choice" in
1) diagnostics_menu ;;
2) ${EDITOR:-nano} /usr/local/community-scripts/default.vars ;;
2) nano /usr/local/community-scripts/default.vars ;;
3)
if [ -f "$(get_app_defaults_path)" ]; then
${EDITOR:-nano} "$(get_app_defaults_path)"
nano "$(get_app_defaults_path)"
else
# Back was selected (no app.vars available)
return
@@ -3346,70 +3318,6 @@ configure_ssh_settings() {
fi
}

# ------------------------------------------------------------------------------
# msg_menu()
#
# - Displays a numbered menu for update_script() functions
# - In silent mode (PHS_SILENT=1): auto-selects the default option
# - In interactive mode: shows menu via read with 10s timeout + default fallback
# - Usage: CHOICE=$(msg_menu "Title" "tag1" "Description 1" "tag2" "Desc 2" ...)
# - The first item is always the default
# - Returns the selected tag to stdout
# - If no valid selection or timeout, returns the default (first) tag
# ------------------------------------------------------------------------------
msg_menu() {
local title="$1"
shift

# Parse items into parallel arrays: tags[] and descriptions[]
local -a tags=()
local -a descs=()
while [[ $# -ge 2 ]]; do
tags+=("$1")
descs+=("$2")
shift 2
done

local default_tag="${tags[0]}"
local count=${#tags[@]}

# Silent mode: return default immediately
if [[ -n "${PHS_SILENT+x}" ]] && [[ "${PHS_SILENT}" == "1" ]]; then
echo "$default_tag"
return 0
fi

# Display menu to /dev/tty so it doesn't get captured by command substitution
{
echo ""
msg_custom "📋" "${BL}" "${title}"
echo ""
for i in "${!tags[@]}"; do
local marker=" "
[[ $i -eq 0 ]] && marker="* "
printf "${TAB3}${marker}%s) %s\n" "${tags[$i]}" "${descs[$i]}"
done
echo ""
} >/dev/tty

local selection=""
read -r -t 10 -p "${TAB3}Select [default=${default_tag}, timeout 10s]: " selection </dev/tty >/dev/tty || true

# Validate selection
if [[ -n "$selection" ]]; then
for tag in "${tags[@]}"; do
if [[ "$selection" == "$tag" ]]; then
echo "$selection"
return 0
fi
done
msg_warn "Invalid selection '${selection}' - using default: ${default_tag}"
fi

echo "$default_tag"
return 0
}

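# Illustrative usage (not part of the diff), following the Usage line documented above.
# The menu title, tags, and descriptions below are hypothetical examples:
#
#   CHOICE=$(msg_menu "Update mode" \
#     "stable" "Update to the latest stable release (default)" \
#     "backup" "Create a backup only, skip the update")
#
#   case "$CHOICE" in
#     stable) ... ;;
#     backup) ... ;;
#   esac
#
# With PHS_SILENT=1 set, the call returns "stable" immediately without prompting; on an
# invalid answer or a 10s timeout it also falls back to the first (default) tag.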
# ------------------------------------------------------------------------------
# start()
#
@@ -3664,9 +3572,6 @@ $PCT_OPTIONS_STRING"
exit 214
fi
msg_ok "Storage space validated"

# Report installation start to API (early - captures failed installs too)
post_to_api
fi

create_lxc_container || exit $?
@@ -4041,9 +3946,6 @@ EOF'
# Install SSH keys
install_ssh_keys_into_ct

# Start timer for duration tracking
start_install_timer

# Run application installer
# Disable error trap - container errors are handled internally via flag file
set +Eeuo pipefail # Disable ALL error handling temporarily
@@ -4074,58 +3976,25 @@ EOF'
if [[ $install_exit_code -ne 0 ]]; then
msg_error "Installation failed in container ${CTID} (exit code: ${install_exit_code})"

# Copy install log from container BEFORE API call so get_error_text() can read it
# Copy both logs from container before potential deletion
local build_log_copied=false
local install_log_copied=false
local combined_log="/tmp/${NSAPP:-lxc}-${CTID}-${SESSION_ID}.log"

if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
# Create combined log with header
{
echo "================================================================================"
echo "COMBINED INSTALLATION LOG - ${APP:-LXC}"
echo "Container ID: ${CTID}"
echo "Session ID: ${SESSION_ID}"
echo "Timestamp: $(date '+%Y-%m-%d %H:%M:%S')"
echo "================================================================================"
echo ""
} >"$combined_log"

# Append BUILD_LOG (host-side creation log) if it exists
# Copy BUILD_LOG (creation log) if it exists
if [[ -f "${BUILD_LOG}" ]]; then
{
echo "================================================================================"
echo "PHASE 1: CONTAINER CREATION (Host)"
echo "================================================================================"
cat "${BUILD_LOG}"
echo ""
} >>"$combined_log"
build_log_copied=true
cp "${BUILD_LOG}" "/tmp/create-lxc-${CTID}-${SESSION_ID}.log" 2>/dev/null && build_log_copied=true
fi

# Copy and append INSTALL_LOG from container
local temp_install_log="/tmp/.install-temp-${SESSION_ID}.log"
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_install_log" 2>/dev/null; then
{
echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)"
echo "================================================================================"
cat "$temp_install_log"
echo ""
} >>"$combined_log"
rm -f "$temp_install_log"
# Copy INSTALL_LOG from container
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "/tmp/install-lxc-${CTID}-${SESSION_ID}.log" 2>/dev/null; then
install_log_copied=true
# Point INSTALL_LOG to combined log so get_error_text() finds it
INSTALL_LOG="$combined_log"
fi
fi

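# Illustrative only (not from the diff): sketch of how the combined log assembled above
# is laid out on the host. The file name pattern and the PHASE headers come from the code
# above; the application name, container ID, and excerpt contents are hypothetical.
#
#   /tmp/myapp-105-<SESSION_ID>.log
#     COMBINED INSTALLATION LOG - MyApp
#     PHASE 1: CONTAINER CREATION (Host)             <- contents of BUILD_LOG
#     PHASE 2: APPLICATION INSTALLATION (Container)  <- pulled from /root/.install-<SESSION_ID>.log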
# Report failure to telemetry API (now with log available on host)
post_update_to_api "failed" "$install_exit_code"

# Show combined log location
if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
msg_custom "📋" "${YW}" "Installation log: ${combined_log}"
# Show available logs
echo ""
[[ "$build_log_copied" == true ]] && echo -e "${GN}✔${CL} Container creation log: ${BL}/tmp/create-lxc-${CTID}-${SESSION_ID}.log${CL}"
[[ "$install_log_copied" == true ]] && echo -e "${GN}✔${CL} Installation log: ${BL}/tmp/install-lxc-${CTID}-${SESSION_ID}.log${CL}"
fi

# Dev mode: Keep container or open breakpoint shell
@@ -4148,21 +4017,19 @@ EOF'
exit $install_exit_code
fi

# Prompt user for cleanup with 60s timeout
# Prompt user for cleanup with 60s timeout (plain echo - no msg_info to avoid spinner)
echo ""
echo -en "${TAB}❓${TAB}${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"

if read -t 60 -r response; then
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
# Remove container
echo ""
msg_info "Removing container ${CTID}"
echo -e "\n${TAB}${HOLD}${YW}Removing container ${CTID}${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
msg_ok "Container ${CTID} removed"
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
elif [[ "$response" =~ ^[Nn]$ ]]; then
echo ""
msg_warn "Container ${CTID} kept for debugging"
echo -e "\n${TAB}${YW}Container ${CTID} kept for debugging${CL}"

# Dev mode: Setup MOTD/SSH for debugging access to broken container
if [[ "${DEV_MODE_MOTD:-false}" == "true" ]]; then
@@ -4178,17 +4045,13 @@ EOF'
fi
else
# Timeout - auto-remove
echo ""
msg_info "No response - removing container ${CTID}"
echo -e "\n${YW}No response - auto-removing container${CL}"
echo -e "${TAB}${HOLD}${YW}Removing container ${CTID}${CL}"
pct stop "$CTID" &>/dev/null || true
pct destroy "$CTID" &>/dev/null || true
msg_ok "Container ${CTID} removed"
echo -e "${BFR}${CM}${GN}Container ${CTID} removed${CL}"
fi

# Force one final status update attempt after cleanup
# This ensures status is updated even if the first attempt failed (e.g., HTTP 400)
post_update_to_api "failed" "$install_exit_code" "force"

exit $install_exit_code
fi
}
@@ -5192,74 +5055,18 @@ EOF
# SECTION 10: ERROR HANDLING & EXIT TRAPS
# ==============================================================================

# ------------------------------------------------------------------------------
# ensure_log_on_host()
#
# - Ensures INSTALL_LOG points to a readable file on the host
# - If INSTALL_LOG points to a container path (e.g. /root/.install-*),
#   tries to pull it from the container and create a combined log
# - This allows get_error_text() to find actual error output for telemetry
# ------------------------------------------------------------------------------
ensure_log_on_host() {
# Already readable on host? Nothing to do.
[[ -n "${INSTALL_LOG:-}" && -s "${INSTALL_LOG}" ]] && return 0

# Try pulling from container and creating combined log
if [[ -n "${CTID:-}" && -n "${SESSION_ID:-}" ]] && command -v pct &>/dev/null; then
local combined_log="/tmp/${NSAPP:-lxc}-${CTID}-${SESSION_ID}.log"
if [[ ! -s "$combined_log" ]]; then
# Create combined log
{
echo "================================================================================"
echo "COMBINED INSTALLATION LOG - ${APP:-LXC}"
echo "Container ID: ${CTID}"
echo "Session ID: ${SESSION_ID}"
echo "Timestamp: $(date '+%Y-%m-%d %H:%M:%S')"
echo "================================================================================"
echo ""
} >"$combined_log" 2>/dev/null || return 0
# Append BUILD_LOG if it exists
if [[ -f "${BUILD_LOG:-}" ]]; then
{
echo "================================================================================"
echo "PHASE 1: CONTAINER CREATION (Host)"
echo "================================================================================"
cat "${BUILD_LOG}"
echo ""
} >>"$combined_log"
fi
# Pull INSTALL_LOG from container
local temp_log="/tmp/.install-temp-${SESSION_ID}.log"
if pct pull "$CTID" "/root/.install-${SESSION_ID}.log" "$temp_log" 2>/dev/null; then
{
echo "================================================================================"
echo "PHASE 2: APPLICATION INSTALLATION (Container)"
echo "================================================================================"
cat "$temp_log"
echo ""
} >>"$combined_log"
rm -f "$temp_log"
fi
fi
if [[ -s "$combined_log" ]]; then
INSTALL_LOG="$combined_log"
fi
fi
}

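# Illustrative usage (not part of the diff): ensure_log_on_host() is normally invoked by
# the exit/signal traps below, but it could also be run by hand to recover a log after a
# failed run, assuming CTID, SESSION_ID and NSAPP are still set. The values here are
# hypothetical:
#
#   CTID=105 SESSION_ID=20260214-abc123 NSAPP=myapp ensure_log_on_host
#   less "$INSTALL_LOG"   # now points at /tmp/myapp-105-20260214-abc123.log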
# ------------------------------------------------------------------------------
# api_exit_script()
#
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - Posts failure status with exit code to API (error description resolved automatically)
# - Captures exit code and reports to API using centralized error descriptions
# - Uses explain_exit_code() from error_handler.func for consistent error messages
# - Posts failure status with exit code to API (error description added automatically)
# - Only executes on non-zero exit codes
# ------------------------------------------------------------------------------
api_exit_script() {
exit_code=$?
if [ $exit_code -ne 0 ]; then
ensure_log_on_host
post_update_to_api "failed" "$exit_code"
fi
}
@@ -5267,6 +5074,6 @@ api_exit_script() {
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'ensure_log_on_host; post_update_to_api "failed" "$?"' ERR
trap 'ensure_log_on_host; post_update_to_api "failed" "130"; exit 130' SIGINT
trap 'ensure_log_on_host; post_update_to_api "failed" "143"; exit 143' SIGTERM
trap 'post_update_to_api "failed" "$BASH_COMMAND"' ERR
trap 'post_update_to_api "failed" "INTERRUPTED"' SIGINT
trap 'post_update_to_api "failed" "TERMINATED"' SIGTERM