Mirror of https://github.com/community-scripts/ProxmoxVE.git, synced 2026-04-12 21:15:05 +02:00.
Comparing `fix/crafty` ... `fix/frigat` — 1 commit (`e7d999a514`).
`.github/changelogs/2026/03.md` (generated, vendored) — 325 lines changed
@@ -1,328 +1,3 @@
## 2026-03-28

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Fix: Update gokapi binary name for v2.2.4+ and add migration step [@krazos](https://github.com/krazos) ([#13377](https://github.com/community-scripts/ProxmoxVE/pull/13377))
  - Fix: update gokapi asset matching for v2.2.4+ naming convention [@krazos](https://github.com/krazos) ([#13369](https://github.com/community-scripts/ProxmoxVE/pull/13369))
  - Tandoor Recipes: Add missing env variable [@tremor021](https://github.com/tremor021) ([#13365](https://github.com/community-scripts/ProxmoxVE/pull/13365))

- #### ✨ New Features

  - FileFlows: add option to install Node [@tremor021](https://github.com/tremor021) ([#13368](https://github.com/community-scripts/ProxmoxVE/pull/13368))
## 2026-03-27

### 🆕 New Scripts

- Matter-Server ([#13355](https://github.com/community-scripts/ProxmoxVE/pull/13355))
- GeoPulse ([#13320](https://github.com/community-scripts/ProxmoxVE/pull/13320))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - RevealJS: Switch from gulp to vite [@tremor021](https://github.com/tremor021) ([#13336](https://github.com/community-scripts/ProxmoxVE/pull/13336))

- #### ✨ New Features

  - Dispatcharr add custom Postgres port support for upgrade [@MickLesk](https://github.com/MickLesk) ([#13347](https://github.com/community-scripts/ProxmoxVE/pull/13347))
  - Immich: bump to v2.6.3 [@MickLesk](https://github.com/MickLesk) ([#13324](https://github.com/community-scripts/ProxmoxVE/pull/13324))

### 🧰 Tools

- #### ✨ New Features

  - Refactor/Feature-Bump/Security: Update-Cron-LXCs (Now Local Mode!) [@MickLesk](https://github.com/MickLesk) ([#13339](https://github.com/community-scripts/ProxmoxVE/pull/13339))
## 2026-03-26

### 🆕 New Scripts

- BirdNET ([#13313](https://github.com/community-scripts/ProxmoxVE/pull/13313))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Immich: Bump to 2.6.2 | use start.sh in service, ensure DB_HOSTNAME in .env | Fix Rights Issue with ZFS Shares [@MickLesk](https://github.com/MickLesk) ([#13199](https://github.com/community-scripts/ProxmoxVE/pull/13199))

- #### ✨ New Features

  - SparkyFitness: add garmin microservice as addon [@tomfrenzel](https://github.com/tomfrenzel) ([#12642](https://github.com/community-scripts/ProxmoxVE/pull/12642))
  - Frigate: bump to v0.17.1 & change build order [@MickLesk](https://github.com/MickLesk) ([#13304](https://github.com/community-scripts/ProxmoxVE/pull/13304))

### 💾 Core

- #### 🐞 Bug Fixes

  - tools.func: pin npm to 11.11.0 to work around Node.js 22.22.2 regression [@MickLesk](https://github.com/MickLesk) ([#13296](https://github.com/community-scripts/ProxmoxVE/pull/13296))

- #### ✨ New Features

  - core: APT/APK Mirror Fallback for CDN Failures [@MickLesk](https://github.com/MickLesk) ([#13316](https://github.com/community-scripts/ProxmoxVE/pull/13316))
  - core/tools: replace generic return 1 exit_codes with more specific exit_codes [@MickLesk](https://github.com/MickLesk) ([#13311](https://github.com/community-scripts/ProxmoxVE/pull/13311))

- #### 🔧 Refactor

  - core: use /usr/bin/install to prevent function shadowing [@MickLesk](https://github.com/MickLesk) ([#13299](https://github.com/community-scripts/ProxmoxVE/pull/13299))

### 🧰 Tools

- #### 🐞 Bug Fixes

  - SparkyFitness-Garmin: fix app name [@tomfrenzel](https://github.com/tomfrenzel) ([#13325](https://github.com/community-scripts/ProxmoxVE/pull/13325))
## 2026-03-25

### 🚀 Updated Scripts

- #### ✨ New Features

  - Komodo v2: migrate env vars to v2 and update source [@MickLesk](https://github.com/MickLesk) ([#13262](https://github.com/community-scripts/ProxmoxVE/pull/13262))

### 💾 Core

- #### 🔧 Refactor

  - core: make shell command substitutions safe with || true [@MickLesk](https://github.com/MickLesk) ([#13279](https://github.com/community-scripts/ProxmoxVE/pull/13279))
## 2026-03-24

### 🆕 New Scripts

- Homebrew (Addon) ([#13249](https://github.com/community-scripts/ProxmoxVE/pull/13249))
- NextExplorer ([#13252](https://github.com/community-scripts/ProxmoxVE/pull/13252))

### 🚀 Updated Scripts

- #### ✨ New Features

  - Turnkey: modernize turnkey.sh with shared libraries [@MickLesk](https://github.com/MickLesk) ([#13242](https://github.com/community-scripts/ProxmoxVE/pull/13242))

- #### 🔧 Refactor

  - chore: replace helper-scripts.com with community-scripts.com [@MickLesk](https://github.com/MickLesk) ([#13244](https://github.com/community-scripts/ProxmoxVE/pull/13244))

### 🗑️ Deleted Scripts

- Remove: Booklore [@MickLesk](https://github.com/MickLesk) ([#13265](https://github.com/community-scripts/ProxmoxVE/pull/13265))
## 2026-03-23

### 🚀 Updated Scripts

- #### 🔧 Refactor

  - core: harden shell scripts against injection and insecure permissions [@MickLesk](https://github.com/MickLesk) ([#13239](https://github.com/community-scripts/ProxmoxVE/pull/13239))
## 2026-03-22

### 🆕 New Scripts

- versitygw ([#13180](https://github.com/community-scripts/ProxmoxVE/pull/13180))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Adventurelog: pin DRF <3.15 to fix coreapi module removal [@MickLesk](https://github.com/MickLesk) ([#13194](https://github.com/community-scripts/ProxmoxVE/pull/13194))

- #### ✨ New Features

  - ConvertX: add libreoffice-writer for ODT/document conversions [@MickLesk](https://github.com/MickLesk) ([#13196](https://github.com/community-scripts/ProxmoxVE/pull/13196))

- #### 🔧 Refactor

  - iSponsorblockTV: add AVX CPU check before installation [@MickLesk](https://github.com/MickLesk) ([#13197](https://github.com/community-scripts/ProxmoxVE/pull/13197))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: guard against empty IPv6 address in static mode [@MickLesk](https://github.com/MickLesk) ([#13195](https://github.com/community-scripts/ProxmoxVE/pull/13195))
## 2026-03-21

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Anytype-server: wait for MongoDB readiness before rs.initiate() [@MickLesk](https://github.com/MickLesk) ([#13165](https://github.com/community-scripts/ProxmoxVE/pull/13165))
  - Frigate: use correct CPU model fallback path [@MickLesk](https://github.com/MickLesk) ([#13164](https://github.com/community-scripts/ProxmoxVE/pull/13164))
  - iSponsorBlockTV: Fix release fetching [@tremor021](https://github.com/tremor021) ([#13157](https://github.com/community-scripts/ProxmoxVE/pull/13157))
  - Isponsorblocktv: use quoted heredoc to prevent unbound variable error during CLI wrapper creation [@Copilot](https://github.com/Copilot) ([#13146](https://github.com/community-scripts/ProxmoxVE/pull/13146))

- #### ✨ New Features

  - Headscale: Enable TUN [@tremor021](https://github.com/tremor021) ([#13158](https://github.com/community-scripts/ProxmoxVE/pull/13158))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: add missing -searchdomain/-nameserver prefix in base_settings [@MickLesk](https://github.com/MickLesk) ([#13166](https://github.com/community-scripts/ProxmoxVE/pull/13166))
## 2026-03-20

### 🆕 New Scripts

- iSponsorBlockTV ([#13123](https://github.com/community-scripts/ProxmoxVE/pull/13123))
- Alpine-Wakapi ([#13119](https://github.com/community-scripts/ProxmoxVE/pull/13119))
- teleport ([#13086](https://github.com/community-scripts/ProxmoxVE/pull/13086))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Reactive-Resume: add git dependency for v5.0.13+ [@MickLesk](https://github.com/MickLesk) ([#13133](https://github.com/community-scripts/ProxmoxVE/pull/13133))
  - Scanopy: increase default CPU, RAM, and HDD to prevent OOM during Rust build [@Copilot](https://github.com/Copilot) ([#13130](https://github.com/community-scripts/ProxmoxVE/pull/13130))

- #### ✨ New Features

  - Immich: v2.6.1 [@vhsdream](https://github.com/vhsdream) ([#13111](https://github.com/community-scripts/ProxmoxVE/pull/13111))
  - VM's: add input validation and hostname sanitization to all VM scripts [@MickLesk](https://github.com/MickLesk) ([#12973](https://github.com/community-scripts/ProxmoxVE/pull/12973))

### 🧰 Tools

- #### 🔧 Refactor

  - Harden code-server addon install script [@MickLesk](https://github.com/MickLesk) ([#13116](https://github.com/community-scripts/ProxmoxVE/pull/13116))
## 2026-03-19

### 🚀 Updated Scripts

- Owncast: increase default disk size from 2GB to 10GB [@Copilot](https://github.com/Copilot) ([#13079](https://github.com/community-scripts/ProxmoxVE/pull/13079))

- #### 🐞 Bug Fixes

  - fix: remove extra backslash to match single quoted here-doc [@Zelnes](https://github.com/Zelnes) ([#13108](https://github.com/community-scripts/ProxmoxVE/pull/13108))
  - Reactive-Resume: Upgrade Node to 24 and enable Corepack [@MickLesk](https://github.com/MickLesk) ([#13093](https://github.com/community-scripts/ProxmoxVE/pull/13093))
  - Increase Tracearr RAM; derive APP_VERSION [@MickLesk](https://github.com/MickLesk) ([#13087](https://github.com/community-scripts/ProxmoxVE/pull/13087))
  - ProjectSend: Update application access URL [@tremor021](https://github.com/tremor021) ([#13078](https://github.com/community-scripts/ProxmoxVE/pull/13078))
  - Dispatcharr: use npm install --no-audit --progress=false [@MickLesk](https://github.com/MickLesk) ([#13074](https://github.com/community-scripts/ProxmoxVE/pull/13074))
  - core: reorder hwaccel setup and adjust GPU group usermod [@MickLesk](https://github.com/MickLesk) ([#13072](https://github.com/community-scripts/ProxmoxVE/pull/13072))

- #### ✨ New Features

  - tools.func: display pin reason in release-check messages [@MickLesk](https://github.com/MickLesk) ([#13095](https://github.com/community-scripts/ProxmoxVE/pull/13095))
  - NocoDB: Unpin Version to latest [@MickLesk](https://github.com/MickLesk) ([#13094](https://github.com/community-scripts/ProxmoxVE/pull/13094))

### 💾 Core

- #### 🐞 Bug Fixes

  - tools.func: use dpkg-query for reliable JDK version detection [@MickLesk](https://github.com/MickLesk) ([#13101](https://github.com/community-scripts/ProxmoxVE/pull/13101))

### 📚 Documentation

- Update link from helper-scripts.com to community-scripts.org [@adnanvaldes](https://github.com/adnanvaldes) ([#13098](https://github.com/community-scripts/ProxmoxVE/pull/13098))
- github: add PocketBase bot workflow [@MickLesk](https://github.com/MickLesk) ([#13075](https://github.com/community-scripts/ProxmoxVE/pull/13075))
## 2026-03-18

### 🆕 New Scripts

- Alpine-Ntfy [@MickLesk](https://github.com/MickLesk) ([#13048](https://github.com/community-scripts/ProxmoxVE/pull/13048))
- Split-Pro ([#12975](https://github.com/community-scripts/ProxmoxVE/pull/12975))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Tdarr: use curl_with_retry and correct exit code [@MickLesk](https://github.com/MickLesk) ([#13060](https://github.com/community-scripts/ProxmoxVE/pull/13060))
  - reitti: fix: v4 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#13039](https://github.com/community-scripts/ProxmoxVE/pull/13039))
  - Paperless-NGX: increase default RAM to 3GB [@MickLesk](https://github.com/MickLesk) ([#13018](https://github.com/community-scripts/ProxmoxVE/pull/13018))
  - Plex: restart service after update to apply new version [@MickLesk](https://github.com/MickLesk) ([#13017](https://github.com/community-scripts/ProxmoxVE/pull/13017))

- #### ✨ New Features

  - tools: centralize GPU group setup via setup_hwaccel [@MickLesk](https://github.com/MickLesk) ([#13044](https://github.com/community-scripts/ProxmoxVE/pull/13044))
  - Termix: add guacd build and systemd integration [@MickLesk](https://github.com/MickLesk) ([#12999](https://github.com/community-scripts/ProxmoxVE/pull/12999))

- #### 🔧 Refactor

  - Podman: replace deprecated commands with Quadlets [@MickLesk](https://github.com/MickLesk) ([#13052](https://github.com/community-scripts/ProxmoxVE/pull/13052))
  - Refactor: Jellyfin repo, ffmpeg package and symlinks [@MickLesk](https://github.com/MickLesk) ([#13045](https://github.com/community-scripts/ProxmoxVE/pull/13045))
  - pve-scripts-local: Increase default disk size from 4GB to 10GB [@MickLesk](https://github.com/MickLesk) ([#13009](https://github.com/community-scripts/ProxmoxVE/pull/13009))

### 💾 Core

- #### ✨ New Features

  - tools.func Implement pg_cron setup for setup_postgresql [@MickLesk](https://github.com/MickLesk) ([#13053](https://github.com/community-scripts/ProxmoxVE/pull/13053))
  - tools.func: Implement check_for_gh_tag function [@MickLesk](https://github.com/MickLesk) ([#12998](https://github.com/community-scripts/ProxmoxVE/pull/12998))
  - tools.func: Implement fetch_and_deploy_gh_tag function [@MickLesk](https://github.com/MickLesk) ([#13000](https://github.com/community-scripts/ProxmoxVE/pull/13000))
## 2026-03-17

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Gluetun: add OpenVPN process user and cleanup stale config [@MickLesk](https://github.com/MickLesk) ([#13016](https://github.com/community-scripts/ProxmoxVE/pull/13016))
  - Frigate: check OpenVino model files exist before configuring detector and use curl_with_retry instead of default wget [@MickLesk](https://github.com/MickLesk) ([#13019](https://github.com/community-scripts/ProxmoxVE/pull/13019))

### 💾 Core

- #### 🔧 Refactor

  - tools.func: Update `create_self_signed_cert()` [@tremor021](https://github.com/tremor021) ([#13008](https://github.com/community-scripts/ProxmoxVE/pull/13008))
## 2026-03-16

### 🆕 New Scripts

- Gluetun ([#12976](https://github.com/community-scripts/ProxmoxVE/pull/12976))
- Anytype-Server ([#12974](https://github.com/community-scripts/ProxmoxVE/pull/12974))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Immich: use gcc-13 for compilation & add uv python pre-install with retry logic [@MickLesk](https://github.com/MickLesk) ([#12935](https://github.com/community-scripts/ProxmoxVE/pull/12935))
  - Tautulli: add setuptools<81 constraint to update script [@MickLesk](https://github.com/MickLesk) ([#12959](https://github.com/community-scripts/ProxmoxVE/pull/12959))
  - Seerr: add missing build deps [@MickLesk](https://github.com/MickLesk) ([#12960](https://github.com/community-scripts/ProxmoxVE/pull/12960))
  - fix: yubal update [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12961](https://github.com/community-scripts/ProxmoxVE/pull/12961))

### 💾 Core

- #### 🐞 Bug Fixes

  - hwaccel: remove ROCm install from AMD APU setup [@MickLesk](https://github.com/MickLesk) ([#12958](https://github.com/community-scripts/ProxmoxVE/pull/12958))
## 2026-03-15

### 🆕 New Scripts

- Yamtrack ([#12936](https://github.com/community-scripts/ProxmoxVE/pull/12936))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Wishlist: use --frozen-lockfile for pnpm install [@MickLesk](https://github.com/MickLesk) ([#12892](https://github.com/community-scripts/ProxmoxVE/pull/12892))
  - SparkyFitness: use --legacy-peer-deps for npm install [@MickLesk](https://github.com/MickLesk) ([#12888](https://github.com/community-scripts/ProxmoxVE/pull/12888))
  - Frigate: add fallback for OpenVino labelmap file [@MickLesk](https://github.com/MickLesk) ([#12889](https://github.com/community-scripts/ProxmoxVE/pull/12889))

- #### 🔧 Refactor

  - Refactor: ITSM-NG [@MickLesk](https://github.com/MickLesk) ([#12918](https://github.com/community-scripts/ProxmoxVE/pull/12918))
  - core: unify RELEASE variable for check_for_gh_release and fetch_and_deploy [@MickLesk](https://github.com/MickLesk) ([#12917](https://github.com/community-scripts/ProxmoxVE/pull/12917))
  - Standardize NSAPP names across VM scripts [@MickLesk](https://github.com/MickLesk) ([#12924](https://github.com/community-scripts/ProxmoxVE/pull/12924))

### 💾 Core

- #### ✨ New Features

  - core: retry downloads with exponential backoff [@MickLesk](https://github.com/MickLesk) ([#12896](https://github.com/community-scripts/ProxmoxVE/pull/12896))

### ❔ Uncategorized

- [go2rtc] Add ffmpeg dependency to install script [@Copilot](https://github.com/Copilot) ([#12944](https://github.com/community-scripts/ProxmoxVE/pull/12944))
## 2026-03-14

### 🚀 Updated Scripts
`.github/workflows/close-tteck-issues.yaml` (generated, vendored) — 2 lines changed
@@ -21,7 +21,7 @@ jobs:
             const message = `Hello, it looks like you are referencing the **old tteck repo**.

             This repository is no longer used for active scripts.

-            **Please update your bookmarks** and use: [https://community-scripts.com](https://community-scripts.com)
+            **Please update your bookmarks** and use: [https://helper-scripts.com](https://helper-scripts.com)

             Also make sure your Bash command starts with:

             \`\`\`bash
`.github/workflows/update-script-timestamp-on-sh-change.yml` (generated, vendored) — 16 lines changed
@@ -155,21 +155,13 @@ jobs:
               console.log('Slug not in DB, skipping: ' + slug);
               continue;
             }
-            const today = new Date().toISOString().split('T')[0];
-            const patchBody = {
-              script_updated: today,
-              last_update_commit: process.env.PR_URL || process.env.COMMIT_URL || ''
-            };
-            // When a dev script is merged into main, promote it to production
-            if (record.is_dev === true) {
-              patchBody.is_dev = false;
-              patchBody.script_created = today;
-              console.log('Promoting dev script to production: ' + slug);
-            }
             const patchRes = await request(recordsUrl + '/' + record.id, {
               method: 'PATCH',
               headers: { 'Authorization': token, 'Content-Type': 'application/json' },
-              body: JSON.stringify(patchBody)
+              body: JSON.stringify({
+                script_updated: new Date().toISOString().split('T')[0],
+                last_update_commit: process.env.PR_URL || process.env.COMMIT_URL || ''
+              })
             });
             if (!patchRes.ok) {
               console.warn('PATCH failed for slug ' + slug + ': ' + patchRes.body);
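The `fix/crafty` side of this hunk builds the PATCH payload in two steps: a base object with the update date and commit URL, plus a conditional "dev promotion" that flips `is_dev` and stamps `script_created` when a dev script first lands on main. A minimal runnable sketch of that logic, assuming a hypothetical `record` row and a plain `env` object standing in for `process.env` (the helper name `buildPatchBody` is illustrative, not from the workflow):

```javascript
// Build the PocketBase PATCH body for a script record, mirroring the logic
// in the removed side of the hunk above. `record` and `env` are assumptions:
// a DB row with an `is_dev` flag, and an object with PR_URL/COMMIT_URL.
function buildPatchBody(record, env) {
  const today = new Date().toISOString().split('T')[0]; // YYYY-MM-DD
  const patchBody = {
    script_updated: today,
    last_update_commit: env.PR_URL || env.COMMIT_URL || ''
  };
  // When a dev script is merged into main, promote it to production.
  if (record.is_dev === true) {
    patchBody.is_dev = false;
    patchBody.script_created = today;
  }
  return patchBody;
}
```

The design point is that the promotion fields ride along in the same PATCH request as the timestamp update, so no second round trip is needed.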
`CHANGELOG.md` — 694 lines changed
@@ -23,12 +23,6 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
@@ -45,7 +39,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 <details>
-<summary><h4>March (28 entries)</h4></summary>
+<summary><h4>March (14 entries)</h4></summary>

 [View March 2026 Changelog](.github/changelogs/2026/03.md)
@@ -429,253 +423,14 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
 </details>
## 2026-04-03

### 🆕 New Scripts

- netboot.xyz ([#13480](https://github.com/community-scripts/ProxmoxVE/pull/13480))

### 🚀 Updated Scripts

- #### ✨ New Features

  - Wealthfolio: update to v3.2.1 and Node.js 24 [@afadil](https://github.com/afadil) ([#13486](https://github.com/community-scripts/ProxmoxVE/pull/13486))
## 2026-04-02

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Grist: Guard backup restore for empty docs/db files [@MickLesk](https://github.com/MickLesk) ([#13472](https://github.com/community-scripts/ProxmoxVE/pull/13472))
  - fix(zigbee2mqtt): suppress grep error when pnpm-workspace.yaml is absent on update [@Copilot](https://github.com/Copilot) ([#13476](https://github.com/community-scripts/ProxmoxVE/pull/13476))

### 🧰 Tools

- #### 🐞 Bug Fixes

  - Cron LXC Updater: Add full PATH for cron environment [@MickLesk](https://github.com/MickLesk) ([#13473](https://github.com/community-scripts/ProxmoxVE/pull/13473))
## 2026-04-01

### 🆕 New Scripts

- DrawDB ([#13454](https://github.com/community-scripts/ProxmoxVE/pull/13454))

### 🧰 Tools

- #### 🐞 Bug Fixes

  - Filebrowser: make noauth setup use correct database [@MickLesk](https://github.com/MickLesk) ([#13457](https://github.com/community-scripts/ProxmoxVE/pull/13457))
## 2026-03-31

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Graylog: set vm.max_map_count on host for OpenSearch [@MickLesk](https://github.com/MickLesk) ([#13441](https://github.com/community-scripts/ProxmoxVE/pull/13441))
  - Koillection: ensure newline before appending to .env.local [@MickLesk](https://github.com/MickLesk) ([#13440](https://github.com/community-scripts/ProxmoxVE/pull/13440))

### 💾 Core

- #### 🔧 Refactor

  - core: skip empty gateway value in network config [@MickLesk](https://github.com/MickLesk) ([#13442](https://github.com/community-scripts/ProxmoxVE/pull/13442))
## 2026-03-30

### 🆕 New Scripts

- Bambuddy ([#13411](https://github.com/community-scripts/ProxmoxVE/pull/13411))

### 🚀 Updated Scripts

- #### 💥 Breaking Changes

  - Rename: BirdNET > BirdNET-Go [@MickLesk](https://github.com/MickLesk) ([#13410](https://github.com/community-scripts/ProxmoxVE/pull/13410))
## 2026-03-29

### 🆕 New Scripts

- YOURLS ([#13379](https://github.com/community-scripts/ProxmoxVE/pull/13379))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - fix(victoriametrics): use jq to filter releases [@Joery-M](https://github.com/Joery-M) ([#13393](https://github.com/community-scripts/ProxmoxVE/pull/13393))
  - Ollama: add error handling for Intel GPG key imports [@MickLesk](https://github.com/MickLesk) ([#13397](https://github.com/community-scripts/ProxmoxVE/pull/13397))
  - Immich: ignore Redis connection error on maintenance mode disable [@MickLesk](https://github.com/MickLesk) ([#13398](https://github.com/community-scripts/ProxmoxVE/pull/13398))
  - NPM: unmask openresty after migration from package [@MickLesk](https://github.com/MickLesk) ([#13399](https://github.com/community-scripts/ProxmoxVE/pull/13399))
## 2026-03-28
|
|
||||||
|
|
||||||
### 🚀 Updated Scripts
|
|
||||||
|
|
||||||
- #### 🐞 Bug Fixes
|
|
||||||
|
|
||||||
- Fix: Update gokapi binary name for v2.2.4+ and add migration step [@krazos](https://github.com/krazos) ([#13377](https://github.com/community-scripts/ProxmoxVE/pull/13377))
|
|
||||||
- Fix: update gokapi asset matching for v2.2.4+ naming convention [@krazos](https://github.com/krazos) ([#13369](https://github.com/community-scripts/ProxmoxVE/pull/13369))
|
|
||||||
- Tandoor Recipes: Add missing env variable [@tremor021](https://github.com/tremor021) ([#13365](https://github.com/community-scripts/ProxmoxVE/pull/13365))
|
|
||||||
|
|
||||||
- #### ✨ New Features
|
|
||||||
|
|
||||||
- FileFlows: add option to install Node [@tremor021](https://github.com/tremor021) ([#13368](https://github.com/community-scripts/ProxmoxVE/pull/13368))
|
|
||||||
|
|
||||||
## 2026-03-27
|
|
||||||
|
|
||||||
### 🆕 New Scripts
|
|
||||||
|
|
||||||
- Matter-Server ([#13355](https://github.com/community-scripts/ProxmoxVE/pull/13355))
|
|
||||||
- GeoPulse ([#13320](https://github.com/community-scripts/ProxmoxVE/pull/13320))
|
|
||||||
|
|
||||||
### 🚀 Updated Scripts
|
|
||||||
|
|
||||||
- #### 🐞 Bug Fixes
|
|
||||||
|
|
||||||
- RevealJS: Switch from gulp to vite [@tremor021](https://github.com/tremor021) ([#13336](https://github.com/community-scripts/ProxmoxVE/pull/13336))
|
|
||||||
|
|
||||||
- #### ✨ New Features
|
|
||||||
|
|
||||||
- Dispatcharr add custom Postgres port support for upgrade [@MickLesk](https://github.com/MickLesk) ([#13347](https://github.com/community-scripts/ProxmoxVE/pull/13347))
|
|
||||||
- Immich: bump to v2.6.3 [@MickLesk](https://github.com/MickLesk) ([#13324](https://github.com/community-scripts/ProxmoxVE/pull/13324))
|
|
||||||
|
|
||||||
### 🧰 Tools
|
|
||||||
|
|
||||||
- #### ✨ New Features
|
|
||||||
|
|
||||||
- Refactor/Feature-Bump/Security: Update-Cron-LXCs (Now Local Mode!) [@MickLesk](https://github.com/MickLesk) ([#13339](https://github.com/community-scripts/ProxmoxVE/pull/13339))
|
|
||||||
|
|
||||||
## 2026-03-26
|
|
||||||
|
|
||||||
### 🆕 New Scripts
|
|
||||||
|
|
||||||
- BirdNET ([#13313](https://github.com/community-scripts/ProxmoxVE/pull/13313))
|
|
||||||
|
|
||||||
### 🚀 Updated Scripts
|
|
||||||
|
|
||||||
- #### 🐞 Bug Fixes
|
|
||||||
|
|
||||||
- Immich: Bump to 2.6.2 | use start.sh in service, ensure DB_HOSTNAME in .env | Fix Rights Issue with ZFS Shares [@MickLesk](https://github.com/MickLesk) ([#13199](https://github.com/community-scripts/ProxmoxVE/pull/13199))
|
|
||||||
|
|
||||||
- #### ✨ New Features
|
|
||||||
|
|
||||||
- SparkyFitness: add garmin microservice as addon [@tomfrenzel](https://github.com/tomfrenzel) ([#12642](https://github.com/community-scripts/ProxmoxVE/pull/12642))
|
|
||||||
- Frigate: bump to v0.17.1 & change build order [@MickLesk](https://github.com/MickLesk) ([#13304](https://github.com/community-scripts/ProxmoxVE/pull/13304))
|
|
||||||
|
|
||||||
### 💾 Core
|
|
||||||
|
|
||||||
- #### 🐞 Bug Fixes
|
|
||||||
|
|
||||||
- tools.func: pin npm to 11.11.0 to work around Node.js 22.22.2 regression [@MickLesk](https://github.com/MickLesk) ([#13296](https://github.com/community-scripts/ProxmoxVE/pull/13296))
|
|
||||||
|
|
||||||
- #### ✨ New Features
|
|
||||||
|
|
||||||
- core: APT/APK Mirror Fallback for CDN Failures [@MickLesk](https://github.com/MickLesk) ([#13316](https://github.com/community-scripts/ProxmoxVE/pull/13316))
|
|
||||||
- core/tools: replace generic return 1 exit_codes with more specific exit_codes [@MickLesk](https://github.com/MickLesk) ([#13311](https://github.com/community-scripts/ProxmoxVE/pull/13311))
|
|
||||||
|
|
||||||
- #### 🔧 Refactor
|
|
||||||
|
|
||||||
- core: use /usr/bin/install to prevent function shadowing [@MickLesk](https://github.com/MickLesk) ([#13299](https://github.com/community-scripts/ProxmoxVE/pull/13299))
|
|
||||||
|
|
||||||
### 🧰 Tools
|
|
||||||
|
|
||||||
- #### 🐞 Bug Fixes
|
|
||||||
|
|
||||||
- SparkyFitness-Garmin: fix app name [@tomfrenzel](https://github.com/tomfrenzel) ([#13325](https://github.com/community-scripts/ProxmoxVE/pull/13325))
|
|
||||||
|
|
||||||
## 2026-03-25
|
|
||||||
|
|
||||||
### 🚀 Updated Scripts
|
|
||||||
|
|
||||||
- #### ✨ New Features
|
|
||||||
|
|
||||||
- Komodo v2: migrate env vars to v2 and update source [@MickLesk](https://github.com/MickLesk) ([#13262](https://github.com/community-scripts/ProxmoxVE/pull/13262))
|
|
||||||
|
|
||||||
### 💾 Core
|
|
||||||
|
|
||||||
- #### 🔧 Refactor
|
|
||||||
|
|
||||||
- core: make shell command substitutions safe with || true [@MickLesk](https://github.com/MickLesk) ([#13279](https://github.com/community-scripts/ProxmoxVE/pull/13279))
|
|
||||||
|
|
||||||
## 2026-03-24

### 🆕 New Scripts

- Homebrew (Addon) ([#13249](https://github.com/community-scripts/ProxmoxVE/pull/13249))
- NextExplorer ([#13252](https://github.com/community-scripts/ProxmoxVE/pull/13252))

### 🚀 Updated Scripts

- #### ✨ New Features

  - Turnkey: modernize turnkey.sh with shared libraries [@MickLesk](https://github.com/MickLesk) ([#13242](https://github.com/community-scripts/ProxmoxVE/pull/13242))

- #### 🔧 Refactor

  - chore: replace helper-scripts.com with community-scripts.com [@MickLesk](https://github.com/MickLesk) ([#13244](https://github.com/community-scripts/ProxmoxVE/pull/13244))

### 🗑️ Deleted Scripts

- Remove: Booklore [@MickLesk](https://github.com/MickLesk) ([#13265](https://github.com/community-scripts/ProxmoxVE/pull/13265))
## 2026-03-23

### 🚀 Updated Scripts

- #### 🔧 Refactor

  - core: harden shell scripts against injection and insecure permissions [@MickLesk](https://github.com/MickLesk) ([#13239](https://github.com/community-scripts/ProxmoxVE/pull/13239))
## 2026-03-22

### 🆕 New Scripts

- versitygw ([#13180](https://github.com/community-scripts/ProxmoxVE/pull/13180))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Adventurelog: pin DRF <3.15 to fix coreapi module removal [@MickLesk](https://github.com/MickLesk) ([#13194](https://github.com/community-scripts/ProxmoxVE/pull/13194))

- #### ✨ New Features

  - ConvertX: add libreoffice-writer for ODT/document conversions [@MickLesk](https://github.com/MickLesk) ([#13196](https://github.com/community-scripts/ProxmoxVE/pull/13196))

- #### 🔧 Refactor

  - iSponsorblockTV: add AVX CPU check before installation [@MickLesk](https://github.com/MickLesk) ([#13197](https://github.com/community-scripts/ProxmoxVE/pull/13197))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: guard against empty IPv6 address in static mode [@MickLesk](https://github.com/MickLesk) ([#13195](https://github.com/community-scripts/ProxmoxVE/pull/13195))
## 2026-03-21

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Anytype-server: wait for MongoDB readiness before rs.initiate() [@MickLesk](https://github.com/MickLesk) ([#13165](https://github.com/community-scripts/ProxmoxVE/pull/13165))
  - Frigate: use correct CPU model fallback path [@MickLesk](https://github.com/MickLesk) ([#13164](https://github.com/community-scripts/ProxmoxVE/pull/13164))
  - iSponsorBlockTV: Fix release fetching [@tremor021](https://github.com/tremor021) ([#13157](https://github.com/community-scripts/ProxmoxVE/pull/13157))
  - Isponsorblocktv: use quoted heredoc to prevent unbound variable error during CLI wrapper creation [@Copilot](https://github.com/Copilot) ([#13146](https://github.com/community-scripts/ProxmoxVE/pull/13146))

- #### ✨ New Features

  - Headscale: Enable TUN [@tremor021](https://github.com/tremor021) ([#13158](https://github.com/community-scripts/ProxmoxVE/pull/13158))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: add missing -searchdomain/-nameserver prefix in base_settings [@MickLesk](https://github.com/MickLesk) ([#13166](https://github.com/community-scripts/ProxmoxVE/pull/13166))
## 2026-03-20

### 🆕 New Scripts

@@ -1227,4 +982,449 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit

- #### ✨ New Features

  - core: graceful fallback for apt-get update failures [@MickLesk](https://github.com/MickLesk) ([#12386](https://github.com/community-scripts/ProxmoxVE/pull/12386))
  - core: Improve error outputs across core functions [@MickLesk](https://github.com/MickLesk) ([#12378](https://github.com/community-scripts/ProxmoxVE/pull/12378))
## 2026-02-26

### 🆕 New Scripts

- Kima-Hub ([#12319](https://github.com/community-scripts/ProxmoxVE/pull/12319))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - tools.func: update glx alternatives / nvidia alternative if nvidia glx are missing [@MickLesk](https://github.com/MickLesk) ([#12372](https://github.com/community-scripts/ProxmoxVE/pull/12372))
  - hotfix: overseer version [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12366](https://github.com/community-scripts/ProxmoxVE/pull/12366))

- #### ✨ New Features

  - Add ffmpeg for booklore (ffprobe) [@MickLesk](https://github.com/MickLesk) ([#12371](https://github.com/community-scripts/ProxmoxVE/pull/12371))
  - [QOL] Immich: add warning regarding library compilation time [@vhsdream](https://github.com/vhsdream) ([#12345](https://github.com/community-scripts/ProxmoxVE/pull/12345))

### 🧰 Tools

- #### 🐞 Bug Fixes

  - Improves adguardhome-sync addon when running on alpine LXCs [@Darkangeel-hd](https://github.com/Darkangeel-hd) ([#12362](https://github.com/community-scripts/ProxmoxVE/pull/12362))

- #### ✨ New Features

  - Add Alpine support and improve Tailscale install [@MickLesk](https://github.com/MickLesk) ([#12370](https://github.com/community-scripts/ProxmoxVE/pull/12370))

### 📚 Documentation

- fix wrong link on contributions README.md [@Darkangeel-hd](https://github.com/Darkangeel-hd) ([#12363](https://github.com/community-scripts/ProxmoxVE/pull/12363))

### 📂 Github

- github: add workflow to autom. close unauthorized new-script PRs [@MickLesk](https://github.com/MickLesk) ([#12356](https://github.com/community-scripts/ProxmoxVE/pull/12356))
## 2026-02-25

### 🆕 New Scripts

- Zerobyte ([#12321](https://github.com/community-scripts/ProxmoxVE/pull/12321))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - fix: overseer migration [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12340](https://github.com/community-scripts/ProxmoxVE/pull/12340))
  - add: vikunja: daemon reload [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12323](https://github.com/community-scripts/ProxmoxVE/pull/12323))
  - opnsense-VM: Use ip link to verify bridge existence [@MickLesk](https://github.com/MickLesk) ([#12329](https://github.com/community-scripts/ProxmoxVE/pull/12329))
  - wger: Use $http_host for proxy Host header [@MickLesk](https://github.com/MickLesk) ([#12327](https://github.com/community-scripts/ProxmoxVE/pull/12327))
  - Passbolt: Update Nginx config `client_max_body_size` [@tremor021](https://github.com/tremor021) ([#12313](https://github.com/community-scripts/ProxmoxVE/pull/12313))
  - Zammad: configure Elasticsearch before zammad start [@MickLesk](https://github.com/MickLesk) ([#12308](https://github.com/community-scripts/ProxmoxVE/pull/12308))

- #### 🔧 Refactor

  - OpenProject: Various fixes [@tremor021](https://github.com/tremor021) ([#12246](https://github.com/community-scripts/ProxmoxVE/pull/12246))

### 💾 Core

- #### 🐞 Bug Fixes

  - Fix detection of ssh keys [@1-tempest](https://github.com/1-tempest) ([#12230](https://github.com/community-scripts/ProxmoxVE/pull/12230))

- #### ✨ New Features

  - tools.func: Improve GitHub/Codeberg API error handling and error output [@MickLesk](https://github.com/MickLesk) ([#12330](https://github.com/community-scripts/ProxmoxVE/pull/12330))

- #### 🔧 Refactor

  - core: remove duplicate traps, consolidate error handling and harden signal traps [@MickLesk](https://github.com/MickLesk) ([#12316](https://github.com/community-scripts/ProxmoxVE/pull/12316))

### 📂 Github

- github: improvements for node drift wf [@MickLesk](https://github.com/MickLesk) ([#12309](https://github.com/community-scripts/ProxmoxVE/pull/12309))
## 2026-02-24

### 🚀 Updated Scripts

- several scripts: add additional github link in source [@MickLesk](https://github.com/MickLesk) ([#12282](https://github.com/community-scripts/ProxmoxVE/pull/12282))
- adds further documentation during the installation script. [@d12rio](https://github.com/d12rio) ([#12248](https://github.com/community-scripts/ProxmoxVE/pull/12248))

- #### 🐞 Bug Fixes

  - [Fix] PatchMon: remove VITE_API_URL from frontend env [@vhsdream](https://github.com/vhsdream) ([#12294](https://github.com/community-scripts/ProxmoxVE/pull/12294))
  - fix(searxng): remove orphaned fi causing syntax error [@mark-jeffrey](https://github.com/mark-jeffrey) ([#12283](https://github.com/community-scripts/ProxmoxVE/pull/12283))
  - Refactor n8n [@MickLesk](https://github.com/MickLesk) ([#12264](https://github.com/community-scripts/ProxmoxVE/pull/12264))
  - Firefly: PHP bump [@tremor021](https://github.com/tremor021) ([#12247](https://github.com/community-scripts/ProxmoxVE/pull/12247))

- #### ✨ New Features

  - Databasus: add mariadb path for mysql/mariadb backups | add mongodb database tools [@MickLesk](https://github.com/MickLesk) ([#12259](https://github.com/community-scripts/ProxmoxVE/pull/12259))
  - make searxng updateable [@shtefko](https://github.com/shtefko) ([#12207](https://github.com/community-scripts/ProxmoxVE/pull/12207))

- #### 💥 Breaking Changes

  - fix: wealthfolio for v3 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11765](https://github.com/community-scripts/ProxmoxVE/pull/11765))

- #### 🔧 Refactor

  - bump various scripts from Node 22 to 24 [@MickLesk](https://github.com/MickLesk) ([#12265](https://github.com/community-scripts/ProxmoxVE/pull/12265))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: fix broken "command not found" after err_trap [@MickLesk](https://github.com/MickLesk) ([#12280](https://github.com/community-scripts/ProxmoxVE/pull/12280))

- #### ✨ New Features

  - tools.func: add get_latest_gh_tag helper function [@MickLesk](https://github.com/MickLesk) ([#12261](https://github.com/community-scripts/ProxmoxVE/pull/12261))

### 🧰 Tools

- Arcane ([#12263](https://github.com/community-scripts/ProxmoxVE/pull/12263))

### 📂 Github

- github: add weekly Node.js version drift check workflow [@MickLesk](https://github.com/MickLesk) ([#12267](https://github.com/community-scripts/ProxmoxVE/pull/12267))
- add: workflow to close stale PRs [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12243](https://github.com/community-scripts/ProxmoxVE/pull/12243))
## 2026-02-23

### 🆕 New Scripts

- SeaweedFS ([#12220](https://github.com/community-scripts/ProxmoxVE/pull/12220))
- Sonobarr ([#12221](https://github.com/community-scripts/ProxmoxVE/pull/12221))
- SparkyFitness ([#12185](https://github.com/community-scripts/ProxmoxVE/pull/12185))
- Frigate v16.4 [@MickLesk](https://github.com/MickLesk) ([#11887](https://github.com/community-scripts/ProxmoxVE/pull/11887))

### 🚀 Updated Scripts

- #### ✨ New Features

  - memos: unpin version due new release artifacts [@MickLesk](https://github.com/MickLesk) ([#12224](https://github.com/community-scripts/ProxmoxVE/pull/12224))
  - core: Enhance signal handling, reported "status" and logs [@MickLesk](https://github.com/MickLesk) ([#12216](https://github.com/community-scripts/ProxmoxVE/pull/12216))

- #### 🔧 Refactor

  - booklore v2: embed frontend, bump Java to 25, remove nginx [@MickLesk](https://github.com/MickLesk) ([#12223](https://github.com/community-scripts/ProxmoxVE/pull/12223))

### 🗑️ Deleted Scripts

- Remove: Huntarr (deprecated & Security) [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12226](https://github.com/community-scripts/ProxmoxVE/pull/12226))

### 💾 Core

- #### 🔧 Refactor

  - core: Improve error handling and logging for LXC builds [@MickLesk](https://github.com/MickLesk) ([#12208](https://github.com/community-scripts/ProxmoxVE/pull/12208))

### 🌐 Website

- #### 🐞 Bug Fixes

  - calibre-web: update default credentials [@LaevaertK](https://github.com/LaevaertK) ([#12201](https://github.com/community-scripts/ProxmoxVE/pull/12201))

- #### 📝 Script Information

  - chore: update Frigate documentation and website URLs [@JohnICB](https://github.com/JohnICB) ([#12218](https://github.com/community-scripts/ProxmoxVE/pull/12218))
## 2026-02-22

### 🆕 New Scripts

- Gramps-Web ([#12157](https://github.com/community-scripts/ProxmoxVE/pull/12157))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - fix: Apache Guacamole - bump to Temurin JDK 17 to resolve Debian 13 (Trixie) install failure [@Copilot](https://github.com/Copilot) ([#12161](https://github.com/community-scripts/ProxmoxVE/pull/12161))
  - Docker-VM: add error handling for virt-customize finalization [@MickLesk](https://github.com/MickLesk) ([#12127](https://github.com/community-scripts/ProxmoxVE/pull/12127))
  - [Fix] Sure: add Sidekiq service [@vhsdream](https://github.com/vhsdream) ([#12186](https://github.com/community-scripts/ProxmoxVE/pull/12186))

- #### ✨ New Features

  - Refactor & Bump to v2: Plex [@MickLesk](https://github.com/MickLesk) ([#12179](https://github.com/community-scripts/ProxmoxVE/pull/12179))

- #### 🔧 Refactor

  - karakeep: bump to node 24 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12183](https://github.com/community-scripts/ProxmoxVE/pull/12183))

### 💾 Core

- #### ✨ New Features

  - tools.func: add GitHub API rate-limit detection and GITHUB_TOKEN support [@MickLesk](https://github.com/MickLesk) ([#12176](https://github.com/community-scripts/ProxmoxVE/pull/12176))

### 🧰 Tools

- CR*NMASTER ([#12065](https://github.com/community-scripts/ProxmoxVE/pull/12065))

- #### 🔧 Refactor

  - Update package management commands in clean-lxcs.sh [@heinemannj](https://github.com/heinemannj) ([#12166](https://github.com/community-scripts/ProxmoxVE/pull/12166))

### ❔ Uncategorized

- calibre-web: Update logo URL [@MickLesk](https://github.com/MickLesk) ([#12178](https://github.com/community-scripts/ProxmoxVE/pull/12178))
## 2026-02-21

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Pangolin: restore config before db migration, use drizzle-kit push [@MickLesk](https://github.com/MickLesk) ([#12130](https://github.com/community-scripts/ProxmoxVE/pull/12130))
  - PLANKA: fix msg's [@danielalanbates](https://github.com/danielalanbates) ([#12143](https://github.com/community-scripts/ProxmoxVE/pull/12143))

### 🌐 Website

- #### 📝 Script Information

  - MediaManager: Update documentation URL [@tremor021](https://github.com/tremor021) ([#12154](https://github.com/community-scripts/ProxmoxVE/pull/12154))
## 2026-02-20

### 🆕 New Scripts

- Sure ([#12114](https://github.com/community-scripts/ProxmoxVE/pull/12114))
- Calibre-Web ([#12115](https://github.com/community-scripts/ProxmoxVE/pull/12115))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Zammad: fix Elasticsearch JVM config and add daemon-reload [@MickLesk](https://github.com/MickLesk) ([#12125](https://github.com/community-scripts/ProxmoxVE/pull/12125))
  - Huntarr: add build-essential for native pip dependencies [@MickLesk](https://github.com/MickLesk) ([#12126](https://github.com/community-scripts/ProxmoxVE/pull/12126))
  - Dokploy: fix update function [@vhsdream](https://github.com/vhsdream) ([#12116](https://github.com/community-scripts/ProxmoxVE/pull/12116))

- #### 💥 Breaking Changes

  - recyclarr: adjust paths for v8.0 breaking changes [@MickLesk](https://github.com/MickLesk) ([#12129](https://github.com/community-scripts/ProxmoxVE/pull/12129))

- #### 🔧 Refactor

  - Planka: migrate data paths to new v2 directory structure [@MickLesk](https://github.com/MickLesk) ([#12128](https://github.com/community-scripts/ProxmoxVE/pull/12128))

### 🌐 Website

- #### 📝 Script Information

  - fixen broken link to dawarich documentation [@RiX012](https://github.com/RiX012) ([#12103](https://github.com/community-scripts/ProxmoxVE/pull/12103))
## 2026-02-19

### 🆕 New Scripts

- TrueNAS-VM ([#12059](https://github.com/community-scripts/ProxmoxVE/pull/12059))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - add: patchmon breaking change msg [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12075](https://github.com/community-scripts/ProxmoxVE/pull/12075))
  - LibreNMS: Various fixes [@tremor021](https://github.com/tremor021) ([#12089](https://github.com/community-scripts/ProxmoxVE/pull/12089))

### 🌐 Website

- #### 📝 Script Information

  - truenas-vm: slug fix for source code link [@juronja](https://github.com/juronja) ([#12088](https://github.com/community-scripts/ProxmoxVE/pull/12088))
## 2026-02-18

### 🚀 Updated Scripts

- #### 💥 Breaking Changes

  - [Fix] PatchMon: use `SERVER_PORT` in Nginx config if set in env [@vhsdream](https://github.com/vhsdream) ([#12053](https://github.com/community-scripts/ProxmoxVE/pull/12053))

### 💾 Core

- #### ✨ New Features

  - core: Execution ID & Telemetry Improvements [@MickLesk](https://github.com/MickLesk) ([#12041](https://github.com/community-scripts/ProxmoxVE/pull/12041))
## 2026-02-17

### 🆕 New Scripts

- Databasus ([#12018](https://github.com/community-scripts/ProxmoxVE/pull/12018))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - [Hotfix] Cleanuparr: backup config before update [@vhsdream](https://github.com/vhsdream) ([#12039](https://github.com/community-scripts/ProxmoxVE/pull/12039))
  - fix: pterodactyl-panel add symlink [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11997](https://github.com/community-scripts/ProxmoxVE/pull/11997))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: call get_lxc_ip in start() before updates [@MickLesk](https://github.com/MickLesk) ([#12015](https://github.com/community-scripts/ProxmoxVE/pull/12015))

- #### ✨ New Features

  - tools/pve: add data analytics / formatting / linting [@MickLesk](https://github.com/MickLesk) ([#12034](https://github.com/community-scripts/ProxmoxVE/pull/12034))
  - core: smart recovery for failed installs | extend exit_codes [@MickLesk](https://github.com/MickLesk) ([#11221](https://github.com/community-scripts/ProxmoxVE/pull/11221))

- #### 🔧 Refactor

  - core: error-handler improvements | better exit_code handling | better tools.func source check [@MickLesk](https://github.com/MickLesk) ([#12019](https://github.com/community-scripts/ProxmoxVE/pull/12019))

### 🧰 Tools

- #### 🔧 Refactor

  - Immich Public Proxy: centralize and fix systemd service creation [@MickLesk](https://github.com/MickLesk) ([#12025](https://github.com/community-scripts/ProxmoxVE/pull/12025))

### 📚 Documentation

- fix contribution/setup-fork [@andreasabeck](https://github.com/andreasabeck) ([#12047](https://github.com/community-scripts/ProxmoxVE/pull/12047))
## 2026-02-16

### 🆕 New Scripts

- RomM ([#11987](https://github.com/community-scripts/ProxmoxVE/pull/11987))
- LinkDing ([#11976](https://github.com/community-scripts/ProxmoxVE/pull/11976))

### 🚀 Updated Scripts

- Opencloud: Pin version to 5.1.0 [@vhsdream](https://github.com/vhsdream) ([#12004](https://github.com/community-scripts/ProxmoxVE/pull/12004))

- #### 🐞 Bug Fixes

  - Tududi: Fix sed command for DB_FILE configuration [@tremor021](https://github.com/tremor021) ([#11988](https://github.com/community-scripts/ProxmoxVE/pull/11988))
  - slskd: fix exit position [@MickLesk](https://github.com/MickLesk) ([#11963](https://github.com/community-scripts/ProxmoxVE/pull/11963))
  - cryptpad: restore config earlier and run onlyoffice upgrade [@MickLesk](https://github.com/MickLesk) ([#11964](https://github.com/community-scripts/ProxmoxVE/pull/11964))
  - jellyseerr/overseerr: Migrate update script to Seerr; prompt rerun [@MickLesk](https://github.com/MickLesk) ([#11965](https://github.com/community-scripts/ProxmoxVE/pull/11965))

- #### 🔧 Refactor

  - core/vm's: ensure script state is sent on script exit [@MickLesk](https://github.com/MickLesk) ([#11991](https://github.com/community-scripts/ProxmoxVE/pull/11991))
  - Vaultwarden: export VW_VERSION as version number [@MickLesk](https://github.com/MickLesk) ([#11966](https://github.com/community-scripts/ProxmoxVE/pull/11966))
  - Zabbix: Improve zabbix-agent service detection [@MickLesk](https://github.com/MickLesk) ([#11968](https://github.com/community-scripts/ProxmoxVE/pull/11968))

### 💾 Core

- #### ✨ New Features

  - tools.func: ensure /usr/local/bin PATH persists for pct enter sessions [@MickLesk](https://github.com/MickLesk) ([#11970](https://github.com/community-scripts/ProxmoxVE/pull/11970))

- #### 🔧 Refactor

  - core: remove duplicate error handler from alpine-install.func [@MickLesk](https://github.com/MickLesk) ([#11971](https://github.com/community-scripts/ProxmoxVE/pull/11971))

### 📂 Github

- github: add "website" label if "json" changed [@MickLesk](https://github.com/MickLesk) ([#11975](https://github.com/community-scripts/ProxmoxVE/pull/11975))

### 🌐 Website

- #### 📝 Script Information

  - Update Wishlist LXC webpage to include reverse proxy info [@summoningpixels](https://github.com/summoningpixels) ([#11973](https://github.com/community-scripts/ProxmoxVE/pull/11973))
  - Update OpenCloud LXC webpage to include services ports [@summoningpixels](https://github.com/summoningpixels) ([#11969](https://github.com/community-scripts/ProxmoxVE/pull/11969))
## 2026-02-15

### 🆕 New Scripts

- ebusd ([#11942](https://github.com/community-scripts/ProxmoxVE/pull/11942))
- add: seer script and migrations [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11930](https://github.com/community-scripts/ProxmoxVE/pull/11930))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Fix seerr URL in jellyseerr script [@lucacome](https://github.com/lucacome) ([#11951](https://github.com/community-scripts/ProxmoxVE/pull/11951))
  - Fix jellyseer and overseer script replacement [@lucacome](https://github.com/lucacome) ([#11949](https://github.com/community-scripts/ProxmoxVE/pull/11949))
  - Tautulli: Add setuptools < 81 [@tremor021](https://github.com/tremor021) ([#11943](https://github.com/community-scripts/ProxmoxVE/pull/11943))

- #### 💥 Breaking Changes

  - Refactor: Patchmon [@vhsdream](https://github.com/vhsdream) ([#11888](https://github.com/community-scripts/ProxmoxVE/pull/11888))
## 2026-02-14

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Increase disk allocation for OpenWebUI and Ollama to prevent installation failures [@Copilot](https://github.com/Copilot) ([#11920](https://github.com/community-scripts/ProxmoxVE/pull/11920))

### 💾 Core

- #### 🐞 Bug Fixes

  - core: handle missing RAM speed in nested VMs [@MickLesk](https://github.com/MickLesk) ([#11913](https://github.com/community-scripts/ProxmoxVE/pull/11913))

- #### ✨ New Features

  - core: overwriteable app version [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11753](https://github.com/community-scripts/ProxmoxVE/pull/11753))
  - core: validate container IDs cluster-wide across all nodes [@MickLesk](https://github.com/MickLesk) ([#11906](https://github.com/community-scripts/ProxmoxVE/pull/11906))
  - core: improve error reporting with structured error strings and better categorization + output formatting [@MickLesk](https://github.com/MickLesk) ([#11907](https://github.com/community-scripts/ProxmoxVE/pull/11907))
  - core: unified logging system with combined logs [@MickLesk](https://github.com/MickLesk) ([#11761](https://github.com/community-scripts/ProxmoxVE/pull/11761))

### 🧰 Tools

- lxc-updater: add patchmon aware [@failure101](https://github.com/failure101) ([#11905](https://github.com/community-scripts/ProxmoxVE/pull/11905))

### 🌐 Website

- #### 📝 Script Information

  - Disable UniFi script - APT packages no longer available [@Copilot](https://github.com/Copilot) ([#11898](https://github.com/community-scripts/ProxmoxVE/pull/11898))
## 2026-02-13

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - OpenWebUI: pin numba constraint [@MickLesk](https://github.com/MickLesk) ([#11874](https://github.com/community-scripts/ProxmoxVE/pull/11874))
  - Planka: add migrate step to update function [@ZimmermannLeon](https://github.com/ZimmermannLeon) ([#11877](https://github.com/community-scripts/ProxmoxVE/pull/11877))
  - Pangolin: switch sqlite-specific back to generic [@MickLesk](https://github.com/MickLesk) ([#11868](https://github.com/community-scripts/ProxmoxVE/pull/11868))
  - [Hotfix] Jotty: Copy contents of config backup into /opt/jotty/config [@vhsdream](https://github.com/vhsdream) ([#11864](https://github.com/community-scripts/ProxmoxVE/pull/11864))

- #### 🔧 Refactor

  - Refactor: Radicale [@vhsdream](https://github.com/vhsdream) ([#11850](https://github.com/community-scripts/ProxmoxVE/pull/11850))
  - chore(donetick): add config entry for v0.1.73 [@tomfrenzel](https://github.com/tomfrenzel) ([#11872](https://github.com/community-scripts/ProxmoxVE/pull/11872))

### 💾 Core

- #### 🔧 Refactor

  - core: retry reporting with fallback payloads [@MickLesk](https://github.com/MickLesk) ([#11885](https://github.com/community-scripts/ProxmoxVE/pull/11885))

### 📡 API

- #### ✨ New Features

  - error-handler: Implement json_escape and enhance error handling [@MickLesk](https://github.com/MickLesk) ([#11875](https://github.com/community-scripts/ProxmoxVE/pull/11875))

### 🌐 Website

- #### 📝 Script Information

  - SQLServer-2025: add PVE9/Kernel 6.x incompatibility warning [@MickLesk](https://github.com/MickLesk) ([#11829](https://github.com/community-scripts/ProxmoxVE/pull/11829))
@@ -5,7 +5,7 @@
<p><em>A Community Legacy in Memory of @tteck</em></p>
<p>
<a href="https://community-scripts.org">
<img src="https://img.shields.io/badge/🌐_Website-Visit-4c9b3f?style=for-the-badge&labelColor=2d3748" alt="Website" />
</a>
<a href="https://discord.gg/3AnUqsXnmK">
@@ -56,7 +56,6 @@ function update_script() {
fi
$STD .venv/bin/python -m pip install --upgrade pip
$STD .venv/bin/python -m pip install -r requirements.txt
$STD .venv/bin/python -m pip install 'djangorestframework<3.15'
$STD .venv/bin/python -m manage collectstatic --noinput
$STD .venv/bin/python -m manage migrate
@@ -1,107 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: Sander Koenders (sanderkoenders)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://www.borgbackup.org/
-
-APP="Alpine-BorgBackup-Server"
-var_tags="${var_tags:-alpine;backup}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-1024}"
-var_disk="${var_disk:-20}"
-var_os="${var_os:-alpine}"
-var_version="${var_version:-3.23}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-
-  if [[ ! -f /usr/bin/borg ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  CHOICE=$(msg_menu "BorgBackup Server Update Options" \
-    "1" "Update BorgBackup Server" \
-    "2" "Reset SSH Access" \
-    "3" "Enable password authentication for backup user (not recommended, use SSH key instead)" \
-    "4" "Disable password authentication for backup user (recommended for security, use SSH key)")
-
-  case $CHOICE in
-  1)
-    msg_info "Updating $APP LXC"
-    $STD apk -U upgrade
-    msg_ok "Updated $APP LXC successfully!"
-    ;;
-  2)
-    if [[ "${PHS_SILENT:-0}" == "1" ]]; then
-      msg_warn "Reset SSH Public key requires interactive mode, skipping."
-      exit
-    fi
-
-    msg_info "Setting up SSH Public Key for backup user"
-
-    msg_info "Please paste your SSH public key (e.g., ssh-rsa AAAAB3... user@host): \n"
-    read -p "Key: " SSH_PUBLIC_KEY
-    echo
-
-    if [[ -z "$SSH_PUBLIC_KEY" ]]; then
-      msg_error "No SSH public key provided!"
-      exit 1
-    fi
-
-    if [[ ! "$SSH_PUBLIC_KEY" =~ ^(ssh-rsa|ssh-dss|ssh-ed25519|ecdsa-sha2-) ]]; then
-      msg_error "Invalid SSH public key format!"
-      exit 1
-    fi
-
-    msg_info "Setting up SSH access"
-    mkdir -p /home/backup/.ssh
-    echo "$SSH_PUBLIC_KEY" >/home/backup/.ssh/authorized_keys
-
-    chown -R backup:backup /home/backup/.ssh
-    chmod 700 /home/backup/.ssh
-    chmod 600 /home/backup/.ssh/authorized_keys
-
-    msg_ok "SSH access configured for backup user"
-    ;;
-  3)
-    if [[ "${PHS_SILENT:-0}" == "1" ]]; then
-      msg_warn "Enabling password authentication requires interactive mode, skipping."
-      exit
-    fi
-
-    msg_info "Enabling password authentication for backup user"
-    msg_warn "Password authentication is less secure than using SSH keys. Consider using SSH keys instead."
-    passwd backup
-    sed -i 's/^#*\s*PasswordAuthentication\s\+\(yes\|no\)/PasswordAuthentication yes/' /etc/ssh/sshd_config
-    rc-service sshd restart
-    msg_ok "Password authentication enabled for backup user"
-    ;;
-  4)
-    msg_info "Disabling password authentication for backup user"
-    sed -i 's/^#*\s*PasswordAuthentication\s\+\(yes\|no\)/PasswordAuthentication no/' /etc/ssh/sshd_config
-    rc-service sshd restart
-    msg_ok "Password authentication disabled for backup user"
-    ;;
-  esac
-
-  exit 0
-}
-
-start
-build_container
-description
-
-msg_ok "Completed successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW}Connection information:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}ssh backup@${IP}${CL}"
-echo -e "${TAB}${VERIFYPW}${YW}To set SSH key, run this script with the 'update' option and select option 2${CL}"
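As an aside to the removed BorgBackup script above: its SSH-key check only inspects the key-type prefix. A standalone sketch of that same check (function name `is_valid_ssh_key` is hypothetical; a stricter validator could additionally run the key through `ssh-keygen -lf -` to verify the base64 payload):

```shell
# Prefix-only validation, mirroring the regex used in the removed script.
is_valid_ssh_key() {
  [[ "$1" =~ ^(ssh-rsa|ssh-dss|ssh-ed25519|ecdsa-sha2-) ]]
}

is_valid_ssh_key "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5 user@host" && echo "accepted"
is_valid_ssh_key "totally-not-a-key" || echo "rejected"
```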
@@ -35,8 +35,6 @@ function update_script() {
     read -r -p "${TAB}Migrate update function now? [y/N]: " CONFIRM
     if [[ ! "${CONFIRM,,}" =~ ^(y|yes)$ ]]; then
       msg_warn "Migration skipped. The old update will continue to work for now."
-      msg_warn "⚠️ Komodo v2 uses :2 image tags. The :latest tag is deprecated and will not receive v2 updates."
-      msg_warn "Please migrate to the addon script to receive Komodo v2."
       msg_info "Updating ${APP} (legacy)"
       COMPOSE_FILE=$(find /opt/komodo -maxdepth 1 -type f -name '*.compose.yaml' ! -name 'compose.env' | head -n1)
       if [[ -z "$COMPOSE_FILE" ]]; then
@@ -1,78 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: Adrian-RDA
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/maziggy/bambuddy
-
-APP="Bambuddy"
-var_tags="${var_tags:-media;3d-printing}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-2048}"
-var_disk="${var_disk:-10}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-  check_container_storage
-  check_container_resources
-
-  if [[ ! -d /opt/bambuddy ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  if check_for_gh_release "bambuddy" "maziggy/bambuddy"; then
-    msg_info "Stopping Service"
-    systemctl stop bambuddy
-    msg_ok "Stopped Service"
-
-    msg_info "Backing up Configuration and Data"
-    cp /opt/bambuddy/.env /opt/bambuddy.env.bak
-    cp -r /opt/bambuddy/data /opt/bambuddy_data_bak
-    msg_ok "Backed up Configuration and Data"
-
-    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "bambuddy" "maziggy/bambuddy" "tarball" "latest" "/opt/bambuddy"
-
-    msg_info "Updating Python Dependencies"
-    cd /opt/bambuddy
-    $STD uv venv
-    $STD uv pip install -r requirements.txt
-    msg_ok "Updated Python Dependencies"
-
-    msg_info "Rebuilding Frontend"
-    cd /opt/bambuddy/frontend
-    $STD npm install
-    $STD npm run build
-    msg_ok "Rebuilt Frontend"
-
-    msg_info "Restoring Configuration and Data"
-    cp /opt/bambuddy.env.bak /opt/bambuddy/.env
-    cp -r /opt/bambuddy_data_bak/. /opt/bambuddy/data/
-    rm -f /opt/bambuddy.env.bak
-    rm -rf /opt/bambuddy_data_bak
-    msg_ok "Restored Configuration and Data"
-
-    msg_info "Starting Service"
-    systemctl start bambuddy
-    msg_ok "Started Service"
-    msg_ok "Updated successfully!"
-  fi
-  exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8000${CL}"
@@ -1,63 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/tphakala/birdnet-go
-
-APP="BirdNET-Go"
-var_tags="${var_tags:-monitoring;ai;nature}"
-var_cpu="${var_cpu:-4}"
-var_ram="${var_ram:-2048}"
-var_disk="${var_disk:-12}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-var_gpu="${var_gpu:-no}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-  check_container_storage
-  check_container_resources
-
-  if [[ ! -f /usr/local/bin/birdnet-go ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  if check_for_gh_release "birdnet" "tphakala/birdnet-go"; then
-    msg_info "Stopping Service"
-    systemctl stop birdnet
-    msg_ok "Stopped Service"
-
-    fetch_and_deploy_gh_release "birdnet" "tphakala/birdnet-go" "prebuild" "latest" "/opt/birdnet" "birdnet-go-linux-amd64.tar.gz"
-
-    msg_info "Deploying Binary"
-    cp /opt/birdnet/birdnet-go /usr/local/bin/birdnet-go
-    chmod +x /usr/local/bin/birdnet-go
-    cp -r /opt/birdnet/libtensorflowlite_c.so /usr/local/lib/ || true
-    ldconfig
-    msg_ok "Deployed Binary"
-
-    msg_info "Starting Service"
-    systemctl start birdnet
-    msg_ok "Started Service"
-    msg_ok "Updated successfully!"
-  fi
-  exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8080${CL}"
113 ct/booklore.sh Normal file
@@ -0,0 +1,113 @@
+#!/usr/bin/env bash
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: MickLesk (CanbiZ)
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+# Source: https://github.com/booklore-app/BookLore
+
+APP="BookLore"
+var_tags="${var_tags:-books;library}"
+var_cpu="${var_cpu:-3}"
+var_ram="${var_ram:-3072}"
+var_disk="${var_disk:-7}"
+var_os="${var_os:-debian}"
+var_version="${var_version:-13}"
+var_unprivileged="${var_unprivileged:-1}"
+
+header_info "$APP"
+variables
+color
+catch_errors
+
+function update_script() {
+  header_info
+  check_container_storage
+  check_container_resources
+
+  if [[ ! -d /opt/booklore ]]; then
+    msg_error "No ${APP} Installation Found!"
+    exit
+  fi
+
+  if check_for_gh_release "booklore" "booklore-app/BookLore"; then
+    JAVA_VERSION="25" setup_java
+    NODE_VERSION="22" setup_nodejs
+    setup_mariadb
+    setup_yq
+    ensure_dependencies ffmpeg
+
+    msg_info "Stopping Service"
+    systemctl stop booklore
+    msg_ok "Stopped Service"
+
+    if grep -qE "^BOOKLORE_(DATA_PATH|BOOKDROP_PATH|BOOKS_PATH|PORT)=" /opt/booklore_storage/.env 2>/dev/null; then
+      msg_info "Migrating old environment variables"
+      sed -i 's/^BOOKLORE_DATA_PATH=/APP_PATH_CONFIG=/g' /opt/booklore_storage/.env
+      sed -i 's/^BOOKLORE_BOOKDROP_PATH=/APP_BOOKDROP_FOLDER=/g' /opt/booklore_storage/.env
+      sed -i '/^BOOKLORE_BOOKS_PATH=/d' /opt/booklore_storage/.env
+      sed -i '/^BOOKLORE_PORT=/d' /opt/booklore_storage/.env
+      msg_ok "Migrated old environment variables"
+    fi
+
+    msg_info "Backing up old installation"
+    mv /opt/booklore /opt/booklore_bak
+    msg_ok "Backed up old installation"
+
+    fetch_and_deploy_gh_release "booklore" "booklore-app/BookLore" "tarball"
+
+    msg_info "Building Frontend"
+    cd /opt/booklore/booklore-ui
+    $STD npm install --force
+    $STD npm run build --configuration=production
+    msg_ok "Built Frontend"
+
+    msg_info "Embedding Frontend into Backend"
+    mkdir -p /opt/booklore/booklore-api/src/main/resources/static
+    cp -r /opt/booklore/booklore-ui/dist/booklore/browser/* /opt/booklore/booklore-api/src/main/resources/static/
+    msg_ok "Embedded Frontend into Backend"
+
+    msg_info "Building Backend"
+    cd /opt/booklore/booklore-api
+    APP_VERSION=$(get_latest_github_release "booklore-app/BookLore")
+    yq eval ".app.version = \"${APP_VERSION}\"" -i src/main/resources/application.yaml
+    $STD ./gradlew clean build -x test --no-daemon
+    mkdir -p /opt/booklore/dist
+    JAR_PATH=$(find /opt/booklore/booklore-api/build/libs -maxdepth 1 -type f -name "booklore-api-*.jar" ! -name "*plain*" | head -n1)
+    if [[ -z "$JAR_PATH" ]]; then
+      msg_error "Backend JAR not found"
+      exit
+    fi
+    cp "$JAR_PATH" /opt/booklore/dist/app.jar
+    msg_ok "Built Backend"
+
+    if systemctl is-active --quiet nginx 2>/dev/null; then
+      msg_info "Removing Nginx (no longer needed)"
+      systemctl disable --now nginx
+      $STD apt-get purge -y nginx nginx-common
+      msg_ok "Removed Nginx"
+    fi
+
+    if ! grep -q "^SERVER_PORT=" /opt/booklore_storage/.env 2>/dev/null; then
+      echo "SERVER_PORT=6060" >>/opt/booklore_storage/.env
+    fi
+
+    sed -i 's|ExecStart=.*|ExecStart=/usr/bin/java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UseCompactObjectHeaders -XX:MaxRAMPercentage=75.0 -XX:+ExitOnOutOfMemoryError -jar /opt/booklore/dist/app.jar|' /etc/systemd/system/booklore.service
+    systemctl daemon-reload
+
+    msg_info "Starting Service"
+    systemctl start booklore
+    rm -rf /opt/booklore_bak
+    msg_ok "Started Service"
+    msg_ok "Updated successfully!"
+  fi
+  exit
+}
+
+start
+build_container
+description
+
+msg_ok "Completed successfully!\n"
+echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
+echo -e "${INFO}${YW} Access it using the following URL:${CL}"
+echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:6060${CL}"
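The BookLore update above migrates legacy `BOOKLORE_*` keys with in-place `sed` edits. A self-contained sketch of that same rename/drop pattern, run against a throwaway temp file rather than the real `/opt/booklore_storage/.env` (GNU `sed -i` assumed, as in the script):

```shell
# Build a fake env file, then apply the same rename/drop edits the script uses.
env_file=$(mktemp)
printf '%s\n' 'BOOKLORE_DATA_PATH=/data' 'BOOKLORE_PORT=6060' 'OTHER=1' >"$env_file"

sed -i 's/^BOOKLORE_DATA_PATH=/APP_PATH_CONFIG=/' "$env_file"  # rename key
sed -i '/^BOOKLORE_PORT=/d' "$env_file"                        # drop key

cat "$env_file"  # APP_PATH_CONFIG=/data and OTHER=1 remain
```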
@@ -24,7 +24,7 @@ function update_script() {
   header_info
   check_container_storage
   check_container_resources
-  if [[ ! -d /opt/convertx ]]; then
+  if [[ ! -d /var ]]; then
     msg_error "No ${APP} Installation Found!"
     exit
   fi
@@ -33,8 +33,6 @@ function update_script() {
   systemctl stop convertx
   msg_info "Stopped Service"
 
-  ensure_dependencies libreoffice-writer
-
   msg_info "Move data-Folder"
   if [[ -d /opt/convertx/data ]]; then
     mv /opt/convertx/data /opt/data
@@ -70,7 +70,7 @@ function update_script() {
   source /opt/dispatcharr/.env
   set +o allexport
   if [[ -n "$POSTGRES_DB" ]] && [[ -n "$POSTGRES_USER" ]] && [[ -n "$POSTGRES_PASSWORD" ]]; then
-    PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U "$POSTGRES_USER" -h "${POSTGRES_HOST:-localhost}" -p "${POSTGRES_PORT:-5432}" "$POSTGRES_DB" >/tmp/dispatcharr_db_$(date +%F).sql
+    PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U $POSTGRES_USER -h ${POSTGRES_HOST:-localhost} $POSTGRES_DB >/tmp/dispatcharr_db_$(date +%F).sql
     msg_info "Database backup created"
   fi
 fi
53 ct/drawdb.sh
@@ -1,53 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/drawdb-io/drawdb
-
-APP="DrawDB"
-var_tags="${var_tags:-database;dev-tools}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-6144}"
-var_disk="${var_disk:-5}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-  check_container_storage
-  check_container_resources
-
-  if [[ ! -d /opt/drawdb ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  if check_for_gh_tag "drawdb" "drawdb-io/drawdb"; then
-    CLEAN_INSTALL=1 fetch_and_deploy_gh_tag "drawdb" "drawdb-io/drawdb" "latest" "/opt/drawdb"
-
-    msg_info "Rebuilding Frontend"
-    cd /opt/drawdb
-    $STD npm ci
-    NODE_OPTIONS="--max-old-space-size=4096" $STD npm run build
-    sed -i '/<head>/a <script>if(!crypto.randomUUID){crypto.randomUUID=function(){return([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g,function(c){return(c^(crypto.getRandomValues(new Uint8Array(1))[0]&(15>>c/4))).toString(16)})}};</script>' /opt/drawdb/dist/index.html
-    msg_ok "Rebuilt Frontend"
-    msg_ok "Updated successfully!"
-  fi
-  exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"
@@ -29,11 +29,11 @@ function update_script() {
   msg_error "No ${APP} Installation Found!"
   exit
 fi
 
 update_available=$(curl -fsSL -X 'GET' "http://localhost:19200/api/status/update-available" -H 'accept: application/json' | jq .UpdateAvailable)
 if [[ "${update_available}" == "true" ]]; then
   msg_info "Stopping Service"
-  systemctl stop fileflows*
+  systemctl stop fileflows
   msg_info "Stopped Service"
 
   msg_info "Creating Backup"
@@ -45,7 +45,7 @@ function update_script() {
   fetch_and_deploy_from_url "https://fileflows.com/downloads/zip" "/opt/fileflows"
 
   msg_info "Starting Service"
-  systemctl start fileflows*
+  systemctl start fileflows
   msg_ok "Started Service"
   msg_ok "Updated successfully!"
 else
@@ -1,71 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: CrazyWolf13
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/tess1o/geopulse
-
-APP="GeoPulse"
-var_tags="${var_tags:-location;tracking;gps}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-1024}"
-var_disk="${var_disk:-8}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-  check_container_storage
-  check_container_resources
-
-  if [[ ! -f /opt/geopulse/backend/geopulse-backend ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  if check_for_gh_release "geopulse-backend" "tess1o/geopulse"; then
-    msg_info "Stopping Service"
-    systemctl stop geopulse-backend
-    msg_ok "Stopped Service"
-
-    if [[ "$(uname -m)" == "aarch64" ]]; then
-      if grep -qi "raspberry\|bcm" /proc/cpuinfo 2>/dev/null; then
-        BINARY_PATTERN="geopulse-backend-native-arm64-compat-*"
-      else
-        BINARY_PATTERN="geopulse-backend-native-arm64-[!c]*"
-      fi
-    else
-      if grep -q avx2 /proc/cpuinfo && grep -q bmi2 /proc/cpuinfo && grep -q fma /proc/cpuinfo; then
-        BINARY_PATTERN="geopulse-backend-native-amd64-[!c]*"
-      else
-        BINARY_PATTERN="geopulse-backend-native-amd64-compat-*"
-      fi
-    fi
-
-    fetch_and_deploy_gh_release "geopulse-backend" "tess1o/geopulse" "singlefile" "latest" "/opt/geopulse/backend" "${BINARY_PATTERN}"
-    fetch_and_deploy_gh_release "geopulse-frontend" "tess1o/geopulse" "prebuild" "latest" "/var/www/geopulse" "geopulse-frontend-*.tar.gz"
-
-    msg_info "Starting Service"
-    systemctl start geopulse-backend
-    msg_ok "Started Service"
-    msg_ok "Updated successfully!"
-  fi
-  exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"
-echo -e "${INFO}${YW} To create an admin account, run:${CL}"
-echo -e "${TAB}${BGN}/usr/local/bin/create-geopulse-admin${CL}"
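The removed GeoPulse script above picks an asset pattern by probing `/proc/cpuinfo` for `avx2`, `bmi2`, and `fma` before falling back to the `*-compat-*` build. A hypothetical helper that factors out that decision so it can be tested without reading the live `/proc/cpuinfo` (`pick_build` is not a name from the original):

```shell
# Given a space-separated CPU flag list, decide native vs. compat build.
pick_build() {
  local flags=" $1 "
  if [[ "$flags" == *" avx2 "* && "$flags" == *" bmi2 "* && "$flags" == *" fma "* ]]; then
    echo "native"   # optimized amd64 build
  else
    echo "compat"   # portable fallback build
  fi
}

pick_build "fpu sse2 avx2 bmi2 fma"   # native
pick_build "fpu sse2"                 # compat
```

In the real script the flag list would come from `grep ^flags /proc/cpuinfo`.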
11 ct/gokapi.sh
@@ -32,16 +32,7 @@ function update_script() {
   systemctl stop gokapi
   msg_ok "Stopped Service"
 
-  fetch_and_deploy_gh_release "gokapi" "Forceu/Gokapi" "prebuild" "latest" "/opt/gokapi" "*linux*amd64.zip"
-
-  # Migrate from pre-v2.2.4 binary name (gokapi-linux_amd64 -> gokapi)
-  if [[ -f /opt/gokapi/gokapi-linux_amd64 ]]; then
-    rm -f /opt/gokapi/gokapi-linux_amd64
-  fi
-  if grep -q "gokapi-linux_amd64" /etc/systemd/system/gokapi.service 2>/dev/null; then
-    sed -i 's|gokapi-linux_amd64|gokapi|g' /etc/systemd/system/gokapi.service
-    systemctl daemon-reload
-  fi
+  fetch_and_deploy_gh_release "gokapi" "Forceu/Gokapi" "prebuild" "latest" "/opt/gokapi" "gokapi-linux_amd64.zip"
 
   msg_info "Starting Service"
   systemctl start gokapi
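The gokapi migration step above rewrites the systemd unit only when it still references the old binary name, which makes the edit idempotent. A sketch of that grep-guarded `sed` pattern, exercised on a throwaway temp file instead of the real unit (and with `systemctl daemon-reload` omitted):

```shell
# Create a fake unit file that still points at the pre-v2.2.4 binary name.
unit=$(mktemp)
echo 'ExecStart=/opt/gokapi/gokapi-linux_amd64' >"$unit"

# Rewrite only if the old name is present; re-running is a no-op.
if grep -q "gokapi-linux_amd64" "$unit"; then
  sed -i 's|gokapi-linux_amd64|gokapi|g' "$unit"
fi

cat "$unit"   # ExecStart=/opt/gokapi/gokapi
```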
@@ -64,12 +64,6 @@ function update_script() {
 }
 
 start
-
-if [[ $(sysctl -n vm.max_map_count 2>/dev/null) -lt 262144 ]]; then
-  sysctl -w vm.max_map_count=262144 >/dev/null 2>&1
-  echo "vm.max_map_count=262144" >/etc/sysctl.d/graylog.conf
-fi
-
 build_container
 description
 
@@ -46,11 +46,9 @@ function update_script() {
   msg_info "Updating Grist"
   mkdir -p /opt/grist/docs
   cp -n /opt/grist_bak/.env /opt/grist/.env
-  if ls /opt/grist_bak/docs/* &>/dev/null; then
-    cp -r /opt/grist_bak/docs/* /opt/grist/docs/
-  fi
-  [[ -f /opt/grist_bak/grist-sessions.db ]] && cp /opt/grist_bak/grist-sessions.db /opt/grist/grist-sessions.db
-  [[ -f /opt/grist_bak/landing.db ]] && cp /opt/grist_bak/landing.db /opt/grist/landing.db
+  cp -r /opt/grist_bak/docs/* /opt/grist/docs/
+  cp /opt/grist_bak/grist-sessions.db /opt/grist/grist-sessions.db
+  cp /opt/grist_bak/landing.db /opt/grist/landing.db
   cd /opt/grist
   $STD yarn install
   $STD yarn run install:ee
@@ -1,6 +0,0 @@
-___ __ _ ____ ____ __ _____
-/ | / /___ (_)___ ___ / __ )____ _________ _/ __ )____ ______/ /____ ______ / ___/___ ______ _____ _____
-/ /| | / / __ \/ / __ \/ _ \______/ __ / __ \/ ___/ __ `/ __ / __ `/ ___/ //_/ / / / __ \______\__ \/ _ \/ ___/ | / / _ \/ ___/
-/ ___ |/ / /_/ / / / / / __/_____/ /_/ / /_/ / / / /_/ / /_/ / /_/ / /__/ ,< / /_/ / /_/ /_____/__/ / __/ / | |/ / __/ /
-/_/ |_/_/ .___/_/_/ /_/\___/ /_____/\____/_/ \__, /_____/\__,_/\___/_/|_|\__,_/ .___/ /____/\___/_/ |___/\___/_/
-/_/ /____/ /_/
@@ -1,6 +0,0 @@
-____ __ __ __
-/ __ )____ _____ ___ / /_ __ ______/ /___/ /_ __
-/ __ / __ `/ __ `__ \/ __ \/ / / / __ / __ / / / /
-/ /_/ / /_/ / / / / / / /_/ / /_/ / /_/ / /_/ / /_/ /
-/_____/\__,_/_/ /_/ /_/_.___/\__,_/\__,_/\__,_/\__, /
-/____/
@@ -1,6 +0,0 @@
-____ _ ___ ______________ ______
-/ __ )(_)________/ / | / / ____/_ __/ / ____/___
-/ __ / / ___/ __ / |/ / __/ / /_____/ / __/ __ \
-/ /_/ / / / / /_/ / /| / /___ / /_____/ /_/ / /_/ /
-/_____/_/_/ \__,_/_/ |_/_____/ /_/ \____/\____/
-
6 ct/headers/booklore Normal file
@@ -0,0 +1,6 @@
+____ __ __
+/ __ )____ ____ / /__/ / ____ ________
+/ __ / __ \/ __ \/ //_/ / / __ \/ ___/ _ \
+/ /_/ / /_/ / /_/ / ,< / /___/ /_/ / / / __/
+/_____/\____/\____/_/|_/_____/\____/_/ \___/
+
@@ -1,6 +0,0 @@
-____ ____ ____
-/ __ \_________ __ __/ __ \/ __ )
-/ / / / ___/ __ `/ | /| / / / / / __ |
-/ /_/ / / / /_/ /| |/ |/ / /_/ / /_/ /
-/_____/_/ \__,_/ |__/|__/_____/_____/
-
@@ -1,6 +0,0 @@
-______ ____ __
-/ ____/__ ____ / __ \__ __/ /_______
-/ / __/ _ \/ __ \/ /_/ / / / / / ___/ _ \
-/ /_/ / __/ /_/ / ____/ /_/ / (__ ) __/
-\____/\___/\____/_/ \__,_/_/____/\___/
-
@@ -1,6 +0,0 @@
-__ ___ __ __ _____
-/ |/ /___ _/ /_/ /____ _____ / ___/___ ______ _____ _____
-/ /|_/ / __ `/ __/ __/ _ \/ ___/_____\__ \/ _ \/ ___/ | / / _ \/ ___/
-/ / / / /_/ / /_/ /_/ __/ / /_____/__/ / __/ / | |/ / __/ /
-/_/ /_/\__,_/\__/\__/\___/_/ /____/\___/_/ |___/\___/_/
-
@@ -1,6 +0,0 @@
-__ __ __
-____ ___ / /_/ /_ ____ ____ / /_ _ ____ ______
-/ __ \/ _ \/ __/ __ \/ __ \/ __ \/ __/ | |/_/ / / /_ /
-/ / / / __/ /_/ /_/ / /_/ / /_/ / /__ _> </ /_/ / / /_
-/_/ /_/\___/\__/_.___/\____/\____/\__(_)_/|_|\__, / /___/
-/____/
@@ -1,6 +0,0 @@
-__ ______ __
-____ ___ _ __/ /_/ ____/ ______ / /___ ________ _____
-/ __ \/ _ \| |/_/ __/ __/ | |/_/ __ \/ / __ \/ ___/ _ \/ ___/
-/ / / / __/> </ /_/ /____> </ /_/ / / /_/ / / / __/ /
-/_/ /_/\___/_/|_|\__/_____/_/|_/ .___/_/\____/_/ \___/_/
-/_/
@@ -1,6 +0,0 @@
-_ __ _ __ _______ __
-| | / /__ __________(_) /___ __/ ____/ | / /
-| | / / _ \/ ___/ ___/ / __/ / / / / __ | | /| / /
-| |/ / __/ / (__ ) / /_/ /_/ / /_/ / | |/ |/ /
-|___/\___/_/ /____/_/\__/\__, /\____/ |__/|__/
-/____/
@@ -1,6 +0,0 @@
-__ ______ __ ______ __ _____
-\ \/ / __ \/ / / / __ \/ / / ___/
-\ / / / / / / / /_/ / / \__ \
-/ / /_/ / /_/ / _, _/ /______/ /
-/_/\____/\____/_/ |_/_____/____/
-
@@ -13,7 +13,6 @@ var_disk="${var_disk:-2}"
 var_os="${var_os:-debian}"
 var_version="${var_version:-13}"
 var_unprivileged="${var_unprivileged:-1}"
-var_tun="${var_tun:-yes}"
 
 header_info "$APP"
 variables
@@ -73,7 +73,7 @@ function update_script() {
   $STD curl -fsSL https://github.com/filebrowser/filebrowser/releases/download/v2.23.0/linux-amd64-filebrowser.tar.gz | tar -xzv -C /usr/local/bin
   $STD filebrowser config init -a '0.0.0.0'
   $STD filebrowser config set -a '0.0.0.0'
-  $STD filebrowser users add admin community-scripts.org --perm.admin
+  $STD filebrowser users add admin helper-scripts.com --perm.admin
   msg_ok "Installed FileBrowser"
 
   msg_info "Creating Service"
@@ -93,7 +93,7 @@ WantedBy=default.target" >$service_path
 
   msg_ok "Completed successfully!\n"
   echo -e "FileBrowser should be reachable by going to the following URL.
-  ${BL}http://$LOCAL_IP:8080${CL} admin|community-scripts.org\n"
+  ${BL}http://$LOCAL_IP:8080${CL} admin|helper-scripts.com\n"
   exit
   fi
 }
24 ct/immich.sh
@@ -109,7 +109,7 @@ EOF
   msg_ok "Image-processing libraries up to date"
 fi
 
-RELEASE="v2.6.3"
+RELEASE="v2.6.1"
 if check_for_gh_release "Immich" "immich-app/immich" "${RELEASE}" "each release is tested individually before the version is updated. Please do not open issues for this"; then
   if [[ $(cat ~/.immich) > "2.5.1" ]]; then
     msg_info "Enabling Maintenance Mode"
@@ -214,10 +214,7 @@ EOF
 
 cd "$SRC_DIR"/machine-learning
 mkdir -p "$ML_DIR"
-# chown excluding upload dir contents (may be a mount with restricted permissions)
-chown immich:immich "$INSTALL_DIR"
-find "$INSTALL_DIR" -maxdepth 1 -mindepth 1 ! -name upload -exec chown -R immich:immich {} +
-chown immich:immich "${UPLOAD_DIR:-$INSTALL_DIR/upload}" 2>/dev/null || true
+chown -R immich:immich "$INSTALL_DIR"
 chown immich:immich ./uv.lock
 export VIRTUAL_ENV="${ML_DIR}"/ml-venv
 export UV_HTTP_TIMEOUT=300
@@ -266,24 +263,11 @@ EOF
 [[ ! -f /usr/bin/immich ]] && ln -sf "$APP_DIR"/cli/bin/immich /usr/bin/immich
 [[ ! -f /usr/bin/immich-admin ]] && ln -sf "$APP_DIR"/bin/immich-admin /usr/bin/immich-admin
 
-if ! grep -q '^DB_HOSTNAME=' "$INSTALL_DIR"/.env; then
-  sed -i '/^DB_DATABASE_NAME/a DB_HOSTNAME=127.0.0.1' "$INSTALL_DIR"/.env
-fi
-
-if grep -q 'ExecStart=/usr/bin/node' /etc/systemd/system/immich-web.service; then
-  sed -i '/^EnvironmentFile=/d' /etc/systemd/system/immich-web.service
-  sed -i "s|^ExecStart=.*|ExecStart=${APP_DIR}/bin/start.sh|" /etc/systemd/system/immich-web.service
-  systemctl daemon-reload
-fi
-
-# chown excluding upload dir contents (may be a mount with restricted permissions)
+chown -R immich:immich "$INSTALL_DIR"
|
|
||||||
chown immich:immich "$INSTALL_DIR"
|
|
||||||
find "$INSTALL_DIR" -maxdepth 1 -mindepth 1 ! -name upload -exec chown -R immich:immich {} +
|
|
||||||
chown immich:immich "${UPLOAD_DIR:-$INSTALL_DIR/upload}" 2>/dev/null || true
|
|
||||||
if [[ "${MAINT_MODE:-0}" == 1 ]]; then
|
if [[ "${MAINT_MODE:-0}" == 1 ]]; then
|
||||||
msg_info "Disabling Maintenance Mode"
|
msg_info "Disabling Maintenance Mode"
|
||||||
cd /opt/immich/app/bin
|
cd /opt/immich/app/bin
|
||||||
$STD ./immich-admin disable-maintenance-mode || true
|
$STD ./immich-admin disable-maintenance-mode
|
||||||
unset MAINT_MODE
|
unset MAINT_MODE
|
||||||
$STD cd -
|
$STD cd -
|
||||||
msg_ok "Disabled Maintenance Mode"
|
msg_ok "Disabled Maintenance Mode"
|
||||||
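The Immich hunks swap between a blanket `chown -R` and a selective variant that skips the `upload` directory, which may be a bind mount the container cannot (or should not) re-own. A runnable sketch of the selection logic, using `-print` in place of the script's `-exec chown -R immich:immich {} +` so it works without the `immich` user existing (the directory layout is illustrative):

```shell
set -eu
INSTALL_DIR=$(mktemp -d)
mkdir -p "$INSTALL_DIR/app" "$INSTALL_DIR/ml-models" "$INSTALL_DIR/upload"
# Every top-level entry except "upload": -maxdepth 1 -mindepth 1 limits the
# walk to direct children, and ! -name upload excludes the mount point.
SELECTED=$(find "$INSTALL_DIR" -maxdepth 1 -mindepth 1 ! -name upload -print | sort)
```

The script then chowns the `upload` directory itself non-recursively, with errors suppressed, so a restricted mount does not abort the update.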
@@ -35,7 +35,7 @@ function update_script() {
 systemctl stop isponsorblocktv
 msg_ok "Stopped Service"
 
-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "isponsorblocktv" "dmunozv04/iSponsorBlockTV" "singlefile" "latest" "/opt/isponsorblocktv" "iSponsorBlockTV-x86_64-linux"
+CLEAN_INSTALL=1 fetch_and_deploy_gh_release "isponsorblocktv" "dmunozv04/iSponsorBlockTV" "singlefile" "latest" "/opt/isponsorblocktv" "iSponsorBlockTV-*-linux"
 
 msg_info "Starting Service"
 systemctl start isponsorblocktv
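The asset argument changes from a literal name to the glob `iSponsorBlockTV-*-linux`, so any Linux build matches regardless of architecture. A sketch of how such a glob filters release asset names, using a `case` pattern (the asset list is hypothetical; the deploy helper itself is not reproduced here):

```shell
set -eu
matches=""
for asset in iSponsorBlockTV-x86_64-linux iSponsorBlockTV-aarch64-linux iSponsorBlockTV-x86_64-macos; do
  # A case pattern performs shell glob matching, which is the behavior the
  # widened asset argument relies on.
  case "$asset" in
    iSponsorBlockTV-*-linux) matches="$matches $asset" ;;
  esac
done
```

Both Linux builds match while the macOS artifact is rejected, which is what makes the pattern portable across container architectures.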
@@ -48,9 +48,7 @@ function update_script() {
 
 # Ensure APP_RUNTIME is in .env.local for CLI commands (upgrades from older versions)
 if ! grep -q "APP_RUNTIME" /opt/koillection/.env.local 2>/dev/null; then
-# Ensure file ends with newline before appending to avoid concatenation
-[[ -s /opt/koillection/.env.local && -n "$(tail -c 1 /opt/koillection/.env.local)" ]] && echo "" >>/opt/koillection/.env.local
-echo 'APP_RUNTIME="Symfony\Component\Runtime\SymfonyRuntime"' >>/opt/koillection/.env.local
+echo 'APP_RUNTIME="Symfony\Component\Runtime\SymfonyRuntime"' >> /opt/koillection/.env.local
 fi
 
 export COMPOSER_ALLOW_SUPERUSER=1
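The Koillection hunk touches a trailing-newline guard: appending to a file whose last line lacks `\n` would glue the new setting onto the previous one. A sketch of the guard on a temp stand-in file shows why it exists (the file path and contents are illustrative):

```shell
set -eu
envfile=$(mktemp)
printf 'APP_ENV=prod' > "$envfile"   # deliberately no trailing newline
# $(tail -c 1 file) is non-empty only when the last byte is not a newline,
# because command substitution strips trailing newlines.
if [ -s "$envfile" ] && [ -n "$(tail -c 1 "$envfile")" ]; then
  echo "" >> "$envfile"
fi
echo 'APP_RUNTIME="Symfony\Component\Runtime\SymfonyRuntime"' >> "$envfile"
LINES=$(wc -l < "$envfile")
```

Without the guard the result would be one mangled line, `APP_ENV=prodAPP_RUNTIME=…`, which Symfony's env parser would not recognize.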
@@ -61,5 +61,5 @@ description
 
 msg_ok "Completed successfully!\n"
 echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access Kometa Quickstart:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:7171${CL}"
+echo -e "${INFO}${YW} Access the LXC at following IP address:${CL}"
+echo -e "${TAB}${GATEWAY}${BGN}${IP}${CL}"
@@ -39,8 +39,6 @@ function update_script() {
 read -r -p "${TAB}Migrate update function now? [y/N]: " CONFIRM
 if [[ ! "${CONFIRM,,}" =~ ^(y|yes)$ ]]; then
 msg_warn "Migration skipped. The old update will continue to work for now."
-msg_warn "⚠️ Komodo v2 uses :2 image tags. The :latest tag is deprecated and will not receive v2 updates."
-msg_warn "Please migrate to the addon script to receive Komodo v2."
 msg_info "Updating ${APP} (legacy)"
 COMPOSE_FILE=$(find /opt/komodo -maxdepth 1 -type f -name '*.compose.yaml' ! -name 'compose.env' | head -n1)
 if [[ -z "$COMPOSE_FILE" ]]; then
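The legacy path above discovers the compose file with a single `find … | head -n1` pipeline: only top-level regular files matching `*.compose.yaml` are considered, and the env file is explicitly excluded. A runnable sketch against a temp directory standing in for `/opt/komodo`:

```shell
set -eu
KOMODO_DIR=$(mktemp -d)   # stand-in for /opt/komodo
touch "$KOMODO_DIR/mongo.compose.yaml" "$KOMODO_DIR/compose.env"
mkdir -p "$KOMODO_DIR/backups"
touch "$KOMODO_DIR/backups/old.compose.yaml"   # below -maxdepth 1, so ignored
# -maxdepth 1 keeps the search shallow; head -n1 picks the first hit when
# several variants exist.
COMPOSE_FILE=$(find "$KOMODO_DIR" -maxdepth 1 -type f -name '*.compose.yaml' ! -name 'compose.env' | head -n1)
```

The script then treats an empty `COMPOSE_FILE` as "no legacy install found", which is the branch that follows in the hunk.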
@@ -1,60 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/matter-js/python-matter-server
-
-APP="Matter-Server"
-var_tags="${var_tags:-matter;iot;smart-home}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-2048}"
-var_disk="${var_disk:-4}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-header_info
-check_container_storage
-check_container_resources
-
-if [[ ! -d /opt/matter-server ]]; then
-msg_error "No ${APP} Installation Found!"
-exit
-fi
-
-if check_for_gh_release "matter-server" "matter-js/python-matter-server"; then
-msg_info "Stopping Service"
-systemctl stop matter-server
-msg_ok "Stopped Service"
-
-msg_info "Updating Matter Server"
-MATTER_VERSION=$(get_latest_github_release "matter-js/python-matter-server")
-$STD uv pip install --python /opt/matter-server/.venv/bin/python --upgrade "python-matter-server[server]==${MATTER_VERSION}"
-echo "${MATTER_VERSION}" >~/.matter-server
-msg_ok "Updated Matter Server"
-
-fetch_and_deploy_gh_release "chip-ota-provider-app" "home-assistant-libs/matter-linux-ota-provider" "singlefile" "latest" "/usr/local/bin" "chip-ota-provider-app-x86-64"
-
-msg_info "Starting Service"
-systemctl start matter-server
-msg_ok "Started Service"
-msg_ok "Updated successfully!"
-fi
-exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Matter Server WebSocket API is running on port 5580.${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}ws://${IP}:5580/ws${CL}"
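The deleted Matter-Server script records the installed release in a dotfile (`echo "${MATTER_VERSION}" >~/.matter-server`), a stamp-file convention these scripts share. A sketch of comparing such a stamp against a newer release with `sort -V`, which orders version strings numerically per component (the version numbers are made up):

```shell
set -eu
stamp=$(mktemp)
echo "6.6.0" > "$stamp"
latest="6.10.1"
installed=$(cat "$stamp")
# sort -V places 6.6.0 before 6.10.1; plain string comparison would get this
# wrong because "6.10" sorts before "6.6" lexicographically.
if [ "$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n1)" != "$installed" ]; then
  UPDATE_NEEDED=yes
  echo "$latest" > "$stamp"   # what the update path records afterwards
else
  UPDATE_NEEDED=no
fi
```

Re-running with the stamp already at the latest version takes the `no` branch, which is how repeated updates stay idempotent.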
@@ -1,89 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://netboot.xyz
-
-APP="netboot.xyz"
-var_tags="${var_tags:-network;pxe;boot}"
-var_cpu="${var_cpu:-1}"
-var_ram="${var_ram:-512}"
-var_disk="${var_disk:-8}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-NSAPP="netboot-xyz"
-var_install="${NSAPP}-install"
-color
-catch_errors
-
-function update_script() {
-header_info
-check_container_storage
-check_container_resources
-
-if [[ ! -f ~/.netboot-xyz ]]; then
-msg_error "No ${APP} Installation Found!"
-exit
-fi
-
-if check_for_gh_release "netboot-xyz" "netbootxyz/netboot.xyz"; then
-msg_info "Backing up Configuration"
-cp /var/www/html/boot.cfg /opt/netboot-xyz-boot.cfg.bak
-msg_ok "Backed up Configuration"
-
-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "netboot-xyz" "netbootxyz/netboot.xyz" "prebuild" "latest" "/var/www/html" "menus.tar.gz"
-
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-efi" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-efi-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-snp-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-snp.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-snponly.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-snp-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-snp.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-snponly.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-kpxe" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.kpxe"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-undionly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-undionly.kpxe"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-kpxe" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal.kpxe"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-lkrn" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.lkrn"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-linux-bin" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-linux.bin"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-pdsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.pdsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64-snponly.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-arm64" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-arm64.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-arm64-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-arm64-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-arm64-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-arm64-snponly.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-iso" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.iso"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-img" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.img"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-iso" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64.iso"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-img" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64.img"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-multiarch-iso" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-multiarch.iso"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-multiarch-img" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-multiarch.img"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-checksums" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-sha256-checksums.txt"
-
-msg_info "Restoring Configuration"
-cp /opt/netboot-xyz-boot.cfg.bak /var/www/html/boot.cfg
-rm -f /opt/netboot-xyz-boot.cfg.bak
-msg_ok "Restored Configuration"
-
-msg_ok "Updated successfully!"
-fi
-exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"
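The long run of near-identical per-asset fetch lines in the deleted script could equally be driven by a loop over the asset names; a compact sketch with a stubbed-out deploy helper (the stub and the shortened asset list are illustrative, not the repo's `fetch_and_deploy_gh_release`):

```shell
set -eu
deploy_stub() {
  # Stand-in for `USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release … "$1"`.
  printf 'deployed %s\n' "$1"
}
FETCHED=""
for asset in netboot.xyz.efi netboot.xyz-snp.efi netboot.xyz.kpxe netboot.xyz.iso; do
  deploy_stub "$asset" >/dev/null
  FETCHED="$FETCHED $asset"
done
```

The script spells every call out by hand instead, which keeps each artifact name greppable at the cost of repetition.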
@@ -1,76 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: vhsdream
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/nxzai/nextExplorer
-
-APP="nextExplorer"
-var_tags="${var_tags:-files;documents}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-3072}"
-var_disk="${var_disk:-8}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-header_info
-check_container_storage
-check_container_resources
-
-if [[ ! -d /opt/nextExplorer ]]; then
-msg_error "No ${APP} Installation Found!"
-exit
-fi
-
-NODE_VERSION="24" setup_nodejs
-
-if check_for_gh_release "nextExplorer" "nxzai/nextExplorer"; then
-msg_info "Stopping nextExplorer"
-$STD systemctl stop nextexplorer
-msg_ok "Stopped nextExplorer"
-
-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "nextExplorer" "nxzai/nextExplorer" "tarball" "latest" "/opt/nextExplorer"
-
-msg_info "Updating nextExplorer"
-APP_DIR="/opt/nextExplorer/app"
-mkdir -p "$APP_DIR"
-cd /opt/nextExplorer
-export NODE_ENV=production
-$STD npm ci --omit=dev --workspace backend
-mv node_modules "$APP_DIR"
-mv backend/{src,package.json} "$APP_DIR"
-unset NODE_ENV
-export NODE_ENV=development
-$STD npm ci --workspace frontend
-$STD npm run -w frontend build -- --sourcemap false
-unset NODE_ENV
-mv frontend/dist/ "$APP_DIR"/src/public
-chown -R explorer:explorer "$APP_DIR" /etc/nextExplorer
-sed -i "\|version|s|$(jq -cr '.version' ${APP_DIR}/package.json)|$(cat ~/.nextexplorer)|" "$APP_DIR"/package.json
-sed -i 's/app.js/server.js/' /etc/systemd/system/nextexplorer.service && systemctl daemon-reload
-msg_ok "Updated nextExplorer"
-
-msg_info "Starting nextExplorer"
-$STD systemctl start nextexplorer
-msg_ok "Started nextExplorer"
-msg_ok "Updated successfully!"
-fi
-exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"
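The deleted nextExplorer script re-stamps the deployed `package.json` with the release version recorded in `~/.nextexplorer` (reading the old value with `jq` first). A sed-only sketch of that version re-stamp against a minimal stand-in file (the file contents and version numbers are illustrative):

```shell
set -eu
pkg=$(mktemp)
cat > "$pkg" <<'EOF'
{
  "name": "nextexplorer-backend",
  "version": "0.9.0"
}
EOF
NEW_VERSION="1.2.0"
# Replace whatever version string is present with the recorded release.
sed -i "s|\"version\": \"[^\"]*\"|\"version\": \"$NEW_VERSION\"|" "$pkg"
# Read it back, standing in for `jq -cr '.version'`.
STAMPED=$(sed -n 's/.*"version": "\([^"]*\)".*/\1/p' "$pkg")
```

Keeping the on-disk version in sync with the stamp file is what lets the next `check_for_gh_release` call skip an already-current install.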
@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
 # Copyright (c) 2021-2026 community-scripts ORG
-# Author: tteck (tteckster) | Co-Author: CrazyWolf13, MickLesk (CanbiZ)
+# Author: tteck (tteckster) | Co-Author: CrazyWolf13
 # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
 # Source: https://nginxproxymanager.com/ | Github: https://github.com/NginxProxyManager/nginx-proxy-manager
 
@@ -28,13 +28,18 @@ function update_script() {
 exit
 fi
 
+if [[ $(grep -E '^VERSION_ID=' /etc/os-release) == *"12"* ]]; then
+msg_error "Wrong Debian version detected!"
+msg_error "Please create a snapshot first. You must upgrade your LXC to Debian Trixie before updating. Visit: https://github.com/community-scripts/ProxmoxVE/discussions/7489"
+exit
+fi
+
 if command -v node &>/dev/null; then
 CURRENT_NODE_VERSION=$(node --version | cut -d'v' -f2 | cut -d'.' -f1)
 if [[ "$CURRENT_NODE_VERSION" != "22" ]]; then
 systemctl stop openresty
-$STD apt purge -y nodejs npm
-$STD apt autoremove -y
+apt-get purge -y nodejs npm
+apt-get autoremove -y
 rm -rf /usr/local/bin/node /usr/local/bin/npm
 rm -rf /usr/local/lib/node_modules
 rm -rf ~/.npm
@@ -44,151 +49,92 @@ function update_script() {
 
 NODE_VERSION="22" NODE_MODULE="yarn" setup_nodejs
 
-if dpkg -s openresty &>/dev/null 2>&1; then
-msg_info "Migrating from packaged OpenResty to source"
-rm -f /etc/apt/trusted.gpg.d/openresty-archive-keyring.gpg /etc/apt/trusted.gpg.d/openresty.gpg
-rm -f /etc/apt/sources.list.d/openresty.list /etc/apt/sources.list.d/openresty.sources
-$STD apt remove -y openresty
-$STD apt autoremove -y
-rm -f ~/.openresty
-msg_ok "Migrated from packaged OpenResty to source"
+RELEASE=$(curl -fsSL https://api.github.com/repos/NginxProxyManager/nginx-proxy-manager/releases/latest |
+grep "tag_name" |
+awk '{print substr($2, 3, length($2)-4) }')
+
+CLEAN_INSTALL=1 fetch_and_deploy_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager" "tarball" "v${RELEASE}" "/opt/nginxproxymanager"
+
+msg_info "Stopping Services"
+systemctl stop openresty
+systemctl stop npm
+msg_ok "Stopped Services"
+
+msg_info "Cleaning old files"
+$STD rm -rf /app \
+/var/www/html \
+/etc/nginx \
+/var/log/nginx \
+/var/lib/nginx \
+/var/cache/nginx
+msg_ok "Cleaned old files"
+
+msg_info "Setting up Environment"
+ln -sf /usr/bin/python3 /usr/bin/python
+ln -sf /usr/local/openresty/nginx/sbin/nginx /usr/sbin/nginx
+ln -sf /usr/local/openresty/nginx/ /etc/nginx
+sed -i "s|\"version\": \"2.0.0\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/backend/package.json
+sed -i "s|\"version\": \"2.0.0\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/frontend/package.json
+sed -i 's+^daemon+#daemon+g' /opt/nginxproxymanager/docker/rootfs/etc/nginx/nginx.conf
+NGINX_CONFS=$(find /opt/nginxproxymanager -type f -name "*.conf")
+for NGINX_CONF in $NGINX_CONFS; do
+sed -i 's+include conf.d+include /etc/nginx/conf.d+g' "$NGINX_CONF"
+done
+
+mkdir -p /var/www/html /etc/nginx/logs
+cp -r /opt/nginxproxymanager/docker/rootfs/var/www/html/* /var/www/html/
+cp -r /opt/nginxproxymanager/docker/rootfs/etc/nginx/* /etc/nginx/
+cp /opt/nginxproxymanager/docker/rootfs/etc/letsencrypt.ini /etc/letsencrypt.ini
+cp /opt/nginxproxymanager/docker/rootfs/etc/logrotate.d/nginx-proxy-manager /etc/logrotate.d/nginx-proxy-manager
+ln -sf /etc/nginx/nginx.conf /etc/nginx/conf/nginx.conf
+rm -f /etc/nginx/conf.d/dev.conf
+
+mkdir -p /tmp/nginx/body \
+/run/nginx \
+/data/nginx \
+/data/custom_ssl \
+/data/logs \
+/data/access \
+/data/nginx/default_host \
+/data/nginx/default_www \
+/data/nginx/proxy_host \
+/data/nginx/redirection_host \
+/data/nginx/stream \
+/data/nginx/dead_host \
+/data/nginx/temp \
+/var/lib/nginx/cache/public \
+/var/lib/nginx/cache/private \
+/var/cache/nginx/proxy_temp
+
+chmod -R 777 /var/cache/nginx
+chown root /tmp/nginx
+
+echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf);" >/etc/nginx/conf.d/include/resolvers.conf
+
+if [ ! -f /data/nginx/dummycert.pem ] || [ ! -f /data/nginx/dummykey.pem ]; then
+$STD openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/O=Nginx Proxy Manager/OU=Dummy Certificate/CN=localhost" -keyout /data/nginx/dummykey.pem -out /data/nginx/dummycert.pem
 fi
 
-local pcre_pkg="libpcre3-dev"
-if grep -qE 'VERSION_ID="1[3-9]"' /etc/os-release 2>/dev/null; then
-pcre_pkg="libpcre2-dev"
-fi
-$STD apt install -y build-essential "$pcre_pkg" libssl-dev zlib1g-dev
-
-if check_for_gh_release "openresty" "openresty/openresty"; then
-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "openresty" "openresty/openresty" "prebuild" "${CHECK_UPDATE_RELEASE}" "/opt/openresty" "openresty-*.tar.gz"
-
-msg_info "Building OpenResty"
-cd /opt/openresty
-$STD ./configure \
---with-http_v2_module \
---with-http_realip_module \
---with-http_stub_status_module \
---with-http_ssl_module \
---with-http_sub_module \
---with-http_auth_request_module \
---with-pcre-jit \
---with-stream \
---with-stream_ssl_module
-$STD make -j"$(nproc)"
-$STD make install
-rm -rf /opt/openresty
-cat <<'EOF' >/lib/systemd/system/openresty.service
-[Unit]
-Description=The OpenResty Application Platform
-After=syslog.target network-online.target remote-fs.target nss-lookup.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-ExecStartPre=/usr/local/openresty/nginx/sbin/nginx -t
-ExecStart=/usr/local/openresty/nginx/sbin/nginx -g 'daemon off;'
-
-[Install]
-WantedBy=multi-user.target
-EOF
-systemctl daemon-reload
-systemctl unmask openresty 2>/dev/null || true
-systemctl restart openresty
-msg_ok "Built OpenResty"
-fi
-
-cd /root
-if [ -d /opt/certbot ]; then
-msg_info "Updating Certbot"
-$STD /opt/certbot/bin/pip install --upgrade pip setuptools wheel
-$STD /opt/certbot/bin/pip install --upgrade certbot certbot-dns-cloudflare
-msg_ok "Updated Certbot"
-fi
-
-if check_for_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager"; then
-msg_info "Stopping Services"
-systemctl stop openresty
-systemctl stop npm
-msg_ok "Stopped Services"
-
-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager" "tarball" "${CHECK_UPDATE_RELEASE}" "/opt/nginxproxymanager"
-
-msg_info "Cleaning old files"
-$STD rm -rf /app \
-/var/www/html \
-/etc/nginx \
-/var/log/nginx \
-/var/lib/nginx \
-/var/cache/nginx
-msg_ok "Cleaned old files"
-
-local RELEASE="${CHECK_UPDATE_RELEASE#v}"
-msg_info "Setting up Environment"
-ln -sf /usr/bin/python3 /usr/bin/python
-ln -sf /usr/local/openresty/nginx/sbin/nginx /usr/sbin/nginx
-ln -sf /usr/local/openresty/nginx/ /etc/nginx
-sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/backend/package.json
-sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/frontend/package.json
-sed -i 's+^daemon+#daemon+g' /opt/nginxproxymanager/docker/rootfs/etc/nginx/nginx.conf
-NGINX_CONFS=$(find /opt/nginxproxymanager -type f -name "*.conf")
-for NGINX_CONF in $NGINX_CONFS; do
-sed -i 's+include conf.d+include /etc/nginx/conf.d+g' "$NGINX_CONF"
-done
-
-mkdir -p /var/www/html /etc/nginx/logs
-cp -r /opt/nginxproxymanager/docker/rootfs/var/www/html/* /var/www/html/
-cp -r /opt/nginxproxymanager/docker/rootfs/etc/nginx/* /etc/nginx/
-cp /opt/nginxproxymanager/docker/rootfs/etc/letsencrypt.ini /etc/letsencrypt.ini
-cp /opt/nginxproxymanager/docker/rootfs/etc/logrotate.d/nginx-proxy-manager /etc/logrotate.d/nginx-proxy-manager
-ln -sf /etc/nginx/nginx.conf /etc/nginx/conf/nginx.conf
-rm -f /etc/nginx/conf.d/dev.conf
-
-mkdir -p /tmp/nginx/body \
-/run/nginx \
-/data/nginx \
-/data/custom_ssl \
-/data/logs \
-/data/access \
-/data/nginx/default_host \
-/data/nginx/default_www \
-/data/nginx/proxy_host \
-/data/nginx/redirection_host \
-/data/nginx/stream \
-/data/nginx/dead_host \
-/data/nginx/temp \
-/var/lib/nginx/cache/public \
-/var/lib/nginx/cache/private \
-/var/cache/nginx/proxy_temp
-
-chmod -R 777 /var/cache/nginx
-chown root /tmp/nginx
-
-echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf);" >/etc/nginx/conf.d/include/resolvers.conf
-
-if [ ! -f /data/nginx/dummycert.pem ] || [ ! -f /data/nginx/dummykey.pem ]; then
-$STD openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/O=Nginx Proxy Manager/OU=Dummy Certificate/CN=localhost" -keyout /data/nginx/dummykey.pem -out /data/nginx/dummycert.pem
-fi
-
-mkdir -p /app/frontend/images
-cp -r /opt/nginxproxymanager/backend/* /app
-msg_ok "Set up Environment"
-
-msg_info "Building Frontend"
-export NODE_OPTIONS="--max_old_space_size=2048 --openssl-legacy-provider"
-cd /opt/nginxproxymanager/frontend
-sed -E -i 's/"node-sass" *: *"([^"]*)"/"sass": "\1"/g' package.json
-$STD yarn install --network-timeout 600000
-$STD yarn locale-compile
-$STD yarn build
-cp -r /opt/nginxproxymanager/frontend/dist/* /app/frontend
-cp -r /opt/nginxproxymanager/frontend/public/images/* /app/frontend/images
-msg_ok "Built Frontend"
-
-msg_info "Initializing Backend"
-rm -rf /app/config/default.json
-if [ ! -f /app/config/production.json ]; then
-cat <<'EOF' >/app/config/production.json
+mkdir -p /app/frontend/images
+cp -r /opt/nginxproxymanager/backend/* /app
+msg_ok "Set up Environment"
+
+msg_info "Building Frontend"
+export NODE_OPTIONS="--max_old_space_size=2048 --openssl-legacy-provider"
+cd /opt/nginxproxymanager/frontend
+# Replace node-sass with sass in package.json before installation
+sed -E -i 's/"node-sass" *: *"([^"]*)"/"sass": "\1"/g' package.json
+$STD yarn install --network-timeout 600000
+$STD yarn locale-compile
+$STD yarn build
+cp -r /opt/nginxproxymanager/frontend/dist/* /app/frontend
+cp -r /opt/nginxproxymanager/frontend/public/images/* /app/frontend/images
+msg_ok "Built Frontend"
+
+msg_info "Initializing Backend"
+rm -rf /app/config/default.json
+if [ ! -f /app/config/production.json ]; then
+cat <<'EOF' >/app/config/production.json
 {
 "database": {
 "engine": "knex-native",
@@ -202,21 +148,40 @@ EOF
 }
 }
 EOF
-fi
-sed -i 's/"client": "sqlite3"/"client": "better-sqlite3"/' /app/config/production.json
-cd /app
-$STD yarn install --network-timeout 600000
-msg_ok "Initialized Backend"
-
-msg_info "Starting Services"
-sed -i 's/user npm/user root/g; s/^pid/#pid/g' /usr/local/openresty/nginx/conf/nginx.conf
-sed -r -i 's/^([[:space:]]*)su npm npm/\1#su npm npm/g;' /etc/logrotate.d/nginx-proxy-manager
-systemctl daemon-reload
-systemctl enable -q --now openresty
-systemctl enable -q --now npm
-msg_ok "Started Services"
-msg_ok "Updated successfully!"
 fi
+sed -i 's/"client": "sqlite3"/"client": "better-sqlite3"/' /app/config/production.json
+cd /app
+$STD yarn install --network-timeout 600000
+msg_ok "Initialized Backend"
+
+msg_info "Updating Certbot"
+[ -f /etc/apt/trusted.gpg.d/openresty-archive-keyring.gpg ] && rm -f /etc/apt/trusted.gpg.d/openresty-archive-keyring.gpg
+[ -f /etc/apt/sources.list.d/openresty.list ] && rm -f /etc/apt/sources.list.d/openresty.list
+[ ! -f /etc/apt/trusted.gpg.d/openresty.gpg ] && curl -fsSL https://openresty.org/package/pubkey.gpg | gpg --dearmor --yes -o /etc/apt/trusted.gpg.d/openresty.gpg
||||||
|
[ ! -f /etc/apt/sources.list.d/openresty.sources ] && cat <<'EOF' >/etc/apt/sources.list.d/openresty.sources
|
||||||
|
Types: deb
|
||||||
|
URIs: http://openresty.org/package/debian/
|
||||||
|
Suites: bookworm
|
||||||
|
Components: openresty
|
||||||
|
Signed-By: /etc/apt/trusted.gpg.d/openresty.gpg
|
||||||
|
EOF
|
||||||
|
$STD apt update
|
||||||
|
$STD apt -y install openresty
|
||||||
|
if [ -d /opt/certbot ]; then
|
||||||
|
$STD /opt/certbot/bin/pip install --upgrade pip setuptools wheel
|
||||||
|
$STD /opt/certbot/bin/pip install --upgrade certbot certbot-dns-cloudflare
|
||||||
|
fi
|
||||||
|
msg_ok "Updated Certbot"
|
||||||
|
|
||||||
|
msg_info "Starting Services"
|
||||||
|
sed -i 's/user npm/user root/g; s/^pid/#pid/g' /usr/local/openresty/nginx/conf/nginx.conf
|
||||||
|
sed -r -i 's/^([[:space:]]*)su npm npm/\1#su npm npm/g;' /etc/logrotate.d/nginx-proxy-manager
|
||||||
|
systemctl enable -q --now openresty
|
||||||
|
systemctl enable -q --now npm
|
||||||
|
systemctl restart openresty
|
||||||
|
msg_ok "Started Services"
|
||||||
|
|
||||||
|
msg_ok "Updated successfully!"
|
||||||
exit
|
exit
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
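The update path above builds `/etc/nginx/conf.d/include/resolvers.conf` with a one-line awk filter over `/etc/resolv.conf`. A minimal sketch of that filter in isolation, run against a sample file (the sample path and nameserver values are made up for illustration):

```shell
# Recreate the resolver extraction from the diff above against a sample file.
# IPv6 nameservers must be wrapped in brackets for nginx's "resolver" directive,
# which is what the ($2 ~ ":") ternary handles.
cat >/tmp/resolv.conf.sample <<'EOF'
nameserver 192.168.1.1
nameserver 2001:db8::1
search lan
EOF

servers=$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print ($2 ~ ":")? "["$2"]": $2}' /tmp/resolv.conf.sample)
echo "resolver ${servers};"
```

`ORS=" "` joins all matches onto one line, so the result is a single nginx `resolver` directive listing every nameserver.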
@@ -27,27 +27,36 @@ function update_script() {
   msg_error "No ${APP} Installation Found!"
   exit
 fi
 
+RELEASE=$(get_latest_github_release "Part-DB/Part-DB-server")
 if check_for_gh_release "partdb" "Part-DB/Part-DB-server"; then
   msg_info "Stopping Service"
   systemctl stop apache2
   msg_ok "Stopped Service"
 
+  msg_info "Updating $APP to v${RELEASE}"
+  cd /opt
   mv /opt/partdb/ /opt/partdb-backup
-  CLEAN_INSTALL=1 fetch_and_deploy_gh_release "partdb" "Part-DB/Part-DB-server" "prebuild" "latest" "/opt/partdb" "partdb_with_assets.zip"
+  curl -fsSL "https://github.com/Part-DB/Part-DB-server/archive/refs/tags/v${RELEASE}.zip" -o "/opt/v${RELEASE}.zip"
+  $STD unzip "v${RELEASE}.zip"
+  mv /opt/Part-DB-server-${RELEASE}/ /opt/partdb
 
-  msg_info "Updating Part-DB"
   cd /opt/partdb/
-  cp -r /opt/partdb-backup/.env.local /opt/partdb/
+  cp -r "/opt/partdb-backup/.env.local" /opt/partdb/
-  cp -r /opt/partdb-backup/public/media /opt/partdb/public/
+  cp -r "/opt/partdb-backup/public/media" /opt/partdb/public/
-  cp -r /opt/partdb-backup/config/banner.md /opt/partdb/config/
+  cp -r "/opt/partdb-backup/config/banner.md" /opt/partdb/config/
 
   export COMPOSER_ALLOW_SUPERUSER=1
   $STD composer install --no-dev -o --no-interaction
+  $STD yarn install
+  $STD yarn build
   $STD php bin/console cache:clear
   $STD php bin/console doctrine:migrations:migrate -n
   chown -R www-data:www-data /opt/partdb
+  rm -r "/opt/v${RELEASE}.zip"
   rm -r /opt/partdb-backup
-  msg_ok "Updated Part-DB"
+  echo "${RELEASE}" >~/.partdb
+  msg_ok "Updated $APP to v${RELEASE}"
 
   msg_info "Starting Service"
   systemctl start apache2
@@ -68,7 +68,7 @@ function update_script() {
   $STD curl -fsSL https://raw.githubusercontent.com/filebrowser/get/master/get.sh | bash
   $STD filebrowser config init -a '0.0.0.0'
   $STD filebrowser config set -a '0.0.0.0'
-  $STD filebrowser users add admin community-scripts.org --perm.admin
+  $STD filebrowser users add admin helper-scripts.com --perm.admin
   msg_ok "Installed FileBrowser"
 
   msg_info "Creating Service"
@@ -90,7 +90,7 @@ EOF
 
   msg_ok "Completed successfully!\n"
   echo -e "FileBrowser should be reachable by going to the following URL.
-  ${BL}http://$LOCAL_IP:8080${CL} admin|community-scripts.org\n"
+  ${BL}http://$LOCAL_IP:8080${CL} admin|helper-scripts.com\n"
   exit
 fi
 if [ "$UPD" == "4" ]; then
@@ -40,7 +40,7 @@ function update_script() {
   cd /opt/revealjs
   $STD npm install
   cp -f /opt/index.html /opt/revealjs
-  sed -i 's/"vite"/"vite --host"/g' package.json
+  sed -i '25s/localhost/0.0.0.0/g' /opt/revealjs/gulpfile.js
   rm -f /opt/index.html
   msg_ok "Updated RevealJS"
 
@@ -50,7 +50,7 @@ function update_script() {
   /opt/semaphore/config.json
   SEM_PW=$(cat ~/semaphore.creds)
   systemctl start semaphore
-  $STD semaphore user add --admin --login admin --email admin@community-scripts.org --name Administrator --password "${SEM_PW}" --config /opt/semaphore/config.json
+  $STD semaphore user add --admin --login admin --email admin@helper-scripts.com --name Administrator --password "${SEM_PW}" --config /opt/semaphore/config.json
 
   msg_ok "Moved from BoltDB to SQLite"
 fi
@@ -33,10 +33,6 @@ function update_script() {
   exit
 fi
 
-if ! grep -q "^ALLOWED_HOSTS=" /opt/tandoor/.env; then
-  echo "ALLOWED_HOSTS=${LOCAL_IP}" >>/opt/tandoor/.env
-fi
-
 if check_for_gh_release "tandoor" "TandoorRecipes/recipes"; then
   msg_info "Stopping Service"
   systemctl stop tandoor
@@ -8,7 +8,7 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
 APP="Tracearr"
 var_tags="${var_tags:-media}"
 var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-8192}"
+var_ram="${var_ram:-4096}"
 var_disk="${var_disk:-10}"
 var_os="${var_os:-debian}"
 var_version="${var_version:-13}"
@@ -102,7 +102,7 @@ EOF
 
 if check_for_gh_release "tracearr" "connorgallopo/Tracearr"; then
   msg_info "Stopping Services"
-  systemctl stop tracearr postgresql redis-server
+  systemctl stop tracearr postgresql redis
   msg_ok "Stopped Services"
 
   msg_info "Updating pnpm"
@@ -115,7 +115,6 @@ EOF
 
   msg_info "Building Tracearr"
   export TZ=$(cat /etc/timezone)
-  export NODE_OPTIONS="--max-old-space-size=4096"
   cd /opt/tracearr.build
   $STD pnpm install --frozen-lockfile --force
   $STD pnpm turbo telemetry disable
@@ -149,7 +148,7 @@ EOF
   msg_ok "Configured Tracearr"
 
   msg_info "Starting services"
-  systemctl start postgresql redis-server tracearr
+  systemctl start postgresql redis tracearr
   msg_ok "Started services"
   msg_ok "Updated successfully!"
 else
@@ -1,54 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/versity/versitygw
-
-APP="VersityGW"
-var_tags="${var_tags:-s3;storage;gateway}"
-var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-2048}"
-var_disk="${var_disk:-8}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-  check_container_storage
-  check_container_resources
-
-  if [[ ! -f /usr/bin/versitygw ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  if check_for_gh_release "versitygw" "versity/versitygw"; then
-    msg_info "Stopping Service"
-    systemctl stop versitygw@gateway
-    msg_ok "Stopped Service"
-
-    fetch_and_deploy_gh_release "versitygw" "versity/versitygw" "binary"
-
-    msg_info "Starting Service"
-    systemctl start versitygw@gateway
-    msg_ok "Started Service"
-    msg_ok "Updated successfully!"
-  fi
-  exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} Access it using the following URL:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:7070 (Gateway) or http://${IP}:7070 (WebUI)${CL}"
@@ -34,26 +34,24 @@ function update_script() {
   [[ -f /etc/systemd/system/victoriametrics-logs.service ]] && systemctl stop victoriametrics-logs
   msg_ok "Stopped Service"
 
-  victoriametrics_release=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaMetrics/releases" |
-    jq -r '.[] | select(.assets[].name | match("^victoria-metrics-linux-amd64-v[0-9.]+.tar.gz$")) | .tag_name' |
-    head -n 1)
-
-  msg_debug "Using release $victoriametrics_release"
-  victoriametrics_filename="victoria-metrics-linux-amd64-${victoriametrics_release}.tar.gz"
-  vmutils_filename="vmutils-linux-amd64-${victoriametrics_release}.tar.gz"
-
-  fetch_and_deploy_gh_release "victoriametrics" "VictoriaMetrics/VictoriaMetrics" "prebuild" "$victoriametrics_release" "/opt/victoriametrics" "$victoriametrics_filename"
-  fetch_and_deploy_gh_release "vmutils" "VictoriaMetrics/VictoriaMetrics" "prebuild" "$victoriametrics_release" "/opt/victoriametrics" "$vmutils_filename"
+  victoriametrics_filename=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaMetrics/releases/latest" |
+    jq -r '.assets[].name' |
+    grep -E '^victoria-metrics-linux-amd64-v[0-9.]+\.tar\.gz$')
+  vmutils_filename=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaMetrics/releases/latest" |
+    jq -r '.assets[].name' |
+    grep -E '^vmutils-linux-amd64-v[0-9.]+\.tar\.gz$')
+
+  fetch_and_deploy_gh_release "victoriametrics" "VictoriaMetrics/VictoriaMetrics" "prebuild" "latest" "/opt/victoriametrics" "$victoriametrics_filename"
+  fetch_and_deploy_gh_release "vmutils" "VictoriaMetrics/VictoriaMetrics" "prebuild" "latest" "/opt/victoriametrics" "$vmutils_filename"
 
   if [[ -f /etc/systemd/system/victoriametrics-logs.service ]]; then
     vmlogs_filename=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaLogs/releases/latest" |
       jq -r '.assets[].name' |
       grep -E '^victoria-logs-linux-amd64-v[0-9.]+\.tar\.gz$')
     vlutils_filename=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaLogs/releases/latest" |
       jq -r '.assets[].name' |
      grep -E '^vlutils-linux-amd64-v[0-9.]+\.tar\.gz$')
 
     fetch_and_deploy_gh_release "victorialogs" "VictoriaMetrics/VictoriaLogs" "prebuild" "latest" "/opt/victoriametrics" "$vmlogs_filename"
     fetch_and_deploy_gh_release "vlutils" "VictoriaMetrics/VictoriaLogs" "prebuild" "latest" "/opt/victoriametrics" "$vlutils_filename"
   fi
@@ -29,8 +29,6 @@ function update_script() {
   exit
 fi
 
-NODE_VERSION="24" NODE_MODULE="pnpm" setup_nodejs
-
 if grep -q '^WF_CORS_ALLOW_ORIGINS=\*$' /opt/wealthfolio/.env; then
   sed -i "s|^WF_CORS_ALLOW_ORIGINS=\*$|WF_CORS_ALLOW_ORIGINS=http://${LOCAL_IP}:8080|" /opt/wealthfolio/.env
 fi
ct/yourls.sh (deleted, 66 lines)
@@ -1,66 +0,0 @@
-#!/usr/bin/env bash
-source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://yourls.org/
-
-APP="YOURLS"
-var_tags="${var_tags:-url-shortener;php}"
-var_cpu="${var_cpu:-1}"
-var_ram="${var_ram:-512}"
-var_disk="${var_disk:-4}"
-var_os="${var_os:-debian}"
-var_version="${var_version:-13}"
-var_unprivileged="${var_unprivileged:-1}"
-
-header_info "$APP"
-variables
-color
-catch_errors
-
-function update_script() {
-  header_info
-  check_container_storage
-  check_container_resources
-
-  if [[ ! -f /opt/yourls/yourls-loader.php ]]; then
-    msg_error "No ${APP} Installation Found!"
-    exit
-  fi
-
-  if check_for_gh_release "yourls" "YOURLS/YOURLS"; then
-    msg_info "Stopping Service"
-    systemctl stop nginx
-    msg_ok "Stopped Service"
-
-    msg_info "Backing up Configuration"
-    cp -r /opt/yourls/user /opt/yourls_user.bak
-    msg_ok "Backed up Configuration"
-
-    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "yourls" "YOURLS/YOURLS" "tarball"
-    chown -R www-data:www-data /opt/yourls
-
-    msg_info "Restoring Configuration"
-    cp -r /opt/yourls_user.bak/. /opt/yourls/user/
-    rm -rf /opt/yourls_user.bak
-    msg_ok "Restored Configuration"
-
-    msg_info "Starting Service"
-    systemctl start nginx
-    msg_ok "Started Service"
-    msg_ok "Updated successfully!"
-  fi
-  exit
-}
-
-start
-build_container
-description
-
-msg_ok "Completed Successfully!\n"
-echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
-echo -e "${INFO}${YW} First, complete the database setup at:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}http://${IP}/admin/install.php${CL}"
-echo -e "${INFO}${YW} Admin credentials are in the install log:${CL}"
-echo -e "${TAB}${GATEWAY}${BGN}grep -A2 'admin' /opt/yourls/user/config.php${CL}"
@@ -50,7 +50,7 @@ function update_script() {
   rm -rf /opt/zigbee2mqtt/data
   mv /opt/z2m_backup/data /opt/zigbee2mqtt
   cd /opt/zigbee2mqtt
-  grep -q "^packageImportMethod" ./pnpm-workspace.yaml 2>/dev/null || echo "packageImportMethod: hardlink" >>./pnpm-workspace.yaml
+  grep -q "^packageImportMethod" ./pnpm-workspace.yaml || echo "packageImportMethod: hardlink" >>./pnpm-workspace.yaml
   $STD pnpm install --frozen-lockfile
   $STD pnpm build
   rm -rf /opt/z2m_backup
@@ -62,7 +62,6 @@ $STD uv venv --clear /opt/adventurelog/backend/server/.venv
 $STD /opt/adventurelog/backend/server/.venv/bin/python -m ensurepip --upgrade
 $STD /opt/adventurelog/backend/server/.venv/bin/python -m pip install --upgrade pip
 $STD /opt/adventurelog/backend/server/.venv/bin/python -m pip install -r requirements.txt
-$STD /opt/adventurelog/backend/server/.venv/bin/python -m pip install 'djangorestframework<3.15'
 $STD /opt/adventurelog/backend/server/.venv/bin/python -m manage collectstatic --noinput
 $STD /opt/adventurelog/backend/server/.venv/bin/python -m manage migrate
 $STD /opt/adventurelog/backend/server/.venv/bin/python -m manage download-countries
@@ -1,34 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: Sander Koenders (sanderkoenders)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://www.borgbackup.org/
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing BorgBackup"
-$STD apk add --no-cache borgbackup openssh
-$STD rc-update add sshd
-$STD rc-service sshd start
-msg_ok "Installed BorgBackup"
-
-msg_info "Creating backup user"
-$STD adduser -D -s /bin/bash -h /home/backup backup
-$STD passwd -d backup
-msg_ok "Created backup user"
-
-msg_info "Configure SSH, disabling password authentication and enabling public key authentication"
-$STD sed -i -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
-$STD rc-service sshd restart
-msg_ok "Configured SSH"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -22,12 +22,7 @@ replication:
   replSetName: "rs0"
 EOF
 systemctl restart mongod
-for i in $(seq 1 30); do
-  if mongosh --quiet --eval "db.adminCommand('ping')" &>/dev/null; then
-    break
-  fi
-  sleep 2
-done
+sleep 3
 $STD mongosh --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "127.0.0.1:27017"}]})'
 msg_ok "Configured MongoDB Replica Set"
 
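The removed MongoDB readiness loop follows a common bounded-retry pattern: probe up to N times with a short sleep, then proceed or give up. A self-contained sketch of the pattern (the probe is stubbed with `true` here so it runs anywhere; in the script it is the `mongosh --quiet --eval "db.adminCommand('ping')"` call):

```shell
# Bounded wait: try a readiness probe up to 30 times, 2s apart, then stop.
ready=0
for i in $(seq 1 30); do
  if true; then # stand-in for: mongosh --quiet --eval "db.adminCommand('ping')" &>/dev/null
    ready=1
    break
  fi
  sleep 2
done
echo "ready=${ready}"
```

The fixed `sleep 3` that replaces it is simpler but races against slow starts; the loop caps total wait at roughly 60 seconds while returning as soon as the daemon answers.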
@@ -62,10 +62,10 @@ expect "Email address"
 send "\r"
 
 expect "Password"
-send "community-scripts.org\r"
+send "helper-scripts.com\r"
 
 expect "Password (again)"
-send "community-scripts.org\r"
+send "helper-scripts.com\r"
 
 expect eof
 EOF
@@ -58,7 +58,7 @@ service:
   use_prerelease: false
 dashboard:
   icon: https://raw.githubusercontent.com/community-scripts/ProxmoxVE/refs/heads/main/misc/images/logo.png
-  icon_link_to: https://community-scripts.org/
+  icon_link_to: https://helper-scripts.com/
   web_url: https://github.com/community-scripts/ProxmoxVE/releases
 EOF
 msg_ok "Setup Config"
@@ -1,67 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: Adrian-RDA
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/maziggy/bambuddy
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y libglib2.0-0
-msg_ok "Installed Dependencies"
-
-PYTHON_VERSION="3.13" setup_uv
-NODE_VERSION="22" setup_nodejs
-fetch_and_deploy_gh_release "bambuddy" "maziggy/bambuddy" "tarball" "latest" "/opt/bambuddy"
-
-msg_info "Setting up Python Environment"
-cd /opt/bambuddy
-$STD uv venv
-$STD uv pip install -r requirements.txt
-msg_ok "Set up Python Environment"
-
-msg_info "Building Frontend"
-cd /opt/bambuddy/frontend
-$STD npm install
-$STD npm run build
-msg_ok "Built Frontend"
-
-msg_info "Configuring Bambuddy"
-mkdir -p /opt/bambuddy/data /opt/bambuddy/logs
-cat <<EOF >/opt/bambuddy/.env
-DEBUG=false
-LOG_LEVEL=INFO
-LOG_TO_FILE=true
-EOF
-msg_ok "Configured Bambuddy"
-
-msg_info "Creating Service"
-cat <<EOF >/etc/systemd/system/bambuddy.service
-[Unit]
-Description=Bambuddy - Bambu Lab Print Management
-Documentation=https://github.com/maziggy/bambuddy
-After=network.target
-
-[Service]
-Type=simple
-WorkingDirectory=/opt/bambuddy
-ExecStart=/opt/bambuddy/.venv/bin/uvicorn backend.app.main:app --host 0.0.0.0 --port 8000
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-EOF
-systemctl enable -q --now bambuddy
-msg_ok "Created Service"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -1,56 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/tphakala/birdnet-go
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y \
-  libasound2 \
-  sox \
-  alsa-utils \
-  ffmpeg
-msg_ok "Installed Dependencies"
-
-fetch_and_deploy_gh_release "birdnet" "tphakala/birdnet-go" "prebuild" "latest" "/opt/birdnet" "birdnet-go-linux-amd64.tar.gz"
-
-msg_info "Setting up BirdNET-Go"
-cp /opt/birdnet/birdnet-go /usr/local/bin/birdnet-go
-chmod +x /usr/local/bin/birdnet-go
-cp -r /opt/birdnet/libtensorflowlite_c.so /usr/local/lib/ || true
-ldconfig
-mkdir -p /opt/birdnet/data/clips
-msg_ok "Set up BirdNET-Go"
-
-msg_info "Creating Service"
-cat <<EOF >/etc/systemd/system/birdnet.service
-[Unit]
-Description=BirdNET
-After=network.target
-
-[Service]
-Type=simple
-User=root
-WorkingDirectory=/opt/birdnet/data
-ExecStart=/usr/local/bin/birdnet-go realtime
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-EOF
-systemctl enable -q --now birdnet
-msg_ok "Created Service"
-
-motd_ssh
-customize
-cleanup_lxc
install/booklore-install.sh (new file, 92 lines)
@@ -0,0 +1,92 @@
+#!/usr/bin/env bash
+
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: MickLesk (CanbiZ)
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+# Source: https://github.com/booklore-app/BookLore
+
+source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
+color
+verb_ip6
+catch_errors
+setting_up_container
+network_check
+update_os
+
+msg_info "Installing Dependencies"
+$STD apt install -y ffmpeg
+msg_ok "Installed Dependencies"
+
+JAVA_VERSION="25" setup_java
+NODE_VERSION="22" setup_nodejs
+setup_mariadb
+setup_yq
+MARIADB_DB_NAME="booklore_db" MARIADB_DB_USER="booklore_user" MARIADB_DB_EXTRA_GRANTS="GRANT SELECT ON \`mysql\`.\`time_zone_name\`" setup_mariadb_db
+fetch_and_deploy_gh_release "booklore" "booklore-app/BookLore" "tarball"
+
+msg_info "Building Frontend"
+cd /opt/booklore/booklore-ui
+$STD npm install --force
+$STD npm run build --configuration=production
+msg_ok "Built Frontend"
+
+msg_info "Embedding Frontend into Backend"
+mkdir -p /opt/booklore/booklore-api/src/main/resources/static
+cp -r /opt/booklore/booklore-ui/dist/booklore/browser/* /opt/booklore/booklore-api/src/main/resources/static/
+msg_ok "Embedded Frontend into Backend"
+
+msg_info "Creating Environment"
+mkdir -p /opt/booklore_storage/{data,books,bookdrop}
+cat <<EOF >/opt/booklore_storage/.env
+# Database Configuration
+DATABASE_URL=jdbc:mariadb://localhost:3306/${MARIADB_DB_NAME}
+DATABASE_USERNAME=${MARIADB_DB_USER}
+DATABASE_PASSWORD=${MARIADB_DB_PASS}
+
+# App Configuration (Spring Boot mapping from app.* properties)
+APP_PATH_CONFIG=/opt/booklore_storage/data
+APP_BOOKDROP_FOLDER=/opt/booklore_storage/bookdrop
+SERVER_PORT=6060
+EOF
+msg_ok "Created Environment"
+
+msg_info "Building Backend"
+cd /opt/booklore/booklore-api
+APP_VERSION=$(get_latest_github_release "booklore-app/BookLore")
+yq eval ".app.version = \"${APP_VERSION}\"" -i src/main/resources/application.yaml
+$STD ./gradlew clean build -x test --no-daemon
+mkdir -p /opt/booklore/dist
+JAR_PATH=$(find /opt/booklore/booklore-api/build/libs -maxdepth 1 -type f -name "booklore-api-*.jar" ! -name "*plain*" | head -n1)
+if [[ -z "$JAR_PATH" ]]; then
+  msg_error "Backend JAR not found"
+  exit 153
+fi
+cp "$JAR_PATH" /opt/booklore/dist/app.jar
+msg_ok "Built Backend"
+
+msg_info "Creating Service"
+cat <<EOF >/etc/systemd/system/booklore.service
+[Unit]
+Description=BookLore Java Service
+After=network.target mariadb.service
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/opt/booklore/dist
+ExecStart=/usr/bin/java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UseCompactObjectHeaders -XX:MaxRAMPercentage=75.0 -XX:+ExitOnOutOfMemoryError -jar /opt/booklore/dist/app.jar
+EnvironmentFile=/opt/booklore_storage/.env
+SuccessExitStatus=143
+TimeoutStopSec=10
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+systemctl enable -q --now booklore
+msg_ok "Created Service"
+
+motd_ssh
+customize
+cleanup_lxc
@@ -24,7 +24,6 @@ $STD apt install -y \
   dvisvgm \
   ffmpeg \
   inkscape \
-  libreoffice-writer \
   libva2 \
   libvips-tools \
   lmodern \
@@ -15,8 +15,8 @@ update_os

 msg_info "Setting up TemurinJDK"
 setup_java
-$STD apt install -y temurin-{8,11,17,21,25}-jre
-sudo update-alternatives --set java /usr/lib/jvm/temurin-25-jre-amd64/bin/java
+$STD apt install -y temurin-{8,11,17,21}-jre
+sudo update-alternatives --set java /usr/lib/jvm/temurin-21-jre-amd64/bin/java
 msg_ok "Installed TemurinJDK"

 msg_info "Setup Python3"
@@ -59,7 +59,7 @@ After=network.target
 Type=simple
 User=crafty
 WorkingDirectory=/opt/crafty-controller/crafty/crafty-4
-Environment=PATH=/usr/lib/jvm/temurin-25-jre-amd64/bin:/opt/crafty-controller/crafty/.venv/bin:$PATH
+Environment=PATH=/usr/lib/jvm/temurin-21-jre-amd64/bin:/opt/crafty-controller/crafty/.venv/bin:$PATH
 ExecStart=/opt/crafty-controller/crafty/.venv/bin/python3 main.py -d
 Restart=on-failure

@@ -1,52 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/drawdb-io/drawdb
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y nginx
-msg_ok "Installed Dependencies"
-
-NODE_VERSION="20" setup_nodejs
-fetch_and_deploy_gh_tag "drawdb" "drawdb-io/drawdb" "latest" "/opt/drawdb"
-
-msg_info "Building Frontend"
-cd /opt/drawdb
-$STD npm ci
-NODE_OPTIONS="--max-old-space-size=4096" $STD npm run build
-msg_ok "Built Frontend"
-
-msg_info "Applying crypto.randomUUID Polyfill"
-sed -i '/<head>/a <script>if(!crypto.randomUUID){crypto.randomUUID=function(){return([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g,function(c){return(c^(crypto.getRandomValues(new Uint8Array(1))[0]&(15>>c/4))).toString(16)})}};</script>' /opt/drawdb/dist/index.html
-msg_ok "Applied Polyfill"
-
-msg_info "Configuring Nginx"
-cat <<EOF >/etc/nginx/conf.d/drawdb.conf
-server {
-    listen 3000;
-    server_name _;
-    root /opt/drawdb/dist;
-
-    location / {
-        try_files \$uri /index.html;
-    }
-}
-EOF
-rm -f /etc/nginx/sites-enabled/default
-systemctl enable -q --now nginx
-systemctl reload nginx
-msg_ok "Configured Nginx"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -33,25 +33,13 @@ msg_ok "Installed ASP.NET Core Runtime"

 fetch_and_deploy_from_url "https://fileflows.com/downloads/zip" "/opt/fileflows"

+msg_info "Setup FileFlows"
 $STD ln -svf /usr/bin/ffmpeg /usr/local/bin/ffmpeg
 $STD ln -svf /usr/bin/ffprobe /usr/local/bin/ffprobe
-read -r -p "${TAB3}Do you want to install FileFlows Server or Node? (S/N): " install_server
-if [[ "$install_server" =~ ^[Ss]$ ]]; then
-  msg_info "Installing FileFlows Server"
-  cd /opt/fileflows/Server
-  $STD dotnet FileFlows.Server.dll --systemd install --root true
-  systemctl enable -q --now fileflows
-  msg_ok "Installed FileFlows Server"
-else
-  msg_info "Installing FileFlows Node"
-  cd /opt/fileflows/Node
-  $STD dotnet FileFlows.Node.dll
-  $STD dotnet FileFlows.Node.dll --systemd install --root true
-  systemctl enable -q --now fileflows-node
-  msg_ok "Installed FileFlows Node"
-fi
+cd /opt/fileflows/Server
+dotnet FileFlows.Server.dll --systemd install --root true
+systemctl enable -q --now fileflows
+msg_ok "Setup FileFlows"

 motd_ssh
 customize

@@ -110,7 +110,7 @@ export AUTOGRAPH_VERBOSITY=0
 export GLOG_minloglevel=3
 export GLOG_logtostderr=0

-fetch_and_deploy_gh_release "frigate" "blakeblackshear/frigate" "tarball" "v0.17.1" "/opt/frigate"
+fetch_and_deploy_gh_release "frigate" "blakeblackshear/frigate" "tarball" "v0.17.0" "/opt/frigate"

 msg_info "Building Nginx"
 $STD bash /opt/frigate/docker/main/build_nginx.sh
@@ -182,6 +182,23 @@ cp /opt/frigate/audio-labelmap.txt /audio-labelmap.txt
 rm -f /tmp/yamnet.tar.gz
 msg_ok "Downloaded Audio Model"

+msg_info "Installing HailoRT Runtime"
+$STD bash /opt/frigate/docker/main/install_hailort.sh
+cp -a /opt/frigate/docker/main/rootfs/. /
+sed -i '/^.*unset DEBIAN_FRONTEND.*$/d' /opt/frigate/docker/main/install_deps.sh
+echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections
+echo "libedgetpu1-max libedgetpu/install-confirm-max boolean true" | debconf-set-selections
+echo 'force-overwrite' >/etc/dpkg/dpkg.cfg.d/force-overwrite
+$STD bash /opt/frigate/docker/main/install_deps.sh
+rm -f /etc/dpkg/dpkg.cfg.d/force-overwrite
+$STD pip3 install -U /wheels/*.whl
+ldconfig
+msg_ok "Installed HailoRT Runtime"
+
+msg_info "Installing MemryX Runtime"
+$STD bash /opt/frigate/docker/main/install_memryx.sh
+msg_ok "Installed MemryX Runtime"
+
 msg_info "Installing OpenVino"
 $STD pip3 install -r /opt/frigate/docker/main/requirements-ov.txt
 msg_ok "Installed OpenVino"
@@ -211,23 +228,6 @@ else
 msg_warn "OpenVino build failed (CPU may not support required instructions). Frigate will use CPU model."
 fi

-msg_info "Installing HailoRT Runtime"
-$STD bash /opt/frigate/docker/main/install_hailort.sh
-cp -a /opt/frigate/docker/main/rootfs/. /
-sed -i '/^.*unset DEBIAN_FRONTEND.*$/d' /opt/frigate/docker/main/install_deps.sh
-echo "libedgetpu1-max libedgetpu/accepted-eula boolean true" | debconf-set-selections
-echo "libedgetpu1-max libedgetpu/install-confirm-max boolean true" | debconf-set-selections
-echo 'force-overwrite' >/etc/dpkg/dpkg.cfg.d/force-overwrite
-$STD bash /opt/frigate/docker/main/install_deps.sh
-rm -f /etc/dpkg/dpkg.cfg.d/force-overwrite
-$STD pip3 install -U /wheels/*.whl
-ldconfig
-msg_ok "Installed HailoRT Runtime"
-
-msg_info "Installing MemryX Runtime"
-$STD bash /opt/frigate/docker/main/install_memryx.sh
-msg_ok "Installed MemryX Runtime"
-
 msg_info "Building Frigate Application (Patience)"
 cd /opt/frigate
 $STD pip3 install -r /opt/frigate/docker/main/requirements-dev.txt
@@ -1,205 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: CrazyWolf13
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/tess1o/geopulse
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y \
-  openssl \
-  nginx
-msg_ok "Installed Dependencies"
-
-PG_VERSION="17" PG_MODULES="postgis" setup_postgresql
-PG_DB_NAME="geopulse" PG_DB_USER="geopulse" PG_DB_EXTENSIONS="postgis,postgis_topology" setup_postgresql_db
-
-msg_info "Generating Security Keys"
-mkdir -p /opt/geopulse/{backend,keys}
-mkdir -p /etc/geopulse /var/www/geopulse /var/lib/geopulse/dumps
-mkdir -p /var/log/geopulse/{backend,nginx}
-openssl genpkey -algorithm RSA -out /opt/geopulse/keys/jwt-private-key.pem 2>/dev/null
-openssl rsa -pubout -in /opt/geopulse/keys/jwt-private-key.pem -out /opt/geopulse/keys/jwt-public-key.pem 2>/dev/null
-openssl rand -base64 32 >/opt/geopulse/keys/ai-encryption-key.txt
-chmod 640 /opt/geopulse/keys/jwt-private-key.pem /opt/geopulse/keys/jwt-public-key.pem /opt/geopulse/keys/ai-encryption-key.txt
-msg_ok "Generated Security Keys"
-
-if [[ "$(uname -m)" == "aarch64" ]]; then
-  if grep -qi "raspberry\|bcm" /proc/cpuinfo 2>/dev/null; then
-    BINARY_PATTERN="geopulse-backend-native-arm64-compat-*"
-  else
-    BINARY_PATTERN="geopulse-backend-native-arm64-[!c]*"
-  fi
-else
-  if grep -q avx2 /proc/cpuinfo && grep -q bmi2 /proc/cpuinfo && grep -q fma /proc/cpuinfo; then
-    BINARY_PATTERN="geopulse-backend-native-amd64-[!c]*"
-  else
-    BINARY_PATTERN="geopulse-backend-native-amd64-compat-*"
-  fi
-fi
-
-fetch_and_deploy_gh_release "geopulse-backend" "tess1o/geopulse" "singlefile" "latest" "/opt/geopulse/backend" "${BINARY_PATTERN}"
-fetch_and_deploy_gh_release "geopulse-frontend" "tess1o/geopulse" "prebuild" "latest" "/var/www/geopulse" "geopulse-frontend-*.tar.gz"
-
-msg_info "Configuring GeoPulse"
-cat <<EOF >/etc/geopulse/geopulse.env
-GEOPULSE_PUBLIC_BASE_URL=http://${LOCAL_IP}
-GEOPULSE_UI_URL=http://${LOCAL_IP}
-GEOPULSE_CORS_ENABLED=false
-GEOPULSE_CORS_ORIGINS=
-QUARKUS_HTTP_PORT=8080
-GEOPULSE_POSTGRES_URL=jdbc:postgresql://localhost:5432/${PG_DB_NAME}
-GEOPULSE_POSTGRES_HOST=localhost
-GEOPULSE_POSTGRES_PORT=5432
-GEOPULSE_POSTGRES_DB=${PG_DB_NAME}
-GEOPULSE_POSTGRES_USERNAME=${PG_DB_USER}
-GEOPULSE_POSTGRES_PASSWORD=${PG_DB_PASS}
-GEOPULSE_JWT_PRIVATE_KEY_LOCATION=file:/opt/geopulse/keys/jwt-private-key.pem
-GEOPULSE_JWT_PUBLIC_KEY_LOCATION=file:/opt/geopulse/keys/jwt-public-key.pem
-GEOPULSE_AI_ENCRYPTION_KEY_LOCATION=file:/opt/geopulse/keys/ai-encryption-key.txt
-QUARKUS_LOG_FILE_ENABLE=true
-QUARKUS_LOG_FILE_PATH=/var/log/geopulse/backend/geopulse.log
-QUARKUS_LOG_FILE_ROTATION_MAX_FILE_SIZE=10M
-QUARKUS_LOG_FILE_ROTATION_MAX_BACKUP_INDEX=5
-EOF
-chmod 640 /etc/geopulse/geopulse.env
-msg_ok "Configured GeoPulse"
-
-msg_info "Creating Service"
-cat <<EOF >/etc/systemd/system/geopulse-backend.service
-[Unit]
-Description=GeoPulse Backend
-After=network.target postgresql.service
-Wants=postgresql.service
-
-[Service]
-Type=simple
-User=root
-WorkingDirectory=/opt/geopulse/backend
-EnvironmentFile=/etc/geopulse/geopulse.env
-ExecStart=/opt/geopulse/backend/geopulse-backend -Dquarkus.http.host=0.0.0.0 -XX:MaximumHeapSizePercent=70 -XX:MaximumYoungGenerationSizePercent=15
-Restart=on-failure
-RestartSec=10
-StandardOutput=append:/var/log/geopulse/backend/geopulse-stdout.log
-StandardError=append:/var/log/geopulse/backend/geopulse-stderr.log
-
-[Install]
-WantedBy=multi-user.target
-EOF
-systemctl enable -q --now geopulse-backend
-msg_ok "Created Service"
-
-msg_info "Configuring Nginx"
-mkdir -p /var/cache/nginx/osm_tiles
-cat <<'EOF' >/etc/nginx/sites-available/geopulse.conf
-proxy_cache_path /var/cache/nginx/osm_tiles levels=1:2 keys_zone=osm_cache:100m max_size=10g inactive=30d use_temp_path=off;
-
-map $uri $osm_subdomain {
-    ~^/osm/tiles/a/ "a";
-    ~^/osm/tiles/b/ "b";
-    ~^/osm/tiles/c/ "c";
-    default "a";
-}
-
-server {
-    listen 80;
-    server_name _;
-
-    root /var/www/geopulse;
-    index index.html;
-
-    client_max_body_size 100M;
-
-    gzip on;
-    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
-    gzip_comp_level 6;
-    gzip_min_length 1000;
-
-    location ~* ^/(?!osm/).*\.(jpg|jpeg|png|gif|ico|css|js)$ {
-        expires 1y;
-        add_header Cache-Control "public, max-age=31536000";
-    }
-
-    location ^~ /osm/tiles/ {
-        resolver 8.8.8.8 valid=300s;
-        resolver_timeout 10s;
-        rewrite ^/osm/tiles/[abc]/(.*)$ /$1 break;
-        proxy_pass https://$osm_subdomain.tile.openstreetmap.org;
-        proxy_cache osm_cache;
-        proxy_cache_key "$scheme$proxy_host$uri";
-        proxy_cache_valid 200 30d;
-        proxy_cache_valid 404 1m;
-        proxy_cache_valid 502 503 504 1m;
-        proxy_ignore_headers Cache-Control Expires Set-Cookie;
-        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
-        proxy_cache_background_update on;
-        proxy_cache_lock on;
-        proxy_set_header Cookie "";
-        proxy_set_header Authorization "";
-        proxy_set_header User-Agent "GeoPulse/1.0";
-        proxy_set_header Host $osm_subdomain.tile.openstreetmap.org;
-        proxy_http_version 1.1;
-        proxy_set_header Connection "";
-        proxy_connect_timeout 10s;
-        proxy_read_timeout 10s;
-        expires 30d;
-        add_header Cache-Control "public, immutable";
-        add_header X-Cache-Status $upstream_cache_status always;
-    }
-
-    location /api/ {
-        proxy_pass http://localhost:8080/api/;
-        proxy_connect_timeout 3600s;
-        proxy_send_timeout 3600s;
-        proxy_read_timeout 3600s;
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-    }
-
-    location / {
-        try_files $uri $uri/ /index.html;
-    }
-
-    add_header X-Frame-Options "SAMEORIGIN" always;
-    add_header X-Content-Type-Options "nosniff" always;
-    add_header X-XSS-Protection "1; mode=block" always;
-
-    access_log /var/log/geopulse/nginx/access.log;
-    error_log /var/log/geopulse/nginx/error.log;
-}
-EOF
-ln -sf /etc/nginx/sites-available/geopulse.conf /etc/nginx/sites-enabled/
-rm -f /etc/nginx/sites-enabled/default
-systemctl enable -q --now nginx
-systemctl reload nginx
-msg_ok "Configured Nginx"
-
-msg_info "Creating Admin Helper"
-cat <<'EOF' >/usr/local/bin/create-geopulse-admin
-#!/usr/bin/env bash
-read -rp "Enter admin email address: " ADMIN_EMAIL
-if [[ -z "$ADMIN_EMAIL" ]]; then
-  echo "No email provided. Aborting."
-  exit 1
-fi
-sed -i '/^GEOPULSE_ADMIN_EMAIL=/d' /etc/geopulse/geopulse.env
-echo "GEOPULSE_ADMIN_EMAIL=${ADMIN_EMAIL}" >>/etc/geopulse/geopulse.env
-systemctl restart geopulse-backend
-echo "Admin email set to '${ADMIN_EMAIL}'. Register with this email in the GeoPulse UI to receive admin privileges."
-EOF
-chmod +x /usr/local/bin/create-geopulse-admin
-msg_ok "Created Admin Helper"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -13,11 +13,11 @@ setting_up_container
 network_check
 update_os

-fetch_and_deploy_gh_release "gokapi" "Forceu/Gokapi" "prebuild" "latest" "/opt/gokapi" "*linux*amd64.zip"
+fetch_and_deploy_gh_release "gokapi" "Forceu/Gokapi" "prebuild" "latest" "/opt/gokapi" "gokapi-linux_amd64.zip"

 msg_info "Configuring Gokapi"
 mkdir -p /opt/gokapi/{data,config}
-chmod +x /opt/gokapi/gokapi
+chmod +x /opt/gokapi/gokapi-linux_amd64
 msg_ok "Configured Gokapi"

 msg_info "Creating Service"
@@ -29,7 +29,7 @@ Description=gokapi
 Type=simple
 Environment=GOKAPI_DATA_DIR=/opt/gokapi/data
 Environment=GOKAPI_CONFIG_DIR=/opt/gokapi/config
-ExecStart=/opt/gokapi/gokapi
+ExecStart=/opt/gokapi/gokapi-linux_amd64

 [Install]
 WantedBy=multi-user.target
@@ -35,7 +35,7 @@ PG_DB_NAME="healthchecks_db" PG_DB_USER="hc_user" PG_DB_PASS=$(openssl rand -bas

 msg_info "Setup Keys (Admin / Secret)"
 SECRET_KEY="$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-32)"
-ADMIN_EMAIL="admin@community-scripts.org"
+ADMIN_EMAIL="admin@helper-scripts.local"
 ADMIN_PASSWORD="$PG_DB_PASS"
 {
 echo "healthchecks Admin Email: $ADMIN_EMAIL"
@@ -295,7 +295,7 @@ ML_DIR="${APP_DIR}/machine-learning"
 GEO_DIR="${INSTALL_DIR}/geodata"
 mkdir -p {"${APP_DIR}","${UPLOAD_DIR}","${GEO_DIR}","${INSTALL_DIR}"/cache}

-fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.6.3" "$SRC_DIR"
+fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.6.1" "$SRC_DIR"
 PNPM_VERSION="$(jq -r '.packageManager | split("@")[1] | split("+")[0]' ${SRC_DIR}/package.json)"
 NODE_VERSION="24" NODE_MODULE="pnpm@${PNPM_VERSION}" setup_nodejs

@@ -344,11 +344,7 @@ msg_ok "Installed Immich Server, Web and Plugin Components"

 cd "$SRC_DIR"/machine-learning
 $STD useradd -U -s /usr/sbin/nologin -r -M -d "$INSTALL_DIR" immich
-mkdir -p "$ML_DIR"
-# chown excluding upload dir contents (may be a mount with restricted permissions)
-chown immich:immich "$INSTALL_DIR"
-find "$INSTALL_DIR" -maxdepth 1 -mindepth 1 ! -name upload -exec chown -R immich:immich {} +
-chown immich:immich "$UPLOAD_DIR" 2>/dev/null || true
+mkdir -p "$ML_DIR" && chown -R immich:immich "$INSTALL_DIR"
 export VIRTUAL_ENV="${ML_DIR}/ml-venv"
 export UV_HTTP_TIMEOUT=300
 if [[ -f ~/.openvino ]]; then
@@ -473,7 +469,8 @@ User=immich
 Group=immich
 UMask=0077
 WorkingDirectory=${APP_DIR}
-ExecStart=${APP_DIR}/bin/start.sh
+EnvironmentFile=${INSTALL_DIR}/.env
+ExecStart=/usr/bin/node ${APP_DIR}/dist/main
 Restart=on-failure
 SyslogIdentifier=immich-web
 StandardOutput=append:/var/log/immich/web.log
@@ -503,11 +500,7 @@ StandardError=append:/var/log/immich/ml.log
 [Install]
 WantedBy=multi-user.target
 EOF
-chown -R immich:immich /var/log/immich
-# chown excluding upload dir contents (may be a mount with restricted permissions)
-chown immich:immich "$INSTALL_DIR"
-find "$INSTALL_DIR" -maxdepth 1 -mindepth 1 ! -name upload -exec chown -R immich:immich {} +
-chown immich:immich "$UPLOAD_DIR" 2>/dev/null || true
+chown -R immich:immich "$INSTALL_DIR" /var/log/immich
 systemctl enable -q --now immich-ml.service immich-web.service
 msg_ok "Modified user, created env file, scripts and services"

@@ -17,7 +17,7 @@ fetch_and_deploy_gh_release "inspircd" "inspircd/inspircd" "binary" "latest" "/o

 msg_info "Configuring InspIRCd"
 cat <<EOF >/etc/inspircd/inspircd.conf
-<define name="networkDomain" value="community-scripts.org">
+<define name="networkDomain" value="helper-scripts.com">
 <define name="networkName" value="Proxmox VE Helper-Scripts">

 <server
@@ -13,12 +13,7 @@ setting_up_container
 network_check
 update_os

-if ! grep -q ' avx ' /proc/cpuinfo 2>/dev/null; then
-  msg_error "CPU does not support AVX instructions (required by iSponsorBlockTV/PyApp)"
-  exit 106
-fi
-
-fetch_and_deploy_gh_release "isponsorblocktv" "dmunozv04/iSponsorBlockTV" "singlefile" "latest" "/opt/isponsorblocktv" "iSponsorBlockTV-x86_64-linux"
+fetch_and_deploy_gh_release "isponsorblocktv" "dmunozv04/iSponsorBlockTV" "singlefile" "latest" "/opt/isponsorblocktv" "iSponsorBlockTV-*-linux"

 msg_info "Setting up iSponsorBlockTV"
 install -d /var/lib/isponsorblocktv
@@ -55,10 +55,10 @@ $STD expect <<EOF
 set timeout -1
 log_user 0

-spawn bin/console kimai:user:create admin admin@community-scripts.org ROLE_SUPER_ADMIN
+spawn bin/console kimai:user:create admin admin@helper-scripts.com ROLE_SUPER_ADMIN

 expect "Please enter the password:"
-send "community-scripts.org\r"
+send "helper-scripts.com\r"

 expect eof
 EOF
@@ -23,20 +23,12 @@ mkdir -p config/assets
 cp config/config.yml.template config/config.yml
 msg_ok "Setup Kometa"

-read -r -p "${TAB3}Enter your TMDb API key: " TMDBKEY
-read -r -p "${TAB3}Enter your Plex URL: " PLEXURL
-read -r -p "${TAB3}Enter your Plex token: " PLEXTOKEN
-sed -i '/^plex:/,/^[^ ]/{s| url:.*| url: '"$PLEXURL"'|}' /opt/kometa/config/config.yml
-sed -i '/^plex:/,/^[^ ]/{s| token:.*| token: '"$PLEXTOKEN"'|}' /opt/kometa/config/config.yml
-sed -i '/^tmdb:/,/^[^ ]/{s| apikey:.*| apikey: '"$TMDBKEY"'|}' /opt/kometa/config/config.yml
-
-fetch_and_deploy_gh_release "kometa-quickstart" "Kometa-Team/Quickstart" "tarball"
-
-msg_info "Installing Kometa Quickstart"
-cd /opt/kometa-quickstart
-$STD uv venv /opt/kometa-quickstart/.venv
-$STD uv pip install -r requirements.txt -p /opt/kometa-quickstart/.venv/bin/python
-msg_ok "Installed Kometa Quickstart"
+read -p "${TAB3}Enter your TMDb API key: " TMDBKEY
+read -p "${TAB3}Enter your Plex URL: " PLEXURL
+read -p "${TAB3}Enter your Plex token: " PLEXTOKEN
+sed -i -e "s#url: http://192.168.1.12:32400#url: $PLEXURL #g" /opt/kometa/config/config.yml
+sed -i -e "s/token: ####################/token: $PLEXTOKEN/g" /opt/kometa/config/config.yml
+sed -i -e "s/apikey: ################################/apikey: $TMDBKEY/g" /opt/kometa/config/config.yml

 msg_info "Creating Service"
 cat <<EOF >/etc/systemd/system/kometa.service
@@ -54,22 +46,7 @@ RestartSec=30
 [Install]
 WantedBy=multi-user.target
 EOF
-cat <<EOF >/etc/systemd/system/kometa-quickstart.service
-[Unit]
-Description=Kometa Quickstart
-After=network-online.target
-
-[Service]
-Type=simple
-WorkingDirectory=/opt/kometa-quickstart
-ExecStart=/opt/kometa-quickstart/.venv/bin/python quickstart.py
-Restart=always
-RestartSec=10
-
-[Install]
-WantedBy=multi-user.target
-EOF
-systemctl enable -q --now kometa kometa-quickstart
+systemctl enable -q --now kometa
 msg_ok "Created Service"

 motd_ssh
@@ -1,72 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/matter-js/python-matter-server
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y \
-  libuv1 \
-  libjson-c5 \
-  libnl-3-200 \
-  libnl-route-3-200 \
-  iputils-ping \
-  iproute2
-msg_ok "Installed Dependencies"
-
-UV_PYTHON="3.12" setup_uv
-
-msg_info "Setting up Matter Server"
-mkdir -p /opt/matter-server/data/credentials
-if [ -L /data ]; then
-  rm -f /data
-fi
-if [ ! -e /data ]; then
-  ln -s /opt/matter-server/data /data
-fi
-$STD uv venv /opt/matter-server/.venv
-MATTER_VERSION=$(get_latest_github_release "matter-js/python-matter-server")
-$STD uv pip install --python /opt/matter-server/.venv/bin/python "python-matter-server[server]==${MATTER_VERSION}"
-echo "${MATTER_VERSION}" >~/.matter-server
-msg_ok "Set up Matter Server"
-
-fetch_and_deploy_gh_release "chip-ota-provider-app" "home-assistant-libs/matter-linux-ota-provider" "singlefile" "latest" "/usr/local/bin" "chip-ota-provider-app-x86-64"
-
-msg_info "Configuring Network"
-cat <<EOF >/etc/sysctl.d/99-matter.conf
-net.ipv4.igmp_max_memberships=1024
-EOF
-$STD sysctl -p /etc/sysctl.d/99-matter.conf
-msg_ok "Configured Network"
-
-msg_info "Creating Service"
-cat <<EOF >/etc/systemd/system/matter-server.service
-[Unit]
-Description=Matter Server
-After=network.target
-
-[Service]
-Type=simple
-User=root
-ExecStart=/opt/matter-server/.venv/bin/matter-server --storage-path /data --paa-root-cert-dir /data/credentials
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-EOF
-systemctl enable -q --now matter-server
-msg_ok "Created Service"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -33,7 +33,7 @@ $STD yarn config set ignore-engines true
 $STD yarn install
 $STD yarn run production
 $STD php artisan key:generate
-$STD php artisan setup:production --email=admin@community-scripts.org --password=community-scripts.org --force
+$STD php artisan setup:production --email=admin@helper-scripts.com --password=helper-scripts.com --force
 chown -R www-data:www-data /opt/monica
 chmod -R 775 /opt/monica/storage
 echo "* * * * * root php /opt/monica/artisan schedule:run >> /dev/null 2>&1" >>/etc/crontab
@@ -1,102 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://netboot.xyz
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y \
-  nginx \
-  tftpd-hpa
-msg_ok "Installed Dependencies"
-
-fetch_and_deploy_gh_release "netboot-xyz" "netbootxyz/netboot.xyz" "prebuild" "latest" "/var/www/html" "menus.tar.gz"
-
-# x86_64 UEFI bootloaders
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-efi" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-efi-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-snp-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-snp.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-snponly.efi"
-# x86_64 metal (code-signed) UEFI bootloaders
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-snp-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-snp.efi.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-snponly.efi"
-# x86_64 BIOS/Legacy bootloaders
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-kpxe" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.kpxe"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-undionly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-undionly.kpxe"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-kpxe" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal.kpxe"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-lkrn" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.lkrn"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-linux-bin" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-linux.bin"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-dsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.dsk"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-pdsk" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.pdsk"
-# ARM64 bootloaders
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64-snponly.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-arm64" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-arm64.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-arm64-snp" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-arm64-snp.efi"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-metal-arm64-snponly" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-metal-arm64-snponly.efi"
-# ISO and IMG images (for virtual/physical media creation)
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-iso" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.iso"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-img" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz.img"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-iso" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64.iso"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-arm64-img" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-arm64.img"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-multiarch-iso" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-multiarch.iso"
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-multiarch-img" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-multiarch.img"
-# SHA256 checksums
-USE_ORIGINAL_FILENAME=true fetch_and_deploy_gh_release "netboot-xyz-checksums" "netbootxyz/netboot.xyz" "singlefile" "latest" "/var/www/html" "netboot.xyz-sha256-checksums.txt"
-
-msg_info "Configuring Webserver"
-rm -f /etc/nginx/sites-enabled/default
-cat <<'EOF' >/etc/nginx/sites-available/netboot-xyz
-server {
-    listen 80 default_server;
-    listen [::]:80 default_server;
-
-    root /var/www/html;
-    server_name _;
-
-    location / {
-        autoindex on;
-        add_header Access-Control-Allow-Origin "*";
-        add_header Access-Control-Allow-Headers "Content-Type";
-    }
-
-    # The index.html from menus.tar.gz links bootloaders under /ipxe/ —
-    # serve them from the same root directory via alias
-    location /ipxe/ {
-        alias /var/www/html/;
-        autoindex on;
-        add_header Access-Control-Allow-Origin "*";
-    }
-}
-EOF
-ln -sf /etc/nginx/sites-available/netboot-xyz /etc/nginx/sites-enabled/netboot-xyz
-$STD systemctl reload nginx
-msg_ok "Configured Webserver"
-
-msg_info "Configuring TFTP Server"
-cat <<EOF >/etc/default/tftpd-hpa
-TFTP_USERNAME="tftp"
-TFTP_DIRECTORY="/var/www/html"
-TFTP_ADDRESS="0.0.0.0:69"
-TFTP_OPTIONS="--secure"
-EOF
-systemctl enable -q --now tftpd-hpa
-msg_ok "Configured TFTP Server"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -1,164 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: vhsdream
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/nxzai/nextExplorer
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y \
-  ripgrep \
-  imagemagick \
-  ffmpeg \
-  libva-drm2 \
-  libva2 \
-  mesa-va-drivers \
-  vainfo
-msg_ok "Installed Dependencies"
-
-NODE_VERSION="24" setup_nodejs
-
-fetch_and_deploy_gh_release "nextExplorer" "nxzai/nextExplorer" "tarball" "latest" "/opt/nextExplorer"
-
-msg_info "Building nextExplorer"
-APP_DIR="/opt/nextExplorer/app"
-LOCAL_IP="$(hostname -I | awk '{print $1}')"
-mkdir -p "$APP_DIR"
-mkdir -p /etc/nextExplorer
-cd /opt/nextExplorer
-export NODE_ENV=production
-$STD npm ci --omit=dev --workspace backend
-mv node_modules "$APP_DIR"
-mv backend/{src,package.json} "$APP_DIR"
-unset NODE_ENV
-
-export NODE_ENV=development
-export NODE_OPTIONS="--max-old-space-size=2048"
-$STD npm ci --workspace frontend
-$STD npm run -w frontend build -- --sourcemap false
-unset NODE_ENV
-mv frontend/dist/ "$APP_DIR"/src/public
-msg_ok "Built nextExplorer"
-
-msg_info "Configuring nextExplorer"
-SECRET=$(openssl rand -hex 32)
-cat <<EOF >/etc/nextExplorer/.env
-NODE_ENV=production
-PORT=3000
-
-VOLUME_ROOT=/mnt
-CONFIG_DIR=/etc/nextExplorer
-CACHE_DIR=/etc/nextExplorer/cache
-# USER_ROOT=
-
-PUBLIC_URL=${LOCAL_IP}:3000
-# TRUST_PROXY=
-# CORS_ORIGINS=
-
-TERMINAL_ENABLED=false
-
-LOG_LEVEL=info
-DEBUG=false
-ENABLE_HTTP_LOGGING=false
-
-AUTH_ENABLED=true
-AUTH_MODE=both
-SESSION_SECRET="${SECRET}"
-# AUTH_MAX_FAILED=
-# AUTH_LOCK_MINUTES=
-# AUTH_USER_EMAIL=
-# AUTH_USER_PASSWORD=
-
-# OIDC_ENABLED=
-# OIDC_ISSUER=
-# OIDC_AUTHORIZATION_URL=
-# OIDC_TOKEN_URL=
-# OIDC_USERINFO_URL=
-# OIDC_CLIENT_ID=
-# OIDC_CLIENT_SECRET=
-# OIDC_CALLBACK_URL=
-# OIDC_LOGOUT_URL=
-# OIDC_SCOPES=
-# OIDC_AUTO_CREATE_USERS=true
-
-# SEARCH_DEEP=
-# SEARCH_RIPGREP=
-# SEARCH_MAX_FILESIZE=
-
-# ONLYOFFICE_URL=
-# ONLYOFFICE_SECRET=
-# ONLYOFFICE_LANG=
-# ONLYOFFICE_FORCE_SAVE=
-# ONLYOFFICE_FILE_EXTENSIONS=
-
-# COLLABORA_URL=
-# COLLABORA_DISCOVERY_URL=
-# COLLABORA_SECRET=
-# COLLABORA_LANG=
-# COLLABORA_FILE_EXTENSIONS=
-
-SHOW_VOLUME_USAGE=true
-# USER_DIR_ENABLED=
-# SKIP_HOME=
-
-# EDITOR_EXTENSIONS=
-
-# FFMPEG_PATH=
-# FFPROBE_PATH=
-
-## Hardware acceleration
-# FFMPEG_HWACCEL=vaapi
-# FFMPEG_HWACCEL_DEVICE=/dev/dri/renderD128
-# FFMPEG_HWACCEL_OUTPUT_FORMAT=nv12
-
-FAVORITES_DEFAULT_ICON=outline.StarIcon
-
-SHARES_ENABLED=true
-# SHARES_TOKEN_LENGTH=10
-# SHARES_MAX_PER_USER=100
-# SHARES_DEFAULT_EXPIRY_DAYS=30
-# SHARES_GUEST_SESSION_HOURS=24
-# SHARES_ALLOW_PASSWORD=true
-# SHARES_ALLOW_ANONYMOUS=true
-EOF
-chmod 600 /etc/nextExplorer/.env
-$STD useradd -U -s /usr/sbin/nologin -m -d /home/explorer explorer
-chown -R explorer:explorer "$APP_DIR" /etc/nextExplorer
-sed -i "\|version|s|$(jq -cr '.version' ${APP_DIR}/package.json)|$(cat ~/.nextexplorer)|" "$APP_DIR"/package.json
-msg_ok "Configured nextExplorer"
-
-msg_info "Creating nextExplorer Service"
-cat <<EOF >/etc/systemd/system/nextexplorer.service
-[Unit]
-Description=nextExplorer Service
-After=network.target
-
-[Service]
-Type=simple
-User=explorer
-Group=explorer
-WorkingDirectory=/opt/nextExplorer/app
-EnvironmentFile=/etc/nextExplorer/.env
-ExecStart=/usr/bin/node ./src/server.js
-Restart=always
-RestartSec=5
-StandardOutput=journal
-StandardError=journal
-
-[Install]
-WantedBy=multi-user.target
-EOF
-$STD systemctl enable -q --now nextexplorer
-msg_ok "Created nextExplorer Service"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -14,20 +14,23 @@ network_check
 update_os
 
 msg_info "Installing Dependencies"
-$STD apt install -y \
+$STD apt update
+$STD apt -y install \
+  ca-certificates \
   apache2-utils \
   logrotate \
   build-essential \
-  libpcre3-dev \
-  libssl-dev \
-  zlib1g-dev \
-  git \
+  git
+msg_ok "Installed Dependencies"
+
+msg_info "Installing Python Dependencies"
+$STD apt install -y \
   python3 \
   python3-dev \
   python3-pip \
   python3-venv \
   python3-cffi
-msg_ok "Installed Dependencies"
+msg_ok "Installed Python Dependencies"
 
 msg_info "Setting up Certbot"
 $STD python3 -m venv /opt/certbot
@@ -36,50 +39,33 @@ $STD /opt/certbot/bin/pip install certbot certbot-dns-cloudflare
 ln -sf /opt/certbot/bin/certbot /usr/local/bin/certbot
 msg_ok "Set up Certbot"
 
-fetch_and_deploy_gh_release "openresty" "openresty/openresty" "prebuild" "latest" "/opt/openresty" "openresty-*.tar.gz"
-
-msg_info "Building OpenResty"
-cd /opt/openresty
-$STD ./configure \
-  --with-http_v2_module \
-  --with-http_realip_module \
-  --with-http_stub_status_module \
-  --with-http_ssl_module \
-  --with-http_sub_module \
-  --with-http_auth_request_module \
-  --with-pcre-jit \
-  --with-stream \
-  --with-stream_ssl_module
-$STD make -j"$(nproc)"
-$STD make install
-rm -rf /opt/openresty
-
-cat <<'EOF' >/lib/systemd/system/openresty.service
-[Unit]
-Description=The OpenResty Application Platform
-After=syslog.target network-online.target remote-fs.target nss-lookup.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-ExecStartPre=/usr/local/openresty/nginx/sbin/nginx -t
-ExecStart=/usr/local/openresty/nginx/sbin/nginx -g 'daemon off;'
-
-[Install]
-WantedBy=multi-user.target
-EOF
-msg_ok "Built OpenResty"
+msg_info "Installing Openresty"
+curl -fsSL "https://openresty.org/package/pubkey.gpg" | gpg --dearmor -o /etc/apt/trusted.gpg.d/openresty.gpg
+cat <<'EOF' >/etc/apt/sources.list.d/openresty.sources
+Types: deb
+URIs: http://openresty.org/package/debian/
+Suites: bookworm
+Components: openresty
+Signed-By: /etc/apt/trusted.gpg.d/openresty.gpg
+EOF
+$STD apt update
+$STD apt -y install openresty
+msg_ok "Installed Openresty"
 
 NODE_VERSION="22" NODE_MODULE="yarn" setup_nodejs
-RELEASE=$(get_latest_github_release "NginxProxyManager/nginx-proxy-manager")
+RELEASE=$(curl -fsSL https://api.github.com/repos/NginxProxyManager/nginx-proxy-manager/releases/latest |
+  grep "tag_name" |
+  awk '{print substr($2, 3, length($2)-4) }')
 
 fetch_and_deploy_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager" "tarball" "v${RELEASE}"
 
 msg_info "Setting up Environment"
 ln -sf /usr/bin/python3 /usr/bin/python
 ln -sf /usr/local/openresty/nginx/sbin/nginx /usr/sbin/nginx
 ln -sf /usr/local/openresty/nginx/ /etc/nginx
-sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/backend/package.json
-sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/frontend/package.json
+sed -i "s|\"version\": \"2.0.0\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/backend/package.json
+sed -i "s|\"version\": \"2.0.0\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/frontend/package.json
 sed -i 's+^daemon+#daemon+g' /opt/nginxproxymanager/docker/rootfs/etc/nginx/nginx.conf
 NGINX_CONFS=$(find /opt/nginxproxymanager -type f -name "*.conf")
 for NGINX_CONF in $NGINX_CONFS; do
@@ -183,6 +169,7 @@ sed -i 's/user npm/user root/g; s/^pid/#pid/g' /usr/local/openresty/nginx/conf/n
 sed -r -i 's/^([[:space:]]*)su npm npm/\1#su npm npm/g;' /etc/logrotate.d/nginx-proxy-manager
 systemctl enable -q --now openresty
 systemctl enable -q --now npm
+systemctl restart openresty
 msg_ok "Started Services"
 
 motd_ssh
@@ -91,16 +91,16 @@ expect "Format: mongodb://*" {
   send "$MONGO_CONNECTION_STRING\r"
 }
 expect "Administrator username" {
-  send "community-scripts\r"
+  send "helper-scripts\r"
 }
 expect "Administrator email address" {
-  send "admin@community-scripts.org\r"
+  send "helper-scripts@local.com\r"
 }
 expect "Password" {
-  send "community-scripts\r"
+  send "helper-scripts\r"
 }
 expect "Confirm Password" {
-  send "community-scripts\r"
+  send "helper-scripts\r"
 }
 expect eof
 EOF
@@ -60,7 +60,7 @@ read -r -p "${TAB3}Enter your ACME Email: " ACME_EMAIL_INPUT
 yq -i "
   .services.npmplus.environment |=
   (map(select(. != \"TZ=*\" and . != \"ACME_EMAIL=*\" and . != \"INITIAL_ADMIN_EMAIL=*\" and . != \"INITIAL_ADMIN_PASSWORD=*\")) +
-  [\"TZ=$TZ_INPUT\", \"ACME_EMAIL=$ACME_EMAIL_INPUT\", \"INITIAL_ADMIN_EMAIL=admin@local.com\", \"INITIAL_ADMIN_PASSWORD=community-scripts.org\"])
+  [\"TZ=$TZ_INPUT\", \"ACME_EMAIL=$ACME_EMAIL_INPUT\", \"INITIAL_ADMIN_EMAIL=admin@local.com\", \"INITIAL_ADMIN_PASSWORD=helper-scripts.com\"])
 " /opt/compose.yaml
 
 msg_info "Building and Starting NPMplus (Patience)"
@@ -22,7 +22,7 @@ msg_ok "Installed Dependencies"
 
 msg_info "Setting up Intel® Repositories"
 mkdir -p /usr/share/keyrings
-curl -fsSL https://repositories.intel.com/gpu/intel-graphics.key | gpg --dearmor -o /usr/share/keyrings/intel-graphics.gpg 2>/dev/null || true
+curl -fsSL https://repositories.intel.com/gpu/intel-graphics.key | gpg --dearmor -o /usr/share/keyrings/intel-graphics.gpg
 cat <<EOF >/etc/apt/sources.list.d/intel-gpu.sources
 Types: deb
 URIs: https://repositories.intel.com/gpu/ubuntu
@@ -31,7 +31,7 @@ Components: client
 Architectures: amd64 i386
 Signed-By: /usr/share/keyrings/intel-graphics.gpg
 EOF
-curl -fsSL https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor -o /usr/share/keyrings/oneapi-archive-keyring.gpg 2>/dev/null || true
+curl -fsSL https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor -o /usr/share/keyrings/oneapi-archive-keyring.gpg
 cat <<EOF >/etc/apt/sources.list.d/oneAPI.sources
 Types: deb
 URIs: https://apt.repos.intel.com/oneapi
@@ -13,19 +13,27 @@ setting_up_container
 network_check
 update_os
 
+NODE_VERSION="22" NODE_MODULE="yarn@latest" setup_nodejs
 PG_VERSION="16" setup_postgresql
 PG_DB_NAME="partdb" PG_DB_USER="partdb" setup_postgresql_db
 PHP_VERSION="8.4" PHP_APACHE="YES" PHP_MODULE="xsl" PHP_POST_MAX_SIZE="100M" PHP_UPLOAD_MAX_FILESIZE="100M" setup_php
 setup_composer
 
-fetch_and_deploy_gh_release "partdb" "Part-DB/Part-DB-server" "prebuild" "latest" "/opt/partdb" "partdb_with_assets.zip"
-
-msg_info "Installing Part-DB"
+msg_info "Installing Part-DB (Patience)"
+cd /opt
+RELEASE=$(get_latest_github_release "Part-DB/Part-DB-server")
+curl -fsSL "https://github.com/Part-DB/Part-DB-server/archive/refs/tags/v${RELEASE}.zip" -o "/opt/v${RELEASE}.zip"
+$STD unzip "v${RELEASE}.zip"
+mv /opt/Part-DB-server-${RELEASE}/ /opt/partdb
+
 cd /opt/partdb/
 cp .env .env.local
 sed -i "s|DATABASE_URL=\"sqlite:///%kernel.project_dir%/var/app.db\"|DATABASE_URL=\"postgresql://${PG_DB_USER}:${PG_DB_PASS}@127.0.0.1:5432/${PG_DB_NAME}?serverVersion=12.19&charset=utf8\"|" .env.local
 
 export COMPOSER_ALLOW_SUPERUSER=1
 $STD composer install --no-dev -o --no-interaction
+$STD yarn install
+$STD yarn build
 $STD php bin/console cache:clear
 php bin/console doctrine:migrations:migrate -n >~/database-migration-output
 chown -R www-data:www-data /opt/partdb
@@ -36,6 +44,8 @@ ADMIN_PASS=$(grep -oP 'The initial password for the "admin" user is: \K\w+' ~/da
 echo "Part-DB Admin Password: $ADMIN_PASS"
 } >>~/partdb.creds
 rm -rf ~/database-migration-output
+rm -rf "/opt/v${RELEASE}.zip"
+echo "${RELEASE}" >~/.partdb
 msg_ok "Installed Part-DB"
 
 msg_info "Creating Service"
@@ -99,7 +99,7 @@ PHOTOPRISM_DEBUG='false'
 PHOTOPRISM_LOG_LEVEL='info'
 
 # Site Info
-PHOTOPRISM_SITE_CAPTION='https://community-scripts.org'
+PHOTOPRISM_SITE_CAPTION='https://Helper-Scripts.com'
 PHOTOPRISM_SITE_DESCRIPTION=''
 PHOTOPRISM_SITE_AUTHOR=''
 EOF
@@ -19,7 +19,7 @@ fetch_and_deploy_gh_release "revealjs" "hakimel/reveal.js" "tarball"
 msg_info "Configuring ${APPLICATION}"
 cd /opt/revealjs
 $STD npm install
-sed -i 's/"vite"/"vite --host"/g' package.json
+sed -i '25s/localhost/0.0.0.0/g' /opt/revealjs/gulpfile.js
 msg_ok "Setup ${APPLICATION}"
 
 msg_info "Creating Service"
@@ -40,7 +40,7 @@ cat <<EOF >/opt/semaphore/config.json
   "access_key_encryption": "${SEM_KEY}"
 }
 EOF
-$STD semaphore user add --admin --login admin --email admin@community-scripts.org --name Administrator --password "${SEM_PW}" --config /opt/semaphore/config.json
+$STD semaphore user add --admin --login admin --email admin@helper-scripts.com --name Administrator --password "${SEM_PW}" --config /opt/semaphore/config.json
 echo "${SEM_PW}" >~/semaphore.creds
 msg_ok "Setup Semaphore"
 
@@ -35,7 +35,7 @@ cd Shinobi
 gitVersionNumber=$(git rev-parse HEAD)
 theDateRightNow=$(date)
 touch version.json
-chmod 644 version.json
+chmod 777 version.json
 echo '{"Product" : "'"Shinobi"'" , "Branch" : "'"master"'" , "Version" : "'"$gitVersionNumber"'" , "Date" : "'"$theDateRightNow"'" , "Repository" : "'"https://gitlab.com/Shinobi-Systems/Shinobi.git"'"}' >version.json
 msg_ok "Cloned Shinobi"
 
@@ -40,7 +40,6 @@ sed \
   -e "s|^SPARKY_FITNESS_SERVER_HOST=.*|SPARKY_FITNESS_SERVER_HOST=localhost|" \
   -e "s|^SPARKY_FITNESS_SERVER_PORT=.*|SPARKY_FITNESS_SERVER_PORT=3010|" \
   -e "s|^SPARKY_FITNESS_FRONTEND_URL=.*|SPARKY_FITNESS_FRONTEND_URL=http://${LOCAL_IP}:80|" \
-  -e "s|^GARMIN_MICROSERVICE_URL=.*|GARMIN_MICROSERVICE_URL=http://${LOCAL_IP}:8000|" \
   -e "s|^SPARKY_FITNESS_API_ENCRYPTION_KEY=.*|SPARKY_FITNESS_API_ENCRYPTION_KEY=$(openssl rand -hex 32)|" \
   -e "s|^BETTER_AUTH_SECRET=.*|BETTER_AUTH_SECRET=$(openssl rand -hex 32)|" \
   "/etc/sparkyfitness/.env"
@@ -47,7 +47,6 @@ $STD yarn install
 $STD yarn build
 cat <<EOF >/opt/tandoor/.env
 SECRET_KEY=$SECRET_KEY
-ALLOWED_HOSTS=$LOCAL_IP
 TZ=Europe/Berlin
 
 DB_ENGINE=django.db.backends.postgresql
@@ -23,7 +23,7 @@ fetch_and_deploy_gh_release "tasmoadmin" "TasmoAdmin/TasmoAdmin" "prebuild" "lat
 msg_info "Configuring TasmoAdmin"
 rm -rf /etc/php/8.4/apache2/conf.d/10-opcache.ini
 chown -R www-data:www-data /var/www/tasmoadmin
-chmod 775 /var/www/tasmoadmin/tmp /var/www/tasmoadmin/data
+chmod 777 /var/www/tasmoadmin/tmp /var/www/tasmoadmin/data
 cat <<EOF >/etc/apache2/sites-available/tasmoadmin.conf
 <VirtualHost *:9999>
   ServerName tasmoadmin
@@ -62,7 +62,6 @@ fetch_and_deploy_gh_release "tracearr" "connorgallopo/Tracearr" "tarball" "lates
 
 msg_info "Building Tracearr"
 export TZ=$(cat /etc/timezone)
-export NODE_OPTIONS="--max-old-space-size=4096"
 cd /opt/tracearr.build
 $STD pnpm install --frozen-lockfile --force
 $STD pnpm turbo telemetry disable
@@ -1,49 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://github.com/versity/versitygw
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-fetch_and_deploy_gh_release "versitygw" "versity/versitygw" "binary"
-
-WEBUI_CONF=""
-read -rp "Would you like to enable the VersityGW WebGUI (Beta)? (y/N): " webui_prompt
-if [[ "${webui_prompt,,}" =~ ^(y|yes)$ ]]; then
-  WEBUI_CONF="\nVGW_WEBUI_PORT=:7071\nVGW_WEBUI_NO_TLS=true"
-  msg_ok "WebGUI will be enabled on port 7071"
-fi
-
-msg_info "Configuring VersityGW"
-mkdir -p /opt/versitygw-data
-ACCESS_KEY=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | cut -c1-20)
-SECRET_KEY=$(openssl rand -base64 36 | tr -dc 'a-zA-Z0-9' | cut -c1-40)
-
-cat <<EOF >/etc/versitygw.d/gateway.conf
-VGW_BACKEND=posix
-VGW_BACKEND_ARG=/opt/versitygw-data
-VGW_PORT=:7070
-ROOT_ACCESS_KEY_ID=${ACCESS_KEY}
-ROOT_SECRET_ACCESS_KEY=${SECRET_KEY}
-EOF
-
-if [[ -n "$WEBUI_CONF" ]]; then
-  echo -e "$WEBUI_CONF" >>/etc/versitygw.d/gateway.conf
-fi
-msg_ok "Configured VersityGW"
-
-msg_info "Enabling Service"
-systemctl enable -q --now versitygw@gateway
-msg_ok "Enabled Service"
-
-motd_ssh
-customize
-cleanup_lxc
@@ -14,16 +14,16 @@ network_check
 update_os
 
 msg_info "Getting latest version of VictoriaMetrics"
-victoriametrics_release=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaMetrics/releases" |
-  jq -r '.[] | select(.assets[].name | match("^victoria-metrics-linux-amd64-v[0-9.]+.tar.gz$")) | .tag_name' |
-  head -n 1)
-victoriametrics_filename="victoria-metrics-linux-amd64-${victoriametrics_release}.tar.gz"
-vmutils_filename="vmutils-linux-amd64-${victoriametrics_release}.tar.gz"
-msg_ok "Got version $victoriametrics_release of VictoriaMetrics"
-
-fetch_and_deploy_gh_release "victoriametrics" "VictoriaMetrics/VictoriaMetrics" "prebuild" "$victoriametrics_release" "/opt/victoriametrics" "$victoriametrics_filename"
-fetch_and_deploy_gh_release "vmutils" "VictoriaMetrics/VictoriaMetrics" "prebuild" "$victoriametrics_release" "/opt/victoriametrics" "$vmutils_filename"
+victoriametrics_filename=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaMetrics/releases/latest" |
+  jq -r '.assets[].name' |
+  grep -E '^victoria-metrics-linux-amd64-v[0-9.]+\.tar\.gz$')
+vmutils_filename=$(curl -fsSL "https://api.github.com/repos/VictoriaMetrics/VictoriaMetrics/releases/latest" |
+  jq -r '.assets[].name' |
+  grep -E '^vmutils-linux-amd64-v[0-9.]+\.tar\.gz$')
+msg_ok "Got latest version of VictoriaMetrics"
+
+fetch_and_deploy_gh_release "victoriametrics" "VictoriaMetrics/VictoriaMetrics" "prebuild" "latest" "/opt/victoriametrics" "$victoriametrics_filename"
+fetch_and_deploy_gh_release "vmutils" "VictoriaMetrics/VictoriaMetrics" "prebuild" "latest" "/opt/victoriametrics" "$vmutils_filename"
 
 read -r -p "${TAB3}Would you like to add VictoriaLogs? <y/N> " prompt
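The `grep -E` patterns in the hunk above must each select exactly one asset name. That can be checked offline against sample names (the asset names below are illustrative, not taken from a real release):

```shell
# Sample asset list, one name per line, mimicking the GitHub API output.
assets='victoria-metrics-linux-amd64-v1.93.0.tar.gz
victoria-metrics-linux-arm64-v1.93.0.tar.gz
vmutils-linux-amd64-v1.93.0.tar.gz
vmutils-linux-amd64-v1.93.0_checksums.txt'

# Anchored patterns: arch variants and checksum files must not match.
vm=$(printf '%s\n' "$assets" | grep -E '^victoria-metrics-linux-amd64-v[0-9.]+\.tar\.gz$')
utils=$(printf '%s\n' "$assets" | grep -E '^vmutils-linux-amd64-v[0-9.]+\.tar\.gz$')
echo "$vm"
echo "$utils"
```

The `^` and `$` anchors plus the escaped dots are what keep `arm64` builds and `_checksums.txt` files out of the match.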
@@ -23,8 +23,8 @@ $STD apt install -y \
 msg_ok "Installed Dependencies"
 
 setup_rust
-NODE_VERSION="24" NODE_MODULE="pnpm" setup_nodejs
-fetch_and_deploy_gh_release "wealthfolio" "afadil/wealthfolio" "tarball"
+NODE_VERSION="20" NODE_MODULE="pnpm" setup_nodejs
+fetch_and_deploy_gh_release "wealthfolio" "afadil/wealthfolio" "tarball" "v3.0.3"
 
 msg_info "Building Frontend (patience)"
 cd /opt/wealthfolio
@@ -1,91 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright (c) 2021-2026 community-scripts ORG
-# Author: MickLesk (CanbiZ)
-# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
-# Source: https://yourls.org/
-
-source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
-color
-verb_ip6
-catch_errors
-setting_up_container
-network_check
-update_os
-
-msg_info "Installing Dependencies"
-$STD apt install -y nginx
-msg_ok "Installed Dependencies"
-
-setup_mariadb
-MARIADB_DB_NAME="yourls" MARIADB_DB_USER="yourls" setup_mariadb_db
-PHP_VERSION="8.3" PHP_FPM="YES" PHP_MODULE="mysql,mbstring,gd,xml,curl" setup_php
-
-fetch_and_deploy_gh_release "yourls" "YOURLS/YOURLS" "tarball"
-
-msg_info "Configuring YOURLS"
-COOKIEKEY=$(openssl rand -hex 24)
-YOURLS_PASS=$(openssl rand -base64 12 | tr -dc 'a-zA-Z0-9' | cut -c1-16)
-cat <<EOF >/opt/yourls/user/config.php
-<?php
-define( 'YOURLS_DB_USER', '${MARIADB_DB_USER}' );
-define( 'YOURLS_DB_PASS', '${MARIADB_DB_PASS}' );
-define( 'YOURLS_DB_NAME', '${MARIADB_DB_NAME}' );
-define( 'YOURLS_DB_HOST', 'localhost' );
-define( 'YOURLS_DB_PREFIX', 'yourls_' );
-define( 'YOURLS_SITE', 'http://${LOCAL_IP}' );
-define( 'YOURLS_LANG', '' );
-define( 'YOURLS_UNIQUE_URLS', true );
-define( 'YOURLS_PRIVATE', true );
-define( 'YOURLS_COOKIEKEY', '${COOKIEKEY}' );
-\$yourls_user_passwords = [
-  'admin' => '${YOURLS_PASS}',
-];
-define( 'YOURLS_URL_CONVERT', 36 );
-define( 'YOURLS_DEBUG', false );
-EOF
-chown -R www-data:www-data /opt/yourls
-msg_ok "Configured YOURLS"
-
-msg_info "Configuring Nginx"
-cat <<EOF >/etc/nginx/sites-available/yourls
-server {
-    listen 80 default_server;
-    server_name _;
-    root /opt/yourls;
-    index index.php;
-
-    location / {
-        try_files \$uri \$uri/ /yourls-loader.php\$is_args\$args;
-    }
-
-    location ~ \.php\$ {
-        try_files \$uri =404;
-        fastcgi_split_path_info ^(.+\.php)(/.+)\$;
-        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
-        fastcgi_index index.php;
-        include fastcgi_params;
-        fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
-        fastcgi_param PATH_INFO \$fastcgi_path_info;
-    }
-
-    location ~* \.(jpg|jpeg|gif|css|png|js|ico|woff|woff2)\$ {
-        access_log off;
-        expires max;
-    }
-
-    location ~ /\.ht {
-        deny all;
-    }
-}
-EOF
-ln -sf /etc/nginx/sites-available/yourls /etc/nginx/sites-enabled/yourls
-rm -f /etc/nginx/sites-enabled/default
-$STD nginx -t
-systemctl enable -q --now nginx
-systemctl reload nginx
-msg_ok "Configured Nginx"
-
-motd_ssh
-customize
-cleanup_lxc
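The removed YOURLS script above relies on a detail of unquoted heredocs: `${VAR}` references expand at generation time, while `\$yourls_user_passwords` stays literal so PHP sees the variable name. A minimal sketch of that distinction (`YOURLS_PASS` below is a sample value, not the generated one):

```shell
YOURLS_PASS="s3cret"
# Unquoted heredoc: ${...} expands now, \$ survives as a literal dollar sign.
cfg=$(cat <<EOF
define( 'YOURLS_PRIVATE', true );
\$yourls_user_passwords = [ 'admin' => '${YOURLS_PASS}' ];
EOF
)
echo "$cfg"
```

Quoting the delimiter (`<<'EOF'`) would instead suppress *all* expansion, which is why the script keeps it unquoted and escapes only the PHP variable.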
@@ -90,18 +90,11 @@ setting_up_container() {
 network_check() {
   set +e
   trap - ERR
-  ipv4_connected=false
-
-  # Check IPv4 connectivity to Cloudflare, Google & Quad9 DNS servers
   if ping -c 1 -W 1 1.1.1.1 &>/dev/null || ping -c 1 -W 1 8.8.8.8 &>/dev/null || ping -c 1 -W 1 9.9.9.9 &>/dev/null; then
-    msg_ok "IPv4 Internet Connected"
-    ipv4_connected=true
+    ipv4_status="${GN}✔${CL} IPv4"
   else
-    msg_error "IPv4 Internet Not Connected"
-  fi
-
-  if [[ $ipv4_connected == false ]]; then
-    read -r -p "No Internet detected, would you like to continue anyway? <y/N> " prompt
+    ipv4_status="${RD}✖${CL} IPv4"
+    read -r -p "Internet NOT connected. Continue anyway? <y/N> " prompt
     if [[ "${prompt,,}" =~ ^(y|yes)$ ]]; then
       echo -e "${INFO}${RD}Expect Issues Without Internet${CL}"
     else
@@ -109,60 +102,20 @@ network_check() {
       exit 122
     fi
   fi
 
-  # DNS resolution checks for GitHub-related domains
-  GIT_HOSTS=("github.com" "raw.githubusercontent.com" "api.github.com" "git.community-scripts.org")
-  GIT_STATUS="Git DNS:"
-  DNS_FAILED=false
-
-  for HOST in "${GIT_HOSTS[@]}"; do
-    RESOLVEDIP=$(getent hosts "$HOST" | awk '{ print $1 }' | grep -E '(^([0-9]{1,3}\.){3}[0-9]{1,3}$)|(^[a-fA-F0-9:]+$)' | head -n1)
-    if [[ -z "$RESOLVEDIP" ]]; then
-      GIT_STATUS+="$HOST:($DNSFAIL)"
-      DNS_FAILED=true
-    else
-      GIT_STATUS+=" $HOST:($DNSOK)"
-    fi
-  done
-
-  if [[ "$DNS_FAILED" == true ]]; then
-    fatal "$GIT_STATUS"
+  RESOLVEDIP=$(getent hosts github.com | awk '{ print $1 }')
+  if [[ -z "$RESOLVEDIP" ]]; then
+    msg_error "Internet: ${ipv4_status} DNS Failed"
   else
-    msg_ok "$GIT_STATUS"
+    msg_ok "Internet: ${ipv4_status} DNS: ${BL}${RESOLVEDIP}${CL}"
   fi
 
   set -e
   trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
 }
 
-# This function updates the Container OS by running apk upgrade with mirror fallback
+# This function updates the Container OS by running apt-get update and upgrade
 update_os() {
   msg_info "Updating Container OS"
-  if ! $STD apk -U upgrade; then
-    msg_warn "apk update failed (dl-cdn.alpinelinux.org), trying alternate mirrors..."
-    local alpine_mirrors="mirror.init7.net ftp.halifax.rwth-aachen.de mirrors.edge.kernel.org alpine.mirror.wearetriple.com mirror.leaseweb.com uk.alpinelinux.org dl-2.alpinelinux.org dl-4.alpinelinux.org"
-    local apk_ok=false
-    for m in $(printf '%s\n' $alpine_mirrors | shuf); do
-      if timeout 2 bash -c "echo >/dev/tcp/$m/80" 2>/dev/null; then
-        msg_custom "${INFO}" "${YW}" "Attempting mirror: ${m}"
-        cat <<EOF >/etc/apk/repositories
-http://$m/alpine/latest-stable/main
-http://$m/alpine/latest-stable/community
-EOF
-        if $STD apk -U upgrade; then
-          msg_ok "CDN set to ${m}: tests passed"
-          apk_ok=true
-          break
-        else
-          msg_warn "Mirror ${m} failed"
-        fi
-      fi
-    done
-    if [[ "$apk_ok" != true ]]; then
-      msg_error "All Alpine mirrors failed. Check network or try again later."
-      exit 1
-    fi
-  fi
+  $STD apk -U upgrade
   local tools_content
   tools_content=$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func) || {
     msg_error "Failed to download tools.func"
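The mirror-fallback loop removed in the hunk above (probe a host, point apk at it, retry the upgrade, stop on first success) can be exercised offline by stubbing its two external operations. All names here are local to the sketch, not part of the real script: `probe` stands in for the `echo >/dev/tcp/$m/80` reachability check and `apk_upgrade_via` for `$STD apk -U upgrade`.

```shell
probe() { [ "$1" = "mirror.init7.net" ]; }   # stub: only this host is "reachable"
apk_upgrade_via() { probe "$1"; }            # stub: upgrade succeeds iff reachable

# Same control flow as the removed loop: first mirror that probes AND
# upgrades wins; no winner means failure.
select_mirror() {
  local m
  for m in "$@"; do
    if probe "$m" && apk_upgrade_via "$m"; then
      echo "$m"
      return 0
    fi
  done
  return 1
}

chosen=$(select_mirror bad1.example bad2.example mirror.init7.net)
echo "selected mirror: ${chosen:-none}"
```

The real loop additionally shuffles the mirror list (`shuf`) so load spreads across mirrors between runs.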
@@ -20,7 +20,7 @@ need_tool() {
     msg_info "Installing tools: $*"
     apk add --no-cache "$@" >/dev/null 2>&1 || {
       msg_error "apk add failed for: $*"
-      return 100
+      return 1
     }
     msg_ok "Tools ready: $*"
   fi
@@ -52,17 +52,17 @@ ensure_usr_local_bin_persist() {
 download_with_progress() {
   # $1 url, $2 dest
   local url="$1" out="$2" cl
-  need_tool curl pv || return 127
+  need_tool curl pv || return 1
   cl=$(curl -fsSLI "$url" 2>/dev/null | awk 'tolower($0) ~ /^content-length:/ {print $2}' | tr -d '\r')
   if [ -n "$cl" ]; then
     curl -fsSL "$url" | pv -s "$cl" >"$out" || {
       msg_error "Download failed: $url"
-      return 250
+      return 1
     }
   else
     curl -fL# -o "$out" "$url" || {
       msg_error "Download failed: $url"
-      return 250
+      return 1
     }
   fi
 }
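`download_with_progress` above chooses between `pv` (when the size is known) and curl's own progress bar by probing the `Content-Length` header. The header-parsing step can be checked offline with a canned response; `probe_len` is an illustrative wrapper, not part of the library:

```shell
probe_len() {
  # stdin = raw HTTP response headers; prints the Content-Length value, if any.
  # tolower() makes the match case-insensitive; tr strips the trailing CR.
  awk 'tolower($0) ~ /^content-length:/ {print $2}' | tr -d '\r'
}

cl=$(printf 'HTTP/1.1 200 OK\r\nContent-Length: 1048576\r\nServer: test\r\n' | probe_len)
echo "size: ${cl:-unknown} bytes"
```

Note that servers may omit `Content-Length` (chunked transfer encoding), which is exactly the fallback branch the function keeps.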
@@ -82,14 +82,14 @@ check_for_gh_release() {
 
   net_resolves api.github.com || {
     msg_error "DNS/network error: api.github.com"
-    return 6
+    return 1
   }
-  need_tool curl jq || return 127
+  need_tool curl jq || return 1
 
   tag=$(curl -fsSL "https://api.github.com/repos/${source}/releases/latest" | jq -r '.tag_name // empty')
   [ -z "$tag" ] && {
     msg_error "Unable to fetch latest tag for $app"
-    return 22
+    return 1
   }
   release="${tag#v}"
 
@@ -133,12 +133,12 @@ fetch_and_deploy_gh() {
 
   net_resolves api.github.com || {
     msg_error "DNS/network error"
-    return 6
+    return 1
   }
-  need_tool curl jq tar || return 127
+  need_tool curl jq tar || return 1
   [ "$mode" = "prebuild" ] || [ "$mode" = "singlefile" ] && need_tool unzip >/dev/null 2>&1 || true
 
-  tmpd="$(mktemp -d)" || return 252
+  tmpd="$(mktemp -d)" || return 1
   mkdir -p "$target"
 
   # Release JSON
@@ -146,13 +146,13 @@ fetch_and_deploy_gh() {
     json="$(curl -fsSL "https://api.github.com/repos/$repo/releases/latest")" || {
       msg_error "GitHub API failed"
       rm -rf "$tmpd"
-      return 22
+      return 1
     }
   else
     json="$(curl -fsSL "https://api.github.com/repos/$repo/releases/tags/$version")" || {
       msg_error "GitHub API failed"
       rm -rf "$tmpd"
-      return 22
+      return 1
     }
   fi
 
@@ -163,7 +163,7 @@ fetch_and_deploy_gh() {
   [ -z "$version" ] && {
     msg_error "No tag in release json"
     rm -rf "$tmpd"
-    return 65
+    return 1
   }
 
   case "$mode" in
@@ -173,26 +173,26 @@ fetch_and_deploy_gh() {
     filename="${app_lc}-${version}.tar.gz"
     download_with_progress "$url" "$tmpd/$filename" || {
       rm -rf "$tmpd"
-      return 250
+      return 1
     }
     tar -xzf "$tmpd/$filename" -C "$tmpd" || {
       msg_error "tar extract failed"
      rm -rf "$tmpd"
-      return 251
+      return 1
    }
    unpack="$(find "$tmpd" -mindepth 1 -maxdepth 1 -type d | head -n1)"
    # copy content of unpack to target
    (cd "$unpack" && tar -cf - .) | (cd "$target" && tar -xf -) || {
      msg_error "copy failed"
      rm -rf "$tmpd"
-      return 252
+      return 1
    }
    ;;
  prebuild)
    [ -n "$pattern" ] || {
      msg_error "prebuild requires asset pattern"
      rm -rf "$tmpd"
-      return 65
+      return 1
    }
    url="$(printf '%s' "$json" | jq -r '.assets[].browser_download_url' | awk -v p="$pattern" '
      BEGIN{IGNORECASE=1}
@@ -201,19 +201,19 @@ fetch_and_deploy_gh() {
    [ -z "$url" ] && {
      msg_error "asset not found for pattern: $pattern"
      rm -rf "$tmpd"
-      return 250
+      return 1
    }
    filename="${url##*/}"
    download_with_progress "$url" "$tmpd/$filename" || {
      rm -rf "$tmpd"
-      return 250
+      return 1
    }
    # unpack archive (Zip or tarball)
    case "$filename" in
    *.zip)
      need_tool unzip || {
        rm -rf "$tmpd"
-        return 127
+        return 1
      }
      mkdir -p "$tmpd/unp"
      unzip -q "$tmpd/$filename" -d "$tmpd/unp"
@@ -225,7 +225,7 @@ fetch_and_deploy_gh() {
    *)
      msg_error "unsupported archive: $filename"
      rm -rf "$tmpd"
-      return 251
+      return 1
      ;;
    esac
    # strip top-level folder
@@ -234,13 +234,13 @@ fetch_and_deploy_gh() {
      (cd "$unpack" && tar -cf - .) | (cd "$target" && tar -xf -) || {
        msg_error "copy failed"
        rm -rf "$tmpd"
-        return 252
+        return 1
      }
    else
      (cd "$tmpd/unp" && tar -cf - .) | (cd "$target" && tar -xf -) || {
        msg_error "copy failed"
        rm -rf "$tmpd"
-        return 252
+        return 1
      }
    fi
    ;;
@@ -248,7 +248,7 @@ fetch_and_deploy_gh() {
    [ -n "$pattern" ] || {
      msg_error "singlefile requires asset pattern"
      rm -rf "$tmpd"
-      return 65
+      return 1
    }
    url="$(printf '%s' "$json" | jq -r '.assets[].browser_download_url' | awk -v p="$pattern" '
      BEGIN{IGNORECASE=1}
@@ -257,19 +257,19 @@ fetch_and_deploy_gh() {
    [ -z "$url" ] && {
      msg_error "asset not found for pattern: $pattern"
      rm -rf "$tmpd"
-      return 250
+      return 1
    }
    filename="${url##*/}"
    download_with_progress "$url" "$target/$app" || {
      rm -rf "$tmpd"
-      return 250
+      return 1
    }
    chmod +x "$target/$app"
    ;;
  *)
    msg_error "Unknown mode: $mode"
    rm -rf "$tmpd"
-    return 65
+    return 1
    ;;
  esac
 
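`fetch_and_deploy_gh()` filters release assets with an `awk -v p="$pattern"` program whose body is elided in the hunks above. A hypothetical equivalent (case-insensitive substring match on the URL's basename, first hit wins) might look like the following; `pick_asset` and its matching rule are assumptions for illustration, not the library's actual logic:

```shell
pick_asset() {
  # $1 = pattern; stdin = one browser_download_url per line.
  # Compare against the basename only, case-insensitively; stop at first match.
  awk -v p="$1" '
  {
    n = split($0, parts, "/")
    if (index(tolower(parts[n]), tolower(p)) > 0) { print $0; exit }
  }'
}

urls='https://example.com/dl/tool-linux-arm64.tar.gz
https://example.com/dl/tool-linux-amd64.tar.gz'
url=$(printf '%s\n' "$urls" | pick_asset "AMD64")
echo "$url"
```

The real program sets `BEGIN{IGNORECASE=1}`, which is a gawk extension; the `tolower()` comparison above gets the same effect portably.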
@@ -291,20 +291,20 @@ setup_yq() {
     return 0
   fi
 
-  need_tool curl || return 127
+  need_tool curl || return 1
   local arch bin url tmp
   case "$(uname -m)" in
   x86_64) arch="amd64" ;;
   aarch64) arch="arm64" ;;
   *)
     msg_error "Unsupported arch for yq: $(uname -m)"
-    return 238
+    return 1
     ;;
   esac
   url="https://github.com/mikefarah/yq/releases/latest/download/yq_linux_${arch}"
   tmp="$(mktemp)"
-  download_with_progress "$url" "$tmp" || return 250
-  /usr/bin/install -m 0755 "$tmp" /usr/local/bin/yq
+  download_with_progress "$url" "$tmp" || return 1
+  install -m 0755 "$tmp" /usr/local/bin/yq
   rm -f "$tmp"
   msg_ok "Setup yq ($(yq --version 2>/dev/null))"
 }
@@ -313,13 +313,13 @@ setup_yq() {
 # Adminer – Alpine
 # ------------------------------
 setup_adminer() {
-  need_tool curl || return 127
+  need_tool curl || return 1
   msg_info "Setup Adminer (Alpine)"
   mkdir -p /var/www/localhost/htdocs/adminer
   curl -fsSL https://github.com/vrana/adminer/releases/latest/download/adminer.php \
     -o /var/www/localhost/htdocs/adminer/index.php || {
     msg_error "Adminer download failed"
-    return 250
+    return 1
   }
   msg_ok "Adminer at /adminer (served by your webserver)"
 }
@@ -329,7 +329,7 @@ setup_adminer() {
 # optional: PYTHON_VERSION="3.12"
 # ------------------------------
 setup_uv() {
-  need_tool curl tar || return 127
+  need_tool curl tar || return 1
   local UV_BIN="/usr/local/bin/uv"
   local arch tarball url tmpd ver installed
 
@@ -338,7 +338,7 @@ setup_uv() {
   aarch64) arch="aarch64-unknown-linux-musl" ;;
   *)
     msg_error "Unsupported arch for uv: $(uname -m)"
-    return 238
+    return 1
     ;;
   esac
 
@@ -346,7 +346,7 @@ setup_uv() {
   ver="${ver#v}"
   [ -z "$ver" ] && {
     msg_error "uv: cannot determine latest version"
-    return 250
+    return 1
   }
 
   if has "$UV_BIN"; then
@@ -360,29 +360,29 @@ setup_uv() {
     msg_info "Setup uv $ver"
   fi
 
-  tmpd="$(mktemp -d)" || return 252
+  tmpd="$(mktemp -d)" || return 1
   tarball="uv-${arch}.tar.gz"
   url="https://github.com/astral-sh/uv/releases/download/v${ver}/${tarball}"
 
   download_with_progress "$url" "$tmpd/uv.tar.gz" || {
     rm -rf "$tmpd"
-    return 250
+    return 1
   }
   tar -xzf "$tmpd/uv.tar.gz" -C "$tmpd" || {
     msg_error "uv: extract failed"
     rm -rf "$tmpd"
-    return 251
+    return 1
   }
 
   # tar contains ./uv
   if [ -x "$tmpd/uv" ]; then
-    /usr/bin/install -m 0755 "$tmpd/uv" "$UV_BIN"
+    install -m 0755 "$tmpd/uv" "$UV_BIN"
   else
     # fallback: in subfolder
-    /usr/bin/install -m 0755 "$tmpd"/*/uv "$UV_BIN" 2>/dev/null || {
+    install -m 0755 "$tmpd"/*/uv "$UV_BIN" 2>/dev/null || {
       msg_error "uv binary not found in tar"
       rm -rf "$tmpd"
-      return 252
+      return 1
     }
   fi
   rm -rf "$tmpd"
@@ -395,13 +395,13 @@ setup_uv() {
     $0 ~ "^cpython-"maj"\\." { print $0 }' | awk -F- '{print $2}' | sort -V | tail -n1)"
   [ -z "$match" ] && {
     msg_error "No matching Python for $PYTHON_VERSION"
-    return 250
+    return 1
   }
   if ! uv python list | grep -q "cpython-${match}-linux"; then
     msg_info "Installing Python $match via uv"
     uv python install "$match" || {
       msg_error "uv python install failed"
-      return 150
+      return 1
     }
     msg_ok "Python $match installed (uv)"
   fi
@@ -421,7 +421,7 @@ setup_java() {
   msg_info "Setup Java (OpenJDK $JAVA_VERSION)"
   apk add --no-cache "$pkg" >/dev/null 2>&1 || {
     msg_error "apk add $pkg failed"
-    return 100
+    return 1
   }
   # set JAVA_HOME
   local prof="/etc/profile.d/20-java.sh"
@@ -441,32 +441,32 @@ setup_go() {
     msg_info "Setup Go (apk)"
     apk add --no-cache go >/dev/null 2>&1 || {
       msg_error "apk add go failed"
-      return 100
+      return 1
     }
     msg_ok "Go ready: $(go version 2>/dev/null)"
     return 0
   fi
 
-  need_tool curl tar || return 127
+  need_tool curl tar || return 1
   local ARCH TARBALL URL TMP
   case "$(uname -m)" in
   x86_64) ARCH="amd64" ;;
   aarch64) ARCH="arm64" ;;
   *)
     msg_error "Unsupported arch for Go: $(uname -m)"
-    return 238
+    return 1
     ;;
   esac
   TARBALL="go${GO_VERSION}.linux-${ARCH}.tar.gz"
   URL="https://go.dev/dl/${TARBALL}"
   msg_info "Setup Go $GO_VERSION (tarball)"
   TMP="$(mktemp)"
-  download_with_progress "$URL" "$TMP" || return 250
+  download_with_progress "$URL" "$TMP" || return 1
   rm -rf /usr/local/go
   tar -C /usr/local -xzf "$TMP" || {
     msg_error "extract go failed"
     rm -f "$TMP"
-    return 251
+    return 1
   }
   rm -f "$TMP"
   ln -sf /usr/local/go/bin/go /usr/local/bin/go
@@ -488,7 +488,7 @@ setup_composer() {
     # Fallback to generic php if 83 not available
     apk add --no-cache php-cli php-openssl php-phar php-iconv >/dev/null 2>&1 || {
       msg_error "Failed to install php-cli for composer"
-      return 100
+      return 1
     }
   }
   msg_ok "PHP CLI ready: $(php -v | head -n1)"
@@ -500,14 +500,14 @@ setup_composer() {
     msg_info "Setup Composer"
   fi
 
-  need_tool curl || return 127
+  need_tool curl || return 1
   curl -fsSL https://getcomposer.org/installer -o /tmp/composer-setup.php || {
     msg_error "composer installer download failed"
-    return 250
+    return 1
   }
   php /tmp/composer-setup.php --install-dir=/usr/local/bin --filename=composer >/dev/null 2>&1 || {
     msg_error "composer install failed"
-    return 150
+    return 1
   }
   rm -f /tmp/composer-setup.php
   ensure_usr_local_bin_persist
@@ -348,10 +348,10 @@ explain_exit_code() {
 json_escape() {
   # Escape a string for safe JSON embedding using awk (handles any input size).
   # Pipeline: strip ANSI → remove control chars → escape \ " TAB → join lines with \n
-  printf '%s' "$1" |
-    sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' |
-    tr -d '\000-\010\013\014\016-\037\177\r' |
-    awk '
+  printf '%s' "$1" \
+    | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' \
+    | tr -d '\000-\010\013\014\016-\037\177\r' \
+    | awk '
     BEGIN { ORS = "" }
     {
       gsub(/\\/, "\\\\") # backslash → \\
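A reduced sketch of the `json_escape` pipeline above can be run offline: strip ANSI color sequences with sed, then escape embedded double quotes in awk. This is an illustration only (the `esc` helper is not the library function, and the full version also removes control characters, escapes backslashes and tabs, and joins lines with `\n`); the `\x1b` escape in sed assumes GNU sed.

```shell
esc() {
  printf '%s' "$1" \
    | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' \
    | awk 'BEGIN{ORS=""} { gsub(/"/, "\\\\\""); print }'
}

# Build a sample string containing an ANSI color sequence and quotes.
s=$(printf '\033[31mred\033[0m says "hi"')
out=$(esc "$s")
echo "$out"
```

Note the awk replacement string: `"\\\\\""` survives both awk string lexing and `gsub`'s own backslash processing to emit a literal `\"`, which is the classic double-escaping pitfall this function has to navigate.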
@@ -504,7 +504,7 @@ detect_gpu() {
   GPU_PASSTHROUGH="unknown"
 
   local gpu_line
-  gpu_line=$(lspci 2>/dev/null | grep -iE "VGA|3D|Display" | head -1 || true)
+  gpu_line=$(lspci 2>/dev/null | grep -iE "VGA|3D|Display" | head -1)
 
   if [[ -n "$gpu_line" ]]; then
     # Extract model: everything after the colon, clean up
@@ -543,7 +543,7 @@ detect_cpu() {
 
   if [[ -f /proc/cpuinfo ]]; then
     local vendor_id
-    vendor_id=$(grep -m1 "vendor_id" /proc/cpuinfo 2>/dev/null | cut -d: -f2 | tr -d ' ' || true)
+    vendor_id=$(grep -m1 "vendor_id" /proc/cpuinfo 2>/dev/null | cut -d: -f2 | tr -d ' ')
 
     case "$vendor_id" in
     GenuineIntel) CPU_VENDOR="intel" ;;
@@ -557,7 +557,7 @@ detect_cpu() {
     esac
 
     # Extract model name and clean it up
-    CPU_MODEL=$(grep -m1 "model name" /proc/cpuinfo 2>/dev/null | cut -d: -f2 | sed 's/^ *//' | sed 's/(R)//g' | sed 's/(TM)//g' | sed 's/ */ /g' | cut -c1-64 || true)
+    CPU_MODEL=$(grep -m1 "model name" /proc/cpuinfo 2>/dev/null | cut -d: -f2 | sed 's/^ *//' | sed 's/(R)//g' | sed 's/(TM)//g' | sed 's/ */ /g' | cut -c1-64)
   fi
 
   export CPU_VENDOR CPU_MODEL
@@ -627,8 +627,8 @@ post_to_api() {
 
   [[ "${DEV_MODE:-}" == "true" ]] && echo "[DEBUG] post_to_api() DIAGNOSTICS=$DIAGNOSTICS RANDOM_UUID=$RANDOM_UUID NSAPP=$NSAPP" >&2
 
-  # Set type for later status updates (preserve if already set, e.g. turnkey)
-  TELEMETRY_TYPE="${TELEMETRY_TYPE:-lxc}"
+  # Set type for later status updates
+  TELEMETRY_TYPE="lxc"
 
   local pve_version=""
   if command -v pveversion &>/dev/null; then
@@ -664,7 +664,7 @@ post_to_api() {
 {
   "random_id": "${RANDOM_UUID}",
   "execution_id": "${EXECUTION_ID:-${RANDOM_UUID}}",
-  "type": "${TELEMETRY_TYPE}",
+  "type": "lxc",
   "nsapp": "${NSAPP:-unknown}",
   "status": "installing",
   "ct_type": ${CT_TYPE:-1},
@@ -692,7 +692,6 @@ EOF
   # Send initial "installing" record with retry.
   # This record MUST exist for all subsequent updates to succeed.
   local http_code="" attempt
-  local _post_success=false
   for attempt in 1 2 3; do
     if [[ "${DEV_MODE:-}" == "true" ]]; then
       http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
@@ -704,19 +703,11 @@ EOF
         -H "Content-Type: application/json" \
         -d "$JSON_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
     fi
-    if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
-      _post_success=true
+    [[ "$http_code" =~ ^2[0-9]{2}$ ]] && break
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
[[ "$attempt" -lt 3 ]] && sleep 1
|
[[ "$attempt" -lt 3 ]] && sleep 1
|
||||||
done
|
done
|
||||||
|
|
||||||
# Only mark done if at least one attempt succeeded.
|
POST_TO_API_DONE=true
|
||||||
# If all 3 failed, POST_TO_API_DONE stays false so post_update_to_api
|
|
||||||
# and on_exit() know the initial record was never created.
|
|
||||||
# The server has fallback logic to create a new record on status updates,
|
|
||||||
# so subsequent calls can still succeed even without the initial record.
|
|
||||||
POST_TO_API_DONE=${_post_success}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
# ------------------------------------------------------------------------------
|
# ------------------------------------------------------------------------------
|
||||||
@@ -807,19 +798,15 @@ EOF
|
|||||||
|
|
||||||
# Send initial "installing" record with retry (must succeed for updates to work)
|
# Send initial "installing" record with retry (must succeed for updates to work)
|
||||||
local http_code="" attempt
|
local http_code="" attempt
|
||||||
local _post_success=false
|
|
||||||
for attempt in 1 2 3; do
|
for attempt in 1 2 3; do
|
||||||
http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
|
http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
|
||||||
-H "Content-Type: application/json" \
|
-H "Content-Type: application/json" \
|
||||||
-d "$JSON_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
|
-d "$JSON_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
|
||||||
if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
|
[[ "$http_code" =~ ^2[0-9]{2}$ ]] && break
|
||||||
_post_success=true
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
[[ "$attempt" -lt 3 ]] && sleep 1
|
[[ "$attempt" -lt 3 ]] && sleep 1
|
||||||
done
|
done
|
||||||
|
|
||||||
POST_TO_API_DONE=${_post_success}
|
POST_TO_API_DONE=true
|
||||||
}
|
}
|
||||||
|
|
||||||
# ------------------------------------------------------------------------------
|
# ------------------------------------------------------------------------------
|
||||||
@@ -1096,12 +1083,6 @@ EOF
|
|||||||
# - Used to group errors in dashboard
|
# - Used to group errors in dashboard
|
||||||
# ------------------------------------------------------------------------------
|
# ------------------------------------------------------------------------------
|
||||||
categorize_error() {
|
categorize_error() {
|
||||||
# Allow build.func to override category based on log analysis (exit code 1 subclassification)
|
|
||||||
if [[ -n "${ERROR_CATEGORY_OVERRIDE:-}" ]]; then
|
|
||||||
echo "$ERROR_CATEGORY_OVERRIDE"
|
|
||||||
return
|
|
||||||
fi
|
|
||||||
|
|
||||||
local code="$1"
|
local code="$1"
|
||||||
case "$code" in
|
case "$code" in
|
||||||
# Network errors (curl/wget)
|
# Network errors (curl/wget)
|
||||||
@@ -1347,8 +1328,8 @@ post_addon_to_api() {
|
|||||||
# Detect OS info
|
# Detect OS info
|
||||||
local os_type="" os_version=""
|
local os_type="" os_version=""
|
||||||
if [[ -f /etc/os-release ]]; then
|
if [[ -f /etc/os-release ]]; then
|
||||||
os_type=$(grep "^ID=" /etc/os-release | cut -d= -f2 | tr -d '"' || true)
|
os_type=$(grep "^ID=" /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||||
os_version=$(grep "^VERSION_ID=" /etc/os-release | cut -d= -f2 | tr -d '"' || true)
|
os_version=$(grep "^VERSION_ID=" /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||||
fi
|
fi
|
||||||
|
|
||||||
local JSON_PAYLOAD
|
local JSON_PAYLOAD
|
||||||
|
|||||||
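The telemetry hunks above revolve around one pattern: a three-attempt POST loop whose outcome is recorded in a `_post_success` flag, with `|| http_code="000"` keeping a failed `curl` from aborting a `set -e` shell. A minimal standalone sketch of that retry-with-flag pattern (not the repo's code: `post_once`, `CODES`, and `POST_DONE` are hypothetical stand-ins for the real curl call and globals):

```shell
#!/usr/bin/env bash
# Sketch: retry a POST up to 3 times and record success in a flag, so
# callers can tell whether the initial record was ever created.
set -euo pipefail

post_once() { # hypothetical stand-in for curl; echoes a fake HTTP code
  echo "$1"
}

post_with_retry() {
  local http_code="" attempt
  local _post_success=false
  for attempt in 1 2 3; do
    # "|| http_code=000" keeps set -e from aborting on a failed request
    http_code=$(post_once "${CODES[$((attempt - 1))]}") || http_code="000"
    if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
      _post_success=true
      break
    fi
  done
  POST_DONE=$_post_success
}

CODES=(500 000 201) # two simulated failures, then success
post_with_retry
echo "POST_DONE=$POST_DONE"
```

Run as-is it prints `POST_DONE=true`; setting `CODES` to all failures leaves the flag `false`, which is the distinction the `POST_TO_API_DONE=${_post_success}` side of the diff preserves.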
misc/build.func (338 changes)
@@ -173,10 +173,10 @@ get_current_ip() {
 # Check for Debian/Ubuntu (uses hostname -I)
 if grep -qE 'ID=debian|ID=ubuntu' /etc/os-release; then
 # Try IPv4 first
-CURRENT_IP=$(hostname -I 2>/dev/null | tr ' ' '\n' | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1 || true)
+CURRENT_IP=$(hostname -I 2>/dev/null | tr ' ' '\n' | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
 # Fallback to IPv6 if no IPv4
 if [[ -z "$CURRENT_IP" ]]; then
-CURRENT_IP=$(hostname -I 2>/dev/null | tr ' ' '\n' | grep -E ':' | head -n1 || true)
+CURRENT_IP=$(hostname -I 2>/dev/null | tr ' ' '\n' | grep -E ':' | head -n1)
 fi
 # Check for Alpine (uses ip command)
 elif grep -q 'ID=alpine' /etc/os-release; then
@@ -221,14 +221,6 @@ update_motd_ip() {
 local current_hostname="$(hostname)"
 local current_ip="$(hostname -I | awk '{print $1}')"
 
-# Escape sed special chars in replacement strings (& \ |)
-current_os="${current_os//\\/\\\\}"
-current_os="${current_os//&/\\&}"
-current_hostname="${current_hostname//\\/\\\\}"
-current_hostname="${current_hostname//&/\\&}"
-current_ip="${current_ip//\\/\\\\}"
-current_ip="${current_ip//&/\\&}"
-
 # Update only if values actually changed
 if ! grep -q "OS:.*$current_os" "$PROFILE_FILE" 2>/dev/null; then
 sed -i "s|OS:.*|OS: \${GN}$current_os\${CL}\\\"|" "$PROFILE_FILE"
@@ -267,12 +259,12 @@ install_ssh_keys_into_ct() {
 msg_info "Installing selected SSH keys into CT ${CTID}"
 pct exec "$CTID" -- sh -c 'mkdir -p /root/.ssh && chmod 700 /root/.ssh' || {
 msg_error "prepare /root/.ssh failed"
-return 252
+return 1
 }
 pct push "$CTID" "$SSH_KEYS_FILE" /root/.ssh/authorized_keys >/dev/null 2>&1 ||
 pct exec "$CTID" -- sh -c "cat > /root/.ssh/authorized_keys" <"$SSH_KEYS_FILE" || {
 msg_error "write authorized_keys failed"
-return 252
+return 1
 }
 pct exec "$CTID" -- sh -c 'chmod 600 /root/.ssh/authorized_keys' || true
 msg_ok "Installed SSH keys into CT ${CTID}"
@@ -537,10 +529,6 @@ validate_gateway_in_subnet() {
 local ip="${static_ip%%/*}"
 local cidr="${static_ip##*/}"
 
-# /31 and /32 are valid point-to-point / zero-trust DMZ configurations
-# where the gateway is technically outside the subnet — skip validation
-((cidr >= 31)) && return 0
-
 # Convert CIDR to netmask bits
 local mask=$((0xFFFFFFFF << (32 - cidr) & 0xFFFFFFFF))
 
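The `validate_gateway_in_subnet()` hunk ends in the mask computation `mask=$((0xFFFFFFFF << (32 - cidr) & 0xFFFFFFFF))`. A self-contained sketch of that arithmetic, extended to print the dotted-quad netmask (the helper name is made up; only the mask expression comes from the diff):

```shell
#!/usr/bin/env bash
# Sketch: build a 32-bit netmask from a CIDR prefix length and print it
# as a dotted quad. Mirrors the shift-and-mask expression in the hunk.
set -euo pipefail

cidr_to_netmask() {
  local cidr=$1
  local mask=$(( (0xFFFFFFFF << (32 - cidr)) & 0xFFFFFFFF ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 0xFF )) $(( (mask >> 16) & 0xFF )) \
    $(( (mask >> 8) & 0xFF ))  $((  mask        & 0xFF ))
}

cidr_to_netmask 24 # 255.255.255.0
cidr_to_netmask 31 # 255.255.255.254
```

The /31 case hints at why the removed guard existed: a /31 leaves no room for a conventional in-subnet gateway, so subnet validation has to be skipped there rather than failed.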
@@ -839,7 +827,7 @@ choose_and_set_storage_for_file() {
 template) key="var_template_storage" ;;
 *)
 msg_error "Unknown storage class: $class"
-return 65
+return 1
 ;;
 esac
 
@@ -862,7 +850,7 @@ choose_and_set_storage_for_file() {
 fi
 else
 # If the current value is preselectable, we could show it, but per your requirement we always offer selection
-select_storage "$class" || return 150
+select_storage "$class" || return 1
 fi
 
 _write_storage_to_vars "$vf" "$key" "$STORAGE_RESULT"
@@ -997,10 +985,8 @@ base_settings() {
 fi
 
 MTU=${var_mtu:-""}
-_sd_val="${var_searchdomain:-""}"
-[[ -n "$_sd_val" ]] && SD="-searchdomain=$_sd_val" || SD=""
-_ns_val="${var_ns:-""}"
-[[ -n "$_ns_val" ]] && NS="-nameserver=$_ns_val" || NS=""
+SD=${var_searchdomain:-""}
+NS=${var_ns:-""}
 MAC=${var_mac:-""}
 VLAN=${var_vlan:-""}
 SSH=${var_ssh:-"no"}
@@ -1264,7 +1250,7 @@ default_var_settings() {
 return 0
 }
 done
-return 252
+return 1
 }
 # Allow override of storages via env (for non-interactive use cases)
 [ -n "${var_template_storage:-}" ] && TEMPLATE_STORAGE="$var_template_storage"
@@ -1357,7 +1343,7 @@ EOF
 local dv
 dv="$(_find_default_vars)" || {
 msg_error "default.vars not found after ensure step"
-return 252
+return 1
 }
 load_vars_file "$dv"
 
@@ -1642,7 +1628,7 @@ maybe_offer_save_app_defaults() {
 if whiptail --backtitle "Proxmox VE Helper Scripts" \
 --yesno "Save these advanced settings as defaults for ${APP}?\n\nThis will create:\n${app_vars_path}" 12 72; then
 mkdir -p "$(dirname "$app_vars_path")"
-/usr/bin/install -m 0644 "$new_tmp" "$app_vars_path"
+install -m 0644 "$new_tmp" "$app_vars_path"
 msg_ok "Saved app defaults: ${app_vars_path}"
 fi
 rm -f "$new_tmp" "$diff_tmp"
@@ -1676,7 +1662,7 @@ maybe_offer_save_app_defaults() {
 
 case "$sel" in
 "Update Defaults")
-/usr/bin/install -m 0644 "$new_tmp" "$app_vars_path"
+install -m 0644 "$new_tmp" "$app_vars_path"
 msg_ok "Updated app defaults: ${app_vars_path}"
 break
 ;;
@@ -1704,8 +1690,8 @@ ensure_storage_selection_for_vars_file() {
 
 # Read stored values (if any)
 local tpl ct
-tpl=$(grep -E '^var_template_storage=' "$vf" | cut -d= -f2- || true)
-ct=$(grep -E '^var_container_storage=' "$vf" | cut -d= -f2- || true)
+tpl=$(grep -E '^var_template_storage=' "$vf" | cut -d= -f2-)
+ct=$(grep -E '^var_container_storage=' "$vf" | cut -d= -f2-)
 
 if [[ -n "$tpl" && -n "$ct" ]]; then
 TEMPLATE_STORAGE="$tpl"
@@ -1840,7 +1826,7 @@ advanced_settings() {
 if [[ -n "$BRIDGES" ]]; then
 while IFS= read -r bridge; do
 if [[ -n "$bridge" ]]; then
-local description=$(grep -A 10 "iface $bridge" /etc/network/interfaces 2>/dev/null | grep '^#' | head -n1 | sed 's/^#\s*//;s/^[- ]*//' || true)
+local description=$(grep -A 10 "iface $bridge" /etc/network/interfaces 2>/dev/null | grep '^#' | head -n1 | sed 's/^#\s*//;s/^[- ]*//')
 BRIDGE_MENU_OPTIONS+=("$bridge" "${description:- }")
 fi
 done <<<"$BRIDGES"
@@ -3322,7 +3308,7 @@ configure_ssh_settings() {
 tag="${tag%\"}"
 tag="${tag#\"}"
 local line
-line=$(grep -E "^${tag}\|" "$MAPFILE" | head -n1 | cut -d'|' -f2- || true)
+line=$(grep -E "^${tag}\|" "$MAPFILE" | head -n1 | cut -d'|' -f2-)
 [[ -n "$line" ]] && printf '%s\n' "$line" >>"$SSH_KEYS_FILE"
 done
 ;;
@@ -3349,7 +3335,7 @@ configure_ssh_settings() {
 tag="${tag%\"}"
 tag="${tag#\"}"
 local line
-line=$(grep -E "^${tag}\|" "$MAPFILE" | head -n1 | cut -d'|' -f2- || true)
+line=$(grep -E "^${tag}\|" "$MAPFILE" | head -n1 | cut -d'|' -f2-)
 [[ -n "$line" ]] && printf '%s\n' "$line" >>"$SSH_KEYS_FILE"
 done
 else
@@ -3530,7 +3516,6 @@ build_container() {
 # Gateway
 if [[ -n "$GATE" ]]; then
 case "$GATE" in
-,gw=) ;;
 ,gw=*) NET_STRING+="$GATE" ;;
 *) NET_STRING+=",gw=$GATE" ;;
 esac
@@ -3557,10 +3542,8 @@ build_container() {
 auto) NET_STRING="$NET_STRING,ip6=auto" ;;
 dhcp) NET_STRING="$NET_STRING,ip6=dhcp" ;;
 static)
-if [[ -n "$IPV6_ADDR" ]]; then
-NET_STRING="$NET_STRING,ip6=$IPV6_ADDR"
-[ -n "$IPV6_GATE" ] && NET_STRING="$NET_STRING,gw6=$IPV6_GATE"
-fi
+NET_STRING="$NET_STRING,ip6=$IPV6_ADDR"
+[ -n "$IPV6_GATE" ] && NET_STRING="$NET_STRING,gw6=$IPV6_GATE"
 ;;
 none) ;;
 esac
@@ -4051,7 +4034,7 @@ EOF
 # Fix Debian 13 LXC template bug where / is owned by nobody:nogroup
 # This must be done from the host as unprivileged containers cannot chown /
 local rootfs
-rootfs=$(pct config "$CTID" | grep -E '^rootfs:' | sed 's/rootfs: //' | cut -d',' -f1 || true)
+rootfs=$(pct config "$CTID" | grep -E '^rootfs:' | sed 's/rootfs: //' | cut -d',' -f1)
 if [[ -n "$rootfs" ]]; then
 local mount_point="/var/lib/lxc/${CTID}/rootfs"
 if [[ -d "$mount_point" ]] && [[ "$(stat -c '%U' "$mount_point")" != "root" ]]; then
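The last hunk above extracts the container's root volume from `pct config` output with a `grep | sed | cut` pipeline. A standalone sketch of that parsing against a canned config line (the sample line is invented; on a real host the input would come from `pct config <CTID>`):

```shell
#!/usr/bin/env bash
# Sketch: pull the volume id out of a "rootfs:" config line, dropping
# the "rootfs: " prefix and any trailing ",key=value" options.
set -euo pipefail

cfg='rootfs: local-lvm:vm-101-disk-0,size=8G'
rootfs=$(printf '%s\n' "$cfg" | grep -E '^rootfs:' | sed 's/rootfs: //' | cut -d',' -f1 || true)
echo "$rootfs" # local-lvm:vm-101-disk-0
```

The trailing `|| true` matches the base branch's defensive style: a config with no `rootfs:` line yields an empty string instead of tripping `set -e` via grep's exit status.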
@@ -4085,42 +4068,17 @@ EOF
 if [ "$var_os" == "alpine" ]; then
 sleep 3
 pct exec "$CTID" -- /bin/sh -c 'cat <<EOF >/etc/apk/repositories
-https://dl-cdn.alpinelinux.org/alpine/latest-stable/main
-https://dl-cdn.alpinelinux.org/alpine/latest-stable/community
+http://dl-cdn.alpinelinux.org/alpine/latest-stable/main
+http://dl-cdn.alpinelinux.org/alpine/latest-stable/community
 EOF'
 pct exec "$CTID" -- ash -c "apk add bash newt curl openssh nano mc ncurses jq" >>"$BUILD_LOG" 2>&1 || {
-msg_warn "apk install failed (dl-cdn.alpinelinux.org), trying alternate mirrors..."
-local alpine_exit=0
-pct exec "$CTID" -- ash -c '
-ALPINE_MIRRORS="mirror.init7.net ftp.halifax.rwth-aachen.de mirrors.edge.kernel.org alpine.mirror.wearetriple.com mirror.leaseweb.com uk.alpinelinux.org dl-2.alpinelinux.org dl-4.alpinelinux.org"
-for m in $(printf "%s\n" $ALPINE_MIRRORS | shuf); do
-if wget -q --spider --timeout=2 "http://$m/alpine/latest-stable/main/" 2>/dev/null; then
-echo " Attempting mirror: $m"
-cat <<EOF >/etc/apk/repositories
-http://$m/alpine/latest-stable/main
-http://$m/alpine/latest-stable/community
-EOF
-if apk update >/dev/null 2>&1 && apk add bash newt curl openssh nano mc ncurses jq >/dev/null 2>&1; then
-echo " CDN set to $m: tests passed"
-exit 0
-else
-echo " Mirror $m failed"
-fi
-fi
-done
-exit 2
-' && alpine_exit=0 || alpine_exit=$?
-if [[ $alpine_exit -ne 0 ]]; then
-msg_error "Failed to install base packages in Alpine container"
-install_exit_code=1
-fi
+msg_error "Failed to install base packages in Alpine container"
+install_exit_code=1
 }
 else
 sleep 3
 LANG=${LANG:-en_US.UTF-8}
-local LANG_ESC="${LANG//./\\.}"
-LANG_ESC="${LANG_ESC//|/\\|}"
-pct exec "$CTID" -- bash -c "sed -i \"/$LANG_ESC/ s/^# //\" /etc/locale.gen"
+pct exec "$CTID" -- bash -c "sed -i \"/$LANG/ s/^# //\" /etc/locale.gen"
 pct exec "$CTID" -- bash -c "locale_line=\$(grep -v '^#' /etc/locale.gen | grep -E '^[a-zA-Z]' | awk '{print \$1}' | head -n 1) && \
 echo LANG=\$locale_line >/etc/default/locale && \
 locale-gen >/dev/null && \
@@ -4139,140 +4097,9 @@ EOF
 msg_warn "Skipping timezone setup – zone '$tz' not found in container"
 fi
 
-# Detect broken DNS resolver (e.g. Tailscale MagicDNS) and inject public DNS
-if ! pct exec "$CTID" -- bash -c "getent hosts deb.debian.org >/dev/null 2>&1 && getent hosts archive.ubuntu.com >/dev/null 2>&1"; then
-msg_warn "APT repository DNS resolution failed in container, injecting public DNS servers"
-pct exec "$CTID" -- bash -c "echo -e 'nameserver 8.8.8.8\nnameserver 1.1.1.1' >/etc/resolv.conf"
-fi
-
 pct exec "$CTID" -- bash -c "apt-get update 2>&1 && apt-get install -y sudo curl mc gnupg2 jq 2>&1" >>"$BUILD_LOG" 2>&1 || {
-local failed_mirror
-failed_mirror=$(pct exec "$CTID" -- bash -c "grep -m1 -oP '(?<=URIs: https?://)[^/]+' /etc/apt/sources.list.d/debian.sources 2>/dev/null || grep -m1 -oP '(?<=deb https?://)[^/]+' /etc/apt/sources.list 2>/dev/null" 2>/dev/null || echo "unknown")
-msg_warn "apt-get update failed (${failed_mirror}), trying alternate mirrors..."
-local mirror_exit=0
-pct exec "$CTID" -- bash -c '
-APT_BASE="sudo curl mc gnupg2 jq"
-DISTRO=$(. /etc/os-release 2>/dev/null && echo "$ID" || echo "debian")
-
-if [ "$DISTRO" = "ubuntu" ]; then
-EU_MIRRORS="de.archive.ubuntu.com fr.archive.ubuntu.com se.archive.ubuntu.com nl.archive.ubuntu.com it.archive.ubuntu.com ch.archive.ubuntu.com mirrors.xtom.de"
-US_MIRRORS="us.archive.ubuntu.com archive.ubuntu.com mirrors.edge.kernel.org mirror.csclub.uwaterloo.ca mirrors.ocf.berkeley.edu mirror.math.princeton.edu"
-AP_MIRRORS="au.archive.ubuntu.com jp.archive.ubuntu.com kr.archive.ubuntu.com tw.archive.ubuntu.com mirror.aarnet.edu.au"
-else
-EU_MIRRORS="ftp.de.debian.org ftp.fr.debian.org ftp.nl.debian.org ftp.uk.debian.org ftp.ch.debian.org ftp.se.debian.org ftp.it.debian.org ftp.fau.de ftp.halifax.rwth-aachen.de debian.mirror.lrz.de mirror.init7.net debian.ethz.ch mirrors.dotsrc.org debian.mirrors.ovh.net"
-US_MIRRORS="ftp.us.debian.org ftp.ca.debian.org debian.csail.mit.edu mirrors.ocf.berkeley.edu mirrors.wikimedia.org debian.osuosl.org mirror.cogentco.com"
-AP_MIRRORS="ftp.au.debian.org ftp.jp.debian.org ftp.tw.debian.org ftp.kr.debian.org ftp.hk.debian.org ftp.sg.debian.org mirror.aarnet.edu.au mirror.nitc.ac.in"
-fi
-
-TZ=$(cat /etc/timezone 2>/dev/null || echo "UTC")
-case "$TZ" in
-Europe/*|Arctic/*) REGIONAL="$EU_MIRRORS"; OTHERS="$US_MIRRORS $AP_MIRRORS" ;;
-America/*) REGIONAL="$US_MIRRORS"; OTHERS="$EU_MIRRORS $AP_MIRRORS" ;;
-Asia/*|Australia/*|Pacific/*) REGIONAL="$AP_MIRRORS"; OTHERS="$EU_MIRRORS $US_MIRRORS" ;;
-*) REGIONAL=""; OTHERS="$EU_MIRRORS $US_MIRRORS $AP_MIRRORS" ;;
-esac
-
-echo "Acquire::By-Hash \"no\";" >/etc/apt/apt.conf.d/99no-by-hash
-
-try_mirrors() {
-for src in /etc/apt/sources.list.d/debian.sources /etc/apt/sources.list; do
-[ -f "$src" ] && sed -i "s|URIs: http[s]*://[^/]*/|URIs: http://${1}/|g; s|deb http[s]*://[^/]*/|deb http://${1}/|g" "$src"
-done
-rm -rf /var/lib/apt/lists/*
-APT_OUT=$(apt-get update 2>&1)
-APT_RC=$?
-if echo "$APT_OUT" | grep -qi "hashsum\|hash sum"; then
-echo " Mirror $1 failed (hash mismatch)"
-return 1
-elif echo "$APT_OUT" | grep -qi "SSL\|certificate"; then
-echo " Mirror $1 failed (SSL/certificate error)"
-return 1
-elif [ $APT_RC -ne 0 ]; then
-echo " Mirror $1 failed (apt-get update error)"
-return 1
-elif apt-get install -y $APT_BASE >/dev/null 2>&1; then
-echo " CDN set to $1: tests passed"
-return 0
-else
-echo " Mirror $1 failed (package install error)"
-return 1
-fi
-}
-
-scan_reachable() {
-local result=""
-for m in $1; do
-if timeout 2 bash -c "echo >/dev/tcp/$m/80" 2>/dev/null; then
-result="$result $m"
-fi
-done
-echo "$result" | xargs
-}
-
-# Phase 1: Scan global mirrors first (independent of local CDN issues)
-OTHERS_OK=$(scan_reachable "$OTHERS")
-OTHERS_PICK=$(printf "%s\n" $OTHERS_OK | shuf | head -3 | xargs)
-
-for mirror in $OTHERS_PICK; do
-echo " Attempting mirror: $mirror"
-try_mirrors "$mirror" && exit 0
-done
-
-# Phase 2: Try primary mirror
-if [ "$DISTRO" = "ubuntu" ]; then
-PRIMARY="archive.ubuntu.com"
-else
-PRIMARY="ftp.debian.org"
-fi
-if timeout 2 bash -c "echo >/dev/tcp/$PRIMARY/80" 2>/dev/null; then
-echo " Attempting mirror: $PRIMARY"
-try_mirrors "$PRIMARY" && exit 0
-fi
-
-# Phase 3: Fall back to regional mirrors
-REGIONAL_OK=$(scan_reachable "$REGIONAL")
-REGIONAL_PICK=$(printf "%s\n" $REGIONAL_OK | shuf | head -3 | xargs)
-
-for mirror in $REGIONAL_PICK; do
-echo " Attempting mirror: $mirror"
-try_mirrors "$mirror" && exit 0
-done
-
-exit 2
-' && mirror_exit=0 || mirror_exit=$?
-if [[ $mirror_exit -eq 2 ]]; then
-msg_warn "Multiple mirrors failed (possible CDN synchronization issue)."
-if [[ "$var_os" == "ubuntu" ]]; then
-msg_warn "Find Ubuntu mirrors at: https://launchpad.net/ubuntu/+archivemirrors"
-else
-msg_warn "Find Debian mirrors at: https://www.debian.org/mirror/list"
-fi
-local custom_mirror=""
-while true; do
-read -rp " Enter a mirror hostname (or 'skip' to abort): " custom_mirror </dev/tty
-[[ -z "$custom_mirror" ]] && continue
-[[ "$custom_mirror" == "skip" ]] && break
-[[ ! "$custom_mirror" =~ ^[a-zA-Z0-9._-]+$ ]] && {
-msg_warn "Invalid hostname format."
-continue
-}
-pct exec "$CTID" -- bash -c "
-for src in /etc/apt/sources.list.d/debian.sources /etc/apt/sources.list; do
-[ -f \"\$src\" ] && sed -i \"s|URIs: http[s]*://[^/]*/|URIs: http://${custom_mirror}/|g; s|deb http[s]*://[^/]*/|deb http://${custom_mirror}/|g\" \"\$src\"
-done
-rm -rf /var/lib/apt/lists/*
-apt-get update >/dev/null 2>&1 && apt-get install -y sudo curl mc gnupg2 jq >/dev/null 2>&1
-" && break
-msg_warn "Mirror '${custom_mirror}' also failed. Try another or type 'skip'."
-done
-if [[ "$custom_mirror" == "skip" ]]; then
-msg_error "apt-get base packages installation failed"
-install_exit_code=1
-fi
-elif [[ $mirror_exit -ne 0 ]]; then
-msg_error "apt-get base packages installation failed"
-install_exit_code=1
-fi
+msg_error "apt-get base packages installation failed"
+install_exit_code=1
 }
 fi
 
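The removed mirror-fallback logic rewrites APT source URIs in place with a single `sed` expression covering both deb822 (`URIs:`) and one-line (`deb ...`) formats. A sketch of just that rewrite against a throwaway file (GNU `sed -i` is assumed, as in the diff; `swap_mirror` is a made-up wrapper name):

```shell
#!/usr/bin/env bash
# Sketch: swap every http(s) host in an APT source file for a chosen
# mirror. Uses temp files, not the real /etc/apt layout.
set -euo pipefail

swap_mirror() { # $1 = new mirror host, $2 = sources file
  sed -i "s|URIs: http[s]*://[^/]*/|URIs: http://${1}/|g; s|deb http[s]*://[^/]*/|deb http://${1}/|g" "$2"
}

src=$(mktemp)
printf 'deb https://deb.debian.org/debian trixie main\n' >"$src"
swap_mirror "ftp.de.debian.org" "$src"
result=$(cat "$src")
echo "$result" # deb http://ftp.de.debian.org/debian trixie main
rm -f "$src"
```

Note the deliberate downgrade to `http://` regardless of the original scheme; combined with `Acquire::By-Hash "no"` in the removed block, this trades transport hardening for compatibility with partially synced mirrors.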
@@ -4381,53 +4208,6 @@ EOF
 fi
 fi
 
-# Defense-in-depth: Ensure error handling stays disabled during recovery.
-# Some functions (e.g. silent/$STD) unconditionally re-enable set -Eeuo pipefail
-# and trap 'error_handler' ERR. If any code path above called such a function,
-# the grep/sed pipelines below would trigger error_handler on non-match (exit 1).
-set +Eeuo pipefail
-trap - ERR
-
-# --- Exit code 1 subclassification: analyze logs BEFORE telemetry call ---
-# Exit code 1 is generic ("General error"). Analyze logs to determine the
-# real error category so telemetry gets a useful classification instead of "shell".
-local is_oom=false
-local is_network_issue=false
-local is_apt_issue=false
-local is_cmd_not_found=false
-local is_disk_full=false
-
-if [[ $install_exit_code -eq 1 && -f "$combined_log" ]]; then
-if grep -qiE 'E: Unable to|E: Package|E: Failed to fetch|dpkg.*error|broken packages|unmet dependencies|dpkg --configure -a' "$combined_log"; then
-is_apt_issue=true
-fi
-if grep -qiE 'Cannot allocate memory|Out of memory|oom-killer|Killed process|JavaScript heap' "$combined_log"; then
-is_oom=true
-fi
-if grep -qiE 'Could not resolve|DNS|Connection refused|Network is unreachable|No route to host|Temporary failure resolving|Failed to fetch' "$combined_log"; then
-is_network_issue=true
-fi
-if grep -qiE ': command not found|No such file or directory.*/s?bin/' "$combined_log"; then
-is_cmd_not_found=true
-fi
-if grep -qiE 'ENOSPC|no space left on device|Disk quota exceeded|errno -28' "$combined_log"; then
-is_disk_full=true
-fi
-fi
-
-# Set override for categorize_error() so telemetry gets the real category
-if [[ "$is_apt_issue" == true ]]; then
-export ERROR_CATEGORY_OVERRIDE="dependency"
-elif [[ "$is_oom" == true ]]; then
-export ERROR_CATEGORY_OVERRIDE="resource"
-elif [[ "$is_network_issue" == true ]]; then
-export ERROR_CATEGORY_OVERRIDE="network"
-elif [[ "$is_disk_full" == true ]]; then
-export ERROR_CATEGORY_OVERRIDE="storage"
-elif [[ "$is_cmd_not_found" == true ]]; then
-export ERROR_CATEGORY_OVERRIDE="dependency"
-fi
-
 # Report failure to telemetry API (now with log available on host)
 # NOTE: Do NOT use msg_info/spinner here — the background spinner process
 # causes SIGTSTP in non-interactive shells (bash -c "$(curl ...)"), which
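The subclassification block above maps a generic exit code 1 onto a concrete category by grepping the combined install log. A trimmed, standalone sketch of that idea (`categorize_log` is invented and the patterns are abbreviated from the diff; the real code sets `is_*` flags and `ERROR_CATEGORY_OVERRIDE` rather than echoing a category):

```shell
#!/usr/bin/env bash
# Sketch: classify an exit-1 install log into a coarse error category
# by pattern-matching known failure signatures, most specific first.
set -euo pipefail

categorize_log() { # $1 = path to combined install log
  if grep -qiE 'unmet dependencies|broken packages|E: Unable to' "$1"; then
    echo dependency
  elif grep -qiE 'Out of memory|oom-killer|Cannot allocate memory' "$1"; then
    echo resource
  elif grep -qiE 'Could not resolve|Network is unreachable|No route to host' "$1"; then
    echo network
  elif grep -qiE 'ENOSPC|no space left on device' "$1"; then
    echo storage
  else
    echo shell # fallback: generic shell error
  fi
}

log=$(mktemp)
echo "E: Unable to locate package foo" >"$log"
category=$(categorize_log "$log")
echo "$category" # dependency
rm -f "$log"
```

Ordering matters here: APT signatures are checked before network ones because an apt failure log often also contains fetch errors, and the more specific category is the more actionable one.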
@@ -4436,6 +4216,13 @@ EOF
|
|||||||
post_update_to_api "failed" "$install_exit_code"
|
post_update_to_api "failed" "$install_exit_code"
|
||||||
$STD echo -e "${TAB}${CM:-✔} Failure reported"
|
$STD echo -e "${TAB}${CM:-✔} Failure reported"
|
||||||
|
|
||||||
|
# Defense-in-depth: Ensure error handling stays disabled during recovery.
|
||||||
|
# Some functions (e.g. silent/$STD) unconditionally re-enable set -Eeuo pipefail
|
||||||
|
# and trap 'error_handler' ERR. If any code path above called such a function,
|
||||||
|
# the grep/sed pipelines below would trigger error_handler on non-match (exit 1).
|
||||||
|
set +Eeuo pipefail
|
||||||
|
trap - ERR
|
||||||
|
|
||||||
# Show combined log location
|
# Show combined log location
|
||||||
if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
|
if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
|
||||||
msg_custom "📋" "${YW}" "Installation log: ${combined_log}"
|
msg_custom "📋" "${YW}" "Installation log: ${combined_log}"
|
||||||
@@ -4464,9 +4251,12 @@ EOF
 # Prompt user for cleanup with 60s timeout
 echo ""
 
-# Extend error detection for non-exit-1 codes (exit 1 was already analyzed above)
-# The is_* flags were set above for exit code 1 log analysis; here we add
-# exit-code-specific detections for other codes.
+# Detect error type for smart recovery options
+local is_oom=false
+local is_network_issue=false
+local is_apt_issue=false
+local is_cmd_not_found=false
+local is_disk_full=false
 local error_explanation=""
 if declare -f explain_exit_code >/dev/null 2>&1; then
 error_explanation="$(explain_exit_code "$install_exit_code")"
@@ -4516,6 +4306,26 @@ EOF
 ;;
 esac
 
+# Exit 1 subclassification: analyze logs to identify actual root cause
+# Many exit 1 errors are actually APT, OOM, network, or command-not-found issues
+if [[ $install_exit_code -eq 1 && -f "$combined_log" ]]; then
+if grep -qiE 'E: Unable to|E: Package|E: Failed to fetch|dpkg.*error|broken packages|unmet dependencies|dpkg --configure -a' "$combined_log"; then
+is_apt_issue=true
+fi
+if grep -qiE 'Cannot allocate memory|Out of memory|oom-killer|Killed process|JavaScript heap' "$combined_log"; then
+is_oom=true
+fi
+if grep -qiE 'Could not resolve|DNS|Connection refused|Network is unreachable|No route to host|Temporary failure resolving|Failed to fetch' "$combined_log"; then
+is_network_issue=true
+fi
+if grep -qiE ': command not found|No such file or directory.*/s?bin/' "$combined_log"; then
+is_cmd_not_found=true
+fi
+if grep -qiE 'ENOSPC|no space left on device|Disk quota exceeded|errno -28' "$combined_log"; then
+is_disk_full=true
+fi
+fi
+
 # Show error explanation if available
 if [[ -n "$error_explanation" ]]; then
 echo -e "${TAB}${RD}Error: ${error_explanation}${CL}"
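The block added in the hunk above greps the combined log to subclassify a generic exit 1 into APT, OOM, network, or disk-full failures. The idea can be exercised in isolation; `classify_failure` below is a hypothetical helper using a trimmed subset of the same signatures, not a function from the repo:

```shell
#!/usr/bin/env bash
# Map a failure log to a coarse category by signature, mirroring the
# grep -qiE checks in the hunk above (patterns abbreviated).
classify_failure() {
  local log="$1"
  if grep -qiE 'E: Unable to|unmet dependencies|dpkg.*error' "$log"; then
    echo "apt"
  elif grep -qiE 'Cannot allocate memory|Out of memory|oom-killer' "$log"; then
    echo "oom"
  elif grep -qiE 'Could not resolve|Connection refused|No route to host' "$log"; then
    echo "network"
  elif grep -qiE 'ENOSPC|no space left on device' "$log"; then
    echo "storage"
  else
    echo "unknown"
  fi
}

log="$(mktemp)"
echo "E: Unable to locate package foo" >"$log"
category="$(classify_failure "$log")"
echo "$category" # apt
rm -f "$log"
```

Ordering matters: the first matching class wins, so the most specific signatures should be checked first.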
@@ -4717,7 +4527,6 @@ EOF
 
 if [[ $apt_retry_code -eq 0 ]]; then
 msg_ok "Installation completed successfully after APT repair!"
-INSTALL_COMPLETE=true
 post_update_to_api "done" "0" "force"
 return 0
 else
@@ -4845,7 +4654,7 @@ EOF
 destroy_lxc() {
 if [[ -z "$CT_ID" ]]; then
 msg_error "No CT_ID found. Nothing to remove."
-return 65
+return 1
 fi
 
 # Abort on Ctrl-C / Ctrl-D / ESC
@@ -4884,12 +4693,12 @@ resolve_storage_preselect() {
 case "$class" in
 template) required_content="vztmpl" ;;
 container) required_content="rootdir" ;;
-*) return 65 ;;
+*) return 1 ;;
 esac
 [[ -z "$preselect" ]] && return 1
 if ! pvesm status -content "$required_content" | awk 'NR>1{print $1}' | grep -qx -- "$preselect"; then
 msg_warn "Preselected storage '${preselect}' does not support content '${required_content}' (or not found)"
-return 238
+return 1
 fi
 
 local line total used free
@@ -4942,10 +4751,6 @@ fix_gpu_gids() {
 pct stop "$CTID" >/dev/null 2>&1
 sleep 1
 
-# Validate GIDs are numeric before sed
-[[ "$render_gid" =~ ^[0-9]+$ ]] || render_gid="104"
-[[ "$video_gid" =~ ^[0-9]+$ ]] || video_gid="44"
-
 # Update dev entries with correct GIDs
 sed -i.bak -E "s|(dev[0-9]+: /dev/dri/renderD[0-9]+),gid=[0-9]+|\1,gid=${render_gid}|g" "$LXC_CONFIG"
 sed -i -E "s|(dev[0-9]+: /dev/dri/card[0-9]+),gid=[0-9]+|\1,gid=${video_gid}|g" "$LXC_CONFIG"
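The lines removed in the `fix_gpu_gids` hunk guarded the `sed` substitutions against non-numeric GIDs, falling back to the conventional Debian values (render 104, video 44). A standalone sketch of that guard plus one of the substitutions, using a hypothetical `sanitize_gid` helper and a sample config line:

```shell
#!/usr/bin/env bash
# Return the GID if numeric, else a fallback - same regex as the removed guard.
sanitize_gid() {
  local gid="$1" fallback="$2"
  [[ "$gid" =~ ^[0-9]+$ ]] && echo "$gid" || echo "$fallback"
}

render_gid="$(sanitize_gid "not-a-gid" "104")" # non-numeric, so falls back
conf='dev0: /dev/dri/renderD128,gid=107'
# Same substitution as the diff, applied to a string instead of $LXC_CONFIG.
fixed="$(sed -E "s|(dev[0-9]+: /dev/dri/renderD[0-9]+),gid=[0-9]+|\1,gid=${render_gid}|" <<<"$conf")"
echo "$fixed" # dev0: /dev/dri/renderD128,gid=104
```

Without the guard, a garbage `render_gid` would be pasted verbatim into the container config by `sed`, producing an invalid `devN:` entry.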
@@ -5013,7 +4818,7 @@ select_storage() {
 ;;
 *)
 msg_error "Invalid storage class '$CLASS'"
-return 65
+return 1
 ;;
 esac
 
@@ -5095,7 +4900,7 @@ validate_storage_space() {
 # Check if storage exists and is active
 if [[ -z "$storage_line" ]]; then
 [[ "$show_dialog" == "yes" ]] && whiptail --msgbox "⚠️ Warning: Storage '$storage' not found!\n\nThe storage may be unavailable or disabled." 10 60
-return 236
+return 2
 fi
 
 # Check storage status (column 3)
@@ -5103,7 +4908,7 @@ validate_storage_space() {
 status=$(awk '{print $3}' <<<"$storage_line")
 if [[ "$status" == "disabled" ]]; then
 [[ "$show_dialog" == "yes" ]] && whiptail --msgbox "⚠️ Warning: Storage '$storage' is disabled!\n\nPlease enable the storage first." 10 60
-return 236
+return 2
 fi
 
 # Get storage type and free space (column 6)
@@ -5126,7 +4931,7 @@ validate_storage_space() {
 if [[ "$show_dialog" == "yes" ]]; then
 whiptail --msgbox "⚠️ Warning: Storage '$storage' may not have enough space!\n\nStorage Type: ${storage_type}\nRequired: ${required_gb}GB\nAvailable: ${free_gb_fmt}\n\nYou can continue, but creation might fail." 14 70
 fi
-return 236
+return 1
 fi
 
 return 0
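One side of these hunks returns distinct codes (65, 236, 2, ...) so a caller can branch on `$?` instead of parsing messages; the other collapses everything to generic values. A toy illustration of why distinct codes are useful (the function and the chosen codes are illustrative, not the repo's contract):

```shell
#!/usr/bin/env bash
# Distinct return codes let the caller tell "storage unavailable"
# apart from "storage too small" without parsing any output.
check_storage() {
  case "$1" in
    missing | disabled) return 236 ;;
    full) return 1 ;;
    *) return 0 ;;
  esac
}

check_storage missing
rc=$?
case "$rc" in
  236) verdict="storage unavailable" ;;
  1) verdict="storage too small" ;;
  0) verdict="ok" ;;
esac
echo "$verdict" # storage unavailable
```

The trade-off: distinct codes are self-documenting for callers, but every call site that compares against a literal must be kept in sync when the codes change, which is exactly the churn this diff shows.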
@@ -5189,7 +4994,7 @@ create_lxc_container() {
 return 0
 fi
 
-msg_warn "An update for the Proxmox LXC stack is available"
+msg_info "An update for the Proxmox LXC stack is available"
 echo " pve-container: installed=${_pvec_i:-n/a} candidate=${_pvec_c:-n/a}"
 echo " lxc-pve : installed=${_lxcp_i:-n/a} candidate=${_lxcp_c:-n/a}"
 echo
@@ -5297,7 +5102,7 @@ create_lxc_container() {
 fi
 
 msg_info "Validating storage '$CONTAINER_STORAGE'"
-STORAGE_TYPE=$(grep -E "^[^:]+: $CONTAINER_STORAGE$" /etc/pve/storage.cfg | cut -d: -f1 | head -1 || true)
+STORAGE_TYPE=$(grep -E "^[^:]+: $CONTAINER_STORAGE$" /etc/pve/storage.cfg | cut -d: -f1 | head -1)
 
 if [[ -z "$STORAGE_TYPE" ]]; then
 msg_error "Storage '$CONTAINER_STORAGE' not found in /etc/pve/storage.cfg"
@@ -5336,7 +5141,7 @@ create_lxc_container() {
 msg_ok "Storage '$CONTAINER_STORAGE' ($STORAGE_TYPE) validated"
 
 msg_info "Validating template storage '$TEMPLATE_STORAGE'"
-TEMPLATE_TYPE=$(grep -E "^[^:]+: $TEMPLATE_STORAGE$" /etc/pve/storage.cfg | cut -d: -f1 || true)
+TEMPLATE_TYPE=$(grep -E "^[^:]+: $TEMPLATE_STORAGE$" /etc/pve/storage.cfg | cut -d: -f1)
 
 if ! pvesm status -content vztmpl 2>/dev/null | awk 'NR>1{print $1}' | grep -qx "$TEMPLATE_STORAGE"; then
 msg_warn "Template storage '$TEMPLATE_STORAGE' may not support 'vztmpl'"
@@ -5859,7 +5664,7 @@ description() {
 DESCRIPTION=$(
 cat <<EOF
 <div align='center'>
-<a href='https://community-scripts.org' target='_blank' rel='noopener noreferrer'>
+<a href='https://Helper-Scripts.com' target='_blank' rel='noopener noreferrer'>
 <img src='https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo-81x112.png' alt='Logo' style='width:81px;height:112px;'/>
 </a>
 
@@ -5892,7 +5697,6 @@ EOF
 systemctl start ping-instances.service
 fi
 
-INSTALL_COMPLETE=true
 post_update_to_api "done" "none"
 }
 
@@ -319,11 +319,11 @@ function setup_cloud_init() {
 if [ "$network_mode" = "static" ]; then
 if [ -n "$static_ip" ] && ! validate_ip_cidr "$static_ip"; then
 _ci_msg_error "Invalid static IP format: $static_ip (expected: x.x.x.x/xx)"
-return 65
+return 1
 fi
 if [ -n "$gateway" ] && ! validate_ip "$gateway"; then
 _ci_msg_error "Invalid gateway IP format: $gateway"
-return 65
+return 1
 fi
 fi
 
@@ -433,7 +433,7 @@ function configure_cloud_init_interactive() {
 if ! command -v whiptail >/dev/null 2>&1; then
 echo "Warning: whiptail not available, skipping interactive configuration"
 export CLOUDINIT_ENABLE="no"
-return 127
+return 1
 fi
 
 # Ask if user wants to enable Cloud-Init
@@ -603,7 +603,7 @@ function get_vm_ip() {
 elapsed=$((elapsed + 2))
 done
 
-return 7
+return 1
 }
 
 # ------------------------------------------------------------------------------
@@ -621,7 +621,7 @@ function wait_for_cloud_init() {
 
 if [ -z "$vm_ip" ]; then
 _ci_msg_warn "Unable to determine VM IP address"
-return 7
+return 1
 fi
 
 _ci_msg_info "Waiting for Cloud-Init to complete on ${vm_ip}"
@@ -638,7 +638,7 @@ function wait_for_cloud_init() {
 done
 
 _ci_msg_warn "Cloud-Init did not complete within ${timeout}s"
-return 150
+return 1
 }
 
 # ==============================================================================
@@ -858,7 +858,7 @@ get_header() {
 if [ ! -s "$local_header_path" ]; then
 if ! curl -fsSL "$header_url" -o "$local_header_path"; then
 msg_warn "Failed to download header: $header_url"
-return 250
+return 1
 fi
 fi
 
@@ -1358,7 +1358,7 @@ prompt_select() {
 if [[ $num_options -eq 0 ]]; then
 msg_warn "prompt_select called with no options"
 echo "" >&2
-return 65
+return 1
 fi
 
 # Validate default
@@ -1600,7 +1600,7 @@ check_or_create_swap() {
 swap_size_mb=$(prompt_input "Enter swap size in MB (e.g., 2048 for 2GB):" "2048" 60)
 if ! [[ "$swap_size_mb" =~ ^[0-9]+$ ]]; then
 msg_error "Invalid swap size: '${swap_size_mb}' (must be a number in MB)"
-return 65
+return 1
 fi
 
 local swap_file="/swapfile"
@@ -1608,19 +1608,19 @@ check_or_create_swap() {
 msg_info "Creating ${swap_size_mb}MB swap file at $swap_file"
 if ! dd if=/dev/zero of="$swap_file" bs=1M count="$swap_size_mb" status=progress; then
 msg_error "Failed to allocate swap file (dd failed)"
-return 150
+return 1
 fi
 if ! chmod 600 "$swap_file"; then
 msg_error "Failed to set permissions on $swap_file"
-return 150
+return 1
 fi
 if ! mkswap "$swap_file"; then
 msg_error "Failed to format swap file (mkswap failed)"
-return 150
+return 1
 fi
 if ! swapon "$swap_file"; then
 msg_error "Failed to activate swap (swapon failed)"
-return 150
+return 1
 fi
 msg_ok "Swap file created and activated successfully"
 }
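The swap hunk walks a fixed sequence: `dd`, `chmod 600`, `mkswap`, `swapon`, aborting at the first failure. The same chain can be sketched as one function; `create_swap` is illustrative, and since activating swap requires root, only the size-validation failure path is exercised below:

```shell
#!/usr/bin/env bash
# dd -> chmod -> mkswap -> swapon, failing fast like the diffed function.
create_swap() {
  local size_mb="$1" swap_file="$2"
  [[ "$size_mb" =~ ^[0-9]+$ ]] || { echo "invalid size: $size_mb" >&2; return 1; }
  dd if=/dev/zero of="$swap_file" bs=1M count="$size_mb" status=none || return 1
  chmod 600 "$swap_file" || return 1
  mkswap "$swap_file" >/dev/null || return 1
  swapon "$swap_file" || return 1
}

rc=0
create_swap "not-a-number" "/tmp/swapfile.demo" 2>/dev/null || rc=$?
echo "rc=$rc" # rc=1
```

Validating the size before `dd` matters most: a malformed `count=` either makes `dd` fail noisily or, worse, allocates the wrong amount before `mkswap` ever runs.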
@@ -1699,13 +1699,13 @@ function get_lxc_ip() {
 fi
 done
 
-return 6
+return 1
 }
 
 LOCAL_IP="$(get_current_ip || true)"
 if [[ -z "$LOCAL_IP" ]]; then
 msg_error "Could not determine LOCAL_IP (checked: eth0, hostname -I, ip route, IPv6 targets)"
-return 6
+return 1
 fi
 fi
 
Some files were not shown because too many files have changed in this diff.