Mirror of https://github.com/community-scripts/ProxmoxVE.git (synced 2026-03-19 16:33:01 +01:00)

Compare commits: `fix-pbs_mi` ... `pocketbase` (109 commits)
Commit SHA1s in this comparison (author and date columns were empty in the mirror):

51a8b5365f, 2a7b6c8d86, ba01175bc6, 607eff0939, df51f7114e, 79805f5f3d, 59c601e0e2, 7c467bee7b, d2e5991416, 73e6d4b855,
a53a851cc9, 2001a43229, d35249b8f4, 9f73b6756e, 192e2950e7, 20c4657f39, e20fed1a2d, 8b4f0f60e1, bd91c4d07f, 6bb264491c,
852e557b1b, c5caca97fd, dff876fb5c, 02e7e5af8d, 0f4bfc0b5a, 152be10741, 341489ea3f, ee7829496f, b67cbc1585, bf17edcc89,
8aa4e5b8cb, 7c770e2ee7, 6d15a22d94, 3db4ac1050, 16dfad5c77, 5197d759d7, c073286d53, cbaba2f133, 48afb6c017, 37f2e0242d,
7c62147a00, 7d11f9acd6, 55a877a3e2, 27c9d7fd07, c907e10334, 804c462dd3, 9df9a2831e, 4aa83fd98e, 6747f0c340, 8ef2c445c8,
e4a7ed6965, c69dae5326, fee4617802, 339301947b, 5df3c2cd34, 6832e23ff1, d08ba7a0c4, 780c0e055f, c55d0784e2, 2080603464,
13aea57207, 815cbb4ffc, 5ee3ad2702, d06af6aa63, be2986075c, c397a64847, 16edbdd274, 9658f5363e, 19148d23bf, dc97f11171,
87e42d0aa7, 171bbb2f6a, 182f07b677, 10783e1cb2, 21a1e2d667, cbbf4d7eb3, d3428ff1f0, a680a5a9d0, fd9039e849, b2abe63620,
7e35b6dd65, 00ecf00685, 7ba3e9fe5e, 73f36b6218, 915ba410a2, 66e1a3a322, 4a9de4d6cd, 165b9e7f01, d8810d8d85, 7c0c153691,
77db9cef1e, bbd09b40ff, afb91988bd, 005260df87, 813b11bb4f, 2e8203a64e, 1441f4c324, 56fa0d0aad, 749da403fb, de6cb110e2,
40d487282a, 32a30edcea, 655a66dd34, 48fb024ae8, e727c584ba, b57879afc5, 070f1120e6, 2c27f8d457, 926f1f0f4a
.github/changelogs/2026/03.md (generated, vendored): 161 lines changed
@@ -1,3 +1,164 @@ (all lines below were added)

## 2026-03-14

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Patchmon: remove v prefix from pinned version [@MickLesk](https://github.com/MickLesk) ([#12891](https://github.com/community-scripts/ProxmoxVE/pull/12891))

### 💾 Core

- #### 🐞 Bug Fixes

  - tools.func: don't abort on AMD repo apt update failure [@MickLesk](https://github.com/MickLesk) ([#12890](https://github.com/community-scripts/ProxmoxVE/pull/12890))

## 2026-03-13

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - Hotfix: Removed clean install usage from original script. [@nickheyer](https://github.com/nickheyer) ([#12870](https://github.com/community-scripts/ProxmoxVE/pull/12870))

- #### 🔧 Refactor

  - Discopanel: V2 Support + Script rewrite [@nickheyer](https://github.com/nickheyer) ([#12763](https://github.com/community-scripts/ProxmoxVE/pull/12763))

### 🧰 Tools

- update-apps: fix restore path, add PBS support and improve restore messages [@omertahaoztop](https://github.com/omertahaoztop) ([#12528](https://github.com/community-scripts/ProxmoxVE/pull/12528))

- #### 🐞 Bug Fixes

  - fix(pve-privilege-converter): handle already stopped container in manage_states [@liuqitoday](https://github.com/liuqitoday) ([#12765](https://github.com/community-scripts/ProxmoxVE/pull/12765))

### 📚 Documentation

- Update: Docs/website metadata workflow [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12858](https://github.com/community-scripts/ProxmoxVE/pull/12858))

## 2026-03-12

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - manyfold: fix incorrect port in upstream requests by forwarding original host [@anlopo](https://github.com/anlopo) ([#12812](https://github.com/community-scripts/ProxmoxVE/pull/12812))
  - SparkyFitness: install pnpm dependencies from workspace root [@MickLesk](https://github.com/MickLesk) ([#12792](https://github.com/community-scripts/ProxmoxVE/pull/12792))
  - n8n: add build-essential to update dependencies [@MickLesk](https://github.com/MickLesk) ([#12795](https://github.com/community-scripts/ProxmoxVE/pull/12795))
  - Frigate openvino labelmap patch [@semtex1987](https://github.com/semtex1987) ([#12751](https://github.com/community-scripts/ProxmoxVE/pull/12751))

- #### 🔧 Refactor

  - Pin Patchmon to 1.4.2 [@vhsdream](https://github.com/vhsdream) ([#12789](https://github.com/community-scripts/ProxmoxVE/pull/12789))

### 💾 Core

- #### 🐞 Bug Fixes

  - tools.func: correct PATH escaping in ROCm profile script [@MickLesk](https://github.com/MickLesk) ([#12793](https://github.com/community-scripts/ProxmoxVE/pull/12793))

- #### ✨ New Features

  - core: add mode=generated for unattended frontend installs [@MickLesk](https://github.com/MickLesk) ([#12807](https://github.com/community-scripts/ProxmoxVE/pull/12807))
  - core: validate storage availability when loading defaults [@MickLesk](https://github.com/MickLesk) ([#12794](https://github.com/community-scripts/ProxmoxVE/pull/12794))

- #### 🔧 Refactor

  - tools.func: support older NVIDIA driver versions with 2 segments (xxx.xxx) [@MickLesk](https://github.com/MickLesk) ([#12796](https://github.com/community-scripts/ProxmoxVE/pull/12796))

### 🧰 Tools

- #### 🐞 Bug Fixes

  - Fix PBS microcode naming [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12834](https://github.com/community-scripts/ProxmoxVE/pull/12834))

### 📂 Github

- Cleanup: remove old workflow files [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12818](https://github.com/community-scripts/ProxmoxVE/pull/12818))
- Cleanup: remove frontend, move JSONs to json/ top-level [@MickLesk](https://github.com/MickLesk) ([#12813](https://github.com/community-scripts/ProxmoxVE/pull/12813))

### ❔ Uncategorized

- Remove json files [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12830](https://github.com/community-scripts/ProxmoxVE/pull/12830))

## 2026-03-11

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - fix: Init telemetry in addon scripts [@MickLesk](https://github.com/MickLesk) ([#12777](https://github.com/community-scripts/ProxmoxVE/pull/12777))
  - Tracearr: Increase default disk variable from 5 to 10 [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12762](https://github.com/community-scripts/ProxmoxVE/pull/12762))
  - Fix Wireguard Dashboard update [@odin568](https://github.com/odin568) ([#12767](https://github.com/community-scripts/ProxmoxVE/pull/12767))

### 🧰 Tools

- #### ✨ New Features

  - Coder-Code-Server: Check if config file exists [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12758](https://github.com/community-scripts/ProxmoxVE/pull/12758))

## 2026-03-10

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - [Fix] Immich: Pin libvips to 8.17.3 [@vhsdream](https://github.com/vhsdream) ([#12744](https://github.com/community-scripts/ProxmoxVE/pull/12744))

## 2026-03-09

### 🚀 Updated Scripts

- Pin Opencloud to 5.2.0 [@vhsdream](https://github.com/vhsdream) ([#12721](https://github.com/community-scripts/ProxmoxVE/pull/12721))

- #### 🐞 Bug Fixes

  - [Hotfix] qBittorrent: Disable UPnP port forwarding by default [@vhsdream](https://github.com/vhsdream) ([#12728](https://github.com/community-scripts/ProxmoxVE/pull/12728))
  - [Quickfix] Opencloud: ensure correct case for binary [@vhsdream](https://github.com/vhsdream) ([#12729](https://github.com/community-scripts/ProxmoxVE/pull/12729))
  - Omada: Bump libssl [@MickLesk](https://github.com/MickLesk) ([#12724](https://github.com/community-scripts/ProxmoxVE/pull/12724))
  - openwebui: Ensure required dependencies [@MickLesk](https://github.com/MickLesk) ([#12717](https://github.com/community-scripts/ProxmoxVE/pull/12717))
  - Frigate: try an OpenVino model build fallback [@MickLesk](https://github.com/MickLesk) ([#12704](https://github.com/community-scripts/ProxmoxVE/pull/12704))
  - Change cronjob setup to use www-data user [@opastorello](https://github.com/opastorello) ([#12695](https://github.com/community-scripts/ProxmoxVE/pull/12695))
  - RustDesk Server: Fix check_for_gh_release function call [@tremor021](https://github.com/tremor021) ([#12694](https://github.com/community-scripts/ProxmoxVE/pull/12694))

- #### ✨ New Features

  - feat: improve zigbee2mqtt backup handler [@MickLesk](https://github.com/MickLesk) ([#12714](https://github.com/community-scripts/ProxmoxVE/pull/12714))

- #### 💥 Breaking Changes

  - Reactive Resume: rewrite for v5 using original repo amruthpilla/reactive-resume [@MickLesk](https://github.com/MickLesk) ([#12705](https://github.com/community-scripts/ProxmoxVE/pull/12705))

### 💾 Core

- #### ✨ New Features

  - tools: add Alpine (apk) support to ensure_dependencies and is_package_installed [@MickLesk](https://github.com/MickLesk) ([#12703](https://github.com/community-scripts/ProxmoxVE/pull/12703))
  - tools.func: extend hwaccel with ROCm [@MickLesk](https://github.com/MickLesk) ([#12707](https://github.com/community-scripts/ProxmoxVE/pull/12707))

### 🌐 Website

- #### ✨ New Features

  - feat: add CopycatWarningToast component for user warnings [@BramSuurdje](https://github.com/BramSuurdje) ([#12733](https://github.com/community-scripts/ProxmoxVE/pull/12733))

## 2026-03-08

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

  - [Fix] Immich: chown install dir before machine-learning update [@vhsdream](https://github.com/vhsdream) ([#12684](https://github.com/community-scripts/ProxmoxVE/pull/12684))
  - [Fix] Scanopy: Build generate-fixtures [@vhsdream](https://github.com/vhsdream) ([#12686](https://github.com/community-scripts/ProxmoxVE/pull/12686))
  - fix: rustdeskserver: use correct repo string [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12682](https://github.com/community-scripts/ProxmoxVE/pull/12682))
  - NZBGet: Fixes for RAR5 handling [@tremor021](https://github.com/tremor021) ([#12675](https://github.com/community-scripts/ProxmoxVE/pull/12675))

### 🌐 Website

- #### 🐞 Bug Fixes

  - LXC-Execute: Fix slug [@tremor021](https://github.com/tremor021) ([#12681](https://github.com/community-scripts/ProxmoxVE/pull/12681))

## 2026-03-07

### 🆕 New Scripts
.github/workflows/delete-pocketbase-entry-on-removal.yml (generated, vendored): 33 lines changed
```diff
@@ -1,4 +1,4 @@
-name: Delete PocketBase entry on script/JSON removal
+name: Set state to is_deleted in pocketbase
 
 on:
   push:
@@ -52,15 +52,15 @@ jobs:
           slugs=$(echo $slugs | xargs -n1 | sort -u | tr '\n' ' ')
           if [[ -z "$slugs" ]]; then
-            echo "No deleted JSON or script files to remove from PocketBase."
+            echo "No deleted JSON or script files to mark as deleted in PocketBase."
             echo "count=0" >> "$GITHUB_OUTPUT"
             exit 0
           fi
           echo "$slugs" > slugs_to_delete.txt
           echo "count=$(echo $slugs | wc -w)" >> "$GITHUB_OUTPUT"
-          echo "Slugs to delete: $slugs"
+          echo "Slugs to mark as deleted: $slugs"
 
-      - name: Delete from PocketBase
+      - name: Mark as deleted in PocketBase
         if: steps.slugs.outputs.count != '0'
         env:
           POCKETBASE_URL: ${{ secrets.POCKETBASE_URL }}
@@ -75,7 +75,8 @@ jobs:
           const http = require('http');
           const url = require('url');
 
-          function request(fullUrl, opts) {
+          function request(fullUrl, opts, redirectCount) {
+            redirectCount = redirectCount || 0;
             return new Promise(function(resolve, reject) {
               const u = url.parse(fullUrl);
               const isHttps = u.protocol === 'https:';
@@ -90,6 +91,13 @@
               if (body) options.headers['Content-Length'] = Buffer.byteLength(body);
               const lib = isHttps ? https : http;
               const req = lib.request(options, function(res) {
+                if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
+                  if (redirectCount >= 5) return reject(new Error('Too many redirects from ' + fullUrl));
+                  const redirectUrl = url.resolve(fullUrl, res.headers.location);
+                  res.resume();
+                  resolve(request(redirectUrl, opts, redirectCount + 1));
+                  return;
+                }
                 let data = '';
                 res.on('data', function(chunk) { data += chunk; });
                 res.on('end', function() {
@@ -123,6 +131,8 @@
           const token = JSON.parse(authRes.body).token;
           const recordsUrl = apiBase + '/collections/' + encodeURIComponent(coll) + '/records';
 
+          const patchBody = JSON.stringify({ is_deleted: true });
+
           for (const slug of slugs) {
             const filter = "(slug='" + slug + "')";
             const listRes = await request(recordsUrl + '?filter=' + encodeURIComponent(filter) + '&perPage=1', {
@@ -134,14 +144,15 @@
             console.log('No PocketBase record for slug "' + slug + '", skipping.');
             continue;
           }
-          const delRes = await request(recordsUrl + '/' + existingId, {
-            method: 'DELETE',
-            headers: { 'Authorization': token }
+          const patchRes = await request(recordsUrl + '/' + existingId, {
+            method: 'PATCH',
+            headers: { 'Authorization': token, 'Content-Type': 'application/json' },
+            body: patchBody
           });
-          if (delRes.ok) {
-            console.log('Deleted PocketBase record for slug "' + slug + '" (id=' + existingId + ').');
+          if (patchRes.ok) {
+            console.log('Set is_deleted=true for slug "' + slug + '" (id=' + existingId + ').');
           } else {
-            console.warn('DELETE failed for slug "' + slug + '": ' + delRes.statusCode + ' ' + delRes.body);
+            console.warn('PATCH failed for slug "' + slug + '": ' + patchRes.statusCode + ' ' + patchRes.body);
           }
         }
         console.log('Done.');
```
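The DELETE-to-PATCH change above turns the workflow's hard delete into a soft delete: the record stays addressable, which matches the `is_deleted` and `deleted_message` fields the project's PocketBase tooling exposes. As a minimal standalone sketch (the `buildSoftDeleteRequest` helper and all argument values are hypothetical; only the method, headers, and body shape mirror the diff):

```javascript
// Hypothetical helper illustrating the soft-delete request shape from the diff.
// recordsUrl, recordId, and token below are placeholder values, not real endpoints.
function buildSoftDeleteRequest(recordsUrl, recordId, token) {
  const patchBody = JSON.stringify({ is_deleted: true });
  return {
    url: recordsUrl + '/' + recordId,
    method: 'PATCH', // previously DELETE: the record is now flagged, not removed
    headers: { 'Authorization': token, 'Content-Type': 'application/json' },
    body: patchBody
  };
}

const req = buildSoftDeleteRequest(
  'https://pb.example.com/api/collections/scripts/records', 'abc123', 'TOKEN'
);
console.log(req.method + ' ' + req.url);
```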
.github/workflows/pocketbase-bot.yml (generated, vendored, new file): 675 lines
@@ -0,0 +1,675 @@
|
|||||||
|
name: PocketBase Bot
|
||||||
|
|
||||||
|
on:
|
||||||
|
issue_comment:
|
||||||
|
types: [created]
|
||||||
|
|
||||||
|
permissions:
|
||||||
|
issues: write
|
||||||
|
pull-requests: write
|
||||||
|
contents: read
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
pocketbase-bot:
|
||||||
|
runs-on: self-hosted
|
||||||
|
|
||||||
|
# Only act on /pocketbase commands
|
||||||
|
if: startsWith(github.event.comment.body, '/pocketbase')
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- name: Execute PocketBase bot command
|
||||||
|
env:
|
||||||
|
POCKETBASE_URL: ${{ secrets.POCKETBASE_URL }}
|
||||||
|
POCKETBASE_COLLECTION: ${{ secrets.POCKETBASE_COLLECTION }}
|
||||||
|
POCKETBASE_ADMIN_EMAIL: ${{ secrets.POCKETBASE_ADMIN_EMAIL }}
|
||||||
|
POCKETBASE_ADMIN_PASSWORD: ${{ secrets.POCKETBASE_ADMIN_PASSWORD }}
|
||||||
|
COMMENT_BODY: ${{ github.event.comment.body }}
|
||||||
|
COMMENT_ID: ${{ github.event.comment.id }}
|
||||||
|
ISSUE_NUMBER: ${{ github.event.issue.number }}
|
||||||
|
REPO_OWNER: ${{ github.repository_owner }}
|
||||||
|
REPO_NAME: ${{ github.event.repository.name }}
|
||||||
|
ACTOR: ${{ github.event.comment.user.login }}
|
||||||
|
ACTOR_ASSOCIATION: ${{ github.event.comment.author_association }}
|
||||||
|
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||||
|
run: |
|
||||||
|
node << 'ENDSCRIPT'
|
||||||
|
(async function () {
|
||||||
|
const https = require('https');
|
||||||
|
const http = require('http');
|
||||||
|
const url = require('url');
|
||||||
|
|
||||||
|
// ── HTTP helper with redirect following ────────────────────────────
|
||||||
|
function request(fullUrl, opts, redirectCount) {
|
||||||
|
redirectCount = redirectCount || 0;
|
||||||
|
return new Promise(function (resolve, reject) {
|
||||||
|
const u = url.parse(fullUrl);
|
||||||
|
const isHttps = u.protocol === 'https:';
|
||||||
|
const body = opts.body;
|
||||||
|
const options = {
|
||||||
|
hostname: u.hostname,
|
||||||
|
port: u.port || (isHttps ? 443 : 80),
|
||||||
|
path: u.path,
|
||||||
|
method: opts.method || 'GET',
|
||||||
|
headers: opts.headers || {}
|
||||||
|
};
|
||||||
|
if (body) options.headers['Content-Length'] = Buffer.byteLength(body);
|
||||||
|
const lib = isHttps ? https : http;
|
||||||
|
const req = lib.request(options, function (res) {
|
||||||
|
if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
|
||||||
|
if (redirectCount >= 5) return reject(new Error('Too many redirects from ' + fullUrl));
|
||||||
|
const redirectUrl = url.resolve(fullUrl, res.headers.location);
|
||||||
|
res.resume();
|
||||||
|
resolve(request(redirectUrl, opts, redirectCount + 1));
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
let data = '';
|
||||||
|
res.on('data', function (chunk) { data += chunk; });
|
||||||
|
res.on('end', function () {
|
||||||
|
resolve({ ok: res.statusCode >= 200 && res.statusCode < 300, statusCode: res.statusCode, body: data });
|
||||||
|
});
|
||||||
|
});
|
||||||
|
req.on('error', reject);
|
||||||
|
if (body) req.write(body);
|
||||||
|
req.end();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── GitHub API helpers ─────────────────────────────────────────────
|
||||||
|
const owner = process.env.REPO_OWNER;
|
||||||
|
const repo = process.env.REPO_NAME;
|
||||||
|
const issueNumber = parseInt(process.env.ISSUE_NUMBER, 10);
|
||||||
|
const commentId = parseInt(process.env.COMMENT_ID, 10);
|
||||||
|
const actor = process.env.ACTOR;
|
||||||
|
|
||||||
|
function ghRequest(path, method, body) {
|
||||||
|
const headers = {
|
||||||
|
'Authorization': 'Bearer ' + process.env.GITHUB_TOKEN,
|
||||||
|
'Accept': 'application/vnd.github+json',
|
||||||
|
'X-GitHub-Api-Version': '2022-11-28',
|
||||||
|
'User-Agent': 'PocketBase-Bot'
|
||||||
|
};
|
||||||
|
const bodyStr = body ? JSON.stringify(body) : undefined;
|
||||||
|
if (bodyStr) headers['Content-Type'] = 'application/json';
|
||||||
|
return request('https://api.github.com' + path, { method: method || 'GET', headers, body: bodyStr });
|
||||||
|
}
|
||||||
|
|
||||||
|
async function addReaction(content) {
|
||||||
|
try {
|
||||||
|
await ghRequest(
|
||||||
|
'/repos/' + owner + '/' + repo + '/issues/comments/' + commentId + '/reactions',
|
||||||
|
'POST', { content }
|
||||||
|
);
|
||||||
|
} catch (e) {
|
||||||
|
console.warn('Could not add reaction:', e.message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async function postComment(text) {
|
||||||
|
const res = await ghRequest(
|
||||||
|
'/repos/' + owner + '/' + repo + '/issues/' + issueNumber + '/comments',
|
||||||
|
'POST', { body: text }
|
||||||
|
);
|
||||||
|
if (!res.ok) console.warn('Could not post comment:', res.body);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Permission check ───────────────────────────────────────────────
|
||||||
|
// author_association: OWNER = repo/org owner, MEMBER = org member (includes Contributors team)
|
||||||
|
const association = process.env.ACTOR_ASSOCIATION;
|
||||||
|
if (association !== 'OWNER' && association !== 'MEMBER') {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment(
|
||||||
|
'❌ **PocketBase Bot**: @' + actor + ' is not authorized to use this command.\n' +
|
||||||
|
'Only org members (Contributors team) can use `/pocketbase`.'
|
||||||
|
);
|
||||||
|
process.exit(0);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Acknowledge ────────────────────────────────────────────────────
|
||||||
|
await addReaction('eyes');
|
||||||
|
|
||||||
|
// ── Parse command ──────────────────────────────────────────────────
|
||||||
|
// Formats (first line of comment):
|
||||||
|
// /pocketbase <slug> field=value [field=value ...] ← field updates (simple values)
|
||||||
|
// /pocketbase <slug> set <field> ← value from code block below
|
||||||
|
// /pocketbase <slug> note list|add|edit|remove ... ← note management
|
||||||
|
// /pocketbase <slug> method list ← list install methods
|
||||||
|
// /pocketbase <slug> method <type> cpu=N ram=N hdd=N ← edit install method resources
|
||||||
|
const commentBody = process.env.COMMENT_BODY || '';
|
||||||
|
const lines = commentBody.trim().split('\n');
|
||||||
|
const firstLine = lines[0].trim();
|
||||||
|
const withoutCmd = firstLine.replace(/^\/pocketbase\s+/, '').trim();
|
||||||
|
|
||||||
|
// Extract code block content from comment body (```...``` or ```lang\n...```)
|
||||||
|
function extractCodeBlock(body) {
|
||||||
|
const m = body.match(/```[^\n]*\n([\s\S]*?)```/);
|
||||||
|
return m ? m[1].trim() : null;
|
||||||
|
}
|
||||||
|
const codeBlockValue = extractCodeBlock(commentBody);
|
||||||
|
|
||||||
|
const HELP_TEXT =
|
||||||
|
'**Field update (simple):** `/pocketbase <slug> field=value [field=value ...]`\n\n' +
|
||||||
|
'**Field update (HTML/multiline) — value from code block:**\n' +
|
||||||
|
'````\n' +
|
||||||
|
'/pocketbase <slug> set description\n' +
|
||||||
|
'```html\n' +
|
||||||
|
'<p>Your <b>HTML</b> or multi-line content here</p>\n' +
|
||||||
|
'```\n' +
|
||||||
|
'````\n\n' +
|
||||||
|
'**Note management:**\n' +
|
||||||
|
'```\n' +
|
||||||
|
'/pocketbase <slug> note list\n' +
|
||||||
|
'/pocketbase <slug> note add <type> "<text>"\n' +
|
||||||
|
'/pocketbase <slug> note edit <type> "<old text>" "<new text>"\n' +
|
||||||
|
'/pocketbase <slug> note remove <type> "<text>"\n' +
|
||||||
|
'```\n\n' +
|
||||||
|
'**Install method resources:**\n' +
|
||||||
|
'```\n' +
|
||||||
|
'/pocketbase <slug> method list\n' +
|
||||||
|
'/pocketbase <slug> method <type> hdd=10\n' +
|
||||||
|
'/pocketbase <slug> method <type> cpu=4 ram=2048 hdd=20\n' +
|
||||||
|
'```\n\n' +
|
||||||
|
'**Editable fields:** `name` `description` `logo` `documentation` `website` `project_url` `github` ' +
|
||||||
|
'`config_path` `port` `default_user` `default_passwd` ' +
|
||||||
|
'`updateable` `privileged` `has_arm` `is_dev` ' +
|
||||||
|
'`is_disabled` `disable_message` `is_deleted` `deleted_message`';
|
||||||
|
|
||||||
|
if (!withoutCmd) {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment('❌ **PocketBase Bot**: No slug or command specified.\n\n' + HELP_TEXT);
|
||||||
|
process.exit(0);
|
||||||
|
}
|
||||||
|
|
||||||
|
const spaceIdx = withoutCmd.indexOf(' ');
|
||||||
|
const slug = (spaceIdx === -1 ? withoutCmd : withoutCmd.substring(0, spaceIdx)).trim();
|
||||||
|
const rest = spaceIdx === -1 ? '' : withoutCmd.substring(spaceIdx + 1).trim();
|
||||||
|
|
||||||
|
if (!rest) {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment('❌ **PocketBase Bot**: No command specified for slug `' + slug + '`.\n\n' + HELP_TEXT);
|
||||||
|
process.exit(0);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Allowed fields and their types ─────────────────────────────────
|
||||||
|
// ── PocketBase: authenticate (shared by all paths) ─────────────────
|
||||||
|
const raw = process.env.POCKETBASE_URL.replace(/\/$/, '');
|
||||||
|
const apiBase = /\/api$/i.test(raw) ? raw : raw + '/api';
|
||||||
|
const coll = process.env.POCKETBASE_COLLECTION;
|
||||||
|
|
||||||
|
const authRes = await request(apiBase + '/collections/users/auth-with-password', {
|
||||||
|
method: 'POST',
|
||||||
|
headers: { 'Content-Type': 'application/json' },
|
||||||
|
body: JSON.stringify({
|
||||||
|
identity: process.env.POCKETBASE_ADMIN_EMAIL,
|
||||||
|
password: process.env.POCKETBASE_ADMIN_PASSWORD
|
||||||
|
})
|
||||||
|
});
|
||||||
|
if (!authRes.ok) {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment('❌ **PocketBase Bot**: PocketBase authentication failed. CC @' + owner + '/maintainers');
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
const token = JSON.parse(authRes.body).token;
|
||||||
|
|
||||||
|
// ── PocketBase: find record by slug (shared by all paths) ──────────
|
||||||
|
const recordsUrl = apiBase + '/collections/' + encodeURIComponent(coll) + '/records';
|
||||||
|
const filter = "(slug='" + slug.replace(/'/g, "''") + "')";
|
||||||
|
const listRes = await request(recordsUrl + '?filter=' + encodeURIComponent(filter) + '&perPage=1', {
|
||||||
|
headers: { 'Authorization': token }
|
||||||
|
});
|
||||||
|
const list = JSON.parse(listRes.body);
|
||||||
|
const record = list.items && list.items[0];
|
||||||
|
|
||||||
|
if (!record) {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment(
|
||||||
|
'❌ **PocketBase Bot**: No record found for slug `' + slug + '`.\n\n' +
|
||||||
|
'Make sure the script was already pushed to PocketBase (JSON must exist and have been synced).'
|
||||||
|
);
|
||||||
|
process.exit(0);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ── Route: dispatch to subcommand handler ──────────────────────────
|
||||||
|
const noteMatch = rest.match(/^note\s+(list|add|edit|remove)\b/i);
|
||||||
|
const methodMatch = rest.match(/^method\b/i);
|
||||||
|
const setMatch = rest.match(/^set\s+(\S+)/i);
|
||||||
|
|
||||||
|
if (noteMatch) {
|
||||||
|
// ── NOTE SUBCOMMAND (reads/writes notes_json on script record) ────
|
||||||
|
const noteAction = noteMatch[1].toLowerCase();
|
||||||
|
const noteArgsStr = rest.substring(noteMatch[0].length).trim();
|
||||||
|
|
||||||
|
// Parse notes_json from the already-fetched script record
|
||||||
|
// PocketBase may return JSON fields as already-parsed objects
|
||||||
|
let notesArr = [];
|
||||||
|
try {
|
||||||
|
const rawNotes = record.notes_json;
|
||||||
|
notesArr = Array.isArray(rawNotes) ? rawNotes : JSON.parse(rawNotes || '[]');
|
||||||
|
} catch (e) { notesArr = []; }
|
||||||
|
|
||||||
|
// Token parser: unquoted-word OR "quoted string" (supports \" escapes)
|
||||||
|
function parseNoteTokens(str) {
|
||||||
|
const tokens = [];
|
||||||
|
let pos = 0;
|
||||||
|
while (pos < str.length) {
|
||||||
|
while (pos < str.length && /\s/.test(str[pos])) pos++;
|
||||||
|
if (pos >= str.length) break;
|
||||||
|
if (str[pos] === '"') {
|
||||||
|
pos++;
|
||||||
|
let start = pos;
|
||||||
|
while (pos < str.length && str[pos] !== '"') {
|
||||||
|
if (str[pos] === '\\') pos++;
|
||||||
|
pos++;
|
||||||
|
}
|
||||||
|
tokens.push(str.substring(start, pos).replace(/\\"/g, '"'));
|
||||||
|
if (pos < str.length) pos++;
|
||||||
|
} else {
|
||||||
|
let start = pos;
|
||||||
|
while (pos < str.length && !/\s/.test(str[pos])) pos++;
|
||||||
|
tokens.push(str.substring(start, pos));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return tokens;
|
||||||
|
}
|
||||||
|
|
||||||
|
function formatNotesList(arr) {
|
||||||
|
if (arr.length === 0) return '*None*';
|
||||||
|
return arr.map(function (n, i) {
|
||||||
|
return (i + 1) + '. **`' + (n.type || '?') + '`**: ' + (n.text || '');
|
||||||
|
}).join('\n');
|
||||||
|
}
|
||||||
|
|
||||||
|
async function patchNotesJson(arr) {
|
||||||
|
const res = await request(recordsUrl + '/' + record.id, {
|
||||||
|
method: 'PATCH',
|
||||||
|
headers: { 'Authorization': token, 'Content-Type': 'application/json' },
|
||||||
|
body: JSON.stringify({ notes_json: JSON.stringify(arr) })
|
||||||
|
});
|
||||||
|
if (!res.ok) {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment('❌ **PocketBase Bot**: Failed to update `notes_json`:\n```\n' + res.body + '\n```');
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (noteAction === 'list') {
|
||||||
|
await addReaction('+1');
|
||||||
|
await postComment(
|
||||||
|
'ℹ️ **PocketBase Bot**: Notes for **`' + slug + '`** (' + notesArr.length + ' total)\n\n' +
|
||||||
|
formatNotesList(notesArr)
|
||||||
|
);
|
||||||
|
|
||||||
|
} else if (noteAction === 'add') {
|
||||||
|
const tokens = parseNoteTokens(noteArgsStr);
|
||||||
|
if (tokens.length < 2) {
|
||||||
|
await addReaction('-1');
|
||||||
|
await postComment(
|
||||||
|
'❌ **PocketBase Bot**: `note add` requires `<type>` and `"<text>"`.\n\n' +
|
||||||
|
'**Usage:** `/pocketbase ' + slug + ' note add <type> "<text>"`'
|
||||||
|
);
|
||||||
|
process.exit(0);
|
||||||
|
}
|
||||||
|
const noteType = tokens[0].toLowerCase();
|
||||||
|
const noteText = tokens.slice(1).join(' ');
|
||||||
|
notesArr.push({ type: noteType, text: noteText });
|
||||||
|
await patchNotesJson(notesArr);
|
||||||
|
await addReaction('+1');
|
||||||
|
await postComment(
|
||||||
|
'✅ **PocketBase Bot**: Added note to **`' + slug + '`**\n\n' +
|
||||||
|
'- **Type:** `' + noteType + '`\n' +
|
||||||
|
'- **Text:** ' + noteText + '\n\n' +
|
||||||
|
'*Executed by @' + actor + '*'
|
||||||
|
);
|
||||||
|
|
||||||
|
  } else if (noteAction === 'edit') {
    const tokens = parseNoteTokens(noteArgsStr);
    if (tokens.length < 3) {
      await addReaction('-1');
      await postComment(
        '❌ **PocketBase Bot**: `note edit` requires `<type>`, `"<old text>"`, and `"<new text>"`.\n\n' +
        '**Usage:** `/pocketbase ' + slug + ' note edit <type> "<old text>" "<new text>"`\n\n' +
        'Use `/pocketbase ' + slug + ' note list` to see current notes.'
      );
      process.exit(0);
    }
    const noteType = tokens[0].toLowerCase();
    const oldText = tokens[1];
    const newText = tokens[2];
    const idx = notesArr.findIndex(function (n) {
      return n.type.toLowerCase() === noteType && n.text === oldText;
    });
    if (idx === -1) {
      await addReaction('-1');
      await postComment(
        '❌ **PocketBase Bot**: No `' + noteType + '` note found with that exact text.\n\n' +
        '**Current notes for `' + slug + '`:**\n' + formatNotesList(notesArr)
      );
      process.exit(0);
    }
    notesArr[idx].text = newText;
    await patchNotesJson(notesArr);
    await addReaction('+1');
    await postComment(
      '✅ **PocketBase Bot**: Edited note in **`' + slug + '`**\n\n' +
      '- **Type:** `' + noteType + '`\n' +
      '- **Old:** ' + oldText + '\n' +
      '- **New:** ' + newText + '\n\n' +
      '*Executed by @' + actor + '*'
    );
  } else if (noteAction === 'remove') {
    const tokens = parseNoteTokens(noteArgsStr);
    if (tokens.length < 2) {
      await addReaction('-1');
      await postComment(
        '❌ **PocketBase Bot**: `note remove` requires `<type>` and `"<text>"`.\n\n' +
        '**Usage:** `/pocketbase ' + slug + ' note remove <type> "<text>"`\n\n' +
        'Use `/pocketbase ' + slug + ' note list` to see current notes.'
      );
      process.exit(0);
    }
    const noteType = tokens[0].toLowerCase();
    const noteText = tokens[1];
    const before = notesArr.length;
    notesArr = notesArr.filter(function (n) {
      return !(n.type.toLowerCase() === noteType && n.text === noteText);
    });
    if (notesArr.length === before) {
      await addReaction('-1');
      await postComment(
        '❌ **PocketBase Bot**: No `' + noteType + '` note found with that exact text.\n\n' +
        '**Current notes for `' + slug + '`:**\n' + formatNotesList(notesArr)
      );
      process.exit(0);
    }
    await patchNotesJson(notesArr);
    await addReaction('+1');
    await postComment(
      '✅ **PocketBase Bot**: Removed note from **`' + slug + '`**\n\n' +
      '- **Type:** `' + noteType + '`\n' +
      '- **Text:** ' + noteText + '\n\n' +
      '*Executed by @' + actor + '*'
    );
  }
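The `remove` branch deletes by filtering the array and compares lengths to detect that nothing matched. The same pattern in isolation (the function name is illustrative):

```javascript
// Removes every note whose type matches case-insensitively and whose
// text matches exactly; reports whether anything was actually removed.
function removeNote(notes, type, text) {
  const t = type.toLowerCase();
  const kept = notes.filter(function (n) {
    return !(n.type.toLowerCase() === t && n.text === text);
  });
  return { notes: kept, removed: kept.length !== notes.length };
}

const result = removeNote(
  [{ type: 'Info', text: 'a' }, { type: 'warning', text: 'b' }],
  'info',
  'a'
);
console.log(result.removed, result.notes.length); // true 1
```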
} else if (methodMatch) {
  // ── METHOD SUBCOMMAND (reads/writes install_methods_json on the script record) ──
  const methodArgs = rest.replace(/^method\s*/i, '').trim();
  const methodListMode = !methodArgs || methodArgs.toLowerCase() === 'list';

  // Parse install_methods_json from the already-fetched script record.
  // PocketBase may return JSON fields as already-parsed objects.
  let methodsArr = [];
  try {
    const rawMethods = record.install_methods_json;
    methodsArr = Array.isArray(rawMethods) ? rawMethods : JSON.parse(rawMethods || '[]');
  } catch (e) { methodsArr = []; }

  function formatMethodsList(arr) {
    if (arr.length === 0) return '*None*';
    return arr.map(function (im, i) {
      const r = im.resources || {};
      return (i + 1) + '. **`' + (im.type || '?') + '`** — CPU: `' + (r.cpu != null ? r.cpu : '?') +
        '` · RAM: `' + (r.ram != null ? r.ram : '?') + ' MB` · HDD: `' + (r.hdd != null ? r.hdd : '?') + ' GB`';
    }).join('\n');
  }

  async function patchInstallMethodsJson(arr) {
    const res = await request(recordsUrl + '/' + record.id, {
      method: 'PATCH',
      headers: { 'Authorization': token, 'Content-Type': 'application/json' },
      body: JSON.stringify({ install_methods_json: JSON.stringify(arr) })
    });
    if (!res.ok) {
      await addReaction('-1');
      await postComment('❌ **PocketBase Bot**: Failed to update `install_methods_json`:\n```\n' + res.body + '\n```');
      process.exit(1);
    }
  }
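The `Array.isArray` check above exists because PocketBase may hand back a JSON field either as an already-parsed value or as a raw string. The tolerant read in isolation (the helper name is illustrative):

```javascript
// Accepts an already-parsed array, a JSON string, null, or garbage;
// always returns an array, matching the script's defensive read.
function readJsonArrayField(raw) {
  try {
    return Array.isArray(raw) ? raw : JSON.parse(raw || '[]');
  } catch (e) {
    return [];
  }
}

console.log(readJsonArrayField([1, 2]).length);              // 2
console.log(readJsonArrayField('[{"type":"ct"}]')[0].type);  // ct
console.log(readJsonArrayField(null).length);                // 0
console.log(readJsonArrayField('not json').length);          // 0
```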
  if (methodListMode) {
    await addReaction('+1');
    await postComment(
      'ℹ️ **PocketBase Bot**: Install methods for **`' + slug + '`** (' + methodsArr.length + ' total)\n\n' +
      formatMethodsList(methodsArr)
    );
  } else {
    // Parse: <type> cpu=N ram=N hdd=N
    const methodParts = methodArgs.match(/^(\S+)\s+(.+)$/);
    if (!methodParts) {
      await addReaction('-1');
      await postComment(
        '❌ **PocketBase Bot**: Invalid `method` syntax.\n\n' +
        '**Usage:**\n```\n/pocketbase ' + slug + ' method list\n/pocketbase ' + slug + ' method <type> hdd=10\n/pocketbase ' + slug + ' method <type> cpu=4 ram=2048 hdd=20\n```'
      );
      process.exit(0);
    }
    const targetType = methodParts[1].toLowerCase();
    const resourcesStr = methodParts[2];

    // Parse resource fields (only cpu/ram/hdd allowed)
    const RESOURCE_FIELDS = { cpu: true, ram: true, hdd: true };
    const resourceChanges = {};
    const rePairs = /([a-z]+)=(\d+)/gi;
    let m;
    while ((m = rePairs.exec(resourcesStr)) !== null) {
      const key = m[1].toLowerCase();
      if (RESOURCE_FIELDS[key]) resourceChanges[key] = parseInt(m[2], 10);
    }
    if (Object.keys(resourceChanges).length === 0) {
      await addReaction('-1');
      await postComment('❌ **PocketBase Bot**: No valid resource fields found. Use `cpu=N`, `ram=N`, `hdd=N`.');
      process.exit(0);
    }

    // Find matching method by type name (case-insensitive)
    const idx = methodsArr.findIndex(function (im) {
      return (im.type || '').toLowerCase() === targetType;
    });
    if (idx === -1) {
      await addReaction('-1');
      const availableTypes = methodsArr.map(function (im) { return im.type || '?'; });
      await postComment(
        '❌ **PocketBase Bot**: No install method with type `' + targetType + '` found for `' + slug + '`.\n\n' +
        '**Available types:** `' + (availableTypes.length ? availableTypes.join('`, `') : '(none)') + '`\n\n' +
        'Use `/pocketbase ' + slug + ' method list` to see all methods.'
      );
      process.exit(0);
    }

    if (!methodsArr[idx].resources) methodsArr[idx].resources = {};
    if (resourceChanges.cpu != null) methodsArr[idx].resources.cpu = resourceChanges.cpu;
    if (resourceChanges.ram != null) methodsArr[idx].resources.ram = resourceChanges.ram;
    if (resourceChanges.hdd != null) methodsArr[idx].resources.hdd = resourceChanges.hdd;

    await patchInstallMethodsJson(methodsArr);

    const changesLines = Object.entries(resourceChanges)
      .map(function ([k, v]) { return '- `' + k + '` → `' + v + (k === 'ram' ? ' MB' : k === 'hdd' ? ' GB' : '') + '`'; })
      .join('\n');
    await addReaction('+1');
    await postComment(
      '✅ **PocketBase Bot**: Updated install method **`' + methodsArr[idx].type + '`** for **`' + slug + '`**\n\n' +
      '**Changes applied:**\n' + changesLines + '\n\n' +
      '*Executed by @' + actor + '*'
    );
  }
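The resource override parsing above accepts only whitelisted integer `key=value` pairs and silently drops anything else; the same parsing in isolation:

```javascript
// Extracts whitelisted integer resource fields from e.g. "cpu=4 ram=2048 hdd=20".
// Unknown keys are ignored rather than rejected, like the bot's parser.
function parseResourcePairs(str) {
  const allowed = { cpu: true, ram: true, hdd: true };
  const changes = {};
  const re = /([a-z]+)=(\d+)/gi;
  let m;
  while ((m = re.exec(str)) !== null) {
    const key = m[1].toLowerCase();
    if (allowed[key]) changes[key] = parseInt(m[2], 10);
  }
  return changes;
}

console.log(parseResourcePairs('cpu=4 ram=2048 gpu=1')); // { cpu: 4, ram: 2048 }
```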
} else if (setMatch) {
  // ── SET SUBCOMMAND (multi-line / HTML / special chars via code block) ──
  const fieldName = setMatch[1].toLowerCase();
  const SET_ALLOWED = {
    name: 'string', description: 'string', logo: 'string',
    documentation: 'string', website: 'string', project_url: 'string', github: 'string',
    config_path: 'string', disable_message: 'string', deleted_message: 'string'
  };
  if (!SET_ALLOWED[fieldName]) {
    await addReaction('-1');
    await postComment(
      '❌ **PocketBase Bot**: `set` only supports text fields.\n\n' +
      '**Allowed:** `' + Object.keys(SET_ALLOWED).join('`, `') + '`\n\n' +
      'For boolean/number fields use `field=value` syntax instead.'
    );
    process.exit(0);
  }
  if (!codeBlockValue) {
    await addReaction('-1');
    await postComment(
      '❌ **PocketBase Bot**: `set` requires a code block with the value.\n\n' +
      '**Usage:**\n````\n/pocketbase ' + slug + ' set ' + fieldName + '\n```\nYour content here (HTML, multiline, special chars all fine)\n```\n````'
    );
    process.exit(0);
  }
  const setPayload = {};
  setPayload[fieldName] = codeBlockValue;
  const setPatchRes = await request(recordsUrl + '/' + record.id, {
    method: 'PATCH',
    headers: { 'Authorization': token, 'Content-Type': 'application/json' },
    body: JSON.stringify(setPayload)
  });
  if (!setPatchRes.ok) {
    await addReaction('-1');
    await postComment('❌ **PocketBase Bot**: PATCH failed for `' + slug + '`:\n```\n' + setPatchRes.body + '\n```');
    process.exit(1);
  }
  const preview = codeBlockValue.length > 300 ? codeBlockValue.substring(0, 300) + '…' : codeBlockValue;
  await addReaction('+1');
  await postComment(
    '✅ **PocketBase Bot**: Set `' + fieldName + '` for **`' + slug + '`**\n\n' +
    '**Value set:**\n```\n' + preview + '\n```\n\n' +
    '*Executed by @' + actor + '*'
  );
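`codeBlockValue` is extracted from the comment body earlier in the script; the usage text implies it is the contents of the first fenced code block. A sketch of such an extraction, offered as an assumption rather than the workflow's exact regex:

```javascript
// Hypothetical: returns the contents of the first ``` fenced block, or null.
function extractCodeBlock(commentBody) {
  const m = commentBody.match(/```[^\n]*\n([\s\S]*?)```/);
  return m ? m[1].replace(/\n$/, '') : null;
}

const comment = '/pocketbase myapp set description\n```\n<b>Hello</b>\nline two\n```';
console.log(extractCodeBlock(comment));         // prints the two lines inside the fence
console.log(extractCodeBlock('no block here')); // null
```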
} else {
  // ── FIELD=VALUE PATH ─────────────────────────────────────────────
  const fieldsStr = rest;

  // Skipped: slug, script_created/updated, created (auto), categories/
  // install_methods/notes/type (relations), github_data/install_methods_json/
  // notes_json (auto-generated), execute_in (select relation), last_update_commit (auto)
  const ALLOWED_FIELDS = {
    name: 'string',
    description: 'string',
    logo: 'string',
    documentation: 'string',
    website: 'string',
    project_url: 'string',
    github: 'string',
    config_path: 'string',
    port: 'number',
    default_user: 'nullable_string',
    default_passwd: 'nullable_string',
    updateable: 'boolean',
    privileged: 'boolean',
    has_arm: 'boolean',
    is_dev: 'boolean',
    is_disabled: 'boolean',
    disable_message: 'string',
    is_deleted: 'boolean',
    deleted_message: 'string'
  };

  // Field=value parser (handles quoted values and empty=null)
  function parseFields(str) {
    const fields = {};
    let pos = 0;
    while (pos < str.length) {
      while (pos < str.length && /\s/.test(str[pos])) pos++;
      if (pos >= str.length) break;
      let keyStart = pos;
      while (pos < str.length && str[pos] !== '=' && !/\s/.test(str[pos])) pos++;
      const key = str.substring(keyStart, pos).trim();
      if (!key || pos >= str.length || str[pos] !== '=') { pos++; continue; }
      pos++;
      let value;
      if (str[pos] === '"') {
        pos++;
        let valStart = pos;
        while (pos < str.length && str[pos] !== '"') {
          if (str[pos] === '\\') pos++;
          pos++;
        }
        value = str.substring(valStart, pos).replace(/\\"/g, '"');
        if (pos < str.length) pos++;
      } else {
        let valStart = pos;
        while (pos < str.length && !/\s/.test(str[pos])) pos++;
        value = str.substring(valStart, pos);
      }
      fields[key] = value;
    }
    return fields;
  }
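On input like `port=8080 name="My App" default_user=` the parser yields string values keyed by field, with an empty string for `default_user=` (which the nullable cast further down maps to `null`). A simplified regex-based equivalent, for illustration only:

```javascript
// Simplified stand-in for the script's parseFields: quoted or bare values,
// escaped quotes unescaped, all values returned as strings.
function parseFields(str) {
  const fields = {};
  const re = /(\w+)=(?:"((?:\\.|[^"\\])*)"|(\S*))/g;
  let m;
  while ((m = re.exec(str)) !== null) {
    fields[m[1]] = m[2] !== undefined ? m[2].replace(/\\"/g, '"') : m[3];
  }
  return fields;
}

console.log(parseFields('port=8080 name="My App" default_user='));
// → { port: '8080', name: 'My App', default_user: '' }
```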
  const parsedFields = parseFields(fieldsStr);

  const unknownFields = Object.keys(parsedFields).filter(function (f) { return !ALLOWED_FIELDS[f]; });
  if (unknownFields.length > 0) {
    await addReaction('-1');
    await postComment(
      '❌ **PocketBase Bot**: Unknown field(s): `' + unknownFields.join('`, `') + '`\n\n' +
      '**Allowed fields:** `' + Object.keys(ALLOWED_FIELDS).join('`, `') + '`'
    );
    process.exit(0);
  }

  if (Object.keys(parsedFields).length === 0) {
    await addReaction('-1');
    await postComment('❌ **PocketBase Bot**: Could not parse any valid `field=value` pairs.\n\n' + HELP_TEXT);
    process.exit(0);
  }

  // Cast values to correct types
  const payload = {};
  for (const [key, rawVal] of Object.entries(parsedFields)) {
    const type = ALLOWED_FIELDS[key];
    if (type === 'boolean') {
      if (rawVal === 'true') payload[key] = true;
      else if (rawVal === 'false') payload[key] = false;
      else {
        await addReaction('-1');
        await postComment('❌ **PocketBase Bot**: `' + key + '` must be `true` or `false`, got: `' + rawVal + '`');
        process.exit(0);
      }
    } else if (type === 'number') {
      const n = parseInt(rawVal, 10);
      if (isNaN(n)) {
        await addReaction('-1');
        await postComment('❌ **PocketBase Bot**: `' + key + '` must be a number, got: `' + rawVal + '`');
        process.exit(0);
      }
      payload[key] = n;
    } else if (type === 'nullable_string') {
      payload[key] = rawVal === '' ? null : rawVal;
    } else {
      payload[key] = rawVal;
    }
  }

  const patchRes = await request(recordsUrl + '/' + record.id, {
    method: 'PATCH',
    headers: { 'Authorization': token, 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  if (!patchRes.ok) {
    await addReaction('-1');
    await postComment('❌ **PocketBase Bot**: PATCH failed for `' + slug + '`:\n```\n' + patchRes.body + '\n```');
    process.exit(1);
  }
  await addReaction('+1');
  const changesLines = Object.entries(payload)
    .map(function ([k, v]) { return '- `' + k + '` → `' + JSON.stringify(v) + '`'; })
    .join('\n');
  await postComment(
    '✅ **PocketBase Bot**: Updated **`' + slug + '`** successfully!\n\n' +
    '**Changes applied:**\n' + changesLines + '\n\n' +
    '*Executed by @' + actor + '*'
  );
}

console.log('Done.');
})().catch(function (e) {
  console.error('Fatal error:', e.message || e);
  process.exit(1);
});
ENDSCRIPT
shell: bash
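The casting rules in the field=value path (booleans must be the literal words `true`/`false`, numbers must parse as integers, `nullable_string` maps empty to `null`) can be exercised in isolation; this sketch returns a result object instead of reacting and commenting:

```javascript
// Mirrors the script's casting rules, returning { ok, value } on success
// and { ok: false } where the bot would reject the command.
function castField(type, rawVal) {
  if (type === 'boolean') {
    if (rawVal === 'true') return { ok: true, value: true };
    if (rawVal === 'false') return { ok: true, value: false };
    return { ok: false };
  }
  if (type === 'number') {
    const n = parseInt(rawVal, 10);
    return isNaN(n) ? { ok: false } : { ok: true, value: n };
  }
  if (type === 'nullable_string') {
    return { ok: true, value: rawVal === '' ? null : rawVal };
  }
  return { ok: true, value: rawVal }; // plain string fields pass through
}

console.log(castField('boolean', 'yes').ok);         // false
console.log(castField('number', '8080').value);      // 8080
console.log(castField('nullable_string', '').value); // null
```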
.github/workflows/push-json-to-pocketbase.yml (24 changed lines, generated, vendored)
@@ -48,7 +48,8 @@ jobs:
 const https = require('https');
 const http = require('http');
 const url = require('url');
-function request(fullUrl, opts) {
+function request(fullUrl, opts, redirectCount) {
+  redirectCount = redirectCount || 0;
 return new Promise(function(resolve, reject) {
 const u = url.parse(fullUrl);
 const isHttps = u.protocol === 'https:';
@@ -63,6 +64,13 @@ jobs:
 if (body) options.headers['Content-Length'] = Buffer.byteLength(body);
 const lib = isHttps ? https : http;
 const req = lib.request(options, function(res) {
+  if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
+    if (redirectCount >= 5) return reject(new Error('Too many redirects from ' + fullUrl));
+    const redirectUrl = url.resolve(fullUrl, res.headers.location);
+    res.resume();
+    resolve(request(redirectUrl, opts, redirectCount + 1));
+    return;
+  }
 let data = '';
 res.on('data', function(chunk) { data += chunk; });
 res.on('end', function() {
@@ -125,15 +133,15 @@ jobs:
 var osVersionToId = {};
 try {
 const res = await request(apiBase + '/collections/z_ref_note_types/records?perPage=500', { headers: { 'Authorization': token } });
-if (res.ok) JSON.parse(res.body).items?.forEach(function(item) { if (item.type != null) noteTypeToId[item.type] = item.id; });
+if (res.ok) JSON.parse(res.body).items?.forEach(function(item) { if (item.type != null) { noteTypeToId[item.type] = item.id; noteTypeToId[item.type.toLowerCase()] = item.id; } });
 } catch (e) { console.warn('z_ref_note_types:', e.message); }
 try {
 const res = await request(apiBase + '/collections/z_ref_install_method_types/records?perPage=500', { headers: { 'Authorization': token } });
-if (res.ok) JSON.parse(res.body).items?.forEach(function(item) { if (item.type != null) installMethodTypeToId[item.type] = item.id; });
+if (res.ok) JSON.parse(res.body).items?.forEach(function(item) { if (item.type != null) { installMethodTypeToId[item.type] = item.id; installMethodTypeToId[item.type.toLowerCase()] = item.id; } });
 } catch (e) { console.warn('z_ref_install_method_types:', e.message); }
 try {
 const res = await request(apiBase + '/collections/z_ref_os/records?perPage=500', { headers: { 'Authorization': token } });
-if (res.ok) JSON.parse(res.body).items?.forEach(function(item) { if (item.os != null) osToId[item.os] = item.id; });
+if (res.ok) JSON.parse(res.body).items?.forEach(function(item) { if (item.os != null) { osToId[item.os] = item.id; osToId[item.os.toLowerCase()] = item.id; } });
 } catch (e) { console.warn('z_ref_os:', e.message); }
 try {
 const res = await request(apiBase + '/collections/z_ref_os_version/records?perPage=500&expand=os', { headers: { 'Authorization': token } });
@@ -154,7 +162,7 @@ jobs:
 name: data.name,
 slug: data.slug,
 script_created: data.date_created || data.script_created,
-script_updated: data.date_created || data.script_updated,
+script_updated: new Date().toISOString().split('T')[0],
 updateable: data.updateable,
 privileged: data.privileged,
 port: data.interface_port != null ? data.interface_port : data.port,
@@ -163,8 +171,8 @@ jobs:
 logo: data.logo,
 description: data.description,
 config_path: data.config_path,
-default_user: (data.default_credentials && data.default_credentials.username) || data.default_user,
-default_passwd: (data.default_credentials && data.default_credentials.password) || data.default_passwd,
+default_user: (data.default_credentials && data.default_credentials.username) || data.default_user || null,
+default_passwd: (data.default_credentials && data.default_credentials.password) || data.default_passwd || null,
 is_dev: false
 };
 var resolvedType = typeValueToId[data.type];
@@ -190,7 +198,7 @@ jobs:
 var postRes = await request(notesCollUrl, {
 method: 'POST',
 headers: { 'Authorization': token, 'Content-Type': 'application/json' },
-body: JSON.stringify({ text: note.text || '', type: typeId })
+body: JSON.stringify({ text: note.text || '', type: typeId, script: scriptId })
 });
 if (postRes.ok) noteIds.push(JSON.parse(postRes.body).id);
 }
.github/workflows/update-script-timestamp-on-sh-change.yml (12 changed lines, generated, vendored)
@@ -83,7 +83,8 @@ jobs:
 const http = require('http');
 const url = require('url');

-function request(fullUrl, opts) {
+function request(fullUrl, opts, redirectCount) {
+  redirectCount = redirectCount || 0;
 return new Promise(function(resolve, reject) {
 const u = url.parse(fullUrl);
 const isHttps = u.protocol === 'https:';
@@ -98,6 +99,13 @@ jobs:
 if (body) options.headers['Content-Length'] = Buffer.byteLength(body);
 const lib = isHttps ? https : http;
 const req = lib.request(options, function(res) {
+  if (res.statusCode >= 300 && res.statusCode < 400 && res.headers.location) {
+    if (redirectCount >= 5) return reject(new Error('Too many redirects from ' + fullUrl));
+    const redirectUrl = url.resolve(fullUrl, res.headers.location);
+    res.resume();
+    resolve(request(redirectUrl, opts, redirectCount + 1));
+    return;
+  }
 let data = '';
 res.on('data', function(chunk) { data += chunk; });
 res.on('end', function() {
@@ -151,7 +159,7 @@ jobs:
 method: 'PATCH',
 headers: { 'Authorization': token, 'Content-Type': 'application/json' },
 body: JSON.stringify({
-name: record.name || record.slug,
+script_updated: new Date().toISOString().split('T')[0],
 last_update_commit: process.env.PR_URL || process.env.COMMIT_URL || ''
 })
 });
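Both workflows now stamp `script_updated` with `new Date().toISOString().split('T')[0]`. Because `toISOString()` always emits UTC in the fixed `YYYY-MM-DDTHH:mm:ss.sssZ` shape, splitting on `T` gives a stable date-only string:

```javascript
// toISOString() is always UTC and always 'YYYY-MM-DDTHH:mm:ss.sssZ',
// so the first 'T'-separated part is a date-only string.
const stamp = new Date().toISOString().split('T')[0];
console.log(/^\d{4}-\d{2}-\d{2}$/.test(stamp)); // true
```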
CHANGELOG.md (393 changed lines)
@@ -26,6 +26,9 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
<summary><h2>📜 History</h2></summary>
|
<summary><h2>📜 History</h2></summary>
|
||||||
@@ -36,7 +39,7 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
|
|||||||
|
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
<summary><h4>March (7 entries)</h4></summary>
|
<summary><h4>March (14 entries)</h4></summary>
|
||||||
|
|
||||||
[View March 2026 Changelog](.github/changelogs/2026/03.md)
|
[View March 2026 Changelog](.github/changelogs/2026/03.md)
|
||||||
|
|
||||||
@@ -420,6 +423,154 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
|
|||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
|
||||||
|
## 2026-03-19
|
||||||
|
|
||||||
|
### 🚀 Updated Scripts
|
||||||
|
|
||||||
|
- #### 🐞 Bug Fixes
|
||||||
|
|
||||||
|
- core: reorder hwaccel setup and adjust GPU group usermod [@MickLesk](https://github.com/MickLesk) ([#13072](https://github.com/community-scripts/ProxmoxVE/pull/13072))
|
||||||
|
|
||||||
|
## 2026-03-18
|
||||||
|
|
||||||
|
### 🆕 New Scripts
|
||||||
|
|
||||||
|
- Alpine-Ntfy [@MickLesk](https://github.com/MickLesk) ([#13048](https://github.com/community-scripts/ProxmoxVE/pull/13048))
|
||||||
|
- Split-Pro ([#12975](https://github.com/community-scripts/ProxmoxVE/pull/12975))
|
||||||
|
|
||||||
|
### 🚀 Updated Scripts
|
||||||
|
|
||||||
|
- #### 🐞 Bug Fixes
|
||||||
|
|
||||||
|
- Tdarr: use curl_with_retry and correct exit code [@MickLesk](https://github.com/MickLesk) ([#13060](https://github.com/community-scripts/ProxmoxVE/pull/13060))
|
||||||
|
- reitti: fix: v4 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#13039](https://github.com/community-scripts/ProxmoxVE/pull/13039))
|
||||||
|
- Paperless-NGX: increase default RAM to 3GB [@MickLesk](https://github.com/MickLesk) ([#13018](https://github.com/community-scripts/ProxmoxVE/pull/13018))
|
||||||
|
- Plex: restart service after update to apply new version [@MickLesk](https://github.com/MickLesk) ([#13017](https://github.com/community-scripts/ProxmoxVE/pull/13017))
|
||||||
|
|
||||||
|
- #### ✨ New Features
|
||||||
|
|
||||||
|
- tools: centralize GPU group setup via setup_hwaccel [@MickLesk](https://github.com/MickLesk) ([#13044](https://github.com/community-scripts/ProxmoxVE/pull/13044))
|
||||||
|
- Termix: add guacd build and systemd integration [@MickLesk](https://github.com/MickLesk) ([#12999](https://github.com/community-scripts/ProxmoxVE/pull/12999))
|
||||||
|
|
||||||
|
- #### 🔧 Refactor
|
||||||
|
|
||||||
|
- Podman: replace deprecated commands with Quadlets [@MickLesk](https://github.com/MickLesk) ([#13052](https://github.com/community-scripts/ProxmoxVE/pull/13052))
|
||||||
|
- Refactor: Jellyfin repo, ffmpeg package and symlinks [@MickLesk](https://github.com/MickLesk) ([#13045](https://github.com/community-scripts/ProxmoxVE/pull/13045))
|
||||||
|
- pve-scripts-local: Increase default disk size from 4GB to 10GB [@MickLesk](https://github.com/MickLesk) ([#13009](https://github.com/community-scripts/ProxmoxVE/pull/13009))
|
||||||
|
|
||||||
|
### 💾 Core
|
||||||
|
|
||||||
|
- #### ✨ New Features
|
||||||
|
|
||||||
|
- tools.func Implement pg_cron setup for setup_postgresql [@MickLesk](https://github.com/MickLesk) ([#13053](https://github.com/community-scripts/ProxmoxVE/pull/13053))
|
||||||
|
- tools.func: Implement check_for_gh_tag function [@MickLesk](https://github.com/MickLesk) ([#12998](https://github.com/community-scripts/ProxmoxVE/pull/12998))
|
||||||
|
- tools.func: Implement fetch_and_deploy_gh_tag function [@MickLesk](https://github.com/MickLesk) ([#13000](https://github.com/community-scripts/ProxmoxVE/pull/13000))
|
||||||
|
|
||||||
|
## 2026-03-17
|
||||||
|
|
||||||
|
### 🚀 Updated Scripts
|
||||||
|
|
||||||
|
- #### 🐞 Bug Fixes
|
||||||
|
|
||||||
|
- Gluetun: add OpenVPN process user and cleanup stale config [@MickLesk](https://github.com/MickLesk) ([#13016](https://github.com/community-scripts/ProxmoxVE/pull/13016))
|
||||||
|
- Frigate: check OpenVino model files exist before configuring detector and use curl_with_retry instead of default wget [@MickLesk](https://github.com/MickLesk) ([#13019](https://github.com/community-scripts/ProxmoxVE/pull/13019))
|
||||||
|
|
||||||
|
### 💾 Core
|
||||||
|
|
||||||
|
- #### 🔧 Refactor
|
||||||
|
|
||||||
|
- tools.func: Update `create_self_signed_cert()` [@tremor021](https://github.com/tremor021) ([#13008](https://github.com/community-scripts/ProxmoxVE/pull/13008))
|
||||||
|
|
||||||
|
## 2026-03-16
|
||||||
|
|
||||||
|
### 🆕 New Scripts
|
||||||
|
|
||||||
|
- Gluetun ([#12976](https://github.com/community-scripts/ProxmoxVE/pull/12976))
|
||||||
|
- Anytype-Server ([#12974](https://github.com/community-scripts/ProxmoxVE/pull/12974))
|
||||||
|
|
||||||
|
### 🚀 Updated Scripts
|
||||||
|
|
||||||
|
- #### 🐞 Bug Fixes
|
||||||
|
|
||||||
|
- Immich: use gcc-13 for compilation & add uv python pre-install with retry logic [@MickLesk](https://github.com/MickLesk) ([#12935](https://github.com/community-scripts/ProxmoxVE/pull/12935))
|
||||||
|
- Tautulli: add setuptools<81 constraint to update script [@MickLesk](https://github.com/MickLesk) ([#12959](https://github.com/community-scripts/ProxmoxVE/pull/12959))
|
- Seerr: add missing build deps [@MickLesk](https://github.com/MickLesk) ([#12960](https://github.com/community-scripts/ProxmoxVE/pull/12960))
- fix: yubal update [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12961](https://github.com/community-scripts/ProxmoxVE/pull/12961))

### 💾 Core

- #### 🐞 Bug Fixes

- hwaccel: remove ROCm install from AMD APU setup [@MickLesk](https://github.com/MickLesk) ([#12958](https://github.com/community-scripts/ProxmoxVE/pull/12958))

## 2026-03-15

### 🆕 New Scripts

- Yamtrack ([#12936](https://github.com/community-scripts/ProxmoxVE/pull/12936))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- Wishlist: use --frozen-lockfile for pnpm install [@MickLesk](https://github.com/MickLesk) ([#12892](https://github.com/community-scripts/ProxmoxVE/pull/12892))
- SparkyFitness: use --legacy-peer-deps for npm install [@MickLesk](https://github.com/MickLesk) ([#12888](https://github.com/community-scripts/ProxmoxVE/pull/12888))
- Frigate: add fallback for OpenVino labelmap file [@MickLesk](https://github.com/MickLesk) ([#12889](https://github.com/community-scripts/ProxmoxVE/pull/12889))

- #### 🔧 Refactor

- Refactor: ITSM-NG [@MickLesk](https://github.com/MickLesk) ([#12918](https://github.com/community-scripts/ProxmoxVE/pull/12918))
- core: unify RELEASE variable for check_for_gh_release and fetch_and_deploy [@MickLesk](https://github.com/MickLesk) ([#12917](https://github.com/community-scripts/ProxmoxVE/pull/12917))
- Standardize NSAPP names across VM scripts [@MickLesk](https://github.com/MickLesk) ([#12924](https://github.com/community-scripts/ProxmoxVE/pull/12924))

### 💾 Core

- #### ✨ New Features

- core: retry downloads with exponential backoff [@MickLesk](https://github.com/MickLesk) ([#12896](https://github.com/community-scripts/ProxmoxVE/pull/12896))
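The retry-with-backoff change above can be sketched roughly like this. This is a minimal illustration of the pattern only; `retry_with_backoff` is a hypothetical name, not the helper actually added in #12896 (see `misc/tools.func` for that).

```shell
# Hypothetical sketch: retry a command, doubling the wait after each failure.
retry_with_backoff() {
  local max_attempts="$1" delay=1 attempt
  shift
  for ((attempt = 1; attempt <= max_attempts; attempt++)); do
    "$@" && return 0
    echo "attempt ${attempt}/${max_attempts} failed" >&2
    if ((attempt < max_attempts)); then
      sleep "$delay"
      delay=$((delay * 2)) # 1s, 2s, 4s, ...
    fi
  done
  return 1
}

# e.g. retry_with_backoff 4 curl -fsSL "$url" -o "$out"
```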
### ❔ Uncategorized

- [go2rtc] Add ffmpeg dependency to install script [@Copilot](https://github.com/Copilot) ([#12944](https://github.com/community-scripts/ProxmoxVE/pull/12944))

## 2026-03-14

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- Patchmon: remove v prefix from pinned version [@MickLesk](https://github.com/MickLesk) ([#12891](https://github.com/community-scripts/ProxmoxVE/pull/12891))
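Dropping a `v` prefix from a pinned tag is a one-liner in bash parameter expansion. This is an illustration of the fix's intent, not the literal Patchmon diff:

```shell
# Release tags often carry a "v" prefix ("v1.2.3") while version files store
# the bare number; "${tag#v}" removes the prefix only when it is present.
tag="v1.2.3"
pinned="${tag#v}"
echo "$pinned" # -> 1.2.3
```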
### 💾 Core

- #### 🐞 Bug Fixes

- tools.func: don't abort on AMD repo apt update failure [@MickLesk](https://github.com/MickLesk) ([#12890](https://github.com/community-scripts/ProxmoxVE/pull/12890))

## 2026-03-13

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- Hotfix: Removed clean install usage from original script. [@nickheyer](https://github.com/nickheyer) ([#12870](https://github.com/community-scripts/ProxmoxVE/pull/12870))

- #### 🔧 Refactor

- Discopanel: V2 Support + Script rewrite [@nickheyer](https://github.com/nickheyer) ([#12763](https://github.com/community-scripts/ProxmoxVE/pull/12763))

### 🧰 Tools

- update-apps: fix restore path, add PBS support and improve restore messages [@omertahaoztop](https://github.com/omertahaoztop) ([#12528](https://github.com/community-scripts/ProxmoxVE/pull/12528))

- #### 🐞 Bug Fixes

- fix(pve-privilege-converter): handle already stopped container in manage_states [@liuqitoday](https://github.com/liuqitoday) ([#12765](https://github.com/community-scripts/ProxmoxVE/pull/12765))

### 📚 Documentation

- Update: Docs/website metadata workflow [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12858](https://github.com/community-scripts/ProxmoxVE/pull/12858))

## 2026-03-12

### 🚀 Updated Scripts

- tools.func: support older NVIDIA driver versions with 2 segments (xxx.xxx) [@MickLesk](https://github.com/MickLesk) ([#12796](https://github.com/community-scripts/ProxmoxVE/pull/12796))
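Distinguishing two-segment driver versions (`390.157`) from three-segment ones (`535.154.05`) comes down to counting dot-separated fields. A minimal sketch with a hypothetical helper name, not the actual tools.func code:

```shell
# Count dot-separated segments in a version string.
version_segments() {
  local IFS=.
  # Unquoted on purpose so the string splits on "." (assumes no glob chars).
  set -- $1
  echo $#
}
```

`version_segments 390.157` yields 2, so a script can branch on the older two-segment format.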
### 🧰 Tools

- #### 🐞 Bug Fixes

- Fix PBS microcode naming [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12834](https://github.com/community-scripts/ProxmoxVE/pull/12834))

### 📂 Github

- Cleanup: remove old workflow files [@michelroegl-brunner](https://github.com/michelroegl-brunner) ([#12818](https://github.com/community-scripts/ProxmoxVE/pull/12818))

- #### 📝 Script Information

- SQLServer-2025: add PVE9/Kernel 6.x incompatibility warning [@MickLesk](https://github.com/MickLesk) ([#11829](https://github.com/community-scripts/ProxmoxVE/pull/11829))

## 2026-02-12

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- EMQX: increase disk to 6GB and add optional MQ disable prompt [@MickLesk](https://github.com/MickLesk) ([#11844](https://github.com/community-scripts/ProxmoxVE/pull/11844))
- Increased the Grafana container default disk size. [@shtefko](https://github.com/shtefko) ([#11840](https://github.com/community-scripts/ProxmoxVE/pull/11840))
- Pangolin: Update database generation command in install script [@tremor021](https://github.com/tremor021) ([#11825](https://github.com/community-scripts/ProxmoxVE/pull/11825))
- Deluge: add python3-setuptools as dep [@MickLesk](https://github.com/MickLesk) ([#11833](https://github.com/community-scripts/ProxmoxVE/pull/11833))
- Dispatcharr: migrate to uv sync [@MickLesk](https://github.com/MickLesk) ([#11831](https://github.com/community-scripts/ProxmoxVE/pull/11831))

- #### ✨ New Features

- Archlinux-VM: fix LVM/LVM-thin storage and improve error reporting | VM's add correct exit_code for analytics [@MickLesk](https://github.com/MickLesk) ([#11842](https://github.com/community-scripts/ProxmoxVE/pull/11842))
- Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection [@MickLesk](https://github.com/MickLesk) ([#11810](https://github.com/community-scripts/ProxmoxVE/pull/11810))

### 💾 Core

- #### ✨ New Features

- tools.func: auto-detect binary vs armored GPG keys in setup_deb822_repo [@MickLesk](https://github.com/MickLesk) ([#11841](https://github.com/community-scripts/ProxmoxVE/pull/11841))
- core: remove old Go API and extend misc/api.func with new backend [@MickLesk](https://github.com/MickLesk) ([#11822](https://github.com/community-scripts/ProxmoxVE/pull/11822))
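Auto-detecting armored vs. binary keys relies on the fact that ASCII-armored PGP keys begin with a `-----BEGIN PGP PUBLIC KEY BLOCK-----` header while binary keyrings do not. A hedged sketch of that check; `is_armored_key` is a hypothetical name, and the real implementation lives in `misc/tools.func`:

```shell
# Armored (ASCII) PGP keys start with a BEGIN header; binary keyrings do not.
is_armored_key() {
  head -c 64 "$1" | grep -q -- '-----BEGIN PGP PUBLIC KEY'
}

# An armored key would then be piped through `gpg --dearmor`,
# while a binary keyring can be installed into /etc/apt/keyrings as-is.
```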
- #### 🔧 Refactor

- error_handler: prevent stuck 'installing' status [@MickLesk](https://github.com/MickLesk) ([#11845](https://github.com/community-scripts/ProxmoxVE/pull/11845))

### 🧰 Tools

- #### 🐞 Bug Fixes

- Tailscale: fix DNS check and keyrings directory issues [@MickLesk](https://github.com/MickLesk) ([#11837](https://github.com/community-scripts/ProxmoxVE/pull/11837))

## 2026-02-11

### 🆕 New Scripts

- Draw.io ([#11788](https://github.com/community-scripts/ProxmoxVE/pull/11788))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- dispatcharr: include port 9191 in success-message [@MickLesk](https://github.com/MickLesk) ([#11808](https://github.com/community-scripts/ProxmoxVE/pull/11808))
- fix: make donetick 0.1.71 compatible [@tomfrenzel](https://github.com/tomfrenzel) ([#11804](https://github.com/community-scripts/ProxmoxVE/pull/11804))
- Kasm: Support new version URL format without hash suffix [@MickLesk](https://github.com/MickLesk) ([#11787](https://github.com/community-scripts/ProxmoxVE/pull/11787))
- LibreTranslate: Remove Torch [@tremor021](https://github.com/tremor021) ([#11783](https://github.com/community-scripts/ProxmoxVE/pull/11783))
- Snowshare: fix update script [@TuroYT](https://github.com/TuroYT) ([#11726](https://github.com/community-scripts/ProxmoxVE/pull/11726))

- #### ✨ New Features

- [Feature] OpenCloud: support PosixFS Collaborative Mode [@vhsdream](https://github.com/vhsdream) ([#11806](https://github.com/community-scripts/ProxmoxVE/pull/11806))

### 💾 Core

- #### 🔧 Refactor

- core: respect EDITOR variable for config editing [@ls-root](https://github.com/ls-root) ([#11693](https://github.com/community-scripts/ProxmoxVE/pull/11693))
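Respecting `$EDITOR` is typically a one-line fallback expansion. A sketch of the pattern with a hypothetical helper name, not the exact diff from #11693:

```shell
# Open a config file in the user's preferred editor, defaulting to nano
# when $EDITOR is unset or empty.
edit_config() {
  "${EDITOR:-nano}" "$1"
}
```

Called as, e.g., `EDITOR=vim edit_config /etc/example/config.yml` (hypothetical path).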
### 📚 Documentation

- Fix formatting in kutt.json notes section [@tiagodenoronha](https://github.com/tiagodenoronha) ([#11774](https://github.com/community-scripts/ProxmoxVE/pull/11774))

## 2026-02-10

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- Immich: Pin version to 2.5.6 [@vhsdream](https://github.com/vhsdream) ([#11775](https://github.com/community-scripts/ProxmoxVE/pull/11775))
- Libretranslate: Fix setuptools [@tremor021](https://github.com/tremor021) ([#11772](https://github.com/community-scripts/ProxmoxVE/pull/11772))
- Element Synapse: prevent systemd invoke failure during apt install [@MickLesk](https://github.com/MickLesk) ([#11758](https://github.com/community-scripts/ProxmoxVE/pull/11758))

- #### ✨ New Features

- Refactor: Slskd & Soularr [@vhsdream](https://github.com/vhsdream) ([#11674](https://github.com/community-scripts/ProxmoxVE/pull/11674))

### 🗑️ Deleted Scripts

- move paperless-exporter from LXC to addon ([#11737](https://github.com/community-scripts/ProxmoxVE/pull/11737))

### 🧰 Tools

- #### 🐞 Bug Fixes

- feat: improve storage parsing & add guestname [@carlosmaroot](https://github.com/carlosmaroot) ([#11752](https://github.com/community-scripts/ProxmoxVE/pull/11752))

### 📂 Github

- Github-Version Workflow: include addon scripts in extraction [@MickLesk](https://github.com/MickLesk) ([#11757](https://github.com/community-scripts/ProxmoxVE/pull/11757))

### 🌐 Website

- #### 📝 Script Information

- Snowshare: fix typo in config file path on website [@BirdMakingStuff](https://github.com/BirdMakingStuff) ([#11754](https://github.com/community-scripts/ProxmoxVE/pull/11754))

## 2026-02-09

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- several scripts: add --clear to uv venv calls for uv 0.10 compatibility [@MickLesk](https://github.com/MickLesk) ([#11723](https://github.com/community-scripts/ProxmoxVE/pull/11723))
- Koillection: ensure setup_composer is in update script [@MickLesk](https://github.com/MickLesk) ([#11734](https://github.com/community-scripts/ProxmoxVE/pull/11734))
- PeaNUT: symlink server.js after update [@vhsdream](https://github.com/vhsdream) ([#11696](https://github.com/community-scripts/ProxmoxVE/pull/11696))
- Umlautadaptarr: use release appsettings.json instead of hardcoded copy [@MickLesk](https://github.com/MickLesk) ([#11725](https://github.com/community-scripts/ProxmoxVE/pull/11725))
- tracearr: prepare for next stable release [@durzo](https://github.com/durzo) ([#11673](https://github.com/community-scripts/ProxmoxVE/pull/11673))

- #### ✨ New Features

- remove whiptail from update scripts for unattended update support [@MickLesk](https://github.com/MickLesk) ([#11712](https://github.com/community-scripts/ProxmoxVE/pull/11712))

- #### 🔧 Refactor

- Refactor: FileFlows [@tremor021](https://github.com/tremor021) ([#11108](https://github.com/community-scripts/ProxmoxVE/pull/11108))
- Refactor: wger [@MickLesk](https://github.com/MickLesk) ([#11722](https://github.com/community-scripts/ProxmoxVE/pull/11722))
- Nginx-UI: better User Handling | ACME [@MickLesk](https://github.com/MickLesk) ([#11715](https://github.com/community-scripts/ProxmoxVE/pull/11715))
- NginxProxymanager: use better-sqlite3 [@MickLesk](https://github.com/MickLesk) ([#11708](https://github.com/community-scripts/ProxmoxVE/pull/11708))

### 💾 Core

- #### 🔧 Refactor

- hwaccel: add libmfx-gen1.2 to Intel Arc setup for QSV support [@MickLesk](https://github.com/MickLesk) ([#11707](https://github.com/community-scripts/ProxmoxVE/pull/11707))

### 🧰 Tools

- #### 🐞 Bug Fixes

- addons: ensure curl is installed before use [@MickLesk](https://github.com/MickLesk) ([#11718](https://github.com/community-scripts/ProxmoxVE/pull/11718))
- Netbird (addon): add systemd ordering to start after Docker [@MickLesk](https://github.com/MickLesk) ([#11716](https://github.com/community-scripts/ProxmoxVE/pull/11716))

### ❔ Uncategorized

- Bichon: Update website [@tremor021](https://github.com/tremor021) ([#11711](https://github.com/community-scripts/ProxmoxVE/pull/11711))

## 2026-02-08

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- feat(healthchecks): add sendalerts service [@Mika56](https://github.com/Mika56) ([#11694](https://github.com/community-scripts/ProxmoxVE/pull/11694))
- ComfyUI: Dynamic Fetch PyTorch Versions [@MickLesk](https://github.com/MickLesk) ([#11657](https://github.com/community-scripts/ProxmoxVE/pull/11657))

- #### 💥 Breaking Changes

- Semaphore: switch from Debian to Ubuntu 24.04 [@MickLesk](https://github.com/MickLesk) ([#11670](https://github.com/community-scripts/ProxmoxVE/pull/11670))

## 2026-02-07

### 🆕 New Scripts

- Checkmate ([#11672](https://github.com/community-scripts/ProxmoxVE/pull/11672))
- Bichon ([#11671](https://github.com/community-scripts/ProxmoxVE/pull/11671))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- NocoDB: pin to v0.301.1 [@MickLesk](https://github.com/MickLesk) ([#11655](https://github.com/community-scripts/ProxmoxVE/pull/11655))
- Pin Memos to v0.25.3 - last version with release binaries [@MickLesk](https://github.com/MickLesk) ([#11658](https://github.com/community-scripts/ProxmoxVE/pull/11658))
- Downgrade: OpenProject | NginxProxyManager | Semaphore to Debian 12 due to persistent SHA1 issues [@MickLesk](https://github.com/MickLesk) ([#11654](https://github.com/community-scripts/ProxmoxVE/pull/11654))

### 💾 Core

- #### ✨ New Features

- tools: fallback to previous release when asset is missing [@MickLesk](https://github.com/MickLesk) ([#11660](https://github.com/community-scripts/ProxmoxVE/pull/11660))

### 📚 Documentation

- fix(setup): correctly auto-detect username when using --full [@ls-root](https://github.com/ls-root) ([#11650](https://github.com/community-scripts/ProxmoxVE/pull/11650))

### 🌐 Website

- #### ✨ New Features

- feat(frontend): add JSON script import functionality [@ls-root](https://github.com/ls-root) ([#11563](https://github.com/community-scripts/ProxmoxVE/pull/11563))

## 2026-02-06

### 🆕 New Scripts

- Nightscout ([#11621](https://github.com/community-scripts/ProxmoxVE/pull/11621))
- PVE LXC Apps Updater [@MickLesk](https://github.com/MickLesk) ([#11533](https://github.com/community-scripts/ProxmoxVE/pull/11533))

### 🚀 Updated Scripts

- #### 🐞 Bug Fixes

- Immich: supress startup messages for immich-admin [@vhsdream](https://github.com/vhsdream) ([#11635](https://github.com/community-scripts/ProxmoxVE/pull/11635))
- Semaphore: Change Ubuntu release from 'jammy' to 'noble' [@MickLesk](https://github.com/MickLesk) ([#11625](https://github.com/community-scripts/ProxmoxVE/pull/11625))
- Pangolin: replace build:sqlite with db:generate + build [@MickLesk](https://github.com/MickLesk) ([#11616](https://github.com/community-scripts/ProxmoxVE/pull/11616))
- [FIX] OpenCloud: path issues [@vhsdream](https://github.com/vhsdream) ([#11593](https://github.com/community-scripts/ProxmoxVE/pull/11593))
- [FIX] Homepage: preserve public/images & public/icons if they exist [@vhsdream](https://github.com/vhsdream) ([#11594](https://github.com/community-scripts/ProxmoxVE/pull/11594))

- #### ✨ New Features

- Shelfmark: remove Chromedriver dep, add URL_BASE env [@vhsdream](https://github.com/vhsdream) ([#11619](https://github.com/community-scripts/ProxmoxVE/pull/11619))
- Immich: pin to v2.5.5 [@vhsdream](https://github.com/vhsdream) ([#11598](https://github.com/community-scripts/ProxmoxVE/pull/11598))

- #### 🔧 Refactor

- refactor: homepage [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11605](https://github.com/community-scripts/ProxmoxVE/pull/11605))

### 💾 Core

- #### 🐞 Bug Fixes

- fix(core): spinner misalignment [@ls-root](https://github.com/ls-root) ([#11627](https://github.com/community-scripts/ProxmoxVE/pull/11627))

- #### 🔧 Refactor

- [Fix] build.func: QOL grammar adjustment for Creating LXC message [@vhsdream](https://github.com/vhsdream) ([#11633](https://github.com/community-scripts/ProxmoxVE/pull/11633))

### 📚 Documentation

- [gh] Update to the New Script request template [@tremor021](https://github.com/tremor021) ([#11612](https://github.com/community-scripts/ProxmoxVE/pull/11612))

### 🌐 Website

- #### 📝 Script Information

- Update LXC App Updater JSON to reflect tag override option [@vhsdream](https://github.com/vhsdream) ([#11626](https://github.com/community-scripts/ProxmoxVE/pull/11626))

### ❔ Uncategorized

- Opencloud: fix JSON [@vhsdream](https://github.com/vhsdream) ([#11617](https://github.com/community-scripts/ProxmoxVE/pull/11617))
**ct/alpine-ntfy.sh** (new file, 50 lines)

```bash
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: cobalt (cobaltgit)
# License: MIT | https://github.com/community-scripts/ProxmoxVED/raw/main/LICENSE
# Source: https://ntfy.sh/

APP="Alpine-ntfy"
var_tags="${var_tags:-notification}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-256}"
var_disk="${var_disk:-2}"
var_os="${var_os:-alpine}"
var_version="${var_version:-3.23}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources
  if [[ ! -d /etc/ntfy ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  msg_info "Updating ntfy LXC"
  $STD apk -U upgrade
  setcap 'cap_net_bind_service=+ep' /usr/bin/ntfy
  msg_ok "Updated ntfy LXC"

  msg_info "Restarting ntfy"
  rc-service ntfy restart
  msg_ok "Restarted ntfy"
  msg_ok "Updated successfully!"
  exit
}

start
build_container
description

msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"
```
**ct/anytype-server.sh** (new file, 67 lines)

```bash
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://anytype.io

APP="Anytype-Server"
var_tags="${var_tags:-notes;productivity;sync}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-16}"
var_os="${var_os:-ubuntu}"
var_version="${var_version:-24.04}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources

  if [[ ! -f /opt/anytype/any-sync-bundle ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  if check_for_gh_release "anytype" "grishy/any-sync-bundle"; then
    msg_info "Stopping Service"
    systemctl stop anytype
    msg_ok "Stopped Service"

    msg_info "Backing up Data"
    cp -r /opt/anytype/data /opt/anytype_data_backup
    msg_ok "Backed up Data"

    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "anytype" "grishy/any-sync-bundle" "prebuild" "latest" "/opt/anytype" "any-sync-bundle_*_linux_amd64.tar.gz"
    chmod +x /opt/anytype/any-sync-bundle

    msg_info "Restoring Data"
    cp -r /opt/anytype_data_backup/. /opt/anytype/data
    rm -rf /opt/anytype_data_backup
    msg_ok "Restored Data"

    msg_info "Starting Service"
    systemctl start anytype
    msg_ok "Started Service"
    msg_ok "Updated successfully!"
  fi
  exit
}

start
build_container
description

msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:33010${CL}"
echo -e "${INFO}${YW} Client config file:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}/opt/anytype/data/client-config.yml${CL}"
```
```diff
@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
 # Copyright (c) 2021-2026 community-scripts ORG
-# Author: DragoQC
+# Author: DragoQC | Co-Author: nickheyer
 # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
 # Source: https://discopanel.app/ | Github: https://github.com/nickheyer/discopanel
@@ -38,34 +38,15 @@ function update_script() {
 msg_info "Creating Backup"
 mkdir -p /opt/discopanel_backup_temp
-cp -r /opt/discopanel/data/discopanel.db \
-  /opt/discopanel/data/.recovery_key \
-  /opt/discopanel_backup_temp/
-if [[ -d /opt/discopanel/data/servers ]]; then
-  cp -r /opt/discopanel/data/servers /opt/discopanel_backup_temp/
-fi
+cp /opt/discopanel/data/discopanel.db /opt/discopanel_backup_temp/discopanel.db
 msg_ok "Created Backup"

-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "discopanel" "nickheyer/discopanel" "tarball" "latest" "/opt/discopanel"
+fetch_and_deploy_gh_release "discopanel" "nickheyer/discopanel" "prebuild" "latest" "/opt/discopanel" "discopanel-linux-amd64.tar.gz"
+ln -sf /opt/discopanel/discopanel-linux-amd64 /opt/discopanel/discopanel
-msg_info "Setting up DiscoPanel"
-cd /opt/discopanel
-$STD make gen
-cd /opt/discopanel/web/discopanel
-$STD npm install
-$STD npm run build
-msg_ok "Built Web Interface"

-setup_go

-msg_info "Building DiscoPanel"
-cd /opt/discopanel
-$STD go build -o discopanel cmd/discopanel/main.go
-msg_ok "Built DiscoPanel"

 msg_info "Restoring Data"
 mkdir -p /opt/discopanel/data
-cp -a /opt/discopanel_backup_temp/. /opt/discopanel/data/
+mv /opt/discopanel_backup_temp/discopanel.db /opt/discopanel/data/discopanel.db
 rm -rf /opt/discopanel_backup_temp
 msg_ok "Restored Data"
```
**ct/gluetun.sh** (new file, 61 lines)

```bash
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/qdm12/gluetun

APP="Gluetun"
var_tags="${var_tags:-vpn;wireguard;openvpn}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-2048}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
var_tun="${var_tun:-yes}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources

  if [[ ! -f /usr/local/bin/gluetun ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  if check_for_gh_release "gluetun" "qdm12/gluetun"; then
    msg_info "Stopping Service"
    systemctl stop gluetun
    msg_ok "Stopped Service"

    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "gluetun" "qdm12/gluetun" "tarball"

    msg_info "Building Gluetun"
    cd /opt/gluetun
    $STD go mod download
    CGO_ENABLED=0 $STD go build -trimpath -ldflags="-s -w" -o /usr/local/bin/gluetun ./cmd/gluetun/
    msg_ok "Built Gluetun"

    msg_info "Starting Service"
    systemctl start gluetun
    msg_ok "Started Service"
    msg_ok "Updated successfully!"
  fi
  exit
}

start
build_container
description

msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8000${CL}"
```
**ct/headers/alpine-ntfy** (new file, 6 lines)

```text
___ __ _ __ ____
/ | / /___ (_)___ ___ ____ / /_/ __/_ __
/ /| | / / __ \/ / __ \/ _ \______/ __ \/ __/ /_/ / / /
/ ___ |/ / /_/ / / / / / __/_____/ / / / /_/ __/ /_/ /
/_/ |_/_/ .___/_/_/ /_/\___/ /_/ /_/\__/_/ \__, /
/_/ /____/
```

**ct/headers/anytype-server** (new file, 6 lines)

```text
___ __ _____
/ | ____ __ __/ /___ ______ ___ / ___/___ ______ _____ _____
/ /| | / __ \/ / / / __/ / / / __ \/ _ \______\__ \/ _ \/ ___/ | / / _ \/ ___/
/ ___ |/ / / / /_/ / /_/ /_/ / /_/ / __/_____/__/ / __/ / | |/ / __/ /
/_/ |_/_/ /_/\__, /\__/\__, / .___/\___/ /____/\___/_/ |___/\___/_/
/____/ /____/_/
```

**ct/headers/gluetun** (new file, 6 lines)

```text
________ __
/ ____/ /_ _____ / /___ ______
/ / __/ / / / / _ \/ __/ / / / __ \
/ /_/ / / /_/ / __/ /_/ /_/ / / / /
\____/_/\__,_/\___/\__/\__,_/_/ /_/
```

**ct/headers/split-pro** (new file, 6 lines)

```text
_____ ___ __ ____
/ ___/____ / (_) /_ / __ \_________
\__ \/ __ \/ / / __/_____/ /_/ / ___/ __ \
___/ / /_/ / / / /_/_____/ ____/ / / /_/ /
/____/ .___/_/_/\__/ /_/ /_/ \____/
/_/
```

**ct/headers/yamtrack** (new file, 6 lines)

```text
__ __ __ __
\ \/ /___ _____ ___ / /__________ ______/ /__
\ / __ `/ __ `__ \/ __/ ___/ __ `/ ___/ //_/
/ / /_/ / / / / / / /_/ / / /_/ / /__/ ,<
/_/\__,_/_/ /_/ /_/\__/_/ \__,_/\___/_/|_|
```
**ct/immich.sh** (39 changed lines)

```diff
@@ -76,7 +76,7 @@ EOF
 SOURCE_DIR=${STAGING_DIR}/image-source
 cd /tmp
 if [[ -f ~/.intel_version ]]; then
-curl -fsSLO https://raw.githubusercontent.com/immich-app/immich/refs/heads/main/machine-learning/Dockerfile
+curl_with_retry "https://raw.githubusercontent.com/immich-app/immich/refs/heads/main/machine-learning/Dockerfile" "Dockerfile"
 readarray -t INTEL_URLS < <(
 sed -n "/intel-[igc|opencl]/p" ./Dockerfile | awk '{print $3}'
 sed -n "/libigdgmm12/p" ./Dockerfile | awk '{print $3}'
@@ -85,7 +85,7 @@ EOF
 if [[ "$INTEL_RELEASE" != "$(cat ~/.intel_version)" ]]; then
 msg_info "Updating Intel iGPU dependencies"
 for url in "${INTEL_URLS[@]}"; do
-curl -fsSLO "$url"
+curl_with_retry "$url" "$(basename "$url")"
 done
 $STD apt-mark unhold libigdgmm12
 $STD apt install -y --allow-downgrades ./libigdgmm12*.deb
@@ -109,7 +109,7 @@ EOF
 msg_ok "Image-processing libraries up to date"
 fi

-RELEASE="2.5.6"
+RELEASE="v2.5.6"
 if check_for_gh_release "Immich" "immich-app/immich" "${RELEASE}"; then
 if [[ $(cat ~/.immich) > "2.5.1" ]]; then
 msg_info "Enabling Maintenance Mode"
```
@@ -133,7 +133,7 @@ EOF
|
|||||||
$STD sudo -u postgres psql -d immich -c "REINDEX INDEX face_index;"
|
$STD sudo -u postgres psql -d immich -c "REINDEX INDEX face_index;"
|
||||||
$STD sudo -u postgres psql -d immich -c "REINDEX INDEX clip_index;"
|
$STD sudo -u postgres psql -d immich -c "REINDEX INDEX clip_index;"
|
||||||
fi
|
fi
|
||||||
ensure_dependencies ccache
|
ensure_dependencies ccache gcc-13 g++-13
|
||||||
|
|
||||||
INSTALL_DIR="/opt/${APP}"
|
INSTALL_DIR="/opt/${APP}"
|
||||||
UPLOAD_DIR="$(sed -n '/^IMMICH_MEDIA_LOCATION/s/[^=]*=//p' /opt/immich/.env)"
|
UPLOAD_DIR="$(sed -n '/^IMMICH_MEDIA_LOCATION/s/[^=]*=//p' /opt/immich/.env)"
|
||||||
@@ -165,8 +165,8 @@ EOF
|
|||||||
)
|
)
|
||||||
|
|
||||||
setup_uv
|
setup_uv
|
||||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v${RELEASE}" "$SRC_DIR"
|
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "${RELEASE}" "$SRC_DIR"
|
||||||
PNPM_VERSION="$(jq -r '.packageManager | split("@")[1]' ${SRC_DIR}/package.json)"
|
PNPM_VERSION="$(jq -r '.packageManager | split("@")[1] | split("+")[0]' ${SRC_DIR}/package.json)"
|
||||||
NODE_VERSION="24" NODE_MODULE="pnpm@${PNPM_VERSION}" setup_nodejs
|
NODE_VERSION="24" NODE_MODULE="pnpm@${PNPM_VERSION}" setup_nodejs
|
||||||
|
|
||||||
msg_info "Updating Immich web and microservices"
|
msg_info "Updating Immich web and microservices"
|
||||||
@@ -217,15 +217,36 @@ EOF
|
|||||||
chown -R immich:immich "$INSTALL_DIR"
|
chown -R immich:immich "$INSTALL_DIR"
|
||||||
chown immich:immich ./uv.lock
|
chown immich:immich ./uv.lock
|
||||||
export VIRTUAL_ENV="${ML_DIR}"/ml-venv
|
export VIRTUAL_ENV="${ML_DIR}"/ml-venv
|
||||||
|
export UV_HTTP_TIMEOUT=300
|
||||||
if [[ -f ~/.openvino ]]; then
|
if [[ -f ~/.openvino ]]; then
|
||||||
|
ML_PYTHON="python3.13"
|
||||||
|
msg_info "Pre-installing Python ${ML_PYTHON} for machine-learning"
|
||||||
|
for attempt in $(seq 1 3); do
|
||||||
|
$STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv python install "${ML_PYTHON}" && break
|
||||||
|
[[ $attempt -lt 3 ]] && msg_warn "Python download attempt $attempt failed, retrying..." && sleep 5
|
||||||
|
done
|
||||||
|
msg_ok "Pre-installed Python ${ML_PYTHON}"
|
||||||
msg_info "Updating HW-accelerated machine-learning"
|
msg_info "Updating HW-accelerated machine-learning"
|
||||||
$STD uv add --no-sync --optional openvino onnxruntime-openvino==1.24.1 --active -n -p python3.13 --managed-python
|
$STD uv add --no-sync --optional openvino onnxruntime-openvino==1.24.1 --active -n -p "${ML_PYTHON}" --managed-python
|
||||||
$STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv sync --extra openvino --no-dev --active --link-mode copy -n -p python3.13 --managed-python
|
for attempt in $(seq 1 3); do
|
||||||
|
$STD sudo --preserve-env=VIRTUAL_ENV,UV_HTTP_TIMEOUT -nu immich uv sync --extra openvino --no-dev --active --link-mode copy -n -p "${ML_PYTHON}" --managed-python && break
|
||||||
|
[[ $attempt -lt 3 ]] && msg_warn "uv sync attempt $attempt failed, retrying..." && sleep 10
|
||||||
|
done
|
||||||
patchelf --clear-execstack "${VIRTUAL_ENV}/lib/python3.13/site-packages/onnxruntime/capi/onnxruntime_pybind11_state.cpython-313-x86_64-linux-gnu.so"
|
patchelf --clear-execstack "${VIRTUAL_ENV}/lib/python3.13/site-packages/onnxruntime/capi/onnxruntime_pybind11_state.cpython-313-x86_64-linux-gnu.so"
|
||||||
msg_ok "Updated HW-accelerated machine-learning"
|
msg_ok "Updated HW-accelerated machine-learning"
|
||||||
else
|
else
|
||||||
|
ML_PYTHON="python3.11"
|
||||||
|
msg_info "Pre-installing Python ${ML_PYTHON} for machine-learning"
|
||||||
|
for attempt in $(seq 1 3); do
|
||||||
|
$STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv python install "${ML_PYTHON}" && break
|
||||||
|
[[ $attempt -lt 3 ]] && msg_warn "Python download attempt $attempt failed, retrying..." && sleep 5
|
||||||
|
done
|
||||||
|
msg_ok "Pre-installed Python ${ML_PYTHON}"
|
||||||
msg_info "Updating machine-learning"
|
msg_info "Updating machine-learning"
|
||||||
$STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv sync --extra cpu --no-dev --active --link-mode copy -n -p python3.11 --managed-python
|
for attempt in $(seq 1 3); do
|
||||||
|
$STD sudo --preserve-env=VIRTUAL_ENV,UV_HTTP_TIMEOUT -nu immich uv sync --extra cpu --no-dev --active --link-mode copy -n -p "${ML_PYTHON}" --managed-python && break
|
||||||
|
[[ $attempt -lt 3 ]] && msg_warn "uv sync attempt $attempt failed, retrying..." && sleep 10
|
||||||
|
done
|
||||||
msg_ok "Updated machine-learning"
|
msg_ok "Updated machine-learning"
|
||||||
fi
|
fi
|
||||||
cd "$SRC_DIR"
|
cd "$SRC_DIR"
|
||||||
|
|||||||
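The three-attempt loops added to the Immich machine-learning update follow a common shell pattern: try, break on success, warn and sleep otherwise. As a hypothetical generalization (the `retry` helper below is not part of this repo, just a sketch of the same pattern factored into a function):

```shell
#!/usr/bin/env bash
# Hypothetical helper, not from the repo: retry a command up to $1 times
# with a $2-second delay between attempts, mirroring the
# "for attempt in $(seq 1 3)" loops in the diff above.
retry() {
  local max=$1 delay=$2
  shift 2
  local attempt
  for attempt in $(seq 1 "$max"); do
    "$@" && return 0
    [[ $attempt -lt $max ]] && echo "attempt $attempt failed, retrying..." && sleep "$delay"
  done
  return 1
}

# Demo: a command that only succeeds on its third invocation.
count_file=$(mktemp)
echo 0 >"$count_file"
flaky() {
  local n
  n=$(<"$count_file")
  n=$((n + 1))
  echo "$n" >"$count_file"
  [[ $n -ge 3 ]]
}
retry 3 0 flaky && echo "succeeded after $(<"$count_file") attempts"
```

The repo's inline loops additionally route output through `$STD` and `msg_warn`; this sketch keeps only the control flow.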
@@ -30,9 +30,14 @@ function update_script() {
 fi
 setup_mariadb

-msg_info "Updating LXC"
+msg_info "Updating ITSM-NG"
 $STD apt update
 $STD apt -y upgrade
+chown -R www-data:www-data /var/lib/itsm-ng
+mkdir -p /usr/share/itsm-ng/css/palettes
+chown -R www-data:www-data /usr/share/itsm-ng/css
+chown -R www-data:www-data /usr/share/itsm-ng/css_compiled
+chown www-data:www-data /etc/itsm-ng/config_db.php
 msg_ok "Updated successfully!"
 exit
 }
@@ -39,14 +39,23 @@ function update_script() {
 msg_ok "Updated Intel Dependencies"
 fi

+msg_info "Setting up Jellyfin Repository"
+setup_deb822_repo \
+"jellyfin" \
+"https://repo.jellyfin.org/jellyfin_team.gpg.key" \
+"https://repo.jellyfin.org/$(get_os_info id)" \
+"$(get_os_info codename)"
+msg_ok "Set up Jellyfin Repository"

 msg_info "Updating Jellyfin"
 ensure_dependencies libjemalloc2
 if [[ ! -f /usr/lib/libjemalloc.so ]]; then
 ln -sf /usr/lib/x86_64-linux-gnu/libjemalloc.so.2 /usr/lib/libjemalloc.so
 fi
-$STD apt update
 $STD apt -y upgrade
-$STD apt -y --with-new-pkgs upgrade jellyfin jellyfin-server
+$STD apt -y --with-new-pkgs upgrade jellyfin jellyfin-server jellyfin-ffmpeg7
+ln -sf /usr/lib/jellyfin-ffmpeg/ffmpeg /usr/bin/ffmpeg
+ln -sf /usr/lib/jellyfin-ffmpeg/ffprobe /usr/bin/ffprobe
 msg_ok "Updated Jellyfin"
 msg_ok "Updated successfully!"
 exit
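`setup_deb822_repo` is a shared helper whose definition is outside this diff. Assuming it writes a standard deb822-style `.sources` stanza (the Debian format with `Types`/`URIs`/`Suites`/`Components`/`Signed-By` fields), the resulting file for Jellyfin would look roughly like the sketch below; the exact paths, suite name, and keyring location are assumptions, and the sketch writes to a temp file so it is safe to run:

```shell
# Illustrative deb822 stanza, NOT the helper's actual output: field values
# (suite "bookworm", keyring path) are placeholders for what
# setup_deb822_repo plausibly fills in from its arguments.
sources_file=$(mktemp)
cat <<'EOF' >"$sources_file"
Types: deb
URIs: https://repo.jellyfin.org/debian
Suites: bookworm
Components: main
Signed-By: /usr/share/keyrings/jellyfin.gpg
EOF
grep -q '^Types: deb' "$sources_file" && echo "deb822 stanza written"
```

Compared to a one-line `sources.list` entry, the deb822 form keeps the signing key binding (`Signed-By`) in the same stanza as the repository definition.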
@@ -23,16 +23,17 @@ function update_script() {
 header_info
 check_container_storage
 check_container_resources
+RELEASE="0.301.1"
 if [[ ! -f /etc/systemd/system/nocodb.service ]]; then
 msg_error "No ${APP} Installation Found!"
 exit
 fi
-if check_for_gh_release "nocodb" "nocodb/nocodb" "0.301.1"; then
+if check_for_gh_release "nocodb" "nocodb/nocodb" "${RELEASE}"; then
 msg_info "Stopping Service"
 systemctl stop nocodb
 msg_ok "Stopped Service"

-fetch_and_deploy_gh_release "nocodb" "nocodb/nocodb" "singlefile" "0.301.1" "/opt/nocodb/" "Noco-linux-x64"
+fetch_and_deploy_gh_release "nocodb" "nocodb/nocodb" "singlefile" "${RELEASE}" "/opt/nocodb/" "Noco-linux-x64"

 msg_info "Starting Service"
 systemctl start nocodb
@@ -8,7 +8,7 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
 APP="Paperless-ngx"
 var_tags="${var_tags:-document;management}"
 var_cpu="${var_cpu:-2}"
-var_ram="${var_ram:-2048}"
+var_ram="${var_ram:-3072}"
 var_disk="${var_disk:-12}"
 var_os="${var_os:-debian}"
 var_version="${var_version:-13}"
@@ -36,8 +36,9 @@ function update_script() {
 read -r
 fi

+RELEASE="v1.4.2"
 NODE_VERSION="24" setup_nodejs
-if check_for_gh_release "PatchMon" "PatchMon/PatchMon" "v1.4.2"; then
+if check_for_gh_release "PatchMon" "PatchMon/PatchMon" "${RELEASE}"; then
 msg_info "Stopping Service"
 systemctl stop patchmon-server
 msg_ok "Stopped Service"
@@ -47,7 +48,7 @@ function update_script() {
 cp /opt/patchmon/frontend/.env /opt/frontend.env
 msg_ok "Backup Created"

-CLEAN_INSTALL=1 fetch_and_deploy_gh_release "PatchMon" "PatchMon/PatchMon" "tarball" "v1.4.2" "/opt/patchmon"
+CLEAN_INSTALL=1 fetch_and_deploy_gh_release "PatchMon" "PatchMon/PatchMon" "tarball" "${RELEASE}" "/opt/patchmon"

 msg_info "Updating PatchMon"
 VERSION=$(get_latest_github_release "PatchMon/PatchMon")
@@ -23,18 +23,19 @@ function update_script() {
 header_info
 check_container_storage
 check_container_resources
+RELEASE="0.10.0"
 if [[ ! -d /opt/plant-it ]]; then
 msg_error "No ${APP} Installation Found!"
 exit
 fi
 setup_mariadb
-if check_for_gh_release "plant-it" "MDeLuise/plant-it"; then
+if check_for_gh_release "plant-it" "MDeLuise/plant-it" "${RELEASE}"; then
 msg_info "Stopping Service"
 systemctl stop plant-it
 msg_info "Stopped Service"

-USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "plant-it" "MDeLuise/plant-it" "singlefile" "0.10.0" "/opt/plant-it/backend" "server.jar"
+USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "plant-it" "MDeLuise/plant-it" "singlefile" "${RELEASE}" "/opt/plant-it/backend" "server.jar"
-fetch_and_deploy_gh_release "plant-it-front" "MDeLuise/plant-it" "prebuild" "0.10.0" "/opt/plant-it/frontend" "client.tar.gz"
+fetch_and_deploy_gh_release "plant-it-front" "MDeLuise/plant-it" "prebuild" "${RELEASE}" "/opt/plant-it/frontend" "client.tar.gz"
 msg_warn "Application is updated to latest Web version (v0.10.0). There will be no more updates available."
 msg_warn "Please read: https://github.com/MDeLuise/plant-it/releases/tag/1.0.0"
ct/plex.sh (20 changes)
@@ -46,11 +46,11 @@ function update_script() {
 "main"
 msg_ok "Migrated to new Plex repository"
 fi
-elif [[ -f /etc/apt/sources.list.d/plexmediaserver.list ]]; then
+elif compgen -G "/etc/apt/sources.list.d/plex*.list" >/dev/null; then
 msg_info "Migrating to new Plex repository (deb822)"
-rm -f /etc/apt/sources.list.d/plexmediaserver.list
+rm -f /etc/apt/sources.list.d/plex*.list
-rm -f /etc/apt/sources.list.d/plex*
 rm -f /usr/share/keyrings/PlexSign.asc
+rm -f /usr/share/keyrings/plexmediaserver.v2.gpg
 setup_deb822_repo \
 "plexmediaserver" \
 "https://downloads.plex.tv/plex-keys/PlexSign.v2.key" \
@@ -58,6 +58,15 @@ function update_script() {
 "public" \
 "main"
 msg_ok "Migrated to new Plex repository (deb822)"
+elif [[ ! -f /etc/apt/sources.list.d/plexmediaserver.sources ]]; then
+msg_info "Setting up Plex repository"
+setup_deb822_repo \
+"plexmediaserver" \
+"https://downloads.plex.tv/plex-keys/PlexSign.v2.key" \
+"https://repo.plex.tv/deb/" \
+"public" \
+"main"
+msg_ok "Set up Plex repository"
 fi
 if [[ -f /usr/local/bin/plexupdate ]] || [[ -d /opt/plexupdate ]]; then
 msg_info "Removing legacy plexupdate"
@@ -70,6 +79,11 @@ function update_script() {
 $STD apt update
 $STD apt install -y plexmediaserver
 msg_ok "Updated Plex Media Server"

+msg_info "Restarting Plex Media Server"
+systemctl restart plexmediaserver
+msg_ok "Restarted Plex Media Server"

 msg_ok "Updated successfully!"
 exit
 }
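The Plex migration check switches from `[[ -f ... ]]` to `compgen -G` because `[[ -f "/path/plex*.list" ]]` tests for a file literally named with an asterisk; the glob is not expanded inside `[[ ]]`. A minimal, self-contained demonstration of the difference (using a throwaway temp directory, not the real apt paths):

```shell
# [[ -f ]] treats a quoted glob as a literal filename and misses the match;
# compgen -G expands the pattern and succeeds if at least one file matches.
dir=$(mktemp -d)
touch "$dir/plexmediaserver.list"

if [[ -f "$dir/plex*.list" ]]; then
  echo "[[ -f ]] matched"
else
  echo "[[ -f ]] missed"        # this branch runs: no file is literally named plex*.list
fi

if compgen -G "$dir/plex*.list" >/dev/null; then
  echo "compgen matched"        # this runs: the glob expands to plexmediaserver.list
fi
```

This is why the new condition also catches renamed variants such as `plex.list`, not only the exact filename `plexmediaserver.list`.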
@@ -23,7 +23,7 @@ function update_script() {
 header_info
 check_container_storage
 check_container_resources
-if [[ ! -f /etc/systemd/system/homeassistant.service ]]; then
+if [[ ! -f /etc/containers/systemd/homeassistant.container ]]; then
 msg_error "No ${APP} Installation Found!"
 exit
 fi
@@ -9,7 +9,7 @@ APP="PVE-Scripts-Local"
 var_tags="${var_tags:-pve-scripts-local}"
 var_cpu="${var_cpu:-2}"
 var_ram="${var_ram:-4096}"
-var_disk="${var_disk:-4}"
+var_disk="${var_disk:-10}"
 var_os="${var_os:-debian}"
 var_version="${var_version:-13}"
 var_unprivileged="${var_unprivileged:-1}"
ct/reitti.sh (34 changes)
@@ -89,17 +89,49 @@ EOF
 msg_ok "Started Service"
 msg_ok "Updated successfully!"
 fi

 if check_for_gh_release "photon" "komoot/photon"; then
+if [[ -f "$HOME/.photon" ]] && [[ "$(cat "$HOME/.photon")" == 0.7 ]]; then
+CURRENT_VERSION="$(<"$HOME/.photon")"
+echo
+echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
+echo "Photon v1 upgrade detected (breaking change)"
+echo
+echo "Your current version: $CURRENT_VERSION"
+echo
+echo "Photon v1 requires a manual migration before updating."
+echo
+echo "You need to:"
+echo " 1. Remove existing geocoding data (not actual reitti data):"
+echo " rm -rf /opt/photon_data"
+echo
+echo " 2. Follow the initial setup guide again:"
+echo " https://github.com/community-scripts/ProxmoxVE/discussions/8737"
+echo
+echo " 3. Re-download and import Photon data for v1"
+echo
+read -rp "Do you want to continue anyway? (y/N): " CONTINUE
+echo
+
+if [[ ! "$CONTINUE" =~ ^[Yy]$ ]]; then
+msg_info "Migration required. Update cancelled."
+exit 0
+fi
+
+msg_warn "Continuing without migration may break Photon in the future!"
+fi
+
 msg_info "Stopping Service"
 systemctl stop photon
 msg_ok "Stopped Service"

 rm -f /opt/photon/photon.jar
-USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "photon" "komoot/photon" "singlefile" "latest" "/opt/photon" "photon-0*.jar"
+USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "photon" "komoot/photon" "singlefile" "latest" "/opt/photon" "photon-*.jar"
 mv /opt/photon/photon-*.jar /opt/photon/photon.jar

 msg_info "Starting Service"
 systemctl start photon
+systemctl restart nginx
 msg_ok "Started Service"
 msg_ok "Updated successfully!"
 fi
@@ -128,6 +128,8 @@ EOF

 CLEAN_INSTALL=1 fetch_and_deploy_gh_release "seerr" "seerr-team/seerr" "tarball"

+ensure_dependencies build-essential python3-setuptools

 msg_info "Updating PNPM Version"
 pnpm_desired=$(grep -Po '"pnpm":\s*"\K[^"]+' /opt/seerr/package.json)
 NODE_VERSION="22" NODE_MODULE="pnpm@$pnpm_desired" setup_nodejs
@@ -51,7 +51,7 @@ function update_script() {

 msg_info "Updating Sparky Fitness Backend"
 cd /opt/sparkyfitness/SparkyFitnessServer
-$STD npm install
+$STD pnpm install
 msg_ok "Updated Sparky Fitness Backend"

 msg_info "Updating Sparky Fitness Frontend (Patience)"
@@ -62,6 +62,27 @@ function update_script() {
 cp -a /opt/sparkyfitness/SparkyFitnessFrontend/dist/. /var/www/sparkyfitness/
 msg_ok "Updated Sparky Fitness Frontend"

+msg_info "Refreshing SparkyFitness Service"
+cat <<EOF >/etc/systemd/system/sparkyfitness-server.service
+[Unit]
+Description=SparkyFitness Backend Service
+After=network.target postgresql.service
+Requires=postgresql.service
+
+[Service]
+Type=simple
+WorkingDirectory=/opt/sparkyfitness/SparkyFitnessServer
+EnvironmentFile=/etc/sparkyfitness/.env
+ExecStart=/opt/sparkyfitness/SparkyFitnessServer/node_modules/.bin/tsx SparkyFitnessServer.js
+Restart=always
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+systemctl daemon-reload
+msg_ok "Refreshed SparkyFitness Service"
+
 msg_info "Restoring data"
 cp -r /opt/sparkyfitness_backup/. /opt/sparkyfitness/SparkyFitnessServer/
 rm -rf /opt/sparkyfitness_backup
ct/split-pro.sh (new file, 68 lines)
@@ -0,0 +1,68 @@
+#!/usr/bin/env bash
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
+
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: johanngrobe
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+# Source: https://github.com/oss-apps/split-pro
+
+APP="Split-Pro"
+var_tags="${var_tags:-finance;expense-sharing}"
+var_cpu="${var_cpu:-2}"
+var_ram="${var_ram:-4096}"
+var_disk="${var_disk:-6}"
+var_os="${var_os:-debian}"
+var_version="${var_version:-13}"
+var_unprivileged="${var_unprivileged:-1}"
+
+variables
+color
+catch_errors
+
+function update_script() {
+header_info
+check_container_storage
+check_container_resources
+
+if [[ ! -d /opt/split-pro ]]; then
+msg_error "No Split Pro Installation Found!"
+exit
+fi
+
+if check_for_gh_release "split-pro" "oss-apps/split-pro"; then
+msg_info "Stopping Service"
+systemctl stop split-pro
+msg_ok "Stopped Service"
+
+msg_info "Backing up Data"
+cp /opt/split-pro/.env /opt/split-pro.env
+msg_ok "Backed up Data"
+
+CLEAN_INSTALL=1 fetch_and_deploy_gh_release "split-pro" "oss-apps/split-pro" "tarball"
+
+msg_info "Building Application"
+cd /opt/split-pro
+$STD pnpm install --frozen-lockfile
+$STD pnpm build
+cp /opt/split-pro.env /opt/split-pro/.env
+rm -f /opt/split-pro.env
+ln -sf /opt/split-pro_data/uploads /opt/split-pro/uploads
+$STD pnpm exec prisma migrate deploy
+msg_ok "Built Application"
+
+msg_info "Starting Service"
+systemctl start split-pro
+msg_ok "Started Service"
+msg_ok "Updated successfully!"
+fi
+exit
+}
+
+start
+build_container
+description
+
+msg_ok "Completed successfully!\n"
+echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
+echo -e "${INFO}${YW} Access it using the following URL:${CL}"
+echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"
@@ -51,6 +51,7 @@ function update_script() {
 $STD source /opt/Tautulli/.venv/bin/activate
 $STD uv pip install -r requirements.txt
 $STD uv pip install pyopenssl
+$STD uv pip install "setuptools<81"
 msg_ok "Updated Tautulli"

 msg_info "Restoring config and database"
@@ -33,12 +33,16 @@ function update_script() {
 $STD apt upgrade -y
 rm -rf /opt/tdarr/Tdarr_Updater
 cd /opt/tdarr
-RELEASE=$(curl -fsSL https://f000.backblazeb2.com/file/tdarrs/versions.json | grep -oP '(?<="Tdarr_Updater": ")[^"]+' | grep linux_x64 | head -n 1)
+RELEASE=$(curl_with_retry "https://f000.backblazeb2.com/file/tdarrs/versions.json" "-" | grep -oP '(?<="Tdarr_Updater": ")[^"]+' | grep linux_x64 | head -n 1)
-curl -fsSL "$RELEASE" -o Tdarr_Updater.zip
+curl_with_retry "$RELEASE" "Tdarr_Updater.zip"
 $STD unzip Tdarr_Updater.zip
 chmod +x Tdarr_Updater
 $STD ./Tdarr_Updater
 rm -rf /opt/tdarr/Tdarr_Updater.zip
+[[ -f /opt/tdarr/Tdarr_Server/Tdarr_Server ]] || {
+msg_error "Tdarr_Updater failed — tdarr.io may be blocked by local DNS"
+exit 250
+}
 msg_ok "Updated Tdarr"
 msg_ok "Updated successfully!"
 exit
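`curl_with_retry`, used in the Tdarr and Immich diffs, is a shared helper whose definition is not shown here. Judging from its call sites (a URL plus an output filename, with `"-"` apparently meaning stdout), it plausibly looks like the sketch below. This is an assumption, not the repo's actual implementation; the retry behavior here is delegated to curl's own `--retry` flag, and the demo fetches a local `file://` URL so it runs offline:

```shell
# Hypothetical sketch of curl_with_retry, inferred from its call sites:
# arg 1 = URL, arg 2 = output file ("-" means write to stdout).
curl_with_retry() {
  local url=$1 out=$2
  if [[ $out == "-" ]]; then
    curl -fsSL --retry 3 --retry-delay 2 "$url"
  else
    curl -fsSL --retry 3 --retry-delay 2 -o "$out" "$url"
  fi
}

# Offline demo: copy a temp file through the helper via a file:// URL.
src=$(mktemp)
echo "hello" >"$src"
dst=$(mktemp)
curl_with_retry "file://$src" "$dst"
cat "$dst"
```

The real helper may add per-attempt logging via `msg_warn`, as the inline retry loops elsewhere in this changeset do.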
ct/termix.sh (116 changes)
@@ -29,10 +29,116 @@ function update_script() {
 exit
 fi

+if check_for_gh_tag "guacd" "apache/guacamole-server"; then
+msg_info "Stopping guacd"
+systemctl stop guacd 2>/dev/null || true
+msg_ok "Stopped guacd"
+
+ensure_dependencies \
+libcairo2-dev \
+libjpeg62-turbo-dev \
+libpng-dev \
+libtool-bin \
+uuid-dev \
+libvncserver-dev \
+freerdp3-dev \
+libssh2-1-dev \
+libtelnet-dev \
+libwebsockets-dev \
+libpulse-dev \
+libvorbis-dev \
+libwebp-dev \
+libssl-dev \
+libpango1.0-dev \
+libswscale-dev \
+libavcodec-dev \
+libavutil-dev \
+libavformat-dev
+
+msg_info "Updating Guacamole Server (guacd)"
+fetch_and_deploy_gh_tag "guacd" "apache/guacamole-server" "${CHECK_UPDATE_RELEASE}" "/opt/guacamole-server"
+cd /opt/guacamole-server
+export CPPFLAGS="-Wno-error=deprecated-declarations"
+$STD autoreconf -fi
+$STD ./configure --with-init-dir=/etc/init.d --enable-allow-freerdp-snapshots
+$STD make
+$STD make install
+$STD ldconfig
+cd /opt
+rm -rf /opt/guacamole-server
+msg_ok "Updated Guacamole Server (guacd) to ${CHECK_UPDATE_RELEASE}"
+
+if [[ ! -f /etc/guacamole/guacd.conf ]]; then
+mkdir -p /etc/guacamole
+cat <<EOF >/etc/guacamole/guacd.conf
+[server]
+bind_host = 127.0.0.1
+bind_port = 4822
+EOF
+fi
+
+if [[ ! -f /etc/systemd/system/guacd.service ]] || grep -q "Type=forking" /etc/systemd/system/guacd.service 2>/dev/null; then
+cat <<EOF >/etc/systemd/system/guacd.service
+[Unit]
+Description=Guacamole Proxy Daemon (guacd)
+After=network.target
+
+[Service]
+Type=simple
+ExecStart=/usr/local/sbin/guacd -f -b 127.0.0.1 -l 4822
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+fi
+
+if ! grep -q "guacd.service" /etc/systemd/system/termix.service 2>/dev/null; then
+sed -i '/^After=network.target/s/$/ guacd.service/' /etc/systemd/system/termix.service
+sed -i '/^\[Unit\]/a Wants=guacd.service' /etc/systemd/system/termix.service
+fi
+
+systemctl daemon-reload
+systemctl enable -q --now guacd
+fi
+
 if check_for_gh_release "termix" "Termix-SSH/Termix"; then
-msg_info "Stopping Service"
+msg_info "Stopping Termix"
 systemctl stop termix
-msg_ok "Stopped Service"
+msg_ok "Stopped Termix"
+
+msg_info "Migrating Configuration"
+if [[ ! -f /opt/termix/.env ]]; then
+cat <<EOF >/opt/termix/.env
+NODE_ENV=production
+DATA_DIR=/opt/termix/data
+GUACD_HOST=127.0.0.1
+GUACD_PORT=4822
+EOF
+fi
+if ! grep -q "EnvironmentFile" /etc/systemd/system/termix.service 2>/dev/null; then
+cat <<EOF >/etc/systemd/system/termix.service
+[Unit]
+Description=Termix Backend
+After=network.target guacd.service
+Wants=guacd.service
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/opt/termix
+EnvironmentFile=/opt/termix/.env
+ExecStart=/usr/bin/node /opt/termix/dist/backend/backend/starter.js
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+systemctl daemon-reload
+fi
+msg_ok "Migrated Configuration"
+
 msg_info "Backing up Data"
 cp -r /opt/termix/data /opt/termix_data_backup
@@ -95,16 +201,16 @@ function update_script() {
 sed -i 's|/app/html|/opt/termix/html|g' /etc/nginx/nginx.conf
 sed -i 's|/app/nginx|/opt/termix/nginx|g' /etc/nginx/nginx.conf
 sed -i 's|listen ${PORT};|listen 80;|g' /etc/nginx/nginx.conf

 nginx -t && systemctl reload nginx
 msg_ok "Updated Nginx Configuration"
 else
 msg_warn "Nginx configuration not updated. If Termix doesn't work, restore from backup or update manually."
 fi

-msg_info "Starting Service"
+msg_info "Starting Termix"
 systemctl start termix
-msg_ok "Started Service"
+msg_ok "Started Termix"
 msg_ok "Updated successfully!"
 fi
 exit
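The guacd wiring in the termix diff edits the existing termix.service in place with two `sed` calls: one appends `guacd.service` to the `After=` line, the other inserts a `Wants=` line directly under `[Unit]`. Run against a minimal mock unit file (a temp path, not the real service), the effect looks like this:

```shell
# Demonstrate the two sed edits from the diff on a throwaway unit file.
unit=$(mktemp)
cat <<'EOF' >"$unit"
[Unit]
Description=Termix Backend
After=network.target

[Service]
ExecStart=/usr/bin/true
EOF

# Append a dependency to the existing After= line...
sed -i '/^After=network.target/s/$/ guacd.service/' "$unit"
# ...and insert a Wants= line right after the [Unit] header.
sed -i '/^\[Unit\]/a Wants=guacd.service' "$unit"

grep -n 'guacd.service' "$unit"
```

`Wants=` makes systemd try to start guacd alongside termix without failing termix if guacd is missing, while the extended `After=` only orders startup; that combination matches the unit the script writes from scratch when no `EnvironmentFile` is present.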
@@ -46,7 +46,7 @@ function update_script() {

 msg_info "Updating Wishlist"
 cd /opt/wishlist
-$STD pnpm install
+$STD pnpm install --frozen-lockfile
 $STD pnpm svelte-kit sync
 $STD pnpm prisma generate
 sed -i 's|/usr/src/app/|/opt/wishlist/|g' $(grep -rl '/usr/src/app/' /opt/wishlist)
83
ct/yamtrack.sh
Normal file
83
ct/yamtrack.sh
Normal file
@@ -0,0 +1,83 @@
+#!/usr/bin/env bash
+source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
+
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: MickLesk (CanbiZ)
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+# Source: https://github.com/FuzzyGrim/Yamtrack
+
+APP="Yamtrack"
+var_tags="${var_tags:-media;tracker;movies;anime}"
+var_cpu="${var_cpu:-2}"
+var_ram="${var_ram:-2048}"
+var_disk="${var_disk:-8}"
+var_os="${var_os:-debian}"
+var_version="${var_version:-13}"
+var_unprivileged="${var_unprivileged:-1}"
+
+header_info "$APP"
+variables
+color
+catch_errors
+
+function update_script() {
+  header_info
+  check_container_storage
+  check_container_resources
+
+  if [[ ! -d /opt/yamtrack ]]; then
+    msg_error "No ${APP} Installation Found!"
+    exit
+  fi
+
+  if check_for_gh_release "yamtrack" "FuzzyGrim/Yamtrack"; then
+    msg_info "Stopping Services"
+    systemctl stop yamtrack yamtrack-celery
+    msg_ok "Stopped Services"
+
+    msg_info "Backing up Data"
+    cp /opt/yamtrack/src/.env /opt/yamtrack_env.bak
+    msg_ok "Backed up Data"
+
+    CLEAN_INSTALL=1 fetch_and_deploy_gh_release "yamtrack" "FuzzyGrim/Yamtrack" "tarball"
+
+    msg_info "Installing Python Dependencies"
+    cd /opt/yamtrack
+    $STD uv venv .venv
+    $STD uv pip install --no-cache-dir -r requirements.txt
+    msg_ok "Installed Python Dependencies"
+
+    msg_info "Restoring Data"
+    cp /opt/yamtrack_env.bak /opt/yamtrack/src/.env
+    rm -f /opt/yamtrack_env.bak
+    msg_ok "Restored Data"
+
+    msg_info "Updating Yamtrack"
+    cd /opt/yamtrack/src
+    $STD /opt/yamtrack/.venv/bin/python manage.py migrate
+    $STD /opt/yamtrack/.venv/bin/python manage.py collectstatic --noinput
+    msg_ok "Updated Yamtrack"
+
+    msg_info "Updating Nginx Configuration"
+    cp /opt/yamtrack/nginx.conf /etc/nginx/nginx.conf
+    sed -i 's|user abc;|user www-data;|' /etc/nginx/nginx.conf
+    sed -i 's|/yamtrack/staticfiles/|/opt/yamtrack/src/staticfiles/|' /etc/nginx/nginx.conf
+    $STD systemctl reload nginx
+    msg_ok "Updated Nginx Configuration"
+
+    msg_info "Starting Services"
+    systemctl start yamtrack yamtrack-celery
+    msg_ok "Started Services"
+    msg_ok "Updated successfully!"
+  fi
+  exit
+}
+
+start
+build_container
+description
+
+msg_ok "Completed Successfully!\n"
+echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
+echo -e "${INFO}${YW} Access it using the following URL:${CL}"
+echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8000${CL}"
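The Yamtrack update path above copies `.env` out of the application tree before `CLEAN_INSTALL=1 fetch_and_deploy_gh_release` replaces `/opt/yamtrack`, and copies it back afterwards; the ordering is what preserves user configuration across a clean redeploy. A minimal sketch of that backup-redeploy-restore ordering with plain directories (all paths below are stand-ins, not the helper's real behavior):

```shell
#!/usr/bin/env bash
# Sketch: back up config, clean-redeploy the app dir, restore config.
set -euo pipefail

app=/tmp/gr_app_demo/yamtrack
rm -rf /tmp/gr_app_demo && mkdir -p "$app/src"
printf 'SECRET_KEY=keepme\n' > "$app/src/.env"

# 1. Back up user config OUTSIDE the app dir (it is about to be deleted).
cp "$app/src/.env" /tmp/gr_app_demo/yamtrack_env.bak

# 2. Clean redeploy: the release fetch removes and recreates the app dir.
rm -rf "$app" && mkdir -p "$app/src"
printf 'new release payload\n' > "$app/src/app.py"

# 3. Restore config into the fresh tree, then drop the backup.
cp /tmp/gr_app_demo/yamtrack_env.bak "$app/src/.env"
rm -f /tmp/gr_app_demo/yamtrack_env.bak
```

Backing up into `/opt` (outside the deployed tree) rather than inside it is the essential detail: anything left under the app directory is lost at step 2.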
@@ -47,7 +47,7 @@ function update_script() {

 msg_info "Installing Python Dependencies"
 cd /opt/yubal
-$STD uv sync --no-dev --frozen
+$STD uv sync --package yubal-api --no-dev --frozen
 msg_ok "Installed Python Dependencies"

 msg_info "Starting Services"
@@ -1,4 +1,4 @@
 # 🤖 AI Contribution Guidelines for ProxmoxVE

 > **This documentation is intended for all AI assistants (GitHub Copilot, Claude, ChatGPT, etc.) contributing to this project.**

@@ -653,15 +653,15 @@ Look at these recent well-implemented applications as reference:
 - Use of `check_for_gh_release` and `fetch_and_deploy_gh_release`
 - Correct backup/restore patterns in `update_script`
 - Footer always ends with `motd_ssh`, `customize`, `cleanup_lxc`
-- JSON metadata files created for each app
+- Website metadata requested via the website (Report issue on script page) if needed

 ---

-## JSON Metadata Files
+## Website Metadata (Reference)

-Every application requires a JSON metadata file in `frontend/public/json/<appname>.json`.
+Website metadata (name, slug, description, logo, categories, etc.) is **not** added as files in the repo. Contributors request or update it via the **website**: go to the script's page and use the **Report issue** button; the flow will guide you. The structure below is a **reference** for what metadata exists (e.g. for the form or when describing what you need).

-### JSON Structure
+### JSON Structure (Reference)

 ```json
 {
@@ -804,7 +804,7 @@ Or no credentials:
 - [ ] `motd_ssh`, `customize`, `cleanup_lxc` at the end of install scripts
 - [ ] No custom download/version-check logic
 - [ ] All links point to `community-scripts/ProxmoxVE` (not `ProxmoxVED`!)
-- [ ] JSON metadata file created in `frontend/public/json/<appname>.json`
+- [ ] Website metadata requested via the website (Report issue) if needed
 - [ ] Category IDs are valid (0-25)
 - [ ] Default OS version is Debian 13 or newer (unless special requirement)
 - [ ] Default resources are reasonable for the application
@@ -832,15 +832,15 @@ Or no credentials:

 ## 🍒 Important: Cherry-Picking Your Files for PR Submission

-⚠️ **CRITICAL**: When you submit your PR, you must use git cherry-pick to send ONLY your 3-4 files!
+⚠️ **CRITICAL**: When you submit your PR, you must use git cherry-pick to send ONLY your 2 files!

 Why? Because `setup-fork.sh` modifies 600+ files to update links. If you commit all changes, your PR will be impossible to merge.

 **See**: [README.md - Cherry-Pick Section](README.md#-cherry-pick-submitting-only-your-changes) for complete instructions on:

 - Creating a clean submission branch
-- Cherry-picking only your files (ct/myapp.sh, install/myapp-install.sh, frontend/public/json/myapp.json)
-- Verifying your PR has only 3 file changes (not 600+)
+- Cherry-picking only your files (ct/myapp.sh, install/myapp-install.sh)
+- Verifying your PR has only 2 file changes (not 600+)

 **Quick reference**:

@@ -849,7 +849,7 @@ Why? Because `setup-fork.sh` modifies 600+ files to update links. If you commit
 git fetch upstream
 git checkout -b submit/myapp upstream/main

-# Cherry-pick your commit(s) or manually add your 3-4 files
+# Cherry-pick your commit(s) or manually add your 2 files
 # Then push to your fork and create PR
 ```

@@ -865,4 +865,4 @@ git checkout -b submit/myapp upstream/main
 - [../EXIT_CODES.md](../EXIT_CODES.md) - Exit code reference
 - [templates_ct/](templates_ct/) - CT script templates
 - [templates_install/](templates_install/) - Install script templates
-- [templates_json/](templates_json/) - JSON metadata templates
+- [templates_json/](templates_json/) - Metadata structure reference; submit via website
@@ -24,9 +24,9 @@ This guide explains the current execution flow and what to verify during reviews
 - Uses `tools.func` helpers (setup\_\*).
 - Ends with `motd_ssh`, `customize`, `cleanup_lxc`.

-### JSON Metadata
+### Website Metadata

-- File in `frontend/public/json/<appname>.json` matches template schema.
+- Website metadata for new/updated scripts is requested via the website (Report issue on script page) where applicable.

 ### Testing

@@ -19,7 +19,7 @@ These documents cover the coding standards for the following types of files in o

 - **`install/$AppName-install.sh` Scripts**: These scripts are responsible for the installation of applications.
 - **`ct/$AppName.sh` Scripts**: These scripts handle the creation and updating of containers.
-- **`json/$AppName.json`**: These files store structured data and are used for the website.
+- **Website metadata**: Display data (name, description, logo, etc.) is requested via the website (Report issue on the script page), not via files in the repo.

 Each section provides detailed guidelines on various aspects of coding, including shebang usage, comments, variable naming, function naming, indentation, error handling, command substitution, quoting, script structure, and logging. Additionally, examples are provided to illustrate the application of these standards.

@@ -110,7 +110,7 @@ git push origin your-feature-branch

 ### 6. Cherry-Pick: Submit Only Your Files for PR

-⚠️ **IMPORTANT**: setup-fork.sh modified 600+ files. You MUST only submit your 3 new files!
+⚠️ **IMPORTANT**: setup-fork.sh modified 600+ files. You MUST only submit your 2 new files!

 See [README.md - Cherry-Pick Guide](README.md#-cherry-pick-submitting-only-your-changes) for step-by-step instructions.

@@ -124,12 +124,11 @@ git checkout -b submit/myapp upstream/main
 # Copy only your files
 cp ../your-work-branch/ct/myapp.sh ct/myapp.sh
 cp ../your-work-branch/install/myapp-install.sh install/myapp-install.sh
-cp ../your-work-branch/frontend/public/json/myapp.json frontend/public/json/myapp.json

 # Commit and verify
-git add ct/myapp.sh install/myapp-install.sh frontend/public/json/myapp.json
+git add ct/myapp.sh install/myapp-install.sh
 git commit -m "feat: add MyApp"
-git diff upstream/main --name-only # Should show ONLY your 3 files
+git diff upstream/main --name-only # Should show ONLY your 2 files

 # Push and create PR
 git push origin submit/myapp
@@ -139,11 +138,10 @@ git push origin submit/myapp

 Open a Pull Request from `submit/myapp` → `community-scripts/ProxmoxVE/main`.

-Verify the PR shows ONLY these 3 files:
+Verify the PR shows ONLY these 2 files:

 - `ct/myapp.sh`
 - `install/myapp-install.sh`
-- `frontend/public/json/myapp.json`

 ---

@@ -175,4 +173,4 @@ dev_mode="trace,keep" bash -c "$(curl -fsSL https://raw.githubusercontent.com/co

 - [CT Template: AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_ct/AppName.sh)
 - [Install Template: AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_install/AppName-install.sh)
-- [JSON Template: AppName.json](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_json/AppName.json)
+- [JSON Template: AppName.json](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_json/AppName.json) — metadata structure reference; submit via the website (Report issue on script page)
@@ -54,15 +54,13 @@ git checkout -b add/my-awesome-app
 # 2. Create application scripts from templates
 cp docs/contribution/templates_ct/AppName.sh ct/myapp.sh
 cp docs/contribution/templates_install/AppName-install.sh install/myapp-install.sh
-cp docs/contribution/templates_json/AppName.json frontend/public/json/myapp.json

 # 3. Edit your scripts
 nano ct/myapp.sh
 nano install/myapp-install.sh
-nano frontend/public/json/myapp.json

 # 4. Commit and push to your fork
-git add ct/myapp.sh install/myapp-install.sh frontend/public/json/myapp.json
+git add ct/myapp.sh install/myapp-install.sh
 git commit -m "feat: add MyApp container and install scripts"
 git push origin add/my-awesome-app

@@ -74,6 +72,8 @@ bash -c "$(curl -fsSL https://raw.githubusercontent.com/YOUR_USERNAME/ProxmoxVE/

 # 7. Open Pull Request on GitHub
 # Create PR from: your-fork/add/my-awesome-app → community-scripts/ProxmoxVE/main

+# To add or change website metadata (description, logo, etc.), use the Report issue button on the script's page on the website.
 ```

 **💡 Tip**: See `../FORK_SETUP.md` for detailed fork setup and troubleshooting
@@ -50,15 +50,14 @@ git push origin feature/my-awesome-app
 bash -c "$(curl -fsSL https://raw.githubusercontent.com/YOUR_USERNAME/ProxmoxVE/main/ct/myapp.sh)"
 # ⏱️ GitHub may take 10-30 seconds to update files - be patient!

-# 8. Create your JSON metadata file
-cp docs/contribution/templates_json/AppName.json frontend/public/json/myapp.json
-# Edit metadata: name, slug, categories, description, resources, etc.
+# 8. Request website metadata via the website
+# Go to the script's page on the website, use the "Report issue" button — it will guide you.

 # 9. No direct install-script test
 # Install scripts are executed by the CT script inside the container

 # 10. Commit ONLY your new files (see Cherry-Pick section below!)
-git add ct/myapp.sh install/myapp-install.sh frontend/public/json/myapp.json
+git add ct/myapp.sh install/myapp-install.sh
 git commit -m "feat: add MyApp container and install scripts"
 git push origin feature/my-awesome-app

@@ -67,7 +66,7 @@ git push origin feature/my-awesome-app

 ⚠️ **IMPORTANT: After setup-fork.sh, many files are modified!**

-See the **Cherry-Pick: Submitting Only Your Changes** section below to learn how to push ONLY your 3-4 files instead of 600+ modified files!
+See the **Cherry-Pick: Submitting Only Your Changes** section below to learn how to push ONLY your 2 files instead of 600+ modified files!

 ### How Users Run Scripts (After Merged)

@@ -180,7 +179,7 @@ All scripts and configurations must follow our coding standards to ensure consis
 - **[HELPER_FUNCTIONS.md](HELPER_FUNCTIONS.md)** - Reference for all tools.func helper functions
 - **Container Scripts** - `/ct/` templates and guidelines
 - **Install Scripts** - `/install/` templates and guidelines
-- **JSON Configurations** - `frontend/public/json/` structure and format
+- **Website metadata** – Request via the website (Report issue on the script page); see [templates_json/AppName.md](templates_json/AppName.md)

 ### Quick Checklist

@@ -213,7 +212,7 @@ Key points:

 ## 🍒 Cherry-Pick: Submitting Only Your Changes

-**Problem**: `setup-fork.sh` modifies 600+ files to update links. You don't want to submit all of those changes - only your new 3-4 files!
+**Problem**: `setup-fork.sh` modifies 600+ files to update links. You don't want to submit all of those changes - only your new 2 files!

 **Solution**: Use git cherry-pick to select only YOUR files.

@@ -226,7 +225,7 @@ Key points:
 git status

 # Verify your files are there
-git status | grep -E "ct/myapp|install/myapp|json/myapp"
+git status | grep -E "ct/myapp|install/myapp"
 ```

 #### 2. Create a clean feature branch for submission
@@ -252,15 +251,13 @@ git cherry-pick <commit-hash-of-your-files>
 # From your work branch, get the file contents
 git show feature/my-awesome-app:ct/myapp.sh > /tmp/myapp.sh
 git show feature/my-awesome-app:install/myapp-install.sh > /tmp/myapp-install.sh
-git show feature/my-awesome-app:frontend/public/json/myapp.json > /tmp/myapp.json

 # Add them to the clean branch
 cp /tmp/myapp.sh ct/myapp.sh
 cp /tmp/myapp-install.sh install/myapp-install.sh
-cp /tmp/myapp.json frontend/public/json/myapp.json

 # Commit
-git add ct/myapp.sh install/myapp-install.sh frontend/public/json/myapp.json
+git add ct/myapp.sh install/myapp-install.sh
 git commit -m "feat: add MyApp"
 ```

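The `git show <branch>:<path> > file` commands in the hunk above copy single files out of the work branch without checking it out, which is what makes the clean submission branch possible. A self-contained demonstration in a throwaway repository (the branch and file names are the guide's own examples; the `/tmp` paths are illustrative):

```shell
#!/usr/bin/env bash
# Demonstrate `git show branch:path` for extracting one file from a branch.
set -euo pipefail

repo=/tmp/gr_git_demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
start=$(git symbolic-ref --short HEAD) # default branch name varies by git version

printf '# base\n' > README.md
git add README.md
git commit -qm "base"

# Work branch holding the file we want to cherry out.
git checkout -qb feature/my-awesome-app
printf 'echo myapp v1\n' > myapp.sh
git add myapp.sh
git commit -qm "feat: add myapp"

# Back on the clean branch: read the blob directly, no switch to the branch.
git checkout -q "$start"
git show feature/my-awesome-app:myapp.sh > /tmp/gr_git_demo_myapp.sh
```

`git show rev:path` resolves the path against that revision's tree, so it works even while a completely different branch is checked out.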
@@ -272,7 +269,6 @@ git diff upstream/main --name-only
 # Should show ONLY:
 # ct/myapp.sh
 # install/myapp-install.sh
-# frontend/public/json/myapp.json
 ```

 #### 5. Push and create PR
@@ -345,7 +341,8 @@ If you're using **Visual Studio Code** with an AI assistant, you can leverage ou
 Please follow the guidelines in docs/contribution/AI.md to create:
 1. ct/myapp.sh (container script)
 2. install/myapp-install.sh (installation script)
-3. frontend/public/json/myapp.json (metadata)
+
+Website listing/metadata is requested separately via the website (Report issue on the script page).
 ```

 4. **AI Will Generate**
@@ -357,6 +354,8 @@ If you're using **Visual Studio Code** with an AI assistant, you can leverage ou
 - Have correct update mechanisms
 - Are ready to submit as a PR

+Website listing/metadata is requested separately via the website (Report issue on the script page).
+
 ### Key Points for AI Assistants

 - **Templates Location**: `docs/contribution/templates_ct/AppName.sh`, `templates_install/`, `templates_json/`
@@ -409,11 +408,10 @@ cp docs/contribution/templates_ct/AppName.sh ct/my-app.sh

 # Installation script template
 cp docs/contribution/templates_install/AppName-install.sh install/my-app-install.sh

-# JSON configuration template
-cp docs/contribution/templates_json/AppName.json frontend/public/json/my-app.json
 ```

+For website metadata (description, logo, etc.), use the Report issue button on the script's page on the website.
+
 **Template Features:**

 - Updated to match current codebase patterns
@@ -149,7 +149,7 @@ fetch_and_deploy_gh_release "myapp" "owner/repo"
 2. **Only add app-specific dependencies** - Don't add ca-certificates, curl, gnupg (handled by build.func)
 3. **Test via curl from your fork** - Push first, then: `bash -c "$(curl -fsSL https://raw.githubusercontent.com/YOUR_USERNAME/ProxmoxVE/main/ct/MyApp.sh)"`
 4. **Wait for GitHub to update** - Takes 10-30 seconds after git push
-5. **Cherry-pick only YOUR files** - Submit only ct/MyApp.sh, install/MyApp-install.sh, frontend/public/json/myapp.json (3 files)
+5. **Cherry-pick only YOUR files** - Submit only ct/MyApp.sh, install/MyApp-install.sh (2 files). Website metadata: use Report issue on the script's page on the website.
 6. **Verify before PR** - Run `git diff upstream/main --name-only` to confirm only your files changed

 ---
@@ -1,21 +1,20 @@
-# JSON Metadata Files - Quick Reference
+# Website Metadata - Quick Reference

-The metadata file (`frontend/public/json/myapp.json`) tells the web interface how to display your application.
+Metadata (name, slug, description, logo, categories, etc.) controls how your application appears on the website. You do **not** add JSON files to the repo — you request changes via the website.

 ---

-## Quick Start
+## How to Request or Update Metadata

-**Use the JSON Generator Tool:**
-[https://community-scripts.github.io/ProxmoxVE/json-editor](https://community-scripts.github.io/ProxmoxVE/json-editor)
-1. Enter application details
-2. Generator creates `frontend/public/json/myapp.json`
-3. Copy the output to your contribution
+1. **Go to the script on the website** — Open the [ProxmoxVE website](https://community-scripts.github.io/ProxmoxVE/), find your script (or the script you want to update).
+2. **Press the "Report issue" button** on that script’s page.
+3. **Follow the guide** — The flow will walk you through submitting or updating metadata.

 ---

-## File Structure
+## Metadata Structure (Reference)

+The following describes the structure of script metadata used by the website. Use it as reference when filling out the form or describing what you need.
+
 ```json
 {
@@ -148,18 +147,14 @@ Each installation method specifies resource requirements:

 ---

-## Reference Examples
+## See Examples on the Website

-See actual examples in the repo:
+View script pages on the [ProxmoxVE website](https://community-scripts.github.io/ProxmoxVE/) to see how metadata is displayed for existing scripts.

-- [frontend/public/json/trip.json](https://github.com/community-scripts/ProxmoxVE/blob/main/frontend/public/json/trip.json)
-- [frontend/public/json/thingsboard.json](https://github.com/community-scripts/ProxmoxVE/blob/main/frontend/public/json/thingsboard.json)
-- [frontend/public/json/unifi.json](https://github.com/community-scripts/ProxmoxVE/blob/main/frontend/public/json/unifi.json)

 ---

 ## Need Help?

-- **[JSON Generator](https://community-scripts.github.io/ProxmoxVE/json-editor)** - Interactive tool
+- **Request metadata** — Use the Report issue button on the script’s page on the website (see [How to Request or Update Metadata](#how-to-request-or-update-metadata) above).
+- **[JSON Generator](https://community-scripts.github.io/ProxmoxVE/json-editor)** - Reference only; structure validation
 - **[README.md](../README.md)** - Full contribution workflow
-- **[Quick Start](../README.md)** - Step-by-step guide
@@ -61,7 +61,7 @@ To understand how to create a container script:

 ## Common Tasks

-- **Add new container application** → [CONTRIBUTION_GUIDE.md](../CONTRIBUTION_GUIDE.md)
+- **Add new container application** → [contribution/README.md](../contribution/README.md)
 - **Debug container creation** → [EXIT_CODES.md](../EXIT_CODES.md)
 - **Understand build.func** → [misc/build.func/](../misc/build.func/)
 - **Development mode debugging** → [DEV_MODE.md](../DEV_MODE.md)
install/alpine-ntfy-install.sh (new file, 25 lines)
@@ -0,0 +1,25 @@
+#!/usr/bin/env bash
+
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: cobalt (cobaltgit)
+# License: MIT | https://github.com/community-scripts/ProxmoxVED/raw/main/LICENSE
+# Source: https://ntfy.sh/
+
+source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
+color
+verb_ip6
+catch_errors
+setting_up_container
+network_check
+update_os
+
+msg_info "Installing ntfy"
+$STD apk add --no-cache ntfy ntfy-openrc libcap
+sed -i '/^listen-http/s/^\(.*\)$/#\1\n/' /etc/ntfy/server.yml
+setcap 'cap_net_bind_service=+ep' /usr/bin/ntfy
+$STD rc-update add ntfy default
+$STD service ntfy start
+msg_ok "Installed ntfy"
+
+motd_ssh
+customize
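Two lines in the ntfy install above work together: `setcap 'cap_net_bind_service=+ep'` lets the unprivileged ntfy binary bind a privileged port, and the `sed` expression comments out the packaged `listen-http` override so ntfy's default listener applies. The `sed` idiom itself (`/^listen-http/` selects the line; `s/^\(.*\)$/#\1\n/` rewrites it as a `#`-prefixed copy followed by a blank line, a GNU sed feature) can be tried on a scratch file:

```shell
#!/usr/bin/env bash
# Demo of the sed idiom above: comment out a config key while keeping the
# original line visible for reference (file contents are illustrative).
set -euo pipefail

conf=/tmp/gr_ntfy_demo.yml
printf 'base-url: http://example.internal\nlisten-http: ":2586"\n' > "$conf"

# Select the listen-http line and rewrite it as a comment plus blank line.
sed -i '/^listen-http/s/^\(.*\)$/#\1\n/' "$conf"

cat "$conf"
```

Note the `\n` in the replacement is GNU sed behavior; BSD sed would need a literal newline instead.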
install/anytype-server-install.sh (new file, 81 lines)
@@ -0,0 +1,81 @@
|
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
# Copyright (c) 2021-2026 community-scripts ORG
|
||||||
|
# Author: MickLesk (CanbiZ)
|
||||||
|
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://anytype.io

source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os

setup_mongodb

msg_info "Configuring MongoDB Replica Set"
cat <<EOF >>/etc/mongod.conf

replication:
  replSetName: "rs0"
EOF
systemctl restart mongod
sleep 3
$STD mongosh --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "127.0.0.1:27017"}]})'
msg_ok "Configured MongoDB Replica Set"

msg_info "Installing Redis Stack"
setup_deb822_repo \
  "redis-stack" \
  "https://packages.redis.io/gpg" \
  "https://packages.redis.io/deb" \
  "jammy" \
  "main"
$STD apt install -y redis-stack-server
systemctl enable -q --now redis-stack-server
msg_ok "Installed Redis Stack"

fetch_and_deploy_gh_release "anytype" "grishy/any-sync-bundle" "prebuild" "latest" "/opt/anytype" "any-sync-bundle_*_linux_amd64.tar.gz"
chmod +x /opt/anytype/any-sync-bundle

msg_info "Configuring Anytype"
mkdir -p /opt/anytype/data/storage
cat <<EOF >/opt/anytype/.env
ANY_SYNC_BUNDLE_CONFIG=/opt/anytype/data/bundle-config.yml
ANY_SYNC_BUNDLE_CLIENT_CONFIG=/opt/anytype/data/client-config.yml
ANY_SYNC_BUNDLE_INIT_STORAGE=/opt/anytype/data/storage/
ANY_SYNC_BUNDLE_INIT_EXTERNAL_ADDRS=${LOCAL_IP}
ANY_SYNC_BUNDLE_INIT_MONGO_URI=mongodb://127.0.0.1:27017/
ANY_SYNC_BUNDLE_INIT_REDIS_URI=redis://127.0.0.1:6379/
ANY_SYNC_BUNDLE_LOG_LEVEL=info
EOF
msg_ok "Configured Anytype"

msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/anytype.service
[Unit]
Description=Anytype Sync Server (any-sync-bundle)
After=network-online.target mongod.service redis-stack-server.service
Wants=network-online.target
Requires=mongod.service redis-stack-server.service

[Service]
Type=simple
User=root
WorkingDirectory=/opt/anytype
EnvironmentFile=/opt/anytype/.env
ExecStart=/opt/anytype/any-sync-bundle start-bundle
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now anytype
msg_ok "Created Service"

motd_ssh
customize
cleanup_lxc
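The replica-set step above appends a YAML fragment to /etc/mongod.conf with a heredoc. A minimal standalone sketch of the same append-and-verify pattern, run against a temporary file instead of the real config so it works anywhere:

```shell
# Heredoc-append pattern used above, against a temp file
# instead of /etc/mongod.conf so it can run anywhere.
conf="$(mktemp)"
cat <<EOF >>"$conf"
replication:
  replSetName: "rs0"
EOF
# Confirm the fragment landed before restarting a service that depends on it.
grep -q 'replSetName: "rs0"' "$conf" && echo "fragment appended"
rm -f "$conf"
```

Using `>>` rather than `>` matters here: the fragment is added after whatever mongod.conf already contains, so the rest of the configuration survives.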
@@ -35,7 +35,6 @@ setup_hwaccel
 msg_info "Installing Channels DVR Server (Patience)"
 cd /opt
 $STD bash <(curl -fsSL https://getchannels.com/dvr/setup.sh)
-sed -i -e 's/^sgx:x:104:$/render:x:104:root/' -e 's/^render:x:106:root$/sgx:x:106:/' /etc/group
 msg_ok "Installed Channels DVR Server"

 motd_ssh
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 # Copyright (c) 2021-2026 community-scripts ORG
-# Author: DragoQC
+# Author: DragoQC | Co-Author: nickheyer
 # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
 # Source: https://discopanel.app/ | Github: https://github.com/nickheyer/discopanel

@@ -12,25 +12,9 @@ setting_up_container
 network_check
 update_os

-msg_info "Installing Dependencies"
-$STD apt install -y build-essential
-msg_ok "Installed Dependencies"
-
-NODE_VERSION="22" setup_nodejs
-setup_go
-fetch_and_deploy_gh_release "discopanel" "nickheyer/discopanel" "tarball" "latest" "/opt/discopanel"
+fetch_and_deploy_gh_release "discopanel" "nickheyer/discopanel" "prebuild" "latest" "/opt/discopanel" "discopanel-linux-amd64.tar.gz"
 setup_docker

-msg_info "Setting up DiscoPanel"
-cd /opt/discopanel
-$STD make gen
-cd /opt/discopanel/web/discopanel
-$STD npm install
-$STD npm run build
-cd /opt/discopanel
-$STD go build -o discopanel cmd/discopanel/main.go
-msg_ok "Setup DiscoPanel"
-
 msg_info "Creating Service"
 cat <<EOF >/etc/systemd/system/discopanel.service
 [Unit]
@@ -39,7 +23,7 @@ After=network.target

 [Service]
 WorkingDirectory=/opt/discopanel
-ExecStart=/opt/discopanel/discopanel
+ExecStart=/opt/discopanel/discopanel-linux-amd64
 Restart=always

 [Install]
@@ -13,17 +13,9 @@ setting_up_container
 network_check
 update_os

-setup_hwaccel
-
 fetch_and_deploy_gh_release "emby" "MediaBrowser/Emby.Releases" "binary"

-msg_info "Configuring Emby"
-if [[ "$CTTYPE" == "0" ]]; then
-  sed -i -e 's/^ssl-cert:x:104:$/render:x:104:root,emby/' -e 's/^render:x:108:root,emby$/ssl-cert:x:108:/' /etc/group
-else
-  sed -i -e 's/^ssl-cert:x:104:$/render:x:104:emby/' -e 's/^render:x:108:emby$/ssl-cert:x:108:/' /etc/group
-fi
-msg_ok "Configured Emby"
+setup_hwaccel "emby"

 motd_ssh
 customize
@@ -146,7 +146,7 @@ ldconfig
 msg_ok "Built libUSB"

 msg_info "Bootstrapping pip"
-wget -q https://bootstrap.pypa.io/get-pip.py -O /tmp/get-pip.py
+curl_with_retry "https://bootstrap.pypa.io/get-pip.py" "/tmp/get-pip.py"
 sed -i 's/args.append("setuptools")/args.append("setuptools==77.0.3")/' /tmp/get-pip.py
 $STD python3 /tmp/get-pip.py "pip"
 rm -f /tmp/get-pip.py
@@ -169,13 +169,13 @@ NODE_VERSION="20" setup_nodejs

 msg_info "Downloading Inference Models"
 mkdir -p /models /openvino-model
-wget -q -O /edgetpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
-wget -q -O /models/cpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite
+curl_with_retry "https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite" "/edgetpu_model.tflite"
+curl_with_retry "https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite" "/models/cpu_model.tflite"
 cp /opt/frigate/labelmap.txt /labelmap.txt
 msg_ok "Downloaded Inference Models"

 msg_info "Downloading Audio Model"
-wget -q -O /tmp/yamnet.tar.gz https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download
+curl_with_retry "https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download" "/tmp/yamnet.tar.gz"
 $STD tar xzf /tmp/yamnet.tar.gz -C /
 mv /1.tflite /cpu_audio_model.tflite
 cp /opt/frigate/audio-labelmap.txt /audio-labelmap.txt
@@ -205,13 +205,23 @@ msg_ok "Installed OpenVino"

 msg_info "Building OpenVino Model"
 cd /models
-wget -q http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
+curl_with_retry "http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz" "ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz"
 $STD tar -zxf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz --no-same-owner
 if python3 /opt/frigate/docker/main/build_ov_model.py &>/dev/null; then
   mkdir -p /openvino-model
   cp /models/ssdlite_mobilenet_v2.xml /openvino-model/
   cp /models/ssdlite_mobilenet_v2.bin /openvino-model/
-  $STD ln -sf $(python3 -c "import omz_tools; import os; print(os.path.join(omz_tools.__path__[0], 'data/dataset_classes/coco_91cl_bkgr.txt'))") /openvino-model/coco_91cl_bkgr.txt
+  OV_LABELS=$(python3 -c "import omz_tools; import os; print(os.path.join(omz_tools.__path__[0], 'data/dataset_classes/coco_91cl_bkgr.txt'))" 2>/dev/null)
+  if [[ -n "$OV_LABELS" && -f "$OV_LABELS" ]]; then
+    ln -sf "$OV_LABELS" /openvino-model/coco_91cl_bkgr.txt
+  else
+    OV_LABELS=$(find /usr/local/lib -name "coco_91cl_bkgr.txt" 2>/dev/null | head -1)
+    if [[ -n "$OV_LABELS" ]]; then
+      ln -sf "$OV_LABELS" /openvino-model/coco_91cl_bkgr.txt
+    else
+      curl_with_retry "https://raw.githubusercontent.com/openvinotoolkit/open_model_zoo/master/data/dataset_classes/coco_91cl_bkgr.txt" "/openvino-model/coco_91cl_bkgr.txt"
+    fi
+  fi
   sed -i 's/truck/car/g' /openvino-model/coco_91cl_bkgr.txt
   msg_ok "Built OpenVino Model"
 else
@@ -236,7 +246,7 @@ msg_info "Configuring Frigate"
 mkdir -p /config /media/frigate
 cp -r /opt/frigate/config/. /config

-curl -fsSL "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4" -o "/media/frigate/person-bicycle-car-detection.mp4"
+curl_with_retry "https://github.com/intel-iot-devkit/sample-videos/raw/master/person-bicycle-car-detection.mp4" "/media/frigate/person-bicycle-car-detection.mp4"

 echo "tmpfs /tmp/cache tmpfs defaults 0 0" >>/etc/fstab

@@ -279,7 +289,7 @@ detect:
   enabled: false
 EOF

-if grep -q -o -m1 -E 'avx[^ ]*|sse4_2' /proc/cpuinfo; then
+if grep -q -o -m1 -E 'avx[^ ]*|sse4_2' /proc/cpuinfo && [[ -f /openvino-model/ssdlite_mobilenet_v2.xml ]] && [[ -f /openvino-model/coco_91cl_bkgr.txt ]]; then
 cat <<EOF >>/config/config.yml
 ffmpeg:
   hwaccel_args: auto
96  install/gluetun-install.sh  Normal file
@@ -0,0 +1,96 @@
#!/usr/bin/env bash

# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/qdm12/gluetun

source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os

msg_info "Installing Dependencies"
$STD apt install -y \
  openvpn \
  wireguard-tools \
  iptables
msg_ok "Installed Dependencies"

msg_info "Configuring iptables"
$STD update-alternatives --set iptables /usr/sbin/iptables-legacy
$STD update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
ln -sf /usr/sbin/openvpn /usr/sbin/openvpn2.6
msg_ok "Configured iptables"

setup_go

fetch_and_deploy_gh_release "gluetun" "qdm12/gluetun" "tarball"

msg_info "Building Gluetun"
cd /opt/gluetun
$STD go mod download
CGO_ENABLED=0 $STD go build -trimpath -ldflags="-s -w" -o /usr/local/bin/gluetun ./cmd/gluetun/
msg_ok "Built Gluetun"

msg_info "Configuring Gluetun"
mkdir -p /opt/gluetun-data
touch /etc/alpine-release
ln -sf /opt/gluetun-data /gluetun
cat <<EOF >/opt/gluetun-data/.env
VPN_SERVICE_PROVIDER=custom
VPN_TYPE=openvpn
OPENVPN_CUSTOM_CONFIG=/opt/gluetun-data/custom.ovpn
OPENVPN_USER=
OPENVPN_PASSWORD=
OPENVPN_PROCESS_USER=root
PUID=0
PGID=0
HTTP_CONTROL_SERVER_ADDRESS=:8000
HTTPPROXY=off
SHADOWSOCKS=off
PPROF_ENABLED=no
PPROF_BLOCK_PROFILE_RATE=0
PPROF_MUTEX_PROFILE_RATE=0
PPROF_HTTP_SERVER_ADDRESS=:6060
FIREWALL_ENABLED_DISABLING_IT_SHOOTS_YOU_IN_YOUR_FOOT=on
HEALTH_SERVER_ADDRESS=127.0.0.1:9999
DNS_UPSTREAM_RESOLVERS=cloudflare
LOG_LEVEL=info
STORAGE_FILEPATH=/gluetun/servers.json
PUBLICIP_FILE=/gluetun/ip
VPN_PORT_FORWARDING_STATUS_FILE=/gluetun/forwarded_port
TZ=UTC
EOF
msg_ok "Configured Gluetun"

msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/gluetun.service
[Unit]
Description=Gluetun VPN Client
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/gluetun-data
EnvironmentFile=/opt/gluetun-data/.env
UnsetEnvironment=USER
ExecStartPre=/bin/sh -c 'rm -f /etc/openvpn/target.ovpn'
ExecStart=/usr/local/bin/gluetun
Restart=on-failure
RestartSec=5
AmbientCapabilities=CAP_NET_ADMIN

[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now gluetun
msg_ok "Created Service"

motd_ssh
customize
cleanup_lxc
@@ -14,6 +14,10 @@ network_check
 update_os
 setup_hwaccel

+msg_info "Installing Dependencies"
+$STD apt install -y ffmpeg
+msg_ok "Installed Dependencies"
+
 USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "go2rtc" "AlexxIT/go2rtc" "singlefile" "latest" "/opt/go2rtc" "go2rtc_linux_amd64"

 msg_info "Creating Service"
@@ -32,13 +32,13 @@ if [ -d /dev/dri ]; then
   $STD apt install -y --no-install-recommends patchelf
   tmp_dir=$(mktemp -d)
   $STD pushd "$tmp_dir"
-  curl -fsSLO https://raw.githubusercontent.com/immich-app/immich/refs/heads/main/machine-learning/Dockerfile
+  curl_with_retry "https://raw.githubusercontent.com/immich-app/immich/refs/heads/main/machine-learning/Dockerfile" "Dockerfile"
   readarray -t INTEL_URLS < <(
     sed -n "/intel-[igc|opencl]/p" ./Dockerfile | awk '{print $3}'
     sed -n "/libigdgmm12/p" ./Dockerfile | awk '{print $3}'
   )
   for url in "${INTEL_URLS[@]}"; do
-    curl -fsSLO "$url"
+    curl_with_retry "$url" "$(basename "$url")"
   done
   $STD apt install -y ./libigdgmm12*.deb
   rm ./libigdgmm12*.deb
@@ -154,6 +154,10 @@ sed -i "s/^#shared_preload.*/shared_preload_libraries = 'vchord.so'/" /etc/postg
 systemctl restart postgresql.service
 PG_DB_NAME="immich" PG_DB_USER="immich" PG_DB_GRANT_SUPERUSER="true" PG_DB_SKIP_ALTER_ROLE="true" setup_postgresql_db

+msg_info "Installing GCC-13 (available as fallback compiler)"
+$STD apt install -y gcc-13 g++-13
+msg_ok "Installed GCC-13"
+
 msg_warn "Compiling Custom Photo-processing Libraries (can take anywhere from 15min to 2h)"
 LD_LIBRARY_PATH=/usr/local/lib
 export LD_RUN_PATH=/usr/local/lib
@@ -290,7 +294,7 @@ GEO_DIR="${INSTALL_DIR}/geodata"
 mkdir -p {"${APP_DIR}","${UPLOAD_DIR}","${GEO_DIR}","${INSTALL_DIR}"/cache}

 fetch_and_deploy_gh_release "Immich" "immich-app/immich" "tarball" "v2.5.6" "$SRC_DIR"
-PNPM_VERSION="$(jq -r '.packageManager | split("@")[1]' ${SRC_DIR}/package.json)"
+PNPM_VERSION="$(jq -r '.packageManager | split("@")[1] | split("+")[0]' ${SRC_DIR}/package.json)"
 NODE_VERSION="24" NODE_MODULE="pnpm@${PNPM_VERSION}" setup_nodejs

 msg_info "Installing Immich (patience)"
@@ -340,15 +344,36 @@ cd "$SRC_DIR"/machine-learning
 $STD useradd -U -s /usr/sbin/nologin -r -M -d "$INSTALL_DIR" immich
 mkdir -p "$ML_DIR" && chown -R immich:immich "$INSTALL_DIR"
 export VIRTUAL_ENV="${ML_DIR}/ml-venv"
+export UV_HTTP_TIMEOUT=300
 if [[ -f ~/.openvino ]]; then
+  ML_PYTHON="python3.13"
+  msg_info "Pre-installing Python ${ML_PYTHON} for machine-learning"
+  for attempt in $(seq 1 3); do
+    $STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv python install "${ML_PYTHON}" && break
+    [[ $attempt -lt 3 ]] && msg_warn "Python download attempt $attempt failed, retrying..." && sleep 5
+  done
+  msg_ok "Pre-installed Python ${ML_PYTHON}"
   msg_info "Installing HW-accelerated machine-learning"
-  $STD uv add --no-sync --optional openvino onnxruntime-openvino==1.24.1 --active -n -p python3.13 --managed-python
-  $STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv sync --extra openvino --no-dev --active --link-mode copy -n -p python3.13 --managed-python
+  $STD uv add --no-sync --optional openvino onnxruntime-openvino==1.24.1 --active -n -p "${ML_PYTHON}" --managed-python
+  for attempt in $(seq 1 3); do
+    $STD sudo --preserve-env=VIRTUAL_ENV,UV_HTTP_TIMEOUT -nu immich uv sync --extra openvino --no-dev --active --link-mode copy -n -p "${ML_PYTHON}" --managed-python && break
+    [[ $attempt -lt 3 ]] && msg_warn "uv sync attempt $attempt failed, retrying..." && sleep 10
+  done
   patchelf --clear-execstack "${VIRTUAL_ENV}/lib/python3.13/site-packages/onnxruntime/capi/onnxruntime_pybind11_state.cpython-313-x86_64-linux-gnu.so"
   msg_ok "Installed HW-accelerated machine-learning"
 else
+  ML_PYTHON="python3.11"
+  msg_info "Pre-installing Python ${ML_PYTHON} for machine-learning"
+  for attempt in $(seq 1 3); do
+    $STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv python install "${ML_PYTHON}" && break
+    [[ $attempt -lt 3 ]] && msg_warn "Python download attempt $attempt failed, retrying..." && sleep 5
+  done
+  msg_ok "Pre-installed Python ${ML_PYTHON}"
   msg_info "Installing machine-learning"
-  $STD sudo --preserve-env=VIRTUAL_ENV -nu immich uv sync --extra cpu --no-dev --active --link-mode copy -n -p python3.11 --managed-python
+  for attempt in $(seq 1 3); do
+    $STD sudo --preserve-env=VIRTUAL_ENV,UV_HTTP_TIMEOUT -nu immich uv sync --extra cpu --no-dev --active --link-mode copy -n -p "${ML_PYTHON}" --managed-python && break
+    [[ $attempt -lt 3 ]] && msg_warn "uv sync attempt $attempt failed, retrying..." && sleep 10
+  done
   msg_ok "Installed machine-learning"
 fi
 cd "$SRC_DIR"
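The retry loops added in this hunk follow a common shell idiom: attempt a command up to N times, `break` on success, and warn-and-sleep between failures. A minimal standalone sketch of that idiom (the command that "succeeds on the second try" is a hypothetical stand-in for `uv sync`):

```shell
# Retry-loop idiom from the hunk above, with a stand-in command that
# "fails" on the first attempt and "succeeds" on the second.
attempts=0
for attempt in $(seq 1 3); do
  attempts=$((attempts + 1))
  if [ "$attempt" -ge 2 ]; then
    break  # stand-in for: real_command && break
  fi
  sleep 0  # stand-in for the back-off sleep between attempts
done
echo "succeeded after $attempts attempts"
```

The `&& break` form used in the real script keeps the loop body on one line while still exiting early on the first successful run.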
@@ -365,10 +390,10 @@ ln -s "$UPLOAD_DIR" "$ML_DIR"/upload

 msg_info "Installing GeoNames data"
 cd "$GEO_DIR"
-curl -fsSLZ -O "https://download.geonames.org/export/dump/admin1CodesASCII.txt" \
-  -O "https://download.geonames.org/export/dump/admin2Codes.txt" \
-  -O "https://download.geonames.org/export/dump/cities500.zip" \
-  -O "https://raw.githubusercontent.com/nvkelso/natural-earth-vector/v5.1.2/geojson/ne_10m_admin_0_countries.geojson"
+curl_with_retry "https://download.geonames.org/export/dump/admin1CodesASCII.txt" "admin1CodesASCII.txt"
+curl_with_retry "https://download.geonames.org/export/dump/admin2Codes.txt" "admin2Codes.txt"
+curl_with_retry "https://download.geonames.org/export/dump/cities500.zip" "cities500.zip"
+curl_with_retry "https://raw.githubusercontent.com/nvkelso/natural-earth-vector/v5.1.2/geojson/ne_10m_admin_0_countries.geojson" "ne_10m_admin_0_countries.geojson"
 unzip -q cities500.zip
 date --iso-8601=seconds | tr -d "\n" >geodata-date.txt
 rm cities500.zip
@@ -14,40 +14,32 @@ network_check
 update_os

 setup_mariadb
-msg_info "Setting up database"
-DB_NAME=itsmng_db
-DB_USER=itsmng
-DB_PASS=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | head -c13)
+msg_info "Loading timezone data"
 mariadb-tzinfo-to-sql /usr/share/zoneinfo | mariadb mysql
-mariadb -u root -e "CREATE DATABASE $DB_NAME;"
-mariadb -u root -e "CREATE USER '$DB_USER'@'localhost' IDENTIFIED BY '$DB_PASS';"
-mariadb -u root -e "GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'localhost';"
-mariadb -u root -e "GRANT SELECT ON \`mysql\`.\`time_zone_name\` TO '$DB_USER'@'localhost'; FLUSH PRIVILEGES;"
-{
-  echo "ITSM-NG Database Credentials"
-  echo "Database: $DB_NAME"
-  echo "Username: $DB_USER"
-  echo "Password: $DB_PASS"
-} >>~/itsmng_db.creds
-msg_ok "Set up database"
+msg_ok "Loaded timezone data"
+MARIADB_DB_NAME="itsmng_db" MARIADB_DB_USER="itsmng" MARIADB_DB_EXTRA_GRANTS="GRANT SELECT ON \`mysql\`.\`time_zone_name\`" setup_mariadb_db

-msg_info "Setup ITSM-NG Repository"
+msg_info "Installing ITSM-NG"
 setup_deb822_repo \
   "itsm-ng" \
   "http://deb.itsm-ng.org/pubkey.gpg" \
   "http://deb.itsm-ng.org/$(get_os_info id)/" \
   "$(get_os_info codename)"
-msg_ok "Setup ITSM-NG Repository"
-
-msg_info "Installing ITSM-NG"
 $STD apt install -y itsm-ng
 cd /usr/share/itsm-ng
-$STD php bin/console db:install --db-name=$DB_NAME --db-user=$DB_USER --db-password=$DB_PASS --no-interaction
+$STD php bin/console db:install --db-name="$MARIADB_DB_NAME" --db-user="$MARIADB_DB_USER" --db-password="$MARIADB_DB_PASS" --no-interaction
 $STD a2dissite 000-default.conf
-echo "* * * * * php /usr/share/itsm-ng/front/cron.php" | crontab -
+echo "* * * * * www-data php /usr/share/itsm-ng/front/cron.php" | crontab -
 msg_ok "Installed ITSM-NG"

+msg_info "Setting permissions"
+chown -R www-data:www-data /var/lib/itsm-ng
+mkdir -p /usr/share/itsm-ng/css/palettes
+chown -R www-data:www-data /usr/share/itsm-ng/css
+chown -R www-data:www-data /usr/share/itsm-ng/css_compiled
+chown www-data:www-data /etc/itsm-ng/config_db.php
+msg_ok "Set permissions"
+
 msg_info "Configuring PHP"
 PHP_VERSION=$(ls /etc/php/ | grep -E '^[0-9]+\.[0-9]+$' | head -n 1)
 PHP_INI="/etc/php/$PHP_VERSION/apache2/php.ini"
@@ -1,7 +1,7 @@
 #!/usr/bin/env bash

-# Copyright (c) 2021-2026 tteck
-# Author: tteck (tteckster)
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: MickLesk (CanbiZ)
 # License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
 # Source: https://jellyfin.org/

@@ -14,31 +14,31 @@ network_check
 update_os

 msg_custom "ℹ️" "${GN}" "If NVIDIA GPU passthrough is detected, you'll be asked whether to install drivers in the container"
-setup_hwaccel

-msg_info "Installing Jellyfin"
-VERSION="$(awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release)"
-if ! dpkg -s libjemalloc2 >/dev/null 2>&1; then
-  $STD apt install -y libjemalloc2
-fi
+msg_info "Installing Dependencies"
+ensure_dependencies libjemalloc2
 if [[ ! -f /usr/lib/libjemalloc.so ]]; then
   ln -sf /usr/lib/x86_64-linux-gnu/libjemalloc.so.2 /usr/lib/libjemalloc.so
 fi
-if [[ ! -d /etc/apt/keyrings ]]; then
-  mkdir -p /etc/apt/keyrings
-fi
-curl -fsSL https://repo.jellyfin.org/jellyfin_team.gpg.key | gpg --dearmor --yes --output /etc/apt/keyrings/jellyfin.gpg
-cat <<EOF >/etc/apt/sources.list.d/jellyfin.sources
-Types: deb
-URIs: https://repo.jellyfin.org/${PCT_OSTYPE}
-Suites: ${VERSION}
-Components: main
-Architectures: amd64
-Signed-By: /etc/apt/keyrings/jellyfin.gpg
-EOF
+msg_ok "Installed Dependencies"

-$STD apt update
-$STD apt install -y jellyfin
+msg_info "Setting up Jellyfin Repository"
+setup_deb822_repo \
+  "jellyfin" \
+  "https://repo.jellyfin.org/jellyfin_team.gpg.key" \
+  "https://repo.jellyfin.org/$(get_os_info id)" \
+  "$(get_os_info codename)"
+msg_ok "Set up Jellyfin Repository"
+
+msg_info "Installing Jellyfin"
+$STD apt install -y jellyfin jellyfin-ffmpeg7
+ln -sf /usr/lib/jellyfin-ffmpeg/ffmpeg /usr/bin/ffmpeg
+ln -sf /usr/lib/jellyfin-ffmpeg/ffprobe /usr/bin/ffprobe
+msg_ok "Installed Jellyfin"
+
+setup_hwaccel "jellyfin"
+
+msg_info "Configuring Jellyfin"
 # Configure log rotation to prevent disk fill (keeps fail2ban compatibility) (PR: #1690 / Issue: #11224)
 cat <<EOF >/etc/logrotate.d/jellyfin
 /var/log/jellyfin/*.log {
@@ -55,12 +55,7 @@ EOF
 chown -R jellyfin:adm /etc/jellyfin
 sleep 10
 systemctl restart jellyfin
-if [[ "$CTTYPE" == "0" ]]; then
-  sed -i -e 's/^ssl-cert:x:104:$/render:x:104:root,jellyfin/' -e 's/^render:x:108:root,jellyfin$/ssl-cert:x:108:/' /etc/group
-else
-  sed -i -e 's/^ssl-cert:x:104:$/render:x:104:jellyfin/' -e 's/^render:x:108:jellyfin$/ssl-cert:x:108:/' /etc/group
-fi
-msg_ok "Installed Jellyfin"
+msg_ok "Configured Jellyfin"

 motd_ssh
 customize
@@ -42,8 +42,6 @@ EOF
 $STD apt update
 msg_ok "Set up Intel® Repositories"

-setup_hwaccel
-
 msg_info "Installing Intel® Level Zero"
 # Debian 13+ has newer Level Zero packages in system repos that conflict with Intel repo packages
 if is_debian && [[ "$(get_os_version_major)" -ge 13 ]]; then
@@ -89,11 +87,11 @@ msg_info "Creating ollama User and Group"
 if ! id ollama >/dev/null 2>&1; then
   useradd -r -s /usr/sbin/nologin -U -m -d /usr/share/ollama ollama
 fi
-$STD usermod -aG render ollama || true
-$STD usermod -aG video ollama || true
 $STD usermod -aG ollama $(id -u -n)
 msg_ok "Created ollama User and adjusted Groups"

+setup_hwaccel "ollama"
+
 msg_info "Creating Service"
 cat <<EOF >/etc/systemd/system/ollama.service
 [Unit]
@@ -13,8 +13,6 @@ setting_up_container
 network_check
 update_os

-setup_hwaccel
-
 msg_info "Setting Up Plex Media Server Repository"
 setup_deb822_repo \
   "plexmediaserver" \
@@ -26,13 +24,10 @@ msg_ok "Set Up Plex Media Server Repository"

 msg_info "Installing Plex Media Server"
 $STD apt install -y plexmediaserver
-if [[ "$CTTYPE" == "0" ]]; then
-  sed -i -e 's/^ssl-cert:x:104:plex$/render:x:104:root,plex/' -e 's/^render:x:108:root$/ssl-cert:x:108:plex/' /etc/group
-else
-  sed -i -e 's/^ssl-cert:x:104:plex$/render:x:104:plex/' -e 's/^render:x:108:$/ssl-cert:x:108:/' /etc/group
-fi
 msg_ok "Installed Plex Media Server"

+setup_hwaccel "plex"
+
 motd_ssh
 customize
 cleanup_lxc
@@ -45,32 +45,58 @@ systemctl enable -q --now podman.socket
 echo -e 'unqualified-search-registries=["docker.io"]' >>/etc/containers/registries.conf
 msg_ok "Installed Podman"

+mkdir -p /etc/containers/systemd
+
 read -r -p "${TAB3}Would you like to add Portainer? <y/N> " prompt
 if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
   msg_info "Installing Portainer $PORTAINER_LATEST_VERSION"
   podman volume create portainer_data >/dev/null
-  $STD podman run -d \
-    -p 8000:8000 \
-    -p 9443:9443 \
-    --name=portainer \
-    --restart=always \
-    -v /run/podman/podman.sock:/var/run/docker.sock \
-    -v portainer_data:/data \
-    portainer/portainer-ce:latest
+  cat <<EOF >/etc/containers/systemd/portainer.container
+[Unit]
+Description=Portainer Container
+After=network-online.target
+
+[Container]
+Image=docker.io/portainer/portainer-ce:latest
+ContainerName=portainer
+PublishPort=8000:8000
+PublishPort=9443:9443
+Volume=/run/podman/podman.sock:/var/run/docker.sock
+Volume=portainer_data:/data
+
+[Service]
+Restart=always
+
+[Install]
+WantedBy=default.target multi-user.target
+EOF
+  systemctl daemon-reload
+  $STD systemctl start portainer
   msg_ok "Installed Portainer $PORTAINER_LATEST_VERSION"
 else
   read -r -p "${TAB3}Would you like to add the Portainer Agent? <y/N> " prompt
   if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
     msg_info "Installing Portainer agent $PORTAINER_AGENT_LATEST_VERSION"
|
||||||
podman volume create temp >/dev/null
|
cat <<EOF >/etc/containers/systemd/portainer-agent.container
|
||||||
podman volume remove temp >/dev/null
|
[Unit]
|
||||||
$STD podman run -d \
|
Description=Portainer Agent Container
|
||||||
-p 9001:9001 \
|
After=network-online.target
|
||||||
--name portainer_agent \
|
|
||||||
--restart=always \
|
[Container]
|
||||||
-v /run/podman/podman.sock:/var/run/docker.sock \
|
Image=docker.io/portainer/agent:latest
|
||||||
-v /var/lib/containers/storage/volumes:/var/lib/docker/volumes \
|
ContainerName=portainer_agent
|
||||||
portainer/agent
|
PublishPort=9001:9001
|
||||||
|
Volume=/run/podman/podman.sock:/var/run/docker.sock
|
||||||
|
Volume=/var/lib/containers/storage/volumes:/var/lib/docker/volumes
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
Restart=always
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=default.target multi-user.target
|
||||||
|
EOF
|
||||||
|
systemctl daemon-reload
|
||||||
|
$STD systemctl start portainer-agent
|
||||||
msg_ok "Installed Portainer Agent $PORTAINER_AGENT_LATEST_VERSION"
|
msg_ok "Installed Portainer Agent $PORTAINER_AGENT_LATEST_VERSION"
|
||||||
fi
|
fi
|
||||||
fi
|
fi
|
||||||
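The hunk above replaces ad-hoc `podman run` invocations with Quadlet `.container` units under /etc/containers/systemd. As an illustration of what those directives mean, here is a small sketch (not part of the PR) that maps the handful of [Container] keys used here back to the equivalent `podman run` flags; the real translation is done by podman's systemd generator, and the naive parser below is only a stand-in:

```shell
#!/usr/bin/env bash
# Sketch: translate the Quadlet [Container] keys used in this PR back into
# `podman run` flags, to show what each directive corresponds to.
quadlet_to_run_flags() {
  local line image="" flags=()
  while IFS= read -r line; do
    case "$line" in
      Image=*)         image="${line#Image=}" ;;
      ContainerName=*) flags+=(--name "${line#ContainerName=}") ;;
      PublishPort=*)   flags+=(-p "${line#PublishPort=}") ;;
      Volume=*)        flags+=(-v "${line#Volume=}") ;;
      Network=*)       flags+=(--net "${line#Network=}") ;;
    esac
  done
  printf 'podman run -d %s %s\n' "${flags[*]}" "$image"
}

# prints: podman run -d --name portainer -p 9443:9443 -v portainer_data:/data docker.io/portainer/portainer-ce:latest
quadlet_to_run_flags <<'EOF'
Image=docker.io/portainer/portainer-ce:latest
ContainerName=portainer
PublishPort=9443:9443
Volume=portainer_data:/data
EOF
```

With the real generator, dropping the `.container` file and running `systemctl daemon-reload` is what makes `systemctl start portainer` work, which is why the diff adds those two commands after writing the unit.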
@@ -81,19 +107,29 @@ msg_ok "Pulled Home Assistant Image"
 
 msg_info "Installing Home Assistant"
 $STD podman volume create hass_config
-$STD podman run -d \
-  --name homeassistant \
-  --restart unless-stopped \
-  -v /dev:/dev \
-  -v hass_config:/config \
-  -v /etc/localtime:/etc/localtime:ro \
-  -v /etc/timezone:/etc/timezone:ro \
-  --net=host \
-  homeassistant/home-assistant:stable
-podman generate systemd \
-  --new --name homeassistant \
-  >/etc/systemd/system/homeassistant.service
-systemctl enable -q --now homeassistant
+cat <<EOF >/etc/containers/systemd/homeassistant.container
+[Unit]
+Description=Home Assistant Container
+After=network-online.target
+
+[Container]
+Image=docker.io/homeassistant/home-assistant:stable
+ContainerName=homeassistant
+Volume=/dev:/dev
+Volume=hass_config:/config
+Volume=/etc/localtime:/etc/localtime:ro
+Volume=/etc/timezone:/etc/timezone:ro
+Network=host
+
+[Service]
+Restart=always
+TimeoutStartSec=300
+
+[Install]
+WantedBy=default.target multi-user.target
+EOF
+systemctl daemon-reload
+$STD systemctl start homeassistant
 msg_ok "Installed Home Assistant"
 
 motd_ssh
@@ -44,7 +44,7 @@ msg_ok "Configured RabbitMQ"
 
 USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "reitti" "dedicatedcode/reitti" "singlefile" "latest" "/opt/reitti" "reitti-app.jar"
 mv /opt/reitti/reitti-*.jar /opt/reitti/reitti.jar
-USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "photon" "komoot/photon" "singlefile" "latest" "/opt/photon" "photon-0*.jar"
+USE_ORIGINAL_FILENAME="true" fetch_and_deploy_gh_release "photon" "komoot/photon" "singlefile" "latest" "/opt/photon" "photon-*.jar"
 mv /opt/photon/photon-*.jar /opt/photon/photon.jar
 
 msg_info "Installing Nginx Tile Cache"
@@ -14,7 +14,9 @@ network_check
 update_os
 
 msg_info "Installing Dependencies"
-$STD apt-get install -y build-essential
+$STD apt install -y \
+  build-essential \
+  python3-setuptools
 msg_ok "Installed Dependencies"
 
 fetch_and_deploy_gh_release "seerr" "seerr-team/seerr" "tarball"
@@ -47,7 +47,7 @@ msg_ok "Configured Sparky Fitness"
 
 msg_info "Building Backend"
 cd /opt/sparkyfitness/SparkyFitnessServer
-$STD npm install
+$STD pnpm install
 msg_ok "Built Backend"
 
 msg_info "Building Frontend (Patience)"
@@ -69,7 +69,7 @@ Requires=postgresql.service
 Type=simple
 WorkingDirectory=/opt/sparkyfitness/SparkyFitnessServer
 EnvironmentFile=/etc/sparkyfitness/.env
-ExecStart=/usr/bin/node SparkyFitnessServer.js
+ExecStart=/opt/sparkyfitness/SparkyFitnessServer/node_modules/.bin/tsx SparkyFitnessServer.js
 Restart=always
 RestartSec=5
 
install/split-pro-install.sh (new file, 74 lines)
@@ -0,0 +1,74 @@
+#!/usr/bin/env bash
+
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: johanngrobe
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+# Source: https://github.com/oss-apps/split-pro
+
+source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
+color
+verb_ip6
+catch_errors
+setting_up_container
+network_check
+update_os
+
+NODE_VERSION="22" NODE_MODULE="pnpm" setup_nodejs
+PG_VERSION="17" PG_MODULES="cron" setup_postgresql
+
+msg_info "Installing Dependencies"
+$STD apt install -y openssl
+msg_ok "Installed Dependencies"
+
+PG_DB_NAME="splitpro" PG_DB_USER="splitpro" PG_DB_EXTENSIONS="pg_cron" setup_postgresql_db
+fetch_and_deploy_gh_release "split-pro" "oss-apps/split-pro" "tarball"
+
+msg_info "Installing Dependencies"
+cd /opt/split-pro
+$STD pnpm install --frozen-lockfile
+msg_ok "Installed Dependencies"
+
+msg_info "Building Split Pro"
+cd /opt/split-pro
+mkdir -p /opt/split-pro_data/uploads
+ln -sf /opt/split-pro_data/uploads /opt/split-pro/uploads
+NEXTAUTH_SECRET=$(openssl rand -base64 32)
+cp .env.example .env
+sed -i "s|^DATABASE_URL=.*|DATABASE_URL=\"postgresql://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}\"|" .env
+sed -i "s|^NEXTAUTH_SECRET=.*|NEXTAUTH_SECRET=\"${NEXTAUTH_SECRET}\"|" .env
+sed -i "s|^NEXTAUTH_URL=.*|NEXTAUTH_URL=\"http://${LOCAL_IP}:3000\"|" .env
+sed -i "s|^NEXTAUTH_URL_INTERNAL=.*|NEXTAUTH_URL_INTERNAL=\"http://localhost:3000\"|" .env
+sed -i "/^POSTGRES_CONTAINER_NAME=/d" .env
+sed -i "/^POSTGRES_USER=/d" .env
+sed -i "/^POSTGRES_PASSWORD=/d" .env
+sed -i "/^POSTGRES_DB=/d" .env
+sed -i "/^POSTGRES_PORT=/d" .env
+$STD pnpm build
+$STD pnpm exec prisma migrate deploy
+msg_ok "Built Split Pro"
+
+msg_info "Creating Service"
+cat <<EOF >/etc/systemd/system/split-pro.service
+[Unit]
+Description=Split Pro
+After=network.target postgresql.service
+Requires=postgresql.service
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/opt/split-pro
+EnvironmentFile=/opt/split-pro/.env
+ExecStart=/usr/bin/pnpm start
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+systemctl enable -q --now split-pro
+msg_ok "Created Service"
+
+motd_ssh
+customize
+cleanup_lxc
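The new split-pro installer above templates its `.env` with a series of `sed` substitutions and deletions. A self-contained sketch of that pattern, using hypothetical values in place of `PG_DB_USER`/`PG_DB_PASS`/`LOCAL_IP` (which the real script gets from its helper functions):

```shell
#!/usr/bin/env bash
# Sketch of the sed-based .env templating used in split-pro-install.sh:
# rewrite the keys we own, delete the docker-compose-only keys.
cat > /tmp/demo.env <<'EOF'
DATABASE_URL="postgresql://user:pass@db:5432/app"
POSTGRES_USER=app
EOF

# Hypothetical stand-ins for the values the installer derives at runtime.
DB_USER="splitpro" DB_PASS="s3cret"

# Replace the whole line so stale values cannot survive a re-run.
sed -i "s|^DATABASE_URL=.*|DATABASE_URL=\"postgresql://${DB_USER}:${DB_PASS}@localhost:5432/splitpro\"|" /tmp/demo.env
# Drop keys that only make sense in the upstream docker-compose setup.
sed -i "/^POSTGRES_USER=/d" /tmp/demo.env

cat /tmp/demo.env
```

Anchoring on `^KEY=` and replacing the entire line (rather than substituting inside the value) is what makes these edits idempotent across repeated installs.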
@@ -20,12 +20,16 @@ msg_ok "Installed Dependencies"
 msg_info "Installing Tdarr"
 mkdir -p /opt/tdarr
 cd /opt/tdarr
-RELEASE=$(curl -fsSL https://f000.backblazeb2.com/file/tdarrs/versions.json | grep -oP '(?<="Tdarr_Updater": ")[^"]+' | grep linux_x64 | head -n 1)
-curl -fsSL "$RELEASE" -o Tdarr_Updater.zip
+RELEASE=$(curl_with_retry "https://f000.backblazeb2.com/file/tdarrs/versions.json" "-" | grep -oP '(?<="Tdarr_Updater": ")[^"]+' | grep linux_x64 | head -n 1)
+curl_with_retry "$RELEASE" "Tdarr_Updater.zip"
 $STD unzip Tdarr_Updater.zip
 chmod +x Tdarr_Updater
 $STD ./Tdarr_Updater
 rm -rf /opt/tdarr/Tdarr_Updater.zip
+[[ -f /opt/tdarr/Tdarr_Server/Tdarr_Server ]] || {
+  msg_error "Tdarr_Updater failed — tdarr.io may be blocked by local DNS"
+  exit 250
+}
 msg_ok "Installed Tdarr"
 
 setup_hwaccel
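The Tdarr hunk above swaps plain `curl` calls for `curl_with_retry`, whose backoff loop is also touched in the tools.func diff further down. A minimal standalone sketch of that retry shape (sleeps shortened to zero so it runs instantly; `flaky` is a made-up stand-in for a network call):

```shell
#!/usr/bin/env bash
# Sketch of the retry pattern behind curl_with_retry: exponential backoff
# capped at 30s, doubling the timeout budget on each attempt.
retry_with_backoff() {
  local max_attempts=4 attempt=1 backoff=1 timeout=10
  while ((attempt <= max_attempts)); do
    if "$@"; then
      return 0
    fi
    sleep 0                    # real code: sleep "$backoff"
    backoff=$((backoff * 2))
    ((backoff > 30)) && backoff=30
    timeout=$((timeout * 2))   # give slow connections more headroom
    ((attempt++))
  done
  return 1
}

# Hypothetical command that fails twice, then succeeds.
tries=0
flaky() { tries=$((tries + 1)); ((tries >= 3)); }

# prints: succeeded after 3 attempts
retry_with_backoff flaky && echo "succeeded after $tries attempts"
```

Capping the backoff keeps worst-case wait bounded, while doubling the timeout is what the tools.func change adds so that slow links are not killed by the same `--max-time` that failed the first attempt.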
@@ -19,9 +19,41 @@ $STD apt install -y \
   python3 \
   nginx \
   openssl \
-  gettext-base
+  gettext-base \
+  libcairo2-dev \
+  libjpeg62-turbo-dev \
+  libpng-dev \
+  libtool-bin \
+  uuid-dev \
+  libvncserver-dev \
+  freerdp3-dev \
+  libssh2-1-dev \
+  libtelnet-dev \
+  libwebsockets-dev \
+  libpulse-dev \
+  libvorbis-dev \
+  libwebp-dev \
+  libssl-dev \
+  libpango1.0-dev \
+  libswscale-dev \
+  libavcodec-dev \
+  libavutil-dev \
+  libavformat-dev
 msg_ok "Installed Dependencies"
 
+msg_info "Building Guacamole Server (guacd)"
+fetch_and_deploy_gh_tag "guacd" "apache/guacamole-server" "latest" "/opt/guacamole-server"
+cd /opt/guacamole-server
+export CPPFLAGS="-Wno-error=deprecated-declarations"
+$STD autoreconf -fi
+$STD ./configure --with-init-dir=/etc/init.d --enable-allow-freerdp-snapshots
+$STD make
+$STD make install
+$STD ldconfig
+cd /opt
+rm -rf /opt/guacamole-server
+msg_ok "Built Guacamole Server (guacd)"
+
 NODE_VERSION="22" setup_nodejs
 fetch_and_deploy_gh_release "termix" "Termix-SSH/Termix"
 
@@ -74,17 +106,46 @@ systemctl reload nginx
 msg_ok "Configured Nginx"
 
 msg_info "Creating Service"
+mkdir -p /etc/guacamole
+cat <<EOF >/etc/guacamole/guacd.conf
+[server]
+bind_host = 127.0.0.1
+bind_port = 4822
+EOF
+
+cat <<EOF >/opt/termix/.env
+NODE_ENV=production
+DATA_DIR=/opt/termix/data
+GUACD_HOST=127.0.0.1
+GUACD_PORT=4822
+EOF
+
+cat <<EOF >/etc/systemd/system/guacd.service
+[Unit]
+Description=Guacamole Proxy Daemon (guacd)
+After=network.target
+
+[Service]
+Type=simple
+ExecStart=/usr/local/sbin/guacd -f -b 127.0.0.1 -l 4822
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
 cat <<EOF >/etc/systemd/system/termix.service
 [Unit]
 Description=Termix Backend
-After=network.target
+After=network.target guacd.service
+Wants=guacd.service
 
 [Service]
 Type=simple
 User=root
 WorkingDirectory=/opt/termix
-Environment=NODE_ENV=production
-Environment=DATA_DIR=/opt/termix/data
+EnvironmentFile=/opt/termix/.env
 ExecStart=/usr/bin/node /opt/termix/dist/backend/backend/starter.js
 Restart=on-failure
 RestartSec=5
@@ -92,7 +153,7 @@ RestartSec=5
 [Install]
 WantedBy=multi-user.target
 EOF
-systemctl enable -q --now termix
+systemctl enable -q --now guacd termix
 msg_ok "Created Service"
 
 motd_ssh
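The Termix hunks above move the inline `Environment=` lines into an `EnvironmentFile=` at /opt/termix/.env, which systemd parses as simple KEY=VALUE lines. A small sketch (my own helper, not from the PR) that approximately sanity-checks such a file before handing it to a unit:

```shell
#!/usr/bin/env bash
# Sketch: approximate sanity check of a KEY=VALUE env file like
# /opt/termix/.env, flagging lines that are not blank, a comment,
# or a plain NAME=value assignment.
validate_env_file() {
  local line lineno=0
  while IFS= read -r line; do
    lineno=$((lineno + 1))
    [[ -z "$line" || "$line" =~ ^# ]] && continue
    [[ "$line" =~ ^[A-Za-z_][A-Za-z0-9_]*= ]] || {
      echo "invalid line $lineno: $line" >&2
      return 1
    }
  done < "$1"
}

cat > /tmp/termix-demo.env <<'EOF'
NODE_ENV=production
DATA_DIR=/opt/termix/data
GUACD_HOST=127.0.0.1
GUACD_PORT=4822
EOF

# prints: env file OK
validate_env_file /tmp/termix-demo.env && echo "env file OK"
```

Centralizing the values in one file is what lets the same GUACD_HOST/GUACD_PORT pair be shared between the Termix backend and the guacd.conf written just above it.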
@@ -30,7 +30,7 @@ cp .env.example .env
 sed -i "s|^ORIGIN=.*|ORIGIN=http://${LOCAL_IP}:3280|" /opt/wishlist/.env
 echo "" >>/opt/wishlist/.env
 echo "NODE_ENV=production" >>/opt/wishlist/.env
-$STD pnpm install
+$STD pnpm install --frozen-lockfile
 $STD pnpm svelte-kit sync
 $STD pnpm prisma generate
 sed -i 's|/usr/src/app/|/opt/wishlist/|g' $(grep -rl '/usr/src/app/' /opt/wishlist)
install/yamtrack-install.sh (new file, 105 lines)
@@ -0,0 +1,105 @@
+#!/usr/bin/env bash
+
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: MickLesk (CanbiZ)
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+# Source: https://github.com/FuzzyGrim/Yamtrack
+
+source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
+color
+verb_ip6
+catch_errors
+setting_up_container
+network_check
+update_os
+
+msg_info "Installing Dependencies"
+$STD apt install -y \
+  nginx \
+  redis-server
+msg_ok "Installed Dependencies"
+
+PG_VERSION="16" setup_postgresql
+PG_DB_NAME="yamtrack" PG_DB_USER="yamtrack" setup_postgresql_db
+PYTHON_VERSION="3.12" setup_uv
+
+fetch_and_deploy_gh_release "yamtrack" "FuzzyGrim/Yamtrack" "tarball"
+
+msg_info "Installing Python Dependencies"
+cd /opt/yamtrack
+$STD uv venv .venv
+$STD uv pip install --no-cache-dir -r requirements.txt
+msg_ok "Installed Python Dependencies"
+
+msg_info "Configuring Yamtrack"
+SECRET=$(openssl rand -hex 32)
+cat <<EOF >/opt/yamtrack/src/.env
+SECRET=${SECRET}
+DB_HOST=localhost
+DB_NAME=${PG_DB_NAME}
+DB_USER=${PG_DB_USER}
+DB_PASSWORD=${PG_DB_PASS}
+DB_PORT=5432
+REDIS_URL=redis://localhost:6379
+URLS=http://${LOCAL_IP}:8000
+EOF
+
+cd /opt/yamtrack/src
+$STD /opt/yamtrack/.venv/bin/python manage.py migrate
+$STD /opt/yamtrack/.venv/bin/python manage.py collectstatic --noinput
+msg_ok "Configured Yamtrack"
+
+msg_info "Configuring Nginx"
+rm -f /etc/nginx/sites-enabled/default /etc/nginx/sites-available/default
+cp /opt/yamtrack/nginx.conf /etc/nginx/nginx.conf
+sed -i 's|user abc;|user www-data;|' /etc/nginx/nginx.conf
+sed -i 's|pid /tmp/nginx.pid;|pid /run/nginx.pid;|' /etc/nginx/nginx.conf
+sed -i 's|/yamtrack/staticfiles/|/opt/yamtrack/src/staticfiles/|' /etc/nginx/nginx.conf
+sed -i 's|error_log /dev/stderr|error_log /var/log/nginx/error.log|' /etc/nginx/nginx.conf
+sed -i 's|access_log /dev/stdout|access_log /var/log/nginx/access.log|' /etc/nginx/nginx.conf
+$STD nginx -t
+systemctl enable -q nginx
+$STD systemctl restart nginx
+msg_ok "Configured Nginx"
+
+msg_info "Creating Services"
+cat <<EOF >/etc/systemd/system/yamtrack.service
+[Unit]
+Description=Yamtrack Gunicorn
+After=network.target postgresql.service redis-server.service
+Requires=postgresql.service redis-server.service
+
+[Service]
+Type=simple
+WorkingDirectory=/opt/yamtrack/src
+ExecStart=/opt/yamtrack/.venv/bin/gunicorn config.wsgi:application -b 127.0.0.1:8001 -w 2 --timeout 120
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
+cat <<EOF >/etc/systemd/system/yamtrack-celery.service
+[Unit]
+Description=Yamtrack Celery Worker
+After=network.target postgresql.service redis-server.service yamtrack.service
+Requires=postgresql.service redis-server.service
+
+[Service]
+Type=simple
+WorkingDirectory=/opt/yamtrack/src
+ExecStart=/opt/yamtrack/.venv/bin/celery -A config worker --beat --scheduler django --loglevel INFO
+Restart=on-failure
+RestartSec=5
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
+systemctl enable -q --now redis-server yamtrack yamtrack-celery
+msg_ok "Created Services"
+
+motd_ssh
+customize
+cleanup_lxc
misc/tools.func
@@ -105,11 +105,13 @@ curl_with_retry() {
     fi
   fi
 
-  debug_log "curl attempt $attempt failed, waiting ${backoff}s before retry..."
+  debug_log "curl attempt $attempt failed (timeout=${timeout}s), waiting ${backoff}s before retry..."
   sleep "$backoff"
   # Exponential backoff: 1, 2, 4, 8... capped at 30s
   backoff=$((backoff * 2))
   ((backoff > 30)) && backoff=30
+  # Double --max-time on each retry so slow connections can finish
+  timeout=$((timeout * 2))
   ((attempt++))
 done
 
@@ -172,8 +174,10 @@ curl_api_with_retry() {
   return 0
   fi
 
-  debug_log "curl API attempt $attempt failed (HTTP $http_code), waiting ${attempt}s..."
+  debug_log "curl API attempt $attempt failed (HTTP $http_code, timeout=${timeout}s), waiting ${attempt}s..."
   sleep "$attempt"
+  # Double --max-time on each retry so slow connections can finish
+  timeout=$((timeout * 2))
   ((attempt++))
 done
 
@@ -1975,6 +1979,47 @@ extract_version_from_json() {
   fi
 }
 
+# ------------------------------------------------------------------------------
+# Get latest GitHub tag (for repos that only publish tags, not releases).
+#
+# Usage:
+#   get_latest_gh_tag "owner/repo" [prefix]
+#
+# Arguments:
+#   $1 - GitHub repo (owner/repo)
+#   $2 - Optional prefix filter (e.g., "v" to only match tags starting with "v")
+#
+# Returns:
+#   Latest tag name (stdout), or returns 1 on failure
+# ------------------------------------------------------------------------------
+get_latest_gh_tag() {
+  local repo="$1"
+  local prefix="${2:-}"
+  local temp_file
+  temp_file=$(mktemp)
+
+  if ! github_api_call "https://api.github.com/repos/${repo}/tags?per_page=50" "$temp_file"; then
+    rm -f "$temp_file"
+    return 1
+  fi
+
+  local tag=""
+  if [[ -n "$prefix" ]]; then
+    tag=$(jq -r --arg p "$prefix" '[.[] | select(.name | startswith($p))][0].name // empty' "$temp_file")
+  else
+    tag=$(jq -r '.[0].name // empty' "$temp_file")
+  fi
+
+  rm -f "$temp_file"
+
+  if [[ -z "$tag" ]]; then
+    msg_error "No tags found for ${repo}"
+    return 1
+  fi
+
+  echo "$tag"
+}
+
 # ------------------------------------------------------------------------------
 # Get latest GitHub release version with fallback to tags
 # Usage: get_latest_github_release "owner/repo" [strip_v] [include_prerelease]
@@ -2073,103 +2118,131 @@ verify_gpg_fingerprint() {
 }
 
 # ------------------------------------------------------------------------------
-# Get latest GitHub tag for a repository.
+# Fetches and deploys a GitHub tag-based source tarball.
 #
 # Description:
-#   - Queries the GitHub API for tags (not releases)
-#   - Useful for repos that only create tags, not full releases
-#   - Supports optional prefix filter and version-only extraction
-#   - Returns the latest tag name (printed to stdout)
+#   - Downloads the source tarball for a given tag from GitHub
+#   - Extracts to the target directory
+#   - Writes the version to ~/.<app>
 #
 # Usage:
-#   MONGO_VERSION=$(get_latest_gh_tag "mongodb/mongo-tools")
-#   LATEST=$(get_latest_gh_tag "owner/repo" "v") # only tags starting with "v"
-#   LATEST=$(get_latest_gh_tag "owner/repo" "" "true") # strip leading "v"
+#   fetch_and_deploy_gh_tag "guacd" "apache/guacamole-server"
+#   fetch_and_deploy_gh_tag "guacd" "apache/guacamole-server" "latest" "/opt/guacamole-server"
 #
 # Arguments:
-#   $1 - GitHub repo (owner/repo)
-#   $2 - Tag prefix filter (optional, e.g. "v" or "100.")
-#   $3 - Strip prefix from result (optional, "true" to strip $2 prefix)
-#
-# Returns:
-#   0 on success (tag printed to stdout), 1 on failure
+#   $1 - App name (used for version file ~/.<app>)
+#   $2 - GitHub repo (owner/repo)
+#   $3 - Tag version (default: "latest" → auto-detect via get_latest_gh_tag)
+#   $4 - Target directory (default: /opt/$app)
 #
 # Notes:
-#   - Skips tags containing "rc", "alpha", "beta", "dev", "test"
-#   - Sorts by version number (sort -V) to find the latest
-#   - Respects GITHUB_TOKEN for rate limiting
+#   - Supports CLEAN_INSTALL=1 to wipe target before extracting
+#   - For repos that only publish tags, not GitHub Releases
# ------------------------------------------------------------------------------
-get_latest_gh_tag() {
-  local repo="$1"
-  local prefix="${2:-}"
-  local strip_prefix="${3:-false}"
-
-  local header_args=()
-  [[ -n "${GITHUB_TOKEN:-}" ]] && header_args=(-H "Authorization: Bearer $GITHUB_TOKEN")
-
-  local http_code=""
-  http_code=$(curl -sSL --max-time 20 -w "%{http_code}" -o /tmp/gh_tags.json \
-    -H 'Accept: application/vnd.github+json' \
-    -H 'X-GitHub-Api-Version: 2022-11-28' \
-    "${header_args[@]}" \
-    "https://api.github.com/repos/${repo}/tags?per_page=100" 2>/dev/null) || true
-
-  if [[ "$http_code" == "401" ]]; then
-    msg_error "GitHub API authentication failed (HTTP 401)."
-    if [[ -n "${GITHUB_TOKEN:-}" ]]; then
-      msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
-    else
-      msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
-    fi
-    rm -f /tmp/gh_tags.json
-    return 1
-  fi
-
-  if [[ "$http_code" == "403" ]]; then
-    msg_error "GitHub API rate limit exceeded (HTTP 403)."
-    msg_error "To increase the limit, export a GitHub token before running the script:"
-    msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
-    rm -f /tmp/gh_tags.json
-    return 1
-  fi
-
-  if [[ "$http_code" == "000" || -z "$http_code" ]]; then
-    msg_error "GitHub API connection failed (no response)."
-    msg_error "Check your network/DNS: curl -sSL https://api.github.com/rate_limit"
-    rm -f /tmp/gh_tags.json
-    return 1
-  fi
-
-  if [[ "$http_code" != "200" ]] || [[ ! -s /tmp/gh_tags.json ]]; then
-    msg_error "Unable to fetch tags for ${repo} (HTTP ${http_code})"
-    rm -f /tmp/gh_tags.json
-    return 1
-  fi
-
-  local tags_json
-  tags_json=$(</tmp/gh_tags.json)
-  rm -f /tmp/gh_tags.json
-
-  # Extract tag names, filter by prefix, exclude pre-release patterns, sort by version
-  local latest=""
-  latest=$(echo "$tags_json" | grep -oP '"name":\s*"\K[^"]+' |
-    { [[ -n "$prefix" ]] && grep "^${prefix}" || cat; } |
-    grep -viE '(rc|alpha|beta|dev|test|preview|snapshot)' |
-    sort -V | tail -n1)
-
-  if [[ -z "$latest" ]]; then
-    msg_warn "No matching tags found for ${repo}${prefix:+ (prefix: $prefix)}"
-    return 1
-  fi
-
-  if [[ "$strip_prefix" == "true" && -n "$prefix" ]]; then
-    latest="${latest#"$prefix"}"
-  fi
-
-  echo "$latest"
-  return 0
-}
+fetch_and_deploy_gh_tag() {
+  local app="$1"
+  local repo="$2"
+  local version="${3:-latest}"
+  local target="${4:-/opt/$app}"
+  local app_lc=""
+  app_lc="$(echo "${app,,}" | tr -d ' ')"
+  local version_file="$HOME/.${app_lc}"
+
+  if [[ "$version" == "latest" ]]; then
+    version=$(get_latest_gh_tag "$repo") || {
+      msg_error "Failed to determine latest tag for ${repo}"
+      return 1
+    }
+  fi
+
+  local current_version=""
+  [[ -f "$version_file" ]] && current_version=$(<"$version_file")
+
+  if [[ "$current_version" == "$version" ]]; then
+    msg_ok "$app is already up-to-date ($version)"
+    return 0
+  fi
+
+  local tmpdir
+  tmpdir=$(mktemp -d) || return 1
+  local tarball_url="https://github.com/${repo}/archive/refs/tags/${version}.tar.gz"
+  local filename="${app_lc}-${version}.tar.gz"
+
+  msg_info "Fetching GitHub tag: ${app} (${version})"
+
+  download_file "$tarball_url" "$tmpdir/$filename" || {
+    msg_error "Download failed: $tarball_url"
+    rm -rf "$tmpdir"
+    return 1
+  }
+
+  mkdir -p "$target"
+  if [[ "${CLEAN_INSTALL:-0}" == "1" ]]; then
+    rm -rf "${target:?}/"*
+  fi
+
+  tar --no-same-owner -xzf "$tmpdir/$filename" -C "$tmpdir" || {
+    msg_error "Failed to extract tarball"
+    rm -rf "$tmpdir"
+    return 1
+  }
+
+  local unpack_dir
+  unpack_dir=$(find "$tmpdir" -mindepth 1 -maxdepth 1 -type d | head -n1)
+
+  shopt -s dotglob nullglob
+  cp -r "$unpack_dir"/* "$target/"
+  shopt -u dotglob nullglob
+
+  rm -rf "$tmpdir"
+  echo "$version" >"$version_file"
+  msg_ok "Deployed ${app} ${version} to ${target}"
+
+  return 0
+}
+
+# ------------------------------------------------------------------------------
+# Checks for new GitHub tag (for repos without releases).
|
||||||
|
#
|
||||||
|
# Description:
|
||||||
|
# - Uses get_latest_gh_tag to fetch the latest tag
|
||||||
|
# - Compares it to a local cached version (~/.<app>)
|
||||||
|
# - If newer, sets global CHECK_UPDATE_RELEASE and returns 0
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# if check_for_gh_tag "guacd" "apache/guacamole-server"; then
|
||||||
|
# fetch_and_deploy_gh_tag "guacd" "apache/guacamole-server" "/opt/guacamole-server"
|
||||||
|
# fi
|
||||||
|
#
|
||||||
|
# Notes:
|
||||||
|
# - For repos that only publish tags, not GitHub Releases
|
||||||
|
# - Same interface as check_for_gh_release
|
||||||
|
# ------------------------------------------------------------------------------
|
||||||
|
check_for_gh_tag() {
|
||||||
|
local app="$1"
|
||||||
|
local repo="$2"
|
||||||
|
local prefix="${3:-}"
|
||||||
|
local app_lc=""
|
||||||
|
app_lc="$(echo "${app,,}" | tr -d ' ')"
|
||||||
|
local current_file="$HOME/.${app_lc}"
|
||||||
|
|
||||||
|
msg_info "Checking for update: ${app}"
|
||||||
|
|
||||||
|
local latest=""
|
||||||
|
latest=$(get_latest_gh_tag "$repo" "$prefix") || return 1
|
||||||
|
|
||||||
|
local current=""
|
||||||
|
[[ -f "$current_file" ]] && current="$(<"$current_file")"
|
||||||
|
|
||||||
|
if [[ -z "$current" || "$current" != "$latest" ]]; then
|
||||||
|
CHECK_UPDATE_RELEASE="$latest"
|
||||||
|
msg_ok "Update available: ${app} ${current:-not installed} → ${latest}"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
msg_ok "No update available: ${app} (${latest})"
|
||||||
|
return 1
|
||||||
|
}
|
||||||
|
|
||||||
# ==============================================================================
|
# ==============================================================================
|
||||||
# INSTALL FUNCTIONS
|
# INSTALL FUNCTIONS
|
||||||
# ==============================================================================
|
# ==============================================================================
|
||||||
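The tag-selection pipeline above (extract names, drop pre-release suffixes, pick the highest with `sort -V`) can be tried in isolation; the tag names below are invented sample data:

```shell
# Stand-alone run of the filter/sort step used to pick the newest stable tag.
latest=$(printf '%s\n' "v1.2.0" "v1.10.1" "v1.10.2-rc1" "v1.3.0-beta" "v1.9.9" |
  grep -viE '(rc|alpha|beta|dev|test|preview|snapshot)' |
  sort -V | tail -n1)
echo "$latest" # → v1.10.1
```

Plain lexical `sort` would put `v1.9.9` after `v1.10.1`; `sort -V` compares numeric components, which is why the helper relies on it.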
@@ -2219,6 +2292,35 @@ check_for_gh_release() {
   # Try /latest endpoint for non-pinned versions (most efficient)
   local releases_json="" http_code=""
 
+  # For pinned versions, query the specific release tag directly
+  if [[ -n "$pinned_version_in" ]]; then
+    http_code=$(curl -sSL --max-time 20 -w "%{http_code}" -o /tmp/gh_check.json \
+      -H 'Accept: application/vnd.github+json' \
+      -H 'X-GitHub-Api-Version: 2022-11-28' \
+      "${header_args[@]}" \
+      "https://api.github.com/repos/${source}/releases/tags/${pinned_version_in}" 2>/dev/null) || true
+
+    if [[ "$http_code" == "200" ]] && [[ -s /tmp/gh_check.json ]]; then
+      releases_json="[$(</tmp/gh_check.json)]"
+    elif [[ "$http_code" == "401" ]]; then
+      msg_error "GitHub API authentication failed (HTTP 401)."
+      if [[ -n "${GITHUB_TOKEN:-}" ]]; then
+        msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
+      else
+        msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
+      fi
+      rm -f /tmp/gh_check.json
+      return 1
+    elif [[ "$http_code" == "403" ]]; then
+      msg_error "GitHub API rate limit exceeded (HTTP 403)."
+      msg_error "To increase the limit, export a GitHub token before running the script:"
+      msg_error "  export GITHUB_TOKEN=\"ghp_your_token_here\""
+      rm -f /tmp/gh_check.json
+      return 1
+    fi
+    rm -f /tmp/gh_check.json
+  fi
+
   if [[ -z "$pinned_version_in" ]]; then
     http_code=$(curl -sSL --max-time 20 -w "%{http_code}" -o /tmp/gh_check.json \
       -H 'Accept: application/vnd.github+json' \
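The hunk above relies on a `header_args` array built earlier in the function; that construction is not shown in this diff, but a plausible sketch (the `Authorization: Bearer` format is an assumption, not taken from the source) is to add the header only when a token is exported:

```shell
# Hypothetical reconstruction: empty array without a token, an auth header with one.
GITHUB_TOKEN="ghp_example_not_real" # fake token for illustration only
header_args=()
[[ -n "${GITHUB_TOKEN:-}" ]] && header_args=(-H "Authorization: Bearer ${GITHUB_TOKEN}")
echo "header args: ${#header_args[@]}" # → header args: 2
```

Expanding the array with `"${header_args[@]}"` keeps the curl call valid in both cases: with no token it expands to nothing, so no empty argument is passed.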
@@ -2486,6 +2588,8 @@ check_for_codeberg_release() {
 # ------------------------------------------------------------------------------
 create_self_signed_cert() {
   local APP_NAME="${1:-${APPLICATION}}"
+  local HOSTNAME="$(hostname -f)"
+  local IP="$(hostname -I | awk '{print $1}')"
   local APP_NAME_LC=$(echo "${APP_NAME,,}" | tr -d ' ')
   local CERT_DIR="/etc/ssl/${APP_NAME_LC}"
   local CERT_KEY="${CERT_DIR}/${APP_NAME_LC}.key"
@@ -2503,8 +2607,8 @@ create_self_signed_cert() {
 
   mkdir -p "$CERT_DIR"
   $STD openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
-    -subj "/CN=${APP_NAME}" \
-    -addext "subjectAltName=DNS:${APP_NAME}" \
+    -subj "/CN=${HOSTNAME}" \
+    -addext "subjectAltName=DNS:${HOSTNAME},DNS:localhost,IP:${IP},IP:127.0.0.1" \
     -keyout "$CERT_KEY" \
     -out "$CERT_CRT" || {
     msg_error "Failed to create self-signed certificate"
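The switch from `CN=${APP_NAME}` to a real SAN list matters because modern TLS clients ignore the CN entirely and match only subjectAltName entries. A quick way to confirm what SANs a certificate generated with this flag pattern carries (using a throwaway name `demo.local` in place of the machine-specific `$(hostname -f)`):

```shell
# Generate a throwaway cert with the same openssl flag pattern, then print its SANs.
openssl req -new -newkey rsa:2048 -days 1 -nodes -x509 \
  -subj "/CN=demo.local" \
  -addext "subjectAltName=DNS:demo.local,DNS:localhost,IP:127.0.0.1" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
san=$(openssl x509 -in /tmp/demo.crt -noout -text | grep -A1 "Subject Alternative Name")
echo "$san"
```

The output should list `DNS:demo.local, DNS:localhost, IP Address:127.0.0.1`, which is exactly what lets browsers accept the cert for the hostname, `localhost`, and the container IP alike. Note that `-addext` requires OpenSSL 1.1.1 or newer.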
@@ -2574,6 +2678,30 @@ function ensure_usr_local_bin_persist() {
   fi
 }
 
+# ------------------------------------------------------------------------------
+# curl_download - Downloads a file with automatic retry and exponential backoff.
+#
+# Usage: curl_download <output_file> <url>
+#
+# Retries up to 5 times with increasing --max-time (60/120/240/480/960s).
+# Returns 0 on success, 1 if all attempts fail.
+# ------------------------------------------------------------------------------
+function curl_download() {
+  local output="$1"
+  local url="$2"
+  local timeouts=(60 120 240 480 960)
+
+  for i in "${!timeouts[@]}"; do
+    if curl --connect-timeout 15 --max-time "${timeouts[$i]}" -fsSL -o "$output" "$url"; then
+      return 0
+    fi
+    if ((i < ${#timeouts[@]} - 1)); then
+      msg_warn "Download timed out after ${timeouts[$i]}s, retrying... (attempt $((i + 2))/${#timeouts[@]})"
+    fi
+  done
+  return 1
+}
+
 # ------------------------------------------------------------------------------
 # Downloads and deploys latest Codeberg release (source, binary, tarball, asset).
 #
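The retry skeleton of `curl_download` can be observed offline by stubbing the curl call; here the stub fails twice and succeeds on the third attempt:

```shell
# Retry loop shaped like curl_download's, with the network call stubbed out.
timeouts=(60 120 240 480 960)
calls=0
fake_fetch() { # stands in for the real curl invocation
  calls=$((calls + 1))
  [ "$calls" -ge 3 ] # fail on calls 1 and 2, succeed from call 3 on
}
result=1
for i in "${!timeouts[@]}"; do
  if fake_fetch; then
    result=0
    break
  fi
done
echo "calls=$calls result=$result" # → calls=3 result=0
```

The loop index doubles as the timeout selector, so each retry automatically gets a longer `--max-time` budget without any separate attempt counter.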
@@ -2631,8 +2759,7 @@ function fetch_and_deploy_codeberg_release() {
   local app_lc=$(echo "${app,,}" | tr -d ' ')
   local version_file="$HOME/.${app_lc}"
 
-  local api_timeout="--connect-timeout 10 --max-time 60"
-  local download_timeout="--connect-timeout 15 --max-time 900"
+  local api_timeouts=(60 120 240)
 
   local current_version=""
   [[ -f "$version_file" ]] && current_version=$(<"$version_file")
@@ -2672,7 +2799,7 @@ function fetch_and_deploy_codeberg_release() {
 
   # Codeberg archive URL format: https://codeberg.org/{owner}/{repo}/archive/{tag}.tar.gz
   local archive_url="https://codeberg.org/$repo/archive/${tag_name}.tar.gz"
-  if curl $download_timeout -fsSL -o "$tmpdir/$filename" "$archive_url"; then
+  if curl_download "$tmpdir/$filename" "$archive_url"; then
     download_success=true
   fi
 
@@ -2719,16 +2846,18 @@ function fetch_and_deploy_codeberg_release() {
     return 1
   fi
 
-  local max_retries=3 retry_delay=2 attempt=1 success=false resp http_code
+  local attempt=0 success=false resp http_code
 
-  while ((attempt <= max_retries)); do
-    resp=$(curl $api_timeout -fsSL -w "%{http_code}" -o /tmp/codeberg_rel.json "$api_url") && success=true && break
-    sleep "$retry_delay"
+  while ((attempt < ${#api_timeouts[@]})); do
+    resp=$(curl --connect-timeout 10 --max-time "${api_timeouts[$attempt]}" -fsSL -w "%{http_code}" -o /tmp/codeberg_rel.json "$api_url") && success=true && break
     ((attempt++))
+    if ((attempt < ${#api_timeouts[@]})); then
+      msg_warn "API request timed out after ${api_timeouts[$((attempt - 1))]}s, retrying... (attempt $((attempt + 1))/${#api_timeouts[@]})"
+    fi
   done
 
   if ! $success; then
-    msg_error "Failed to fetch release metadata from $api_url after $max_retries attempts"
+    msg_error "Failed to fetch release metadata from $api_url after ${#api_timeouts[@]} attempts"
     return 1
   fi
 
@@ -2769,7 +2898,7 @@ function fetch_and_deploy_codeberg_release() {
 
   # Codeberg archive URL format
   local archive_url="https://codeberg.org/$repo/archive/${tag_name}.tar.gz"
-  if curl $download_timeout -fsSL -o "$tmpdir/$filename" "$archive_url"; then
+  if curl_download "$tmpdir/$filename" "$archive_url"; then
     download_success=true
   fi
 
@@ -2843,7 +2972,7 @@ function fetch_and_deploy_codeberg_release() {
   fi
 
   filename="${url_match##*/}"
-  curl $download_timeout -fsSL -o "$tmpdir/$filename" "$url_match" || {
+  curl_download "$tmpdir/$filename" "$url_match" || {
     msg_error "Download failed: $url_match"
     rm -rf "$tmpdir"
     return 1
@@ -2886,7 +3015,7 @@ function fetch_and_deploy_codeberg_release() {
   }
 
   filename="${asset_url##*/}"
-  curl $download_timeout -fsSL -o "$tmpdir/$filename" "$asset_url" || {
+  curl_download "$tmpdir/$filename" "$asset_url" || {
     msg_error "Download failed: $asset_url"
     rm -rf "$tmpdir"
     return 1
@@ -2987,7 +3116,7 @@ function fetch_and_deploy_codeberg_release() {
   local target_file="$app"
   [[ "$use_filename" == "true" ]] && target_file="$filename"
 
-  curl $download_timeout -fsSL -o "$target/$target_file" "$asset_url" || {
+  curl_download "$target/$target_file" "$asset_url" || {
     msg_error "Download failed: $asset_url"
     rm -rf "$tmpdir"
     return 1
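The `api_timeouts` array replaces a fixed timeout plus sleep-between-retries with an escalating per-attempt budget. The indexing can be checked in isolation:

```shell
# How the api_timeouts array drives escalating --max-time values, one per attempt.
api_timeouts=(60 120 240)
attempt=0
while [ "$attempt" -lt "${#api_timeouts[@]}" ]; do
  echo "attempt $((attempt + 1)): --max-time ${api_timeouts[$attempt]}"
  attempt=$((attempt + 1))
done
# last line printed: attempt 3: --max-time 240
```

Because the loop bound is `${#api_timeouts[@]}`, tuning the retry policy is a one-line change to the array; no separate `max_retries` variable can drift out of sync.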
@@ -3182,8 +3311,7 @@ function fetch_and_deploy_gh_release() {
   local app_lc=$(echo "${app,,}" | tr -d ' ')
   local version_file="$HOME/.${app_lc}"
 
-  local api_timeout="--connect-timeout 10 --max-time 60"
-  local download_timeout="--connect-timeout 15 --max-time 900"
+  local api_timeouts=(60 120 240)
 
   local current_version=""
   [[ -f "$version_file" ]] && current_version=$(<"$version_file")
@@ -3203,10 +3331,10 @@ function fetch_and_deploy_gh_release() {
     return 1
   fi
 
-  local max_retries=3 retry_delay=2 attempt=1 success=false http_code
+  local max_retries=${#api_timeouts[@]} retry_delay=2 attempt=1 success=false http_code
 
   while ((attempt <= max_retries)); do
-    http_code=$(curl $api_timeout -sSL -w "%{http_code}" -o /tmp/gh_rel.json "${header[@]}" "$api_url" 2>/dev/null) || true
+    http_code=$(curl --connect-timeout 10 --max-time "${api_timeouts[$((attempt - 1))]:-240}" -sSL -w "%{http_code}" -o /tmp/gh_rel.json "${header[@]}" "$api_url" 2>/dev/null) || true
     if [[ "$http_code" == "200" ]]; then
       success=true
       break
@@ -3280,7 +3408,7 @@ function fetch_and_deploy_gh_release() {
   local direct_tarball_url="https://github.com/$repo/archive/refs/tags/$tag_name.tar.gz"
   filename="${app_lc}-${version_safe}.tar.gz"
 
-  curl $download_timeout -fsSL -o "$tmpdir/$filename" "$direct_tarball_url" || {
+  curl_download "$tmpdir/$filename" "$direct_tarball_url" || {
     msg_error "Download failed: $direct_tarball_url"
     rm -rf "$tmpdir"
     return 1
@@ -3383,7 +3511,7 @@ function fetch_and_deploy_gh_release() {
   fi
 
   filename="${url_match##*/}"
-  curl $download_timeout -fsSL -o "$tmpdir/$filename" "$url_match" || {
+  curl_download "$tmpdir/$filename" "$url_match" || {
     msg_error "Download failed: $url_match"
     rm -rf "$tmpdir"
     return 1
@@ -3450,7 +3578,7 @@ function fetch_and_deploy_gh_release() {
   }
 
   filename="${asset_url##*/}"
-  curl $download_timeout -fsSL -o "$tmpdir/$filename" "$asset_url" || {
+  curl_download "$tmpdir/$filename" "$asset_url" || {
     msg_error "Download failed: $asset_url"
     rm -rf "$tmpdir"
     return 1
@@ -3571,7 +3699,7 @@ function fetch_and_deploy_gh_release() {
   local target_file="$app"
   [[ "$use_filename" == "true" ]] && target_file="$filename"
 
-  curl $download_timeout -fsSL -o "$target/$target_file" "$asset_url" || {
+  curl_download "$target/$target_file" "$asset_url" || {
     msg_error "Download failed: $asset_url"
     rm -rf "$tmpdir"
     return 1
@@ -4128,6 +4256,8 @@ function setup_gs() {
 # - NVIDIA requires matching host driver version
 # ------------------------------------------------------------------------------
 function setup_hwaccel() {
+  local service_user="${1:-}"
+
   # Check if user explicitly disabled GPU in advanced settings
   # ENABLE_GPU is exported from build.func
   if [[ "${ENABLE_GPU:-no}" == "no" ]]; then
@@ -4379,7 +4509,7 @@ function setup_hwaccel() {
   # ═══════════════════════════════════════════════════════════════════════════
   # Device Permissions
   # ═══════════════════════════════════════════════════════════════════════════
-  _setup_gpu_permissions "$in_ct"
+  _setup_gpu_permissions "$in_ct" "$service_user"
 
   cache_installed_version "hwaccel" "1.0"
   msg_ok "Setup Hardware Acceleration"
@@ -4572,9 +4702,6 @@ _setup_amd_apu() {
     $STD apt -y install firmware-amd-graphics 2>/dev/null || true
   fi
 
-  # ROCm compute stack (OpenCL + HIP) - also works for many APUs
-  _setup_rocm "$os_id" "$os_codename"
-
   msg_ok "AMD APU configured"
 }
 
@@ -4622,16 +4749,9 @@ _setup_rocm() {
     return 0
   }
 
-  # AMDGPU driver repository (append to same keyring)
-  {
-    echo ""
-    echo "Types: deb"
-    echo "URIs: https://repo.radeon.com/amdgpu/latest/ubuntu"
-    echo "Suites: ${ROCM_REPO_CODENAME}"
-    echo "Components: main"
-    echo "Architectures: amd64"
-    echo "Signed-By: /etc/apt/keyrings/rocm.gpg"
-  } >>/etc/apt/sources.list.d/rocm.sources
+  # Note: The amdgpu/latest/ubuntu repo (kernel driver packages) is intentionally
+  # omitted — kernel drivers are managed by the Proxmox host, not the LXC container.
+  # Only the ROCm userspace compute stack is needed inside the container.
 
   # Pin ROCm packages to prefer radeon repo
   cat <<EOF >/etc/apt/preferences.d/rocm-pin-600
@@ -4640,7 +4760,26 @@ Pin: release o=repo.radeon.com
 Pin-Priority: 600
 EOF
 
-  $STD apt update
+  # apt update with retry — repo.radeon.com CDN can be mid-sync (transient size mismatches).
+  # Run with ERR trap disabled so a transient failure does not abort the entire install.
+  local _apt_ok=0
+  for _attempt in 1 2 3; do
+    if (
+      set +e
+      apt-get update -qq 2>&1
+      exit $?
+    ) 2>/dev/null; then
+      _apt_ok=1
+      break
+    fi
+    msg_warn "apt update failed (attempt ${_attempt}/3) — AMD repo may be temporarily unavailable, retrying in 30s…"
+    sleep 30
+  done
+  if [[ $_apt_ok -eq 0 ]]; then
+    msg_warn "apt update still failing after 3 attempts — skipping ROCm install"
+    return 0
+  fi
 
   # Install only runtime packages — full 'rocm' meta-package includes 15GB+ dev tools
   $STD apt install -y rocm-opencl-runtime rocm-hip-runtime rocm-smi-lib 2>/dev/null || {
     msg_warn "ROCm runtime install failed — trying minimal set"
@@ -5004,6 +5143,7 @@ EOF
 # ══════════════════════════════════════════════════════════════════════════════
 _setup_gpu_permissions() {
   local in_ct="$1"
+  local service_user="${2:-}"
 
   # /dev/dri permissions (Intel/AMD)
   if [[ "$in_ct" == "0" && -d /dev/dri ]]; then
@@ -5070,6 +5210,12 @@ _setup_gpu_permissions() {
     chmod 666 /dev/kfd 2>/dev/null || true
     msg_info "AMD ROCm compute device configured"
   fi
+
+  # Add service user to render and video groups for GPU hardware acceleration
+  if [[ -n "$service_user" ]]; then
+    usermod -aG render "$service_user" 2>/dev/null || true
+    usermod -aG video "$service_user" 2>/dev/null || true
+  fi
 }
 
 # ------------------------------------------------------------------------------
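The ROCm hunk wraps `apt-get update` in a subshell with `set +e` so that a transient failure returns a testable status instead of tripping an outer ERR trap. The same pattern can be exercised with a stubbed flaky command; a file-based counter is used because a plain variable incremented inside the subshell would not survive it:

```shell
# The "retry in a subshell with set +e" pattern, stubbed: fails once, then succeeds.
cnt=$(mktemp)
echo 0 >"$cnt"
flaky() {
  local n
  n=$(($(cat "$cnt") + 1))
  echo "$n" >"$cnt"
  [ "$n" -ge 2 ] # fail on the first call, succeed from the second on
}
ok=0
for attempt in 1 2 3; do
  if (
    set +e
    flaky
  ); then
    ok=1
    break
  fi
done
echo "tries=$(cat "$cnt") ok=$ok" # → tries=2 ok=1
```

The subshell's exit status is the last command's, so the outer `if` sees success or failure per attempt while any `set -e`/ERR handling in the caller stays untouched.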
@@ -6552,22 +6698,38 @@ EOF
|
|||||||
# - Optionally uses official PGDG repository for specific versions
|
# - Optionally uses official PGDG repository for specific versions
|
||||||
# - Detects existing PostgreSQL version
|
# - Detects existing PostgreSQL version
|
||||||
# - Dumps all databases before upgrade
|
# - Dumps all databases before upgrade
|
||||||
# - Installs optional PG_MODULES (e.g. postgis, contrib)
|
# - Installs optional PG_MODULES (e.g. postgis, contrib, cron)
|
||||||
# - Restores dumped data post-upgrade
|
# - Restores dumped data post-upgrade
|
||||||
#
|
#
|
||||||
# Variables:
|
# Variables:
|
||||||
# USE_PGDG_REPO - Use official PGDG repository (default: true)
|
# USE_PGDG_REPO - Use official PGDG repository (default: true)
|
||||||
# Set to "false" to use distro packages instead
|
# Set to "false" to use distro packages instead
|
||||||
# PG_VERSION - Major PostgreSQL version (e.g. 15, 16) (default: 16)
|
# PG_VERSION - Major PostgreSQL version (e.g. 15, 16) (default: 16)
|
||||||
# PG_MODULES - Comma-separated list of modules (e.g. "postgis,contrib")
|
# PG_MODULES - Comma-separated list of modules (e.g. "postgis,contrib,cron")
|
||||||
#
|
#
|
||||||
# Examples:
|
# Examples:
|
||||||
# setup_postgresql # Uses PGDG repo, PG 16
|
# setup_postgresql # Uses PGDG repo, PG 16
|
||||||
# PG_VERSION="17" setup_postgresql # Specific version from PGDG
|
# PG_VERSION="17" setup_postgresql # Specific version from PGDG
|
||||||
# USE_PGDG_REPO=false setup_postgresql # Uses distro package instead
|
# USE_PGDG_REPO=false setup_postgresql # Uses distro package instead
|
||||||
|
# PG_VERSION="17" PG_MODULES="cron" setup_postgresql # With pg_cron module
|
||||||
# ------------------------------------------------------------------------------
|
# ------------------------------------------------------------------------------
|
||||||
|
|
||||||
function setup_postgresql() {
|
# Internal helper: Configure shared_preload_libraries for pg_cron
|
||||||
|
_configure_pg_cron_preload() {
|
||||||
|
local modules="${1:-}"
|
||||||
|
[[ -z "$modules" ]] && return 0
|
||||||
|
if [[ ",$modules," == *",cron,"* ]]; then
|
||||||
|
local current_libs
|
||||||
|
current_libs=$(sudo -u postgres psql -tAc "SHOW shared_preload_libraries;" 2>/dev/null || echo "")
|
||||||
|
if [[ "$current_libs" != *"pg_cron"* ]]; then
|
||||||
|
local new_libs="${current_libs:+${current_libs},}pg_cron"
|
||||||
|
$STD sudo -u postgres psql -c "ALTER SYSTEM SET shared_preload_libraries = '${new_libs}';"
|
||||||
|
$STD systemctl restart postgresql
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
setup_postgresql() {
|
||||||
local PG_VERSION="${PG_VERSION:-16}"
|
local PG_VERSION="${PG_VERSION:-16}"
|
||||||
local PG_MODULES="${PG_MODULES:-}"
|
local PG_MODULES="${PG_MODULES:-}"
|
||||||
local USE_PGDG_REPO="${USE_PGDG_REPO:-true}"
|
local USE_PGDG_REPO="${USE_PGDG_REPO:-true}"
|
||||||
@@ -6605,6 +6767,7 @@ function setup_postgresql() {
|
|||||||
$STD apt install -y "postgresql-${CURRENT_PG_VERSION}-${module}" 2>/dev/null || true
|
$STD apt install -y "postgresql-${CURRENT_PG_VERSION}-${module}" 2>/dev/null || true
|
||||||
done
|
done
|
||||||
fi
|
fi
|
||||||
|
_configure_pg_cron_preload "$PG_MODULES"
|
||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
@@ -6640,6 +6803,7 @@ function setup_postgresql() {
|
|||||||
$STD apt install -y "postgresql-${INSTALLED_VERSION}-${module}" 2>/dev/null || true
|
$STD apt install -y "postgresql-${INSTALLED_VERSION}-${module}" 2>/dev/null || true
|
||||||
done
|
done
|
||||||
fi
|
fi
|
||||||
|
_configure_pg_cron_preload "$PG_MODULES"
|
||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
@@ -6661,6 +6825,7 @@ function setup_postgresql() {
|
|||||||
$STD apt install -y "postgresql-${PG_VERSION}-${module}" 2>/dev/null || true
|
$STD apt install -y "postgresql-${PG_VERSION}-${module}" 2>/dev/null || true
|
||||||
done
|
done
|
||||||
fi
|
fi
|
||||||
|
_configure_pg_cron_preload "$PG_MODULES"
|
||||||
return 0
|
return 0
|
||||||
fi
|
fi
|
||||||
|
|
||||||
@@ -6688,13 +6853,16 @@ function setup_postgresql() {
|
|||||||
local SUITE
|
local SUITE
|
||||||
case "$DISTRO_CODENAME" in
|
case "$DISTRO_CODENAME" in
|
||||||
trixie | forky | sid)
|
trixie | forky | sid)
|
||||||
|
|
||||||
if verify_repo_available "https://apt.postgresql.org/pub/repos/apt" "trixie-pgdg"; then
|
if verify_repo_available "https://apt.postgresql.org/pub/repos/apt" "trixie-pgdg"; then
|
||||||
SUITE="trixie-pgdg"
|
SUITE="trixie-pgdg"
|
||||||
|
|
||||||
else
|
else
|
||||||
msg_warn "PGDG repo not available for ${DISTRO_CODENAME}, falling back to distro packages"
|
msg_warn "PGDG repo not available for ${DISTRO_CODENAME}, falling back to distro packages"
|
||||||
USE_PGDG_REPO=false setup_postgresql
|
USE_PGDG_REPO=false setup_postgresql
|
||||||
return $?
|
return $?
|
||||||
fi
|
fi
|
||||||
|
|
||||||
;;
|
;;
|
||||||
*)
|
*)
|
||||||
SUITE=$(get_fallback_suite "$DISTRO_ID" "$DISTRO_CODENAME" "https://apt.postgresql.org/pub/repos/apt")
|
SUITE=$(get_fallback_suite "$DISTRO_ID" "$DISTRO_CODENAME" "https://apt.postgresql.org/pub/repos/apt")
|
||||||
@@ -6778,6 +6946,7 @@ function setup_postgresql() {
|
|||||||
}
|
}
|
||||||
done
|
done
|
||||||
fi
|
fi
|
||||||
|
_configure_pg_cron_preload "$PG_MODULES"
|
||||||
}
|
}
|
||||||
|
|
||||||
# ------------------------------------------------------------------------------
|
# ------------------------------------------------------------------------------
|
||||||
@@ -6796,6 +6965,7 @@ function setup_postgresql() {
|
|||||||
# PG_DB_NAME="immich" PG_DB_USER="immich" PG_DB_EXTENSIONS="pgvector" setup_postgresql_db
|
# PG_DB_NAME="immich" PG_DB_USER="immich" PG_DB_EXTENSIONS="pgvector" setup_postgresql_db
|
||||||
# PG_DB_NAME="ghostfolio" PG_DB_USER="ghostfolio" PG_DB_GRANT_SUPERUSER="true" setup_postgresql_db
|
# PG_DB_NAME="ghostfolio" PG_DB_USER="ghostfolio" PG_DB_GRANT_SUPERUSER="true" setup_postgresql_db
|
||||||
# PG_DB_NAME="adventurelog" PG_DB_USER="adventurelog" PG_DB_EXTENSIONS="postgis" setup_postgresql_db
|
# PG_DB_NAME="adventurelog" PG_DB_USER="adventurelog" PG_DB_EXTENSIONS="postgis" setup_postgresql_db
|
||||||
|
# PG_DB_NAME="splitpro" PG_DB_USER="splitpro" PG_DB_EXTENSIONS="pg_cron" setup_postgresql_db
|
||||||
#
|
#
|
||||||
# Variables:
|
# Variables:
|
||||||
# PG_DB_NAME - Database name (required)
|
# PG_DB_NAME - Database name (required)
|
||||||
@@ -6827,6 +6997,13 @@ function setup_postgresql_db() {
|
|||||||
$STD sudo -u postgres psql -c "CREATE ROLE $PG_DB_USER WITH LOGIN PASSWORD '$PG_DB_PASS';"
|
$STD sudo -u postgres psql -c "CREATE ROLE $PG_DB_USER WITH LOGIN PASSWORD '$PG_DB_PASS';"
|
||||||
$STD sudo -u postgres psql -c "CREATE DATABASE $PG_DB_NAME WITH OWNER $PG_DB_USER ENCODING 'UTF8' TEMPLATE template0;"
|
 $STD sudo -u postgres psql -c "CREATE DATABASE $PG_DB_NAME WITH OWNER $PG_DB_USER ENCODING 'UTF8' TEMPLATE template0;"
+
+# Configure pg_cron database BEFORE creating the extension (must be set before pg_cron loads)
+if [[ -n "${PG_DB_EXTENSIONS:-}" ]] && [[ ",${PG_DB_EXTENSIONS//[[:space:]]/}," == *",pg_cron,"* ]]; then
+$STD sudo -u postgres psql -c "ALTER SYSTEM SET cron.database_name = '${PG_DB_NAME}';"
+$STD sudo -u postgres psql -c "ALTER SYSTEM SET cron.timezone = 'UTC';"
+$STD systemctl restart postgresql
+fi
+
 # Install extensions (comma-separated)
 if [[ -n "${PG_DB_EXTENSIONS:-}" ]]; then
 IFS=',' read -ra EXT_LIST <<<"${PG_DB_EXTENSIONS:-}"

@@ -6836,6 +7013,12 @@ function setup_postgresql_db() {
 done
 fi

+# Grant pg_cron schema permissions to DB user
+if [[ -n "${PG_DB_EXTENSIONS:-}" ]] && [[ ",${PG_DB_EXTENSIONS//[[:space:]]/}," == *",pg_cron,"* ]]; then
+$STD sudo -u postgres psql -d "$PG_DB_NAME" -c "GRANT USAGE ON SCHEMA cron TO ${PG_DB_USER};"
+$STD sudo -u postgres psql -d "$PG_DB_NAME" -c "GRANT ALL ON ALL TABLES IN SCHEMA cron TO ${PG_DB_USER};"
+fi
+
 # ALTER ROLE settings for Django/Rails compatibility (unless skipped)
 if [[ "${PG_DB_SKIP_ALTER_ROLE:-}" != "true" ]]; then
 $STD sudo -u postgres psql -c "ALTER ROLE $PG_DB_USER SET client_encoding TO 'utf8';"

@@ -6877,7 +7060,6 @@ function setup_postgresql_db() {
 export PG_DB_USER
 export PG_DB_PASS
 }

 # ------------------------------------------------------------------------------
 # Installs rbenv and ruby-build, installs Ruby and optionally Rails.
 #
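The pg_cron checks in the hunks above rely on an exact-token membership test on a comma-separated list: strip all whitespace, wrap the list in commas, then substring-match `,pg_cron,`. A minimal standalone sketch of that pattern (the `has_extension` helper name is hypothetical, not part of the script):

```shell
# Hypothetical helper mirroring the diff's pg_cron check:
# whitespace-stripped, comma-wrapped exact-token match, so
# "my_pg_cron" or "pg_cron2" never match "pg_cron".
has_extension() {
  local list="$1" ext="$2"
  [[ ",${list//[[:space:]]/}," == *",${ext},"* ]]
}

has_extension "uuid-ossp, pg_cron, vector" "pg_cron" && echo "pg_cron requested"
# prints: pg_cron requested
```

The comma wrapping is what makes the match exact per token rather than a loose substring search.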

@@ -134,16 +134,20 @@ manage_states() {
 read -rp "Shutdown source and start new container? [Y/n]: " answer
 answer=${answer:-Y}
 if [[ $answer =~ ^[Yy] ]]; then
-pct shutdown "$CONTAINER_ID"
-for i in {1..36}; do
-sleep 5
-! pct status "$CONTAINER_ID" | grep -q running && break
-done
 if pct status "$CONTAINER_ID" | grep -q running; then
-read -rp "Timeout reached. Force shutdown? [Y/n]: " force
-if [[ ${force:-Y} =~ ^[Yy] ]]; then
-pkill -9 -f "lxc-start -F -n $CONTAINER_ID"
+pct shutdown "$CONTAINER_ID"
+for i in {1..36}; do
+sleep 5
+! pct status "$CONTAINER_ID" | grep -q running && break
+done
+if pct status "$CONTAINER_ID" | grep -q running; then
+read -rp "Timeout reached. Force shutdown? [Y/n]: " force
+if [[ ${force:-Y} =~ ^[Yy] ]]; then
+pkill -9 -f "lxc-start -F -n $CONTAINER_ID"
+fi
 fi
 fi
+else
+msg_custom "ℹ️" "\e[36m" "Source container $CONTAINER_ID is already stopped"
 fi
 pct start "$NEW_CONTAINER_ID"
 msg_ok "New container started"
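The rewritten `manage_states` hunk polls `pct status` up to 36 times with 5-second sleeps (about 3 minutes) before offering a force kill. That bounded-wait loop can be sketched generically; the `wait_until_stopped` name and the shortened sleep are illustrative assumptions, not part of the script:

```shell
# Hypothetical generalization of the bounded shutdown wait above:
# run a "still running?" probe up to $1 times, succeed as soon as it
# reports stopped, fail once the budget is exhausted (36 x 5s in the script).
wait_until_stopped() {
  local tries="$1"; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" || return 0 # probe failed => no longer running => success
    sleep 0.1        # the real script sleeps 5s between probes
  done
  return 1           # timeout: the caller may then offer a force shutdown
}
```

In the script the probe is `pct status "$CONTAINER_ID" | grep -q running`, and a non-zero return from the loop is what triggers the "Timeout reached. Force shutdown?" prompt.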
@@ -426,16 +426,23 @@ for container in $CHOICE; do
 elif [ $exit_code -eq 75 ]; then
 echo -e "${YW}[WARN]${CL} Container $container skipped (requires interactive mode)"
 elif [ "$BACKUP_CHOICE" == "yes" ]; then
-msg_info "Restoring LXC from backup"
+msg_error "Update failed for container $container (exit code: $exit_code) — attempting restore"
+msg_info "Restoring LXC $container from backup ($STORAGE_CHOICE)"
 pct stop $container
 LXC_STORAGE=$(pct config $container | awk -F '[:,]' '/rootfs/ {print $2}')
-pct restore $container /var/lib/vz/dump/vzdump-lxc-${container}-*.tar.zst --storage $LXC_STORAGE --force >/dev/null 2>&1
-pct start $container
+BACKUP_ENTRY=$(pvesm list "$STORAGE_CHOICE" 2>/dev/null | awk -v ctid="$container" '$1 ~ "vzdump-lxc-"ctid"-" || $1 ~ "/ct/"ctid"/" {print $1}' | sort -r | head -n1)
+if [ -z "$BACKUP_ENTRY" ]; then
+msg_error "No backup found in storage $STORAGE_CHOICE for container $container"
+exit 235
+fi
+msg_info "Restoring from: $BACKUP_ENTRY"
+pct restore $container "$BACKUP_ENTRY" --storage $LXC_STORAGE --force >/dev/null 2>&1
 restorestatus=$?
 if [ $restorestatus -eq 0 ]; then
-msg_ok "Restored LXC from backup"
+pct start $container
+msg_ok "Container $container successfully restored from backup"
 else
-msg_error "Restored LXC from backup failed"
+msg_error "Restore failed for container $container"
 exit 235
 fi
 else
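The new `BACKUP_ENTRY` lookup above picks the newest matching backup volume by filtering `pvesm list` output on the CTID and taking the lexically greatest name (vzdump filenames embed a sortable timestamp). Since `pvesm` needs a Proxmox host, here is a sketch of the same awk/sort/head pipeline fed from canned input; the `latest_backup` wrapper is a hypothetical name for illustration:

```shell
# Hypothetical offline version of the BACKUP_ENTRY selection: identical
# awk filter and 'sort -r | head -n1' pick, reading stdin instead of
# 'pvesm list'. Matches both dir-storage and PBS-style volume IDs.
latest_backup() {
  local ctid="$1"
  awk -v ctid="$ctid" '$1 ~ "vzdump-lxc-"ctid"-" || $1 ~ "/ct/"ctid"/" {print $1}' |
    sort -r | head -n1
}

printf '%s\n' \
  'local:backup/vzdump-lxc-105-2024_01_01-00_00_00.tar.zst' \
  'local:backup/vzdump-lxc-105-2024_06_01-00_00_00.tar.zst' \
  'local:backup/vzdump-lxc-1050-2024_07_01-00_00_00.tar.zst' |
  latest_backup 105
# -> local:backup/vzdump-lxc-105-2024_06_01-00_00_00.tar.zst
```

The trailing `-` in the filter is what keeps CTID 105 from also matching CTID 1050, and the reverse sort works because vzdump's `YYYY_MM_DD-HH_MM_SS` timestamps sort lexically in chronological order.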

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="arch-linux-vm"
+NSAPP="archlinux-vm"
 var_os="arch-linux"
 var_version="n.d."

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="debian13vm"
+NSAPP="debian-13-vm"
 var_os="debian"
 var_version="13"

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="debian12vm"
+NSAPP="debian-vm"
 var_os="debian"
 var_version="12"

@@ -23,7 +23,7 @@ GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 VERSIONS=(stable beta dev)
 METHOD=""
-NSAPP="homeassistant-os"
+NSAPP="haos-vm"
 var_os="homeassistant"
 DISK_SIZE="32G"

@@ -23,7 +23,7 @@ echo -e "Loading..."
 GEN_MAC=$(echo '00 60 2f'$(od -An -N3 -t xC /dev/urandom) | sed -e 's/ /:/g' | tr '[:lower:]' '[:upper:]')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="mikrotik-router-os"
+NSAPP="mikrotik-routeros"
 var_os="mikrotik"
 var_version=" "
 DISK_SIZE="1G"

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="turnkey-nextcloud"
+NSAPP="nextcloud-vm"
 var_os="turnkey-nextcloud"
 var_version="n.d."

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="turnkey-owncloud-vm"
+NSAPP="owncloud-vm"
 var_os="owncloud"
 var_version="18.0"
 APP="TurnKey ownCloud VM"

@@ -23,6 +23,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
+NSAPP="truenas-vm"

 YW=$(echo "\033[33m")
 BL=$(echo "\033[36m")

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="ubuntu-2204-vm"
+NSAPP="ubuntu2204-vm"
 var_os="ubuntu"
 var_version="2204"

@@ -23,7 +23,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="ubuntu-2404-vm"
+NSAPP="ubuntu2404-vm"
 var_os="ubuntu"
 var_version="2404"

@@ -22,7 +22,7 @@ echo -e "\n Loading..."
 GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
 RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
 METHOD=""
-NSAPP="ubuntu-2504-vm"
+NSAPP="ubuntu2504-vm"
 var_os="ubuntu"
 var_version="2504"