Compare commits


21 Commits

Author SHA1 Message Date
CanbiZ (MickLesk)
98bdab6d3c Switch sqlite-specific db scripts to generic
Replace npm script calls to db:sqlite:generate and db:sqlite:push with db:generate and db:push in ct/pangolin.sh and install/pangolin-install.sh. This makes the build/install steps use the generic DB task names for consistency across update and install workflows.
2026-02-13 08:42:15 +01:00
community-scripts-pr-app[bot]
724a066aed chore: update github-versions.json (#11867)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 06:23:08 +00:00
community-scripts-pr-app[bot]
cd6e8ecbbe Update CHANGELOG.md (#11866)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 06:17:38 +00:00
Chris
8083c0c0e1 [Hotfix] Jotty: Copy contents of config backup into /opt/jotty/config (#11864) 2026-02-13 07:17:08 +01:00
community-scripts-pr-app[bot]
29836f35ed Update CHANGELOG.md (#11863)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 00:24:04 +00:00
community-scripts-pr-app[bot]
17d3d4297c chore: update github-versions.json (#11862)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-13 00:23:36 +00:00
MickLesk
2b921736e6 fix(tools.func): fix GPG key format detection in setup_deb822_repo
The previous logic using 'file | grep PGP' was unreliable: both
ASCII-armored and binary GPG keys matched the pattern, so
ASCII-armored keys were copied directly instead of being
dearmored. This caused APT to fail with NO_PUBKEY errors
on Debian 12 (bookworm).

Use 'grep BEGIN PGP' to reliably detect ASCII-armored keys and
dearmor them, otherwise copy binary keys directly.
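The corrected detection can be sketched as follows (the helper name and temp-file handling are illustrative, not the actual tools.func code):

```shell
# Hypothetical helper mirroring the fix: an ASCII-armored key carries the
# "BEGIN PGP" marker; anything else is treated as a binary keyring.
key_format() {
  if grep -q "BEGIN PGP" "$1" 2>/dev/null; then
    echo "armored"   # needs gpg --dearmor before apt can use it
  else
    echo "binary"    # already a binary keyring: copy directly
  fi
}

tmp=$(mktemp)
printf -- '-----BEGIN PGP PUBLIC KEY BLOCK-----\n' >"$tmp"
key_format "$tmp"   # -> armored
rm -f "$tmp"
```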
2026-02-12 22:30:34 +01:00
MickLesk
ddabe81dd8 fix(tools.func): set GPG keyring files to 644 permissions
gpg --dearmor creates files with restrictive permissions (600),
which prevents Debian 13's sqv signature verifier from reading
the keyring files. This causes apt update to fail with
'Permission denied' errors for all repositories using custom
GPG keys (adoptium, pgdg, pdm, etc.).

Set chmod 644 after creating .gpg files in both setup_deb822_repo()
and the MongoDB GPG key import in manage_tool_repository().
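A minimal reproduction of the permissions issue and fix (file paths are stand-ins):

```shell
# gpg --dearmor can leave the keyring at mode 600; Debian 13's unprivileged
# sqv verifier then cannot read it, and apt update fails.
keyring=$(mktemp)          # stand-in for /etc/apt/keyrings/<name>.gpg
chmod 600 "$keyring"       # what the dearmor step left behind
chmod 644 "$keyring"       # the fix: world-readable for the sandboxed verifier
stat -c '%a' "$keyring"    # -> 644
rm -f "$keyring"
```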
2026-02-12 22:24:24 +01:00
community-scripts-pr-app[bot]
19c5671d3f Update CHANGELOG.md (#11854)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:06:26 +00:00
CanbiZ (MickLesk)
2326520d17 Archlinux-VM: fix LVM/LVM-thin storage and improve error reporting | VMs add correct exit_code for analytics (#11842)
* fix(archlinux-vm): fix LVM/LVM-thin storage and improve error reporting

- Add catch-all (*) case for storage types (LVM, LVM-thin, zfspool)
  Previously only nfs/dir/cifs and btrfs were handled, leaving
  DISK_EXT, DISK_REF, and DISK_IMPORT unset on LVM/LVM-thin storage
- Fix error_handler to send numeric exit_code to API instead of
  bash command text (which caused 'Unknown error' in telemetry)
- Replace fragile pvesm alloc for EFI disk with Proxmox-managed
  :0,efitype=4m (consistent with docker-vm.sh)
- Modernize disk import: auto-detect qm disk import vs qm importdisk,
  parse output to get correct disk reference instead of guessing names
- Use --format flag (double dash) consistent with modern Proxmox API
- Remove unused FORMAT variable (EFI type now always set correctly)
- Remove fragile eval-based disk name construction

* fix(vm): fix LVM/LVM-thin storage and error reporting for all VM scripts

- Add catch-all (*) case to storage type detection in all VM scripts
  that were missing it (debian-vm, debian-13-vm, ubuntu2204/2404/2504,
  nextcloud-vm, owncloud-vm, opnsense-vm, pimox-haos-vm)
- Add catch-all to mikrotik-routeros (had zfspool but not lvm/lvmthin)
- Fix error_handler in ALL 14 VM scripts to send numeric exit_code
  to post_update_to_api instead of bash command text, which caused
  'Unknown error' in telemetry because the API expects a number
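The catch-all added to each storage-type case statement has roughly this shape (a condensed sketch; variable names follow the VM scripts, surrounding context simplified):

```shell
# Map a Proxmox storage type to disk-import parameters. The '*' arm covers
# lvm, lvmthin, zfspool, and any future backend: raw image, no file extension.
storage_disk_params() {
  case "$1" in
    nfs | dir | cifs)
      DISK_EXT=".qcow2"; DISK_REF="$VMID/"; DISK_IMPORT="--format qcow2" ;;
    btrfs)
      DISK_EXT=".raw";   DISK_REF="$VMID/"; DISK_IMPORT="--format raw" ;;
    *)
      DISK_EXT="";       DISK_REF="";       DISK_IMPORT="--format raw" ;;
  esac
}

VMID=100
storage_disk_params lvmthin
echo "$DISK_IMPORT"   # -> --format raw
```

Before this change, an LVM-thin storage fell through the case statement entirely, leaving all three variables unset.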
2026-02-12 20:06:02 +01:00
community-scripts-pr-app[bot]
7964d39e32 Update CHANGELOG.md (#11853)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:05:55 +00:00
CanbiZ (MickLesk)
f7cf7c8adc fix(telemetry): prevent stuck 'installing' status in error_handler.func (#11845)
The catch_errors() function in CT scripts overrides the API telemetry
traps set by build.func. This caused on_exit, on_interrupt, and
on_terminate to never call post_update_to_api, leaving telemetry
records permanently stuck on 'installing'.

Changes:
- on_exit: Report orphaned 'installing' records on ANY exit where
  post_to_api was called but post_update_to_api was not
- on_interrupt: Call post_update_to_api('failed', '130') before exit
- on_terminate: Call post_update_to_api('failed', '143') before exit

All calls are guarded by POST_UPDATE_DONE flag to prevent duplicates.
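The guard can be sketched like this (post_update_to_api is stubbed here for illustration; the real one POSTs to the telemetry backend):

```shell
# Only report when "installing" was sent (POST_TO_API_DONE) but never
# finalized (POST_UPDATE_DONE); the stub flips the flag, so a second call
# is suppressed, mirroring the duplicate protection described above.
post_update_to_api() {
  POST_UPDATE_DONE=true
  echo "telemetry: status=$1 exit_code=$2"
}

report_if_orphaned() {
  if [ "${POST_TO_API_DONE:-}" = "true" ] && [ "${POST_UPDATE_DONE:-}" != "true" ]; then
    post_update_to_api "failed" "$1"
  fi
}

POST_TO_API_DONE=true
report_if_orphaned 130   # prints the telemetry line once
report_if_orphaned 130   # guarded: no duplicate report
```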
2026-02-12 20:05:27 +01:00
community-scripts-pr-app[bot]
744191cb84 Update CHANGELOG.md (#11852)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:05:20 +00:00
CanbiZ (MickLesk)
291ed4c5ad fix(emqx): increase disk to 6GB and add optional MQ disable prompt (#11844)
EMQX 6.1+ preallocates significant disk space for the MQ feature,
causing high CPU/disk usage on small containers (emqx/emqx#16649).

- Increase default disk from 4GB to 6GB
- Add read -rp prompt during install to optionally disable MQ feature
  via mq.enable=false in emqx.conf (reduces disk/CPU overhead)
- Setting is in install script (not CT script) per reviewer feedback
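The opt-out flow looks roughly like this (the prompt wording and config path are assumptions, not the exact install-script code):

```shell
# Answering "y" appends mq.enable=false to the EMQX config, skipping the
# disk preallocation that the MQ feature performs on startup.
conf=$(mktemp)     # stand-in for the real emqx.conf (path is an assumption)
answer="y"         # would come from: read -rp "Disable MQ feature? [y/N] " answer
case "$answer" in
  y | Y) echo "mq.enable = false" >>"$conf" ;;
esac
grep "mq.enable" "$conf"   # -> mq.enable = false
rm -f "$conf"
```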

Co-authored-by: sim-san <sim-san@users.noreply.github.com>
2026-02-12 20:04:47 +01:00
community-scripts-pr-app[bot]
f9612c5aba Update CHANGELOG.md (#11851)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 19:04:18 +00:00
CanbiZ (MickLesk)
403a839ac0 fix(tools): auto-detect binary vs armored GPG keys in setup_deb822_repo (#11841)
The UniFi GPG key at dl.ui.com/unifi/unifi-repo.gpg is already in binary
format. setup_deb822_repo unconditionally ran gpg --dearmor which expects
ASCII-armored input, corrupting binary keys and causing apt to fail with
'Unable to locate package unifi'.

setup_deb822_repo now downloads the key to a temp file first and uses
the file command to detect whether it is already a binary PGP/GPG key.
Binary keys are copied directly; armored keys are dearmored as before.

This also reverts unifi-install.sh back to using setup_deb822_repo for
consistency with all other install scripts.
2026-02-12 20:03:33 +01:00
community-scripts-pr-app[bot]
41c89413ef chore: update github-versions.json (#11848)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 18:21:07 +00:00
community-scripts-pr-app[bot]
fa11528a7b Update CHANGELOG.md (#11847)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 18:12:53 +00:00
CanbiZ (MickLesk)
2a03c86384 Tailscale: fix DNS check and keyrings directory issues (#11837)
* fix(tailscale-addon): fix DNS check and keyrings directory issues

- Source /etc/os-release instead of grep to handle quoted values properly
- Use VERSION_CODENAME variable instead of VER for correct URL
- Add fallback DNS resolution methods (host, nslookup, getent) when dig is missing
- Create /usr/share/keyrings directory if it doesn't exist
- Skip DNS check gracefully when no DNS tools are available

Fixes installation failures with 'dig: command not found' and
'No such file or directory' for keyrings path
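The fallback chain condenses to something like this (function name illustrative; the addon itself tries dig, host, nslookup, then getent):

```shell
# Return 0 if the name resolves to something other than loopback/0.0.0.0.
# With no resolver tool available at all, assume DNS works and let the
# subsequent curl surface any real failure.
dns_resolves() {
  local fqdn="$1"
  if command -v dig >/dev/null 2>&1; then
    dig +short "$fqdn" 2>/dev/null | grep -qvE '^127\.|^0\.0\.0\.0$|^$'
  elif command -v getent >/dev/null 2>&1; then
    getent hosts "$fqdn" >/dev/null 2>&1
  else
    return 0
  fi
}
```

Since `command -v` also finds shell functions, the chain is easy to exercise offline by stubbing `dig` with a function that prints a fixed address.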

* Update tools/addon/add-tailscale-lxc.sh

Co-authored-by: Chris <punk.sand7393@fastmail.com>

* Update tools/addon/add-tailscale-lxc.sh

Co-authored-by: Chris <punk.sand7393@fastmail.com>

---------

Co-authored-by: Chris <punk.sand7393@fastmail.com>
2026-02-12 19:12:23 +01:00
community-scripts-pr-app[bot]
57b4e10b93 Update CHANGELOG.md (#11846)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-02-12 17:40:58 +00:00
Stefan Tomas
4b22c7cc2d Increased the Grafana container default disk size. (#11840)
Co-authored-by: Stefan Tomas <stefan.tomas@proton.me>
2026-02-12 18:40:31 +01:00
26 changed files with 228 additions and 75 deletions

View File

@@ -401,26 +401,48 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-13
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- [Hotfix] Jotty: Copy contents of config backup into /opt/jotty/config [@vhsdream](https://github.com/vhsdream) ([#11864](https://github.com/community-scripts/ProxmoxVE/pull/11864))
## 2026-02-12
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- EMQX: increase disk to 6GB and add optional MQ disable prompt [@MickLesk](https://github.com/MickLesk) ([#11844](https://github.com/community-scripts/ProxmoxVE/pull/11844))
- Increased the Grafana container default disk size. [@shtefko](https://github.com/shtefko) ([#11840](https://github.com/community-scripts/ProxmoxVE/pull/11840))
- Pangolin: Update database generation command in install script [@tremor021](https://github.com/tremor021) ([#11825](https://github.com/community-scripts/ProxmoxVE/pull/11825))
- Deluge: add python3-setuptools as dep [@MickLesk](https://github.com/MickLesk) ([#11833](https://github.com/community-scripts/ProxmoxVE/pull/11833))
- Dispatcharr: migrate to uv sync [@MickLesk](https://github.com/MickLesk) ([#11831](https://github.com/community-scripts/ProxmoxVE/pull/11831))
- #### ✨ New Features
- Archlinux-VM: fix LVM/LVM-thin storage and improve error reporting | VMs add correct exit_code for analytics [@MickLesk](https://github.com/MickLesk) ([#11842](https://github.com/community-scripts/ProxmoxVE/pull/11842))
- Debian13-VM: Optimize First Boot & add noCloud/Cloud Selection [@MickLesk](https://github.com/MickLesk) ([#11810](https://github.com/community-scripts/ProxmoxVE/pull/11810))
### 💾 Core
- #### ✨ New Features
- tools.func: auto-detect binary vs armored GPG keys in setup_deb822_repo [@MickLesk](https://github.com/MickLesk) ([#11841](https://github.com/community-scripts/ProxmoxVE/pull/11841))
- core: remove old Go API and extend misc/api.func with new backend [@MickLesk](https://github.com/MickLesk) ([#11822](https://github.com/community-scripts/ProxmoxVE/pull/11822))
- #### 🔧 Refactor
- error_handler: prevent stuck 'installing' status [@MickLesk](https://github.com/MickLesk) ([#11845](https://github.com/community-scripts/ProxmoxVE/pull/11845))
### 🧰 Tools
- #### 🐞 Bug Fixes
- Tailscale: fix DNS check and keyrings directory issues [@MickLesk](https://github.com/MickLesk) ([#11837](https://github.com/community-scripts/ProxmoxVE/pull/11837))
## 2026-02-11
### 🆕 New Scripts

View File

@@ -9,7 +9,7 @@ APP="Alpine-Grafana"
var_tags="${var_tags:-alpine;monitoring}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-256}"
var_disk="${var_disk:-1}"
var_disk="${var_disk:-2}"
var_os="${var_os:-alpine}"
var_version="${var_version:-3.23}"
var_unprivileged="${var_unprivileged:-1}"

View File

@@ -9,7 +9,7 @@ APP="Grafana"
var_tags="${var_tags:-monitoring;visualization}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-512}"
var_disk="${var_disk:-2}"
var_disk="${var_disk:-4}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

View File

@@ -46,7 +46,7 @@ function update_script() {
msg_info "Restoring configuration & data"
mv /opt/app.env /opt/jotty/.env
[[ -d /opt/data ]] && mv /opt/data /opt/jotty/data
[[ -d /opt/jotty/config ]] && mv /opt/config/* /opt/jotty/config
[[ -d /opt/jotty/config ]] && cp -a /opt/config/* /opt/jotty/config && rm -rf /opt/config
msg_ok "Restored configuration & data"
msg_info "Starting Service"

View File

@@ -51,7 +51,7 @@ function update_script() {
$STD npm run db:generate
$STD npm run build
$STD npm run build:cli
$STD npm run db:sqlite:push
$STD npm run db:push
cp -R .next/standalone ./
chmod +x ./dist/cli.mjs
cp server/db/names.json ./dist/names.json

View File

@@ -1,5 +1,5 @@
{
"generated": "2026-02-12T12:15:15Z",
"generated": "2026-02-13T06:23:00Z",
"versions": [
{
"slug": "2fauth",
@@ -193,9 +193,9 @@
{
"slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr",
"version": "v2.5.1",
"version": "v2.6.0",
"pinned": false,
"date": "2026-01-11T00:46:17Z"
"date": "2026-02-13T00:14:21Z"
},
{
"slug": "cloudreve",
@@ -298,9 +298,9 @@
{
"slug": "donetick",
"repo": "donetick/donetick",
"version": "v0.1.71",
"version": "v0.1.73",
"pinned": false,
"date": "2026-02-11T06:01:13Z"
"date": "2026-02-12T23:42:30Z"
},
{
"slug": "drawio",
@@ -403,9 +403,9 @@
{
"slug": "ghostfolio",
"repo": "ghostfolio/ghostfolio",
"version": "2.237.0",
"version": "2.238.0",
"pinned": false,
"date": "2026-02-08T13:59:53Z"
"date": "2026-02-12T18:28:55Z"
},
{
"slug": "gitea",
@@ -543,9 +543,9 @@
{
"slug": "huntarr",
"repo": "plexguide/Huntarr.io",
"version": "9.2.4",
"version": "9.2.4.1",
"pinned": false,
"date": "2026-02-12T08:31:23Z"
"date": "2026-02-12T22:17:47Z"
},
{
"slug": "immich-public-proxy",
@@ -571,16 +571,16 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.57",
"version": "v5.12.59",
"pinned": false,
"date": "2026-02-11T23:08:56Z"
"date": "2026-02-13T02:26:13Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1098",
"version": "v0.24.1103",
"pinned": false,
"date": "2026-02-12T05:56:25Z"
"date": "2026-02-13T05:53:23Z"
},
{
"slug": "jellystat",
@@ -823,9 +823,9 @@
{
"slug": "metube",
"repo": "alexta69/metube",
"version": "2026.02.08",
"version": "2026.02.12",
"pinned": false,
"date": "2026-02-08T17:01:37Z"
"date": "2026-02-12T21:05:49Z"
},
{
"slug": "miniflux",
@@ -998,9 +998,9 @@
{
"slug": "pangolin",
"repo": "fosrl/pangolin",
"version": "1.15.3",
"version": "1.15.4",
"pinned": false,
"date": "2026-02-12T06:10:19Z"
"date": "2026-02-13T00:54:02Z"
},
{
"slug": "paperless-ai",
@@ -1292,9 +1292,9 @@
{
"slug": "scraparr",
"repo": "thecfu/scraparr",
"version": "v3.0.1",
"version": "v3.0.3",
"pinned": false,
"date": "2026-02-11T17:42:23Z"
"date": "2026-02-12T14:20:56Z"
},
{
"slug": "seelf",
@@ -1432,9 +1432,9 @@
{
"slug": "termix",
"repo": "Termix-SSH/Termix",
"version": "release-1.11.0-tag",
"version": "release-1.11.1-tag",
"pinned": false,
"date": "2026-01-25T02:09:52Z"
"date": "2026-02-13T04:49:16Z"
},
{
"slug": "the-lounge",
@@ -1460,9 +1460,9 @@
{
"slug": "tianji",
"repo": "msgbyte/tianji",
"version": "v1.31.10",
"version": "v1.31.12",
"pinned": false,
"date": "2026-02-04T17:21:04Z"
"date": "2026-02-12T19:06:14Z"
},
{
"slug": "traccar",
@@ -1551,9 +1551,9 @@
{
"slug": "upsnap",
"repo": "seriousm4x/UpSnap",
"version": "5.2.7",
"version": "5.2.8",
"pinned": false,
"date": "2026-01-07T23:48:00Z"
"date": "2026-02-13T00:02:37Z"
},
{
"slug": "uptimekuma",

View File

@@ -21,7 +21,7 @@
"resources": {
"cpu": 1,
"ram": 512,
"hdd": 2,
"hdd": 4,
"os": "debian",
"version": "13"
}
@@ -32,7 +32,7 @@
"resources": {
"cpu": 1,
"ram": 256,
"hdd": 1,
"hdd": 2,
"os": "alpine",
"version": "3.23"
}

View File

@@ -36,7 +36,7 @@ $STD npm ci
$STD npm run set:sqlite
$STD npm run set:oss
rm -rf server/private
$STD npm run db:sqlite:generate
$STD npm run db:generate
$STD npm run build
$STD npm run build:cli
cp -R .next/standalone ./
@@ -178,7 +178,7 @@ http:
servers:
- url: "http://$LOCAL_IP:3000"
EOF
$STD npm run db:sqlite:push
$STD npm run db:push
. /etc/os-release
if [ "$VERSION_CODENAME" = "trixie" ]; then

View File

@@ -243,6 +243,18 @@ error_handler() {
# ------------------------------------------------------------------------------
on_exit() {
local exit_code=$?
# Report orphaned "installing" records to telemetry API
# Catches ALL exit paths: errors (non-zero), signals, AND clean exits where
# post_to_api was called ("installing" sent) but post_update_to_api was never called
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if declare -f post_update_to_api >/dev/null 2>&1; then
if [[ $exit_code -ne 0 ]]; then
post_update_to_api "failed" "$exit_code"
else
post_update_to_api "failed" "1"
fi
fi
fi
[[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
exit "$exit_code"
}
@@ -255,6 +267,10 @@ on_exit() {
# - Exits with code 130 (128 + SIGINT=2)
# ------------------------------------------------------------------------------
on_interrupt() {
# Report interruption to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "130"
fi
if declare -f msg_error >/dev/null 2>&1; then
msg_error "Interrupted by user (SIGINT)"
else
@@ -272,6 +288,10 @@ on_interrupt() {
# - Triggered by external process termination
# ------------------------------------------------------------------------------
on_terminate() {
# Report termination to telemetry API (prevents stuck "installing" records)
if declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "failed" "143"
fi
if declare -f msg_error >/dev/null 2>&1; then
msg_error "Terminated by signal (SIGTERM)"
else

View File

@@ -465,6 +465,7 @@ manage_tool_repository() {
msg_error "Failed to download MongoDB GPG key"
return 1
fi
chmod 644 "/etc/apt/keyrings/mongodb-server-${version}.gpg"
# Setup repository
local distro_codename
@@ -1294,12 +1295,33 @@ setup_deb822_repo() {
return 1
}
# Import GPG
curl -fsSL "$gpg_url" | gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" || {
msg_error "Failed to import GPG key for ${name}"
# Import GPG key (auto-detect binary vs ASCII-armored format)
local tmp_gpg
tmp_gpg=$(mktemp) || return 1
curl -fsSL "$gpg_url" -o "$tmp_gpg" || {
msg_error "Failed to download GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
if grep -q "BEGIN PGP" "$tmp_gpg" 2>/dev/null; then
# ASCII-armored — dearmor to binary
gpg --dearmor --yes -o "/etc/apt/keyrings/${name}.gpg" < "$tmp_gpg" || {
msg_error "Failed to dearmor GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
else
# Already in binary GPG format — copy directly
cp "$tmp_gpg" "/etc/apt/keyrings/${name}.gpg" || {
msg_error "Failed to install GPG key for ${name}"
rm -f "$tmp_gpg"
return 1
}
fi
rm -f "$tmp_gpg"
chmod 644 "/etc/apt/keyrings/${name}.gpg"
# Write deb822
{
echo "Types: deb"

View File

@@ -75,14 +75,37 @@ pct exec "$CTID" -- bash -c '
set -e
export DEBIAN_FRONTEND=noninteractive
ID=$(grep "^ID=" /etc/os-release | cut -d"=" -f2)
VER=$(grep "^VERSION_CODENAME=" /etc/os-release | cut -d"=" -f2)
# Source os-release properly (handles quoted values)
source /etc/os-release
# fallback if DNS is poisoned or blocked
# Fallback if DNS is poisoned or blocked
ORIG_RESOLV="/etc/resolv.conf"
BACKUP_RESOLV="/tmp/resolv.conf.backup"
if ! dig +short pkgs.tailscale.com | grep -qvE "^127\.|^0\.0\.0\.0$"; then
# Check DNS resolution using multiple methods (dig may not be installed)
dns_check_failed=true
if command -v dig &>/dev/null; then
if dig +short pkgs.tailscale.com 2>/dev/null | grep -qvE "^127\.|^0\.0\.0\.0$|^$"; then
dns_check_failed=false
fi
elif command -v host &>/dev/null; then
if host pkgs.tailscale.com 2>/dev/null | grep -q "has address"; then
dns_check_failed=false
fi
elif command -v nslookup &>/dev/null; then
if nslookup pkgs.tailscale.com 2>/dev/null | grep -q "Address:"; then
dns_check_failed=false
fi
elif command -v getent &>/dev/null; then
if getent hosts pkgs.tailscale.com &>/dev/null; then
dns_check_failed=false
fi
else
# No DNS tools available, try curl directly and assume DNS works
dns_check_failed=false
fi
if $dns_check_failed; then
echo "[INFO] DNS resolution for pkgs.tailscale.com failed (blocked or redirected)."
echo "[INFO] Temporarily overriding /etc/resolv.conf with Cloudflare DNS (1.1.1.1)"
cp "$ORIG_RESOLV" "$BACKUP_RESOLV"
@@ -92,17 +115,22 @@ fi
if ! command -v curl &>/dev/null; then
echo "[INFO] curl not found, installing..."
apt-get update -qq
apt-get install -y curl >/dev/null
apt update -qq
apt install -y curl >/dev/null
fi
curl -fsSL https://pkgs.tailscale.com/stable/${ID}/${VER}.noarmor.gpg \
# Ensure keyrings directory exists
mkdir -p /usr/share/keyrings
curl -fsSL "https://pkgs.tailscale.com/stable/${ID}/${VERSION_CODENAME}.noarmor.gpg" \
| tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/${ID} ${VER} main" \
echo "deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/${ID} ${VERSION_CODENAME} main" \
>/etc/apt/sources.list.d/tailscale.list
apt-get update -qq
apt-get install -y tailscale >/dev/null
apt update -qq
apt install -y tailscale >/dev/null
if [[ -f /tmp/resolv.conf.backup ]]; then
echo "[INFO] Restoring original /etc/resolv.conf"

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -203,7 +203,6 @@ function exit-script() {
function default_settings() {
VMID=$(get_valid_nextid)
FORMAT=",efitype=4m"
MACHINE=""
DISK_SIZE="4G"
DISK_CACHE=""
@@ -259,11 +258,9 @@ function advanced_settings() {
3>&1 1>&2 2>&3); then
if [ "$MACH" = q35 ]; then
echo -e "${CONTAINERTYPE}${BOLD}${DGN}Machine Type: ${BGN}$MACH${CL}"
FORMAT=""
MACHINE=" -machine q35"
else
echo -e "${CONTAINERTYPE}${BOLD}${DGN}Machine Type: ${BGN}$MACH${CL}"
FORMAT=",efitype=4m"
MACHINE=""
fi
else
@@ -476,31 +473,45 @@ case $STORAGE_TYPE in
nfs | dir | cifs)
DISK_EXT=".qcow2"
DISK_REF="$VMID/"
DISK_IMPORT="-format qcow2"
DISK_IMPORT="--format qcow2"
THIN=""
;;
btrfs)
DISK_EXT=".raw"
DISK_REF="$VMID/"
DISK_IMPORT="-format raw"
FORMAT=",efitype=4m"
DISK_IMPORT="--format raw"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="--format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"
eval DISK"${i}"=vm-"${VMID}"-disk-"${i}"${DISK_EXT:-}
eval DISK"${i}"_REF="${STORAGE}":"${DISK_REF:-}"${!disk}
done
msg_info "Creating an Arch Linux VM"
qm create $VMID -agent 1${MACHINE} -tablet 0 -localtime 1 -bios ovmf${CPU_TYPE} -cores $CORE_COUNT -memory $RAM_SIZE \
-name $HN -tags community-script -net0 virtio,bridge=$BRG,macaddr=$MAC$VLAN$MTU -onboot 1 -ostype l26 -scsihw virtio-scsi-pci
pvesm alloc $STORAGE $VMID $DISK0 4M 1>&/dev/null
qm importdisk $VMID ${FILE} $STORAGE ${DISK_IMPORT:-} 1>&/dev/null
if qm disk import --help >/dev/null 2>&1; then
IMPORT_CMD=(qm disk import)
else
IMPORT_CMD=(qm importdisk)
fi
IMPORT_OUT="$("${IMPORT_CMD[@]}" "$VMID" "${FILE}" "$STORAGE" ${DISK_IMPORT:-} 2>&1 || true)"
DISK_REF_IMPORTED="$(printf '%s\n' "$IMPORT_OUT" | sed -n "s/.*successfully imported disk '\([^']\+\)'.*/\1/p" | tr -d "\r\"'")"
[[ -z "$DISK_REF_IMPORTED" ]] && DISK_REF_IMPORTED="$(pvesm list "$STORAGE" | awk -v id="$VMID" '$5 ~ ("vm-"id"-disk-") {print $1":"$5}' | sort | tail -n1)"
[[ -z "$DISK_REF_IMPORTED" ]] && {
msg_error "Unable to determine imported disk reference."
echo "$IMPORT_OUT"
exit 1
}
msg_ok "Imported disk (${CL}${BL}${DISK_REF_IMPORTED}${CL})"
qm set $VMID \
-efidisk0 ${DISK0_REF}${FORMAT} \
-scsi0 ${DISK1_REF},${DISK_CACHE}${THIN}size=${DISK_SIZE} \
-efidisk0 ${STORAGE}:0,efitype=4m \
-scsi0 ${DISK_REF_IMPORTED},${DISK_CACHE}${THIN%,} \
-ide2 ${STORAGE}:cloudinit \
-boot order=scsi0 \
-serial0 socket >/dev/null

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -560,6 +560,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -501,6 +501,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -45,7 +45,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}

View File

@@ -74,7 +74,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}

View File

@@ -71,7 +71,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -566,6 +566,11 @@ zfspool)
DISK_REF=""
DISK_IMPORT="-format raw"
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
DISK_VAR="vm-${VMID}-disk-0${DISK_EXT:-}"

View File

@@ -70,7 +70,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -487,6 +487,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1,2}; do
disk="DISK$i"

View File

@@ -74,7 +74,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid

View File

@@ -48,7 +48,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -619,6 +619,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -71,7 +71,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -500,6 +500,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1,2}; do
disk="DISK$i"

View File

@@ -79,7 +79,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}
@@ -402,6 +402,11 @@ nfs | dir)
DISK_REF="$VMID/"
DISK_IMPORT="-format qcow2"
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -66,7 +66,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -482,6 +482,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -69,7 +69,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -484,6 +484,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -68,7 +68,7 @@ function error_handler() {
local exit_code="$?"
local line_number="$1"
local command="$2"
post_update_to_api "failed" "$command"
post_update_to_api "failed" "$exit_code"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
cleanup_vmid
@@ -483,6 +483,11 @@ btrfs)
FORMAT=",efitype=4m"
THIN=""
;;
*)
DISK_EXT=""
DISK_REF=""
DISK_IMPORT="-format raw"
;;
esac
for i in {0,1}; do
disk="DISK$i"

View File

@@ -69,7 +69,7 @@ function error_handler() {
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
post_update_to_api "failed" "${command}"
post_update_to_api "failed" "${exit_code}"
echo -e "\n$error_message\n"
cleanup_vmid
}