Compare commits


7 Commits

Author SHA1 Message Date
MickLesk
6c05f7f858 perf(tools.func, alpine-tools.func): replace direct /etc/os-release reads with get_os_info(), echo|tr with bash builtins, consolidate pipelines
- get_os_info(): single while-read replaces 4 awk subprocesses on first call
- manage_tool_repository(): 7 awk|tr pipelines → get_os_info() cached lookups
- setup_hwaccel(): 3 grep|tr pipelines → get_os_info()
- setup_java/mysql/php/postgresql/clickhouse(): 2 awk|tr pipelines each → get_os_info()
- 6× echo|tr -d ' ' → bash parameter expansion (${var// /})
- 3× mod=$(echo|tr) → bash ${mod//[[:space:]]/}
- npm/apt-cache/cargo pipelines: grep|awk|tr → single awk with gsub
- COMBINED_MODULES: echo|tr|awk|paste (4 proc) → single awk with RS
- download_with_progress: awk|tr → awk gsub (both tools.func and alpine-tools.func)

Eliminates ~30 subprocess forks across typical tool setup paths.
2026-03-23 20:57:08 +01:00
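A minimal sketch of the caching pattern this commit describes: parse the os-release file once with a single while-read loop and serve later lookups from an associative array. The function name comes from the commit message; the quote-stripping details and the optional file argument (added here so the sketch is testable) are assumptions, not the actual tools.func code.

```shell
declare -A _OS_INFO
# First call reads the file once; subsequent calls are pure cache hits,
# replacing repeated awk/grep/tr subprocess pipelines.
get_os_info() {
  local key="$1" file="${2:-/etc/os-release}"
  if [[ -z "${_OS_INFO[_loaded]:-}" ]]; then
    local k v
    while IFS='=' read -r k v; do
      v="${v%\"}"; v="${v#\"}"          # strip surrounding double quotes
      if [[ -n "$k" ]]; then
        _OS_INFO[$k]="$v"
      fi
    done <"$file"
    _OS_INFO[_loaded]=1
  fi
  printf '%s\n' "${_OS_INFO[$key]:-}"
}
```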
MickLesk
be52f8e223 style: formatting cleanup in api.func and core.func 2026-03-23 20:46:34 +01:00
MickLesk
2008b1b458 perf(misc): optimize network checks, IP detection, and subprocess chains
Files: install.func, alpine-install.func, core.func, api.func

Changes:
- install.func: Cache hostname -I result in setting_up_container() —
  eliminates 3 redundant calls (retry loop + post-check + display).
  Single awk extracts first IP from cached call.

- alpine-install.func: Cache ip addr result in setting_up_container() —
  replaces 3 identical 4-process pipelines (ip|grep|grep|awk|cut) with
  one call using single awk (sub+print in one pass).

- core.func get_lxc_ip(): Merge awk+cut into single awk (sub+print)
  for eth0 IPv4/IPv6 lookups. Replace tr|grep pipeline for IPv6 hostname
  fallback with bash array iteration (read -ra + for loop).

- api.func detect_gpu(): Merge 2x sed + cut into single sed -E + bash
  substring. detect_cpu(): Merge 4x sed + cut into single sed -E + bash
  substring. get_error_text(): Merge 2x sed into single sed -E.
  post_addon_to_api(): Replace 2x grep|cut|tr on /etc/os-release with
  single while-read loop (1 file read instead of 6 subprocess forks).
2026-03-23 20:44:47 +01:00
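The alpine-install.func change reduces to a pattern like this: one awk pass does the match, the prefix-stripping sub(), and the print, replacing an ip|grep|grep|awk|cut chain. The sketch reads the `ip -4 addr show` output from stdin so it can be exercised with sample text; in the real function that output is captured once and cached for the retry loop and the final display.

```shell
# Single awk pass: match non-loopback inet lines, strip the /prefix from
# the address field, print the first hit and exit.
first_ipv4() {
  awk '/inet [0-9]/ && !/127\.0\.0\.1/ {sub(/\/.*/, "", $2); print $2; exit}'
}
```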
MickLesk
a02bdf083d perf(build.func): eliminate subprocess forks in settings, hostname, and template discovery
Changes:
- Replace 6x echo|sed with bash parameter expansion in
  _build_current_app_vars_tmp() — eliminates 12 subprocess forks
  (6 echo + 6 sed) per advanced settings save cycle.

- Replace 4x echo|tr with bash builtins for lowercase + space
  removal in variables(), base_settings(), advanced_settings()
  hostname and tags handling — eliminates 8 subprocess forks.

- Consolidate 4 grep|awk|grep pipelines in template discovery to
  single awk passes — each template search now uses 1 awk process
  instead of 3 (grep + awk + grep). Applied to all online template
  and version discovery paths in create_lxc_container().

- Consolidate 2 grep|cut reads in ensure_storage_selection_for_vars_file()
  to single while-read loop — 1 file read instead of 2.
2026-03-23 20:39:40 +01:00
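The echo|tr and echo|sed replacements mentioned above boil down to bash parameter expansion, which forks nothing. Variable names here are illustrative, not the actual build.func identifiers.

```shell
raw="  My LXC Host  "
hn="${raw//[[:space:]]/}"   # strip whitespace  (was: echo "$raw" | tr -d ' ')
hn="${hn,,}"                # lowercase         (was: echo "$hn" | tr 'A-Z' 'a-z')
echo "$hn"
```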
MickLesk
21e215dc1b perf(build.func): cache pveam calls, consolidate hostname/sed, pre-warm pvesm
Changes:
- Add pveam_cached() function with associative array cache for
  pveam available/list calls. Eliminates 5 redundant 'pveam available
  -section system' calls (each involves network I/O + catalog parsing)
  and 2 redundant 'pveam list' calls during template discovery.
  Cache is invalidated after pveam download for fresh post-check.

- Consolidate get_current_ip(): single 'hostname -I' call with bash
  array parsing instead of 2 calls piped through tr/grep/head.
  Pure bash regex matching replaces external grep for IPv4 detection.

- Batch update_motd_ip(): single sed pass with 3 expressions replaces
  3 separate grep + sed pairs (6 file reads → 1 read + 1 write).
  Read /etc/os-release once with sed instead of 2 grep|cut|tr pipes.

- Pre-warm pvesm cache at create_lxc_container() start: fetches
  pvesm status (unfiltered, rootdir, vztmpl) in 3 calls upfront so
  all subsequent pvesm_status_cached calls are instant cache hits.
2026-03-23 20:33:41 +01:00
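The pveam/pvesm caching follows roughly this shape: an associative array keyed by the full argument list, filled on first call only. This generic `run_cached` stands in for the actual `pveam_cached`/`pvesm_status_cached` helpers and is not their verbatim code; the result is returned in `REPLY` here so the cache write is not lost to a command-substitution subshell.

```shell
declare -A _CMD_CACHE
# First call with a given argument list executes the command and stores its
# stdout; identical later calls return the stored output without forking.
run_cached() {
  local key="$*"
  if [[ -z "${_CMD_CACHE[$key]+x}" ]]; then
    _CMD_CACHE[$key]="$("$@")"
  fi
  REPLY="${_CMD_CACHE[$key]}"
}
# Invalidation after a write (e.g. a pveam download) would just be:
#   unset '_CMD_CACHE[$key]'
```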
MickLesk
6b9930c8df refactor(build.func): extract nested functions, recursion guard, GPU timeout, pvesm cache
Changes:
- Extract 7 functions from build_container() to module level:
  is_gpu_app(), detect_gpu_devices(), configure_usb_passthrough(),
  configure_gpu_passthrough(), configure_additional_devices(),
  get_container_gid(), pvesm_status_cached()
  Prevents function re-declaration on recursive retry attempts.

- Add recursion guard (_BUILD_DEPTH max 3) to build_container()
  to prevent infinite OOM→retry→OOM→retry loops (exit code 250).

- Add 30s timeout to multi-GPU selection prompt with auto-select
  fallback for the first available GPU type.

- Add pvesm_status_cached() with associative array cache keyed by
  arguments. Eliminates ~10 redundant pvesm subprocess calls during
  a single container creation flow. Applied to: check_storage_support(),
  select_storage(), resolve_storage_preselect(), create_lxc_container()
  rootdir/vztmpl validation, validate_storage_space(),
  choose_and_set_storage_for_file(), and settings validation.

- Simplify check_storage_support() from while-read loop to single
  pipeline with grep.
2026-03-23 20:26:36 +01:00
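The recursion guard can be sketched as a simple depth counter; the maximum depth of 3 and the 250 exit code come from the commit message, while the surrounding container-creation logic is elided.

```shell
# Each (possibly recursive) entry bumps the counter; past depth 3 we bail
# out instead of looping OOM -> retry -> OOM forever.
build_container() {
  _BUILD_DEPTH=$(( ${_BUILD_DEPTH:-0} + 1 ))
  if (( _BUILD_DEPTH > 3 )); then
    echo "Maximum retry depth reached, aborting" >&2
    return 250
  fi
  return 0   # placeholder for the real creation logic, which may retry
}
```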
MickLesk
c3cb983085 perf(build.func): parallel IP scanning, sysfs bridge detection, GPU GID pre-start, VAR_WHITELIST consolidation, IPv6 description
Performance:
- resolve_ip_from_range(): parallel ping in batches of 10 (up to 10x faster)
  + Added bounds validation (start > end check)
  + Added range cap at 254 IPs to prevent excessive scanning
- _detect_bridges(): replaced 30-line config file parser with sysfs lookup
  (instant /sys/class/net/*/bridge detection + OVS bridge support)
  + Bridge descriptions now searched across interfaces.d/ too
- fix_gpu_gids(): read GIDs from mounted rootfs before first container start
  → eliminates 3+ second stop/edit/restart cycle for GPU passthrough

Bugfix:
- VAR_WHITELIST: consolidated 3 separate declarations into single global
  + Global copy was missing var_keyctl, var_mknod, var_mount_fs, var_nesting,
    var_protection, var_timezone, var_searchdomain — these were silently
    dropped from app defaults when default_var_settings() wasn't in scope

Edge case:
- description(): added IPv6 fallback for IPv6-only containers
2026-03-23 20:15:45 +01:00
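The batched parallel scan with bounds validation and the 254-IP cap can be sketched like this; the "base.i" /24 address shape is an assumption about resolve_ip_from_range()'s input, and probe failures are deliberately discarded.

```shell
# Launch up to 10 pings concurrently, draining each batch with wait before
# starting the next; reject inverted ranges and cap the scan at 254 IPs.
scan_range() {
  local base="$1" start="$2" end="$3" i count=0
  (( start <= end )) || return 1                          # bounds validation
  (( end - start + 1 <= 254 )) || end=$(( start + 253 ))  # cap at 254 IPs
  for (( i = start; i <= end; i++ )); do
    ping -c1 -W1 "${base}.${i}" &>/dev/null &
    count=$(( count + 1 ))
    if (( count % 10 == 0 )); then wait; fi               # drain batch of 10
  done
  wait
}
```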
20 changed files with 785 additions and 986 deletions

View File

@@ -426,15 +426,19 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-03-24
## 2026-03-23
### 🆕 New Scripts
- Alpine-Borgbackup-Server ([#13219](https://github.com/community-scripts/ProxmoxVE/pull/13219))
### 🚀 Updated Scripts
- #### 🔧 Refactor
- NginxProxyManager: build OpenResty from source via GitHub releases [@MickLesk](https://github.com/MickLesk) ([#13134](https://github.com/community-scripts/ProxmoxVE/pull/13134))
- core: harden shell scripts against injection and insecure permissions [@MickLesk](https://github.com/MickLesk) ([#13239](https://github.com/community-scripts/ProxmoxVE/pull/13239))
- #### ✨ New Features
- Kometa: optimize config.yml sed patterns, add Quickstart integration [@MickLesk](https://github.com/MickLesk) ([#13198](https://github.com/community-scripts/ProxmoxVE/pull/13198))
## 2026-03-22

View File

@@ -1,6 +0,0 @@
(ASCII-art "nextExplorer" banner; alignment lost in extraction, file removed in this diff)

View File

@@ -1,76 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: vhsdream
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/nxzai/nextExplorer
APP="nextExplorer"
var_tags="${var_tags:-files;documents}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-3072}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/nextExplorer ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
NODE_VERSION="24" setup_nodejs
if check_for_gh_release "nextExplorer" "nxzai/nextExplorer"; then
msg_info "Stopping nextExplorer"
$STD systemctl stop nextexplorer
msg_ok "Stopped nextExplorer"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "nextExplorer" "nxzai/nextExplorer" "tarball" "latest" "/opt/nextExplorer"
msg_info "Updating nextExplorer"
APP_DIR="/opt/nextExplorer/app"
mkdir -p "$APP_DIR"
cd /opt/nextExplorer
export NODE_ENV=production
$STD npm ci --omit=dev --workspace backend
mv node_modules "$APP_DIR"
mv backend/{src,package.json} "$APP_DIR"
unset NODE_ENV
export NODE_ENV=development
$STD npm ci --workspace frontend
$STD npm run -w frontend build -- --sourcemap false
unset NODE_ENV
mv frontend/dist/ "$APP_DIR"/src/public
chown -R explorer:explorer "$APP_DIR" /etc/nextExplorer
sed -i "\|version|s|$(jq -cr '.version' ${APP_DIR}/package.json)|$(cat ~/.nextexplorer)|" "$APP_DIR"/package.json
sed -i 's/app.js/server.js/' /etc/systemd/system/nextexplorer.service && systemctl daemon-reload
msg_ok "Updated nextExplorer"
msg_info "Starting nextExplorer"
$STD systemctl start nextexplorer
msg_ok "Started nextExplorer"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3000${CL}"

View File

@@ -49,25 +49,41 @@ function update_script() {
NODE_VERSION="22" NODE_MODULE="yarn" setup_nodejs
if dpkg -s openresty &>/dev/null; then
msg_info "Migrating from packaged OpenResty to source"
rm -f /etc/apt/trusted.gpg.d/openresty-archive-keyring.gpg /etc/apt/trusted.gpg.d/openresty.gpg
rm -f /etc/apt/sources.list.d/openresty.list /etc/apt/sources.list.d/openresty.sources
RELEASE=$(get_latest_github_release "NginxProxyManager/nginx-proxy-manager")
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager" "tarball" "v${RELEASE}" "/opt/nginxproxymanager"
msg_info "Stopping Services"
systemctl stop openresty
systemctl stop npm
msg_ok "Stopped Services"
msg_info "Cleaning old files"
$STD rm -rf /app \
/var/www/html \
/etc/nginx \
/var/log/nginx \
/var/lib/nginx \
/var/cache/nginx
msg_ok "Cleaned old files"
msg_info "Migrating to OpenResty from source"
rm -f /etc/apt/trusted.gpg.d/openresty-archive-keyring.gpg /etc/apt/trusted.gpg.d/openresty.gpg
rm -f /etc/apt/sources.list.d/openresty.list /etc/apt/sources.list.d/openresty.sources
if dpkg -l openresty &>/dev/null; then
$STD apt remove -y openresty
$STD apt autoremove -y
rm -f ~/.openresty
msg_ok "Migrated from packaged OpenResty to source"
fi
local pcre_pkg="libpcre3-dev"
if grep -qE 'VERSION_ID="1[3-9]"' /etc/os-release 2>/dev/null; then
pcre_pkg="libpcre2-dev"
fi
$STD apt install -y build-essential "$pcre_pkg" libssl-dev zlib1g-dev
msg_ok "Migrated to OpenResty from source"
if check_for_gh_release "openresty" "openresty/openresty"; then
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "openresty" "openresty/openresty" "prebuild" "${CHECK_UPDATE_RELEASE}" "/opt/openresty" "openresty-*.tar.gz"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "openresty" "openresty/openresty" "prebuild" "latest" "/opt/openresty" "openresty-*.tar.gz"
if [[ -d /opt/openresty ]]; then
msg_info "Building OpenResty"
cd /opt/openresty
$STD ./configure \
@@ -98,101 +114,75 @@ ExecStart=/usr/local/openresty/nginx/sbin/nginx -g 'daemon off;'
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart openresty
msg_ok "Built OpenResty"
fi
cd /root
if [ -d /opt/certbot ]; then
msg_info "Updating Certbot"
$STD /opt/certbot/bin/pip install --upgrade pip setuptools wheel
$STD /opt/certbot/bin/pip install --upgrade certbot certbot-dns-cloudflare
msg_ok "Updated Certbot"
msg_info "Setting up Environment"
ln -sf /usr/bin/python3 /usr/bin/python
ln -sf /usr/local/openresty/nginx/sbin/nginx /usr/sbin/nginx
ln -sf /usr/local/openresty/nginx/ /etc/nginx
sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/backend/package.json
sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/frontend/package.json
sed -i 's+^daemon+#daemon+g' /opt/nginxproxymanager/docker/rootfs/etc/nginx/nginx.conf
NGINX_CONFS=$(find /opt/nginxproxymanager -type f -name "*.conf")
for NGINX_CONF in $NGINX_CONFS; do
sed -i 's+include conf.d+include /etc/nginx/conf.d+g' "$NGINX_CONF"
done
mkdir -p /var/www/html /etc/nginx/logs
cp -r /opt/nginxproxymanager/docker/rootfs/var/www/html/* /var/www/html/
cp -r /opt/nginxproxymanager/docker/rootfs/etc/nginx/* /etc/nginx/
cp /opt/nginxproxymanager/docker/rootfs/etc/letsencrypt.ini /etc/letsencrypt.ini
cp /opt/nginxproxymanager/docker/rootfs/etc/logrotate.d/nginx-proxy-manager /etc/logrotate.d/nginx-proxy-manager
ln -sf /etc/nginx/nginx.conf /etc/nginx/conf/nginx.conf
rm -f /etc/nginx/conf.d/dev.conf
mkdir -p /tmp/nginx/body \
/run/nginx \
/data/nginx \
/data/custom_ssl \
/data/logs \
/data/access \
/data/nginx/default_host \
/data/nginx/default_www \
/data/nginx/proxy_host \
/data/nginx/redirection_host \
/data/nginx/stream \
/data/nginx/dead_host \
/data/nginx/temp \
/var/lib/nginx/cache/public \
/var/lib/nginx/cache/private \
/var/cache/nginx/proxy_temp
chmod -R 777 /var/cache/nginx
chown root /tmp/nginx
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf);" >/etc/nginx/conf.d/include/resolvers.conf
if [ ! -f /data/nginx/dummycert.pem ] || [ ! -f /data/nginx/dummykey.pem ]; then
$STD openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/O=Nginx Proxy Manager/OU=Dummy Certificate/CN=localhost" -keyout /data/nginx/dummykey.pem -out /data/nginx/dummycert.pem
fi
if check_for_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager"; then
msg_info "Stopping Services"
systemctl stop openresty
systemctl stop npm
msg_ok "Stopped Services"
mkdir -p /app/frontend/images
cp -r /opt/nginxproxymanager/backend/* /app
msg_ok "Set up Environment"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "nginxproxymanager" "NginxProxyManager/nginx-proxy-manager" "tarball" "${CHECK_UPDATE_RELEASE}" "/opt/nginxproxymanager"
msg_info "Building Frontend"
export NODE_OPTIONS="--max_old_space_size=2048 --openssl-legacy-provider"
cd /opt/nginxproxymanager/frontend
# Replace node-sass with sass in package.json before installation
sed -E -i 's/"node-sass" *: *"([^"]*)"/"sass": "\1"/g' package.json
$STD yarn install --network-timeout 600000
$STD yarn locale-compile
$STD yarn build
cp -r /opt/nginxproxymanager/frontend/dist/* /app/frontend
cp -r /opt/nginxproxymanager/frontend/public/images/* /app/frontend/images
msg_ok "Built Frontend"
msg_info "Cleaning old files"
$STD rm -rf /app \
/var/www/html \
/etc/nginx \
/var/log/nginx \
/var/lib/nginx \
/var/cache/nginx
msg_ok "Cleaned old files"
local RELEASE="${CHECK_UPDATE_RELEASE#v}"
msg_info "Setting up Environment"
ln -sf /usr/bin/python3 /usr/bin/python
ln -sf /usr/local/openresty/nginx/sbin/nginx /usr/sbin/nginx
ln -sf /usr/local/openresty/nginx/ /etc/nginx
sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/backend/package.json
sed -i "0,/\"version\": \"[^\"]*\"/s|\"version\": \"[^\"]*\"|\"version\": \"$RELEASE\"|" /opt/nginxproxymanager/frontend/package.json
sed -i 's+^daemon+#daemon+g' /opt/nginxproxymanager/docker/rootfs/etc/nginx/nginx.conf
NGINX_CONFS=$(find /opt/nginxproxymanager -type f -name "*.conf")
for NGINX_CONF in $NGINX_CONFS; do
sed -i 's+include conf.d+include /etc/nginx/conf.d+g' "$NGINX_CONF"
done
mkdir -p /var/www/html /etc/nginx/logs
cp -r /opt/nginxproxymanager/docker/rootfs/var/www/html/* /var/www/html/
cp -r /opt/nginxproxymanager/docker/rootfs/etc/nginx/* /etc/nginx/
cp /opt/nginxproxymanager/docker/rootfs/etc/letsencrypt.ini /etc/letsencrypt.ini
cp /opt/nginxproxymanager/docker/rootfs/etc/logrotate.d/nginx-proxy-manager /etc/logrotate.d/nginx-proxy-manager
ln -sf /etc/nginx/nginx.conf /etc/nginx/conf/nginx.conf
rm -f /etc/nginx/conf.d/dev.conf
mkdir -p /tmp/nginx/body \
/run/nginx \
/data/nginx \
/data/custom_ssl \
/data/logs \
/data/access \
/data/nginx/default_host \
/data/nginx/default_www \
/data/nginx/proxy_host \
/data/nginx/redirection_host \
/data/nginx/stream \
/data/nginx/dead_host \
/data/nginx/temp \
/var/lib/nginx/cache/public \
/var/lib/nginx/cache/private \
/var/cache/nginx/proxy_temp
chmod -R 777 /var/cache/nginx
chown root /tmp/nginx
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" {print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf);" >/etc/nginx/conf.d/include/resolvers.conf
if [ ! -f /data/nginx/dummycert.pem ] || [ ! -f /data/nginx/dummykey.pem ]; then
$STD openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj "/O=Nginx Proxy Manager/OU=Dummy Certificate/CN=localhost" -keyout /data/nginx/dummykey.pem -out /data/nginx/dummycert.pem
fi
mkdir -p /app/frontend/images
cp -r /opt/nginxproxymanager/backend/* /app
msg_ok "Set up Environment"
msg_info "Building Frontend"
export NODE_OPTIONS="--max_old_space_size=2048 --openssl-legacy-provider"
cd /opt/nginxproxymanager/frontend
sed -E -i 's/"node-sass" *: *"([^"]*)"/"sass": "\1"/g' package.json
$STD yarn install --network-timeout 600000
$STD yarn locale-compile
$STD yarn build
cp -r /opt/nginxproxymanager/frontend/dist/* /app/frontend
cp -r /opt/nginxproxymanager/frontend/public/images/* /app/frontend/images
msg_ok "Built Frontend"
msg_info "Initializing Backend"
rm -rf /app/config/default.json
if [ ! -f /app/config/production.json ]; then
cat <<'EOF' >/app/config/production.json
msg_info "Initializing Backend"
rm -rf /app/config/default.json
if [ ! -f /app/config/production.json ]; then
cat <<'EOF' >/app/config/production.json
{
"database": {
"engine": "knex-native",
@@ -206,21 +196,28 @@ EOF
}
}
EOF
fi
sed -i 's/"client": "sqlite3"/"client": "better-sqlite3"/' /app/config/production.json
cd /app
$STD yarn install --network-timeout 600000
msg_ok "Initialized Backend"
msg_info "Starting Services"
sed -i 's/user npm/user root/g; s/^pid/#pid/g' /usr/local/openresty/nginx/conf/nginx.conf
sed -r -i 's/^([[:space:]]*)su npm npm/\1#su npm npm/g;' /etc/logrotate.d/nginx-proxy-manager
systemctl daemon-reload
systemctl enable -q --now openresty
systemctl enable -q --now npm
msg_ok "Started Services"
msg_ok "Updated successfully!"
fi
sed -i 's/"client": "sqlite3"/"client": "better-sqlite3"/' /app/config/production.json
cd /app
$STD yarn install --network-timeout 600000
msg_ok "Initialized Backend"
msg_info "Updating Certbot"
if [ -d /opt/certbot ]; then
$STD /opt/certbot/bin/pip install --upgrade pip setuptools wheel
$STD /opt/certbot/bin/pip install --upgrade certbot certbot-dns-cloudflare
fi
msg_ok "Updated Certbot"
msg_info "Starting Services"
sed -i 's/user npm/user root/g; s/^pid/#pid/g' /usr/local/openresty/nginx/conf/nginx.conf
sed -r -i 's/^([[:space:]]*)su npm npm/\1#su npm npm/g;' /etc/logrotate.d/nginx-proxy-manager
systemctl daemon-reload
systemctl enable -q --now openresty
systemctl enable -q --now npm
msg_ok "Started Services"
msg_ok "Updated successfully!"
exit
}

View File

@@ -27,27 +27,36 @@ function update_script() {
msg_error "No ${APP} Installation Found!"
exit
fi
RELEASE=$(get_latest_github_release "Part-DB/Part-DB-server")
if check_for_gh_release "partdb" "Part-DB/Part-DB-server"; then
msg_info "Stopping Service"
systemctl stop apache2
msg_ok "Stopped Service"
msg_info "Updating $APP to v${RELEASE}"
cd /opt
mv /opt/partdb/ /opt/partdb-backup
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "partdb" "Part-DB/Part-DB-server" "prebuild" "latest" "/opt/partdb" "partdb_with_assets.zip"
curl -fsSL "https://github.com/Part-DB/Part-DB-server/archive/refs/tags/v${RELEASE}.zip" -o "/opt/v${RELEASE}.zip"
$STD unzip "v${RELEASE}.zip"
mv /opt/Part-DB-server-${RELEASE}/ /opt/partdb
msg_info "Updating Part-DB"
cd /opt/partdb/
cp -r /opt/partdb-backup/.env.local /opt/partdb/
cp -r /opt/partdb-backup/public/media /opt/partdb/public/
cp -r /opt/partdb-backup/config/banner.md /opt/partdb/config/
cp -r "/opt/partdb-backup/.env.local" /opt/partdb/
cp -r "/opt/partdb-backup/public/media" /opt/partdb/public/
cp -r "/opt/partdb-backup/config/banner.md" /opt/partdb/config/
export COMPOSER_ALLOW_SUPERUSER=1
$STD composer install --no-dev -o --no-interaction
$STD yarn install
$STD yarn build
$STD php bin/console cache:clear
$STD php bin/console doctrine:migrations:migrate -n
chown -R www-data:www-data /opt/partdb
rm -r "/opt/v${RELEASE}.zip"
rm -r /opt/partdb-backup
msg_ok "Updated Part-DB"
echo "${RELEASE}" >~/.partdb
msg_ok "Updated $APP to v${RELEASE}"
msg_info "Starting Service"
systemctl start apache2

View File

@@ -8,7 +8,7 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
APP="Tracearr"
var_tags="${var_tags:-media}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-8192}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-10}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
@@ -102,7 +102,7 @@ EOF
if check_for_gh_release "tracearr" "connorgallopo/Tracearr"; then
msg_info "Stopping Services"
systemctl stop tracearr postgresql redis-server
systemctl stop tracearr postgresql redis
msg_ok "Stopped Services"
msg_info "Updating pnpm"
@@ -115,7 +115,6 @@ EOF
msg_info "Building Tracearr"
export TZ=$(cat /etc/timezone)
export NODE_OPTIONS="--max-old-space-size=4096"
cd /opt/tracearr.build
$STD pnpm install --frozen-lockfile --force
$STD pnpm turbo telemetry disable
@@ -149,7 +148,7 @@ EOF
msg_ok "Configured Tracearr"
msg_info "Starting services"
systemctl start postgresql redis-server tracearr
systemctl start postgresql redis tracearr
msg_ok "Started services"
msg_ok "Updated successfully!"
else

View File

@@ -1,164 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: vhsdream
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/nxzai/nextExplorer
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
ripgrep \
imagemagick \
ffmpeg \
libva-drm2 \
libva2 \
mesa-va-drivers \
vainfo
msg_ok "Installed Dependencies"
NODE_VERSION="24" setup_nodejs
fetch_and_deploy_gh_release "nextExplorer" "nxzai/nextExplorer" "tarball" "latest" "/opt/nextExplorer"
msg_info "Building nextExplorer"
APP_DIR="/opt/nextExplorer/app"
LOCAL_IP="$(hostname -I | awk '{print $1}')"
mkdir -p "$APP_DIR"
mkdir -p /etc/nextExplorer
cd /opt/nextExplorer
export NODE_ENV=production
$STD npm ci --omit=dev --workspace backend
mv node_modules "$APP_DIR"
mv backend/{src,package.json} "$APP_DIR"
unset NODE_ENV
export NODE_ENV=development
export NODE_OPTIONS="--max-old-space-size=2048"
$STD npm ci --workspace frontend
$STD npm run -w frontend build -- --sourcemap false
unset NODE_ENV
mv frontend/dist/ "$APP_DIR"/src/public
msg_ok "Built nextExplorer"
msg_info "Configuring nextExplorer"
SECRET=$(openssl rand -hex 32)
cat <<EOF >/etc/nextExplorer/.env
NODE_ENV=production
PORT=3000
VOLUME_ROOT=/mnt
CONFIG_DIR=/etc/nextExplorer
CACHE_DIR=/etc/nextExplorer/cache
# USER_ROOT=
PUBLIC_URL=${LOCAL_IP}:3000
# TRUST_PROXY=
# CORS_ORIGINS=
TERMINAL_ENABLED=false
LOG_LEVEL=info
DEBUG=false
ENABLE_HTTP_LOGGING=false
AUTH_ENABLED=true
AUTH_MODE=both
SESSION_SECRET="${SECRET}"
# AUTH_MAX_FAILED=
# AUTH_LOCK_MINUTES=
# AUTH_USER_EMAIL=
# AUTH_USER_PASSWORD=
# OIDC_ENABLED=
# OIDC_ISSUER=
# OIDC_AUTHORIZATION_URL=
# OIDC_TOKEN_URL=
# OIDC_USERINFO_URL=
# OIDC_CLIENT_ID=
# OIDC_CLIENT_SECRET=
# OIDC_CALLBACK_URL=
# OIDC_LOGOUT_URL=
# OIDC_SCOPES=
# OIDC_AUTO_CREATE_USERS=true
# SEARCH_DEEP=
# SEARCH_RIPGREP=
# SEARCH_MAX_FILESIZE=
# ONLYOFFICE_URL=
# ONLYOFFICE_SECRET=
# ONLYOFFICE_LANG=
# ONLYOFFICE_FORCE_SAVE=
# ONLYOFFICE_FILE_EXTENSIONS=
# COLLABORA_URL=
# COLLABORA_DISCOVERY_URL=
# COLLABORA_SECRET=
# COLLABORA_LANG=
# COLLABORA_FILE_EXTENSIONS=
SHOW_VOLUME_USAGE=true
# USER_DIR_ENABLED=
# SKIP_HOME=
# EDITOR_EXTENSIONS=
# FFMPEG_PATH=
# FFPROBE_PATH=
## Hardware acceleration
# FFMPEG_HWACCEL=vaapi
# FFMPEG_HWACCEL_DEVICE=/dev/dri/renderD128
# FFMPEG_HWACCEL_OUTPUT_FORMAT=nv12
FAVORITES_DEFAULT_ICON=outline.StarIcon
SHARES_ENABLED=true
# SHARES_TOKEN_LENGTH=10
# SHARES_MAX_PER_USER=100
# SHARES_DEFAULT_EXPIRY_DAYS=30
# SHARES_GUEST_SESSION_HOURS=24
# SHARES_ALLOW_PASSWORD=true
# SHARES_ALLOW_ANONYMOUS=true
EOF
chmod 600 /etc/nextExplorer/.env
$STD useradd -U -s /usr/sbin/nologin -m -d /home/explorer explorer
chown -R explorer:explorer "$APP_DIR" /etc/nextExplorer
sed -i "\|version|s|$(jq -cr '.version' ${APP_DIR}/package.json)|$(cat ~/.nextexplorer)|" "$APP_DIR"/package.json
msg_ok "Configured nextExplorer"
msg_info "Creating nextExplorer Service"
cat <<EOF >/etc/systemd/system/nextexplorer.service
[Unit]
Description=nextExplorer Service
After=network.target
[Service]
Type=simple
User=explorer
Group=explorer
WorkingDirectory=/opt/nextExplorer/app
EnvironmentFile=/etc/nextExplorer/.env
ExecStart=/usr/bin/node ./src/server.js
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
$STD systemctl enable -q --now nextexplorer
msg_ok "Created nextExplorer Service"
motd_ssh
customize
cleanup_lxc

View File

@@ -13,19 +13,27 @@ setting_up_container
network_check
update_os
NODE_VERSION="22" NODE_MODULE="yarn@latest" setup_nodejs
PG_VERSION="16" setup_postgresql
PG_DB_NAME="partdb" PG_DB_USER="partdb" setup_postgresql_db
PHP_VERSION="8.4" PHP_APACHE="YES" PHP_MODULE="xsl" PHP_POST_MAX_SIZE="100M" PHP_UPLOAD_MAX_FILESIZE="100M" setup_php
setup_composer
fetch_and_deploy_gh_release "partdb" "Part-DB/Part-DB-server" "prebuild" "latest" "/opt/partdb" "partdb_with_assets.zip"
msg_info "Installing Part-DB (Patience)"
cd /opt
RELEASE=$(get_latest_github_release "Part-DB/Part-DB-server")
curl -fsSL "https://github.com/Part-DB/Part-DB-server/archive/refs/tags/v${RELEASE}.zip" -o "/opt/v${RELEASE}.zip"
$STD unzip "v${RELEASE}.zip"
mv /opt/Part-DB-server-${RELEASE}/ /opt/partdb
msg_info "Installing Part-DB"
cd /opt/partdb/
cp .env .env.local
sed -i "s|DATABASE_URL=\"sqlite:///%kernel.project_dir%/var/app.db\"|DATABASE_URL=\"postgresql://${PG_DB_USER}:${PG_DB_PASS}@127.0.0.1:5432/${PG_DB_NAME}?serverVersion=12.19&charset=utf8\"|" .env.local
export COMPOSER_ALLOW_SUPERUSER=1
$STD composer install --no-dev -o --no-interaction
$STD yarn install
$STD yarn build
$STD php bin/console cache:clear
php bin/console doctrine:migrations:migrate -n >~/database-migration-output
chown -R www-data:www-data /opt/partdb
@@ -36,6 +44,8 @@ ADMIN_PASS=$(grep -oP 'The initial password for the "admin" user is: \K\w+' ~/da
echo "Part-DB Admin Password: $ADMIN_PASS"
} >>~/partdb.creds
rm -rf ~/database-migration-output
rm -rf "/opt/v${RELEASE}.zip"
echo "${RELEASE}" >~/.partdb
msg_ok "Installed Part-DB"
msg_info "Creating Service"

View File

@@ -35,7 +35,7 @@ cd Shinobi
gitVersionNumber=$(git rev-parse HEAD)
theDateRightNow=$(date)
touch version.json
chmod 644 version.json
chmod 777 version.json
echo '{"Product" : "'"Shinobi"'" , "Branch" : "'"master"'" , "Version" : "'"$gitVersionNumber"'" , "Date" : "'"$theDateRightNow"'" , "Repository" : "'"https://gitlab.com/Shinobi-Systems/Shinobi.git"'"}' >version.json
msg_ok "Cloned Shinobi"

View File

@@ -23,7 +23,7 @@ fetch_and_deploy_gh_release "tasmoadmin" "TasmoAdmin/TasmoAdmin" "prebuild" "lat
msg_info "Configuring TasmoAdmin"
rm -rf /etc/php/8.4/apache2/conf.d/10-opcache.ini
chown -R www-data:www-data /var/www/tasmoadmin
chmod 775 /var/www/tasmoadmin/tmp /var/www/tasmoadmin/data
chmod 777 /var/www/tasmoadmin/tmp /var/www/tasmoadmin/data
cat <<EOF >/etc/apache2/sites-available/tasmoadmin.conf
<VirtualHost *:9999>
ServerName tasmoadmin

View File

@@ -62,7 +62,6 @@ fetch_and_deploy_gh_release "tracearr" "connorgallopo/Tracearr" "tarball" "lates
msg_info "Building Tracearr"
export TZ=$(cat /etc/timezone)
export NODE_OPTIONS="--max-old-space-size=4096"
cd /opt/tracearr.build
$STD pnpm install --frozen-lockfile --force
$STD pnpm turbo telemetry disable

View File

@@ -67,22 +67,22 @@ EOF
# This function sets up the Container OS by generating the locale, setting the timezone, and checking the network connection
setting_up_container() {
msg_info "Setting up Container OS"
local _ip=""
while [ $i -gt 0 ]; do
if [ "$(ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -d'/' -f1)" != "" ]; then
break
fi
_ip=$(ip -4 addr show 2>/dev/null | awk '/inet [0-9]/ && !/127\.0\.0\.1/ {sub(/\/.*/, "", $2); print $2; exit}')
[[ -n "$_ip" ]] && break
echo 1>&2 -en "${CROSS}${RD} No Network! "
sleep $RETRY_EVERY
i=$((i - 1))
done
if [ "$(ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -d'/' -f1)" = "" ]; then
if [[ -z "$_ip" ]]; then
echo 1>&2 -e "\n${CROSS}${RD} No Network After $RETRY_NUM Tries${CL}"
echo -e "${NETWORK}Check Network Settings"
exit 121
fi
msg_ok "Set up Container OS"
msg_ok "Network Connected: ${BL}$(ip addr show | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | tail -n1)${CL}"
msg_ok "Network Connected: ${BL}${_ip}${CL}"
post_progress_to_api
}
@@ -90,18 +90,11 @@ setting_up_container() {
network_check() {
set +e
trap - ERR
ipv4_connected=false
# Check IPv4 connectivity to Cloudflare, Google & Quad9 DNS servers
if ping -c 1 -W 1 1.1.1.1 &>/dev/null || ping -c 1 -W 1 8.8.8.8 &>/dev/null || ping -c 1 -W 1 9.9.9.9 &>/dev/null; then
msg_ok "IPv4 Internet Connected"
ipv4_connected=true
ipv4_status="${GN}${CL} IPv4"
else
msg_error "IPv4 Internet Not Connected"
fi
if [[ $ipv4_connected == false ]]; then
read -r -p "No Internet detected, would you like to continue anyway? <y/N> " prompt
ipv4_status="${RD}${CL} IPv4"
read -r -p "Internet NOT connected. Continue anyway? <y/N> " prompt
if [[ "${prompt,,}" =~ ^(y|yes)$ ]]; then
echo -e "${INFO}${RD}Expect Issues Without Internet${CL}"
else
@@ -109,28 +102,12 @@ network_check() {
exit 122
fi
fi
# DNS resolution checks for GitHub-related domains
GIT_HOSTS=("github.com" "raw.githubusercontent.com" "api.github.com" "git.community-scripts.org")
GIT_STATUS="Git DNS:"
DNS_FAILED=false
for HOST in "${GIT_HOSTS[@]}"; do
RESOLVEDIP=$(getent hosts "$HOST" | awk '{ print $1 }' | grep -E '(^([0-9]{1,3}\.){3}[0-9]{1,3}$)|(^[a-fA-F0-9:]+$)' | head -n1)
if [[ -z "$RESOLVEDIP" ]]; then
GIT_STATUS+="$HOST:($DNSFAIL)"
DNS_FAILED=true
else
GIT_STATUS+=" $HOST:($DNSOK)"
fi
done
if [[ "$DNS_FAILED" == true ]]; then
fatal "$GIT_STATUS"
RESOLVEDIP=$(getent hosts github.com | awk '{ print $1 }')
if [[ -z "$RESOLVEDIP" ]]; then
msg_error "Internet: ${ipv4_status} DNS Failed"
else
msg_ok "$GIT_STATUS"
msg_ok "Internet: ${ipv4_status} DNS: ${BL}${RESOLVEDIP}${CL}"
fi
set -e
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
}

View File

@@ -53,7 +53,7 @@ download_with_progress() {
# $1 url, $2 dest
local url="$1" out="$2" cl
need_tool curl pv || return 1
cl=$(curl -fsSLI "$url" 2>/dev/null | awk 'tolower($0) ~ /^content-length:/ {print $2}' | tr -d '\r')
cl=$(curl -fsSLI "$url" 2>/dev/null | awk 'tolower($0) ~ /^content-length:/ {gsub(/\r/,""); print $2}')
if [ -n "$cl" ]; then
curl -fsSL "$url" | pv -s "$cl" >"$out" || {
msg_error "Download failed: $url"

View File

@@ -401,7 +401,7 @@ get_error_text() {
fi
if [[ -n "$logfile" && -s "$logfile" ]]; then
tail -n 20 "$logfile" 2>/dev/null | sed 's/\r$//' | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g'
tail -n 20 "$logfile" 2>/dev/null | sed -E 's/\r$//; s/\x1b\[[0-9;]*[a-zA-Z]//g'
fi
}
@@ -508,7 +508,8 @@ detect_gpu() {
if [[ -n "$gpu_line" ]]; then
# Extract model: everything after the colon, clean up
GPU_MODEL=$(echo "$gpu_line" | sed 's/.*: //' | sed 's/ (rev .*)$//' | cut -c1-64)
GPU_MODEL=$(echo "$gpu_line" | sed -E 's/.*: //; s/ \(rev .*\)$//')
GPU_MODEL="${GPU_MODEL:0:64}"
# Detect vendor and passthrough type
if echo "$gpu_line" | grep -qi "Intel"; then
@@ -557,7 +558,8 @@ detect_cpu() {
esac
# Extract model name and clean it up
CPU_MODEL=$(grep -m1 "model name" /proc/cpuinfo 2>/dev/null | cut -d: -f2 | sed 's/^ *//' | sed 's/(R)//g' | sed 's/(TM)//g' | sed 's/ */ /g' | cut -c1-64)
CPU_MODEL=$(grep -m1 "model name" /proc/cpuinfo 2>/dev/null | cut -d: -f2 | sed -E 's/^ *//; s/\(R\)//g; s/\(TM\)//g; s/ +/ /g')
CPU_MODEL="${CPU_MODEL:0:64}"
fi
export CPU_VENDOR CPU_MODEL
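Merging the four `sed` calls into one `sed -E` with semicolon-separated expressions leaves a single subprocess, and the trailing `cut -c1-64` becomes a bash substring. A sketch against a fabricated `/proc/cpuinfo` line:

```shell
line=$'model name\t: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz'
# One sed invocation chains what used to be four: strip leading spaces,
# drop (R)/(TM), collapse runs of spaces.
model=$(printf '%s\n' "$line" | cut -d: -f2 | sed -E 's/^ *//; s/\(R\)//g; s/\(TM\)//g; s/ +/ /g')
model="${model:0:64}"   # bash substring replaces the old `cut -c1-64`
```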
@@ -627,8 +629,8 @@ post_to_api() {
[[ "${DEV_MODE:-}" == "true" ]] && echo "[DEBUG] post_to_api() DIAGNOSTICS=$DIAGNOSTICS RANDOM_UUID=$RANDOM_UUID NSAPP=$NSAPP" >&2
# Set type for later status updates (preserve if already set, e.g. turnkey)
TELEMETRY_TYPE="${TELEMETRY_TYPE:-lxc}"
# Set type for later status updates
TELEMETRY_TYPE="lxc"
local pve_version=""
if command -v pveversion &>/dev/null; then
@@ -692,7 +694,6 @@ EOF
# Send initial "installing" record with retry.
# This record MUST exist for all subsequent updates to succeed.
local http_code="" attempt
local _post_success=false
for attempt in 1 2 3; do
if [[ "${DEV_MODE:-}" == "true" ]]; then
http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
@@ -704,19 +705,11 @@ EOF
-H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
fi
if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
_post_success=true
break
fi
[[ "$http_code" =~ ^2[0-9]{2}$ ]] && break
[[ "$attempt" -lt 3 ]] && sleep 1
done
# Only mark done if at least one attempt succeeded.
# If all 3 failed, POST_TO_API_DONE stays false so post_update_to_api
# and on_exit() know the initial record was never created.
# The server has fallback logic to create a new record on status updates,
# so subsequent calls can still succeed even without the initial record.
POST_TO_API_DONE=${_post_success}
POST_TO_API_DONE=true
}
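The success flag makes the post-loop state unambiguous: `POST_TO_API_DONE` is true only if some attempt returned 2xx. A generic sketch of the pattern, with a hypothetical stub standing in for the curl POST:

```shell
# Stub in place of the real curl call: returns a 2xx code only on
# the third attempt, simulating two transient failures.
_try_post() { (( $1 >= 3 )) && echo 201 || echo 000; }

_post_success=false
for attempt in 1 2 3; do
  http_code=$(_try_post "$attempt")
  if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
    _post_success=true
    break
  fi
  # the real loop sleeps between attempts; omitted here
done
POST_TO_API_DONE=${_post_success}
```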
# ------------------------------------------------------------------------------
@@ -807,19 +800,15 @@ EOF
# Send initial "installing" record with retry (must succeed for updates to work)
local http_code="" attempt
local _post_success=false
for attempt in 1 2 3; do
http_code=$(curl -sS -w "%{http_code}" -m "${TELEMETRY_TIMEOUT}" -X POST "${TELEMETRY_URL}" \
-H "Content-Type: application/json" \
-d "$JSON_PAYLOAD" -o /dev/null 2>/dev/null) || http_code="000"
if [[ "$http_code" =~ ^2[0-9]{2}$ ]]; then
_post_success=true
break
fi
[[ "$http_code" =~ ^2[0-9]{2}$ ]] && break
[[ "$attempt" -lt 3 ]] && sleep 1
done
POST_TO_API_DONE=${_post_success}
POST_TO_API_DONE=true
}
# ------------------------------------------------------------------------------
@@ -1096,12 +1085,6 @@ EOF
# - Used to group errors in dashboard
# ------------------------------------------------------------------------------
categorize_error() {
# Allow build.func to override category based on log analysis (exit code 1 subclassification)
if [[ -n "${ERROR_CATEGORY_OVERRIDE:-}" ]]; then
echo "$ERROR_CATEGORY_OVERRIDE"
return
fi
local code="$1"
case "$code" in
# Network errors (curl/wget)
@@ -1344,11 +1327,16 @@ post_addon_to_api() {
error_category=$(categorize_error "$exit_code")
fi
# Detect OS info
# Detect OS info (single read)
local os_type="" os_version=""
if [[ -f /etc/os-release ]]; then
os_type=$(grep "^ID=" /etc/os-release | cut -d= -f2 | tr -d '"')
os_version=$(grep "^VERSION_ID=" /etc/os-release | cut -d= -f2 | tr -d '"')
while IFS='=' read -r _k _v; do
_v="${_v//\"/}"
case "$_k" in
ID) os_type="$_v" ;;
VERSION_ID) os_version="$_v" ;;
esac
done </etc/os-release
fi
local JSON_PAYLOAD

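The while-read replaces two `grep|cut|tr` pipelines with zero subprocesses: one pass over the file fills both fields. A self-contained sketch against a fabricated os-release snippet:

```shell
# Sample os-release content (fabricated for the sketch).
sample=$'PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"\nID=debian\nVERSION_ID="12"\nVERSION_CODENAME=bookworm'

os_type="" os_version=""
while IFS='=' read -r _k _v; do
  _v="${_v//\"/}"            # strip quotes without forking tr
  case "$_k" in
    ID) os_type="$_v" ;;
    VERSION_ID) os_version="$_v" ;;
  esac
done <<<"$sample"
```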
File diff suppressed because it is too large

View File

@@ -1644,7 +1644,7 @@ function get_lxc_ip() {
local ip
# Try direct interface lookup for eth0 FIRST (most reliable for LXC) - IPv4
ip=$(ip -4 addr show eth0 2>/dev/null | awk '/inet / {print $2}' | cut -d/ -f1 | head -n1)
ip=$(ip -4 addr show eth0 2>/dev/null | awk '/inet / {sub(/\/.*/, "", $2); print $2; exit}')
if [[ -n "$ip" && "$ip" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "$ip"
return 0
@@ -1652,11 +1652,14 @@ function get_lxc_ip() {
# Fallback: Try hostname -I (returns IPv4 first if available)
if command -v hostname >/dev/null 2>&1; then
ip=$(hostname -I 2>/dev/null | awk '{print $1}')
if [[ -n "$ip" && "$ip" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "$ip"
return 0
fi
local -a _ips
read -ra _ips <<<"$(hostname -I 2>/dev/null)"
for ip in "${_ips[@]}"; do
if [[ "$ip" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "$ip"
return 0
fi
done
fi
# Try routing table with IPv4 targets
@@ -1674,7 +1677,7 @@ function get_lxc_ip() {
done
# IPv6 fallback: Try direct interface lookup for eth0
ip=$(ip -6 addr show eth0 scope global 2>/dev/null | awk '/inet6 / {print $2}' | cut -d/ -f1 | head -n1)
ip=$(ip -6 addr show eth0 scope global 2>/dev/null | awk '/inet6 / {sub(/\/.*/, "", $2); print $2; exit}')
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
@@ -1682,11 +1685,14 @@ function get_lxc_ip() {
# IPv6 fallback: Try hostname -I for IPv6
if command -v hostname >/dev/null 2>&1; then
ip=$(hostname -I 2>/dev/null | tr ' ' '\n' | grep -E ':' | head -n1)
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
fi
local -a _ips6
read -ra _ips6 <<<"$(hostname -I 2>/dev/null)"
for ip in "${_ips6[@]}"; do
[[ "$ip" == *:* ]] && {
echo "$ip"
return 0
}
done
fi
# IPv6 fallback: Use routing table with IPv6 targets
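Each merged awk strips the CIDR suffix and prints in one pass, exiting on the first match, which replaces the old `cut -d/ -f1 | head -n1` tail. A sketch against canned `ip -4 addr show` output (addresses are examples):

```shell
sample=$'2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500\n    inet 192.168.1.50/24 brd 192.168.1.255 scope global eth0\n    inet 10.0.0.7/8 scope global secondary eth0'
# sub() drops the /prefix; exit stops after the first inet line.
ip=$(printf '%s\n' "$sample" | awk '/inet / {sub(/\/.*/, "", $2); print $2; exit}')
```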

View File

@@ -507,23 +507,14 @@ _stop_container_if_installing() {
on_exit() {
local exit_code=$?
# Report orphaned telemetry records
# Two scenarios handled:
# 1. POST_TO_API_DONE=true but POST_UPDATE_DONE=false: Record was created but
# never got a final status update → send abort/done now.
# 2. POST_TO_API_DONE=false but DIAGNOSTICS=yes: Initial post failed (server
# unreachable/timeout), but the server has fallback create-on-update logic,
# so a status update can still create the record. Worth one last try.
if [[ "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ "${POST_TO_API_DONE:-}" == "true" || "${DIAGNOSTICS:-no}" == "yes" ]]; then
if [[ $exit_code -ne 0 ]]; then
_send_abort_telemetry "$exit_code"
elif [[ "${INSTALL_COMPLETE:-}" == "true" ]] && declare -f post_update_to_api >/dev/null 2>&1; then
# Only report success if the install was explicitly marked complete.
# Without this guard, early bailouts (e.g. user cancelled) with exit 0
# would be falsely reported as successful installations.
post_update_to_api "done" "0" 2>/dev/null || true
fi
# Report orphaned "installing" records to telemetry API
# Catches ALL exit paths: errors, signals, AND clean exits where
# post_to_api was called but post_update_to_api was never called
if [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
if [[ $exit_code -ne 0 ]]; then
_send_abort_telemetry "$exit_code"
elif declare -f post_update_to_api >/dev/null 2>&1; then
post_update_to_api "done" "0" 2>/dev/null || true
fi
fi

View File

@@ -116,14 +116,14 @@ setting_up_container() {
(chown root:root / 2>/dev/null) || true
fi
local _host_ip=""
for ((i = RETRY_NUM; i > 0; i--)); do
if [ "$(hostname -I)" != "" ]; then
break
fi
_host_ip=$(hostname -I 2>/dev/null | awk '{print $1}')
[[ -n "$_host_ip" ]] && break
echo 1>&2 -en "${CROSS}${RD} No Network! "
sleep $RETRY_EVERY
done
if [ "$(hostname -I)" = "" ]; then
if [[ -z "$_host_ip" ]]; then
echo 1>&2 -e "\n${CROSS}${RD} No Network After $RETRY_NUM Tries${CL}"
echo -e "${NETWORK}Check Network Settings"
exit 121
@@ -131,8 +131,7 @@ setting_up_container() {
rm -rf /usr/lib/python3.*/EXTERNALLY-MANAGED
systemctl disable -q --now systemd-networkd-wait-online.service
msg_ok "Set up Container OS"
#msg_custom "${CM}" "${GN}" "Network Connected: ${BL}$(hostname -I)"
msg_ok "Network Connected: ${BL}$(hostname -I)"
msg_ok "Network Connected: ${BL}${_host_ip}"
post_progress_to_api
}
@@ -309,14 +308,14 @@ customize() {
if [[ "$PASSWORD" == "" ]]; then
msg_info "Customizing Container"
GETTY_OVERRIDE="/etc/systemd/system/container-getty@1.service.d/override.conf"
mkdir -p "$(dirname "$GETTY_OVERRIDE")"
cat <<EOF >"$GETTY_OVERRIDE"
mkdir -p $(dirname $GETTY_OVERRIDE)
cat <<EOF >$GETTY_OVERRIDE
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear --keep-baud tty%I 115200,38400,9600 \$TERM
EOF
systemctl daemon-reload
systemctl restart "$(basename "$(dirname "$GETTY_OVERRIDE")" | sed 's/\.d//')"
systemctl restart $(basename $(dirname $GETTY_OVERRIDE) | sed 's/\.d//')
msg_ok "Customized Container"
fi
echo "bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/${app}.sh)\"" >/usr/bin/update

View File

@@ -242,7 +242,7 @@ download_gpg_key() {
# Process based on mode
if [[ "$mode" == "dearmor" ]]; then
if gpg --dearmor --yes -o "$output" <"$temp_key" 2>/dev/null && [[ -s "$output" ]]; then
if gpg --dearmor --yes -o "$output" <"$temp_key" 2>/dev/null; then
rm -f "$temp_key"
debug_log "GPG key installed (dearmored): $output"
return 0
@@ -700,7 +700,7 @@ manage_tool_repository() {
local gpg_key_url="${4:-}"
local distro_id repo_component suite
distro_id=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
distro_id=$(get_os_info id)
case "$tool_name" in
mariadb)
@@ -714,7 +714,7 @@ manage_tool_repository() {
# Get suite for fallback handling
local distro_codename
distro_codename=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
distro_codename=$(get_os_info codename)
suite=$(get_fallback_suite "$distro_id" "$distro_codename" "$repo_url/$distro_id")
# Setup new repository using deb822 format
@@ -745,7 +745,7 @@ manage_tool_repository() {
# Setup repository
local distro_codename
distro_codename=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
distro_codename=$(get_os_info codename)
# Suite mapping with fallback for newer releases not yet supported by upstream
if [[ "$distro_id" == "debian" ]]; then
@@ -816,7 +816,7 @@ EOF
# NodeSource uses deb822 format with GPG from repo
local distro_codename
distro_codename=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
distro_codename=$(get_os_info codename)
# Download GPG key from NodeSource with retry logic
if ! download_gpg_key "$gpg_key_url" "/etc/apt/keyrings/nodesource.gpg" "dearmor"; then
@@ -858,7 +858,7 @@ EOF
# Setup repository
local distro_codename
distro_codename=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
distro_codename=$(get_os_info codename)
cat <<EOF >/etc/apt/sources.list.d/php.sources
Types: deb
URIs: https://packages.sury.org/php
@@ -886,7 +886,7 @@ EOF
# Setup repository
local distro_codename
distro_codename=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
distro_codename=$(get_os_info codename)
cat <<EOF >/etc/apt/sources.list.d/postgresql.sources
Types: deb
URIs: http://apt.postgresql.org/pub/repos/apt
@@ -1323,10 +1323,16 @@ get_os_info() {
# Cache OS info to avoid repeated file reads
if [[ -z "${_OS_ID:-}" ]]; then
export _OS_ID=$(awk -F= '/^ID=/{gsub(/"/,"",$2); print $2}' /etc/os-release)
export _OS_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{gsub(/"/,"",$2); print $2}' /etc/os-release)
export _OS_VERSION=$(awk -F= '/^VERSION_ID=/{gsub(/"/,"",$2); print $2}' /etc/os-release)
export _OS_VERSION_FULL=$(awk -F= '/^VERSION=/{gsub(/"/,"",$2); print $2}' /etc/os-release)
local _line
while IFS='=' read -r _key _val; do
_val="${_val//\"/}"
case "$_key" in
ID) export _OS_ID="$_val" ;;
VERSION_CODENAME) export _OS_CODENAME="$_val" ;;
VERSION_ID) export _OS_VERSION="$_val" ;;
VERSION) export _OS_VERSION_FULL="$_val" ;;
esac
done </etc/os-release
fi
case "$field" in
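The cached lookup amortizes the single while-read over every later `get_os_info` call. A simplified, self-contained sketch of the idea — it reads from `$OS_RELEASE_FILE` instead of hard-coding `/etc/os-release`, which is my addition so the sketch is testable anywhere:

```shell
# Simplified sketch: first call populates all fields in one pass,
# later calls hit the cached variables.
get_os_info() {
  local field="$1"
  if [[ -z "${_OS_ID:-}" ]]; then
    while IFS='=' read -r _key _val; do
      _val="${_val//\"/}"
      case "$_key" in
        ID) _OS_ID="$_val" ;;
        VERSION_CODENAME) _OS_CODENAME="$_val" ;;
        VERSION_ID) _OS_VERSION="$_val" ;;
      esac
    done <"${OS_RELEASE_FILE:-/etc/os-release}"
  fi
  case "$field" in
    id) echo "$_OS_ID" ;;
    codename) echo "$_OS_CODENAME" ;;
    version) echo "$_OS_VERSION" ;;
  esac
}

OS_RELEASE_FILE=$(mktemp)
printf 'ID=ubuntu\nVERSION_ID="24.04"\nVERSION_CODENAME=noble\n' >"$OS_RELEASE_FILE"
os_id=$(get_os_info id)
os_codename=$(get_os_info codename)
rm -f "$OS_RELEASE_FILE"
```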
@@ -2144,8 +2150,8 @@ fetch_and_deploy_gh_tag() {
local repo="$2"
local version="${3:-latest}"
local target="${4:-/opt/$app}"
local app_lc=""
app_lc="$(echo "${app,,}" | tr -d ' ')"
local app_lc="${app,,}"
app_lc="${app_lc// /}"
local version_file="$HOME/.${app_lc}"
if [[ "$version" == "latest" ]]; then
@@ -2221,8 +2227,8 @@ check_for_gh_tag() {
local app="$1"
local repo="$2"
local prefix="${3:-}"
local app_lc=""
app_lc="$(echo "${app,,}" | tr -d ' ')"
local app_lc="${app,,}"
app_lc="${app_lc// /}"
local current_file="$HOME/.${app_lc}"
msg_info "Checking for update: ${app}"
@@ -2272,8 +2278,8 @@ check_for_gh_release() {
local source="$2"
local pinned_version_in="${3:-}" # optional
local pin_reason="${4:-}" # optional reason shown to user
local app_lc=""
app_lc="$(echo "${app,,}" | tr -d ' ')"
local app_lc="${app,,}"
app_lc="${app_lc// /}"
local current_file="$HOME/.${app_lc}"
msg_info "Checking for update: ${app}"
@@ -2600,7 +2606,8 @@ create_self_signed_cert() {
local APP_NAME="${1:-${APPLICATION}}"
local HOSTNAME="$(hostname -f)"
local IP="$(hostname -I | awk '{print $1}')"
local APP_NAME_LC=$(echo "${APP_NAME,,}" | tr -d ' ')
local APP_NAME_LC="${APP_NAME,,}"
APP_NAME_LC="${APP_NAME_LC// /}"
local CERT_DIR="/etc/ssl/${APP_NAME_LC}"
local CERT_KEY="${CERT_DIR}/${APP_NAME_LC}.key"
local CERT_CRT="${CERT_DIR}/${APP_NAME_LC}.crt"
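The `echo | tr` fork for lowercasing and stripping spaces is replaced by two parameter expansions, as in the `app_lc` hunks above. A minimal sketch (sample name fabricated):

```shell
APP_NAME="Home Assistant Core"
APP_NAME_LC="${APP_NAME,,}"       # ${var,,} lowercases (bash 4+)
APP_NAME_LC="${APP_NAME_LC// /}"  # ${var// /} deletes spaces -- no subprocess
```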
@@ -2647,7 +2654,7 @@ function download_with_progress() {
# Fetch Content-Length from the HTTP header
local content_length
content_length=$(curl -fsSLI "$url" | awk '/Content-Length/ {print $2}' | tr -d '\r' || true)
content_length=$(curl -fsSLI "$url" | awk '/Content-Length/ {gsub(/\r/,""); print $2}' || true)
if [[ -z "$content_length" ]]; then
if ! curl -fL# -o "$output" "$url"; then
@@ -2766,7 +2773,8 @@ function fetch_and_deploy_codeberg_release() {
local target="${5:-/opt/$app}"
local asset_pattern="${6:-}"
local app_lc=$(echo "${app,,}" | tr -d ' ')
local app_lc="${app,,}"
app_lc="${app_lc// /}"
local version_file="$HOME/.${app_lc}"
local api_timeouts=(60 120 240)
@@ -3318,7 +3326,8 @@ function fetch_and_deploy_gh_release() {
fi
fi
local app_lc=$(echo "${app,,}" | tr -d ' ')
local app_lc="${app,,}"
app_lc="${app_lc// /}"
local version_file="$HOME/.${app_lc}"
local api_timeouts=(60 120 240)
@@ -4409,7 +4418,7 @@ function setup_hwaccel() {
# Parse comma-separated numbers
IFS=',' read -ra nums <<<"$selection"
for num in "${nums[@]}"; do
num=$(echo "$num" | tr -d ' ')
num="${num// /}"
if [[ "$num" =~ ^[0-9]+$ ]] && ((num >= 1 && num <= gpu_count)); then
SELECTED_INDICES+=("$((num - 1))")
fi
@@ -4451,9 +4460,9 @@ function setup_hwaccel() {
# OS Detection
# ═══════════════════════════════════════════════════════════════════════════
local os_id os_codename os_version
os_id=$(grep -oP '(?<=^ID=).+' /etc/os-release 2>/dev/null | tr -d '"' || echo "debian")
os_codename=$(grep -oP '(?<=^VERSION_CODENAME=).+' /etc/os-release 2>/dev/null | tr -d '"' || echo "unknown")
os_version=$(grep -oP '(?<=^VERSION_ID=).+' /etc/os-release 2>/dev/null | tr -d '"' || echo "")
os_id=$(get_os_info id)
os_codename=$(get_os_info codename)
os_version=$(get_os_info version)
[[ -z "$os_id" ]] && os_id="debian"
local in_ct="${CTTYPE:-0}"
@@ -5192,7 +5201,7 @@ _setup_gpu_permissions() {
for nvidia_dev in /dev/nvidia*; do
[[ -e "$nvidia_dev" ]] && {
chgrp video "$nvidia_dev" 2>/dev/null || true
chmod 660 "$nvidia_dev" 2>/dev/null || true
chmod 666 "$nvidia_dev" 2>/dev/null || true
}
done
if [[ -d /dev/nvidia-caps ]]; then
@@ -5200,7 +5209,7 @@ _setup_gpu_permissions() {
for caps_dev in /dev/nvidia-caps/*; do
[[ -e "$caps_dev" ]] && {
chgrp video "$caps_dev" 2>/dev/null || true
chmod 660 "$caps_dev" 2>/dev/null || true
chmod 666 "$caps_dev" 2>/dev/null || true
}
done
fi
@@ -5217,8 +5226,7 @@ _setup_gpu_permissions() {
# /dev/kfd permissions (AMD ROCm)
if [[ -e /dev/kfd ]]; then
chgrp render /dev/kfd 2>/dev/null || true
chmod 660 /dev/kfd 2>/dev/null || true
chmod 666 /dev/kfd 2>/dev/null || true
msg_info "AMD ROCm compute device configured"
fi
@@ -5340,8 +5348,8 @@ function setup_imagemagick() {
function setup_java() {
local JAVA_VERSION="${JAVA_VERSION:-21}"
local DISTRO_ID DISTRO_CODENAME
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/VERSION_CODENAME/ { print $2 }' /etc/os-release)
DISTRO_ID=$(get_os_info id)
DISTRO_CODENAME=$(get_os_info codename)
local DESIRED_PACKAGE="temurin-${JAVA_VERSION}-jdk"
# Prepare repository (cleanup + validation)
@@ -5596,7 +5604,7 @@ EOF
if [[ -n "$CURRENT_VERSION" ]]; then
# Get available distro version
local DISTRO_VERSION=""
DISTRO_VERSION=$(apt-cache policy mariadb-server 2>/dev/null | grep -E "Candidate:" | awk '{print $2}' | grep -oP '^\d+:\K\d+\.\d+\.\d+' || echo "")
DISTRO_VERSION=$(apt-cache policy mariadb-server 2>/dev/null | awk '/Candidate:/ {sub(/^[0-9]+:/, "", $2); print $2}' || echo "")
if [[ -n "$DISTRO_VERSION" ]]; then
# Compare versions - if current is higher, keep it
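The single awk finds the Candidate line and drops the Debian epoch prefix in one process. A sketch against canned `apt-cache policy` output (the version string is fabricated):

```shell
policy=$'mariadb-server:\n  Installed: (none)\n  Candidate: 1:10.11.6-0+deb12u1\n  Version table:'
# sub() strips the leading "1:" epoch from $2 before printing.
ver=$(printf '%s\n' "$policy" | awk '/Candidate:/ {sub(/^[0-9]+:/, "", $2); print $2}')
```

Worth noting: this keeps the full Debian revision (`10.11.6-0+deb12u1`), whereas the old `grep -oP '^\d+:\K\d+\.\d+\.\d+'` stopped at `10.11.6`, so the version comparison downstream has to tolerate the suffix.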
@@ -5997,8 +6005,8 @@ function setup_mysql() {
local MYSQL_VERSION="${MYSQL_VERSION:-8.0}"
local USE_MYSQL_REPO="${USE_MYSQL_REPO:-true}"
local DISTRO_ID DISTRO_CODENAME
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
DISTRO_ID=$(get_os_info id)
DISTRO_CODENAME=$(get_os_info codename)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
@@ -6344,7 +6352,7 @@ function setup_nodejs() {
# Check if the module is already installed
if $STD npm list -g --depth=0 "$MODULE_NAME" 2>&1 | grep -q "$MODULE_NAME@"; then
MODULE_INSTALLED_VERSION="$(npm list -g --depth=0 "$MODULE_NAME" 2>&1 | grep "$MODULE_NAME@" | awk -F@ '{print $2}' 2>/dev/null | tr -d '[:space:]' || echo '')"
MODULE_INSTALLED_VERSION="$(npm list -g --depth=0 "$MODULE_NAME" 2>&1 | awk -F@ -v mod="$MODULE_NAME" '$0 ~ mod"@" {gsub(/[[:space:]]/, "", $2); print $2}' || echo '')"
if [[ "$MODULE_REQ_VERSION" != "latest" && "$MODULE_REQ_VERSION" != "$MODULE_INSTALLED_VERSION" ]]; then
msg_info "Updating $MODULE_NAME from v$MODULE_INSTALLED_VERSION to v$MODULE_REQ_VERSION"
if ! $STD npm install -g "${MODULE_NAME}@${MODULE_REQ_VERSION}" 2>/dev/null; then
@@ -6415,8 +6423,8 @@ function setup_php() {
local PHP_APACHE="${PHP_APACHE:-NO}"
local PHP_FPM="${PHP_FPM:-NO}"
local DISTRO_ID DISTRO_CODENAME
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
DISTRO_ID=$(get_os_info id)
DISTRO_CODENAME=$(get_os_info codename)
# Parse version for compatibility checks
local PHP_MAJOR="${PHP_VERSION%%.*}"
@@ -6463,7 +6471,7 @@ function setup_php() {
local FILTERED_MODULES=""
IFS=',' read -ra ALL_MODULES <<<"$COMBINED_MODULES"
for mod in "${ALL_MODULES[@]}"; do
mod=$(echo "$mod" | tr -d '[:space:]')
mod="${mod//[[:space:]]/}"
[[ -z "$mod" ]] && continue
# Skip if it's a known built-in module
@@ -6480,7 +6488,7 @@ function setup_php() {
done
# Deduplicate
COMBINED_MODULES=$(echo "$FILTERED_MODULES" | tr ',' '\n' | awk '!seen[$0]++' | paste -sd, -)
COMBINED_MODULES=$(awk -v RS=',' '!seen[$0]++{printf "%s%s", sep, $0; sep=","}' <<<"$FILTERED_MODULES")
# Get current PHP-CLI version
local CURRENT_PHP=""
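`awk -v RS=','` treats each module as its own record, so the four-process `tr|awk|paste` chain collapses into one dedup pass. A sketch — it feeds the list via `printf '%s'` because a here-string's trailing newline would ride along on the last record:

```shell
modules="curl,gd,curl,mbstring,gd,curl"
# seen[] drops repeats; sep defers the comma until a second record prints.
deduped=$(printf '%s' "$modules" | awk -v RS=',' '!seen[$0]++{printf "%s%s", sep, $0; sep=","}')
```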
@@ -6549,7 +6557,7 @@ EOF
IFS=',' read -ra MODULES <<<"$COMBINED_MODULES"
for mod in "${MODULES[@]}"; do
mod=$(echo "$mod" | tr -d '[:space:]')
mod="${mod//[[:space:]]/}"
[[ -z "$mod" ]] && continue
local pkg_name="php${PHP_VERSION}-${mod}"
@@ -6728,8 +6736,8 @@ setup_postgresql() {
local PG_MODULES="${PG_MODULES:-}"
local USE_PGDG_REPO="${USE_PGDG_REPO:-true}"
local DISTRO_ID DISTRO_CODENAME
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
DISTRO_ID=$(get_os_info id)
DISTRO_CODENAME=$(get_os_info codename)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
@@ -7572,8 +7580,8 @@ EOF
function setup_clickhouse() {
local CLICKHOUSE_VERSION="${CLICKHOUSE_VERSION:-latest}"
local DISTRO_ID DISTRO_CODENAME
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
DISTRO_ID=$(get_os_info id)
DISTRO_CODENAME=$(get_os_info codename)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
@@ -7787,7 +7795,7 @@ function setup_rust() {
# Check if already installed
if echo "$CRATE_LIST" | grep -q "^${NAME} "; then
INSTALLED_VER=$(echo "$CRATE_LIST" | grep "^${NAME} " | head -1 | awk '{print $2}' 2>/dev/null | tr -d 'v:' || echo '')
INSTALLED_VER=$(echo "$CRATE_LIST" | grep "^${NAME} " | head -1 | awk '{gsub(/[v:]/, "", $2); print $2}' || echo '')
if [[ -n "$VER" && "$VER" != "$INSTALLED_VER" ]]; then
msg_info "Upgrading $NAME from v$INSTALLED_VER to v$VER"

View File

@@ -150,7 +150,7 @@ function install() {
curl -fsSL "https://raw.githubusercontent.com/runtipi/runtipi/master/scripts/install.sh" -o "install.sh"
chmod +x install.sh
$STD ./install.sh
chmod 660 /opt/runtipi/state/settings.json 2>/dev/null || true
chmod 666 /opt/runtipi/state/settings.json 2>/dev/null || true
rm -f /opt/install.sh
msg_ok "Installed ${APP}"