Mirror of https://github.com/community-scripts/ProxmoxVE.git (synced 2025-12-14 11:13:27 +01:00)
Compare commits
90 Commits
2025-12-03
...
feature/gp
| SHA1 | Author | Date | |
|---|---|---|---|
| 7a16bb8505 | |||
| 009b743c33 | |||
| a4fd631a30 | |||
| 1cae72bdec | |||
| 531ecad4c7 | |||
| 9e8ab9de01 | |||
| 70557798ec | |||
| 4b554900ca | |||
| 9f84eae07f | |||
| ba5bdd94ad | |||
| d18baa2177 | |||
| 779c06f232 | |||
| 9e2b6524c4 | |||
| a328d7b8ba | |||
| dfa4d82951 | |||
| 5e5a8cd104 | |||
| 0da3231d3c | |||
| 5a6a30e594 | |||
| 97ac2520ec | |||
| bd5fe17228 | |||
| f42586c083 | |||
| fab5539c82 | |||
| 1ecb5bbeab | |||
| 64dbd4e9f7 | |||
| 2ba63b28f0 | |||
| 2a3b09b413 | |||
| d6ca5676df | |||
| 478194ba1a | |||
| d241c03b3d | |||
| 8cd037ff88 | |||
| fb15c13833 | |||
| e95541260b | |||
| a37ac14907 | |||
| 74a870bc5c | |||
| e0f65f2db8 | |||
| 01b246f375 | |||
| 53dd0efddd | |||
| f31978a503 | |||
| 4971bc46be | |||
| 28b894db2b | |||
| 6ec4aeb4f0 | |||
| 6409d64b93 | |||
| 08cb3cc76a | |||
| 170d44e2aa | |||
| b436ba548d | |||
| ed435d58d6 | |||
| dd5993d7ab | |||
| 0c2521c05e | |||
| 89595627a6 | |||
| a0e8ee2130 | |||
| e462aba7c2 | |||
| 338762b30b | |||
| bda700a6c3 | |||
| 976b9188a0 | |||
| 2ef2ce0a4b | |||
| 6dc73981d9 | |||
| f64bed06d0 | |||
| ca4de7bbe9 | |||
| 00ccef68bf | |||
| e0dc02a3e7 | |||
| 1c325f6885 | |||
| 64407dfccb | |||
| c7f04f379c | |||
| 86491da8b5 | |||
| 0d6ea7fa59 | |||
| a0c1243c94 | |||
| 2799201cfe | |||
| f971c077d6 | |||
| d2e9997d0d | |||
| 316082eaaa | |||
| a81c074228 | |||
| 33ce3fdbc5 | |||
| e73f35e2c8 | |||
| eb53af44c9 | |||
| 7ce32cc320 | |||
| 0e263ad54e | |||
| 85c20e3b25 | |||
| 268464781e | |||
| 775caae9c9 | |||
| 0f7c201b57 | |||
| 5780dc1532 | |||
| 5333e9f4b6 | |||
| 513e11569b | |||
| 9ed56051b9 | |||
| 4b496ebf2e | |||
| 530438a721 | |||
| 62201a0872 | |||
| 8e93f5cb1d | |||
| 838e663a6d | |||
| a826769899 |
14
.github/CONTRIBUTOR_AND_GUIDES/CODE-AUDIT.md
generated
vendored
@ -1,14 +0,0 @@
<div align="center">
<img src="https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo.png" height="100px" />
</div>
<h2><div align="center">Exploring the Scripts and Steps Involved in an Application LXC Installation</div></h2>

1) [adguard.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/ct/adguard.sh): This script collects system parameters. (Also holds the function to update the application.)
2) [build.func](https://github.com/community-scripts/ProxmoxVE/blob/main/misc/build.func): Adds user settings and integrates collected information.
3) [create_lxc.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/misc/create_lxc.sh): Constructs the LXC container.
4) [adguard-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/install/adguard-install.sh): Executes functions from [install.func](https://github.com/community-scripts/ProxmoxVE/blob/main/misc/install.func), and installs the application.
5) [adguard.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/ct/adguard.sh) (again): To display the completion message.

The installation process uses reusable scripts: [build.func](https://github.com/community-scripts/ProxmoxVE/blob/main/misc/build.func), [create_lxc.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/misc/create_lxc.sh), and [install.func](https://github.com/community-scripts/ProxmoxVE/blob/main/misc/install.func), which are not specific to any particular application.

To gain a better understanding, focus on reviewing [adguard-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/install/adguard-install.sh). This script contains the commands and configurations for installing and configuring AdGuard Home within the LXC container.
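To make the flow above concrete, here is a minimal, illustrative sketch of what step 1 looks like in practice: the `ct/` script sources `build.func` and hands control to the shared helpers. The structure follows the AppName.sh template later in this document; the AdGuard resource values are assumptions for illustration only, not the real `adguard.sh` defaults.

```bash
#!/usr/bin/env bash
# Illustrative sketch only - mirrors the ct/AppName.sh template further down.
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)

APP="AdGuard"                      # example values, not the real adguard.sh defaults
var_tags="${var_tags:-adblock}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-512}"
var_disk="${var_disk:-2}"
var_os="${var_os:-debian}"
var_version="${var_version:-12}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"   # ASCII banner
variables            # process input and prepare variables
color                # icons, colors and formatting
catch_errors         # enable error handling

start                # whiptail dialogue collecting user settings (build.func)
build_container      # integrates settings, builds the LXC (create_lxc.sh) and runs the install script
description          # sets the container description; the completion message follows
```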
129
.github/CONTRIBUTOR_AND_GUIDES/CONTRIBUTING.md
generated
vendored
@ -1,129 +0,0 @@

# Community Scripts Contribution Guide

## **Welcome to the community-scripts Repository!**

📜 These documents outline the essential coding standards for all our scripts and JSON files. Adhering to these standards ensures that our codebase remains consistent, readable, and maintainable. By following these guidelines, we can improve collaboration, reduce errors, and enhance the overall quality of our project.

### Why Coding Standards Matter

Coding standards are crucial for several reasons:

1. **Consistency**: Consistent code is easier to read, understand, and maintain. It helps new team members quickly get up to speed and reduces the learning curve.
2. **Readability**: Clear and well-structured code is easier to debug and extend. It allows developers to quickly identify and fix issues.
3. **Maintainability**: Code that follows a standard structure is easier to refactor and update. It ensures that changes can be made with minimal risk of introducing new bugs.
4. **Collaboration**: When everyone follows the same standards, it becomes easier to collaborate on code. It reduces friction and misunderstandings during code reviews and merges.

### Scope of These Documents

These documents cover the coding standards for the following types of files in our project:

- **`install/$AppName-install.sh` Scripts**: These scripts are responsible for the installation of applications.
- **`ct/$AppName.sh` Scripts**: These scripts handle the creation and updating of containers.
- **`frontend/public/json/$AppName.json`**: These files store structured data and are used for the website.

Each section provides detailed guidelines on various aspects of coding, including shebang usage, comments, variable naming, function naming, indentation, error handling, command substitution, quoting, script structure, and logging. Additionally, examples are provided to illustrate the application of these standards.

By following the coding standards outlined in this document, we ensure that our scripts and JSON files are of high quality, making our project more robust and easier to manage. Please refer to this guide whenever you create or update scripts and JSON files to maintain a high standard of code quality across the project. 📚🔍

Let's work together to keep our codebase clean, efficient, and maintainable! 💪🚀

## Getting Started

Before contributing, please ensure that you have the following setup:

1. **Visual Studio Code** (recommended for script development)
2. **Recommended VS Code Extensions:**
   - [Shell Syntax](https://marketplace.visualstudio.com/items?itemName=bmalehorn.shell-syntax)
   - [ShellCheck](https://marketplace.visualstudio.com/items?itemName=timonwong.shellcheck)
   - [Shell Format](https://marketplace.visualstudio.com/items?itemName=foxundermoon.shell-format)

### Important Notes

- Use [AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.sh) and [AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh) as templates when creating new scripts. The final version of the script (the one you will push for review) must have all comments removed, except the ones in the file header.

---

# 🚀 The Application Script (ct/AppName.sh)

- You can find all coding standards, as well as the structure for this file, [here](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.md).
- These scripts are responsible for container creation, setting the necessary variables and handling the update of the application once installed.

---

# 🛠 The Installation Script (install/AppName-install.sh)

- You can find all coding standards, as well as the structure for this file, [here](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.md).
- These scripts are responsible for the installation of the application.

---

## 🚀 Building Your Own Scripts

Start with the [template script](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh).

---

## 🤝 Contribution Process

All PRs related to new scripts should be made against our Dev repository first, where we can test the scripts before they are pushed and merged into the official repository.

**Our Dev repo is `https://github.com/community-scripts/ProxmoxVED`**

You will need to adjust the paths mentioned further down this document to match the repo you're pushing the scripts to.

### 1. Fork the repository
Fork to your GitHub account.

### 2. Clone your fork on your local environment
```bash
git clone https://github.com/yourUserName/ForkName
```

### 3. Create a new branch
```bash
git switch -c your-feature-branch
```

### 4. Change paths in build.func, install.func and AppName.sh
To be able to develop from your own branch, you need to change:\
`https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main`\
to\
`https://raw.githubusercontent.com/[USER]/[REPOSITORY]/refs/heads/[BRANCH]`\
in the following files:

`misc/build.func`\
`misc/install.func`\
`ct/AppName.sh`

Example: `https://raw.githubusercontent.com/tremor021/ProxmoxVE/refs/heads/testbranch`

You also need to change:\
`https://raw.githubusercontent.com/community-scripts/ProxmoxVE/raw/main`\
to\
`https://raw.githubusercontent.com/[USER]/[REPOSITORY]/raw/[BRANCH]`\
in `misc/install.func` in order for the `update` shell command to work.\
These changes apply only while writing and testing your scripts. Before opening a Pull Request, change all of the above-mentioned paths in `misc/build.func`, `misc/install.func` and `ct/AppName.sh` back to the original paths.
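As a convenience, the swap (and the later revert) can be scripted. The following is only a sketch, assuming GNU `sed`, a checkout of your fork at the repository root, and that the three placeholder values are replaced with your own:

```bash
# Re-point the raw URLs at your fork/branch while developing (illustrative helper, not part of the repo).
GHUSER="yourUserName" GHREPO="ForkName" GHBRANCH="your-feature-branch"
sed -i \
  -e "s|raw.githubusercontent.com/community-scripts/ProxmoxVE/main|raw.githubusercontent.com/${GHUSER}/${GHREPO}/refs/heads/${GHBRANCH}|g" \
  -e "s|raw.githubusercontent.com/community-scripts/ProxmoxVE/raw/main|raw.githubusercontent.com/${GHUSER}/${GHREPO}/raw/${GHBRANCH}|g" \
  misc/build.func misc/install.func ct/AppName.sh

# Before opening the Pull Request, restore the original URLs, e.g.:
git checkout -- misc/build.func misc/install.func ct/AppName.sh
```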
### 5. Commit changes (without build.func and install.func!)
```bash
git commit -m "Your commit message"
```

### 6. Push to your fork
```bash
git push origin your-feature-branch
```

### 7. Create a Pull Request
Open a Pull Request from your feature branch to the main branch on the Dev repository. You must only include your **$AppName.sh**, **$AppName-install.sh** and **$AppName.json** files in the pull request.
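If your working tree also contains the temporarily re-pointed `misc/*.func` files, one way to keep the pull request limited to the three expected files is to stage them explicitly. This is only a sketch; `myapp` stands in for your actual AppName:

```bash
# Stage and push only the files that belong in the PR (replace "myapp" with your AppName).
git add ct/myapp.sh install/myapp-install.sh frontend/public/json/myapp.json
git commit -m "Add myapp"
git push origin your-feature-branch
```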
---

## 📚 Pages

- [CT Template: AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.sh)
- [Install Template: AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh)
- [JSON Template: AppName.json](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/json/AppName.json)
44
.github/CONTRIBUTOR_AND_GUIDES/USER_SUBMITTED_GUIDES.md
generated
vendored
@ -1,44 +0,0 @@
<div align="center">
<a href="#">
<img src="https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo.png" height="100px" />
</a>
</div>
<h2 align="center">User Submitted Guides</h2>

<sub>To contribute a guide on installing with Proxmox VE Helper Scripts, open a pull request that adds the guide to the `USER_SUBMITTED_GUIDES.md` file.</sub>

[Proxmox Automation with Proxmox Helper Scripts!](https://www.youtube.com/watch?v=kcpu4z5eSEU)

[Installing Home Assistant OS using Proxmox 8](https://community.home-assistant.io/t/installing-home-assistant-os-using-proxmox-8/201835)

[How To Separate Zigbee2MQTT From Home Assistant In Proxmox](https://smarthomescene.com/guides/how-to-separate-zigbee2mqtt-from-home-assistant-in-proxmox/)

[How To Install Home Assistant On Proxmox: The Easy Way](https://smarthomescene.com/guides/how-to-install-home-assistant-on-proxmox-the-easy-way/)

[Home Assistant: Installing InfluxDB (LXC)](https://www.derekseaman.com/2023/04/home-assistant-installing-influxdb-lxc.html)

[Home Assistant: Proxmox Quick Start Guide](https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html)

[Home Assistant: Installing Grafana (LXC) with Let’s Encrypt SSL](https://www.derekseaman.com/2023/04/home-assistant-installing-grafana-lxc.html)

[Proxmox: Plex LXC with Alder Lake Transcoding](https://www.derekseaman.com/2023/04/proxmox-plex-lxc-with-alder-lake-transcoding.html)

[How To Backup Home Assistant In Proxmox](https://smarthomescene.com/guides/how-to-backup-home-assistant-in-proxmox/)

[Running Frigate on Proxmox](https://www.homeautomationguy.io/blog/running-frigate-on-proxmox)

[Frigate VM on Proxmox with PCIe Coral TPU](https://www.derekseaman.com/2023/06/home-assistant-frigate-vm-on-proxmox-with-pcie-coral-tpu.html)

[Moving Home Assistant’s Database To MariaDB On Proxmox](https://smarthomescene.com/guides/moving-home-assistants-database-to-mariadb-on-proxmox/)

[How-to: Proxmox VE 7.4 to 8.0 Upgrade](https://www.derekseaman.com/2023/06/how-to-proxmox-7-4-to-8-0-upgrade.html)

[iGPU Transcoding In Proxmox with Jellyfin](https://www.youtube.com/watch?v=XAa_qpNmzZs)

[Proxmox + NetData](https://dbt3ch.com/books/proxmox-netdata-for-better-insights-and-notifications/page/proxmox-netdata-for-better-insights-and-notifications)

[Proxmox Homelab Series](https://blog.kye.dev/proxmox-series)

[The fastest installation of Docker and Portainer on Proxmox VE](https://lavr.site/en-fastest-install-docker-portainer-proxmox/)

[How To Setup Proxmox Backup Server Using Helper Scripts](https://youtu.be/6C2JOsrZZZw?si=kkrrcL_nLCDBJkOB)
287
.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.md
generated
vendored
@ -1,287 +0,0 @@
|
||||
# **AppName<span></span>.sh Scripts**
|
||||
|
||||
`AppName.sh` scripts are found in the `/ct` directory. These scripts are responsible for creating the container and updating the installed application. For this guide we take `/ct/snipeit.sh` as an example.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [**AppName.sh Scripts**](#appnamesh-scripts)
|
||||
- [Table of Contents](#table-of-contents)
|
||||
- [1. **File Header**](#1-file-header)
|
||||
- [1.1 **Shebang**](#11-shebang)
|
||||
- [1.2 **Import Functions**](#12-import-functions)
|
||||
- [1.3 **Metadata**](#13-metadata)
|
||||
- [2 **Variables and function import**](#2-variables-and-function-import)
|
||||
- [2.1 **Default Values**](#21-default-values)
|
||||
- [2.2 **📋 App output \& base settings**](#22--app-output--base-settings)
|
||||
- [2.3 **🛠 Core functions**](#23--core-functions)
|
||||
- [3 **Update function**](#3-update-function)
|
||||
- [3.1 **Function Header**](#31-function-header)
|
||||
- [3.2 **Check APP**](#32-check-app)
|
||||
- [3.3 **Check version**](#33-check-version)
|
||||
- [3.4 **Verbosity**](#34-verbosity)
|
||||
- [3.5 **Backups**](#35-backups)
|
||||
- [3.6 **Cleanup**](#36-cleanup)
|
||||
- [3.7 **No update function**](#37-no-update-function)
|
||||
- [4 **End of the script**](#4-end-of-the-script)
|
||||
- [5. **Contribution checklist**](#5-contribution-checklist)
|
||||
|
||||
## 1. **File Header**
|
||||
|
||||
### 1.1 **Shebang**
|
||||
|
||||
- Use `#!/usr/bin/env bash` as the shebang.
|
||||
|
||||
```bash
|
||||
#!/usr/bin/env bash
|
||||
```
|
||||
|
||||
### 1.2 **Import Functions**
|
||||
|
||||
- Import the build.func file.
|
||||
- When developing your own script, change the URL to your own repository.
|
||||
|
||||
> [!IMPORTANT]
|
||||
> You also need to change all appearances of this URL in `misc/build.func` and `misc/install.func`.
|
||||
|
||||
Example for development:
|
||||
|
||||
```bash
|
||||
source <(curl -s https://raw.githubusercontent.com/[USER]/[REPO]/refs/heads/[BRANCH]/misc/build.func)
|
||||
```
|
||||
|
||||
Final script:
|
||||
|
||||
```bash
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
|
||||
```
|
||||
|
||||
> [!CAUTION]
|
||||
> Before opening a Pull Request, change the URLs to point to the community-scripts repo.
|
||||
|
||||
### 1.3 **Metadata**
|
||||
|
||||
- Add clear comments for script metadata, including author, copyright, and license information.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: [YourUserName]
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: [SOURCE_URL]
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> - Add your username and source URL
|
||||
> - For existing scripts, add "| Co-Author [YourUserName]" after the current author
|
||||
> - Source is the URL of the GitHub repo containing the source files of the application you're installing (not the URL of your homepage or a blog)
|
||||
|
||||
---
|
||||
|
||||
## 2 **Variables and function import**
|
||||
>
|
||||
> [!NOTE]
|
||||
> You need to have all this set in your script, otherwise it will not work!
|
||||
|
||||
### 2.1 **Default Values**
|
||||
|
||||
- This section sets the default values for the container.
|
||||
- `APP` needs to be set to the application name and must be equal to the filenames of your scripts.
|
||||
- `var_tags`: You can set tags for the CT which show up in the Proxmox UI. Don't overdo it!
|
||||
|
||||
>[!NOTE]
|
||||
>Description for all Default Values
|
||||
>
|
||||
>| Variable | Description | Notes |
|
||||
>|----------|-------------|-------|
|
||||
>| `APP` | Application name | Must match ct/AppName.sh |
|
||||
>| `var_tags` | Proxmox display tags, no spaces, separated by `;` | Limit the number to 2 |
|
||||
>| `var_cpu` | CPU cores | Number of cores |
|
||||
>| `var_ram` | RAM | In MB |
|
||||
>| `var_disk` | Disk capacity | In GB |
|
||||
>| `var_os` | Operating system | alpine, debian, ubuntu |
|
||||
>| `var_version` | OS version | e.g., 3.20, 11, 12, 20.04 |
|
||||
>| `var_unprivileged` | Container type | 1 = Unprivileged, 0 = Privileged |
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
APP="SnipeIT"
|
||||
var_tags="${var_tags:-asset-management;foss}"
|
||||
var_cpu="${var_cpu:-2}"
|
||||
var_ram="${var_ram:-2048}"
|
||||
var_disk="${var_disk:-4}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-12}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
```
|
||||
|
||||
## 2.2 **📋 App output & base settings**
|
||||
|
||||
```bash
|
||||
header_info "$APP"
|
||||
```
|
||||
- `header_info`: Generates ASCII header for APP
|
||||
|
||||
## 2.3 **🛠 Core functions**
|
||||
|
||||
```bash
|
||||
variables
|
||||
color
|
||||
catch_errors
|
||||
```
|
||||
|
||||
- `variables`: Processes input and prepares variables
|
||||
- `color`: Sets icons, colors, and formatting
|
||||
- `catch_errors`: Enables error handling
|
||||
|
||||
---
|
||||
|
||||
## 3 **Update function**
|
||||
|
||||
### 3.1 **Function Header**
|
||||
|
||||
- If applicable, write a function that updates the application and the OS in the container.
|
||||
- Each update function starts with the same code:
|
||||
|
||||
```bash
|
||||
function update_script() {
|
||||
header_info
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
```
|
||||
|
||||
### 3.2 **Check APP**
|
||||
|
||||
- Before doing anything update-wise, check if the app is installed in the container.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
if [[ ! -d /opt/snipe-it ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
```
|
||||
|
||||
### 3.3 **Check version**
|
||||
|
||||
- Before updating, check if a new version exists.
|
||||
- We use the `${APPLICATION}_version.txt` file created in `/opt` during the install to compare new versions against the currently installed version.
|
||||
|
||||
Example with a Github Release:
|
||||
|
||||
```bash
|
||||
RELEASE=$(curl -fsSL https://api.github.com/repos/snipe/snipe-it/releases/latest | grep "tag_name" | awk '{print substr($2, 3, length($2)-4) }')
|
||||
if [[ ! -f /opt/${APP}_version.txt ]] || [[ "${RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]]; then
|
||||
msg_info "Updating ${APP} to v${RELEASE}"
|
||||
#DO UPDATE
|
||||
else
|
||||
msg_ok "No update required. ${APP} is already at v${RELEASE}."
|
||||
fi
|
||||
exit
|
||||
}
|
||||
```
|
||||
|
||||
### 3.4 **Verbosity**
|
||||
|
||||
- Use the appropriate flag (**-q** in the examples) for a command to suppress its output.
|
||||
Example:
|
||||
|
||||
```bash
|
||||
wget -q
|
||||
unzip -q
|
||||
```
|
||||
|
||||
- If a command does not come with this functionality, use `$STD` to suppress its output.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
$STD php artisan migrate --force
|
||||
$STD php artisan config:clear
|
||||
```
|
||||
|
||||
### 3.5 **Backups**
|
||||
|
||||
- Backup user data if necessary.
|
||||
- Move all user data back into the directory when the update is finished.
|
||||
|
||||
>[!NOTE]
|
||||
>This is not meant to be a permanent backup
|
||||
|
||||
Example backup:
|
||||
|
||||
```bash
|
||||
mv /opt/snipe-it /opt/snipe-it-backup
|
||||
```
|
||||
|
||||
Example config restore:
|
||||
|
||||
```bash
|
||||
cp /opt/snipe-it-backup/.env /opt/snipe-it/.env
|
||||
cp -r /opt/snipe-it-backup/public/uploads/ /opt/snipe-it/public/uploads/
|
||||
cp -r /opt/snipe-it-backup/storage/private_uploads /opt/snipe-it/storage/private_uploads
|
||||
```
|
||||
|
||||
### 3.6 **Cleanup**
|
||||
|
||||
- Do not forget to remove any temporary files/folders such as zip-files or temporary backups.
|
||||
Example:
|
||||
|
||||
```bash
|
||||
rm -rf /opt/v${RELEASE}.zip
|
||||
rm -rf /opt/snipe-it-backup
|
||||
```
|
||||
|
||||
### 3.7 **No update function**
|
||||
|
||||
- In case you cannot provide an update function, use the following code to provide user feedback.
|
||||
|
||||
```bash
|
||||
function update_script() {
|
||||
header_info
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
if [[ ! -d /opt/snipeit ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
msg_error "Currently we don't provide an update function for this ${APP}."
|
||||
exit
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4 **End of the script**
|
||||
|
||||
- `start`: Launches Whiptail dialogue
|
||||
- `build_container`: Collects and integrates user settings
|
||||
- `description`: Sets LXC container description
|
||||
- With `echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"` you can point the user to the IP:PORT/folder needed to access the app.
|
||||
|
||||
```bash
|
||||
start
|
||||
build_container
|
||||
description
|
||||
|
||||
msg_ok "Completed Successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}${CL}"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. **Contribution checklist**
|
||||
|
||||
- [ ] Shebang is correctly set (`#!/usr/bin/env bash`).
|
||||
- [ ] Correct link to *build.func*
|
||||
- [ ] Metadata (author, license) is included at the top.
|
||||
- [ ] Variables follow naming conventions.
|
||||
- [ ] Update function exists.
|
||||
- [ ] Update function checks if the app is installed and whether a new version is available.
|
||||
- [ ] Update function cleans up temporary files.
|
||||
- [ ] Script ends with a helpful message for the user to reach the application.
|
||||
86
.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.sh
generated
vendored
@ -1,86 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2025 community-scripts ORG
# Author: [YourUserName]
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: [SOURCE_URL]

# App Default Values
# Name of the app (e.g. Google, Adventurelog, Apache-Guacamole)
APP="[APP_NAME]"
# Tags for Proxmox VE, maximum 2, no spaces allowed, separated by a semicolon ; (e.g. database | adblock;dhcp)
var_tags="${var_tags:-[TAGS]}"
# Number of cores (1-X) (e.g. 4) - default is 2
var_cpu="${var_cpu:-[CPU]}"
# Amount of used RAM in MB (e.g. 2048 or 4096)
var_ram="${var_ram:-[RAM]}"
# Amount of used disk space in GB (e.g. 4 or 10)
var_disk="${var_disk:-[DISK]}"
# Default OS (e.g. debian, ubuntu, alpine)
var_os="${var_os:-[OS]}"
# Default OS version (e.g. 12 for debian, 24.04 for ubuntu, 3.20 for alpine)
var_version="${var_version:-[VERSION]}"
# 1 = unprivileged container, 0 = privileged container
var_unprivileged="${var_unprivileged:-[UNPRIVILEGED]}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources

  # Check if installation is present | -f for file, -d for folder
  if [[ ! -f [INSTALLATION_CHECK_PATH] ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi

  # Crawling the new version and checking whether an update is required
  RELEASE=$(curl -fsSL [RELEASE_URL] | [PARSE_RELEASE_COMMAND])
  if [[ ! -f /opt/${APP}_version.txt ]] || [[ "${RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]]; then
    # Stopping Services
    msg_info "Stopping $APP"
    systemctl stop [SERVICE_NAME]
    msg_ok "Stopped $APP"

    # Creating Backup
    msg_info "Creating Backup"
    tar -czf "/opt/${APP}_backup_$(date +%F).tar.gz" [IMPORTANT_PATHS]
    msg_ok "Backup Created"

    # Execute Update
    msg_info "Updating $APP to v${RELEASE}"
    [UPDATE_COMMANDS]
    msg_ok "Updated $APP to v${RELEASE}"

    # Starting Services
    msg_info "Starting $APP"
    systemctl start [SERVICE_NAME]
    msg_ok "Started $APP"

    # Cleaning up
    msg_info "Cleaning Up"
    rm -rf [TEMP_FILES]
    msg_ok "Cleanup Completed"

    # Last Action
    echo "${RELEASE}" >/opt/${APP}_version.txt
    msg_ok "Update Successful"
  else
    msg_ok "No update required. ${APP} is already at v${RELEASE}"
  fi
  exit
}

start
build_container
description

msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:[PORT]${CL}"
354
.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.md
generated
vendored
@ -1,354 +0,0 @@
|
||||
|
||||
# **AppName<span></span>-install.sh Scripts**
|
||||
|
||||
`AppName-install.sh` scripts are found in the `/install` directory. These scripts are responsible for the installation of the application. For this guide we take `/install/snipeit-install.sh` as an example.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [**AppName-install.sh Scripts**](#appname-installsh-scripts)
|
||||
- [Table of Contents](#table-of-contents)
|
||||
- [1. **File header**](#1-file-header)
|
||||
- [1.1 **Shebang**](#11-shebang)
|
||||
- [1.2 **Comments**](#12-comments)
|
||||
- [1.3 **Variables and function import**](#13-variables-and-function-import)
|
||||
- [2. **Variable naming and management**](#2-variable-naming-and-management)
|
||||
- [2.1 **Naming conventions**](#21-naming-conventions)
|
||||
- [3. **Dependencies**](#3-dependencies)
|
||||
- [3.1 **Install all at once**](#31-install-all-at-once)
|
||||
- [3.2 **Collapse dependencies**](#32-collapse-dependencies)
|
||||
- [4. **Paths to application files**](#4-paths-to-application-files)
|
||||
- [5. **Version management**](#5-version-management)
|
||||
- [5.1 **Install the latest release**](#51-install-the-latest-release)
|
||||
- [5.2 **Save the version for update checks**](#52-save-the-version-for-update-checks)
|
||||
- [6. **Input and output management**](#6-input-and-output-management)
|
||||
- [6.1 **User feedback**](#61-user-feedback)
|
||||
- [6.2 **Verbosity**](#62-verbosity)
|
||||
- [7. **String/File Manipulation**](#7-stringfile-manipulation)
|
||||
- [7.1 **File Manipulation**](#71-file-manipulation)
|
||||
- [8. **Security practices**](#8-security-practices)
|
||||
- [8.1 **Password generation**](#81-password-generation)
|
||||
- [8.2 **File permissions**](#82-file-permissions)
|
||||
- [9. **Service Configuration**](#9-service-configuration)
|
||||
- [9.1 **Configuration files**](#91-configuration-files)
|
||||
- [9.2 **Credential management**](#92-credential-management)
|
||||
- [9.3 **Environment files**](#93-environment-files)
|
||||
- [9.4 **Services**](#94-services)
|
||||
- [10. **Cleanup**](#10-cleanup)
|
||||
- [10.1 **Remove temporary files**](#101-remove-temporary-files)
|
||||
- [10.2 **Autoremove and autoclean**](#102-autoremove-and-autoclean)
|
||||
- [11. **Best Practices Checklist**](#11-best-practices-checklist)
|
||||
- [Example: High-Level Script Flow](#example-high-level-script-flow)
|
||||
|
||||
## 1. **File header**
|
||||
|
||||
### 1.1 **Shebang**
|
||||
|
||||
- Use `#!/usr/bin/env bash` as the shebang.
|
||||
|
||||
```bash
|
||||
#!/usr/bin/env bash
|
||||
```
|
||||
|
||||
### 1.2 **Comments**
|
||||
|
||||
- Add clear comments for script metadata, including author, copyright, and license information.
|
||||
- Use meaningful inline comments to explain complex commands or logic.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: [YourUserName]
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: [SOURCE_URL]
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> - Add your username
|
||||
> - When updating/reworking scripts, add "| Co-Author [YourUserName]"
|
||||
> - Source is the URL of the GitHub repo containing the source files of the application you're installing (not the URL of your homepage or a blog)
|
||||
|
||||
### 1.3 **Variables and function import**
|
||||
|
||||
- This section adds support for all needed functions and variables.
|
||||
|
||||
```bash
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
verb_ip6
|
||||
catch_errors
|
||||
setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 2. **Variable naming and management**
|
||||
|
||||
### 2.1 **Naming conventions**
|
||||
|
||||
- Use uppercase names for constants and environment variables.
|
||||
- Use lowercase names for local script variables.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
DB_NAME=snipeit_db # Environment-like variable (constant)
|
||||
db_user="snipeit" # Local variable
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. **Dependencies**
|
||||
|
||||
### 3.1 **Install all at once**
|
||||
|
||||
- Install all dependencies with a single command if possible
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
$STD apt-get install -y \
|
||||
composer \
|
||||
git \
|
||||
nginx
|
||||
```
|
||||
|
||||
### 3.2 **Collapse dependencies**
|
||||
|
||||
Collapse dependencies to keep the code readable.
|
||||
|
||||
Example:
|
||||
Use
|
||||
|
||||
```bash
|
||||
php8.2-{bcmath,common,ctype}
|
||||
```
|
||||
|
||||
instead of
|
||||
|
||||
```bash
|
||||
php8.2-bcmath php8.2-common php8.2-ctype
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. **Paths to application files**
|
||||
|
||||
If possible, install the app and all necessary files in `/opt/`.
|
||||
|
||||
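For example, the `AppName-install.sh` template shown later in this document does exactly this, unpacking the downloaded release straight into `/opt` and keeping the version file alongside it:

```bash
mv "${APPLICATION}-${RELEASE}/" "/opt/${APPLICATION}"
echo "${RELEASE}" >/opt/"${APPLICATION}"_version.txt
```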
---
|
||||
|
||||
## 5. **Version management**
|
||||
|
||||
### 5.1 **Install the latest release**
|
||||
|
||||
- Always try to install the latest release.
|
||||
- Do not hardcode any version if not absolutely necessary
|
||||
|
||||
Example for a git release:
|
||||
|
||||
```bash
|
||||
RELEASE=$(curl -fsSL https://api.github.com/repos/snipe/snipe-it/releases/latest | grep "tag_name" | awk '{print substr($2, 3, length($2)-4) }')
|
||||
curl -fsSL "https://github.com/snipe/snipe-it/archive/refs/tags/v${RELEASE}.zip" -o "v${RELEASE}.zip"
|
||||
```
|
||||
|
||||
### 5.2 **Save the version for update checks**
|
||||
|
||||
- Write the installed version into a file.
|
||||
- This is used by the update function in **AppName.sh** to check whether an update is needed.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
echo "${RELEASE}" >/opt/${APPLICATION}_version.txt
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. **Input and output management**
|
||||
|
||||
### 6.1 **User feedback**
|
||||
|
||||
- Use standard functions like `msg_info`, `msg_ok` or `msg_error` to print status messages.
|
||||
- Each `msg_info` must be followed with a `msg_ok` before any other output is made.
|
||||
- Display meaningful progress messages at key stages.
|
||||
- Taking user input with `read -p` must be outside of `msg_info`...`msg_ok` code block
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt-get install -y ...
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
read -p "${TAB3}Do you wish to enable HTTPS mode? (y/N): " httpschoice
|
||||
```
|
||||
|
||||
### 6.2 **Verbosity**
|
||||
|
||||
- Use the appropriate flag (**-q** in the examples) for a command to suppress its output.
|
||||
Example:
|
||||
|
||||
```bash
|
||||
wget -q
|
||||
unzip -q
|
||||
```
|
||||
|
||||
- If a command does not come with such functionality, use `$STD` (a custom standard redirection variable) to manage output verbosity.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
$STD apt-get install -y nginx
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. **String/File Manipulation**
|
||||
|
||||
### 7.1 **File Manipulation**
|
||||
|
||||
- Use `sed` to replace placeholder values in configuration files.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
sed -i -e "s|^DB_DATABASE=.*|DB_DATABASE=$DB_NAME|" \
|
||||
-e "s|^DB_USERNAME=.*|DB_USERNAME=$DB_USER|" \
|
||||
-e "s|^DB_PASSWORD=.*|DB_PASSWORD=$DB_PASS|" .env
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 8. **Security practices**
|
||||
|
||||
### 8.1 **Password generation**
|
||||
|
||||
- Use `openssl` to generate random passwords.
|
||||
- Use only alphanumeric characters to avoid introducing unexpected behaviour.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
DB_PASS=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | head -c13)
|
||||
```
|
||||
|
||||
### 8.2 **File permissions**
|
||||
|
||||
Explicitly set secure ownership and permissions for sensitive files.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
chown -R www-data: /opt/snipe-it
|
||||
chmod -R 755 /opt/snipe-it
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 9. **Service Configuration**
|
||||
|
||||
### 9.1 **Configuration files**
|
||||
|
||||
Use `cat <<EOF` to write configuration files in a clean and readable way.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
cat <<EOF >/etc/nginx/conf.d/snipeit.conf
|
||||
server {
|
||||
listen 80;
|
||||
root /opt/snipe-it/public;
|
||||
index index.php;
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
### 9.2 **Credential management**
|
||||
|
||||
Store the generated credentials in a file.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
USERNAME=username
|
||||
PASSWORD=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | head -c13)
|
||||
{
|
||||
echo "Application-Credentials"
|
||||
echo "Username: $USERNAME"
|
||||
echo "Password: $PASSWORD"
|
||||
} >> ~/application.creds
|
||||
```
|
||||
|
||||
### 9.3 **Environment files**
|
||||
|
||||
Use `cat <<EOF` to write environment files in a clean and readable way.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
cat <<EOF >/path/to/.env
|
||||
VARIABLE="value"
|
||||
PORT=3000
|
||||
DB_NAME="${DB_NAME}"
|
||||
EOF
|
||||
```
|
||||
|
||||
### 9.4 **Services**
|
||||
|
||||
Enable affected services after configuration changes and start them right away.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
systemctl enable -q --now nginx
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 10. **Cleanup**
|
||||
|
||||
### 10.1 **Remove temporary files**
|
||||
|
||||
Remove temporary files and downloads after use.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
rm -rf /opt/v${RELEASE}.zip
|
||||
```
|
||||
|
||||
### 10.2 **Autoremove and autoclean**
|
||||
|
||||
Remove unused dependencies to reduce disk space usage.
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
apt-get -y autoremove
|
||||
apt-get -y autoclean
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 11. **Best Practices Checklist**
|
||||
|
||||
- [ ] Shebang is correctly set (`#!/usr/bin/env bash`).
|
||||
- [ ] Metadata (author, license) is included at the top.
|
||||
- [ ] Variables follow naming conventions.
|
||||
- [ ] Sensitive values are dynamically generated.
|
||||
- [ ] Files and services have proper permissions.
|
||||
- [ ] Script cleans up temporary files.
|
||||
|
||||
---
|
||||
|
||||
### Example: High-Level Script Flow
|
||||
|
||||
1. Dependencies installation
|
||||
2. Database setup
|
||||
3. Download and configure application
|
||||
4. Service configuration
|
||||
5. Final cleanup
|
||||
78
.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh
generated
vendored
@ -1,78 +0,0 @@
#!/usr/bin/env bash

# Copyright (c) 2021-2025 community-scripts ORG
# Author: [YourUserName]
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: [SOURCE_URL]

# Import Functions and Setup
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os

# Installing Dependencies
msg_info "Installing Dependencies"
$STD apt-get install -y \
  [PACKAGE_1] \
  [PACKAGE_2] \
  [PACKAGE_3]
msg_ok "Installed Dependencies"

# Template: MySQL Database
msg_info "Setting up Database"
DB_NAME=[DB_NAME]
DB_USER=[DB_USER]
DB_PASS=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | head -c13)
$STD mysql -u root -e "CREATE DATABASE $DB_NAME;"
$STD mysql -u root -e "CREATE USER '$DB_USER'@'localhost' IDENTIFIED WITH mysql_native_password AS PASSWORD('$DB_PASS');"
$STD mysql -u root -e "GRANT ALL ON $DB_NAME.* TO '$DB_USER'@'localhost'; FLUSH PRIVILEGES;"
{
  echo "${APPLICATION} Credentials"
  echo "Database User: $DB_USER"
  echo "Database Password: $DB_PASS"
  echo "Database Name: $DB_NAME"
} >>~/"$APP_NAME".creds
msg_ok "Set up Database"

# Setup App
msg_info "Setup ${APPLICATION}"
RELEASE=$(curl -fsSL https://api.github.com/repos/[REPO]/releases/latest | grep "tag_name" | awk '{print substr($2, 2, length($2)-3) }')
curl -fsSL -o "${RELEASE}.zip" "https://github.com/[REPO]/archive/refs/tags/${RELEASE}.zip"
unzip -q "${RELEASE}.zip"
mv "${APPLICATION}-${RELEASE}/" "/opt/${APPLICATION}"
#
#
#
echo "${RELEASE}" >/opt/"${APPLICATION}"_version.txt
msg_ok "Setup ${APPLICATION}"

# Creating Service (if needed)
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/"${APPLICATION}".service
[Unit]
Description=${APPLICATION} Service
After=network.target

[Service]
ExecStart=[START_COMMAND]
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now "${APPLICATION}"
msg_ok "Created Service"

motd_ssh
customize

# Cleanup
msg_info "Cleaning up"
rm -f "${RELEASE}".zip
$STD apt-get -y autoremove
$STD apt-get -y autoclean
msg_ok "Cleaned"
114
.github/autolabeler-config.json
generated
vendored
@ -4,9 +4,7 @@
|
||||
"fileStatus": "added",
|
||||
"includeGlobs": [
|
||||
"ct/**",
|
||||
"tools/**",
|
||||
"install/**",
|
||||
"misc/**",
|
||||
"turnkey/**",
|
||||
"vm/**"
|
||||
],
|
||||
@ -18,9 +16,7 @@
|
||||
"fileStatus": "modified",
|
||||
"includeGlobs": [
|
||||
"ct/**",
|
||||
"tools/**",
|
||||
"install/**",
|
||||
"misc/**",
|
||||
"turnkey/**",
|
||||
"vm/**"
|
||||
],
|
||||
@ -32,71 +28,27 @@
|
||||
"fileStatus": "removed",
|
||||
"includeGlobs": [
|
||||
"ct/**",
|
||||
"tools/**",
|
||||
"install/**",
|
||||
"misc/**",
|
||||
"turnkey/**",
|
||||
"vm/**"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"maintenance": [
|
||||
"vm": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"*.md"
|
||||
"vm/**"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"core": [
|
||||
"tools": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"misc/*.func",
|
||||
"misc/create_lxc.sh"
|
||||
],
|
||||
"excludeGlobs": [
|
||||
"misc/api.func"
|
||||
]
|
||||
}
|
||||
],
|
||||
"website": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"frontend/**"
|
||||
],
|
||||
"excludeGlobs": [
|
||||
"frontend/public/json/**"
|
||||
]
|
||||
}
|
||||
],
|
||||
"api": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"api/**",
|
||||
"misc/api.func"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"github": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
".github/**"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"json": [
|
||||
{
|
||||
"fileStatus": "modified",
|
||||
"includeGlobs": [
|
||||
"frontend/public/json/**"
|
||||
"tools/**"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
@ -119,11 +71,65 @@
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"vm": [
|
||||
"core": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"vm/**"
|
||||
"misc/*.func"
|
||||
],
|
||||
"excludeGlobs": [
|
||||
"misc/api.func"
|
||||
]
|
||||
}
|
||||
],
|
||||
"documentation": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"docs/**"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"github": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
".github/**",
|
||||
"README.md",
|
||||
"SECURITY.md",
|
||||
"LICENSE",
|
||||
"CHANGELOG.md"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"api": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"api/**",
|
||||
"misc/api.func"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
],
|
||||
"website": [
|
||||
{
|
||||
"fileStatus": null,
|
||||
"includeGlobs": [
|
||||
"frontend/**"
|
||||
],
|
||||
"excludeGlobs": [
|
||||
"frontend/public/json/**"
|
||||
]
|
||||
}
|
||||
],
|
||||
"json": [
|
||||
{
|
||||
"fileStatus": "modified",
|
||||
"includeGlobs": [
|
||||
"frontend/public/json/**"
|
||||
],
|
||||
"excludeGlobs": []
|
||||
}
|
||||
|
||||
88
.github/changelog-pr-config.json
generated
vendored
@ -42,9 +42,15 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "🧰 Maintenance",
|
||||
"title": "🗑️ Deleted Scripts",
|
||||
"labels": [
|
||||
"maintenance"
|
||||
"delete script"
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "💾 Core",
|
||||
"labels": [
|
||||
"core"
|
||||
],
|
||||
"subCategories": [
|
||||
{
|
||||
@ -69,30 +75,86 @@
|
||||
"notes": []
|
||||
},
|
||||
{
|
||||
"title": "📡 API",
|
||||
"title": "🔧 Refactor",
|
||||
"labels": [
|
||||
"api"
|
||||
"refactor"
|
||||
],
|
||||
"notes": []
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "🧰 Tools",
|
||||
"labels": [
|
||||
"tools"
|
||||
],
|
||||
"subCategories": [
|
||||
{
|
||||
"title": "🐞 Bug Fixes",
|
||||
"labels": [
|
||||
"bugfix"
|
||||
],
|
||||
"notes": []
|
||||
},
|
||||
{
|
||||
"title": "💾 Core",
|
||||
"title": "✨ New Features",
|
||||
"labels": [
|
||||
"core"
|
||||
"feature"
|
||||
],
|
||||
"notes": []
|
||||
},
|
||||
{
|
||||
"title": "📂 Github",
|
||||
"title": "💥 Breaking Changes",
|
||||
"labels": [
|
||||
"github"
|
||||
"breaking change"
|
||||
],
|
||||
"notes": []
|
||||
},
|
||||
{
|
||||
"title": "📝 Documentation",
|
||||
"title": "🔧 Refactor",
|
||||
"labels": [
|
||||
"maintenance"
|
||||
"refactor"
|
||||
],
|
||||
"notes": []
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "📚 Documentation",
|
||||
"labels": [
|
||||
"documentation"
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "📂 Github",
|
||||
"labels": [
|
||||
"github"
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "📡 API",
|
||||
"labels": [
|
||||
"api"
|
||||
],
|
||||
"subCategories": [
|
||||
{
|
||||
"title": "🐞 Bug Fixes",
|
||||
"labels": [
|
||||
"bugfix"
|
||||
],
|
||||
"notes": []
|
||||
},
|
||||
{
|
||||
"title": "✨ New Features",
|
||||
"labels": [
|
||||
"feature"
|
||||
],
|
||||
"notes": []
|
||||
},
|
||||
{
|
||||
"title": "💥 Breaking Changes",
|
||||
"labels": [
|
||||
"breaking change"
|
||||
],
|
||||
"notes": []
|
||||
},
|
||||
@ -142,7 +204,9 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"title": "❔ Unlabelled",
|
||||
"labels": []
|
||||
"title": "❔ Uncategorized",
|
||||
"labels": [
|
||||
"needs triage"
|
||||
]
|
||||
}
|
||||
]
|
||||
|
||||
33
.github/pull_request_template.md
generated
vendored
@ -1,27 +1,26 @@
|
||||
<!--🛑 New scripts must be submitted to [ProxmoxVED](https://github.com/community-scripts/ProxmoxVED) for testing.
|
||||
<!--🛑 New scripts must be submitted to [ProxmoxVED](https://github.com/community-scripts/ProxmoxVED) for testing.
|
||||
PRs without prior testing will be closed. -->
|
||||
## ✍️ Description
|
||||
|
||||
## ✍️ Description
|
||||
|
||||
## 🔗 Related Issue
|
||||
|
||||
## 🔗 Related PR / Issue
|
||||
Link: #
|
||||
Fixes #
|
||||
|
||||
## ✅ Prerequisites (**X** in brackets)
|
||||
|
||||
## ✅ Prerequisites (**X** in brackets)
|
||||
|
||||
- [ ] **Self-review completed** – Code follows project standards.
|
||||
- [ ] **Tested thoroughly** – Changes work as expected.
|
||||
- [ ] **No security risks** – No hardcoded secrets, unnecessary privilege escalations, or permission issues.
|
||||
- [ ] **Self-review completed** – Code follows project standards.
|
||||
- [ ] **Tested thoroughly** – Changes work as expected.
|
||||
- [ ] **No security risks** – No hardcoded secrets, unnecessary privilege escalations, or permission issues.
|
||||
|
||||
---
|
||||
|
||||
## 🛠️ Type of Change (**X** in brackets)
|
||||
## 🛠️ Type of Change (**X** in brackets)
|
||||
|
||||
- [ ] 🐞 **Bug fix** – Resolves an issue without breaking functionality.
|
||||
- [ ] ✨ **New feature** – Adds new, non-breaking functionality.
|
||||
- [ ] 💥 **Breaking change** – Alters existing functionality in a way that may require updates.
|
||||
- [ ] 🆕 **New script** – A fully functional and tested script or script set.
|
||||
- [ ] 🌍 **Website update** – Changes to website-related JSON files or metadata.
|
||||
- [ ] 🔧 **Refactoring / Code Cleanup** – Improves readability or maintainability without changing functionality.
|
||||
- [ ] 📝 **Documentation update** – Changes to `README`, `AppName.md`, `CONTRIBUTING.md`, or other docs.
|
||||
- [ ] 🐞 **Bug fix** – Resolves an issue without breaking functionality.
|
||||
- [ ] ✨ **New feature** – Adds new, non-breaking functionality.
|
||||
- [ ] 💥 **Breaking change** – Alters existing functionality in a way that may require updates.
|
||||
- [ ] 🆕 **New script** – A fully functional and tested script or script set.
|
||||
- [ ] 🌍 **Website update** – Changes to website-related JSON files or metadata.
|
||||
- [ ] 🔧 **Refactoring / Code Cleanup** – Improves readability or maintainability without changing functionality.
|
||||
- [ ] 📝 **Documentation update** – Changes to `README`, `AppName.md`, `CONTRIBUTING.md`, or other docs.
|
||||
|
||||
64
.github/workflows/autolabeler.yml
generated
vendored
@ -57,10 +57,10 @@ jobs:
|
||||
|
||||
if (shouldAddLabel) {
|
||||
labelsToAdd.add(label);
|
||||
if (label === "update script") {
|
||||
// Add specific sub-labels for tools
|
||||
if (label === "tools") {
|
||||
for (const prFile of prFiles) {
|
||||
const filename = prFile.filename;
|
||||
if (filename.startsWith("vm/")) labelsToAdd.add("vm");
|
||||
if (filename.startsWith("tools/addon/")) labelsToAdd.add("addon");
|
||||
if (filename.startsWith("tools/pve/")) labelsToAdd.add("pve-tool");
|
||||
}
|
||||
@ -68,38 +68,42 @@ jobs:
|
||||
}
|
||||
}
|
||||
|
||||
if (labelsToAdd.size < 2) {
|
||||
const templateLabelMappings = {
|
||||
"🐞 **Bug fix**": "bugfix",
|
||||
"✨ **New feature**": "feature",
|
||||
"💥 **Breaking change**": "breaking change",
|
||||
"🆕 **New script**": "new script",
|
||||
"🌍 **Website update**": "website", // handled special
|
||||
"🔧 **Refactoring / Code Cleanup**": "refactor",
|
||||
"📝 **Documentation update**": "documentation" // mapped to maintenance
|
||||
};
|
||||
// Always parse template checkboxes to add content-type labels (bugfix, feature, etc.)
|
||||
const templateLabelMappings = {
|
||||
"🐞 **Bug fix**": "bugfix",
|
||||
"✨ **New feature**": "feature",
|
||||
"💥 **Breaking change**": "breaking change",
|
||||
"🆕 **New script**": "new script",
|
||||
"🔧 **Refactoring / Code Cleanup**": "refactor",
|
||||
"📝 **Documentation update**": "documentation"
|
||||
};
|
||||
|
||||
for (const [checkbox, label] of Object.entries(templateLabelMappings)) {
|
||||
const escapedCheckbox = checkbox.replace(/([.*+?^=!:${}()|[\]\/\\])/g, "\\$1");
|
||||
const regex = new RegExp(`- \\[(x|X)\\]\\s*${escapedCheckbox}`, "i");
|
||||
for (const [checkbox, label] of Object.entries(templateLabelMappings)) {
|
||||
const escapedCheckbox = checkbox.replace(/([.*+?^=!:${}()|[\]\/\\])/g, "\\$1");
|
||||
const regex = new RegExp(`- \\[(x|X)\\]\\s*${escapedCheckbox}`, "i");
|
||||
|
||||
if (regex.test(prBody)) {
|
||||
if (label === "website") {
|
||||
const hasJson = prFiles.some((f) => f.filename.startsWith("frontend/public/json/"));
|
||||
const hasUpdateScript = labelsToAdd.has("update script");
|
||||
const hasContentLabel = ["bugfix", "feature", "refactor"].some((l) => labelsToAdd.has(l));
|
||||
|
||||
if (!(hasUpdateScript && hasContentLabel)) {
|
||||
labelsToAdd.add(hasJson ? "json" : "website");
|
||||
}
|
||||
} else if (label === "documentation") {
|
||||
labelsToAdd.add("maintenance");
|
||||
} else {
|
||||
labelsToAdd.add(label);
|
||||
}
|
||||
}
|
||||
if (regex.test(prBody)) {
|
||||
labelsToAdd.add(label);
|
||||
}
|
||||
}
|
||||
|
||||
// Handle website checkbox specially - only add if not already an update script with content label
|
||||
const websiteCheckbox = "🌍 **Website update**";
|
||||
const escapedWebsite = websiteCheckbox.replace(/([.*+?^=!:${}()|[\]\/\\])/g, "\\$1");
|
||||
const websiteRegex = new RegExp(`- \\[(x|X)\\]\\s*${escapedWebsite}`, "i");
|
||||
|
||||
if (websiteRegex.test(prBody)) {
|
||||
const hasJson = prFiles.some((f) => f.filename.startsWith("frontend/public/json/"));
|
||||
const hasUpdateScript = labelsToAdd.has("update script");
|
||||
const hasContentLabel = ["bugfix", "feature", "refactor"].some((l) => labelsToAdd.has(l));
|
||||
|
||||
// If it's an update script PR with json changes and a content label, skip adding website/json
|
||||
// The PR should be categorized as update script with the content label
|
||||
if (!(hasUpdateScript && hasJson && hasContentLabel)) {
|
||||
labelsToAdd.add(hasJson ? "json" : "website");
|
||||
}
|
||||
}
|
||||
|
||||
if (labelsToAdd.size === 0) {
|
||||
labelsToAdd.add("needs triage");
|
||||
}
|
||||
|
||||
33
.github/workflows/changelog-pr.yml
generated
vendored
@ -157,13 +157,31 @@ jobs:
|
||||
|
||||
let categorized = false;
|
||||
const priorityCategories = categorizedPRs.slice();
|
||||
|
||||
// Priority order for content-type labels (highest priority first)
|
||||
const subCategoryPriority = ["breaking change", "bugfix", "feature", "refactor"];
|
||||
|
||||
for (const category of priorityCategories) {
|
||||
if (categorized) break;
|
||||
if (category.labels.some(label => prLabels.includes(label))) {
|
||||
if (category.subCategories && category.subCategories.length > 0) {
|
||||
const subCategory = category.subCategories.find(sub =>
|
||||
sub.labels.some(label => prLabels.includes(label))
|
||||
);
|
||||
// Find subcategory by priority order instead of first match
|
||||
let subCategory = null;
|
||||
for (const priorityLabel of subCategoryPriority) {
|
||||
if (prLabels.includes(priorityLabel)) {
|
||||
subCategory = category.subCategories.find(sub =>
|
||||
sub.labels.includes(priorityLabel)
|
||||
);
|
||||
if (subCategory) break;
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback: check for any other subcategory match (api, github, json, etc.)
|
||||
if (!subCategory) {
|
||||
subCategory = category.subCategories.find(sub =>
|
||||
sub.labels.some(label => prLabels.includes(label))
|
||||
);
|
||||
}
|
||||
|
||||
if (subCategory) {
|
||||
subCategory.notes.push(prNote);
|
||||
@ -176,6 +194,15 @@ jobs:
|
||||
categorized = true;
|
||||
}
|
||||
}
|
||||
|
||||
// Fallback: Add to Uncategorized if no category matched
|
||||
if (!categorized) {
|
||||
const uncategorized = categorizedPRs.find(category =>
|
||||
category.title.includes("Uncategorized") || category.labels.includes("needs triage"));
|
||||
if (uncategorized) {
|
||||
uncategorized.notes.push(prNote);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
4
.github/workflows/validate-filenames.yml
generated
vendored
@ -51,10 +51,6 @@ jobs:
|
||||
|
||||
NON_COMPLIANT_FILES=""
|
||||
for FILE in $CHANGED_FILES; do
|
||||
# Skip File "misc/create_lxc.sh"
|
||||
if [[ "$FILE" == "misc/create_lxc.sh" ]]; then
|
||||
continue
|
||||
fi
|
||||
BASENAME=$(echo "$(basename "${FILE%.*}")")
|
||||
if [[ ! "$BASENAME" =~ ^[a-z0-9-]+$ ]]; then
|
||||
NON_COMPLIANT_FILES="$NON_COMPLIANT_FILES $FILE"
|
||||
|
||||
96
CHANGELOG.md
@ -10,8 +10,104 @@
|
||||
> [!CAUTION]
> Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit the project's popularity for potentially malicious purposes.
|
||||
|
||||
## 2025-12-07
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- wanderer: add meilisearch dumpless upgrade for database migration [@MickLesk](https://github.com/MickLesk) ([#9749](https://github.com/community-scripts/ProxmoxVE/pull/9749))
|
||||
|
||||
- #### 💥 Breaking Changes
|
||||
|
||||
- Refactor: Inventree (uses now ubuntu 24.04) [@MickLesk](https://github.com/MickLesk) ([#9752](https://github.com/community-scripts/ProxmoxVE/pull/9752))
|
||||
- Revert Zammad: use Debian 12 and dynamic APT source version [@MickLesk](https://github.com/MickLesk) ([#9750](https://github.com/community-scripts/ProxmoxVE/pull/9750))
|
||||
|
||||
### 💾 Core
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- tools.func: handle empty grep results in stop_all_services [@MickLesk](https://github.com/MickLesk) ([#9748](https://github.com/community-scripts/ProxmoxVE/pull/9748))
|
||||
- Remove Debian from GPU passthrough [@MickLesk](https://github.com/MickLesk) ([#9754](https://github.com/community-scripts/ProxmoxVE/pull/9754))
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- core: motd - dynamically read OS version on each login [@MickLesk](https://github.com/MickLesk) ([#9751](https://github.com/community-scripts/ProxmoxVE/pull/9751))
|
||||
|
||||
### 🌐 Website
|
||||
|
||||
- FAQ update [@tremor021](https://github.com/tremor021) ([#9742](https://github.com/community-scripts/ProxmoxVE/pull/9742))
|
||||
|
||||
## 2025-12-06
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- Update domain-locker-install.sh to enable auto-start after reboot [@alexindigo](https://github.com/alexindigo) ([#9715](https://github.com/community-scripts/ProxmoxVE/pull/9715))
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- InfluxDB: Remove InfluxData source list post-installation [@tremor021](https://github.com/tremor021) ([#9723](https://github.com/community-scripts/ProxmoxVE/pull/9723))
|
||||
- InfluxDB: Update InfluxDB repository key URL [@tremor021](https://github.com/tremor021) ([#9720](https://github.com/community-scripts/ProxmoxVE/pull/9720))
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- pin Portainer Update to CE Version only [@sgaert](https://github.com/sgaert) ([#9710](https://github.com/community-scripts/ProxmoxVE/pull/9710))
|
||||
|
||||
## 2025-12-05
|
||||
|
||||
### 🆕 New Scripts
|
||||
|
||||
- Endurain ([#9681](https://github.com/community-scripts/ProxmoxVE/pull/9681))
|
||||
- MeTube ([#9671](https://github.com/community-scripts/ProxmoxVE/pull/9671))
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- libretranslate: pin uv python to 3.12 (pytorch fix) [@MickLesk](https://github.com/MickLesk) ([#9699](https://github.com/community-scripts/ProxmoxVE/pull/9699))
|
||||
- alpine: (mariadb/postgresql): correct php-cgi path for php83 (adminer) [@MickLesk](https://github.com/MickLesk) ([#9698](https://github.com/community-scripts/ProxmoxVE/pull/9698))
|
||||
- fix(librespeed-rs): use correct service name [@jniles](https://github.com/jniles) ([#9683](https://github.com/community-scripts/ProxmoxVE/pull/9683))
|
||||
- NetVisor: fix daemon auto-config [@vhsdream](https://github.com/vhsdream) ([#9682](https://github.com/community-scripts/ProxmoxVE/pull/9682))
|
||||
- Improve NVIDIA device detection for container passthrough [@MickLesk](https://github.com/MickLesk) ([#9670](https://github.com/community-scripts/ProxmoxVE/pull/9670))
|
||||
- Fix AdventureLog installation failure: missing postgis extension permissions [@Copilot](https://github.com/Copilot) ([#9674](https://github.com/community-scripts/ProxmoxVE/pull/9674))
|
||||
- paperless: ASGI interface typo [@MickLesk](https://github.com/MickLesk) ([#9668](https://github.com/community-scripts/ProxmoxVE/pull/9668))
|
||||
- var. core fixes (bash to sh in fix_gpu_gids ...) [@MickLesk](https://github.com/MickLesk) ([#9666](https://github.com/community-scripts/ProxmoxVE/pull/9666))
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- tools.func: handle GitHub 300 Multiple Choices in tarball mode [@MickLesk](https://github.com/MickLesk) ([#9697](https://github.com/community-scripts/ProxmoxVE/pull/9697))
|
||||
|
||||
- #### 🔧 Refactor
|
||||
|
||||
- Refactor: OneDev [@MickLesk](https://github.com/MickLesk) ([#9597](https://github.com/community-scripts/ProxmoxVE/pull/9597))
|
||||
|
||||
### 📂 Github
|
||||
|
||||
- chore(github): improve PR template and cleanup obsolete references | move contribution guide [@MickLesk](https://github.com/MickLesk) ([#9700](https://github.com/community-scripts/ProxmoxVE/pull/9700))
|
||||
|
||||
## 2025-12-04
|
||||
|
||||
### 🛠️ Core Overhaul
|
||||
|
||||
- Major refactor of the entire `/misc` subsystem introducing a secure, modular and fully extensible foundation for all future scripts.
|
||||
Includes the new three-tier defaults architecture (ENV → App → User), strict variable whitelisting, safe `.vars` parsing without `source/eval`, centralized `error_handler.func`, structured logging, an improved 19-step advanced wizard, unified container creation, dedicated storage selector, updated sysctl handling, IPv6 disable mode, cloud-init library, SSH key auto-discovery, and a complete cleanup of legacy components.
|
||||
Documentation added under `/docs/guides`.
|
||||
[@MickLesk](https://github.com/MickLesk) ([#9540](https://github.com/community-scripts/ProxmoxVE/pull/9540))
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
- #### 🐞 Bug Fixes
|
||||
|
||||
- Fix kimai.sh update script path typo for local.yaml [@Copilot](https://github.com/Copilot) ([#9645](https://github.com/community-scripts/ProxmoxVE/pull/9645))
|
||||
|
||||
- #### ✨ New Features
|
||||
|
||||
- core: extend storage type support (rbd, nfs, cifs) and validation (iscidirect, isci, zfs, cephfs, pbs) [@MickLesk](https://github.com/MickLesk) ([#9646](https://github.com/community-scripts/ProxmoxVE/pull/9646))
|
||||
|
||||
- #### 🔧 Refactor
|
||||
|
||||
- update pdm repo to stable [@CrazyWolf13](https://github.com/CrazyWolf13) ([#9648](https://github.com/community-scripts/ProxmoxVE/pull/9648))
|
||||
|
||||
## 2025-12-03
|
||||
|
||||
### 🚀 Updated Scripts
|
||||
|
||||
10 README.md
@ -17,10 +17,10 @@

</p>

<p>
<a href="https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/CONTRIBUTING.md">
<a href="https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/README.md">
  <img src="https://img.shields.io/badge/🤝_Contribute-Guidelines-ff4785?style=for-the-badge&labelColor=2d3748" alt="Contribute" />
</a>
<a href="https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/USER_SUBMITTED_GUIDES.md">
<a href="https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/USER_SUBMITTED_GUIDES.md">
  <img src="https://img.shields.io/badge/📚_Guides-Read-0077b5?style=for-the-badge&labelColor=2d3748" alt="Guides" />
</a>
<a href="https://github.com/community-scripts/ProxmoxVE/blob/main/CHANGELOG.md">

@ -30,8 +30,8 @@

<br />

> **Simplify your Proxmox VE setup with community-driven automation scripts**
> Originally created by tteck, now maintained and expanded by the community

</div>

@ -214,7 +214,7 @@ This adds a menu to your Proxmox interface for easy script access without visiti

<div align="center">
<br />

👉 Check our **[Contributing Guidelines](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/CONTRIBUTING.md)** to get started
👉 Check our **[Contributing Guidelines](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/README.md)** to get started

</div>

@ -45,6 +45,10 @@ function update_script() {
|
||||
fetch_and_deploy_gh_release "adventurelog" "seanmorley15/adventurelog"
|
||||
PYTHON_VERSION="3.13" setup_uv
|
||||
|
||||
msg_info "Ensuring PostgreSQL Extensions"
|
||||
$STD sudo -u postgres psql -d adventurelog_db -c "CREATE EXTENSION IF NOT EXISTS postgis;"
|
||||
msg_ok "PostgreSQL Extensions Ready"
|
||||
|
||||
msg_info "Updating ${APP}"
|
||||
cp /opt/adventurelog-backup/backend/server/.env /opt/adventurelog/backend/server/.env
|
||||
cp -r /opt/adventurelog-backup/backend/server/media /opt/adventurelog/backend/server/media
|
||||
|
||||
@ -51,7 +51,7 @@ function update_script() {
|
||||
msg_info "Configuring BookStack"
|
||||
cd /opt/bookstack
|
||||
export COMPOSER_ALLOW_SUPERUSER=1
|
||||
$STD composer install --no-dev
|
||||
$STD /usr/local/bin/composer install --no-dev
|
||||
$STD php artisan migrate --force
|
||||
chown www-data:www-data -R /opt/bookstack /opt/bookstack/bootstrap/cache /opt/bookstack/public/uploads /opt/bookstack/storage
|
||||
chmod -R 755 /opt/bookstack /opt/bookstack/bootstrap/cache /opt/bookstack/public/uploads /opt/bookstack/storage
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-8}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-12}"
|
||||
var_unprivileged="${var_unprivileged:-0}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
@ -38,4 +39,4 @@ description
|
||||
msg_ok "Completed Successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8089${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8089${CL}"
|
||||
|
||||
@ -24,11 +24,11 @@ function update_script() {
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
|
||||
if [[ ! -f /opt/${APP} ]]; then
|
||||
if [[ ! -d /opt/ComfyUI ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
msg_error "To update use the ${APP} Manager."
|
||||
msg_error "To update use the ComfyUI Manager."
|
||||
exit
|
||||
}
|
||||
|
||||
|
||||
@ -87,6 +87,11 @@ function update_script() {
|
||||
mv /tmp/start-daphne.sh.backup /opt/dispatcharr/start-daphne.sh
|
||||
fi
|
||||
|
||||
if ! grep -q "DJANGO_SECRET_KEY" /opt/dispatcharr/.env; then
|
||||
DJANGO_SECRET=$(openssl rand -base64 48 | tr -dc 'a-zA-Z0-9' | cut -c1-50)
|
||||
echo "DJANGO_SECRET_KEY=$DJANGO_SECRET" >> /opt/dispatcharr/.env
|
||||
fi
|
||||
|
||||
cd /opt/dispatcharr
|
||||
rm -rf .venv
|
||||
$STD uv venv
|
||||
|
||||
@ -47,7 +47,7 @@ function update_script() {
|
||||
msg_ok "Docker Compose updated"
|
||||
fi
|
||||
|
||||
if docker ps -a --format '{{.Names}}' | grep -q '^portainer$'; then
|
||||
if docker ps -a --format '{{.Image}}' | grep -q '^portainer/portainer-ce:latest$'; then
|
||||
msg_info "Updating Portainer"
|
||||
$STD docker pull portainer/portainer-ce:latest
|
||||
$STD docker stop portainer && docker rm portainer
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-8}"
|
||||
var_os="${var_os:-ubuntu}"
|
||||
var_version="${var_version:-24.04}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
83 ct/endurain.sh Normal file
@ -0,0 +1,83 @@
|
||||
#!/usr/bin/env bash
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: johanngrobe
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/joaovitoriasilva/endurain
|
||||
|
||||
APP="Endurain"
|
||||
var_tags="${var_tags:-sport;social-media}"
|
||||
var_cpu="${var_cpu:-2}"
|
||||
var_ram="${var_ram:-2048}"
|
||||
var_disk="${var_disk:-5}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
color
|
||||
catch_errors
|
||||
|
||||
function update_script() {
|
||||
header_info
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
|
||||
if [[ ! -d /opt/endurain ]]; then
|
||||
msg_error "No ${APP} installation found!"
|
||||
exit 1
|
||||
fi
|
||||
if check_for_gh_release "endurain" "joaovitoriasilva/endurain"; then
|
||||
msg_info "Stopping Service"
|
||||
systemctl stop endurain
|
||||
msg_ok "Stopped Service"
|
||||
|
||||
msg_info "Creating Backup"
|
||||
cp /opt/endurain/.env /opt/endurain.env
|
||||
cp /opt/endurain/frontend/app/dist/env.js /opt/endurain.env.js
|
||||
msg_ok "Created Backup"
|
||||
|
||||
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "endurain" "joaovitoriasilva/endurain" "tarball" "latest" "/opt/endurain"
|
||||
|
||||
msg_info "Preparing Update"
|
||||
cd /opt/endurain
|
||||
rm -rf \
|
||||
/opt/endurain/{docs,example.env,screenshot_01.png} \
|
||||
/opt/endurain/docker* \
|
||||
/opt/endurain/*.yml
|
||||
cp /opt/endurain.env /opt/endurain/.env
|
||||
rm /opt/endurain.env
|
||||
msg_ok "Prepared Update"
|
||||
|
||||
msg_info "Updating Frontend"
|
||||
cd /opt/endurain/frontend/app
|
||||
$STD npm ci
|
||||
$STD npm run build
|
||||
cp /opt/endurain.env.js /opt/endurain/frontend/app/dist/env.js
|
||||
rm /opt/endurain.env.js
|
||||
msg_ok "Updated Frontend"
|
||||
|
||||
msg_info "Updating Backend"
|
||||
cd /opt/endurain/backend
|
||||
$STD poetry export -f requirements.txt --output requirements.txt --without-hashes
|
||||
$STD uv venv
|
||||
$STD uv pip install -r requirements.txt
|
||||
msg_ok "Backend Updated"
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl start endurain
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Update Completed Successfully!"
|
||||
fi
|
||||
exit
|
||||
}
|
||||
|
||||
start
|
||||
build_container
|
||||
description
|
||||
|
||||
msg_ok "Completed Successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8080${CL}"
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-5}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-12}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-8}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-12}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-20}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-11}"
|
||||
var_unprivileged="${var_unprivileged:-0}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
@ -38,4 +39,4 @@ description
|
||||
msg_ok "Completed Successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:5000${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:5000${CL}"
|
||||
|
||||
6 ct/headers/endurain Normal file
@ -0,0 +1,6 @@
|
||||
______ __ _
|
||||
/ ____/___ ____/ /_ ___________ _(_)___
|
||||
/ __/ / __ \/ __ / / / / ___/ __ `/ / __ \
|
||||
/ /___/ / / / /_/ / /_/ / / / /_/ / / / / /
|
||||
/_____/_/ /_/\__,_/\__,_/_/ \__,_/_/_/ /_/
|
||||
|
||||
6 ct/headers/metube Normal file
@ -0,0 +1,6 @@
|
||||
__ ___ ______ __
|
||||
/ |/ /__/_ __/_ __/ /_ ___
|
||||
/ /|_/ / _ \/ / / / / / __ \/ _ \
|
||||
/ / / / __/ / / /_/ / /_/ / __/
|
||||
/_/ /_/\___/_/ \__,_/_.___/\___/
|
||||
|
||||
@ -13,6 +13,7 @@ var_ram="${var_ram:-4096}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -10,8 +10,8 @@ var_tags="${var_tags:-inventory}"
|
||||
var_cpu="${var_cpu:-2}"
|
||||
var_ram="${var_ram:-2048}"
|
||||
var_disk="${var_disk:-6}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_os="${var_os:-ubuntu}"
|
||||
var_version="${var_version:-24.04}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
|
||||
header_info "$APP"
|
||||
@ -28,10 +28,16 @@ function update_script() {
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
msg_info "Updating $APP"
|
||||
|
||||
if ! grep -qE "^ID=(ubuntu)$" /etc/os-release; then
|
||||
msg_error "Unsupported OS. InvenTree requires Ubuntu (20.04/22.04/24.04)."
|
||||
exit
|
||||
fi
|
||||
|
||||
msg_info "Updating InvenTree"
|
||||
$STD apt update
|
||||
$STD apt install --only-upgrade inventree -y
|
||||
msg_ok "Updated $APP"
|
||||
msg_ok "Updated InvenTree"
|
||||
msg_ok "Updated successfully!"
|
||||
exit
|
||||
}
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-16}"
|
||||
var_os="${var_os:-ubuntu}"
|
||||
var_version="${var_version:-24.04}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -56,7 +56,7 @@ function update_script() {
|
||||
[ -f "$BACKUP_DIR/local.yaml" ] && cp "$BACKUP_DIR/local.yaml" /opt/kimai/config/packages/
|
||||
rm -rf "$BACKUP_DIR"
|
||||
cd /opt/kimai
|
||||
sed -i '/^admin_lte:/,/^[^[:space:]]/d' config/local.yaml
|
||||
sed -i '/^admin_lte:/,/^[^[:space:]]/d' config/packages/local.yaml
|
||||
$STD composer install --no-dev --optimize-autoloader
|
||||
$STD bin/console kimai:update
|
||||
msg_ok "Updated Kimai"
|
||||
|
||||
@ -30,13 +30,13 @@ function update_script() {
|
||||
|
||||
if check_for_gh_release "librespeed-rust" "librespeed/speedtest-rust"; then
|
||||
msg_info "Stopping Services"
|
||||
systemctl stop librespeed-rs
|
||||
systemctl stop speedtest_rs
|
||||
msg_ok "Services Stopped"
|
||||
|
||||
fetch_and_deploy_gh_release "librespeed-rust" "librespeed/speedtest-rust" "binary" "latest" "/opt/librespeed-rust" "librespeed-rs-x86_64-unknown-linux-gnu.deb"
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl start librespeed-rs
|
||||
systemctl start speedtest_rs
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Updated successfully!"
|
||||
fi
|
||||
|
||||
@ -29,7 +29,7 @@ function update_script() {
|
||||
exit
|
||||
fi
|
||||
|
||||
setup_uv
|
||||
PYTHON_VERSION="3.12" setup_uv
|
||||
|
||||
if check_for_gh_release "libretranslate" "LibreTranslate/LibreTranslate"; then
|
||||
msg_info "Stopping Service"
|
||||
@ -39,7 +39,7 @@ function update_script() {
|
||||
msg_info "Updating LibreTranslate"
|
||||
cd /opt/libretranslate
|
||||
source .venv/bin/activate
|
||||
$STD pip install -U libretranslate
|
||||
$STD uv pip install -U libretranslate
|
||||
msg_ok "Updated LibreTranslate"
|
||||
|
||||
msg_info "Starting Service"
|
||||
|
||||
113 ct/metube.sh Normal file
@ -0,0 +1,113 @@
|
||||
#!/usr/bin/env bash
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: MickLesk (Canbiz)
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/alexta69/metube
|
||||
|
||||
APP="MeTube"
|
||||
var_tags="${var_tags:-media;youtube}"
|
||||
var_cpu="${var_cpu:-1}"
|
||||
var_ram="${var_ram:-2048}"
|
||||
var_disk="${var_disk:-10}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
color
|
||||
catch_errors
|
||||
|
||||
function update_script() {
|
||||
header_info
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
|
||||
if [[ ! -d /opt/metube ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
|
||||
if [[ $(echo ":$PATH:" != *":/usr/local/bin:"*) ]]; then
|
||||
echo -e "\nexport PATH=\"/usr/local/bin:\$PATH\"" >>~/.bashrc
|
||||
source ~/.bashrc
|
||||
if ! command -v deno &>/dev/null; then
|
||||
export DENO_INSTALL="/usr/local"
|
||||
curl -fsSL https://deno.land/install.sh | $STD sh -s -- -y
|
||||
else
|
||||
$STD deno upgrade
|
||||
fi
|
||||
fi
|
||||
|
||||
if check_for_gh_release "metube" "alexta69/metube"; then
|
||||
msg_info "Stopping Service"
|
||||
systemctl stop metube
|
||||
msg_ok "Stopped Service"
|
||||
|
||||
msg_info "Backing up Old Installation"
|
||||
if [[ -d /opt/metube_bak ]]; then
|
||||
rm -rf /opt/metube_bak
|
||||
fi
|
||||
mv /opt/metube /opt/metube_bak
|
||||
msg_ok "Backup created"
|
||||
|
||||
fetch_and_deploy_gh_release "metube" "alexta69/metube" "tarball" "latest"
|
||||
|
||||
msg_info "Building Frontend"
|
||||
cd /opt/metube/ui
|
||||
$STD npm install
|
||||
$STD node_modules/.bin/ng build
|
||||
msg_ok "Built Frontend"
|
||||
|
||||
PYTHON_VERSION="3.13" setup_uv
|
||||
|
||||
msg_info "Installing Backend Requirements"
|
||||
cd /opt/metube
|
||||
$STD uv sync
|
||||
msg_ok "Installed Backend"
|
||||
|
||||
msg_info "Restoring .env"
|
||||
if [[ -f /opt/metube_bak/.env ]]; then
|
||||
cp /opt/metube_bak/.env /opt/metube/.env
|
||||
fi
|
||||
rm -rf /opt/metube_bak
|
||||
msg_ok "Restored .env"
|
||||
|
||||
if grep -q 'pipenv' /etc/systemd/system/metube.service; then
|
||||
msg_info "Patching systemd Service"
|
||||
cat <<EOF >/etc/systemd/system/metube.service
|
||||
[Unit]
|
||||
Description=Metube - YouTube Downloader
|
||||
After=network.target
|
||||
[Service]
|
||||
Type=simple
|
||||
WorkingDirectory=/opt/metube
|
||||
EnvironmentFile=/opt/metube/.env
|
||||
ExecStart=/opt/metube/.venv/bin/python3 app/main.py
|
||||
Restart=always
|
||||
User=root
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
msg_ok "Patched systemd Service"
|
||||
fi
|
||||
$STD systemctl daemon-reload
|
||||
msg_ok "Service Updated"
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl start metube
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Updated successfully!"
|
||||
fi
|
||||
exit
|
||||
}
|
||||
|
||||
start
|
||||
build_container
|
||||
description
|
||||
|
||||
msg_ok "Completed Successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8081${CL}"
|
||||
@ -3,7 +3,7 @@ source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxV
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: vhsdream
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/maynayza/netvisor
|
||||
# Source: https://github.com/mayanayza/netvisor
|
||||
|
||||
APP="NetVisor"
|
||||
var_tags="${var_tags:-analytics}"
|
||||
@ -99,3 +99,4 @@ msg_ok "Completed Successfully!\n"
|
||||
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
|
||||
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
|
||||
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:60072${CL}"
|
||||
echo -e "${INFO}${YW} Then create your account, and run the 'configure_daemon.sh' script to setup the daemon.${CL}"
|
||||
|
||||
@ -12,6 +12,7 @@ var_ram="${var_ram:-4096}"
|
||||
var_disk="${var_disk:-35}"
|
||||
var_os="${var_os:-ubuntu}"
|
||||
var_version="${var_version:-24.04}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
19 ct/onedev.sh
@ -27,31 +27,30 @@ function update_script() {
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
GITHUB_RELEASE=$(curl -fsSL https://api.github.com/repos/theonedev/onedev/releases/latest | grep "tag_name" | awk '{print substr($2, 3, length($2)-4) }')
|
||||
if [[ ! -f /opt/${APP}_version.txt ]] || [[ "${GITHUB_RELEASE}" != "$(cat /opt/${APP}_version.txt)" ]]; then
|
||||
|
||||
if check_for_gh_release "onedev" "theonedev/onedev"; then
|
||||
JAVA_VERSION="21" setup_java
|
||||
|
||||
msg_info "Stopping Service"
|
||||
systemctl stop onedev
|
||||
msg_ok "Stopped Service"
|
||||
|
||||
msg_info "Updating ${APP} to v${GITHUB_RELEASE}"
|
||||
msg_info "Updating OneDev"
|
||||
cd /opt
|
||||
curl -fsSL "https://code.onedev.io/onedev/server/~site/onedev-latest.tar.gz" -o $(basename "https://code.onedev.io/onedev/server/~site/onedev-latest.tar.gz")
|
||||
curl -fsSL "https://code.onedev.io/onedev/server/~site/onedev-latest.tar.gz" -o onedev-latest.tar.gz
|
||||
tar -xzf onedev-latest.tar.gz
|
||||
$STD /opt/onedev-latest/bin/upgrade.sh /opt/onedev
|
||||
RELEASE=$(cat /opt/onedev/release.properties | grep "version" | cut -d'=' -f2)
|
||||
rm -rf /opt/onedev-latest
|
||||
rm -rf /opt/onedev-latest.tar.gz
|
||||
echo "${RELEASE}" >"/opt/${APP}_version.txt"
|
||||
msg_ok "Updated ${APP} to v${RELEASE}"
|
||||
echo "${CHECK_UPDATE_RELEASE}" >~/.onedev
|
||||
msg_ok "Updated OneDev"
|
||||
|
||||
msg_info "Starting Service"
|
||||
systemctl start onedev
|
||||
msg_ok "Started Service"
|
||||
msg_ok "Updated successfully!"
|
||||
else
|
||||
msg_ok "No update required. ${APP} is already at v${RELEASE}."
|
||||
exit
|
||||
fi
|
||||
exit
|
||||
}
|
||||
|
||||
start
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-25}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -28,12 +28,12 @@ function update_script() {
|
||||
exit
|
||||
fi
|
||||
|
||||
# Check for old data structure and prompt migration
|
||||
# Check for old data structure and prompt migration (exclude symlinks)
|
||||
if [[ -f /opt/paperless/paperless.conf ]]; then
|
||||
local OLD_DIRS=()
|
||||
[[ -d /opt/paperless/consume ]] && OLD_DIRS+=("consume")
|
||||
[[ -d /opt/paperless/data ]] && OLD_DIRS+=("data")
|
||||
[[ -d /opt/paperless/media ]] && OLD_DIRS+=("media")
|
||||
[[ -d /opt/paperless/consume && ! -L /opt/paperless/consume ]] && OLD_DIRS+=("consume")
|
||||
[[ -d /opt/paperless/data && ! -L /opt/paperless/data ]] && OLD_DIRS+=("data")
|
||||
[[ -d /opt/paperless/media && ! -L /opt/paperless/media ]] && OLD_DIRS+=("media")
|
||||
|
||||
if [[ ${#OLD_DIRS[@]} -gt 0 ]]; then
|
||||
msg_error "Old data structure detected in /opt/paperless/"
|
||||
|
||||
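The added `! -L` tests matter because `-d` also succeeds on a symlink that points to a directory; a minimal illustration with hypothetical paths:

```bash
mkdir -p /tmp/paperless /tmp/real_media
ln -sfn /tmp/real_media /tmp/paperless/media   # media is a symlink, not an old data dir
[[ -d /tmp/paperless/media ]] && echo "-d alone: true (follows the symlink)"
[[ -d /tmp/paperless/media && ! -L /tmp/paperless/media ]] || echo "-d with ! -L: not flagged"
```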
@ -13,6 +13,7 @@ var_disk="${var_disk:-8}"
|
||||
var_os="${var_os:-ubuntu}"
|
||||
var_version="${var_version:-24.04}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
@ -23,8 +24,8 @@ function update_script() {
|
||||
header_info
|
||||
check_container_storage
|
||||
check_container_resources
|
||||
if [[ ! -f /etc/apt/sources.list.d/plexmediaserver.list ]] \
|
||||
&& [[ ! -f /etc/apt/sources.list.d/plexmediaserver.sources ]]; then
|
||||
if [[ ! -f /etc/apt/sources.list.d/plexmediaserver.list ]] &&
|
||||
[[ ! -f /etc/apt/sources.list.d/plexmediaserver.sources ]]; then
|
||||
msg_error "No ${APP} Installation Found!"
|
||||
exit
|
||||
fi
|
||||
|
||||
@ -37,6 +37,20 @@ function update_script() {
|
||||
msg_ok "Updated old sources"
|
||||
fi
|
||||
|
||||
if grep -q 'Debian GNU/Linux 13' /etc/os-release; then
|
||||
if [ -f "/etc/apt/sources.list.d/pdm-test.sources" ]; then
|
||||
if ! grep -qx "Enabled: false" "/etc/apt/sources.list.d/pdm-test.sources"; then
|
||||
echo "Enabled: false" >> "/etc/apt/sources.list.d/pdm-test.sources"
|
||||
setup_deb822_repo \
|
||||
"pdm" \
|
||||
"https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg" \
|
||||
"http://download.proxmox.com/debian/pdm" \
|
||||
"trixie" \
|
||||
"pdm-no-subscription"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
msg_info "Updating $APP LXC"
|
||||
$STD apt update
|
||||
$STD apt -y upgrade
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-4}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-5}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -13,6 +13,7 @@ var_disk="${var_disk:-4}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_unprivileged="${var_unprivileged:-0}"
|
||||
var_gpu="${var_gpu:-yes}"
|
||||
|
||||
header_info "$APP"
|
||||
variables
|
||||
|
||||
@ -55,7 +55,8 @@ function update_script() {
|
||||
systemctl stop wanderer-web
|
||||
msg_ok "Stopped service"
|
||||
|
||||
fetch_and_deploy_gh_release "meilisearch" "meilisearch/meilisearch" "binary" "latest" "/opt/wanderer/source/search"
|
||||
fetch_and_deploy_gh_release "meilisearch" "meilisearch/meilisearch" "binary" "latest" "/opt/wanderer/source/search"
|
||||
grep -q -- '--experimental-dumpless-upgrade' /opt/wanderer/start.sh || sed -i 's|meilisearch --master-key|meilisearch --experimental-dumpless-upgrade --master-key|' /opt/wanderer/start.sh
|
||||
|
||||
msg_info "Starting service"
|
||||
systemctl start wanderer-web
|
||||
|
||||
@ -11,7 +11,7 @@ var_disk="${var_disk:-8}"
|
||||
var_cpu="${var_cpu:-2}"
|
||||
var_ram="${var_ram:-4096}"
|
||||
var_os="${var_os:-debian}"
|
||||
var_version="${var_version:-13}"
|
||||
var_version="${var_version:-12}"
|
||||
var_unprivileged="${var_unprivileged:-1}"
|
||||
|
||||
header_info "$APP"
|
||||
|
||||
@ -1,8 +1,8 @@
|
||||
# Technical Reference: Configuration System Architecture
|
||||
|
||||
> **For Developers and Advanced Users**
|
||||
>
|
||||
> *Deep dive into how the defaults and configuration system works*
|
||||
>
|
||||
> _Deep dive into how the defaults and configuration system works_
|
||||
|
||||
---
|
||||
|
||||
@ -123,13 +123,13 @@ VAR_VALUE := [^\n]* # Any printable characters except newline
|
||||
|
||||
**Constraints**:
|
||||
|
||||
| Constraint        | Value                    |
| ----------------- | ------------------------ |
| Max file size     | 64 KB                    |
| Max line length   | 1024 bytes               |
| Max variables     | 100                      |
| Allowed var names | `var_[a-z_]+`            |
| Value validation  | Whitelist + Sanitization |

**Example Valid File**:
|
||||
|
||||
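The example file itself falls outside this hunk; under the grammar and whitelist above, a valid file would look roughly like this (values are illustrative):

```bash
# /usr/local/community-scripts/default.vars (illustrative)
var_cpu=2
var_ram=2048
var_disk=8
var_unprivileged=1
var_tags=dns,pihole
```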
@ -206,21 +206,24 @@ var_tags=dns,pihole
|
||||
**Purpose**: Safely load variables from .vars files without using `source` or `eval`
|
||||
|
||||
**Signature**:
|
||||
|
||||
```bash
|
||||
load_vars_file(filepath)
|
||||
```
|
||||
|
||||
**Parameters**:
|
||||
|
||||
| Param    | Type   | Required | Example                                      |
| -------- | ------ | -------- | -------------------------------------------- |
| filepath | String | Yes      | `/usr/local/community-scripts/default.vars`  |

**Returns**:
|
||||
|
||||
- `0` on success
|
||||
- `1` on error (file missing, parse error, etc.)
|
||||
|
||||
**Environment Side Effects**:
|
||||
|
||||
- Sets all parsed `var_*` variables as shell variables
|
||||
- Does NOT unset variables if file missing (safe)
|
||||
- Does NOT affect other variables
|
||||
@ -230,25 +233,25 @@ load_vars_file(filepath)
|
||||
```bash
|
||||
load_vars_file() {
|
||||
local file="$1"
|
||||
|
||||
|
||||
# File must exist
|
||||
[ -f "$file" ] || return 0
|
||||
|
||||
|
||||
# Parse line by line (not with source/eval)
|
||||
local line key val
|
||||
while IFS='=' read -r key val || [ -n "$key" ]; do
|
||||
# Skip comments and empty lines
|
||||
[[ "$key" =~ ^[[:space:]]*# ]] && continue
|
||||
[[ -z "$key" ]] && continue
|
||||
|
||||
|
||||
# Validate key is in whitelist
|
||||
_is_whitelisted_key "$key" || continue
|
||||
|
||||
|
||||
# Sanitize and export value
|
||||
val="$(_sanitize_value "$val")"
|
||||
[ $? -eq 0 ] && export "$key=$val"
|
||||
done < "$file"
|
||||
|
||||
|
||||
return 0
|
||||
}
|
||||
```
|
||||
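Typical usage, as a minimal sketch (the path is the documented user-defaults location):

```bash
# Load user defaults, then read the whitelisted values they set
load_vars_file "/usr/local/community-scripts/default.vars"
echo "CPU cores: ${var_cpu:-unset}"
echo "RAM (MB):  ${var_ram:-unset}"
```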
@ -281,6 +284,7 @@ echo "Allocating ${var_ram} MB RAM"
|
||||
**Purpose**: Get the full path for app-specific defaults file
|
||||
|
||||
**Signature**:
|
||||
|
||||
```bash
|
||||
get_app_defaults_path()
|
||||
```
|
||||
@ -288,6 +292,7 @@ get_app_defaults_path()
|
||||
**Parameters**: None
|
||||
|
||||
**Returns**:
|
||||
|
||||
- String: Full path to app defaults file
|
||||
|
||||
**Implementation**:
|
||||
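The function body is not shown in this hunk. A plausible sketch, assuming per-app files sit in a `defaults/` subdirectory next to `default.vars` and are named after the app (both assumptions, not the shipped code):

```bash
get_app_defaults_path() {
  # Assumed layout: /usr/local/community-scripts/defaults/<app>.vars
  local base="/usr/local/community-scripts/defaults"
  local app="${NSAPP:-${APP,,}}"   # assumed naming: lower-case app name
  echo "${base}/${app}.vars"
}
```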
@ -322,6 +327,7 @@ load_vars_file "$(get_app_defaults_path)"
|
||||
**Purpose**: Load and display user global defaults
|
||||
|
||||
**Signature**:
|
||||
|
||||
```bash
|
||||
default_var_settings()
|
||||
```
|
||||
@ -329,6 +335,7 @@ default_var_settings()
|
||||
**Parameters**: None
|
||||
|
||||
**Returns**:
|
||||
|
||||
- `0` on success
|
||||
- `1` on error
|
||||
|
||||
@ -337,15 +344,15 @@ default_var_settings()
|
||||
```
|
||||
1. Find default.vars location
|
||||
(usually /usr/local/community-scripts/default.vars)
|
||||
|
||||
|
||||
2. Create if missing
|
||||
|
||||
|
||||
3. Load variables from file
|
||||
|
||||
|
||||
4. Map var_verbose → VERBOSE variable
|
||||
|
||||
|
||||
5. Call base_settings (apply to container config)
|
||||
|
||||
|
||||
6. Call echo_default (display summary)
|
||||
```
|
||||
|
||||
@ -354,20 +361,20 @@ default_var_settings()
|
||||
```bash
|
||||
default_var_settings() {
|
||||
local VAR_WHITELIST=(
|
||||
var_apt_cacher var_apt_cacher_ip var_brg var_cpu var_disk var_fuse
|
||||
var_apt_cacher var_apt_cacher_ip var_brg var_cpu var_disk var_fuse var_gpu
|
||||
var_gateway var_hostname var_ipv6_method var_mac var_mtu
|
||||
var_net var_ns var_pw var_ram var_tags var_tun var_unprivileged
|
||||
var_verbose var_vlan var_ssh var_ssh_authorized_key
|
||||
var_container_storage var_template_storage
|
||||
)
|
||||
|
||||
|
||||
# Ensure file exists
|
||||
_ensure_default_vars
|
||||
|
||||
|
||||
# Find and load
|
||||
local dv="$(_find_default_vars)"
|
||||
load_vars_file "$dv"
|
||||
|
||||
|
||||
# Map verbose flag
|
||||
if [[ -n "${var_verbose:-}" ]]; then
|
||||
case "${var_verbose,,}" in
|
||||
@ -375,7 +382,7 @@ default_var_settings() {
|
||||
*) VERBOSE="${var_verbose}" ;;
|
||||
esac
|
||||
fi
|
||||
|
||||
|
||||
# Apply and display
|
||||
base_settings "$VERBOSE"
|
||||
echo_default
|
||||
@ -389,6 +396,7 @@ default_var_settings() {
|
||||
**Purpose**: Offer to save current settings as app-specific defaults
|
||||
|
||||
**Signature**:
|
||||
|
||||
```bash
|
||||
maybe_offer_save_app_defaults()
|
||||
```
|
||||
@ -413,10 +421,10 @@ maybe_offer_save_app_defaults()
|
||||
```bash
|
||||
maybe_offer_save_app_defaults() {
|
||||
local app_vars_path="$(get_app_defaults_path)"
|
||||
|
||||
|
||||
# Build current settings from memory
|
||||
local new_tmp="$(_build_current_app_vars_tmp)"
|
||||
|
||||
|
||||
# Check if already exists
|
||||
if [ -f "$app_vars_path" ]; then
|
||||
# Show diff and ask: Update? Keep? View Diff?
|
||||
@ -438,29 +446,31 @@ maybe_offer_save_app_defaults() {
|
||||
**Purpose**: Remove dangerous characters/patterns from configuration values
|
||||
|
||||
**Signature**:
|
||||
|
||||
```bash
|
||||
_sanitize_value(value)
|
||||
```
|
||||
|
||||
**Parameters**:
|
||||
|
||||
| Param | Type   | Required |
| ----- | ------ | -------- |
| value | String | Yes      |

**Returns**:
|
||||
|
||||
- `0` (success) + sanitized value on stdout
|
||||
- `1` (failure) + nothing if dangerous
|
||||
|
||||
**Dangerous Patterns**:
|
||||
|
||||
| Pattern   | Threat               | Example              |
| --------- | -------------------- | -------------------- |
| `$(...)`  | Command substitution | `$(rm -rf /)`        |
| `` ` ` `` | Command substitution | `` `whoami` ``       |
| `;`       | Command separator    | `value; rm -rf /`    |
| `&`       | Background execution | `value & malicious`  |
| `<(`      | Process substitution | `<(cat /etc/passwd)` |

**Implementation**:
|
||||
|
||||
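The shipped implementation is not reproduced in this hunk; a minimal sketch that enforces the table above (an illustration only) could look like:

```bash
_sanitize_value() {
  local val="$1"
  case "$val" in
    *'$('* | *'`'* | *';'* | *'&'* | *'<('*)
      return 1            # dangerous pattern -> reject, output nothing
      ;;
  esac
  printf '%s\n' "$val"    # safe -> print the sanitized value
  return 0
}
```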
@ -501,17 +511,19 @@ fi
|
||||
**Purpose**: Check if variable name is in allowed whitelist
|
||||
|
||||
**Signature**:
|
||||
|
||||
```bash
|
||||
_is_whitelisted_key(key)
|
||||
```
|
||||
|
||||
**Parameters**:
|
||||
|
||||
| Param | Type   | Required | Example   |
| ----- | ------ | -------- | --------- |
| key   | String | Yes      | `var_cpu` |

**Returns**:
|
||||
|
||||
- `0` if key is whitelisted
|
||||
- `1` if key is NOT whitelisted
|
||||
|
||||
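For illustration, a whitelist check along these lines (a sketch, not the shipped code; the abbreviated array stands in for the full `VAR_WHITELIST` shown earlier):

```bash
VAR_WHITELIST=(var_cpu var_ram var_disk var_unprivileged var_verbose)   # abbreviated list

_is_whitelisted_key() {
  local key="$1" allowed
  for allowed in "${VAR_WHITELIST[@]}"; do
    [[ "$key" == "$allowed" ]] && return 0
  done
  return 1
}

_is_whitelisted_key var_cpu  && echo "var_cpu: allowed"
_is_whitelisted_key var_evil || echo "var_evil: rejected"
```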
@ -573,6 +585,7 @@ Step 4: Use BUILT-IN DEFAULTS
|
||||
### Precedence Examples
|
||||
|
||||
**Example 1: Environment Variable Wins**
|
||||
|
||||
```bash
|
||||
# Shell environment has highest priority
|
||||
$ export var_cpu=16
|
||||
@ -583,6 +596,7 @@ $ bash pihole-install.sh
|
||||
```
|
||||
|
||||
**Example 2: App Defaults Override User Defaults**
|
||||
|
||||
```bash
|
||||
# User Defaults: var_cpu=4
|
||||
# App Defaults: var_cpu=2
|
||||
@ -593,6 +607,7 @@ $ bash pihole-install.sh
|
||||
```
|
||||
|
||||
**Example 3: All Defaults Missing (Built-ins Used)**
|
||||
|
||||
```bash
|
||||
# No environment variables set
|
||||
# No app defaults file
|
||||
@ -611,21 +626,21 @@ $ bash pihole-install.sh
|
||||
base_settings() {
|
||||
# Priority 1: Environment variables (already set if export used)
|
||||
CT_TYPE=${var_unprivileged:-"1"} # Use existing or default
|
||||
|
||||
|
||||
# Priority 2: Load app defaults (may override above)
|
||||
if [ -f "$(get_app_defaults_path)" ]; then
|
||||
load_vars_file "$(get_app_defaults_path)"
|
||||
fi
|
||||
|
||||
|
||||
# Priority 3: Load user defaults
|
||||
if [ -f "/usr/local/community-scripts/default.vars" ]; then
|
||||
load_vars_file "/usr/local/community-scripts/default.vars"
|
||||
fi
|
||||
|
||||
|
||||
# Priority 4: Apply built-in defaults (lowest)
|
||||
CORE_COUNT=${var_cpu:-"${APP_CPU_DEFAULT:-2}"}
|
||||
RAM_SIZE=${var_ram:-"${APP_RAM_DEFAULT:-1024}"}
|
||||
|
||||
|
||||
# Result: var_cpu has been set through precedence chain
|
||||
}
|
||||
```
|
||||
@ -734,14 +749,14 @@ CONTAINER CREATION STARTED
|
||||
|
||||
### Threat Model
|
||||
|
||||
| Threat                       | Mitigation                                         |
| ---------------------------- | -------------------------------------------------- |
| **Arbitrary Code Execution** | No `source` or `eval`; manual parsing only         |
| **Variable Injection**       | Whitelist of allowed variable names                |
| **Command Substitution**     | `_sanitize_value()` blocks `$()`, backticks, etc.  |
| **Path Traversal**           | Files locked to `/usr/local/community-scripts/`    |
| **Permission Escalation**    | Files created with restricted permissions          |
| **Information Disclosure**   | Sensitive variables not logged                     |

### Security Controls
|
||||
|
||||
@ -798,6 +813,7 @@ fi
|
||||
### Module: `build.func`
|
||||
|
||||
**Load Order** (in actual scripts):
|
||||
|
||||
1. `#!/usr/bin/env bash` - Shebang
|
||||
2. `source /dev/stdin <<<$(curl ... api.func)` - API functions
|
||||
3. `source /dev/stdin <<<$(curl ... build.func)` - Build functions
|
||||
@ -832,17 +848,17 @@ fi
|
||||
|
||||
# Section 6: Installation Flow
|
||||
- install_script() # Main entry point
|
||||
- advanced_settings() # 19-step wizard
|
||||
- advanced_settings() # 20-step wizard
|
||||
```
|
||||
|
||||
### Regex Patterns Used
|
||||
|
||||
| Pattern                | Purpose               | Example Match           |
| ---------------------- | --------------------- | ----------------------- |
| `^[0-9]+([.][0-9]+)?$` | Integer validation    | `4`, `192.168`          |
| `^var_[a-z_]+$`        | Variable name         | `var_cpu`, `var_ssh`    |
| `*'$('*`               | Command substitution  | `$(whoami)`             |
| `` *\`* ``             | Backtick substitution | `` `cat /etc/passwd` `` |

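The `*'$('*` and backtick entries are shell glob patterns matched with `[[ ... == pattern ]]` rather than regexes (the `^...$` entries are used with `=~`); a quick illustration:

```bash
val='$(whoami)'
if [[ "$val" == *'$('* || "$val" == *'`'* ]]; then
  echo "rejected: command substitution detected"
fi
```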
---
|
||||
|
||||
@ -869,12 +885,12 @@ fi
|
||||
|
||||
### Function Mapping
|
||||
|
||||
| Old              | New                               | Location   |
| ---------------- | --------------------------------- | ---------- |
| `read_config()`  | `load_vars_file()`                | build.func |
| `write_config()` | `_build_current_app_vars_tmp()`   | build.func |
| None             | `maybe_offer_save_app_defaults()` | build.func |
| None             | `get_app_defaults_path()`         | build.func |

---
|
||||
|
||||
|
||||
@ -1,7 +1,6 @@
|
||||
|
||||
# Community Scripts Contribution Guide
|
||||
|
||||
## **Welcome to the community-scripts Repository!**
|
||||
📜 These documents outline the essential coding standards for all our scripts and JSON files. Adhering to these standards ensures that our codebase remains consistent, readable, and maintainable. By following these guidelines, we can improve collaboration, reduce errors, and enhance the overall quality of our project.
|
||||
|
||||
@ -28,7 +27,6 @@ By following the coding standards outlined in this document, we ensure that our
|
||||
|
||||
Let's work together to keep our codebase clean, efficient, and maintainable! 💪🚀
|
||||
|
||||
|
||||
## Getting Started
|
||||
|
||||
Before contributing, please ensure that you have the following setup:
|
||||
@ -40,67 +38,73 @@ Before contributing, please ensure that you have the following setup:
|
||||
- [Shell Format](https://marketplace.visualstudio.com/items?itemName=foxundermoon.shell-format)
|
||||
|
||||
### Important Notes
|
||||
- Use [AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.sh) and [AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh) as templates when creating new scripts.
|
||||
|
||||
- Use [AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_ct/AppName.sh) and [AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_install/AppName-install.sh) as templates when creating new scripts.
|
||||
|
||||
---
|
||||
|
||||
# 🚀 The Application Script (ct/AppName.sh)
|
||||
|
||||
- You can find all coding standards, as well as the structure for this file [here](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.md).
|
||||
- You can find all coding standards, as well as the structure for this file [here](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_ct/AppName.md).
|
||||
- These scripts are responsible for container creation, setting the necessary variables and handling the update of the application once installed.
|
||||
|
||||
---
|
||||
|
||||
# 🛠 The Installation Script (install/AppName-install.sh)
|
||||
|
||||
- You can find all coding standards, as well as the structure for this file [here](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.md).
|
||||
- You can find all coding standards, as well as the structure for this file [here](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_install/AppName-install.md).
|
||||
- These scripts are responsible for the installation of the application.
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Building Your Own Scripts
|
||||
|
||||
Start with the [template script](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh)
|
||||
Start with the [template script](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_install/AppName-install.sh)
|
||||
|
||||
---
|
||||
|
||||
## 🤝 Contribution Process
|
||||
|
||||
### 1. Fork the repository
|
||||
|
||||
Fork to your GitHub account
|
||||
|
||||
### 2. Clone your fork on your local environment
|
||||
```bash
|
||||
git clone https://github.com/yourUserName/ForkName
|
||||
```
|
||||
|
||||
### 3. Create a new branch
|
||||
|
||||
```bash
|
||||
git switch -c your-feature-branch
|
||||
```
|
||||
|
||||
### 4. Change paths in build.func install.func and AppName.sh
|
||||
|
||||
To be able to develop from your own branch you need to change `https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main` to `https://raw.githubusercontent.com/[USER]/[REPOSITORY]/refs/heads/[BRANCH]`. You need to make this change at least in misc/build.func, misc/install.func and in your ct/AppName.sh. This change is only for testing. Before opening a Pull Request, change all of this back to point to `https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main`.
|
||||
|
||||
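For example, the swap can be done in one pass with `sed`; `[USER]`, `[REPOSITORY]` and `[BRANCH]` are placeholders for your fork, and the change must be reverted before opening the PR:

```bash
sed -i 's|https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main|https://raw.githubusercontent.com/[USER]/[REPOSITORY]/refs/heads/[BRANCH]|g' \
  misc/build.func misc/install.func ct/AppName.sh
```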
### 4. Commit changes (without build.func and install.func!)
|
||||
|
||||
```bash
|
||||
git commit -m "Your commit message"
|
||||
```
|
||||
|
||||
### 5. Push to your fork
|
||||
|
||||
```bash
|
||||
git push origin your-feature-branch
|
||||
```
|
||||
|
||||
### 6. Create a Pull Request
|
||||
|
||||
Open a Pull Request from your feature branch to the main repository branch. You must only include your **$AppName.sh**, **$AppName-install.sh** and **$AppName.json** files in the pull request.
|
||||
|
||||
---
|
||||
|
||||
## 📚 Pages
|
||||
|
||||
- [CT Template: AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/ct/AppName.sh)
|
||||
- [Install Template: AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/install/AppName-install.sh)
|
||||
- [JSON Template: AppName.json](https://github.com/community-scripts/ProxmoxVE/blob/main/.github/CONTRIBUTOR_AND_GUIDES/json/AppName.json)
|
||||
|
||||
|
||||
- [CT Template: AppName.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_ct/AppName.sh)
|
||||
- [Install Template: AppName-install.sh](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_install/AppName-install.sh)
|
||||
- [JSON Template: AppName.json](https://github.com/community-scripts/ProxmoxVE/blob/main/docs/contribution/templates_json/AppName.json)
|
||||
|
||||
@ -8,103 +8,123 @@ This document provides a comprehensive reference of all environment variables us
|
||||
|
||||
### Core Container Variables
|
||||
|
||||
| Variable  | Description                                  | Default   | Set In      | Used In            |
| --------- | -------------------------------------------- | --------- | ----------- | ------------------ |
| `APP`     | Application name (e.g., "plex", "nextcloud") | -         | Environment | Throughout         |
| `NSAPP`   | Namespace application name                   | `$APP`    | Environment | Throughout         |
| `CTID`    | Container ID                                 | -         | Environment | Container creation |
| `CT_TYPE` | Container type ("install" or "update")       | "install" | Environment | Entry point        |
| `CT_NAME` | Container name                               | `$APP`    | Environment | Container creation |

### Operating System Variables
|
||||
|
||||
| Variable       | Description                | Default        | Set In          | Used In            |
| -------------- | -------------------------- | -------------- | --------------- | ------------------ |
| `var_os`       | Operating system selection | "debian"       | base_settings() | OS selection       |
| `var_version`  | OS version                 | "12"           | base_settings() | Template selection |
| `var_template` | Template name              | Auto-generated | base_settings() | Template download  |

### Resource Configuration Variables
|
||||
|
||||
| Variable     | Description             | Default     | Set In          | Used In            |
| ------------ | ----------------------- | ----------- | --------------- | ------------------ |
| `var_cpu`    | CPU cores               | "2"         | base_settings() | Container creation |
| `var_ram`    | RAM in MB               | "2048"      | base_settings() | Container creation |
| `var_disk`   | Disk size in GB         | "8"         | base_settings() | Container creation |
| `DISK_SIZE`  | Disk size (alternative) | `$var_disk` | Environment     | Container creation |
| `CORE_COUNT` | CPU cores (alternative) | `$var_cpu`  | Environment     | Container creation |
| `RAM_SIZE`   | RAM size (alternative)  | `$var_ram`  | Environment     | Container creation |

### Network Configuration Variables
|
||||
|
||||
| Variable      | Description                     | Default        | Set In          | Used In        |
| ------------- | ------------------------------- | -------------- | --------------- | -------------- |
| `var_net`     | Network interface               | "vmbr0"        | base_settings() | Network config |
| `var_bridge`  | Bridge interface                | "vmbr0"        | base_settings() | Network config |
| `var_gateway` | Gateway IP                      | "192.168.1.1"  | base_settings() | Network config |
| `var_ip`      | Container IP address            | -              | User input      | Network config |
| `var_ipv6`    | IPv6 address                    | -              | User input      | Network config |
| `var_vlan`    | VLAN ID                         | -              | User input      | Network config |
| `var_mtu`     | MTU size                        | "1500"         | base_settings() | Network config |
| `var_mac`     | MAC address                     | Auto-generated | base_settings() | Network config |
| `NET`         | Network interface (alternative) | `$var_net`     | Environment     | Network config |
| `BRG`         | Bridge interface (alternative)  | `$var_bridge`  | Environment     | Network config |
| `GATE`        | Gateway IP (alternative)        | `$var_gateway` | Environment     | Network config |
| `IPV6_METHOD` | IPv6 configuration method       | "none"         | Environment     | Network config |
| `VLAN`        | VLAN ID (alternative)           | `$var_vlan`    | Environment     | Network config |
| `MTU`         | MTU size (alternative)          | `$var_mtu`     | Environment     | Network config |
| `MAC`         | MAC address (alternative)       | `$var_mac`     | Environment     | Network config |

### Storage Configuration Variables
|
||||
|
||||
| Variable                | Description                      | Default                  | Set In           | Used In           |
| ----------------------- | -------------------------------- | ------------------------ | ---------------- | ----------------- |
| `var_template_storage`  | Storage for templates            | -                        | select_storage() | Template storage  |
| `var_container_storage` | Storage for container disks      | -                        | select_storage() | Container storage |
| `TEMPLATE_STORAGE`      | Template storage (alternative)   | `$var_template_storage`  | Environment      | Template storage  |
| `CONTAINER_STORAGE`     | Container storage (alternative)  | `$var_container_storage` | Environment      | Container storage |

### Feature Flags

| Variable | Description | Default | Set In | Used In |
| --------------------- | --------------------------- | ------- | --------------- | ------------------ |
| `ENABLE_FUSE` | Enable FUSE support | "true" | base_settings() | Container features |
| `ENABLE_TUN` | Enable TUN/TAP support | "true" | base_settings() | Container features |
| `ENABLE_KEYCTL` | Enable keyctl support | "true" | base_settings() | Container features |
| `ENABLE_MOUNT` | Enable mount support | "true" | base_settings() | Container features |
| `ENABLE_NESTING` | Enable nesting support | "false" | base_settings() | Container features |
| `ENABLE_PRIVILEGED` | Enable privileged mode | "false" | base_settings() | Container features |
| `ENABLE_UNPRIVILEGED` | Enable unprivileged mode | "true" | base_settings() | Container features |
| `VERBOSE` | Enable verbose output | "false" | Environment | Logging |
| `SSH` | Enable SSH key provisioning | "true" | base_settings() | SSH setup |

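The flags are plain "true"/"false" strings and can be combined freely before a run. A sketch of one possible combination (illustrative only):

```bash
# Example: a container that needs nesting (e.g. to run Docker inside) but no FUSE
export ENABLE_NESTING="true"
export ENABLE_FUSE="false"
export VERBOSE="true"   # show full installer output
export SSH="true"       # provision SSH keys
bash -c "$(curl -fsSL https://...debian.sh)"
```
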
### GPU Passthrough Variables

| Variable | Description | Default | Set In | Used In |
| ------------ | ------------------------------- | ------- | ------------------------------------------- | ------------------ |
| `var_gpu` | Enable GPU passthrough | "no" | CT script / Environment / Advanced Settings | GPU passthrough |
| `ENABLE_GPU` | GPU passthrough flag (internal) | "no" | Advanced Settings | Container creation |

**Note**: GPU passthrough is controlled via `var_gpu`. Apps that benefit from GPU acceleration (media servers, AI/ML, transcoding) have `var_gpu=yes` as the default in their CT scripts.

**Apps with GPU enabled by default**:

- Media: jellyfin, plex, emby, channels, ersatztv, tunarr, immich
- Transcoding: tdarr, unmanic, fileflows
- AI/ML: ollama, openwebui
- NVR: frigate

**Usage Examples**:

```bash
# Disable GPU for a specific installation
var_gpu=no bash -c "$(curl -fsSL https://...jellyfin.sh)"

# Enable GPU for apps without default GPU support
var_gpu=yes bash -c "$(curl -fsSL https://...debian.sh)"

# Set in default.vars for all apps
echo "var_gpu=yes" >> /usr/local/community-scripts/default.vars
```

### API and Diagnostics Variables

| Variable | Description | Default | Set In | Used In |
| ------------- | ------------------------ | --------- | ----------- | ----------------- |
| `DIAGNOSTICS` | Enable diagnostics mode | "false" | Environment | Diagnostics |
| `METHOD` | Installation method | "install" | Environment | Installation flow |
| `RANDOM_UUID` | Random UUID for tracking | - | Environment | Logging |
| `API_TOKEN` | Proxmox API token | - | Environment | API calls |
| `API_USER` | Proxmox API user | - | Environment | API calls |

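These values are read from the environment, so they can be set per run. A sketch, assuming the variables are picked up as listed above; the UUID is generated locally purely for log correlation:

```bash
# Opt out of diagnostics and tag the run for easier log correlation
export DIAGNOSTICS="false"
export METHOD="install"
export RANDOM_UUID="$(uuidgen)"
bash -c "$(curl -fsSL https://...debian.sh)"
```
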
### Settings Persistence Variables

| Variable | Description | Default | Set In | Used In |
| ------------------- | -------------------------- | ------------------------------------------------- | ----------- | -------------------- |
| `SAVE_DEFAULTS` | Save settings as defaults | "false" | User input | Settings persistence |
| `SAVE_APP_DEFAULTS` | Save app-specific defaults | "false" | User input | Settings persistence |
| `DEFAULT_VARS_FILE` | Path to default.vars | "/usr/local/community-scripts/default.vars" | Environment | Settings persistence |
| `APP_DEFAULTS_FILE` | Path to app.vars | "/usr/local/community-scripts/defaults/$APP.vars" | Environment | Settings persistence |

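Both files use simple `key=value` lines, the same format the `echo "var_gpu=yes"` example above appends to `/usr/local/community-scripts/default.vars`. A sketch of what a persisted `default.vars` might contain; the values are illustrative:

```bash
# /usr/local/community-scripts/default.vars (illustrative contents)
var_gpu=yes
var_vlan=100
var_mtu=1500
var_template_storage=local
var_container_storage=local
```
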
## Variable Precedence Chain

@@ -152,6 +172,7 @@ export SSH="true"

## Environment Variable Usage Patterns

### 1. Container Creation

```bash
# Basic container creation
export APP="nextcloud"
@@ -170,6 +191,7 @@ export var_container_storage="local"
```

### 2. GPU Passthrough

```bash
# Enable GPU passthrough
export GPU_APPS="plex,jellyfin,emby"
@@ -178,6 +200,7 @@ export ENABLE_PRIVILEGED="true"
```

### 3. Advanced Network Configuration

```bash
# VLAN and IPv6 configuration
export var_vlan="100"
@@ -187,6 +210,7 @@ export var_mtu="9000"
```

### 4. Storage Configuration

```bash
# Custom storage locations
export var_template_storage="nfs-storage"
@@ -206,6 +230,7 @@ The script validates variables at several points:

## Common Variable Combinations

### Development Container

```bash
export APP="dev-container"
export CTID="200"
@@ -220,6 +245,7 @@ export ENABLE_PRIVILEGED="true"
```

### Media Server with GPU

```bash
export APP="plex"
export CTID="300"
@@ -235,6 +261,7 @@ export ENABLE_PRIVILEGED="true"
```

### Lightweight Service

```bash
export APP="nginx"
export CTID="400"
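# Putting it together: a hypothetical complete lightweight-service setup.
# The resource values below are assumptions for illustration; var_cpu, var_ram
# and var_disk follow the var_* naming convention used throughout this guide.
export var_cpu="1"
export var_ram="512"
export var_disk="2"
export ENABLE_UNPRIVILEGED="true"
export SSH="false"
export VERBOSE="false"
```
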
22
frontend/bun.lock
generated
@ -31,7 +31,7 @@
|
||||
"lucide-react": "^0.554.0",
|
||||
"mini-svg-data-uri": "^1.4.4",
|
||||
"motion": "^12.23.12",
|
||||
"next": "15.5.2",
|
||||
"next": "15.5.7",
|
||||
"next-themes": "^0.4.4",
|
||||
"nuqs": "^2.4.1",
|
||||
"prettier-plugin-organize-imports": "^4.1.0",
|
||||
@ -331,25 +331,25 @@
|
||||
|
||||
"@napi-rs/wasm-runtime": ["@napi-rs/wasm-runtime@0.2.12", "", { "dependencies": { "@emnapi/core": "^1.4.3", "@emnapi/runtime": "^1.4.3", "@tybys/wasm-util": "^0.10.0" } }, "sha512-ZVWUcfwY4E/yPitQJl481FjFo3K22D6qF0DuFH6Y/nbnE11GY5uguDxZMGXPQ8WQ0128MXQD7TnfHyK4oWoIJQ=="],
|
||||
|
||||
"@next/env": ["@next/env@15.5.2", "", {}, "sha512-Qe06ew4zt12LeO6N7j8/nULSOe3fMXE4dM6xgpBQNvdzyK1sv5y4oAP3bq4LamrvGCZtmRYnW8URFCeX5nFgGg=="],
|
||||
"@next/env": ["@next/env@15.5.7", "", {}, "sha512-4h6Y2NyEkIEN7Z8YxkA27pq6zTkS09bUSYC0xjd0NpwFxjnIKeZEeH591o5WECSmjpUhLn3H2QLJcDye3Uzcvg=="],
|
||||
|
||||
"@next/eslint-plugin-next": ["@next/eslint-plugin-next@15.5.6", "", { "dependencies": { "fast-glob": "3.3.1" } }, "sha512-YxDvsT2fwy1j5gMqk3ppXlsgDopHnkM4BoxSVASbvvgh5zgsK8lvWerDzPip8k3WVzsTZ1O7A7si1KNfN4OZfQ=="],
|
||||
|
||||
"@next/swc-darwin-arm64": ["@next/swc-darwin-arm64@15.5.2", "", { "os": "darwin", "cpu": "arm64" }, "sha512-8bGt577BXGSd4iqFygmzIfTYizHb0LGWqH+qgIF/2EDxS5JsSdERJKA8WgwDyNBZgTIIA4D8qUtoQHmxIIquoQ=="],
|
||||
"@next/swc-darwin-arm64": ["@next/swc-darwin-arm64@15.5.7", "", { "os": "darwin", "cpu": "arm64" }, "sha512-IZwtxCEpI91HVU/rAUOOobWSZv4P2DeTtNaCdHqLcTJU4wdNXgAySvKa/qJCgR5m6KI8UsKDXtO2B31jcaw1Yw=="],
|
||||
|
||||
"@next/swc-darwin-x64": ["@next/swc-darwin-x64@15.5.2", "", { "os": "darwin", "cpu": "x64" }, "sha512-2DjnmR6JHK4X+dgTXt5/sOCu/7yPtqpYt8s8hLkHFK3MGkka2snTv3yRMdHvuRtJVkPwCGsvBSwmoQCHatauFQ=="],
|
||||
"@next/swc-darwin-x64": ["@next/swc-darwin-x64@15.5.7", "", { "os": "darwin", "cpu": "x64" }, "sha512-UP6CaDBcqaCBuiq/gfCEJw7sPEoX1aIjZHnBWN9v9qYHQdMKvCKcAVs4OX1vIjeE+tC5EIuwDTVIoXpUes29lg=="],
|
||||
|
||||
"@next/swc-linux-arm64-gnu": ["@next/swc-linux-arm64-gnu@15.5.2", "", { "os": "linux", "cpu": "arm64" }, "sha512-3j7SWDBS2Wov/L9q0mFJtEvQ5miIqfO4l7d2m9Mo06ddsgUK8gWfHGgbjdFlCp2Ek7MmMQZSxpGFqcC8zGh2AA=="],
|
||||
"@next/swc-linux-arm64-gnu": ["@next/swc-linux-arm64-gnu@15.5.7", "", { "os": "linux", "cpu": "arm64" }, "sha512-NCslw3GrNIw7OgmRBxHtdWFQYhexoUCq+0oS2ccjyYLtcn1SzGzeM54jpTFonIMUjNbHmpKpziXnpxhSWLcmBA=="],
|
||||
|
||||
"@next/swc-linux-arm64-musl": ["@next/swc-linux-arm64-musl@15.5.2", "", { "os": "linux", "cpu": "arm64" }, "sha512-s6N8k8dF9YGc5T01UPQ08yxsK6fUow5gG1/axWc1HVVBYQBgOjca4oUZF7s4p+kwhkB1bDSGR8QznWrFZ/Rt5g=="],
|
||||
"@next/swc-linux-arm64-musl": ["@next/swc-linux-arm64-musl@15.5.7", "", { "os": "linux", "cpu": "arm64" }, "sha512-nfymt+SE5cvtTrG9u1wdoxBr9bVB7mtKTcj0ltRn6gkP/2Nu1zM5ei8rwP9qKQP0Y//umK+TtkKgNtfboBxRrw=="],
|
||||
|
||||
"@next/swc-linux-x64-gnu": ["@next/swc-linux-x64-gnu@15.5.2", "", { "os": "linux", "cpu": "x64" }, "sha512-o1RV/KOODQh6dM6ZRJGZbc+MOAHww33Vbs5JC9Mp1gDk8cpEO+cYC/l7rweiEalkSm5/1WGa4zY7xrNwObN4+Q=="],
|
||||
"@next/swc-linux-x64-gnu": ["@next/swc-linux-x64-gnu@15.5.7", "", { "os": "linux", "cpu": "x64" }, "sha512-hvXcZvCaaEbCZcVzcY7E1uXN9xWZfFvkNHwbe/n4OkRhFWrs1J1QV+4U1BN06tXLdaS4DazEGXwgqnu/VMcmqw=="],
|
||||
|
||||
"@next/swc-linux-x64-musl": ["@next/swc-linux-x64-musl@15.5.2", "", { "os": "linux", "cpu": "x64" }, "sha512-/VUnh7w8RElYZ0IV83nUcP/J4KJ6LLYliiBIri3p3aW2giF+PAVgZb6mk8jbQSB3WlTai8gEmCAr7kptFa1H6g=="],
|
||||
"@next/swc-linux-x64-musl": ["@next/swc-linux-x64-musl@15.5.7", "", { "os": "linux", "cpu": "x64" }, "sha512-4IUO539b8FmF0odY6/SqANJdgwn1xs1GkPO5doZugwZ3ETF6JUdckk7RGmsfSf7ws8Qb2YB5It33mvNL/0acqA=="],
|
||||
|
||||
"@next/swc-win32-arm64-msvc": ["@next/swc-win32-arm64-msvc@15.5.2", "", { "os": "win32", "cpu": "arm64" }, "sha512-sMPyTvRcNKXseNQ/7qRfVRLa0VhR0esmQ29DD6pqvG71+JdVnESJaHPA8t7bc67KD5spP3+DOCNLhqlEI2ZgQg=="],
|
||||
"@next/swc-win32-arm64-msvc": ["@next/swc-win32-arm64-msvc@15.5.7", "", { "os": "win32", "cpu": "arm64" }, "sha512-CpJVTkYI3ZajQkC5vajM7/ApKJUOlm6uP4BknM3XKvJ7VXAvCqSjSLmM0LKdYzn6nBJVSjdclx8nYJSa3xlTgQ=="],
|
||||
|
||||
"@next/swc-win32-x64-msvc": ["@next/swc-win32-x64-msvc@15.5.2", "", { "os": "win32", "cpu": "x64" }, "sha512-W5VvyZHnxG/2ukhZF/9Ikdra5fdNftxI6ybeVKYvBPDtyx7x4jPPSNduUkfH5fo3zG0JQ0bPxgy41af2JX5D4Q=="],
|
||||
"@next/swc-win32-x64-msvc": ["@next/swc-win32-x64-msvc@15.5.7", "", { "os": "win32", "cpu": "x64" }, "sha512-gMzgBX164I6DN+9/PGA+9dQiwmTkE4TloBNx8Kv9UiGARsr9Nba7IpcBRA1iTV9vwlYnrE3Uy6I7Aj6qLjQuqw=="],
|
||||
|
||||
"@nodelib/fs.scandir": ["@nodelib/fs.scandir@2.1.5", "", { "dependencies": { "@nodelib/fs.stat": "2.0.5", "run-parallel": "^1.1.9" } }, "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g=="],
|
||||
|
||||
@ -1331,7 +1331,7 @@
|
||||
|
||||
"natural-orderby": ["natural-orderby@5.0.0", "", {}, "sha512-kKHJhxwpR/Okycz4HhQKKlhWe4ASEfPgkSWNmKFHd7+ezuQlxkA5cM3+XkBPvm1gmHen3w53qsYAv+8GwRrBlg=="],
|
||||
|
||||
"next": ["next@15.5.2", "", { "dependencies": { "@next/env": "15.5.2", "@swc/helpers": "0.5.15", "caniuse-lite": "^1.0.30001579", "postcss": "8.4.31", "styled-jsx": "5.1.6" }, "optionalDependencies": { "@next/swc-darwin-arm64": "15.5.2", "@next/swc-darwin-x64": "15.5.2", "@next/swc-linux-arm64-gnu": "15.5.2", "@next/swc-linux-arm64-musl": "15.5.2", "@next/swc-linux-x64-gnu": "15.5.2", "@next/swc-linux-x64-musl": "15.5.2", "@next/swc-win32-arm64-msvc": "15.5.2", "@next/swc-win32-x64-msvc": "15.5.2", "sharp": "^0.34.3" }, "peerDependencies": { "@opentelemetry/api": "^1.1.0", "@playwright/test": "^1.51.1", "babel-plugin-react-compiler": "*", "react": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", "react-dom": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", "sass": "^1.3.0" }, "optionalPeers": ["@opentelemetry/api", "@playwright/test", "babel-plugin-react-compiler", "sass"], "bin": { "next": "dist/bin/next" } }, "sha512-H8Otr7abj1glFhbGnvUt3gz++0AF1+QoCXEBmd/6aKbfdFwrn0LpA836Ed5+00va/7HQSDD+mOoVhn3tNy3e/Q=="],
|
||||
"next": ["next@15.5.7", "", { "dependencies": { "@next/env": "15.5.7", "@swc/helpers": "0.5.15", "caniuse-lite": "^1.0.30001579", "postcss": "8.4.31", "styled-jsx": "5.1.6" }, "optionalDependencies": { "@next/swc-darwin-arm64": "15.5.7", "@next/swc-darwin-x64": "15.5.7", "@next/swc-linux-arm64-gnu": "15.5.7", "@next/swc-linux-arm64-musl": "15.5.7", "@next/swc-linux-x64-gnu": "15.5.7", "@next/swc-linux-x64-musl": "15.5.7", "@next/swc-win32-arm64-msvc": "15.5.7", "@next/swc-win32-x64-msvc": "15.5.7", "sharp": "^0.34.3" }, "peerDependencies": { "@opentelemetry/api": "^1.1.0", "@playwright/test": "^1.51.1", "babel-plugin-react-compiler": "*", "react": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", "react-dom": "^18.2.0 || 19.0.0-rc-de68d2f4-20241204 || ^19.0.0", "sass": "^1.3.0" }, "optionalPeers": ["@opentelemetry/api", "@playwright/test", "babel-plugin-react-compiler", "sass"], "bin": { "next": "dist/bin/next" } }, "sha512-+t2/0jIJ48kUpGKkdlhgkv+zPTEOoXyr60qXe68eB/pl3CMJaLeIGjzp5D6Oqt25hCBiBTt8wEeeAzfJvUKnPQ=="],
|
||||
|
||||
"next-themes": ["next-themes@0.4.6", "", { "peerDependencies": { "react": "^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc", "react-dom": "^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc" } }, "sha512-pZvgD5L0IEvX5/9GWyHMf3m8BKiVQwsCMHfoFosXtXBMnaS0ZnIJ9ST4b4NqLVKDEm8QBxoNNGNaBv2JNF6XNA=="],
|
||||
|
||||
|
||||
40
frontend/public/json/endurain.json
Normal file
@@ -0,0 +1,40 @@
{
  "name": "Endurain",
  "slug": "endurain",
  "categories": [
    24
  ],
  "date_created": "2025-12-05",
  "type": "ct",
  "updateable": true,
  "privileged": false,
  "interface_port": 8080,
  "documentation": "https://docs.endurain.com/",
  "website": "https://github.com/joaovitoriasilva/endurain",
  "logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/endurain.webp",
  "config_path": "/opt/endurain/.env",
  "description": "Endurain is a self-hosted fitness tracking service designed to give users full control over their data and hosting environment. It's similar to Strava but focused on privacy and customization",
  "install_methods": [
    {
      "type": "default",
      "script": "ct/endurain.sh",
      "resources": {
        "cpu": 2,
        "ram": 2048,
        "hdd": 5,
        "os": "debian",
        "version": "13"
      }
    }
  ],
  "default_credentials": {
    "username": "admin",
    "password": "admin"
  },
  "notes": [
    {
      "text": "When using a reverse proxy, edit `/opt/endurain/frontend/app/dist/env.js`.",
      "type": "info"
    }
  ]
}

@@ -22,8 +22,8 @@
        "cpu": 2,
        "ram": 2048,
        "hdd": 6,
        "os": "debian",
        "version": "13"
        "os": "ubuntu",
        "version": "24.04"
      }
    }
  ],

35
frontend/public/json/metube.json
Normal file
@@ -0,0 +1,35 @@
{
  "name": "MeTube",
  "slug": "metube",
  "categories": [
    11
  ],
  "date_created": "2025-12-05",
  "type": "ct",
  "updateable": true,
  "privileged": false,
  "interface_port": 8081,
  "documentation": "https://github.com/alexta69/metube/blob/master/README.md",
  "website": "https://github.com/alexta69/metube",
  "logo": "https://cdn.jsdelivr.net/gh/selfhst/icon@master/webp/metube.webp",
  "config_path": "/opt/metube/.env",
  "description": "MeTube allows you to download videos from YouTube and dozens of other sites.",
  "install_methods": [
    {
      "type": "default",
      "script": "ct/metube.sh",
      "resources": {
        "cpu": 1,
        "ram": 2048,
        "hdd": 10,
        "os": "debian",
        "version": "13"
      }
    }
  ],
  "default_credentials": {
    "username": null,
    "password": null
  },
  "notes": []
}

@@ -33,11 +33,11 @@
  },
  "notes": [
    {
      "text": "The integrated daemon config is located at `/root/.config/daemon/config.json`",
      "text": "To configure the integrated daemon after install is complete, either use the `Create Daemon` menu in the UI, or run `/root/configure_daemon.sh` for automatic configuration",
      "type": "info"
    },
    {
      "text": "When using a reverse proxy, edit `/opt/netvisor/ui/build/_app/env.js`: add 443 to `PUBLIC_SERVER_PORT` and remove 'default' from `PUBLIC_SERVER_HOSTNAME`.",
      "text": "The integrated daemon config is located at `/root/.config/daemon/config.json`",
      "type": "info"
    }
  ]

@ -35,10 +35,6 @@
    {
      "text": "Set a root password if using autologin. This will be the Proxmox-Datacenter-Manager password. `sudo passwd root`",
      "type": "info"
    },
    {
      "text": "Proxmox Datacenter Manager is in an alpha stage of development. Use it cautiously, as bugs, incomplete features, and potential instabilities are expected.",
      "type": "warning"
    }
  ]
}

@ -1,8 +1,253 @@
|
||||
[
|
||||
{
|
||||
"name": "morpheus65535/bazarr",
|
||||
"version": "v1.5.3",
|
||||
"date": "2025-09-20T12:12:33Z"
|
||||
},
|
||||
{
|
||||
"name": "Jackett/Jackett",
|
||||
"version": "v0.24.415",
|
||||
"date": "2025-12-07T05:56:32Z"
|
||||
},
|
||||
{
|
||||
"name": "BerriAI/litellm",
|
||||
"version": "v1.80.8.rc.1",
|
||||
"date": "2025-12-07T01:36:40Z"
|
||||
},
|
||||
{
|
||||
"name": "umami-software/umami",
|
||||
"version": "v2.20.1",
|
||||
"date": "2025-12-07T01:14:23Z"
|
||||
},
|
||||
{
|
||||
"name": "steveiliop56/tinyauth",
|
||||
"version": "v4.1.0",
|
||||
"date": "2025-11-23T12:13:34Z"
|
||||
},
|
||||
{
|
||||
"name": "jeedom/core",
|
||||
"version": "4.5",
|
||||
"date": "2025-12-07T00:27:06Z"
|
||||
},
|
||||
{
|
||||
"name": "seerr-team/seerr",
|
||||
"version": "preview-test-fix-subscriptions",
|
||||
"date": "2025-12-06T22:36:36Z"
|
||||
},
|
||||
{
|
||||
"name": "sysadminsmedia/homebox",
|
||||
"version": "v0.22.0-rc.2",
|
||||
"date": "2025-12-06T21:24:28Z"
|
||||
},
|
||||
{
|
||||
"name": "keycloak/keycloak",
|
||||
"version": "26.4.7",
|
||||
"date": "2025-12-01T08:14:11Z"
|
||||
},
|
||||
{
|
||||
"name": "Koenkk/zigbee2mqtt",
|
||||
"version": "2.7.1",
|
||||
"date": "2025-12-06T20:30:34Z"
|
||||
},
|
||||
{
|
||||
"name": "blakeblackshear/frigate",
|
||||
"version": "v0.14.1",
|
||||
"date": "2024-08-29T22:32:51Z"
|
||||
},
|
||||
{
|
||||
"name": "navidrome/navidrome",
|
||||
"version": "v0.59.0",
|
||||
"date": "2025-12-06T18:08:42Z"
|
||||
},
|
||||
{
|
||||
"name": "Brandawg93/PeaNUT",
|
||||
"version": "v5.19.2",
|
||||
"date": "2025-12-06T14:56:53Z"
|
||||
},
|
||||
{
|
||||
"name": "openhab/openhab-core",
|
||||
"version": "4.3.9",
|
||||
"date": "2025-12-06T14:31:36Z"
|
||||
},
|
||||
{
|
||||
"name": "fuma-nama/fumadocs",
|
||||
"version": "fumadocs-openapi@10.1.1",
|
||||
"date": "2025-12-06T11:27:58Z"
|
||||
},
|
||||
{
|
||||
"name": "YunoHost/yunohost",
|
||||
"version": "debian/13.0.2",
|
||||
"date": "2025-12-06T10:46:12Z"
|
||||
},
|
||||
{
|
||||
"name": "toniebox-reverse-engineering/teddycloud",
|
||||
"version": "tc_v0.6.5",
|
||||
"date": "2025-12-06T10:32:07Z"
|
||||
},
|
||||
{
|
||||
"name": "inspircd/inspircd",
|
||||
"version": "v4.9.0",
|
||||
"date": "2025-12-06T08:58:40Z"
|
||||
},
|
||||
{
|
||||
"name": "chrisvel/tududi",
|
||||
"version": "v0.87",
|
||||
"date": "2025-12-06T07:36:26Z"
|
||||
},
|
||||
{
|
||||
"name": "firefly-iii/firefly-iii",
|
||||
"version": "v6.4.9",
|
||||
"date": "2025-11-28T20:36:20Z"
|
||||
},
|
||||
{
|
||||
"name": "tobychui/zoraxy",
|
||||
"version": "v3.3.0",
|
||||
"date": "2025-12-06T06:18:23Z"
|
||||
},
|
||||
{
|
||||
"name": "theonedev/onedev",
|
||||
"version": "v13.1.3",
|
||||
"date": "2025-12-06T04:40:09Z"
|
||||
},
|
||||
{
|
||||
"name": "ollama/ollama",
|
||||
"version": "v0.13.2-rc1",
|
||||
"date": "2025-12-04T23:19:06Z"
|
||||
},
|
||||
{
|
||||
"name": "Stirling-Tools/Stirling-PDF",
|
||||
"version": "v2.1.1",
|
||||
"date": "2025-12-05T23:48:08Z"
|
||||
},
|
||||
{
|
||||
"name": "chrisbenincasa/tunarr",
|
||||
"version": "v0.23.0-alpha.30",
|
||||
"date": "2025-12-05T21:23:38Z"
|
||||
},
|
||||
{
|
||||
"name": "home-assistant/core",
|
||||
"version": "2025.12.1",
|
||||
"date": "2025-12-05T21:10:31Z"
|
||||
},
|
||||
{
|
||||
"name": "n8n-io/n8n",
|
||||
"version": "n8n@1.122.5",
|
||||
"date": "2025-12-04T14:09:39Z"
|
||||
},
|
||||
{
|
||||
"name": "homarr-labs/homarr",
|
||||
"version": "v1.45.2",
|
||||
"date": "2025-12-05T19:17:09Z"
|
||||
},
|
||||
{
|
||||
"name": "booklore-app/booklore",
|
||||
"version": "v1.13.2",
|
||||
"date": "2025-12-05T16:03:08Z"
|
||||
},
|
||||
{
|
||||
"name": "tailscale/tailscale",
|
||||
"version": "v1.92.1",
|
||||
"date": "2025-12-05T15:53:22Z"
|
||||
},
|
||||
{
|
||||
"name": "FlowiseAI/Flowise",
|
||||
"version": "flowise@3.0.12",
|
||||
"date": "2025-12-05T15:02:01Z"
|
||||
},
|
||||
{
|
||||
"name": "emqx/emqx",
|
||||
"version": "e6.1.0-streams.1",
|
||||
"date": "2025-12-05T12:27:36Z"
|
||||
},
|
||||
{
|
||||
"name": "Luligu/matterbridge",
|
||||
"version": "3.4.2",
|
||||
"date": "2025-12-05T12:20:54Z"
|
||||
},
|
||||
{
|
||||
"name": "node-red/node-red",
|
||||
"version": "4.1.2",
|
||||
"date": "2025-12-03T16:12:05Z"
|
||||
},
|
||||
{
|
||||
"name": "traefik/traefik",
|
||||
"version": "v3.6.4",
|
||||
"date": "2025-12-05T09:58:17Z"
|
||||
},
|
||||
{
|
||||
"name": "alexta69/metube",
|
||||
"version": "2025.12.05",
|
||||
"date": "2025-12-05T09:45:02Z"
|
||||
},
|
||||
{
|
||||
"name": "esphome/esphome",
|
||||
"version": "2025.11.4",
|
||||
"date": "2025-12-05T03:54:58Z"
|
||||
},
|
||||
{
|
||||
"name": "documenso/documenso",
|
||||
"version": "v2.2.4",
|
||||
"date": "2025-12-05T01:23:23Z"
|
||||
},
|
||||
{
|
||||
"name": "transmission/transmission",
|
||||
"version": "4.0.1-beta.1",
|
||||
"date": "2024-12-13T00:16:24Z"
|
||||
},
|
||||
{
|
||||
"name": "LibreTranslate/LibreTranslate",
|
||||
"version": "v1.8.1",
|
||||
"date": "2025-12-03T22:46:51Z"
|
||||
"version": "v1.8.3",
|
||||
"date": "2025-12-04T21:07:00Z"
|
||||
},
|
||||
{
|
||||
"name": "caddyserver/caddy",
|
||||
"version": "v2.10.2",
|
||||
"date": "2025-08-23T03:10:31Z"
|
||||
},
|
||||
{
|
||||
"name": "msgbyte/tianji",
|
||||
"version": "v1.30.20",
|
||||
"date": "2025-12-04T18:17:47Z"
|
||||
},
|
||||
{
|
||||
"name": "AdguardTeam/AdGuardHome",
|
||||
"version": "v0.107.70",
|
||||
"date": "2025-12-03T16:12:15Z"
|
||||
},
|
||||
{
|
||||
"name": "wazuh/wazuh",
|
||||
"version": "coverity-w49-4.14.2",
|
||||
"date": "2025-12-02T14:01:48Z"
|
||||
},
|
||||
{
|
||||
"name": "crowdsecurity/crowdsec",
|
||||
"version": "v1.7.4",
|
||||
"date": "2025-12-04T13:56:39Z"
|
||||
},
|
||||
{
|
||||
"name": "pocketbase/pocketbase",
|
||||
"version": "v0.34.2",
|
||||
"date": "2025-12-04T13:08:18Z"
|
||||
},
|
||||
{
|
||||
"name": "glpi-project/glpi",
|
||||
"version": "11.0.4",
|
||||
"date": "2025-12-04T09:26:37Z"
|
||||
},
|
||||
{
|
||||
"name": "bunkerity/bunkerweb",
|
||||
"version": "v1.6.6",
|
||||
"date": "2025-11-24T15:30:21Z"
|
||||
},
|
||||
{
|
||||
"name": "kyantech/Palmr",
|
||||
"version": "v3.3.1-beta",
|
||||
"date": "2025-12-04T03:33:38Z"
|
||||
},
|
||||
{
|
||||
"name": "gtsteffaniak/filebrowser",
|
||||
"version": "v1.2.0-experimental-sql-indexing",
|
||||
"date": "2025-12-04T01:35:46Z"
|
||||
},
|
||||
{
|
||||
"name": "rcourtman/Pulse",
|
||||
@ -14,71 +259,26 @@
|
||||
"version": "v5.42.1",
|
||||
"date": "2025-12-03T22:42:22Z"
|
||||
},
|
||||
{
|
||||
"name": "BerriAI/litellm",
|
||||
"version": "v1.80.5-stable",
|
||||
"date": "2025-12-03T22:09:03Z"
|
||||
},
|
||||
{
|
||||
"name": "Stirling-Tools/Stirling-PDF",
|
||||
"version": "v2.1.0",
|
||||
"date": "2025-12-03T21:29:10Z"
|
||||
},
|
||||
{
|
||||
"name": "danielbrendel/hortusfox-web",
|
||||
"version": "v5.5",
|
||||
"date": "2025-12-03T21:20:30Z"
|
||||
},
|
||||
{
|
||||
"name": "chrisbenincasa/tunarr",
|
||||
"version": "v0.23.0-alpha.28",
|
||||
"date": "2025-12-03T20:32:43Z"
|
||||
},
|
||||
{
|
||||
"name": "gelbphoenix/autocaliweb",
|
||||
"version": "v0.11.1",
|
||||
"date": "2025-12-03T18:24:51Z"
|
||||
},
|
||||
{
|
||||
"name": "home-assistant/core",
|
||||
"version": "2025.12.0",
|
||||
"date": "2025-12-03T18:07:51Z"
|
||||
},
|
||||
{
|
||||
"name": "actualbudget/actual",
|
||||
"version": "v25.12.0",
|
||||
"date": "2025-12-03T17:45:09Z"
|
||||
},
|
||||
{
|
||||
"name": "homarr-labs/homarr",
|
||||
"version": "v1.45.1",
|
||||
"date": "2025-12-03T17:43:51Z"
|
||||
},
|
||||
{
|
||||
"name": "caddyserver/caddy",
|
||||
"version": "v2.11.0-beta.1",
|
||||
"date": "2025-12-03T17:23:02Z"
|
||||
},
|
||||
{
|
||||
"name": "karakeep-app/karakeep",
|
||||
"version": "v0.29.1",
|
||||
"date": "2025-12-03T16:56:18Z"
|
||||
},
|
||||
{
|
||||
"name": "esphome/esphome",
|
||||
"version": "2025.11.3",
|
||||
"date": "2025-12-03T16:37:01Z"
|
||||
},
|
||||
{
|
||||
"name": "AdguardTeam/AdGuardHome",
|
||||
"version": "v0.107.70",
|
||||
"date": "2025-12-03T16:12:15Z"
|
||||
},
|
||||
{
|
||||
"name": "node-red/node-red",
|
||||
"version": "4.1.2",
|
||||
"date": "2025-12-03T16:12:05Z"
|
||||
},
|
||||
{
|
||||
"name": "BookStackApp/BookStack",
|
||||
"version": "v25.11.5",
|
||||
@ -94,31 +294,6 @@
|
||||
"version": "6.2.10",
|
||||
"date": "2025-12-03T13:58:32Z"
|
||||
},
|
||||
{
|
||||
"name": "n8n-io/n8n",
|
||||
"version": "n8n@1.121.3",
|
||||
"date": "2025-11-26T14:58:48Z"
|
||||
},
|
||||
{
|
||||
"name": "documenso/documenso",
|
||||
"version": "v2.2.0",
|
||||
"date": "2025-12-03T13:31:11Z"
|
||||
},
|
||||
{
|
||||
"name": "crowdsecurity/crowdsec",
|
||||
"version": "v1.7.3",
|
||||
"date": "2025-10-24T10:51:12Z"
|
||||
},
|
||||
{
|
||||
"name": "kyantech/Palmr",
|
||||
"version": "v3.3.0-beta",
|
||||
"date": "2025-12-03T12:59:19Z"
|
||||
},
|
||||
{
|
||||
"name": "glpi-project/glpi",
|
||||
"version": "11.0.3",
|
||||
"date": "2025-12-03T12:00:17Z"
|
||||
},
|
||||
{
|
||||
"name": "laurent22/joplin",
|
||||
"version": "server-v3.5.1",
|
||||
@ -134,11 +309,6 @@
|
||||
"version": "v10.11.8",
|
||||
"date": "2025-11-21T17:06:07Z"
|
||||
},
|
||||
{
|
||||
"name": "Jackett/Jackett",
|
||||
"version": "v0.24.399",
|
||||
"date": "2025-12-03T05:55:33Z"
|
||||
},
|
||||
{
|
||||
"name": "openobserve/openobserve",
|
||||
"version": "v0.20.2",
|
||||
@ -149,16 +319,6 @@
|
||||
"version": "2.1.1",
|
||||
"date": "2025-06-14T17:45:06Z"
|
||||
},
|
||||
{
|
||||
"name": "jeedom/core",
|
||||
"version": "4.5",
|
||||
"date": "2025-12-03T00:27:06Z"
|
||||
},
|
||||
{
|
||||
"name": "steveiliop56/tinyauth",
|
||||
"version": "v4.1.0",
|
||||
"date": "2025-11-23T12:13:34Z"
|
||||
},
|
||||
{
|
||||
"name": "henrygd/beszel",
|
||||
"version": "v0.17.0",
|
||||
@ -167,7 +327,7 @@
|
||||
{
|
||||
"name": "mealie-recipes/mealie",
|
||||
"version": "v3.6.1",
|
||||
"date": "2025-12-02T23:08:41Z"
|
||||
"date": "2025-12-02T22:54:10Z"
|
||||
},
|
||||
{
|
||||
"name": "apache/tomcat",
|
||||
@ -199,21 +359,6 @@
|
||||
"version": "v2.8.0",
|
||||
"date": "2025-12-02T20:22:32Z"
|
||||
},
|
||||
{
|
||||
"name": "keycloak/keycloak",
|
||||
"version": "26.4.7",
|
||||
"date": "2025-12-01T08:14:11Z"
|
||||
},
|
||||
{
|
||||
"name": "ollama/ollama",
|
||||
"version": "v0.13.1-rc2",
|
||||
"date": "2025-12-02T17:28:41Z"
|
||||
},
|
||||
{
|
||||
"name": "pocketbase/pocketbase",
|
||||
"version": "v0.34.1",
|
||||
"date": "2025-12-02T18:43:50Z"
|
||||
},
|
||||
{
|
||||
"name": "WordPress/WordPress",
|
||||
"version": "6.9",
|
||||
@ -229,36 +374,11 @@
|
||||
"version": "jenkins-2.540",
|
||||
"date": "2025-12-02T16:56:49Z"
|
||||
},
|
||||
{
|
||||
"name": "fuma-nama/fumadocs",
|
||||
"version": "fumadocs-ui@16.2.2",
|
||||
"date": "2025-12-02T16:17:30Z"
|
||||
},
|
||||
{
|
||||
"name": "nzbgetcom/nzbget",
|
||||
"version": "v25.4",
|
||||
"date": "2025-10-09T10:27:01Z"
|
||||
},
|
||||
{
|
||||
"name": "gtsteffaniak/filebrowser",
|
||||
"version": "v1.0.3-stable",
|
||||
"date": "2025-12-02T15:21:13Z"
|
||||
},
|
||||
{
|
||||
"name": "Brandawg93/PeaNUT",
|
||||
"version": "v5.19.0",
|
||||
"date": "2025-12-02T14:58:59Z"
|
||||
},
|
||||
{
|
||||
"name": "wazuh/wazuh",
|
||||
"version": "coverity-w49-4.14.2",
|
||||
"date": "2025-12-02T14:01:48Z"
|
||||
},
|
||||
{
|
||||
"name": "tobychui/zoraxy",
|
||||
"version": "v3.3.0-rc3",
|
||||
"date": "2025-12-02T13:45:13Z"
|
||||
},
|
||||
{
|
||||
"name": "docker/compose",
|
||||
"version": "v5.0.0",
|
||||
@ -269,21 +389,16 @@
|
||||
"version": "v3.8.0",
|
||||
"date": "2025-12-02T10:11:05Z"
|
||||
},
|
||||
{
|
||||
"name": "neo4j/neo4j",
|
||||
"version": "5.26.18",
|
||||
"date": "2025-12-02T09:25:19Z"
|
||||
},
|
||||
{
|
||||
"name": "syncthing/syncthing",
|
||||
"version": "v2.0.12",
|
||||
"date": "2025-12-02T08:11:24Z"
|
||||
},
|
||||
{
|
||||
"name": "morpheus65535/bazarr",
|
||||
"version": "v1.5.3",
|
||||
"date": "2025-09-20T12:12:33Z"
|
||||
},
|
||||
{
|
||||
"name": "booklore-app/booklore",
|
||||
"version": "v1.13.1",
|
||||
"date": "2025-12-02T03:59:00Z"
|
||||
},
|
||||
{
|
||||
"name": "advplyr/audiobookshelf",
|
||||
"version": "v2.31.0",
|
||||
@ -309,11 +424,6 @@
|
||||
"version": "v0.16.0",
|
||||
"date": "2025-12-01T21:35:19Z"
|
||||
},
|
||||
{
|
||||
"name": "Koenkk/zigbee2mqtt",
|
||||
"version": "2.7.0",
|
||||
"date": "2025-12-01T20:26:49Z"
|
||||
},
|
||||
{
|
||||
"name": "slskd/slskd",
|
||||
"version": "0.24.1",
|
||||
@ -324,21 +434,11 @@
|
||||
"version": "v3.6.0",
|
||||
"date": "2025-12-01T18:48:10Z"
|
||||
},
|
||||
{
|
||||
"name": "msgbyte/tianji",
|
||||
"version": "v1.30.18",
|
||||
"date": "2025-12-01T17:57:43Z"
|
||||
},
|
||||
{
|
||||
"name": "VictoriaMetrics/VictoriaMetrics",
|
||||
"version": "pmm-6401-v1.131.0",
|
||||
"date": "2025-12-01T17:18:24Z"
|
||||
},
|
||||
{
|
||||
"name": "bunkerity/bunkerweb",
|
||||
"version": "v1.6.6",
|
||||
"date": "2025-11-24T15:30:21Z"
|
||||
},
|
||||
{
|
||||
"name": "OctoPrint/OctoPrint",
|
||||
"version": "1.11.5",
|
||||
@ -349,11 +449,6 @@
|
||||
"version": "0.211.0",
|
||||
"date": "2025-12-01T11:22:11Z"
|
||||
},
|
||||
{
|
||||
"name": "Luligu/matterbridge",
|
||||
"version": "3.4.1",
|
||||
"date": "2025-12-01T11:06:39Z"
|
||||
},
|
||||
{
|
||||
"name": "cockpit-project/cockpit",
|
||||
"version": "310.6",
|
||||
@ -364,11 +459,6 @@
|
||||
"version": "251130-b3068414c",
|
||||
"date": "2025-12-01T05:07:31Z"
|
||||
},
|
||||
{
|
||||
"name": "firefly-iii/firefly-iii",
|
||||
"version": "v6.4.9",
|
||||
"date": "2025-11-28T20:36:20Z"
|
||||
},
|
||||
{
|
||||
"name": "jellyfin/jellyfin",
|
||||
"version": "v10.11.4",
|
||||
@ -404,11 +494,6 @@
|
||||
"version": "0.11.1",
|
||||
"date": "2025-11-30T14:54:03Z"
|
||||
},
|
||||
{
|
||||
"name": "openhab/openhab-core",
|
||||
"version": "5.1.0.M3",
|
||||
"date": "2025-11-30T14:36:37Z"
|
||||
},
|
||||
{
|
||||
"name": "healthchecks/healthchecks",
|
||||
"version": "v3.13",
|
||||
@ -449,11 +534,6 @@
|
||||
"version": "v4.39.15",
|
||||
"date": "2025-11-29T12:13:04Z"
|
||||
},
|
||||
{
|
||||
"name": "alexta69/metube",
|
||||
"version": "2025.11.29",
|
||||
"date": "2025-11-29T07:59:21Z"
|
||||
},
|
||||
{
|
||||
"name": "FlareSolverr/FlareSolverr",
|
||||
"version": "v3.4.6",
|
||||
@ -469,11 +549,6 @@
|
||||
"version": "0.51.4",
|
||||
"date": "2025-11-28T12:26:59Z"
|
||||
},
|
||||
{
|
||||
"name": "chrisvel/tududi",
|
||||
"version": "v0.86.1",
|
||||
"date": "2025-11-14T05:05:44Z"
|
||||
},
|
||||
{
|
||||
"name": "gotson/komga",
|
||||
"version": "1.23.6",
|
||||
@ -494,31 +569,16 @@
|
||||
"version": "v6.3",
|
||||
"date": "2025-11-27T18:12:22Z"
|
||||
},
|
||||
{
|
||||
"name": "theonedev/onedev",
|
||||
"version": "v13.1.2",
|
||||
"date": "2025-11-27T12:48:22Z"
|
||||
},
|
||||
{
|
||||
"name": "ipfs/kubo",
|
||||
"version": "v0.39.0",
|
||||
"date": "2025-11-27T03:47:38Z"
|
||||
},
|
||||
{
|
||||
"name": "YunoHost/yunohost",
|
||||
"version": "debian/12.1.36",
|
||||
"date": "2025-11-27T00:33:48Z"
|
||||
},
|
||||
{
|
||||
"name": "gristlabs/grist-core",
|
||||
"version": "v1.7.8",
|
||||
"date": "2025-11-26T22:35:03Z"
|
||||
},
|
||||
{
|
||||
"name": "tailscale/tailscale",
|
||||
"version": "v1.93.0-pre",
|
||||
"date": "2025-11-26T20:50:59Z"
|
||||
},
|
||||
{
|
||||
"name": "jhuckaby/Cronicle",
|
||||
"version": "v0.9.101",
|
||||
@ -549,11 +609,6 @@
|
||||
"version": "release-1.24.2",
|
||||
"date": "2025-11-26T11:22:30Z"
|
||||
},
|
||||
{
|
||||
"name": "seerr-team/seerr",
|
||||
"version": "preview-test-fix-subscriptions",
|
||||
"date": "2025-11-25T22:11:46Z"
|
||||
},
|
||||
{
|
||||
"name": "wizarrrr/wizarr",
|
||||
"version": "v2025.11.3",
|
||||
@ -654,11 +709,6 @@
|
||||
"version": "5.2.4",
|
||||
"date": "2025-11-21T10:25:05Z"
|
||||
},
|
||||
{
|
||||
"name": "emqx/emqx",
|
||||
"version": "e6.0.2-beta.2",
|
||||
"date": "2025-11-21T10:20:27Z"
|
||||
},
|
||||
{
|
||||
"name": "sabnzbd/sabnzbd",
|
||||
"version": "4.5.5",
|
||||
@ -689,11 +739,6 @@
|
||||
"version": "v3007.9",
|
||||
"date": "2025-11-20T17:58:32Z"
|
||||
},
|
||||
{
|
||||
"name": "neo4j/neo4j",
|
||||
"version": "5.26.17",
|
||||
"date": "2025-11-20T16:09:28Z"
|
||||
},
|
||||
{
|
||||
"name": "kimai/kimai",
|
||||
"version": "2.44.0",
|
||||
@ -744,16 +789,6 @@
|
||||
"version": "v7.5.0",
|
||||
"date": "2025-11-19T08:36:29Z"
|
||||
},
|
||||
{
|
||||
"name": "umami-software/umami",
|
||||
"version": "v3.0.1",
|
||||
"date": "2025-11-18T18:50:35Z"
|
||||
},
|
||||
{
|
||||
"name": "traefik/traefik",
|
||||
"version": "v3.6.2",
|
||||
"date": "2025-11-18T16:25:16Z"
|
||||
},
|
||||
{
|
||||
"name": "redis/redis",
|
||||
"version": "8.4.0",
|
||||
@ -799,11 +834,6 @@
|
||||
"version": "v6.0.4.10291",
|
||||
"date": "2025-11-16T22:39:01Z"
|
||||
},
|
||||
{
|
||||
"name": "sysadminsmedia/homebox",
|
||||
"version": "v0.21.0",
|
||||
"date": "2025-08-23T18:33:53Z"
|
||||
},
|
||||
{
|
||||
"name": "binwiederhier/ntfy",
|
||||
"version": "v2.15.0",
|
||||
@ -824,11 +854,6 @@
|
||||
"version": "v25.11.1",
|
||||
"date": "2025-11-16T13:04:21Z"
|
||||
},
|
||||
{
|
||||
"name": "FlowiseAI/Flowise",
|
||||
"version": "flowise@3.0.11",
|
||||
"date": "2025-11-16T01:29:06Z"
|
||||
},
|
||||
{
|
||||
"name": "cloudreve/cloudreve",
|
||||
"version": "4.10.1",
|
||||
@ -874,11 +899,6 @@
|
||||
"version": "REL_13_23",
|
||||
"date": "2025-11-10T21:59:18Z"
|
||||
},
|
||||
{
|
||||
"name": "navidrome/navidrome",
|
||||
"version": "v0.58.5",
|
||||
"date": "2025-11-09T19:12:41Z"
|
||||
},
|
||||
{
|
||||
"name": "pelican-dev/panel",
|
||||
"version": "v1.0.0-beta28",
|
||||
@ -914,11 +934,6 @@
|
||||
"version": "v4.52.0",
|
||||
"date": "2025-11-06T22:39:26Z"
|
||||
},
|
||||
{
|
||||
"name": "transmission/transmission",
|
||||
"version": "4.0.1-beta.1",
|
||||
"date": "2024-12-13T00:16:24Z"
|
||||
},
|
||||
{
|
||||
"name": "Notifiarr/notifiarr",
|
||||
"version": "v0.9.1",
|
||||
@ -1074,11 +1089,6 @@
|
||||
"version": "v2.13.1",
|
||||
"date": "2025-10-15T13:29:37Z"
|
||||
},
|
||||
{
|
||||
"name": "blakeblackshear/frigate",
|
||||
"version": "v0.14.1",
|
||||
"date": "2024-08-29T22:32:51Z"
|
||||
},
|
||||
{
|
||||
"name": "rogerfar/rdt-client",
|
||||
"version": "v2.0.119",
|
||||
@ -1289,11 +1299,6 @@
|
||||
"version": "v1.28.3",
|
||||
"date": "2025-08-06T12:32:02Z"
|
||||
},
|
||||
{
|
||||
"name": "inspircd/inspircd",
|
||||
"version": "v4.8.0",
|
||||
"date": "2025-08-02T09:12:10Z"
|
||||
},
|
||||
{
|
||||
"name": "Suwayomi/Suwayomi-Server",
|
||||
"version": "v2.1.1867",
|
||||
@ -1454,11 +1459,6 @@
|
||||
"version": "v2.4.2",
|
||||
"date": "2025-03-08T10:49:04Z"
|
||||
},
|
||||
{
|
||||
"name": "toniebox-reverse-engineering/teddycloud",
|
||||
"version": "tc_v0.6.4",
|
||||
"date": "2025-03-05T15:43:40Z"
|
||||
},
|
||||
{
|
||||
"name": "bitmagnet-io/bitmagnet",
|
||||
"version": "v0.10.0",
|
||||
|
||||
@@ -23,7 +23,7 @@
        "ram": 4096,
        "hdd": 8,
        "os": "debian",
        "version": "13"
        "version": "12"
      }
    }
  ],

@ -11,18 +11,16 @@ import {
|
||||
Trophy,
|
||||
XCircle,
|
||||
} from "lucide-react";
|
||||
import {
|
||||
Bar,
|
||||
BarChart,
|
||||
CartesianGrid,
|
||||
Cell,
|
||||
LabelList,
|
||||
XAxis,
|
||||
} from "recharts";
|
||||
import React, { useEffect, useMemo, useState } from "react";
|
||||
import { useEffect, useMemo, useState } from "react";
|
||||
import { Bar, BarChart, CartesianGrid, Cell, LabelList, XAxis } from "recharts";
|
||||
|
||||
import type { ChartConfig } from "@/components/ui/chart";
|
||||
|
||||
import { formattedBadge } from "@/components/command-menu";
|
||||
import { Badge } from "@/components/ui/badge";
|
||||
import { Button } from "@/components/ui/button";
|
||||
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
|
||||
import { ChartContainer, ChartTooltip, ChartTooltipContent } from "@/components/ui/chart";
|
||||
import {
|
||||
Dialog,
|
||||
DialogContent,
|
||||
@ -31,37 +29,10 @@ import {
|
||||
DialogTitle,
|
||||
DialogTrigger,
|
||||
} from "@/components/ui/dialog";
|
||||
import {
|
||||
Table,
|
||||
TableBody,
|
||||
TableCell,
|
||||
TableHead,
|
||||
TableHeader,
|
||||
TableRow,
|
||||
} from "@/components/ui/table";
|
||||
import {
|
||||
Select,
|
||||
SelectContent,
|
||||
SelectItem,
|
||||
SelectTrigger,
|
||||
SelectValue,
|
||||
} from "@/components/ui/select";
|
||||
import {
|
||||
Card,
|
||||
CardContent,
|
||||
CardDescription,
|
||||
CardHeader,
|
||||
CardTitle,
|
||||
} from "@/components/ui/card";
|
||||
import {
|
||||
ChartContainer,
|
||||
ChartTooltip,
|
||||
ChartTooltipContent,
|
||||
} from "@/components/ui/chart";
|
||||
import { formattedBadge } from "@/components/command-menu";
|
||||
import { ScrollArea } from "@/components/ui/scroll-area";
|
||||
import { Button } from "@/components/ui/button";
|
||||
import { Badge } from "@/components/ui/badge";
|
||||
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select";
|
||||
import { Table, TableBody, TableCell, TableHead, TableHeader, TableRow } from "@/components/ui/table";
|
||||
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from "@/components/ui/tooltip";
|
||||
|
||||
type DataModel = {
|
||||
id: number;
|
||||
@ -141,7 +112,8 @@ export default function DataPage() {
|
||||
const [summaryRes, dataRes] = await Promise.all([
|
||||
fetch("https://api.htl-braunau.at/data/summary"),
|
||||
fetch(
|
||||
`https://api.htl-braunau.at/data/paginated?page=${currentPage}&limit=${itemsPerPage === 0 ? "" : itemsPerPage
|
||||
`https://api.htl-braunau.at/data/paginated?page=${currentPage}&limit=${
|
||||
itemsPerPage === 0 ? "" : itemsPerPage
|
||||
}`,
|
||||
),
|
||||
]);
|
||||
@ -158,11 +130,9 @@ export default function DataPage() {
|
||||
|
||||
setSummary(summaryData);
|
||||
setData(pageData);
|
||||
}
|
||||
catch (err) {
|
||||
} catch (err) {
|
||||
setError((err as Error).message);
|
||||
}
|
||||
finally {
|
||||
} finally {
|
||||
setLoading(false);
|
||||
}
|
||||
};
|
||||
@ -171,8 +141,7 @@ export default function DataPage() {
|
||||
}, [currentPage, itemsPerPage]);
|
||||
|
||||
const sortedData = useMemo(() => {
|
||||
if (!sortConfig)
|
||||
return data;
|
||||
if (!sortConfig) return data;
|
||||
return [...data].sort((a, b) => {
|
||||
if (a[sortConfig.key] < b[sortConfig.key]) {
|
||||
return sortConfig.direction === "ascending" ? -1 : 1;
|
||||
@ -186,11 +155,7 @@ export default function DataPage() {
|
||||
|
||||
const requestSort = (key: string) => {
|
||||
let direction: "ascending" | "descending" = "ascending";
|
||||
if (
|
||||
sortConfig
|
||||
&& sortConfig.key === key
|
||||
&& sortConfig.direction === "ascending"
|
||||
) {
|
||||
if (sortConfig && sortConfig.key === key && sortConfig.direction === "ascending") {
|
||||
direction = "descending";
|
||||
}
|
||||
setSortConfig({ key, direction });
|
||||
@ -205,10 +170,8 @@ export default function DataPage() {
|
||||
};
|
||||
|
||||
const getTypeBadge = (type: string) => {
|
||||
if (type === "lxc")
|
||||
return formattedBadge("ct");
|
||||
if (type === "vm")
|
||||
return formattedBadge("vm");
|
||||
if (type === "lxc") return formattedBadge("ct");
|
||||
if (type === "vm") return formattedBadge("vm");
|
||||
return null;
|
||||
};
|
||||
|
||||
@ -219,8 +182,7 @@ export default function DataPage() {
|
||||
const successRate = totalCount > 0 ? (successCount / totalCount) * 100 : 0;
|
||||
|
||||
const allApps = useMemo(() => {
|
||||
if (!summary?.nsapp_count)
|
||||
return [];
|
||||
if (!summary?.nsapp_count) return [];
|
||||
return Object.entries(summary.nsapp_count).sort(([, a], [, b]) => b - a);
|
||||
}, [summary]);
|
||||
|
||||
@ -255,9 +217,7 @@ export default function DataPage() {
|
||||
{/* Header */}
|
||||
<div>
|
||||
<h1 className="text-3xl font-bold tracking-tight">Analytics</h1>
|
||||
<p className="text-muted-foreground">
|
||||
Overview of container installations and system statistics.
|
||||
</p>
|
||||
<p className="text-muted-foreground">Overview of container installations and system statistics.</p>
|
||||
</div>
|
||||
|
||||
{/* Widgets */}
|
||||
@ -269,9 +229,7 @@ export default function DataPage() {
|
||||
</CardHeader>
|
||||
<CardContent>
|
||||
<div className="text-2xl font-bold">{nf.format(totalCount)}</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
Total LXC/VM entries found
|
||||
</p>
|
||||
<p className="text-xs text-muted-foreground">Total LXC/VM entries found</p>
|
||||
</CardContent>
|
||||
</Card>
|
||||
|
||||
@ -281,15 +239,8 @@ export default function DataPage() {
|
||||
<CheckCircle2 className="h-4 w-4 text-green-500" />
|
||||
</CardHeader>
|
||||
<CardContent>
|
||||
<div className="text-2xl font-bold">
|
||||
{successRate.toFixed(1)}
|
||||
%
|
||||
</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{nf.format(successCount)}
|
||||
{" "}
|
||||
successful installations
|
||||
</p>
|
||||
<div className="text-2xl font-bold">{successRate.toFixed(1)}%</div>
|
||||
<p className="text-xs text-muted-foreground">{nf.format(successCount)} successful installations</p>
|
||||
</CardContent>
|
||||
</Card>
|
||||
|
||||
@ -300,9 +251,7 @@ export default function DataPage() {
|
||||
</CardHeader>
|
||||
<CardContent>
|
||||
<div className="text-2xl font-bold">{nf.format(failureCount)}</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
Installations encountered errors
|
||||
</p>
|
||||
<p className="text-xs text-muted-foreground">Installations encountered errors</p>
|
||||
</CardContent>
|
||||
</Card>
|
||||
|
||||
@ -312,13 +261,9 @@ export default function DataPage() {
|
||||
<Trophy className="h-4 w-4 text-yellow-500" />
|
||||
</CardHeader>
|
||||
<CardContent>
|
||||
<div className="truncate text-2xl font-bold">
|
||||
{mostPopularApp ? mostPopularApp[0] : "N/A"}
|
||||
</div>
|
||||
<div className="truncate text-2xl font-bold">{mostPopularApp ? mostPopularApp[0] : "N/A"}</div>
|
||||
<p className="text-xs text-muted-foreground">
|
||||
{mostPopularApp ? nf.format(mostPopularApp[1]) : 0}
|
||||
{" "}
|
||||
installations
|
||||
{mostPopularApp ? nf.format(mostPopularApp[1]) : 0} installations
|
||||
</p>
|
||||
</CardContent>
|
||||
</Card>
|
||||
@ -329,9 +274,7 @@ export default function DataPage() {
|
||||
<CardHeader className="flex flex-row items-center justify-between">
|
||||
<div className="space-y-1.5">
|
||||
<CardTitle>Top Applications</CardTitle>
|
||||
<CardDescription>
|
||||
The most frequently installed applications.
|
||||
</CardDescription>
|
||||
<CardDescription>The most frequently installed applications.</CardDescription>
|
||||
</div>
|
||||
<Dialog>
|
||||
<DialogTrigger asChild>
|
||||
@ -343,26 +286,14 @@ export default function DataPage() {
|
||||
<DialogContent className="max-h-[80vh] sm:max-w-md">
|
||||
<DialogHeader>
|
||||
<DialogTitle>Application Statistics</DialogTitle>
|
||||
<DialogDescription>
|
||||
Installation counts for all
|
||||
{" "}
|
||||
{allApps.length}
|
||||
{" "}
|
||||
applications.
|
||||
</DialogDescription>
|
||||
<DialogDescription>Installation counts for all {allApps.length} applications.</DialogDescription>
|
||||
</DialogHeader>
|
||||
<ScrollArea className="h-[60vh] w-full rounded-md border p-4">
|
||||
<div className="space-y-4">
|
||||
{allApps.map(([name, count], index) => (
|
||||
<div
|
||||
key={name}
|
||||
className="flex items-center justify-between text-sm"
|
||||
>
|
||||
<div key={name} className="flex items-center justify-between text-sm">
|
||||
<div className="flex items-center gap-2">
|
||||
<span className="w-8 font-mono text-muted-foreground">
|
||||
{index + 1}
|
||||
.
|
||||
</span>
|
||||
<span className="w-8 font-mono text-muted-foreground">{index + 1}.</span>
|
||||
<span className="font-medium">{name}</span>
|
||||
</div>
|
||||
<span className="font-mono">{nf.format(count)}</span>
|
||||
@ -375,47 +306,37 @@ export default function DataPage() {
|
||||
</CardHeader>
|
||||
<CardContent className="pl-2">
|
||||
<div className="h-[300px] w-full">
|
||||
{loading
|
||||
? (
|
||||
<div className="flex h-full w-full items-center justify-center">
|
||||
<Loader2 className="h-8 w-8 animate-spin text-muted-foreground" />
|
||||
</div>
|
||||
)
|
||||
: (
|
||||
<ChartContainer config={chartConfigApps} className="h-full w-full">
|
||||
<BarChart
|
||||
accessibilityLayer
|
||||
data={appsChartData}
|
||||
margin={{
|
||||
top: 20,
|
||||
}}
|
||||
>
|
||||
<CartesianGrid vertical={false} />
|
||||
<XAxis
|
||||
dataKey="app"
|
||||
tickLine={false}
|
||||
tickMargin={10}
|
||||
axisLine={false}
|
||||
tickFormatter={value => (value.length > 8 ? `${value.slice(0, 8)}...` : value)}
|
||||
/>
|
||||
<ChartTooltip
|
||||
cursor={false}
|
||||
content={<ChartTooltipContent nameKey="app" />}
|
||||
/>
|
||||
<Bar dataKey="count" radius={8}>
|
||||
{appsChartData.map((entry, index) => (
|
||||
<Cell key={`cell-${index}`} fill={entry.fill} />
|
||||
))}
|
||||
<LabelList
|
||||
position="top"
|
||||
offset={12}
|
||||
className="fill-foreground"
|
||||
fontSize={12}
|
||||
/>
|
||||
</Bar>
|
||||
</BarChart>
|
||||
</ChartContainer>
|
||||
)}
|
||||
{loading ? (
|
||||
<div className="flex h-full w-full items-center justify-center">
|
||||
<Loader2 className="h-8 w-8 animate-spin text-muted-foreground" />
|
||||
</div>
|
||||
) : (
|
||||
<ChartContainer config={chartConfigApps} className="h-full w-full">
|
||||
<BarChart
|
||||
accessibilityLayer
|
||||
data={appsChartData}
|
||||
margin={{
|
||||
top: 20,
|
||||
}}
|
||||
>
|
||||
<CartesianGrid vertical={false} />
|
||||
<XAxis
|
||||
dataKey="app"
|
||||
tickLine={false}
|
||||
tickMargin={10}
|
||||
axisLine={false}
|
||||
tickFormatter={(value) => (value.length > 8 ? `${value.slice(0, 8)}...` : value)}
|
||||
/>
|
||||
<ChartTooltip cursor={false} content={<ChartTooltipContent nameKey="app" />} />
|
||||
<Bar dataKey="count" radius={8}>
|
||||
{appsChartData.map((entry, index) => (
|
||||
<Cell key={`cell-${index}`} fill={entry.fill} />
|
||||
))}
|
||||
<LabelList position="top" offset={12} className="fill-foreground" fontSize={12} />
|
||||
</Bar>
|
||||
</BarChart>
|
||||
</ChartContainer>
|
||||
)}
|
||||
</div>
|
||||
</CardContent>
|
||||
</Card>
|
||||
@ -425,15 +346,10 @@ export default function DataPage() {
|
||||
<CardHeader className="flex flex-row items-center justify-between">
|
||||
<div>
|
||||
<CardTitle>Installation Log</CardTitle>
|
||||
<CardDescription>
|
||||
Detailed records of all container creation attempts.
|
||||
</CardDescription>
|
||||
<CardDescription>Detailed records of all container creation attempts.</CardDescription>
|
||||
</div>
|
||||
<div className="flex items-center gap-2">
|
||||
<Select
|
||||
value={String(itemsPerPage)}
|
||||
onValueChange={val => setItemsPerPage(Number(val))}
|
||||
>
|
||||
<Select value={String(itemsPerPage)} onValueChange={(val) => setItemsPerPage(Number(val))}>
|
||||
<SelectTrigger className="w-[80px]">
|
||||
<SelectValue placeholder="Limit" />
|
||||
</SelectTrigger>
|
||||
@ -451,161 +367,108 @@ export default function DataPage() {
|
||||
<Table>
|
||||
<TableHeader>
|
||||
<TableRow>
|
||||
<TableHead
|
||||
className="w-[100px] cursor-pointer"
|
||||
onClick={() => requestSort("status")}
|
||||
>
|
||||
<TableHead className="w-[100px] cursor-pointer" onClick={() => requestSort("status")}>
|
||||
Status
|
||||
{sortConfig?.key === "status" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "status" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="cursor-pointer"
|
||||
onClick={() => requestSort("type")}
|
||||
>
|
||||
<TableHead className="cursor-pointer" onClick={() => requestSort("type")}>
|
||||
Type
|
||||
{sortConfig?.key === "type" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "type" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="cursor-pointer"
|
||||
onClick={() => requestSort("nsapp")}
|
||||
>
|
||||
<TableHead className="cursor-pointer" onClick={() => requestSort("nsapp")}>
|
||||
Application
|
||||
{sortConfig?.key === "nsapp" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "nsapp" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="hidden cursor-pointer md:table-cell"
|
||||
onClick={() => requestSort("os_type")}
|
||||
>
|
||||
<TableHead className="hidden cursor-pointer md:table-cell" onClick={() => requestSort("os_type")}>
|
||||
OS
|
||||
{sortConfig?.key === "os_type" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "os_type" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="hidden cursor-pointer md:table-cell"
|
||||
onClick={() => requestSort("disk_size")}
|
||||
>
|
||||
Disk Size
|
||||
{sortConfig?.key === "disk_size" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "disk_size" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="hidden cursor-pointer lg:table-cell"
|
||||
onClick={() => requestSort("core_count")}
|
||||
>
|
||||
Core Count
|
||||
{sortConfig?.key === "core_count" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "core_count" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="hidden cursor-pointer lg:table-cell"
|
||||
onClick={() => requestSort("ram_size")}
|
||||
>
|
||||
RAM Size
|
||||
{sortConfig?.key === "ram_size" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "ram_size" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
<TableHead
|
||||
className="cursor-pointer text-right"
|
||||
onClick={() => requestSort("created_at")}
|
||||
>
|
||||
<TableHead className="cursor-pointer text-right" onClick={() => requestSort("created_at")}>
|
||||
Created At
|
||||
{sortConfig?.key === "created_at" && (
|
||||
<ArrowUpDown className="ml-2 inline h-4 w-4" />
|
||||
)}
|
||||
{sortConfig?.key === "created_at" && <ArrowUpDown className="ml-2 inline h-4 w-4" />}
|
||||
</TableHead>
|
||||
</TableRow>
|
||||
</TableHeader>
|
||||
<TableBody>
|
||||
{loading
|
||||
? (
|
||||
<TableRow>
|
||||
<TableCell colSpan={8} className="h-24 text-center">
|
||||
<div className="flex items-center justify-center gap-2">
|
||||
<Loader2 className="h-4 w-4 animate-spin" />
|
||||
{" "}
|
||||
Loading data...
|
||||
</div>
|
||||
</TableCell>
|
||||
</TableRow>
|
||||
)
|
||||
: sortedData.length > 0
|
||||
? (
|
||||
sortedData.map((item, idx) => (
|
||||
<TableRow key={`${item.id}-${idx}`}>
|
||||
<TableCell>
|
||||
{item.status === "done"
|
||||
? (
|
||||
<Badge className="text-green-500/75 border-green-500/75">
|
||||
Success
|
||||
</Badge>
|
||||
)
|
||||
: item.status === "failed"
|
||||
? (
|
||||
<Badge className="text-red-500/75 border-red-500/75">
|
||||
Failed
|
||||
</Badge>
|
||||
)
|
||||
: item.status === "installing"
|
||||
? (
|
||||
<Badge className="text-blue-500/75 border-blue-500/75">
|
||||
Installing
|
||||
</Badge>
|
||||
)
|
||||
: (
|
||||
<Badge variant="outline">
|
||||
{item.status}
|
||||
</Badge>
|
||||
)}
|
||||
</TableCell>
|
||||
<TableCell>
|
||||
{getTypeBadge(item.type) || (
|
||||
<Badge variant="outline">
|
||||
{item.type}
|
||||
</Badge>
|
||||
)}
|
||||
</TableCell>
|
||||
<TableCell className="font-medium">
|
||||
{item.nsapp}
|
||||
</TableCell>
|
||||
<TableCell className="hidden md:table-cell">
|
||||
{item.os_type}
|
||||
{" "}
|
||||
{item.os_version}
|
||||
</TableCell>
|
||||
<TableCell className="hidden md:table-cell">
|
||||
{item.disk_size}
|
||||
MB
|
||||
</TableCell>
|
||||
<TableCell className="hidden lg:table-cell">
|
||||
{item.core_count}
|
||||
</TableCell>
|
||||
<TableCell className="hidden lg:table-cell">
|
||||
{item.ram_size}
|
||||
MB
|
||||
</TableCell>
|
||||
<TableCell className="text-right">
|
||||
{formatDate(item.created_at)}
|
||||
</TableCell>
|
||||
</TableRow>
|
||||
))
|
||||
)
|
||||
: (
|
||||
<TableRow>
|
||||
<TableCell colSpan={8} className="h-24 text-center">
|
||||
No results found.
|
||||
</TableCell>
|
||||
</TableRow>
|
||||
)}
|
||||
{loading ? (
|
||||
<TableRow>
|
||||
<TableCell colSpan={8} className="h-24 text-center">
|
||||
<div className="flex items-center justify-center gap-2">
|
||||
<Loader2 className="h-4 w-4 animate-spin" /> Loading data...
|
||||
</div>
|
||||
</TableCell>
|
||||
</TableRow>
|
||||
) : sortedData.length > 0 ? (
|
||||
sortedData.map((item, idx) => (
|
||||
<TableRow key={`${item.id}-${idx}`}>
|
||||
<TableCell>
|
||||
{item.status === "done" ? (
|
||||
<Badge className="text-green-500/75 border-green-500/75">Success</Badge>
|
||||
) : item.status === "failed" ? (
|
||||
<TooltipProvider>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<Badge className="text-red-500/75 border-red-500/75 cursor-help">Failed</Badge>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent className="max-w-xs">
|
||||
<p className="font-semibold">Error:</p>
|
||||
<p className="text-sm">{item.error || "Unknown error"}</p>
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
) : item.status === "installing" ? (
|
||||
<Badge className="text-blue-500/75 border-blue-500/75">Installing</Badge>
|
||||
) : (
|
||||
<Badge variant="outline">{item.status}</Badge>
|
||||
)}
|
||||
</TableCell>
|
||||
<TableCell>
|
||||
{getTypeBadge(item.type) || <Badge variant="outline">{item.type}</Badge>}
|
||||
</TableCell>
|
||||
<TableCell className="font-medium">{item.nsapp}</TableCell>
|
||||
<TableCell className="hidden md:table-cell">
|
||||
{item.os_type} {item.os_version}
|
||||
</TableCell>
|
||||
<TableCell className="hidden md:table-cell">
|
||||
{item.disk_size}
|
||||
GB
|
||||
</TableCell>
|
||||
<TableCell className="hidden lg:table-cell">{item.core_count}</TableCell>
|
||||
<TableCell className="hidden lg:table-cell">
|
||||
{item.ram_size}
|
||||
MB
|
||||
</TableCell>
|
||||
<TableCell className="text-right">{formatDate(item.created_at)}</TableCell>
|
||||
</TableRow>
|
||||
))
|
||||
) : (
|
||||
<TableRow>
|
||||
<TableCell colSpan={8} className="h-24 text-center">
|
||||
No results found.
|
||||
</TableCell>
|
||||
</TableRow>
|
||||
)}
|
||||
</TableBody>
|
||||
</Table>
|
||||
</div>
|
||||
@@ -614,21 +477,17 @@ export default function DataPage() {
<Button
variant="outline"
size="sm"
onClick={() => setCurrentPage(prev => Math.max(prev - 1, 1))}
onClick={() => setCurrentPage((prev) => Math.max(prev - 1, 1))}
disabled={currentPage === 1 || loading}
>
<ChevronLeft className="mr-2 h-4 w-4" />
Previous
</Button>
<div className="text-sm text-muted-foreground">
Page
{" "}
{currentPage}
</div>
<div className="text-sm text-muted-foreground">Page {currentPage}</div>
<Button
variant="outline"
size="sm"
onClick={() => setCurrentPage(prev => prev + 1)}
onClick={() => setCurrentPage((prev) => prev + 1)}
disabled={loading || sortedData.length < itemsPerPage}
>
Next
@@ -34,9 +34,4 @@ export const FAQ_Items = [
content:
"If an LXC script fails, run it again using Verbose mode. Standard mode hides detailed output for neatness, showing only progress. Verbose mode displays all messages, which helps you (and us) diagnose the error. Include this verbose output if you report the issue.",
},
{
title: "What does \"Updatable\" and \"Not updatable\" mean?",
content:
"Updatable means that the script has a function that is used to update the installed application to the latest version available. Not updatable means that the script doesn't have a function that can safely update the application to the latest version available, so only the LXC OS is updated.",
},
];
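For readers unfamiliar with the distinction above: an "Updatable" ct script simply ships an update function next to its install logic. The snippet below is only a hedged sketch of what such a function tends to look like; the helper names (update_script, check_for_gh_release, fetch_and_deploy_gh_release, the msg_* helpers) are borrowed from other scripts in this compare view, and "myapp" / "example/myapp" are placeholders, not a real application.

# Hypothetical sketch, not taken from this diff:
function update_script() {
  if check_for_gh_release "myapp" "example/myapp"; then
    msg_info "Stopping Service"
    systemctl stop myapp
    msg_ok "Stopped Service"
    fetch_and_deploy_gh_release "myapp" "example/myapp"
    msg_info "Starting Service"
    systemctl start myapp
    msg_ok "Started Service"
  fi
  exit 0
}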
@@ -25,7 +25,7 @@ msg_ok "Installed Dependencies"
PYTHON_VERSION="3.13" setup_uv
NODE_VERSION="22" NODE_MODULE="pnpm@latest" setup_nodejs
PG_VERSION="17" PG_MODULES="postgis" setup_postgresql
PG_DB_NAME="adventurelog_db" PG_DB_USER="adventurelog_user" setup_postgresql_db
PG_DB_NAME="adventurelog_db" PG_DB_USER="adventurelog_user" PG_DB_EXTENSIONS="postgis" setup_postgresql_db
fetch_and_deploy_gh_release "adventurelog" "seanmorley15/adventurelog"
import_local_ip
@@ -43,6 +43,7 @@ if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
jq

sed -i 's|# *include "mod_fastcgi.conf"|include "mod_fastcgi.conf"|' /etc/lighttpd/lighttpd.conf
sed -i 's|/usr/bin/php-cgi|/usr/bin/php-cgi83|g' /etc/lighttpd/mod_fastcgi.conf
mkdir -p /var/www/localhost/htdocs
ADMINER_VERSION=$(curl -fsSL https://api.github.com/repos/vrana/adminer/releases/latest | jq -r '.tag_name' | sed 's/^v//')
curl -fsSL "https://github.com/vrana/adminer/releases/download/v${ADMINER_VERSION}/adminer-${ADMINER_VERSION}.php" -o /var/www/localhost/htdocs/adminer.php

@@ -56,6 +56,7 @@ if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
jq

sed -i 's|# *include "mod_fastcgi.conf"|include "mod_fastcgi.conf"|' /etc/lighttpd/lighttpd.conf
sed -i 's|/usr/bin/php-cgi|/usr/bin/php-cgi83|g' /etc/lighttpd/mod_fastcgi.conf
mkdir -p /var/www/localhost/htdocs
ADMINER_VERSION=$(curl -fsSL https://api.github.com/repos/vrana/adminer/releases/latest | jq -r '.tag_name' | sed 's/^v//')
curl -fsSL "https://github.com/vrana/adminer/releases/download/v${ADMINER_VERSION}/adminer-${ADMINER_VERSION}.php" -o /var/www/localhost/htdocs/adminer.php
@@ -62,11 +62,13 @@ install -d -m 755 \
/data/uploads/{m3us,epgs} \
/data/{m3us,epgs}
chown -R root:root /data
DJANGO_SECRET=$(openssl rand -base64 48 | tr -dc 'a-zA-Z0-9' | cut -c1-50)
export DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@localhost:5432/${DB_NAME}"
export POSTGRES_DB=$DB_NAME
export POSTGRES_USER=$DB_USER
export POSTGRES_PASSWORD=$DB_PASS
export POSTGRES_HOST=localhost
export DJANGO_SECRET_KEY=$DJANGO_SECRET
$STD uv run python manage.py migrate --noinput
$STD uv run python manage.py collectstatic --noinput
cat <<EOF >/opt/dispatcharr/.env
@@ -76,6 +78,7 @@ POSTGRES_USER=$DB_USER
POSTGRES_PASSWORD=$DB_PASS
POSTGRES_HOST=localhost
CELERY_BROKER_URL=redis://localhost:6379/0
DJANGO_SECRET_KEY=$DJANGO_SECRET
EOF
cd /opt/dispatcharr/frontend
$STD npm install --legacy-peer-deps
@@ -64,7 +64,7 @@ Restart=always
[Install]
WantedBy=multi-user.target
EOF
systemctl start --now -q domain-locker
systemctl enable -q --now domain-locker
msg_info "Created Service"

motd_ssh
install/endurain-install.sh (new file, 121 lines)
@@ -0,0 +1,121 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: johanngrobe
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/joaovitoriasilva/endurain
|
||||
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
verb_ip6
|
||||
catch_errors
|
||||
setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y \
|
||||
default-libmysqlclient-dev \
|
||||
build-essential \
|
||||
pkg-config
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
PYTHON_VERSION="3.13" setup_uv
|
||||
NODE_VERSION="24" setup_nodejs
|
||||
PG_VERSION="17" PG_MODULES="postgis" setup_postgresql
|
||||
PG_DB_NAME="enduraindb" PG_DB_USER="endurain" setup_postgresql_db
|
||||
import_local_ip
|
||||
fetch_and_deploy_gh_release "endurain" "joaovitoriasilva/endurain" "tarball" "latest" "/opt/endurain"
|
||||
|
||||
msg_info "Setting up Endurain"
|
||||
cd /opt/endurain
|
||||
rm -rf \
|
||||
/opt/endurain/{docs,example.env,screenshot_01.png} \
|
||||
/opt/endurain/docker* \
|
||||
/opt/endurain/*.yml
|
||||
mkdir -p /opt/endurain_data/{data,logs}
|
||||
SECRET_KEY=$(openssl rand -hex 32)
|
||||
FERNET_KEY=$(openssl rand -base64 32)
|
||||
ENDURAIN_HOST=http://${LOCAL_IP}:8080
|
||||
cat <<EOF >/opt/endurain/.env
|
||||
DB_PASSWORD=${PG_DB_PASS}
|
||||
|
||||
SECRET_KEY=${SECRET_KEY}
|
||||
FERNET_KEY=${FERNET_KEY}
|
||||
|
||||
TZ=Europe/Berlin
|
||||
ENDURAIN_HOST=${ENDURAIN_HOST}
|
||||
BEHIND_PROXY=false
|
||||
|
||||
POSTGRES_DB=${PG_DB_NAME}
|
||||
POSTGRES_USER=${PG_DB_USER}
|
||||
PGDATA=/var/lib/postgresql/${PG_DB_NAME}
|
||||
|
||||
DB_DATABASE=${PG_DB_NAME}
|
||||
DB_USER=${PG_DB_USER}
|
||||
DB_PORT=5432
|
||||
DB_HOST=localhost
|
||||
|
||||
DATABASE_URL=postgresql+psycopg://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}
|
||||
|
||||
BACKEND_DIR="/opt/endurain/backend/app"
|
||||
FRONTEND_DIR="/opt/endurain/frontend/app/dist"
|
||||
DATA_DIR="/opt/endurain_data/data"
|
||||
LOGS_DIR="/opt/endurain_data/logs"
|
||||
|
||||
#SMTP_HOST=smtp.protonmail.ch
|
||||
#SMTP_PORT=587
|
||||
#SMTP_USERNAME=your-email@example.com
|
||||
#SMTP_PASSWORD=your-app-password
|
||||
#SMTP_SECURE=true
|
||||
#SMTP_SECURE_TYPE=starttls
|
||||
EOF
|
||||
msg_ok "Setup Endurain"
|
||||
|
||||
msg_info "Building Frontend"
|
||||
cd /opt/endurain/frontend/app
|
||||
$STD npm ci --prefer-offline
|
||||
$STD npm run build
|
||||
cat <<EOF >/opt/endurain/frontend/app/dist/env.js
|
||||
window.env = {
|
||||
ENDURAIN_HOST: "${ENDURAIN_HOST}"
|
||||
}
|
||||
EOF
|
||||
msg_ok "Built Frontend"
|
||||
|
||||
msg_info "Setting up Backend"
|
||||
cd /opt/endurain/backend
|
||||
$STD uv tool install poetry
|
||||
$STD uv tool update-shell
|
||||
export PATH="/root/.local/bin:$PATH"
|
||||
$STD poetry self add poetry-plugin-export
|
||||
$STD poetry export -f requirements.txt --output requirements.txt --without-hashes
|
||||
$STD uv venv
|
||||
$STD uv pip install -r requirements.txt
|
||||
msg_ok "Setup Backend"
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/endurain.service
|
||||
[Unit]
|
||||
Description=Endurain FastAPI Backend
|
||||
After=network.target postgresql.service
|
||||
|
||||
[Service]
|
||||
WorkingDirectory=/opt/endurain/backend/app
|
||||
EnvironmentFile=/opt/endurain/.env
|
||||
ExecStart=/opt/endurain/backend/.venv/bin/uvicorn main:app --host 0.0.0.0 --port 8080
|
||||
Restart=always
|
||||
RestartSec=5
|
||||
User=root
|
||||
StandardOutput=journal
|
||||
StandardError=journal
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
systemctl enable -q --now endurain
|
||||
msg_ok "Created Service"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
cleanup_lxc
|
||||
@@ -16,7 +16,7 @@ update_os
msg_info "Setting up InfluxDB Repository"
setup_deb822_repo \
"influxdata" \
"https://repos.influxdata.com/influxdata-archive_compat.key" \
"https://repos.influxdata.com/influxdata-archive.key" \
"https://repos.influxdata.com/$(get_os_info id)" \
"stable"
msg_ok "Set up InfluxDB Repository"
@@ -38,6 +38,7 @@ else
$STD dpkg -i chronograf_1.10.8_amd64.deb
rm -rf /chronograf_1.10.8_amd64.deb
fi
rm /etc/apt/sources.list.d/influxdata.list
$STD systemctl enable --now influxdb
msg_ok "Installed InfluxDB"
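This is the first of several installers in this compare that swap a hand-written keyring plus sources.list block for a single setup_deb822_repo call. The parameter order below is inferred from the call sites only (name, GPG key URL, repository URL, suite, optional component); it is an assumption, not a quote of the helper's definition.

# Assumed usage, inferred from the calls in this diff (hypothetical "acme" repo):
#   setup_deb822_repo <name> <key-url> <repo-url> <suite> [component]
setup_deb822_repo \
  "acme" \
  "https://example.com/acme/gpg.key" \
  "https://example.com/acme/debian" \
  "stable" \
  "main"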
@@ -13,23 +13,27 @@ setting_up_container
network_check
update_os

msg_info "Installing Dependencies"
temp_file=$(mktemp)
curl -fsSL "http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2_amd64.deb" -o "$temp_file"
$STD dpkg -i $temp_file
rm -f $temp_file
msg_ok "Installed Dependencies"

msg_info "Setting up InvenTree Repository"
mkdir -p /etc/apt/keyrings
curl -fsSL https://dl.packager.io/srv/inventree/InvenTree/key | gpg --dearmor -o /etc/apt/keyrings/inventree.gpg
echo "deb [signed-by=/etc/apt/keyrings/inventree.gpg] https://dl.packager.io/srv/deb/inventree/InvenTree/stable/ubuntu 20.04 main" >/etc/apt/sources.list.d/inventree.list
setup_deb822_repo \
"inventree" \
"https://dl.packager.io/srv/inventree/InvenTree/key" \
"https://dl.packager.io/srv/deb/inventree/InvenTree/stable/$(get_os_info id)" \
"$(get_os_info version)" \
"main"
msg_ok "Set up InvenTree Repository"

msg_info "Setup ${APPLICATION} (Patience)"
$STD apt-get update
$STD apt-get install -y inventree
msg_ok "Setup ${APPLICATION}"
msg_info "Installing InvenTree (Patience)"
export SETUP_NO_CALLS=true
$STD apt install -y inventree
msg_ok "Installed InvenTree"

msg_info "Configuring InvenTree"
LOCAL_IP="$(hostname -I | awk '{print $1}')"
if [[ -f /etc/inventree/config.yaml ]]; then
sed -i "s|site_url:.*|site_url: http://${LOCAL_IP}|" /etc/inventree/config.yaml
fi
$STD inventree run invoke update
msg_ok "Configured InvenTree"

motd_ssh
customize
@@ -16,7 +16,7 @@ update_os
fetch_and_deploy_gh_release "librespeed-rust" "librespeed/speedtest-rust" "binary" "latest" "/opt/librespeed-rust" "librespeed-rs-x86_64-unknown-linux-gnu.deb"

msg_info "Enabling Service"
systemctl enable -q --now librespeed-rs
systemctl enable -q --now speedtest_rs
msg_ok "Enabled Service"

motd_ssh
@@ -32,7 +32,7 @@ $STD apt install -y \
python3-icu
msg_ok "Setup Python3"

setup_uv
PYTHON_VERSION="3.12" setup_uv
fetch_and_deploy_gh_release "libretranslate" "LibreTranslate/LibreTranslate"

msg_info "Setup LibreTranslate (Patience)"
@@ -42,7 +42,7 @@ if [[ -z "$TORCH_VERSION" ]]; then
TORCH_VERSION="2.5.0"
fi
cd /opt/libretranslate
$STD uv venv .venv
$STD uv venv .venv --python 3.12
$STD source .venv/bin/activate
$STD uv pip install --upgrade pip setuptools
$STD uv pip install Babel==2.12.1
install/metube-install.sh (new file, 96 lines)
@@ -0,0 +1,96 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: MickLesk (Canbiz)
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# Source: https://github.com/alexta69/metube
|
||||
|
||||
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
|
||||
color
|
||||
verb_ip6
|
||||
catch_errors
|
||||
setting_up_container
|
||||
network_check
|
||||
update_os
|
||||
|
||||
msg_info "Installing Dependencies"
|
||||
$STD apt install -y \
|
||||
build-essential \
|
||||
aria2 \
|
||||
coreutils \
|
||||
musl-dev \
|
||||
ffmpeg
|
||||
msg_ok "Installed Dependencies"
|
||||
|
||||
PYTHON_VERSION="3.13" setup_uv
|
||||
NODE_VERSION="24" setup_nodejs
|
||||
|
||||
msg_info "Installing Deno"
|
||||
export DENO_INSTALL="/usr/local"
|
||||
curl -fsSL https://deno.land/install.sh | $STD sh -s -- -y
|
||||
[[ ":$PATH:" != *":/usr/local/bin:"* ]] &&
|
||||
echo -e "\nexport PATH=\"/usr/local/bin:\$PATH\"" >>~/.bashrc &&
|
||||
source ~/.bashrc
|
||||
msg_ok "Installed Deno"
|
||||
|
||||
fetch_and_deploy_gh_release "metube" "alexta69/metube" "tarball" "latest"
|
||||
|
||||
msg_info "Installing MeTube"
|
||||
cd /opt/metube/ui
|
||||
$STD npm ci
|
||||
$STD node_modules/.bin/ng build --configuration production
|
||||
cd /opt/metube
|
||||
$STD uv sync
|
||||
mkdir -p /opt/metube_downloads /opt/metube_downloads/.metube /opt/metube_downloads/music /opt/metube_downloads/videos
|
||||
cat <<EOF >/opt/metube/.env
|
||||
# Storage & Directories
|
||||
DOWNLOAD_DIR=/opt/metube_downloads
|
||||
AUDIO_DOWNLOAD_DIR=/opt/metube_downloads/music
|
||||
STATE_DIR=/opt/metube_downloads/.metube
|
||||
TEMP_DIR=/opt/metube_downloads
|
||||
|
||||
# Download Behavior
|
||||
DOWNLOAD_MODE=limited
|
||||
MAX_CONCURRENT_DOWNLOADS=3
|
||||
DELETE_FILE_ON_TRASHCAN=false
|
||||
DEFAULT_OPTION_PLAYLIST_STRICT_MODE=false
|
||||
DEFAULT_OPTION_PLAYLIST_ITEM_LIMIT=0
|
||||
|
||||
# File Naming & yt-dlp
|
||||
OUTPUT_TEMPLATE=%(title)s.%(ext)s
|
||||
OUTPUT_TEMPLATE_CHAPTER=%(title)s - %(section_number)s %(section_title)s.%(ext)s
|
||||
OUTPUT_TEMPLATE_PLAYLIST=%(playlist_title)s/%(title)s.%(ext)s
|
||||
YTDL_OPTIONS={"trim_file_name":200,"extractor_args":{"youtube":{"player_client":["default","-tv_simply"]}}}
|
||||
|
||||
# Custom Directories
|
||||
CUSTOM_DIRS=true
|
||||
CREATE_CUSTOM_DIRS=true
|
||||
|
||||
# Basic Setup
|
||||
DEFAULT_THEME=auto
|
||||
LOGLEVEL=INFO
|
||||
ENABLE_ACCESSLOG=false
|
||||
EOF
|
||||
msg_ok "Installed MeTube"
|
||||
|
||||
msg_info "Creating Service"
|
||||
cat <<EOF >/etc/systemd/system/metube.service
|
||||
[Unit]
|
||||
Description=Metube - YouTube Downloader
|
||||
After=network.target
|
||||
[Service]
|
||||
Type=simple
|
||||
WorkingDirectory=/opt/metube
|
||||
EnvironmentFile=/opt/metube/.env
|
||||
ExecStart=/opt/metube/.venv/bin/python3 app/main.py
|
||||
Restart=always
|
||||
User=root
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
systemctl enable -q --now metube
|
||||
msg_ok "Created Service"
|
||||
|
||||
motd_ssh
|
||||
customize
|
||||
cleanup_lxc
|
||||
@@ -48,7 +48,7 @@ $STD cargo build --release --bin daemon
cp ./target/release/daemon /usr/bin/netvisor-daemon
msg_ok "Built Netvisor-daemon"

msg_info "Configuring server & daemon for first-run"
msg_info "Configuring server for first-run"
LOCAL_IP="$(hostname -I | awk '{print $1}')"
cat <<EOF >/opt/netvisor/.env
### - SERVER
@@ -101,19 +101,26 @@ WantedBy=multi-user.target
EOF

systemctl enable -q --now netvisor-server
sleep 5
NETWORK_ID="$(sudo -u postgres psql -1 -t -d $PG_DB_NAME -c 'SELECT id FROM networks;')"
API_KEY="$(sudo -u postgres psql -1 -t -d $PG_DB_NAME -c 'SELECT key from api_keys;')"

cat <<EOF >/etc/systemd/system/netvisor-daemon.service
# Creating short script to configure netvisor-daemon
cat <<EOF >~/configure_daemon.sh
#!/usr/bin/env bash

echo "Auto-configuring integrated daemon..."

NETWORK_ID="\$(sudo -u postgres psql -1 -t -d "${PG_DB_NAME}" -c 'SELECT id FROM networks;')"
API_KEY="\$(sudo -u postgres psql -1 -t -d "${PG_DB_NAME}" -c 'SELECT key FROM api_keys;')"

cat <<END >/etc/systemd/system/netvisor-daemon.service
[Unit]
Description=NetVisor Network Discovery Daemon
After=network.target netvisor-server.service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
EnvironmentFile=/opt/netvisor/.env
ExecStart=/usr/bin/netvisor-daemon --server-url http://127.0.0.1:60072 --network-id ${NETWORK_ID} --daemon-api-key ${API_KEY}
User=root
ExecStart=/usr/bin/netvisor-daemon --server-url http://127.0.0.1:60072 --network-id \${NETWORK_ID} --daemon-api-key \${API_KEY} --mode push
Restart=always
RestartSec=10
StandardOutput=journal
@@ -121,9 +128,14 @@ StandardError=journal

[Install]
WantedBy=multi-user.target
EOF
END

systemctl enable -q --now netvisor-daemon
msg_ok "Netvisor server & daemon configured and running"
echo "NetVisor daemon configured and running"

EOF
chmod +x ~/configure_daemon.sh
msg_ok "Netvisor server running - please create an account in the UI to continue."

motd_ssh
customize
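Because the daemon configuration is deferred to a generated helper script, the expected post-install flow is roughly the following sketch (based on the messages above; the account has to be created in the NetVisor UI first so that the networks and api_keys tables the helper reads are populated):

# After creating the first account in the NetVisor web UI, inside the container:
bash ~/configure_daemon.sh        # writes and enables netvisor-daemon.service
systemctl status netvisor-daemon  # optional: confirm the daemon is running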
@@ -15,21 +15,22 @@ update_os

msg_info "Installing Dependencies"
$STD apt install -y \
default-jdk \
git \
git-lfs
msg_ok "Installed Dependencies"

JAVA_VERSION="21" setup_java

msg_info "Installing OneDev"
RELEASE=$(curl -fsSL https://api.github.com/repos/theonedev/onedev/releases/latest | grep '"tag_name":' | cut -d'"' -f4)
cd /opt
curl -fsSL "https://code.onedev.io/onedev/server/~site/onedev-latest.tar.gz" -o "/opt/onedev-latest.tar.gz"
curl -fsSL "https://code.onedev.io/onedev/server/~site/onedev-latest.tar.gz" -o onedev-latest.tar.gz
tar -xzf onedev-latest.tar.gz
mv /opt/onedev-latest /opt/onedev
$STD /opt/onedev/bin/server.sh install
systemctl start onedev
RELEASE=$(cat /opt/onedev/release.properties | grep "version" | cut -d'=' -f2)
rm -rf /opt/onedev-latest.tar.gz
echo "${RELEASE}" >"/opt/${APPLICATION}_version.txt"
echo "${RELEASE}" >~/.onedev
msg_ok "Installed OneDev"

motd_ssh
@@ -158,7 +158,7 @@ Requires=redis.service

[Service]
WorkingDirectory=/opt/paperless/src
ExecStart=uv run -- granian --interface asginl --ws "paperless.asgi:application"
ExecStart=uv run -- granian --interface asgi --ws "paperless.asgi:application"
Environment=GRANIAN_HOST=::
Environment=GRANIAN_PORT=8000
Environment=GRANIAN_WORKERS=1
@@ -15,12 +15,20 @@ update_os

msg_info "Installing Proxmox Datacenter Manager"
curl -fsSL https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg -o /usr/share/keyrings/proxmox-archive-keyring.gpg
setup_deb822_repo \
"pdm" \
"https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg" \
"http://download.proxmox.com/debian/pdm" \
"trixie" \
"pdm-no-subscription"

cat <<EOF >/etc/apt/sources.list.d/pdm-test.sources
Types: deb
URIs: http://download.proxmox.com/debian/pdm
Suites: trixie
Components: pdm-test
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
Enabled: false
EOF
$STD apt update
DEBIAN_FRONTEND=noninteractive
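The pdm-test repository above is written with Enabled: false, so it stays inert until deliberately switched on. A small, hedged example of enabling it later (the file path is the one created above; whether you want test packages at all is a policy decision):

sed -i 's/^Enabled: false/Enabled: true/' /etc/apt/sources.list.d/pdm-test.sources
apt update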
@@ -52,7 +52,7 @@ cat <<EOF >/opt/wanderer/start.sh

trap "kill 0" EXIT

cd /opt/wanderer/source/search && meilisearch --master-key \$MEILI_MASTER_KEY &
cd /opt/wanderer/source/search && meilisearch --experimental-dumpless-upgrade --master-key \$MEILI_MASTER_KEY &
sleep 1
cd /opt/wanderer/source/db && ./pocketbase serve --http=\$PB_URL --dir=\$PB_DB_LOCATION &
cd /opt/wanderer/source/web && node build &
@@ -21,15 +21,12 @@ $STD apt install -y \
msg_ok "Installed Dependencies"

msg_info "Setting up Elasticsearch"
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
cat <<EOF | sudo tee /etc/apt/sources.list.d/elasticsearch.sources >/dev/null
Types: deb
URIs: https://artifacts.elastic.co/packages/7.x/apt
Suites: stable
Components: main
Signed-By: /usr/share/keyrings/elasticsearch-keyring.gpg
EOF
$STD apt update
setup_deb822_repo \
"elasticsearch" \
"https://artifacts.elastic.co/GPG-KEY-elasticsearch" \
"https://artifacts.elastic.co/packages/7.x/apt" \
"stable" \
"main"
$STD apt -y install elasticsearch
echo "-Xms2g" >>/etc/elasticsearch/jvm.options
echo "-Xmx2g" >>/etc/elasticsearch/jvm.options
@@ -39,15 +36,12 @@ systemctl restart -q elasticsearch
msg_ok "Setup Elasticsearch"

msg_info "Installing Zammad"
curl -fsSL https://dl.packager.io/srv/zammad/zammad/key | gpg --dearmor | sudo tee /etc/apt/keyrings/pkgr-zammad.gpg >/dev/null
cat <<EOF | sudo tee /etc/apt/sources.list.d/zammad.sources >/dev/null
Types: deb
URIs: https://dl.packager.io/srv/deb/zammad/zammad/stable/debian
Suites: 12
Components: main
Signed-By: /etc/apt/keyrings/pkgr-zammad.gpg
EOF
$STD apt update
setup_deb822_repo \
"zammad" \
"https://dl.packager.io/srv/zammad/zammad/key" \
"https://dl.packager.io/srv/deb/zammad/zammad/stable/debian" \
"$(get_os_info version_id)" \
"main"
$STD apt -y install zammad
$STD zammad run rails r "Setting.set('es_url', 'http://localhost:9200')"
$STD zammad run rake zammad:searchindex:rebuild
@@ -7,13 +7,15 @@ if ! command -v curl >/dev/null 2>&1; then
apk update && apk add curl >/dev/null 2>&1
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
load_functions
catch_errors

# This function enables IPv6 if it's not disabled and sets verbose mode
verb_ip6() {
set_std_mode # Set STD mode based on VERBOSE

if [ "$IPV6_METHOD" == "disable" ]; then
if [ "${IPV6_METHOD:-}" = "disable" ]; then
msg_info "Disabling IPv6 (this may affect some services)"
$STD sysctl -w net.ipv6.conf.all.disable_ipv6=1
$STD sysctl -w net.ipv6.conf.default.disable_ipv6=1
@@ -29,19 +31,40 @@ EOF
fi
}

# This function catches errors and handles them with the error handler function
catch_errors() {
set -Eeuo pipefail
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
set -Eeuo pipefail
trap 'error_handler $? $LINENO "$BASH_COMMAND"' ERR
trap on_exit EXIT
trap on_interrupt INT
trap on_terminate TERM

error_handler() {
local exit_code="$1"
local line_number="$2"
local command="$3"

if [[ "$exit_code" -eq 0 ]]; then
return 0
fi

printf "\e[?25h"
echo -e "\n${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}\n"
exit "$exit_code"
}

# This function handles errors
error_handler() {
on_exit() {
local exit_code="$?"
local line_number="$1"
local command="$2"
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
echo -e "\n$error_message\n"
[[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
exit "$exit_code"
}

on_interrupt() {
echo -e "\n${RD}Interrupted by user (SIGINT)${CL}"
exit 130
}

on_terminate() {
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}"
exit 143
}

# This function sets up the Container OS by generating the locale, setting the timezone, and checking the network connection
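The substantive change in the trap wiring above is that the ERR trap now hands the real exit status ($?) to the handler instead of letting the handler infer it. A minimal standalone sketch of that pattern, stripped of the color variables and lockfile handling the real func file uses:

#!/usr/bin/env bash
set -Eeuo pipefail

error_handler() {
  local exit_code="$1" line_number="$2" command="$3"
  [ "$exit_code" -eq 0 ] && return 0
  echo "[ERROR] line ${line_number}: exit code ${exit_code} while executing: ${command}" >&2
  exit "$exit_code"
}
trap 'error_handler $? $LINENO "$BASH_COMMAND"' ERR

false  # demo failure: the trap reports the line, the exit code (1) and the command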
@ -70,10 +93,10 @@ network_check() {
|
||||
set +e
|
||||
trap - ERR
|
||||
if ping -c 1 -W 1 1.1.1.1 &>/dev/null || ping -c 1 -W 1 8.8.8.8 &>/dev/null || ping -c 1 -W 1 9.9.9.9 &>/dev/null; then
|
||||
msg_ok "Internet Connected"
|
||||
ipv4_status="${GN}✔${CL} IPv4"
|
||||
else
|
||||
msg_error "Internet NOT Connected"
|
||||
read -r -p "Would you like to continue anyway? <y/N> " prompt
|
||||
ipv4_status="${RD}✖${CL} IPv4"
|
||||
read -r -p "Internet NOT connected. Continue anyway? <y/N> " prompt
|
||||
if [[ "${prompt,,}" =~ ^(y|yes)$ ]]; then
|
||||
echo -e "${INFO}${RD}Expect Issues Without Internet${CL}"
|
||||
else
|
||||
@ -82,7 +105,11 @@ network_check() {
|
||||
fi
|
||||
fi
|
||||
RESOLVEDIP=$(getent hosts github.com | awk '{ print $1 }')
|
||||
if [[ -z "$RESOLVEDIP" ]]; then msg_error "DNS Lookup Failure"; else msg_ok "DNS Resolved github.com to ${BL}$RESOLVEDIP${CL}"; fi
|
||||
if [[ -z "$RESOLVEDIP" ]]; then
|
||||
msg_error "Internet: ${ipv4_status} DNS Failed"
|
||||
else
|
||||
msg_ok "Internet: ${ipv4_status} DNS: ${BL}${RESOLVEDIP}${CL}"
|
||||
fi
|
||||
set -e
|
||||
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
|
||||
}
|
||||
@ -98,22 +125,13 @@ update_os() {
|
||||
# This function modifies the message of the day (motd) and SSH settings
|
||||
motd_ssh() {
|
||||
echo "export TERM='xterm-256color'" >>/root/.bashrc
|
||||
IP=$(ip -4 addr show eth0 | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
|
||||
|
||||
if [ -f "/etc/os-release" ]; then
|
||||
OS_NAME=$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||
OS_VERSION=$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||
else
|
||||
OS_NAME="Alpine Linux"
|
||||
OS_VERSION="Unknown"
|
||||
fi
|
||||
|
||||
PROFILE_FILE="/etc/profile.d/00_lxc-details.sh"
|
||||
echo "echo -e \"\"" >"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${BOLD}${APPLICATION} LXC Container${CL}"\" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${GATEWAY}${YW} Provided by: ${GN}community-scripts ORG ${YW}| GitHub: ${GN}https://github.com/community-scripts/ProxmoxVE${CL}\"" >>"$PROFILE_FILE"
|
||||
echo "echo \"\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}${OS_NAME} - Version: ${OS_VERSION}${CL}\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}\$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '\"') - Version: \$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '\"')${CL}\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${HOSTNAME}${YW} Hostname: ${GN}\$(hostname)${CL}\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${INFO}${YW} IP Address: ${GN}\$(ip -4 addr show eth0 | awk '/inet / {print \$2}' | cut -d/ -f1 | head -n 1)${CL}\"" >>"$PROFILE_FILE"
|
||||
|
||||
@ -163,10 +181,4 @@ EOF
|
||||
echo "bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/${app}.sh)\"" >/usr/bin/update
|
||||
chmod +x /usr/bin/update
|
||||
|
||||
if [[ -n "${SSH_AUTHORIZED_KEY}" ]]; then
|
||||
mkdir -p /root/.ssh
|
||||
echo "${SSH_AUTHORIZED_KEY}" >/root/.ssh/authorized_keys
|
||||
chmod 700 /root/.ssh
|
||||
chmod 600 /root/.ssh/authorized_keys
|
||||
fi
|
||||
}
|
||||
|
||||
misc/alpine-tools.func (new file, 507 lines)
@@ -0,0 +1,507 @@
|
||||
#!/bin/ash
|
||||
# shellcheck shell=ash
|
||||
|
||||
# Expects existing msg_* functions and optional $STD from the framework.
|
||||
|
||||
# ------------------------------
|
||||
# helpers
|
||||
# ------------------------------
|
||||
lower() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]'; }
|
||||
has() { command -v "$1" >/dev/null 2>&1; }
|
||||
|
||||
need_tool() {
|
||||
# usage: need_tool curl jq unzip ...
|
||||
# setup missing tools via apk
|
||||
local missing=0 t
|
||||
for t in "$@"; do
|
||||
if ! has "$t"; then missing=1; fi
|
||||
done
|
||||
if [ "$missing" -eq 1 ]; then
|
||||
msg_info "Installing tools: $*"
|
||||
apk add --no-cache "$@" >/dev/null 2>&1 || {
|
||||
msg_error "apk add failed for: $*"
|
||||
return 1
|
||||
}
|
||||
msg_ok "Tools ready: $*"
|
||||
fi
|
||||
}
|
||||
|
||||
net_resolves() {
|
||||
# better handling for missing getent on Alpine
|
||||
# usage: net_resolves api.github.com
|
||||
local host="$1"
|
||||
ping -c1 -W1 "$host" >/dev/null 2>&1 || nslookup "$host" >/dev/null 2>&1
|
||||
}
|
||||
|
||||
ensure_usr_local_bin_persist() {
|
||||
local PROFILE_FILE="/etc/profile.d/10-localbin.sh"
|
||||
if [ ! -f "$PROFILE_FILE" ]; then
|
||||
echo 'case ":$PATH:" in *:/usr/local/bin:*) ;; *) export PATH="/usr/local/bin:$PATH";; esac' >"$PROFILE_FILE"
|
||||
chmod +x "$PROFILE_FILE"
|
||||
fi
|
||||
}
|
||||
|
||||
download_with_progress() {
|
||||
# $1 url, $2 dest
|
||||
local url="$1" out="$2" cl
|
||||
need_tool curl pv || return 1
|
||||
cl=$(curl -fsSLI "$url" 2>/dev/null | awk 'tolower($0) ~ /^content-length:/ {print $2}' | tr -d '\r')
|
||||
if [ -n "$cl" ]; then
|
||||
curl -fsSL "$url" | pv -s "$cl" >"$out" || {
|
||||
msg_error "Download failed: $url"
|
||||
return 1
|
||||
}
|
||||
else
|
||||
curl -fL# -o "$out" "$url" || {
|
||||
msg_error "Download failed: $url"
|
||||
return 1
|
||||
}
|
||||
fi
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# GitHub: check Release
|
||||
# ------------------------------
|
||||
check_for_gh_release() {
|
||||
# app, repo, [pinned]
|
||||
local app="$1" source="$2" pinned="${3:-}"
|
||||
local app_lc
|
||||
app_lc="$(lower "$app" | tr -d ' ')"
|
||||
local current_file="$HOME/.${app_lc}"
|
||||
local current="" release tag
|
||||
|
||||
msg_info "Check for update: $app"
|
||||
|
||||
net_resolves api.github.com || {
|
||||
msg_error "DNS/network error: api.github.com"
|
||||
return 1
|
||||
}
|
||||
need_tool curl jq || return 1
|
||||
|
||||
tag=$(curl -fsSL "https://api.github.com/repos/${source}/releases/latest" | jq -r '.tag_name // empty')
|
||||
[ -z "$tag" ] && {
|
||||
msg_error "Unable to fetch latest tag for $app"
|
||||
return 1
|
||||
}
|
||||
release="${tag#v}"
|
||||
|
||||
[ -f "$current_file" ] && current="$(cat "$current_file")"
|
||||
|
||||
if [ -n "$pinned" ]; then
|
||||
if [ "$pinned" = "$release" ]; then
|
||||
msg_ok "$app pinned to v$pinned (no update)"
|
||||
return 1
|
||||
fi
|
||||
if [ "$current" = "$pinned" ]; then
|
||||
msg_ok "$app pinned v$pinned installed (upstream v$release)"
|
||||
return 1
|
||||
fi
|
||||
msg_info "$app pinned v$pinned (upstream v$release) → update/downgrade"
|
||||
CHECK_UPDATE_RELEASE="$pinned"
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [ "$release" != "$current" ] || [ ! -f "$current_file" ]; then
|
||||
CHECK_UPDATE_RELEASE="$release"
|
||||
msg_info "New release available: v$release (current: v${current:-none})"
|
||||
return 0
|
||||
fi
|
||||
|
||||
msg_ok "$app is up to date (v$release)"
|
||||
return 1
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# GitHub: get Release & deploy (Alpine)
|
||||
# modes: tarball | prebuild | singlefile
|
||||
# ------------------------------
|
||||
fetch_and_deploy_gh() {
|
||||
# $1 app, $2 repo, [$3 mode], [$4 version], [$5 target], [$6 asset_pattern]
|
||||
local app="$1" repo="$2" mode="${3:-tarball}" version="${4:-latest}" target="${5:-/opt/$1}" pattern="${6:-}"
|
||||
local app_lc
|
||||
app_lc="$(lower "$app" | tr -d ' ')"
|
||||
local vfile="$HOME/.${app_lc}"
|
||||
local json url filename tmpd unpack
|
||||
|
||||
net_resolves api.github.com || {
|
||||
msg_error "DNS/network error"
|
||||
return 1
|
||||
}
|
||||
need_tool curl jq tar || return 1
|
||||
[ "$mode" = "prebuild" ] || [ "$mode" = "singlefile" ] && need_tool unzip >/dev/null 2>&1 || true
|
||||
|
||||
tmpd="$(mktemp -d)" || return 1
|
||||
mkdir -p "$target"
|
||||
|
||||
# Release JSON
|
||||
if [ "$version" = "latest" ]; then
|
||||
json="$(curl -fsSL "https://api.github.com/repos/$repo/releases/latest")" || {
|
||||
msg_error "GitHub API failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
else
|
||||
json="$(curl -fsSL "https://api.github.com/repos/$repo/releases/tags/$version")" || {
|
||||
msg_error "GitHub API failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
fi
|
||||
|
||||
# normalize the version (strip the leading "v")
|
||||
version="$(printf '%s' "$json" | jq -r '.tag_name // empty')"
|
||||
version="${version#v}"
|
||||
|
||||
[ -z "$version" ] && {
|
||||
msg_error "No tag in release json"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
|
||||
case "$mode" in
|
||||
tarball | source)
|
||||
url="$(printf '%s' "$json" | jq -r '.tarball_url // empty')"
|
||||
[ -z "$url" ] && url="https://github.com/$repo/archive/refs/tags/v$version.tar.gz"
|
||||
filename="${app_lc}-${version}.tar.gz"
|
||||
download_with_progress "$url" "$tmpd/$filename" || {
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
tar -xzf "$tmpd/$filename" -C "$tmpd" || {
|
||||
msg_error "tar extract failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
unpack="$(find "$tmpd" -mindepth 1 -maxdepth 1 -type d | head -n1)"
|
||||
# copy content of unpack to target
|
||||
(cd "$unpack" && tar -cf - .) | (cd "$target" && tar -xf -) || {
|
||||
msg_error "copy failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
;;
|
||||
prebuild)
|
||||
[ -n "$pattern" ] || {
|
||||
msg_error "prebuild requires asset pattern"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
url="$(printf '%s' "$json" | jq -r '.assets[].browser_download_url' | awk -v p="$pattern" '
|
||||
BEGIN{IGNORECASE=1}
|
||||
$0 ~ p {print; exit}
|
||||
')"
|
||||
[ -z "$url" ] && {
|
||||
msg_error "asset not found for pattern: $pattern"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
filename="${url##*/}"
|
||||
download_with_progress "$url" "$tmpd/$filename" || {
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
# unpack archive (Zip or tarball)
|
||||
case "$filename" in
|
||||
*.zip)
|
||||
need_tool unzip || {
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
mkdir -p "$tmpd/unp"
|
||||
unzip -q "$tmpd/$filename" -d "$tmpd/unp"
|
||||
;;
|
||||
*.tar.gz | *.tgz | *.tar.xz | *.tar.zst | *.tar.bz2)
|
||||
mkdir -p "$tmpd/unp"
|
||||
tar -xf "$tmpd/$filename" -C "$tmpd/unp"
|
||||
;;
|
||||
*)
|
||||
msg_error "unsupported archive: $filename"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
# strip the top-level folder
|
||||
if [ "$(find "$tmpd/unp" -mindepth 1 -maxdepth 1 -type d | wc -l)" -eq 1 ] && [ -z "$(find "$tmpd/unp" -mindepth 1 -maxdepth 1 -type f | head -n1)" ]; then
|
||||
unpack="$(find "$tmpd/unp" -mindepth 1 -maxdepth 1 -type d)"
|
||||
(cd "$unpack" && tar -cf - .) | (cd "$target" && tar -xf -) || {
|
||||
msg_error "copy failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
else
|
||||
(cd "$tmpd/unp" && tar -cf - .) | (cd "$target" && tar -xf -) || {
|
||||
msg_error "copy failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
fi
|
||||
;;
|
||||
singlefile)
|
||||
[ -n "$pattern" ] || {
|
||||
msg_error "singlefile requires asset pattern"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
url="$(printf '%s' "$json" | jq -r '.assets[].browser_download_url' | awk -v p="$pattern" '
|
||||
BEGIN{IGNORECASE=1}
|
||||
$0 ~ p {print; exit}
|
||||
')"
|
||||
[ -z "$url" ] && {
|
||||
msg_error "asset not found for pattern: $pattern"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
filename="${url##*/}"
|
||||
download_with_progress "$url" "$target/$app" || {
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
chmod +x "$target/$app"
|
||||
;;
|
||||
*)
|
||||
msg_error "Unknown mode: $mode"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
echo "$version" >"$vfile"
|
||||
ensure_usr_local_bin_persist
|
||||
rm -rf "$tmpd"
|
||||
msg_ok "Deployed $app ($version) → $target"
|
||||
}
|
||||
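A hedged usage sketch combining the two helpers above in an Alpine update flow; "myapp", the repository and the asset pattern are placeholders rather than values from a real script:

if check_for_gh_release "myapp" "example/myapp"; then
  # prebuild mode requires an asset pattern matched against the release assets
  fetch_and_deploy_gh "myapp" "example/myapp" "prebuild" "latest" "/opt/myapp" "linux_amd64.tar.gz"
fi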
|
||||
# ------------------------------
|
||||
# yq (mikefarah) – Alpine
|
||||
# ------------------------------
|
||||
setup_yq() {
|
||||
# prefer apk, unless FORCE_GH=1
|
||||
if [ "${FORCE_GH:-0}" != "1" ] && apk info -e yq >/dev/null 2>&1; then
|
||||
msg_info "Updating yq via apk"
|
||||
apk add --no-cache --upgrade yq >/dev/null 2>&1 || true
|
||||
msg_ok "yq ready ($(yq --version 2>/dev/null))"
|
||||
return 0
|
||||
fi
|
||||
|
||||
need_tool curl || return 1
|
||||
local arch bin url tmp
|
||||
case "$(uname -m)" in
|
||||
x86_64) arch="amd64" ;;
|
||||
aarch64) arch="arm64" ;;
|
||||
*)
|
||||
msg_error "Unsupported arch for yq: $(uname -m)"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
url="https://github.com/mikefarah/yq/releases/latest/download/yq_linux_${arch}"
|
||||
tmp="$(mktemp)"
|
||||
download_with_progress "$url" "$tmp" || return 1
|
||||
install -m 0755 "$tmp" /usr/local/bin/yq
|
||||
rm -f "$tmp"
|
||||
msg_ok "Setup yq ($(yq --version 2>/dev/null))"
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# Adminer – Alpine
|
||||
# ------------------------------
|
||||
setup_adminer() {
|
||||
need_tool curl || return 1
|
||||
msg_info "Setup Adminer (Alpine)"
|
||||
mkdir -p /var/www/localhost/htdocs/adminer
|
||||
curl -fsSL https://github.com/vrana/adminer/releases/latest/download/adminer.php \
|
||||
-o /var/www/localhost/htdocs/adminer/index.php || {
|
||||
msg_error "Adminer download failed"
|
||||
return 1
|
||||
}
|
||||
msg_ok "Adminer at /adminer (served by your webserver)"
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# uv – Alpine (musl tarball)
|
||||
# optional: PYTHON_VERSION="3.12"
|
||||
# ------------------------------
|
||||
setup_uv() {
|
||||
need_tool curl tar || return 1
|
||||
local UV_BIN="/usr/local/bin/uv"
|
||||
local arch tarball url tmpd ver installed
|
||||
|
||||
case "$(uname -m)" in
|
||||
x86_64) arch="x86_64-unknown-linux-musl" ;;
|
||||
aarch64) arch="aarch64-unknown-linux-musl" ;;
|
||||
*)
|
||||
msg_error "Unsupported arch for uv: $(uname -m)"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
ver="$(curl -fsSL https://api.github.com/repos/astral-sh/uv/releases/latest | jq -r '.tag_name' 2>/dev/null)"
|
||||
ver="${ver#v}"
|
||||
[ -z "$ver" ] && {
|
||||
msg_error "uv: cannot determine latest version"
|
||||
return 1
|
||||
}
|
||||
|
||||
if has "$UV_BIN"; then
|
||||
installed="$($UV_BIN -V 2>/dev/null | awk '{print $2}')"
|
||||
[ "$installed" = "$ver" ] && {
|
||||
msg_ok "uv $ver already installed"
|
||||
return 0
|
||||
}
|
||||
msg_info "Updating uv $installed → $ver"
|
||||
else
|
||||
msg_info "Setup uv $ver"
|
||||
fi
|
||||
|
||||
tmpd="$(mktemp -d)" || return 1
|
||||
tarball="uv-${arch}.tar.gz"
|
||||
url="https://github.com/astral-sh/uv/releases/download/v${ver}/${tarball}"
|
||||
|
||||
download_with_progress "$url" "$tmpd/uv.tar.gz" || {
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
tar -xzf "$tmpd/uv.tar.gz" -C "$tmpd" || {
|
||||
msg_error "uv: extract failed"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
|
||||
# tar contains ./uv
|
||||
if [ -x "$tmpd/uv" ]; then
|
||||
install -m 0755 "$tmpd/uv" "$UV_BIN"
|
||||
else
|
||||
# fallback: the binary is in a subfolder
|
||||
install -m 0755 "$tmpd"/*/uv "$UV_BIN" 2>/dev/null || {
|
||||
msg_error "uv binary not found in tar"
|
||||
rm -rf "$tmpd"
|
||||
return 1
|
||||
}
|
||||
fi
|
||||
rm -rf "$tmpd"
|
||||
ensure_usr_local_bin_persist
|
||||
msg_ok "Setup uv $ver"
|
||||
|
||||
if [ -n "${PYTHON_VERSION:-}" ]; then
|
||||
local match
|
||||
match="$(uv python list --only-downloads 2>/dev/null | awk -v maj="$PYTHON_VERSION" '
|
||||
$0 ~ "^cpython-"maj"\\." { print $0 }' | awk -F- '{print $2}' | sort -V | tail -n1)"
|
||||
[ -z "$match" ] && {
|
||||
msg_error "No matching Python for $PYTHON_VERSION"
|
||||
return 1
|
||||
}
|
||||
if ! uv python list | grep -q "cpython-${match}-linux"; then
|
||||
msg_info "Installing Python $match via uv"
|
||||
uv python install "$match" || {
|
||||
msg_error "uv python install failed"
|
||||
return 1
|
||||
}
|
||||
msg_ok "Python $match installed (uv)"
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# Java – Alpine (OpenJDK)
|
||||
# JAVA_VERSION: 17|21 (Default 21)
|
||||
# ------------------------------
|
||||
setup_java() {
|
||||
local JAVA_VERSION="${JAVA_VERSION:-21}" pkg
|
||||
case "$JAVA_VERSION" in
|
||||
17) pkg="openjdk17-jdk" ;;
|
||||
21 | *) pkg="openjdk21-jdk" ;;
|
||||
esac
|
||||
msg_info "Setup Java (OpenJDK $JAVA_VERSION)"
|
||||
apk add --no-cache "$pkg" >/dev/null 2>&1 || {
|
||||
msg_error "apk add $pkg failed"
|
||||
return 1
|
||||
}
|
||||
# set JAVA_HOME
|
||||
local prof="/etc/profile.d/20-java.sh"
|
||||
if [ ! -f "$prof" ]; then
|
||||
echo 'export JAVA_HOME=$(dirname $(dirname $(readlink -f $(command -v java))))' >"$prof"
|
||||
echo 'case ":$PATH:" in *:$JAVA_HOME/bin:*) ;; *) export PATH="$JAVA_HOME/bin:$PATH";; esac' >>"$prof"
|
||||
chmod +x "$prof"
|
||||
fi
|
||||
msg_ok "Java ready: $(java -version 2>&1 | head -n1)"
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# Go – Alpine (apk prefers, else tarball)
|
||||
# ------------------------------
|
||||
setup_go() {
|
||||
if [ -z "${GO_VERSION:-}" ]; then
|
||||
msg_info "Setup Go (apk)"
|
||||
apk add --no-cache go >/dev/null 2>&1 || {
|
||||
msg_error "apk add go failed"
|
||||
return 1
|
||||
}
|
||||
msg_ok "Go ready: $(go version 2>/dev/null)"
|
||||
return 0
|
||||
fi
|
||||
|
||||
need_tool curl tar || return 1
|
||||
local ARCH TARBALL URL TMP
|
||||
case "$(uname -m)" in
|
||||
x86_64) ARCH="amd64" ;;
|
||||
aarch64) ARCH="arm64" ;;
|
||||
*)
|
||||
msg_error "Unsupported arch for Go: $(uname -m)"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
TARBALL="go${GO_VERSION}.linux-${ARCH}.tar.gz"
|
||||
URL="https://go.dev/dl/${TARBALL}"
|
||||
msg_info "Setup Go $GO_VERSION (tarball)"
|
||||
TMP="$(mktemp)"
|
||||
download_with_progress "$URL" "$TMP" || return 1
|
||||
rm -rf /usr/local/go
|
||||
tar -C /usr/local -xzf "$TMP" || {
|
||||
msg_error "extract go failed"
|
||||
rm -f "$TMP"
|
||||
return 1
|
||||
}
|
||||
rm -f "$TMP"
|
||||
ln -sf /usr/local/go/bin/go /usr/local/bin/go
|
||||
ln -sf /usr/local/go/bin/gofmt /usr/local/bin/gofmt
|
||||
ensure_usr_local_bin_persist
|
||||
msg_ok "Go ready: $(go version 2>/dev/null)"
|
||||
}
|
||||
|
||||
# ------------------------------
|
||||
# Composer – Alpine
|
||||
# uses php83-cli + openssl + phar
|
||||
# ------------------------------
|
||||
setup_composer() {
|
||||
local COMPOSER_BIN="/usr/local/bin/composer"
|
||||
if ! has php; then
|
||||
# prefers php83
|
||||
msg_info "Installing PHP CLI for Composer"
|
||||
apk add --no-cache php83-cli php83-openssl php83-phar php83-iconv >/dev/null 2>&1 || {
|
||||
# Fallback to generic php if 83 not available
|
||||
apk add --no-cache php-cli php-openssl php-phar php-iconv >/dev/null 2>&1 || {
|
||||
msg_error "Failed to install php-cli for composer"
|
||||
return 1
|
||||
}
|
||||
}
|
||||
msg_ok "PHP CLI ready: $(php -v | head -n1)"
|
||||
fi
|
||||
|
||||
if [ -x "$COMPOSER_BIN" ]; then
|
||||
msg_info "Updating Composer"
|
||||
else
|
||||
msg_info "Setup Composer"
|
||||
fi
|
||||
|
||||
need_tool curl || return 1
|
||||
curl -fsSL https://getcomposer.org/installer -o /tmp/composer-setup.php || {
|
||||
msg_error "composer installer download failed"
|
||||
return 1
|
||||
}
|
||||
php /tmp/composer-setup.php --install-dir=/usr/local/bin --filename=composer >/dev/null 2>&1 || {
|
||||
msg_error "composer install failed"
|
||||
return 1
|
||||
}
|
||||
rm -f /tmp/composer-setup.php
|
||||
ensure_usr_local_bin_persist
|
||||
msg_ok "Composer ready: $(composer --version 2>/dev/null)"
|
||||
}
|
||||
misc/api.func (202 lines)
@ -2,6 +2,153 @@
|
||||
# Author: michelroegl-brunner
|
||||
# License: MIT | https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/LICENSE
|
||||
|
||||
# ==============================================================================
|
||||
# API.FUNC - TELEMETRY & DIAGNOSTICS API
|
||||
# ==============================================================================
|
||||
#
|
||||
# Provides functions for sending anonymous telemetry data to Community-Scripts
|
||||
# API for analytics and diagnostics purposes.
|
||||
#
|
||||
# Features:
|
||||
# - Container/VM creation statistics
|
||||
# - Installation success/failure tracking
|
||||
# - Error code mapping and reporting
|
||||
# - Privacy-respecting anonymous telemetry
|
||||
#
|
||||
# Usage:
|
||||
# source <(curl -fsSL .../api.func)
|
||||
# post_to_api # Report container creation
|
||||
# post_update_to_api # Report installation status
|
||||
#
|
||||
# Privacy:
|
||||
# - Only anonymous statistics (no personal data)
|
||||
# - User can opt-out via diagnostics settings
|
||||
# - Random UUID for session tracking only
|
||||
#
|
||||
# ==============================================================================
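In practice both entry points are no-ops unless diagnostics are enabled and a session UUID exists; a hedged sketch of the calling side (the variable names come from the comments above, the values are placeholders set here only for illustration):

DIAGNOSTICS="yes"
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"

post_to_api                    # report that the container creation/install started
# ... installation steps ...
post_update_to_api "success" 0 # report the final status once, at the end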
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 1: ERROR CODE DESCRIPTIONS
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# explain_exit_code()
|
||||
#
|
||||
# - Maps numeric exit codes to human-readable error descriptions
|
||||
# - Supports:
|
||||
# * Generic/Shell errors (1, 2, 126, 127, 128, 130, 137, 139, 143)
|
||||
# * Package manager errors (APT, DPKG: 100, 101, 255)
|
||||
# * Node.js/npm errors (243-249, 254)
|
||||
# * Python/pip/uv errors (210-212)
|
||||
# * PostgreSQL errors (231-234)
|
||||
# * MySQL/MariaDB errors (241-244)
|
||||
# * MongoDB errors (251-254)
|
||||
# * Proxmox custom codes (200-231)
|
||||
# - Returns description string for given exit code
|
||||
# - Shared function with error_handler.func for consistency
|
||||
# ------------------------------------------------------------------------------
|
||||
explain_exit_code() {
|
||||
local code="$1"
|
||||
case "$code" in
|
||||
# --- Generic / Shell ---
|
||||
1) echo "General error / Operation not permitted" ;;
|
||||
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
|
||||
126) echo "Command invoked cannot execute (permission problem?)" ;;
|
||||
127) echo "Command not found" ;;
|
||||
128) echo "Invalid argument to exit" ;;
|
||||
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
|
||||
137) echo "Killed (SIGKILL / Out of memory?)" ;;
|
||||
139) echo "Segmentation fault (core dumped)" ;;
|
||||
143) echo "Terminated (SIGTERM)" ;;
|
||||
|
||||
# --- Package manager / APT / DPKG ---
|
||||
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
|
||||
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
|
||||
255) echo "DPKG: Fatal internal error" ;;
|
||||
|
||||
# --- Node.js / npm / pnpm / yarn ---
|
||||
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
|
||||
245) echo "Node.js: Invalid command-line option" ;;
|
||||
246) echo "Node.js: Internal JavaScript Parse Error" ;;
|
||||
247) echo "Node.js: Fatal internal error" ;;
|
||||
248) echo "Node.js: Invalid C++ addon / N-API failure" ;;
|
||||
249) echo "Node.js: Inspector error" ;;
|
||||
254) echo "npm/pnpm/yarn: Unknown fatal error" ;;
|
||||
|
||||
# --- Python / pip / uv ---
|
||||
210) echo "Python: Virtualenv / uv environment missing or broken" ;;
|
||||
211) echo "Python: Dependency resolution failed" ;;
|
||||
212) echo "Python: Installation aborted (permissions or EXTERNALLY-MANAGED)" ;;
|
||||
|
||||
# --- PostgreSQL ---
|
||||
231) echo "PostgreSQL: Connection failed (server not running / wrong socket)" ;;
|
||||
232) echo "PostgreSQL: Authentication failed (bad user/password)" ;;
|
||||
233) echo "PostgreSQL: Database does not exist" ;;
|
||||
234) echo "PostgreSQL: Fatal error in query / syntax" ;;
|
||||
|
||||
# --- MySQL / MariaDB ---
|
||||
241) echo "MySQL/MariaDB: Connection failed (server not running / wrong socket)" ;;
|
||||
242) echo "MySQL/MariaDB: Authentication failed (bad user/password)" ;;
|
||||
243) echo "MySQL/MariaDB: Database does not exist" ;;
|
||||
244) echo "MySQL/MariaDB: Fatal error in query / syntax" ;;
|
||||
|
||||
# --- MongoDB ---
|
||||
251) echo "MongoDB: Connection failed (server not running)" ;;
|
||||
252) echo "MongoDB: Authentication failed (bad user/password)" ;;
|
||||
253) echo "MongoDB: Database not found" ;;
|
||||
254) echo "MongoDB: Fatal query error" ;;
|
||||
|
||||
# --- Proxmox Custom Codes ---
|
||||
200) echo "Custom: Failed to create lock file" ;;
|
||||
203) echo "Custom: Missing CTID variable" ;;
|
||||
204) echo "Custom: Missing PCT_OSTYPE variable" ;;
|
||||
205) echo "Custom: Invalid CTID (<100)" ;;
|
||||
206) echo "Custom: CTID already in use (check 'pct list' and /etc/pve/lxc/)" ;;
|
||||
207) echo "Custom: Password contains unescaped special characters (-, /, \\, *, etc.)" ;;
|
||||
208) echo "Custom: Invalid configuration (DNS/MAC/Network format error)" ;;
|
||||
209) echo "Custom: Container creation failed (check logs for pct create output)" ;;
|
||||
210) echo "Custom: Cluster not quorate" ;;
|
||||
211) echo "Custom: Timeout waiting for template lock (concurrent download in progress)" ;;
|
||||
214) echo "Custom: Not enough storage space" ;;
|
||||
215) echo "Custom: Container created but not listed (ghost state - check /etc/pve/lxc/)" ;;
|
||||
216) echo "Custom: RootFS entry missing in config (incomplete creation)" ;;
|
||||
217) echo "Custom: Storage does not support rootdir (check storage capabilities)" ;;
|
||||
218) echo "Custom: Template file corrupted or incomplete download (size <1MB or invalid archive)" ;;
|
||||
220) echo "Custom: Unable to resolve template path" ;;
|
||||
221) echo "Custom: Template file exists but not readable (check file permissions)" ;;
|
||||
222) echo "Custom: Template download failed after 3 attempts (network/storage issue)" ;;
|
||||
223) echo "Custom: Template not available after download (storage sync issue)" ;;
|
||||
225) echo "Custom: No template available for OS/Version (check 'pveam available')" ;;
|
||||
231) echo "Custom: LXC stack upgrade/retry failed (outdated pve-container - check https://github.com/community-scripts/ProxmoxVE/discussions/8126)" ;;
|
||||
|
||||
# --- Default ---
|
||||
*) echo "Unknown error" ;;
|
||||
esac
|
||||
}
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 2: TELEMETRY FUNCTIONS
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# post_to_api()
|
||||
#
|
||||
# - Sends LXC container creation statistics to Community-Scripts API
|
||||
# - Only executes if:
|
||||
# * curl is available
|
||||
# * DIAGNOSTICS=yes
|
||||
# * RANDOM_UUID is set
|
||||
# - Payload includes:
|
||||
# * Container type, disk size, CPU cores, RAM
|
||||
# * OS type and version
|
||||
# * IPv6 disable status
|
||||
# * Application name (NSAPP)
|
||||
# * Installation method
|
||||
# * PVE version
|
||||
# * Status: "installing"
|
||||
# * Random UUID for session tracking
|
||||
# - Anonymous telemetry (no personal data)
|
||||
# ------------------------------------------------------------------------------
|
||||
post_to_api() {
|
||||
|
||||
if ! command -v curl &>/dev/null; then
|
||||
@ -38,14 +185,26 @@ post_to_api() {
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
if [[ "$DIAGNOSTICS" == "yes" ]]; then
|
||||
RESPONSE=$(curl -s -w "%{http_code}" -L -X POST "$API_URL" --post301 --post302 \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "$JSON_PAYLOAD") || true
|
||||
fi
|
||||
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# post_to_api_vm()
|
||||
#
|
||||
# - Sends VM creation statistics to Community-Scripts API
|
||||
# - Similar to post_to_api() but for virtual machines (not containers)
|
||||
# - Reads DIAGNOSTICS from /usr/local/community-scripts/diagnostics file
|
||||
# - Payload differences:
|
||||
# * ct_type=2 (VM instead of LXC)
|
||||
# * type="vm"
|
||||
# * Disk size without 'G' suffix (parsed from DISK_SIZE variable)
|
||||
# - Only executes if DIAGNOSTICS=yes and RANDOM_UUID is set
|
||||
# ------------------------------------------------------------------------------
|
||||
post_to_api_vm() {
|
||||
|
||||
if [[ ! -f /usr/local/community-scripts/diagnostics ]]; then
|
||||
@ -88,7 +247,6 @@ post_to_api_vm() {
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
if [[ "$DIAGNOSTICS" == "yes" ]]; then
|
||||
RESPONSE=$(curl -s -w "%{http_code}" -L -X POST "$API_URL" --post301 --post302 \
|
||||
-H "Content-Type: application/json" \
|
||||
@ -96,19 +254,54 @@ EOF
|
||||
fi
|
||||
}
|
||||
|
||||
POST_UPDATE_DONE=false
|
||||
# ------------------------------------------------------------------------------
|
||||
# post_update_to_api()
|
||||
#
|
||||
# - Reports installation completion status to API
|
||||
# - Prevents duplicate submissions via POST_UPDATE_DONE flag
|
||||
# - Arguments:
|
||||
# * $1: status ("success" or "failed")
|
||||
# * $2: exit_code (default: 1 for failed, 0 for success)
|
||||
# - Payload includes:
|
||||
# * Final status (success/failed)
|
||||
# * Error description via get_error_description()
|
||||
# * Random UUID for session correlation
|
||||
# - Only executes once per session
|
||||
# - Silently returns if:
|
||||
# * curl not available
|
||||
# * Already reported (POST_UPDATE_DONE=true)
|
||||
# * DIAGNOSTICS=no
|
||||
# ------------------------------------------------------------------------------
|
||||
post_update_to_api() {
|
||||
|
||||
if ! command -v curl &>/dev/null; then
|
||||
return
|
||||
fi
|
||||
|
||||
# Initialize flag if not set (prevents 'unbound variable' error with set -u)
|
||||
POST_UPDATE_DONE=${POST_UPDATE_DONE:-false}
|
||||
|
||||
if [ "$POST_UPDATE_DONE" = true ]; then
|
||||
return 0
|
||||
fi
|
||||
exit_code=${2:-1}
|
||||
local API_URL="http://api.community-scripts.org/upload/updatestatus"
|
||||
local status="${1:-failed}"
|
||||
local error="${2:-No error message}"
|
||||
if [[ "$status" == "failed" ]]; then
|
||||
local exit_code="${2:-1}"
|
||||
elif [[ "$status" == "success" ]]; then
|
||||
local exit_code="${2:-0}"
|
||||
fi
|
||||
|
||||
if [[ -z "$exit_code" ]]; then
|
||||
exit_code=1
|
||||
fi
|
||||
|
||||
error=$(explain_exit_code "$exit_code")
|
||||
|
||||
if [ -z "$error" ]; then
|
||||
error="Unknown error"
|
||||
fi
|
||||
|
||||
JSON_PAYLOAD=$(
|
||||
cat <<EOF
|
||||
@ -119,7 +312,6 @@ post_update_to_api() {
|
||||
}
|
||||
EOF
|
||||
)
|
||||
|
||||
if [[ "$DIAGNOSTICS" == "yes" ]]; then
|
||||
RESPONSE=$(curl -s -w "%{http_code}" -L -X POST "$API_URL" --post301 --post302 \
|
||||
-H "Content-Type: application/json" \
|
||||
|
||||
misc/build.func (4253 lines): file diff suppressed because it is too large
@ -1,699 +0,0 @@
|
||||
config_file() {
  CONFIG_FILE="/opt/community-scripts/.settings"

  if [[ -f "/opt/community-scripts/${NSAPP}.conf" ]]; then
    CONFIG_FILE="/opt/community-scripts/${NSAPP}.conf"
  fi

  if CONFIG_FILE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set absolute path to config file" 8 58 "$CONFIG_FILE" --title "CONFIG FILE" 3>&1 1>&2 2>&3); then
    if [[ ! -f "$CONFIG_FILE" ]]; then
      echo -e "${CROSS}${RD}Config file not found, exiting script!${CL}"
      exit
    else
      echo -e "${INFO}${BOLD}${DGN}Using config File: ${BGN}$CONFIG_FILE${CL}"
      source "$CONFIG_FILE"
    fi
  fi
  if [[ -n "${CT_ID-}" ]]; then
    if [[ "$CT_ID" =~ ^([0-9]{3,4})-([0-9]{3,4})$ ]]; then
      MIN_ID=${BASH_REMATCH[1]}
      MAX_ID=${BASH_REMATCH[2]}
      if ((MIN_ID >= MAX_ID)); then
        msg_error "Invalid Container ID range. The first number must be smaller than the second number, was ${CT_ID}"
        exit
      fi

      LIST_OF_IDS=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null | grep -oP '"vmid":\s*\K\d+') || true
      if [[ -n "$LIST_OF_IDS" ]]; then
        for ((ID = MIN_ID; ID <= MAX_ID; ID++)); do
          if ! grep -q "^$ID$" <<<"$LIST_OF_IDS"; then
            CT_ID=$ID
            break
          fi
        done
      fi
      echo -e "${CONTAINERID}${BOLD}${DGN}Container ID: ${BGN}$CT_ID${CL}"

    elif [[ "$CT_ID" =~ ^[0-9]+$ ]]; then
      LIST_OF_IDS=$(pvesh get /cluster/resources --type vm --output-format json 2>/dev/null | grep -oP '"vmid":\s*\K\d+') || true
      if [[ -n "$LIST_OF_IDS" ]]; then

        if ! grep -q "^$CT_ID$" <<<"$LIST_OF_IDS"; then
          echo -e "${CONTAINERID}${BOLD}${DGN}Container ID: ${BGN}$CT_ID${CL}"
        else
          msg_error "Container ID $CT_ID already exists"
          exit
        fi
      else
        echo -e "${CONTAINERID}${BOLD}${DGN}Container ID: ${BGN}$CT_ID${CL}"
      fi
    else
      msg_error "Invalid Container ID format. Needs to be 0000-9999 or 0-9999, was ${CT_ID}"
      exit
    fi
  else
    if CT_ID=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set Container ID" 8 58 "$NEXTID" --title "CONTAINER ID" 3>&1 1>&2 2>&3); then
      if [ -z "$CT_ID" ]; then
        CT_ID="$NEXTID"
        echo -e "${CONTAINERID}${BOLD}${DGN}Container ID: ${BGN}$CT_ID${CL}"
      else
        echo -e "${CONTAINERID}${BOLD}${DGN}Container ID: ${BGN}$CT_ID${CL}"
      fi
    else
      exit_script
    fi

  fi
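  # Illustrative CT_ID values (assumed examples) accepted by the checks above:
  #   CT_ID=105        # fixed ID, must not already exist
  #   CT_ID=100-199    # ID range; the first unused ID in the range is taken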
  if [[ -n "${CT_TYPE-}" ]]; then
    if [[ "$CT_TYPE" -eq 0 ]]; then
      CT_TYPE_DESC="Privileged"
    elif [[ "$CT_TYPE" -eq 1 ]]; then
      CT_TYPE_DESC="Unprivileged"
    else
      msg_error "Unknown setting for CT_TYPE, should be 1 or 0, was ${CT_TYPE}"
      exit
    fi
    echo -e "${CONTAINERTYPE}${BOLD}${DGN}Container Type: ${BGN}$CT_TYPE_DESC${CL}"
  else
    if CT_TYPE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --title "CONTAINER TYPE" --radiolist "Choose Type" 10 58 2 \
      "1" "Unprivileged" ON \
      "0" "Privileged" OFF \
      3>&1 1>&2 2>&3); then
      if [ -n "$CT_TYPE" ]; then
        CT_TYPE_DESC="Unprivileged"
        if [ "$CT_TYPE" -eq 0 ]; then
          CT_TYPE_DESC="Privileged"
        fi
        echo -e "${CONTAINERTYPE}${BOLD}${DGN}Container Type: ${BGN}$CT_TYPE_DESC${CL}"
      fi
    else
      exit_script
    fi
  fi

  if [[ -n "${PW-}" ]]; then
    if [[ "$PW" == "none" ]]; then
      PW=""
    else
      if [[ "$PW" == *" "* ]]; then
        msg_error "Password cannot contain spaces"
        exit
      elif [[ ${#PW} -lt 5 ]]; then
        msg_error "Password must be at least 5 characters long"
        exit
      else
        echo -e "${VERIFYPW}${BOLD}${DGN}Root Password: ${BGN}********${CL}"
      fi
      PW="-password $PW"
    fi
  else
    while true; do
      if PW1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --passwordbox "\nSet Root Password (needed for root ssh access)" 9 58 --title "PASSWORD (leave blank for automatic login)" 3>&1 1>&2 2>&3); then
        if [[ -n "$PW1" ]]; then
          if [[ "$PW1" == *" "* ]]; then
            whiptail --msgbox "Password cannot contain spaces. Please try again." 8 58
          elif [ ${#PW1} -lt 5 ]; then
            whiptail --msgbox "Password must be at least 5 characters long. Please try again." 8 58
          else
            if PW2=$(whiptail --backtitle "Proxmox VE Helper Scripts" --passwordbox "\nVerify Root Password" 9 58 --title "PASSWORD VERIFICATION" 3>&1 1>&2 2>&3); then
              if [[ "$PW1" == "$PW2" ]]; then
                PW="-password $PW1"
                echo -e "${VERIFYPW}${BOLD}${DGN}Root Password: ${BGN}********${CL}"
                break
              else
                whiptail --msgbox "Passwords do not match. Please try again." 8 58
              fi
            else
              exit_script
            fi
          fi
        else
          PW1="Automatic Login"
          PW=""
          echo -e "${VERIFYPW}${BOLD}${DGN}Root Password: ${BGN}$PW1${CL}"
          break
        fi
      else
        exit_script
      fi
    done
  fi

  if [[ -n "${HN-}" ]]; then
    echo -e "${HOSTNAME}${BOLD}${DGN}Hostname: ${BGN}$HN${CL}"
  else
    if CT_NAME=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set Hostname" 8 58 "$NSAPP" --title "HOSTNAME" 3>&1 1>&2 2>&3); then
      if [ -z "$CT_NAME" ]; then
        HN="$NSAPP"
      else
        HN=$(echo "${CT_NAME,,}" | tr -d ' ')
      fi
      echo -e "${HOSTNAME}${BOLD}${DGN}Hostname: ${BGN}$HN${CL}"
    else
      exit_script
    fi
  fi

  if [[ -n "${DISK_SIZE-}" ]]; then
    if [[ "$DISK_SIZE" =~ ^-?[0-9]+$ ]]; then
      echo -e "${DISKSIZE}${BOLD}${DGN}Disk Size: ${BGN}${DISK_SIZE} GB${CL}"
    else
      msg_error "DISK_SIZE must be an integer, was ${DISK_SIZE}"
      exit
    fi
  else
    if DISK_SIZE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set Disk Size in GB" 8 58 "$var_disk" --title "DISK SIZE" 3>&1 1>&2 2>&3); then
      if [ -z "$DISK_SIZE" ]; then
        DISK_SIZE="$var_disk"
        echo -e "${DISKSIZE}${BOLD}${DGN}Disk Size: ${BGN}${DISK_SIZE} GB${CL}"
      else
        if ! [[ $DISK_SIZE =~ $INTEGER ]]; then
          echo -e "${INFO}${HOLD}${RD} DISK SIZE MUST BE AN INTEGER NUMBER!${CL}"
          advanced_settings
        fi
        echo -e "${DISKSIZE}${BOLD}${DGN}Disk Size: ${BGN}${DISK_SIZE} GB${CL}"
      fi
    else
      exit_script
    fi
  fi

  if [[ -n "${CORE_COUNT-}" ]]; then
    if [[ "$CORE_COUNT" =~ ^-?[0-9]+$ ]]; then
      echo -e "${CPUCORE}${BOLD}${DGN}CPU Cores: ${BGN}${CORE_COUNT}${CL}"
    else
      msg_error "CORE_COUNT must be an integer, was ${CORE_COUNT}"
      exit
    fi
  else
    if CORE_COUNT=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Allocate CPU Cores" 8 58 "$var_cpu" --title "CORE COUNT" 3>&1 1>&2 2>&3); then
      if [ -z "$CORE_COUNT" ]; then
        CORE_COUNT="$var_cpu"
        echo -e "${CPUCORE}${BOLD}${DGN}CPU Cores: ${BGN}$CORE_COUNT${CL}"
      else
        echo -e "${CPUCORE}${BOLD}${DGN}CPU Cores: ${BGN}$CORE_COUNT${CL}"
      fi
    else
      exit_script
    fi
  fi

  if [[ -n "${RAM_SIZE-}" ]]; then
    if [[ "$RAM_SIZE" =~ ^-?[0-9]+$ ]]; then
      echo -e "${RAMSIZE}${BOLD}${DGN}RAM Size: ${BGN}${RAM_SIZE} MiB${CL}"
    else
      msg_error "RAM_SIZE must be an integer, was ${RAM_SIZE}"
      exit
    fi
  else
    if RAM_SIZE=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Allocate RAM in MiB" 8 58 "$var_ram" --title "RAM" 3>&1 1>&2 2>&3); then
      if [ -z "$RAM_SIZE" ]; then
        RAM_SIZE="$var_ram"
        echo -e "${RAMSIZE}${BOLD}${DGN}RAM Size: ${BGN}${RAM_SIZE} MiB${CL}"
      else
        echo -e "${RAMSIZE}${BOLD}${DGN}RAM Size: ${BGN}${RAM_SIZE} MiB${CL}"
      fi
    else
      exit_script
    fi
  fi

  IFACE_FILEPATH_LIST="/etc/network/interfaces"$'\n'$(find "/etc/network/interfaces.d/" -type f)
  BRIDGES=""
  OLD_IFS=$IFS
  IFS=$'\n'

  for iface_filepath in ${IFACE_FILEPATH_LIST}; do

    iface_indexes_tmpfile=$(mktemp -q -u '.iface-XXXX')
    (grep -Pn '^\s*iface' "${iface_filepath}" | cut -d':' -f1 && wc -l "${iface_filepath}" | cut -d' ' -f1) | awk 'FNR==1 {line=$0; next} {print line":"$0-1; line=$0}' >"${iface_indexes_tmpfile}" || true

    if [ -f "${iface_indexes_tmpfile}" ]; then

      while read -r pair; do
        start=$(echo "${pair}" | cut -d':' -f1)
        end=$(echo "${pair}" | cut -d':' -f2)
        if awk "NR >= ${start} && NR <= ${end}" "${iface_filepath}" | grep -qP '^\s*(bridge[-_](ports|stp|fd|vlan-aware|vids)|ovs_type\s+OVSBridge)\b'; then
          iface_name=$(sed "${start}q;d" "${iface_filepath}" | awk '{print $2}')
          BRIDGES="${iface_name}"$'\n'"${BRIDGES}"
        fi

      done <"${iface_indexes_tmpfile}"
      rm -f "${iface_indexes_tmpfile}"
    fi

  done
  IFS=$OLD_IFS
  BRIDGES=$(echo "$BRIDGES" | grep -v '^\s*$' | sort | uniq)

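  # Illustrative /etc/network/interfaces stanza (assumed example) that the scan
  # above would detect as a bridge, because it carries a bridge-ports option:
  #   auto vmbr0
  #   iface vmbr0 inet static
  #     address 192.168.1.2/24
  #     bridge-ports eno1
  #     bridge-stp off
  #     bridge-fd 0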
  if [[ -n "${BRG-}" ]]; then
    if echo "$BRIDGES" | grep -q "${BRG}"; then
      echo -e "${BRIDGE}${BOLD}${DGN}Bridge: ${BGN}$BRG${CL}"
    else
      msg_error "Bridge '${BRG}' does not exist in /etc/network/interfaces or /etc/network/interfaces.d/sdn"
      exit
    fi
  else
    BRG=$(whiptail --backtitle "Proxmox VE Helper Scripts" --menu "Select network bridge:" 15 40 6 $(echo "$BRIDGES" | awk '{print $0, "Bridge"}') 3>&1 1>&2 2>&3)
    if [ -z "$BRG" ]; then
      exit_script
    else
      echo -e "${BRIDGE}${BOLD}${DGN}Bridge: ${BGN}$BRG${CL}"
    fi
  fi

  local ip_cidr_regex='^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})/([0-9]{1,2})$'
  local ip_regex='^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$'

  if [[ -n ${NET-} ]]; then
    if [ "$NET" == "dhcp" ]; then
      echo -e "${NETWORK}${BOLD}${DGN}IP Address: ${BGN}DHCP${CL}"
      echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}Default${CL}"
      GATE=""
    elif [[ "$NET" =~ $ip_cidr_regex ]]; then
      echo -e "${NETWORK}${BOLD}${DGN}IP Address: ${BGN}$NET${CL}"
      if [[ -n "$GATE" ]]; then
        [[ "$GATE" =~ ",gw=" ]] && GATE="${GATE##,gw=}"
        if [[ "$GATE" =~ $ip_regex ]]; then
          echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}$GATE${CL}"
          GATE=",gw=$GATE"
        else
          msg_error "Invalid IP Address format for Gateway. Needs to be 0.0.0.0, was ${GATE}"
          exit
        fi

      else
        while true; do
          GATE1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Enter gateway IP address" 8 58 --title "Gateway IP" 3>&1 1>&2 2>&3)
          if [ -z "$GATE1" ]; then
            whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "Gateway IP address cannot be empty" 8 58
          elif [[ ! "$GATE1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
            whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "Invalid IP address format" 8 58
          else
            GATE=",gw=$GATE1"
            echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}$GATE1${CL}"
            break
          fi
        done
      fi
    elif [[ "$NET" == *-* ]]; then
      IFS="-" read -r ip_start ip_end <<<"$NET"

      if [[ ! "$ip_start" =~ $ip_cidr_regex ]] || [[ ! "$ip_end" =~ $ip_cidr_regex ]]; then
        msg_error "Invalid IP range format, was $NET should be 0.0.0.0/0-0.0.0.0/0"
        exit 1
      fi

      ip1="${ip_start%%/*}"
      ip2="${ip_end%%/*}"
      cidr="${ip_start##*/}"

      ip_to_int() {
        local IFS=.
        read -r i1 i2 i3 i4 <<<"$1"
        echo $(((i1 << 24) + (i2 << 16) + (i3 << 8) + i4))
      }

      int_to_ip() {
        local ip=$1
        echo "$(((ip >> 24) & 0xFF)).$(((ip >> 16) & 0xFF)).$(((ip >> 8) & 0xFF)).$((ip & 0xFF))"
      }

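      # Worked example (illustrative values): 192.168.1.10 packs to
      # (192<<24)+(168<<16)+(1<<8)+10 = 3232235786, and int_to_ip reverses it:
      #   ip_to_int 192.168.1.10   # -> 3232235786
      #   int_to_ip 3232235786     # -> 192.168.1.10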
      start_int=$(ip_to_int "$ip1")
      end_int=$(ip_to_int "$ip2")

      for ((ip_int = start_int; ip_int <= end_int; ip_int++)); do
        ip=$(int_to_ip $ip_int)
        msg_info "Checking IP: $ip"
        if ! ping -c 2 -W 1 "$ip" >/dev/null 2>&1; then
          NET="$ip/$cidr"
          msg_ok "Using free IP Address: ${BGN}$NET${CL}"
          sleep 3
          break
        fi
      done
      if [[ "$NET" == *-* ]]; then
        msg_error "No free IP found in range"
        exit 1
      fi
      if [ -n "$GATE" ]; then
        if [[ "$GATE" =~ $ip_regex ]]; then
          echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}$GATE${CL}"
          GATE=",gw=$GATE"
        else
          msg_error "Invalid IP Address format for Gateway. Needs to be 0.0.0.0, was ${GATE}"
          exit
        fi
      else
        while true; do
          GATE1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Enter gateway IP address" 8 58 --title "Gateway IP" 3>&1 1>&2 2>&3)
          if [ -z "$GATE1" ]; then
            whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "Gateway IP address cannot be empty" 8 58
          elif [[ ! "$GATE1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
            whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "Invalid IP address format" 8 58
          else
            GATE=",gw=$GATE1"
            echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}$GATE1${CL}"
            break
          fi
        done
      fi
    else
      msg_error "Invalid IP Address format. Needs to be 0.0.0.0/0 or a range like 10.0.0.1/24-10.0.0.10/24, was ${NET}"
      exit
    fi
  else
    while true; do
      NET=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set a Static IPv4 CIDR Address (/24)" 8 58 dhcp --title "IP ADDRESS" 3>&1 1>&2 2>&3)
      exit_status=$?
      if [ $exit_status -eq 0 ]; then
        if [ "$NET" = "dhcp" ]; then
          echo -e "${NETWORK}${BOLD}${DGN}IP Address: ${BGN}$NET${CL}"
          break
        else
          if [[ "$NET" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}/([0-9]|[1-2][0-9]|3[0-2])$ ]]; then
            echo -e "${NETWORK}${BOLD}${DGN}IP Address: ${BGN}$NET${CL}"
            break
          else
            whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "$NET is an invalid IPv4 CIDR address. Please enter a valid IPv4 CIDR address or 'dhcp'" 8 58
          fi
        fi
      else
        exit_script
      fi
    done
    if [ "$NET" != "dhcp" ]; then
      while true; do
        GATE1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Enter gateway IP address" 8 58 --title "Gateway IP" 3>&1 1>&2 2>&3)
        if [ -z "$GATE1" ]; then
          whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "Gateway IP address cannot be empty" 8 58
        elif [[ ! "$GATE1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
          whiptail --backtitle "Proxmox VE Helper Scripts" --msgbox "Invalid IP address format" 8 58
        else
          GATE=",gw=$GATE1"
          echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}$GATE1${CL}"
          break
        fi
      done
    else
      GATE=""
      echo -e "${GATEWAY}${BOLD}${DGN}Gateway IP Address: ${BGN}Default${CL}"
    fi
  fi

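  # Illustrative NET values (assumed examples) matching the checks above:
  #   NET=dhcp                       # DHCP; gateway falls back to "Default"
  #   NET=192.168.1.50/24            # static CIDR; a gateway IP is then required
  #   NET=10.0.0.1/24-10.0.0.10/24   # range; first address not answering ping is used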
  if [ "$var_os" == "alpine" ]; then
    APT_CACHER=""
    APT_CACHER_IP=""
  else
    if [[ -n "${APT_CACHER_IP-}" ]]; then
      if [[ ! $APT_CACHER_IP == "none" ]]; then
        APT_CACHER="yes"
        echo -e "${NETWORK}${BOLD}${DGN}APT-CACHER IP Address: ${BGN}$APT_CACHER_IP${CL}"
      else
        APT_CACHER=""
        echo -e "${NETWORK}${BOLD}${DGN}APT-Cacher IP Address: ${BGN}No${CL}"
      fi
    else
      if APT_CACHER_IP=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set APT-Cacher IP (leave blank for none)" 8 58 --title "APT-Cacher IP" 3>&1 1>&2 2>&3); then
        APT_CACHER="${APT_CACHER_IP:+yes}"
        echo -e "${NETWORK}${BOLD}${DGN}APT-Cacher IP Address: ${BGN}${APT_CACHER_IP:-Default}${CL}"
        if [[ -n $APT_CACHER_IP ]]; then
          APT_CACHER_IP="none"
        fi
      else
        exit_script
      fi
    fi
  fi

  if [[ -n "${MTU-}" ]]; then
    if [[ "$MTU" =~ ^-?[0-9]+$ ]]; then
      echo -e "${DEFAULT}${BOLD}${DGN}Interface MTU Size: ${BGN}$MTU${CL}"
      MTU=",mtu=$MTU"
    else
      msg_error "MTU must be an integer, was ${MTU}"
      exit
    fi
  else
    if MTU1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set Interface MTU Size (leave blank for default [The MTU of your selected vmbr, default is 1500])" 8 58 --title "MTU SIZE" 3>&1 1>&2 2>&3); then
      if [ -z "$MTU1" ]; then
        MTU1="Default"
        MTU=""
      else
        MTU=",mtu=$MTU1"
      fi
      echo -e "${DEFAULT}${BOLD}${DGN}Interface MTU Size: ${BGN}$MTU1${CL}"
    else
      exit_script
    fi
  fi

  if [[ "$IPV6_METHOD" == "static" ]]; then
    if [[ -n "$IPV6STATIC" ]]; then
      IP6=",ip6=${IPV6STATIC}"
      echo -e "${NETWORK}${BOLD}${DGN}IPv6 Address: ${BGN}${IPV6STATIC}${CL}"
    else
      msg_error "IPV6_METHOD is set to static but IPV6STATIC is empty"
      exit
    fi
  elif [[ "$IPV6_METHOD" == "auto" ]]; then
    IP6=",ip6=auto"
    echo -e "${NETWORK}${BOLD}${DGN}IPv6 Address: ${BGN}auto${CL}"
  else
    IP6=""
    echo -e "${NETWORK}${BOLD}${DGN}IPv6 Address: ${BGN}none${CL}"
  fi

  if [[ -n "${SD-}" ]]; then
    if [[ "$SD" == "none" ]]; then
      SD=""
      echo -e "${SEARCH}${BOLD}${DGN}DNS Search Domain: ${BGN}Host${CL}"
    else
      # Strip prefix if present for config file storage
      local SD_VALUE="$SD"
      [[ "$SD" =~ ^-searchdomain= ]] && SD_VALUE="${SD#-searchdomain=}"
      echo -e "${SEARCH}${BOLD}${DGN}DNS Search Domain: ${BGN}$SD_VALUE${CL}"
      SD="-searchdomain=$SD_VALUE"
    fi
  else
    if SD=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set a DNS Search Domain (leave blank for HOST)" 8 58 --title "DNS Search Domain" 3>&1 1>&2 2>&3); then
      if [ -z "$SD" ]; then
        SX=Host
        SD=""
      else
        SX=$SD
        SD="-searchdomain=$SD"
      fi
      echo -e "${SEARCH}${BOLD}${DGN}DNS Search Domain: ${BGN}$SX${CL}"
    else
      exit_script
    fi
  fi

  if [[ -n "${NS-}" ]]; then
    if [[ $NS == "none" ]]; then
      NS=""
      echo -e "${NETWORK}${BOLD}${DGN}DNS Server IP Address: ${BGN}Host${CL}"
    else
      # Strip prefix if present for config file storage
      local NS_VALUE="$NS"
      [[ "$NS" =~ ^-nameserver= ]] && NS_VALUE="${NS#-nameserver=}"
      if [[ "$NS_VALUE" =~ $ip_regex ]]; then
        echo -e "${NETWORK}${BOLD}${DGN}DNS Server IP Address: ${BGN}$NS_VALUE${CL}"
        NS="-nameserver=$NS_VALUE"
      else
        msg_error "Invalid IP Address format for DNS Server. Needs to be 0.0.0.0, was ${NS_VALUE}"
        exit
      fi
    fi
  else
    if NX=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set a DNS Server IP (leave blank for HOST)" 8 58 --title "DNS SERVER IP" 3>&1 1>&2 2>&3); then
      if [ -z "$NX" ]; then
        NX=Host
        NS=""
      else
        NS="-nameserver=$NX"
      fi
      echo -e "${NETWORK}${BOLD}${DGN}DNS Server IP Address: ${BGN}$NX${CL}"
    else
      exit_script
    fi
  fi

  if [[ -n "${MAC-}" ]]; then
    if [[ "$MAC" == "none" ]]; then
      MAC=""
      echo -e "${MACADDRESS}${BOLD}${DGN}MAC Address: ${BGN}Host${CL}"
    else
      # Strip prefix if present for config file storage
      local MAC_VALUE="$MAC"
      [[ "$MAC" =~ ^,hwaddr= ]] && MAC_VALUE="${MAC#,hwaddr=}"
      if [[ "$MAC_VALUE" =~ ^([A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2}$ ]]; then
        echo -e "${MACADDRESS}${BOLD}${DGN}MAC Address: ${BGN}$MAC_VALUE${CL}"
        MAC=",hwaddr=$MAC_VALUE"
      else
        msg_error "MAC Address must be in the format xx:xx:xx:xx:xx:xx, was ${MAC_VALUE}"
        exit
      fi
    fi
  else
    if MAC1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set a MAC Address(leave blank for generated MAC)" 8 58 --title "MAC ADDRESS" 3>&1 1>&2 2>&3); then
      if [ -z "$MAC1" ]; then
        MAC1="Default"
        MAC=""
      else
        MAC=",hwaddr=$MAC1"
        echo -e "${MACADDRESS}${BOLD}${DGN}MAC Address: ${BGN}$MAC1${CL}"
      fi
    else
      exit_script
    fi
  fi

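  # Illustrative MAC value (assumed example) matching the xx:xx:xx:xx:xx:xx check
  # above; it is stored as ",hwaddr=BC:24:11:2A:4F:01" for the container config:
  #   MAC=BC:24:11:2A:4F:01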
  if [[ -n "${VLAN-}" ]]; then
    if [[ "$VLAN" == "none" ]]; then
      VLAN=""
      echo -e "${VLANTAG}${BOLD}${DGN}Vlan: ${BGN}Host${CL}"
    else
      # Strip prefix if present for config file storage
      local VLAN_VALUE="$VLAN"
      [[ "$VLAN" =~ ^,tag= ]] && VLAN_VALUE="${VLAN#,tag=}"
      if [[ "$VLAN_VALUE" =~ ^-?[0-9]+$ ]]; then
        echo -e "${VLANTAG}${BOLD}${DGN}Vlan: ${BGN}$VLAN_VALUE${CL}"
        VLAN=",tag=$VLAN_VALUE"
      else
        msg_error "VLAN must be an integer, was ${VLAN_VALUE}"
        exit
      fi
    fi
  else
    if VLAN1=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set a Vlan(leave blank for no VLAN)" 8 58 --title "VLAN" 3>&1 1>&2 2>&3); then
      if [ -z "$VLAN1" ]; then
        VLAN1="Default"
        VLAN=""
      else
        VLAN=",tag=$VLAN1"
      fi
      echo -e "${VLANTAG}${BOLD}${DGN}Vlan: ${BGN}$VLAN1${CL}"
    else
      exit_script
    fi
  fi

  if [[ -n "${TAGS-}" ]]; then
    if [[ "$TAGS" == *"DEFAULT"* ]]; then
      TAGS="${TAGS//DEFAULT/}"
      TAGS="${TAGS//;/}"
      TAGS="$TAGS;${var_tags:-}"
      echo -e "${NETWORK}${BOLD}${DGN}Tags: ${BGN}$TAGS${CL}"
    fi
  else
    TAGS="community-scripts;"
    if ADV_TAGS=$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "Set Custom Tags?[If you remove all, there will be no tags!]" 8 58 "${TAGS}" --title "Advanced Tags" 3>&1 1>&2 2>&3); then
      if [ -n "${ADV_TAGS}" ]; then
        ADV_TAGS=$(echo "$ADV_TAGS" | tr -d '[:space:]')
        TAGS="${ADV_TAGS}"
      else
        TAGS=";"
      fi
      echo -e "${NETWORK}${BOLD}${DGN}Tags: ${BGN}$TAGS${CL}"
    else
      exit_script
    fi
  fi

  if [[ -n "${SSH-}" ]]; then
    if [[ "$SSH" == "yes" ]]; then
      echo -e "${ROOTSSH}${BOLD}${DGN}Root SSH Access: ${BGN}$SSH${CL}"
      if [[ ! -z "$SSH_AUTHORIZED_KEY" ]]; then
        echo -e "${ROOTSSH}${BOLD}${DGN}SSH Authorized Key: ${BGN}********************${CL}"
      else
        echo -e "${ROOTSSH}${BOLD}${DGN}SSH Authorized Key: ${BGN}None${CL}"
      fi
    elif [[ "$SSH" == "no" ]]; then
      echo -e "${ROOTSSH}${BOLD}${DGN}Root SSH Access: ${BGN}$SSH${CL}"
    else
      msg_error "SSH needs to be 'yes' or 'no', was ${SSH}"
      exit
    fi
  else
    SSH_AUTHORIZED_KEY="$(whiptail --backtitle "Proxmox VE Helper Scripts" --inputbox "SSH Authorized key for root (leave empty for none)" 8 58 --title "SSH Key" 3>&1 1>&2 2>&3)"
    if [[ -z "${SSH_AUTHORIZED_KEY}" ]]; then
      SSH_AUTHORIZED_KEY=""
    fi
    if [[ "$PW" == -password* || -n "$SSH_AUTHORIZED_KEY" ]]; then
      if (whiptail --backtitle "Proxmox VE Helper Scripts" --defaultno --title "SSH ACCESS" --yesno "Enable Root SSH Access?" 10 58); then
        SSH="yes"
      else
        SSH="no"
      fi
      echo -e "${ROOTSSH}${BOLD}${DGN}Root SSH Access: ${BGN}$SSH${CL}"
    else
      SSH="no"
      echo -e "${ROOTSSH}${BOLD}${DGN}Root SSH Access: ${BGN}$SSH${CL}"
    fi
  fi

  if [[ -n "$ENABLE_FUSE" ]]; then
    if [[ "$ENABLE_FUSE" == "yes" ]]; then
      echo -e "${FUSE}${BOLD}${DGN}Enable FUSE: ${BGN}Yes${CL}"
    elif [[ "$ENABLE_FUSE" == "no" ]]; then
      echo -e "${FUSE}${BOLD}${DGN}Enable FUSE: ${BGN}No${CL}"
    else
      msg_error "Enable FUSE needs to be 'yes' or 'no', was ${ENABLE_FUSE}"
      exit
    fi
  else
    if (whiptail --backtitle "Proxmox VE Helper Scripts" --defaultno --title "FUSE" --yesno "Enable FUSE?" 10 58); then
      ENABLE_FUSE="yes"
    else
      ENABLE_FUSE="no"
    fi
    echo -e "${FUSE}${BOLD}${DGN}Enable FUSE: ${BGN}$ENABLE_FUSE${CL}"
  fi

  if [[ -n "$ENABLE_TUN" ]]; then
    if [[ "$ENABLE_TUN" == "yes" ]]; then
      echo -e "${FUSE}${BOLD}${DGN}Enable TUN: ${BGN}Yes${CL}"
    elif [[ "$ENABLE_TUN" == "no" ]]; then
      echo -e "${FUSE}${BOLD}${DGN}Enable TUN: ${BGN}No${CL}"
    else
      msg_error "Enable TUN needs to be 'yes' or 'no', was ${ENABLE_TUN}"
      exit
    fi
  else
    if (whiptail --backtitle "Proxmox VE Helper Scripts" --defaultno --title "TUN" --yesno "Enable TUN?" 10 58); then
      ENABLE_TUN="yes"
    else
      ENABLE_TUN="no"
    fi
    echo -e "${FUSE}${BOLD}${DGN}Enable TUN: ${BGN}$ENABLE_TUN${CL}"
  fi

  if [[ -n "${VERBOSE-}" ]]; then
    if [[ "$VERBOSE" == "yes" ]]; then
      echo -e "${SEARCH}${BOLD}${DGN}Verbose Mode: ${BGN}$VERBOSE${CL}"
    elif [[ "$VERBOSE" == "no" ]]; then
      echo -e "${SEARCH}${BOLD}${DGN}Verbose Mode: ${BGN}No${CL}"
    else
      msg_error "Verbose Mode needs to be 'yes' or 'no', was ${VERBOSE}"
      exit
    fi
  else
    if (whiptail --backtitle "Proxmox VE Helper Scripts" --defaultno --title "VERBOSE MODE" --yesno "Enable Verbose Mode?" 10 58); then
      VERBOSE="yes"
    else
      VERBOSE="no"
    fi
    echo -e "${SEARCH}${BOLD}${DGN}Verbose Mode: ${BGN}$VERBOSE${CL}"
  fi

  if (whiptail --backtitle "Proxmox VE Helper Scripts" --title "ADVANCED SETTINGS WITH CONFIG FILE COMPLETE" --yesno "Ready to create ${APP} LXC?" 10 58); then
    echo -e "${CREATING}${BOLD}${RD}Creating a ${APP} LXC using the above settings${CL}"
  else
    clear
    header_info
    echo -e "${INFO}${HOLD} ${GN}Using Config File on node $PVEHOST_NAME${CL}"
    config_file
  fi
}
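# Illustrative /opt/community-scripts/<app>.conf sketch (all values assumed, not
# taken from the diff) using keys that config_file() above reads:
#   CT_ID=100-199
#   CT_TYPE=1
#   PW=none
#   HN=myapp
#   DISK_SIZE=4
#   CORE_COUNT=2
#   RAM_SIZE=1024
#   BRG=vmbr0
#   NET=dhcp
#   SD=none
#   NS=none
#   MAC=none
#   VLAN=none
#   TAGS=DEFAULT
#   SSH=no
#   ENABLE_FUSE=no
#   ENABLE_TUN=no
#   VERBOSE=no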
775	misc/core.func
@@ -1,13 +1,35 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2025 community-scripts ORG
# License: MIT | https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/LICENSE

# ------------------------------------------------------------------------------
# Loads core utility groups once (colors, formatting, icons, defaults).
# ------------------------------------------------------------------------------
# ==============================================================================
# CORE FUNCTIONS - LXC CONTAINER UTILITIES
# ==============================================================================
#
# This file provides core utility functions for LXC container management
# including colors, formatting, validation checks, message output, and
# execution helpers used throughout the Community-Scripts ecosystem.
#
# Usage:
#   source <(curl -fsSL https://git.community-scripts.org/.../core.func)
#   load_functions
#
# ==============================================================================

[[ -n "${_CORE_FUNC_LOADED:-}" ]] && return
_CORE_FUNC_LOADED=1

# ==============================================================================
# SECTION 1: INITIALIZATION & SETUP
# ==============================================================================

# ------------------------------------------------------------------------------
# load_functions()
#
# - Initializes all core utility groups (colors, formatting, icons, defaults)
# - Ensures functions are loaded only once via __FUNCTIONS_LOADED flag
# - Must be called at start of any script using these utilities
# ------------------------------------------------------------------------------
load_functions() {
  [[ -n "${__FUNCTIONS_LOADED:-}" ]] && return
  __FUNCTIONS_LOADED=1
@@ -16,58 +38,14 @@ load_functions() {
  icons
  default_vars
  set_std_mode
  # add more
}

# ============================================================================
# Error & Signal Handling – robust, universal, subshell-safe
# ============================================================================

_tool_error_hint() {
  local cmd="$1"
  local code="$2"
  case "$cmd" in
  curl)
    case "$code" in
    6) echo "Curl: Could not resolve host (DNS problem)" ;;
    7) echo "Curl: Failed to connect to host (connection refused)" ;;
    22) echo "Curl: HTTP error (404/403 etc)" ;;
    28) echo "Curl: Operation timeout" ;;
    *) echo "Curl: Unknown error ($code)" ;;
    esac
    ;;
  wget)
    echo "Wget failed – URL unreachable or permission denied"
    ;;
  systemctl)
    echo "Systemd unit failure – check service name and permissions"
    ;;
  jq)
    echo "jq parse error – malformed JSON or missing key"
    ;;
  mariadb | mysql)
    echo "MySQL/MariaDB command failed – check credentials or DB"
    ;;
  unzip)
    echo "unzip failed – corrupt file or missing permission"
    ;;
  tar)
    echo "tar failed – invalid format or missing binary"
    ;;
  node | npm | pnpm | yarn)
    echo "Node tool failed – check version compatibility or package.json"
    ;;
  *) echo "" ;;
  esac
}

catch_errors() {
  set -Eeuo pipefail
  trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
}

# ------------------------------------------------------------------------------
# Sets ANSI color codes used for styled terminal output.
# color()
#
# - Sets ANSI color codes for styled terminal output
# - Variables: YW (yellow), YWB (yellow bright), BL (blue), RD (red)
#   GN (green), DGN (dark green), BGN (background green), CL (clear)
# ------------------------------------------------------------------------------
color() {
  YW=$(echo "\033[33m")
@@ -80,7 +58,14 @@ color() {
  CL=$(echo "\033[m")
}

# Special for spinner and colorized output via printf
# ------------------------------------------------------------------------------
# color_spinner()
#
# - Sets ANSI color codes specifically for spinner animation
# - Variables: CS_YW (spinner yellow), CS_YWB (spinner yellow bright),
#   CS_CL (spinner clear)
# - Used by spinner() function to avoid color conflicts
# ------------------------------------------------------------------------------
color_spinner() {
  CS_YW=$'\033[33m'
  CS_YWB=$'\033[93m'
@@ -88,7 +73,12 @@ color_spinner() {
}

# ------------------------------------------------------------------------------
# Defines formatting helpers like tab, bold, and line reset sequences.
# formatting()
#
# - Defines formatting helpers for terminal output
# - BFR: Backspace and clear line sequence
# - BOLD: Bold text escape code
# - TAB/TAB3: Indentation spacing
# ------------------------------------------------------------------------------
formatting() {
  BFR="\\r\\033[K"
@@ -99,7 +89,11 @@
}

# ------------------------------------------------------------------------------
# Sets symbolic icons used throughout user feedback and prompts.
# icons()
#
# - Sets symbolic emoji icons used throughout user feedback
# - Provides consistent visual indicators for success, error, info, etc.
# - Icons: CM (checkmark), CROSS (error), INFO (info), HOURGLASS (wait), etc.
# ------------------------------------------------------------------------------
icons() {
  CM="${TAB}✔️${TAB}"
@@ -129,22 +123,31 @@ icons() {
  CREATING="${TAB}🚀${TAB}${CL}"
  ADVANCED="${TAB}🧩${TAB}${CL}"
  FUSE="${TAB}🗂️${TAB}${CL}"
  GPU="${TAB}🎮${TAB}${CL}"
  HOURGLASS="${TAB}⏳${TAB}"

}

# ------------------------------------------------------------------------------
# Sets default retry and wait variables used for system actions.
# default_vars()
#
# - Sets default retry and wait variables used for system actions
# - RETRY_NUM: Maximum number of retry attempts (default: 10)
# - RETRY_EVERY: Seconds to wait between retries (default: 3)
# - i: Counter variable initialized to RETRY_NUM
# ------------------------------------------------------------------------------
default_vars() {
  RETRY_NUM=10
  RETRY_EVERY=3
  i=$RETRY_NUM
  #[[ "${VAR_OS:-}" == "unknown" ]]
}

# ------------------------------------------------------------------------------
# Sets default verbose mode for script and os execution.
# set_std_mode()
#
# - Sets default verbose mode for script and OS execution
# - If VERBOSE=yes: STD="" (show all output)
# - If VERBOSE=no: STD="silent" (suppress output via silent() wrapper)
# - If DEV_MODE_TRACE=true: Enables bash tracing (set -x)
# ------------------------------------------------------------------------------
set_std_mode() {
  if [ "${VERBOSE:-no}" = "yes" ]; then
@@ -152,138 +155,338 @@ set_std_mode() {
  else
    STD="silent"
  fi
}

# Silent execution function
silent() {
  "$@" >/dev/null 2>&1
}

# Function to download & save header files
get_header() {
  local app_name=$(echo "${APP,,}" | tr -d ' ')
  local app_type=${APP_TYPE:-ct}
  local header_url="https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/${app_type}/headers/${app_name}"
  local local_header_path="/usr/local/community-scripts/headers/${app_type}/${app_name}"

  mkdir -p "$(dirname "$local_header_path")"

  if [ ! -s "$local_header_path" ]; then
    if ! curl -fsSL "$header_url" -o "$local_header_path"; then
      return 1
    fi
  fi

  cat "$local_header_path" 2>/dev/null || true
}

header_info() {
  local app_name=$(echo "${APP,,}" | tr -d ' ')
  local header_content

  header_content=$(get_header "$app_name") || header_content=""

  clear
  local term_width
  term_width=$(tput cols 2>/dev/null || echo 120)

  if [ -n "$header_content" ]; then
    echo "$header_content"
  # Enable bash tracing if trace mode active
  if [[ "${DEV_MODE_TRACE:-false}" == "true" ]]; then
    set -x
    export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
  fi
}

ensure_tput() {
  if ! command -v tput >/dev/null 2>&1; then
    if grep -qi 'alpine' /etc/os-release; then
      apk add --no-cache ncurses >/dev/null 2>&1
    elif command -v apt-get >/dev/null 2>&1; then
      apt-get update -qq >/dev/null
      apt-get install -y -qq ncurses-bin >/dev/null 2>&1
# ------------------------------------------------------------------------------
# parse_dev_mode()
#
# - Parses comma-separated dev_mode variable (e.g., "motd,keep,trace")
# - Sets global flags for each mode:
#   * DEV_MODE_MOTD: Setup SSH/MOTD before installation
#   * DEV_MODE_KEEP: Never delete container on failure
#   * DEV_MODE_TRACE: Enable bash set -x tracing
#   * DEV_MODE_PAUSE: Pause after each msg_info step
#   * DEV_MODE_BREAKPOINT: Open shell on error instead of cleanup
#   * DEV_MODE_LOGS: Persist all logs to /var/log/community-scripts/
#   * DEV_MODE_DRYRUN: Show commands without executing
# - Call this early in script execution
# ------------------------------------------------------------------------------
parse_dev_mode() {
  local mode
  # Initialize all flags to false
  export DEV_MODE_MOTD=false
  export DEV_MODE_KEEP=false
  export DEV_MODE_TRACE=false
  export DEV_MODE_PAUSE=false
  export DEV_MODE_BREAKPOINT=false
  export DEV_MODE_LOGS=false
  export DEV_MODE_DRYRUN=false

  # Parse comma-separated modes
  if [[ -n "${dev_mode:-}" ]]; then
    IFS=',' read -ra MODES <<<"$dev_mode"
    for mode in "${MODES[@]}"; do
      mode="$(echo "$mode" | xargs)" # Trim whitespace
      case "$mode" in
      motd) export DEV_MODE_MOTD=true ;;
      keep) export DEV_MODE_KEEP=true ;;
      trace) export DEV_MODE_TRACE=true ;;
      pause) export DEV_MODE_PAUSE=true ;;
      breakpoint) export DEV_MODE_BREAKPOINT=true ;;
      logs) export DEV_MODE_LOGS=true ;;
      dryrun) export DEV_MODE_DRYRUN=true ;;
      *)
        if declare -f msg_warn >/dev/null 2>&1; then
          msg_warn "Unknown dev_mode: '$mode' (ignored)"
        else
          echo "[WARN] Unknown dev_mode: '$mode' (ignored)" >&2
        fi
        ;;
      esac
    done

    # Show active dev modes
    local active_modes=()
    [[ $DEV_MODE_MOTD == true ]] && active_modes+=("motd")
    [[ $DEV_MODE_KEEP == true ]] && active_modes+=("keep")
    [[ $DEV_MODE_TRACE == true ]] && active_modes+=("trace")
    [[ $DEV_MODE_PAUSE == true ]] && active_modes+=("pause")
    [[ $DEV_MODE_BREAKPOINT == true ]] && active_modes+=("breakpoint")
    [[ $DEV_MODE_LOGS == true ]] && active_modes+=("logs")
    [[ $DEV_MODE_DRYRUN == true ]] && active_modes+=("dryrun")

    if [[ ${#active_modes[@]} -gt 0 ]]; then
      if declare -f msg_custom >/dev/null 2>&1; then
        msg_custom "🔧" "${YWB}" "Dev modes active: ${active_modes[*]}"
      else
        echo "[DEV] Active modes: ${active_modes[*]}" >&2
      fi
    fi
  fi
}

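# Illustrative invocation sketch (assumed; the exact entry point may differ):
#   dev_mode="trace,keep" bash ct/myapp.sh
# would make parse_dev_mode() set DEV_MODE_TRACE=true and DEV_MODE_KEEP=true.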
is_alpine() {
  local os_id="${var_os:-${PCT_OSTYPE:-}}"
# ==============================================================================
# SECTION 2: VALIDATION CHECKS
# ==============================================================================

  if [[ -z "$os_id" && -f /etc/os-release ]]; then
    os_id="$(
      . /etc/os-release 2>/dev/null
      echo "${ID:-}"
    )"
# ------------------------------------------------------------------------------
# shell_check()
#
# - Verifies that the script is running under Bash shell
# - Exits with error message if different shell is detected
# - Required because scripts use Bash-specific features
# ------------------------------------------------------------------------------
shell_check() {
  if [[ "$(ps -p $$ -o comm=)" != "bash" ]]; then
    clear
    msg_error "Your default shell is currently not set to Bash. To use these scripts, please switch to the Bash shell."
    echo -e "\nExiting..."
    sleep 2
    exit
  fi

  [[ "$os_id" == "alpine" ]]
}

is_verbose_mode() {
  local verbose="${VERBOSE:-${var_verbose:-no}}"
  local tty_status
  if [[ -t 2 ]]; then
    tty_status="interactive"
  else
    tty_status="not-a-tty"
  fi
  [[ "$verbose" != "no" || ! -t 2 ]]
}

# ------------------------------------------------------------------------------
# Handles specific curl error codes and displays descriptive messages.
# root_check()
#
# - Verifies script is running with root privileges
# - Detects if executed via sudo (which can cause issues)
# - Exits with error if not running as root directly
# ------------------------------------------------------------------------------
__curl_err_handler() {
  local exit_code="$1"
  local target="$2"
  local curl_msg="$3"
root_check() {
  if [[ "$(id -u)" -ne 0 || $(ps -o comm= -p $PPID) == "sudo" ]]; then
    clear
    msg_error "Please run this script as root."
    echo -e "\nExiting..."
    sleep 2
    exit
  fi
}

  case $exit_code in
  1) msg_error "Unsupported protocol: $target" ;;
  2) msg_error "Curl init failed: $target" ;;
  3) msg_error "Malformed URL: $target" ;;
  5) msg_error "Proxy resolution failed: $target" ;;
  6) msg_error "Host resolution failed: $target" ;;
  7) msg_error "Connection failed: $target" ;;
  9) msg_error "Access denied: $target" ;;
  18) msg_error "Partial file transfer: $target" ;;
  22) msg_error "HTTP error (e.g. 400/404): $target" ;;
  23) msg_error "Write error on local system: $target" ;;
  26) msg_error "Read error from local file: $target" ;;
  28) msg_error "Timeout: $target" ;;
  35) msg_error "SSL connect error: $target" ;;
  47) msg_error "Too many redirects: $target" ;;
  51) msg_error "SSL cert verify failed: $target" ;;
  52) msg_error "Empty server response: $target" ;;
  55) msg_error "Send error: $target" ;;
  56) msg_error "Receive error: $target" ;;
  60) msg_error "SSL CA not trusted: $target" ;;
  67) msg_error "Login denied by server: $target" ;;
  78) msg_error "Remote file not found (404): $target" ;;
  *) msg_error "Curl failed with code $exit_code: $target" ;;
  esac
# ------------------------------------------------------------------------------
# pve_check()
#
# - Validates Proxmox VE version compatibility
# - Supported: PVE 8.0-8.9 and PVE 9.0-9.1
# - Exits with error message if unsupported version detected
# ------------------------------------------------------------------------------
pve_check() {
  local PVE_VER
  PVE_VER="$(pveversion | awk -F'/' '{print $2}' | awk -F'-' '{print $1}')"

  [[ -n "$curl_msg" ]] && printf "%s\n" "$curl_msg" >&2
  # Check for Proxmox VE 8.x: allow 8.0–8.9
  if [[ "$PVE_VER" =~ ^8\.([0-9]+) ]]; then
    local MINOR="${BASH_REMATCH[1]}"
    if ((MINOR < 0 || MINOR > 9)); then
      msg_error "This version of Proxmox VE is not supported."
      msg_error "Supported: Proxmox VE version 8.0 – 8.9"
      exit 1
    fi
    return 0
  fi

  # Check for Proxmox VE 9.x: allow 9.0–9.1
  if [[ "$PVE_VER" =~ ^9\.([0-9]+) ]]; then
    local MINOR="${BASH_REMATCH[1]}"
    if ((MINOR < 0 || MINOR > 1)); then
      msg_error "This version of Proxmox VE is not yet supported."
      msg_error "Supported: Proxmox VE version 9.0 – 9.1"
      exit 1
    fi
    return 0
  fi

  # All other unsupported versions
  msg_error "This version of Proxmox VE is not supported."
  msg_error "Supported versions: Proxmox VE 8.0 – 8.9 or 9.0 – 9.1"
  exit 1
}

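# Illustrative parse (assumed sample output): if pveversion prints
#   pve-manager/8.4.1/abcdef0123456789 (running kernel: ...)
# the awk pipeline above yields PVE_VER="8.4.1", which matches the 8.x branch.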
fatal() {
  msg_error "$1"
  kill -INT $$
# ------------------------------------------------------------------------------
# arch_check()
#
# - Validates system architecture is amd64/x86_64
# - Exits with error message for unsupported architectures (e.g., ARM/PiMox)
# - Provides link to ARM64-compatible scripts
# ------------------------------------------------------------------------------
arch_check() {
  if [ "$(dpkg --print-architecture)" != "amd64" ]; then
    echo -e "\n ${INFO}${YWB}This script will not work with PiMox! \n"
    echo -e "\n ${YWB}Visit https://github.com/asylumexp/Proxmox for ARM64 support. \n"
    echo -e "Exiting..."
    sleep 2
    exit
  fi
}

# ------------------------------------------------------------------------------
# ssh_check()
#
# - Detects if script is running over SSH connection
# - Warns user for external SSH connections (recommends Proxmox shell)
# - Skips warning for local/same-subnet connections
# - Does not abort execution, only warns
# ------------------------------------------------------------------------------
ssh_check() {
  if [ -n "$SSH_CLIENT" ]; then
    local client_ip=$(awk '{print $1}' <<<"$SSH_CLIENT")
    local host_ip=$(hostname -I | awk '{print $1}')

    # Check if connection is local (Proxmox WebUI or same machine)
    # - localhost (127.0.0.1, ::1)
    # - same IP as host
    # - local network range (10.x, 172.16-31.x, 192.168.x)
    if [[ "$client_ip" == "127.0.0.1" || "$client_ip" == "::1" || "$client_ip" == "$host_ip" ]]; then
      return
    fi

    # Check if client is in same local network (optional, safer approach)
    local host_subnet=$(echo "$host_ip" | cut -d. -f1-3)
    local client_subnet=$(echo "$client_ip" | cut -d. -f1-3)
    if [[ "$host_subnet" == "$client_subnet" ]]; then
      return
    fi

    # Only warn for truly external connections
    msg_warn "Running via external SSH (client: $client_ip)."
    msg_warn "For better stability, consider using the Proxmox Shell (Console) instead."
  fi
}

# ==============================================================================
# SECTION 3: EXECUTION HELPERS
# ==============================================================================

# ------------------------------------------------------------------------------
# get_active_logfile()
#
# - Returns the appropriate log file based on execution context
# - BUILD_LOG: Host operations (container creation)
# - INSTALL_LOG: Container operations (application installation)
# - Fallback to BUILD_LOG if neither is set
# ------------------------------------------------------------------------------
get_active_logfile() {
  if [[ -n "${INSTALL_LOG:-}" ]]; then
    echo "$INSTALL_LOG"
  elif [[ -n "${BUILD_LOG:-}" ]]; then
    echo "$BUILD_LOG"
  else
    # Fallback for legacy scripts
    echo "/tmp/build-$(date +%Y%m%d_%H%M%S).log"
  fi
}

# Legacy compatibility: SILENT_LOGFILE points to active log
SILENT_LOGFILE="$(get_active_logfile)"

# ------------------------------------------------------------------------------
# silent()
#
# - Executes command with output redirected to active log file
# - On error: displays last 10 lines of log and exits with original exit code
# - Temporarily disables error trap to capture exit code correctly
# - Sources explain_exit_code() for detailed error messages
# ------------------------------------------------------------------------------
silent() {
  local cmd="$*"
  local caller_line="${BASH_LINENO[0]:-unknown}"
  local logfile="$(get_active_logfile)"

  # Dryrun mode: Show command without executing
  if [[ "${DEV_MODE_DRYRUN:-false}" == "true" ]]; then
    if declare -f msg_custom >/dev/null 2>&1; then
      msg_custom "🔍" "${BL}" "[DRYRUN] $cmd"
    else
      echo "[DRYRUN] $cmd" >&2
    fi
    return 0
  fi

  set +Eeuo pipefail
  trap - ERR

  "$@" >>"$logfile" 2>&1
  local rc=$?

  set -Eeuo pipefail
  trap 'error_handler' ERR

  if [[ $rc -ne 0 ]]; then
    # Source explain_exit_code if needed
    if ! declare -f explain_exit_code >/dev/null 2>&1; then
      source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
    fi

    local explanation
    explanation="$(explain_exit_code "$rc")"

    printf "\e[?25h"
    msg_error "in line ${caller_line}: exit code ${rc} (${explanation})"
    msg_custom "→" "${YWB}" "${cmd}"

    if [[ -s "$logfile" ]]; then
      local log_lines=$(wc -l <"$logfile")
      echo "--- Last 10 lines of silent log ---"
      tail -n 10 "$logfile"
      echo "-----------------------------------"

      # Show how to view full log if there are more lines
      if [[ $log_lines -gt 10 ]]; then
        msg_custom "📋" "${YW}" "View full log (${log_lines} lines): ${logfile}"
      fi
    fi

    exit "$rc"
  fi
}

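# Illustrative usage sketch (assumed command): with VERBOSE=no, set_std_mode()
# sets STD="silent", so installer steps are typically written as
#   $STD apt-get install -y jq
# which routes all output into the active log file handled above.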
# ------------------------------------------------------------------------------
|
||||
# spinner()
|
||||
#
|
||||
# - Displays animated spinner with rotating characters (⠋ ⠙ ⠹ ⠸ ⠼ ⠴ ⠦ ⠧ ⠇ ⠏)
|
||||
# - Shows SPINNER_MSG alongside animation
|
||||
# - Runs in infinite loop until killed by stop_spinner()
|
||||
# - Uses color_spinner() colors for output
|
||||
# ------------------------------------------------------------------------------
|
||||
spinner() {
|
||||
local chars=(⠋ ⠙ ⠹ ⠸ ⠼ ⠴ ⠦ ⠧ ⠇ ⠏)
|
||||
local msg="${SPINNER_MSG:-Processing...}"
|
||||
local i=0
|
||||
while true; do
|
||||
local index=$((i++ % ${#chars[@]}))
|
||||
printf "\r\033[2K%s %b" "${CS_YWB}${chars[$index]}${CS_CL}" "${CS_YWB}${SPINNER_MSG:-}${CS_CL}"
|
||||
printf "\r\033[2K%s %b" "${CS_YWB}${chars[$index]}${CS_CL}" "${CS_YWB}${msg}${CS_CL}"
|
||||
sleep 0.1
|
||||
done
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# clear_line()
|
||||
#
|
||||
# - Clears current terminal line using tput or ANSI escape codes
|
||||
# - Moves cursor to beginning of line (carriage return)
|
||||
# - Erases from cursor to end of line
|
||||
# - Fallback to ANSI codes if tput not available
|
||||
# ------------------------------------------------------------------------------
|
||||
clear_line() {
|
||||
tput cr 2>/dev/null || echo -en "\r"
|
||||
tput el 2>/dev/null || echo -en "\033[K"
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# stop_spinner()
|
||||
#
|
||||
# - Stops running spinner process by PID
|
||||
# - Reads PID from SPINNER_PID variable or /tmp/.spinner.pid file
|
||||
# - Attempts graceful kill, then forced kill if needed
|
||||
# - Cleans up temp file and resets terminal state
|
||||
# - Unsets SPINNER_PID and SPINNER_MSG variables
|
||||
# ------------------------------------------------------------------------------
|
||||
stop_spinner() {
|
||||
local pid="${SPINNER_PID:-}"
|
||||
[[ -z "$pid" && -f /tmp/.spinner.pid ]] && pid=$(</tmp/.spinner.pid)
|
||||
@ -301,6 +504,19 @@ stop_spinner() {
|
||||
stty sane 2>/dev/null || true
|
||||
}
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 4: MESSAGE OUTPUT
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_info()
|
||||
#
|
||||
# - Displays informational message with spinner animation
|
||||
# - Shows each unique message only once (tracked via MSG_INFO_SHOWN)
|
||||
# - In verbose/Alpine mode: shows hourglass icon instead of spinner
|
||||
# - Stops any existing spinner before starting new one
|
||||
# - Backgrounds spinner process and stores PID for later cleanup
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_info() {
|
||||
local msg="$1"
|
||||
[[ -z "$msg" ]] && return
|
||||
@ -317,6 +533,12 @@ msg_info() {
|
||||
if is_verbose_mode || is_alpine; then
|
||||
local HOURGLASS="${TAB}⏳${TAB}"
|
||||
printf "\r\e[2K%s %b" "$HOURGLASS" "${YW}${msg}${CL}" >&2
|
||||
|
||||
# Pause mode: Wait for Enter after each step
|
||||
if [[ "${DEV_MODE_PAUSE:-false}" == "true" ]]; then
|
||||
echo -en "\n${YWB}[PAUSE]${CL} Press Enter to continue..." >&2
|
||||
read -r
|
||||
fi
|
||||
return
|
||||
fi
|
||||
|
||||
@ -325,29 +547,68 @@ msg_info() {
|
||||
SPINNER_PID=$!
|
||||
echo "$SPINNER_PID" >/tmp/.spinner.pid
|
||||
disown "$SPINNER_PID" 2>/dev/null || true
|
||||
|
||||
# Pause mode: Stop spinner and wait
|
||||
if [[ "${DEV_MODE_PAUSE:-false}" == "true" ]]; then
|
||||
stop_spinner
|
||||
echo -en "\n${YWB}[PAUSE]${CL} Press Enter to continue..." >&2
|
||||
read -r
|
||||
fi
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_ok()
|
||||
#
|
||||
# - Displays success message with checkmark icon
|
||||
# - Stops spinner and clears line before output
|
||||
# - Removes message from MSG_INFO_SHOWN to allow re-display
|
||||
# - Uses green color for success indication
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_ok() {
|
||||
local msg="$1"
|
||||
[[ -z "$msg" ]] && return
|
||||
stop_spinner
|
||||
clear_line
|
||||
printf "%s %b\n" "$CM" "${GN}${msg}${CL}" >&2
|
||||
echo -e "$CM ${GN}${msg}${CL}"
|
||||
unset MSG_INFO_SHOWN["$msg"]
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_error()
|
||||
#
|
||||
# - Displays error message with cross/X icon
|
||||
# - Stops spinner before output
|
||||
# - Uses red color for error indication
|
||||
# - Outputs to stderr
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_error() {
|
||||
stop_spinner
|
||||
local msg="$1"
|
||||
echo -e "${BFR:-} ${CROSS:-✖️} ${RD}${msg}${CL}"
|
||||
echo -e "${BFR:-}${CROSS:-✖️} ${RD}${msg}${CL}" >&2
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_warn()
|
||||
#
|
||||
# - Displays warning message with info/lightbulb icon
|
||||
# - Stops spinner before output
|
||||
# - Uses bright yellow color for warning indication
|
||||
# - Outputs to stderr
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_warn() {
|
||||
stop_spinner
|
||||
local msg="$1"
|
||||
echo -e "${BFR:-} ${INFO:-ℹ️} ${YWB}${msg}${CL}"
|
||||
echo -e "${BFR:-}${INFO:-ℹ️} ${YWB}${msg}${CL}" >&2
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_custom()
|
||||
#
|
||||
# - Displays custom message with user-defined symbol and color
|
||||
# - Arguments: symbol, color code, message text
|
||||
# - Stops spinner before output
|
||||
# - Useful for specialized status messages
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_custom() {
|
||||
local symbol="${1:-"[*]"}"
|
||||
local color="${2:-"\e[36m"}"
|
||||
@ -357,17 +618,181 @@ msg_custom() {
|
||||
echo -e "${BFR:-} ${symbol} ${color}${msg}${CL:-\e[0m}"
|
||||
}
|
||||
|
||||
run_container_safe() {
|
||||
local ct="$1"
|
||||
shift
|
||||
local cmd="$*"
|
||||
|
||||
lxc-attach -n "$ct" -- bash -euo pipefail -c "
|
||||
trap 'echo Aborted in container; exit 130' SIGINT SIGTERM
|
||||
$cmd
|
||||
" || __handle_general_error "lxc-attach to CT $ct"
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_debug()
|
||||
#
|
||||
# - Displays debug message with timestamp when var_full_verbose=1
|
||||
# - Automatically enables var_verbose if not already set
|
||||
# - Shows date/time prefix for log correlation
|
||||
# - Uses bright yellow color for debug output
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_debug() {
|
||||
if [[ "${var_full_verbose:-0}" == "1" ]]; then
|
||||
[[ "${var_verbose:-0}" != "1" ]] && var_verbose=1
|
||||
echo -e "${YWB}[$(date '+%F %T')] [DEBUG]${CL} $*"
|
||||
fi
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# msg_dev()
|
||||
#
|
||||
# - Display development mode messages with 🔧 icon
|
||||
# - Only shown when dev_mode is active
|
||||
# - Useful for debugging and development-specific output
|
||||
# - Format: [DEV] message with distinct formatting
|
||||
# - Usage: msg_dev "Container ready for debugging"
|
||||
# ------------------------------------------------------------------------------
|
||||
msg_dev() {
|
||||
if [[ -n "${dev_mode:-}" ]]; then
|
||||
echo -e "${SEARCH}${BOLD}${DGN}🔧 [DEV]${CL} $*"
|
||||
fi
|
||||
}
|
||||
#
|
||||
# - Displays error message and immediately terminates script
|
||||
# - Sends SIGINT to current process to trigger error handler
|
||||
# - Use for unrecoverable errors that require immediate exit
|
||||
# ------------------------------------------------------------------------------
|
||||
fatal() {
|
||||
msg_error "$1"
|
||||
kill -INT $$
|
||||
}
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 5: UTILITY FUNCTIONS
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# exit_script()
|
||||
#
|
||||
# - Called when user cancels an action
|
||||
# - Clears screen and displays exit message
|
||||
# - Exits with default exit code
|
||||
# ------------------------------------------------------------------------------
|
||||
exit_script() {
|
||||
clear
|
||||
echo -e "\n${CROSS}${RD}User exited script${CL}\n"
|
||||
exit
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# get_header()
|
||||
#
|
||||
# - Downloads and caches application header ASCII art
|
||||
# - Falls back to local cache if already downloaded
|
||||
# - Determines app type (ct/vm) from APP_TYPE variable
|
||||
# - Returns header content or empty string on failure
|
||||
# ------------------------------------------------------------------------------
|
||||
get_header() {
|
||||
local app_name=$(echo "${APP,,}" | tr -d ' ')
|
||||
local app_type=${APP_TYPE:-ct} # Default to 'ct' if not set
|
||||
local header_url="https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/${app_type}/headers/${app_name}"
|
||||
local local_header_path="/usr/local/community-scripts/headers/${app_type}/${app_name}"
|
||||
|
||||
mkdir -p "$(dirname "$local_header_path")"
|
||||
|
||||
if [ ! -s "$local_header_path" ]; then
|
||||
if ! curl -fsSL "$header_url" -o "$local_header_path"; then
|
||||
return 1
|
||||
fi
|
||||
fi
|
||||
|
||||
cat "$local_header_path" 2>/dev/null || true
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# header_info()
|
||||
#
|
||||
# - Displays application header ASCII art at top of screen
|
||||
# - Clears screen before displaying header
|
||||
# - Detects terminal width for formatting
|
||||
# - Returns silently if header not available
|
||||
# ------------------------------------------------------------------------------
|
||||
header_info() {
|
||||
local app_name=$(echo "${APP,,}" | tr -d ' ')
|
||||
local header_content
|
||||
|
||||
header_content=$(get_header "$app_name") || header_content=""
|
||||
|
||||
clear
|
||||
local term_width
|
||||
term_width=$(tput cols 2>/dev/null || echo 120)
|
||||
|
||||
if [ -n "$header_content" ]; then
|
||||
echo "$header_content"
|
||||
fi
|
||||
}
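
# ------------------------------------------------------------------------------
# Illustrative sketch (not part of the upstream file): the variables a caller
# is expected to set before header_info() runs. "MyApp" is a placeholder; the
# derived URL and cache path follow the lowercase/strip-spaces logic in
# get_header() above. Defined but never called here.
# ------------------------------------------------------------------------------
__example_header_info() {
  APP="MyApp"        # placeholder application name
  APP_TYPE="ct"      # "ct" or "vm"; get_header() defaults to "ct"
  # get_header() would fetch .../ct/headers/myapp and cache it under
  # /usr/local/community-scripts/headers/ct/myapp before header_info() prints it.
  header_info
}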
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# ensure_tput()
|
||||
#
|
||||
# - Ensures tput command is available for terminal control
|
||||
# - Installs ncurses-bin on Debian/Ubuntu or ncurses on Alpine
|
||||
# - Required for clear_line() and terminal width detection
|
||||
# ------------------------------------------------------------------------------
|
||||
ensure_tput() {
|
||||
if ! command -v tput >/dev/null 2>&1; then
|
||||
if grep -qi 'alpine' /etc/os-release; then
|
||||
apk add --no-cache ncurses >/dev/null 2>&1
|
||||
elif command -v apt-get >/dev/null 2>&1; then
|
||||
apt-get update -qq >/dev/null
|
||||
apt-get install -y -qq ncurses-bin >/dev/null 2>&1
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# is_alpine()
|
||||
#
|
||||
# - Detects if running on Alpine Linux
|
||||
# - Checks var_os, PCT_OSTYPE, or /etc/os-release
|
||||
# - Returns 0 if Alpine, 1 otherwise
|
||||
# - Used to adjust behavior for Alpine-specific commands
|
||||
# ------------------------------------------------------------------------------
|
||||
is_alpine() {
|
||||
local os_id="${var_os:-${PCT_OSTYPE:-}}"
|
||||
|
||||
if [[ -z "$os_id" && -f /etc/os-release ]]; then
|
||||
os_id="$(
|
||||
. /etc/os-release 2>/dev/null
|
||||
echo "${ID:-}"
|
||||
)"
|
||||
fi
|
||||
|
||||
[[ "$os_id" == "alpine" ]]
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
# is_verbose_mode()
#
# - Determines if script should run in verbose mode
# - Checks VERBOSE and var_verbose variables
# - Also returns true if not running in TTY (pipe/redirect scenario)
# - Used by msg_info() to decide between spinner and static output
# ------------------------------------------------------------------------------
is_verbose_mode() {
  local verbose="${VERBOSE:-${var_verbose:-no}}"
  local tty_status
  if [[ -t 2 ]]; then
    tty_status="interactive"
  else
    tty_status="not-a-tty"
  fi
  [[ "$verbose" != "no" || ! -t 2 ]]
}
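
# ------------------------------------------------------------------------------
# Illustrative sketch (not part of the upstream file): how a message helper can
# branch on is_verbose_mode(). Only is_verbose_mode() itself is defined above;
# the spinner line is a placeholder comment. Defined but never called here.
# ------------------------------------------------------------------------------
__example_verbose_branch() {
  local label="Updating Container OS"
  if is_verbose_mode; then
    echo "$label"            # plain, log-friendly line: verbose run or output piped to a file
  else
    echo -n "$label ..."     # placeholder for where msg_info() would start its spinner
  fi
}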
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 6: CLEANUP & MAINTENANCE
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# cleanup_lxc()
|
||||
#
|
||||
# - Comprehensive cleanup of package managers, caches, and logs
|
||||
# - Supports Alpine (apk), Debian/Ubuntu (apt), and language package managers
|
||||
# - Cleans: Python (pip/uv), Node.js (npm/yarn/pnpm), Go, Rust, Ruby, PHP
|
||||
# - Truncates log files and vacuums systemd journal
|
||||
# - Run at end of container creation to minimize disk usage
|
||||
# ------------------------------------------------------------------------------
|
||||
cleanup_lxc() {
|
||||
msg_info "Cleaning up"
|
||||
|
||||
@ -411,6 +836,16 @@ cleanup_lxc() {
|
||||
msg_ok "Cleaned"
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# check_or_create_swap()
|
||||
#
|
||||
# - Checks if swap is active on system
|
||||
# - Offers to create swap file if none exists
|
||||
# - Prompts user for swap size in MB
|
||||
# - Creates /swapfile with specified size
|
||||
# - Activates swap immediately
|
||||
# - Returns 0 if swap active or successfully created, 1 if declined/failed
|
||||
# ------------------------------------------------------------------------------
|
||||
check_or_create_swap() {
|
||||
msg_info "Checking for active swap"
|
||||
|
||||
@ -449,4 +884,8 @@ check_or_create_swap() {
|
||||
fi
|
||||
}
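
# ------------------------------------------------------------------------------
# Rough sketch (not part of the upstream file) of the behaviour described in
# the check_or_create_swap() notes above; the actual body is elided by the
# diff. The default size and the fallocate/dd fallback are assumptions, not
# the upstream implementation. Defined but never called here.
# ------------------------------------------------------------------------------
__example_create_swapfile() {
  local size_mb="${1:-2048}"                       # assumed result of the size prompt, in MB
  fallocate -l "${size_mb}M" /swapfile 2>/dev/null \
    || dd if=/dev/zero of=/swapfile bs=1M count="$size_mb"
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile                                 # activate immediately, as the notes state
}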
|
||||
|
||||
# ==============================================================================
|
||||
# SIGNAL TRAPS
|
||||
# ==============================================================================
|
||||
|
||||
trap 'stop_spinner' EXIT INT TERM
|
||||
|
||||
@ -1,385 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright (c) 2021-2025 tteck
|
||||
# Author: tteck (tteckster)
|
||||
# Co-Author: MickLesk
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
|
||||
# This sets verbose mode if the global variable is set to "yes"
|
||||
# if [ "$VERBOSE" == "yes" ]; then set -x; fi
|
||||
|
||||
if command -v curl >/dev/null 2>&1; then
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
|
||||
load_functions
|
||||
#echo "(create-lxc.sh) Loaded core.func via curl"
|
||||
elif command -v wget >/dev/null 2>&1; then
|
||||
source <(wget -qO- https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
|
||||
load_functions
|
||||
#echo "(create-lxc.sh) Loaded core.func via wget"
|
||||
fi
|
||||
|
||||
# This sets error handling options and defines the error_handler function to handle errors
|
||||
set -Eeuo pipefail
|
||||
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
|
||||
trap on_exit EXIT
|
||||
trap on_interrupt INT
|
||||
trap on_terminate TERM
|
||||
|
||||
function on_exit() {
|
||||
local exit_code="$?"
|
||||
[[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
|
||||
exit "$exit_code"
|
||||
}
|
||||
|
||||
function error_handler() {
|
||||
local exit_code="$?"
|
||||
local line_number="$1"
|
||||
local command="$2"
|
||||
printf "\e[?25h"
|
||||
echo -e "\n${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}\n"
|
||||
exit "$exit_code"
|
||||
}
|
||||
|
||||
function on_interrupt() {
|
||||
echo -e "\n${RD}Interrupted by user (SIGINT)${CL}"
|
||||
exit 130
|
||||
}
|
||||
|
||||
function on_terminate() {
|
||||
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}"
|
||||
exit 143
|
||||
}
|
||||
|
||||
function exit_script() {
|
||||
clear
|
||||
printf "\e[?25h"
|
||||
echo -e "\n${CROSS}${RD}User exited script${CL}\n"
|
||||
kill 0
|
||||
exit 1
|
||||
}
|
||||
|
||||
function check_storage_support() {
  local CONTENT="$1"
  local -a VALID_STORAGES=()
  while IFS= read -r line; do
    local STORAGE_NAME
    STORAGE_NAME=$(awk '{print $1}' <<<"$line")
    [[ -z "$STORAGE_NAME" ]] && continue
    VALID_STORAGES+=("$STORAGE_NAME")
  done < <(pvesm status -content "$CONTENT" 2>/dev/null | awk 'NR>1')

  [[ ${#VALID_STORAGES[@]} -gt 0 ]]
}
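
# ------------------------------------------------------------------------------
# Illustrative note (not part of the upstream file): check_storage_support()
# only needs the first column of `pvesm status -content <type>`; NR>1 skips the
# header row. The sample line below is made up.
# ------------------------------------------------------------------------------
# Name             Type     Status           Total            Used       Available        %
# local-lvm        lvmthin  active       100663296        10485760        90177536   10.42%
#
# Usage: check_storage_support "rootdir" || { msg_error "No container storage"; exit 1; }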
|
||||
|
||||
# This function selects a storage pool for a given content type (e.g., rootdir, vztmpl).
|
||||
function select_storage() {
|
||||
local CLASS=$1 CONTENT CONTENT_LABEL
|
||||
|
||||
case $CLASS in
|
||||
container)
|
||||
CONTENT='rootdir'
|
||||
CONTENT_LABEL='Container'
|
||||
;;
|
||||
template)
|
||||
CONTENT='vztmpl'
|
||||
CONTENT_LABEL='Container template'
|
||||
;;
|
||||
iso)
|
||||
CONTENT='iso'
|
||||
CONTENT_LABEL='ISO image'
|
||||
;;
|
||||
images)
|
||||
CONTENT='images'
|
||||
CONTENT_LABEL='VM Disk image'
|
||||
;;
|
||||
backup)
|
||||
CONTENT='backup'
|
||||
CONTENT_LABEL='Backup'
|
||||
;;
|
||||
snippets)
|
||||
CONTENT='snippets'
|
||||
CONTENT_LABEL='Snippets'
|
||||
;;
|
||||
*)
|
||||
msg_error "Invalid storage class '$CLASS'"
|
||||
return 1
|
||||
;;
|
||||
esac
|
||||
|
||||
# Check for preset STORAGE variable
|
||||
if [ "$CONTENT" = "rootdir" ] && [ -n "${STORAGE:-}" ]; then
|
||||
if pvesm status -content "$CONTENT" | awk 'NR>1 {print $1}' | grep -qx "$STORAGE"; then
|
||||
STORAGE_RESULT="$STORAGE"
|
||||
msg_info "Using preset storage: $STORAGE_RESULT for $CONTENT_LABEL"
|
||||
return 0
|
||||
else
|
||||
msg_error "Preset storage '$STORAGE' is not valid for content type '$CONTENT'."
|
||||
return 2
|
||||
fi
|
||||
fi
|
||||
|
||||
local -A STORAGE_MAP
|
||||
local -a MENU
|
||||
local COL_WIDTH=0
|
||||
|
||||
while read -r TAG TYPE _ TOTAL USED FREE _; do
|
||||
[[ -n "$TAG" && -n "$TYPE" ]] || continue
|
||||
local STORAGE_NAME="$TAG"
|
||||
local DISPLAY="${STORAGE_NAME} (${TYPE})"
|
||||
local USED_FMT=$(numfmt --to=iec --from-unit=K --format %.1f <<<"$USED")
|
||||
local FREE_FMT=$(numfmt --to=iec --from-unit=K --format %.1f <<<"$FREE")
|
||||
local INFO="Free: ${FREE_FMT}B Used: ${USED_FMT}B"
|
||||
STORAGE_MAP["$DISPLAY"]="$STORAGE_NAME"
|
||||
MENU+=("$DISPLAY" "$INFO" "OFF")
|
||||
((${#DISPLAY} > COL_WIDTH)) && COL_WIDTH=${#DISPLAY}
|
||||
done < <(pvesm status -content "$CONTENT" | awk 'NR>1')
|
||||
|
||||
if [ ${#MENU[@]} -eq 0 ]; then
|
||||
msg_error "No storage found for content type '$CONTENT'."
|
||||
return 2
|
||||
fi
|
||||
|
||||
if [ $((${#MENU[@]} / 3)) -eq 1 ]; then
|
||||
STORAGE_RESULT="${STORAGE_MAP[${MENU[0]}]}"
|
||||
STORAGE_INFO="${MENU[1]}"
|
||||
return 0
|
||||
fi
|
||||
|
||||
local WIDTH=$((COL_WIDTH + 42))
|
||||
while true; do
|
||||
local DISPLAY_SELECTED
|
||||
DISPLAY_SELECTED=$(whiptail --backtitle "Proxmox VE Helper Scripts" \
|
||||
--title "Storage Pools" \
|
||||
--radiolist "Which storage pool for ${CONTENT_LABEL,,}?\n(Spacebar to select)" \
|
||||
16 "$WIDTH" 6 "${MENU[@]}" 3>&1 1>&2 2>&3)
|
||||
|
||||
# Cancel or ESC
|
||||
[[ $? -ne 0 ]] && exit_script
|
||||
|
||||
# Strip trailing whitespace or newline (important for storages like "storage (dir)")
|
||||
DISPLAY_SELECTED=$(sed 's/[[:space:]]*$//' <<<"$DISPLAY_SELECTED")
|
||||
|
||||
if [[ -z "$DISPLAY_SELECTED" || -z "${STORAGE_MAP[$DISPLAY_SELECTED]+_}" ]]; then
|
||||
whiptail --msgbox "No valid storage selected. Please try again." 8 58
|
||||
continue
|
||||
fi
|
||||
|
||||
STORAGE_RESULT="${STORAGE_MAP[$DISPLAY_SELECTED]}"
|
||||
for ((i = 0; i < ${#MENU[@]}; i += 3)); do
|
||||
if [[ "${MENU[$i]}" == "$DISPLAY_SELECTED" ]]; then
|
||||
STORAGE_INFO="${MENU[$i + 1]}"
|
||||
break
|
||||
fi
|
||||
done
|
||||
return 0
|
||||
done
|
||||
}
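
# ------------------------------------------------------------------------------
# Illustrative usage sketch (not part of the upstream file): select_storage()
# hands its result back via the STORAGE_RESULT global (STORAGE_INFO is filled
# additionally when the interactive menu is used), and a preset STORAGE
# variable short-circuits the whiptail dialog for the 'container' class.
# "local-lvm" is a placeholder value. Defined but never called here.
# ------------------------------------------------------------------------------
__example_select_storage() {
  STORAGE="local-lvm"                 # optional preset; must offer 'rootdir' content
  if select_storage container; then
    echo "Container storage: $STORAGE_RESULT"
  fi
}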
|
||||
|
||||
# Test if required variables are set
|
||||
[[ "${CTID:-}" ]] || {
|
||||
msg_error "You need to set 'CTID' variable."
|
||||
exit 203
|
||||
}
|
||||
[[ "${PCT_OSTYPE:-}" ]] || {
|
||||
msg_error "You need to set 'PCT_OSTYPE' variable."
|
||||
exit 204
|
||||
}
|
||||
|
||||
# Test if ID is valid
|
||||
[ "$CTID" -ge "100" ] || {
|
||||
msg_error "ID cannot be less than 100."
|
||||
exit 205
|
||||
}
|
||||
|
||||
# Test if ID is in use
|
||||
if qm status "$CTID" &>/dev/null || pct status "$CTID" &>/dev/null; then
|
||||
echo -e "ID '$CTID' is already in use."
|
||||
unset CTID
|
||||
msg_error "Cannot use ID that is already in use."
|
||||
exit 206
|
||||
fi
|
||||
|
||||
# This checks for the presence of valid Container Storage and Template Storage locations
|
||||
if ! check_storage_support "rootdir"; then
|
||||
msg_error "No valid storage found for 'rootdir' [Container]"
|
||||
exit 1
|
||||
fi
|
||||
if ! check_storage_support "vztmpl"; then
|
||||
msg_error "No valid storage found for 'vztmpl' [Template]"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
while true; do
|
||||
if select_storage template; then
|
||||
TEMPLATE_STORAGE="$STORAGE_RESULT"
|
||||
TEMPLATE_STORAGE_INFO="$STORAGE_INFO"
|
||||
msg_ok "Storage ${BL}$TEMPLATE_STORAGE${CL} ($TEMPLATE_STORAGE_INFO) [Template]"
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
while true; do
|
||||
if select_storage container; then
|
||||
CONTAINER_STORAGE="$STORAGE_RESULT"
|
||||
CONTAINER_STORAGE_INFO="$STORAGE_INFO"
|
||||
msg_ok "Storage ${BL}$CONTAINER_STORAGE${CL} ($CONTAINER_STORAGE_INFO) [Container]"
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
# Check free space on selected container storage
|
||||
STORAGE_FREE=$(pvesm status | awk -v s="$CONTAINER_STORAGE" '$1 == s { print $6 }')
|
||||
REQUIRED_KB=$((${PCT_DISK_SIZE:-8} * 1024 * 1024))
|
||||
if [ "$STORAGE_FREE" -lt "$REQUIRED_KB" ]; then
|
||||
msg_error "Not enough space on '$CONTAINER_STORAGE'. Needed: ${PCT_DISK_SIZE:-8}G."
|
||||
exit 214
|
||||
fi
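
# ------------------------------------------------------------------------------
# Worked example (not part of the upstream file): with the default
# PCT_DISK_SIZE=8, REQUIRED_KB = 8 * 1024 * 1024 = 8388608 KiB. The comparison
# treats pvesm's 6th column (Available) as KiB, matching the
# `numfmt --from-unit=K` handling in select_storage() above.
# ------------------------------------------------------------------------------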
|
||||
|
||||
# Check Cluster Quorum if in Cluster
|
||||
if [ -f /etc/pve/corosync.conf ]; then
|
||||
msg_info "Checking cluster quorum"
|
||||
if ! pvecm status | awk -F':' '/^Quorate/ { exit ($2 ~ /Yes/) ? 0 : 1 }'; then
|
||||
|
||||
msg_error "Cluster is not quorate. Start all nodes or configure quorum device (QDevice)."
|
||||
exit 210
|
||||
fi
|
||||
msg_ok "Cluster is quorate"
|
||||
fi
|
||||
|
||||
# Update LXC template list
|
||||
TEMPLATE_SEARCH="${PCT_OSTYPE}-${PCT_OSVERSION:-}"
|
||||
case "$PCT_OSTYPE" in
|
||||
debian | ubuntu)
|
||||
TEMPLATE_PATTERN="-standard_"
|
||||
;;
|
||||
alpine | fedora | rocky | centos)
|
||||
TEMPLATE_PATTERN="-default_"
|
||||
;;
|
||||
*)
|
||||
TEMPLATE_PATTERN=""
|
||||
;;
|
||||
esac
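
# ------------------------------------------------------------------------------
# Example values (not part of the upstream file): with PCT_OSTYPE=debian and
# PCT_OSVERSION=12, TEMPLATE_SEARCH becomes "debian-12" and TEMPLATE_PATTERN
# "-standard_", which would match a template named roughly like
# "debian-12-standard_12.x-1_amd64.tar.zst" (illustrative filename).
# ------------------------------------------------------------------------------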
|
||||
|
||||
# 1. Check local templates first
|
||||
msg_info "Searching for template '$TEMPLATE_SEARCH'"
|
||||
mapfile -t TEMPLATES < <(
|
||||
pveam list "$TEMPLATE_STORAGE" |
|
||||
awk -v s="$TEMPLATE_SEARCH" -v p="$TEMPLATE_PATTERN" '$1 ~ s && $1 ~ p {print $1}' |
|
||||
sed 's/.*\///' | sort -t - -k 2 -V
|
||||
)
|
||||
|
||||
if [ ${#TEMPLATES[@]} -gt 0 ]; then
|
||||
TEMPLATE_SOURCE="local"
|
||||
else
|
||||
msg_info "No local template found, checking online repository"
|
||||
pveam update >/dev/null 2>&1
|
||||
mapfile -t TEMPLATES < <(
|
||||
pveam update >/dev/null 2>&1 &&
|
||||
pveam available -section system |
|
||||
sed -n "s/.*\($TEMPLATE_SEARCH.*$TEMPLATE_PATTERN.*\)/\1/p" |
|
||||
sort -t - -k 2 -V
|
||||
)
|
||||
TEMPLATE_SOURCE="online"
|
||||
fi
|
||||
|
||||
TEMPLATE="${TEMPLATES[-1]}"
|
||||
TEMPLATE_PATH="$(pvesm path $TEMPLATE_STORAGE:vztmpl/$TEMPLATE 2>/dev/null ||
|
||||
echo "/var/lib/vz/template/cache/$TEMPLATE")"
|
||||
msg_ok "Template ${BL}$TEMPLATE${CL} [$TEMPLATE_SOURCE]"
|
||||
|
||||
# 4. Validate template (exists & not corrupted)
TEMPLATE_VALID=1

if [ ! -s "$TEMPLATE_PATH" ]; then
  TEMPLATE_VALID=0
elif ! tar --use-compress-program=zstdcat -tf "$TEMPLATE_PATH" >/dev/null 2>&1; then
  TEMPLATE_VALID=0
fi
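
# ------------------------------------------------------------------------------
# Illustrative sketch (not part of the upstream file): the same two checks as
# above, wrapped as a reusable predicate - a non-empty file plus a dry-run
# listing of the zstd-compressed tarball. Defined but never called here.
# ------------------------------------------------------------------------------
__example_template_is_valid() {
  local path="$1"
  [ -s "$path" ] && tar --use-compress-program=zstdcat -tf "$path" >/dev/null 2>&1
}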
|
||||
|
||||
if [ "$TEMPLATE_VALID" -eq 0 ]; then
|
||||
msg_warn "Template $TEMPLATE is missing or corrupted. Re-downloading."
|
||||
[[ -f "$TEMPLATE_PATH" ]] && rm -f "$TEMPLATE_PATH"
|
||||
for attempt in {1..3}; do
|
||||
msg_info "Attempt $attempt: Downloading LXC template..."
|
||||
if pveam download "$TEMPLATE_STORAGE" "$TEMPLATE" >/dev/null 2>&1; then
|
||||
msg_ok "Template download successful."
|
||||
break
|
||||
fi
|
||||
if [ $attempt -eq 3 ]; then
|
||||
msg_error "Failed after 3 attempts. Please check network access or manually run:\n pveam download $TEMPLATE_STORAGE $TEMPLATE"
|
||||
exit 208
|
||||
fi
|
||||
sleep $((attempt * 5))
|
||||
done
|
||||
fi
|
||||
|
||||
msg_info "Creating LXC Container"
|
||||
# Check and fix subuid/subgid
|
||||
grep -q "root:100000:65536" /etc/subuid || echo "root:100000:65536" >>/etc/subuid
|
||||
grep -q "root:100000:65536" /etc/subgid || echo "root:100000:65536" >>/etc/subgid
|
||||
|
||||
# Combine all options
|
||||
PCT_OPTIONS=(${PCT_OPTIONS[@]:-${DEFAULT_PCT_OPTIONS[@]}})
|
||||
[[ " ${PCT_OPTIONS[@]} " =~ " -rootfs " ]] || PCT_OPTIONS+=(-rootfs "$CONTAINER_STORAGE:${PCT_DISK_SIZE:-8}")
|
||||
|
||||
# Secure creation of the LXC container with lock and template check
lockfile="/tmp/template.${TEMPLATE}.lock"
exec 9>"$lockfile" || {
  msg_error "Failed to create lock file '$lockfile'."
  exit 200
}
flock -w 60 9 || {
  msg_error "Timeout while waiting for template lock"
  exit 211
}
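
# ------------------------------------------------------------------------------
# Note on the locking pattern above (not part of the upstream file): FD 9 stays
# open on the lock file for the rest of the script, so the flock is held until
# the process exits and concurrent runs fetching the same template serialize on
# it. A minimal standalone sketch of the same idiom, defined but never called:
# ------------------------------------------------------------------------------
__example_with_lock() {
  exec 8>"/tmp/example.lock" || return 1   # FD 8 here to avoid clashing with FD 9 above
  flock -w 60 8 || return 1                # wait up to 60s for the exclusive lock
  echo "critical section"                  # lock is released when FD 8 closes / on exit
}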
|
||||
|
||||
if ! pct create "$CTID" "${TEMPLATE_STORAGE}:vztmpl/${TEMPLATE}" "${PCT_OPTIONS[@]}" &>/dev/null; then
|
||||
msg_error "Container creation failed. Checking if template is corrupted or incomplete."
|
||||
|
||||
if [[ ! -s "$TEMPLATE_PATH" || "$(stat -c%s "$TEMPLATE_PATH")" -lt 1000000 ]]; then
|
||||
msg_error "Template file too small or missing – re-downloading."
|
||||
rm -f "$TEMPLATE_PATH"
|
||||
elif ! zstdcat "$TEMPLATE_PATH" | tar -tf - &>/dev/null; then
|
||||
msg_error "Template appears to be corrupted – re-downloading."
|
||||
rm -f "$TEMPLATE_PATH"
|
||||
else
|
||||
msg_error "Template is valid, but container creation failed. Update your whole Proxmox System (pve-container) first or check https://github.com/community-scripts/ProxmoxVE/discussions/8126"
|
||||
exit 209
|
||||
fi
|
||||
|
||||
# Retry download
|
||||
for attempt in {1..3}; do
|
||||
msg_info "Attempt $attempt: Re-downloading template..."
|
||||
if timeout 120 pveam download "$TEMPLATE_STORAGE" "$TEMPLATE" >/dev/null; then
|
||||
msg_ok "Template re-download successful."
|
||||
break
|
||||
fi
|
||||
if [ "$attempt" -eq 3 ]; then
|
||||
msg_error "Three failed attempts. Aborting."
|
||||
exit 208
|
||||
fi
|
||||
sleep $((attempt * 5))
|
||||
done
|
||||
|
||||
sleep 1 # I/O-Sync-Delay
|
||||
msg_ok "Re-downloaded LXC Template"
|
||||
fi
|
||||
|
||||
if ! pct list | awk '{print $1}' | grep -qx "$CTID"; then
|
||||
msg_error "Container ID $CTID not listed in 'pct list' – unexpected failure."
|
||||
exit 215
|
||||
fi
|
||||
|
||||
if ! grep -q '^rootfs:' "/etc/pve/lxc/$CTID.conf"; then
|
||||
msg_error "RootFS entry missing in container config – storage not correctly assigned."
|
||||
exit 216
|
||||
fi
|
||||
|
||||
if grep -q '^hostname:' "/etc/pve/lxc/$CTID.conf"; then
|
||||
CT_HOSTNAME=$(grep '^hostname:' "/etc/pve/lxc/$CTID.conf" | awk '{print $2}')
|
||||
if [[ ! "$CT_HOSTNAME" =~ ^[a-z0-9-]+$ ]]; then
|
||||
msg_warn "Hostname '$CT_HOSTNAME' contains invalid characters – may cause issues with networking or DNS."
|
||||
fi
|
||||
fi
|
||||
|
||||
msg_ok "LXC Container ${BL}$CTID${CL} ${GN}was successfully created."
|
||||
misc/error_handler.func (new file, 322 lines)
@ -0,0 +1,322 @@
|
||||
#!/usr/bin/env bash
|
||||
# ------------------------------------------------------------------------------
|
||||
# ERROR HANDLER - ERROR & SIGNAL MANAGEMENT
|
||||
# ------------------------------------------------------------------------------
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: MickLesk (CanbiZ)
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# ------------------------------------------------------------------------------
|
||||
#
|
||||
# Provides comprehensive error handling and signal management for all scripts.
|
||||
# Includes:
|
||||
# - Exit code explanations (shell, package managers, databases, custom codes)
|
||||
# - Error handler with detailed logging
|
||||
# - Signal handlers (EXIT, INT, TERM)
|
||||
# - Initialization function for trap setup
|
||||
#
|
||||
# Usage:
|
||||
# source <(curl -fsSL .../error_handler.func)
|
||||
# catch_errors
|
||||
#
|
||||
# ------------------------------------------------------------------------------
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 1: EXIT CODE EXPLANATIONS
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# explain_exit_code()
|
||||
#
|
||||
# - Maps numeric exit codes to human-readable error descriptions
|
||||
# - Supports:
|
||||
# * Generic/Shell errors (1, 2, 126, 127, 128, 130, 137, 139, 143)
|
||||
# * Package manager errors (APT, DPKG: 100, 101, 255)
|
||||
# * Node.js/npm errors (243-249, 254)
|
||||
# * Python/pip/uv errors (210-212)
|
||||
# * PostgreSQL errors (231-234)
|
||||
# * MySQL/MariaDB errors (241-244)
|
||||
# * MongoDB errors (251-254)
|
||||
# * Proxmox custom codes (200-231)
|
||||
# - Returns description string for given exit code
|
||||
# ------------------------------------------------------------------------------
|
||||
explain_exit_code() {
|
||||
local code="$1"
|
||||
case "$code" in
|
||||
# --- Generic / Shell ---
|
||||
1) echo "General error / Operation not permitted" ;;
|
||||
2) echo "Misuse of shell builtins (e.g. syntax error)" ;;
|
||||
126) echo "Command invoked cannot execute (permission problem?)" ;;
|
||||
127) echo "Command not found" ;;
|
||||
128) echo "Invalid argument to exit" ;;
|
||||
130) echo "Terminated by Ctrl+C (SIGINT)" ;;
|
||||
137) echo "Killed (SIGKILL / Out of memory?)" ;;
|
||||
139) echo "Segmentation fault (core dumped)" ;;
|
||||
143) echo "Terminated (SIGTERM)" ;;
|
||||
|
||||
# --- Package manager / APT / DPKG ---
|
||||
100) echo "APT: Package manager error (broken packages / dependency problems)" ;;
|
||||
101) echo "APT: Configuration error (bad sources.list, malformed config)" ;;
|
||||
255) echo "DPKG: Fatal internal error" ;;
|
||||
|
||||
# --- Node.js / npm / pnpm / yarn ---
|
||||
243) echo "Node.js: Out of memory (JavaScript heap out of memory)" ;;
|
||||
245) echo "Node.js: Invalid command-line option" ;;
|
||||
246) echo "Node.js: Internal JavaScript Parse Error" ;;
|
||||
247) echo "Node.js: Fatal internal error" ;;
|
||||
248) echo "Node.js: Invalid C++ addon / N-API failure" ;;
|
||||
249) echo "Node.js: Inspector error" ;;
|
||||
254) echo "npm/pnpm/yarn: Unknown fatal error" ;;
|
||||
|
||||
# --- Python / pip / uv ---
|
||||
210) echo "Python: Virtualenv / uv environment missing or broken" ;;
|
||||
211) echo "Python: Dependency resolution failed" ;;
|
||||
212) echo "Python: Installation aborted (permissions or EXTERNALLY-MANAGED)" ;;
|
||||
|
||||
# --- PostgreSQL ---
|
||||
231) echo "PostgreSQL: Connection failed (server not running / wrong socket)" ;;
|
||||
232) echo "PostgreSQL: Authentication failed (bad user/password)" ;;
|
||||
233) echo "PostgreSQL: Database does not exist" ;;
|
||||
234) echo "PostgreSQL: Fatal error in query / syntax" ;;
|
||||
|
||||
# --- MySQL / MariaDB ---
|
||||
241) echo "MySQL/MariaDB: Connection failed (server not running / wrong socket)" ;;
|
||||
242) echo "MySQL/MariaDB: Authentication failed (bad user/password)" ;;
|
||||
243) echo "MySQL/MariaDB: Database does not exist" ;;
|
||||
244) echo "MySQL/MariaDB: Fatal error in query / syntax" ;;
|
||||
|
||||
# --- MongoDB ---
|
||||
251) echo "MongoDB: Connection failed (server not running)" ;;
|
||||
252) echo "MongoDB: Authentication failed (bad user/password)" ;;
|
||||
253) echo "MongoDB: Database not found" ;;
|
||||
254) echo "MongoDB: Fatal query error" ;;
|
||||
|
||||
# --- Proxmox Custom Codes ---
|
||||
200) echo "Proxmox: Failed to create lock file" ;;
|
||||
203) echo "Proxmox: Missing CTID variable" ;;
|
||||
204) echo "Proxmox: Missing PCT_OSTYPE variable" ;;
|
||||
205) echo "Proxmox: Invalid CTID (<100)" ;;
|
||||
206) echo "Proxmox: CTID already in use" ;;
|
||||
207) echo "Proxmox: Password contains unescaped special characters" ;;
|
||||
208) echo "Proxmox: Invalid configuration (DNS/MAC/Network format)" ;;
|
||||
209) echo "Proxmox: Container creation failed" ;;
|
||||
210) echo "Proxmox: Cluster not quorate" ;;
|
||||
211) echo "Proxmox: Timeout waiting for template lock" ;;
|
||||
212) echo "Proxmox: Storage type 'iscsidirect' does not support containers (VMs only)" ;;
|
||||
213) echo "Proxmox: Storage type does not support 'rootdir' content" ;;
|
||||
214) echo "Proxmox: Not enough storage space" ;;
|
||||
215) echo "Proxmox: Container created but not listed (ghost state)" ;;
|
||||
216) echo "Proxmox: RootFS entry missing in config" ;;
|
||||
217) echo "Proxmox: Storage not accessible" ;;
|
||||
219) echo "Proxmox: CephFS does not support containers - use RBD" ;;
|
||||
224) echo "Proxmox: PBS storage is for backups only" ;;
|
||||
218) echo "Proxmox: Template file corrupted or incomplete" ;;
|
||||
220) echo "Proxmox: Unable to resolve template path" ;;
|
||||
221) echo "Proxmox: Template file not readable" ;;
|
||||
222) echo "Proxmox: Template download failed" ;;
|
||||
223) echo "Proxmox: Template not available after download" ;;
|
||||
225) echo "Proxmox: No template available for OS/Version" ;;
|
||||
231) echo "Proxmox: LXC stack upgrade failed" ;;
|
||||
|
||||
# --- Default ---
|
||||
*) echo "Unknown error" ;;
|
||||
esac
|
||||
}
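
# ------------------------------------------------------------------------------
# Illustrative usage (not part of the upstream file): mapping a code to its
# description, exactly as error_handler() below does.
# ------------------------------------------------------------------------------
# explain_exit_code 137   # -> "Killed (SIGKILL / Out of memory?)"
# explain_exit_code 206   # -> "Proxmox: CTID already in use"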
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 2: ERROR HANDLERS
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# error_handler()
|
||||
#
|
||||
# - Main error handler triggered by ERR trap
|
||||
# - Arguments: exit_code, command, line_number
|
||||
# - Behavior:
|
||||
# * Returns silently if exit_code is 0 (success)
|
||||
# * Sources explain_exit_code() for detailed error description
|
||||
# * Displays error message with:
|
||||
# - Line number where error occurred
|
||||
# - Exit code with explanation
|
||||
# - Command that failed
|
||||
# * Shows last 20 lines of SILENT_LOGFILE if available
|
||||
# * Copies log to container /root for later inspection
|
||||
# * Exits with original exit code
|
||||
# ------------------------------------------------------------------------------
|
||||
error_handler() {
|
||||
local exit_code=${1:-$?}
|
||||
local command=${2:-${BASH_COMMAND:-unknown}}
|
||||
local line_number=${BASH_LINENO[0]:-unknown}
|
||||
|
||||
command="${command//\$STD/}"
|
||||
|
||||
if [[ "$exit_code" -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
local explanation
|
||||
explanation="$(explain_exit_code "$exit_code")"
|
||||
|
||||
printf "\e[?25h"
|
||||
|
||||
# Use msg_error if available, fallback to echo
|
||||
if declare -f msg_error >/dev/null 2>&1; then
|
||||
msg_error "in line ${line_number}: exit code ${exit_code} (${explanation}): while executing command ${command}"
|
||||
else
|
||||
echo -e "\n${RD}[ERROR]${CL} in line ${RD}${line_number}${CL}: exit code ${RD}${exit_code}${CL} (${explanation}): while executing command ${YWB}${command}${CL}\n"
|
||||
fi
|
||||
|
||||
if [[ -n "${DEBUG_LOGFILE:-}" ]]; then
|
||||
{
|
||||
echo "------ ERROR ------"
|
||||
echo "Timestamp : $(date '+%Y-%m-%d %H:%M:%S')"
|
||||
echo "Exit Code : $exit_code ($explanation)"
|
||||
echo "Line : $line_number"
|
||||
echo "Command : $command"
|
||||
echo "-------------------"
|
||||
} >>"$DEBUG_LOGFILE"
|
||||
fi
|
||||
|
||||
# Get active log file (BUILD_LOG or INSTALL_LOG)
|
||||
local active_log=""
|
||||
if declare -f get_active_logfile >/dev/null 2>&1; then
|
||||
active_log="$(get_active_logfile)"
|
||||
elif [[ -n "${SILENT_LOGFILE:-}" ]]; then
|
||||
active_log="$SILENT_LOGFILE"
|
||||
fi
|
||||
|
||||
if [[ -n "$active_log" && -s "$active_log" ]]; then
|
||||
echo "--- Last 20 lines of silent log ---"
|
||||
tail -n 20 "$active_log"
|
||||
echo "-----------------------------------"
|
||||
|
||||
# Detect context: Container (INSTALL_LOG set + /root exists) vs Host (BUILD_LOG)
|
||||
if [[ -n "${INSTALL_LOG:-}" && -d /root ]]; then
|
||||
# CONTAINER CONTEXT: Copy log and create flag file for host
|
||||
local container_log="/root/.install-${SESSION_ID:-error}.log"
|
||||
cp "$active_log" "$container_log" 2>/dev/null || true
|
||||
|
||||
# Create error flag file with exit code for host detection
|
||||
echo "$exit_code" >"/root/.install-${SESSION_ID:-error}.failed" 2>/dev/null || true
|
||||
|
||||
if declare -f msg_custom >/dev/null 2>&1; then
|
||||
msg_custom "📋" "${YW}" "Log saved to: ${container_log}"
|
||||
else
|
||||
echo -e "${YW}Log saved to:${CL} ${BL}${container_log}${CL}"
|
||||
fi
|
||||
else
|
||||
# HOST CONTEXT: Show local log path and offer container cleanup
|
||||
if declare -f msg_custom >/dev/null 2>&1; then
|
||||
msg_custom "📋" "${YW}" "Full log: ${active_log}"
|
||||
else
|
||||
echo -e "${YW}Full log:${CL} ${BL}${active_log}${CL}"
|
||||
fi
|
||||
|
||||
# Offer to remove container if it exists (build errors after container creation)
|
||||
if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null && pct status "$CTID" &>/dev/null; then
|
||||
echo ""
|
||||
echo -en "${YW}Remove broken container ${CTID}? (Y/n) [auto-remove in 60s]: ${CL}"
|
||||
|
||||
if read -t 60 -r response; then
|
||||
if [[ -z "$response" || "$response" =~ ^[Yy]$ ]]; then
|
||||
echo -e "\n${YW}Removing container ${CTID}${CL}"
|
||||
pct stop "$CTID" &>/dev/null || true
|
||||
pct destroy "$CTID" &>/dev/null || true
|
||||
echo -e "${GN}✔${CL} Container ${CTID} removed"
|
||||
elif [[ "$response" =~ ^[Nn]$ ]]; then
|
||||
echo -e "\n${YW}Container ${CTID} kept for debugging${CL}"
|
||||
fi
|
||||
else
|
||||
# Timeout - auto-remove
|
||||
echo -e "\n${YW}No response - auto-removing container${CL}"
|
||||
pct stop "$CTID" &>/dev/null || true
|
||||
pct destroy "$CTID" &>/dev/null || true
|
||||
echo -e "${GN}✔${CL} Container ${CTID} removed"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
exit "$exit_code"
|
||||
}
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 3: SIGNAL HANDLERS
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# on_exit()
|
||||
#
|
||||
# - EXIT trap handler
|
||||
# - Cleans up lock files if lockfile variable is set
|
||||
# - Exits with captured exit code
|
||||
# - Always runs on script termination (success or failure)
|
||||
# ------------------------------------------------------------------------------
|
||||
on_exit() {
|
||||
local exit_code=$?
|
||||
[[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
|
||||
exit "$exit_code"
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# on_interrupt()
|
||||
#
|
||||
# - SIGINT (Ctrl+C) trap handler
|
||||
# - Displays "Interrupted by user" message
|
||||
# - Exits with code 130 (128 + SIGINT=2)
|
||||
# ------------------------------------------------------------------------------
|
||||
on_interrupt() {
|
||||
if declare -f msg_error >/dev/null 2>&1; then
|
||||
msg_error "Interrupted by user (SIGINT)"
|
||||
else
|
||||
echo -e "\n${RD}Interrupted by user (SIGINT)${CL}"
|
||||
fi
|
||||
exit 130
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# on_terminate()
|
||||
#
|
||||
# - SIGTERM trap handler
|
||||
# - Displays "Terminated by signal" message
|
||||
# - Exits with code 143 (128 + SIGTERM=15)
|
||||
# - Triggered by external process termination
|
||||
# ------------------------------------------------------------------------------
|
||||
on_terminate() {
|
||||
if declare -f msg_error >/dev/null 2>&1; then
|
||||
msg_error "Terminated by signal (SIGTERM)"
|
||||
else
|
||||
echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}"
|
||||
fi
|
||||
exit 143
|
||||
}
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 4: INITIALIZATION
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
# catch_errors()
#
# - Initializes error handling and signal traps
# - Enables strict error handling:
#   * set -Ee: Exit on error, inherit ERR trap in functions
#   * set -o pipefail: Pipeline fails if any command fails
#   * set -u: (optional) Exit on undefined variable (if STRICT_UNSET=1)
# - Sets up traps:
#   * ERR → error_handler
#   * EXIT → on_exit
#   * INT → on_interrupt
#   * TERM → on_terminate
# - Call this function early in every script
# ------------------------------------------------------------------------------
catch_errors() {
  set -Ee -o pipefail
  if [ "${STRICT_UNSET:-0}" = "1" ]; then
    set -u
  fi

  trap 'error_handler' ERR
  trap on_exit EXIT
  trap on_interrupt INT
  trap on_terminate TERM
}
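
# ------------------------------------------------------------------------------
# Illustrative usage sketch (not part of the upstream file), following the
# file's own "source + catch_errors" pattern; the failing command is just an
# example that would route through error_handler() via the ERR trap.
# ------------------------------------------------------------------------------
# source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
# STRICT_UNSET=1          # optional: also fail on undefined variables
# catch_errors
# cp /nonexistent /tmp    # non-zero exit -> error_handler prints the explanation and exits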
|
||||
@ -1,21 +1,57 @@
|
||||
# Copyright (c) 2021-2025 tteck
|
||||
# Copyright (c) 2021-2025 community-scripts ORG
|
||||
# Author: tteck (tteckster)
|
||||
# Co-Author: MickLesk
|
||||
# License: MIT
|
||||
# https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
|
||||
|
||||
# ==============================================================================
|
||||
# INSTALL.FUNC - CONTAINER INSTALLATION & SETUP
|
||||
# ==============================================================================
|
||||
#
|
||||
# This file provides installation functions executed inside LXC containers
|
||||
# after creation. Handles:
|
||||
#
|
||||
# - Network connectivity verification (IPv4/IPv6)
|
||||
# - OS updates and package installation
|
||||
# - DNS resolution checks
|
||||
# - MOTD and SSH configuration
|
||||
# - Container customization and auto-login
|
||||
#
|
||||
# Usage:
|
||||
# - Sourced by <app>-install.sh scripts
|
||||
# - Executes via pct exec inside container
|
||||
# - Requires internet connectivity
|
||||
#
|
||||
# ==============================================================================
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 1: INITIALIZATION
|
||||
# ==============================================================================
|
||||
|
||||
if ! command -v curl >/dev/null 2>&1; then
|
||||
printf "\r\e[2K%b" '\033[93m Setup Source \033[m' >&2
|
||||
apt-get update >/dev/null 2>&1
|
||||
apt-get install -y curl >/dev/null 2>&1
|
||||
apt update >/dev/null 2>&1
|
||||
apt install -y curl >/dev/null 2>&1
|
||||
fi
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/core.func)
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
|
||||
load_functions
|
||||
# This function enables IPv6 if it's not disabled and sets verbose mode
|
||||
catch_errors
|
||||
|
||||
# ==============================================================================
|
||||
# SECTION 2: NETWORK & CONNECTIVITY
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# verb_ip6()
|
||||
#
|
||||
# - Configures IPv6 based on DISABLEIPV6 variable
|
||||
# - If DISABLEIPV6=yes: disables IPv6 via sysctl
|
||||
# - Sets verbose mode via set_std_mode()
|
||||
# ------------------------------------------------------------------------------
|
||||
verb_ip6() {
|
||||
set_std_mode # Set STD mode based on VERBOSE
|
||||
|
||||
if [ "$IPV6_METHOD" == "disable" ]; then
|
||||
if [ "${IPV6_METHOD:-}" = "disable" ]; then
|
||||
msg_info "Disabling IPv6 (this may affect some services)"
|
||||
mkdir -p /etc/sysctl.d
|
||||
$STD tee /etc/sysctl.d/99-disable-ipv6.conf >/dev/null <<EOF
|
||||
@ -29,30 +65,15 @@ EOF
|
||||
fi
|
||||
}
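
# ------------------------------------------------------------------------------
# Rough sketch (not part of the upstream file) of what the elided heredoc above
# plausibly writes to /etc/sysctl.d/99-disable-ipv6.conf; the exact keys in the
# upstream file may differ.
# ------------------------------------------------------------------------------
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
# net.ipv6.conf.lo.disable_ipv6 = 1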
|
||||
|
||||
# This function sets error handling options and defines the error_handler function to handle errors
|
||||
catch_errors() {
|
||||
set -Eeuo pipefail
|
||||
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
|
||||
}
|
||||
|
||||
# This function handles errors
|
||||
error_handler() {
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/api.func)
|
||||
printf "\e[?25h"
|
||||
local exit_code="$?"
|
||||
local line_number="$1"
|
||||
local command="$2"
|
||||
local error_message="${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}"
|
||||
echo -e "\n$error_message"
|
||||
if [[ "$line_number" -eq 51 ]]; then
|
||||
echo -e "The silent function has suppressed the error, run the script with verbose mode enabled, which will provide more detailed output.\n"
|
||||
post_update_to_api "failed" "No error message, script ran in silent mode"
|
||||
else
|
||||
post_update_to_api "failed" "${command}"
|
||||
fi
|
||||
}
|
||||
|
||||
# This function sets up the Container OS by generating the locale, setting the timezone, and checking the network connection
|
||||
# ------------------------------------------------------------------------------
|
||||
# setting_up_container()
|
||||
#
|
||||
# - Verifies network connectivity via hostname -I
|
||||
# - Retries up to RETRY_NUM times with RETRY_EVERY seconds delay
|
||||
# - Removes Python EXTERNALLY-MANAGED restrictions
|
||||
# - Disables systemd-networkd-wait-online.service for faster boot
|
||||
# - Exits with error if network unavailable after retries
|
||||
# ------------------------------------------------------------------------------
|
||||
setting_up_container() {
|
||||
msg_info "Setting up Container OS"
|
||||
for ((i = RETRY_NUM; i > 0; i--)); do
|
||||
@ -74,8 +95,17 @@ setting_up_container() {
|
||||
msg_ok "Network Connected: ${BL}$(hostname -I)"
|
||||
}
|
||||
|
||||
# This function checks the network connection by pinging a known IP address and prompts the user to continue if the internet is not connected
# ------------------------------------------------------------------------------
|
||||
# network_check()
|
||||
#
|
||||
# - Comprehensive network connectivity check for IPv4 and IPv6
|
||||
# - Tests connectivity to multiple DNS servers:
|
||||
# * IPv4: 1.1.1.1 (Cloudflare), 8.8.8.8 (Google), 9.9.9.9 (Quad9)
|
||||
# * IPv6: 2606:4700:4700::1111, 2001:4860:4860::8888, 2620:fe::fe
|
||||
# - Verifies DNS resolution for GitHub and Community-Scripts domains
|
||||
# - Prompts user to continue if no internet detected
|
||||
# - Uses fatal() on DNS resolution failure for critical hosts
|
||||
# ------------------------------------------------------------------------------
|
||||
network_check() {
|
||||
set +e
|
||||
trap - ERR
|
||||
@ -135,7 +165,19 @@ network_check() {
|
||||
trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
|
||||
}
|
||||
|
||||
# This function updates the Container OS by running apt-get update and upgrade
|
||||
# ==============================================================================
|
||||
# SECTION 3: OS UPDATE & PACKAGE MANAGEMENT
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# update_os()
|
||||
#
|
||||
# - Updates container OS via apt-get update and dist-upgrade
|
||||
# - Configures APT cacher proxy if CACHER=yes (accelerates package downloads)
|
||||
# - Removes Python EXTERNALLY-MANAGED restrictions for pip
|
||||
# - Sources tools.func for additional setup functions after update
|
||||
# - Uses $STD wrapper to suppress output unless VERBOSE=yes
|
||||
# ------------------------------------------------------------------------------
|
||||
update_os() {
|
||||
msg_info "Updating Container OS"
|
||||
if [[ "$CACHER" == "yes" ]]; then
|
||||
@ -158,26 +200,34 @@ EOF
|
||||
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
|
||||
}
|
||||
|
||||
# This function modifies the message of the day (motd) and SSH settings
|
||||
# ==============================================================================
|
||||
# SECTION 4: MOTD & SSH CONFIGURATION
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# motd_ssh()
|
||||
#
|
||||
# - Configures Message of the Day (MOTD) with container information
|
||||
# - Creates /etc/profile.d/00_lxc-details.sh with:
|
||||
# * Application name
|
||||
# * Warning banner (DEV repository)
|
||||
# * OS name and version
|
||||
# * Hostname and IP address
|
||||
# * GitHub repository link
|
||||
# - Disables executable flag on /etc/update-motd.d/* scripts
|
||||
# - Enables root SSH access if SSH_ROOT=yes
|
||||
# - Configures TERM environment variable for better terminal support
|
||||
# ------------------------------------------------------------------------------
|
||||
motd_ssh() {
|
||||
# Set terminal to 256-color mode
|
||||
grep -qxF "export TERM='xterm-256color'" /root/.bashrc || echo "export TERM='xterm-256color'" >>/root/.bashrc
|
||||
|
||||
# Get OS information (Debian / Ubuntu)
|
||||
if [ -f "/etc/os-release" ]; then
|
||||
OS_NAME=$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||
OS_VERSION=$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '"')
|
||||
elif [ -f "/etc/debian_version" ]; then
|
||||
OS_NAME="Debian"
|
||||
OS_VERSION=$(cat /etc/debian_version)
|
||||
fi
|
||||
|
||||
PROFILE_FILE="/etc/profile.d/00_lxc-details.sh"
|
||||
echo "echo -e \"\"" >"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${BOLD}${APPLICATION} LXC Container${CL}"\" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${GATEWAY}${YW} Provided by: ${GN}community-scripts ORG ${YW}| GitHub: ${GN}https://github.com/community-scripts/ProxmoxVE${CL}\"" >>"$PROFILE_FILE"
|
||||
echo "echo \"\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}${OS_NAME} - Version: ${OS_VERSION}${CL}\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}\$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '\"') - Version: \$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '\"')${CL}\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${HOSTNAME}${YW} Hostname: ${GN}\$(hostname)${CL}\"" >>"$PROFILE_FILE"
|
||||
echo -e "echo -e \"${TAB}${INFO}${YW} IP Address: ${GN}\$(hostname -I | awk '{print \$1}')${CL}\"" >>"$PROFILE_FILE"
|
||||
|
||||
@ -190,7 +240,19 @@ motd_ssh() {
|
||||
fi
|
||||
}
|
||||
|
||||
# This function customizes the container by modifying the getty service and enabling auto-login for the root user
|
||||
# ==============================================================================
|
||||
# SECTION 5: CONTAINER CUSTOMIZATION
|
||||
# ==============================================================================
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# customize()
|
||||
#
|
||||
# - Customizes container for passwordless root login if PASSWORD is empty
|
||||
# - Configures getty for auto-login via /etc/systemd/system/container-getty@1.service.d/override.conf
|
||||
# - Creates /usr/bin/update script for easy application updates
|
||||
# - Injects SSH authorized keys if SSH_AUTHORIZED_KEY variable is set
|
||||
# - Sets proper permissions on SSH directories and key files
|
||||
# ------------------------------------------------------------------------------
|
||||
customize() {
|
||||
if [[ "$PASSWORD" == "" ]]; then
|
||||
msg_info "Customizing Container"
|
||||
|
||||
misc/tools.func (338 changed lines)
@ -72,23 +72,19 @@ stop_all_services() {
|
||||
local service_patterns=("$@")
|
||||
|
||||
for pattern in "${service_patterns[@]}"; do
|
||||
# Find all matching services (use || true to avoid pipeline failures)
|
||||
# Find all matching services (grep || true to handle no matches)
|
||||
local services
|
||||
services=$(systemctl list-units --type=service --all 2>/dev/null |
|
||||
grep -oE "${pattern}[^ ]*\.service" 2>/dev/null |
|
||||
sort -u 2>/dev/null || true)
|
||||
grep -oE "${pattern}[^ ]*\.service" 2>/dev/null | sort -u) || true
|
||||
|
||||
# Only process if we found any services
|
||||
if [[ -n "$services" ]]; then
|
||||
while IFS= read -r service; do
|
||||
[[ -z "$service" ]] && continue
|
||||
while read -r service; do
|
||||
$STD systemctl stop "$service" 2>/dev/null || true
|
||||
$STD systemctl disable "$service" 2>/dev/null || true
|
||||
done <<<"$services"
|
||||
fi
|
||||
done
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
@ -196,6 +192,8 @@ install_packages_with_retry() {
|
||||
if [[ $retry -le $max_retries ]]; then
|
||||
msg_warn "Package installation failed, retrying ($retry/$max_retries)..."
|
||||
sleep 2
|
||||
# Fix any interrupted dpkg operations before retry
|
||||
$STD dpkg --configure -a 2>/dev/null || true
|
||||
$STD apt update 2>/dev/null || true
|
||||
fi
|
||||
done
|
||||
@ -221,6 +219,8 @@ upgrade_packages_with_retry() {
|
||||
if [[ $retry -le $max_retries ]]; then
|
||||
msg_warn "Package upgrade failed, retrying ($retry/$max_retries)..."
|
||||
sleep 2
|
||||
# Fix any interrupted dpkg operations before retry
|
||||
$STD dpkg --configure -a 2>/dev/null || true
|
||||
$STD apt update 2>/dev/null || true
|
||||
fi
|
||||
done
|
||||
@ -1186,6 +1186,12 @@ cleanup_orphaned_sources() {
|
||||
# This should be called at the start of any setup function
|
||||
# ------------------------------------------------------------------------------
|
||||
ensure_apt_working() {
|
||||
# Fix interrupted dpkg operations first
|
||||
# This can happen if a previous installation was interrupted (e.g., by script error)
|
||||
if [[ -f /var/lib/dpkg/lock-frontend ]] || dpkg --audit 2>&1 | grep -q "interrupted"; then
|
||||
$STD dpkg --configure -a 2>/dev/null || true
|
||||
fi
|
||||
|
||||
# Clean up orphaned sources first
|
||||
cleanup_orphaned_sources
|
||||
|
||||
@ -1214,7 +1220,7 @@ setup_deb822_repo() {
|
||||
local gpg_url="$2"
|
||||
local repo_url="$3"
|
||||
local suite="$4"
|
||||
local component="${5-main}"
|
||||
local component="${5:-main}"
|
||||
local architectures="${6-}" # optional
|
||||
|
||||
# Validate required parameters
|
||||
@ -1729,12 +1735,13 @@ function fetch_and_deploy_gh_release() {
|
||||
|
||||
### Tarball Mode ###
|
||||
if [[ "$mode" == "tarball" || "$mode" == "source" ]]; then
|
||||
url=$(echo "$json" | jq -r '.tarball_url // empty')
|
||||
[[ -z "$url" ]] && url="https://github.com/$repo/archive/refs/tags/v$version.tar.gz"
|
||||
# GitHub API's tarball_url/zipball_url can return HTTP 300 Multiple Choices
|
||||
# when a branch and tag share the same name. Use explicit refs/tags/ URL instead.
|
||||
local direct_tarball_url="https://github.com/$repo/archive/refs/tags/$tag_name.tar.gz"
|
||||
filename="${app_lc}-${version}.tar.gz"
|
||||
|
||||
curl $download_timeout -fsSL -o "$tmpdir/$filename" "$url" || {
|
||||
msg_error "Download failed: $url"
|
||||
curl $download_timeout -fsSL -o "$tmpdir/$filename" "$direct_tarball_url" || {
|
||||
msg_error "Download failed: $direct_tarball_url"
|
||||
rm -rf "$tmpdir"
|
||||
return 1
|
||||
}
|
||||
@ -2788,12 +2795,19 @@ function setup_java() {
|
||||
INSTALLED_VERSION=$(dpkg -l 2>/dev/null | awk '/temurin-.*-jdk/{print $2}' | grep -oP 'temurin-\K[0-9]+' | head -n1 || echo "")
|
||||
fi
|
||||
|
||||
# Validate INSTALLED_VERSION is not empty if matched
|
||||
# Validate INSTALLED_VERSION is not empty if JDK package found
|
||||
local JDK_COUNT=0
|
||||
JDK_COUNT=$(dpkg -l 2>/dev/null | grep -c "temurin-.*-jdk" || true)
|
||||
if [[ -z "$INSTALLED_VERSION" && "${JDK_COUNT:-0}" -gt 0 ]]; then
|
||||
msg_warn "Found Temurin JDK but cannot determine version"
|
||||
INSTALLED_VERSION="0"
|
||||
msg_warn "Found Temurin JDK but cannot determine version - attempting reinstall"
|
||||
# Try to get actual package name for purge
|
||||
local OLD_PACKAGE
|
||||
OLD_PACKAGE=$(dpkg -l 2>/dev/null | awk '/temurin-.*-jdk/{print $2}' | head -n1 || echo "")
|
||||
if [[ -n "$OLD_PACKAGE" ]]; then
|
||||
msg_info "Removing existing package: $OLD_PACKAGE"
|
||||
$STD apt purge -y "$OLD_PACKAGE" || true
|
||||
fi
|
||||
INSTALLED_VERSION="" # Reset to trigger fresh install
|
||||
fi
|
||||
|
||||
# Scenario 1: Already at correct version
|
||||
@ -3242,7 +3256,6 @@ function setup_mongodb() {
|
||||
return 1
|
||||
}
|
||||
|
||||
# Verify MongoDB was installed correctly
|
||||
if ! command -v mongod >/dev/null 2>&1; then
|
||||
msg_error "MongoDB binary not found after installation"
|
||||
return 1
|
||||
@ -3418,12 +3431,12 @@ EOF
|
||||
# - Optionally installs or updates global npm modules
|
||||
#
|
||||
# Variables:
|
||||
# NODE_VERSION - Node.js version to install (default: 22)
|
||||
# NODE_VERSION - Node.js version to install (default: 24 LTS)
|
||||
# NODE_MODULE - Comma-separated list of global modules (e.g. "yarn,@vue/cli@5.0.0")
|
||||
# ------------------------------------------------------------------------------
|
||||
|
||||
function setup_nodejs() {
|
||||
local NODE_VERSION="${NODE_VERSION:-22}"
|
||||
local NODE_VERSION="${NODE_VERSION:-24}"
|
||||
local NODE_MODULE="${NODE_MODULE:-}"
|
||||
|
||||
# ALWAYS clean up legacy installations first (nvm, etc.) to prevent conflicts
|
||||
@ -3485,14 +3498,11 @@ function setup_nodejs() {
|
||||
return 1
|
||||
}
|
||||
|
||||
# CRITICAL: Force APT cache refresh AFTER repository setup
|
||||
# This ensures NodeSource is the only nodejs source in APT cache
|
||||
# Force APT cache refresh after repository setup
|
||||
$STD apt update
|
||||
|
||||
# Install dependencies (NodeSource is now the only nodejs source)
|
||||
ensure_dependencies curl ca-certificates gnupg
|
||||
|
||||
# Install Node.js from NodeSource
|
||||
install_packages_with_retry "nodejs" || {
|
||||
msg_error "Failed to install Node.js ${NODE_VERSION} from NodeSource"
|
||||
return 1
|
||||
@ -3643,12 +3653,12 @@ function setup_php() {
|
||||
local CURRENT_PHP=""
|
||||
CURRENT_PHP=$(is_tool_installed "php" 2>/dev/null) || true
|
||||
|
||||
# CRITICAL: If wrong version is installed, remove it FIRST before any pinning
|
||||
# Remove conflicting PHP version before pinning
|
||||
if [[ -n "$CURRENT_PHP" && "$CURRENT_PHP" != "$PHP_VERSION" ]]; then
|
||||
msg_info "Removing conflicting PHP ${CURRENT_PHP} (need ${PHP_VERSION})"
|
||||
stop_all_services "php.*-fpm"
|
||||
$STD apt-get purge -y "php*" 2>/dev/null || true
|
||||
$STD apt-get autoremove -y 2>/dev/null || true
|
||||
$STD apt purge -y "php*" 2>/dev/null || true
|
||||
$STD apt autoremove -y 2>/dev/null || true
|
||||
fi
|
||||
|
||||
# NOW create pinning for the desired version
|
||||
@ -3675,7 +3685,7 @@ EOF
|
||||
}
|
||||
|
||||
ensure_apt_working || return 1
|
||||
$STD apt-get update
|
||||
$STD apt update
|
||||
|
||||
# Get available PHP version from repository
|
||||
local AVAILABLE_PHP_VERSION=""
|
||||
@ -3790,7 +3800,6 @@ EOF
|
||||
|
||||
local INSTALLED_VERSION=$(php -v 2>/dev/null | awk '/^PHP/{print $2}' | cut -d. -f1,2)
|
||||
|
||||
# Critical: if major.minor doesn't match, fail and cleanup
|
||||
if [[ "$INSTALLED_VERSION" != "$PHP_VERSION" ]]; then
|
||||
msg_error "PHP version mismatch: requested ${PHP_VERSION} but got ${INSTALLED_VERSION}"
|
||||
msg_error "This indicates a critical package installation issue"
|
||||
@ -3870,74 +3879,14 @@ function setup_postgresql() {
|
||||
local SUITE
|
||||
case "$DISTRO_CODENAME" in
|
||||
trixie | forky | sid)
|
||||
# For Debian Testing/Unstable, try PostgreSQL repo first, fallback to native packages
|
||||
|
||||
if verify_repo_available "https://apt.postgresql.org/pub/repos/apt" "trixie-pgdg"; then
|
||||
SUITE="trixie-pgdg"
|
||||
|
||||
setup_deb822_repo \
|
||||
"pgdg" \
|
||||
"https://www.postgresql.org/media/keys/ACCC4CF8.asc" \
|
||||
"https://apt.postgresql.org/pub/repos/apt" \
|
||||
"$SUITE" \
|
||||
"main"
|
||||
|
||||
if ! $STD apt update; then
|
||||
msg_warn "Failed to update PostgreSQL repository, falling back to native packages"
|
||||
SUITE=""
|
||||
fi
|
||||
else
|
||||
SUITE=""
|
||||
SUITE="bookworm-pgdg"
|
||||
fi
|
||||
|
||||
# If no repo or packages not installable, use native Debian packages
|
||||
if [[ -z "$SUITE" ]] || ! apt-cache show "postgresql-${PG_VERSION}" 2>/dev/null | grep -q "Version:"; then
|
||||
msg_info "Using native Debian packages for $DISTRO_CODENAME"
|
||||
|
||||
# Install ssl-cert dependency if available
|
||||
if apt-cache search "^ssl-cert$" 2>/dev/null | grep -q .; then
|
||||
$STD apt install -y ssl-cert 2>/dev/null || true
|
||||
fi
|
||||
|
||||
if ! $STD apt install -y postgresql postgresql-client 2>/dev/null; then
|
||||
msg_error "Failed to install native PostgreSQL packages"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if ! command -v psql >/dev/null 2>&1; then
|
||||
msg_error "PostgreSQL installed but psql command not found"
|
||||
return 1
|
||||
fi
|
||||
|
||||
# Restore database backup if we upgraded from previous version
|
||||
if [[ -n "$CURRENT_PG_VERSION" ]]; then
|
||||
msg_info "Restoring PostgreSQL databases from backup..."
|
||||
$STD runuser -u postgres -- psql </var/lib/postgresql/backup_$(date +%F)_v${CURRENT_PG_VERSION}.sql 2>/dev/null || {
|
||||
msg_warn "Failed to restore database backup - this may be expected for major version upgrades"
|
||||
}
|
||||
fi
|
||||
|
||||
$STD systemctl enable --now postgresql 2>/dev/null || true
|
||||
|
||||
# Get actual installed version
|
||||
INSTALLED_VERSION="$(psql -V 2>/dev/null | awk '{print $3}' | cut -d. -f1)"
|
||||
|
||||
# Add PostgreSQL binaries to PATH
|
||||
if ! grep -q '/usr/lib/postgresql' /etc/environment 2>/dev/null; then
|
||||
echo 'PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/'"${INSTALLED_VERSION}"'/bin"' >/etc/environment
|
||||
fi
|
||||
|
||||
cache_installed_version "postgresql" "$INSTALLED_VERSION"
|
||||
msg_ok "Setup PostgreSQL $INSTALLED_VERSION (native)"
|
||||
|
||||
# Install optional modules if specified
|
||||
if [[ -n "$PG_MODULES" ]]; then
|
||||
IFS=',' read -ra MODULES <<<"$PG_MODULES"
|
||||
for module in "${MODULES[@]}"; do
|
||||
$STD apt install -y "postgresql-${INSTALLED_VERSION}-${module}" 2>/dev/null || true
|
||||
done
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
*)
|
||||
SUITE=$(get_fallback_suite "$DISTRO_ID" "$DISTRO_CODENAME" "https://apt.postgresql.org/pub/repos/apt")
|
||||
@ -4831,3 +4780,214 @@ function setup_yq() {
|
||||
cache_installed_version "yq" "$FINAL_VERSION"
|
||||
msg_ok "Setup yq $FINAL_VERSION"
|
||||
}
|
||||
|
||||
# ------------------------------------------------------------------------------
|
||||
# Docker Engine Installation and Management (All-In-One)
|
||||
#
|
||||
# Description:
|
||||
# - Detects and migrates old Docker installations
|
||||
# - Installs/Updates Docker Engine via official repository
|
||||
# - Optional: Installs/Updates Portainer CE
|
||||
# - Updates running containers interactively
|
||||
# - Cleans up legacy repository files
|
||||
#
|
||||
# Usage:
|
||||
# setup_docker
|
||||
# DOCKER_PORTAINER="true" setup_docker
|
||||
# DOCKER_LOG_DRIVER="json-file" setup_docker
|
||||
#
|
||||
# Variables:
|
||||
# DOCKER_PORTAINER - Install Portainer CE (optional, "true" to enable)
|
||||
# DOCKER_LOG_DRIVER - Log driver (optional, default: "journald")
|
||||
# DOCKER_SKIP_UPDATES - Skip container update check (optional, "true" to skip)
|
||||
#
|
||||
# Features:
|
||||
# - Migrates from get.docker.com to repository-based installation
|
||||
# - Updates Docker Engine if newer version available
|
||||
# - Interactive container update with multi-select
|
||||
# - Portainer installation and update support
|
||||
# ------------------------------------------------------------------------------
function setup_docker() {
  local docker_installed=false
  local portainer_installed=false

  # Check if Docker is already installed
  if command -v docker &>/dev/null; then
    docker_installed=true
    DOCKER_CURRENT_VERSION=$(docker --version | grep -oP '\d+\.\d+\.\d+' | head -1)
    msg_info "Docker $DOCKER_CURRENT_VERSION detected"
  fi

  # Check if Portainer is running
  if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^portainer$'; then
    portainer_installed=true
    msg_info "Portainer container detected"
  fi

  # Cleanup old repository configurations
  if [ -f /etc/apt/sources.list.d/docker.list ]; then
    msg_info "Migrating from old Docker repository format"
    rm -f /etc/apt/sources.list.d/docker.list
    rm -f /etc/apt/keyrings/docker.asc
  fi

  # Setup/Update Docker repository
  msg_info "Setting up Docker Repository"
  setup_deb822_repo \
    "docker" \
    "https://download.docker.com/linux/$(get_os_info id)/gpg" \
    "https://download.docker.com/linux/$(get_os_info id)" \
    "$(get_os_info codename)" \
    "stable" \
    "$(dpkg --print-architecture)"

  # Install or upgrade Docker
  if [ "$docker_installed" = true ]; then
    msg_info "Checking for Docker updates"
    DOCKER_LATEST_VERSION=$(apt-cache policy docker-ce | grep Candidate | awk '{print $2}' | cut -d':' -f2 | cut -d'-' -f1)

    if [ "$DOCKER_CURRENT_VERSION" != "$DOCKER_LATEST_VERSION" ]; then
      msg_info "Updating Docker $DOCKER_CURRENT_VERSION → $DOCKER_LATEST_VERSION"
      $STD apt install -y --only-upgrade \
        docker-ce \
        docker-ce-cli \
        containerd.io \
        docker-buildx-plugin \
        docker-compose-plugin
      msg_ok "Updated Docker to $DOCKER_LATEST_VERSION"
    else
      msg_ok "Docker is up-to-date ($DOCKER_CURRENT_VERSION)"
    fi
  else
    msg_info "Installing Docker"
    $STD apt install -y \
      docker-ce \
      docker-ce-cli \
      containerd.io \
      docker-buildx-plugin \
      docker-compose-plugin

    DOCKER_CURRENT_VERSION=$(docker --version | grep -oP '\d+\.\d+\.\d+' | head -1)
    msg_ok "Installed Docker $DOCKER_CURRENT_VERSION"
  fi

  # Configure daemon.json
  local log_driver="${DOCKER_LOG_DRIVER:-journald}"
  mkdir -p /etc/docker
  if [ ! -f /etc/docker/daemon.json ]; then
    cat <<EOF >/etc/docker/daemon.json
{
  "log-driver": "$log_driver"
}
EOF
  fi
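  # Note: daemon.json is only written when absent. A hypothetical override such as
  #   DOCKER_LOG_DRIVER="json-file" setup_docker
  # would therefore take effect on a fresh system only; size-capped json-file logging
  # would additionally need "log-opts" (e.g. "max-size": "10m"), which this function
  # does not configure.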

  # Enable and start Docker
  systemctl enable -q --now docker

  # Portainer Management
  if [[ "${DOCKER_PORTAINER:-}" == "true" ]]; then
    if [ "$portainer_installed" = true ]; then
      msg_info "Checking for Portainer updates"
      PORTAINER_CURRENT=$(docker inspect portainer --format='{{.Config.Image}}' 2>/dev/null | cut -d':' -f2)
      PORTAINER_LATEST=$(curl -fsSL https://registry.hub.docker.com/v2/repositories/portainer/portainer-ce/tags?page_size=100 | grep -oP '"name":"\K[0-9]+\.[0-9]+\.[0-9]+"' | head -1 | tr -d '"')
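      # The Docker Hub tags API returns JSON tag objects; the grep keeps the first plain
      # semver tag name, e.g. "2.21.4" (version shown purely for illustration).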

      if [ "$PORTAINER_CURRENT" != "$PORTAINER_LATEST" ]; then
        read -r -p "${TAB3}Update Portainer $PORTAINER_CURRENT → $PORTAINER_LATEST? <y/N> " prompt
        if [[ ${prompt,,} =~ ^(y|yes)$ ]]; then
          msg_info "Updating Portainer"
          docker stop portainer
          docker rm portainer
          docker pull portainer/portainer-ce:latest
          docker run -d \
            -p 9000:9000 \
            -p 9443:9443 \
            --name=portainer \
            --restart=always \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v portainer_data:/data \
            portainer/portainer-ce:latest
          msg_ok "Updated Portainer to $PORTAINER_LATEST"
        fi
      else
        msg_ok "Portainer is up-to-date ($PORTAINER_CURRENT)"
      fi
    else
      msg_info "Installing Portainer"
      docker volume create portainer_data
      docker run -d \
        -p 9000:9000 \
        -p 9443:9443 \
        --name=portainer \
        --restart=always \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v portainer_data:/data \
        portainer/portainer-ce:latest

      LOCAL_IP=$(hostname -I | awk '{print $1}')
      msg_ok "Installed Portainer (http://${LOCAL_IP}:9000)"
    fi
  fi

  # Interactive Container Update Check
  if [[ "${DOCKER_SKIP_UPDATES:-}" != "true" ]] && [ "$docker_installed" = true ]; then
    msg_info "Checking for container updates"

    # Get list of running containers with update status
    local containers_with_updates=()
    local container_info=()
    local index=1

    while IFS= read -r container; do
      local name=$(echo "$container" | awk '{print $1}')
      local image=$(echo "$container" | awk '{print $2}')
      local current_digest=$(docker inspect "$name" --format='{{.Image}}' 2>/dev/null | cut -d':' -f2 | cut -c1-12)

      # Pull latest image digest
      docker pull "$image" >/dev/null 2>&1
      local latest_digest=$(docker inspect "$image" --format='{{.Id}}' 2>/dev/null | cut -d':' -f2 | cut -c1-12)

      if [ "$current_digest" != "$latest_digest" ]; then
        containers_with_updates+=("$name")
        container_info+=("${index}) ${name} (${image})")
        ((index++))
      fi
    done < <(docker ps --format '{{.Names}} {{.Image}}')

    if [ ${#containers_with_updates[@]} -gt 0 ]; then
      echo ""
      echo "${TAB3}Container updates available:"
      for info in "${container_info[@]}"; do
        echo "${TAB3}  $info"
      done
      echo ""
      read -r -p "${TAB3}Select containers to update (e.g., 1,3,5 or 'all' or 'none'): " selection

      if [[ ${selection,,} == "all" ]]; then
        for container in "${containers_with_updates[@]}"; do
          msg_info "Updating container: $container"
          docker stop "$container"
          docker rm "$container"
          # Note: This requires the original docker run command - best to recreate via compose
          msg_ok "Stopped and removed $container (please recreate with updated image)"
        done
      elif [[ ${selection,,} != "none" ]]; then
        IFS=',' read -ra SELECTED <<<"$selection"
        for num in "${SELECTED[@]}"; do
          num=$(echo "$num" | xargs) # trim whitespace
          if [[ "$num" =~ ^[0-9]+$ ]] && [ "$num" -ge 1 ] && [ "$num" -le "${#containers_with_updates[@]}" ]; then
            container="${containers_with_updates[$((num - 1))]}"
            msg_info "Updating container: $container"
            docker stop "$container"
            docker rm "$container"
            msg_ok "Stopped and removed $container (please recreate with updated image)"
          fi
        done
      fi
    else
      msg_ok "All containers are up-to-date"
    fi
  fi
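  # The selected containers are only stopped and removed above; recreation is left to the
  # operator. A hypothetical follow-up (paths and project names purely illustrative) could be:
  #   cd /opt/<stack> && docker compose pull && docker compose up -d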

  msg_ok "Docker setup completed"
}

@@ -24,7 +24,7 @@ RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
METHOD=""
NSAPP="opnsense-vm"
var_os="opnsense"
-var_version="25.1"
+var_version="25.7"
#
GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
GEN_MAC_LAN=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')

@@ -670,7 +670,7 @@ if [ -n "$WAN_BRG" ]; then
  msg_ok "WAN interface added"
  sleep 5 # Brief pause after adding network interface
fi
-send_line_to_vm "sh ./opnsense-bootstrap.sh.in -y -f -r 25.1"
+send_line_to_vm "sh ./opnsense-bootstrap.sh.in -y -f -r 25.7"
msg_ok "OPNsense VM is being installed, do not close the terminal, or the installation will fail."
# We need to wait for the OPNsense build process to finish; this takes a few minutes.
sleep 1000