Proxmox Backup Server 3.4 Released
Proxmox has officially released Proxmox Backup Server 3.4, bringing significant enhancements focused on performance, compatibility, and user control. This latest version builds upon the stable Debian Bookworm foundation and incorporates updated kernel options alongside numerous quality-of-life improvements for administrators.
Key Takeaways
- Garbage Collection Performance Boost: A new caching mechanism significantly speeds up the marking phase of garbage collection, reducing runtime at the cost of increased memory usage.
- Enhanced Sync Job Filtering: Gain finer control over sync jobs by selecting snapshots based on their encryption or verification status, in addition to existing group filters.
- Static Command-Line Client: A new statically linked `proxmox-backup-client` binary improves compatibility across various Linux distributions, simplifying file-level backups from non-Debian hosts.
- Updated Core Components: Based on Debian 12.10 “Bookworm”, featuring the latest stable Linux 6.8 kernel (with 6.14 available opt-in) and ZFS 2.2.7.
- Improved Installation & Management: Numerous updates to the installer (GUI and automated) and system management tools enhance usability, security, and reliability.
Table of Contents
- Proxmox Backup Server 3.4 Released
- Key Takeaways
- Table of Contents
- Release Highlights
- Changelog Overview
- Further Information
Release Highlights
Proxmox Backup Server 3.4 introduces several key features aimed at improving efficiency and flexibility for backup administrators.
Performance Improvements for Garbage Collection
Garbage collection is crucial for reclaiming storage space by removing data chunks no longer referenced by any backup snapshot. The process involves marking all currently used chunks.
The marking phase now uses a cache to avoid redundant marking operations. This increases memory consumption but can significantly decrease the runtime of garbage collection.
This optimization addresses potential bottlenecks in large environments, making storage maintenance faster. The cache size is configurable via the datastore’s tuning options, allowing administrators to balance memory usage and performance gains based on their specific hardware and workload.
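As a sketch, the cache capacity could be raised through the datastore's tuning options on the command line. The datastore name and the tuning key below are assumptions for illustration; confirm the exact key against the documentation for your installed version:

```shell
# Hypothetical example: enlarge the garbage-collection chunk cache for a
# datastore named "store1". The tuning key name is an assumption; confirm
# it with `proxmox-backup-manager datastore update --help`.
proxmox-backup-manager datastore update store1 \
    --tuning 'gc-cache-capacity=1048576'
```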
More Fine-Grained Control for Sync Jobs
Sync jobs are essential for replicating backup data between Proxmox Backup Server instances, enabling off-site backups and disaster recovery strategies. While filtering by backup group was already possible, version 3.4 adds more granularity.
It is now also possible to synchronize only backup snapshots that are encrypted, or only those that have been verified.
This allows for more targeted synchronization strategies, such as only replicating verified backups to ensure data integrity on the remote site or prioritizing encrypted backups for security compliance. Note that the `transfer-last` setting takes precedence over these new filters.
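As a sketch, such a filter could be applied to an existing sync job from the command line. The job ID and flag name below are placeholders; verify the actual option names with `proxmox-backup-manager sync-job update --help`:

```shell
# Hypothetical example: only pull snapshots that have been verified.
# The job ID "s-offsite-01" and the flag name are placeholders.
proxmox-backup-manager sync-job update s-offsite-01 \
    --verified-only true
```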
Static Build of the Command-Line Client
While Proxmox Backup Server integrates tightly with Proxmox VE, its command-line client (`proxmox-backup-client`) is a versatile tool for backing up data from various sources. Previously, packages were primarily available for Debian-based systems.
A new statically linked binary increases the compatibility with Linux hosts running other distributions. This makes it easier to use Proxmox Backup Server to create file-level backups of arbitrary Linux hosts.
This significantly broadens the usability of Proxmox Backup Server, allowing users on distributions like CentOS, Fedora, SUSE, or others to easily install and use the client without complex dependency management, facilitating standardized backup procedures across heterogeneous environments.
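A minimal usage sketch, assuming the static binary has been placed on the host's `PATH`; the repository string (user, host, and datastore name) is an example:

```shell
# File-level backup of /etc from an arbitrary Linux host.
# The repository string below is a placeholder.
export PBS_REPOSITORY='backup@pbs@pbs.example.com:store1'
proxmox-backup-client backup etc.pxar:/etc
```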
Latest Linux 6.14 Kernel Available
Proxmox Backup Server 3.4 ships with the stable Linux 6.8.12 kernel by default. For users who need newer hardware support or features, the 6.14 kernel is available as an opt-in option, providing flexibility for different hardware requirements and preferences. ZFS 2.2.7 is included, with compatibility patches specifically for the 6.14 kernel.
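Opting in is expected to follow the usual Proxmox kernel meta-package scheme; the package name below is an assumption to confirm via `apt search proxmox-kernel`:

```shell
# Hypothetical example: install the opt-in 6.14 kernel.
apt update
apt install proxmox-kernel-6.14
# A reboot is required to boot into the new kernel.
```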
Changelog Overview
Beyond the highlights, version 3.4 incorporates a wide range of enhancements and fixes across the platform.
Enhancements in the Web Interface (GUI)
- Added the ability to configure a default realm for the login dialog, streamlining logins for environments with multiple authentication sources.
- The prune simulator now correctly handles schedules combining range and step sizes and accurately displays kept backups.
- Fixed a rare issue preventing the GUI from fully loading after navigating to “Prune & GC Jobs”.
- Enabled the deletion of comments associated with API tokens.
- Improved context for translators by fixing split translatable strings.
- Various minor UI improvements for a smoother user experience.
General Backend Improvements
- Garbage Collection: Besides the caching improvement, the marking phase benefits from improved chunk iteration logic. The configurable cache capacity is available in datastore tuning options.
- Sync Job Filtering: Implemented the ability to filter sync jobs by verified-only or encrypted-only snapshots.
- Filesystem `atime` Check: Added a crucial safeguard during datastore creation and garbage collection. It now checks if the underlying filesystem correctly honours `atime` (access time) updates, preventing potential data loss on non-compliant filesystems. This check is enabled by default but can be disabled in tuning options.
- Configurable `atime` Cutoff: Advanced users can now adjust the garbage collection `atime` cutoff (defaulting to 24 hours 5 minutes) via tuning options. This allows potentially faster chunk removal on filesystems with immediate `atime` updates.
- API Token Secret Regeneration: Added the capability to generate a new secret for an existing API token via the API and GUI.
- Reverted Chunk Check: Rolled back a check, introduced in 3.3, for known-but-missing chunks during backup creation, due to scalability issues reported by the community. An alternative approach is planned for the future.
- Removable Datastore Unmount: Ensured proper unmounting if creating a removable datastore fails.
- Empty Backup Group Removal: Backup groups are now automatically removed when their last snapshot is deleted, resolving potential ownership conflicts.
- Decoupled Locking: Backup group, snapshot, and manifest locking now uses `tmpfs` under `/run` instead of the datastore’s filesystem, improving reliability, especially on network filesystems.
- API Token Permission Cleanup: Ensured permissions are correctly deleted when an API token is removed.
- Chunk Ownership: Fixed chunk file ownership when the backup process runs as root.
- Prune Job Logging: Addressed an issue where prune jobs sometimes failed to write task logs, resulting in an “Unknown” status.
- Datastore Listing Performance: Improved performance when listing datastores on large setups by optimizing authorization checks.
- Error Reporting: Enhanced error messages now often include more system details, like the `errno`.
- Disk Wiping: Ensured “Wipe Disk” also clears the GPT header backup at the end of the disk.
- Task Status Reporting: Fixed task status reporting even when logging is disabled via environment variables.
- Log output duplication fixed for `proxmox-backup-manager`.
- Fixed cleanup for worker tasks failing during startup.
- Addressed a race condition affecting the current task count.
- Increased locking timeout for the task index file to mitigate contention issues.
- Fixed overly eager abortion of verify jobs on manifest update failure.
- Resolved an issue with file descriptors not being closed properly on daemon reload.
- Corrected version checking for remote Proxmox Backup Server instances.
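The `atime` safeguard described above can also be probed manually. The following self-contained sketch, similar in spirit to the check the server performs, backdates a file's access time so that even `relatime` mounts must refresh it on read, then checks whether the filesystem did so:

```shell
# Probe whether the current filesystem honours atime updates.
dir=$(mktemp -d)
probe="$dir/atime-probe"
touch "$probe"
# Backdate atime: relatime refreshes atime on read when atime < mtime.
touch -a -d '2001-01-01 00:00:00' "$probe"
before=$(stat -c %X "$probe")
cat "$probe" > /dev/null
after=$(stat -c %X "$probe")
if [ "$after" -gt "$before" ]; then
    echo "atime honoured"
else
    echo "atime NOT updated (noatime mount?)"
fi
rm -rf "$dir"
```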
Client Improvements
- Introduced the statically linked `proxmox-backup-client` binary for broader Linux distribution compatibility.
- Enabled reading passwords (like API token secrets or encryption key passwords) from credentials passed by `systemd`.
- Enhanced the `vma-to-pbs` tool (for importing Proxmox VE VM archives):
  - Optionally read repository/password details from environment variables.
  - Added `--version` command-line option support.
  - Prevented leaving zombie processes (`zstd`, `lzop`, `zcat`).
  - Improved error message clarity for unexpected VMA file endings.
- Clarified archive naming restrictions in documentation.
- Fixed issues in file-based backup change detection modes (introduced in 3.3), including proper file size consideration and resolving a race condition during container backups.
- Corrected file restore from image backups to use `blockdev` options and fixed related regressions.
Tape Backup Updates
- Allowed increasing worker threads for chunk reading during tape backup, potentially boosting throughput on specific hardware setups.
- Added a dedicated section on disaster recovery from tape to the official documentation.
Installation ISO Changes
- Increased the minimum required root password length from 5 to 8 characters during installation, aligning with current NIST recommendations.
- Improved feedback during automated installation failures.
- Made RAID level specification case-insensitive in automated installer answer files.
- Prevented misleading progress messages during automated installation stalls.
- Correctly honored user preference for rebooting on error during automated installs.
- Allowed binary executables (not just scripts) for the first-boot hook in automated installations.
- Permitted both `snake_case` and `kebab-case` for properties in the answer file (preferring `kebab-case` for consistency). `snake_case` will be deprecated gradually.
- Validated locale and first-boot-hook settings during ISO preparation, preventing installation failures later.
- Suppressed non-critical kernel messages that could overlay the TUI installer.
- Preserved DHCP-detected network settings in the GUI installer even without immediate confirmation.
- Added an option to retrieve the FQDN via DHCP during automated installation.
- Improved error handling for missing DHCP servers or leases, with more sensible fallback network values.
- Added an option to power off the machine after successful automated installation.
- Optimized ZFS ARC maximum size settings for systems with limited RAM, ensuring at least 1 GiB is left for the system.
- Enabled `proxmox-boot-tool` for managing EFI system partitions on Btrfs installations.
- Made GRUB install the bootloader directly to the disk for better resilience against EFI variable corruption.
- Fixed a bug in the GUI installer’s disk options display when switching between filesystem types.
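The ZFS ARC sizing chosen by the installer can also be adjusted by hand after installation; the 4 GiB cap below is purely an example value:

```shell
# Hypothetical example: cap the ZFS ARC at 4 GiB after installation.
echo "options zfs zfs_arc_max=$((4 * 1024 * 1024 * 1024))" \
    > /etc/modprobe.d/zfs.conf
# Rebuild the initramfs so the setting applies at early boot.
update-initramfs -u
```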
Improved Management of Proxmox Backup Server Machines
- Addressed several GRUB vulnerabilities related to Secure Boot bypass. Documentation now includes guidance on using revocation policies.
- Notification System Improvements:
  - Allowed overriding both plain text and HTML notification templates.
  - Streamlined templates for easier customization.
  - Clarified descriptions for notification matcher modes.
  - Fixed an error during notification target creation/updates.
  - Ensured webhook/gotify HTTP requests include the `Content-Length` header.
- Lowered the minimum character length for InfluxDB organization and bucket names to one.
- Improved the accuracy of the “Used Memory” metric by utilizing the kernel’s `MemAvailable` statistic, correctly accounting for reclaimable memory.
- Backported kernel patches to avoid performance penalties on specific Raptor Lake CPUs and fix rare Open vSwitch network crashes.
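The `MemAvailable` statistic behind the improved memory metric can be inspected directly on any modern Linux kernel, and used memory derived from it as a sketch:

```shell
# Derive "used" memory as total minus the kernel's MemAvailable
# estimate, which accounts for reclaimable memory.
mem_avail_kib=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
mem_total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "used: $(( mem_total_kib - mem_avail_kib )) KiB of ${mem_total_kib} KiB"
```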
Further Information
For more details on planned features and the future direction of Proxmox Backup Server, please refer to the official roadmap.