
PBS Sync Jobs: Offsite Replication for Disaster Recovery

  • February 19, 2026
  • 12 min read
Bennet Gallein
remote-backups.com operator

Local backups protect against hardware failure and accidental deletion. They don't protect against site-level disasters: fire, flood, or ransomware that compromises your entire network. PBS sync jobs solve this by replicating your backups to a geographically separate PBS instance. This post covers setup, scheduling, and the operational details that keep offsite replication running smoothly.

Key Takeaways
  • PBS sync jobs use a pull model — the destination PBS pulls snapshots from the source
  • Sync is incremental at the chunk level, transferring only new/changed chunks
  • Sync must complete before local pruning removes snapshots you want offsite
  • Use --remove-vanished to mirror source retention, or disable it to keep independent history
  • Client-side encryption with --encrypted-only ensures offsite copies never contain plaintext

Why Offsite Replication Matters

The 3-2-1 backup strategy calls for three copies of your data on two different media types, with one copy offsite. Most Proxmox environments handle the first two: PVE stores the live copy, and PBS keeps backup snapshots. The offsite copy is where many setups fall short.

A local PBS protects you from disk failures, accidental VM deletions, and corrupted filesystems. It does not protect you if the building floods, a fire takes out the server room, or ransomware spreads across your LAN. In a ransomware scenario, an attacker with network access can potentially reach your local PBS. We covered this attack surface in detail in our post on immutable backups and ransomware protection. Proper permissions limit the damage, but a compromised PBS server means everything stored there is at risk.

Some compliance frameworks require geographic separation of backup data. If you're subject to ISO 27001, SOC 2, or industry-specific regulations, a local-only backup strategy may not pass audit.

The answer is the same in all cases: replicate your backups to a second PBS instance in a different location.

How PBS Sync Jobs Work

PBS sync jobs operate on a pull model. This is counterintuitive and worth understanding upfront. The PBS instance where you configure the sync job is the destination. It reaches out to a configured "remote" (the source) and pulls snapshot data to itself.

In practical terms: if you want your backups to end up on an offsite PBS server, you configure the sync job on that offsite server. It connects to your local PBS (the "remote" in PBS terminology) and downloads the snapshots.

[Diagram] PBS sync job: the offsite (destination) PBS pulls snapshot data from your local (source) PBS.

Sync is incremental at the chunk level. PBS deduplication means each backup snapshot is a manifest pointing to content-addressed chunks. When the destination syncs, it requests the manifest, checks which chunks it already has, and only transfers the missing ones. If a chunk already exists on the destination from a previous sync, it's skipped. Deduplication is preserved end to end.
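As a mental model, the chunk-level sync can be sketched in a few lines of Python. This is illustrative only, not the actual PBS wire protocol: chunks are content-addressed by digest, and the destination fetches only digests it does not already hold.

```python
# Illustrative sketch of chunk-level incremental sync (NOT the PBS protocol).
# Chunks are content-addressed by SHA-256; only missing chunks are transferred.
import hashlib

def digest(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def sync_snapshot(source_chunks: dict[str, bytes],
                  dest_store: dict[str, bytes],
                  manifest: list[str]) -> int:
    """Pull chunks listed in a snapshot manifest; return bytes transferred."""
    transferred = 0
    for d in manifest:
        if d not in dest_store:            # chunk missing on the destination
            dest_store[d] = source_chunks[d]
            transferred += len(source_chunks[d])
    return transferred

# First sync transfers everything in the manifest; a second sync of an
# overlapping manifest transfers only the chunk the destination lacks.
src = {digest(b): b for b in (b"aaa", b"bbb", b"ccc")}
dst: dict[str, bytes] = {}
first = sync_snapshot(src, dst, [digest(b"aaa"), digest(b"bbb")])
second = sync_snapshot(src, dst, [digest(b"aaa"), digest(b"bbb"), digest(b"ccc")])
print(first, second)  # 6 3
```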

The --remove-vanished flag controls what happens to snapshots on the destination that no longer exist on the source. With this flag enabled, pruned snapshots on the source get removed from the destination too. Without it, the destination keeps everything it has ever synced, even after the source prunes those snapshots.
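The flag's semantics can be sketched with sets; the snapshot names here are hypothetical and the model is deliberately simplified:

```python
# Simplified model of --remove-vanished: after a sync, snapshots present on
# the destination but gone from the source are deleted only when the flag is set.
def apply_remove_vanished(source: set[str], dest: set[str],
                          remove_vanished: bool) -> set[str]:
    synced = dest | source              # sync pulls anything new from the source
    if remove_vanished:
        synced -= (dest - source)       # drop snapshots the source has pruned
    return synced

source = {"vm/100/2026-02-18", "vm/100/2026-02-19"}
dest = {"vm/100/2026-02-10", "vm/100/2026-02-18"}
print(sorted(apply_remove_vanished(source, dest, remove_vanished=True)))
print(sorted(apply_remove_vanished(source, dest, remove_vanished=False)))
```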

Sync jobs also respect namespace boundaries, so you can organize and sync specific namespaces independently.

Prerequisites and Planning

Before configuring anything, a few things need to be in place.

Network access. The destination PBS must reach the source PBS on port 8007 (the PBS API port). If your local PBS sits behind NAT, you'll need port forwarding, a VPN tunnel, or a reverse proxy. If you use a managed offsite PBS like remote-backups.com as your destination, your local PBS just needs to accept inbound connections from our infrastructure.
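A quick reachability check from the destination side might look like this; the host is the placeholder address used later in this post, and -k skips certificate validation, so use it only for connectivity testing:

```shell
# From the DESTINATION, confirm the source PBS API answers on port 8007
# (203.0.113.50 is a placeholder; replace with your source's address)
curl -sk https://203.0.113.50:8007/api2/json/version
```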

A user account on the source PBS. The destination connects to the source using credentials. Create a dedicated user with the DatastoreBackup role on the source. This grants read access to snapshots without allowing deletion or pruning. If you haven't set up your local PBS yet, our datastore setup guide covers the basics.
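Assuming the user and datastore names used later in this post, creating that user on the source could look like the following sketch; adjust names to your environment:

```shell
# On the SOURCE PBS: create a dedicated sync user and grant it the
# DatastoreBackup role on the datastore (names are placeholders)
proxmox-backup-manager user create sync@pbs
proxmox-backup-manager acl update /datastore/local-datastore DatastoreBackup --auth-id sync@pbs
```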

The TLS fingerprint of the source PBS. PBS uses certificate fingerprints for trust verification. Retrieve this from the source PBS dashboard under Configuration > Certificates, or via CLI:

```bash
proxmox-backup-manager cert info | grep Fingerprint
```
Retrieve PBS fingerprint

Bandwidth estimation. The initial sync transfers your entire datastore. Subsequent syncs only transfer new chunks. Use our initial seed calculator to estimate how long the first sync will take.

Bandwidth Estimates for Initial Sync

Datastore Size | 50 Mbps Upload | 100 Mbps Upload | 1 Gbps Upload
500 GB         | ~22 hours      | ~11 hours       | ~1 hour
1 TB           | ~44 hours      | ~22 hours       | ~2.2 hours
5 TB           | ~9 days        | ~4.5 days       | ~11 hours
10 TB          | ~18 days       | ~9 days         | ~22 hours

For large datastores on slow links, consider a physical initial seed: back up to a portable drive, ship it to the offsite location, and import it there. After the initial import, incremental syncs handle the rest.
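The estimates above follow from straightforward arithmetic. A small helper, assuming decimal units and a fully utilized link with no protocol overhead:

```python
# Back-of-envelope transfer time: size in TB, link speed in Mbps.
# Assumes decimal units and a fully saturated link (no overhead).
def transfer_time_hours(size_tb: float, link_mbps: float) -> float:
    bits = size_tb * 1e12 * 8              # TB -> bits
    seconds = bits / (link_mbps * 1e6)     # Mbps -> bits per second
    return seconds / 3600

print(round(transfer_time_hours(1, 100), 1))   # ~22.2 hours: 1 TB at 100 Mbps
print(round(transfer_time_hours(10, 1000), 1)) # ~22.2 hours: 10 TB at 1 Gbps
```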

What You Need Before Starting
15-30 minutes · intermediate
  • Network access from destination PBS to source PBS on port 8007
  • A dedicated backup user with DatastoreBackup role on the source
  • The TLS certificate fingerprint of the source PBS
  • Sufficient bandwidth for initial sync (use the seed calculator to estimate)

Step-by-Step: Configuring a Sync Job

1. Create a Remote Connection

On the destination PBS (the offsite server where replicated backups should land), add your local PBS as a remote.

GUI: Navigate to Configuration > Remotes > Add.

CLI:

```bash
proxmox-backup-manager remote create my-local-pbs \
  --host 203.0.113.50 \
  --userid sync@pbs \
  --password \
  --fingerprint 64:d3:ff:3a:50:38:a2:b7:4c:9e:...
```
Add remote on the destination PBS

The --password flag without a value prompts for interactive input. You can also pass it inline for scripted setups, but avoid storing passwords in shell history.

Test the connection by listing configured remotes:

```bash
proxmox-backup-manager remote list
```
Verify remote configuration

If the remote appears without errors, the destination can reach your source PBS.

2. Create the Sync Job

Still on the destination PBS:

GUI: Navigate to your target datastore, then Sync Jobs > Add.

CLI:

```bash
proxmox-backup-manager sync-job create offsite-sync \
  --remote my-local-pbs \
  --remote-store local-datastore \
  --store offsite-datastore \
  --schedule "Mon..Fri 06:00" \
  --remove-vanished true \
  --owner sync@pbs
```
Create a sync job

Key parameters:

  • --remote and --remote-store: The remote connection name and the datastore on the source PBS to pull from.
  • --store: The datastore on the destination where synced snapshots are stored.
  • --schedule: Schedule in systemd calendar-event syntax. Mon..Fri 06:00 runs every weekday at 6 AM.
  • --remove-vanished: When true, snapshots pruned on the source get removed from the destination. Set to false if you want the destination to retain everything independently of the source's retention.
  • --group-filter: Optional filter to sync specific backup groups only (e.g., group:vm/105). Omit to sync all groups.
  • --owner: The user who owns the synced snapshots on the destination.

3. Run and Verify

Trigger the first sync manually:

```bash
proxmox-backup-manager sync-job run offsite-sync
```
Manually trigger sync job

Monitor progress in the task log (Administration > Task Log in the GUI). The first run transfers all existing snapshots and their chunks. This will take a while depending on datastore size and bandwidth.

Once complete, verify that snapshots appear on the destination datastore. Snapshot names, timestamps, and backup groups should match the source.

First Sync Takes Time

The initial sync transfers your entire datastore. A 1 TB datastore over a 100 Mbps link takes roughly 22 hours. Schedule the first sync during low-usage periods or consider a physical initial seed for large datasets.

Scheduling Best Practices

The order of operations matters. Your backup pipeline should follow this sequence:

  1. Local backup job completes on PVE
  2. Sync job runs on the destination PBS, pulling new snapshots
  3. Local prune job runs on the source PBS, removing old snapshots
  4. Remote prune job runs on the destination (if applicable)

Sync must happen after backups complete so there's fresh data to replicate. Sync must also finish before local pruning removes the snapshots you want offsite.

Recommended Sync Schedules

Use Case            | Schedule      | Notes
Critical production | Every 6 hours | Minimizes RPO, requires adequate bandwidth
Standard VMs        | Daily         | Run after nightly backup window closes
Archives            | Weekly        | Low-change data, weekly sync is sufficient
Compliance          | Daily         | Ensures daily offsite copies exist for audit
Sync Before Local Prune Window Closes

Your sync job must complete before local retention policies delete snapshots. If you keep 7 daily snapshots locally and sync runs weekly, you'll miss snapshots that were pruned between sync runs. Either sync daily or extend local retention to cover the gap.
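A toy simulation makes the gap concrete. The model is deliberately simplified: one backup per day, prune runs immediately after, and sync runs every N days:

```python
# Toy model of the retention/sync race: daily backups, prune to keep-daily,
# sync every sync_interval days. Returns snapshots that never made it offsite.
def missed_offsite(days: int, keep_daily: int, sync_interval: int) -> list[int]:
    local: list[int] = []
    offsite: set[int] = set()
    for day in range(days):
        local.append(day)                  # nightly backup
        local = local[-keep_daily:]        # prune beyond local retention
        if day % sync_interval == 0:       # periodic pull by the destination
            offsite.update(local)
    # Snapshots already pruned locally that never reached the offsite copy
    return [d for d in range(days) if d not in offsite and d < days - keep_daily]

print(missed_offsite(28, keep_daily=7, sync_interval=1))  # daily sync: no gaps
print(missed_offsite(28, keep_daily=5, sync_interval=7))  # weekly sync: gaps
```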

If bandwidth is limited, several approaches help.

Bandwidth throttling. PBS doesn't have a built-in bandwidth limit on sync jobs, but you can use OS-level traffic shaping (tc on Linux) or VPN-level throttling to prevent sync from saturating your link during business hours.
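For example, a token bucket filter with tc could cap sync traffic on the source's uplink; the interface name and rate below are placeholders, and the commands require root:

```shell
# Cap outbound traffic on the uplink to 50 Mbit/s with a token bucket filter
# (interface name and rate are placeholders; run as root)
tc qdisc add dev eth0 root tbf rate 50mbit burst 32kbit latency 400ms
# Remove the limit again outside business hours
tc qdisc del dev eth0 root
```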

Physical initial seed. For multi-terabyte datastores, shipping a disk is faster than syncing over the internet. Back up to a portable drive, ship it, import the data on the destination, then let incremental syncs handle ongoing changes.

Selective sync with --group-filter. If bandwidth is tight, sync only your most critical VMs first. Add less critical groups later as capacity allows.

Note that PBS chunks are already compressed (ZSTD by default). Re-compressing during transfer provides no meaningful benefit.

Security Considerations

Offsite replication introduces new trust boundaries. Your backup data is leaving your network.

Client-side encryption. PBS supports client-side encryption where backup data is encrypted before it's stored. When syncing encrypted backups, the destination PBS only ever sees encrypted chunks. It cannot read your data. This is the strongest protection for offsite copies, especially when syncing to third-party infrastructure.

The --encrypted-only flag on sync jobs ensures that only encrypted backups are synced. Unencrypted snapshots are skipped. Use this to prevent accidental leakage of plaintext backups to offsite storage.

Credential management. The remote connection stores credentials on the destination PBS. Use a dedicated user account with minimal permissions (DatastoreBackup role). Rotate credentials periodically.

Network security. Sync traffic runs over HTTPS on port 8007. For additional isolation, run sync over a site-to-site VPN rather than exposing PBS directly to the internet. If direct exposure is necessary, restrict firewall rules to known source IPs only.
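With iptables, restricting the API port to a single known destination IP might look like this; the address is a documentation placeholder, and the same policy can be expressed in nftables or your firewall manager of choice:

```shell
# Allow only the offsite destination's IP to reach the PBS API on 8007
# (198.51.100.10 is a placeholder; adapt to your firewall tooling)
iptables -A INPUT -p tcp --dport 8007 -s 198.51.100.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8007 -j DROP
```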

Monitoring and Alerting

A sync job that silently fails for weeks defeats the purpose of offsite replication.

Check the task log on the destination PBS after each sync run. Successful syncs show the number of snapshots and chunks transferred. Failed syncs log the error. Common failure causes:

  • Network timeout: Source PBS unreachable. Check connectivity and firewall rules.
  • Authentication error: Password changed or user deleted on source. Update the remote configuration.
  • Storage full: Destination datastore out of space. Prune old snapshots or expand storage.
  • Fingerprint mismatch: Source PBS certificate changed (reinstall, renewal). Update the fingerprint in the remote configuration.

Integrate sync monitoring with your existing alerting. We covered monitoring setup in detail in our backup monitoring and alerting guide. Track sync duration trends over time. A sync that took 20 minutes last month but now takes 3 hours indicates growing data volume or degrading network performance.
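One simple way to track duration trends is to compare each run against the median of the runs before it. This is an illustrative sketch; you would feed it durations scraped from the task log:

```python
# Flag sync runs whose duration drifts well above the historical baseline.
def duration_alerts(minutes: list[float], factor: float = 2.0) -> list[int]:
    """Return indices of runs taking more than `factor`x the median so far."""
    alerts = []
    for i in range(1, len(minutes)):
        history = sorted(minutes[:i])
        median = history[len(history) // 2]
        if minutes[i] > factor * median:
            alerts.append(i)
    return alerts

runs = [20, 22, 21, 24, 23, 65]   # last run jumped from ~20 min to over an hour
print(duration_alerts(runs))       # [5]
```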

Frequently Asked Questions

Can I sync to multiple offsite destinations?

Yes. Create multiple remote connections and separate sync jobs for each destination. This is useful for maintaining copies in different geographic regions. Each sync job runs independently with its own schedule and configuration.

What happens if a sync is interrupted?

PBS sync is resumable. On the next run, it picks up where it left off. Chunks already transferred don't need re-sending. Partially transferred snapshots are not visible on the destination until fully synced. No manual cleanup is needed.

Can I sync only verified snapshots?

The --verified-only flag on sync jobs restricts syncing to snapshots that have passed verification on the source. This ensures you don't replicate corrupted backups offsite. Run verification jobs on the source before the sync window opens.

Can I restore directly from the offsite PBS?

Yes. Add the offsite PBS as a storage target in Proxmox VE, just like any other PBS instance. You can browse snapshots and restore VMs directly from it. Restore speed depends on the network link to the offsite location. For guidance on testing restores, see our post on restore testing and DR drills.

How do PBS sync jobs differ from PVE replication?

PVE replication copies VM data between cluster nodes for high availability. It operates at the storage layer (ZFS send/receive) and targets local cluster nodes. PBS sync jobs replicate backup snapshots between PBS instances, typically across sites. They serve different purposes: HA failover vs. offsite backup.

How much data does each incremental sync transfer?

It depends on the daily change rate of your VMs. A typical server with 100 GB of data and a 5% daily change rate generates roughly 5 GB of new chunks per day. Deduplication across VMs can reduce this further. Monitor actual transfer sizes in the task log after a few sync runs to establish your baseline.
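The arithmetic is simple enough to script. This is a back-of-envelope helper; the dedup factor is a guess you would calibrate against real transfer sizes from the task log:

```python
# Back-of-envelope incremental sync volume from data size and daily change rate.
def daily_sync_gb(data_gb: float, change_rate: float,
                  dedup_factor: float = 1.0) -> float:
    """GB of new chunks per day; dedup_factor > 1 models cross-VM dedup savings."""
    return data_gb * change_rate / dedup_factor

print(daily_sync_gb(100, 0.05))       # 5.0 GB/day before cross-VM dedup
print(daily_sync_gb(100, 0.05, 2.0))  # 2.5 GB/day if dedup halves the volume
```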

Completing Your Backup Strategy

Sync jobs add the geographic separation that turns a local backup setup into a real disaster recovery strategy. The critical pieces: configure a remote connection to your source PBS, create a sync job on the destination, schedule it to run after your backup window, and monitor for failures.

For those running their own offsite PBS, the setup described above works well. The trade-off is managing a second PBS instance: hardware, connectivity, monitoring, and maintenance for both sides.