Included Free • No Extra Cost

Edge Locations

Points of presence close to your servers that eliminate protocol and TCP bottlenecks. Up to 13x faster Proxmox Backup Server transfers over high-latency links. One hostname change, everything else stays the same.

Why Backups Over WAN Are Slow

Proxmox Backup Server backups over high-latency links are significantly slower than available bandwidth allows. Two independent bottlenecks cause this.

Protocol Serialization

PBS uploads chunks one at a time, waiting for a server acknowledgment before sending the next. On a 150 ms round-trip link, this serialization, plus per-chunk protocol overhead, caps throughput at roughly 60 Mbit/s in practice, regardless of available bandwidth.

4 MB / 0.150 s = 26.7 MB/s ≈ 213 Mbit/s theoretical max
In practice: 44–67 Mbit/s measured
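The arithmetic behind this ceiling, as a quick check. The gap between the ~213 Mbit/s theoretical maximum and the 44–67 Mbit/s measured comes from per-chunk overhead (hashing, HTTP framing, TLS) stacked on top of the bare round trip:

```python
def serialized_throughput_mbit(chunk_bytes: int, rtt_s: float) -> float:
    """Upper bound in Mbit/s when each chunk must wait a full round trip
    before the next one can be sent."""
    return chunk_bytes / rtt_s * 8 / 1_000_000

# 4 MB chunks on a 150 ms round-trip link:
ceiling = serialized_throughput_mbit(4_000_000, 0.150)
print(f"theoretical ceiling: {ceiling:.0f} Mbit/s")  # prints 213
```

Measured throughput lands well below this bound because each chunk costs more than one bare round trip in practice.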

TCP Congestion Window

A single TCP connection cannot fill a fast, high-latency link. Linux kernel defaults limit a single stream to around 200–250 Mbit/s on a 150 ms path. PBS uses a single TCP connection for the entire backup.

iperf3 ×1: 216 Mbit/s • iperf3 ×8: 1,040 Mbit/s
8 parallel streams = ~5x throughput
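A short bandwidth-delay-product calculation shows why one stream is not enough. The ~4 MB effective window below is inferred from the measured 216 Mbit/s single-stream figure, not from an inspected kernel setting:

```python
def bdp_bytes(link_bits_per_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product: unacknowledged bytes that must be in flight
    to keep a link of the given speed busy at the given RTT."""
    return link_bits_per_s * rtt_s / 8

# Filling a 1 Gbit/s path at 150 ms RTT needs ~18.75 MB in flight:
print(f"{bdp_bytes(1e9, 0.150) / 1e6:.2f} MB")

# The measured 216 Mbit/s single stream implies an effective window near 4 MB;
# 8 such streams raise the aggregate in-flight budget past the link's BDP:
print(f"{bdp_bytes(216e6, 0.150) / 1e6:.2f} MB per stream")
```

Each additional stream brings its own congestion window, so aggregate in-flight data, and therefore throughput, grows with stream count until the physical link is saturated.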

How Edge Locations Work

An edge location runs two components that address each bottleneck independently.

1. Your client connects to the nearest edge node

Instead of connecting to Frankfurt, your PBS client connects to an edge location near your server with sub-5 ms latency.

2. Write accelerator ACKs chunks instantly

The edge node writes each chunk to local NVMe storage and immediately returns success. Your client never waits for a Frankfurt round trip.

3. Chunks are forwarded asynchronously

Up to 64 parallel uploads forward chunks to the datacenter. Consistency-critical operations (index closes, backup finish) block until all pending chunks are confirmed.

4. TCP multiplexer saturates the link

8 parallel TCP streams between edge and datacenter, each with its own congestion window. Aggregate throughput scales linearly with stream count.
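The instant-ACK-plus-barrier pattern from the steps above can be sketched as a toy asyncio model. This is illustrative only: the class name, the 64-forward limit's mechanics, and the timings are assumptions, not the actual implementation:

```python
import asyncio

class WriteAccelerator:
    """Toy model: ACK chunks on local write, forward to the datacenter in
    the background, block barrier operations until all forwards finish."""

    def __init__(self, max_parallel_forwards: int = 64):
        self._sem = asyncio.Semaphore(max_parallel_forwards)  # parallel upload cap
        self._pending: set[asyncio.Task] = set()

    async def _forward(self, chunk: bytes) -> None:
        async with self._sem:
            await asyncio.sleep(0.001)  # stands in for the ~150 ms WAN upload

    def upload_chunk(self, chunk: bytes) -> str:
        # Local NVMe write elided; the ACK is returned immediately while
        # the WAN transfer runs in the background.
        task = asyncio.ensure_future(self._forward(chunk))
        self._pending.add(task)
        task.add_done_callback(self._pending.discard)
        return "OK"

    async def barrier(self) -> None:
        # Index close / backup finish: wait until every pending chunk
        # forward has completed before reporting success.
        await asyncio.gather(*self._pending)

async def main() -> None:
    acc = WriteAccelerator()
    for i in range(100):
        acc.upload_chunk(f"chunk-{i}".encode())  # each call ACKs instantly
    await acc.barrier()  # "backup finish" blocks until all 100 are confirmed
    print("backup finish: all chunks confirmed")

asyncio.run(main())
```

The key property is that only barrier operations pay the WAN latency; ordinary chunk uploads return at local-disk speed.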

Data Flow

Your Server (e.g. Canada) --[< 5 ms]--> Edge Location (write accelerator + TCP mux) --[~150 ms]--> Datacenter (Frankfurt)

Your client talks to the nearby edge node at near-LAN speeds. The edge forwards data to the datacenter over parallel TCP streams.

Two Components, Two Bottlenecks

Write Accelerator

A protocol-aware PBS proxy that ACKs chunk uploads instantly by writing to local NVMe storage. Chunks are forwarded to the datacenter with up to 64 parallel uploads. Consistency-critical operations (index closes, backup finish) block until all chunks are confirmed stored. This eliminates the per-chunk round-trip wait that limits PBS throughput.

TCP Multiplexer

Splits the single TCP connection into 8 parallel streams, each with its own kernel congestion window. Traffic is distributed using a framed protocol with round-robin scheduling. The aggregate throughput scales with the number of streams, reaching ~800 Mbit/s through the multiplexer on our test path.
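Round-robin framing can be sketched like this. The 4-byte length prefix and 64 KiB frame size are illustrative assumptions, not the actual wire format:

```python
import struct

def multiplex(data: bytes, n_streams: int = 8, frame_size: int = 64 * 1024):
    """Split data into length-prefixed frames and deal them out to
    n_streams in round-robin order."""
    streams = [[] for _ in range(n_streams)]
    for i, off in enumerate(range(0, len(data), frame_size)):
        frame = data[off:off + frame_size]
        header = struct.pack(">I", len(frame))  # hypothetical 4-byte length prefix
        streams[i % n_streams].append(header + frame)
    return streams

streams = multiplex(bytes(1_000_000))  # 1 MB -> 16 frames of <= 64 KiB
print([len(s) for s in streams])       # prints [2, 2, 2, 2, 2, 2, 2, 2]
```

Because frames carry their own lengths, the receiving side can reassemble the original byte stream in order even though frames arrive on different TCP connections.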

Protocol Correctness

Not all operations can be ACKed instantly. Index closes, finish operations, and appends act as barriers that block until all pending chunk forwards have completed. A backup only reports success when every chunk is confirmed stored in the datacenter. No partial or corrupt backups.

Drop-In Replacement

Change the remote hostname in your Proxmox Backup Server configuration. Credentials, encryption keys, datastore names, and all other settings remain unchanged. If the edge location is unavailable, point back at the direct hostname. Zero lock-in.

Measured Results

Tested on a link between OVH BHS (Beauharnois, Quebec) and Frankfurt. ~150 ms RTT.

| Method | Throughput | vs Direct PBS |
| --- | --- | --- |
| PBS direct | 44–67 Mbit/s | baseline |
| TCP multiplexer only | 80 Mbit/s | ~1.5x |
| iperf3 single stream | 216 Mbit/s | TCP ceiling |
| Write accelerator + TCP mux | 573 Mbit/s | 8–13x |
| iperf3 ×8 streams | 1,040 Mbit/s | link max |
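The 8–13x range follows directly from the measured numbers:

```python
direct_low, direct_high = 44, 67   # Mbit/s, PBS direct (baseline range)
accelerated = 573                  # Mbit/s, write accelerator + TCP mux

low = accelerated / direct_high    # vs the faster baseline run
high = accelerated / direct_low    # vs the slower baseline run
print(f"{low:.1f}x to {high:.1f}x")  # prints 8.6x to 13.0x
```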

Available Edge Locations

More locations will be added based on demand. Contact us to request one for your region.

Beauharnois, Quebec

North America

bhs1-1.edge.pbs-host.de

Best for servers in Canada, US East, and US Central

Hillsboro, Oregon

North America (West)

hil1-1.edge.pbs-host.de

Best for servers in US West and US Central

Singapore

Asia

sgp1-1.edge.pbs-host.de

Best for servers in Asia and Australia

When Edge Locations Help

Good Fit

  • RTT to Frankfurt above 50 ms
  • 100 Mbit/s+ upload bandwidth available
  • Large VMs or frequent changes
  • Servers in North America, Asia, or other distant regions

Won't Help

  • Central Europe with sub-20 ms latency
  • Upload bandwidth is the actual bottleneck
  • Small, infrequent backups where speed doesn't matter

One Hostname Change

No software to install, no agents, no configuration files. Change the remote hostname and you're done.

  1. Run ping fra1-ingress.pbs-host.de from your server. If RTT is above 50 ms, an edge location will help.
  2. Pick the nearest edge location from the list above.
  3. In the Proxmox Backup Server UI (Datastore → Remotes), change the Host field to the edge hostname. Keep the same port, user, password, and fingerprint.
  4. Run a backup and check the task log. Transfer rates should be significantly higher.
The edge location terminates TLS with its own certificate. You may need to update the fingerprint in your remote configuration. The new fingerprint is shown when you first connect.

Built as an Optimization, Not a Dependency

Edge Goes Down

Point your remote back at the direct hostname. Your backups continue at the lower direct speed. No data loss, no reconfiguration beyond the hostname.

Connection Drops Mid-Backup

Unconfirmed chunks are not recorded. The next barrier operation fails, and the client reports the error. No partial or corrupt backup is stored. Retry works normally.

No Data Persists on Edge

Chunks are buffered on NVMe during the backup and forwarded in real time. After the backup completes, no customer data remains on the edge node. Durable storage is only in the main datacenters.

Part of the Full Stack

Edge locations make your backups faster. Combine with other features to make them safer.

Primary Datastore

Your working backup target.

Included

Edge Locations

Faster transfers over WAN.

Included

Geo-Replication

Multi-region copies.

€4/TB/copy*

Frequently Asked Questions

Do edge locations cost extra?

No. Edge locations are included with every plan at no additional charge.

Is my backup data stored on the edge node?

Only temporarily during the backup. Chunks are buffered on local NVMe storage and forwarded to the datacenter in real time. The spool is cleared after the backup completes. Durable storage is only in the main datacenters.

Do I need to change my credentials or encryption keys?

No. The edge location is a transparent proxy. Your existing credentials, encryption keys, and datastore configuration remain the same. You may need to update the TLS fingerprint since the edge node uses its own certificate.

What happens if an edge location goes down?

Change your remote hostname back to the direct endpoint. Backups continue at the lower direct-to-datacenter speed. Edge locations are an optimization, not a dependency.

Can instant ACKs lead to incomplete or corrupt backups?

No. The accelerator only ACKs chunk uploads instantly. All consistency-critical operations (index closes, backup finish) wait for full confirmation from the storage server. A backup only reports success after every chunk is durably stored.

Can I request an edge location in my region?

Yes. Contact us with your server's location and typical backup sizes. We deploy new edge locations based on demand.

How do I know whether an edge location will help me?

Run ping fra1-ingress.pbs-host.de from your server. If the round-trip time is above 50 ms and you have at least 100 Mbit/s of upload bandwidth, an edge location will improve throughput. If you're in central Europe with sub-20 ms latency, the PBS protocol overhead is negligible and your link bandwidth is the actual constraint.

Do edge locations also speed up sync jobs and geo-replication?

Edge locations accelerate the connection between your server and our platform. Sync jobs and geo-replication operate between our own datacenters and are already on low-latency links. Edge locations are specifically for the client-to-datacenter leg of the backup.

Back Up at Near-LAN Speeds

Edge locations eliminate protocol and TCP bottlenecks for Proxmox Backup Server transfers over high-latency links. Up to 13x faster. No extra cost. One hostname change.

* = VAT may apply