


Points of presence close to your servers that eliminate protocol and TCP bottlenecks. Up to 13x faster Proxmox Backup Server transfers over high-latency links. One hostname change, everything else stays the same.
Proxmox Backup Server backups over high-latency links are significantly slower than available bandwidth allows. Two independent bottlenecks cause this.
PBS uploads chunks one at a time, waiting for a server acknowledgment before sending the next. On a 150 ms round-trip link, this caps throughput at roughly 60 Mbit/s regardless of available bandwidth.
4 MB / 0.150 s = 26.7 MB/s ≈ 213 Mbit/s theoretical max

A single TCP connection cannot fill a fast, high-latency link. Linux kernel defaults limit a single stream to around 200–250 Mbit/s on a 150 ms path. PBS uses a single TCP connection for the entire backup.
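Both ceilings fall out of the same arithmetic: at most one chunk, or one congestion window, of data in flight per round trip. A quick sanity check (assuming 4 MB chunks and a ~4 MB effective single-stream window — illustrative values, decimal megabytes as in the figure above):

```python
# Back-of-the-envelope ceilings on a 150 ms path. Assumptions: 4 MB
# chunks and a ~4 MB effective single-stream TCP window (illustrative
# values; decimal megabytes, matching the 26.7 MB/s figure above).
CHUNK_BYTES = 4_000_000
WINDOW_BYTES = 4_000_000
RTT_S = 0.150

def mbit_per_s(bytes_per_s: float) -> float:
    return bytes_per_s * 8 / 1e6

# Bottleneck 1: stop-and-wait uploads move one chunk per round trip.
chunk_ceiling = mbit_per_s(CHUNK_BYTES / RTT_S)    # ≈ 213 Mbit/s
# Bottleneck 2: one congestion window of data in flight per round trip.
window_ceiling = mbit_per_s(WINDOW_BYTES / RTT_S)  # ≈ 213 Mbit/s
```

In practice PBS lands well below the 213 Mbit/s per-chunk ceiling (chunks vary in size and the client does work between uploads), which is where the observed ~60 Mbit/s comes from.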
iperf3 ×1: 216 Mbit/s • iperf3 ×8: 1,040 Mbit/s

An edge location runs two components that address each bottleneck independently.
Instead of connecting to Frankfurt, your PBS client connects to an edge location near your server with sub-5 ms latency.
The edge node writes each chunk to local NVMe storage and immediately returns success. Your client never waits for a Frankfurt round trip.
Up to 64 parallel uploads forward chunks to the datacenter. Consistency-critical operations (index closes, backup finish) block until all pending chunks are confirmed.
8 parallel TCP streams between edge and datacenter, each with its own congestion window. Aggregate throughput scales linearly with stream count.
Your client talks to the nearby edge node at near-LAN speeds. The edge forwards data to the datacenter over parallel TCP streams.
A protocol-aware PBS proxy that ACKs chunk uploads instantly by writing to local NVMe storage. Chunks are forwarded to the datacenter with up to 64 parallel uploads. Consistency-critical operations (index closes, backup finish) block until all chunks are confirmed stored. This eliminates the per-chunk round-trip wait that limits PBS throughput.
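The accelerator's behavior can be sketched in a few lines of asyncio-style Python. This is an illustrative model, not the actual implementation; `store_locally` and `forward_to_datacenter` are hypothetical placeholders for the NVMe spool and the WAN upload:

```python
import asyncio

# Illustrative sketch of the write-accelerator idea, not the real
# implementation: ACK a chunk as soon as it hits local NVMe, forward
# it to the datacenter in the background, at most 64 in flight.
FORWARD_SLOTS = asyncio.Semaphore(64)   # cap on concurrent forwards
pending: set[asyncio.Task] = set()

async def store_locally(chunk_id: str, data: bytes) -> None:
    ...  # placeholder: write the chunk to the local NVMe spool

async def forward_to_datacenter(chunk_id: str, data: bytes) -> None:
    async with FORWARD_SLOTS:
        ...  # placeholder: upload the chunk over the WAN

async def handle_chunk_upload(chunk_id: str, data: bytes) -> str:
    await store_locally(chunk_id, data)              # durable on the edge
    task = asyncio.create_task(forward_to_datacenter(chunk_id, data))
    pending.add(task)
    task.add_done_callback(pending.discard)
    return "OK"                                      # ACK without a WAN round trip

async def handle_barrier() -> str:
    # Index close / backup finish: block until every pending forward
    # completes, so a successful backup implies datacenter durability.
    if pending:
        await asyncio.gather(*pending)
    return "OK"
```

The key property is that ordinary chunk uploads return immediately, while barrier operations inherit the full WAN latency exactly once.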
Splits the single TCP connection into 8 parallel streams, each with its own kernel congestion window. Traffic is distributed using a framed protocol with round-robin scheduling. The aggregate throughput scales with the number of streams, reaching ~800 Mbit/s through the multiplexer on our test path.
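The stream-splitting idea can be modeled in a toy sketch. The real wire format is not shown here; the frame size and `(sequence, payload)` framing are assumptions for illustration:

```python
from itertools import cycle

# Toy model of the framed round-robin multiplexer. Assumed framing:
# a (sequence, payload) pair per frame; the real protocol differs.
NUM_STREAMS = 8
FRAME_SIZE = 64 * 1024  # 64 KiB per frame (illustrative)

def mux(data: bytes, num_streams: int = NUM_STREAMS) -> list[list[tuple[int, bytes]]]:
    """Split data into frames and deal them round-robin across streams."""
    streams: list[list[tuple[int, bytes]]] = [[] for _ in range(num_streams)]
    rr = cycle(range(num_streams))
    for seq, off in enumerate(range(0, len(data), FRAME_SIZE)):
        streams[next(rr)].append((seq, data[off:off + FRAME_SIZE]))
    return streams

def demux(streams: list[list[tuple[int, bytes]]]) -> bytes:
    """Reassemble the original byte stream by frame sequence number."""
    frames = sorted(f for s in streams for f in s)
    return b"".join(payload for _, payload in frames)
```

Because each stream carries its own kernel congestion window, eight streams can keep roughly eight windows of data in flight per round trip, which is why aggregate throughput scales with stream count.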
Not all operations can be ACKed instantly. Index closes, finish operations, and appends act as barriers that block until all pending chunk forwards have completed. A backup only reports success when every chunk is confirmed stored in the datacenter. No partial or corrupt backups.
Change the remote hostname in your Proxmox Backup Server configuration. Credentials, encryption keys, datastore names, and all other settings remain unchanged. If the edge location is unavailable, point back at the direct hostname. Zero lock-in.
Tested on a link between OVH BHS (Beauharnois, Quebec) and Frankfurt. ~150 ms RTT.
| Method | Throughput | vs Direct PBS |
|---|---|---|
| PBS direct | 44–67 Mbit/s | baseline |
| TCP multiplexer only | 80 Mbit/s | ~1.5x |
| iperf3 single stream | 216 Mbit/s | TCP ceiling |
| Write accelerator + TCP mux | 573 Mbit/s | 8–13x |
| iperf3 ×8 streams | 1,040 Mbit/s | link max |
More locations will be added based on demand. Contact us to request one for your region.
North America
bhs1-1.edge.pbs-host.de
Best for servers in Canada, US East, and US Central
North America West
hil1-1.edge.pbs-host.de
Best for servers in US West and US Central
Asia
sgp1-1.edge.pbs-host.de
Best for servers in Asia and Australia
No software to install, no agents, no configuration files. Change the remote hostname and you're done.
ping fra1-ingress.pbs-host.de from your server. If RTT is above 50 ms, an edge location will help.

Point your remote back at the direct hostname. Your backups continue at the lower direct speed. No data loss, no reconfiguration beyond the hostname.
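If ICMP ping is blocked on your network, timing a TCP handshake gives a comparable RTT estimate. A minimal sketch, assuming the ingress accepts connections on PBS's default API port 8007:

```python
import socket
import time

# Estimate RTT by timing a TCP connect. Assumption: the target host
# accepts connections on PBS's default API port 8007.
def connect_rtt_ms(host: str, port: int = 8007, timeout: float = 5.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

# Example: connect_rtt_ms("fra1-ingress.pbs-host.de") > 50 suggests
# an edge location will improve throughput.
```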
Unconfirmed chunks are not recorded. The next barrier operation fails, and the client reports the error. No partial or corrupt backup is stored. Retry works normally.
Chunks are buffered on NVMe during the backup and forwarded in real time. After the backup completes, no customer data remains on the edge node. Durable storage is only in the main datacenters.
Edge locations make your backups faster. Combine with other features to make them safer.
Your working backup target.
Included
Faster transfers over WAN.
Included

ping fra1-ingress.pbs-host.de from your server. If the round-trip time is above 50 ms and you have at least 100 Mbit/s of upload bandwidth, an edge location will improve throughput. If you're in central Europe with sub-20 ms latency, the PBS protocol overhead is negligible and your link bandwidth is the actual constraint.

Edge locations eliminate protocol and TCP bottlenecks for Proxmox Backup Server transfers over high-latency links. Up to 13x faster. No extra cost. One hostname change.
* = VAT may apply