You manage backups for ten clients. Each one gets a separate Proxmox Backup Server datastore. That's ten garbage collection schedules, ten sets of storage overhead, and zero cross-client deduplication. When most of those clients run the same Ubuntu LTS base, you're storing identical OS chunks ten times over. Namespaces fix this by giving you tenant isolation inside a single datastore, with deduplication working across every client.
Key Takeaways
- Namespaces provide logical tenant isolation within a single PBS datastore
- Deduplication works across all namespaces — the main benefit over separate datastores
- ACLs restrict each client to their own namespace with no visibility into others
- Each namespace supports independent retention and pruning policies
- PVE 7.2+ supports namespace specification when adding PBS storage
The Multi-Tenant Storage Problem
Running Proxmox Backup Server for multiple clients forces a choice. You can create a separate datastore per client, or you can share a datastore and use access controls to keep tenants apart.
Separate datastores give you clean isolation. Each client has their own storage path, garbage collection schedule, and usage metrics. Client offboarding is simple: delete the datastore. We recommended this approach in our PBS for MSPs guide, and it remains the right call when you need strict regulatory separation or independent GC schedules.
But separate datastores have a cost. PBS deduplication is datastore-scoped. Chunks are only deduplicated within a single datastore, not across datastores. If client A and client B both run Debian 12 with the same base packages, those identical chunks exist twice on disk. Multiply that by 20 clients running similar stacks and you're wasting significant storage.
Namespaces split the difference. Introduced in Proxmox Backup Server 2.2, namespaces create logical boundaries inside a single datastore. Each namespace acts as an isolated container for backup groups. ACLs enforce who can access what. And because everything lives in one datastore, the deduplication engine sees all chunks together.
What Are PBS Namespaces?
A namespace is a hierarchical path within a datastore. Think of it as a directory that contains backup groups, with access controls applied at the path level.
Namespaces support nesting. You can create /acme for a client and then /acme/production and /acme/staging beneath it. Each level can have its own ACL entries and retention policies.
Key properties:
- Deduplication is datastore-wide. Chunks from `/client-a` and `/client-b` share the same chunk store. Identical data is stored once regardless of namespace.
- Garbage collection runs at the datastore level. You don't get per-namespace GC. One schedule covers all namespaces.
- Retention and pruning are per-namespace. Each namespace can have different keep policies.
- ACLs scope access to specific namespace paths. A user granted access to `/client-a` cannot list or access backup groups in `/client-b`.
Separate Datastores vs Namespaces
| Criteria | Separate Datastores | Namespaces |
|---|---|---|
| Cross-Client Deduplication | No | Yes |
| Data Isolation | Per-datastore | ACL-based |
| Independent GC Schedules | Yes | No |
| Per-Client Retention | Yes | Yes |
| Clean Offboarding | Delete the datastore | Manual |
| Per-Client Usage Metrics | Built-in | Via API |
| Setup Overhead | Higher | Lower |
Creating and Managing Namespaces
CLI Setup
Create namespaces with proxmox-backup-manager. The syntax is straightforward.
```shell
# Create top-level client namespaces
proxmox-backup-manager namespace create main-datastore client-a
proxmox-backup-manager namespace create main-datastore client-b
proxmox-backup-manager namespace create main-datastore client-c

# Create nested namespaces for environment separation
proxmox-backup-manager namespace create main-datastore client-a/production
proxmox-backup-manager namespace create main-datastore client-a/staging
```

Web UI
In the PBS web interface, navigate to your datastore, select the Namespace dropdown, and click Add. The UI shows existing namespaces in a tree view.
Naming Conventions
Pick a convention and stick with it. Good options:
- Client slug: `acme-corp`, `contoso-ltd` (readable, matches DNS/billing systems)
- Client ID: `client-042`, `client-073` (consistent, avoids rename issues)
- Nested by environment: `acme-corp/production`, `acme-corp/dev`
Avoid spaces, special characters, and deep nesting beyond two levels. Keep it simple enough that your automation scripts handle it without escaping headaches.
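Whichever convention you pick, it pays to enforce it in your provisioning scripts. A minimal sketch, assuming a house rule of lowercase slugs nested at most two levels (the `validate_ns` helper and its regex are our convention, not a PBS limit):

```python
import re

# House rule (our assumption, not a PBS restriction): lowercase slug
# components, at most two levels deep, e.g. "acme-corp/production".
COMPONENT = re.compile(r"^[a-z0-9][a-z0-9-]*$")
MAX_DEPTH = 2

def validate_ns(path: str) -> bool:
    """Return True if a namespace path follows our naming convention."""
    parts = path.split("/")
    if not 1 <= len(parts) <= MAX_DEPTH:
        return False
    return all(COMPONENT.match(p) for p in parts)
```

Rejecting `Acme Corp` or `a/b/c` at provisioning time is much cheaper than renaming namespaces later.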
List existing namespaces to confirm the layout:

```shell
proxmox-backup-manager namespace list main-datastore
```

Access Control with Namespace ACLs
Namespaces without ACLs are just folders. The isolation comes from PBS access control.
Creating Client Users
For each tenant, create a dedicated PBS user and API token.
```shell
# Create user
proxmox-backup-manager user create client-a@pbs \
  --comment "Acme Corp backup account"

# Generate API token for automation
proxmox-backup-manager user generate-token client-a@pbs backup-token
# Output: client-a@pbs!backup-token = <secret-value>
```

Scoping Permissions to a Namespace
The ACL path format for namespaces is /datastore/{datastore-name}/{namespace}. Grant the DatastoreBackup role at the namespace level, not the datastore level.
```shell
# Grant backup permissions on client-a namespace only
proxmox-backup-manager acl update /datastore/main-datastore/client-a \
  DatastoreBackup \
  --auth-id client-a@pbs!backup-token

# Grant read access for restores
proxmox-backup-manager acl update /datastore/main-datastore/client-a \
  DatastoreReader \
  --auth-id client-a@pbs!backup-token
```

With this configuration, client-a@pbs can push backups to and restore from the client-a namespace. They cannot list, access, or even detect the existence of client-b or client-c namespaces.
Scope ACLs to Namespaces, Not Datastores
Granting DatastoreBackup at /datastore/main-datastore (without a namespace path) gives the user access to the entire datastore, including all namespaces. Always include the namespace in the ACL path.
PBS Admins See Everything
Namespace ACLs restrict non-admin users. Users with the Admin role or root@pam can still access all namespaces. This is by design. If you need to prevent even administrative access, client-side encryption is the answer. See our client-side encryption guide.
Connecting PVE to a Namespaced PBS Target
Proxmox VE 7.2 and later supports specifying a namespace when adding a PBS storage target. Each client's PVE cluster points to their assigned namespace.
Web UI
In PVE, go to Datacenter > Storage > Add > Proxmox Backup Server. Fill in the server, datastore, and credentials as usual. The Namespace field accepts the namespace path (e.g., client-a).
CLI
```shell
pvesm add pbs offsite-backup \
  --server pbs.example.com \
  --datastore main-datastore \
  --namespace client-a \
  --username client-a@pbs!backup-token \
  --password <token-secret> \
  --fingerprint 64:d3:ff:3a:50:38:...
```

Backup jobs configured against this storage target automatically land in the client-a namespace. The PVE user doesn't need to know about namespaces at all. Their backup jobs just work.
For nested namespaces, specify the full path: --namespace client-a/production.
Deduplication Across Namespaces
This is the headline feature. With separate datastores, PBS deduplicates within each datastore independently. With namespaces, the entire datastore shares one chunk store.
Consider 20 clients all running Ubuntu 22.04 servers. The base OS, common packages, and default configurations produce identical chunks. In a separate-datastore model, those chunks exist 20 times. In a namespaced datastore, they exist once.
Real-world impact depends on how similar your clients' workloads are. MSPs managing standardized environments (same OS images, same application stacks) see the biggest wins. A portfolio of 30 clients running similar LAMP stacks can easily achieve 5:1 or better dedup ratios at the datastore level, compared to 2:1 or 3:1 per client in isolated datastores.
Storage Savings Add Up Fast
If each of 20 clients has 500 GB of backup data with 60% overlap in OS and application chunks, namespaced dedup can save 3-4 TB compared to separate datastores. At typical storage costs, that's meaningful savings every month.
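The arithmetic behind that estimate is worth sanity-checking against your own client mix. A back-of-envelope model, assuming the overlapping chunks form one common set shared by every client — a best case, so it lands above the more conservative figure quoted here:

```python
def namespaced_savings_gb(clients: int, per_client_gb: float,
                          shared_fraction: float) -> float:
    """Estimate storage saved by datastore-wide dedup vs separate datastores.

    Our simplification: each client's data is `shared_fraction` common
    chunks plus the rest unique. Separate datastores store the shared set
    once per client; a namespaced datastore stores it exactly once.
    """
    shared_gb = per_client_gb * shared_fraction
    separate = clients * per_client_gb
    namespaced = clients * (per_client_gb - shared_gb) + shared_gb
    return separate - namespaced  # equals (clients - 1) * shared_gb

print(namespaced_savings_gb(20, 500, 0.6))  # → 5700.0
```

Real overlap is patchier than one perfectly shared set, so treat the result as an upper bound and measure the actual dedup ratio from datastore statistics.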
The trade-off is that garbage collection affects the entire datastore. When GC runs, it scans all chunks across all namespaces. For large datastores with many namespaces, GC cycles take longer. Schedule GC during off-peak hours and monitor its duration as you add tenants.
Retention and Pruning Per Namespace
Each namespace supports its own pruning configuration. Different clients can have different retention policies within the same datastore.
```shell
# Client A: standard retention
proxmox-backup-manager prune-job create client-a-prune \
  --store main-datastore \
  --ns client-a \
  --keep-last 3 \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --schedule "daily"

# Client B: minimal retention
proxmox-backup-manager prune-job create client-b-prune \
  --store main-datastore \
  --ns client-b \
  --keep-last 1 \
  --keep-daily 3 \
  --schedule "daily"
```

Pruning one namespace only removes snapshots (and marks chunks as unused) within that namespace. Other namespaces are untouched. However, the actual disk space isn't freed until garbage collection runs at the datastore level and removes chunks that are no longer referenced by any namespace.
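If you want to sanity-check what a keep policy will retain before pointing it at real data, the selection logic can be approximated in a few lines. A simplified model of `--keep-last` plus `--keep-daily` (our approximation, not the actual PBS prune implementation — verify against a prune dry run on your own server):

```python
from datetime import datetime

def select_keep(snapshots: list[datetime], keep_last: int,
                keep_daily: int) -> list[datetime]:
    """Approximate prune selection for keep-last + keep-daily.

    keep-last keeps the newest N snapshots; keep-daily then keeps the
    newest snapshot of further days until N distinct days are covered
    (days already covered by keep-last count toward that N).
    """
    snaps = sorted(snapshots, reverse=True)        # newest first
    keep = set(snaps[:keep_last])
    covered_days = {s.date() for s in keep}
    for s in snaps:
        if len(covered_days) >= keep_daily:
            break
        if s.date() not in covered_days:
            keep.add(s)                            # newest snapshot that day
            covered_days.add(s.date())
    return sorted(keep)

# Ten nightly backups against client A's policy (keep-last 3, keep-daily 7).
nightly = [datetime(2024, 6, d, 2, 0) for d in range(1, 11)]
kept = select_keep(nightly, keep_last=3, keep_daily=7)
print(len(kept))  # → 7
```

With one backup per day, the policy converges on the newest seven snapshots; the model is mainly useful for spotting surprises with multiple backups per day.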
GC Reclaims Space Across All Namespaces
Pruning marks chunks as candidates for removal. GC actually frees the disk space. Since GC is datastore-wide, a chunk referenced by any namespace in the datastore stays on disk until all references are pruned.
Sync Jobs with Namespaces
PBS sync jobs support namespace filtering. You can replicate specific client namespaces to an offsite PBS instance, giving you per-client offsite replication without per-client datastores on either end.
```shell
# On the destination PBS: sync only client-a's namespace
proxmox-backup-manager sync-job create client-a-offsite \
  --remote source-pbs \
  --remote-store main-datastore \
  --remote-ns client-a \
  --store offsite-datastore \
  --ns client-a \
  --schedule "daily" \
  --remove-vanished true
```

This pulls only the client-a namespace from the source and writes it to the client-a namespace on the destination. Each client can have an independent sync schedule and offsite retention policy. For the full sync job setup, see our offsite replication guide.
Monitoring Namespaced Datastores
Standard PBS metrics (pbs_datastore_* in Prometheus exporters) report at the datastore level. You get total used space, total chunk count, and overall dedup ratio. That's useful for capacity planning but doesn't tell you how much storage each client consumes.
For per-namespace usage, query the PBS API directly:
```shell
# Get namespace-level status
curl -s -k \
  -H "Authorization: PBSAPIToken=admin@pbs!metrics:$(cat /etc/pbs-token)" \
  "https://localhost:8007/api2/json/admin/datastore/main-datastore/namespace?name=client-a" \
  | jq '.data'
```

If you need per-client billing, iterate through your namespaces and aggregate snapshot sizes. The numbers won't account for shared chunks (dedup is datastore-wide), so decide upfront whether you bill on raw snapshot size or actual storage consumed.
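That aggregation is a short loop once you have the per-namespace snapshot lists. A sketch over the JSON shape returned by the snapshot-listing endpoint (the `size` field and the `usage_by_namespace` helper are our assumptions — check field names against your PBS version's API viewer):

```python
def usage_by_namespace(snapshots_by_ns: dict[str, list[dict]]) -> dict[str, int]:
    """Sum raw snapshot sizes (bytes) per namespace.

    `snapshots_by_ns` maps a namespace to the `data` array from
    GET /api2/json/admin/datastore/{store}/snapshots?ns={ns}.
    Raw sizes ignore dedup, so totals can exceed physical usage.
    """
    return {
        ns: sum(snap.get("size", 0) for snap in snaps)
        for ns, snaps in snapshots_by_ns.items()
    }

# Mock API responses for two namespaces (illustrative values).
sample = {
    "client-a": [{"size": 10 * 2**30}, {"size": 12 * 2**30}],
    "client-b": [{"size": 4 * 2**30}],
}
totals = usage_by_namespace(sample)
```

Billing on these raw totals is the simpler, more defensible model; billing on physical consumption requires deciding how to split shared chunks between tenants.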
For alerting on backup failures and missed schedules, integrate with your monitoring stack. Our backup monitoring guide covers the Prometheus and Grafana setup.
When NOT to Use Namespaces
Namespaces aren't always the right answer. Use separate datastores when:
- Regulatory compliance demands physical separation. Some standards require that client data lives on separate storage paths or devices. Namespaces share underlying storage.
- Clients need independent GC schedules. GC is datastore-wide. One large client triggering long GC pauses affects everyone in that datastore.
- A single client dominates storage. If one client uses 80% of the datastore, their GC and pruning patterns dictate performance for all tenants. Give them their own datastore.
- You need simple offboarding. Deleting a datastore is atomic. Removing a namespace and ensuring no orphaned chunks remain requires more care.
A hybrid approach works well in practice. Put smaller clients with similar workloads into a shared, namespaced datastore for dedup benefits. Give large or compliance-sensitive clients their own datastores.
Wrapping Up
PBS namespaces give you tenant isolation without giving up cross-client deduplication. Combined with namespace-scoped ACLs, each client sees only their own backup groups. For MSPs managing many clients with similar environments, the storage savings over separate datastores are significant. Just keep GC scheduling and monitoring in mind as you add tenants.
Need managed multi-tenant PBS hosting?
remote-backups.com provides isolated PBS namespaces for each of your clients with built-in geo-replication and monitoring included.