
PBS API Automation for MSPs

Managing five PBS clients through the web UI is annoying. Managing fifty is a different problem entirely. Every new customer means the same sequence: create user, create datastore or namespace, assign ACL, configure retention. Miss one step and you have a billing gap or a security hole. Do it fifty times and the miss rate is not zero. Proxmox Backup Server exposes a full REST API — every action the UI performs has an API equivalent you can script, audit, and re-run.

Key Takeaways
  • Proxmox Backup Server ships with a full REST API — every UI action has a direct API call
  • API tokens are scoped and revocable, making them safe for unattended automation scripts
  • A single bash script can provision a new client namespace in under 10 seconds
  • Check-before-create patterns make your scripts idempotent and safe to re-run after failures
  • Usage reporting pulls actual namespace sizes from the API — no estimating, no manual counting

Why Script PBS Instead of Clicking

The UI works for a handful of clients. It breaks as a process at scale for three reasons.

Reproducibility. Every click is a chance to skip a step. An onboarding script is the onboarding process — nothing is optional, nothing is forgotten.

Auditability. A bash script in a git repository is a record of every provisioning decision. A sequence of browser clicks leaves nothing. When a client asks what permissions they have, you can answer in seconds.

Scale. Scripted onboarding runs in under 10 seconds. Manual UI onboarding takes at least 8 minutes. At 40 new clients per month, that gap is significant. And that's ignoring the mental overhead of switching contexts for every onboarding.

If you're thinking about automation more broadly, PBS Ansible automation covers provisioning at the infrastructure level — datastores, sync jobs, and retention as version-controlled code.

PBS API Basics

Proxmox Backup Server runs a REST API on port 8007. The web UI calls the same endpoints you'll use in scripts. Nothing in the UI is off-limits to the API.

Authentication

Two options: password auth and API token auth. Use tokens for automation.

Password auth is session-based: the session expires, you must re-authenticate on expiry, and rotating the password means editing every script that uses it. API tokens are stateless — one token per script, revocable without touching anything else.

Create a dedicated automation user and token:

bash
# Create a dedicated automation user
proxmox-backup-manager user create automation@pbs \
    --password "$(openssl rand -base64 32)"

# Create a scoped API token for that user
proxmox-backup-manager user token create automation@pbs onboarding \
    --comment "MSP onboarding script"
Create automation user and token

The output includes the token secret. Save it immediately — PBS will not show it again.

The token ID format is user@realm!tokenname. In all API calls, you pass this as the Authorization header: PBSAPIToken=automation@pbs!onboarding:secret.
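As a quick sanity check on that format, a tiny helper can assemble the header value from its parts (the function name is illustrative, not part of PBS):

```shell
# Hypothetical helper: build the PBS Authorization header value from
# user@realm, token name, and secret.
pbs_auth_header() {
  local user="$1" token="$2" secret="$3"
  printf 'PBSAPIToken=%s!%s:%s' "$user" "$token" "$secret"
}

pbs_auth_header automation@pbs onboarding s3cret
# -> PBSAPIToken=automation@pbs!onboarding:s3cret
```

curl then takes the result as `-H "Authorization: $(pbs_auth_header ...)"`.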

Token Permission Scopes

Tokens inherit from their user but can be further restricted. Create one token for onboarding (needs write access) and a separate token for billing reports (read-only). Least privilege per token means a leaked billing token cannot provision or delete anything.
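As a sketch, a read-only billing token might be set up like this (the token name and datastore path are placeholders; confirm the exact options with `proxmox-backup-manager help` on your version):

```shell
# Separate read-only token for billing reports (names are placeholders)
proxmox-backup-manager user token create automation@pbs billing \
    --comment "Read-only billing reports"

# Grant it audit-only access on the shared datastore
proxmox-backup-manager acl update /datastore/client-data DatastoreAudit \
    --auth-id 'automation@pbs!billing'
```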

API Token Permission Scopes

Privilege       | What It Allows                                                         | MSP Token Use
----------------|------------------------------------------------------------------------|-------------------------------
DatastoreAdmin  | Create, delete, configure datastores and namespaces; manage retention  | Onboarding script token
DatastoreBackup | Upload backups, create snapshots in assigned namespace                 | Per-client backup token
DatastoreReader | Read and restore snapshots, no writes or deletes                       | Restore-only token for clients
DatastoreAudit  | Read datastore stats and snapshot lists, no modifications              | Billing and monitoring token
Sys.Modify      | Create and modify users and ACLs                                       | Onboarding script token
Sys.Audit       | Read system configuration and user list                                | Monitoring token

A Quick API Test

The base URL for all endpoints is https://your-pbs-host:8007/api2/json/. Verify your token works with a simple datastore list:

bash
PBS_HOST="https://your-pbs-host:8007"
TOKEN_ID="automation@pbs!onboarding"
TOKEN_SECRET="your-token-secret"

curl -s \
  -H "Authorization: PBSAPIToken=${TOKEN_ID}:${TOKEN_SECRET}" \
  "${PBS_HOST}/api2/json/admin/datastore" | jq '.data[].store'
Test API auth — list datastores

If you see your datastore names in the output, auth is working.

TLS in production

Add -k to curl only for initial testing on self-signed certs. In production scripts, add your PBS CA certificate to the system trust store instead. Skipping TLS verification silently in an unattended script is a real exposure.
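On a Debian-based host, one way to do that is shown below (file paths are assumptions; adjust to where you keep the PBS CA certificate):

```shell
# Copy the PBS CA certificate into the system trust store so curl
# verifies the connection without -k (Debian/Ubuntu layout assumed).
sudo cp pbs-ca.pem /usr/local/share/ca-certificates/pbs-ca.crt
sudo update-ca-certificates

# Or, scoped to a single script, point curl at the CA bundle directly:
curl -s --cacert /path/to/pbs-ca.pem \
  -H "Authorization: PBSAPIToken=${TOKEN_ID}:${TOKEN_SECRET}" \
  "${PBS_HOST}/api2/json/version"
```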

Scripted Client Onboarding

A complete client onboarding on Proxmox Backup Server has four steps: create user, create namespace, set ACL, configure retention. The script below does all four and is safe to re-run — it checks for existing resources before creating them.

Namespaces are the right model for most MSP setups. One shared datastore, one namespace per client. Clients see only their namespace. The deduplication pool is shared, which benefits clients with similar workloads. If you need hard storage isolation or separate billing footprints, use separate datastores instead. The PBS namespaces multi-tenant isolation guide covers the trade-offs in detail.

bash
#!/usr/bin/env bash
set -euo pipefail

PBS_HOST="${PBS_HOST:?PBS_HOST not set}"
TOKEN_ID="${TOKEN_ID:?TOKEN_ID not set}"
TOKEN_SECRET="${TOKEN_SECRET:?TOKEN_SECRET not set}"
DATASTORE="${DATASTORE:-client-data}"

CLIENT_NAME="${1:?Usage: $0 <client-name> <password>}"
CLIENT_PASSWORD="${2:?Usage: $0 <client-name> <password>}"

API="${PBS_HOST}/api2/json"

api_call() {
  curl -sf \
    -H "Authorization: PBSAPIToken=${TOKEN_ID}:${TOKEN_SECRET}" \
    "$@"
}

log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $*"; }

# 1. Create user (idempotent: skip if exists)
USERNAME="${CLIENT_NAME}@pbs"
log "Checking user ${USERNAME}"
if api_call "${API}/access/users/${USERNAME}" > /dev/null 2>&1; then
  log "User ${USERNAME} already exists — skipping"
else
  api_call -X POST "${API}/access/users" \
    -d "userid=${USERNAME}" \
    -d "password=${CLIENT_PASSWORD}" \
    -d "comment=Client: ${CLIENT_NAME}"
  log "User ${USERNAME} created"
fi

# 2. Create namespace (idempotent: skip if exists)
NAMESPACE="${CLIENT_NAME}"
log "Checking namespace ${NAMESPACE} in ${DATASTORE}"
if api_call "${API}/admin/datastore/${DATASTORE}/namespace" \
    -G -d "ns=${NAMESPACE}" > /dev/null 2>&1; then
  log "Namespace ${NAMESPACE} already exists — skipping"
else
  api_call -X POST "${API}/admin/datastore/${DATASTORE}/namespace" \
    -d "name=${NAMESPACE}"
  log "Namespace ${NAMESPACE} created"
fi

# 3. Restrict user to their namespace only
log "Setting ACL: ${USERNAME} -> DatastoreBackup on /${DATASTORE}/${NAMESPACE}"
api_call -X PUT "${API}/access/acl" \
  -d "path=/datastore/${DATASTORE}/${NAMESPACE}" \
  -d "auth-id=${USERNAME}" \
  -d "role=DatastoreBackup" \
  -d "propagate=true"
log "ACL set"

# 4. Configure retention via a per-namespace prune job
# (datastore-level keep-* settings are not namespace-scoped)
log "Configuring retention for namespace ${NAMESPACE}"
PRUNE_JOB="prune-${CLIENT_NAME}"
if api_call "${API}/config/prune/${PRUNE_JOB}" > /dev/null 2>&1; then
  log "Prune job ${PRUNE_JOB} already exists — skipping"
else
  api_call -X POST "${API}/config/prune" \
    -d "id=${PRUNE_JOB}" \
    -d "store=${DATASTORE}" \
    -d "ns=${NAMESPACE}" \
    -d "schedule=daily" \
    -d "keep-daily=7" -d "keep-weekly=4" -d "keep-monthly=3"
fi
log "Retention configured: 7 daily, 4 weekly, 3 monthly"

log "Onboarding complete for ${CLIENT_NAME}"
printf '\nCredentials:\n'
printf '  Username:   %s\n' "${USERNAME}"
printf '  Password:   %s\n' "${CLIENT_PASSWORD}"
printf '  Datastore:  %s\n' "${DATASTORE}"
printf '  Namespace:  %s\n' "${NAMESPACE}"
onboard-client.sh

Run it:

bash
export PBS_HOST="https://pbs.example.com:8007"
export TOKEN_ID="automation@pbs!onboarding"
export TOKEN_SECRET="your-token-secret"
export DATASTORE="client-data"

bash onboard-client.sh acme-corp 'S3cur3P@ssw0rd'
Run the onboarding script

The script is idempotent. Run it twice and the second run skips any steps that already completed rather than failing on duplicate errors.

Delivering credentials to clients

The script prints credentials to stdout. Pipe them into your secret management system — Vault, 1Password, Bitwarden — rather than letting them land in a log file. Plaintext credentials sitting in /var/log/ on the PBS host are a problem waiting to happen.
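As a sketch with HashiCorp Vault's KV engine (the Vault path is an assumption; substitute your own secret store):

```shell
# Capture the onboarding output and push the credentials into Vault
# instead of a log file.
CREDS="$(bash onboard-client.sh acme-corp "$(openssl rand -base64 24)")"

# Pull the individual fields back out of the "Key:   value" lines
USERNAME="$(printf '%s\n' "$CREDS" | awk '/Username:/ {print $2}')"
PASSWORD="$(printf '%s\n' "$CREDS" | awk '/Password:/ {print $2}')"

vault kv put "secret/msp/clients/acme-corp" \
    username="$USERNAME" password="$PASSWORD"
```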

For MSPs managing namespaces across multiple datastores, the PBS multi-tenant backup architecture guide covers namespace layout strategies, ACL inheritance, and how to structure datastores for clean client separation.

Usage Reporting for Billing

Billing accuracy requires actual namespace sizes from the API — not estimates. PBS exposes per-namespace storage stats. Pull them on the first of each month and you have a billing-ready CSV.

bash
#!/usr/bin/env bash
set -euo pipefail

PBS_HOST="${PBS_HOST:?PBS_HOST not set}"
TOKEN_ID="${TOKEN_ID:?TOKEN_ID not set}"
TOKEN_SECRET="${TOKEN_SECRET:?TOKEN_SECRET not set}"
DATASTORE="${DATASTORE:-client-data}"
MONTH="${1:-$(date +%Y-%m)}"

API="${PBS_HOST}/api2/json"

api_call() {
  curl -sf \
    -H "Authorization: PBSAPIToken=${TOKEN_ID}:${TOKEN_SECRET}" \
    "$@"
}

echo "client,datastore,namespace,size_gb,month"

# Enumerate all namespaces in the datastore
NAMESPACES=$(api_call \
  "${API}/admin/datastore/${DATASTORE}/namespace" | \
  jq -r '.data[].ns // empty')

for NS in $NAMESPACES; do
  # Fetch namespace storage stats
  STATS=$(api_call \
    "${API}/admin/datastore/${DATASTORE}/status?ns=${NS}")

  USED_BYTES=$(echo "$STATS" | jq -r '.data.used // 0')
  USED_GB=$(echo "scale=2; ${USED_BYTES} / 1073741824" | bc)

  echo "${NS},${DATASTORE},${NS},${USED_GB},${MONTH}"
done
billing-report.sh
Sample billing report

$ DATASTORE=client-data bash billing-report.sh 2026-04 > billing-2026-04.csv
$ cat billing-2026-04.csv
client,datastore,namespace,size_gb,month
acme-corp,client-data,acme-corp,142.30,2026-04
beta-labs,client-data,beta-labs,87.61,2026-04
cedar-hosting,client-data,cedar-hosting,321.05,2026-04
delta-it,client-data,delta-it,44.20,2026-04

The output imports directly into your billing system. Schedule it as a cron job on the first of each month:

bash
# Run billing report on the 1st of each month at 06:00 UTC.
# Note: a cron entry must be a single line — cron does not support
# backslash line continuation.
0 6 1 * * root PBS_HOST=https://pbs.example.com:8007 TOKEN_ID='automation@pbs!billing' TOKEN_SECRET=your-billing-token-secret DATASTORE=client-data /opt/scripts/billing-report.sh >> /var/log/pbs-billing.log 2>&1
/etc/cron.d/pbs-billing
Use a read-only token for billing

The billing script only needs DatastoreAudit and Sys.Audit. Create a separate token with those privileges rather than reusing the onboarding token. If the billing token is ever exposed, it cannot create users, modify ACLs, or delete snapshots.

Error Handling and Idempotency

API scripts fail in production. The question is whether they fail in a way you can recover from.

Two principles matter here: check before create, and log every call.

Check before create. Every resource creation is preceded by a GET request to see if the resource already exists. The onboarding script above does this for users and namespaces. If the script is interrupted after creating the user but before setting the ACL, re-running it skips user creation and picks up from the ACL step.

Log every call. Write a timestamped line to a log file for every API operation. When a client reports their credentials are not working, you need to know whether their user was created at all, and when.
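Both principles can be factored into small helpers, sketched here with illustrative names (log path is an assumption):

```shell
LOG_FILE="${LOG_FILE:-/tmp/pbs-onboarding.log}"

# Append a timestamped record of the operation, then run it.
with_log() {
  printf '[%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$LOG_FILE"
  "$@"
}

# Run a check command (e.g. an api_call GET); print "skip" when the
# resource already exists and "create" when it does not.
needs_create() {
  if "$@" > /dev/null 2>&1; then echo "skip"; else echo "create"; fi
}
```

In the onboarding script, the user step then reduces to checking `needs_create api_call "${API}/access/users/${USERNAME}"` before issuing the POST.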

Common errors you'll hit:

Common PBS API Errors

401 Unauthorized: "permission check failed"
  Likely cause: Token ID or secret is wrong, or the token was revoked.
  Fix: Verify the TOKEN_ID format is user@realm!tokenname. Regenerate the token if needed.

403 Forbidden: "permission check failed (missing privilege on path)"
  Likely cause: Token lacks the required privilege on the target path.
  Fix: Add the correct ACL for the token's user at the specific path.

404 Not Found: "datastore not found" / "no such namespace"
  Likely cause: Datastore or namespace name is wrong or not yet created.
  Fix: GET the parent resource first to confirm it exists before acting on it.

409 Conflict: "already exists"
  Likely cause: User, namespace, or datastore already exists.
  Fix: Use check-before-create: GET the resource first, skip creation if the response is 200.

500 Internal Error: "path already exists on disk"
  Likely cause: A new datastore path points to an existing directory.
  Fix: Choose a different path. Reusing an occupied path requires manual cleanup.
Do not suppress errors with || true

Appending || true to skip errors hides real failures. You want to know the difference between "already exists" (safe to continue) and "permission denied" (stop immediately). Use explicit GET-before-create checks and let real errors surface.
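One way to keep that distinction explicit is to branch on the HTTP status code rather than on curl's exit status. A sketch (the 409 mapping follows the error table above; function name is illustrative):

```shell
# Classify a create call's HTTP status: 2xx succeeded, 409 means the
# resource already exists (safe to continue), anything else is fatal.
classify_status() {
  case "$1" in
    2??) echo "created" ;;
    409) echo "exists" ;;
    *)   echo "fatal" ;;
  esac
}

# Usage with curl (endpoint illustrative):
# status=$(curl -s -o /dev/null -w '%{http_code}' -X POST "${API}/access/users" ...)
# [ "$(classify_status "$status")" = "fatal" ] && exit 1
```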

Offsite with remote-backups.com

The same API patterns apply when your Proxmox Backup Server offsite target is a managed instance. If you run on-prem PBS for clients and sync offsite to remote-backups.com, you configure one sync job per datastore — not one per client. Clients connect to your local PBS. That PBS syncs offsite using --encrypted-only, so your managed offsite target never receives plaintext data.
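As a sketch, such a per-datastore sync job can be created with proxmox-backup-manager. The remote name, store names, and schedule are placeholders, and the encrypted-only option requires a PBS release that supports it, so check `proxmox-backup-manager sync-job create --help` on your version:

```shell
# One sync job for the whole shared datastore — names are placeholders.
proxmox-backup-manager sync-job create offsite-client-data \
    --store client-data \
    --remote remote-backups \
    --remote-store your-remote-store \
    --schedule hourly \
    --encrypted-only true
```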

For MSPs who prefer not to run PBS infrastructure at all: remote-backups.com provides managed datastores your clients can target directly. You provision namespaces via the same API described in this post, hand clients their credentials, and skip the server maintenance entirely. The onboarding script works the same way against a managed PBS instance as it does against your own.

Wrapping Up

The Proxmox Backup Server REST API covers every provisioning action the UI exposes. A 150-line bash script handles full client onboarding in seconds, enforces consistent retention policies across all clients, and generates billing-ready usage reports without any manual steps. Check-before-create patterns make scripts safe to re-run after failures. Scoped API tokens keep automation credentials contained and independently revocable.

If you are managing multiple clients with Proxmox Backup Server today, scripted provisioning is not a nice-to-have at scale — it is the only way to stay consistent. The scripts in this post are a starting point. Add your own credential delivery, ticketing integration, and monitoring hooks as your workflow requires.

Skip the infrastructure. Keep the API.

remote-backups.com gives MSPs a managed PBS target with the same REST API — namespace provisioning, scoped tokens, usage stats — with no server to run or maintain.

View MSP Plans

FAQ

Why use API tokens instead of password auth for automation?

Password auth creates a session token that expires and requires re-authentication. API tokens are stateless, scoped to specific privileges, and revocable without changing the user's password. Always use API tokens for unattended automation.

What is the difference between DatastoreBackup and DatastoreAdmin?

DatastoreBackup lets a user upload backups and read their own snapshots. DatastoreAdmin adds the ability to delete snapshots, modify retention settings, and manage namespaces. Client tokens should get DatastoreBackup. Your onboarding script token needs DatastoreAdmin and Sys.Modify.

Can I enforce per-client storage quotas?

PBS does not support hard per-namespace quotas. You can enforce soft limits in your billing script — check namespace usage and disable the client's ACL if they exceed a threshold. Hard quota enforcement is not built into the current PBS API.
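A minimal sketch of that soft-limit check, using the per-namespace usage bytes the billing script already fetches (the threshold values and the disable step are illustrative):

```shell
# Soft quota check: compare used bytes against a per-client limit.
over_soft_quota() {
  local used_bytes="$1" limit_bytes="$2"
  [ "$used_bytes" -gt "$limit_bytes" ]
}

# Example: 150 GiB used against a 100 GiB limit
if over_soft_quota 161061273600 107374182400; then
  echo "over quota: flag the client or disable their ACL"
fi
```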

How stable is the PBS API across versions?

The API is generally stable within major releases. Endpoint paths and parameter names have changed between minor versions in the past. Pin your scripts to a tested PBS version and review the changelog before upgrading. Test in a staging environment before rolling out to production clients.

How do I give a client restore-only access?

Create a second token for the client with DatastoreReader privilege on their namespace. They can list and restore snapshots but cannot upload new backups or delete anything. Useful for giving clients self-serve restore without access to production backup credentials.
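A sketch with proxmox-backup-manager (client, token, and datastore names are placeholders):

```shell
# Restore-only token for a client, scoped to their namespace
proxmox-backup-manager user token create acme-corp@pbs restore \
    --comment "Self-serve restore"
proxmox-backup-manager acl update /datastore/client-data/acme-corp \
    DatastoreReader --auth-id 'acme-corp@pbs!restore'
```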
Bennet Gallein

remote-backups.com operator

Infrastructure enthusiast and founder of remote-backups.com. I build and operate reliable backup infrastructure powered by Proxmox Backup Server, so you can focus on what matters most: your data staying safe.