Configure Syncoid on FreeBSD
Here’s a quick guide to configure Syncoid on FreeBSD for use with removable drives as an off-site backup solution.
It’s generally best to perform backups as a non-root user, operating in pull-mode. To that end, the backup host and each client host to be backed up will get a non-root user named backup and minimal ZFS permissions to perform the required operations on each side.
When running syncoid on the backup host, the backup user will ssh into each client machine, find an old ZFS snapshot from a previous run, create a new ZFS snapshot, and replicate all of the snapshots between the older one and the newly created one from the client machine to the backup machine.
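Under the hood, that replication step amounts to an incremental ZFS send piped into a receive on the backup host. The following is only a rough sketch of the idea, run from the backup host; the snapshot names are illustrative (syncoid generates its own names and handles all of the bookkeeping automatically), and the dataset names match those used later in this guide:
$ ssh backup@client zfs snapshot zroot@syncoid_new
$ ssh backup@client zfs send -I zroot@syncoid_old zroot@syncoid_new | zfs receive backup/client/zroot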
Configure the destination drive on the backup host
To begin, find and partition the backup drive on the backup machine. In this example, ada0 contains the OS and ada1 is a new empty backup drive.
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# geom disk status
Name Status Components
ada0 N/A N/A
ada1 N/A N/A
# gpart show
=> 40 33554352 ada0 GPT (16G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 29356032 3 freebsd-zfs (14G)
33552384 2008 - free - (1.0M)
# gpart show ada1
gpart: No such geom: ada1.
Create a new ZFS partition on the new backup drive and give it the GPT label backup.
# gpart show ada1
gpart: No such geom: ada1.
# gpart create -s gpt ada1
ada1 created
# gpart add -a 1m -l backup -t freebsd-zfs "ada1"
ada1p1 added
# gpart show ada1
=> 40 33554352 ada1 GPT (16G)
40 2008 - free - (1.0M)
2048 33550336 1 freebsd-zfs (16G)
33552384 2008 - free - (1.0M)
GELI-encrypt the partition and attach the container. Save the passphrase.
# ls /dev/gpt/
backup gptboot0
# geli init -e AES-XTS -l 256 -s 4096 "/dev/gpt/backup"
Enter new passphrase:
Reenter new passphrase:
Metadata backup for provider /dev/gpt/backup can be found in /var/backups/gpt_backup.eli
and can be restored with the following command:
# geli restore /var/backups/gpt_backup.eli /dev/gpt/backup
# geli attach /dev/gpt/backup
Enter passphrase:
# geli status
Name Status Components
gpt/backup.eli ACTIVE gpt/backup
# ls /dev/gpt/
backup backup.eli gptboot0
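The geli init output above notes that a copy of the provider metadata was saved under /var/backups on the backup host. If desired, geli backup can write an additional copy of that metadata to a file of your choosing (the destination path below is just an example), so the container can still be recovered if the on-disk metadata is ever damaged:
# geli backup /dev/gpt/backup /root/gpt_backup.eli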
Create a new zpool called backup on the gpt/backup.eli device.
# zpool create backup gpt/backup.eli
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 15.5G 408K 15.5G - - 0% 0% 1.00x ONLINE -
zroot 13.5G 925M 12.6G - - 0% 6% 1.00x ONLINE -
To support multiple destination targets from a single source, syncoid relies on the --identifier option, which uniquely identifies each destination so that its snapshots can be managed properly. The identifier can be any unique string, but a reasonable choice is each backup drive’s serial number.
# geom disk list ada1 | awk '/ident:/ {print $2}'
VB51d1659c-994938ee
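Since the serial number will be needed again when invoking syncoid, it can be convenient to capture it in a shell variable first (the variable name is arbitrary):
# DRIVE_ID="$(geom disk list ada1 | awk '/ident:/ {print $2}')"
# echo "${DRIVE_ID}"
VB51d1659c-994938ee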
Install Syncoid on the backup host
Syncoid comes with the Sanoid package, so install it on the backup host. It does not need to be installed on any of the client machines.
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# pkg search sanoid
sanoid-2.3.0 Policy-driven snapshot management and replication tools
sanoid-devel-1.0.0.20191105 Policy-driven snapshot management and replication tools
# pkg install -y sanoid-2.3.0
[...]
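To confirm where the tools landed, check the path to the syncoid executable; the FreeBSD package normally installs it under /usr/local/bin:
# which syncoid
/usr/local/bin/syncoid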
Create a new non-root user on the backup host to receive the snapshots
Create a non-root user called backup on the backup host, giving it a long random password using openssl. To operate as this user, one must first su to root and then su - backup to switch to the backup user.
Create new ssh keys for the backup user, without requiring a password, to allow it to easily ssh into the clients. Save the user’s public key somewhere so it can be copied to each of the client machines later on.
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# echo "$(openssl rand -base64 32)" | pw useradd -n backup -c "Backup user" -m -h 0
# su - backup
[...]
$ echo "$(whoami)@$(hostname):$(pwd)"
backup@backup:/home/backup
$ ssh-keygen -C "$(whoami)@$(hostname)" -f ~/.ssh/id_ed25519 -N "" -t ed25519
[...]
$ cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkvg2JPALSRZmQYxf7PaBLxSGoheE3+0JagDmk/6DVw backup@backup
$ cat ~/.ssh/id_ed25519.pub | nc termbin.com 9999
https://termbin.com/XXXX
$ curl https://termbin.com/XXXX
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkvg2JPALSRZmQYxf7PaBLxSGoheE3+0JagDmk/6DVw backup@backup
Give the non-root user the required ZFS permissions on the destination dataset
To run as a non-root user, syncoid requires specific permissions on the destination dataset. In this case, add them to the backup dataset, which will serve as the “root” dataset under which the backups for all of the client machines will be stored.
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
backup 336K 15.0G 96K /backup
zroot 1.03G 12.0G 96K /zroot
[...]
# zfs allow -u backup compression,mountpoint,create,mount,receive,rollback,destroy backup
# zfs allow backup
---- Permissions on backup -------------------------------------------
Local+Descendent permissions:
user backup compression,create,destroy,mount,mountpoint,receive,rollback
Create a non-root user on each client machine
On each client machine that is to be backed up, create a backup user. Give it a long random password using openssl and append the public key saved earlier to the end of the user’s authorized_keys file.
# echo "$(whoami)@$(hostname):$(pwd)"
root@client:/root
# echo "$(openssl rand -base64 32)" | pw useradd -n backup -c "Backup user" -m -h 0
# su - backup
[...]
$ echo "$(whoami)@$(hostname):$(pwd)"
backup@client:/home/backup
$ mkdir .ssh
$ curl https://termbin.com/XXXX
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkvg2JPALSRZmQYxf7PaBLxSGoheE3+0JagDmk/6DVw backup@backup
$ curl https://termbin.com/XXXX >> .ssh/authorized_keys
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 95 100 95 0 0 107 0 --:--:-- --:--:-- --:--:-- 107
$ cat .ssh/authorized_keys
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAkvg2JPALSRZmQYxf7PaBLxSGoheE3+0JagDmk/6DVw backup@backup
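Note that sshd’s default StrictModes setting refuses keys whose ~/.ssh directory or authorized_keys file is writable by other users. The permissions created above are normally fine, but tightening them explicitly does no harm:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys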
Give the backup user on each client host the proper ZFS permissions
On each client machine to be backed up, give the backup user the proper ZFS permissions to manage snapshots and send data. In this case, allow the backup user on the client host to snapshot and send the zroot dataset.
# echo "$(whoami)@$(hostname):$(pwd)"
root@client:/root
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 1015M 12.1G 96K /zroot
[...]
# zfs allow -u backup send,hold,mount,snapshot,destroy zroot
# zfs allow zroot
---- Permissions on zroot --------------------------------------------
Local+Descendent permissions:
user backup destroy,hold,mount,send,snapshot
Make sure SSH works
From the backup host, make sure the backup user can ssh into the client host without entering a password.
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# su - backup
[...]
$ echo "$(whoami)@$(hostname):$(pwd)"
backup@backup:/home/backup
$ ssh client
[...]
$ echo "$(whoami)@$(hostname):$(pwd)"
backup@client:/home/backup
$ exit
Connection to client.ccammack.com closed.
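If the client’s short hostname does not resolve on your network, an entry in the backup user’s ~/.ssh/config on the backup host can map it; the fully-qualified name below is the one shown in the transcript and will differ in your setup:
$ cat ~/.ssh/config
Host client
    HostName client.ccammack.com
    User backup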
Create per-client datasets
To allow a single backup drive on the backup machine to store backups from multiple clients, create a new dataset for each client machine using the client’s hostname. In this case, create a new dataset for the machine called client below the existing backup dataset. The non-root user will not be able to mount the dataset, but it will still be created.
$ echo "$(whoami)@$(hostname):$(pwd)"
backup@backup:/home/backup
$ zfs list | grep backup
backup 408K 15.0G 96K /backup
$ zfs create -p backup/client
cannot mount '/backup/client': failed to create mountpoint: Permission denied
filesystem successfully created, but not mounted
$ zfs list | grep backup
backup 588K 15.0G 96K /backup
backup/client 96K 15.0G 96K /backup/client
Run syncoid on the backup host
Run syncoid on the backup host to replicate the zroot dataset from the client host to the backup host. Specify the --identifier option with the backup drive’s serial number to uniquely identify the backup target. Doing this will allow syncoid to properly trim old snapshots for each backup drive when they are no longer needed. Run syncoid again as needed to replicate additional datasets other than zroot.
During the first run, ZFS will attempt (and fail) to mount each new dataset as it is created. These errors can safely be ignored; the new datasets are still being created and their data replicated. Subsequent runs for the same datasets will finish more quickly and will not display these errors.
$ echo "$(whoami)@$(hostname):$(pwd)"
backup@backup:/home/backup
$ geom disk list ada1 | awk '/ident:/ {print $2}'
VB51d1659c-994938ee
$ syncoid --identifier=$(geom disk list ada1 | awk '/ident:/ {print $2}') --recursive --no-privilege-elevation backup@client:zroot backup/client/zroot
WARNING: lzop not available on source ssh:-S /tmp/syncoid-backupclient-1760194190-2734-7493 backup@client- sync will continue without compression.
WARNING: mbuffer not available on source ssh:-S /tmp/syncoid-backupclient-1760194190-2734-7493 backup@client - sync will continue without source buffering.
INFO: Sending oldest full snapshot backup@client:zroot@syncoid_VB51d1659c-994938ee_backup_2025-10-11:10:49:51-GMT-04:00 to new target filesystem backup/client/zroot (~ 43 KB):
46.3KiB 0:00:00 [ 327KiB/s] [=============================================================================================================================================] 107%
cannot mount '/backup/client/zroot': failed to create mountpoint: Permission denied
[...]
$ echo $?
0
$ zfs list | grep backup
backup 1016M 14.0G 96K /backup
backup/client 1014M 14.0G 96K /backup/client
backup/client/zroot 1014M 14.0G 96K /backup/client/zroot
[...]
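To back up additional datasets or additional client machines to the same drive, repeat the invocation with the same --identifier, adjusting the source and target names. The zdata dataset and client2 host below are hypothetical examples and would each need the same zfs allow and per-client dataset setup described earlier:
$ DRIVE_ID="$(geom disk list ada1 | awk '/ident:/ {print $2}')"
$ syncoid --identifier="${DRIVE_ID}" --recursive --no-privilege-elevation backup@client:zdata backup/client/zdata
$ syncoid --identifier="${DRIVE_ID}" --recursive --no-privilege-elevation backup@client2:zroot backup/client2/zroot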
Dismount the backup drive
After the backup finishes, export the pool and detach the backup drive:
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 15.5G 1016M 14.5G - - 0% 6% 1.00x ONLINE -
zroot 13.5G 1.03G 12.5G - - 0% 7% 1.00x ONLINE -
# zpool export backup
# geli status
Name Status Components
gpt/backup.eli ACTIVE gpt/backup
# ls /dev/gpt/
backup backup.eli gptboot0
# geli detach /dev/gpt/backup
# geli status
Remount the backup drive
To prepare for the next run, GELI-attach the backup drive, enter the passphrase, and import the backup pool:
# echo "$(whoami)@$(hostname):$(pwd)"
root@backup:/root
# gpart show ada1
=> 40 33554352 ada1 GPT (16G)
40 2008 - free - (1.0M)
2048 33550336 1 freebsd-zfs (16G)
33552384 2008 - free - (1.0M)
# ls /dev/gpt
backup gptboot0
# geli attach /dev/gpt/backup
Enter passphrase:
# zpool import backup
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
backup 15.5G 1016M 14.5G - - 0% 6% 1.00x ONLINE -
zroot 13.5G 1.03G 12.5G - - 0% 7% 1.00x ONLINE -
$ zfs list | grep backup
backup 1016M 14.0G 96K /backup
backup/client 1014M 14.0G 96K /backup/client
backup/client/zroot 1014M 14.0G 96K /backup/client/zroot
[...]
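Once everything works, the remount, sync, and dismount steps can be collected into a small wrapper script run as root on the backup host. This is only a sketch under the assumptions made in this guide (drive ada1, a single client named client, and an su invocation to drop to the backup user for the actual pull):
#!/bin/sh
set -e
# Attach the encrypted container (prompts for the GELI passphrase) and import the pool.
geli attach /dev/gpt/backup
zpool import backup
# Pull the client's zroot as the backup user, tagging snapshots with this drive's serial number.
DRIVE_ID="$(geom disk list ada1 | awk '/ident:/ {print $2}')"
su - backup -c "syncoid --identifier=${DRIVE_ID} --recursive --no-privilege-elevation backup@client:zroot backup/client/zroot"
# Export the pool and detach the GELI provider so the drive can be removed.
zpool export backup
geli detach /dev/gpt/backup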