ZFS Send/Receive Large Snapshot in the Cloud

Today I needed to migrate a ZFS pool from one server to another in the cloud. It was about 20GB, and the standard method using zfs send/receive was not working because the ssh connection kept getting dropped. No surprise on the connection drop for a single stream of data that large, especially between competing cloud IaaS providers! Here is a strategy you can use to accomplish that huge initial sync.
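For context, the direct approach that kept failing for me is a single pipeline over ssh, something like this (remote.example.com and the destination dataset are placeholders):

zfs send pool1@autosnap-hourly.2015-01-05-20 | ssh remote.example.com 'sudo zfs receive pool0/expand'

A 20GB stream gives that one ssh session a long time to die, and if it drops you start the whole transfer over.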

First, pipe zfs send through gzip, and then into split, to produce several smaller compressed files (1GB each). This command does it all in one pipeline, saving the need to create a giant gzip file and then split it separately, which would take up a bunch of extra storage:

zfs send pool1@autosnap-hourly.2015-01-05-20 | gzip -2 -c | split -b 1024MiB - pool1@autosnap-hourly.2015-01-05-20.split.gz.

Note the dot at the end of the file name prefix. That's there because split appends a serial suffix (aa, ab, ac, and so on) to each file it creates, so the pieces end up named pool1@autosnap-hourly.2015-01-05-20.split.gz.aa, .ab, etc.
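Any resumable transfer tool will do for moving the pieces across; rsync is one option, since a dropped connection only costs you the file that was in flight (the hostname and destination path here are placeholders):

rsync -av --partial --progress pool1@autosnap-hourly.2015-01-05-20.split.gz.* remote.example.com:/tank/incoming/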

Once I get these files all transferred, I’ll update with the working command for the other side. I think it’ll be something like:

cat pool1@autosnap-hourly.2015-01-05-20.split.gz.* | gunzip -c | zfs receive pool1

Actual command:

sudo sh -c 'cat pool1@autosnap-hourly.2015-01-05-20.split.gz.* | gunzip -c | zfs receive pool0/expand'

Note that I needed to wrap the whole thing in sudo sh -c '...'. That's because when you run a pipeline with sudo in front, only the first command gets elevated privileges; the command on the other end of the pipe runs as your regular user.
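For example, this variant would fall over, because only cat is elevated and zfs receive still runs as the regular user:

sudo cat pool1@autosnap-hourly.2015-01-05-20.split.gz.* | gunzip -c | zfs receive pool0/expand

Once the receive finishes, a quick sanity check (same dataset names as above) is to confirm the snapshot exists on the destination:

sudo zfs list -t snapshot -r pool0/expand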

From here, I should be able to run zfs send/receive incrementally from snapshot to snapshot to fetch more recent changes with no problem, because the amount of data will be far smaller.
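A minimal sketch of that incremental step, assuming a newer hourly snapshot exists on the source and remote.example.com is the destination (the newer snapshot name is made up for illustration):

zfs send -i pool1@autosnap-hourly.2015-01-05-20 pool1@autosnap-hourly.2015-01-06-20 | ssh remote.example.com 'sudo zfs receive pool0/expand'

The -i flag sends only the blocks that changed between the two snapshots, so the stream should be small enough to survive a single ssh session.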
