My new 16 TB SAS drive arrived yesterday. I started to create another vdev to extend my current RAIDZ volume, then decided instead to create one large vdev and volume out of my 16 TB drives in RAIDZ2. Before destroying my current RAIDZ volume with 3 drives in the vdev, I compared the data against the backup copy on my backup drive (which used to be my primary) and it was identical.
I thought about using “rsync” with the “parallel” command so I could have as many as 20 streams going from one of my spare BL460c Gen7 blades that has CentOS 7 installed. The thought was to NFS-mount the volumes from each FreeNAS on the CentOS 7 system over the alternate network, then run about 20 rsync streams to take advantage of the 10GbE network. After getting the “parallel” package installed and the NFS mounts tested and added to “/etc/fstab”, I decided against it: with rsync piped to parallel you have to create the directories and sub-directories first, and there could have been a lot of back and forth and fine tuning. I decided to use the FreeNAS built-in ZFS replication instead.
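For the curious, here is roughly what that abandoned approach would have looked like from the CentOS 7 blade. This is just a sketch; the host names, mount points, and dataset paths are made up, and the rsync/parallel flags would have needed tuning:

# mount -t nfs backupnas-alt:/mnt/tank/filestorage /mnt/src
# mount -t nfs primarynas-alt:/mnt/tank/filestorage /mnt/dst
# cd /mnt/src && find . -type d -print0 | (cd /mnt/dst && xargs -0 mkdir -p)
# cd /mnt/src && find . -type f -print0 | parallel -0 -j 20 rsync -a /mnt/src/{} /mnt/dst/{}

The mkdir pass is the extra step I mentioned: parallel hands rsync one file at a time, so the target directory tree has to exist before the copies start.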
I then pointed all of my Plex servers, virtual and physical, at my backup FreeNAS system and tested; they will stay there until I get the data replicated back to my primary FreeNAS system with its new Zvol.
Here’s what I did:
- Removed the old RAIDZ volume: Storage → Pools → <Pool Name> → Pool Operations (gear next to pool) → Export/Disconnect → <Delete Configuration/Data of Pool>.
- Storage → Pools → Create New Pool → Select Drives → <chose RAIDZ2 6 drives>
- That left me with 58 TB of usable space, able to withstand 2 drive failures. You lose about 2.45 TB in overhead off each 16 TB drive.
- Just as I was getting ready to start my replication over the 10GbE network, I remembered that I hadn’t yet moved the primary interfaces on my primary NAS off the 1 Gb NICs.
- I enabled DHCP on one of the 10GbE NICs and created the replication job to go over that NIC.
- Started replication from the backup FreeNAS to my primary, from the top-level Zvol, with my backup dataset and my file storage dataset selected as one job (a rough CLI sketch of what this does under the hood follows this list).
- The backup dataset went fine and finished replicating before I went to bed, which is usually the next morning anyway. It was only 900 GB.
- Once I woke up, I found the replication had died 2.98 TB into the file storage dataset, which holds about 22 TB.
- The mistake I made was pushing the data over my primary network, which disrupted my internet radio streams and caused some other localized issues.
- I decided to shut down the 10GbE NIC on the primary network of my primary FreeNAS and activate DHCP on the 10GbE NIC on its alternate network.
- This is now going just fine, with about 14 TB of data replicated so far.
- The network shows it has been transferring data at a mean of about 1.65 Gb/s, with sustained 15-20 minute bursts of 2.75-3.56 Gb/s.
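Under the hood, FreeNAS replication is essentially zfs send/receive between the two boxes, which is why the CLI tools further down are handy for watching it. A rough manual equivalent of the job I set up, with placeholder pool, dataset, and host names, would look something like:

# zfs snapshot -r backup-pool/storage@migrate-1
# zfs send -R backup-pool/storage@migrate-1 | ssh primary-nas zfs receive -F new-pool/storage

FreeNAS manages the snapshots and the transport itself, so I let the GUI replication task do the work rather than running this by hand.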
The other issue I was having was monitoring the progress of the replication. ZFS often doesn’t mount the receiving volume in the OS until the replication finishes, so traditional tools like “df” don’t see the growing data.
I finally settled on the following commands after re-familiarizing myself with a few ZFS cookbooks.
This works during the initial replication, run from the CLI on the target, when the volume isn’t mounted yet:
“# zfs get -H -o value used <volume-name>”
2.98T
You can get the volume name from your replication job, of course, or from:
# zfs mount
# zfs get readonly ← you have to sift through the output
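Since replication targets typically come in read-only (which is why “zfs get readonly” helps find them), you can narrow that output down instead of sifting by hand, for example:

# zfs get -H -o name,value readonly | awk '$2 == "on"'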
Once the initial replication failed, the volume became mounted, and you have to use a different command to view the progress, though the OS tools still may not show it as mounted.
# zpool list
This now shows the progress, and you can also follow it in the FreeNAS dashboard GUI for the pool, since ZFS has it listed and mounted.
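If you’d rather watch it tick up from a shell than keep refreshing the GUI, a simple loop over either of those commands works. It’s just a convenience; run it from an sh-compatible shell, and the pool/dataset names below are placeholders:

# while true; do zpool list -H -o name,allocated,free new-pool; sleep 60; done
# while true; do zfs get -H -o value used new-pool/storage; sleep 60; done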
All looking good. After the replication completes I will test, then start getting my VM volumes reconfigured and beef up the backup FreeNAS. Then I’ll go exclusively to my BL460c Gen8 XCP-ng systems for my virtual environment.