Well, I ended up having to re-push my data after the initial push failed, and I re-pushed using a different replication job with just the one dataset I needed for that volume. It appears the second replication dumped the dataset's data into the zvol's root and didn't create a separate dataset. I looked at the new replication job's configuration, and I think that because I only ticked the box for the dataset and not the zvol in the simplified GUI view, it didn't include the zvol name in the destination path.
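For anyone trying to spot the same problem, this is roughly how I'd check where a replication actually landed; "backuppool" here is just a placeholder, not my real pool name.

# list everything under the destination pool and where it's mounted,
# so you can see whether the received dataset ended up one level too high
zfs list -r -o name,used,mountpoint backuppool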
I figured it was no big deal: rename with the ZFS tools and change the mountpoints. That worked on the CLI but didn't translate to the GUI, and the aborted dataset from the earlier attempt was still there. After I deleted the old dataset, I went back and forth with various commands that didn't propagate to the GUI, and seeing the CLI warning that changes made on the command line would not be kept in the database after a reboot, I decided to just redo the original replication job, switching it to my secondary network.
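For reference, this is roughly the kind of thing I was doing on the CLI before giving up on that route. The names below are placeholders rather than my actual pool/dataset paths, and as the warning says, on TrueNAS these CLI changes aren't recorded in the middleware database.

# move the received dataset under the intended parent
# (the parent dataset must already exist)
zfs rename pool/sonora-nfs1 pool/parent/sonora-nfs1

# point the mountpoint at the location the shares expect
zfs set mountpoint=/mnt/pool/parent/sonora-nfs1 pool/parent/sonora-nfs1

# remove the leftover dataset from the aborted attempt (destructive!)
zfs destroy -r pool/aborted-dataset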
Now everything seems to be going well and isn't interfering with anything on my primary network. I can watch the progress on the CLI, including a completion percentage for the replication and the amount of data transferred so far.
Just a little addendum: the percentage shown by "ps -aux | grep send" seems inaccurate. I was wondering about it the whole time, since I have 22 TB of data and that percentage stayed consistently low compared to the amount of data shown as received on the target, both by "zfs get -H -o value used file-storage2/sonora-nfs1" and in the GUI. That held true to the end: the replication finished when the zfs send output was only around the 50% mark, right as the transferred data reached the expected size.
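In case it helps anyone else watching a big send, these are the checks I was comparing; the dataset path is from my setup, and the polling loop is just a simple sketch, adjust it for your own names and interval.

# sender side: the send process line that shows the (apparently optimistic) percentage
ps -aux | grep send

# target side: how much data has actually landed so far
zfs get -H -o value used file-storage2/sonora-nfs1

# poll the target once a minute to watch the received size grow
while true; do zfs get -H -o value used file-storage2/sonora-nfs1; sleep 60; done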