Well, I built two new powerful workstations, one running Windows 10 Pro for Workstations and the other Fedora 33 Workstation, though of course with all types of server packages. The Windows workstation is built on an ASRock X299 WS/IPMI motherboard and the Linux Fedora 33 machine on an ASUS X99-WS/IPMI. I put both into cases with hot-swap SAS backplanes, matching SAS controllers, and Quadro K1200 GPUs.
I have had to update both motherboards' BIOS and IPMI firmware. The ASUS gave me a few more challenges, but tonight I was able to get the IPMI working. I'm not sure I'm happy with it, though, since it requires the onboard VGA to be enabled, and when that's enabled it doesn't seem to allow my NVIDIA Quadro K1200 to run; it's one or the other, and I am not going back to 1024×768 video or the '90s when I have these nice DisplayPort GPUs. I will see what suggestions ASUS support can give me; they have been pleasant, but I'm hoping for more.
Anyhow, I tried every iteration I could think of with monitors and in the BIOS to determine whether I want to keep the ASUS, which originally came out a few years before the ASRock X299 board I bought, or whether I can get it to perform like the ASRock, where I can run a better resolution of 1920×1080 even on the VGA port and then extend to the DisplayPorts on my Quadro K1200.
More on this later; going to try to get some sleep. And to think I thought I was going to bed early at about 11am, that didn't work out!
Since I'm here in my computer room monitoring some migration processes for the outside job, which seem to be going well, I decided to update my music streaming server at https://dhoytt.com/snake-ice-radio-blog/, which runs Windows Server 2019 Datacenter on one of my BL460c blade servers. I then updated some plugins and themes on my WordPress CMS blog, and then the OS of my web virtual server, which runs CentOS 7 Linux. All of this is hosted here in my computer room on XCP-ng riding on BL460c blades in my HPE C7000.
I already had the fun of updating my Linux laptop to Fedora 33 over the weekend and patching that OS again earlier today; I also installed a new SSD in that laptop. I started to update my Linux workstation, but decided I want to wait and move it into a different case with hot-swap drives and a motherboard with IPMI/remote management, plus an enterprise SAS SSD boot drive and a higher-capacity SAS drive for storage, most likely 8TB SAS, though I may get greedy and go with a 16TB enterprise SAS drive.
I have just done a ton of updates in my computer room, completely turning over servers and network switches (10GbE now).
I digress; anyhow, the web server is updated and the WordPress CMS blog as well, along with my music streaming server!
After updating my two hypervisors I also decided to update the BL460c Gen8 blade server I run CentOS 8 Linux on with the latest security, kernel, and a few other patches. I also ended up updating to ZoneMinder 1.34.21-2 as part of the updates. After running zmupdate.pl, ZoneMinder came up just fine. As a maintenance step I also set ZoneMinder from the GUI to trim some of my events.
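For my own notes, the update sequence on that CentOS box goes roughly like this; I'm assuming the standard zoneminder service unit, and zmupdate.pl is what brings the database schema up to the new release:
# systemctl stop zoneminder ← stop ZoneMinder before touching its database
# zmupdate.pl ← upgrades the ZoneMinder database schema to match the new package
# systemctl start zoneminder ← bring the cameras and web console back up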
After all of that I also went to my WordPress server and updated WordPress after backing up my database.
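The database backup is just a quick dump before the update; a minimal sketch, assuming a MySQL/MariaDB backend with a hypothetical database name and user:
# mysqldump -u wpuser -p wordpress_db > wordpress_db_$(date +%F).sql ← dated dump of the WordPress database so I can roll back if the update goes sideways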
I have moved everything around, placing the large UPS units in the rack and moving my FreeNAS systems and older R710 systems into the rack above them. I still have quite a bit to do, like more cable management in incremental stages. DNS is still propagating away from the quick WordPress maintenance website I put up on a hosted service I had never used before.
Taking the entire site down to physically restructure my systems and UPS units in my computer room. This means every system and site is being taken down and all power cables re-routed.
Well, I ended up having to re-push my data after the initial push failed, and I re-pushed using a different replication job of just the one dataset I needed for that volume. It appears the second replication dumped the dataset's data into the zvol's root and didn't create a separate dataset. I looked at the new replication's configuration, and I think that since I only ticked the dataset in the simplified GUI view and not the box for the zvol itself, it didn't include the zvol name in the destination path.
I figured no big deal: rename it using the ZFS tools and also change the mountpoints. That worked on the CLI but didn't translate to the GUI, plus the aborted dataset from the earlier attempt was still there. Once I deleted the old dataset, I went back and forth with various commands not propagating to the GUI, and after seeing the CLI warning that changes made on the CLI would not be kept in the database after a reboot, I decided to redo the original replication job, just changing it to my secondary network.
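The rename-and-remount attempt looked roughly like this on the CLI; the dataset names below are placeholders, not my exact ones:
# zfs rename tank/misplaced-data tank/file-storage ← move the dataset to where the replication should have put it
# zfs set mountpoint=/mnt/tank/file-storage tank/file-storage ← point the mountpoint at the expected path
# zfs list -r tank ← confirm the dataset and mountpoint look right from the shell
It all checked out from the shell; the problem was that the FreeNAS middleware database never learns about changes made this way, hence the warning and the re-do.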
Now everything seems to be going well and isn't interfering with anything on my primary network. I can see the progress on the CLI using a few commands, including a percentage for the replication and the size of the data transferred.
Just a little addendum: the percentage shown with "ps -aux | grep send" seems inaccurate, and I was wondering about it the whole time, since I have 22TB of data and the percentage was consistently low compared to the data shown as transferred to the target by "zfs get -H -o value used file-storage2/sonora-nfs1" on the target and in the GUI. This held true as the replication ended right around the 50% mark of the ZFS send output, once the transferred data reached the expected levels.
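To keep an eye on it without retyping the command, I can just loop the "used" check on the target; a quick sketch using the same dataset name as above:
# while true; do zfs get -H -o value used file-storage2/sonora-nfs1; sleep 60; done ← prints the space used on the target once a minute so you can watch it grow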
I got my new 16TB SAS drive in yesterday and then started to create another vdev to extend my current RAIDZ volume, but then decided to go with creating one large vdev and volume out of my 16TB drives in RAIDZ2. Before destroying my current RAIDZ volume with 3 drives in the vdev, I compared the data with the backup data on my backup drive, which used to be my primary, and it was identical.
I thought about using "rsync" with the "parallel" command so I could have as many as 20 streams going from one of my spare BL460c Gen7 blades that I have CentOS 7 installed on. The thought was to NFS-mount the volumes from each FreeNAS on the CentOS 7 system over the alternate network, then run about 20 rsync streams to take advantage of the 10GbE network. After getting the "parallel" package installed and the NFS mounts tested and added to "/etc/fstab", I decided against it, because with rsync piped to parallel you have to create the directories and sub-directories first, and then there could be a lot of back and forth and fine tuning (a rough sketch of what I had in mind is below). I decided to use the FreeNAS built-in ZFS replication instead.
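For the record, the approach I was weighing looked something like the line below; the NFS mount points are hypothetical, and it splits the work by top-level directory:
# find /mnt/backup-nas/file-storage -mindepth 1 -maxdepth 1 -type d | parallel -j 20 rsync -a {}/ /mnt/primary-nas/file-storage/{/}/ ← one rsync stream per top-level directory, at most 20 running at once
Workable, but ZFS replication keeps snapshots and dataset properties intact, which rsync over NFS would not.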
I then pointed all of my Plex servers, virtual and physical, at my backup FreeNAS system and will run off that until I get the data replicated back to my primary FreeNAS system with my new zvol.
Here’s what I did:
Removed the old RAIDZ volume: Storage → Pools → <Pool Name> → Pool Operations (gear next to pool) → Export/Disconnect → <Delete Configuration/Data of Pool>.
Created the new pool: Storage → Pools → Create New Pool → Select Drives → <chose RAIDZ2 with 6 drives>.
That left me with 58TB of usable space, able to withstand 2 drive failures since RAIDZ2 spends two drives' worth of capacity on parity. You also lose about 2.45TB of each 16TB drive to overhead.
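I did all of this from the GUI so FreeNAS keeps the pool in its own database, but for reference the raw ZFS equivalent is roughly the following, with placeholder pool and device names:
# zpool create tank raidz2 da1 da2 da3 da4 da5 da6 ← one RAIDZ2 vdev spanning all six 16TB drives
# zpool status tank ← confirm the vdev layout and that all six drives are online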
As I was getting happy and ready to start my replication over the 10GbE network, I recalled I hadn't yet moved my primary NAS off its 1Gb NICs.
I enabled DHCP on one of the 10GbE NICs and created the replication job to go over that NIC.
Started the replication from the backup FreeNAS to my primary from the top-level zvol, with my backup dataset and my file storage dataset selected as one job.
The backup dataset went fine and finished replicating before I went to bed, which is usually the next morning anyway. That was only 900GB.
Once I woke up, I found the replication had died 2.98TB into the file storage dataset, which holds about 22TB.
The mistake I made was pushing the data over my primary network, which created issues with the network streams of my internet radio and some other localized issues.
I decided to shut down the 10GbE NIC on the primary network of my primary FreeNAS and activate DHCP on the 10GbE NIC on its alternate network.
This is now going just fine, with about 14TB of data replicated so far.
The network shows it has been transferring data at a mean of 1.65Gb/s, with sustained 15-20 minute bursts of 2.75Gb/s to 3.56Gb/s.
The other issue I was having was monitoring the progress of the replication. ZFS often doesn't mount the volume in the OS until the replication finishes, so traditional tools like "df" don't see the growing size of the data.
I finally settled on the following commands after re-familiarizing myself by going through some ZFS cookbooks.
This works during an initial replication, run from the CLI on the target, while the volume isn't mounted:
“# zfs get -H -o value used <volume-name>”
2.98T
You can of course get the volume name from your replication job, or:
# zfs mount
# zfs get readonly ← You have to sift through the output
Once the initial replication failed, the volume became mounted, and you have to use a different command to view the progress, though the OS tools still may not show it as mounted.
# zpool list
This now shows the progress, and you can also follow it in the FreeNAS dashboard GUI for the pool, since ZFS has it listed and mounted.
All is looking good; after the replication completes I will test, then start getting my VM volumes reconfigured and beef up the backup FreeNAS. Then I'll go exclusively to my BL460c Gen8 XCP-ng systems for my virtual environment.
I now have my new HPE BL460c Gen8 blade systems in my existing XCP-ng hypervisor pool alongside my Dell PowerEdge R710 systems temporarily while I assess them. I need to keep them running to see how they handle the loads, how those loads affect my UPS power systems, the network throughput, plus storage connectivity. I migrated some VMs between them tonight and that went well, though one spiked and kicked the blade server fans to full blast, which caused one of my UPS units to go into alert as it was maxed out for a while.
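The migrations themselves are easy from the XCP-ng side; within a pool the xe CLI can do it directly, something like this (the VM and host names are placeholders):
# xe vm-migrate vm=media-vm host=blade-gen8-1 live=true ← live-migrate a running VM onto one of the new Gen8 blades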
I was waiting for my backup FreeNAS system to be ready, as well as getting my storage pools extended on my primary FreeNAS, before joining the blade servers to the XCP-ng pool, but that was derailed waiting for one more drive. I also had a port on one of my 10GbE NICs not working and had to grab one from another server to test. The vendor sent me a replacement relatively fast as well, and it has tested well.
I was initially going to extend my new FreeNAS storage pool by adding individual drives to the existing vdev, but I had read wrong that this feature is available now; it isn't. Extending a ZFS vdev with individual drives in FreeNAS is a future feature coming to FreeBSD's ZFS, not something I can do today.
Now, since I have a vdev of 3 drives in RAIDZ, I had two options: destroy it, create a 5-drive vdev and pool from the 5 current 16TB SAS drives I imported my original drive pool onto, and then replicate the data back from my backup FreeNAS; or order one more 16TB drive, create a new vdev, and extend the storage pool that way on my primary FreeNAS. Currently this is how you extend a pool in FreeNAS with ZFS: you add another vdev of equal drive count and capacity. I chose to get one more drive and extend the 3-drive vdev with a new 3-drive vdev (a rough sketch of the underlying operation is below). I didn't feel like replicating the data from backup to primary again, since I would be making changes and testing over the 10GbE network I'm still solidifying in my blade chassis and my new MikroTik switches.
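The GUI handles the extension, but underneath it is just adding a matching vdev to the pool, roughly like this with placeholder pool and device names:
# zpool add tank raidz da7 da8 da9 ← add a second 3-drive RAIDZ vdev; ZFS then stripes new writes across both vdevs
# zpool status tank ← verify the pool now shows two RAIDZ vdevs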
Overall everything is looking solid and ready to move forward, so I can start spinning up new VMs and containers for some serious cloud and DevOps work, along with getting deeper into programming with the multiple environments I will be able to bring up at will.
I updated to the latest WordPress version, 5.5.1, and updated all of my themes and plugins, then decided to apply the CentOS 7 Linux OS patches on this virtual system as well. I also took care of some pesky SSL issues where pages mixed SSL and non-SSL objects, so the site is more compliant and friendly to visitors; now you will see the padlock indicating my site is safe. I updated the certificate and did the .htaccess SSL redirects over the weekend, Saturday around noon.
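The redirect portion of the .htaccess is just the standard Apache mod_rewrite force-to-HTTPS pattern; my exact rules may differ a bit, but it is essentially:
# Force HTTPS: send any plain-HTTP request to the same URL over SSL
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]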
I haven't always taken advantage of proper storage for my garden produce. Well, now I am finally using my FoodSaver and planning on using my dehydrator, both purchased years ago but never used.
Over the weekend I cleaned up the dehydrator and took the FoodSaver vacuum packing storage system out of the box.
The following are from my first time blanching and getting items ready for long-term vacuum-packed storage.