I have moved everything around, placing the large UPSes in the rack and moving my FreeNAS systems and older R710 systems in above them. I still have quite a bit to do, like more cable management in incremental stages. DNS is still propagating away from the quick WordPress maintenance site I put up on a hosting service I had never used before.
October 18, 2020
by dhoytt Comments Off on Taking Sites Down to Restructure Systems In Computer Room
Well, I ended up having to re-push my data after the initial push failed; I re-pushed using a different replication job of just the one dataset I needed for that volume. It appears the second replication dumped the dataset’s data in the Zvol’s root and didn’t create a separate dataset. I looked at the new replication’s configuration, and I think since I didn’t tick the box for the zvol, just the dataset, in the simplified version in the GUI, it didn’t include the zvol name in the destination path.
I figured no big deal: rename using ZFS tools and also change the mountpoints. That worked on the CLI but didn’t translate to the GUI, plus the aborted effort with the dataset was still there. Once I deleted the old dataset, I went back and forth with various commands not propagating to the GUI, and after seeing the CLI warning that changes made on the CLI would not be kept in the database after a reboot, I decided to redo the original replication job, just changing it to my secondary network.
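The rename-and-remount attempt on the CLI would have looked roughly like this; the pool and dataset names here are placeholders, not my actual names:

```shell
# Hedged sketch of the CLI rename attempt (tank/storage are placeholder names).
# Move the misplaced data into a proper child dataset of the zvol root:
zfs rename tank/storage tank/vol1/storage

# Point the mountpoint where the GUI expects it:
zfs set mountpoint=/mnt/tank/vol1/storage tank/vol1/storage

# Verify the result:
zfs get -H -o name,value mountpoint tank/vol1/storage
```

As noted above, FreeNAS warns that changes made this way live outside its configuration database and won’t survive a reboot, which is why I abandoned this route.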
Now everything seems to be going well and not interfering with things on my primary network. I can see the progress on the CLI using commands that show, among other things, the percentage of time left in the replication and the size of the data being transferred.
October 9, 2020
by dhoytt Comments Off on Done With Song Pruning Into Some Friday Beats
Pruned some songs after loading up the streaming application SAM Broadcaster due to hard drive path changes, and I’m done for now. Now I’m just going to throw up some sounds, Snakeice Gumbo-mix-of-genres-and-eras style!
October 9, 2020
by dhoytt Comments Off on Doing Some Song Updates
I got my new 16 TB SAS drive in yesterday and then started to create another vdev to extend my current RAIDZ volume, then decided to go with creating one large vdev and volume out of my 16 TB drives in RAIDZ2. Before destroying my current RAIDZ volume with 3 drives in the vdev, I compared the data with the backup data on my backup drive that used to be my primary, and it was identical.
I thought about using “rsync” with the “parallel” command so I could have as many as 20 streams going from one of my spare bl460c Gen7 blades that I have CentOS 7 installed on. The thought was to NFS-mount the volumes from each FreeNAS on the CentOS 7 system over the alternate network, then run about 20 rsync streams to take advantage of the 10GbE network. After getting the “parallel” package installed and the NFS mounts tested and in “/etc/fstab”, I decided against it, because with rsync piped to parallel you have to create the directories and sub-directories first, and then there could be a lot of back and forth and fine tuning. I decided to use the FreeNAS built-in ZFS replication.
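For reference, the abandoned approach would have looked roughly like this; the mount paths and stream count are placeholders, assuming both FreeNAS volumes are NFS-mounted on the CentOS 7 blade:

```shell
# Hedged sketch of the rsync + parallel fan-out I decided against
# (mount paths are placeholders). First recreate the directory tree on
# the target, since each parallel rsync stream only copies files inside
# its own subdirectory:
rsync -a --include='*/' --exclude='*' /mnt/backupnas/files/ /mnt/primarynas/files/

# Then fan out one rsync per top-level directory, up to 20 at a time:
ls /mnt/backupnas/files/ | parallel -j 20 \
  rsync -a /mnt/backupnas/files/{}/ /mnt/primarynas/files/{}/
```

The pre-creation step is the “back and forth” problem mentioned above: the directory layout has to exist before the streams fan out, and any skew between directory sizes means some streams finish long before others.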
I then pointed all of my Plex servers, virtual and physical, at my backup FreeNAS system and tested; they’ll stay there until I get the data replicated back to my primary FreeNAS system with my new Zvol.
Here’s what I did:
Removed the RAIDZ volume: Storage → Pools → <Pool Name> → Pool Operations (gear next to pool) → Export/Disconnect → <Delete Configuration/Data of Pool>.
Created the new pool: Storage → Pools → Create New Pool → Select Drives → <chose RAIDZ2, 6 drives>.
That left me with 58 TB of usable space, able to withstand 2 drive failures. You lose about 2.45 TB to overhead off each 16 TB drive.
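That ~2.45 TB per drive is mostly the decimal-versus-binary gap: a “16 TB” drive is 16×10¹² bytes, which is only about 14.55 TiB, and RAIDZ2 over 6 drives leaves 4 drives’ worth of data capacity. A quick check of the math:

```shell
# Sanity check on the usable-space figure (assumes the ~58 "TB" the GUI
# reports is really TiB). A 16 TB drive is 16e12 bytes; RAIDZ2 over 6
# drives leaves 4 data drives.
usable=$(awk 'BEGIN { printf "%.1f", 4 * 16e12 / (1024 ^ 4) }')
echo "$usable TiB usable"   # roughly 58.2 TiB, before ZFS metadata overhead
```

That lines up with the 58 TB the pool reports, before ZFS metadata and reservation overhead shave off a bit more.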
As I was getting happy and ready to start my replication over my 10GbE network, I recalled I hadn’t moved the primary NICs on my primary NAS off the 1 Gb NICs yet.
I started DHCP on one of my 10GbE NICs and created the replication to go over that NIC.
Started replication from backup FreeNAS to my primary from the top Zvol with my backup dataset and my file storage dataset selected as one job.
The backup dataset went fine and finished replicating before I went to bed, which is usually the next morning anyway. This was only 900 GB.
The replication died 2.98 TB into the file storage dataset, which holds about 22 TB, I found once I woke up.
The mistake I made was pushing the data over my primary network, which created issues with my internet radio network streams and some other localized issues.
I decided to shut down the primary-network 10GbE NIC on my primary FreeNAS and activate DHCP on its alternate-network 10GbE NIC.
This is now going just fine, at about 14 TB of data replicated so far.
The network shows it has been transferring data at a mean of 1.65 Gb/s, with sustained 15-20 minute bursts of 2.75 Gb/s.
The other issue I was having was monitoring the progress of the replication. ZFS often doesn’t mount the volume in the OS until the replication finishes, so traditional tools like “df” don’t see the growing size of the data.
I finally settled on the following commands after re-familiarizing myself with some ZFS cookbooks.
This works on initial replication from the cli target when the volume isn’t mounted:
“# zfs get -H -o value used <volume-name>”
You can get the volume name, of course, from your replication job, or from:
“# zfs get readonly” ← you have to sift through the output
Once the initial replication failed, the volume became mounted, and you have to use a different command to view the progress, though the OS tools still may not show it as mounted.
The progress now shows, and you can follow it in the FreeNAS Dashboard GUI for the pool, since ZFS has it listed and mounted.
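Pulling the commands above together, a simple watch loop covers both the pre-mount and post-mount cases, since the receiving dataset’s “used” property grows either way; the pool/dataset name here is a placeholder:

```shell
# Hedged sketch: poll replication progress once a minute
# (tank/vol1 is a placeholder for the receiving dataset).
# "used" grows during a zfs receive whether or not the
# target is mounted yet, so df not seeing it doesn't matter.
while true; do
    zfs get -H -o value used tank/vol1
    sleep 60
done
```

Ctrl-C stops the loop; comparing successive values against the source dataset’s size gives a rough percentage complete.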
All looking good; after the replication completes I will test, then start getting my VM volumes reconfigured and beef up the backup FreeNAS. Then I’ll go exclusively to my bl460c Gen8 XCP-ng systems for my virtual environment.
October 4, 2020
by dhoytt Comments Off on Virtual Environment Expansion
I now have my new HPE bl460c Gen8 blade systems in my existing XCP-ng hypervisor pool alongside my Dell PowerEdge R710 systems temporarily while I assess them. I need to keep them running to see how they handle the loads, how those loads affect my UPS power systems, the network throughput, plus storage connectivity. I migrated some VMs between them tonight and that went well, though one spiked and kicked the blade server fans to full blast, which caused one of my UPSes to go into alert as it was maxed out for a while.
I was waiting for my backup FreeNAS system to be ready, as well as getting my storage pools extended on my primary FreeNAS, before joining the blade servers into the XCP-ng pool, but that was derailed waiting for one more drive. I also had a port on one of my 10GbE NICs stop working and had to grab one from another server to test. The vendor sent me a replacement relatively fast, which has tested well.
I was initially going to extend my new FreeNAS storage pool by adding individual drives to the vdev, but I had read that wrong: that feature isn’t available yet. Extending ZFS vdevs with individual drives in FreeNAS is a future FreeBSD feature, not something I can do now.
Now, since I have a vdev of 3 drives in RAIDZ, I could destroy that, create a 5-drive vdev and pool from my 5 current 16 TB SAS drives (the ones I imported my original drive pool on), and replicate the data back from my backup FreeNAS; or I could order one more 16 TB drive, create a new vdev, and extend the storage pool that way on my primary FreeNAS. Currently this is how you extend a pool in FreeNAS with ZFS: you need a vdev of equal drive count and capacity. I chose to get one more drive and extend the 3-drive vdev with a new 3-drive vdev. I didn’t feel like replicating data from backup to primary, since I would be making changes and testing on the 10GbE network I’m still solidifying in my blade chassis and my new Mikrotik switches.
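FreeNAS drives this through the GUI, but the underlying operation is adding a matching vdev to the pool; a rough CLI equivalent, with placeholder pool and device names, would be:

```shell
# Hedged CLI equivalent of the pool extension (FreeNAS does this via the
# GUI; "tank" and the da* device names are placeholders). The new vdev
# must match the existing one: three drives in RAIDZ.
zpool add tank raidz da3 da4 da5

# ZFS then stripes new writes across both RAIDZ vdevs.
zpool status tank
```

Note that this is one-way: once the second vdev is added, it can’t be removed from the pool, which is why matching the existing vdev’s layout matters.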
Overall everything is looking solid and ready to move forward, so I can start spinning up new VMs and containers to start some serious cloud and DevOps work, along with getting deeper into programming with the multiple environments I will be able to bring up at will.
One of the many projects I have been completing these past weeks, as I get the proper materials, is changing my front yard garden borders from faux wooden posts that had started to fade and deteriorate to stone pavers. I first wanted to put up real stone in chunks. I searched different landscape and stone yards for some interesting stones, then concluded it would have been possible but quite a bit of work with the different widths, lengths, and thicknesses; setting them in place would have taken some minor trenching and concrete work. Had this been my only major project going right now, I might have gone the real rock option for my garden borders.
I also shopped all the specialty landscaping rock yards, and ended up with the pavers I really wanted from the Home Depot a couple of miles from me.
For the longest straight stretch of border, in line with my fence, I ended up leveling the ground, pulling roots from a tree I had chopped down, moving a portion of my drip system near the border, and relocating some flowers right on the border’s edge.
I then set the stone pavers in place on top of a plastic paver retention bracket, used landscape adhesive to hold them, then applied landscape adhesive between the first and second layers of pavers.
For the other section I ended up using bend board edging underneath the pavers, which I had to search for and got from the Green Acres Garden center. I didn’t apply adhesive between the bend board edging and the bottom pavers, since the sidewalk will hold them in place against the force of the soil and bark cover. I did apply landscaping adhesive between the bottom and upper layers of pavers, though. Having the bend board edging and retention brackets underneath will keep the pavers from sinking unevenly into the soil down the road. The bend board serves a separate role of adding height to almost match the concrete steps to my front door.
Here are some pictures:
September 26, 2020
by dhoytt Comments Off on Back up Streaming After OS Configuration
The “Snakeice House of Beats” is back up and streaming. I like monitoring my resources, and I had to make a change in my iLO settings for my CPU stats to show up in the W2019 “Task Manager”. My bl460c Gen7 blade uses iLO 3; I went to Power Management → Power Settings → Power Regulator Settings → OS Control Mode, and upon reboot I could see my CPU stats properly in the W2019 “Task Manager”.
September 26, 2020
by dhoytt Comments Off on OS Configuration Reboot