October 10, 2020
by dhoytt
Comments Off on Replicating Data Over Again Using FreeNAS
October 9, 2020
by dhoytt
Comments Off on Done With Song Pruning Into Some Friday Beats
Done With Song Pruning Into Some Friday Beats
Pruned some songs in the streaming application SAM Broadcaster after loading them up, due to hard drive path changes, and I'm done for now. Now I'm just going to throw up some sounds, Snakeice Gumbo mix-of-genres-and-eras style!
October 9, 2020
by dhoytt
Comments Off on Doing Some Song Updates
Doing Some Song Updates
Doing some song updates on the SAM Broadcaster application, so there will be some brief interruptions as I prune some duplicate songs from the application/database.
October 6, 2020
by dhoytt
Comments Off on New Drive Reconfiguration FreeNAS Volume
New Drive Reconfiguration FreeNAS Volume
I got my new 16 TB SAS drive in yesterday and started to create another vdev to extend my current RAIDZ volume, then decided instead to create one large vdev and volume out of my 16 TB drives in RAIDZ2. Before destroying my current RAIDZ volume with 3 drives in the vdev, I compared the data with the backup data on my backup drive, which used to be my primary, and it was identical.
I thought about using "rsync" with the "parallel" command so I could have as many as 20 streams going from one of my spare bl460c Gen7 blades that I have CentOS 7 installed on. The thought was to NFS-mount the volumes from each FreeNAS on the CentOS 7 system using the alternate network, then run about 20 rsync streams to take advantage of the 10GbE network. After getting the "parallel" package installed and the NFS mounts tested and in "/etc/fstab", I decided against it, because with rsync piped to parallel you have to create the directories and sub-directories first, and there could be a lot of back and forth and fine tuning. I decided to use the FreeNAS built-in ZFS replication instead.
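For the record, the rsync-plus-parallel approach I abandoned would have looked roughly like the sketch below; the mount points, host names, and exact job count are assumptions for illustration, not my actual configuration:
# mount -t nfs backupnas-alt:/mnt/tank/storage /mnt/backupnas
# mount -t nfs primarynas-alt:/mnt/tank/storage /mnt/primarynas
# cd /mnt/backupnas && find . -type d | (cd /mnt/primarynas && xargs -I{} mkdir -p "{}") ← recreate the directory tree on the target first
# cd /mnt/backupnas && ls -d */ | parallel -j 20 rsync -a {} /mnt/primarynas/{} ← up to 20 rsync streams, one per top-level directory
That directory prep and per-directory tuning is exactly the back and forth I didn't want to babysit, which is why the built-in ZFS replication won out.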
I then pointed all of my Plex servers, virtual and physical, to my backup FreeNAS system and tested them; they will stay there until I get the data replicated back to my primary FreeNAS system with my new Zvol.
Here’s what I did:
- Removed the RAIDZ volume: Storage → Pools → <Pool Name> → Pool Operations (gear next to pool) → Export/Disconnect → <Delete Configuration/Data of Pool>.
- Storage → Pools → Create New Pool → Select Drives → <chose RAIDZ2 with 6 drives>.
- That left me with 58 TB of usable space, able to withstand 2 drive failures. You lose about 2.45 TB to overhead off each 16 TB drive.
- As I was getting happy and ready to start my replication over my 10GbE network, I recalled I hadn't moved the primary NICs on my primary NAS off the 1 Gb NICs yet.
- I started DHCP on one of my 10GbE NICs and created the replication to go over that NIC.
- Started the replication from the backup FreeNAS to my primary from the top Zvol, with my backup dataset and my file storage dataset selected as one job (see the sketch after this list for what that job does under the hood).
- The backup dataset went fine and finished replicating before I went to bed, which is usually the next morning anyway. That was only 900 GB.
- Once I woke up I found the replication had died 2.98 TB into the file storage dataset, which holds about 22 TB.
- The mistake I made was pushing the data over my primary network, which created issues with my internet radio streams and some other localized issues.
- I decided to shut down the 10GbE NIC on my primary network on my primary FreeNAS and activate DHCP on the 10GbE NIC on the alternate network of my primary FreeNAS system.
- This is now going just fine, with about 14 TB of data replicated so far.
- The network shows it has been transferring data at a mean of 1.65 Gb/s, with sustained 15-20 minute bursts of 2.75 Gb/s.
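For reference, FreeNAS replication is built on ZFS snapshots plus zfs send/receive, so the job above is doing something equivalent to the sketch below under the hood; the pool name, snapshot name, and host are placeholders, not my actual setup:
# zfs snapshot -r tank@repl-2020-10-06
# zfs send -R tank@repl-2020-10-06 | ssh primary-nas zfs receive -F tank
The -R flag sends the whole dataset tree with its properties, and -F on the receive side forces the target to match the incoming stream, which is fine here since the new pool starts out empty.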
The other issue I was having was monitoring the progress of the replication. With ZFS, the volume often isn't mounted in the OS until the replication finishes, so traditional tools like "df" don't see the growing size of the data.
I finally started using the following commands after re-familiarizing myself by going through some ZFS cookbooks.
This works during the initial replication, run from the CLI on the target, while the volume isn't mounted yet:
“# zfs get -H -o value used <volume-name>”
2.98T
You can get the volume name from your replication job, of course, or from:
# zfs mount
# zfs get readonly ← you have to sift through the output
Once the initial replication failed, the volume became mounted, and you have to use a different command to view the progress, though the OS tools still may not show it as mounted.
# zpool list
This now shows the progress, and you can also follow it in the FreeNAS dashboard GUI for the pool, since ZFS has it listed and mounted.
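If you don't want to keep re-running those by hand, a simple loop works too; this is just a sketch, with "tank/storage" standing in for whatever dataset your replication job targets:
# watch -n 300 'zfs get -H -o value used tank/storage; zpool list tank' ← refreshes the used size and pool allocation every 5 minutes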
All is looking good; after the replication completes I will test, then start getting my VM volumes reconfigured and beef up the backup FreeNAS. Then I'll go exclusively to my bl460c Gen8 XCP-Ng systems for my virtual environment.
October 4, 2020
by dhoytt
Comments Off on Virtual Environment Expansion
Virtual Environment Expansion
I now have my new HPE bl460c Gen8 blade systems in my existing XCP-Ng hypervisor pool, temporarily alongside my Dell PowerEdge R710 systems while I assess them. I need to keep them running to see how they handle the loads, how those loads affect my UPS power systems, the network throughput, plus storage connectivity. I migrated some VMs between them tonight and that went well, though one spiked and kicked the blade server fans to full blast, which caused one of my UPSes to go into alert as it was maxed out for a while.
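Migrations within an XCP-ng pool can also be kicked off from the CLI on any pool member; this is just a sketch, with the VM and host names as placeholders:
# xe vm-migrate vm=media-vm host=blade1 live=true ← live-migrates the running VM to the Gen8 blade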
I was waiting for my backup FreeNAS system to be ready, as well as getting my storage pools extended on my primary FreeNAS, before joining the blade servers into the XCP-ng pool, but that was derailed waiting for one more drive. I also had a port on one of my 10GbE NICs not working and had to grab one from another server to test. The vendor sent me a replacement relatively fast as well, and it has tested well.
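One quick way to check a 10GbE port is to confirm the link and then push some traffic across it; this is only a sketch, and the interface name and peer address are assumptions:
# ethtool ens2f0 ← check "Link detected" and the negotiated speed
# iperf3 -s ← on the far end
# iperf3 -c 10.0.20.10 -P 4 ← four parallel streams from the near end to see if the port can fill the pipe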
I was initially going to extend my new FreeNAS storage pool by adding individual drives to the vdev, but I had read that wrong; that feature isn't available now. Extending ZFS vdevs with individual drives in FreeNAS is a future FreeBSD feature, not something I can do today.
Now, since I have a vdev of 3 drives in RAIDZ, I could destroy that, create a 5-drive vdev and pool out of my 5 current 16 TB SAS drives, and replicate the data back from my backup FreeNAS that I imported my original drive pool onto, or I could order one more 16 TB drive, create a new vdev, and extend the storage pool that way on my primary FreeNAS. Currently this is how you extend a pool in FreeNAS with ZFS: you add another vdev with an equal number of drives and equal capacity. I chose to get one more drive and extend the 3-drive vdev's pool with a new 3-drive vdev. I didn't feel like replicating data from backup to primary since I would be making changes and testing on the 10GbE network I'm still solidifying in my blade chassis and my new Mikrotik switches.
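At the ZFS level, extending a pool with a matching vdev comes down to one command; FreeNAS drives this through the GUI, so the sketch below is only the CLI equivalent, and the pool and device names are placeholders:
# zpool add tank raidz da3 da4 da5 ← adds a second 3-drive RAIDZ vdev; new writes stripe across both vdevs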
Overall everything is looking solid and ready to move forward, so I can start spinning up new VMs and containers to do some serious cloud and DevOps work, along with getting deeper into programming with the multiple environments I will be able to bring up at will.
September 27, 2020
by dhoytt
0 comments
Done Changing Garden Borders Front Yard
Over the past weeks, one of the many projects I have been completing as I get the proper materials is changing my front yard garden borders from faux wooden posts, which had started to fade and deteriorate, to stone pavers. I first wanted to put up some real stone in chunks. I searched different landscape and stone yards for some interesting stones, then concluded it would have been possible but quite a bit of work given the different widths, lengths, and thicknesses, and setting them in place would take some minor trenching and concrete work. Had this been my only major project going right now, I might have gone with the real rock option for my garden borders.
I was also shopping all the specialty landscaping rock yards and ended up with the pavers I really wanted from the Home Depot a couple of miles from me.
For the longest straight stretch of border, in line with my fence, I ended up leveling the ground, pulling roots from a tree I had chopped down, moving a portion of my drip system near the border, and relocating some flowers right on the border's edge.
I then set the stone pavers in place on top of a plastic paver retention bracket, used landscape adhesive to hold them, then applied landscape adhesive between the first and second layers of pavers.
For the other section I ended up using bend board edging underneath the pavers, which I had to search for and got from Green Acres Garden Center. I didn't apply adhesive between the bend board edging and the bottom pavers, since the sidewalk will hold them in place along with the force of the soil and bark cover. I did apply landscaping adhesive between the bottom and upper layers of pavers, though. Having that bend board edging and the retention brackets underneath the pavers will prevent them from sinking unevenly into the soil down the road. The bend board serves a separate role of adding height to almost match the concrete steps to my front door.
Here’s some pictures:
September 26, 2020
by dhoytt
Comments Off on Back up Streaming After OS Configuration
Back up Streaming After OS Configuration
September 26, 2020
by dhoytt
Comments Off on OS Configuration Reboot
OS Configuration Reboot
Changing some CPU power settings on the new W2019 streaming server and rebooting for them to take effect. Will be back shortly!
September 24, 2020
by dhoytt
Comments Off on Back Streaming Sounds Fine More Tuning to Come
Back Streaming Sounds Fine More Tuning to Come
Well, I'm back up and running again, streaming music from my new W2019 server on my bl460c Gen7 in the c7000 blade chassis, which has a healthy 32 GB of RAM and 10GbE infrastructure internally until it hits my router going out to the world, where my speed is just 1 Gb to all of you. I moved ShoutCast 2.5.1 to my new CentOS 8 Linux server, run with systemd, with the latest IceCast relays (2.4.4) running on that server as well; IceCast already runs under systemd since it is in the Linux repositories.
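ShoutCast doesn't ship with a systemd unit, so a minimal service file has to be dropped in by hand; the sketch below shows the general shape, with the install path, user, and config file name being assumptions rather than necessarily what I used:
# cat > /etc/systemd/system/shoutcast.service <<'EOF'
[Unit]
Description=ShoutCast DNAS server
After=network.target

[Service]
Type=simple
User=shoutcast
WorkingDirectory=/usr/local/shoutcast
ExecStart=/usr/local/shoutcast/sc_serv /usr/local/shoutcast/sc_serv.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable --now shoutcast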
The reason I now have ShoutCast running on Linux is that I ran into a surprise trying to move to the latest version of ShoutCast, 2.6.0, running from my Windows server as I have for years. My 192 kbps and 320 kbps streams kept getting errors about the stream having network issues, and the Windows event logs said "Frame settings changed could cause playback problems..". The 128 kbps and 64 kbps ShoutCast encoders were playing fine, and so were the IceCast encoders and relays. All ShoutCast relays played over the same ShoutCast instance, so that error was especially strange.
After digging around on various Winamp/Shoutcast forums I found that Shoutcast, the company, had indeed disabled bit rates above 128 kbps in the Shoutcast server software unless you use their streaming services! A huge part of this hobby is to learn and keep my skills up by doing things myself, and I don't want or need to use anyone else's streaming services. I'll give the specific steps I took later, as it's time to hit the bed.
It looks like I also need to get the paths straight to where some of my music resides, as I had 2 separate drives with music on the other server. I'll be doing some tuning up on issues like that, along with picture files, plus more of the background items I performed.
September 23, 2020
by dhoytt
Comments Off on Rolling Over to New Streaming Server
Rolling Over to New Streaming Server
Time to officially roll over to my new W2019 streaming server from my W2012-R2 streaming server. I need to copy the current MariaDB database to the new system, point the firewall ports to the new system, and make sure those ports are open locally as well. I am also pointing the IceCast relays to a new CentOS 8 Linux server that doubles as my Zoneminder CCTV system and one of my Plex Media servers.
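For the database move itself, a dump and restore is the straightforward route, and opening the ports locally is a one-liner per port; this is only a sketch, and the database name and port number are examples rather than my actual values.
On the old W2012-R2 server:
mysqldump -u root -p samdb > samdb.sql
Copy samdb.sql to the new W2019 server, then:
mysql -u root -p -e "CREATE DATABASE samdb"
mysql -u root -p samdb < samdb.sql
Open the streaming port in the local Windows firewall on the new server:
netsh advfirewall firewall add rule name="Stream 8000" dir=in action=allow protocol=TCP localport=8000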