October 10, 2020
by dhoytt
0 comments

Replicating Data Over Again Using FreeNAS

Well, I ended up having to re-push my data after the initial push failed. I re-pushed using a different replication job containing just the one dataset I needed for that volume, but it appears the second replication dumped the dataset’s data in the zvol’s root and didn’t create a separate dataset. Looking at the new replication’s configuration, I think that because I ticked only the dataset, and not the zvol, in the simplified GUI view, it didn’t include the zvol name in the destination path.

I figured it was no big deal: rename with the ZFS tools and also change the mountpoints. That worked on the CLI but didn’t translate to the GUI, and the aborted dataset from the earlier attempt was still there. Once I deleted the old dataset, I went back and forth with various commands that didn’t propagate to the GUI, and after seeing the CLI’s warning that changes made there would not be kept in the database after a reboot, I decided to redo the original replication job, changing only the network to my secondary one.
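
For reference, the rename-and-remount attempt looked roughly like this. The dataset names below are hypothetical stand-ins, and as the CLI warning says, FreeNAS won’t persist changes made this way in its database:

```shell
#!/bin/sh
# Dry-run sketch of renaming a misplaced dataset and fixing its mountpoint.
# Dataset names are hypothetical, not my actual paths.
POOL="file-storage2"
OLD="$POOL/misplaced-data"   # dataset that landed in the zvol root
NEW="$POOL/sonora-nfs1"      # where it should live

run() { echo "$@"; }         # dry-run; change to: run() { "$@"; } to execute

run zfs rename "$OLD" "$NEW"
run zfs set mountpoint="/mnt/$NEW" "$NEW"
```

Even when this works on the CLI, the middleware warning applies: FreeNAS tracks dataset configuration in its own database, so GUI-side state can drift from what the CLI shows.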

Now everything seems to be going well and not interfering with things on my primary network. I see the progress on the cli using commands including percentage of time left with the replication and size of the data being transferred.

#ps -aux | grep send ← run on the target; shows the replication’s progress, including a percentage
#zfs get -H -o value used file-storage2/sonora-nfs1 ← using the dataset name, shows how much data has been transferred from the source

Just a little addendum: the percentage shown by “ps -aux | grep send” seems inaccurate, and I wondered about it the whole time, since I have 22TB of data and the percentage stayed consistently low compared with the data shown as transferred on the target, both by “zfs get -H -o value used file-storage2/sonora-nfs1” and in the GUI. This held true: the replication finished right around the 50% mark of the zfs send readout, when the transferred data reached the expected level.

Shows replication finished successfully

October 6, 2020
by dhoytt
0 comments

New Drive Reconfiguration FreeNAS Volume

I got my new 16 TB SAS drive in yesterday and started to create another vdev to extend my current raidz volume, then decided instead to create one large vdev and volume out of my 16 TB drives in raidz2. Before destroying my current raidz volume with 3 drives in the vdev, I compared the data against the backup data on my backup drive (which used to be my primary), and it was identical.

I thought about using “rsync” with the “parallel” command so I could have as many as 20 streams going from one of my spare bl460c Gen7 blades that has CentOS 7 installed. The plan was to NFS-mount the volumes from each FreeNAS on the CentOS 7 system over the alternate network, then run about 20 rsync streams to take advantage of the 10GbE network. After getting the “parallel” package installed and the NFS mounts tested and in “/etc/fstab”, I decided against it: with rsync piped to parallel you have to create the directories and sub-directories first, and there could be a lot of back and forth and fine-tuning. I decided to use the FreeNAS built-in ZFS replication instead.
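
For the record, the rsync-plus-parallel approach I decided against would have looked something like this sketch. The mount paths are hypothetical, and the `mirror_tree` helper handles the directory pre-creation that made me shy away from this route:

```shell
#!/bin/sh
# Sketch of the ~20-stream rsync idea I considered but didn't use.
# Paths are hypothetical NFS mounts of the two FreeNAS volumes.

# Recreate the source's directory tree on the target first, since each
# per-directory rsync stream needs its parent directories in place.
mirror_tree() {
    src="$1"; dst="$2"
    ( cd "$src" && find . -type d ) |
        ( cd "$dst" && xargs -I{} mkdir -p {} )
}

# Then fan out one rsync per top-level directory, 20 jobs at a time:
# mirror_tree /mnt/backup-nas/file-storage /mnt/primary-nas/file-storage
# ls /mnt/backup-nas/file-storage |
#     parallel -j 20 rsync -a /mnt/backup-nas/file-storage/{}/ /mnt/primary-nas/file-storage/{}/
```

Even sketched out, you can see the fine-tuning problem: balancing the streams depends on how evenly the data is spread across those top-level directories.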

I then pointed all of my Plex servers, virtual and physical, to my backup FreeNAS system and tested until I got the data replicated back to my primary FreeNAS system with its new zvol.

Here’s what I did:

  • Removed the raidz volume: Storage → Pools → <Pool Name> → Pool Operations (gear next to the pool) → Export/Disconnect → <Delete Configuration/Data of Pool>.
  • Created the new pool: Storage → Pools → Create New Pool → Select Drives → <chose RAIDZ2, 6 drives>
    • Left me with 58 TB of usable space, able to withstand 2 drive failures. You lose about 2.45 TB to overhead off each 16 TB drive.
  • As I was getting happy and ready to start my replication over my 10GbE network, I recalled I hadn’t moved my primary NAS off its 1 Gb NICs yet.
  • I started DHCP on one of the 10GbE NICs and created the replication job to go over that NIC.
  • Started the replication from the backup FreeNAS to my primary, from the top zvol, with my backup dataset and my file storage dataset selected as one job.
  • The backup dataset went fine and finished replicating before I went to bed (which is usually the next morning anyway). This was only 900 GB.
  • When I woke up I found the replication had died 2.98 TB into the file storage dataset, which holds about 22 TB.
    • The mistake I made was pushing the data over my primary network, which caused issues with my internet radio streams and some other localized problems.
  • I decided to shut down the 10GbE NIC on my primary network from my primary FreeNAS and activate DHCP on the 10GbE NIC on my primary FreeNAS’s alternate network.
    • This is now going just fine, with about 14 TB of data replicated so far.
    • The network has been transferring data at a mean of 1.65 Gb/s, with sustained 15-20 minute bursts of 2.75-3.56 Gb/s.
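
The GUI pool rebuild above corresponds roughly to this CLI sketch. Pool and disk names are made up for illustration, and FreeNAS normally manages pools through its own middleware, so this is illustrative only:

```shell
#!/bin/sh
# Dry-run sketch of destroying a 3-disk raidz pool and recreating it
# as a 6-disk raidz2. Pool/disk names are hypothetical.
run() { echo "$@"; }   # dry-run; change to: run() { "$@"; } to execute

run zpool destroy tank                      # GUI: Export/Disconnect + delete data
run zpool create tank raidz2 \
    da0 da1 da2 da3 da4 da5                 # raidz2 survives 2 drive failures
```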

The other issue I was having was monitoring the progress of the replication. ZFS often doesn’t mount the volume in the OS until the replication finishes, so traditional tools like “df” don’t see the growing data.

I finally started using the following commands after re-familiarizing myself by going through some ZFS cookbooks.

This works on the initial replication, run from the CLI on the target, when the volume isn’t mounted:

“# zfs get -H -o value used <volume-name>”

2.98T

You can get the volume name from your replication job, of course, or from:

#zfs mount

#zfs get readonly ← You have to sift through the output

Once the initial replication failed, the volume became mounted, and you have to use a different command to view progress, though the OS tools still may not show it mounted.

#zpool list

This now shows the progress, and you can also follow it in the FreeNAS Dashboard GUI for the pool, since ZFS now has it listed and mounted.
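
Both phases can be wrapped in one small poll helper. The dataset name is from my setup, and the 60-second interval is arbitrary:

```shell
#!/bin/sh
# Poll replication progress: "zfs get used" covers the initial send while
# the dataset is unmounted; "zpool list" works once the pool shows usage.
DATASET="file-storage2/sonora-nfs1"

poll() {
    zfs get -H -o value used "$DATASET"        # data landed in the dataset
    zpool list -H -o alloc "${DATASET%%/*}"    # pool-level allocation
}

# Watch it grow:
# while sleep 60; do poll; done
```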

All is looking good. After the replication completes I will test, then start getting my VM volumes reconfigured and beef up the backup FreeNAS. Then I’ll go exclusively to my bl460c Gen8 XCP-Ng systems for my virtual environment.

October 4, 2020
by dhoytt
0 comments

Virtual Environment Expansion

I now have my new HPE bl460c Gen8 blade systems in my existing XCP-Ng hypervisor pool, temporarily alongside my Dell PowerEdge R710 systems, while I assess them. I need to keep them running to see how they handle the loads, how those loads affect my UPS power systems, the network throughput, plus storage connectivity. I migrated some VMs between them tonight and that went well, though one spiked and kicked the blade server fans to full blast, which caused one of my UPSes to go into alert as it was maxed out for a while.

I was waiting for my backup FreeNAS system to be ready, as well as getting my storage pools extended on my primary FreeNAS, before joining the blade servers to the XCP-ng pool, but that was derailed waiting for one more drive. I also had a non-working port on one of my 10GbE NICs and had to grab one from another server to test. The vendor sent a replacement relatively fast, and it has tested well.

I was initially going to extend my new FreeNAS storage pool by adding individual drives to the vdev, but I had read that wrong: the feature isn’t available yet. Extending ZFS vdevs with individual drives is a future FreeBSD feature, not something I can do in FreeNAS now.

Now, since I have a vdev of 3 drives in raidz, I could either destroy that, create a 5-drive vdev and pool, and replicate the data back from my backup FreeNAS (which has my 5 current 16 TB SAS drives that I imported my original drive pool onto), or order one more 16 TB drive, create a new vdev, and extend the storage pool that way on my primary FreeNAS. Currently this is how you extend a pool in FreeNAS with ZFS: you add a vdev with an equal number of drives of equal capacity. I chose to get one more drive and extend the pool’s 3-drive vdev with a new 3-drive vdev. I didn’t feel like replicating data from backup to primary while I would be making changes and testing on the 10GbE network I’m still solidifying in my blade chassis and my new Mikrotik switches.
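
The vdev route I chose comes down to a single command once the new drive arrives. Here is a dry-run sketch with hypothetical pool and disk names:

```shell
#!/bin/sh
# Dry-run sketch of extending a pool with a matching vdev: same raidz
# level, same drive count, same capacity as the existing vdev.
run() { echo "$@"; }   # dry-run; change to: run() { "$@"; } to execute

run zpool add tank raidz da3 da4 da5   # second 3-disk raidz vdev of 16 TB drives
```

Note that `zpool add` is one-way: once the vdev is added, it can’t be removed from a raidz pool, which is why matching the existing vdev matters.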

Overall, everything is looking solid and ready to move forward, so I can start spinning up new VMs and containers for some serious cloud and DevOps work, along with getting deeper into programming with the multiple environments I’ll be able to bring up at will.

September 1, 2020
by dhoytt
0 comments

Dhoytt Entire Site Update of Software and OS

I updated to the latest WordPress version, 5.5.1, and updated all of my themes and plugins, then decided to update the CentOS 7 Linux OS patches of this virtual system as well. I also took care of some pesky SSL issues where pages mixed SSL and non-SSL objects, so the site is more compliant and friendly to visitors. Now you will see the padlock indicating my site is safe. I updated the certificate and did the .htaccess SSL redirects over the weekend, Saturday around noon.
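
The .htaccess redirect was the standard mod_rewrite pattern, roughly like this (my exact rules may differ slightly):

```apache
# Force HTTPS for all requests (typical .htaccess mod_rewrite redirect).
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```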

August 31, 2020
by dhoytt
0 comments

Storing Tomatoes for Future

I haven’t always taken advantage of proper storage of my garden produce. Well now I am finally using my FoodSaver and planning on using my dehydrator both purchased years ago but never used.

Over the weekend I cleaned up the dehydrator and took the FoodSaver vacuum packing storage system out of the box.

The following are my first-time efforts at blanching and getting items ready for long-term vacuum-packed storage.

August 31, 2020
by dhoytt
0 comments

Busy Sunday

Started Sunday by staying up late Saturday night into Sunday morning, getting staged for preserving my garden vegetables and herbs. Then, upon waking up, my website was down, so I had to troubleshoot by looking in the logs. It turned out to be an entry in the SSL logs, as I had started making SSL updates and changes yesterday.

Due to the website issues, I got a late start on prepping the brisket I had set to thaw in the refrigerator and on cleaning the ash off my grills. I then cleaned and prepped the Traeger grill and got the brisket on. After that, I got my front lawn in order, since the lawn service kept canceling and would not be by until September 10th.

I like doing my own lawn, but when traveling for long periods I need someone, and I don’t want to find someone last minute. I used Sunday to get my rose bushes trimmed the way I normally have had them the past 14 years, so when the lawn service takes back over they better understand what I have told them before. A demonstration often shows so much better than words. The lawn service has been decent; I just need to show them how I want things done.

Then I fast-smoked the brisket in about 8 or 9 hours instead of 15, and it turned out tender to a fault, almost pulled beef. I also threw some beef ribs on since they had thawed out. The beef ribs’ meat was super tender too, fall-off-the-bone tender.

I had the ribs with dinner and a few pieces of the brisket late.

Here’s the visuals including my grits breakfast.

August 29, 2020
by dhoytt
0 comments

Big Garden Harvest and Nice Vegetable Casserole

Picked more vegetables from the garden, then feasted on an eggplant, squash, and tomato casserole. I also threw in a bunch of basil from the garden, along with garlic, a little pepper and salt, mixed with Parmesan cheese and olive oil.

Here’s pictures of the harvest and casserole.

Eggplant, squash, tomatoes and peppers 🌶.

August 23, 2020
by dhoytt
0 comments

Pizza Delight on Traeger Grill With Garden Vegetables

In looking for a different way to use my tomatoes out of my garden I decided to make fresh pizza sauce and a pizza from scratch.

I used Early Girl tomatoes, basil, and rosemary from my garden in the pizza sauce and in making some herb dough.

I like to cook but really don’t bake, so I had to go out and buy yeast for the first time in about 5 years. The yeast I had on hand had an expiration date of 2012, most likely from making plum or peach liquor around 2009 (no distilling!).

Anyway, I didn’t realize how long I had to let the sauce cook, nor, once I got the yeast, how long I needed to let the dough rise. Then I went around finding good Italian sausages and pepperoni. Besides what I already mentioned, I also used squash from the garden and some sliced olives from a can.

This was a disjointed impulse project, and now that I have an idea of how to do this, I will plan and make it smoother, plus enhance the experience.

Anyhow here’s the pictures of the process.

August 10, 2020
by dhoytt
0 comments

Steady Progress Updating Infrastructure

The past months I have been doing a complete, planned upheaval of my entire infrastructure. I’m adding more capacity to my NAS, implementing better RAID, and updating my FreeNAS. I’m moving my entire network to 10GbE. I’m moving all servers from rack mounts to HPE C7000 chassis servers, which I had to refamiliarize myself with after being away from my surplus find for about 7 months, having never really had the chance to dive into them. I have updated my XCP-ng hypervisors as well, and they will eventually be decommissioned as part of the migration to the C7000 chassis.

This total infrastructure rebuild, along with both parents passing and dealing with the estate and family drama, plus COVID-19, my rental property, and HOA board and Neighborhood Watch duties, means I have not had a boring moment in 2020. This doesn’t even include my possible plans to update my Linux and Windows workstations with hot-swap cases, as well as my gaming system, which has been left totally neglected for a while.

All of this requires exhaustive planning and testing before ordering even the smallest items: poring over compatibility charts, not just for hardware but for firmware, all without support contracts that would make it easy to get firmware for severe advisories or compatibility fixes. Just getting the proper delivery platform to carry the firmware to the various components is challenging. I think I have used about 10 different methods of applying firmware updates just for the C7000, not to mention the external switches and routers. With surplus parts coming from all over the country and locally, you need to jump in and test right away to determine whether they are working properly or you are doing something incorrectly, because returning parts is time lost and can stall progress, and with COVID-19, shipping has added days, sometimes a whole week.

I knocked out a couple of easy steps first, like updating the XCP-ng hypervisors to 8.1. I configured a few bl460c Gen7 blades to play with Virtual Connect, get supremely comfortable with it, and decide how to uplink to the network, and that’s when the real fun began. I had to update and apply firmware just to be able to get initial control of the C7000 through the OA and Virtual Connect environment via browsers.

Now, with that lengthy summary done, here’s a bulleted version of what I have done, minus things forgotten:

  • Updated my C7000 OA to the latest version, 4.95
  • Updated the HP VC Flex-10 Enet Module firmware to a version that would work with the OA and browsers.
  • Ordered 2 x Gen8 bl460c with 10-core CPUs and 128 GB of RAM for hypervisors
  • Configured a Gen7 bl460c to be the new W2012-r2 music streaming server
    • After the above firmware updates, this Gen7 bl460c and others hung, took over 20 minutes to boot into the OS, and when they did, W2012-r2 crashed.
    • Updating to the latest Gen7 SPP and applying individual ROM updates didn’t solve the issue
    • Discovered it was due to the Flex NICs’ firmware, but firmware updates, even ones from advisories about this issue, never resolved it
    • Turned my attention to the 10GbE network and NAS, as I was planning to virtualize this streaming server anyway, just not this soon.
    • Came back to this issue recently while waiting for the NAS backup data to finish replicating before touching the network again.
      • Found another bl460c Gen7 blade that booted up fine and had what looked like a totally different development fork for its firmware. This server booted properly, so I ended up installing a W2019 evaluation on it, installed my streaming software, and copied the database over from MySQL on my current server to MariaDB on the new W2019 server using HeidiSQL.
      • The working bl460c Gen7 blade has “emulex bios v11.1.183.23”, while the other bl460c Gen7 blades had versions like 4.0.360.15. It took me a long time to find the firmware with the proper delivery method that matched the firmware files I had, but I finally found a bootable ISO, Legacy_OneConnect-Flash-11.1.183.635.ISO, that updated my OneConnect onboard NICs and resolved the bl460c Gen7 boot issues.
  • For my FreeNAS I got some 16 TB SAS drives, created a temporary striped volume, and started replicating my NFS data and backups, all 22 TB worth.
    • I started with rsync, but since there was no way to pipe it through parallel (the command isn’t on FreeNAS), I switched to FreeNAS replication, which ended up being the better, faster option
      • Initially the replication didn’t work until I updated to FreeNAS 11-2-UB from 11.2-U1, which had a bug.
  • For my 10GbE external switches I went with the Mikrotik CRS317 and made 3 different bridges for my 3 different networks.
    • This ended up being problematic with the NAS, and given the huge learning curve involved, I ended up getting 2 Mikrotik CRS309s to keep the networks physically separated.
    • More to this story later.
  • I also scored 2 ProCurve 6120XG switches for my C7000. They were a serious pain to update firmware on and wipe the configs of, and one finally turned out to be bad. I swapped it out with the vendor I bought it from, who has been a pleasure to work with, another former HP person who’s close by with much of what I need in HP supplies.
  • On my FreeNAS I now have to put my original pools into a respectable RAID layout, which I should have done earlier and told myself I would, but due to time I just kept adding to the current RAID.
  • SFPs have been crazy as well. To uplink from the HP VC Flex-10 Enet Modules you need proprietary HP SFPs that work with Virtual Connect on each end of the fiber for 10GbE. Even the RJ45 SFPs for Virtual Connect have to be HP’s.
  • The ProCurve 6120XG seems to have its own set of SFPs too, though I found that if you can track down SFPs built to the “MSA” standard, which cost considerably more, they would most likely work.
  • I found that Mikrotik has an SFP, the “Mikrotik S+RJ10”, that will let me run a CAT6/5 copper solution into their switches. Right now it works with the 1 Gb links from my routers to the appropriate switches, but it will also work with 10GbE once I get to that point.
  • I still need to move my stack of UPSes into the rack once I phase out more systems.
  • Plus I may build another NAS to back up the one I currently have.
  • Even my KVM decided to go haywire for some reason this past week, so I will be looking for a more solid replacement as well.

Well, that is what I have been up to infrastructure-wise the past several months, along with the other craziness. I still have a ways to go and will hopefully catch up on all these tasks as I complete them! All this should end up being a scalable, future-proof environment where I can start some serious programming, testing, and hosting, and one that will lower my power draw as well.

August 2, 2020
by dhoytt
0 comments

Huge Squash Many Tomatoes Today’s Garden Harvest

I ended up with huge squash picking my garden today, along with a huge amount of tomatoes: yellow pear, cherry husky, early girl, and beef eater. I also got eggplant, Serrano peppers, Havasu peppers, and raspberries, as well as gathering my normal daily herbs of basil, sage, and rosemary for things like dip sauce for tri-tip, squash salad, and salsa.

The huge squash and overabundance of tomatoes mean I haven’t been harvesting in a timely manner.

The Sunday harvest