February 28, 2021
by dhoytt

Chili on Traeger with Chuck Roast

I was looking for a different way to use a well-marbled chuck roast I picked up from Costco and decided to slow smoke it and use it in chili 🌶. A slow-smoked chuck roast is almost like brisket but takes less time to cook since its fibers aren’t as tightly packed.

Instead of cooking the chuck roast separately from the beans and other ingredients, I placed the beans, peppers 🌶, onions 🧅, and sauce in a dutch oven below the rack the chuck roast was smoking on so the wonderful drippings from the chuck roast would go into the chili.

This worked well except I discovered my Traeger Silverton 810 wasn’t keeping an accurate cabin temperature. I already knew the built-in temperature probe wasn’t reading accurately, as I had Traeger support send a replacement and was using my replacement thermometer. Then, when the chuck roast stalled too long, I placed another thermometer not in the meat but on the grill’s inner cabin, and it read 20 degrees Fahrenheit less than indicated. Even after raising the set temperature, the internal temperature didn’t rise much. At this point the chuck roast was about 165°F and it was getting so late I had to cook a steak for dinner with some frozen vegetables, then took the chuck roast and dutch oven ingredients off the grill.

After taking the chuck roast and the other ingredients in the dutch oven off the grill, I put the dutch oven on the stove in the house and cut the roast. The chuck roast was still unbelievably tender and juicy. I placed the chunks into the dutch oven and continued cooking. I wanted to go to bed, but more cooking time was needed since the beans still weren’t tender enough and the chili hadn’t reduced down and was watery. I thought about finishing the cook in a crock pot but didn’t want to dig that out and dirty more dishes. Instead I placed the cast iron dutch oven into my kitchen oven at about 110 degrees and set the timer for 5 hours. This way the food could continue to cook super slow in the cast iron, then the oven would turn off and the pot would slowly cool, ready to put in the fridge not long after I woke up, ate, and got my storage containers ready.

The plan worked nicely. I got up very late, as I went to bed about 3 am, and combined with going to bed about 4 am Friday without ample rest, Saturday I needed to sleep in, and the chili was ready. I made myself a brunch of breakfast-style potatoes covered with chili I microwaved for a minute just to get it piping hot, with a couple of eggs 🥚 on top and toast. The brunch was some great comfort food to start a late, relaxing Sunday.

Here are some pictures of the process.

February 20, 2021
by dhoytt

Updated Systems in My Environment

I decided to do some updates on several systems in my environment. I updated my streaming media server running on W2019 Data Center and my Centos 8 Linux system that runs my music server relays, Plex Media Server, and Zoneminder CCTV software. I also updated my Fedora 33 Linux workstation, my W10 Pro Workstation machine, my Centos 7 Linux VM website server, and another Centos 7 VM running Plex Media Server, plus some test VMs I’m eventually going to stand up some gaming servers on.

After updating my Centos 8 system I needed to run the Zoneminder update Perl script “zmupdate.pl” to upgrade the database to match Zoneminder version 1.34.23 and then “systemctl start zoneminder” to start Zoneminder. IceCast and ShoutCast came up just fine after the changes I made a few months ago.
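As a rough sketch, the sequence boils down to this (a hypothetical wrapper function of my own, not a ZoneMinder-provided script; run as root):

```shell
# Hedged sketch of the post-update steps above.
upgrade_zoneminder() {
  zmupdate.pl                  # upgrade the database to match the new version
  systemctl start zoneminder   # then start Zoneminder back up
}
```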

I also updated WordPress plugins and themes after backing up the database.

Everything looks good after these updates!

February 15, 2021
by dhoytt

Gritty HPE C7000 Blade Enclosure Update

I updated my HPE C7000 blade enclosure’s Virtual Connect modules to VC-Enet 10/24. I then updated the firmware to 4.85 so that it wouldn’t rely on Flash in the browser, which as of this year is blocked for security reasons. That all went pretty well. I was of course unable to trick the new modules into taking the old configuration since the hardware is pretty different, but a guy has to try; I gave it just one try due to the known futility.

Once I got the old modules out and the new modules in and needed to start the new network configurations, I discovered some of the fiber SFPs showed as unsupported in Virtual Connect Manager. This was a mixture of 10Gb SFPs and 1Gb RJ45 adapter SFPs for things like my CCTV cameras. I got that sorted after I went to BJ’s restaurant and got some food, grabbed a nice cold beer out of the garage fridge, nourished myself, and watched whatever was on the TV when I turned it on. Turns out the 1Gb RJ45 SFPs needed to be in a select spot: one of the four right-hand ports as you face the modules. The 10Gb modules were a different story, as the SFPs with part number 455891-001 no longer worked, so I had to swap a lot of things around to get SFPs with part number 455885-001 into the module ports, plus at the other end in the MikroTik switches. I’m sure this is all in the quick specs for the Virtual Connect modules, and I’ll order some more for backups and other configurations later.

After tracing wires and making sure they were going to the appropriate switches and ports, I decided to log into the MikroTik switches and name them appropriately, since Virtual Connect shows the switch name in its port information if it’s a managed switch. That makes it easy to verify I’m working with the correct network without peeking at the switch or ports, or getting off my butt.
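For reference, naming the switch is a one-liner in the RouterOS console (the name here is just an example, not one of my actual switch names):

```
/system identity set name=third-network-sw1
```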

The goal in IT is always to engineer or design things for monitoring and manipulation while butt-sitting, with access from anywhere in the world if possible. Put in that work on the front end, and then only serious hardware upgrades like today’s require you to get down and dirty! Back your S#$% up too!!

Creating the networks again and making sure the server profiles had the appropriate networks assigned to the various ports went rather smoothly.

The bl460c gen7 servers came up smoothly: one with W2019 Data Center, which runs the Spacial Audio SAM Broadcaster software I use to control the music, and the Centos 8 server with the ShoutCast and IceCast relays I send the music out to the world on. My only remaining issue isn’t a concern at this time: the bl460c gen8 servers going into power overload and shutting down with power overload symptoms, or getting the error “1706 the extended bios data area in server memory has been overwritten..” and then shutting down.

Well, I started looking for some firmware updates, but you know how that goes: I could only find old firmware for my 530m or 554m FlexFabric NICs. I started trying to regulate the power settings via iLO, but I don’t have the Advanced iLO license on my bl460c gen8 systems. I then hit this older article on a blog: https://practicaladmin.wordpress.com/2012/12/10/firmware-mismatch-with-flexfabric-adapters-causing-gen8-bl460-blades-to-not-post/. Well, of course, why not do the old process of elimination? I took my flex NICs out one by one, and the system finally booted when neither was inserted. Well, I was connecting to my 3rd network off the FlexFabric NIC in mezzanine slot 2 and had some wild plans for the NIC in slot 1.

My wild plans will have to wait. I decided to take my remaining 455885-001 SFPs that had been creating a stacked link between the flex modules and use them to run a flex module port in bay 1 to my third network’s switch for my XCP-Ng hypervisors, which are my bl460c gen8 blade servers. I then made the corresponding changes to my server profiles, booted up, and presto, I have my beautiful XCP-Ng hypervisors back up! Later I’ll move back the VMs I migrated to my old reliable Dell R710s while I performed this update. Time for bed, and then I’ll get up and tackle why some ports are not forwarding on my switch for my third network.

January 30, 2021
by dhoytt

New Workstations Notes

Well, I built 2 new powerful workstations, one using Windows 10 Pro for Workstations and the other Fedora 33 Workstation, but of course with all types of server packages. The Windows workstation uses an ASRock X299 WS/IPMI motherboard and the Fedora 33 machine an ASUS X99-WS/IPMI. I put both into cases with hot-swap SAS backplanes, with matching SAS controllers and K1200 Quadro GPUs.

I have had to update both motherboards’ BIOS and IPMI. The ASUS gave me a few more challenges, but tonight I was able to get the IPMI working. I’m not sure I’m happy with it, though, since it requires the onboard VGA to be activated, and when it’s activated it doesn’t seem to allow my NVIDIA K1200 Quadro to run; it’s one or the other, and I am not going back to 1024×768 video or the ’90s when I have these nice DisplayPort GPUs. I will see what suggestions ASUS support can give me; they have been pleasant, but I’m hoping for more.

Anyhow, I’ve tried every iteration I could think of with monitors and in the BIOS, now to determine whether I want to keep the ASUS, which initially came out a few years before the ASRock X299 motherboard I bought, or whether I can get it to perform like the ASRock, where I can use a better resolution of 1920×1080 even on the VGA port and then extend to the DisplayPorts on my K1200 Quadro.

More on this later; going to try to get some sleep, and to think I thought I was going to bed early at about 11, that didn’t work out!

November 18, 2020
by dhoytt

Updated Web Server OS Patches and WordPress Plugins\Themes

Since I’m here in my computer room monitoring some migration processes for the outside job, which seem to be going well, I decided to update my music streaming server running the W2019 Datacenter version on one of my Bl460c blade servers @ https://dhoytt.com/snake-ice-radio-blog/, then updated some plugins and themes on my WordPress CMS blog, and then my web virtual server OS, which runs Centos 7 Linux. I host all of this here in my computer room on XCP-Ng, which rides on my HPE C7000 Bl460c blade servers.

I already had the fun of updating my Linux laptop over the weekend to Fedora 33 and updating the OS earlier today. I also installed a new SSD drive in that laptop. I started to update my Linux workstation, but decided I want to wait and move to a different case with hot-swap drives and a motherboard with IPMI\remote management, plus an enterprise SAS SSD boot drive and a higher-capacity SAS drive for storage, most likely 8TB SAS, but I may get greedy and go 16TB enterprise SAS.

I have just done a ton of updates in my computer room, completely turning over servers and network switches (10GbE now).

I digress; anyhow, the web server is updated, and the WordPress CMS blog as well, along with my music streaming server!

October 31, 2020
by dhoytt

Maintenance to XCP-Ng Hypervisors Updated Zoneminder Plus Installed Patches Updated WordPress

My BL460c gen8 blade servers have been really noisy lately, with the fans kicking up a lot more as I put more of a load on them, so I tried a setting that required a reboot: in the BIOS I set the RBSU Thermal Configuration to “Optimal Cooling” (https://techlibrary.hpe.com/docs/iss/proliant_uefi/UEFI_TM_030617/s_select_thermal_config.html).

After updating my 2 hypervisors, I also decided to update the BL460c gen8 blade server I run Centos 8 Linux on with the latest security, kernel, and a few other patches. I also ended up updating to Zoneminder 1.34.21-2 as part of the updates. After running zmupdate.pl, Zoneminder came up just fine. As a maintenance step, from the GUI I also set Zoneminder to trim some of my events.

After all of that I also went to my WordPress server and updated WordPress after backing up my database.

October 19, 2020
by dhoytt

Systems Back In Place

I have moved everything around, placing the large UPSes in the rack and moving my FreeNAS systems and older R710 systems into the rack above them. I still have quite a bit to do, like more cable management, in incremental stages. DNS is still propagating away from the quick WordPress maintenance website I put up on a hosting service I had never used before.

October 10, 2020
by dhoytt

Replicating Data Over Again Using FreeNAS

Well, I ended up having to re-push my data after the initial push failed, and I then re-pushed using a different replication job of just the one dataset I needed for that volume. It appears the second replication dumped the dataset’s data in the zvol’s root and didn’t create a separate dataset. I looked at the new replication’s configuration, and I think that since I ticked the box for just the dataset, not the zvol, in the simplified version in the GUI, it didn’t include the zvol name in the destination path.

I figured no big deal: rename using the ZFS tools and also change the mountpoints. That worked on the CLI but didn’t translate to the GUI, plus the aborted effort with the dataset was still there. Once I had deleted the old dataset, gone back and forth with various commands not propagating to the GUI, and seen the CLI warning that changes made on the CLI would not be kept in the database after a reboot, I decided to redo the original replication job, just changing it to my secondary network.
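The rename-and-remount attempt looked roughly like this on the CLI (a hypothetical wrapper; the pool and dataset names and the /mnt prefix are illustrative, not my exact ones):

```shell
# Hedged sketch: move a dataset under its intended parent, then repoint
# its mountpoint to match. Names below are examples only.
fix_dataset_location() {
  old=$1; new=$2
  zfs rename "$old" "$new"
  zfs set mountpoint="/mnt/$new" "$new"   # assumes pools mount under /mnt
}
```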

Now everything seems to be going well and not interfering with things on my primary network. I can see the progress on the CLI using commands that show the percentage of time left in the replication and the size of the data being transferred.

# ps -aux | grep send <– from the target, shows progress
# zfs get -H -o value used file-storage2/sonora-nfs1 <– using the dataset name, gets data-transferred information from the source

Just a little addendum: the percentage shown by “ps -aux | grep send” seems inaccurate, and I was wondering about it the whole time, since I have 22TB of data and the percentage was consistently low compared to the data shown transferred to the target by the command “zfs get -H -o value used file-storage2/sonora-nfs1” on the target and in the GUI. This held true, as the replication ended right around the 50% mark of the ZFS send information, when the transferred data reached the expected levels.

Shows replication finished successfully

October 6, 2020
by dhoytt

New Drive Reconfiguration FreeNAS Volume

I got my new 16TB SAS drive in yesterday and then started to create another vdev to extend my current raidz volume, then decided instead to create one large vdev and volume out of my 16TB drives in raidz2. Before destroying my current raidz volume with 3 drives in the vdev, I compared the data with the backup data on my backup drive that used to be my primary, and it was identical.

I thought about using “rsync” with the “parallel” command so I could have as many as 20 streams going from one of my spare bl460c gen7 blades I have Centos 7 installed on. The thought was to NFS-mount the volumes from each FreeNAS on the Centos 7 system using the alternate network, then run about 20 rsync streams to take advantage of the 10GbE network. After getting the “parallel” package installed and the NFS mounts tested and in “/etc/fstab”, I decided against it, because with rsync piped to parallel you have to create the directories and sub-directories first, and there could be a lot of back and forth and fine-tuning. I decided to use the FreeNAS built-in ZFS replication instead.
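For the curious, the rsync-plus-parallel idea I abandoned would have looked something like this; the paths and stream count are illustrative, and it assumes both volumes are already NFS-mounted locally:

```shell
# Hedged sketch of the approach I decided against: one rsync stream per
# top-level directory, up to 20 running at once via GNU parallel.
parallel_sync() {
  src=$1; dest=$2
  find "$src" -mindepth 1 -maxdepth 1 -type d |
    parallel -j 20 rsync -a {} "$dest"/
}
```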

I then pointed all of my Plex servers, virtual and physical, to my backup FreeNAS system, and will run from there until I get the data replicated back to my primary FreeNAS system with my new zvol.

Here’s what I did:

  • Removed the raidz volume: Storage → Pools → <Pool Name> → Pool Operations (gear next to pool) → Export/Disconnect → <Delete Configuration/Data of Pool>.
  • Storage → Pools → Create New Pool → Select Drives → <chose RAIDZ2, 6 drives>
    • This left me with 58TB usable space, able to withstand 2 drive failures. You lose about 2.45 TB off each 16TB drive to overhead and how the OS counts capacity.
  • As I was getting happy and ready to start my replication over my 10GbE network, I recalled I hadn’t moved my primary NAS off the 1Gb NICs yet.
  • I started DHCP on one of my 10GbE NICs and created the replication to go over that NIC.
  • Started the replication from the backup FreeNAS to my primary from the top zvol, with my backup dataset and my file storage dataset selected as one job.
  • The backup dataset, only 900GB, went fine and replicated before I went to bed, which is usually the next morning anyway.
  • The replication died 2.98TB into the file storage dataset, which has about 22TB, I found once I woke up.
    • The mistake I made was to push the data over my primary network, which created issues with my internet radio network streams and some other localized issues.
  • I decided to shut down the 10GbE NIC on my primary network from my primary FreeNAS and activate DHCP on the 10GbE NIC on the alternate network of my primary FreeNAS system.
    • This is now going just fine, at about 14TB of data replicated so far.
    • The network has been transferring data at a mean of 1.65Gb/s, with sustained 15–20 minute bursts of 2.75–3.56Gb/s.

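The usable-space figure above checks out as rough arithmetic. This little sketch (my own back-of-the-envelope, not FreeNAS’s exact accounting) shows where most of that “lost” space per drive goes: vendors count decimal TB while the OS reports binary TiB, with the rest being ZFS overhead.

```python
# Back-of-the-envelope raidz2 capacity check (illustrative only:
# ZFS slop space and metadata overhead are ignored here).
def raidz2_usable_tib(drives: int, drive_tb: float) -> float:
    data_drives = drives - 2                    # raidz2 spends 2 drives on parity
    raw_bytes = data_drives * drive_tb * 10**12  # vendor "TB" is decimal
    return raw_bytes / 2**40                    # the OS reports binary TiB

print(round(raidz2_usable_tib(6, 16), 1))  # ~58.2, close to the 58 reported
```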
The other issue I had was monitoring the progress of the replication. With ZFS, the volume often isn’t mounted in the OS until replication finishes, so traditional tools like “df” don’t see the growing size of the data.

I finally started using the following commands after re-familiarizing myself with some ZFS cookbooks.

This works on an initial replication, from the CLI on the target, when the volume isn’t mounted:

# zfs get -H -o value used <volume-name>

You can get the volume name from your replication job, or from:

# zfs mount

# zfs get readonly ← you have to sift through the output

Once the initial replication failed, the volume became mounted, and you have to use a different command to view the progress, though the OS tools still may not show it as mounted.

# zpool list

This now shows the progress, and you can follow it in the FreeNAS dashboard GUI for the pool, since ZFS has it listed and mounted.

All looking good. After the replication completes I’ll test, then start getting my VM volumes reconfigured and beef up the backup FreeNAS. Then I’ll go exclusively to my bl460c gen8 XCP-Ng systems for my virtual environment.