Increased Drive Size on ZoneMinder CCTV and Plex Physical Server with Clonezilla

I bought a new, larger hard drive locally and finally got a chance to increase the root size of my BL460c G7 blade server, which serves as my ZoneMinder CCTV system and Plex movie streaming system. The Plex database gets large because of the amount of media I have; the media files are accessed over the network, but the database stays local. I started running out of space on “/” because I didn’t feel the need to split off /var, and I should have.

Anyhow, I went from a 146GB 15K SAS drive to a 300GB 15K SAS drive. I like to keep my boot drives as fast as possible.

I downloaded the latest Clonezilla Live ISO along with the latest version of balenaEtcher, which I used to write Clonezilla to a USB stick and make it bootable.
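For reference, the same bootable USB can be made from the command line instead of balenaEtcher. This is just a sketch: /dev/sdX stands in for whatever device the stick shows up as, and the ISO filename is a placeholder for whatever you downloaded (check lsblk carefully first, since dd will happily overwrite the wrong disk):

# identify the USB stick before writing anything
lsblk
# write the Clonezilla Live ISO straight to the stick (it is a hybrid ISO, so dd works)
sudo dd if=clonezilla-live.iso of=/dev/sdX bs=4M status=progress conv=fsync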

Once I booted into Clonezilla from the USB stick I created with balenaEtcher, I used Clonezilla to save an image of my boot drive (the one holding “/”) to my NAS via NFS. I then swapped the 146GB drive out for the 300GB drive and restored the image to it. Finally I grew the partition and the logical volume, and voila, I have a ton more space and could expand further if I needed to.
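Clonezilla’s menus walk you through all of this, but under the hood it mounts the image repository on /home/partimag and drives ocs-sr, so the save and restore look roughly like the sketch below. The image name is a placeholder, I’m assuming the same NFS export that shows up later in the df output, and I’ve left off the extra options (compression, partclone priority, bootloader reinstall) that the wizard normally fills in for you:

# from a Clonezilla Live shell: mount the NAS export as the image repository
mount -t nfs 192.168.0.8:/mnt/file-storage2/sonora-nfs1 /home/partimag
# save the whole boot disk as an image (run before the drive swap)
ocs-sr savedisk coral-boot-img sda
# restore that image onto the new 300GB disk (run after the drive swap)
ocs-sr restoredisk coral-boot-img sda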

Here are the basic steps:

  • Created a bootable Clonezilla USB with balenaEtcher from my Fedora Workstation
  • Booted off the Clonezilla USB and saved the boot drive image to FreeNAS via NFS
  • Shut down the system, pulled the 146GB boot drive, and installed the 300GB drive as its replacement
  • During boot, went into the RAID utility and deleted the 146GB virtual drive
  • Still in the RAID utility, created a new RAID0 virtual drive spanning the full 300GB drive
  • Booted back into Clonezilla Live from the USB and restored the disk image from the NAS via NFS
  • Rebooted off the new drive into the OS successfully and verified everything was working
  • Deleted and recreated (larger) the partition holding “/” in fdisk, as shown below
# fdisk /dev/sda
Command (m for help): p
Disk /dev/sda: 279.4 GiB, 299966445568 bytes, 585871964 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x750d2bfe

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 286676991 284577792 135.7G 8e Linux LVM

Command (m for help): d
Partition number (1,2, default 2):

Partition 2 has been deleted.

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (2-4, default 2):
First sector (2099200-585871963, default 2099200):
Last sector, +sectors or +size{K,M,G,T,P} (2099200-585871963, default 585871963):

Created a new partition 2 of type 'Linux' and of size 278.4 GiB.
Partition #2 contains a LVM2_member signature.

Do you want to remove the signature? [Y]es/[N]o: N

Command (m for help): w

I rebooted, since the in-use partition kept the kernel from re-reading the new partition table.

# shutdown -r 0
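In theory partprobe (or partx) can ask the kernel to re-read the partition table without a full reboot, but because the root filesystem lives on this same disk the in-use partition usually can’t be re-read, so the reboot is the simplest path. For the record:

# often fails with "Device or resource busy" when the disk holds the mounted root filesystem
partprobe /dev/sda
# check whether the kernel has picked up the new size of sda2
lsblk /dev/sda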

After the reboot I still didn’t have any free physical extents to expand the logical volume with.
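The summaries that follow look like trimmed vgdisplay -v output; I didn’t capture the exact commands in the session, but these are the usual ways to check what LVM thinks it has to work with:

# verbose view of the volume group with its logical and physical volumes
vgdisplay -v cl
# or the compact one-line-per-object reports
pvs
vgs
lvs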

 --- Volume group ---
  VG Name               cl
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <135.70 GiB
  PE Size               4.00 MiB
  Total PE              34738
  Alloc PE / Size       34738 / <135.70 GiB
  Free  PE / Size       0 / 0
  VG UUID               Yfe0bt-qxYw-Vme0-nioR-WexB-4Rku-xfW5SG
 --- Logical volume ---
  LV Path                /dev/cl/root
  LV Name                root
  VG Name                cl
  LV UUID                FnDp59-R9uo-kaTr-jRrF-WW35-maSm-qMmNze
  LV Write Access        read/write
  LV Creation host, time coral, 2020-08-28 00:46:01 -0700
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
--- Physical volumes ---
  PV Name               /dev/sda2
  PV UUID               e3FYP0-Us7d-fjUO-MfNI-BpbG-L1eU-jbqwYN
  PV Status             allocatable
  Total PE / Free PE    34738 / 0

So now I make the rest of the drive available to LVM by resizing the physical volume:

# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
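A quick sanity check that the physical volume actually grew before touching any logical volumes; PSize should now be roughly 278 GiB with about 142 GiB free, matching the vgdisplay output below:

# one-line summary of the physical volume after the resize
pvs /dev/sda2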

I now have space to expand within LVM:

 --- Volume group ---
  VG Name               cl
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               278.36 GiB
  PE Size               4.00 MiB
  Total PE              71261
  Alloc PE / Size       34738 / <135.70 GiB
  Free  PE / Size       36523 / <142.67 GiB
  VG UUID               Yfe0bt-qxYw-Vme0-nioR-WexB-4Rku-xfW5SG
  --- Logical volume ---
  LV Path                /dev/cl/root
  LV Name                root
  VG Name                cl
  LV UUID                FnDp59-R9uo-kaTr-jRrF-WW35-maSm-qMmNze
  LV Write Access        read/write
  LV Creation host, time coral, 2020-08-28 00:46:01 -0700
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Physical volumes ---
  PV Name               /dev/sda2
  PV UUID               e3FYP0-Us7d-fjUO-MfNI-BpbG-L1eU-jbqwYN
  PV Status             allocatable
  Total PE / Free PE    71261 / 36523

I then extended the logical volume and the filesystem in one step (the -r flag has lvextend grow the XFS filesystem along with the LV):

# lvextend -L 100G /dev/mapper/cl-root -r
  Size of logical volume cl/root changed from 50.00 GiB (12800 extents) to 100.00 GiB (25600 extents).
  Logical volume cl/root successfully resized.
meta-data=/dev/mapper/cl-root    isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 26214400
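I only took another 50G here, which leaves free extents in the VG for later. As a sketch (not something I ran this time), the percent syntax would hand the root LV everything that’s left, again with -r growing the XFS filesystem in the same step:

# grow cl/root into all remaining free space in the VG and resize the filesystem with it
lvextend -l +100%FREE -r /dev/mapper/cl-root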

I confirmed the new filesystem size, checked that everything was operational, and called it done.

# df -kh
Filesystem                                  Size  Used Avail Use% Mounted on
devtmpfs                                     40G     0   40G   0% /dev
tmpfs                                        40G  198M   39G   1% /dev/shm
tmpfs                                        40G  9.8M   40G   1% /run
tmpfs                                        40G     0   40G   0% /sys/fs/cgroup
/dev/mapper/cl-root                         100G   49G   52G  49% /
/dev/mapper/cl-home                          72G  3.1G   69G   5% /home
/dev/sda1                                   976M  341M  569M  38% /boot
/dev/mapper/zoneminder-lvol0                1.8T  1.4T  401G  77% /zoneminder
192.168.0.8:/mnt/file-storage2/sonora-nfs1   32T   26T  6.1T  81% /nfs1
tmpfs                                       7.9G   20K  7.9G   1% /run/user/42
tmpfs                                       7.9G  4.0K  7.9G   1% /run/user/1000
# ps -ef | grep plex
plex        1983       1  0 23:12 ?        00:00:03 /usr/lib/plexmediaserver/Plex Media Server
plex        2967    1983  0 23:12 ?        00:00:02 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resources/Plug-ins-e2e58f321/Framework.bundle/Contents/Resources/Versions/2/Python/bootstrap.py --server-version 1.23.6.4881-e2e58f321 /usr/lib/plexmediaserver/Resources/Plug-ins-e2e58f321/System.bundle
plex        3232    1983  0 23:12 ?        00:00:00 /usr/lib/plexmediaserver/Plex DLNA Server
plex        3245    1983  0 23:12 ?        00:00:00 /usr/lib/plexmediaserver/Plex Tuner Service /usr/lib/plexmediaserver/Resources/Tuner/Private /usr/lib/plexmediaserver/Resources/Tuner/Shared 1.23.6.4881-e2e58f321 32600
plex        3297    1983  0 23:12 ?        00:00:02 Plex Plug-in [com.plexapp.agents.imdb] /usr/lib/plexmediaserver/Resources/Plug-ins-e2e58f321/Framework.bundle/Contents/Resources/Versions/2/Python/bootstrap.py --server-version 1.23.6.4881-e2e58f321 /usr/lib/plexmediaserver/Resources/Plug-ins-e2e58f321/PlexMovie.bundle
root        8171    4424  0 23:18 pts/0    00:00:00 grep --color=auto plex
# systemctl status zoneminder
● zoneminder.service - ZoneMinder CCTV recording and security system
   Loaded: loaded (/usr/lib/systemd/system/zoneminder.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/zoneminder.service.d
           └─zm-httpd.conf
   Active: active (running) since Sat 2021-08-21 23:12:36 PDT; 6min ago
  Process: 3342 ExecStart=/usr/bin/zmpkg.pl start (code=exited, status=0/SUCCESS)
 Main PID: 3842 (zmdc.pl)
    Tasks: 26 (limit: 513304)
   Memory: 9.3G
   CGroup: /system.slice/zoneminder.service
           ├─3842 /usr/bin/perl -wT /usr/bin/zmdc.pl startup
           ├─3886 /usr/bin/zmc -m 1
           ├─3890 /usr/bin/zmc -m 2
           ├─3904 /usr/bin/zmc -m 3
           ├─3910 /usr/bin/zmc -m 4
           ├─3914 /usr/bin/zmc -m 5
           ├─3919 /usr/bin/perl -wT /usr/bin/zmfilter.pl --filter_id=1 --daemon
           ├─3924 /usr/bin/perl -wT /usr/bin/zmfilter.pl --filter_id=2 --daemon
           ├─3929 /usr/bin/perl -wT /usr/bin/zmwatch.pl
           ├─3934 /usr/bin/perl -wT /usr/bin/zmtelemetry.pl
           └─3945 /usr/bin/perl -wT /usr/bin/zmstats.pl

Aug 21 23:18:42 coral zmc_m2[3890]: INF [zmc_m2] [Front-Porch_Speco-2: 11000 - Analysing at 29.91 fps from 10999 - >
Aug 21 23:18:42 coral zmc_m5[3914]: INF [zmc_m5] [Upstairs-Uview-2: 9300 - Capturing at 25.37 fps, capturing bandwi>
Aug 21 23:18:44 coral zmc_m4[3910]: INF [zmc_m4] [Kitchen-Uniview-1: 9300 - Capturing at 25.38 fps, capturing bandw>
Aug 21 23:18:44 coral zmc_m3[3904]: INF [zmc_m3] [Front-Driveway-Vitek-1: 10900 - Capturing at 29.65 fps, capturing>
Aug 21 23:18:44 coral zmc_m3[3904]: INF [zmc_m3] [Front-Driveway-Vitek-1: 10900 - Analysing at 29.64 fps from 10899>
Aug 21 23:18:45 coral zmc_m1[3886]: INF [zmc_m1] [Backyard-1-Speco: 11100 - Capturing at 30.12 fps, capturing bandw>
Aug 21 23:18:45 coral zmc_m1[3886]: INF [zmc_m1] [Backyard-1-Speco: 11100 - Analysing at 30.12 fps from 11099 - 109>
Aug 21 23:18:45 coral zmc_m2[3890]: INF [zmc_m2] [Front-Porch_Speco-2: 11100 - Capturing at 29.94 fps, capturing ba>
Aug 21 23:18:45 coral zmc_m2[3890]: INF [zmc_m2] [Front-Porch_Speco-2: 11100 - Analysing at 30.15 fps from 11099 - >
Aug 21 23:18:46 coral zmc_m5[3914]: INF [zmc_m5] [Upstairs-Uview-2: 9400 - Capturing at 25.39 fps, capturing bandwi>
