Look at ZFS, it’s a bit more intelligent about using the space. Your drives become part of a pool that you create ‘datasets’ (basically virtual drives) from, and you can choose your level of redundancy (including none at all if you want to roll the dice there).
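A minimal sketch of what that looks like, assuming OpenZFS; the pool name tank, the dataset name media, and the device paths are all placeholders:

```
# Create a raidz1 pool (single-parity redundancy) from five drives:
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Carve a dataset out of the pool; it shares the pool's free space:
zfs create tank/media

# No redundancy at all (a plain stripe), if you'd rather roll the dice:
# zpool create tank /dev/sda /dev/sdb
```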
I have a 20TB array, 16TB available. It’s already saved me from a lost disk. I’m also using 5x Seagate 4TB 5400 RPM drives, with an NVMe drive for the ZIL (a separate log device, which speeds up synchronous writes). I have a 32GB ARC (ZFS’s read cache in RAM), so even though the drives are slow, the RAM and the NVMe drive ensure that it always feels snappy.
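Roughly how that setup is wired up, assuming Linux/OpenZFS; the NVMe partition path is a placeholder, and the ARC cap is a module parameter set in bytes:

```
# Attach an NVMe partition as a separate log device (SLOG), so the
# ZIL lives on fast flash instead of the spinning disks:
zpool add tank log /dev/nvme0n1p1

# Cap the ARC at 32 GiB (34359738368 bytes); this does not persist
# across reboots unless you also set it in /etc/modprobe.d:
echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
```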
You can use zfs send to clone the data to a new system without it having to be an exact copy of your original setup (unlike drive images). ZFS is a copy-on-write filesystem, so it supports snapshotting; a snapshot only records the block-level diffs, which makes it very space-efficient, since it stores just the blocks of a file that actually changed.
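A sketch of that workflow, with hypothetical dataset, snapshot, and host names:

```
# Take a snapshot and replicate the whole dataset to another machine:
zfs snapshot tank/media@monday
zfs send tank/media@monday | ssh backup-host zfs receive backup/media

# Later, send only the blocks that changed since the last snapshot:
zfs snapshot tank/media@tuesday
zfs send -i tank/media@monday tank/media@tuesday | ssh backup-host zfs receive backup/media
```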