To make ZFS pools easier to understand, we are going to focus on using small storage containers as you may have around the house or shop. Before we continue, it is worth defining some terms. A VDEV, or virtual device, is a logical grouping of one or more storage devices. A pool is then a logically defined group built from 1 or more VDEVs. ZFS is very customizable, and therefore, there are many different types of configurations for VDEVs. You can think of the construction of a ZFS pool by visualizing the following graphic:

*Nested Storage Containers*

Starting from the smallest container size, we have our drives. We can see that in this visualization we have two drives in each larger container. These two larger containers are our VDEVs. The single largest container, then, is our pool. In this configuration, we would have each pair of drives in a mirror. This means that one drive can fail in either (or both!) VDEVs and the pool would continue to function in a degraded state.

*Two Mirrors, Each VDEV with One Bad Drive*

However, if 2 drives fail in a single VDEV, all of the data in our entire pool is lost. There is no redundancy of the pool itself; all redundancy in ZFS is in the VDEV layer. If one VDEV fails, there is not enough information to rebuild the missing data.

*Two Mirrors, One VDEV where Both Drives Failed*

Next, we need to define what RAID-Z is and what the various levels of RAID-Z are. RAID-Z is a way of putting multiple drives together into a VDEV and storing parity, or fault tolerance. In ZFS, there is no dedicated "parity drive" like in Unraid; instead, parity is stored across all of the drives in the VDEV. The amount of parity that is spread across the drives determines the level of RAID-Z. It is in this way more similar to traditional hardware RAID.
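To make the parity idea concrete, here is a minimal Python sketch of single-parity reconstruction, the idea behind RAID-Z1. This is a simplification for illustration only: real RAID-Z uses variable stripe widths, and RAID-Z2/Z3 use more sophisticated codes than plain XOR to tolerate two or three failures.

```python
from functools import reduce

def parity(blocks):
    """XOR the blocks together byte-by-byte to produce one parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One stripe spread across a 4-drive VDEV: 3 data blocks + 1 parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing one drive: XOR the surviving data blocks with the parity
# block to rebuild the missing block.
lost = data[1]
rebuilt = parity([data[0], data[2], p])
assert rebuilt == lost
```

Because XOR is its own inverse, any single missing block in the stripe can be recovered this way, regardless of which drive held it.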
ZFS is a filesystem, but unlike most other filesystems it is also a logical volume manager, or LVM. What that means is ZFS directly controls not only how the bits and blocks of your files are stored on your hard drives, but also how your hard drives are logically arranged for the purposes of RAID and redundancy. ZFS is also classified as a copy-on-write, or COW, filesystem. This means that ZFS can do some cool things like snapshots that a normal filesystem like NTFS could not. A snapshot can be thought of like it sounds: a photograph of how something was at a point in time. How a COW filesystem works, however, has some important implications that we need to discuss.

Hard drives work such that the pieces of your data are stored in Logical Block Addresses, or LBAs. ZFS is aware of which LBAs a specific file is stored in. Let us say we need to write a file that is big enough to fit into 3 blocks. We are going to store that file in LBAs 1000, 1001, and 1002. This is considered a sequential write, as all of these blocks are stored directly next to each other. For spinning hard drives, this is ideal, as the write head does not have to move off of the track it is on.

*WD Red 10TB Pro NAS Top. Use CMR with ZFS, not SMR*

Now, let us say we make a change to the file and the part that was stored at LBA 1001 needs to be modified. When we write that change, ZFS does not overwrite the part of the file that was stored at LBA 1001. Instead, it will write that block to LBA 2001. LBA 1001 will be kept as-is until the snapshot keeping it there expires. This allows us to have both the current version of the file and the previous one, while only storing the difference. However, the next time we go to read the file back, the read head of our spinning hard drive needs to read LBA 1000, go to the track where LBA 2001 is stored, read that, and then go back to the track where LBA 1002 is stored.
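The LBA walkthrough above can be sketched as a toy model in Python. The block numbers and dictionary structures here are illustrative only, not ZFS's real on-disk format; the point is that a snapshot is just a cheap record of the old block list, and a modification allocates a new block rather than overwriting the old one in place.

```python
# Toy model of copy-on-write block mapping (not ZFS's actual data structures).
disk = {1000: "part-1", 1001: "part-2", 1002: "part-3"}
file_map = {"report.txt": [1000, 1001, 1002]}

# Take a snapshot: just remember the current block list (cheap - no copying).
snapshot = {"report.txt": list(file_map["report.txt"])}

# Modify the middle of the file: COW writes the new data to a fresh block
# (LBA 2001) instead of overwriting LBA 1001 in place.
disk[2001] = "part-2-modified"
file_map["report.txt"] = [1000, 2001, 1002]

# The live file reads the new block; the snapshot still reads the old one.
live = [disk[lba] for lba in file_map["report.txt"]]
old = [disk[lba] for lba in snapshot["report.txt"]]
# live -> ['part-1', 'part-2-modified', 'part-3']
# old  -> ['part-1', 'part-2', 'part-3']
```

Note that only the changed block is stored twice; the unchanged blocks at LBAs 1000 and 1002 are shared between the live file and the snapshot. The cost, as described above, is that the live file's blocks are no longer contiguous on disk.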
ZFS has become increasingly popular in recent years. ZFS on Linux (ZoL) has pushed the envelope and exposed many newcomers to the ZFS fold. iXsystems has adopted the newer codebase, now called OpenZFS, for TrueNAS CORE. The purpose of this article is to help those of you who have heard about ZFS but have not yet had the opportunity to research it. Our hope is that we leave you with a better understanding of how and why it works the way it does. Knowledge is key to the decision-making process, and we feel that ZFS is something worth considering for most organizations.