As mentioned earlier, I’ve been using the ZFS file system on my network backup drive for the last couple of weeks. Last night, I decided to run a “scrub” operation on it (an integrity check, similar to fsck or chkdsk).
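For reference, kicking off the scrub is a one-liner (zfs being the name of the pool, as the status output below shows):

$ sudo zpool scrub zfs

Checking on its progress partway through: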
$ sudo zpool status
  pool: zfs
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress, 75.87% done, 1h9m to go
config:

        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            loop0   ONLINE       0     0     4
            loop1   ONLINE       0     0     1

errors: No known data errors
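Once the scrub finishes, and assuming the devices themselves check out, the error counters can be reset with the command the output itself suggests:

$ sudo zpool clear zfs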
Five problems so far (four checksum errors on one half of the mirror, one on the other), which might seem horrible for only two weeks of use, until you think about it. That’s five problems on an encrypted network drive, going through a chain of programs that are only mostly reliable even on a local drive. I don’t know what kind of reliability people expect from network drives (this is my first), but in my very limited experience, that doesn’t seem too unusual.
My point is that ZFS not only detected the problems (where other file systems would happily have assumed that all was well, and either served up the corrupted files or failed spectacularly if the damage was in a metadata block), it fixed them! If it had run into them during normal use, it would have fixed them silently too, only noting the problems in the log so that you could tell that something was going on!
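If you want to watch that self-healing happen on a throwaway pool, here’s a rough sketch; the demo pool name and the /tmp file paths are made up, it assumes a Linux box with ZFS installed, and the dd offset is chosen to stay clear of the vdev labels at the start and end of the file:

$ truncate -s 100M /tmp/vdev0 /tmp/vdev1
$ sudo zpool create demo mirror /tmp/vdev0 /tmp/vdev1
$ sudo dd if=/dev/urandom of=/demo/junk bs=1M count=50    # put some data in the pool
$ sudo dd if=/dev/urandom of=/tmp/vdev1 bs=1M seek=20 count=10 conv=notrunc    # trash part of one half of the mirror
$ sudo zpool scrub demo
$ sudo zpool status demo
$ sudo zpool destroy demo    # clean up

The CKSUM column for the damaged device climbs, but the pool still reports “No known data errors”: every bad block gets repaired from the good copy on the other half of the mirror, exactly as above.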
That’s why I started using ZFS on that drive in the first place. Nice to see it in action. 🙂