At the end of the previous entry in this series, I mentioned that there were still some things that made the backup system I’d developed less than optimal:
- The backup files aren’t compressed;
- Backups should always be read-only, even when the media is mounted, so it’s a lot harder for a virus or user error to damage one of them;
- Data on hard drives is subject to bit-rot if it’s not re-written occasionally, and the current designs can’t even detect that, let alone prevent or fix it;
- If the network goes down while a backup is being written, the backup volume will be damaged, maybe irretrievably;
- If the hardware is stolen or destroyed by a disaster at the office, all the data is lost.
Only off-site backups can prevent the last problem, but there’s a fairly easy solution to the rest: the ZFS file system.
File systems have always seemed rather archaic to me. There were so many things you logically should be able to do with them, easily and quickly, but that you had to jump through all kinds of hoops to do, if you could do them at all. Some of the things I’ve considered over the years:
- There’s no need for a copy of a file to require as much disk space as the original, or lots of time and effort to copy; if the files are identical, why not just make a new directory entry linked to the same data, and use copy-on-write if and when something is changed in one of them?
- Why couldn’t the file system detect when a file was damaged, and at least let you know about the problem, even if it didn’t have enough redundant information to repair it?
- Why does formatting a hard drive have to take so blasted long?
- Why isn’t some form of redundancy easily available in the file system itself?
I never gave any of these questions much thought. A lot of those things were theoretically possible, but they were just too expensive in terms of money, time, bandwidth, memory, or CPU resources, or some combination thereof. And even if I thought otherwise, file systems simply were what they were; the only choice you had was which of the currently available options to use. If you used DOS, for instance, you were limited to the FAT system, period. So I griped about them and moved on.
But I wasn’t the only one who thought of those things, and as the cost barriers eroded due to the constant application of Moore’s Law, some bright people started working on them. ZFS is one of the most visible results, and from what I’ve seen, it’s a damn good one.
So how does it solve the problems I mentioned above?
- It includes an option to compress the stored data, which can be turned on or off per section (“dataset”) of the storage as desired (a few example commands follow this list);
- You can make read-only “snapshots” (point-in-time copies that initially take almost no extra space) of the current data in any section of the system, instantly and at any time;
- The file system automatically detects problems with the data, and tells you about them. And if given sufficient resources (such as multiple disks to work with), it can even correct the problems on the fly;
- If used over a network, and the network goes down, the file system can only lose the data it was writing at the time. Because it never overwrites live data in place, there shouldn’t be any structural errors when the network returns, and it should be able to correct any that do creep in.
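To make that a little more concrete, here’s a minimal sketch of the commands behind those features, assuming a pool named tank with a backups dataset (both names are made up for illustration):

```sh
# Turn on compression for just the backups dataset
zfs set compression=on tank/backups

# Take an instant, read-only snapshot of it
zfs snapshot tank/backups@2012-01-15

# Walk every block, verify it against its checksum, and report
# (or, given redundant disks, repair) anything that has rotted
zpool scrub tank
zpool status tank
```

Each snapshot also shows up under a hidden .zfs/snapshot directory inside the dataset, so pulling an old version of a file back out is just an ordinary copy.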
It even helps with the off-site backup problem, because it apparently makes it very easy to export only the changes since the last backup. (I haven’t experimented with that part yet though.)
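From what I’ve read so far (again, I haven’t tried this myself), that boils down to zfs send and zfs receive; the snapshot names and the off-site host below are purely illustrative:

```sh
# First run: ship the whole dataset to the off-site machine
zfs send tank/backups@2012-01-15 | ssh offsite zfs receive vault/backups

# Later runs: ship only the blocks that changed between two snapshots
zfs send -i tank/backups@2012-01-15 tank/backups@2012-02-15 \
    | ssh offsite zfs receive vault/backups
```

The incremental stream only applies cleanly if the receiving dataset still has the earlier snapshot and hasn’t been modified in the meantime, which is exactly what you’d want from a write-only off-site copy anyway.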
There are two drawbacks, though: it’s still comparatively slow on Linux, and on-disk encryption (an absolute requirement for my purposes) is still in an unusable alpha stage. The speed isn’t much of an issue for me, since the network is the bottleneck in my application anyway. And I’ve gotten around the encryption problem using the same method as in part II of this series: encrypted loop-mounted devices on the network drive. That likely hurts both speed and reliability, but I believe it’s still a lot better with ZFS than without it.
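For the curious, the general shape of that workaround looks roughly like this. I’m sketching it with LUKS/cryptsetup, and all the paths, sizes, and names are made up; the real details are back in part II:

```sh
# A big container file on the network-mounted share
truncate -s 500G /mnt/netdrive/backup.img
losetup /dev/loop0 /mnt/netdrive/backup.img

# Layer encryption on top of the loop device
cryptsetup luksFormat /dev/loop0
cryptsetup luksOpen /dev/loop0 backup_crypt

# Build the ZFS pool on the decrypted mapping
zpool create tank /dev/mapper/backup_crypt
```

ZFS just sees an ordinary block device; none of its features care that the blocks underneath happen to live in an encrypted file on a network share.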
So there it is. It’s not perfect, but it’s a lot closer than it was previously, and it’s getting better all the time. 🙂 Once the on-disk encryption feature is available, I should be able to use ZFS for my encrypted home directory as well, and have its advantages all the time… I’m looking forward to that.