Linux compiles and massive IOWait delays, Part III

Remember the last IOWait delay entry, from a couple of days ago? If you’ve looked at it recently, you’ll have noticed a big WARNING! label at the top. There’s a good reason for that.

Everything was going along fine, but I started noticing some little problems (for example, the restart button on the shutdown dialog would only log me out instead of restarting the system, and the login screen wouldn’t remember that I wanted to use gnome-shell instead of the Unity interface). Nothing to cause me any real concern, or to indicate anything but minor trouble.

Then, yesterday morning, the update-manager informed me I had some updates. As usual, I looked them over, then told it to install them. That’s when things started going wrong… I won’t bother describing the odd behaviors; suffice it to say that it was pretty obvious something major was hosed. Looking back on it, I’m pretty sure I know how it happened, too: when I copied the /usr directory to its new drive, I didn’t take any precautions for preserving hard or soft links.
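For anyone in the same spot: a link-preserving copy avoids that failure mode. Here’s a minimal sketch using cp’s archive mode; the paths are throwaway temp directories for illustration, not what I actually copied:

```shell
# Demonstrate a link-preserving copy with "cp -a" (archive mode:
# recursive, keeps symlinks as symlinks, preserves hard links,
# permissions, and timestamps).
src=$(mktemp -d)
dst=$(mktemp -d)
echo data > "$src/file"
ln "$src/file" "$src/hardlink"   # hard link: same inode as "file"
ln -s file "$src/symlink"        # relative symbolic link
cp -a "$src/." "$dst/"           # -a preserves both kinds of link
# A naive copy would turn the hard link into an independent file
# and might follow the symlink instead of recreating it.
ls -l "$dst"
```

rsync -aH would do the same job across drives (-a for archive mode, -H to preserve hard links); for a system directory like /usr you’d run either one as root.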

Well, no big deal. I’d just done a backup of my data the night before, and hadn’t done anything that morning, so I didn’t even bother trying to back up any more-recent changes, I just popped the virtual installation DVD in the virtual CD drive (it’s a virtual machine, remember) and told it to reinstall the OS — but since I had to do it anyway, this time I decided to format the entire system drive as BTRFS. (My data is on a separate EXT4 virtual drive… I don’t quite trust BTRFS that far yet.)

The installation itself was nearly painless; there was one small glitch, but that was easily worked around. As usual, restoring my data and installing my normal set of applications was pretty much a cinch.

Unfortunately I could find no way to turn on compression during the install, but that was easily remedied by adding the compress option to the /etc/fstab entry and rebooting. The catch is that compression only affects files written after it’s turned on, but I found a page that mentioned a way to get existing files compressed: defragmenting a file rewrites its data, and the rewrite goes through the now-active compression. A simple Bash script applied it to the directories I felt were safe to do that on:

#!/bin/bash

doit () {
    # -print0 / read -d '' handle filenames containing spaces or
    # newlines; the original unquoted $(find *) would mangle such
    # names and skip dotfiles entirely.
    find . -type f -print0 |
    while IFS= read -r -d '' i
    do
        echo "$i"
        btrfs filesystem defragment "$i"
    done
}

cd /bin && doit
cd /etc && doit
cd /lib && doit
cd /lib64 && doit
cd /sbin && doit
cd /usr && doit

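For reference, the fstab change amounts to adding compress to the options column of the root entry. A sketch of what that looks like; the UUID here is a placeholder, not my actual entry:

```
# /etc/fstab — BTRFS root with transparent compression enabled
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,compress  0  1
```
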
After that, I ran the same tests as in the earlier article. After dropping the cache (as described there) and reapplying the cache-pressure setting, I modified one source file and recompiled, as usual. The cache was still only 1.1GB when it was done, so the compression must have worked (and maybe that’s as good as I could get it). But where that test took a little over four minutes in my earlier try, it took just under two minutes this time! (It’s apparently repeatable, too: I ran it again and got the same numbers.) A recompile after that took either 15 or 16 seconds, slightly better than the 16-or-17 from earlier.
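For completeness, the cache-dropping and cache-pressure knobs referred to above are the standard Linux /proc and sysctl interfaces; a sketch follows (the value 50 is just an example, not necessarily the setting I used):

```shell
# Flush dirty pages, then drop pagecache, dentries, and inodes so
# timing runs start from a cold cache (both commands need root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
# Tell the kernel to keep dentry/inode caches around more
# aggressively (values below the default of 100 favor keeping them):
#   sysctl vm.vfs_cache_pressure=50
# The current setting can be read back without root:
cat /proc/sys/vm/vfs_cache_pressure
```
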

The system also “feels” faster. I don’t know why that might be, maybe the extra cache is helping it or something hidden was slowing my older installation down. Given the numbers, I’m willing to believe that it’s not just observational bias. Booting it is noticeably slower, apparently because the BTRFS system has to do something on startup, but since I rarely have to reboot I can live with that easily.

I’ll post further on this subject if anything new comes up.

2 Comments

  1. With a filesystem upgrade like that, your system should be performing as smooth as BTR. BTW, does BTRFS have all of those cool ZFS features? ZFS can do some amazing stuff, though of course certain enterprise file systems did those things before it; it just brings them to the masses. 🙂

    • Well, it’s pretty smooth. 🙂

      BTRFS has a lot of the features of ZFS. I don’t know if it has all of them, but I think it has a few that ZFS didn’t, too — I don’t think ZFS had compression, for instance, though I believe it was planned.
