As mentioned previously, I use a 13″ mid-2009 MacBook Pro as my development machine, with virtual Linux and Windows machines running under Parallels. All was mostly well, except that I’ve been doing a lot more compiling in the last few months than previously, and the IOWait problem on the Linux VM — always an irritant — had become ever more painful.
How painful? The first compile of a small C++ source file took roughly three minutes and forty-eight seconds, almost all of it spent waiting for the hard drive. Subsequent ones (if I’d logged in no more than a few hours ago) took only sixteen seconds. If I did anything between compiles — switched to a Firefox window to do some research, for instance — then the time for the next compile started climbing toward the initial mark pretty quickly. And if I left it logged in overnight, a compile of the same file never took less than a full minute, until I logged out and back in again.
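(For anyone who wants to reproduce measurements like these, the shell’s time command, plus dropping the kernel’s caches to simulate a “cold” first compile, should do it. The file name here is just a stand-in.)

    # Flush the page/dentry/inode caches so the next compile starts cold.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Time a cold compile, then a warm one.
    time g++ -c widget.cpp -o widget.o
    time g++ -c widget.cpp -o widget.o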
I don’t know why the hard drive on this system seems so slow. I can’t even figure out whether what I’m seeing is normal for this machine’s specifications (and I really hope it isn’t). If the hard drive were noticeably faster, the IOWait bottleneck wouldn’t be as much of a problem, but replacing it just to find out isn’t a viable option at the moment, so it was time to look for alternative ways to improve the situation.
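(If you want to put a rough number on a drive’s speed under Linux, hdparm can provide one. Two caveats: /dev/sda is an assumption, and inside a VM the figure reflects the virtual disk and its host-side caching, not the physical drive directly.)

    # Rough sequential-read speed of the device itself.
    sudo hdparm -t /dev/sda

    # For comparison, the speed of reads served from the kernel’s cache.
    sudo hdparm -T /dev/sda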
The first thing you always look at in such cases is giving the machine more memory. That was problematic here, though: I need to run at least two virtual machines (Linux and Windows) almost full-time, often with a third (an older version of Windows) as well. The third virtual machine can get away with only a gigabyte of memory, but the Linux system requires at least two gigabytes with the workload I use it for, and the other Windows one needs nearly as much. The host machine maxes out at 8GB, and it’s fully loaded. Gritting my teeth, I decided to sacrifice the third VM and bumped the Linux VM up to 2.5GB. There was no noticeable change.
(I’d already ensured that both the host machine and the Linux VM were running fully in memory, without swapping.)
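(Verifying that is straightforward on any reasonably standard Linux guest:)

    # Confirm swap usage is zero and see how much memory is left for caching.
    free -m

    # Watch the si/so (swap-in/out) and wa (IOWait) columns as you work.
    vmstat 1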
The next avenue to explore was figuring out what the compiler was spending its time reading. If I knew that, maybe I could do something to streamline it. But all my attempts to identify that failed — GCC’s bewildering array of debugging options doesn’t seem to include anything that provides that information, and though I’m sure there are programs under Linux to log exactly which files are being accessed and when, I haven’t been able to locate them.
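(One tool that reportedly fits the bill is strace; here’s a sketch of how it might be used for this, with the file names as stand-ins. I haven’t verified it against this exact workload.)

    # Log every file-related system call from GCC and its child processes.
    strace -f -e trace=file -o gcc-files.log g++ -c widget.cpp -o widget.o

    # Tally the most frequently touched paths.
    grep -o '"[^"]*"' gcc-files.log | sort | uniq -c | sort -rn | head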
Okay, plan C: maybe there was some error or setting on the disk that was slowing down the reads? fsck gave the Linux virtual drives a clean bill of health, and using noatime made no noticeable difference either. Disk Utility on the host machine claimed there were many permissions problems, so I let it grind away for more than an hour until it was satisfied, but that produced no change. Scanning the host disk for problem areas took a while longer, and was equally fruitless. I even tried resetting the PRAM, though I’m not sure what that is or does; no effect.
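(For reference, noatime is set per-filesystem in /etc/fstab; the device and filesystem type below are assumptions, and a remount or reboot is needed for it to take effect.)

    # /etc/fstab — add noatime to the root filesystem’s mount options.
    /dev/sda1  /  ext4  defaults,noatime  0  1

    # Then re-read the options without rebooting.
    sudo mount -o remount /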
Plan D involved digging into all the information Google could provide on Linux file system speeds. Maybe an alternate file system would help? From everything I was able to find, the only one that might help significantly was ReiserFS, and only if the files the compiler was spending its time on were small ones. Experimenting with that felt like it would waste more time than I’d save by solving the problem (assuming it did, which wasn’t assured), so scratch that idea.
On to plan E (and some concern that I’d run out of letters before this was through): maybe there’s a cache setting to improve things?
Paydirt! Or rather, something slightly better than just dirt. 😉 After maybe a couple of days’ work on that, spread out over several weeks, I finally found one page a couple of days ago that described the only option that made a significant difference: the cache-pressure setting. Essentially, it tells the caching system whether to prefer keeping the contents of files in the cache, or the file-system information that lets it find files. The default setting is 100, which means keep both equally; a higher setting favors the contents of files, a lower one favors the file-system info. Some experimentation with it (using the command sudo sysctl -w vm.vfs_cache_pressure=XX, where XX is the number to set it to) showed that a setting of 10 kept the compile times and IOWait to a minimum — success!
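(A value set that way only lasts until the next reboot; to make it stick, it can go in /etc/sysctl.conf:)

    # Persist the setting across reboots...
    echo 'vm.vfs_cache_pressure = 10' | sudo tee -a /etc/sysctl.conf

    # ...and apply it immediately.
    sudo sysctl -p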
Or was it? That worked well if I’d logged into the machine within the last few hours, but after it had been running for a while, compiles started slowing down again — to the point that, after leaving it running overnight, that file took more than a minute to compile, no matter how many times I tried it or how close together they were. Better than it was previously, but was there any way to improve it further?
What was happening overnight that could affect it that way? What did logging out and back in change that fixed it? The only answer seemed to be memory, again — as the VM ran longer, the memory in use grew, until it stabilised at between 600MB and 700MB (closer to 1GB if Firefox, with my current set of must-have extensions, were also running). That left a gigabyte for caching — surely that was sufficient for whatever GCC needed to look at? But there was no other difference I could find.
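(The kernel reports the cache’s size directly, so comparing the figures just after login and again the next morning would show whether the cache really was being squeezed:)

    # MemFree is unused RAM; Cached is the page cache that GCC’s reads rely on.
    grep -E '^(MemFree|Buffers|Cached)' /proc/meminfo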
Maybe 2.5GB just wasn’t enough? I couldn’t imagine why that might be the case, but I bumped it up to a full 3GB.
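(In Parallels that’s just a change in the virtual machine’s configuration dialog, though if the command-line tools are installed, something like this should also work on the host; the VM name is a placeholder.)

    # On the Mac host: stop the VM, raise its memory to 3GB, start it again.
    prlctl stop "Linux Dev"
    prlctl set "Linux Dev" --memsize 3072
    prlctl start "Linux Dev"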
It worked. The first compile after rebooting the machine still took the same amount of time, and subsequent compiles remained at about 16 seconds — but the next morning, after leaving it running overnight, compiles after the first one stayed at about 16 seconds. The IOWait was over! 😉
I’m not real happy about that solution. The machine is responsive, but it’s operating perilously close to its memory limit: there isn’t enough room left to sneeze in without forcing it to start swapping to disk. When it was running three VMs, I could always shut down the third one if I needed to free up some RAM; now that safety valve is gone. Even upgrading to more recent hardware wouldn’t help; the current crop of MacBook Pro machines also maxes out at 8GB.
I would really like to stay with a Mac, for the convenience of having all three major OSes available simultaneously. I must stay with a notebook system. I hope Apple’s next crop of MacBook Pro machines increases the memory limit, or I’ll have to look at non-Mac alternatives.
Maybe there’s a hackintoshable notebook out there with more than 8GB of memory available? Of course, that’s not an acceptable solution if you want to do professional development for iOS, or if it’s a sufficiently corporate machine that they’d care about EULAs, but for anything else… 🙂 Lion works well on a hackintosh too. (Reportedly; I haven’t bothered to install it on mine yet, even though it should work fine on my hardware.)
After watching your antics with your Hackintosh’d system, I rather doubt there would be many notebook systems that would work that way. I definitely don’t have the time to mess with it on a regular basis. No, if Apple doesn’t release a notebook that can take more memory, I’ll have to switch to a non-Apple system for Windows and Linux development, and leave the current Mac notebook for Mac development, or abandon the Mac platform entirely.
Believe it or not, there are. My Dell mini 9 actually runs OS X better than my desktop system, and although it’s a netbook, there are laptops with similar capabilities.
http://wiki.osx86project.org/wiki/index.php/HCL_10.7.0/Portables
That’s a list of only some of the portable computers known to work, and how to get them working.
Thanks for the information, and the link. I still doubt I’d be interested in that sort of thing, but it’s nice to have options.
OK. The installation is a bit of trouble in some cases (though less than Linux sometimes was in the ’90s, and a few pieces of hardware, like my Dell mini 9, are very easy to install it on), but once you’ve got it working, assuming your hardware is compatible in the first place, it runs quite smoothly. A lot less “high maintenance” than Windows. 😉 Let me know if you need advice on what hardware to get in this department; I have a hackintosh-expert friend I consulted when getting a desktop, and he could probably recommend the best laptop for this too.
Hm… it seems that Apple anticipated me: it won’t work in my current system, but the next time I upgrade I can get a MacBook Pro that holds 16GB of memory. 🙂
Oooo, nice. 🙂