Increasing General Lookup Performance
You might remember my post back in October, in which I promised to keep working on server-side performance. Well, as you might have noticed, upgrading the master server didn't happen in November. However, I didn't drop the ball, and in the past couple of weeks I finally found the time to actually get this done. Today the new master processing server went live, and so far the numbers look promising.
For those of you who care about technical details: the load on the server is disk-bound. That means neither main memory nor the CPU is the bottleneck, but rather the speed of input/output operations to disk (as opposed to the network). How do I know this? Simple: run the Unix tool top and check the I/O wait load (wa), i.e. the share of time the CPU spends waiting for I/O operations to complete. If that number is substantially higher than user CPU time (us) and system CPU time (
sy), you know that disk I/O is the limiting factor. Of course there are multiple approaches one can take to increase throughput: split the load across multiple servers, write less or defer writes, cache smarter, decrease I/O wait, and so on. For now I chose the last option, i.e. decreasing I/O wait by adding more hardware. What used to be a RAID1 with two disks is now a RAID0 with seven disks.

You might correctly observe that this kind of setup is prone to disk failures; the failure rate of a seven-disk RAID0 is something to take seriously. And yes, it's true. But for now I'm willing to take the risk, because the MySQL replication slave I'm running is hosted on a RAID1 system, and its backup is kept somewhere completely different. So even if the master system goes belly up, I can restore it quickly from the slower but far more reliable backup server. Plus, there is always the option of creating a hot standby for the master, running the same setup.
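If you want to script the top check, here's a minimal Python sketch that computes the iowait share from the aggregate CPU line of Linux's /proc/stat. The sample line and its jiffie values are made up for illustration; on a real system you would read the first line of /proc/stat instead:

```python
def iowait_share(cpu_line: str) -> float:
    """Given the aggregate 'cpu ...' line from /proc/stat, return iowait
    as a percentage of total CPU time.
    Field order after the 'cpu' label: user nice system idle iowait irq ..."""
    fields = [int(x) for x in cpu_line.split()[1:]]
    return 100.0 * fields[4] / sum(fields)

# Hypothetical sample (values are cumulative jiffies since boot):
sample = "cpu  10000 200 3000 80000 7000 0 100 0 0 0"
print(f"iowait: {iowait_share(sample):.1f}%")
```

If the printed percentage dwarfs the us and sy columns you see in top, the workload is disk-bound.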
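To put a rough number on that risk: assuming independent disk failures and an illustrative 3% annual failure rate per disk (both are assumptions for the sake of the example, not measured values), a quick back-of-the-envelope comparison looks like this:

```python
def raid0_failure_prob(p_disk: float, n_disks: int) -> float:
    """A RAID0 array fails if ANY member disk fails (no redundancy)."""
    return 1 - (1 - p_disk) ** n_disks

def raid1_failure_prob(p_disk: float) -> float:
    """A two-disk mirror loses data only if BOTH disks fail
    (ignoring the rebuild window after the first failure)."""
    return p_disk ** 2

p = 0.03  # assumed 3% annual failure rate per disk (illustrative only)
print(f"RAID0, 7 disks: {raid0_failure_prob(p, 7):.1%} chance of failure per year")
print(f"RAID1, 2 disks: {raid1_failure_prob(p):.2%} chance of failure per year")
```

Under these assumptions the seven-disk RAID0 is roughly two hundred times more likely to fail in a given year than the mirror, which is exactly why the replicated slave and the off-site backup matter.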
I'll know in a couple of days whether the system is up to the load. But as I said, things look very promising so far.