Working to music

[Photo: Cherry orchard at Kolomenskoe in winter]

Today I was performance testing some Java code, and got very strange results. The new code should have been much faster than the old code, but the test harness showed the new and old code performing about the same. I ran the test several times while trying to understand what was going on.

Then halfway through one run, the new code suddenly sprinted off, processing messages at ten times its previous rate, exactly as I had originally expected. So I ran the test again. Back to the sluggish performance of the old code.

This acceleration in the middle of a run is not strange in itself. When performance testing Java programs, it's quite common to see an initial period of slow performance, followed by greatly increased performance for the rest of the test. This is because JVMs (such as Sun's HotSpot) initially interpret the code, and only compile pieces of code that have already run a few thousand times. In most real-world applications there are many active code paths, so different regions of code cross the compilation threshold, and get compiled, at different times. This makes the transition from the interpreter to native code a gradual process, and you don't notice it (which is why this is a practical implementation strategy for JVMs). But a performance test can stress a small number of code paths, and the execution counts on those paths increase in lock step. So the transition when the compilation threshold is reached can be much more noticeable.
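To see the warm-up in isolation, here is a minimal sketch of the effect: a self-contained micro-benchmark, not anything from my actual test harness. It times repeated batches of identical work; on a typical HotSpot JVM the first few batches run interpreted and noticeably slower, and the later batches run compiled code:

    // WarmupDemo.java - a hypothetical micro-benchmark illustrating JIT warm-up.
    public class WarmupDemo {

        // A small unit of work that becomes "hot" after enough calls.
        static long work(long n) {
            long sum = 0;
            for (long i = 0; i < n; i++) {
                sum += i ^ (i << 1);
            }
            return sum;
        }

        public static void main(String[] args) {
            long sink = 0;
            for (int batch = 0; batch < 20; batch++) {
                long start = System.nanoTime();
                for (int i = 0; i < 10000; i++) {
                    sink += work(1000);
                }
                long elapsed = System.nanoTime() - start;
                // Early batches are typically the slowest; once HotSpot
                // compiles work(), the per-batch time drops sharply.
                System.out.println("batch " + batch + ": " + elapsed + " ns");
            }
            // Print the accumulated result so the work can't be
            // discarded as dead code.
            System.out.println("sink = " + sink);
        }
    }

The exact numbers will vary with the JVM and the hardware, but the jump between the interpreted and compiled batch times is usually easy to spot.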

The odd thing about today's incident was that the compilation, and the corresponding jump in performance, occurred on only about 1 in 5 runs. The conditions of the test were exactly the same on each run: the messages being processed were read from a file, and there were tens of thousands of them in a run, so every compilation threshold should have been comfortably crossed every time. I was able to confirm that it was a compilation issue by using the -Xcomp option to force compilation of all code from the outset; this reliably produced the expected performance. So what was preventing compilation?
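For anyone wanting to reproduce this kind of diagnosis, these are the standard HotSpot flags involved (PerfTest is just a placeholder for your own test class):

    java -Xcomp PerfTest                  compile everything before first execution
    java -Xint PerfTest                   interpret only, never compile
    java -XX:+PrintCompilation PerfTest   log each method as it gets compiled

-XX:+PrintCompilation in particular makes it obvious whether, and when, the hot methods are actually being compiled during a run.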

I found the answer a while later. I was running the tests on my laptop, and at the same time I was listening to music on headphones, played by MP3 player software also running on the laptop. If I stopped the MP3 player, I got the accelerated performance on every run.

After further experimentation, this turned out to be unrelated to the music itself: the MP3 player had the same effect whether the music was playing or not! How it has such a pronounced effect on a separate process remains a mystery. The MP3 player was not consuming a significant amount of CPU, as the control experiments with -Xcomp showed.

So the moral seems to be: during performance testing, make sure nothing else is running on the same machine, not even things that apparently won't compete with the test for hardware resources.

Comment from Lalanda

What were you listening to?