Archon, I don't think there is anything wrong. One "core" or "thread" is not inherently the same as another, and the same GHz CPU speed does not always mean the same actual processing speed.
Core 2 is a previous-generation CPU technology. The newer Core i5/i7 architecture is a good deal faster *at the same clock speed* because it executes more instructions per clock cycle. So what you're seeing is the combination of a higher per-core clock (3.3GHz vs. 2.8GHz) and faster per-clock execution. It makes good sense to me.
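To make that concrete, here's a rough back-of-envelope sketch. The per-clock (IPC) multipliers are purely hypothetical illustrations, not measured figures for these CPUs; the point is just that clock speed alone doesn't determine throughput.

```python
def effective_rate(clock_ghz, per_clock_factor):
    """Very rough relative throughput: clock speed times per-clock efficiency."""
    return clock_ghz * per_clock_factor

# Hypothetical per-clock factors for illustration only:
core2_q9550 = effective_rate(2.8, 1.0)   # older architecture as the baseline
core_i5     = effective_rate(3.3, 1.3)   # newer architecture, more work per clock

print(core2_q9550, core_i5)
```

So even a modest per-clock advantage compounds with the higher clock, which is why the newer chip pulls well ahead of what the GHz numbers alone would suggest.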
Zaxxon is running a higher-clocked 6-core CPU, while BigBen is running a *virtual machine* with 16 "cores" (or threads, presumably). So you're seeing several effects here that combine into an interesting but not impossible result. First, on a pure GHz-equivalency basis, Zaxxon has 20.4GHz while BigBen has 41.6GHz, about double, so a similar result does look surprising at first. BUT, BigBen is running a virtual machine, and that carries some overhead, sometimes significant overhead. There is also some efficiency lost with greater numbers of render threads: if you could have one 20GHz CPU core versus 16 2.6GHz cores (about 40GHz theoretical in total), the single 20GHz core would actually beat the 16 threads by a lot, despite being in theory half the speed. Lastly, we don't know what generation of CPU BigBen is working with; it could be older, meaning less performance per GHz compared to Zaxxon's. So, a surprising and interesting result, but not without explanation.
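Here's the same reasoning as a quick sketch. The thread-scaling efficiency and VM overhead numbers are hypothetical, chosen only to show how a 41.6GHz theoretical total can land near a 20.4GHz machine in practice.

```python
def effective_ghz(cores, clock_ghz, thread_efficiency=1.0, vm_overhead=0.0):
    """Aggregate GHz, discounted for per-thread scaling loss and VM overhead.
    Both discount factors are illustrative assumptions, not measurements."""
    return cores * clock_ghz * thread_efficiency * (1.0 - vm_overhead)

zaxxon     = effective_ghz(6, 3.4)    # 6 cores, 20.4GHz aggregate
bigben_raw = effective_ghz(16, 2.6)   # 16 threads, 41.6GHz aggregate on paper

# Hypothetically assume 75% scaling efficiency at 16 threads and 15% VM overhead:
bigben_adj = effective_ghz(16, 2.6, thread_efficiency=0.75, vm_overhead=0.15)

print(zaxxon, bigben_raw, bigben_adj)
```

With plausible (if made-up) discounts like these, BigBen's effective number drops from roughly double Zaxxon's to something much closer, which is the kind of gap-narrowing we're seeing in the results.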
Frankly, I'm a bit wary of including results for virtualized machines in the results table for this reason, but as long as they're clearly labeled I'm OK with it.
- Oshyan