Why is clock rate a poor metric of computer performance? What are the relative strengths and weaknesses of clock speed as a performance metric?
What will be an ideal response?
Clock rate is a measure of the speed at which a computer is clocked; that is, the rate at which its fundamental operations are executed. So, you could argue that the speed or performance of a computer is proportional to its clock speed. In practice, however, clock speed is an almost useless measure of performance. First, clock speed says nothing about the work done per clock cycle. For example, a superscalar processor may complete eight operations per clock, whereas a non-pipelined device might require six cycles per instruction; the first performs 6 × 8 = 48 times as much work per cycle as the second. Consequently, one processor could have a clock speed of 1,000 MHz and still be slower than another processor running at 30 MHz. Moreover, the speed of a computer is also determined by factors such as its instruction set, the way the code is written and generated, the use of cache memory, the number of page faults per memory access, and so on. Speed can be divorced neither from the rest of the computer system nor from the software being executed.
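To make the arithmetic concrete, here is a minimal sketch of how effective throughput depends on both clock rate and work per cycle; the figures are the hypothetical ones from the example above, not measurements of any real processor.

```python
def instructions_per_second(clock_hz: float, instructions_per_cycle: float) -> float:
    """Effective throughput = clock rate x average instructions completed per cycle."""
    return clock_hz * instructions_per_cycle

# Superscalar device: 30 MHz clock, 8 operations completed per cycle (assumed).
superscalar = instructions_per_second(30e6, 8.0)          # 240 million ops/s

# Non-pipelined device: 1,000 MHz clock, 6 cycles per instruction, i.e. 1/6 IPC (assumed).
non_pipelined = instructions_per_second(1000e6, 1 / 6)    # ~167 million ops/s

print(f"30 MHz superscalar:      {superscalar / 1e6:.0f} MIPS")
print(f"1,000 MHz non-pipelined: {non_pipelined / 1e6:.0f} MIPS")
print(f"Work per clock cycle ratio: {8 / (1 / 6):.0f}x")  # 48x more work per cycle
```

Despite a clock more than 30 times faster, the 1,000 MHz device completes fewer instructions per second in this scenario.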
The only time that using clock speed as a metric makes some sense is when comparing two identical processors (members of the same family) running the same software in the same environment. Then you might be able to say that a processor running at 3.5 GHz is faster than one running at 3.2 GHz, but you could not say how much faster. You cannot claim a 3.5/3.2 = 1.09 increase in performance, because the higher clock speed may not be matched by a similar improvement in the rest of the hardware.
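This last point can be illustrated with a simple scaling calculation; the 60/40 workload split below is purely an assumption for illustration, not measured data.

```python
# Sketch: why the 3.5/3.2 clock ratio overstates the real speedup when the
# rest of the system (memory, I/O) does not speed up with the core clock.

base_clock, new_clock = 3.2, 3.5        # GHz
cpu_fraction = 0.6                      # fraction of run time limited by the core (assumed)
other_fraction = 1 - cpu_fraction       # fraction limited by memory/I/O, unchanged by the clock

# Normalised execution time: only the CPU-bound part shrinks with the faster clock.
old_time = cpu_fraction + other_fraction
new_time = cpu_fraction * (base_clock / new_clock) + other_fraction

print(f"Naive clock ratio: {new_clock / base_clock:.2f}x")   # 1.09x
print(f"Actual speedup:    {old_time / new_time:.2f}x")      # ~1.05x for this mix
```

For this assumed workload mix, raising the clock from 3.2 GHz to 3.5 GHz yields roughly a 5% improvement, not the 9% the clock ratio alone would suggest.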