CPI/IPC
CPI
Let us assume a ‘classic RISC pipeline’, with the following five stages:
- Instruction fetch cycle (IF).
- Instruction decode/Register fetch cycle (ID).
- Execution/Effective address cycle (EX).
- Memory access (MEM).
- Write-back cycle (WB).
Each stage requires one clock cycle and an instruction passes through the stages sequentially. Without pipelining, in a multi-cycle processor, a new instruction is fetched in stage 1 only after the previous instruction finishes stage 5, so executing an instruction takes five clock cycles (CPI = 5 > 1). In this case, the processor is said to be subscalar. With pipelining, a new instruction is fetched every clock cycle by exploiting instruction-level parallelism: since the five pipeline stages can theoretically hold five instructions at once (one per stage), a different instruction completes stage 5 in every clock cycle, and on average an instruction takes one clock cycle to execute (CPI = 1). In this case, the processor is said to be scalar.
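As a rough illustration of the two cases above, here is a minimal sketch of an idealized cycle-count model (it assumes no hazards or stalls, which real pipelines do have):

```python
# Idealized model of a 5-stage pipeline; hazards and stalls are ignored (an assumption).
STAGES = 5

def multicycle_cycles(n_instructions: int) -> int:
    # Without pipelining, each instruction occupies the processor for all 5 stages.
    return STAGES * n_instructions

def pipelined_cycles(n_instructions: int) -> int:
    # The first instruction takes 5 cycles to fill the pipeline,
    # then one instruction completes (retires) every cycle.
    return STAGES + (n_instructions - 1)

n = 1_000_000
print("multi-cycle CPI:", multicycle_cycles(n) / n)  # 5.0  -> subscalar
print("pipelined   CPI:", pipelined_cycles(n) / n)   # ~1.0 -> scalar
```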
With a single-execution-unit processor, the best CPI attainable is 1. However, with a multiple-execution-unit processor, one may achieve even better CPI values (CPI < 1). In this case, the processor is said to be superscalar.
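Extending the same idealized sketch, a processor that can issue and retire several instructions per cycle drives CPI below 1 (the issue width of 2 below is just an assumption for illustration):

```python
import math

def superscalar_cycles(n_instructions: int, width: int, depth: int = 5) -> int:
    # Idealized superscalar model: 'width' instructions retire per cycle once the
    # 'depth'-stage pipeline is full; hazards and stalls are again ignored.
    return depth + math.ceil(n_instructions / width) - 1

n = 1_000_000
cycles = superscalar_cycles(n, width=2)
print("dual-issue CPI:", cycles / n)  # ~0.5 -> superscalar (IPC ~2)
```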
IPC
IPC (instructions per cycle) is the multiplicative inverse of CPI.
Calculation of IPC:
IPC is calculated by running a set piece of code, counting the number of machine-level instructions required to complete it, and using high-performance timers or hardware performance counters to count the number of clock cycles it takes on the actual hardware. The final result is the number of instructions divided by the number of CPU clock cycles.
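A hedged sketch of that calculation on Linux, assuming `perf` is installed and usable: the CSV layout parsed below (value in field 1, event name in field 3 with `perf stat -x,`) and the `gzip` workload are assumptions for illustration, so check the raw output of `perf stat -x, -e instructions,cycles -- true` on your machine first.

```python
# Sketch: measure IPC with Linux perf (assumes perf is installed and that
# `perf stat -x,` emits CSV to stderr as: value, unit, event name, ...).
import subprocess

def measure_ipc(cmd: list[str]) -> float:
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", "instructions,cycles", "--"] + cmd,
        capture_output=True, text=True,
    )
    counts = {}
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].isdigit():
            # Strip modifiers such as ":u" that perf may append to the event name.
            counts[fields[2].split(":")[0]] = int(fields[0])
    # IPC = instructions retired / CPU cycles consumed
    return counts["instructions"] / counts["cycles"]

# Example workload; any command works, gzip is just an illustration.
print("IPC:", measure_ipc(["gzip", "-c", "/etc/passwd"]))
```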
From https://www.brendangregg.com/perf.html:
The frontend and backend metrics refer to the CPU pipeline, and are also based on stall counts. The frontend processes CPU instructions, in order. It involves instruction fetch, along with branch prediction, and decode. The decoded instructions become micro-operations (uops) which the backend processes, and it may do so out of order. For a longer summary of these components, see Shannon Cepeda’s great posts on frontend and backend.
The backend can also process multiple uops in parallel; for modern processors, three or four. Along with pipelining, this is how IPC can become greater than one, as more than one instruction can be completed (“retired”) per CPU cycle.
Stalled cycles per instruction is similar to IPC (inverted), however only counting stalled cycles, which will be for memory or resource bus access. This makes it easy to interpret: stalls are latency, reduce stalls. I really like it as a metric, and hope it becomes as commonplace as IPC/CPI. Let's call it SCPI.
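A minimal sketch of the SCPI arithmetic, assuming the stall counts come from events such as perf's stalled-cycles-frontend / stalled-cycles-backend (availability and exact meaning of those events vary by CPU, and the counter values below are hypothetical):

```python
def scpi(stalled_cycles: int, instructions: int) -> float:
    # Stalled cycles per instruction: like CPI, but counting only the cycles
    # in which the pipeline was stalled (e.g. waiting on memory).
    return stalled_cycles / instructions

# Hypothetical counter readings, for illustration only.
print(scpi(stalled_cycles=1_200_000, instructions=4_000_000))  # 0.3 stalled cycles per instruction
```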
Ref
https://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html
https://www.brendangregg.com/perf.html