Serial Execution

Pipelined Execution - Original RISC goal is to complete one instruction per clock cycle

Advanced Architectures - multiple instructions completed per clock cycle

  1. superpipelined (e.g., MIPS R4000) - split each stage into substages to create finer-grain stages
  2. superscalar (e.g., Intel Pentium, AMD Athlon) - multiple instructions in the same stage of execution in duplicate pipeline hardware; alternatively, several instructions in the "execute" stage on different functional units
  3. very-long-instruction-word, VLIW (e.g., Intel Itanium) - the compiler encodes multiple independent operations into a long instruction word at compile time, so the hardware can issue these operations to multiple functional units without run-time dependency analysis
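As a rough illustration of why multiple issue helps, here is a minimal cycle-count model (a Python sketch under ideal, hazard-free assumptions; the function name is hypothetical):

```python
import math

def completion_cycle(n_instr, stages, issue_width=1):
    """Cycle in which the last of n_instr instructions leaves an ideal,
    hazard-free pipeline: fill the stages once, then retire one group
    of issue_width instructions per cycle."""
    groups = math.ceil(n_instr / issue_width)
    return stages + groups - 1

print(completion_cycle(8, 5))                 # scalar 5-stage pipeline -> 12
print(completion_cycle(8, 5, issue_width=2))  # 2-way superscalar -> 8
```

With eight instructions and five stages, doubling the issue width cuts the completion time from 12 cycles to 8; real machines fall short of this because of the dependencies discussed below.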
Figure 14.6: Conceptual Depiction of Superscalar Processing

machine parallelism - the ability of the processor to take advantage of instruction-level parallelism (ILP). This is limited by how much ILP exists in the program: independent instructions can be executed in parallel, but not all instructions are independent.

Limitations of superscalar execution:
1) true data dependency (RAW):
     SUB R1, R2, R3   ; R1 <- R2 - R3
     ADD R4, R1, R1   ; R4 <- R1 + R1
   Cannot be avoided by rearranging code
2) procedural dependency - cannot execute instructions after a branch until the branch executes
3) resource conflict / structural hazard - several instructions need the same piece of hardware at the same time (e.g., memory, caches, buses, register file, functional units)

Three types of orderings:
1) order in which instructions are fetched
2) order in which instructions are executed (called instruction issuing)
3) order in which instructions update registers and memory

The more sophisticated the processor, the less it is bound by a strict relationship between these orderings. The only real constraint is that the results match those of sequential execution.

Some Categories:
a) In-order issue with in-order completion
b) In-order issue with out-of-order completion
   Problem: output dependency / WAW (Write-After-Write)
     I1: R3 <- R3 op R5
     I2: R4 <- R3 + 1
     I3: R3 <- R5 + 1
     I4: R7 <- R3 op R4   ; the R3 value generated by I3 must be used
c) Out-of-order issue (decouple decode and execution) with out-of-order completion
   An instruction window provides a pool of possible instructions to be executed.
   Problem: antidependency / WAR (Write-After-Read)
     I1: R3 <- R3 op R5
     I2: R4 <- R3 + 1
     I3: R3 <- R5 + 1     ; if executed out-of-order, I2 could get the wrong value for R3
     I4: R7 <- R3 op R4
   Notice that I3 is just reusing R3 and does not need its value, so this is only a conflict over the use of a register.

Register renaming is a solution to this problem: we allocate a different register dynamically at run-time
     I1: R3b <- R3a op R5a   ; R3b and R3a are different physical registers
     I2: R4b <- R3b + 1
     I3: R3c <- R5a + 1
     I4: R7b <- R3c op R4b

Example using Tomasulo's Algorithm

Tomasulo's Algorithm is an example of dynamic scheduling.
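The register renaming idea above (R3a/R3b/R3c) can be sketched in Python. This is a simplified model with hypothetical names, not any real processor's rename logic: every destination write gets a fresh physical register, which removes WAR and WAW conflicts while preserving true (RAW) dependencies.

```python
def rename(instructions, num_arch_regs=8):
    """Map each architectural destination to a fresh physical register.
    WAR and WAW hazards disappear; RAW is preserved because sources are
    translated through the current mapping *before* the destination is
    remapped."""
    mapping = {r: f"p{r}" for r in range(num_arch_regs)}  # arch -> physical
    next_phys = num_arch_regs
    out = []
    for dest, srcs in instructions:
        read = [mapping[s] for s in srcs]   # translate sources first (RAW kept)
        new = f"p{next_phys}"               # fresh register for every write
        next_phys += 1
        mapping[dest] = new                 # later readers see the new name
        out.append((new, read))
    return out

# The I1-I4 example from the notes (immediates omitted):
# I1: R3 <- R3 op R5   I2: R4 <- R3 + 1   I3: R3 <- R5 + 1   I4: R7 <- R3 op R4
prog = [(3, [3, 5]), (4, [3]), (3, [5]), (7, [3, 4])]
for dest, srcs in rename(prog):
    print(dest, srcs)
```

I2 reads I1's new name while I3 writes a different physical register, so I3 can safely execute before I2, exactly as in the R3a/R3b/R3c example.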
In dynamic scheduling, the decode-through-write-back (D-W) stages of the five-stage RISC pipeline (handout dated 2/1/05) are split into three stages to allow for out-of-order execution:
  1. Issue - decodes instructions and checks for structural hazards. Instructions are issued in-order through a FIFO queue to maintain correct data flow. If there is not a free reservation station of the appropriate type, the instruction queue stalls.
  2. Read operands - waits until no data hazards remain, then reads the operands
  3. Write result - sends the result to the common data bus (CDB) to be grabbed by any waiting register or reservation station
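The read-operands wait and the CDB broadcast can be sketched as follows (Python; a simplified model with hypothetical class names that ignores issue logic and timing):

```python
class ReservationStation:
    """Simplified reservation station: each operand is either
    ('val', number) if ready, or ('tag', station_name) if it is still
    being produced by another station."""
    def __init__(self, name, op, src1, src2):
        self.name, self.op = name, op
        self.src1, self.src2 = src1, src2
        self.result = None

    def ready(self):
        return self.src1[0] == 'val' and self.src2[0] == 'val'

    def execute(self):
        a, b = self.src1[1], self.src2[1]
        self.result = a + b if self.op == 'add' else a - b
        return self.result

def broadcast(stations, tag, value):
    """Common data bus: every station waiting on `tag` grabs the value."""
    for s in stations:
        if s.src1 == ('tag', tag):
            s.src1 = ('val', value)
        if s.src2 == ('tag', tag):
            s.src2 = ('val', value)

# I1: R3 <- R3 - R5 (values 10 and 4); I2: R4 <- R3 + R3 waits on I1's tag
rs1 = ReservationStation('rs1', 'sub', ('val', 10), ('val', 4))
rs2 = ReservationStation('rs2', 'add', ('tag', 'rs1'), ('tag', 'rs1'))
stations = [rs1, rs2]
print(rs2.ready())                  # -> False: RAW hazard, rs2 must wait
broadcast(stations, 'rs1', rs1.execute())
print(rs2.ready(), rs2.execute())   # -> True 12
```

Because rs2 names its operands by rs1's tag rather than by R3, a later instruction could safely overwrite R3: this is the renaming-by-station-number that eliminates WAR and WAW hazards.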
All instructions pass through the issue stage in order, but instructions stalling on operands can be bypassed by later instructions whose operands are available. RAW hazards are handled by delaying instructions in reservation stations until all their operands are available. WAR and WAW hazards are handled by renaming registers in instructions to reservation station numbers. Load and store instructions to different memory addresses can be done in any order, but the relative order of a store and accesses to the same memory location must be maintained. One way to perform dynamic disambiguation of memory references is to perform the effective address calculations of loads and stores in program order in the issue stage.

Smith '95 studied the relationship between out-of-order issue, duplication of resources, and register renaming on the R2000 architecture (Figure 14.5):
1) base machine - no duplicate functional units, but can issue out-of-order
2) + ld/st: duplicates the load/store functional unit that accesses the data cache
3) + alu: duplicates the ALU
4) + both: duplicates both the load/store unit and the ALU
Differences are shown for window sizes of 8, 16, and 32 instructions, with and without register renaming. Conclusions: the study shows the extent to which superscalar machines benefit from duplicated resources, larger windows, and register renaming.

Branch prediction - usually used instead of delayed branching, since multiple instructions would need to execute in the delay slot, causing problems related to instruction dependencies.

Committing / Retiring step - needed since instructions may complete out-of-order. Using branch prediction and speculative execution means some instructions' results need to be thrown out. Results are held in temporary storage, and stores are performed in the order of sequential execution.

Pentium 4 Processor

a) Fetch 64 bytes of Pentium 4 (CISC) instructions from the L2 cache, decode instruction boundaries, and translate the Pentium 4 (CISC) instructions into micro-ops (RISC). b) The trace cache (L1 instruction cache) stores recently executed micro-ops.

The BTB uses dynamic branch prediction (a branch history table, BHT) with 4 bits of history per branch, via Yeh's algorithm. Static prediction is used if the branch is not in the BTB.
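A two-level adaptive predictor in the Yeh/Patt style can be sketched as follows (Python; a simplified single-branch model with hypothetical names, not the actual Pentium 4 BTB organization):

```python
class TwoLevelPredictor:
    """Sketch of a two-level adaptive predictor: a 4-bit branch history
    register indexes a pattern history table of 2-bit saturating
    counters."""
    def __init__(self, history_bits=4):
        self.history = 0
        self.mask = (1 << history_bits) - 1
        self.pht = [1] * (1 << history_bits)  # counters start weakly not-taken

    def predict(self):
        return self.pht[self.history] >= 2    # taken if counter in upper half

    def update(self, taken):
        c = self.pht[self.history]
        self.pht[self.history] = min(3, c + 1) if taken else max(0, c - 1)
        # shift the actual outcome into the history register
        self.history = ((self.history << 1) | int(taken)) & self.mask

p = TwoLevelPredictor()
for _ in range(8):        # train on an always-taken branch
    p.update(True)
print(p.predict())        # -> True after training
```

Keying the counters on recent history lets the predictor learn repeating patterns (e.g., loop exits every Nth iteration) that a single counter per branch would mispredict.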

c) Micro-ops are pulled from the trace cache (or from the ROM of the microprogrammed control unit, for very complex instructions) in program-sequence order. d) The drive stage delivers decoded instructions from the trace cache to the rename/allocate module.

Out-of-Order Execution Logic:

Allocate - allocates resources needed for execution:
  • stalls the pipeline if a resource (e.g., a register) is unavailable
  • allocates a reorder buffer (ROB) entry to store information about a micro-op as it executes
  • allocates one of 128 integer or floating-point registers for the result, and/or one of 48 load buffers or one of 24 store buffers
  • allocates an entry in one of the two micro-op queues
Two FIFO queues hold micro-ops until there is room in the scheduler:

One queue holds load/store micro-ops

One queue holds the remaining nonmemory micro-ops

The queues can operate at different speeds

A ROB entry contains: state, the memory address of the generating instruction, the micro-op, and the renamed result register

The scheduler retrieves micro-ops from the queues and dispatches/issues them for execution when all their operands and an execution unit are available. Up to 6 micro-ops can be dispatched per cycle.
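The operand-and-unit availability check can be sketched as follows (Python; a toy model with hypothetical names, not the actual Pentium 4 scheduler):

```python
def dispatch(queue, ready_regs, free_units, width=6):
    """Issue up to `width` micro-ops whose source operands are all ready
    and whose functional unit is free; everything else stays queued.
    Each micro-op is (name, unit, source_registers)."""
    issued, remaining = [], []
    units = set(free_units)
    for name, unit, srcs in queue:
        if len(issued) < width and unit in units and srcs <= ready_regs:
            issued.append(name)
            units.discard(unit)        # one micro-op per unit per cycle
        else:
            remaining.append((name, unit, srcs))
    return issued, remaining

q = [("u1", "alu", {"r1"}),   # ready, ALU free -> issues
     ("u2", "alu", {"r1"}),   # ready, but ALU already taken this cycle
     ("u3", "load", {"r2"}),  # ready, load unit free -> issues
     ("u4", "load", {"r9"})]  # r9 not ready -> waits
issued, left = dispatch(q, ready_regs={"r1", "r2"}, free_units={"alu", "load"})
print(issued)  # -> ['u1', 'u3']
```

Note that u3 issues ahead of the stalled u2: this is the out-of-order dispatch that the in-order FIFO queues feed.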

The execution units retrieve the necessary integer and floating-point register values and compute the condition flags (N, Z, C, V) used as input to branch instructions.

The branch unit compares the actual branch result with the prediction. If the branch outcome does not match the prediction, micro-ops are removed from the pipeline and the proper branch destination is provided to the BTB, which restarts the whole pipeline from the correct target address.

Itanium Assembly Language Instruction Examples

add r1 = r2,r3 // r1 = r2 + r3

add r1 = r2,r3,1 // r1 = r2 + r3 + 1

Compare instructions - used to set predicate reg(s)

cmp.eq p3 = r2,r4 // p3 set if r2 equals r4

cmp.gt p2,p3 = r3,r4 // p2 set if r3 > r4; p3 = not p2

Predicated instruction

(p4) add r1 = r2,r3 // result of add only seen if p4 is true

Branch instruction

br.cloop.sptk loop_back // counted-loop branch; .sptk = static predict taken

Data transfer instructions

(qp) ldSZ.ldtype.ldhint r1 = [r3]

ld8 r5 = [r6]

ld8.a r5 = [r6] // Advanced

ld8.s r5 = [r6] // Speculative

ldhint of "none" - temporal locality, level 1

Three techniques for Reducing Branch Penalties:

Branch elimination - Best way to handle branches is not to have branches

if (R1 == R2)

R3 = R3 + R1

else

R3 = R3 - R1

end if

cmp.eq p1, p2 = r1, r2

(p1) add r3 = r3, r1

(p2) sub r3 = r3, r1
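The effect of the predicated sequence above can be mimicked in software as branchless selection (a Python sketch; real predication squashes the untaken operation in hardware rather than selecting between two computed results):

```python
def predicated_add_sub(r1, r2, r3):
    """Mimics: cmp.eq p1,p2 = r1,r2 ; (p1) add r3=r3,r1 ; (p2) sub r3=r3,r1.
    Both results are formed and the predicate selects one, so no branch
    (and no misprediction penalty) is involved."""
    p1 = (r1 == r2)        # cmp.eq sets p1; p2 is its complement
    add_result = r3 + r1   # (p1) add r3 = r3, r1
    sub_result = r3 - r1   # (p2) sub r3 = r3, r1
    return add_result if p1 else sub_result

print(predicated_add_sub(5, 5, 10))  # -> 15 (p1 true: add path)
print(predicated_add_sub(5, 4, 10))  # -> 5  (p2 true: sub path)
```

The trade-off is that both paths consume functional-unit slots; predication wins when the branch is hard to predict and the paths are short.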

Branch speedup - Reduce the delay associated with branches

sub r6 = r7, r8;; // cycle 1

sub r9 = r10, r6

ld8 r4 = [r5];; // cycle 2 (ld takes two cycles to fetch from L1)

add r11 = r12, r4 // cycle 4

Reorder instruction

ld8 r4 = [r5];; // cycle 1 (ld takes two cycles to fetch from L1)

sub r6 = r7, r8;; // cycle 2

sub r9 = r10, r6

add r11 = r12, r4 // cycle 3
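The cycle counts in the two orderings can be reproduced with a small dataflow model (Python; a sketch assuming unlimited issue width, in-order issue, a 1-cycle sub/add and a 2-cycle load, with hypothetical function names):

```python
def last_issue_cycle(program, latency):
    """In-order issue model: an instruction may share a cycle with its
    predecessor but never issues earlier, and it waits until every
    source register's value is available. Returns the cycle in which
    the final instruction executes. program: list of (op, dest, srcs)."""
    ready = {}   # register -> first cycle its value can be consumed
    issue = 1
    for op, dest, srcs in program:
        issue = max([issue] + [ready.get(s, 1) for s in srcs])
        ready[dest] = issue + latency[op]
    return issue

lat = {"sub": 1, "ld8": 2, "add": 1}
original = [("sub", "r6", ["r7", "r8"]), ("sub", "r9", ["r10", "r6"]),
            ("ld8", "r4", ["r5"]), ("add", "r11", ["r12", "r4"])]
reordered = [("ld8", "r4", ["r5"]), ("sub", "r6", ["r7", "r8"]),
             ("sub", "r9", ["r10", "r6"]), ("add", "r11", ["r12", "r4"])]
print(last_issue_cycle(original, lat), last_issue_cycle(reordered, lat))  # -> 4 3
```

Hoisting the load lets its two-cycle latency overlap with the two subtracts, which is exactly the saving shown in the reordered example.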

"Normal load:"

(p1) br some_label // cycle 0

ld8 r1 = [r5] ;; // cycle 1 (ld takes two cycles to fetch from L1)

add r2 = r1, r3 // cycle 3

"Speculative load:"

ld8.s r1 = [r5] ;; // cycle -2 (start speculative load)

(some other instruction)

(p1) br some_label // cycle 0

chk.s r1, recovery // cycle 0 (see if load completed and report exceptions)

add r2 = r1, r3 // cycle 0
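The deferred-exception behavior of ld8.s / chk.s can be sketched as follows (Python; a toy model with hypothetical function names, using a dict as memory and a boolean in place of the NaT bit):

```python
def speculative_load(memory, addr):
    """ld8.s-style load: never faults at load time. Returns (value, nat);
    nat (Not-a-Thing) marks a deferred exception instead of raising it,
    so the load can be hoisted above the branch that guards it."""
    if addr in memory:
        return memory[addr], False
    return None, True              # bad address: defer the fault

def check_speculation(nat):
    """chk.s-style check: branch to recovery code only if the hoisted
    load actually faulted and its value is really needed."""
    return "recovery" if nat else "ok"

mem = {0x100: 42}
v, nat = speculative_load(mem, 0x100)
print(v, check_speculation(nat))   # -> 42 ok
v, nat = speculative_load(mem, 0x999)
print(check_speculation(nat))      # -> recovery
```

The point is that a faulting speculative load costs nothing unless control flow reaches the chk.s, so loads can be started cycles before the guarding branch resolves.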

Branch prediction - Discussed before