Instruction pipeline

"Pipelining" redirects here. For HTTP pipelining, see HTTP pipelining.

An instruction pipeline is a technique used in the design of microprocessors and other digital electronic devices to increase their performance. Pipelining reduces the cycle time of a processor and hence increases instruction throughput, the number of instructions that can be executed in a unit of time. However, pipelining does not help in all cases, and it comes with several disadvantages.

Advantages of pipelining:

  1. The cycle time of the processor is reduced, thus increasing instruction bandwidth in most cases.

Advantages of not pipelining:

  1. The processor executes only a single instruction at a time. This prevents branch delays and problems with serial instructions being executed concurrently. Consequently, the design is simpler and cheaper to manufacture.
  2. The instruction latency in a non-pipelined processor is slightly lower than in a pipelined equivalent, because extra flip-flops must be added to the data path of a pipelined processor.
  3. A non-pipelined processor will have a fixed instruction bandwidth. The performance of a pipelined processor is much harder to determine and varies widely between different programs.

Most modern CPUs are driven by a clock. Internally, the CPU consists of logic and flip-flops. When a clock pulse arrives, the flip-flops take their new values, and the logic then requires a period of time to decode the new values. Then the next clock pulse arrives, the flip-flops take their new values again, and so on. By breaking the logic into smaller pieces and inserting flip-flops between the pieces, the delay before the logic produces valid outputs is reduced, and the clock period can be shortened accordingly. For example, the classic RISC pipeline is broken into five stages with a set of flip-flops between each stage (a small simulation of how the stages overlap follows the list):

  1. Instruction fetch
  2. Instruction decode and register fetch
  3. Execute
  4. Memory access
  5. Register write back
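
To make the overlap concrete, here is a minimal Python sketch (an illustration only, not a description of any real hardware) that prints which of the five stages each instruction occupies on each clock cycle, assuming one instruction enters the pipeline per clock and no stalls occur:

    STAGES = ["IF", "ID", "EX", "MEM", "WB"]  # the five classic RISC stages

    def pipeline_diagram(instructions):
        """Print the stage each in-flight instruction occupies per clock."""
        total_cycles = len(instructions) + len(STAGES) - 1
        for cycle in range(total_cycles):
            occupancy = []
            for i, instr in enumerate(instructions):
                stage = cycle - i  # instruction i enters the pipeline on cycle i
                if 0 <= stage < len(STAGES):
                    occupancy.append(f"{instr}:{STAGES[stage]}")
            print(f"clock {cycle + 1}: " + "  ".join(occupancy))

    pipeline_diagram(["i1", "i2", "i3"])

With three instructions and five stages the diagram spans seven clocks; from clock 3 onward several stages are busy at once, which is exactly the idle-time reduction pipelining is after.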

Hazards: When a programmer (or compiler) writes assembly code, they assume that each instruction is executed before execution of the subsequent instruction begins. Pipelining invalidates this assumption. When this causes a program to behave incorrectly, the situation is known as a hazard. Various techniques for resolving hazards, such as forwarding and stalling, exist; a toy example of choosing between them appears below.
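
The following Python toy is a minimal sketch, not real control logic; the (destination, sources) instruction encoding and the register names are invented for illustration. It shows how control logic might detect a read-after-write dependency between adjacent instructions and choose between forwarding and stalling:

    def resolve_hazard(in_execute, in_decode, can_forward=True):
        """Return the action taken for the instruction in the decode stage."""
        dest, _ = in_execute          # register the older instruction will write
        _, sources = in_decode        # registers the younger instruction reads
        if dest in sources:           # read-after-write: decode needs EX's result
            if can_forward:
                return "forward"      # route the EX result back without waiting
            return "stall"            # insert a bubble until the result is ready
        return "proceed"

    # ADD writes r3; the following SUB reads r3: a classic RAW hazard.
    add_r3 = ("r3", ("r1", "r2"))
    sub_r5 = ("r5", ("r3", "r4"))
    print(resolve_hazard(add_r3, sub_r5))                     # forward
    print(resolve_hazard(add_r3, sub_r5, can_forward=False))  # stall

Real hardware makes this decision with comparators on register numbers every cycle; forwarding covers most cases, and a stall is the fallback when the needed value genuinely does not exist yet (for example, immediately after a load from memory).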

The basic, unpipelined instruction cycle is easy to implement, but it is inefficient. Pipelining addresses this inefficiency and improves performance significantly by reducing the time that components inside the CPU sit idle. Pipelining does not eliminate idle time completely, but it reduces it substantially. A pipelined processor is organised internally into stages that can work semi-independently on separate jobs. The stages are linked into a chain, so each stage's output feeds the input of the next stage until the job is done. This organisation allows overall processing time to be significantly reduced.

Unfortunately, not all instructions are independent. In a simple pipeline, completing an instruction may require 5 stages. To operate at full performance, this pipeline will need to run 4 subsequent independent instructions while the first is completing. If 4 instructions that do not depend on the output of the first instruction are not available, the pipeline control logic must insert a stall or wasted clock cycle into the pipeline until the dependency is resolved. Fortunately, techniques such as forwarding can significantly reduce the cases where stalling is required. While pipelining can in theory increase performance over an unpipelined core by a factor of the number of stages (assuming the clock frequency also scales with the number of stages), in reality, most code does not allow for ideal execution.
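
The factor-of-the-stage-count claim is easy to sanity-check with rough arithmetic. The numbers below (stage count, instruction count, average stall rate) are assumptions chosen for illustration, not measurements:

    stages = 5
    instructions = 1000
    stalls_per_instruction = 0.4   # assumed average bubbles caused by dependencies

    # Unpipelined: at the same short clock, each instruction takes `stages` cycles.
    unpipelined_cycles = instructions * stages

    # Pipelined: fill the pipe once, then about one instruction per clock plus stalls.
    pipelined_cycles = (stages - 1) + instructions * (1 + stalls_per_instruction)

    print(unpipelined_cycles / pipelined_cycles)   # about 3.6x, short of the ideal 5x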

History

In the mid-1950s, it was realized that most of a computer's valuable circuitry sat idle during a computation. After a memory fetch, the memory would be idle while the CPU decoded an instruction; after decode, the decode circuitry would sit idle during execution; and after execution, everything would sit idle while results were written into memory.

Pipelining was invented to keep all these circuits working at the same time. The first pipelined computers had 3 stages: in ILLIAC II (1958) the stages were called advanced control, delayed control, and interplay, while in IBM's Stretch (1957) they were called fetch, decode, and execute. Even though the names were different, the functions were very similar.

Pipelining exploded in the 1960s along with sales of mainframe computers. Seymour Cray was responsible for several critical innovations, including scoreboarding and interlocks, which stall instructions that depend on other instructions that have not yet completed.

In the early 1980s there was a move away from complex computers towards reduced instruction set computers (RISC). The MIPS R2000 (MIPS: Microprocessor without Interlocked Pipeline Stages) had a 5-stage pipeline and relied upon a smart compiler to insert NO-OP instructions where needed, rather than stalling the pipeline in hardware. This technique did not accommodate the common upgrade of increasing the number of pipeline stages, and was dropped after just a few years.

In the late 1980s, superscalar and superpipelined machines were proposed. Superscalar machines execute 2 or 3 instructions at the same time, while superpipelined machines have 8 or more pipeline stages (like the CRAY-1). Today's Pentium processors use superpipelining with between 15 and 30 stages, and the term "super" has fallen from use.

Examples

Example 1

For instance, a typical instruction to add two numbers might be ADD A, B, C, which adds the values found in memory locations A and B, and then puts the result in memory location C. In a pipelined processor the pipeline controller would break this into a series of instructions similar to:


LOAD A, R1
LOAD B, R2
ADD R1, R2, R3
STORE R3, C
LOAD next instruction

The R locations are registers, temporary storage inside the CPU that is quick to access. The end result is the same: the numbers are added and the result is placed in C, and the time taken to drive the addition to completion is no different from the non-pipelined case.

The key to understanding the advantage of pipelining is to consider what happens when this sequence is half-way done, at the ADD step for instance. At this point the circuitry responsible for loading data from memory is no longer being used and would normally sit idle. Instead, the pipeline controller fetches the next instruction from memory and starts loading the data it needs into registers. That way, when the ADD instruction is complete, the data needed for the next ADD is already loaded and ready to go. The overall effective speed of the machine can be greatly increased because no parts of the CPU sit idle.

Each of these simple steps is usually called a pipeline stage; in the example above the pipeline is three stages long: a loader, an adder and a storer. Note that pipelining improves throughput without shortening the time any single instruction takes, as the arithmetic below illustrates.
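
A quick calculation makes this distinction concrete. The 10 ns stage time is an assumption picked purely for illustration:

    stage_time_ns = 10   # assumed time for one stage (loader, adder, or storer)
    stages = 3

    latency_ns = stages * stage_time_ns        # time to finish ONE instruction
    nonpipelined_rate = 1 / latency_ns         # one instruction per 30 ns
    pipelined_rate = 1 / stage_time_ns         # one per 10 ns once the pipe is full

    print(latency_ns)                          # 30 ns in both designs
    print(pipelined_rate / nonpipelined_rate)  # 3.0x throughput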

Every microprocessor manufactured today uses at least 2 stages of pipeline. (The Atmel AVR and the PIC microcontroller each have a two-stage pipeline.)

Example 2

To better visualize the concept, we can look at a theoretical three-stage pipeline:

Stage     Description
Load      Read instruction from memory
Execute   Execute instruction
Store     Store result in memory and/or registers

and a pseudo-code assembly listing to be executed:

	LOAD  #40,A      ; load 40 in A
	MOVE  A,B        ; copy A in B
	ADD   #20,B      ; add 20 to B
	STORE B, 0x300   ; store B into memory cell 0x300

This is how it would be executed:

Clock 1
Load      Execute   Store
LOAD      -         -

The LOAD instruction is fetched from memory.

Clock 2
Load      Execute   Store
MOVE      LOAD      -

The LOAD instruction is executed, while the MOVE instruction is fetched from memory.

Clock 3
Load      Execute   Store
ADD       MOVE      LOAD

The LOAD instruction is in the Store stage, where its result (the number 40) will be stored in register A. In the meantime, the MOVE instruction is being executed. Since it must move the contents of A into B, it must wait for the LOAD instruction to finish.

Clock 4
Load      Execute   Store
STORE     ADD       MOVE

The STORE instruction is loaded, while the MOVE instruction is finishing off and the ADD is calculating.

And so on. Note that sometimes an instruction will depend on the result of another one (like our MOVE example). When more than one instruction references a particular location for an operand, either reading it (as an input) or writing it (as an output), executing those instructions in an order different from the original program order can lead to problems, known as hazards. There are several established techniques for either preventing hazards from occurring or working around them if they do. As a cross-check, the short simulation below reproduces the clock-by-clock tables above.
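
This Python simulation is illustrative only: it ignores the A-to-B dependency between LOAD and MOVE and simply streams the four instructions through the three stages, printing the occupancy of each stage on each clock:

    program = ["LOAD", "MOVE", "ADD", "STORE"]
    stages = ["Load", "Execute", "Store"]

    for clock in range(len(program) + len(stages) - 1):
        row = {}
        for i, instr in enumerate(program):
            stage_index = clock - i   # instruction i enters the pipe on clock i
            if 0 <= stage_index < len(stages):
                row[stages[stage_index]] = instr
        cells = "   ".join(f"{s}: {row.get(s, '-'):5}" for s in stages)
        print(f"Clock {clock + 1}:   {cells}")

Clocks 1 through 4 match the tables above; clocks 5 and 6 drain the last instructions out of the pipeline.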

Complications

Many designs include pipelines as long as 7, 10 and even 31 stages (as in the Intel Pentium 4). The Xelerator X10q has a pipeline more than a thousand stages long [1]. The downside of a long pipeline is that when a program branches, the entire pipeline must be flushed, a problem that branch prediction helps to alleviate. Branch prediction itself can end up exacerbating the problem if branches are predicted poorly. In certain applications, such as supercomputing, programs are specially written to branch rarely, so very long pipelines are ideal to speed up the computations, as long pipelines are designed to reduce clocks per instruction (CPI). In many other applications, however, such as office software, branching happens constantly, resulting in significantly less improvement over an unpipelined CPU.

The higher throughput of pipelines falls short when the executed code contains many branches: the processor cannot know where to read the next instruction, and must wait for the branch instruction to finish, leaving the pipeline behind it empty. After the branch is resolved, the next instruction has to travel all the way through the pipeline before its result becomes available and the processor appears to "work" again. In the extreme case, the performance of a pipelined processor could theoretically approach that of an unpipelined processor, or even be slightly worse, if all but one of the pipeline stages are idle and a small overhead is present between stages.
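
The cost of branching can be estimated with the same kind of rough arithmetic as before. All the numbers below (pipeline depth, branch frequency, misprediction rate) are assumptions chosen for illustration:

    pipeline_depth = 20     # assumed stage count
    branch_fraction = 0.2   # assumed: one instruction in five is a branch
    mispredict_rate = 0.1   # assumed predictor miss rate
    refill_penalty = pipeline_depth - 1   # cycles to refill after a flush

    # Ideal pipelined CPI is 1; each mispredicted branch adds a refill penalty.
    cpi = 1 + branch_fraction * mispredict_rate * refill_penalty
    print(cpi)   # about 1.38 cycles per instruction instead of the ideal 1.0

Doubling the pipeline depth nearly doubles the misprediction penalty, which is why very deep pipelines pay off mainly on branch-poor code.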

Because of the instruction pipeline, code that the processor loads will not immediately execute. Due to this, updates to the code very near the current point of execution may not take effect, because the earlier instructions have already been loaded into the prefetch input queue. Instruction caches make this phenomenon even worse. This is only relevant to self-modifying programs such as operating systems.
