Computer architecture
In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements (especially speeds and interconnections) and design implementations for the various parts of a computer, focusing largely on how the CPU operates internally and accesses addresses in memory.
"Architecture" hence typically refers to the fixed internal structure of the CPU (ie. electronic switches to represent logic gates) to perform logical operations, and may also include the built-in interface (ie. opcode) by which hardware resources (ie. CPU, memory, and also motherboard, peripherals) may be used by the software.
More specific usages of the term include:
- The design of a computer's CPU: its instruction set, addressing modes, and techniques such as SIMD and MIMD parallelism.
- Larger-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.
- Architecture is often defined as the set of machine attributes that a programmer must understand in order to program the specific computer successfully (i.e., to be able to reason about what a program will do when executed). For example, the instructions and the widths of the operands they manipulate are part of the architecture; by contrast, the frequency at which the system operates is not. This definition reveals the two main concerns of computer architects: (1) designing hardware that behaves the way programmers expect it to, and (2) using existing implementation technologies (e.g., semiconductors) to build the best computer possible ("best" can be defined in many ways, as described under Design goals). The latter concern is often referred to as microarchitecture; the sketch below illustrates the distinction.
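The distinction can be made concrete with a minimal C sketch. The program below observes an architecturally visible attribute, the width of an operand, while revealing nothing about the clock frequency of the machine running it; the 32-bit width of uint32_t is simply the attribute being demonstrated, not a claim about any particular machine:

```c
/* Sketch: operand width is architecturally visible; clock rate is not. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t x = UINT32_MAX;   /* largest value a 32-bit operand can hold */
    x = x + 1;                 /* wraps to 0: behavior fixed by the operand width */
    printf("32-bit wraparound: %" PRIu32 "\n", x);  /* 0 on any conforming machine */
    printf("operand width: %zu bits\n", sizeof(x) * 8);
    /* Nothing observable here depends on the clock rate: two machines with
     * very different frequencies must print exactly the same output. */
    return 0;
}
```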
Design goals
The most common goals in computer architecture revolve around the tradeoffs between cost and performance (i.e. speed), although other considerations, such as size, weight, reliability, feature set, expandability and power consumption, may be factors as well.
Cost
Generally cost is held constant, determined by either system or commercial requirements, and speed and storage capacity are adjusted to meet the cost target.
Performance
Computer retailers describe the performance of their machines in terms of clock speed (usually in MHz or GHz), the number of cycles per second of the CPU's main clock. This metric is somewhat misleading, however, because a machine with a higher clock rate does not necessarily deliver higher performance. Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a program. Other factors affect speed as well, such as the mix of functional units, bus speeds, available memory, and the type and order of the instructions in the programs being run.
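This can be quantified with the classic performance equation: execution time = (instruction count × cycles per instruction) / clock rate. The C sketch below works through the arithmetic for two hypothetical machines; the instruction count, CPI values, and clock rates are invented purely for illustration:

```c
/* Why clock rate alone is misleading:
 * time = (instructions x cycles per instruction) / clock rate. */
#include <stdio.h>

int main(void) {
    double insns = 1e9;                    /* instructions in a hypothetical program */

    /* Machine A: higher clock, but needs more cycles per instruction. */
    double time_a = insns * 2.0 / 3.0e9;   /* CPI = 2.0 at 3 GHz */
    /* Machine B: lower clock, but executes multiple instructions per cycle. */
    double time_b = insns * 0.5 / 2.0e9;   /* CPI = 0.5 at 2 GHz */

    printf("A (3 GHz): %.3f s\n", time_a); /* 0.667 s */
    printf("B (2 GHz): %.3f s\n", time_b); /* 0.250 s: slower clock, faster machine */
    return 0;
}
```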
There are also different types of speed. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g., when the disk drive finishes moving some data). This number is affected by a very wide range of design choices; for example, adding cache usually makes latency worse (slower) but makes other things faster. Computers that control machinery usually need low interrupt latencies, because they operate in a real-time environment and fail if an operation is not completed within a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking almost immediately after being instructed to brake. Low latencies can often be achieved very inexpensively.
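As a rough sketch of the real-time constraint, the following C fragment times an operation against a hard deadline. The 5 ms figure and the POSIX CLOCK_MONOTONIC clock are assumptions chosen for illustration, not details of any real brake controller:

```c
/* Toy real-time check: an operation either finishes within its deadline
 * or the system has failed.  The 5 ms deadline is a made-up example. */
#include <stdio.h>
#include <time.h>

#define DEADLINE_NS 5000000L   /* hypothetical 5 ms hard deadline */

static long elapsed_ns(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the work triggered by the interrupt would go here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    long ns = elapsed_ns(start, end);
    if (ns > DEADLINE_NS)
        printf("deadline missed (%ld ns): real-time failure\n", ns);
    else
        printf("completed in %ld ns, within the %ld ns deadline\n", ns, DEADLINE_NS);
    return 0;
}
```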
Additionally, the performance of a computer can be measured with other metrics, depending on its intended application domain. Besides being bound by processor performance, a system can be I/O bound (as in a web-serving application) or memory bound (as in graphical editing).
Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not help one choose a computer: often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, in hardware or software, that permit a specific benchmark to execute quickly but offer no similar advantage on other, more general computational tasks. Naïve users are apt to be unaware of such tricks.
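A minimal benchmark harness might look like the following C sketch, which times a placeholder workload over several runs and reports the mean; the workload (summing an array index range) and the run count are stand-ins, and real benchmark suites run far more representative programs:

```c
/* Toy benchmark harness: time a test workload several times, report the mean. */
#include <stdio.h>
#include <time.h>

#define RUNS 5
#define N 10000000L

static volatile long sink;   /* keeps the compiler from deleting the work */

static void workload(void) {
    long sum = 0;
    for (long i = 0; i < N; i++)
        sum += i;
    sink = sum;
}

int main(void) {
    double total = 0.0;
    for (int r = 0; r < RUNS; r++) {
        clock_t t0 = clock();
        workload();
        total += (double)(clock() - t0) / CLOCKS_PER_SEC;
    }
    printf("mean time over %d runs: %.4f s\n", RUNS, total / RUNS);
    return 0;
}
```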
The general scheme of optimization is to find the costs of the different parts of the computer. In a balanced computer system, the data rate is constant for all parts of the system, and cost is allocated proportionally to ensure this. The exact form of the computer system will depend on the constraints and goals for which it was optimized.
Virtual memory
Another common problem involves virtual memory.
Historically, random access memory has been thousands of times more expensive than rotating mechanical storage, such as hard drives in a modern computer.
For businesses and many general computing tasks, it is a good compromise never to let the computer run out of memory, an event which would halt the program and greatly inconvenience the user.
Instead of halting the program, many computer systems save less-frequently used blocks of memory to the rotating mechanical storage, which in essence becomes an extension of main memory. However, mechanical storage is thousands of times slower than electronic memory.
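The following toy C model sketches this scheme under simplifying assumptions: a three-frame "RAM" and a least-recently-used replacement policy, both chosen only for illustration, since real operating systems use larger memories and more elaborate policies:

```c
/* Toy paging model: a small "RAM" of FRAMES slots backed by a large "disk".
 * When RAM is full, the least-recently-used page is evicted to make room. */
#include <stdio.h>

#define FRAMES 3   /* physical memory slots */

int main(void) {
    int frame[FRAMES];       /* page held in each frame, -1 = empty */
    long last_use[FRAMES];   /* tick of each frame's most recent access */
    for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last_use[i] = -1; }

    int refs[] = {0, 1, 2, 0, 3, 4, 1};   /* a sample page-reference string */
    long tick = 0;

    for (size_t r = 0; r < sizeof refs / sizeof refs[0]; r++, tick++) {
        int page = refs[r], hit = -1, victim = 0;
        for (int i = 0; i < FRAMES; i++) {
            if (frame[i] == page) hit = i;                    /* page already in RAM */
            if (last_use[i] < last_use[victim]) victim = i;   /* track the LRU frame */
        }
        if (hit >= 0) {
            last_use[hit] = tick;
            printf("page %d: hit (fast)\n", page);
        } else if (frame[victim] >= 0) {
            printf("page %d: fault, evict page %d to disk (slow)\n", page, frame[victim]);
            frame[victim] = page;
            last_use[victim] = tick;
        } else {
            printf("page %d: fault, load from disk (slow)\n", page);
            frame[victim] = page;
            last_use[victim] = tick;
        }
    }
    return 0;
}
```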
Reconfigurable computing
Current research in reconfigurable computing attempts to break through the structural limits of conventional processing architectures. A reconfigurable computing system compiles program source code to an intermediate form suitable for programming runtime-reconfigurable field-programmable gate arrays (FPGAs), enabling a software design to be implemented directly in hardware. Since many different hardware-implemented programs can potentially run in parallel, a reconfigurable computing system can be considered an advanced parallel processing architecture. Reconfigurable computing can also be categorized as computing in memory, an approach inspired by the brain, in which processor and memory cannot be distinguished from each other.
External links
- http://www.aceshardware.com
- http://www.anandtech.com
- http://www.dansdata.com
- http://www.barefeats.com
- http://www.cs.wisc.edu/~arch/www