CPU design

CPU design is the design engineering task of creating a central processing unit (CPU), a component of computer hardware. It is a subfield of electronics engineering and computer engineering.

Overview

CPU design focuses on these areas (a minimal functional sketch follows this list):
  1. datapaths (such as ALUs and pipelines)
  2. control unit: logic which controls the datapaths
  3. memory components such as register files and caches
  4. clock circuitry such as clock drivers, PLLs, and clock distribution networks
  5. pad transceiver circuitry
  6. the logic gate cell library which is used to implement the logic
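
The interplay among the first three of these areas can be illustrated with a short, purely functional C++ sketch (C++ is also one of the modeling languages used for architectural studies, as noted below). The 16-bit instruction format, opcodes, and register count here are invented for illustration and do not correspond to any real CPU.

    // Minimal, illustrative model of a datapath (ALU), a register file, and a
    // control loop that decodes a toy 16-bit instruction word. The encoding is
    // hypothetical and exists only to show how control steers the datapath.
    #include <array>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum Op : uint8_t { ADD = 0, SUB = 1, AND = 2, OR = 3 };

    // Datapath element: a purely combinational ALU.
    uint16_t alu(Op op, uint16_t a, uint16_t b) {
        switch (op) {
            case ADD: return static_cast<uint16_t>(a + b);
            case SUB: return static_cast<uint16_t>(a - b);
            case AND: return a & b;
            case OR:  return a | b;
        }
        return 0;
    }

    int main() {
        std::array<uint16_t, 8> regs{};   // register file: 8 x 16-bit registers
        regs[1] = 5;
        regs[2] = 3;

        // Toy encoding (hypothetical): bits [15:12] opcode, [11:9] rd, [8:6] rs, [5:3] rt.
        std::vector<uint16_t> program = {
            static_cast<uint16_t>((ADD << 12) | (3 << 9) | (1 << 6) | (2 << 3)),  // r3 = r1 + r2
            static_cast<uint16_t>((SUB << 12) | (4 << 9) | (3 << 6) | (2 << 3)),  // r4 = r3 - r2
        };

        // Control: fetch each word, decode its fields, and steer the datapath.
        for (uint16_t instr : program) {
            Op  op = static_cast<Op>((instr >> 12) & 0xF);
            int rd = (instr >> 9) & 0x7;
            int rs = (instr >> 6) & 0x7;
            int rt = (instr >> 3) & 0x7;
            regs[rd] = alu(op, regs[rs], regs[rt]);
        }

        std::printf("r3=%d r4=%d\n", regs[3], regs[4]);  // expected: r3=8 r4=5
        return 0;
    }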


CPUs designed for high-performance markets might require custom designs for each of these items to achieve frequency, power-dissipation, and chip-area goals.

CPUs designed for lower performance markets might lessen the implementation burden by:
  • Acquiring some of these items by purchasing them as intellectual property
  • Using control logic implementation techniques (logic synthesis using CAD tools) to implement the other components: datapaths, register files, clocks


Common logic styles used in CPU design include:
  • Unstructured random logic
  • Finite-state machines
  • Microprogramming (common from 1965 to 1985; a table-driven sketch follows this list)
  • Programmable logic arrays (common in the 1980s, no longer common)
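
Microprogrammed control can be sketched as a table of control words that a sequencer steps through. The C++ fragment below is illustrative only; the control-signal fields, micro-ops, and "load-and-add" instruction are invented, not taken from any real machine.

    // Sketch of microprogrammed control: each machine instruction is carried out
    // by stepping through a small table of control words in a "control store".
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Hypothetical control word: which signals are asserted during this micro-step.
    struct MicroOp {
        bool load_mar;   // latch the effective address into the memory address register
        bool read_mem;   // drive a memory read
        bool alu_add;    // select the ALU add operation
        bool write_reg;  // write the ALU result back to the register file
    };

    int main() {
        // Control store: a microprogram for a hypothetical "load-and-add" instruction.
        std::vector<MicroOp> control_store = {
            {true,  false, false, false},  // step 0: compute the effective address
            {false, true,  false, false},  // step 1: read the operand from memory
            {false, false, true,  false},  // step 2: add the operand to the accumulator
            {false, false, false, true },  // step 3: write back the result
        };

        // The sequencer simply walks the control store; real designs also branch
        // on the fetched opcode and on condition codes.
        for (std::size_t step = 0; step < control_store.size(); ++step) {
            const MicroOp& u = control_store[step];
            std::printf("step %zu: mar=%d mem=%d alu=%d wb=%d\n",
                        step, u.load_mar, u.read_mem, u.alu_add, u.write_reg);
        }
        return 0;
    }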


Device types used to implement the logic include:
  • Transistor-transistor logic (TTL) Small Scale Integration logic chips - no longer used for CPUs
  • Programmable Array Logic and programmable logic devices - no longer used for CPUs
  • Emitter-coupled logic (ECL) gate arrays - no longer common
  • CMOS gate arrays - no longer used for CPUs
  • CMOS ASICs - what is commonly used today; they are so common that the term ASIC is not used for CPUs
  • Field-programmable gate arrays (FPGAs) - common for soft microprocessors, and more or less required for reconfigurable computing


A CPU design project generally has these major tasks:
  • Programmer-visible instruction set architecture, which can be implemented by a variety of microarchitectures
  • Architectural study and performance modeling in ANSI C/C++ or SystemC (a minimal modeling sketch follows this list)
  • High-level synthesis (HLS) or register-transfer-level (RTL, e.g. logic) implementation
  • RTL verification
  • Circuit design of speed-critical components (caches, registers, ALUs)
  • Logic synthesis or logic-gate-level design
  • Timing analysis to confirm that all logic and circuits will run at the specified operating frequency
  • Physical design, including floorplanning and place and route of logic gates
  • Checking that the RTL, gate-level, transistor-level, and physical-level representations are equivalent
  • Checks for signal integrity and chip manufacturability (design rule checking)


As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU.
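
A common ingredient of that verification effort is comparing the design against an independent golden reference over many stimuli. The C++ sketch below is only an illustration of that idea: a deliberately buggy "design" adder is checked against a reference adder using random inputs. Real flows use HDL simulators, constrained-random testbenches, assertions, and formal tools rather than toy functions like these.

    // Golden-model comparison: drive the same random stimuli into the design
    // under test and a trusted reference, and flag any mismatch. The "design"
    // here is a toy function with a deliberately planted bug.
    #include <cstdint>
    #include <cstdio>
    #include <random>

    uint8_t reference_add(uint8_t a, uint8_t b) { return static_cast<uint8_t>(a + b); }

    uint8_t dut_add(uint8_t a, uint8_t b) {
        uint8_t sum = static_cast<uint8_t>(a + b);
        if (a == 0xFF && b == 0x01) sum = 0x01;  // planted bug for demonstration
        return sum;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> dist(0, 255);
        int mismatches = 0;

        for (int i = 0; i < 100000; ++i) {
            uint8_t a = static_cast<uint8_t>(dist(rng));
            uint8_t b = static_cast<uint8_t>(dist(rng));
            if (dut_add(a, b) != reference_add(a, b)) {
                if (mismatches == 0)
                    std::printf("first mismatch: a=%d b=%d dut=%d ref=%d\n",
                                a, b, dut_add(a, b), reference_add(a, b));
                ++mismatches;
            }
        }
        std::printf("%d mismatches in 100000 random trials\n", mismatches);
        return 0;
    }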

Key CPU architectural innovations include the index register, the cache, virtual memory, instruction pipelining, superscalar execution, CISC, RISC, virtual machines, emulators, microprogramming, and the stack.
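
Some of these innovations are easy to illustrate in software. The C++ sketch below is a toy direct-mapped cache model showing the tag/index split and hit/miss bookkeeping behind the cache idea; the line size, line count, and address trace are invented and do not describe any particular CPU's cache.

    // Toy direct-mapped cache model: split each address into tag/index, and
    // count hits and misses over a made-up address trace.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Line { bool valid = false; uint32_t tag = 0; };

    int main() {
        const uint32_t kLines     = 16;   // 16 cache lines (illustrative)
        const uint32_t kLineBytes = 64;   // 64-byte lines (illustrative)
        std::vector<Line> cache(kLines);

        // A made-up trace with some spatial and temporal locality.
        const uint32_t trace[] = {0x1000, 0x1004, 0x1040, 0x2000, 0x1000,
                                  0x1044, 0x2004, 0x3000, 0x1000, 0x2000};

        int hits = 0, misses = 0;
        for (uint32_t addr : trace) {
            uint32_t block = addr / kLineBytes;
            uint32_t index = block % kLines;
            uint32_t tag   = block / kLines;
            Line& line = cache[index];
            if (line.valid && line.tag == tag) {
                ++hits;
            } else {
                ++misses;               // on a miss, fill (and possibly evict) the line
                line.valid = true;
                line.tag   = tag;
            }
        }
        std::printf("hits=%d misses=%d\n", hits, misses);
        return 0;
    }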

Goals

The first CPUs were designed to do mathematical calculations faster and more reliably than human computers.

Each successive generation of CPU might be designed to achieve some of these goals:
  • higher performance levels of a single program or thread
  • higher throughput levels of multiple programs/threads
  • less power consumption for the same performance level
  • lower cost for the same performance level
  • greater connectivity to build larger, more parallel systems
  • more specialization to aid in specific targeted markets


Re-designing a CPU core to a smaller die-area helps achieve several of these goals.
  • Shrinking everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die, improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance), and reduces cost (more CPUs fit on the same wafer of silicon; a worked example follows this list).
  • Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one VLSI chip (additional cache, multiple CPUs, or other components), improving performance and reducing overall system cost.
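
A rough worked example of the cost effect, using invented numbers: a 300 mm wafer has about 70,000 square millimeters of area, so shrinking a die from 100 to 70 square millimeters raises the candidate die count from roughly 700 to about 1,000 per wafer, cutting cost per die by about 30% before edge loss and yield are considered. The short C++ sketch below just encodes that arithmetic.

    // Back-of-the-envelope effect of a die shrink on dies per wafer and cost
    // per die. All numbers are illustrative; real estimates must account for
    // edge loss, scribe lines, and yield.
    #include <cstdio>

    int main() {
        const double wafer_area_mm2 = 3.14159265 * 150.0 * 150.0;  // 300 mm wafer
        const double wafer_cost_usd = 5000.0;                      // assumed wafer cost
        const double die_before_mm2 = 100.0;                       // assumed die size
        const double die_after_mm2  = 70.0;                        // after the shrink

        const double dies_before = wafer_area_mm2 / die_before_mm2;
        const double dies_after  = wafer_area_mm2 / die_after_mm2;

        std::printf("dies/wafer: %.0f -> %.0f\n", dies_before, dies_after);
        std::printf("cost/die:   $%.2f -> $%.2f\n",
                    wafer_cost_usd / dies_before, wafer_cost_usd / dies_after);
        return 0;
    }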

Performance analysis and benchmarking

Because there are too many programs to test a CPU's speed on all of them, benchmarks were developed.
The most famous benchmarks are the SPECint and SPECfp benchmarks developed by the Standard Performance Evaluation Corporation (SPEC) and the ConsumerMark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).

Some important measurements include (a small measurement sketch follows this list):
  • Instructions per second - Most consumers pick a computer architecture (normally the Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed about computer benchmarks, some of them pick a particular CPU based on operating frequency (see the megahertz myth).
  • FLOPS - The number of floating-point operations per second is often important in selecting computers for scientific computations.
  • Performance per watt - System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself (see http://www.eembc.org/benchmark/consumer.asp?HTYPE=SIM and http://news.com.com/Power+could+cost+more+than+servers,+Google+warns/2100-1010_3-5988090.html).
  • Some system designers building parallel computers pick CPUs based on speed per dollar.
  • System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and a deterministic response, as in DSPs.
  • Computer programmers who program directly in assembly language want a CPU to support a full-featured instruction set.
  • Low power - for systems with limited power sources (e.g. solar, batteries, human power).
  • Small size or low weight - for portable embedded systems and systems for spacecraft.
  • Environmental impact - minimizing the environmental impact of computers during manufacturing and recycling as well as during use, reducing waste and hazardous materials (see green computing).
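
Figures such as instructions per second and FLOPS are obtained by timing a known amount of work. The C++ sketch below times a simple floating-point loop and reports an approximate MFLOPS value; it is only an illustration of the measurement idea, not a standardized benchmark such as SPEC or EEMBC, and the result depends heavily on the compiler and optimization settings.

    // Minimal throughput measurement: time a loop with a known floating-point
    // operation count and divide. Not a standardized benchmark.
    #include <chrono>
    #include <cstdio>

    int main() {
        const long n = 50000000;
        volatile double sink = 0.0;        // volatile discourages dead-code elimination
        double x = 1.0;

        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < n; ++i) {
            x = x * 1.0000001 + 0.0000001; // 2 floating-point operations per iteration
        }
        auto t1 = std::chrono::steady_clock::now();
        sink = x;

        const double seconds = std::chrono::duration<double>(t1 - t0).count();
        std::printf("approx %.1f MFLOPS (checksum %.6f)\n",
                    2.0 * n / seconds / 1e6, static_cast<double>(sink));
        return 0;
    }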


Some of these measures conflict. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa.

Markets

There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the others.

General purpose computing

The vast majority of revenue generated from CPU sales is for general purpose computing, that is, desktop, laptop, and server computers commonly used in businesses and homes. In this market, the Intel IA-32 architecture dominates, with its rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops.

Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of running a wide range of programs efficiently have made these CPU designs among the most technically advanced, along with the disadvantages of being relatively costly and having high power consumption.

High-end processor economics

In 1984, most high-performance CPUs required four to five years to develop.
Developing new, high-end CPUs is a very costly proposition. Both the logical complexity (needing very large logic design and logic verification teams and simulation farms with perhaps thousands of computers) and the high operating frequencies (needing large circuit design teams and access to the state-of-the-art fabrication process) account for the high cost of design for this type of chip. The design cost of a high-end CPU will be on the order of US $100 million. Since the design of such high-end chips nominally takes about five years to complete, to stay competitive a company has to fund at least two of these large design teams to release products at the rate of 2.5 years per product generation.

As an example, the typical loaded cost for one computer engineer is often quoted as US$250,000 per year, including salary, benefits, CAD tools, computers, office space rent, etc. Assume that 100 engineers are needed to design a CPU and that the project takes four years.

Total cost = $250,000 per engineer-year × 100 engineers × 4 years = $100,000,000.

The above amount is just an example. The design teams for modern-day general-purpose CPUs have several hundred team members.

Scientific computing

Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs.

Embedded design

As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in volumes of many billions of units per year, but mostly at much lower price points than general-purpose processors.

These single-function devices differ from the more familiar general-purpose CPUs in several ways:
  • Low cost is of utmost importance.
  • It is important to maintain low power dissipation, as embedded devices often have limited battery life and it is often impractical to include cooling fans.
  • To lower system cost, peripherals are integrated with the processor on the same silicon chip (a minimal memory-mapped register sketch follows this list).
  • Keeping peripherals on-chip also reduces power consumption as external GPIO ports typically require buffering so that they can source or sink the relatively high current loads that are required to maintain a strong signal outside of the chip.
    • Many embedded applications have a limited amount of physical space for circuitry; keeping peripherals on-chip will reduce the space required for the circuit board.
    • The program and data memories are often integrated on the same chip. When the only allowed program memory is ROM, the device is known as a microcontroller.
  • For many embedded applications, interrupt latency will be more critical than in some general-purpose processors.
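
Software reaches on-chip peripherals through memory-mapped registers. The C++ fragment below illustrates that access pattern; the GPIO register layout and pin number are hypothetical, and a small array stands in for the hardware so the sketch runs anywhere. On a real microcontroller the register pointers would be fixed addresses taken from the vendor's datasheet or device header files.

    // Illustration of memory-mapped peripheral access. On real hardware the
    // register pointers would be fixed datasheet addresses, for example
    //   volatile uint32_t* GPIO_OUT = reinterpret_cast<volatile uint32_t*>(0x40000004);
    // Here an array stands in for the peripheral so the example is runnable.
    #include <cstdint>
    #include <cstdio>

    static std::uint32_t fake_gpio[2] = {0, 0};
    static volatile std::uint32_t* const GPIO_DIR = &fake_gpio[0];  // direction register
    static volatile std::uint32_t* const GPIO_OUT = &fake_gpio[1];  // output register

    void set_led(bool on) {
        const std::uint32_t kLedPin = 1u << 3;        // hypothetical pin 3
        *GPIO_DIR = *GPIO_DIR | kLedPin;              // configure the pin as an output
        if (on)  *GPIO_OUT = *GPIO_OUT | kLedPin;     // drive the pin high
        else     *GPIO_OUT = *GPIO_OUT & ~kLedPin;    // drive the pin low
    }

    int main() {
        set_led(true);
        std::printf("DIR=0x%08x OUT=0x%08x\n",
                    static_cast<unsigned>(fake_gpio[0]),
                    static_cast<unsigned>(fake_gpio[1]));
        return 0;
    }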

Embedded processor economics

The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year. The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.0127 square millimeters of silicon.

As of 2009, more CPUs are produced using the ARM architecture instruction set than any other 32-bit instruction set. The ARM architecture and the first ARM chip were designed in about one and a half years and 5 man-years of work time.

The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 man-years of work time.

It is believed that the 8-bit AVR architecture and the first AVR microcontroller were conceived and designed by two students at the Norwegian Institute of Technology.

The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people.

Research and educational CPU design

The 32-bit Berkeley RISC I and RISC II architectures and the first chips were mostly designed by a series of students as part of a four-quarter sequence of graduate courses.
This design became the basis of the commercial SPARC processor design.

For about a decade, every student taking the 6.004 class at MIT was part of a team; each team had one semester to design and build a simple 8-bit CPU out of 7400 series integrated circuits.
One team of four students designed and built a simple 32-bit CPU during that semester.
Some undergraduate courses require a team of two to five students to design, implement, and test a simple CPU in an FPGA in a single 15-week semester.

Soft microprocessor cores

For embedded systems, the highest performance levels are often not needed or desired due to power consumption requirements. This allows the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market.

Research topics

A variety of new CPU design ideas have been proposed, including reconfigurable logic, clockless CPUs, computational RAM, and optical computing.

See also

  • Central processing unit
  • History of general purpose CPUs
  • Microprocessor
  • Microarchitecture
  • Moore's law
  • Amdahl's law
  • System-on-a-chip
  • Reduced instruction set computer
  • Complex instruction set computer
  • Minimal instruction set computer
  • Electronic design automation
  • High-level synthesis
The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.
 