Massively parallel
Massively parallel is a term that appears in computer science, the life sciences, medical diagnostics, and other fields.

A massively parallel computer is a distributed-memory computer system which consists of many individual nodes, each of which is essentially an independent computer in itself, consisting in turn of at least one processor, its own memory, and a link to the network that connects all the nodes together. Such systems have many independent arithmetic units or entire microprocessors that run in parallel. The term "massive" connotes hundreds, if not thousands, of such units. Nodes communicate by passing messages, using standards such as the Message Passing Interface (MPI).
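
The message-passing style described above can be illustrated with a small sketch. The following C program (an illustration added here, not part of the original article; it assumes an MPI implementation such as MPICH or Open MPI is available) has every node send its rank to node 0, which collects the messages.

  /* Minimal message-passing sketch: each node sends its rank to node 0.
     Compile with an MPI wrapper, e.g.  mpicc ranks.c -o ranks
     Run with, e.g.                     mpirun -np 4 ./ranks            */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size;

      MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this node's identity      */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of nodes     */

      if (rank != 0) {
          /* every non-root node passes one message to node 0 */
          MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
      } else {
          int sum = 0, value;
          for (int src = 1; src < size; src++) {
              MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              sum += value;
          }
          printf("node 0 heard from %d other nodes; sum of ranks = %d\n",
                 size - 1, sum);
      }

      MPI_Finalize();                         /* shut down the MPI runtime */
      return 0;
  }

Each node runs the same program but operates only on its own private memory; data moves between nodes solely through explicit sends and receives, which is what distinguishes this distributed-memory style from shared-memory multiprocessing.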

In this class of computing, all of the processing elements are connected together to form one very large computer. This is in contrast to distributed computing, where massive numbers of separate computers are used to solve a single problem.

Supercomputers

Nearly all supercomputers as of 2005 are massively parallel, with the largest having several hundred thousand CPUs. The cumulative output of the many constituent CPUs can result in large total peak FLOPS (floating-point operations per second) figures. The true amount of computation accomplished depends on the nature of the computational task and its implementation. Some problems are intrinsically easier to separate into parallel computational tasks than others. When problems depend on sequential stages of computation, some processors must remain idle while waiting for the results of calculations from other processors, resulting in less efficient performance. The efficient implementation of computational tasks on parallel computers is an active area of research; see also parallel computing.
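
The cost of such sequential stages is commonly quantified by Amdahl's law (the standard formulation, not stated in the original article): if a fraction p of the work can be parallelized and the remainder must run serially, the speedup on N processors is

  S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.

For example, if 95% of a task parallelizes (p = 0.95), even an unlimited number of processors yields at most a twenty-fold speedup, which is why massively parallel machines favour problems with very small serial fractions.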

Through the advances described by Moore's law, single-chip implementations of massively parallel processor arrays are becoming cost-effective and are finding particular application in high-performance embedded systems such as video compression. Examples include chips from Ambric, Coherent Logix, picoChip, and Tilera.

In medicine

In life science and medical diagnostics, massively parallel chemical reactions are used to reduce the time and cost of an analysis or synthesis procedure, often to provide ultra-high throughput. For example, in ultra-high-throughput DNA sequencing, as introduced in August 2005, there may be 500,000 sequencing-by-synthesis operations occurring in parallel.

Example systems

The earliest massively parallel processing systems all used serial computers as individual processing elements, in order to achieve the maximum number of independent units for a given size and cost. Some years ago, many of the most powerful supercomputers were massively parallel processing (MPP) systems. Early examples of such systems are the Distributed Array Processor (DAP), the Goodyear MPP, the Connection Machine, the Ultracomputer, and the machines that came out of the ESPRIT 1085 project (1985), such as the Telmat MegaNode, which used Transputers.

See also

  • Fifth generation computer systems project
  • Massively parallel processor array
  • Multiprocessing
  • Parallel computing
  • Process oriented programming
  • Shared nothing architecture (SN)
  • Symmetric multiprocessing (SMP)