Parallel Computer

A parallel computer is a computer architecture in which a large, complex problem is broken into many smaller problems that are then executed simultaneously by several processors. This mode of operation is often called parallel processing.

Structurally, parallel computers are usually characterized as one of three types: pipeline computers, array processors, and multiprocessor systems.

Parallel Computer Features:

  1. Pipeline Computer
  2. Array Processors
  3. Multiprocessor Systems

Pipeline Computer

Pipeline computers achieve parallelism by overlapping the execution of one instruction with the next. If we consider a small program consisting of a set of instructions, the execution of each instruction can be divided into four steps that are repeated continuously until the last instruction of the program finishes.

The four steps mentioned above are instruction fetch (IF), instruction decode (ID), operand fetch (OF), and instruction execution (IE). Each instruction of the program is fetched from main memory, decoded by the processor, its operands are fetched from main memory, and finally the instruction is executed.

Without pipelining, all four steps of one instruction must complete before execution of the next instruction can begin. That is, the next instruction cannot start until the current instruction has finished entirely.

The figure below shows that executing three instructions sequentially takes 12 time units, assuming each step takes one unit.

[Figure: non-pipelined instruction execution in parallel computing]

If we implement pipelining instead, the steps of successive instructions are overlapped. As the figure below shows, four instructions can then be executed in 7 time units.

[Figure: pipelined instruction execution in parallel computing]

A pipeline computer synchronizes the operation of all stages under a common clock. Hence, executing instructions in a pipelined fashion is more efficient.
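The timing comparison above can be sketched as a small calculation. This is a minimal model, assuming each of the four stages takes exactly one time unit; the function names are illustrative, not part of any real pipeline simulator.

```python
# Sketch: cycle counts for sequential vs. pipelined execution,
# assuming each of the four stages (IF, ID, OF, IE) takes one time unit.
STAGES = ["IF", "ID", "OF", "IE"]

def sequential_time(n_instructions, n_stages=len(STAGES)):
    # Without pipelining, every instruction completes all stages
    # before the next instruction starts.
    return n_instructions * n_stages

def pipelined_time(n_instructions, n_stages=len(STAGES)):
    # With pipelining, stages overlap: the first instruction takes
    # n_stages units, and each later instruction finishes one unit later.
    return n_stages + (n_instructions - 1)

print(sequential_time(3))  # 12 time units, as in the first figure
print(pipelined_time(4))   # 7 time units, as in the second figure
```

This makes the advantage concrete: the sequential cost grows as 4n, while the pipelined cost grows as n + 3, so the speedup approaches 4x (the number of stages) for long instruction streams.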

Array Processors

Array processors achieve parallelism by implementing multiple arithmetic logic units (ALUs), called processing elements, that operate in a synchronized, parallel fashion.

By replicating the ALUs and having all of them work in parallel, we achieve spatial parallelism. This makes array processors well suited to processing the elements of arrays.

In the figure above, we can see multiple ALUs connected in parallel to the control unit through a data routing network. Each ALU, i.e., processing element, consists of a processor and a local memory. The pattern in which the processing elements are interconnected depends on the specific computation to be performed by the control unit.

Scalar instructions are executed directly in the control unit, whereas vector instructions are broadcast to the parallel-connected processing elements. The operands are fetched from the local memories, while instruction fetch and decode are handled by the control unit; in this way, vector instructions are executed in a distributed manner.
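The broadcast model described above can be sketched in a few lines. This is an illustrative simulation, not real hardware behavior: the `ProcessingElement` class and its fields are invented for the example, and each PE applies the same broadcast operation to operands held in its own local memory.

```python
# Sketch of SIMD-style execution in an array processor: the control unit
# broadcasts one vector instruction, and every processing element (PE)
# applies it to operands fetched from its own local memory.
class ProcessingElement:
    def __init__(self, a, b):
        self.local_memory = {"a": a, "b": b}  # operands held locally
        self.result = None

    def execute(self, op):
        # Operands are fetched directly from this PE's local memory.
        a, b = self.local_memory["a"], self.local_memory["b"]
        self.result = op(a, b)

# Four PEs, each holding one element of two vectors.
pes = [ProcessingElement(a, b)
       for a, b in zip([1, 2, 3, 4], [10, 20, 30, 40])]

# The control unit fetches and decodes "vector add" once,
# then broadcasts it to all processing elements.
for pe in pes:
    pe.execute(lambda x, y: x + y)

print([pe.result for pe in pes])  # [11, 22, 33, 44]
```

Note the division of labor the example mirrors: decode happens once (the single lambda), while the data-parallel work is distributed across the processing elements.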

However, different array processors may use different kinds of interconnection networks to connect their processing elements. Array processors are somewhat more complex than pipelined processors.

Multiprocessor Systems

A multiprocessor system supports parallel computing by using a set of interacting processors that share resources. In a multiprocessor system, the multiple processors all have access to a common set of memory modules, peripheral devices, and other input-output devices.

The entire system, with all its processors, is controlled by a single operating system, which is responsible for coordinating interaction between the processors. Even though the processors share memory, peripheral devices, and I/O, each processor in the system also has its own local memory and may even have some private devices.
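The split between shared and local memory can be sketched using OS threads as stand-ins for processors. This is a software analogy, not a hardware model: each "processor" keeps a private local sum (local memory) and then communicates its result through a lock-protected shared structure (shared memory).

```python
# Sketch: threads as stand-ins for processors in a multiprocessor system.
# Each thread accumulates in private local state, then publishes its
# result to shared memory under a lock.
import threading

shared = {"total": 0}       # stands in for a shared memory module
lock = threading.Lock()     # coordinates access to the shared resource

def processor(work_items, results):
    local_sum = 0           # local memory: private to this "processor"
    for item in work_items:
        local_sum += item
    with lock:              # communicate via shared memory
        shared["total"] += local_sum
    results.append(local_sum)

results = []
threads = [threading.Thread(target=processor, args=(range(i, i + 3), results))
           for i in range(0, 12, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["total"])  # sum of 0..11 = 66
```

The lock plays the role the operating system plays in a real multiprocessor: serializing access to shared resources so that concurrent updates do not conflict.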

Communication between the processors is achieved either through the shared memories or through an interprocessor interrupt network. The interconnection between the shared memories, I/O devices, and the multiple processors can be organized in three different ways:

  • Time shared common bus
  • Crossbar switch network
  • Multiport memories

We will discuss these three organizations of multiprocessor systems in future content. Using a multiprocessor system improves the throughput, flexibility, availability, and reliability of the system.

So far, we have discussed three structures of parallel computer, all of which assume a centralized computing system where all the hardware and software are implemented in the same computing center.

These three features of parallel computing are not mutually exclusive; a single parallel computer can combine all three. In fact, most parallel computers today use pipelining, array processors, multiprocessor organization, or some combination of these.
