What Is Parallel Processing?
Parallel processing is a class of techniques that enables a computer system to carry out multiple data-processing tasks simultaneously in order to increase its computational speed.
A parallel processing system can carry out simultaneous data-processing to achieve faster execution time. For instance, while an instruction is being processed in the ALU component of the CPU, the next instruction can be read from memory.
The primary purpose of parallel processing is to enhance the computer processing capability and increase its throughput, i.e. the amount of processing that can be accomplished during a given interval of time.
Why Is Parallel Processing Required?
A CPU is a microprocessor — a computing engine on a chip. While modern microprocessors are small, they are also extremely powerful, executing many millions of instructions per second. Even so, some computational problems are so complex that even a powerful microprocessor would require years to solve them.
Computer scientists use different approaches to address this problem. One potential approach is to push for more powerful microprocessors. Usually, this means finding ways to fit more transistors on a microprocessor chip. Computer engineers are already building microprocessors with transistors that are only a few dozen nanometers wide.
Building more powerful microprocessors requires an intense and expensive production process. Some computational problems take years to solve even with the benefit of a more powerful microprocessor. Partly because of these factors, computer scientists sometimes use a different approach: parallel processing.
In general, parallel processing means that at least two microprocessors handle parts of an overall task. The concept is simple: a computer scientist divides a complex problem into component parts using software specifically designed for the task, then assigns each component part to a dedicated processor. Each processor solves its part of the overall computational problem, and the software reassembles the partial results into the solution of the original complex problem.
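The divide/assign/reassemble workflow described above can be sketched with Python's standard-library `multiprocessing` module. The problem chosen here (summing a large list), the function names, and the worker count are illustrative assumptions, not part of the original text.

```python
# A minimal sketch of divide/assign/reassemble parallel processing.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process solves its part of the overall problem.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the complex problem into component parts...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...assign each part to a dedicated processor...
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # ...then reassemble the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

Note that the split/merge steps add overhead, so this pays off only when each chunk carries enough work to amortize the cost of starting processes and moving data.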
What Are Different Parallel Processing Approaches?
There are multiple types of parallel processing; the three most commonly used are:
- Single Instruction Multiple Data (SIMD)
- Multiple Instruction Multiple Data (MIMD)
- Multiple Instruction Single Data (MISD)
Single Instruction Single Data (SISD)
SISD stands for ‘Single Instruction and Single Data Stream’. Although not itself a parallel organization, it is the sequential baseline against which the parallel categories above are defined. It represents the organization of a single computer containing a control unit, a processor unit, and a memory unit.
Instructions are executed sequentially, and the system may or may not have internal parallel processing capabilities.
Most conventional computers have SISD architecture, which is also called the von Neumann architecture.
Single Instruction Multiple Data (SIMD)
SIMD processing, in which a single instruction is applied to multiple data elements, is well suited to multimedia processing and is therefore implemented in contemporary processors.
Single instruction multiple data (SIMD), as the name suggests, takes an operation specified in one instruction and applies it to more than one set of data elements at the same time. For example, in a traditional scalar microprocessor, an add operation would add together a single pair of operands and produce a single result. In SIMD processing, a number of independent operand pairs are added together to produce the same number of independent sums. The following figure illustrates traditional and SIMD processing.
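The scalar-versus-SIMD contrast can be modeled in plain Python. Real SIMD happens in hardware vector registers; this sketch only illustrates the idea of one notional instruction producing many independent results at once, with made-up function names.

```python
# Conceptual sketch: scalar vs. SIMD-style addition.

def scalar_adds(xs, ys):
    # Traditional scalar processing: one add instruction per operand
    # pair, producing one result at a time.
    results = []
    for x, y in zip(xs, ys):
        results.append(x + y)   # one instruction, one pair, one sum
    return results

def simd_add(xs, ys):
    # SIMD-style: one (notional) vector-add "instruction" operates on
    # all operand pairs (lanes) together, producing N independent sums.
    return [x + y for x, y in zip(xs, ys)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

In practice, Python programmers reach SIMD hardware indirectly through vectorized array libraries, whose element-wise operations compile down to vector instructions.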
Multiple Instruction Multiple Data (MIMD)
Multiple Instruction Multiple Data refers to a parallel architecture that is probably the most familiar type of parallel processor. Its key objective is to achieve parallelism.
MIMD architecture includes a set of N individual processors. Each processor is coupled with memory, which may either be shared by all processors or be private to that processor and not directly accessible by the others.
MIMD architecture includes processors that operate independently and asynchronously. Various processors may be carrying out various instructions at any time on various pieces of data.
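This independent, asynchronous execution can be sketched with Python processes: each worker runs a different instruction stream on different data at the same time. The two example tasks and their names are illustrative assumptions.

```python
# MIMD sketch: different instructions, different data, concurrently.
from concurrent.futures import ProcessPoolExecutor

def count_words(text):
    # One "instruction stream": text processing.
    return len(text.split())

def sum_numbers(numbers):
    # A completely different "instruction stream": arithmetic.
    return sum(numbers)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Each processor carries out its own program on its own data,
        # asynchronously with respect to the other.
        f1 = pool.submit(count_words, "parallel processing in action")
        f2 = pool.submit(sum_numbers, [1, 2, 3, 4])
        print(f1.result(), f2.result())  # 4 10
```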
There are two types of MIMD architecture:
- Shared Memory MIMD architecture
- Distributed Memory MIMD architecture
Shared Memory MIMD architecture has the following characteristics:
- Consists of a group of processors and memory modules.
- Any processor is able to directly access any memory module by means of an interconnection network.
- The group of memory modules forms a global address space that is shared among the processors.
A key benefit of this architecture is that it is quick to program, since there are no explicit communications among processors; all communication is handled through the global memory store.
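Communication through a shared memory store can be sketched with `multiprocessing`'s shared `Value`: the workers read and write one common memory word instead of exchanging messages. The counter example and names are illustrative assumptions.

```python
# Shared-memory MIMD sketch: processors communicate via common memory.
from multiprocessing import Process, Value, Lock

def add_to_total(total, lock, amount):
    # Every processor can directly access the shared memory module;
    # the lock serializes updates so increments are not lost.
    with lock:
        total.value += amount

if __name__ == "__main__":
    total = Value("i", 0)      # one word in the shared address space
    lock = Lock()
    workers = [Process(target=add_to_total, args=(total, lock, n))
               for n in (1, 2, 3, 4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(total.value)         # 10
```

The lock hints at the flip side of shared memory: with no explicit messages, processors must instead coordinate their accesses to the common store.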
Distributed Memory MIMD architecture has the following characteristics:
- Replicates processor/memory pairs, each known as a processing element (PE), and links them by using an interconnection network.
- Each PE can communicate with others by sending messages.
By giving every processor its own memory, the distributed memory architecture avoids the downsides of the shared memory architecture. A processor may only access the memory that is directly connected to it.
In case a processor requires data that resides in the remote processor memory, then the processor should send a message to the remote processor, requesting the required data.
Access to local memory is much faster than access to data on a remote processor. Furthermore, the greater the physical distance to the remote processor, the longer the access to remote data takes.
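The request/reply pattern just described can be sketched with a `multiprocessing` Pipe: each process owns private data, and a remote value is obtained only by sending a request message and waiting for the answer. The dictionary "memory" and the names are illustrative assumptions.

```python
# Distributed-memory sketch: data in a remote PE's memory must be
# requested by message, not read directly.
from multiprocessing import Process, Pipe

def remote_pe(conn, local_memory):
    # This PE serves requests for data that lives in *its* own memory.
    key = conn.recv()              # receive a request message
    conn.send(local_memory[key])   # reply with the requested data
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    pe = Process(target=remote_pe, args=(child, {"x": 42}))
    pe.start()
    parent.send("x")               # request data held by the remote PE
    print(parent.recv())           # 42 -- costlier than a local access
    pe.join()
```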
Multiple Instruction Single Data (MISD)
A Multiple Instruction Single Data computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set.
The system performs different operations on the same data set. Machines built using the MISD model are not useful in most applications; a few have been built, but none of them are available commercially.
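The MISD idea — several instruction streams, one data stream — can be modeled conceptually with threads, where different functions all operate on the same shared data set. True MISD hardware is rare; this sketch and its function names are illustrative assumptions, not a description of any real machine.

```python
# MISD sketch: multiple "instruction streams" (different operations)
# applied concurrently to one and the same data stream.
from concurrent.futures import ThreadPoolExecutor

def minimum(data):
    return min(data)

def maximum(data):
    return max(data)

def total(data):
    return sum(data)

data = [3, 1, 4, 1, 5]             # the single shared data stream
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(op, data) for op in (minimum, maximum, total)]
    print([f.result() for f in futures])  # [1, 5, 14]
```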