## What is an Algorithm?

An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions. A computer program can be viewed as an elaborate algorithm. In mathematics and computer science, an algorithm usually means a small procedure that solves a recurrent problem.

Algorithms are widely used throughout all areas of IT (information technology). A search engine algorithm, for example, takes search strings of keywords and operators as input, searches its associated database for relevant web pages, and returns results.

In this article, we will discuss eight types of algorithms.

## Qualities of Good Algorithms

A good algorithm should have the following qualities:

- Input and output should be defined precisely.
- Each step in the algorithm should be clear and unambiguous.
- An algorithm should be the most effective among the different ways of solving the problem.
- An algorithm shouldn’t include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.

### Example of Algorithm – Finding Largest Among Three Numbers

```
Step 1: Start
Step 2: Declare variables a, b and c
Step 3: Read variables a, b and c
Step 4: If a > b
           If a > c
              Display a is the largest number
           Else
              Display c is the largest number
        Else
           If b > c
              Display b is the largest number
           Else
              Display c is the largest number
Step 5: Stop
```
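The pseudocode above translates directly into a small runnable function. Here is a Python sketch (the function name is our own):

```python
# Direct translation of the pseudocode: find the largest of three numbers.
def largest_of_three(a, b, c):
    if a > b:
        if a > c:
            return a
        else:
            return c
    else:
        if b > c:
            return b
        else:
            return c

print(largest_of_three(3, 7, 5))  # 7
```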

## Types of Algorithms

Algorithms that use a similar problem-solving approach can be grouped together. This classification scheme is neither exhaustive nor disjoint. The purpose of classifying algorithms is to highlight the various ways in which a problem can be attacked.

Based on their working principles, algorithms can be classified into eight different types.

- Simple recursive algorithms
- Backtracking algorithms
- Divide and Conquer algorithms
- Dynamic Programming algorithms
- Greedy algorithms
- Branch and Bound algorithms
- Brute Force algorithms
- Randomized algorithms

## Simple Recursive Algorithms

In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally be solved by iteration, but this needs to identify and index the smaller instances at programming time. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.

A recursive algorithm is an algorithm that calls itself with “smaller (or simpler)” input values, and which obtains the result for the current input by applying simple operations to the returned value for the smaller (or simpler) input. More generally if a problem can be solved utilizing solutions to smaller versions of the same problem, and the smaller versions reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem. For example, the elements of a recursively defined set, or the value of a recursively defined function can be obtained by a recursive algorithm.

Initial steps of the recursive algorithm correspond to the basic clause of the recursive definition and they identify the basic elements. They are then followed by steps corresponding to the inductive clause, which reduce the computation for an element of one generation to that of elements of the immediately preceding generation.

In general, recursive computer programs require more memory and computation compared with iterative algorithms, but they are simpler and in many cases a natural way of thinking about the problem.

Let us consider the problem of computing the sum of the first n natural numbers. There are several ways of doing this, but the simplest is to add the numbers one by one from 1 to n, so the function is simply f(n) = 1 + 2 + 3 + … + n.

In the recursive approach, this becomes:

```
f(n) = 1, n = 1;
f(n) = n + f(n - 1), n > 1;
```

The difference between the two approaches is that in the second, the function “ f( ) ” is called inside its own definition. This phenomenon is named recursion, and a function containing such a call is called a recursive function. Recursion is a great tool in the hands of programmers, letting them code some problems in a much easier and cleaner way.
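The recursive definition above can be written out directly as code; a minimal Python sketch:

```python
# Recursive sum of the first n natural numbers, following the definition
# f(1) = 1 and f(n) = n + f(n - 1) for n > 1.
def f(n):
    if n == 1:           # base case (basic clause)
        return 1
    return n + f(n - 1)  # inductive clause: reduce to a smaller instance

print(f(5))  # 1 + 2 + 3 + 4 + 5 = 15
```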

## Backtracking Algorithms

Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, such as constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate (“backtracks”) as soon as it determines that the candidate cannot possibly be completed to a valid solution.

The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of *k* queens in the first *k* rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned.

Backtracking can be applied only for problems that admit the concept of a “partial candidate solution” and a relatively quick test of whether it can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute force enumeration of all complete candidates, since it can eliminate many candidates with a single test.

Backtracking is an important tool for solving constraint satisfaction problems, such as crosswords, verbal arithmetic, Sudoku, and many other puzzles. It is often the most convenient (if not the most efficient) technique for parsing, for the knapsack problem and other combinational optimization problems. It is also the basis of the so-called logic programming languages such as Icon, Planner, and Prolog.

Backtracking is an algorithmic technique for solving problems. It uses recursive calls to build a solution step by step, and it discards partial solutions that cannot lead to a full solution under the constraints of the problem.

Backtracking is applied to some specific types of problems, such as:

- Decision problems, where we search for a feasible solution.
- Optimization problems, where we search for the best solution.
- Enumeration problems, where we find the set of all feasible solutions.

In backtracking, the algorithm tries to find a path to the solution through a sequence of checkpoints; if no feasible solution can be reached from a checkpoint, the algorithm backtracks to an earlier one and tries a different path.

Let’s use backtracking to solve the N-Queen problem.

In the N-Queen problem, we are given an N×N chessboard and we have to place n queens on the board in such a way that no two queens attack each other. A queen attacks another queen if the two share a row, column, or diagonal. Here, we will do the 4-Queen problem.

Here, one solution is −

The binary output for the n queen problem, with 1s marking the positions of the queens:

```
{0 , 1 , 0 , 0}
{0 , 0 , 0 , 1}
{1 , 0 , 0 , 0}
{0 , 0 , 1 , 0}
```

To solve the n queens problem, we try placing a queen in the different positions of one row and check whether it clashes with the queens already placed. If it does, we backtrack, move the previously placed queen to its next position, and check again.

Algorithm for solving the n-Queen problem:

Step 1 − Start from the first row.

Step 2 − Place a queen on the board and check. Do,

Step 2.1 − After placing the queen, mark the position as a part of the solution and then recursively check if this will lead to a solution.

Step 2.2 − If placing the queen doesn’t lead to a solution, backtrack, return to Step 2, and try the queen in other columns.

Step 2.3 − If placing the queen leads to a solution, return TRUE.

Step 3 − If all queens are placed, return TRUE.

Step 4 − If all positions are tried and no solution is found, return FALSE.
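The steps above can be sketched as a recursive Python function (names are our own); for the 4-queen instance it reproduces the board shown earlier:

```python
def solve_n_queens(n):
    """Place n queens so that none attack each other. Returns one
    solution as a 0/1 board, or None if no solution exists. Rows are
    filled one at a time; a clash triggers backtracking."""
    board = [[0] * n for _ in range(n)]

    def safe(row, col):
        # Only rows above `row` hold queens, so check upward only.
        for r in range(row):
            c = board[r].index(1)
            if c == col or abs(c - col) == row - r:  # same column or diagonal
                return False
        return True

    def place(row):
        if row == n:
            return True                  # all queens placed
        for col in range(n):
            if safe(row, col):
                board[row][col] = 1      # tentative placement
                if place(row + 1):
                    return True
                board[row][col] = 0      # clash downstream: backtrack
        return False

    return board if place(0) else None

for row in solve_n_queens(4):
    print(row)
```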

## Divide and Conquer algorithms

In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (quicksort, merge-sort), multiplying large numbers (e.g., Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform.

Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations.

This technique can be divided into the following three parts:

- **Divide:** Break the given problem into sub-problems.
- **Conquer:** Solve each sub-problem by calling it recursively until it can be solved directly.
- **Combine:** Combine the solutions of the sub-problems to obtain the solution of the original problem.

The following are some standard algorithms that follow the Divide and Conquer algorithm.

- Binary Search is a searching algorithm. In each step, the algorithm compares the input element x with the value of the middle element in an array. If the values match, return the index of the middle. Otherwise, if x is less than the middle element, then the algorithm recurs for the left side of the middle element, else recurs for the right side of the middle element.
- Quicksort is a sorting algorithm. The algorithm picks a pivot element, rearranges the array elements in such a way that all elements smaller than the picked pivot element move to the left side of a pivot, and all greater elements move to the right side. Finally, the algorithm recursively sorts the subarrays on the left and right of a pivot element.
- Merge Sort is also a sorting algorithm. The algorithm divides the array into two halves, recursively sorts them, and finally merges the two sorted halves.
- The Closest Pair of Points is a problem to find the closest pair of points in a set of points in the x-y plane. The problem can be solved in O(n^2) time by calculating the distances of every pair of points and comparing the distances to find the minimum. The Divide and Conquer algorithm solves the problem in O(N log N) time.
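As an illustration of the divide-conquer-combine pattern, here is a minimal merge sort sketch in Python:

```python
def merge_sort(arr):
    """Divide-and-conquer sort: split, recursively sort, then merge."""
    if len(arr) <= 1:                  # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # divide + conquer
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0            # combine: merge two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```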

## Dynamic Programming algorithms

Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.

In both contexts, it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have an optimal substructure.

If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation.

The dynamic programming approach is similar to divide and conquer in breaking down the problem into smaller and smaller sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.

Dynamic programming is used where we have problems, which can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization. Before solving the in-hand sub-problem, dynamic algorithms will try to examine the results of the previously solved sub-problems. The solutions of sub-problems are combined in order to achieve the best solution.

So we can say that −

- The problem should be divisible into smaller overlapping sub-problems.
- An optimum solution can be achieved by using an optimum solution of smaller sub-problems.
- Dynamic algorithms use Memoization.

The following computer problems can be solved using dynamic programming approach −

- Fibonacci number series
- Knapsack problem
- Tower of Hanoi
- All pair shortest path by Floyd-Warshall
- Shortest path by Dijkstra
- Project scheduling

Dynamic programming can be used in both top-down and bottom-up manner. And of course, most of the time, referring to the previous solution output is cheaper than recomputing in terms of CPU cycles.
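The Fibonacci series from the list above illustrates both styles; a small sketch of a top-down memoized version and a bottom-up tabulated one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)       # memoization: cache sub-problem results
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # overlapping sub-problems are reused

def fib_bottom_up(n):
    # Bottom-up variant of the same recurrence: build from the base cases.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10), fib_bottom_up(10))  # 55 55
```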

## Greedy algorithms

A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not usually produce an optimal solution, but nonetheless, a greedy heuristic may yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.

For example, a greedy strategy for the travelling salesman problem (which is of a high computational complexity) is the following heuristic: “At each step of the journey, visit the nearest unvisited city.” This heuristic does not intend to find a best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids, and give constant-factor approximations to optimization problems with submodular structure.

Most networking algorithms use the greedy approach. Here is a list of a few of them −

- Travelling Salesman Problem
- Prim’s Minimal Spanning Tree Algorithm
- Kruskal’s Minimal Spanning Tree Algorithm
- Dijkstra’s Shortest Path Algorithm
- Graph – Map Coloring
- Graph – Vertex Cover
- Knapsack Problem
- Job Scheduling Problem
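As a small sketch of the greedy approach applied to job scheduling, here is the classic activity-selection heuristic (the function name and toy intervals are our own):

```python
def select_activities(activities):
    """Greedy activity selection: repeatedly pick the activity that
    finishes earliest among those compatible with the picks so far."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # locally optimal, compatible choice
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]))
# [(1, 4), (5, 7), (8, 9)]
```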

## Branch and Bound algorithms

Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores *branches* of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated *bounds* on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.

The algorithm depends on efficient estimation of the lower and upper bounds of regions/branches of the search space. If no bounds are available, the algorithm degenerates to an exhaustive search.

The method was first proposed by Ailsa Land and Alison Doig whilst carrying out research at the London School of Economics sponsored by British Petroleum in 1960 for discrete programming, and has become the most commonly used tool for solving NP-hard optimization problems. The name “branch and bound” first occurred in the work of Little *et al.* on the traveling salesman problem.

Let’s see the Branch and Bound approach to solving the **0/1 Knapsack problem**: the backtracking solution can be optimized if we know a bound on the best possible solution of the subtree rooted at every node. If the best possible solution in a subtree is worse than the current best, we can simply ignore that node and its subtrees. So we compute the bound (best possible solution) for every node and compare it with the current best solution before exploring the node.

For example, the bounds at five nodes of such a search tree might be: going down from A can give $315, from B $275, from C $225, from D $125, and from E $30; branches whose bound is below the current best are pruned.
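A minimal sketch of branch and bound for 0/1 knapsack. The bound function here is an assumption on our part: we use the common fractional-knapsack relaxation as the optimistic estimate, and all names are our own:

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0/1 knapsack (illustrative sketch)."""
    # Consider items in decreasing value/weight ratio so bounds are tight.
    items = sorted(zip(values, weights),
                   key=lambda it: it[0] / it[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill remaining room greedily, allowing fractions.
        for v, w in items[i:]:
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value                       # feasible partial solution
        if i == len(items) or bound(i, value, room) <= best:
            return                             # prune: cannot beat current best
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # take item i
        branch(i + 1, value, room)              # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```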

## Brute Force algorithms

In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem’s statement.

A brute-force algorithm to find the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides *n* without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard, and, for each arrangement, check whether each (queen) piece can attack any other.

While a brute-force search is simple to implement, and will always find a solution if it exists, its cost is proportional to the number of candidate solutions – which in many practical problems tends to grow very quickly as the size of the problem increases (Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than speed.

This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences, or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic.

Brute-force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table – namely, checking all its entries sequentially – is called linear search.

For example, imagine you have a small padlock with 4 digits, each from 0-9. You forgot your combination, but you don’t want to buy another padlock. Since you can’t remember any of the digits, you have to use a brute force method to open the lock.

So you set all the numbers back to 0 and try them one by one: 0001, 0002, 0003, and so on until it opens. In the worst-case scenario, it would take 10^4, or 10,000, tries to find your combination.

A classic example in computer science is the traveling salesman problem (TSP). Suppose a salesman needs to visit 10 cities across the country. How does one determine the order in which those cities should be visited such that the total distance traveled is minimized?

The brute force solution is simply to calculate the total distance for every possible route and then select the shortest one. This is not particularly efficient because it is possible to eliminate many possible routes through clever algorithms.
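A minimal sketch of this brute-force TSP solution, using a toy 4-city distance matrix of our own:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every route starting and ending at city 0 and return the
    length of the shortest complete tour."""
    n = len(dist)
    best = float("inf")
    for route in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + route + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_brute_force(dist))  # 80
```

With 10 cities, this enumerates 9! = 362,880 routes, which shows how quickly the combinatorial explosion sets in.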

## Randomized algorithms

A *randomized algorithm* is an algorithm that employs a degree of randomness as part of its logic. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the “average case” over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both), are random variables.

One has to distinguish between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithm, for example, Quicksort), and algorithms that have a chance of producing an incorrect result (Monte Carlo algorithm, for example, Monte Carlo algorithm for the MFAS problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.

In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior.
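Quicksort with a random pivot, mentioned above as a Las Vegas algorithm, is easy to sketch: the output is always correct, but the running time depends on the random pivot choices.

```python
import random

def randomized_quicksort(arr):
    """Las Vegas algorithm: always returns a correctly sorted list;
    only the running time is a random variable."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                 # random bits guide behavior
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```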