# Algorithm Time Complexity

Given a piece of code, how do you determine its complexity without much effort? Each kind of code construct has a time complexity associated with it, which you need to learn. Next time you see a complex piece of code, break it into individual pieces and count the time complexity of each.

Before you begin, learn the basics of algorithm performance and its complexity.

An algorithm is made up of the following kinds of code, and we are going to learn the time complexity of each of them.

1. Statements
2. If-Then-else
3. Loops
4. Nested loops
5. Functions
6. Statements with functions

## Statements

Each statement in an algorithm takes a constant time to execute. Suppose the algorithm procedure has N statements:

statement 1;

statement 2;

statement 3;

...

statement N;
\begin{aligned}
&Constant \hspace{1mm}time = \mathcal{O}(1)\\ \\
&Total\hspace{1mm} running \hspace{1mm}time = \mathcal{O}(1) + \mathcal{O}(1) + \dots+  \mathcal{O}(1)\\ \\
\end{aligned}

The sum of all the constant times is also a constant.

Therefore, the total running time is also a constant, $\mathcal{O}(1)$.
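As a minimal Python sketch of the idea (the function and its statements are made up for illustration), a fixed sequence of statements costs the same number of steps no matter how large the input is:

```python
def constant_time(n):
    # Three simple statements run once each, regardless of n,
    # so the step count is fixed: O(1).
    a = n + 1      # statement 1
    b = a * 2      # statement 2
    c = b - n      # statement 3
    return c
```

Whether `n` is 5 or 5 million, exactly three statements execute.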

## If-Then-Else

In an if-then-else block, only one branch executes, depending on the condition, but the running time of that branch may differ from the other branches in the if-then-else statement.

if (condition) then
{

block 1;

}
else
{

block 2;

}

Therefore,

\begin{aligned}
Total \hspace{1mm}runtime = Max(time(block \hspace{1mm} 1), time(block \hspace{1mm} 2))
\end{aligned}

For Example,

Total\hspace{1mm} runtime =  Max(\mathcal{O}(1), \mathcal{O}(n))

Depending on the condition, the total runtime of the if-then-else block could be

\mathcal{O}(1) \hspace{1mm}or\hspace{1mm} \mathcal{O}(n)
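A small Python sketch of this situation (the function name and branches are hypothetical): one branch is a single statement, the other scans every item, so the worst case is the slower branch.

```python
def branch_cost(use_fast_path, items):
    # Only one branch executes per call.
    if use_fast_path:
        return 0            # block 1: O(1), a single statement
    total = 0               # block 2: O(n) -- touches every item
    for x in items:
        total += x
    return total
```

The worst-case runtime is Max(O(1), O(n)) = O(n).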

## Loops

The loop runs N times for any given value of N, so the statements inside the loop are also repeated N times.

for j = 1 to N do {

statements;

}

The loop runs N times, and each statement inside the loop takes $\mathcal{O}(1)$. The total runtime is given below.

\begin{aligned}
&Total \hspace{1mm}runtime = N * \mathcal{O}(1)\\
&=  \mathcal{O}(N)
\end{aligned}
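A Python sketch that counts the steps directly (the counter is added purely for illustration):

```python
def loop_steps(n):
    # Count how many times the O(1) loop body executes.
    steps = 0
    for j in range(1, n + 1):   # the loop runs n times
        steps += 1              # one constant-time statement
    return steps
```

For any `n`, the body executes exactly `n` times, giving N * O(1) = O(N).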

## Nested Loop

A nested loop is not much different from a single loop; now that there are two loops, their running times get multiplied.

for i = 1 to N do
{

for j = 1 to M do
{

statements;

}
}

Suppose the outer loop runs $N$ times and the inner loop has a complexity of $\mathcal{O}(M)$.

Total \hspace{1mm}Time\hspace{1mm} Complexity = \mathcal{O}(N * M)

Suppose the inner loop also runs N times; then the total complexity is given by

= \mathcal{O}(N^2)
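Counting the inner body's executions in Python makes the multiplication concrete (a sketch with a counter added for illustration):

```python
def nested_steps(n, m):
    # The inner body runs once for every (i, j) pair: n * m times.
    steps = 0
    for i in range(n):
        for j in range(m):
            steps += 1
    return steps
```

When both loops run `n` times, the count is `n * n`, i.e. O(N^2).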

## Functions

Functions are blocks themselves and contain many other statements. The total time of a function depends on these statements.

Suppose a function $f(N)$ takes $\mathcal{O}(1)$ and there is another function $g(N)$ that has a loop. Then,

Total \hspace{1mm}Time\hspace{1mm} Complexity \hspace{1mm}of \hspace{1mm}g(N) = \mathcal{O}(N)

because of the loop.
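A minimal Python sketch of two such functions (both bodies are made up for illustration):

```python
def f(n):
    # A single statement: O(1).
    return n * 2

def g(n):
    # A loop that runs n times: O(n).
    total = 0
    for i in range(n):
        total += i
    return total
```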

## Statements with Functions

A loop statement can be inside of a function, but it is also possible that a function is inside of a loop.

for j=1 to N do {

g(N);

}

We already know that the running time of the function $g(N)$  is $\mathcal{O}(N)$. Therefore,

\begin{aligned}
&Total \hspace{1mm} Complexity = N * \mathcal{O}(N)\\
&=  \mathcal{O}(N^2)
\end{aligned}

This is because the loop runs N times and therefore repeats the function call N times as well.
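The multiplication can be verified with a step counter in Python (a self-contained sketch; the helper `g` is a hypothetical O(n) function):

```python
def g(n):
    # O(n): performs n constant-time steps and reports the count.
    steps = 0
    for i in range(n):
        steps += 1
    return steps

def loop_calling_g(n):
    # The loop runs n times and each call to g(n) performs n steps,
    # so the total step count is n * n.
    total_steps = 0
    for j in range(n):
        total_steps += g(n)
    return total_steps
```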

# Algorithms Order Of Growth

The Big O notation, the Theta notation and the Omega notation are asymptotic notations that measure the order of growth of algorithms as the magnitude of the input increases.

In the previous article – performance analysis – you learned that an algorithm executes in steps and each step takes a "constant time". You can count the number of steps and then arrive at the total computing time required for the algorithm.

## Background

You also learned that complexity affects the performance of the algorithm. These complexities are space complexity and time complexity.

Complexity describes how an algorithm's resource usage scales, whereas performance accounts for the actual resources (e.g. disk, memory, CPU time) used by the algorithm during execution. Complexity affects performance; performance does not change complexity.

If you set aside fixed space (variables, constants, etc.) and fixed time (such as compile time), the complexity depends on the instance characteristics (operations or steps) of the algorithm, such as input size or magnitude.

## What is a suitable method to analyze the algorithm?

Consider the instance characteristics, that is, the number of operations performed by an algorithm to solve a problem of size n.
You want to analyze three cases.

1. Worst Case Performance
2. Average Case Performance
3. Best Case Performance

Worst Case Performance: given an instance of the problem that requires the maximum number of operations, what time does the algorithm need to solve it? This is the worst-case analysis of the algorithm.

Average Case Performance: the instance of the problem requires the average number of operations to solve. The time to complete those operations is the average-case analysis of the algorithm.

Best Case Performance: the problem is already in a solved state or needs the fewest steps to solve.

While analyzing an algorithm, you should be more concerned about the worst case than the average or best case. It is better to compare two algorithms based on the worst-case scenario.

## Notations for Complexity

Counting the number of operations an algorithm performs gives a function.

For example

an^2 + bn + c

This expression represents an exact step count for the algorithm. You do not need the exact count, because when analyzing the worst case you consider only the highest-order term and drop the lower-order terms; they are insignificant when the input is huge.

Therefore,

\begin{aligned}
&an^2 + bn + c\\\\
&becomes\\\\
&O(n^2)
\end{aligned}

This symbol is known as the Landau symbol, or $Big \hspace{3px} O$ notation, named after the German mathematician Edmund Landau. It identifies the fastest-growing term in the function, called the order (or rate) of growth. That is why the lower-order terms become insignificant and are dropped.
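You can watch the lower-order terms fade numerically with a short Python sketch (the coefficients a = 3, b = 10, c = 50 are made up for illustration):

```python
def step_count(n, a=3, b=10, c=50):
    # Hypothetical exact step count: a*n^2 + b*n + c.
    return a * n * n + b * n + c

# Dividing by n^2 shows the lower-order terms shrinking away:
# the ratio approaches the leading coefficient a as n grows.
ratios = [step_count(n) / (n * n) for n in (10, 100, 10000)]
```

For large `n`, only the $n^2$ term matters, which is why the function is summarized as $O(n^2)$.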

Asymptotic notations such as $Big \hspace{3px} O$ are used to describe the running time of algorithms. There are other notations for describing the running time as well.

Suppose $T(n)$ is the running-time function of the algorithm for input size n. Then the running time is given as

T(n) = O(n^2)

## Formal Definition of Big O

Let $f(n)$ and $g(n)$ be two functions. Then $\mathcal{O}(g(n))$ is an asymptotic upper bound on $f(n)$ for a problem of size n.

For a given function $f(n)$, the set $\mathcal{O}(g(n))$ is defined as

\begin{aligned}
&O(g(n)) = \{f(n) : there \hspace{1mm} exist \hspace{1mm} positive \hspace{1mm} constants \hspace{1mm} c \hspace{1mm} and \hspace{1mm} n_0 \hspace{1mm} such \hspace{1mm} that \\ \\
&0 \leq f(n) \leq cg(n) \hspace{1mm} for \hspace{1mm} all \hspace{1mm} n \geq n_0\}
\end{aligned}
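The definition can be checked mechanically. Here is a Python sketch with an example pair of functions and witness constants ($c = 6$, $n_0 = 2$, both chosen by hand for this particular $f$ and $g$):

```python
def f(n):
    # Example function: f(n) = 3n^2 + 2n + 5.
    return 3 * n * n + 2 * n + 5

def g(n):
    return n * n

# Witness constants, chosen for this f and g.
c, n0 = 6, 2

# Verify 0 <= f(n) <= c*g(n) on a sample of values n >= n0.
bound_holds = all(0 <= f(n) <= c * g(n) for n in range(n0, 1000))
```

Any pair of witnesses satisfying the inequality for all $n \geq n_0$ is enough to show $f(n) \in \mathcal{O}(g(n))$; here it shows $3n^2 + 2n + 5 = \mathcal{O}(n^2)$.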

The graph of the function is given by the following.

## Formal Definition of Theta-Notation

Let $f(n)$ and $g(n)$ be two functions. Then the function $\Theta(g(n))$ gives both an asymptotic upper bound and an asymptotic lower bound, which means that the function $f(n)$ is "sandwiched" between $c_1 g(n)$ and $c_2 g(n)$.

\begin{aligned}
&\Theta(g(n)) = \{f(n) : there \hspace{1mm} exist \hspace{1mm} positive \hspace{1mm} constants \hspace{1mm} c_1, c_2 \hspace{1mm} and \hspace{1mm} n_0 \hspace{1mm} such \hspace{1mm} that \\ \\
&0 \leq c_1g(n) \leq f(n) \leq c_2g(n) \hspace{1mm} for \hspace{1mm} all \hspace{1mm} n \geq n_0\}
\end{aligned}

The graph of Theta notation is given below.

## Formal Definition of Omega-Notation

Let $f(n)$ and $g(n)$ be two functions. Then the function $\Omega(g(n))$ gives an asymptotic lower bound, which means that the function $f(n)$ is at least $cg(n)$.

\begin{aligned}
&\Omega(g(n)) = \{f(n) : there \hspace{1mm} exist \hspace{1mm} positive \hspace{1mm} constants \hspace{1mm} c \hspace{1mm} and \hspace{1mm} n_0 \hspace{1mm} such \hspace{1mm} that \\ \\
&0 \leq cg(n) \leq f(n) \hspace{1mm} for \hspace{1mm} all \hspace{1mm} n \geq n_0\}
\end{aligned}

The graph of Omega notation is given below.

# Algorithms Complexities

Performance analysis of an algorithm is done to understand how efficient it is compared to other algorithms that solve the same computational problem. Choosing efficient algorithms means computer programmers can write better, more efficient programs.

The computer resources in question are memory and CPU time, and performance analysis revolves around these two. Two ways to evaluate an algorithm are listed below.

1. Space requirement
2. Computation time

### Space Complexity

The space requirement is the memory the algorithm needs to solve a computational problem to completion. The program source code has many types of variables, and their memory requirements differ. So you can divide the space requirement into two parts.

Fixed Variables

The fixed part of the program consists of the instructions, simple variables, and constants that do not need much memory and do not change during execution. They are independent of the characteristics of an instance of the computational problem.

Dynamic Variables

These variables depend on the input size; pointers that refer to other variables dynamically and stack space for recursion are some examples. This type of memory requirement changes with the instance of the problem and depends on the instance characteristics. It is given by the following equation.

\begin{aligned}&S(P) = c + S_P \\
&where \\
&S(P) \hspace{1mm} is \hspace{1mm} the \hspace{1mm}space\hspace{1mm} requirement\\
&c \hspace{1mm}is\hspace{1mm} a \hspace{1mm}constant \\
&S_P\hspace{1mm} is\hspace{1mm} the \hspace{1mm}instance \hspace{1mm}characteristics
\end{aligned}

Instance characteristics cannot be determined until an instance of the problem is running; they relate to dynamic memory space. The input size usually controls the instance of a computational problem.

### Time Complexity

Time complexity is the amount of time required to run the program to completion. It is given by the following.

\begin{aligned}&T(P) \hspace{1mm} = compile \hspace{1mm}time\hspace{1mm} +\hspace{1mm} run-time\\
&P\hspace{1mm} is \hspace{1mm}the \hspace{1mm}program.\\
&T(P) \hspace{1mm}is \hspace{1mm}the \hspace{1mm}time\hspace{1mm} complexity \hspace{1mm}of \hspace{1mm}program.\end{aligned}

Program Execution in Steps

The computer executes a program in steps, and each step has a time cost associated with it. This means that a step can be done in a finite amount of time. See the table below.

You can count the number of steps an algorithm performs using this technique.

For example, consider the following.

Note:-

\begin{aligned}&
S/e = steps \hspace{1mm}per \hspace{1mm} execution\hspace{1mm} of\hspace{1mm} a \hspace{1mm}statement.\\
&Frequency = The\hspace{1mm} number\hspace{1mm} of\hspace{1mm} times \hspace{1mm}a \hspace{1mm}statement \hspace{1mm}is \hspace{1mm}executed.
\end{aligned}

The result of the step count is $2n + 3$, which is a linear function. Performance analysis begins by counting steps, but counting the steps is not the goal in itself. The goal is to find a function that describes the algorithm in terms of its input size. This function tells you how the algorithm will perform for different input sizes and magnitudes.
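The $2n + 3$ tally can be reproduced in Python. The snippet below is a hypothetical example (summing a list), chosen because its step count follows the same pattern: one initialisation step, $n + 1$ loop tests, $n$ body steps, and one return step.

```python
def sum_with_count(a):
    # Returns (sum, step count) for a classic 2n + 3 tally.
    steps = 0
    total = 0
    steps += 1                 # initialisation: 1 step
    for x in a:
        total += x
        steps += 1             # loop body: runs n times
    steps += len(a) + 1        # loop test: runs n + 1 times
    steps += 1                 # return: 1 step
    return total, steps
```

For a list of 4 elements the count is 1 + 4 + 5 + 1 = 11 = 2(4) + 3.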

# Algorithms Pseudo Code

In this article, you will learn how to represent an algorithm using pseudo code and the elements of pseudo code. Learning a programming language is not necessary to understand pseudo code, but knowing a language like C, Pascal, etc. helps you understand pseudo code better.

## Algorithms Components

An algorithm has two parts – the heading and the body. See the table below for more information.

### Example:

Algorithm Find_Max( A[n])
{

// A[n] is the list of unsorted numbers from which
// we need to find the Max value.
max := A[1];
for i := 2 to n do
{
if A[i] > max then
{
max := A[i];
}
}
}

The above is a sample algorithm to find the max value in a list of numbers. The algorithm is written in pseudo code and contains many elements, each with its own notation.
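Here is a runnable Python version of the same idea, assuming a non-empty list:

```python
def find_max(a):
    # Start from the first element rather than 0 so that
    # lists containing only negative values are handled too.
    max_value = a[0]
    for x in a[1:]:
        if x > max_value:
            max_value = x
    return max_value
```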

## Notations

See the table below to understand each notation because you need them for writing algorithms.

# Algorithms

A computer program is a set of clear instructions, written in a programming language, that solves some problem.

You can write this program in different programming languages, but the solution to the problem remains the same irrespective of the language. Therefore, an algorithm is a language-independent solution to a computer-based problem.

This tutorial is meant for beginners who are new to algorithms. Some experience with a programming language is sufficient to start learning algorithms, but here is some more information about the prerequisites.

• You must be familiar with basic mathematical concepts such as exponents, set theory, mathematical induction, trees, graphs, relations, limits and so on.
• Some knowledge of programming, such as C/C++ or Java, is recommended.

You can visit our programming tutorials to learn programming concepts to get comfortable with algorithms.

## Algorithms Tutorial Topics

Here is a list of topics for algorithms. Read from top (easy) to bottom (difficult).

## Recommended Books

Beginners often find algorithms difficult to learn. Algorithms are purely logical and need clear thinking. Once you understand them, you will be able to categorize algorithms and recognize which one is suitable for your programming projects.

Apart from what you learn in this tutorial, we recommend some books in this section which will help you understand algorithms in the best possible way. We have picked the best books for you.

# Algorithm Introduction

A computer program is a set of basic instructions to the computer hardware. The hardware executes the instructions to carry out some task. The program is written using a blueprint called an algorithm. Each step in an algorithm has a clear meaning and performs a specific task.

An algorithm is any well-defined computational procedure that transforms one or more input values into one or more output values. Algorithms solve computational problems in a step-by-step manner.

For example, given a set of unsorted numbers as input, a sorting algorithm can sort the numbers in ascending or descending order as output.

\begin{aligned}
&Unsorted \hspace{1mm}set\\
&\{ 2, 3, 5, 1, 8, 4, 7, 6\} \\ \\
&Sorted \hspace{1mm}list \hspace{1mm}solved \hspace{1mm}by \hspace{1mm}sorting \hspace{1mm}algorithm\\
&\{1, 2, 3, 4, 5, 6, 7, 8 \}
\end{aligned}
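As a concrete sketch, here is one classic sorting algorithm, insertion sort (any correct sorting algorithm would produce the same output for this input):

```python
def insertion_sort(items):
    # Build the sorted result by inserting each element
    # into its correct position among the already-sorted prefix.
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]   # shift larger values right
            j -= 1
        result[j + 1] = key
    return result
```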

### Characteristics of an Algorithm

Algorithms have some common characteristics that separate them from computer programs. A computer program implements the algorithm to solve a computer-based problem, but the reverse is not true.

• Inputs
• Output
• Definiteness and Unambiguous
• Finiteness
• Effectiveness

Input

An algorithm needs zero or more input values to compute the outputs. The size of the input values can be different for each instance of a problem solved by the algorithm.

Certain algorithms require constraints on the input values. For example, an algorithm may accept only positive integer values; negative values are not allowed.

Output

An algorithm processes the inputs and produces the desired output. A correct algorithm will always produce correct outputs for a computational problem.

Sometimes the output is a single value and sometimes it is an array of quantities.

Definiteness and Unambiguous

The algorithm must perform definite operations or tasks that result in intermediate or final outputs. All procedures must contain unambiguous operations. If not, the algorithm may not terminate or may produce the wrong output, violating the finiteness characteristic or becoming an incorrect algorithm.

Finiteness

Finiteness means "having limits". Every algorithm must end after a finite number of steps. Finiteness alone is not enough, though: an algorithm that takes an excessive number of steps to solve a small problem would not be called efficient.

An algorithm that terminates after reasonably finite steps maintains the finiteness characteristic.

Effectiveness

The algorithm must perform basic operations that could, in principle, be done with pen and paper. This is called the effectiveness of the algorithm.

### How to Represent an Algorithm?

How you represent an algorithm depends on the context. When you are designing a new algorithm, writing the steps in plain English is suitable. When you are about to write a program, a flowchart is the best way to represent the algorithm.

Three ways to represent an algorithm are

1. Natural language (e.g. English)
2. Flowcharts
3. Pseudo code (e.g. C, C++)

Algorithms in this tutorial are written in pseudo code. Pseudo code is inspired by programming languages like C, but it is not executable code; it is merely a representation of the algorithm.