### Techniques for the Design of Algorithms

The first part of this course is concerned with techniques for the design of algorithms: that is, general approaches to the construction of efficient solutions to problems.

Such methods are of interest because:

• They provide templates suited to solving a broad range of diverse problems.
• They can be translated into the common control and data structures provided by most high-level languages.
• The temporal and spatial requirements of the resulting algorithms can be precisely analysed.
Over the next few lectures we shall examine the following paradigms:

• Divide and Conquer.
• Dynamic Programming.
• Greedy Method.
• Backtracking.
Although more than one technique may be applicable to a specific problem, it is often the case that an algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative techniques.

The choice of design paradigm is an important aspect of algorithm synthesis.

## Questions

• How does one calculate the running time of an algorithm?
• How can we compare two different algorithms?
• How do we know if an algorithm is 'optimal'?

One standard approach: count the number of basic operations performed by the algorithm on the worst-case input.

A basic operation could be:

• An assignment.
• A comparison between two variables.
• An arithmetic operation between two variables.

The worst-case input is that input assignment for which the most basic operations are performed.

## Simple Example:

```
n := 5;
loop
  get(m);
  n := n - 1;
until (m = 0 or n = 0);
```
Worst-case: 5 iterations

Usually we are not concerned with the number of steps for a single fixed case but wish to estimate the running time in terms of the 'input size'.

```
get(n);
loop
  get(m);
  n := n - 1;
until (m = 0 or n = 0);
```
Worst-case: n iterations
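The worst case here is an input stream that never supplies m = 0, so the loop runs exactly n times. The loop above can be sketched in Python with an explicit iteration counter (the names are illustrative, not from the notes):

```python
from itertools import repeat

def count_iterations(n, stream):
    """Run the loop on a given input stream and count its iterations."""
    iterations = 0
    while True:
        m = next(stream)        # get(m)
        n -= 1                  # n := n - 1
        iterations += 1
        if m == 0 or n == 0:    # until (m = 0 or n = 0)
            break
    return iterations

# Worst-case input: m is never 0, so the count is n.
print(count_iterations(5, repeat(1)))  # 5
```

A best-case input, by contrast, supplies m = 0 immediately and stops after a single iteration.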

## Examples of 'input size':

Sorting:

n == the number of items to be sorted.

Basic operation: comparison.

Multiplication (of x and y):

n == the number of digits in x plus the number of digits in y.

Basic operation: single-digit arithmetic.

Graph 'searching':

n == the number of nodes in the graph, or the number of edges in the graph.
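To make the sorting case concrete, here is a sketch that counts comparisons, using insertion sort (an algorithm not yet introduced in the notes, chosen only for illustration). On its worst-case input, items in reverse order, it performs n(n-1)/2 comparisons:

```python
def insertion_sort_comparisons(a):
    """Sort a copy of a, counting comparisons between items
    (the basic operation for sorting)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1              # one comparison between two items
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return a, comparisons

# Worst case: reverse order -> n(n-1)/2 comparisons (n = 5 gives 10).
print(insertion_sort_comparisons([5, 4, 3, 2, 1]))  # ([1, 2, 3, 4, 5], 10)
```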

### Counting the Number of Basic Operations

Sequence: if P and Q are two sections of an algorithm, then:

```
Time( P ; Q )  =  Time( P ) + Time( Q )
```

Iteration:

```
while < condition > loop
  P;
end loop;
```
or

```
for i in 1..n loop
  P;
end loop;
```

```
Time  =  Time( P ) * ( Worst-case number of iterations )
```

Conditional:

```
if < condition > then
  P;
else
  Q;
end if;
```

```
Time  =  Time( P )  if < condition > = true
         Time( Q )  if < condition > = false
```
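The three composition rules can be collected into a small sketch, treating times as abstract operation counts (my own function names). For the conditional, we usually cannot predict which branch runs, so taking the more expensive branch gives a safe worst-case bound:

```python
def time_sequence(time_p, time_q):
    """Worst-case bound for: P ; Q"""
    return time_p + time_q

def time_loop(time_p, worst_iterations):
    """Worst-case bound for a loop running P up to worst_iterations times."""
    return time_p * worst_iterations

def time_conditional(time_p, time_q):
    """Worst-case bound for: if c then P else Q end if.
    The larger branch dominates when we cannot predict the condition."""
    return max(time_p, time_q)

# A loop whose body is the sequence (P ; Q), run 10 times in the worst case:
print(time_loop(time_sequence(2, 3), 10))  # 50
```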

We shall consider recursive procedures later in the course.

## Example:

```
for i in 1..n loop
  for j in 1..n loop
    if i < j then
      swop (a(i,j), a(j,i));  -- Basic operation
    end if;
  end loop;
end loop;
```

```
Time  <=  n * n * 1  =  n^2
```
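The bound can be confirmed by instrumenting the loop (a Python sketch, with indices running 0..n-1 rather than 1..n): the exact number of swops is n(n-1)/2, which lies below the n^2 bound the composition rules give.

```python
def count_swops(n):
    """Count the basic operations (swops) performed by the nested loop."""
    a = [[i * n + j for j in range(n)] for i in range(n)]  # sample n x n matrix
    swops = 0
    for i in range(n):
        for j in range(n):
            if i < j:
                a[i][j], a[j][i] = a[j][i], a[i][j]  # swop -- basic operation
                swops += 1
    return swops

# Exact count n(n-1)/2 versus the bound n^2:
for n in (4, 10):
    print(n, count_swops(n), n * (n - 1) // 2, n * n)
```

This shows why the rules give an upper bound rather than an exact count: the conditional guard means the swop is performed on fewer than n^2 of the iterations.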

### Asymptotic Performance

We are usually concerned with the growth in running time as the input size increases. For example:

Suppose P, Q and R are three algorithms with the following worst-case run times:

| n    | P      | Q         | R       |
|------|--------|-----------|---------|
| 1    | 1      | 5         | 100     |
| 10   | 1024   | 500       | 1000    |
| 100  | 2^100  | 50,000    | 10,000  |
| 1000 | 2^1000 | 5 million | 100,000 |

If each is run on a machine that executes one million (10^6) operations per second:

| n    | P               | Q            | R          |
|------|-----------------|--------------|------------|
| 1    | 1 μs            | 5 μs         | 100 μs     |
| 10   | ≈1 millisec     | 0.5 millisec | 1 millisec |
| 100  | ≈4×10^16 years  | 0.05 secs    | 0.01 secs  |
| 1000 | ≈3×10^287 years | 5 secs       | 0.1 secs   |

Thus, the growth of run-time in terms of n (2^n; n^2; n) is more significant than the exact constant factors (1; 5; 100).

### O-notation

Let f: N -> R and g: N -> R. Then f(n) = O(g(n)) means that:

there are values n0 and c such that f(n) <= c * g(n) whenever n >= n0.
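The definition can be checked numerically for particular witnesses n0 and c. A small sketch (the function name is my own; a finite check illustrates, but cannot prove, an asymptotic claim):

```python
def witnesses_big_o(f, g, c, n0, n_max=10_000):
    """Check empirically that f(n) <= c * g(n) for all n0 <= n <= n_max.
    A finite check only: it illustrates, not proves, that f = O(g)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# 10n + 2n^2 = O(n^2): take c = 12 and n0 = 1, since 10n <= 10n^2 for n >= 1.
print(witnesses_big_o(lambda n: 10 * n + 2 * n * n,
                      lambda n: n * n, c=12, n0=1))  # True
```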

Thus we can say that an algorithm has, for example, worst-case run-time O( n^2 ).

## Examples

• There is an algorithm (mergesort) to sort n items which has run-time O( n log n ).
• There is an algorithm to multiply 2 n-digit numbers which has run-time O( n^2 ).
• There is an algorithm to compute the nth Fibonacci number which has run-time O( log n ).
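The notes assert that an O(log n) Fibonacci algorithm exists; one way to realise it is repeated squaring of the matrix [[1,1],[1,0]]. A sketch (counting an arithmetic operation between two numbers as the basic operation, as above; for very large n the numbers themselves grow, which this count ignores):

```python
def fib(n):
    """n-th Fibonacci number via fast exponentiation of [[1, 1], [1, 0]].
    Uses O(log n) matrix multiplications (repeated squaring)."""
    def mat_mul(x, y):
        (a, b), (c, d) = x
        (e, f), (g, h) = y
        return ((a * e + b * g, a * f + b * h),
                (c * e + d * g, c * f + d * h))

    result = ((1, 0), (0, 1))   # identity matrix
    base = ((1, 1), (1, 0))
    while n > 0:                # one round per binary digit of n
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]         # [[F(n+1), F(n)], [F(n), F(n-1)]]

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```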

### Ω-notation

To express the concept of an algorithm taking at least some number of steps, Ω-notation can be used.

Again let f: N -> R and g: N -> R. Then f(n) = Ω(g(n)) means that there are values n0 and c such that f(n) >= c * g(n) whenever n >= n0.

### Θ-notation

Again let f: N -> R and g: N -> R.

If f(n) = O(g(n)) and f(n) = Ω(g(n)), then we write f(n) = Θ(g(n)).

In this case, f(n) is said to be asymptotically equal to g(n).

If f(n) = Θ(g(n)) then algorithms with running times f(n) and g(n) perform similarly, up to constant factors, as n gets larger.

### Manipulating O- and Ω-notation

f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).

If f(n) = O(g(n)) then:

f(n) + g(n) = O(g(n)) ; f(n) + g(n) = Ω(g(n))

f(n) * g(n) = O(g(n)^2) ; f(n) * g(n) = Ω(f(n)^2)

### Examples

Suppose f(n) = 10n and g(n) = 2n^2. Then:

• f(n) = O(g(n))
• 10n + 2n^2 = O(n^2)
• 10n + 2n^2 = Ω(n^2)
• (10n) * (2n^2) = O(n^4)
• (10n) * (2n^2) = Ω(n^2)
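These claims can be spot-checked numerically with concrete constants (finite checks that illustrate rather than prove the asymptotic statements; the witness constants are my own choices):

```python
f = lambda n: 10 * n          # f(n) = 10n
g = lambda n: 2 * n * n       # g(n) = 2n^2

# f(n) = O(g(n)): 10n <= 5 * 2n^2 whenever n >= 1.
assert all(f(n) <= 5 * g(n) for n in range(1, 1000))

# f(n) + g(n) = O(n^2) with c = 12, and = Omega(n^2) with c = 2.
assert all(f(n) + g(n) <= 12 * n * n for n in range(1, 1000))
assert all(f(n) + g(n) >= 2 * n * n for n in range(1, 1000))

# f(n) * g(n) = 20n^3, which is O(n^4) and Omega(n^2).
assert all(f(n) * g(n) <= 20 * n ** 4 for n in range(1, 1000))
assert all(f(n) * g(n) >= n * n for n in range(1, 1000))

print("all checks pass")
```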