
1. Algorithm

An algorithm is a finite set of steps required to solve a problem.
An algorithm must have the following properties:
1. Input:
An algorithm has zero or more quantities as input, which are externally
supplied.
2. Output:
An algorithm must produce one or more outputs after processing its set of
statements.
3. Definiteness:
Each instruction must be clear and unambiguous.
4. Finiteness:
The algorithm must terminate after a finite number of steps.
5. Effectiveness:
Each operation must be definite, and it must also be feasible.
Example of an algorithm (linear search)
1. Start
2. Accept the size of the array from the user (Read n)
3. Accept the array elements from the user (Read A[0..n-1])
4. Accept the element to be searched from the user (Read value)
5. Set i = 0, flag = 0
6. Compare A[i] with value:
   If (A[i] equals value)
       Set flag = 1 and go to step 8
   Else
       Move to the next element: i = i + 1
7. If (i < n) go to step 6
8. If (flag = 1) then
       Print "found" and return i as the position
   Else
       Print "not found"
9. Stop
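The steps above can be sketched in Python as follows (a minimal illustration of the same linear search, not part of the original notes):

```python
def linear_search(a, value):
    """Return the index of value in list a, or -1 if it is absent."""
    for i, item in enumerate(a):
        if item == value:
            return i          # found: report the position (flag = 1 case)
    return -1                 # scanned the whole array without success

# Example usage
data = [7, 3, 9, 1, 5]
print(linear_search(data, 9))   # -> 2
print(linear_search(data, 4))   # -> -1
```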
Algorithm Design Approaches

Top-Down Approach

Bottom-Up Approach

Top-Down Approach:
A top-down approach starts by identifying the major
components of a system or program, decomposing them
into their lower-level components, and iterating until
the desired level of module complexity is achieved.
We start with the topmost module and incrementally
add the modules that it calls.
It takes the form of a stepwise refinement procedure.
The solution is divided into subtasks, and each subtask
is further divided into smaller subtasks.
The subtasks are then combined into a single solution.
Bottom-Up Approach:

It is the inverse of the top-down method.
A bottom-up approach starts by designing the most
basic or primitive components and proceeds to
higher-level components.
Starting from the very bottom, operations that
provide a layer of abstraction are implemented.
The programmer may write code to perform the basic
operations, then combine those into modules, which
are finally combined to form the overall system
structure.
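As a small sketch of the bottom-up style (all names here are hypothetical, chosen only for illustration), primitive operations are written first and then composed into a higher-level module and an overall "system":

```python
# Bottom-up sketch: primitive operations first...
def total(marks):
    """Primitive operation: sum of a list of marks."""
    return sum(marks)

def average(marks):
    """Primitive operation: mean of a list of marks."""
    return total(marks) / len(marks)

# ...then a module built from the primitives...
def student_report(name, marks):
    """Module: combine the primitives into one report line."""
    return f"{name}: total={total(marks)}, average={average(marks):.1f}"

# ...and finally the overall system built from the module.
def class_report(students):
    return [student_report(name, marks) for name, marks in students]

print(class_report([("Asha", [80, 90]), ("Ravi", [70, 75])]))
```

A top-down design of the same program would move in the opposite direction: write `class_report` first with stub calls, then fill in `student_report`, `total`, and `average`.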
Time complexity

The time complexity of a program/algorithm is the amount
of computer time that it needs to run to completion.
While calculating time complexity, we develop a
frequency count for the key statements, i.e. the
statements that matter most to the running time.

Consider the three algorithms given below:

Algorithm A:  a = a + 1

Algorithm B:  for x = 1 to n step 1
                  a = a + 1
              loop

Algorithm C:  for x = 1 to n step 1
                  for y = 1 to n step 1
                      a = a + 1
                  loop
              loop


The frequency count for algorithm A is 1, as the statement
a = a + 1 executes only once.
The frequency count for algorithm B is n, as the key
statement a = a + 1 executes n times because the loop
runs n times.
The frequency count for algorithm C is n², as the key
statement a = a + 1 executes n² times: the inner loop
runs n times each time the outer loop runs, and the
outer loop itself runs n times.
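These frequency counts can be checked empirically; the sketch below (not from the original notes) counts how many times the key statement a = a + 1 executes in each algorithm:

```python
def algorithm_a():
    a = 0
    a += 1
    return 1                      # key statement ran exactly once

def algorithm_b(n):
    a, count = 0, 0
    for x in range(n):            # loop runs n times
        a += 1
        count += 1
    return count                  # key statement ran n times

def algorithm_c(n):
    a, count = 0, 0
    for x in range(n):            # outer loop runs n times
        for y in range(n):        # inner loop runs n times per outer pass
            a += 1
            count += 1
    return count                  # key statement ran n * n times

n = 10
print(algorithm_a(), algorithm_b(n), algorithm_c(n))   # -> 1 10 100
```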
Space complexity

The space complexity of a program/algorithm is the amount of memory that
it needs to run to completion.
The space needed by a program is the sum of the following components:

Fixed space requirements:

Fixed space does not depend on the characteristics of the
inputs and outputs.
Fixed space consists of space for simple variables,
fixed-size variables, etc.
Variable space requirements:

Variable space includes the space needed by variables whose
size depends on the particular problem being solved,
referenced variables, and the stack space required for
recursion on a particular instance.
e.g. The additional stack space required when a function uses
recursion.
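A simple sketch (not from the original notes) of how recursion contributes to variable space: summing 1..n recursively keeps n stack frames pending, while the iterative version uses a fixed number of variables:

```python
# Recursive sum: each pending call occupies a stack frame,
# so the variable space grows linearly with n (O(n) stack space).
def recursive_sum(n):
    if n == 0:
        return 0
    return n + recursive_sum(n - 1)

# Iterative sum: a fixed number of variables regardless of n,
# so the variable space is constant (O(1)).
def iterative_sum(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(recursive_sum(100), iterative_sum(100))   # -> 5050 5050
```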
Algorithm analysis:

There are different ways of solving a problem, and
different algorithms can be designed to solve the
same problem.
There is a difference between a problem and an algorithm:
a problem has a single problem statement that
describes it in general terms.
However, there are different ways to solve a problem,
and some solutions may be more
efficient than others.

There are different types of time complexities
which can be analyzed for an algorithm:
– Best Case Time Complexity
– Worst Case Time Complexity
– Average Case Time Complexity
Best Case Time Complexity:

It is a measure of the minimum time that the algorithm will
require for an input of size n.
The running time of many algorithms varies not only for
inputs of different sizes but also for inputs of the same size.
For example, the running time of some sorting algorithms
depends on the ordering of the input data. Therefore, if the
input data of n items is presented in already sorted order,
the operations performed by the algorithm will take the
least time.
Worst Case Time Complexity:

It is a measure of the maximum time that the algorithm will
require for an input of size n.
For example, if the n input data items are supplied in
reverse order to a simple sorting algorithm, the algorithm
will require about n² operations to perform the sort, which
corresponds to the worst case time complexity of the
algorithm.
Average Case Time Complexity:

The time that an algorithm requires to execute on typical
input data of size n is known as its average case time
complexity.
We can say that the value obtained by averaging the
running time of an algorithm over all possible inputs of
size n determines the average case time complexity.
Big O notation
• Big O notation is used in Computer Science to
describe the performance or complexity of an
algorithm.
• Big O specifically describes the worst-case
scenario, and can be used to describe the execution
time required or the space used (e.g. in memory or
on disk) by an algorithm.
• O(1)
– O(1) describes an algorithm that will always execute in the same time (or space)
regardless of the size of the input data set.
• E.g. Push and pop operations on a stack.
• O(N)
– O(N) describes an algorithm whose performance grows linearly and in
direct proportion to the size of the input data set.
– Big O notation always assumes the upper limit.
• E.g. Linear search on unsorted data.
• O(N²)
– O(N²) represents an algorithm whose performance is directly proportional to the
square of the size of the input data set.
– This is common with algorithms that involve nested iterations over the data set.
• E.g. Comparing two two-dimensional arrays of size n × n.
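The three growth classes above can be illustrated with small sketches (hypothetical helper names, for illustration only):

```python
# O(1): constant time - e.g. reading the top of a stack.
def peek_top(stack):
    return stack[-1]            # one operation, regardless of stack size

# O(N): linear time - scan every element once (linear search).
def contains(items, value):
    for item in items:          # up to len(items) comparisons
        if item == value:
            return True
    return False

# O(N^2): quadratic time - nested iteration, e.g. comparing two n x n arrays.
def arrays_equal(a, b):
    n = len(a)
    for i in range(n):          # n rows...
        for j in range(n):      # ...times n columns = n * n comparisons
            if a[i][j] != b[i][j]:
                return False
    return True

m = [[1, 2], [3, 4]]
print(peek_top([5, 6, 7]), contains([5, 6, 7], 6), arrays_equal(m, m))
```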
O(log N): Logarithmic Time
The iterative halving of the data set, as in binary search,
produces a growth curve that rises steeply at the beginning and slowly flattens
out as the size of the data set increases.
E.g. Binary search:
if an input data set containing 10 items takes one second to
complete, a data set containing 100 items takes two seconds,
and a data set containing 1000 items will take three seconds.
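A minimal binary search sketch in Python (not from the original notes) showing the halving that produces the O(log N) behaviour:

```python
def binary_search(sorted_items, value):
    """Return the index of value in sorted_items, or -1 if absent.

    Each comparison halves the remaining search range, so at most
    about log2(n) iterations are needed: O(log N) time."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == value:
            return mid
        elif sorted_items[mid] < value:
            low = mid + 1       # discard the lower half
        else:
            high = mid - 1      # discard the upper half
    return -1

data = [1, 3, 5, 7, 9, 11]
print(binary_search(data, 7))    # -> 3
print(binary_search(data, 4))    # -> -1
```

Note that binary search requires the input data to be sorted, unlike the O(N) linear search.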
O(N log N): Linearithmic Time
– E.g. More advanced sorting algorithms: quick sort, merge sort.