
ADA Lab Manual

Ahmedabad Institute of Technology
Computer Engineering Department
Analysis and Design of Algorithms (3150703)
Laboratory Manual
Year: 2022 - 2023
NAME
ENROLLMENT NUMBER
BATCH
YEAR
SUBJECT COORDINATOR
Prof. Darshana Patel (CE)
SUBJECT MEMBER
Prof. Namrata Gohel (CE)
DEPARTMENT OF COMPUTER ENGINEERING
VISION
To create competent professionals in the field of Computer Engineering and
promote research with a motive to serve as a valuable resource for the IT industry
and society.
MISSION
1. To produce technically competent and ethically sound Computer Engineering
professionals by imparting quality education, training, hands on experience and
value based education.
2. To inculcate ethical attitude, sense of responsibility towards society and leadership
ability required for a responsible professional computer engineer.
3. To pursue creative research, adapt to rapidly changing technologies and promote
self-learning approaches in Computer Engineering and across disciplines to serve the
dynamic needs of industry, government and society.
Program Educational Objectives (PEO):
PEO1: To provide the fundamentals of science, mathematics, electronics and computer
science and engineering and skills necessary for a successful IT professional.
PEO2: To provide scope to learn, apply skills, techniques and competency to use modern
engineering tools to solve computational problems.
PEO3: To enable young graduates to adapt to the challenges of evolving career opportunities
in their chosen fields of career including higher studies, research avenues, entrepreneurial
activities etc.
PEO4: To inculcate life-long learning aptitude, leadership qualities and teamwork ability
with sense of ethics for a successful professional career in their chosen field.
PROGRAM OUTCOMES (POs)
Engineering Graduates will be able to:
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with
appropriate consideration for the public health and safety, and the cultural, societal, and
environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give
and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.
ANALYSIS AND DESIGN OF ALGORITHMS PRACTICAL BOOK
DEPARTMENT OF COMPUTER ENGINEERING
PREFACE
It gives us immense pleasure to present the first edition of Analysis and Design of
Algorithms Practical Book for the B.E. 3rd year students of Ahmedabad Institute of
Technology.
This manual is intended for use in an introductory Analysis and Design of Algorithms course. The
laboratory programs are written in C and are typically run on a Linux system. Linux is a stable,
multi-user, multi-tasking operating system for servers, desktops and laptops; it belongs to the UNIX
family of systems, whose development began in the late 1960s, while the Linux kernel itself was first
released in 1991 and has been under constant development ever since. By operating system, we mean the
suite of programs which make the computer work. Linux systems also have a graphical user interface
(GUI), similar to Microsoft Windows, which provides an easy-to-use environment. However, knowledge of
the command line is required for operations which are not covered by a graphical program, or when no
graphical interface is available, for example in a remote terminal session. Therefore, it is imperative
that students in this laboratory course are introduced both to the theory presented in the classroom
and to the practice of implementing and analysing algorithms in the laboratory.
In addition, the effort put into the laboratory experiments will ultimately reward the
student with a better understanding of the concepts presented in the classroom.
The student is required to keep a laboratory manual in which the observations are recorded and the
answers to the questions are written. The lab write-ups form a permanent record of your work.
Lab Manual Revised by: Prof. Darshana Patel, Ahmedabad Institute of Technology
INSTRUCTIONS TO STUDENTS
1. Be punctual in arriving to the laboratory with your lab manual and always wear your ID
card.
2. It is mandatory to bring lab manual in every practical session.
3. Students have to maintain the discipline in lab and should not create any unnecessary
chaos.
4. Students are supposed to occupy the systems allotted to them and are not supposed to
talk or make noise in the lab.
5. Students are required to carry their observation book and lab records with completed
exercises while entering the lab.
6. Lab records need to be submitted every week.
7. Students are not supposed to use pen drives in the lab.
8. All the observations have to be neatly recorded in the Analysis and Design of Algorithms Practical
Book and verified by the instructor before leaving the laboratory.
9. The answers to all the questions mentioned under the section ‘Post Practical Questions’ at the end
of each experiment must be written in the practical book.
Ahmedabad Institute of Technology
Computer Engineering Department
CERTIFICATE
This is to certify that Mr. / Ms._________________________________ Of
Enrolment No ___________________________has Satisfactorily completed
the course in ____________________________________ as prescribed by the Gujarat
Technological University for ____ Year (B.E.) semester___ of Computer
Engineering in the Academic year ______.
Date of Submission:-
Faculty Name :
Signature
Dr. Dushyantsinh Rathod
(Professor &HOD, CE)
INDEX

Sr. No.  Experiment                                                               Page No. (From - To)   Date   Marks   Signature

1.  Implement the following sorting algorithms and compare their running times:
    A. Insertion Sort   B. Bubble Sort   C. Selection Sort   D. Merge Sort   E. Quick Sort
2.  Implementation and time analysis of linear and binary search.
3.  Implementation of max-heap sort algorithm.
4.  Implementation and time analysis of factorial program using iterative and recursive methods.
5.  Implementation of 0/1 knapsack problem using dynamic method.
6.  Implementation of making change problem using dynamic programming.
7.  Implementation of knapsack problem using greedy method.
8.  Implementation of Prim's algorithm.
9.  Implementation of Kruskal's algorithm.
10. Implement LCS problem.
EXPERIMENT NO: 1
TITLE:
DATE:    /    /
Implement the following sorting algorithm & compare its running time
A. Insertion Sort
B. Bubble sort
C. Selection Sort
D. Merge Sort
E. Quick Sort
OBJECTIVES: On completion of this experiment students will be able to…
● Find the best, worst and average case complexity of an algorithm.
● Find the running time of different algorithms.
THEORY:
How to measure time taken by a function in C?
To calculate the time taken by a piece of code, we can use the clock() function, which is available in
time.h. We call clock() at the beginning and at the end of the code whose time we want to measure,
subtract the two values, and then divide by CLOCKS_PER_SEC (the number of clock ticks per second) to
get the processor time, as follows.
#include <time.h>
clock_t start, end;
double cpu_time_used;
start = clock();
... /* Do the work.
*/ end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
Following is a sample C program where we measure the time taken by an insertion sort function. A second
program, given after it, measures the time taken by a function that simply waits for the Enter key to
be pressed before terminating.
#include <stdio.h>
#include <time.h>

/* function to sort an array with insertion sort and report the time taken */
void insert(int a[], int n)
{
    clock_t start = clock();
    int i, j, temp;
    for (i = 1; i < n; i++) {
        temp = a[i];
        j = i - 1;
        /* move the elements greater than temp one position ahead
           of their current position */
        while (j >= 0 && temp <= a[j]) {
            a[j + 1] = a[j];
            j = j - 1;
        }
        a[j + 1] = temp;
    }
    clock_t end = clock();
    double time_taken = ((double)(end - start)) / CLOCKS_PER_SEC;
    printf("Time taken in Sorting = %f\n", time_taken);
}

/* function to print the array */
void printArr(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Insertion Sort : \n");
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    insert(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Program to demonstrate the time taken by the function take_enter()
#include <stdio.h>
#include <time.h>

void take_enter() {
    printf("Press enter to stop the counter \n");
    while (1) {
        if (getchar())
            break;
    }
}

int main() {
    // Calculate the time taken by take_enter()
    clock_t t;
    t = clock();
    printf("Timer starts\n");
    take_enter();
    printf("Timer ends \n");
    t = clock() - t;
    double time_taken = ((double)t) / CLOCKS_PER_SEC;  // calculate the elapsed time
    printf("The program took %f seconds to execute\n", time_taken);
    return 0;
}
Output:
The following output is obtained after waiting for around 4 seconds and then hitting the Enter key.
Timer starts
Press enter to stop the counter
Timer ends
The program took 4.017000 seconds to execute
How do we put the items in a list in order? There are many different mechanisms that we can use. In
this section, we'll look at five such algorithms in detail, and determine the performance of each
algorithm in terms of the number of data comparisons and the number of times we swap list items.
Insertion Sort
⮚ Insertion sort is a simple sorting algorithm that works the way we sort playing cards
in our hands.
⮚ Insertion sort is a very simple method to sort numbers in an ascending or
descending order. This method follows the incremental method. It can be
compared with the technique how cards are sorted at the time of playing a game.
⮚ The numbers, which are needed to be sorted, are known as keys. Here is the
algorithm of the insertion sort method.
Algorithm: Insertion-Sort(A)
for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
        A[i + 1] = A[i]
        i = i - 1
    A[i + 1] = key
Complexity Analysis of Insertion Sort
⮚ Worst Case Time Complexity [Big-O]: O(n²)
⮚ Best Case Time Complexity [Big-omega]: O(n)
⮚ Average Time Complexity [Big-theta]: O(n²)
⮚ Space Complexity: O(1)
Example
Bubble sort
Bubble Sort is an elementary sorting algorithm, which works by repeatedly exchanging
adjacent elements, if necessary. When no exchanges are required, the file is sorted. This is
the simplest technique among all sorting algorithms.
Algorithm: Sequential-Bubble-Sort(A)
for i ← 1 to length[A] do
    for j ← length[A] down-to i + 1 do
        if A[j] < A[j - 1] then
            Exchange A[j] ↔ A[j - 1]
Example
Complexity Analysis of Bubble Sort
In Bubble Sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass and
so on. So the total number of comparisons is
(n-1) + (n-2) + (n-3) + … + 3 + 2 + 1 = n(n-1)/2, i.e. O(n²).
⮚ Worst Case Time Complexity [Big-O]: O(n²)
⮚ Best Case Time Complexity [Big-omega]: O(n)
⮚ Average Time Complexity [Big-theta]: O(n²)
⮚ Space Complexity: O(1)
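For reference, a minimal C sketch of bubble sort is given below, reusing the clock()-based timing shown
earlier; the sample array and function name are illustrative only.

#include <stdio.h>
#include <time.h>

/* Bubble sort: repeatedly swap adjacent out-of-order elements. */
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}

int main(void)
{
    int a[] = { 12, 31, 25, 8, 32, 17 };   /* sample input */
    int n = sizeof(a) / sizeof(a[0]);

    clock_t start = clock();
    bubble_sort(a, n);
    clock_t end = clock();

    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nTime taken in sorting = %f seconds\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}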
Selection Sort
⮚ Selection sort is conceptually the simplest sorting algorithm. This algorithm will
first find the smallest element in the array and swap it with the element in the first
position, then it will find the second smallest element and swap it with the element
in the second position, and it will keep on doing this until the entire array is sorted.
⮚ It is called selection sort because it repeatedly selects the next-smallest element and
swaps it into the right place.
Algorithm: Selection-Sort(A)
for i ← 1 to n-1 do
    min j ← i
    min x ← A[i]
    for j ← i + 1 to n do
        if A[j] < min x then
            min j ← j
            min x ← A[j]
    A[min j] ← A[i]
    A[i] ← min x
Example
Complexity Analysis of Selection Sort
Selection Sort requires two nested for loops to complete itself: the outer loop is in the function
selectionSort, and inside it we make a call to another function indexOfMinimum, which contains the
second (inner) for loop. Hence, for a given input size of n, the time and space complexity of the
selection sort algorithm are:
Worst Case Time Complexity [Big-O]: O(n²)
Best Case Time Complexity [Big-omega]: O(n²)
Average Time Complexity [Big-theta]: O(n²)
Space Complexity: O(1)
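A minimal C sketch of selection sort, following the pseudocode above, is given below; the sample array
is illustrative, and timing can be added with clock() exactly as in the earlier programs.

#include <stdio.h>

/* Selection sort: in pass i, find the index of the minimum element
   in a[i..n-1] and swap it into position i. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        int t = a[i]; a[i] = a[min]; a[min] = t;
    }
}

int main(void)
{
    int a[] = { 29, 10, 14, 37, 13 };      /* sample input */
    int n = sizeof(a) / sizeof(a[0]);
    selection_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}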
Quick Sort
⮚ Quick Sort is also based on the concept of Divide and Conquer, just like merge sort.
But in quick sort all the heavy lifting(major work) is done while dividing the array into
subarrays, while in case of merge sort, all the real work happens during merging the
subarrays. In case of quick sort, the combine step does absolutely nothing.
It is also called partition-exchange sort. This algorithm divides the list into three main parts:
1. Elements less than the pivot element
2. Pivot element (central element)
3. Elements greater than the pivot element
The pivot element can be any element from the array: it can be the first element, the last element or
any random element. In this tutorial, we will take the rightmost (last) element as the pivot.
Algorithm: Quick-Sort(A, p, r)
if p < r then
    q ← Partition(A, p, r)
    Quick-Sort(A, p, q)
    Quick-Sort(A, q + 1, r)

Function: Partition(A, p, r)
x ← A[p]
i ← p - 1
j ← r + 1
while TRUE do
    repeat j ← j - 1 until A[j] ≤ x
    repeat i ← i + 1 until A[i] ≥ x
    if i < j then
        exchange A[i] ↔ A[j]
    else
        return j
Example
Complexity Analysis of Quick-Sort
For an array in which partitioning leads to completely unbalanced subarrays, where one side has no
elements and all the elements greater than the pivot fall on the other side, and if we keep on getting
such unbalanced subarrays, then the running time is the worst case, which is O(n²). Whereas if
partitioning leads to almost equal subarrays, then the running time is the best, with time complexity
O(n log n).
Worst Case Time Complexity [Big-O]: O(n²)
Best Case Time Complexity [Big-omega]: O(n log n)
Average Time Complexity [Big-theta]: O(n log n)
Space Complexity: O(log n) auxiliary stack space on average (O(n) in the worst case)
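A minimal C sketch of quick sort is given below. It uses the Lomuto partitioning scheme with the last
(rightmost) element as the pivot, as described in the text above, rather than the Hoare-style partition
of the pseudocode; the sample array is illustrative.

#include <stdio.h>

/* Lomuto partition: uses the last element as the pivot and returns
   its final position; elements <= pivot end up to its left. */
static int partition(int a[], int low, int high)
{
    int pivot = a[high], i = low - 1;
    for (int j = low; j < high; j++)
        if (a[j] <= pivot) {
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    int t = a[i + 1]; a[i + 1] = a[high]; a[high] = t;
    return i + 1;
}

void quick_sort(int a[], int low, int high)
{
    if (low < high) {
        int q = partition(a, low, high);
        quick_sort(a, low, q - 1);      /* elements before the pivot */
        quick_sort(a, q + 1, high);     /* elements after the pivot  */
    }
}

int main(void)
{
    int a[] = { 24, 9, 29, 14, 19, 27 };   /* sample input */
    int n = sizeof(a) / sizeof(a[0]);
    quick_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}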
Merge Sort
⮚ Merge Sort follows the rule of Divide and Conquer to sort a given set of
numbers/elements, recursively, hence consuming less time.
Divide and Conquer
If we can break a single big problem into smaller sub-problems, solve the smaller sub-problems, and
combine their solutions to find the solution for the original big problem, the whole problem becomes
easier to solve.
⮚ The concept of Divide and Conquer involves three steps:
1. Divide the problem into multiple small problems.
2. Conquer the subproblems by solving them. The idea is to break down the problem into atomic
   subproblems, where they are actually solved.
3. Combine the solutions of the subproblems to find the solution of the actual problem.
Algorithm: Merge-Sort(numbers[], p, r)
if p < r then
    q = ⌊(p + r) / 2⌋
    Merge-Sort(numbers[], p, q)
    Merge-Sort(numbers[], q + 1, r)
    Merge(numbers[], p, q, r)

Function: Merge(numbers[], p, q, r)
n1 = q – p + 1
n2 = r – q
declare leftnums[1…n1 + 1] and rightnums[1…n2 + 1] temporary arrays
for i = 1 to n1
    leftnums[i] = numbers[p + i - 1]
for j = 1 to n2
    rightnums[j] = numbers[q + j]
leftnums[n1 + 1] = ∞
rightnums[n2 + 1] = ∞
i = 1
j = 1
for k = p to r
    if leftnums[i] ≤ rightnums[j]
        numbers[k] = leftnums[i]
        i = i + 1
    else
        numbers[k] = rightnums[j]
        j = j + 1
Complexity Analysis of Merge-Sort
Worst Case Time Complexity [ Big-O ]: O(n*log n)
Best Case Time Complexity [Big-omega]: O(n*log n)
Average Time Complexity [Big-theta]: O(n*log n)
Space Complexity: O(n)
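A minimal C sketch of merge sort is given below. Instead of the ∞ sentinels used in the pseudocode, it
copies the two halves into temporary buffers and merges until one of them is exhausted; the buffer size
and sample array are illustrative.

#include <stdio.h>
#include <string.h>

/* Merge the two sorted halves a[p..q] and a[q+1..r] using temporary buffers. */
static void merge(int a[], int p, int q, int r)
{
    int n1 = q - p + 1, n2 = r - q;
    int left[64], right[64];              /* sized for this small demo */
    memcpy(left,  &a[p],     n1 * sizeof(int));
    memcpy(right, &a[q + 1], n2 * sizeof(int));

    int i = 0, j = 0, k = p;
    while (i < n1 && j < n2)
        a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
    while (i < n1) a[k++] = left[i++];    /* copy whatever half remains */
    while (j < n2) a[k++] = right[j++];
}

void merge_sort(int a[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(a, p, q);
        merge_sort(a, q + 1, r);
        merge(a, p, q, r);
    }
}

int main(void)
{
    int a[] = { 38, 27, 43, 3, 9, 82, 10 };   /* sample input */
    int n = sizeof(a) / sizeof(a[0]);
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}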
Example
EXERCISE:
1) Implement the Bubble sort algorithm & compare its running time.
2) Implement the Insertion sort algorithm & compare its running time.
3) Implement the Selection Sort algorithm & compare its running time.
4) Implement the Quick-Sort algorithm & compare its running time.
5) Implement the Merge-Sort algorithm & compare its running time
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 2
DATE:    /    /
TITLE: Implementation and Time Analysis of Linear and Binary Search.
OBJECTIVES: On completion of this experiment students will be able to…
■ Search for an element using the linear search and binary search techniques.
THEORY:
Linear Search and Binary Search are the two popular searching techniques. Here we will discuss
the Binary Search Algorithm.
Linear Search:
Linear search is a searching algorithm used to find whether a given number is present in an array and,
if it is present, at what location it occurs. It is also known as sequential search. It is
straightforward and works as follows: we keep on comparing each element with the element to search for
until it is found or the list ends.
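A minimal C sketch of linear search is given below; the sample array and the key being searched for are
illustrative.

#include <stdio.h>

/* Linear (sequential) search: compare each element with the key
   and return its index, or -1 if the key is not present. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = { 10, 23, 5, 47, 18 };       /* sample input */
    int n = sizeof(a) / sizeof(a[0]);
    int pos = linear_search(a, n, 47);
    if (pos >= 0)
        printf("Element found at index %d\n", pos);
    else
        printf("Element is not present in the array\n");
    return 0;
}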
Binary Search :
Binary search is the search technique that works efficiently on sorted lists. Hence, to search an
element into some list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach in which the list is divided into two
halves, and the item is compared with the middle element of the list. If the match is found then,
the location of the middle element is returned. Otherwise, we search into either of the halves
depending upon the result produced through the match.
ALGORITHM:
Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array,
'lower_bound' is the index of the first array element, 'upper_bound' is the index of the
last array element, 'val' is the value to search
Step 1: set beg = lower_bound, end = upper_bound, pos = - 1
Step 2: repeat steps 3 and 4 while beg <=end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
set pos = mid
print pos
go to step 6
else if a[mid] > val
set end = mid - 1
else
set beg = mid + 1
[end of if]
[end of loop]
Step 5: if pos = -1
print "value is not present in the array"
[end of if]
Step 6: exit
Complexity Analysis of Binary Search
Worst Case Time Complexity [ Big-O]:O(log n)
Best Case Time Complexity [Big-omega]: O(1)
Average Time Complexity [Big-theta]: O(log n)
Space Complexity: O(1)
There are two methods to implement the binary search algorithm:
● Iterative method
● Recursive method
The recursive method of binary search follows the divide and conquer approach.
Let the elements of the array be as shown in the figure, and let the element to search for be K = 56.
We use the following formula to calculate the mid of the array: mid = (beg + end)/2.
So, in the given array, beg = 0 and end = 8, hence
mid = (0 + 8)/2 = 4. So, 4 is the mid index of the array.
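A minimal C sketch of the iterative binary search described above is given below. It assumes the array
is already sorted; the sample array (nine elements, as in the worked example) and the key 56 are
illustrative.

#include <stdio.h>

/* Iterative binary search on a sorted array, following the steps above:
   returns the index of val, or -1 if it is not present. */
int binary_search(const int a[], int n, int val)
{
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = (beg + end) / 2;
        if (a[mid] == val)
            return mid;
        else if (a[mid] > val)
            end = mid - 1;               /* search the left half  */
        else
            beg = mid + 1;               /* search the right half */
    }
    return -1;
}

int main(void)
{
    int a[] = { 10, 12, 24, 29, 39, 40, 51, 56, 69 };  /* sorted sample input */
    int n = sizeof(a) / sizeof(a[0]);
    int pos = binary_search(a, n, 56);
    if (pos >= 0)
        printf("Value found at index %d\n", pos);
    else
        printf("Value is not present in the array\n");
    return 0;
}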
EXERCISE:
1) Implement the Linear search program.
2) Implement binary search algorithm using recursive Method.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 3
TITLE:
DATE:    /    /
Implementation Of Max-Heap Sort Algorithm
OBJECTIVES: On completion of this experiment students will be able to…
Understand how Heap Sort works and analyze its time complexity.
THEORY:
What is a heap?
A heap is a complete binary tree. A binary tree is a tree in which each node can have at most two
children. A complete binary tree is a binary tree in which all the levels except the last level, i.e.
the leaf level, are completely filled, and all the nodes are left-justified.
What is heap sort?
Heapsort is a popular and efficient sorting algorithm. The idea of heap sort is to remove the elements
one by one from the heap part of the list, and then insert them into the sorted part of the list.
Heapsort is an in-place sorting algorithm.
Now, let's see the algorithm of heap sort.
Algorithm
HeapSort(arr)
    BuildMaxHeap(arr)
    for i = length(arr) down to 2
        swap arr[1] with arr[i]
        heap_size[arr] = heap_size[arr] - 1
        MaxHeapify(arr, 1)
End

BuildMaxHeap(arr)
    heap_size(arr) = length(arr)
    for i = length(arr)/2 down to 1
        MaxHeapify(arr, i)
End

MaxHeapify(arr, i)
    L = left(i)
    R = right(i)
    if L ≤ heap_size[arr] and arr[L] > arr[i]
        largest = L
    else
        largest = i
    if R ≤ heap_size[arr] and arr[R] > arr[largest]
        largest = R
    if largest != i
        swap arr[i] with arr[largest]
        MaxHeapify(arr, largest)
End
Working of Heap sort Algorithm
Now, let's see the working of the Heapsort Algorithm.
In heap sort, basically, there are two phases involved in sorting the elements:
● The first step includes the creation of a heap by adjusting the elements of the array.
● After the creation of the heap, repeatedly remove the root element of the heap by shifting it to the
  end of the array, and then heapify the remaining elements again.
Now let's see the working of heap sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using heap sort. It will make the
explanation clearer and easier.
First, we have to construct a heap from the given array and convert it into max heap.
After converting the given heap into max heap, the array elements are -
Next, we have to delete the root element (89) from the max heap. To delete this node, we have
to swap it with the last node, i.e. (11). After deleting the root element, we again have to heapify
it to convert it into max heap.
After swapping the array element 89 with 11, and converting the heap into max-heap, the
elements of array are -
In the next step, again, we have to delete the root element (81) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (54). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 81 with 54 and converting the heap into max-heap, the
elements of array are -
In the next step, we have to delete the root element (76) from the max heap again. To delete
this node, we have to swap it with the last node, i.e. (9). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 76 with 9 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (54) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (14). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 54 with 14 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (22) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (11). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 22 with 11 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (14) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (9). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 14 with 9 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (11) from the max heap. To delete
this node, we have to swap it with the last node, i.e. (9). After deleting the root element, we
again have to heapify it to convert it into max heap.
After swapping the array element 11 with 9, the elements of array are -
Now, heap has only one element left. After deleting it, heap will be empty.
After completion of sorting, the array elements are -
Now, the array is completely sorted.
Complexity Analysis of Heap Sort :
Worst Case Time Complexity [ Big-O ]: O(n log n)
Best Case Time Complexity [Big-omega]: O(n log n)
Average Time Complexity [Big-theta]: O(n log n)
Space Complexity: O(1)
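A minimal C sketch of max-heap sort is given below. Unlike the 1-based pseudocode above, it uses
0-based array indices, so the children of node i are at 2i+1 and 2i+2; the sample array is illustrative.

#include <stdio.h>

/* Restore the max-heap property for the subtree rooted at index i,
   assuming both children are already valid max-heaps. */
static void max_heapify(int a[], int heap_size, int i)
{
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heap_size && a[l] > a[largest]) largest = l;
    if (r < heap_size && a[r] > a[largest]) largest = r;
    if (largest != i) {
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        max_heapify(a, heap_size, largest);
    }
}

void heap_sort(int a[], int n)
{
    /* Build a max heap from the unsorted array. */
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(a, n, i);
    /* Repeatedly move the maximum (root) to the end and shrink the heap. */
    for (int i = n - 1; i > 0; i--) {
        int t = a[0]; a[0] = a[i]; a[i] = t;
        max_heapify(a, i, 0);
    }
}

int main(void)
{
    int a[] = { 81, 89, 9, 11, 14, 76, 54, 22 };   /* sample input */
    int n = sizeof(a) / sizeof(a[0]);
    heap_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}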
EXERCISE:
1) Implement C program for Max Heap Sort algorithm.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 4
DATE:    /    /
TITLE:
Implementation And Time Analysis Of Factorial Program Using Iterative And Recursive Method.
OBJECTIVES: On completion of this experiment students will be able to…
■ Understand how recursive functions can be implemented and how to find their time complexity.
THEORY:
The factorial of a non-negative integer n is the product of all positive integers less than or equal to n.
It is denoted by n!. Factorial is mainly used to calculate the total number of ways in which n distinct
objects can be arranged into a sequence.
Example:
The value of 5! is 120 as
5! = 1 × 2 × 3 × 4 × 5 = 120
The value of 0! is 1
Iterative Algorithm :
The iterative version uses a loop to calculate the product of all positive integers less than or equal
to n. Since the factorial of a number can be huge, the data type of the factorial variable is declared
as unsigned long.
Step 1 → Take integer variable A
Step 2 → Assign a value to the variable
Step 3 → Multiply all the values from A down to 1 and store the running product
Step 4 → The final stored value is the factorial of A
Recursive Algorithm:
A recursive algorithm calls itself with smaller input values and returns the result for the current
input by carrying out basic operations on the returned value for the smaller input. Generally, if
a problem can be solved by applying solutions to smaller versions of the same problem, and the
smaller versions shrink to readily solvable instances, then the problem can be solved using a
recursive algorithm.
Step 1: Start
Step 2: Read number n
Step 3: Call factorial(n)
Step 4: Print factorial f
Step 5: Stop
factorial(n)
Step 1: If n==1 then return 1
Step 2: Else
f=n*factorial(n-1)
Step 3: Return f
We can define the recursive function as follows:
factorial(n) = n * factorial(n - 1)   if n > 0
factorial(n) = 1                      if n = 0
Complexity Analysis of Factorial Recursive Function:
Worst Case Time Complexity [Big-O]: O(n)
Best Case Time Complexity [Big-omega]: O(n)
Average Time Complexity [Big-theta]: O(n)
Space Complexity: O(n) for the recursion stack (the iterative version uses O(1) space)
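Minimal C sketches of both the iterative and the recursive factorial are given below; the sample value
of n is illustrative, and unsigned long is used because factorials grow very quickly.

#include <stdio.h>

/* Iterative factorial: multiply 1*2*...*n in a loop.  O(n) time, O(1) space. */
unsigned long factorial_iterative(int n)
{
    unsigned long f = 1;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

/* Recursive factorial: n! = n * (n-1)!, with 0! = 1 as the base case.
   O(n) time, O(n) space for the call stack. */
unsigned long factorial_recursive(int n)
{
    if (n <= 1)
        return 1;
    return n * factorial_recursive(n - 1);
}

int main(void)
{
    int n = 5;                            /* sample input */
    printf("Iterative: %d! = %lu\n", n, factorial_iterative(n));
    printf("Recursive: %d! = %lu\n", n, factorial_recursive(n));
    return 0;
}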
EXERCISE:
1) Implement C program for finding Factorial Using Iterative Method.
2) Implement C program for finding Factorial Using Recursive Function.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 5
DATE:    /    /
TITLE: Implement 0/1 knapsack problem using dynamic method
OBJECTIVES: On completion of this experiment students will be able to…
⮚ Understand the concept of dynamic programming
⮚ Understand the concept of the 0/1 knapsack problem
THEORY:
In 0-1 Knapsack, items cannot be broken, which means the thief should either take an item as a whole or
leave it. This is the reason behind calling it 0-1 Knapsack. Hence, in the case of 0-1 Knapsack, the
value of xi can be either 0 or 1, where the other constraints remain the same. 0-1 Knapsack cannot be
solved by the Greedy approach: the Greedy approach does not ensure an optimal solution, although in
many instances it may happen to give one. The following example will establish our statement.
Example
Let us consider that the capacity of the knapsack is W = 25 and the items are as
shown in the following table.
Without considering the profit per unit weight (pi/wi), if we apply Greedy approach to
solve this problem, first item A will be selected as it will contribute maximum profit
among all the elements. After selecting item A, no more item will be selected. Hence,
for this given set of items total profit is 24. Whereas, the optimal solution can be
achieved by selecting items, B and C, where the total profit is 18 + 18 = 36.
Dynamic-Programming Approach
Let i be the highest-numbered item in an optimal solution S for knapsack capacity W. Then S' = S - {i}
is an optimal solution for capacity W - wi, and the value of the solution S is vi plus the value of the
sub-problem.
We can express this fact in the following formula: define c[i, w] to be the optimal value for items
1, 2, …, i and maximum weight w. Then
c[i, w] = 0                                     if i = 0 or w = 0
c[i, w] = c[i-1, w]                             if wi > w
c[i, w] = max(vi + c[i-1, w-wi], c[i-1, w])     if i ≥ 1 and wi ≤ w
The algorithm takes the following inputs
●
The maximum weight W
●
The number of items n
●
The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>
Dynamic-0-1-Knapsack(v, w, n, W)
for w = 0 to W do
    c[0, w] = 0
for i = 1 to n do
    c[i, 0] = 0
    for w = 1 to W do
        if wi ≤ w then
            if vi + c[i-1, w-wi] > c[i-1, w] then
                c[i, w] = vi + c[i-1, w-wi]
            else
                c[i, w] = c[i-1, w]
        else
            c[i, w] = c[i-1, w]
The set of items to take can be deduced from the table, starting at c[n, W] and tracing backwards where
the optimal values came from. If c[i, w] = c[i-1, w], then item i is not part of the solution, and we
continue tracing with c[i-1, w]. Otherwise, item i is part of the solution, and we continue tracing
with c[i-1, w - wi].
Analysis
This algorithm takes Θ(n·W) time, as table c has (n + 1)·(W + 1) entries, where each entry requires
Θ(1) time to compute.
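A minimal C sketch of the dynamic-programming algorithm above is given below; the table bounds, item
values and capacity are illustrative.

#include <stdio.h>

#define MAX_N 20
#define MAX_W 100

static int max(int a, int b) { return a > b ? a : b; }

/* Fill the DP table c[i][w] = best value using items 1..i with capacity w,
   following the recurrence given above, and return c[n][W]. */
int knapsack01(const int v[], const int w[], int n, int W)
{
    static int c[MAX_N + 1][MAX_W + 1];
    for (int wt = 0; wt <= W; wt++) c[0][wt] = 0;
    for (int i = 1; i <= n; i++) {
        c[i][0] = 0;
        for (int wt = 1; wt <= W; wt++) {
            if (w[i - 1] <= wt)
                c[i][wt] = max(v[i - 1] + c[i - 1][wt - w[i - 1]], c[i - 1][wt]);
            else
                c[i][wt] = c[i - 1][wt];
        }
    }
    return c[n][W];
}

int main(void)
{
    int v[] = { 60, 100, 120 };           /* sample profits */
    int w[] = { 10, 20, 30 };             /* sample weights */
    int n = 3, W = 50;                    /* sample capacity */
    printf("Maximum profit = %d\n", knapsack01(v, w, n, W));
    return 0;
}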
EXERCISE:
1) Implement 0/1 knapsack problem.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date: _____________
EXPERIMENT NO: 6
DATE:    /    /
TITLE: Implementation of Making Change Problem Using Dynamic Programming.
OBJECTIVES: On completion of this experiment students will be able to…
■ Understand how a dynamic programming algorithm works.
THEORY:
Dynamic programming:
Dynamic programming is a technique that breaks a problem into sub-problems and saves their results for
future use, so that we do not need to compute a result again. The property that the sub-problems are
optimized in order to optimize the overall solution is known as the optimal substructure property. The
main use of dynamic programming is to solve optimization problems. Here, optimization problems are
those in which we are trying to find the minimum or the maximum solution of a problem. Dynamic
programming guarantees finding the optimal solution of a problem if such a solution exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by
first breaking it into a collection of simpler subproblems, solving each subproblem just once, and then
storing their solutions to avoid repetitive computations.
Approaches of dynamic programming
There are two approaches to dynamic programming:
● Top-down approach
● Bottom-up approach
Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the
tabulation method. Here memoization is equal to the sum of recursion and caching. Recursion means
calling the function itself, while caching means storing the intermediate results.
Advantages
● It is very easy to understand and implement.
● It solves the subproblems only when it is required.
● It is easy to debug.
Disadvantages
● It uses the recursion technique that occupies more memory in the call stack. Sometimes
when the recursion is too deep, the stack overflow condition will occur.
● It occupies more memory that degrades the overall performance.
Bottom-Up approach
The bottom-up approach is also one of the techniques which can be used to implement the
dynamic programming. It uses the tabulation technique to implement the dynamic
programming approach. It solves the same kind of problems but it removes the recursion. If we
remove the recursion, there is no stack overflow issue and no overhead of the recursive
functions. In this tabulation technique, we solve the problems and store the results in a matrix.
Making change problem :
The problem of making a given value using minimum coins is a variation of coin change problem.
In this problem, a value Y is given. The task is to find the minimum number of coins that is
required to make the given value Y. The coins may only be taken from the given array C[] =
{C1, C2, C3, C4, C5, …}. If the given value cannot be made from the given coins, then display an
appropriate message.
Example 1:
Input: C[] = {5, 10, 25}, Y = 30
Output: Minimum of 2 coins are required to make the sum 30.
Explanation: There can be various combinations of coins for making the sum 30.
5 + 5 + 5 + 5 + 5 + 5 = 30 (total coins: 6)
5 + 5 + 5 + 5 + 10 = 30 (total coins: 5)
5 + 5 + 10 + 10 = 30 (total coins: 4)
10 + 10 + 10 = 30 (total coins: 3)
5 + 25 = 30 (total coins: 2)
Thus, we see that at least 2 coins are required to make the sum 30.
The time complexity of this algorithm is O(V), where V is the value.
Algorithm:
Begin
    coins set with values {1, 2, 5, 10}
    for all coins i from higher value to lower value do
        while value >= coins[i] do
            value := value - coins[i]
            add coins[i] to the coin list
        done
    done
    print all entries in the coin list
End
Complexity Analysis of Making change problem
Worst Case Time Complexity [ Big-O]:O(numberOfCoins*TotalAmount)
Best Case Time Complexity [omega]: O(numberOfCoins*TotalAmount)
Average Time Complexity [theta]: O(numberOfCoins*TotalAmount)
Space Complexity: O(numberOfCoins*TotalAmount)
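The pseudocode above is a greedy sketch that works for canonical coin systems. A minimal C sketch of
the bottom-up dynamic-programming formulation, which matches the O(numberOfCoins × TotalAmount)
complexity stated above and works for any coin set, is given below; the coin denominations and target
value follow Example 1 and are illustrative.

#include <stdio.h>
#include <limits.h>

/* Bottom-up DP: dp[v] = minimum number of coins needed to make value v,
   or INT_MAX if v cannot be made from the given denominations. */
int min_coins(const int coins[], int m, int Y)
{
    int dp[Y + 1];
    dp[0] = 0;
    for (int v = 1; v <= Y; v++) {
        dp[v] = INT_MAX;
        for (int i = 0; i < m; i++)
            if (coins[i] <= v && dp[v - coins[i]] != INT_MAX
                && dp[v - coins[i]] + 1 < dp[v])
                dp[v] = dp[v - coins[i]] + 1;
    }
    return dp[Y];
}

int main(void)
{
    int coins[] = { 5, 10, 25 };          /* denominations from Example 1 */
    int Y = 30;
    int ans = min_coins(coins, 3, Y);
    if (ans == INT_MAX)
        printf("The sum %d cannot be made from the given coins\n", Y);
    else
        printf("Minimum of %d coins are required to make the sum %d\n", ans, Y);
    return 0;
}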
EXERCISE:
1. Implementation of Making change problem using dynamic programming.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 7
DATE:    /    /
TITLE: Implementation of knapsack problem using Greedy method.
OBJECTIVES: On completion of this experiment students will be able to…
Understand how greedy algorithms work.
THEORY:
Greedy Method
Among all the algorithmic approaches, the simplest and straightforward approach is the
Greedy method. In this approach, the decision is taken on the basis of current available
information without worrying about the effect of the current decision in future.
Greedy algorithms build a solution part by part, choosing the next part in such a way,
that it gives an immediate benefit. This approach never reconsiders the choices taken
previously. This approach is mainly used to solve optimization problems. Greedy
method is easy to implement and quite efficient in most of the cases. Hence, we can say
that Greedy algorithm is an algorithmic paradigm based on heuristic that follows local
optimal choice at each step with the hope of finding global optimal solution.
Components of Greedy Algorithm
Greedy algorithms have the following five components −
● A candidate set − A solution is created from this set.
● A selection function − Used to choose the best candidate to be added to the solution.
● A feasibility function − Used to determine whether a candidate can be used to contribute to the
  solution.
● An objective function − Used to assign a value to a solution or a partial solution.
● A solution function − Used to indicate whether a complete solution has been reached.
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions
of items.
According to the problem statement,
There are n items in the store.
Weight of the i-th item: wi > 0
Profit of the i-th item: pi > 0
Capacity of the knapsack: W
In this version of the knapsack problem, items can be broken into smaller pieces. So the thief may take
only a fraction xi of the i-th item, where 0 ≤ xi ≤ 1.
The i-th item contributes weight xi·wi to the total weight in the knapsack and profit xi·pi to the
total profit. Hence, the objective of this algorithm is to
maximize Σ (i = 1 to n) xi·pi
subject to the constraint
Σ (i = 1 to n) xi·wi ≤ W.
It is clear that an optimal solution must fill the knapsack exactly, otherwise we could add a fraction
of one of the remaining items and increase the overall profit. Thus, an optimal solution is obtained
when
Σ (i = 1 to n) xi·wi = W.
In this context, first we need to sort the items according to the value of pi/wi, so that
pi+1/wi+1 ≤ pi/wi. Here, x is an array that stores the fraction of each item taken.
Algorithm: Greedy-Fractional-Knapsack(w[1..n], p[1..n], W)
for i = 1 to n do
    x[i] = 0
weight = 0
for i = 1 to n
    if weight + w[i] ≤ W then
        x[i] = 1
        weight = weight + w[i]
    else
        x[i] = (W - weight) / w[i]
        weight = W
        break
return x
Example
Let us consider that the capacity of the knapsack W = 60 and the list of
provided items are shown in the following table −
The provided items are not sorted based on pi/wi. After sorting, the items are as shown in the
following table.
Solution
After sorting all the items according to pi/wi, first all of B is chosen, as the weight of B is less
than the capacity of the knapsack. Next, item A is chosen, as the available capacity of the knapsack is
greater than the weight of A. Now, C is chosen as the next item. However, the whole item cannot be
chosen, as the remaining capacity of the knapsack is less than the weight of C. Hence, a fraction of C
(i.e. (60 − 50)/20) is chosen. Now, the weight of the selected items equals the capacity of the
knapsack, so no more items can be selected.
The total weight of the selected items is 10 + 40 + 20 * (10/20) = 60.
The total profit is 100 + 280 + 120 * (10/20) = 380 + 60 = 440.
This is the optimal solution. We cannot gain more profit by selecting any different combination of
items.
Analysis
If the provided items are already sorted in decreasing order of pi/wi, then the loop takes O(n) time;
therefore, the total time including the sort is O(n log n).
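A minimal C sketch of the greedy fractional knapsack is given below. It assumes the items are already
sorted in decreasing order of pi/wi; the item weights and profits are chosen to reproduce the worked
example's totals and are illustrative.

#include <stdio.h>

struct item { int weight; int profit; };

/* Greedy fractional knapsack: items must be sorted in decreasing order of
   profit/weight; take whole items while they fit, then a fraction of the
   next one. Returns the total profit. */
double fractional_knapsack(const struct item it[], int n, int W)
{
    double profit = 0.0;
    int remaining = W;
    for (int i = 0; i < n && remaining > 0; i++) {
        if (it[i].weight <= remaining) {
            profit += it[i].profit;                  /* take the whole item */
            remaining -= it[i].weight;
        } else {
            /* take only the fraction that still fits, then stop */
            profit += it[i].profit * ((double)remaining / it[i].weight);
            remaining = 0;
        }
    }
    return profit;
}

int main(void)
{
    /* sample items already sorted by decreasing profit/weight */
    struct item items[] = { {10, 100}, {40, 280}, {20, 120}, {24, 120} };
    int n = sizeof(items) / sizeof(items[0]);
    printf("Maximum profit = %.2f\n", fractional_knapsack(items, n, 60));
    return 0;
}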
EXERCISE:
1) Implement fractional knapsack problem using greedy Method.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 8
DATE:    /    /
TITLE: Implementation of Prim's Algorithm.
OBJECTIVES: On completion of this experiment students will be able to…
⮚ Learn Prim's algorithm.
THEORY:
Spanning tree - A spanning tree is a subgraph of an undirected connected graph that includes all of its vertices and is itself a tree (connected and acyclic).
Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in which
the sum of the weights of the edge is minimum. The weight of the spanning tree is the sum of
the weights given to the edges of the spanning tree.
Now, let's start the main topic.
Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree from a
graph. Prim's algorithm finds the subset of edges that includes every vertex of the graph such
that the sum of the weights of the edges can be minimized.
Prim's algorithm starts with a single node and explores all the adjacent nodes with all the connecting
edges at every step. The edges with the minimal weights that cause no cycles in the graph get selected.
Prim’s algorithm:
Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the edges with
the smallest weight until the goal is reached. The steps to implement Prim's algorithm are given as
follows:
● First, we have to initialize an MST with a randomly chosen vertex.
● Now, we have to find all the edges that connect the tree in the above step with the new vertices.
  From the edges found, select the minimum edge and add it to the tree.
● Repeat step 2 until the minimum spanning tree is formed.
The applications of Prim's algorithm are:
● Prim's algorithm can be used in network designing.
● It can also be used to lay down electrical wiring cables.
Example of prim's algorithm
Now, let's see the working of prim's algorithm using an example. It will be easier to understand
the prim's algorithm using an example.
Suppose, a weighted graph is -
Step 1 - First, we have to choose a vertex from the above graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two edges
from vertex B that are B to C with weight 10 and edge B to D with weight 4. Among the edges,
the edge BD has the minimum weight. So, add it to the MST.
Step 3 - Now, again, choose the edge with the minimum weight among all the other edges. In
this case, the edges DE and CD are such edges. Add them to MST and explore the adjacent of C,
i.e., E and A. So, select the edge DE and add it to the MST.
Step 4 - Now, select the edge CD, and add it to the MST.
Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a cycle
to the graph. So, choose the edge CA and add it to the MST.
So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of the MST
is given below:
Cost of MST = 4 + 2 + 1 + 3 = 10 units.
Algorithm:
Step 1: Select a starting vertex.
Step 2: Repeat Steps 3 and 4 while there are fringe vertices.
Step 3: Select an edge 'e' connecting a tree vertex and a fringe vertex that has minimum weight.
Step 4: Add the selected edge and the vertex to the minimum spanning tree T.
[END OF LOOP]
Step 5: EXIT
Complexity Analysis of Prim's Algorithm
Data structure used for the minimum edge weight      Time Complexity
Adjacency matrix, linear searching                   O(|V|²)
Adjacency list and binary heap                       O(|E| log |V|)
Adjacency list and Fibonacci heap                    O(|E| + |V| log |V|)
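A minimal C sketch of Prim's algorithm using an adjacency matrix and linear search (the O(|V|²) variant
from the table above) is given below; the sample graph is illustrative.

#include <stdio.h>
#include <limits.h>

#define V 5                               /* number of vertices (sample) */

/* Prim's algorithm with an adjacency matrix; a weight of 0 means "no edge". */
void prim_mst(int graph[V][V])
{
    int key[V], parent[V];
    int inMST[V] = { 0 };

    for (int i = 0; i < V; i++) { key[i] = INT_MAX; parent[i] = -1; }
    key[0] = 0;                           /* start from vertex 0 */

    for (int count = 0; count < V; count++) {
        /* pick the cheapest vertex not yet in the MST */
        int u = -1;
        for (int v = 0; v < V; v++)
            if (!inMST[v] && (u == -1 || key[v] < key[u]))
                u = v;
        inMST[u] = 1;

        /* relax edges from u to vertices still outside the MST */
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v]) {
                key[v] = graph[u][v];
                parent[v] = u;
            }
    }

    int cost = 0;
    for (int v = 1; v < V; v++) {
        printf("Edge %d - %d  weight %d\n", parent[v], v, graph[v][parent[v]]);
        cost += graph[v][parent[v]];
    }
    printf("Cost of MST = %d\n", cost);
}

int main(void)
{
    /* sample weighted undirected graph as an adjacency matrix */
    int graph[V][V] = {
        { 0, 2, 0, 6, 0 },
        { 2, 0, 3, 8, 5 },
        { 0, 3, 0, 0, 7 },
        { 6, 8, 0, 0, 9 },
        { 0, 5, 7, 9, 0 },
    };
    prim_mst(graph);
    return 0;
}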
EXERCISE:
1) Implement Prim’s algorithm.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 9
DATE:    /    /
TITLE: Implementation of Kruskal's Algorithm.
OBJECTIVES: On completion of this experiment students will be able to…
⮚ Understand how Kruskal's algorithm works.
THEORY:
Kruskal's algorithm :
Kruskal's Algorithm is used to find the minimum spanning tree for a connected weighted graph.
The main target of the algorithm is to find the subset of edges by using which we can traverse
every vertex of the graph. It follows the greedy approach that finds an optimum solution at
every stage instead of focusing on a global optimum.
In Kruskal's algorithm, we start from the edges with the lowest weight and keep adding edges until the
goal is reached. The steps to implement Kruskal's algorithm are listed as follows:
● First, sort all the edges from low weight to high.
● Now, take the edge with the lowest weight and add it to the spanning tree. If the edge to be added
  creates a cycle, then reject the edge.
● Continue to add the edges until we reach all vertices, and a minimum spanning tree is created.
The applications of Kruskal's algorithm are:
● Kruskal's algorithm can be used to lay out electrical wiring among cities.
● It can be used to lay down LAN connections.
Example of Kruskal's algorithm
Now, let's see the working of Kruskal's algorithm using an example. It will be easier to
understand Kruskal's algorithm using an example.
Suppose a weighted graph is -
The weights of the edges of the above graph are given in the table below.
Edge:    AB   AC   AD   AE   BC   CD   DE
Weight:   1    7   10    5    3    4    2
Now, sort the edges given above in the ascending order of their weights.
Edge:    AB   DE   BC   CD   AE   AC   AD
Weight:   1    2    3    4    5    7   10
Now, let's start constructing the minimum spanning tree.
Step 1 - First, add the edge AB with weight 1 to the MST.
Step 2 - Add the edge DE with weight 2 to the MST as it is not creating the cycle.
Step 3 - Add the edge BC with weight 3 to the MST, as it is not creating any cycle or loop.
Step 4 - Now, pick the edge CD with weight 4 to the MST, as it is not forming the cycle.
Step 5 - After that, pick the edge AE with weight 5. Including this edge will create the cycle, so
discard it.
Step 6 - Pick the edge AC with weight 7. Including this edge will create the cycle, so discard it.
Step 7 - Pick the edge AD with weight 10. Including this edge will also create the cycle, so
discard it.
So, the final minimum spanning tree obtained from the given weighted graph by using Kruskal's
algorithm is -
The cost of the MST is = AB + DE + BC + CD = 1 + 2 + 3 + 4 = 10.
Algorithm
Step 1: Create a forest F in such a way that every vertex of the graph is a separate tree.
Step 2: Create a set E that contains all the edges of the graph.
Step 3: Repeat Steps 4 and 5 while E is NOT EMPTY and F is not spanning.
Step 4: Remove an edge from E with minimum weight.
Step 5: IF the edge obtained in Step 4 connects two different trees, then add it to the forest F
        (combining two trees into one tree), ELSE discard the edge.
Step 6: END
Complexity Analysis of Kruskal's Algorithm
Worst Case Time Complexity [Big-O]: O(E log E)
Best Case Time Complexity [omega]: O(E log E)
Average Time Complexity [theta]: O(E log E)
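A minimal C sketch of Kruskal's algorithm is given below. It sorts the edges with qsort() and uses a
simple disjoint-set (union-find) structure to detect cycles; the edge list encodes the worked example
above with vertices A to E numbered 0 to 4 and is illustrative.

#include <stdio.h>
#include <stdlib.h>

struct edge { int u, v, w; };

static int cmp_weight(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

/* Disjoint-set "find" with path compression; used to detect cycles. */
static int find_set(int parent[], int x)
{
    if (parent[x] != x)
        parent[x] = find_set(parent, parent[x]);
    return parent[x];
}

int main(void)
{
    /* sample graph: vertices 0..4, edges listed as (u, v, weight) */
    struct edge edges[] = {
        {0, 1, 1}, {3, 4, 2}, {1, 2, 3}, {2, 3, 4},
        {0, 4, 5}, {0, 2, 7}, {0, 3, 10},
    };
    int n_vertices = 5;
    int n_edges = sizeof(edges) / sizeof(edges[0]);

    /* Step 1: sort all edges by increasing weight. */
    qsort(edges, n_edges, sizeof(struct edge), cmp_weight);

    /* Each vertex starts in its own tree (forest F). */
    int parent[5];
    for (int i = 0; i < n_vertices; i++) parent[i] = i;

    int cost = 0, taken = 0;
    for (int i = 0; i < n_edges && taken < n_vertices - 1; i++) {
        int ru = find_set(parent, edges[i].u);
        int rv = find_set(parent, edges[i].v);
        if (ru != rv) {                     /* edge joins two different trees */
            parent[ru] = rv;                /* union the two trees */
            cost += edges[i].w;
            taken++;
            printf("Edge %d - %d  weight %d\n", edges[i].u, edges[i].v, edges[i].w);
        }
    }
    printf("Cost of MST = %d\n", cost);
    return 0;
}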
EXERCISE:
1) Implement Kruskal’s algorithm.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date:
EXPERIMENT NO: 10
DATE:    /    /
TITLE: Implement LCS problem.
OBJECTIVES: On completion of this experiment students will be able to…
⮚ Learn the LCS problem.
THEORY:
Subsequence
Let us consider a sequence S = <s1, s2, s3, s4, …, sn>. A sequence Z = <z1, z2, z3, z4, …, zm> over S is
called a subsequence of S if and only if it can be derived from S by deleting some elements.
Common Subsequence
Suppose, X and Y are two sequences over a finite set of elements. We can say that Z is a
common subsequence of X and Y, if Z is a subsequence of both X and Y.
Longest Common Subsequence
If a set of sequences are given, the longest common subsequence problem is to find a
common subsequence of all the sequences that is of maximal length. The longest
common subsequence problem is a classic computer science problem, the basis of data
comparison programs such as the diff-utility, and has applications in bioinformatics. It
is also widely used by revision control systems, such as SVN and Git, for reconciling
multiple changes made to a revision-controlled collection of files.
Dynamic Programming
Let X = < x1, x2, x3,…, xm > and Y = < y1, y2, y3,…, yn > be the sequences. To compute the
length of an element the following algorithm is used. In this procedure, table C[m, n] is
computed in row major order and another table B[m,n] is computed to construct
optimal solution.
Algorithm: LCS-Length-Table-Formulation(X, Y)
m := length(X)
n := length(Y)
for i = 1 to m do
    C[i, 0] := 0
for j = 1 to n do
    C[0, j] := 0
for i = 1 to m do
    for j = 1 to n do
        if xi = yj
            C[i, j] := C[i - 1, j - 1] + 1
            B[i, j] := ‘D’
        else
            if C[i - 1, j] ≥ C[i, j - 1]
                C[i, j] := C[i - 1, j]
                B[i, j] := ‘U’
            else
                C[i, j] := C[i, j - 1]
                B[i, j] := ‘L’
return C and B
Algorithm: Print-LCS(B, X, i, j)
if i = 0 or j = 0
    return
if B[i, j] = ‘D’
    Print-LCS(B, X, i-1, j-1)
    Print(xi)
else if B[i, j] = ‘U’
    Print-LCS(B, X, i-1, j)
else
    Print-LCS(B, X, i, j-1)
Analysis
To populate the table, the outer for loop iterates m times and the inner for loop iterates n times.
Hence, the complexity of the algorithm is O(m·n), where m and n are the lengths of the two strings.
In this example, we have two strings X = BACDB and Y = BDCB to find the longest
common subsequence.
Following the algorithm LCS-Length-Table-Formulation (as stated above), we have
calculated table C (shown on the left hand side) and table B (shown on the right hand
side).
In table B, instead of ‘D’, ‘L’ and ‘U’, we use the diagonal arrow, left arrow and up arrow,
respectively. After generating table B, the LCS is determined by the function Print-LCS.
The result is BCB.
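A minimal C sketch of the LCS length-table computation is given below. It reconstructs the subsequence
directly from table C instead of keeping the separate direction table B; the strings follow the worked
example and are illustrative.

#include <stdio.h>
#include <string.h>

#define MAX 50

/* Fill C[i][j] = length of the LCS of X[0..i-1] and Y[0..j-1], then walk
   the table backwards to recover one longest common subsequence. */
void lcs(const char *X, const char *Y)
{
    int m = strlen(X), n = strlen(Y);
    int C[MAX][MAX] = { 0 };

    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++) {
            if (X[i - 1] == Y[j - 1])
                C[i][j] = C[i - 1][j - 1] + 1;
            else
                C[i][j] = (C[i - 1][j] >= C[i][j - 1]) ? C[i - 1][j] : C[i][j - 1];
        }

    /* reconstruct the LCS from the bottom-right corner of the table */
    char buf[MAX];
    int k = C[m][n];
    buf[k] = '\0';
    for (int i = m, j = n; i > 0 && j > 0; ) {
        if (X[i - 1] == Y[j - 1]) { buf[--k] = X[i - 1]; i--; j--; }
        else if (C[i - 1][j] >= C[i][j - 1]) i--;
        else j--;
    }
    printf("LCS of %s and %s is %s (length %d)\n", X, Y, buf, C[m][n]);
}

int main(void)
{
    lcs("BACDB", "BDCB");                 /* example from the theory section */
    return 0;
}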
EXERCISE:
1) Implement LCS using Dynamic Programming.
EVALUATION:
Involvement (4) | Understanding / Problem solving (3) | Timely Completion (3) | Total (10)

Signature with date: