Step 1: Obtain a description of the problem.
This step is much more difficult than it appears. In the following discussion, the
word client refers to someone who wants to find a solution to a problem, and the
word developer refers to someone who finds a way to solve the problem. The developer must
create an algorithm that will solve the client's problem.
The client is responsible for creating a description of the problem, but this is often the weakest
part of the process. It's quite common for a problem description to suffer from one or more of the
following types of defects: (1) the description relies on unstated assumptions, (2) the description
is ambiguous, (3) the description is incomplete, or (4) the description has internal contradictions.
These defects are seldom due to carelessness by the client. Instead, they are due to the fact
that natural languages (English, French, Korean, etc.) are rather imprecise. Part of the
developer's responsibility is to identify defects in the description of a problem, and to work with
the client to remedy those defects.
Step 2: Analyze the problem.
The purpose of this step is to determine both the starting and ending points for solving the
problem. This process is analogous to a mathematician determining what is given and what
must be proven. A good problem description makes it easier to perform this step.
When determining the starting point, we should start by seeking answers to the following questions:
 What data are available?
 Where is that data?
 What formulas pertain to the problem?
 What rules exist for working with the data?
 What relationships exist among the data values?
When determining the ending point, we need to describe the characteristics of a solution. In other words, how will we know when we're done? Asking the following questions often helps to determine the ending point.
 What new facts will we have?
 What items will have changed?
 What changes will have been made to those items?
 What things will no longer exist?
Step 3: Develop a high-level algorithm.
An algorithm is a plan for solving a problem, but plans come in several levels of detail. It's
usually better to start with a high-level algorithm that includes the major part of a solution, but
leaves the details until later. We can use an everyday example to demonstrate a high-level
algorithm.
Problem: I need to send a birthday card to my brother, Mark.
Analysis: I don't have a card. I prefer to buy a card rather than make one myself.
High-level algorithm:
Go to a store that sells greeting cards
Select a card
Purchase a card
Mail the card
This algorithm is satisfactory for daily use, but it lacks details that would have to be added were
a computer to carry out the solution. These details include answers to questions such as the
following.
 "Which store will I visit?"
 "How will I get there: walk, drive, ride my bicycle, take the bus?"
 "What kind of card does Mark like: humorous, sentimental, risqué?"
These kinds of details are considered in the next step of our process.
Step 4: Refine the algorithm by adding more detail.
A high-level algorithm shows the major steps that need to be followed to solve a problem. Now
we need to add details to these steps, but how much detail should we add? Unfortunately, the
answer to this question depends on the situation. We have to consider who (or what) is going to
implement the algorithm and how much that person (or thing) already knows how to do. If
someone is going to purchase Mark's birthday card on my behalf, my instructions have to be
adapted to whether or not that person is familiar with the stores in the community and how well
the purchaser knows my brother's taste in greeting cards.
When our goal is to develop algorithms that will lead to computer programs, we need to
consider the capabilities of the computer and provide enough detail so that someone else could
use our algorithm to write a computer program that follows the steps in our algorithm. As with
the birthday card problem, we need to adjust the level of detail to match the ability of the
programmer. When in doubt, or when you are learning, it is better to have too much detail than
to have too little.
Most of our examples will move from a high-level to a detailed algorithm in a single step, but this
is not always reasonable. For larger, more complex problems, it is common to go through this
process several times, developing intermediate level algorithms as we go. Each time, we add
more detail to the previous algorithm, stopping when we see no benefit to further refinement.
This technique of gradually working from a high-level to a detailed algorithm is often
called stepwise refinement.
Stepwise refinement is a process for developing a detailed algorithm by gradually adding detail
to a high-level algorithm.
Step 5: Review the algorithm.
The final step is to review the algorithm. What are we looking for? First, we need to work
through the algorithm step by step to determine whether or not it will solve the original problem.
Once we are satisfied that the algorithm does provide a solution to the problem, we start to look
for other things. The following questions are typical of ones that should be asked whenever we
review an algorithm. Asking these questions and seeking their answers is a good way to
develop skills that can be applied to the next problem.
 Does this algorithm solve a very specific problem or does it solve a more general
problem? If it solves a very specific problem, should it be generalized?
For example, an algorithm that computes the area of a circle having radius 5.2 meters (formula π*5.2²) solves a very specific problem, but an algorithm that computes the area of any circle (formula π*R²) solves a more general problem (see the sketch after this list).
 Can this algorithm be simplified?
One formula for computing the perimeter of a rectangle is:
length + width + length + width
A simpler formula would be:
2.0 * (length + width)
 Is this solution similar to the solution to another problem? How are they alike? How are they
different?
For example, consider the following two formulae:
Rectangle area = length * width
Triangle area = 0.5 * base * height
Similarities: Each computes an area. Each multiplies two measurements.
Differences: Different measurements are used. The triangle formula contains 0.5.
Hypothesis: Perhaps every area formula involves multiplying two measurements.
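As a small illustration of the generalization question above, here is a hedged Java sketch (the method names are my own, not from the text):

static double areaOfSpecificCircle() {
    return Math.PI * 5.2 * 5.2;   // solves only the radius-5.2 problem
}

static double areaOfCircle(double r) {
    return Math.PI * r * r;       // solves the general problem for any radius r
}

Generalizing the radius into a parameter turns a single-use algorithm into one that solves the whole family of problems.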
The following example applies these steps to a Jeroo problem, the same "Pick and Plant" task whose Java code appears later in this reading.
Analysis of the Problem (Step 2)
1. The flower is exactly three spaces ahead of the Jeroo.
2. The flower is to be planted exactly two spaces South of its current location.
3. The Jeroo is to finish facing East one space East of the planted flower.
4. There are no nets to worry about.
High-level Algorithm (Step 3)
Let's name the Jeroo Bobby. Bobby should do the following:
Get the flower
Put the flower
Hop East
Detailed Algorithm (Step 4)
Let's name the Jeroo Bobby. Bobby should do the following:
Get the flower
    Hop 3 times
    Pick the flower
Put the flower
    Turn right
    Hop 2 times
    Plant a flower
Hop East
    Turn left
    Hop once
Review the Algorithm (Step 5)
1. The high-level algorithm partitioned the problem into three rather easy subproblems. This
seems like a good technique.
2. This algorithm solves a very specific problem because the Jeroo and the flower are in very
specific locations.
3. This algorithm is actually a solution to a slightly more general problem in which the Jeroo
starts anywhere, and the flower is 3 spaces directly ahead of the Jeroo.
Java Code for "Pick and Plant"
A good programmer doesn't write a program all at once. Instead, the programmer will write and
test the program in a series of builds. Each build adds to the previous one. The high-level
algorithm will guide us in this process.
A good programmer works incrementally, adding small pieces one at a time and constantly rechecking the work so far.
FIRST BUILD
To see this solution in action, create a new Greenfoot4Sofia scenario and use the Edit Palettes Jeroo menu command to make the Jeroo classes visible. Right-click on the Island class and create a new subclass with the name of your choice. This subclass will hold your new code.
The recommended first build contains three things:
1. The main method (here myProgram() in your island subclass).
2. Declaration and instantiation of every Jeroo that will be used.
3. One comment for each step in the high-level algorithm.

public void myProgram()
{
    Jeroo bobby = new Jeroo();
    this.add(bobby);

    // --- Get the flower ---

    // --- Put the flower ---

    // --- Hop East ---

} // ===== end of method myProgram() =====

The instantiation at the beginning of myProgram() places bobby at (0, 0), facing East, with no flowers. Once the first build is working correctly, we can proceed to the others. In this case, each build will correspond to one step in the high-level algorithm. It may seem like a lot of work to use four builds for such a simple program, but doing so helps establish habits that will become invaluable as the programs become more complex.

SECOND BUILD
This build adds the logic to "get the flower", which in the detailed algorithm (step 4 above) consists of hopping 3 times and then picking the flower. The new code is indicated by comments that wouldn't appear in the original (they are just here to call attention to the additions). The blank lines help show the organization of the logic.

public void myProgram()
{
    Jeroo bobby = new Jeroo();
    this.add(bobby);

    // --- Get the flower ---
    bobby.hop(3);    // <-- new code to hop 3 times
    bobby.pick();    // <-- new code to pick the flower

    // --- Put the flower ---

    // --- Hop East ---

} // ===== end of method myProgram() =====

By taking a moment to run the work so far, you can confirm whether or not this step in the planned algorithm works as expected.

THIRD BUILD
This build adds the logic to "put the flower".

public void myProgram()
{
    Jeroo bobby = new Jeroo();
    this.add(bobby);

    // --- Get the flower ---
    bobby.hop(3);
    bobby.pick();

    // --- Put the flower ---
    bobby.turn(RIGHT);   // <-- new code to turn right
    bobby.hop(2);        // <-- new code to hop 2 times
    bobby.plant();       // <-- new code to plant a flower

    // --- Hop East ---

} // ===== end of method myProgram() =====

FOURTH BUILD (final)
This build adds the logic to "hop East".

public void myProgram()
{
    Jeroo bobby = new Jeroo();
    this.add(bobby);

    // --- Get the flower ---
    bobby.hop(3);
    bobby.pick();

    // --- Put the flower ---
    bobby.turn(RIGHT);
    bobby.hop(2);
    bobby.plant();

    // --- Hop East ---
    bobby.turn(LEFT);   // <-- new code to turn left
    bobby.hop();        // <-- new code to hop 1 time

} // ===== end of method myProgram() =====
Common Types of Algorithms
 Brute Force Algorithm
Brute force algorithms are simple and straightforward solutions that use trial and error to solve
problems.
They are often used to solve problems where no other efficient solution is known. Although they
can be very slow, they are easy to understand and implement.
 Divide and Conquer Algorithm
Divide and conquer algorithms break a complex problem into smaller, simpler sub-problems.
The sub-problems are then solved separately and combined to produce a solution to the original
problem.
This approach reduces the complexity of the problem and makes it easier to solve.
 Dynamic Programming Algorithm
Dynamic programming algorithms are used to solve problems that can be broken down into
smaller sub-problems.
They use a bottom-up approach, starting with the simplest sub-problems and building up to the
more complex problems.
This approach is used to find the optimal solution to a problem by avoiding redundant
calculations.
 Greedy Algorithm
Optimization issues are resolved using greedy algorithms. They always choose well and never
turn around.
This strategy is utilized when the answer may be discovered by making a sequence of local
optimal decisions that ultimately result in a globally optimum solution.
 Backtracking Algorithm
Backtracking algorithms are used to solve problems that involve finding a solution by trying
different options and undoing them if they lead to a dead end.
This approach is used when there is no efficient way to solve the problem and it is necessary to
try all possible solutions.
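To make the idea concrete, here is a small hedged Java sketch (the problem choice, subset-sum, is my own illustration): try including or excluding each number, and undo any choice that leads to a dead end.

// Backtracking sketch: does any subset of nums sum exactly to target?
static boolean subsetSum(int[] nums, int i, int remaining) {
    if (remaining == 0) return true;                       // solution found
    if (i == nums.length || remaining < 0) return false;   // dead end
    // Option 1: include nums[i]; if that dead-ends, fall through (backtrack)...
    if (subsetSum(nums, i + 1, remaining - nums[i])) return true;
    // Option 2: ...and retry without nums[i].
    return subsetSum(nums, i + 1, remaining);
}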
Characteristics of an Algorithm
 Unambiguous: Algorithms must have a clear and well-defined set of instructions that
are easy to understand and follow.
 Effective: The algorithm must produce the correct result in a finite amount of time.
 Feasible: The algorithm must be physically applicable and must be able to run on a
computer.
 Input and Output: Algorithms must take inputs and produce outputs.
 Finite: The algorithm must terminate after a finite number of steps and must not go into
an infinite loop.
 Deterministic: The algorithm must produce the same output for the same input every
time it is run.
Advantages of Algorithms
Increased speed: A well-designed algorithm can complete a task in far less time than an ad hoc approach.
Improved accuracy: A precisely specified algorithm produces consistent, reliable output.
Improved scalability: Good algorithms scale to large data sets and can process large amounts of data quickly and efficiently.
Reduced cost: Efficient algorithms reduce the computing resources associated with a task and can increase the efficiency of business processes.
Increased security: Carefully designed algorithms can help protect data from potential threats.
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and
after implementation. They are the following −
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm
is measured by assuming that all other factors, for example, processor speed, are constant
and have no effect on the implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on the target machine. In this analysis, actual statistics, such as running time and space required, are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or
running time of various operations involved. The running time of an operation can be defined as
the number of computer instructions executed per operation.
Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the
algorithm X are the two main factors, which decide the efficiency of X.
 Time Factor − Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
 Space Factor − Space is measured by counting the maximum memory space required by
the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by
the algorithm in terms of n as the size of input data.
Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. The space required by an algorithm is equal to the sum of the following
two components −
 A fixed part: the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.
 A variable part: the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables (A, B, and C) and one constant, hence S(P) = 3 + 1 = 4. The actual space also depends on the data types of the variables and constants involved, and is multiplied by their sizes accordingly.
Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run
to completion. Time requirements can be defined as a numerical function T(n), where T(n) can be
measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational
time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that
T(n) grows linearly as the input size increases.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity
of an algorithm.
 Ο − Big Oh Notation
 Ω − Big Omega Notation
 Θ − Theta Notation
 o − Little Oh Notation
 ω − Little Omega Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It
measures the worst case time complexity or the longest amount of time an algorithm can
possibly take to complete.
Big Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the minimum amount of time an algorithm can possibly take to complete.
Example
Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.
Considering g(n) = n³, f(n) ≥ 4·g(n) for all values of n > 0.
Hence, the complexity of f(n) can be represented as Ω(g(n)), i.e. Ω(n³).
Theta Notation, Θ
The notation Θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. Some may confuse theta notation with the average case time complexity; while theta notation can often describe the average case fairly accurately, other notations could be used as well. Formally, f(n) = Θ(g(n)) if there exist positive constants c₁, c₂ and n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
Example
Let us consider a given function, f(n) = 4n³ + 10n² + 5n + 1.
Considering g(n) = n³, 4·g(n) ≤ f(n) ≤ 5·g(n) for all sufficiently large n.
Hence, the complexity of f(n) can be represented as Θ(g(n)), i.e. Θ(n³).
Little Oh (o) and Little Omega (ω) Notations
The Little Oh and Little Omega notations also represent upper and lower bounds on complexity, but unlike Big Oh and Big Omega they are not asymptotically tight bounds. For this reason, the most commonly used notations to represent time complexities are the Big Oh and Big Omega notations.
Data Structures - Greedy Algorithms
An algorithm is designed to achieve an optimum solution for a given problem. In the greedy algorithm approach, decisions are made from the given solution domain. Being greedy, the option that seems to provide an optimum solution at that moment is always chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to globally
optimized solutions. However, generally greedy algorithms do not provide globally optimized
solutions.
Counting Coins
This problem is to count to a desired value by choosing the least possible coins and the greedy
approach forces the algorithm to pick the largest possible coin. If we are provided coins of € 1, 2,
5 and 10 and we are asked to count € 18 then the greedy procedure will be −
 1 − Select one € 10 coin, the remaining count is 8
 2 − Then select one € 5 coin, the remaining count is 3
 3 − Then select one € 2 coin, the remaining count is 1
 4 − And finally, the selection of one € 1 coin solves the problem
Though it seems to be working fine for this count (we need to pick only 4 coins), if we slightly change the problem the same approach may not be able to produce the same optimum result.
For a currency system where we have coins of value 1, 7 and 10, counting coins for the value 18 will be absolutely optimum, but for a count like 15 it may use more coins than necessary. For example, the greedy approach will use 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same problem could be solved by using only 3 coins (7 + 7 + 1).
Hence, we may conclude that the greedy approach picks an immediate optimized solution and
may fail where global optimization is a major concern.
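A minimal Java sketch of the counting procedure described above (the method name and array layout are my own; denominations are assumed sorted in descending order):

// Greedy coin counting: always take the largest coin that still fits.
static int countCoins(int[] denominations, int value) {
    int coins = 0;
    for (int d : denominations) {
        while (value >= d) {
            value -= d;
            coins++;
        }
    }
    return coins;
}

With denominations {10, 5, 2, 1} and value 18 this returns the optimal 4 coins, but with {10, 7, 1} and value 15 it returns 6 coins where 3 (7 + 7 + 1) would do.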
Examples
Most networking algorithms use the greedy approach. Here is a list of a few of them −
 Travelling Salesman Problem
 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Graph − Map Coloring
 Graph − Vertex Cover
 Knapsack Problem
 Job Scheduling Problem
There are lots of similar problems that use the greedy approach to find an optimum solution.
Data Structures - Divide and Conquer
To understand the divide and conquer design strategy of algorithms, let us use a simple real world
example. Consider an instance where we need to brush type C curly hair and remove all the knots from it. To do that, the first step is to section the hair into smaller strands to make the combing easier than combing the hair altogether. The same technique is applied to algorithms.
Divide and conquer approach breaks down a problem into multiple sub-problems recursively until
it cannot be divided further. These sub-problems are solved first and the solutions are merged
together to form the final solution.
The common procedure for the divide and conquer design technique is as follows −
 Divide − We divide the original problem into multiple sub-problems until they cannot be
divided further.
 Conquer − Then these subproblems are solved separately with the help of recursion.
 Combine − Once solved, all the subproblems are merged/combined together to form the
final solution of the original problem.
There are several ways to give input to the divide and conquer algorithm design pattern. Two major data structures used are arrays and linked lists. Their usage is explained as follows.
Arrays as Input
There are various ways in which algorithms can take input such that they can be solved using the divide and conquer technique. Arrays are one of them. In algorithms that require input to be in the form of a list, like various sorting algorithms, array data structures are most commonly used.
In a sorting algorithm of this kind, the array input is divided into subproblems until they cannot be divided further.
Since arrays are indexed and linear data structures, sorting algorithms most popularly use array
data structures to receive input.
Linked Lists as Input
Another data structure that can be used to take input for divide and conquer algorithms is a linked
list (for example, merge sort using linked lists). Like arrays, linked lists are also linear data
structures that store data sequentially.
Consider the merge sort algorithm on linked list; following the very popular tortoise and hare
algorithm, the list is divided until it cannot be divided further.
Examples
The following computer algorithms are based on divide-and-conquer programming approach −
 Merge Sort
 Quick Sort
 Binary Search
 Strassen's Matrix Multiplication
 Closest Pair
There are various ways available to solve any computer problem, but those mentioned are good examples of the divide and conquer approach.
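As a concrete illustration, here is a hedged Java sketch of binary search, one of the divide and conquer examples listed above (an iterative version; the names are my own):

// Binary search: divide the sorted range in half, conquer the half
// that can still contain the target, and repeat.
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;            // divide: pick the middle
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;  // conquer the right half
        else hi = mid - 1;                       // conquer the left half
    }
    return -1;   // target not present
}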
Data Structures - Dynamic Programming
Dynamic programming approach is similar to divide and conquer in breaking down the problem into smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, results of these smaller sub-problems are remembered and used for similar or overlapping sub-problems.
Dynamic programming is used where we have problems that can be divided into similar sub-problems, so that their results can be re-used. Mostly, these algorithms are used for optimization.
Before solving an in-hand sub-problem, a dynamic algorithm will try to examine the results of the previously solved sub-problems. The solutions of sub-problems are combined in order to achieve the best solution.
So we can say that −
 The problem should be able to be divided into smaller overlapping sub-problems.
 An optimum solution can be achieved by using an optimum solution of smaller sub-problems.
 Dynamic algorithms use memoization.
Comparison
In contrast to greedy algorithms, which address local optimization, dynamic algorithms aim at an overall optimization of the problem.
In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the output of already solved sub-problems.
Example
The following computer problems can be solved using the dynamic programming approach −
 Fibonacci number series
 Knapsack problem
 Tower of Hanoi
 All pair shortest path by Floyd-Warshall
 Shortest path by Dijkstra
 Project scheduling
Dynamic programming can be used in both a top-down and a bottom-up manner. And of course, most of the time, referring to the previous solution output is cheaper than recomputing in terms of CPU cycles.
Basic Operations
The data in the data structures are processed by certain operations. The particular data structure chosen largely depends on the frequency of the operation that needs to be performed on the data structure.
 Traversing
 Searching
 Insertion
 Deletion
 Sorting
 Merging
Data Structures and Algorithms - Arrays
Array is a type of linear data structure that is defined as a collection of elements of the same or different data types. Arrays exist in both single and multiple dimensions. These data structures come into the picture when there is a necessity to store multiple elements of a similar nature together in one place.
The difference between an array index and a memory address is that the array index acts like a key value to label the elements in the array, whereas a memory address is the actual location of an element in memory.
Following are the important terms to understand the concept of Array.
 Element − Each item stored in an array is called an element.
 Index − Each location of an element in an array has a numerical index, which is used to
identify the element.
#include <iostream>
using namespace std;
int main() {
   int LA[3], i;
   cout << "Array Before Insertion:" << endl;
   for(i = 0; i < 3; i++)
      cout << "LA[" << i << "] = " << LA[i] << endl;   // prints garbage values
   cout << "Inserting elements.." << endl;
   cout << "Array After Insertion:" << endl;   // prints array values
   for(i = 0; i < 3; i++) {   // loop bound fixed to 3: LA has only 3 elements
      LA[i] = i + 2;
      cout << "LA[" << i << "] = " << LA[i] << endl;
   }
   return 0;
}
public class ArrayDemo {
   public static void main(String[] args) {
      int[] LA = new int[3];
      System.out.println("Array Before Insertion:");
      for(int i = 0; i < 3; i++)
         System.out.println("LA[" + i + "] = " + LA[i]);   // prints default values (0)
      System.out.println("Inserting Elements..");
      // Printing Array after Insertion
      System.out.println("Array After Insertion:");
      for(int i = 0; i < 3; i++) {
         LA[i] = i + 3;
         System.out.println("LA[" + i + "] = " + LA[i]);
      }
   }
}

If arrays accommodate elements of the same data type, linked lists consist of elements with different data types that are also arranged sequentially.
But how are these linked lists created?
A linked list is a collection of “nodes” connected together via links. These nodes consist of the
data to be stored and a pointer to the address of the next node within the linked list. In the case
of arrays, the size is limited to the definition, but in linked lists, there is no defined size. Any amount
of data can be stored in it and can be deleted from it.
There are three types of linked lists −
 Singly Linked List − The nodes only point to the address of the next node in the list.
 Doubly Linked List − The nodes point to the addresses of both previous and next nodes.
 Circular Linked List − The last node in the list will point to the first node in the list. It can
either be singly linked or doubly linked.
Linked List Representation
Linked list can be visualized as a chain of nodes, where every node points to the next node.
C++ code for linked list:
#include <bits/stdc++.h>
using namespace std;

struct node {
   int data;
   struct node *next;
};

struct node *head = NULL;

// display the list
void printList() {
   struct node *p = head;
   cout << "\n[";
   // start from the beginning
   while(p != NULL) {
      cout << " " << p->data << " ";
      p = p->next;
   }
   cout << "]";
}

// insertion at the beginning
void insertatbegin(int data) {
   // create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;
   // point it to the old first node
   lk->next = head;
   // point head to the new first node
   head = lk;
}

int main() {
   insertatbegin(12);
   insertatbegin(22);
   insertatbegin(30);
   insertatbegin(44);
   insertatbegin(50);
   cout << "Linked List: ";
   // print list
   printList();
}
Linked list in Java:
public class Linked_List {
   static class node {
      int data;
      node next;
      node(int value) {
         data = value;
         next = null;
      }
   }

   static node head;

   // display the list
   static void printList() {
      node p = head;
      System.out.print("\n[");
      // start from the beginning
      while(p != null) {
         System.out.print(" " + p.data + " ");
         p = p.next;
      }
      System.out.print("]");
   }

   // insertion at the beginning
   static void insertatbegin(int data) {
      // create a link
      node lk = new node(data);
      // point it to the old first node
      lk.next = head;
      // point head to the new first node
      head = lk;
   }

   public static void main(String[] args) {
      insertatbegin(12);
      insertatbegin(22);
      insertatbegin(30);
      insertatbegin(44);
      insertatbegin(50);
      insertatbegin(33);
      System.out.println("Linked List: ");
      // print list
      printList();
   }
}
Linked list in Python:
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None

class SLL:
    def __init__(self):
        self.head = None

    # Print the linked list
    def listprint(self):
        printval = self.head
        print("Linked List: ")
        while printval is not None:
            print(printval.data)
            printval = printval.next

    def AddAtBeginning(self, newdata):
        NewNode = Node(newdata)
        # Update the new node's next val to the existing head node
        NewNode.next = self.head
        self.head = NewNode

l1 = SLL()
l1.head = Node("731")
e2 = Node("672")
e3 = Node("63")
l1.head.next = e2
e2.next = e3
l1.AddAtBeginning("122")
l1.listprint()
Brute Force Algorithm
The simplest possible algorithm that can be devised to solve a problem is called the brute force algorithm. To devise an optimal solution, first we need to get a solution at least and then try to optimize it. Every problem can be solved by the brute force approach, although generally not with appreciable space and time complexity.
For example, searching for a value by examining every element in turn (linear search) takes O(n) time. Later we will see this time complexity reduced to O(log n).
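As a hedged illustration (the specific example is my own), a brute force search simply examines every element:

// Brute force (linear) search: check each element until the target is found.
static int linearSearch(int[] a, int target) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == target) return i;   // found at index i
    }
    return -1;   // all n elements checked without success: O(n) worst case
}

On a sorted array, the divide and conquer binary search sketched earlier does the same job in O(log n) comparisons.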
Greedy Algorithm
In this algorithm, a decision is made that is good at that point, without considering the future. This means that some local best is chosen and treated as the global optimum. There are two properties in this algorithm:
o Greedily choosing the best option
o Optimal substructure property: an optimal solution can be found by combining optimal solutions to its subproblems.
The greedy algorithm does not always work, but when it does, it works like a charm! This algorithm is easy to devise and is most of the time the simplest one. But making locally best decisions does not always work out as it sounds. Where it fails, it is replaced by a reliable solution called the dynamic programming approach.
Applications
o Sorting: Selection Sort, Topological sort
Dynamic Algorithm
This is the most sought-after algorithm, as it provides the most efficient way of solving a problem. It simply means remembering the past and applying it to corresponding future results, and hence this algorithm is quite efficient in terms of time complexity.
Dynamic programming has two properties:
o Optimal substructure: an optimal solution to a problem contains an optimal solution to its subproblems.
o Overlapping subproblems: a recursive solution contains a small number of distinct subproblems repeated many times.
This algorithm has two versions:
o Bottom-Up Approach: starts solving from the bottom of the problem, i.e. solving the smallest possible subproblems first and using their results to solve the subproblems above them.
o Top-Down Approach: starts solving the problem from the very beginning to arrive at the required subproblem, and solves it using previously solved subproblems.
Applications
o Longest Common Subsequence, Longest Increasing Subsequence, Longest Common Substring, etc.
o Bellman-Ford algorithm
o Chain Matrix Multiplication
o Subset Sum
o Knapsack Problem & many more.
Let us take a simple example of such an algorithm: finding the Fibonacci sequence.
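A hedged Java sketch of the memoized (top-down) approach (the array-based memo layout is my own):

// Memoized Fibonacci: each distinct subproblem is solved once and
// remembered, turning exponential recursion into O(n) work.
static long fib(int n, long[] memo) {
    if (n <= 1) return n;               // base cases: F(0)=0, F(1)=1
    if (memo[n] != 0) return memo[n];   // reuse an overlapping subproblem
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

For example, fib(40, new long[41]) returns 102334155 almost instantly, whereas the naive recursion recomputes the same subproblems millions of times.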
A stack follows a LIFO (Last In First Out) order, whereas a queue follows a FIFO (First In First
Out) order for storing the elements. A stack uses one end known as a top for insertion and
deletion whereas a queue uses two ends front and rear for insertion and deletion.
What are the examples of stacks and queues?
Examples of stacks in real life: the stack of trays in a cafeteria; a stack of plates in a cupboard; a driveway that is only one car wide.
Examples of queues in real life: a ticket line; an escalator; a car wash.
Difference Between Stack and Queue
Basics
   Stack: It is a linear data structure. The objects are removed or inserted at the same end.
   Queue: It is also a linear data structure. The objects are removed and inserted from two different ends.
Working Principle
   Stack: It follows the Last In, First Out (LIFO) principle. The last inserted element gets deleted first.
   Queue: It follows the First In, First Out (FIFO) principle. The first added element gets removed first from the list.
Pointers
   Stack: It has only one pointer, the top. This pointer indicates the address of the topmost element, i.e. the last inserted one.
   Queue: It uses two pointers (in a simple queue) for reading and writing data from both ends, the front and the rear. The rear pointer indicates the address of the last inserted element, whereas the front pointer indicates the address of the first inserted element.
Operations
   Stack: Uses push and pop as two of its operations. The pop operation removes an element from the list, while the push operation inserts an element into the list.
   Queue: Uses enqueue and dequeue as two of its operations. The dequeue operation deletes elements from the queue, and the enqueue operation inserts elements into the queue.
Structure
   Stack: Insertion and deletion of elements take place from one end only, called the top.
   Queue: Uses two ends, front and rear. Insertion uses the rear end, and deletion uses the front end.
Full Condition Examination
   Stack: When top == max - 1, the stack is full.
   Queue: When rear == max - 1, the queue is full.
Empty Condition Examination
   Stack: When top == -1, the stack is empty.
   Queue: When front == rear + 1 or front == -1, the queue is empty.
Variants
   Stack: A stack data structure does not have any types.
   Queue: A queue data structure has three types: circular queue, priority queue, and double-ended queue.
Visualization
   Stack: You can visualize a stack as a vertical collection.
   Queue: You can visualize a queue as a horizontal collection.
Implementation
   Stack: The implementation is simpler.
   Queue: The implementation is comparatively more complex than a stack.
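A brief hedged Java demonstration of the two orderings, using java.util.ArrayDeque (which can act as either a stack or a queue):

import java.util.ArrayDeque;

public class StackQueueDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(1); stack.push(2); stack.push(3);
        System.out.println(stack.pop());   // prints 3: last in, first out

        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.offer(1); queue.offer(2); queue.offer(3);
        System.out.println(queue.poll());  // prints 1: first in, first out
    }
}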
The Big-O Notation
 Big-O notation is a way of measuring the time complexity of algorithms
 As time requirement is closely related to the number of operations, complexity is normally measured in the number of operations, thus the O in Big-O
 It is important to understand that Big-O notation is a measure of how the time will grow as the size of the input data grows.
 Big-O provides the worst case measure for time
 You are expected to:
o know the five Big-O functions
o calculate time complexity and express it in Big-O notation for a given algorithm
o rank time complexity functions in the order of their efficiency
Constant Time Function - O(1)
 This notation means an algorithm has the time complexity of a constant - regardless of
the growth of data input size, the algorithm will take the same amount of time.
 An example: accessing an item in a dictionary (hashtable) takes the same time regardless of how big the dictionary is. Another example is checking whether a number is even or odd using the modulo operation.
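A hedged Java sketch of both constant-time examples (the names are my own):

import java.util.HashMap;

public class ConstantTimeDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> ages = new HashMap<>();
        ages.put("amina", 21);               // O(1) insert on average
        int a = ages.get("amina");           // O(1) lookup, however large the map
        System.out.println(a % 2 == 0 ? "even" : "odd");   // O(1) parity check
    }
}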
Logarithmic Time Function - O(logN) (assuming log base 2 in this note)
 Just to re-cap some basic logarithmic operations:
o log₂(8) = 3
o log₂(16) = 4
o log₂(32) = 5
 This notation means an algorithm will increase its time by one step when its input size is doubled.
 An example: binary search. Even when the array size has doubled, there is only one more comparison added.
Linear Time Function - O(N)
 This notation means an algorithm has the time complexity of a linear function - as the data input size grows, the algorithm will take an amount of time that is directly proportional to the input size.
 An example: linear search (remember, Big-O is for the worst case). As the number of items increases, so does the number of comparisons.
Polynomial Time Function - O(N²)
 This notation means that as the data input size increases, the time required grows as the square of the input size.
 An example: bubble sort, where as the number of items increases, the worst case number of operations is the square of the number of items.
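A hedged bubble sort sketch showing where the N² comes from (two nested passes over the data):

// Bubble sort: the outer and inner loops together perform roughly
// N * N comparisons in the worst case, hence O(N²).
static void bubbleSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        for (int j = 0; j < a.length - 1 - i; j++) {
            if (a[j] > a[j + 1]) {   // swap out-of-order neighbours
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}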
Exponential Time Function - O(2^N)
 This notation means that for every added input item, the time required will be doubled.
 This time complexity belongs to intractable problems, which are problems that cannot be solved in polynomial time or less. The algorithms for this type of problem are not practical for computers to run in a reasonable amount of time.
 An example: the travelling salesman problem (TSP), where a salesman has to visit N towns. Each pair of towns is joined by a route of a given length. Find the shortest possible route that visits all the towns and returns to the starting point. The time increases exponentially for large N.
Time Complexity Comparison
 From most time efficient to least:
1. O(1)
2. O(logN)
3. O(N)
4. O(N²)
5. O(2^N)
Definition of OOP Concepts in Java
The main ideas behind Java’s Object-Oriented Programming, OOP concepts
include abstraction, encapsulation, inheritance and polymorphism. Basically, Java OOP
concepts let us create working methods and variables, then re-use all or part of them without
compromising security. Grasping OOP concepts is key to understanding how Java works.
Java defines OOP concepts as follows:
 Abstraction. Using simple things to represent complexity. We all know how to turn the
TV on, but we don’t need to know how it works in order to enjoy it. In Java, abstraction
means simple things like objects, classes and variables represent more complex
underlying code and data. This is important because it lets you avoid repeating the same
work multiple times.
 Encapsulation. The practice of keeping fields within a class private, then providing
access to those fields via public methods. Encapsulation is a protective barrier that
keeps the data and code safe within the class itself. We can then reuse objects like code
components or variables without allowing open access to the data system-wide.
 Inheritance. A special feature of Object-Oriented Programming in Java, Inheritance lets
programmers create new classes that share some of the attributes of existing classes.
Using Inheritance lets us build on previous work without reinventing the wheel.
 Polymorphism. Allows programmers to use the same word in Java to mean different things in different contexts. One form of polymorphism is method overloading: the same method name is given several parameter lists, and the compiler picks the version that matches the arguments. The other form is method overriding: a subclass provides its own version of a method it inherits, and the object's actual type decides which version runs. Let's delve a little further.
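A hedged sketch of both forms (the class and method names are my own illustrations):

// Overloading: one name, different parameter lists, resolved at compile time.
class Printer {
    void print(int n)    { System.out.println("int: " + n); }
    void print(String s) { System.out.println("String: " + s); }
}

// Overriding: a subclass redefines an inherited method, resolved at run time.
class Animal {
    void speak() { System.out.println("..."); }
}
class Dog extends Animal {
    @Override
    void speak() { System.out.println("Woof"); }
}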
Short Encapsulation Example in Java
In the example below, encapsulation is demonstrated as an OOP concept in Java. Here, the
variable “name” is kept private or “encapsulated.”
//save as Student.java
package com.javatpoint;

public class Student {
   private String name;

   public String getName() {
      return name;
   }

   public void setName(String name) {
      this.name = name;
   }
}

//save as Test.java
package com.javatpoint;

class Test {
   public static void main(String[] args) {
      Student s = new Student();
      s.setName("vijay");
      System.out.println(s.getName());
   }
}

Compile By: javac -d . Test.java
Run By: java com.javatpoint.Test
Output: vijay
Example of Inheritance in Java
It’s quite simple to achieve inheritance as an OOP concept in Java. Inheritance can be as easy
as using the extends keyword:
Example
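Since the example itself does not appear in this text, here is a minimal hedged sketch of extends (the class names are my own):

class Vehicle {
   void start() { System.out.println("Starting..."); }
}

class Car extends Vehicle {   // Car inherits start() from Vehicle
   void honk() { System.out.println("Beep!"); }
}

A new Car() object can call both start() (inherited from Vehicle) and honk() (its own method).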
Let's try to learn algorithm-writing by using an example.
Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Algorithms tell the programmers how to code the program. Alternatively, the algorithm can be written as pseudocode −
Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP
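A direct hedged translation of the ADD pseudocode into Java (sample values are assumed, since the pseudocode does not specify its input):

public class Add {
    public static void main(String[] args) {
        int a = 10, b = 20;    // Step 2 − get values of a & b
        int c = a + b;         // Step 3 − c ← a + b
        System.out.println(c); // Step 4 − display c
    }
}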
Static Linear Data Structures
In Static Linear Data Structures, the memory allocation is not scalable. Once the entire memory is
used, no more space can be retrieved to store more data. Hence, the memory is required to be
reserved based on the size of the program. This will also act as a drawback since reserving more
memory than required can cause a wastage of memory blocks.
The best example for static linear data structures is an array.
Dynamic Linear Data Structures
In Dynamic linear data structures, the memory allocation can be done dynamically when required.
These data structures are efficient considering the space complexity of the program.
Few examples of dynamic linear data structures include: linked lists, stacks and queues.
Non-Linear Data Structures
Non-Linear data structures store the data in the form of a hierarchy. Therefore, in contrast to the
linear data structures, the data can be found in multiple levels and are difficult to traverse
through. However, they are designed to overcome the issues and limitations of linear data
structures. For instance, the main disadvantage of linear data structures is the memory allocation.
Since the data is allocated sequentially in linear data structures, each element in these data
structures uses one whole memory block. However, if the data uses less memory than the
assigned block can hold, the extra memory space in the block is wasted. Therefore, non-linear
data structures are introduced. They decrease the space complexity and use the memory
optimally.
Few types of non-linear data structures are −
• Graphs
• Trees
• Tries
• Maps
TOPIC: Compiled vs. Interpreted
   C++: Compiled programming language
   Java: Java is both compiled and interpreted
   Python: Interpreted programming language
TOPIC: Platform Dependence
   C++: C++ is platform-dependent
   Java: Java is platform-independent
   Python: Python is platform-independent
TOPIC: Operator Overloading
   C++: C++ supports operator overloading
   Java: Java does not support operator overloading
   Python: Python supports operator overloading
TOPIC: Inheritance
   C++: C++ provides both single and multiple inheritance
   Java: In Java, single inheritance is possible, while multiple inheritance can be achieved using interfaces
   Python: Python provides both single and multiple inheritance
TOPIC: Thread Support
   C++: C++ does not have built-in support for threads; it depends on libraries
   Java: Java has built-in thread support
   Python: Python supports multithreading
TOPIC: Execution Time
   C++: C++ is very fast. It is, in fact, the first choice of competitive programmers
   Java: Java is much faster than Python in terms of speed of execution, but slower than C++
   Python: Due to the interpreter, Python is slow in terms of execution
TOPIC: Program Handling
   C++: Functions and variables can be used outside the class
   Java: Every bit of code (variables and functions) has to be inside the class itself
   Python: Functions and variables can be declared and used outside the class
TOPIC: Library Support
   C++: C++ has limited library support
   Java: Java provides library support for many concepts, like UI
   Python: Python has a huge set of libraries and modules
TOPIC: Code Length
   C++: Code length is less than Java, around 1.5 times less
   Java: Java code length is bigger than both Python and C++
   Python: Python has a smaller code length