PROGRAM TITLE: BTEC IN COMPUTING (SOFTWARE ENGINEERING)
UNIT TITLE: DATA STRUCTURE AND ALGORITHM
ASSIGNMENT NUMBER: ASSIGNMENT 01
ASSIGNMENT NAME: OPTIMUM CARGO SHIP ROUTING (OASR)
SUBMISSION DATE: 28-10-2022
DATE RECEIVED: 28-10-2022
TUTORIAL LECTURER: NGUYEN VAN HUY
WORD COUNT: 9951
STUDENT NAME: HOANG THANH LICH
STUDENT ID: BKC12253
MOBILE NUMBER: 0366642541
Summative Feedback:
Internal verification:
CONTENT
KEYWORDS ABBREVIATION
I. INTRODUCTION
II. THEORETICAL BASIS
1. Data Structure and Algorithm
2. Design Specification
3. Error handling
4. The effectiveness of data structures and algorithms
4.1 Performance analysis
4.2 Asymptotic analysis
4.3 Compare the performance of two sorting algorithms
4.4 Determine two ways in which the efficiency of an algorithm can be measured, illustrating your answer with an example
4.5 What do you understand by time and space trade-off? Types of time and space trade-off
III. IMPLEMENTATION
1. Implement a complex ADT and algorithm in an executable programming language to solve a given problem
2. Create unit test / test documentation
3. Evaluate the complexity of an implemented ADT/algorithm using Big-O
4. Source code and evidence (result)
References
KEYWORDS ABBREVIATION
ADT: Abstract Data Type
SDL: Specification and Description Language
QA: Quality Assurance
IT: Information Technology
CLU: the CLU programming language (an early language built around abstract data types)
UML: Unified Modeling Language
LIFO: Last In, First Out
I. INTRODUCTION
Situation: I work as an in-house software developer for BKC Software Company, a software
company that provides networking solutions. The company is part of a service delivery
cooperation project and has won a contract to design and develop an "Optimal Cargo Ship
Routing (OASR)" solution.
My account manager has assigned me a special role: to inform my team about the design and
implementation of abstract data types. I have been asked to create a report for all collaborating
partners on how to use ADTs to improve software design, development, and testing; how to
specify abstract data types and algorithms in a formal notation; and how to evaluate the
performance of data structures and algorithms.
For the "Optimal Cargo Ship Routing (OASR)" solution:
Suppose that the shipping network consists of nodes and links, where the nodes represent
coastal ports connecting different shipping segments and the links represent the costs of the
transport segments between them (assuming the cost between two nodes is the same in both
directions). The company's ships all depart from port number 0. Using the knowledge learned,
calculate the shortest path from port 0 to the remaining ports in the shipping network using
Dijkstra's algorithm, with the following initial data:
II. THEORETICAL BASIS
1. Data Structure and Algorithm
A data structure is a specialized format for organizing, processing, retrieving, and storing
data. There are several basic and advanced types of data structures, each designed to organize
data for a specific purpose. Data structures make it easy for users to access and manipulate the
data they need. Most importantly, data structures frame the organization of information so that
it can be better understood by machines and humans.
In computer science and computer programming, we can choose or design data structures to
store data for use in various algorithms. In some cases, the basic operations of algorithms are
closely related to the design of data structures. Each data structure contains information about
data values, relationships between data, and possibly functions that can be applied to the data.
For example, in object-oriented programming languages, data structures and their associated
methods are grouped together as part of the class definition. Non-object-oriented languages
allow you to define functions that manipulate data structures, but technically they are not part
of the data structure.
The importance of data structures
As applications become more complex and the amount of data increases daily, the following
issues can arise:
Processor speed: Processing very large amounts of data requires high-speed processing, but
as data grows into billions of records per entity per day, a processor may no longer be able to
handle that amount of data.
Data search: Suppose a store has an inventory of 10^6 (one million) products. If an application
needs to search for a specific item, it has to scan those 10^6 items on every search, which slows
the search process down.
Multiple requests: If thousands of users are querying data on a web server at the same time,
even a very large server can fail during the process.
Data structures are used to solve the above problems. The data is organized to form a data
structure, so you can quickly find the data you need without having to search through every
item.
The type of data structure used in a particular situation depends on the type of operation
required or the type of algorithm applied. Various data structure types include:
Array: Arrays store a collection of elements in contiguous locations. Items of the same type
are stored together so that the position of each item can be calculated or easily retrieved via
index. Array lengths can be fixed or flexible.
Array example:
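As a brief Java sketch (the class name and values here are arbitrary, chosen only for illustration):
public class ArrayExample {
    public static void main(String[] args) {
        // A fixed-length array: elements of the same type stored contiguously
        int[] marks = {72, 85, 91, 60};
        // The position of each item is computed from its index in constant time
        System.out.println(marks[2]);      // prints 91
        System.out.println(marks.length);  // fixed length: 4
    }
}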
Stack: A stack stores a collection of elements in linear order, with operations applied at one
end only, in last in, first out (LIFO) order.
Stack example:
Let's push 20, 13, 89, 90, 11, 45, 18, respectively into the stack.
Let's remove (pop) 18, 45, and 11 from the stack.
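A minimal Java sketch of that push/pop sequence, using java.util.ArrayDeque as the stack (this class choice is just one common option, not part of the assignment code):
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        // Push 20, 13, 89, 90, 11, 45, 18 in that order
        for (int value : new int[]{20, 13, 89, 90, 11, 45, 18}) {
            stack.push(value);
        }
        // Pop removes the most recently pushed items first: 18, 45, 11
        System.out.println(stack.pop()); // 18
        System.out.println(stack.pop()); // 45
        System.out.println(stack.pop()); // 11
    }
}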
Queue: A queue stores a collection of elements, similar to a stack; however, the order of
operations can only be first in, first out (FIFO).
Queue example:
Linked list: A linked list stores a collection of items in linear order. Each item or node in a
linked list contains a data item and a reference or link to the next item in the list.
Linked list example:
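A minimal Java sketch of a singly linked list; the Node class and the values here are illustrative only:
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

public class LinkedListDemo {
    public static void main(String[] args) {
        // Link three nodes: 7 -> 12 -> 5
        Node head = new Node(7);
        head.next = new Node(12);
        head.next.next = new Node(5);
        // Walk the list by following the next references
        for (Node n = head; n != null; n = n.next) {
            System.out.print(n.data + " ");
        }
    }
}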
Tree: A tree stores a collection of items in an abstract, hierarchical fashion. Each node has a
key value associated with it, and parent nodes are linked to child nodes (subnodes). There is a
root node that is the ancestor of all nodes in the tree.
Tree example:
Heap: A heap is a tree-based structure where the associated key value of each parent node is
greater than or equal to the key value of all of its child nodes.
Heap example:
Graph: A graph stores a collection of elements in a non-linear fashion. A graph consists of a
finite set of nodes, also called vertices, and the lines (also called edges) that connect them.
They are useful for representing real-world systems such as computer networks.
Graph example:
Trie: also called a keyword tree, is a data structure that stores strings of data as data items that
can be organized into visual diagrams.
Trie example:
Hash table: A hash table (also called a hash map) stores a collection of elements in an
associative array representing keys and values. A hash table uses a hash function to convert an
index into an array of buckets containing the desired data items.
Hash table:
They are considered complex data structures because they can store large amounts of
interconnected data.
Algorithm
An algorithm is a set of step-by-step procedures, or a set of rules to follow, in order to
accomplish a particular task or solve a particular problem. Algorithms are all around us. The
recipe for baking a cake, the method used to solve a long division problem, and the process of
doing laundry are all examples of algorithms. Written as a list of instructions, the algorithm
for baking a cake looks like this:

• Preheat the oven
• Gather the ingredients
• Weigh the ingredients
• Mix the ingredients into the batter
• Grease the pan
• Pour the batter into the pan
• Put the pan in the oven
• Set a timer
• Remove the pan from the oven when the timer rings
• Enjoy!
Algorithmic programming is writing a set of rules that tell a computer how to perform a task.
A computer program is essentially an algorithm that tells the computer which steps to perform,
and in what order, to carry out a particular task. Algorithms are written in a specific syntax
depending on the programming language used.
Why is it important to understand algorithms?
Algorithmic thinking, the ability to define clear steps for solving problems, is important in
many fields. We use algorithms and algorithmic thinking all the time, even if we don't realize
it. Algorithmic thinking allows students to decompose problems and conceptualize solutions
step by step. To understand and implement algorithms, students need to practice structured
thinking and reasoning skills.
Types of Algorithms
Algorithms are classified based on the concepts they use to accomplish a task. There are many
types of algorithms, but the most basic types in computer science are:
• Divide and Conquer Algorithm - divides the problem into smaller sub-problems of the same
type, solves those sub-problems, and combines their solutions to solve the original problem.
• Brute Force Algorithm - tries all possible solutions until it finds one that is satisfactory.
• Randomized Algorithm - uses a random number at least once during the computation to find
a solution to the problem.
• Greedy Algorithm - finds the best solution at the local level with the intention of finding the
best solution to the problem as a whole.
• Recursive Algorithm - solves the smallest, simplest version of the problem and then
progressively larger versions until a solution to the original problem is found.
• Backtracking Algorithm - decomposes the problem into sub-problems that can be attempted;
if a sub-problem does not lead to the desired solution, the algorithm backtracks and tries
another path forward.
• Dynamic Programming Algorithm - decomposes a complex problem into a set of simpler
sub-problems, solves each of those sub-problems only once, and saves the solutions for future
use instead of recomputing them.
An abstract data type (ADT) is a concept or model of a data type.
An object-oriented database should support all data types, not just built-in data types such as
character, integer, and real. To understand abstract data types, work backwards through the
term: drop "abstract", then drop "data", and we are left with a type, which is defined as a
collection of values. A simple example is the integer type, which holds the values 0, 1, 2, 3,
and so on. Adding the word "data", we define a data type as a type together with a set of
operations that manipulate that type. Extending the integer example, an integer variable is a
member of the integer data type, and addition, subtraction, and multiplication are examples of
operations that can be performed on it.
Adding the word "abstract" lets us define an abstract data type (ADT) as a data type: a type
and a set of operations that operate on that type, where each operation is defined only by its
inputs and outputs. An ADT does not specify how the data type is implemented; all such
details are hidden from the ADT's users, and the process of hiding them is called encapsulation.
Extending the integer data type example to an abstract data type, the possible operations are
adding an integer, removing an integer, outputting an integer, and checking whether a given
integer exists. Note that we do not care how an operation is implemented, only how it is
invoked.
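A minimal Java sketch of such an integer-collection ADT expressed as an interface; the name IntegerSet and its methods are illustrative, and any implementing class is free to choose its own internal representation (encapsulation):
// The ADT: operations are described by their inputs and outputs only
public interface IntegerSet {
    void add(int value);         // add an integer
    void remove(int value);      // remove an integer
    boolean contains(int value); // check whether a given integer exists
    void print();                // output the stored integers
}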
Let's take a look at traditional programming languages and the data types they use. Traditional
languages are built around numeric and text data types, and programs are limited to the data
types the language supports: every variable must be declared with one of those built-in types.
Object technology (OT) lifts this restriction and lets you create new data types of your own;
once defined, these new types are treated just like built-in ones. The ability to create new data
types on demand and use them is called data abstraction, and such a new data type is called an
abstract data type (ADT).
An abstract data type is more than just a collection of values. When used to create objects, it
can also be associated with methods, while hiding the details of those methods from the user.
Data abstraction and ADTs are the foundation of object technology. Because they can be
created on demand, they help you think about and design computer systems that more
accurately reflect how data is represented in the real world.
One of the main shortcomings of hierarchical, network, and relational databases is their
limited ability to support ADTs. These traditional databases have very strict data layout rules
and are not flexible enough to handle ADTs.
Queue
A queue is a linear data structure consisting of a collection of elements that follow a first-in,
first-out order: the first element inserted is also the first element removed, so items are
removed in the order in which they were inserted.
Using a real-world example, you can compare the queue data structure to a line of people
waiting for service. As soon as a person has been served, they leave the line and the next
person is served. People are helped in the order in which they arrive.
Queue structure
A queue mainly consists of two parts: the front (also called the head) and the back (also called
the tail or rear). For clarity and consistency, we will stick to the terms front and back. Elements
are inserted at the back and removed from the front. Here is a diagram for better understanding.
The pictures show different queue arrangements. Items are inserted at the back and removed
from the front. The terms used for inserting and removing items from a queue are explained in
the next section.
Note that the queue structure can be reversed. You can have the front on the right and the back
on the left. Whatever structure you choose, always remember to insert elements from the back
and remove elements from the front.
Common Operations of a Queue
Queue operations work as follows:
• two pointers, FRONT and REAR
• FRONT tracks the first element of the queue
• REAR tracks the last element of the queue
• initially, the values of FRONT and REAR are set to -1
Enqueue Operation
• check if the queue is full
• for the first element, set the value of FRONT to 0
• increase the REAR index by 1
• add the new element at the position pointed to by REAR
Dequeue Operation
• check if the queue is empty
• return the value pointed to by FRONT
• increase the FRONT index by 1
• for the last element, reset the values of FRONT and REAR to -1
The following operations are commonly used in a queue:
• Enqueue: Adds an item at the back of the queue.
• Dequeue: Removes an item from the front of the queue.
• Front/Peek: Returns the value of the item at the front of the queue without dequeuing
(removing) it.
• IsEmpty: Checks if the queue is empty.
• IsFull: Checks if the queue is full.
• Display: Prints all the items in the queue.
Before we see how to implement this with code, you need to understand how the enqueue and
dequeue operations work and how they affect the front and back positions.
The indices of arrays in most programming languages start from 0. While implementing our
code, we are going to set the index of the front and back values of our array to -1. This will
enable us to move the front and back position properly as values are added.
Stack
A stack is an abstract data type (ADT) used in almost every programming language. It is
characterized by LIFO (Last In, First Out) behavior: the last element in is the first element out.
It is named after real-world stacks, such as a deck of cards or a pile of plates.
In practice, a stack only allows operations at its top. For example, you can only place or
remove a card or plate at the top of the pile. The stack ADT therefore allows data manipulation
only at the top level: you can only access the top element of the stack.
This property makes the stack a LIFO data structure: the last element placed (inserted, added)
is accessed first. In stack terminology, the insert operation is called a PUSH operation and the
remove operation is called a POP operation.
Basic operations on stack data structures
Basic operations on a stack may involve initializing the stack, using it, and then deleting it.
In addition to these basic operations, a stack has two conceptually related primitive operations,
namely:
• push() operation: places an element on the stack.
• pop() operation: removes an element from the stack.
Once the data has been PUSHed onto the stack:
To use the stack efficiently, we also need to check the state of the stack. For this purpose, the
stack provides some supporting operations:
• peek() operation: gets the data element at the top of the stack without removing it.
• isFull() operation: checks whether the stack is full.
• isEmpty() operation: checks whether the stack is empty.
At all times, we maintain a pointer to the last PUSHed data element on the stack. Since this
pointer always represents the top of the stack, it is named top. The top pointer gives us the
value of the top element of the stack without having to perform a pop operation.
An example of a concrete data structure for a First In, First Out (FIFO) queue (code).
Suppose we want to pick out the best student in the school once the final scores are in. A
Queue ADT is a good choice for retrieving students from a sorted list. Based on this idea, I
use the dequeue operation to fetch and remove the first student code that was added (FIFO).
The code is as follows:
package datastruct;

public class Queue {
    int capacity, front, rear;
    int[] items;

    public Queue(int size) {
        capacity = size;
        items = new int[size];
        front = rear = -1;
    }

    public boolean isFull() {
        if (front == 0 && rear == capacity - 1) { return true; }
        if (front == rear + 1) { return true; }
        return false;
    }

    public boolean isEmpty() {
        if (front == -1) { return true; }
        return false;
    }

    public void enQueue(int element) {
        if (isFull()) {
            System.out.println("Queue is full");
        } else {
            if (front == -1) {
                front = 0;
            }
            rear = (rear + 1) % capacity;
            items[rear] = element;
        }
    }

    public int deQueue() {
        int element;
        if (isEmpty()) {
            System.out.println("Queue is empty");
            return -1;
        } else {
            element = items[front];
            if (front == rear) {
                // last element removed: reset the queue to the empty state
                front = rear = -1;
            } else {
                front = (front + 1) % capacity;
            }
            return element;
        }
    }

    public void display() {
        int i;
        if (isEmpty()) {
            System.out.println("Queue is empty");
        } else {
            System.out.println("Front -> " + front);
            System.out.println("Items -> ");
            for (i = front; i != rear; i = (i + 1) % capacity) {
                System.out.print(items[i] + " ");
            }
            System.out.println(items[i]);
            System.out.println("Rear -> " + rear);
        }
    }

    public static void main(String[] args) {
        Queue q = new Queue(5);
        q.enQueue(12254); q.enQueue(12257); q.enQueue(12276); q.enQueue(12248);
        q.enQueue(12283); q.display();
        int top1 = q.deQueue();
        System.out.println("-----------------------------------");
        System.out.println("The best student of k12 is:" + top1);
        System.out.println("-----------------------------------");
        q.display();
    }
}
Output :
Two shortest path algorithms.
Dijkstra's algorithm
Dijkstra's algorithm is a greedy algorithm: at each step we take the shortest known path from
one vertex to the next, and the algorithm is complete when we have visited all the vertices of
the graph.
Be careful, though: sometimes when we visit a new vertex, a shorter path may appear through
it between vertices that have already been visited.
We can see below the steps to complete Dijkstra's algorithm.
We can start at node A, from which there are two paths:
• the first goes from A to B with length 5,
• the second goes from A to C with length 3.
So, next to the visited vertices in the adjacency list, we record the two new vertices (B, C) and
the weight needed to reach them.
Then, as said earlier, we choose the path A -> C.
When visiting vertex C, we can see that there are three different paths:
• the first is C to B,
• the second is C to D,
• the third is C to E.
So we add the two new vertices to the list and choose the shortest path, C to B.
Now at B, we have three paths:
• B to D,
• B to E,
• and B back to C.
We choose the shortest path, B to D, and we update the list of weighted paths from A to the
other vertices.
Now, as we can see, there is no new path from D to E. In that case, we go back to the previous
vertices to check for the shortest path. Now there is a path of length 4 to E and one via C; in
this case, we can choose whichever path we like.
Finally, we can see that whichever alternative we take on the path from A to E, the weight is
the same, because the shortest paths are recorded in the list. At the end we can see all the paths
that we have used.
Bellman-Ford algorithm
We will find the shortest paths from node 1 to the remaining nodes.
Step 1: Initialize the graph with the distance from node 1 to itself as 0 and all other distances
as infinity.
Step 2: Perform 4 relaxation passes (|V| - 1 = 4 iterations over the edges).
In the first pass, we update the shortest paths through edges (1, 2), (1, 3), (1, 4).
In pass 2, edges (2, 5) and (3, 4) improve the paths.
In the 3rd pass, only edge (4, 5) improves the path from 1 to 5.
In pass 4, we find that no edge can further improve any path from node 1, so this graph has no
negative cycle reachable from node 1, and we can end the algorithm here.
The result is the following shortest paths from node 1 (a code sketch of the relaxation loop
follows the list):
• From 1 to 2: 1 -> 2
• From 1 to 3: 1 -> 3
• From 1 to 4: 1 -> 3 -> 4
• From 1 to 5: 1 -> 3 -> 4 -> 5
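A minimal, self-contained Java sketch of the relaxation passes described above; the node numbering follows the walkthrough, but the edge weights are assumptions chosen so that the resulting paths match the list (1 -> 3 -> 4 and 1 -> 3 -> 4 -> 5):
import java.util.Arrays;

public class BellmanFordSketch {
    // One directed edge u -> v with weight w
    static class Edge {
        int u, v, w;
        Edge(int u, int v, int w) { this.u = u; this.v = v; this.w = w; }
    }

    public static void main(String[] args) {
        int n = 5; // nodes are numbered 1..5
        Edge[] edges = {
            new Edge(1, 2, 4), new Edge(1, 3, 1), new Edge(1, 4, 7),
            new Edge(2, 5, 3), new Edge(3, 4, 2), new Edge(4, 5, 1)
        };
        int[] dist = new int[n + 1];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[1] = 0; // Step 1: source distance is 0, the rest infinity

        // Step 2: |V| - 1 = 4 relaxation passes over all edges
        for (int pass = 1; pass <= n - 1; pass++) {
            for (Edge e : edges) {
                if (dist[e.u] != Integer.MAX_VALUE && dist[e.u] + e.w < dist[e.v]) {
                    dist[e.v] = dist[e.u] + e.w;
                }
            }
        }

        // One extra pass: if any edge still relaxes, a negative cycle is reachable
        boolean negativeCycle = false;
        for (Edge e : edges) {
            if (dist[e.u] != Integer.MAX_VALUE && dist[e.u] + e.w < dist[e.v]) {
                negativeCycle = true;
            }
        }
        System.out.println("Negative cycle reachable: " + negativeCycle);
        for (int v = 2; v <= n; v++) {
            System.out.println("Shortest distance 1 -> " + v + ": " + dist[v]);
        }
    }
}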
Discuss the view that imperative ADTs are a basis for object orientation, offering a
justification for the view.
ADTs allow the creation of instances with well-defined properties and behaviors. Abstraction
allows entity instances to be grouped where only their common properties need to be
considered.
Example:
Suppose you need to create a program to manage cars and motorbikes. You can define Car
and Motorbike classes (entities), and you will find that they have a lot in common (but not
everything). It would be a mistake to derive Car from Motorbike or vice versa. What you
should do instead is define a generic abstract base class, MotorVehicle, and derive both Car
and Motorbike from that class.
Data abstraction is the encapsulation of a data type together with the procedures that provide
operations on that type. A program can declare instances of such an ADT, called objects;
object-oriented programming is based on this concept.
In object orientation, ADTs are referred to as classes. Thus, a class defines the properties of
the objects that are its instances in an object-oriented environment.
ADTs and object-oriented programming are nevertheless different concepts: OOP uses the
concept of the ADT.
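A minimal Java sketch of the MotorVehicle example above; the class and method names are illustrative:
// Generic abstract base class capturing what cars and motorbikes have in common
abstract class MotorVehicle {
    private final String plate;

    MotorVehicle(String plate) {
        this.plate = plate;
    }

    String getPlate() {
        return plate;
    }

    // Each concrete vehicle supplies its own behaviour
    abstract int wheelCount();
}

class Car extends MotorVehicle {
    Car(String plate) { super(plate); }
    int wheelCount() { return 4; }
}

class Motorbike extends MotorVehicle {
    Motorbike(String plate) { super(plate); }
    int wheelCount() { return 2; }
}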
Three advantages of using implementation-independent data structures (abstract data
structures):
Efficient memory usage: Choosing data structures well can optimize memory usage. If you do
not know the size of your data in advance, you can use a linked list instead of an array, and
storage can be freed once it is no longer in use.
Reusability: Once you have implemented a particular data structure, you can use it elsewhere
and thus reuse it. Data structure implementations can be compiled into libraries that many
different clients can use.
Abstraction: Data structures serve as the basis for abstract data types; the data structure defines
the physical form of the ADT.
2. Design Specification
Specification Description Language (SDL) is a modeling language for describing real-time
systems. The SDL diagram shows the modeling process for specifications and description
languages. It can be widely used in systems in the automotive, aerospace, communications,
medical and telecommunication industries.
An SDL diagram consists of three parts: system definitions, blocks, and processes. A system
definition defines the main nodes (blocks) of the system, such as clients and servers, while the
block diagrams show their details. The process diagrams show the processing steps for each
block. See also state machines and UML.
Organization of SDL
System - The overall design is called the system and anything outside the system is called the
environment. There is no specific graphical representation of the system, but a block
representation is available if desired.
Agents – Agents are items in the System Tree. There are two types of agents: blocks and
processes. The system is the outermost block.
Block
• A block is a structuring element that does not imply any physical implementation on the
target.
• A block can be further decomposed into blocks, and so on, allowing large systems to be
handled.
• The block symbol is a solid rectangle with its name in it.
Process
• A process is basically the code that will be executed. It is a finite state machine based task
and has an implicit message queue to receive messages. It is possible to have several instances
of the same process running independently. The number of instances present when the system
starts and the maximum number of instances are declared between parentheses after the name
of the process.
• The full syntax in the process symbol is:
<process name>[(<number of instances at startup>, <maximum number of instances>)]
• If omitted, the default values are 1 for the number of instances at startup and infinite for the
maximum number of instances.
Architecture – the overall architecture can be seen as a tree where the leaves are the processes.
Behavior SDL
A process has an implicit message queue for receiving the messages listed on its channels.
Processes are described as extended finite state machines. A process state defines the behavior
of the process when a particular stimulus arrives, and a transition is the executable code
between two states; a transition can, for example, be suspended on or resumed by a message
queue or a semaphore. A stimulus sent to an agent from the environment or from another agent
is called a signal. Signals received by an agent are first queued (input ports). While the state
machine is waiting in a state, the first signal matching that state's input port initiates a
transition to another state.
Start - The start symbol represents the starting point for the execution of the process.
State - The name of the process state is written in the state symbol.
Stop - A process can terminate itself with the stop symbol.
Message input - The message input symbol represents the type of message that is expected in
an SDL-RT state. It always follows an SDL-RT state symbol and, if the message is received,
the symbols following the input are executed. The syntax in the message input symbol is:
<message name>[(<parameter name> {, <parameter name>}*)], where <parameter name> is a
variable that needs to be declared.
Message output - A message output is used to exchange information. It puts data in the
receiver's message queue in an asynchronous way. The syntax is:
<message name>[(<parameter value> {, <parameter value>}*)] TO_XXX…
Message save - A process may have intermediate states that cannot deal with new requests
until the ongoing job is done. These new requests should not be lost but kept until the process
reaches a stable state. The save concept was made for that purpose: it basically holds the
message until it can be treated. The symbol syntax is: <message name>
Continuous signal - A continuous signal is an expression that is evaluated right after a process
reaches a new state. It is evaluated before any message input or saved messages.
Action - An action symbol contains a set of instructions in C code. The syntax is that of the C
language.
Decision - A decision symbol can be seen as a C switch/case.
Semaphore take - The semaphore take symbol is used when the process attempts to take a
semaphore.
Semaphore give - To give a semaphore, the syntax in the semaphore give SDL-RT graphical
symbol is: <semaphore name>
Timer start - To start a timer, the syntax in the start timer SDL-RT graphical symbol is:
<timer name>(<time value in tick counts>)
Timer stop - To cancel a timer, the syntax in the cancel timer SDL-RT graphical symbol is:
<timer name>
Task creation - To create a process, the syntax in the create process symbol is:
<process name>[:<process class>] [PRIO <priority>]
Procedure call - The procedure call symbol is used to call an SDL-RT procedure. The syntax
in the procedure call SDL graphical symbol is the standard C syntax:
[<return variable> =] <procedure name>({<parameters>}*);
Connectors - Connectors are used to split a transition into several pieces so that the diagram
stays legible and printable, and to gather different branches at the same point.
Transition option - The branches of the symbol have the values true or false. The true branch
is taken when the expression is defined, so the equivalent C code is: #ifdef <expression>
Comment - The comment symbol allows any kind of informal text to be written and connected
to the desired symbol. If needed, the comment symbol can be left unconnected.
Extension - The extension symbol is used to complete an expression in a symbol. The
expression in the extension symbol is considered part of the expression in the connected
symbol, so the syntax is that of the connected symbol.
Procedure start - This symbol is specific to a procedure diagram. It indicates the procedure
entry point.
Procedure return - This symbol is specific to a procedure diagram. It indicates the end of the
procedure.
Text symbol - This symbol is used to declare C-type variables.
Additional heading symbol - This symbol is used to declare SDL-RT specific headings.
Specify the abstract data type for a software stack using SDL Behavior (including
notation explanation).
A stack is one of the most common data structures. We can implement a stack using an array
or a linked list. A stack has only one end, referred to as TOP, so elements can only be inserted
and removed at TOP. Hence a stack is also known as LIFO (Last In First Out). The main
functions of a stack are PUSH(), POP() and PEEK():
• PUSH(): inserts an element into the stack.
• POP(): deletes an element from the stack.
• PEEK(): returns the top element of the stack.
Start - The start symbol represents the starting point for the execution of the process.
Message save - A process may have intermediate states that cannot deal with new requests
until the ongoing job is done. These new requests should not be lost but kept until the process
reaches a stable state. The save concept was made for that purpose: it basically holds the
message until it can be treated. The symbol syntax is: <message name>
Action - An action symbol contains a set of instructions in C code. The syntax is that of the C
language.
Decision - A decision symbol can be seen as a C switch/case.
PUSH operation:
Step 1: Start
Step 2: Declare Stack[MAX]; // maximum size of the stack
Step 3: Check whether the stack is full by comparing top with (MAX - 1).
If the stack is full, print "Stack Overflow", i.e., the stack is full and no further element can be
pushed.
Step 4: Otherwise, the stack is not full: increment top by 1 and set a[top] = x, which pushes the
element x into the position pointed to by top. // the element x is stored in a[top]
Step 5: Stop
POP operation:
Step 1: Start
Step 2: Declare Stack[MAX]
Step 3: Push the elements into the stack
Step 4: Check whether the stack is empty by comparing top with the base of the array, i.e., 0.
If top is less than 0, the stack is empty; print "Stack Underflow".
Step 5: Otherwise, if top is greater than or equal to zero, the stack is not empty: store the value
pointed to by top in a variable, x = a[top], and decrement top by 1. The popped element is x.
Step 6: Stop
PEEK operation:
Step 1: Start
Step 2: Declare Stack[MAX]
Step 3: Push the elements into the stack
Step 4: Print the value stored at the position pointed to by top.
Step 5: Stop
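A minimal Java sketch of these PUSH/POP/PEEK steps on a fixed-size array; the class name ArrayStack and the capacity used in main are illustrative:
public class ArrayStack {
    private final int[] a;
    private int top = -1; // -1 means the stack is empty

    public ArrayStack(int max) {
        a = new int[max];
    }

    // PUSH: report overflow if top == MAX - 1, otherwise store x at a[++top]
    public void push(int x) {
        if (top == a.length - 1) {
            System.out.println("Stack Overflow");
        } else {
            a[++top] = x;
        }
    }

    // POP: report underflow if top < 0, otherwise return a[top--]
    public int pop() {
        if (top < 0) {
            System.out.println("Stack Underflow");
            return -1;
        }
        return a[top--];
    }

    // PEEK: return the value pointed to by top without removing it
    public int peek() {
        if (top < 0) {
            System.out.println("Stack is empty");
            return -1;
        }
        return a[top];
    }

    public static void main(String[] args) {
        ArrayStack s = new ArrayStack(3);
        s.push(10); s.push(20); s.push(30);
        System.out.println(s.peek()); // 30
        System.out.println(s.pop());  // 30
        System.out.println(s.pop());  // 20
    }
}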
3. Error handling
Error handling testing is a type of software testing performed to check the system's ability to
handle software errors and exceptions at runtime. These tests are done with the help of both
developers and testers. An error handling technique should cover both error and exception
scenarios.
Objectives of error handling testing:
• to check the system's ability to handle errors;
• to check the system's highest soak point;
• to make sure errors can be handled properly by the system in the future;
• to make the system capable of exception handling as well.
Advantages of error handling testing:
• it helps in the construction of software with robust error handling;
• it makes the software ready for all circumstances;
• it develops the exception handling technique in the software;
• it helps in the maintenance of the software.
Disadvantages of error handling testing:
• it is costly, as both the development and testing teams are involved;
• it takes a lot of time to perform the testing operations.
4. The effectiveness of data structures and algorithms.
4.1 Performance analysis
Software performance analysis examines how a particular program performs on a day-to-day
basis, recording what is slowing performance and causing errors now and what could become
a problem in the future. Performance issues are not always present in a form that is easy to
detect during the QA process; instead, they may develop over time after the project is deployed.
Software performance analysis keeps your team honest. Developers should test continuously
as versions are updated, code is added, other applications interact with the code, and hosting
changes, and the IT team should keep watching the code. Many companies run software
performance analysis only sporadically, but that is not enough. Performance should be
monitored continuously, because problems are not always obvious, and detecting them early
(perhaps even before end users are affected) can save a lot of time and effort.
Software Performance Evaluations & Analysis Eliminate Rework
Designing a new application or making changes to an existing application can introduce bugs
and problems. This can have an immediate impact on software performance or manifest as
slow leaks over time. In any case, you need to catch it as soon as possible. These pressing
issues may seem the most dangerous, but they don't have to be. Slow and small issues that
become more complex over time can eventually lead to crashes and security vulnerabilities
that can bring your entire portfolio down. It can be unsettling to think about web applications,
mobile applications, internal operations, and all that a simple mistake can bring to a halt.
Software performance must meet today's performance requirements, regardless of when the
software was created. Even a team that tries to keep up can fall behind and uncover a large
number of problems at once. If something is left alone for too long, the only answer may be to
discard the existing work and build something completely new, losing everything already done.
To avoid this, software performance analysis should be automated at every stage of the process.
IT teams and developers need to get their work right the first time and eliminate a lot of rework.
Integrating software performance analysis into your existing culture makes a difference that
leads to increased productivity and employee satisfaction.
Software Performance Analysis Helps Businesses Achieve Optimization
Of course, IT teams are expected to perform at a certain level and take a certain amount of
pride in their work. Yet we all make mistakes. Mistakes are bound to happen when a task
becomes as routine or repetitive as coding. Everyone needs to take action so they can work
efficiently towards their goals. The work must be done well and quickly.
Software performance analysis allows you to check your work. It's important to note that this
is more than just testing. Instead, it's a complete and comprehensive oversight of your code
over the life of your application.
Incorporate Software Performance Analysis Into Your System
After all, software performance analysis helps you simulate how your system will perform
today, tomorrow, three weeks from now, and next year. Check for potential issues that may
arise when testing against traffic, load conditions, and business needs. The goal is to reduce
technical debt while getting better business value from your applications. To do this, software
performance analysis should be used to identify potential problems early in development.
Remember: The further into the lifecycle, the more expensive it is to fix bugs. Finding bugs
early can help reduce costs. However, it should be used properly and in a way that is
conducive to development.
With quality software performance, IT teams can focus more on quality development and risk
mitigation.
4.2 Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining mathematical bounds on its run-time
performance. Using asymptotic analysis we can describe the best-case, average-case, and
worst-case behaviour of an algorithm.
Asymptotic analysis is input dependent, i.e., if there is no input to the algorithm, it is assumed
to run in constant time. All factors other than the input are treated as constant.
Asymptotic analysis expresses the running time of an operation as a function of the input size,
counted in computational units. For example, the running time of one operation might be
computed as f(n) and the running time of another operation as g(n²). This means that the
running time of the first operation grows linearly as n increases, while the running time of the
second grows quadratically. When n is very small, the two operations may take approximately
the same time.
Usually, the time required by an algorithm falls under three types:
• Best case - the minimum time required for program execution.
• Average case - the average time required for program execution.
• Worst case - the maximum time required for program execution.
Asymptotic Notations
Following are the commonly used asymptotic notations for describing the running-time
complexity of an algorithm:
• Ο notation
• Ω notation
• θ notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express an upper bound on an algorithm's running time.
It measures the worst-case time complexity, i.e., the longest amount of time an algorithm can
possibly take to complete.
For a function g(n):
Ο(g(n)) = { f(n) : there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n > n0 }
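As a small worked example (the function chosen here is arbitrary):
f(n) = 3n + 10 and g(n) = n
3n + 10 ≤ 4·n for all n ≥ 10
so, taking c = 4 and n0 = 10, we have f(n) ≤ c·g(n) for all n > n0, and therefore f(n) = Ο(n).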
4.3 Compare the performance of two sorting algorithms
Quick sort is an internal sorting algorithm based on the divide and conquer strategy. In this:
• The array of elements is divided into parts repeatedly until it is not possible to divide it
further.
• It is also known as "partition exchange sort".
• It uses a key element (the pivot) for partitioning the elements.
• The left partition contains all the elements that are smaller than the pivot, and the right
partition contains all the elements that are greater than the pivot.
Merge sort is commonly used as an external sorting algorithm and is also based on the divide
and conquer strategy. In this:
• The elements are split into two sub-arrays (n/2) again and again until only one element is left
in each.
• Merge sort uses additional storage for the auxiliary array used while merging.
• Merge sort uses three arrays: two hold the two halves, and the third holds the final sorted list
produced by merging the other two; each half is sorted recursively.
• Finally, all the sub-arrays are merged back into a single sorted array of n elements.
Partition of elements in the array:
In merge sort, the array is always divided into two halves (n/2),
whereas
in quick sort the array can be divided in any ratio; there is no requirement to divide the array
of elements into equal parts.
Worst case complexity:
The worst case complexity of quick sort is O(n²), because a large number of comparisons are
needed in the worst case,
whereas
in merge sort the worst case and average case have the same complexity, O(n log n).
Usage with datasets:
Merge sort works well on data sets of any size, large or small,
whereas
quick sort is less dependable on very large data sets, because a poor sequence of pivot choices
can push it towards its O(n²) worst case.
Additional storage space requirement:
Merge sort is not in place, because it requires additional memory space to store the auxiliary
arrays,
whereas
quick sort is in place, since it needs no significant additional storage.
Efficiency:
Merge sort is generally more efficient and faster than quick sort for larger arrays or data sets,
whereas
quick sort is generally more efficient and faster than merge sort for smaller arrays or data sets.
Sorting method:
Quick sort is an internal sorting method, where the data is sorted in main memory,
whereas
merge sort can be used as an external sorting method, where the data to be sorted does not fit
in main memory and auxiliary storage is needed during sorting.
Stability:
Merge sort is stable: two elements with equal value appear in the same order in the sorted
output as they did in the unsorted input,
whereas
quick sort is unstable in this scenario, although it can be made stable with some changes to the
code.
Preferred for:
Quick sort is preferred for arrays,
whereas
merge sort is preferred for linked lists.
Locality of reference:
Quick sort exhibits good cache locality, and this makes quick sort faster than merge sort in
many cases (for example, in a virtual memory environment).
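A minimal Java sketch of the merge sort described above, in a top-down form with one auxiliary array; the class and helper names are illustrative:
import java.util.Arrays;

public class MergeSortSketch {
    // Sort a[lo..hi] by splitting into halves, sorting each, then merging
    static void sort(int[] a, int[] aux, int lo, int hi) {
        if (hi <= lo) return;
        int mid = lo + (hi - lo) / 2;
        sort(a, aux, lo, mid);
        sort(a, aux, mid + 1, hi);
        merge(a, aux, lo, mid, hi);
    }

    // Merge the two sorted halves a[lo..mid] and a[mid+1..hi]
    static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if (i > mid)              a[k] = aux[j++];
            else if (j > hi)          a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else                      a[k] = aux[i++]; // ties keep the left element first: stable
        }
    }

    public static void main(String[] args) {
        int[] data = {20, 13, 89, 90, 11, 45, 18};
        sort(data, new int[data.length], 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [11, 13, 18, 20, 45, 89, 90]
    }
}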
4.4 Determine two ways in which the efficiency of an algorithm can be measured, illustrating
your answer with an example.
A good algorithm is correct, but a great algorithm is both correct and efficient. The most
efficient algorithm is one that takes the least amount of execution time and memory usage
possible while still yielding a correct answer.
Measuring the efficiency
One way to measure the efficiency of an algorithm is to count how many operations it needs
in order to find the answer across different input sizes.
Let's start by measuring the linear search algorithm, which finds a value in a list. The
algorithm looks through each item in the list, checking each one to see if it equals the target
value. If it finds the value, it immediately returns the index. If it never finds the value after
checking every list item, it returns -1.
PROCEDURE searchList(numbers, targetNumber) {
    index ← 1
    REPEAT UNTIL (index > LENGTH(numbers)) {
        IF (numbers[index] = targetNumber) {
            RETURN index
        }
        index ← index + 1
    }
    RETURN -1
}
Let's step through the algorithm for a sample input: a list of 6 numbers where numbers[1] = 3, numbers[2] = 37, and numbers[3] = 45, with targetNumber equal to 45.
The algorithm starts off by initializing the variable index to 1. It then begins to loop.
Iteration #1:
• It checks if index is greater than LENGTH(numbers). Since 1 is not greater than 6, it executes
the code inside the loop.
• It compares numbers[index] to targetNumber. Since 3 is not equal to 45, it does not execute
the code inside the conditional.
• It increments index by 1, so it now stores 2.
Iteration #2:
• It checks if index is greater than LENGTH(numbers). Since 2 is not greater than 6, it executes
the code inside the loop.
• It compares numbers[index] to targetNumber. Since 37 is not equal to 45, it does not execute
the code inside the conditional.
• It increments index by 1, so it now stores 3.
Iteration #3:
• It checks if index is greater than LENGTH(numbers). Since 3 is not greater than 6, it executes
the code inside the loop.
• It compares numbers[index] to targetNumber. Since 45 is equal to 45, it executes the code
inside the conditional.
• It returns the current index, 3.
Now let's count the number of operations that the linear search algorithm needed to find that
value, by counting how often each type of operation was called.
That's a total of 10 operations to find the targetNumber at the index of 3. Notice the
connection to the number 3? The loop repeated 3 times, and each time, it executed 3
operations.
The best case for an algorithm is the situation which requires the least number of operations.
According to that table, the best case for linear search is when targetNumber is the very first
item in the list.
What about the worst case? According to that table, the worst case is when targetNumber is
the very last item in the list, since that requires repeating the 3 loop operations for every item
in the list.
There is actually a slightly worse worst case: the situation where targetNumber is not in the
list at all. That'll require a couple extra operations, to check the loop condition and return -1. It
doesn't require an entire extra loop repetition, however, so it's not that much worse.
Depending on our use case for linear search, the worst case may come up very often or almost
never at all.
The average case is when targetNumber is in the middle of the list. That's the example we
started with, and that required 3 repetitions of the loop for the list of 6 items.
Let's describe the efficiency in more general terms, for a list of n items. The average case
requires 1 operation for variable initialization, then n/2 loop repetitions, with 3 operations per
loop. That's 1 + 3(n/2).
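Plugging the example list into that formula as a quick check:
1 + 3·(n/2) with n = 6 gives 1 + 3·3 = 10,
which matches the 10 operations counted in the walkthrough above.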
This table shows the number of operations for increasing list sizes:
Let's see that as a graph:
Generally, we can say that there's a "linear" relationship between the number of items in the
list and the number of operations required by the algorithm. As the number of items increase,
the number of operations increases in proportion.
The number of operations does not tell us the amount of time a computer will take to actually
run an algorithm. The running time depends on implementation details like the speed of the
computer, the programming language, and the translation of the language into machine code.
That's why we typically describe efficiency in terms of number of operations.
However, there are still times when a programmer finds it helpful to measure the run time of
an implemented algorithm. Perhaps a particular operation takes a surprisingly long amount of
time, or maybe there's some aspect of the programming language that's slowing an algorithm
down.
To measure the run time, we must make the algorithm runnable; we must implement it in an
actual programming language.
function searchList(numbers, targetNumber) {
    for (var index = 0; index < numbers.length; index++) {
        if (numbers[index] === targetNumber) {
            return index;
        }
    }
    return -1;
}
The program below reports the run time for lists of 6 numbers, 60 numbers, and 600 numbers.
var searchList = function(numbers, targetNumber) {
    for (var index = 0; index < numbers.length; index++) {
        if (numbers[index] === targetNumber) { return index; }
    }
    return -1;
};

// Makes numCalls calls to searchList with a list of size listSize
// Prints the approximate number of nanoseconds per call
var measureRunTime = function(listSize, numCalls) {
    // Prepare sorted list of required size
    var numbers = [];
    for (var i = 1; i < (listSize + 1); i++) {
        numbers.push(i);
    }
    // Set target number to random number in list
    var targetNumber = numbers[floor(random(0, listSize))];
    // Record start time in milliseconds
    var startMS = millis();
    for (var i = 0; i < numCalls; i++) {
        searchList(numbers, targetNumber);
    }
    // Record end time in milliseconds
    var endMS = millis();
    // Calculate milliseconds per call to searchList
    var msPerCall = (endMS - startMS) / numCalls;
    // Convert to nanoseconds per call and print
    println(msPerCall * 1000000);
};
measureRunTime(6, 100000);
measureRunTime(60, 100000);
measureRunTime(600, 100000);
Here are the results from one run on my computer:
This graph plots those points:
As predicted, the relationship between the points is roughly linear. As the number of
operations increases by a factor of 10, the nanoseconds increases at nearly that same factor of
10. The exact numbers aren't as clean as our operation counts, but that's expected, since all
sorts of real world factors can affect actual running time on a computer.
4.5 What do you understand by time and space trade-off? Types of time and space trade-off?
A space-time trade-off in algorithms is a compromise in which one resource increases while
the other decreases. A problem can be solved either by:
• using more space and less time, or
• using less space at the cost of more time.
The best algorithm is one that solves the problem while using little memory and taking little
time to produce its output, but in general it is not always possible to achieve both at the same
time.
Types of space-time trade-off:
• Compressed or uncompressed data
• Re-rendering or stored images
• Smaller code or loop unrolling
• Lookup tables or recalculation
Compressed or uncompressed data: A space-time trade-off can be applied to the problem of
data storage. If the data is stored uncompressed, it takes more space but less time; if it is stored
compressed, it takes less space but more time, because the decompression algorithm must run.
There are, however, many cases where it is possible to work directly with compressed data;
compressed bitmap indices, for example, can be faster to work with than their uncompressed
form.
Re-rendering or stored images: Storing only the source and re-rendering the image each time
it is needed takes less space but more time, while storing the rendered image in a cache takes
more memory but is faster than re-rendering.
Smaller code or loop unrolling: Smaller code occupies less space in memory but requires more
computation time, because of the jump back to the beginning of the loop at the end of each
iteration. Loop unrolling can improve execution speed at the cost of increased binary size: it
occupies more space in memory but requires less computation time.
Lookup tables or recalculation: An implementation can include an entire lookup table, which
reduces computing time but increases the amount of memory needed, or it can recalculate, i.e.,
compute table entries as needed, which increases computing time but reduces memory
requirements.
For example: in mathematical terms, the sequence Fn of Fibonacci numbers is defined by the
recurrence relation
Fn = Fn-1 + Fn-2, with seed values F0 = 0 and F1 = 1.
A simple solution is to find the Nth Fibonacci term using recursion directly from the above
recurrence relation. Below is the implementation using recursion:
// C++ program to find the Nth Fibonacci
// number using recursion
#include <iostream>
using namespace std;

// Function to find the Nth Fibonacci term
int Fibonacci(int N)
{
    // Base case
    if (N < 2)
        return N;

    // Recursively compute the term
    // using the recurrence relation
    return Fibonacci(N - 1) + Fibonacci(N - 2);
}

// Driver code
int main()
{
    int N = 5;

    // Function call
    cout << Fibonacci(N);
    return 0;
}
Output: 5
Time Complexity: O(2^N)
Auxiliary Space: O(1)
Explanation: The time complexity of the above implementation is exponential due to repeated
recalculation of the same subproblems. The auxiliary space used is minimal. But our goal is to
reduce the time complexity of the approach, even if that requires extra space. The optimized
approach is discussed below.
Efficient approach: To optimize the above approach, the idea is to use dynamic programming,
reducing the complexity by storing the solutions of the overlapping subproblems (shown in the
recursion tree below) so that each is computed only once.
Below is the implementation of the above approach:
// C++ program to find the Nth Fibonacci
// number using dynamic programming (tabulation)
#include <iostream>
#include <vector>
using namespace std;

// Function to find the Nth Fibonacci term
int Fibonacci(int N)
{
    // Table of subproblem results (a vector avoids a
    // non-standard variable-length array)
    vector<int> f(N + 2);

    // 0th and 1st numbers of the
    // series are 0 and 1
    f[0] = 0;
    f[1] = 1;

    // Iterate over the range [2, N]
    for (int i = 2; i <= N; i++) {
        // Add the previous 2 numbers
        // in the series and store it
        f[i] = f[i - 1] + f[i - 2];
    }

    // Return the Nth Fibonacci number
    return f[N];
}

// Driver code
int main()
{
    int N = 5;

    // Function call
    cout << Fibonacci(N);
    return 0;
}
Output: 5
Time Complexity: O(N)
Auxiliary Space: O(N)
Explanation: The time complexity of the above implementation is linear, achieved by using
auxiliary space to store the results of the overlapping subproblems so that they can be reused
when required.
III. IMPLEMENTATION
1. Implement a complex ADT and algorithm in an executable programming language to
solve a given problem.
Dijkstra's algorithm finds the shortest path from one vertex s to the remaining vertices of the
graph, together with the corresponding length (weight).
The method of the algorithm is to determine the vertices in ascending order of their distance
from s.
The algorithm is built on the basis of assigning each vertex a temporary label. The temporary
label of a vertex is an upper bound on the length of the shortest path from s to that vertex.
The labels of the vertices change over the iterations: at each iteration one temporary label
becomes permanent. Once the label of a vertex becomes permanent, it is the length of the
shortest path from s to that vertex.
Following Dijkstra's algorithm, I implement the following steps:
Step 1: create the vertex and edge classes used to describe the graph in an abstract way.
Step 2: initialize a priority queue with the starting vertex as its first element, temporarily label
the other vertices while looping over the edges to find shorter paths, and add the appropriate
vertices to the priority queue.
Step 3: display the adjacent vertices that satisfy the conditions, forming the shortest paths.
In conclusion, I use the abstract data type queue (a priority queue) together with Dijkstra's
algorithm to complete the program.
2. Create unit test / test documentation.
I will use JUnit 5, and this test document tests the important methods of each class in the
program.
Test execution steps: test the Vert class, the Edge class, the PathFinder class, the Dijkstra class,
and finally the main method.
The full test documentation is attached separately.
Class VertTest:
package baiTapAss;

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class VertTest {
    @Test
    void testVert() {
        Vert test = new Vert("201");
        assertEquals(test.getName(), "201");
    }

    @Test
    void testVisited() {
        Vert test = new Vert("202");
        test.setVisited(false);
        assertFalse(test.Visited());
    }
}
Class EdgeTest:
package baiTapAss;

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class EdgeTest {
    @Test
    void testEdge() {
        Vert currVert = new Vert("301");
        Vert nextVert = new Vert("302");
        Edge testEdge = new Edge(10, currVert, nextVert);
        assertNotNull(testEdge);
    }
}
Class PathFinderTest:
package baiTapAss;

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class PathFinderTest {
    @Test
    void testShortestP() {
        PathFinder shortestPath = new PathFinder();
        assertNotNull(shortestPath);
    }

    @Test
    void testGetShortestP() {
        Vert test = new Vert("401");
        Vert test1 = new Vert("402");
        test.setPr(test1);
        assertNotNull(test.getPr());
    }
}
Class DijkstraTest:
package baiTapAss;

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class DijkstraTest {
    @Test
    void testAddNext() {
        Vert currVert = new Vert("101");
        Vert nextVert = new Vert("102");
        currVert.addNeighbour(new Edge(10, currVert, nextVert));
        assertEquals(currVert.getList().get(0).getTargetVert(), nextVert);
        System.out.println("success");
    }
}
Result :
3. Evaluate the complexity of an implemented ADT/ algorithm using Big-O.
The time complexity of Dijkstra's algorithm is O(V²), but with a min-priority queue it drops
down to O(V + E log V) (according to:
https://www.hackerearth.com/practice/algorithms/graphs/shortest-path-algorithms/tutorial/ ).
If you have an array of size n and you want to build a heap from all items at once, Floyd's
algorithm can do it with O(n) complexity. This corresponds to the std::priority_queue
constructors that accept a container parameter.
If you have an empty priority queue to which you want to add n items, one at a time, then the
complexity is O(n log n).
So if you have all of the items that will go into your queue before you build it, the first method
is more efficient. You use the second method (adding items individually) when you need to
maintain a queue, adding and removing elements over some period of time.
Removing n items from the priority queue is also O(n log n).
(according to : https://stackoverflow.com/questions/44650882/time-complexity-of-a-priorityqueue-in-c )
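The same trade-off can be sketched in Java, the language used for the implementation in this report: the collection constructor of java.util.PriorityQueue builds the heap from all elements in one pass, while offer() inserts them one at a time (the class name HeapBuildDemo and the item count are illustrative):
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class HeapBuildDemo {
    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>();
        for (int i = 1000; i >= 1; i--) {
            items.add(i);
        }

        // Build the heap from all items at once (one heap-construction pass)
        PriorityQueue<Integer> bulk = new PriorityQueue<>(items);

        // Build the heap by adding items one at a time
        // (each offer() is a logarithmic sift operation)
        PriorityQueue<Integer> oneByOne = new PriorityQueue<>();
        for (int value : items) {
            oneByOne.offer(value);
        }

        // Both queues deliver the same ascending order when polled
        System.out.println(bulk.poll() + " " + oneByOne.poll()); // 1 1
    }
}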
4. Source code and evidence (result).
Class Vert:
package baiTapAss;

import java.util.ArrayList;
import java.util.List;

public class Vert implements Comparable<Vert> {
    private boolean visited;
    private String name;
    private List<Edge> List;
    private double dist = Double.MAX_VALUE;
    private Vert pr;

    public Vert(String name) {
        this.name = name;
        this.List = new ArrayList<>();
    }

    public List<Edge> getList() {
        return List;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setList(List<Edge> List) { this.List = List; }

    public void addNeighbour(Edge edge) { this.List.add(edge); }

    public boolean Visited() { return visited; }

    public void setVisited(boolean visited) {
        this.visited = visited;
    }

    public Vert getPr() {
        return pr;
    }

    public void setPr(Vert pr) {
        this.pr = pr;
    }

    public double getDist() {
        return dist;
    }

    public void setDist(double dist) {
        this.dist = dist;
    }

    @Override
    public String toString() {
        return this.name;
    }

    @Override
    public int compareTo(Vert otherV) {
        return Double.compare(this.dist, otherV.getDist());
    }
}
Class Edge:
package baiTapAss;

public class Edge {
    private double weight;
    private Vert startVert;
    private Vert targetVert;

    public Edge(double weight, Vert startVert, Vert targetVert) {
        this.weight = weight;
        this.startVert = startVert;
        this.targetVert = targetVert;
    }

    public double getWeight() { return weight; }

    public void setWeight(double weight) { this.weight = weight; }

    public Vert getStartVert() { return startVert; }

    public void setStartVert(Vert startVert) { this.startVert = startVert; }

    public Vert getTargetVert() { return targetVert; }

    public void setTargetVert(Vert targetVert) {
        this.targetVert = targetVert;
    }
}
Class PathFinder :
package baiTapAss;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class PathFinder {
    public void ShortestP(Vert sourceV) {
        sourceV.setDist(0);
        PriorityQueue<Vert> priorityQueue = new PriorityQueue<>();
        priorityQueue.add(sourceV);
        sourceV.setVisited(true);
        while (!priorityQueue.isEmpty()) {
            // Take the unsettled vertex with the smallest tentative distance
            Vert actualVertex = priorityQueue.poll();
            for (Edge edge : actualVertex.getList()) {
                Vert v = edge.getTargetVert();
                if (!v.Visited()) {
                    double newDistance = actualVertex.getDist() + edge.getWeight();
                    // Relax the edge if it shortens the path to v
                    if (newDistance < v.getDist()) {
                        priorityQueue.remove(v);
                        v.setDist(newDistance);
                        v.setPr(actualVertex);
                        priorityQueue.add(v);
                    }
                }
            }
            actualVertex.setVisited(true);
        }
    }

    public List<Vert> getShortestP(Vert targetVertex) {
        // Walk back through the predecessor chain, then reverse it
        List<Vert> path = new ArrayList<>();
        for (Vert vertex = targetVertex; vertex != null; vertex = vertex.getPr()) {
            path.add(vertex);
        }
        Collections.reverse(path);
        return path;
    }
}
Class Dijkstra:
package baiTapAss;

public class Dijkstra {
    public static void addNext(int x, Vert currVert, Vert nextVert) {
        currVert.addNeighbour(new Edge(x, currVert, nextVert));
    }

    public static void main(String[] args) {
        Vert v0 = new Vert("0"); Vert v1 = new Vert("1");
        Vert v2 = new Vert("2"); Vert v3 = new Vert("3");
        Vert v4 = new Vert("4"); Vert v5 = new Vert("5");
        Vert v6 = new Vert("6"); Vert v7 = new Vert("7");
        Vert v8 = new Vert("8");

        addNext(5, v0, v1);
        addNext(3, v0, v2);
        addNext(4, v1, v3);
        addNext(2, v1, v4);
        addNext(6, v1, v5);
        addNext(5, v2, v4); addNext(7, v2, v5);
        addNext(8, v3, v6); addNext(7, v4, v6);
        addNext(12, v4, v8); addNext(6, v4, v7);
        addNext(5, v5, v7); addNext(8, v6, v7);
        addNext(8, v6, v8);
        addNext(7, v7, v8);

        PathFinder shortestPath = new PathFinder();
        shortestPath.ShortestP(v0);

        System.out.println("Minimum distance from:");
        System.out.println("0 to 1: " + v1.getDist());
        System.out.println("0 to 2: " + v2.getDist());
        System.out.println("0 to 3: " + v3.getDist());
        System.out.println("0 to 4: " + v4.getDist());
        System.out.println("0 to 5: " + v5.getDist());
        System.out.println("0 to 6: " + v6.getDist());
        System.out.println("0 to 7: " + v7.getDist());
        System.out.println("0 to 8: " + v8.getDist());
        System.out.println("Shortest path from:");
        System.out.println("0 to 1: " + shortestPath.getShortestP(v1));
        System.out.println("0 to 2: " + shortestPath.getShortestP(v2));
        System.out.println("0 to 3: " + shortestPath.getShortestP(v3));
        System.out.println("0 to 4: " + shortestPath.getShortestP(v4));
        System.out.println("0 to 5: " + shortestPath.getShortestP(v5));
        System.out.println("0 to 6: " + shortestPath.getShortestP(v6));
        System.out.println("0 to 7: " + shortestPath.getShortestP(v7));
        System.out.println("0 to 8: " + shortestPath.getShortestP(v8));
    }
}
Output :
References
- online.visual-paradigm.com (2022). What is Specification and Description Language (SDL). [online]
Available at: https://online.visual-paradigm.com/knowledge/sdl-diagram/what-is-sdl-diagram/ [Accessed 10-2022].
- atechdaily.com (2022). Algorithm and Flowchart for Stack using Arrays. By Shaddy. [online] Available at:
https://atechdaily.com/posts/algorithm-and-flowchart-for-stack-using-arrays [Accessed 10-2022].
- www.tutorialspoint.com (2022). Data Structures - Asymptotic Analysis. [online] Available at:
https://www.tutorialspoint.com/data_structures_algorithms/asymptotic_analysis.htm [Accessed 10-2022].
- www.geeksforgeeks.org (2022). Quick Sort vs Merge Sort. By Ankit_Bisht. [online] Available at:
https://www.geeksforgeeks.org/quick-sort-vs-merge-sort/ [Accessed 10-2022].