AMORTIZED ANALYSIS
Amortized analysis is a technique for estimating the time complexity of an algorithm that performs a
sequence of operations. Instead of bounding each operation in isolation, it bounds the total cost of the
entire sequence in the worst case and then attributes an average (amortized) cost to each operation.
In general, we can use the following steps to perform an amortized analysis:
1. Identify the operations that are performed by the algorithm.
2. Determine the cost of each individual operation, noting which occurrences are cheap and which
are expensive.
3. Determine how many times each kind of operation is performed in the sequence.
4. Calculate the total cost of the entire sequence of operations.
5. Divide the total cost by the number of operations to get the average (amortized) cost per
operation.
Amortized analysis is a useful tool for analyzing the performance of algorithms whose individual operations
vary widely in cost, as it gives a more accurate view of overall efficiency than a per-operation worst-case bound.
One common example where amortized analysis is used is in the analysis of dynamic arrays, which can
dynamically resize themselves as elements are added or removed. The most common implementation
of a dynamic array is to double its size whenever it runs out of space.
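As a concrete point of reference, here is a minimal Python sketch of such a dynamic array, assuming it starts with capacity 1 and doubles whenever it is full (the class name DynamicArray and its methods are illustrative, not taken from any particular library; only append is shown, since that is the operation analyzed below):

class DynamicArray:
    def __init__(self):
        self._capacity = 1                      # assumed starting capacity
        self._size = 0                          # number of stored elements
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:        # no free slot left
            self._resize(2 * self._capacity)    # O(size) copy, happens rarely
        self._data[self._size] = value          # O(1) write into a free slot
        self._size += 1

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        for i in range(self._size):             # copy every existing element
            new_data[i] = self._data[i]
        self._data = new_data
        self._capacity = new_capacity

The expensive step is the loop inside _resize: its cost is proportional to the number of elements already stored, which is exactly the resizing cost discussed below.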
For example, suppose we have an initially empty dynamic array, and we perform a sequence of n
operations on it, where each operation is either adding an element to the array or removing an element
from it. Let's assume that adding an element into a free slot costs O(1), and that doubling an array that
currently holds k elements costs O(k), since every existing element must be copied into the new, larger array.
Now, let's consider the worst-case scenario where all n operations are add operations, starting from an
empty array of capacity 1, so the array has to be resized every time it fills up. The total cost of the
sequence of operations can be calculated as follows:

The first operation takes O(1) time; there is a free slot, so no resizing is needed.
The second operation finds the array full, resizes it to capacity 2 (copying 1 element), and then inserts in O(1) time.
The third operation finds the array full again, resizes it to capacity 4 (copying 2 elements), and then inserts.
The fourth operation takes O(1) time and does not require resizing.
The fifth operation finds the array full, resizes it to capacity 8 (copying 4 elements), and then inserts.
The sixth, seventh, and eighth operations take O(1) time each and do not require resizing.
...

In general, the kth operation takes O(1) time for the insert itself, plus an additional O(k) cost whenever
k - 1 is a power of two, because at that point all k - 1 existing elements must be copied into an array of
twice the size. Therefore, the total cost of the n operations can be written as:
O(n) + O(n/2) + O(n/4) + ... + O(2) + O(1)
where the first term covers the n individual inserts and the remaining terms are the copy costs of the
successive doublings, from the largest (about n/2 elements) down to the first (1 element). Using the
formula for a geometric series, the copy costs sum to less than n, so this simplifies to:
O(n)
Therefore, the average cost of each operation in this worst-case scenario is O(n)/n = O(1), which is the
cost of adding an element to the array in the absence of resizing.
This analysis shows that, on average, each add operation in the worst-case scenario has a constant time
complexity, even though an individual resizing operation has a linear time complexity. This is because the
cost of each resize is spread across the add operations that preceded it, and the resizes become
exponentially less frequent as the array grows.
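This bound can also be checked empirically. The following Python sketch (the helper name total_append_cost is made up for this example) counts the primitive steps, element writes plus copies, performed by n appends into a doubling array that starts at capacity 1:

def total_append_cost(n):
    capacity, size = 1, 0
    writes, copies = 0, 0
    for _ in range(n):
        if size == capacity:          # array is full: double it first
            copies += size            # resizing copies every existing element
            capacity *= 2
        writes += 1                   # the insert itself
        size += 1
    return writes + copies

for n in (10, 100, 1000, 1_000_000):
    cost = total_append_cost(n)
    print(n, cost, cost / n)          # the ratio cost/n stays below 3

The printed ratio never exceeds 3, which matches the O(1) amortized cost derived above.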
Amortized analysis is a technique used to analyze the average time complexity of an algorithm over a
series of operations. It is often used when analyzing data structures or algorithms that involve a
sequence of operations that may have varying time complexities.
There are three common methods of amortized analysis: Aggregate Analysis, Accounting Method, and
Potential Method. Here are examples of each:
1. Aggregate Analysis: Aggregate analysis computes the average time complexity of a sequence of
operations. For example, consider a dynamic array implementation with an initial capacity of 2
and a doubling strategy when the array is full. The time complexity of appending an element to
the array is O(1) in the average case, but O(n) in the worst case when the array needs to be
doubled. If we append n elements to the array, the total time complexity of the sequence of
operations is O(n) in the worst case, but the average time complexity per operation is O(1).
2. Accounting Method: The accounting method assigns each operation a charge that may be greater
than its actual cost. The difference between the charge and the actual cost is banked as "credit",
which can later be spent on operations whose actual cost exceeds their charge. For a dynamic
array with doubling, each append can be charged 3 units: 1 unit pays for the insert itself, and the
other 2 are banked, one to pay for copying the newly inserted element at the next resize and one
to pay for copying an element that was already present. When an array holding n elements must
be doubled, the n copies are paid for entirely out of the banked credit. As long as the credit
balance never becomes negative, the constant charge of 3 units is a valid amortized cost per
operation; a short simulation of this bookkeeping appears after this list.
3. Potential Method: The potential method assigns the data structure a "potential" after each
operation, representing prepaid work stored in the structure, and defines the amortized cost of
an operation as its actual cost plus the change in potential. For a dynamic array with doubling, a
standard choice of potential is twice the number of stored elements minus the current capacity.
An ordinary append has actual cost 1 and raises the potential by 2, for an amortized cost of 3. An
append that triggers the doubling of an array holding n elements has actual cost n + 1 (n copies
plus the insert), but it also lowers the potential from n to 2, so its amortized cost is again
(n + 1) + (2 - n) = 3. Since the potential never drops below its initial value, the total actual cost is
bounded by the total amortized cost, and every operation costs a constant amount amortized; the
same bookkeeping is tracked in the sketch after this list.
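The bookkeeping behind the accounting and potential methods can be made concrete with a short simulation. The Python sketch below is only illustrative: it assumes an array that starts with capacity 1 and doubles when full, charges 3 units per append as described above, and uses a potential of twice the number of elements minus the capacity; the helper name simulate_appends is made up for this example:

def simulate_appends(n, charge=3):
    capacity, size, credit = 1, 0, 0
    for _ in range(n):
        actual = 1                        # the insert itself costs 1 unit
        if size == capacity:              # array is full: double it first
            actual += size                # copying every existing element
            capacity *= 2
        size += 1
        credit += charge - actual         # deposit the charge, pay the actual cost
        phi = 2 * size - capacity         # prepaid work stored in the structure
        assert credit >= 0                # accounting method: bank never overdrawn
        assert phi >= 0                   # potential never negative after an append
    return credit

print(simulate_appends(1_000_000))        # runs to completion with no assertion failures

If the charge is lowered to 2 units per append, the credit balance eventually goes negative and the first assertion fails, which is why this scheme needs a constant charge of 3.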