THRASHING
COMPUTER ARCHITECTURE
Dr. Harith Fakhrey
By: Ali Dhiaa Abdulwahab
Thrashing
Thrashing is a condition or situation in which the system spends a major portion of its time servicing page faults, while the actual processing done is negligible.
Swapping out a piece of a process just before that piece is needed means it must immediately be brought back in. The processor then spends most of its time swapping pieces rather than executing user instructions. This activity is called thrashing.
If a process does not have enough pages, the page-fault rate is very high. This leads to:
- Low CPU utilization
- The operating system thinks that it needs to increase the degree of multiprogramming
- Another process is added to the system
- Each process is busy swapping pages in and out
- This high paging activity is called thrashing
- More paging and less CPU utilization
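As a minimal illustration of this effect (not from the lecture), the Python sketch below counts page faults for a made-up reference string under a simple FIFO replacement policy; shrinking the number of frames a process holds drives the fault count up sharply, which is exactly the condition that produces thrashing.

    from collections import deque

    def count_faults(references, num_frames):
        frames = deque()              # pages currently resident in memory
        faults = 0
        for page in references:
            if page not in frames:
                faults += 1           # page fault: the page must be brought in
                if len(frames) == num_frames:
                    frames.popleft()  # evict the oldest resident page (FIFO)
                frames.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 10   # made-up reference string
    for n in (6, 4, 3, 2):
        print(n, "frames ->", count_faults(refs, n), "faults")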
Thrashing Solutions:
We can reduce the effect of thrashing by using the local replacement algorithm.
- To prevent thrashing, provide a process with as many frames as it needs.
- To know the number of frames needed, use the working set strategy (a short sketch of this idea is given below).

DISADVANTAGES
- Increases the degree of multiprogramming
- System throughput decreases
- Page fault rate increases
- Effective access time increases
The figure below explains this further.
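The working set strategy mentioned above can be sketched in a few lines. This is a hedged illustration, not the lecture's code: the working set W(t, delta) is the set of distinct pages referenced in the last delta references, and allocating at least that many frames is what "as many frames as it needs" means. The reference string and window size below are made up.

    def working_set(references, t, delta):
        # distinct pages touched in the last `delta` references ending at time t
        start = max(0, t - delta + 1)
        return set(references[start:t + 1])

    refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4]
    ws = working_set(refs, t=9, delta=5)
    print(ws, "->", len(ws), "frames needed at t = 9")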
Locality model
- A process migrates from one locality to another.
- Localities may overlap.
But why does thrashing occur? To put it simply, thrashing occurs when:
size of locality > total memory size
Principle of Locality
- Program and data references within a process tend to cluster.
- Only a few pieces of a process will be needed over a short period of time.
- It is therefore possible to make intelligent guesses about which pieces will be needed in the future.
- This suggests that virtual memory can work efficiently.
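To make the clustering concrete, here is a small illustrative sketch (the addresses are invented, and the 1K page size matches the frame-allocation example later): a short run of memory references maps to only two distinct pages.

    PAGE_SIZE = 1024                      # 1K pages, as in the example below
    addresses = [4100, 4104, 4108, 4112, 8200, 4116, 4120, 8204, 4124]
    pages = [addr // PAGE_SIZE for addr in addresses]
    print(pages)        # [4, 4, 4, 4, 8, 4, 4, 8, 4]
    print(set(pages))   # only 2 distinct pages touched in this window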
Locality in a memory reference pattern:
Allocation of Frames
Each process needs a minimum number of pages.
Consider a single-user OS with 128K of memory and a page size of 1K.
The OS takes 35K, leaving 93 frames for the user process.
The first 93 page faults would get free frames from the free-frame list.
When the free-frame list is exhausted, a page-replacement algorithm is used to allocate a frame.
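The arithmetic behind these numbers, as a quick check:

    memory_kb, page_kb, os_kb = 128, 1, 35
    total_frames = memory_kb // page_kb             # 128 frames of 1K each
    user_frames  = total_frames - os_kb // page_kb  # 128 - 35 = 93
    print(user_frames)                              # 93 frames for the user process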
Constraints
- We cannot allocate more than the total number of available frames.
- There is a minimum number of frames that must be allocated to each process; this minimum is defined by the instruction set architecture.
- As the number of frames allocated decreases, the page-fault rate increases, which slows down process execution.
There are different types of localities:
1- Temporal Locality
Temporal locality means that the data or instruction being fetched now may be needed again soon. So we should keep that data or instruction in the cache memory, so that we can avoid searching main memory again for the same item.
2- Spatial Locality
Spatial locality means that instructions or data near the current memory location being fetched may be needed in the near future. This is slightly different from temporal locality: here we are talking about nearby memory locations, while in temporal locality we were talking about the same memory location that was just fetched.
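A small, purely illustrative loop shows both kinds of locality at once: the loop instructions and the accumulator are reused on every iteration (temporal locality), while successive array elements sit next to each other in memory (spatial locality).

    data = list(range(1000))
    total = 0
    for i in range(len(data)):
        total += data[i]   # `total` and the loop code are reused: temporal locality
                           # data[i] is adjacent to data[i + 1]: spatial locality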
Cache Performance:
The performance of the cache is measured in terms of the hit ratio. When the CPU refers to memory and finds the data or instruction in the cache, it is known as a cache hit. If the desired data or instruction is not found in the cache and the CPU must refer to main memory to find it, it is known as a cache miss.
Hit + Miss = Total CPU References
Hit Ratio (h) = Hit / (Hit + Miss)
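For example, with made-up counts of 950 hits and 50 misses out of 1000 references:

    hits, misses = 950, 50
    total_refs = hits + misses        # Hit + Miss = Total CPU References
    hit_ratio  = hits / total_refs    # Hit Ratio (h) = Hit / (Hit + Miss)
    print(hit_ratio)                  # 0.95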
Summary
Thrashing is when the system wastes its time servicing page faults instead of doing real execution.
It occurs when size of locality > total memory size.
To address this issue, we presented the Principle of Locality, its types, and their differences.
We also learned how to calculate cache performance, and we noted that the more hits we get, the better.