AUGMENTING OSCAL WITH ANIMATIONS OF OPERATING SYSTEM CONCEPTS A Project Presented to the faculty of the Department of Computer Science California State University, Sacramento Submitted in partial satisfaction of the requirements for the degree of MASTER OF SCIENCE in Computer Science by Sandeep Vedanthi SPRING 2014 © 2014 Sandeep Vedanthi ALL RIGHTS RESERVED ii AUGMENTING OSCAL WITH ANIMATIONS OF OPERATING SYSTEM CONCEPTS A Project by Sandeep Vedanthi Approved by: __________________________________, Committee Chair Du Zhang, Ph.D. __________________________________, Second Reader Ahmed Salem, Ph.D. ____________________________ Date iii Student: Sandeep Vedanthi I certify that this student has met the requirements for format contained in the University format manual, and that this project is suitable for shelving in the Library and credit is to be awarded for the project. __________________________, Graduate Coordinator Nikrouz Faroughi, Ph.D. Department of Computer Science iv ___________________ Date Abstract of AUGMENTING OSCAL WITH ANIMATIONS OF OPERATING SYSTEM CONCEPTS by Sandeep Vedanthi OSCAL is an effective online library which facilitates students trying to learn Operating system concepts by giving them graphical experience. The new concepts that are being added to this library are Dining Philosopher’s Problem and Race Condition Demonstration and User/Kernel Thread models. Dining Philosopher Problem demonstrates the different scenarios that can occur when multiple threads try to access shared resources to complete their work. The problem describes five philosophers sitting around a table, which is set with 5 plates (one for each philosopher), 5 chopsticks, and a bowl of rice. Each philosopher alternately thinks and eats. To eat, she needs the two chopsticks next to her plate. When finished eating, she v puts the chopsticks back on the table, and continues thinking. The scenarios that are graphically explained in this applet are Deadlock, Starvation and Synchronization. Race Condition occurs when multiple threads enter the critical section at the same time and try to write on a single data that is being shared between the two threads. This results in wrong data being written. In this applet we demonstrate the occurrence of Race condition using a Banking scenario with Deposit and Withdraw Transactions available for the user. User threads are supported above the kernel, without kernel support. Kernel threads are supported within the kernel of the OS itself. The user threads must be mapped to kernel threads, using one of the several strategies. There are different ways in which user threads can be mapped on to the kernel threads. This applet demonstrates One to One model, One to Many model and Many to Many model. The applets are being coded in JAVA using Net beans IDE. , Committee Chair Du Zhang, Ph.D. _______________________ Date vi ACKNOWLEDGEMENTS First and Foremost I would like express my sincere gratitude to Dr. Du Zhang for his continued guidance and advice throughout the course of this project. His guidance helped me all the time in rethinking ideas and implementing them in the project. Without his continued support this project would not have been a success. I would like to thank Dr. Ahmed Salem for agreeing to be my second reader and taking time to review my project. I would like to thank all the professors and other faculty members of Department of Computer Science who have helped me and guided me towards becoming a graduate. I would like to thank all my friends and family for their continuous support that has helped me achieve this personal goal. vii TABLE OF CONTENTS Acknowledgements……………………………………………………….…….…..…..vii List of Tables.……………………….……………………………………….….………ix List of Figures...…………………………………………………………….…..…..........x Chapter 1. INTRODUCTION…………………………………………………………………….1 2. BACKGROUND ……………………………….…….……………………………..14 2.1 Related Work……………………………………………………………….14 3. DESIGN …………………...……….………………………………………………..16 3.1 Race Condition Applet…………………...………..……………………......16 3.2 Dining Philosophers Applet …………………….…………………….……18 3.3 User and Kernel Threads Implementation Applet ………..……………..….19 4. IMPLEMENTATION……….…………………………………………………….…24 4.1 Race condition Applet…………………………………………………..…..24 4.2 Dining Philosophers Applet………………………………………….……..35 4.3 User and Kernel Threads Implementation Applet……………………..……49 5. PERFORMANCE EVALUATION....……………………………………….………64 5.1 Race condition Applet………………………………………………..……..64 5.2 Dining Philosophers Applet……...…………………………………..……..67 5.3 User and Kernel Threads Implementation Applet……………………..……69 6. CONCLUSION AND FUTURE ENHANCEMENTS….……………………..…….72 Bibliography……....…………………………………………………………………….73 viii LIST OF TABLES Tables Page Table 5.1 Race condition with 3 transaction output scenario………………………….65 Table 5.2 Race condition with 4 transaction output scenario………………………….66 Table 5.3 Starvation Output Scenarios………………………………………………...68 Table 5.4 Performance of User, Kernel threads with P cores…………………………69 ix LIST OF FIGURES Figures Page Figure 3.1 Race Condition Example...………………..……………………...................17 Figure 3.2 Many To One Thread Mapping……..…………..………………..................21 Figure 3.3 One To One Thread Mapping …………………............................................22 Figure 3.4 Many To Many Thread Mapping ………………………..……………..…..23 Figure 4.1 Use case diagram for Race Condition Applet...……………………………..25 Figure 4.2 Race Condition Input Screen ………………………..………………….…..26 Figure 4.3 Flowchart of Race Condition Algorithm……………………………………27 Figure 4.4 Race Condition Output 1 ………………………..………………….............29 Figure 4.5 Race Condition Output 2 …………………………………………………...30 Figure 4.6 Working of Mutex……...……......………………………………………….32 Figure 4.7 Critical Section …………………………………..…………………………33 Figure 4.8 Race Condition Handle Output ……………………………………..............34 Figure 4.9 Use case diagram of DP Problem…………………………………………...35 Figure 4.10 Flowchart of DPP solution…………………………………………………36 Figure 4.11 Pseudo code to pick forks……………………………………….……........37 Figure 4.12 Pseudo code to finish eating ……….…....…………………….……..........39 Figure 4.13 Pseudo code to eat…………........................................................................40 Figure 4.14 Pseudo code for overall flow........................................................................41 x Figure 4.15 DP Problem Output………………………………………………………..42 Figure 4.16 Final State Output …………………………………….……………….......43 Figure 4.17 Flowchart for Deadlock……………………………………………………44 Figure 4.18 Pseudo code for Deadlock …………………………….……….......…..….45 Figure 4.19 Deadlock Output. ………………………………………………......….…..46 Figure 4.20 Flowchart to indicate starvation……………………………………………48 Figure 4.21 Use case diagram for User, Kernel Applet………………………………..50 Figure 4.22 User, Kernel Threads Input Screen ……………………………………......51 Figure 4.23 Flowchart of One to One Design…………………………………………..52 Figure 4.24 One to One output with 3 User threads, 3 Kernel threads……………..…..54 Figure 4.25 Flowchart for Many to One Design………………………………………..56 Figure 4.26 Many to One output with 3 User threads, 1 Kernel thread...………………57 Figure 4.27 Flowchart of Many to Many design logic………………………………….59 Figure 4.28 Many to Many output with 4 User threads, 3 Kernel threads…..……….…60 Figure 4.29 Many to Many output with 4 User threads, 2 Kernel threads........………...61 Figure 4.30 Many to One output with 4 User threads, 1 Kernel threads…………...…..62 Figure 4.31 One to One output scenario 2……………………………………………...63 xi 1 Chapter 1 INTRODUCTION Operating system in simple words is the system software that the hardware and programs on the computer to communicate with each other. It is a platform on which other programs can run. Some of the basic responsibilities of Operating systems include recognizing the peripheral devices connected to the computer such as the mouse and keyboard, open and close the files and directories, sending the output to devices such as a display monitor and printer. Operating system also has some other complex responsibilities such as distributing the resources between multi-tasking applications, making sure they execute concurrently without causing the other concurrently executing applications to crash or halt. The Dining Philosopher’s Problem explains the problems faced when multiple threads are executing simultaneously without any synchronization in the allocation of resources. We may experience problems like Deadlock and Starvation. Race Condition occurs when we allow multiple threads to operate on a shared resource at the same time. This causes the data to get corrupted. By using some locking mechanism on the resource and allowing only one thread to execute on it in the critical section resolves this problem. 2 User and Kernel thread mapping models explains the different techniques used to map user and Kernel threads in a multi-threaded environment in order to execute an instruction. It also explains the overhead incurred in a multi core system while using some of these mapping techniques. OSCAL Although these concepts are well published as algorithms, a beginner would still find it difficult to comprehend them. Developing these algorithms into Graphical animations allows users to get visually involved in these concepts and allows the user to get a clearer idea of how each of the concepts works. Dr. Du Zhang’s OSCAL (Operating Systems Concepts Animation Library) provides such a platform for any student who wants to learn operating systems concepts [1]. This online library contains many animations that describe various concepts of operating Systems. The three animations developed in this project will also be added to OSCAL. These animations are written using Java programming language and are executed on the web. These are called applets. 3 JAVA APPLETS: Java applet is a small application written in Java in form of a byte code. Java applet can run in an application window, on a webpage or in tools used to test applets that can compile Java byte code. Since applets are very small applications, the execution time of these applets is very fast. Java is also a platform independent programming Language. With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes —the platformindependent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. Java’s magic: The Byte code The key that allows java to solve both security and the portability problems is that the output of a java compiler is not executable code. Rather, it is byte code. Byte code is a highly optimized set of instructions designed to be executed by the java run-time system, which is called the JVM. Java byte codes help make "write once, run anywhere" possible. You can compile your program into byte codes on any platform that has a Java compiler. Then the byte codes can run on any implementation of the Java VM. That means that as long as a computer 4 has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac. The Java Platform [5][6] The Java platform has two components: • The Java Virtual Machine (Java VM) • The Java Application Programming Interface (Java API) The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages The most common types of programs written in the Java programming language are applets and applications. If you've surfed the Web, you're probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser. An application is a standalone program that runs directly on the Java platform. Some of the main Concepts involved in developing a Java applet are abstract window tool kit and Java swings. They are explained in the next section. 5 Abstract Window Toolkit (AWT): [7] The main purpose of the Abstract Window Toolkit (AWT) is to support applet windows; it can be used to create stand-alone windows that run in a GUI environment. The AWT provides a basic set of GUI widgets such as buttons, text boxes, and menus; the core of the GUI event subsystem; the interface between the native windowing system and the Java application; several layout managers; a java.awt.datatransfer package for use with the Clipboard and Drag and Drop; the interface to input devices such as mice and keyboards; the AWT Native Interface, which enables rendering libraries compiled to native code to draw directly to an AWT Canvas object drawing surface; access to the system tray on supporting systems; the ability to launch some desktop applications such as web browsers and email clients from a Java application Working with Frame Windows [7] A Frame is a top-level window with a title and a border. The default layout for a frame is BorderLayout. Here are two of Frame’s constructors. Frame (): Constructs a new instance of Frame that is initially invisible. Frame (String title): Constructs a new, initially invisible Frame object with the specified title. 6 Setting the Window’s Dimensions: The setSize ( ) method is used to set the dimensions of the window. Its signature is shown here void sestet(int newWidth, int newHeight) void setSize(Dimensions newSize) The getSize( ) method is used to obtain the current size of the window. Hiding and Showing a Window After a frame window has been created, it will not be visible until you call setVisible( ). The component is visible if the argument to this method is true. AWT Control: [8] Controls are components that allow the user to interact with the application in various ways. The AWT supports the following types of controls. Labels Push buttons Check boxes Choice lists 7 Lists Scroll bars Text editing Adding and Removing Controls To include a control on a window, it must be added to the window. To do this, an instance of the desired control must be created and then must be added to the window by the following methods. Here comp is an instance of the control that you want to add. add(Component comp): Appends the specified component to the end of this container. add(Component comp, int index): Adds the specified component to this container at the given position. To remove the control from the window the following method is used. obj is a reference to the control. void remove(Component obj): Removes the particular object. removeAll( ): Removes all the controls. Labels [8] 8 A Label object is a component for placing text in a container. A label displays a single line of read-only text. The text can be changed by the application, but a user cannot edit it directly. Some of the constructors used are: Label(): Constructs an empty label. Label(String text): Constructs a new label with the specified string of text, left justified. Label(String text, int alignment):Constructs a new label that presents the specified string of text with the specified alignment. Buttons [8] The most widely used control is the push button. This class creates a labeled button. The application can cause some action to happen when the button is clicked. Button(): Constructs a Button with no label. Button(String label): Constructs a Button with the specified label. After a button has been created, the label can be set by calling setLabel( ). The label can be retrived by calling getLabel( ).These methods are as follows. Void setLabel(String str) String getLabel( ) TextField 9 A TextField object is a text component that allows for the editing of a single line of text. TextField(): Constructs a new text field. TextField(int columns): Constructs a new empty text field with the specified number of columns. TextField(String text): Constructs a new text field initialized with the specified text. TextField(String text, int columns): Constructs a new text field initialized with the specified text to be displayed, and wide enough to hold the specified number of columns. Jawa Swings: [10] Swing is a GUI toolkit for Java. It is one part of the Java Foundation Classes (JFC). Swing includes graphical user interface (GUI) widgets such as text boxes, buttons, splitpanes, and tables. Swing widgets provide more sophisticated GUI components than the earlier Abstract Window Toolkit. Since they are written in pure Java, they run the same on all platforms, unlike the AWT which is tied to the underlying platform's windowing system. Swing supports pluggable look and feel - not by using the native platform's facilities, but by roughly emulating them. This means you can get any supported look and feel on any 10 platform. The disadvantage of lightweight components is slower execution. The advantage is uniform behavior on all platforms. Swing component relies on an AWT container, since (Swing's) JComponent extends (A WT's) Container. This allows Swing to plug into the host OS's GUI management framework, including the crucial device/screen mappings and user interactions (such as key presses, mouse movements, etc). Swing simply 'transposes' its own (OS agnostic) semantics over the underlying (OS specific) components. So, for example, every Swing component paints its rendition on the graphic device in response to a call to component.paint (), which is defined in (AWT) Container. But unlike AWT components, which delegated the painting to their OS-native "heavy weight" widget, Swing components are responsible for their own rendering. Further, this transposition and decoupling is not merely visual, and extends to the Swing's management of, and application of its own OS-independent semantics for, events fired within its component containment hierarchies. Overall, the Swing Architecture delegates the task of mapping the various flavors of as GUI semantics onto a simple, but generalized pattern to the A WT container. Then, building on that generalized platform, it establishes its own rich and complex GUI semantics, in form of the JComponent model. (The interested reader is encouraged to review the source of Container.java and JComponent.java classes for further insights into 11 the nature of the interface between Swing's lightweight components and A WT's heavyweight widgets.) Loosely-Coupled/MVC: [11] The Swing library makes heavy use of the Model/View/Controller software design pattern, which conceptually decouples the data being viewed from the user interface controls through which it is viewed. Because of this, most Swing components have associated models (which are specified in terms of Java interfaces), and the programmer can use various default implementations or provide their own. The framework provides default implementations of model interfaces for all of its concrete components. Typically, Swing component model objects are responsible for providing a concise interface defining events fired, and accessible properties for the (conceptual) data model for use by the associated JComponent. Given that the overall MVC pattern is a looselycoupled collaborative object relationship pattern, the model provides the programmatic means for attaching event listeners to the data model object. Typically, these events are 12 model centric (ex: row inserted event in a table model) and are mapped by the JComponent specialization into a meaningful event for the GUI component. For example, the JTable has a model called TableModel that describes an interface for how a table would access tabular data. A default implementation of this operates on a two-dimensional array. The view component of a Swing JComponent is the object used to graphically 'represent' the conceptual GUI control. A distinction of Swing, as a GUI framework, is in its reliance on programmatically rendered GUI controls (as opposed to the use of native host OS's GUI controls). (This distinction is a source of complications when mixing AWT controls, which use native controls, with Swing controls in a GUI.) It must be noted that the typical use of Swing framework does not require the creation of custom models, as the framework provides a set of default implementations that are transparently, by default, associated with the corresponding JComponent child class in the Swing library. In general, only complex components such as tables and collection views may require customization of the default models provided. (However, to get a good sense of the potential that the Swing architecture makes possible, consider the 13 hypothetical situation where custom models for tables and lists are wrappers over DAO and/or EJB services.) Finally, in terms of visual composition and management, Swing favors relative layouts (which specify the positional relationships between components) as opposed to absolute layouts (which specify the exact location and size of components). This bias towards 'fluid' visual ordering is a direct legacy of the development of Swing from the ashes of AWT (framework) and the associated assumptions regarding the Applet operating environment that framed the design and development of the original Java GUI toolkit. (Conceptually, this view of the layout management is quite similar to that which informs the rendering of HTML content in browsers, and addresses the same set of concerns that motivated the former.) Swings allows one to specialize the look and feel of widgets, by modifying the default (via runtime parameters), deriving from an exiting one, by creating one from scratch, or, beginning with JSE 5.0, by using the skin able Synth Look and Feel, which is configured with an XML property file. The look and feel can be changed at the runtime, and early demonstrations of Swing would frequently provide a way to do this. 14 Chapter 2 BACKGROUND 2.1 Related Work: A lot of work has already been done on developing the best possible algorithm to implement the Dining Philosopher’s problem. Researchers from University of California, Berkeley have implemented a simpler version of the Dining Philosopher’s problem applet on their website [12]. This applet has only two buttons GO and STOP. The Go button starts the applet and runs the fairness algorithm where each philosopher gets a chance to eat and do not result in a scenario of starvation or deadlock. The Stop button stops the applet. Dijkstra’s algorithm is used to implement this applet. The information on the working on the Dining Philosopher’s problem and how to implement the algorithm can be found on the Wikipedia page [3]. No applets have been implemented on Race Condition in operating systems, however a lot of information can be found online on this concept. The Microsoft support page on Race condition and Deadlocks gives a clear illustration of what Race Conditions are [2] . This page also have code snippets in assembly level programming language used in multithreaded programming environment which can cause a Race Condition to occur. Similar to Race condition, no applets can be found on User and Kernel Threads mapping implementation online, however there are websites like tutorialspoint which gives details about all the different thread mapping techniques [4] . 15 OSCAL contains a lot of animations related to various concepts in operating systems [1]. These animations are well designed and have very good graphical user interface. They are easy to use and have features like graphs, speed bars and step by step execution techniques that enables a user to clearly understand the concept behind the applet. 16 Chapter 3 DESIGN 3.1 RACE Condition: Race condition is a hazard or a flaw that occurs when multiple threads access a shared variable at the same time. The output is critically dependent on the sequence of execution of the other thread. For example if two threads are writing on a variable which is shared between them, the writes get over written and the last write will be saved. The tread that writes the last is writing its data over the data that previous thread wrote. Both the threads are executing in the critical section at the same time which could lead a situation like this. In order to avoid this we have to make sure that the threads waiting to execute in critical section are lined up in proper sequence and only one thread is allowed to enter the critical section at a particular time to execute. We can achieve this by using some sort of locking mechanisms such as Semaphores and Mutex [9]. 17 Thread 1 Load X = 10 Thread 2 Thread 3 Time Load X = 10 Load X = 10 Add X - 1 Add X + 1 Add X + 2 Store X = 9 Store X = 12 Store X = 11 Threads Figure 3.1: Race Condition Example In the figure above there are 3 processes trying to access a value X = 10. All three of them are executing on it and changing its value. Thread 1 is doing X + 1, Thread 2 is doing X -1 and Thread 3 is doing X + 2. The execution time of all of them are different. Thread 1 is the first one to access the value of X. However before it has finished it’s Store operation, Thread 2 and Thread 3 access the value X = 10 and start their operations 18 on it respectively. The final value stored in X will be the value from that thread which completes its execution last. From the figure, Thread 1 completes its execution after Thread 3 which in turn completes its execution after Thread 2. So Thread 2’s output of X = 9 will be overwritten by Thread 3’s output of X = 12 which in turn will be over written by Thread1’s output of X = 11. Semaphores and Mutex help thwart this situation by not allowing more than 1 thread to execute on the shared resource at the same time. The execution of multiple threads will be sequential. A thread will be allowed to access the shared resource only after the previous thread has completed the execution on the shared resource and stored its value in the register. Until the thread which is executing on the shared resource has not completed its operation, other threads are not allowed to access that resource and it is said to be locked for that particular resource. 3.2 Dining Philosophers Applet: The Dining Philosopher’s Problem explains the problem of Deadlock and Starvation occurred in a multitasking environment [2][3]. Deadlock: A situation when 2 or more parallel executing threads are waiting for each other to finish and enter an infinite loop. If 2 threads are executing in parallel and each of them are holding a resource that is needed by the other required to complete the execution, the two threads will wait indefinitely and will never complete execution. 19 Starvation: A situation in a multitasking environment when a thread is denied the necessary resources required for it to execute because it is currently held by another thread which is executing. The thread is said to starve indefinitely. The Dining Philosopher’s problem gives a theatrical representation of these situations. 5 Philosophers are sitting around a round table with a bowl of Spaghetti in front of each one of them. The Philosophers spend their time eating and thinking. There are 5 forks set in between each bowl. In order for the philosophers to eat the spaghetti each one of them has to pick up 2 forks, which they pick up one at a time. The philosophers do not communicate with each other. Each philosopher has to take turns in picking up the fork, eat for a specific time and put the forks back on the table. If this sequence is not followed we might end up in a situation of Deadlock or Starvation. If all of the Philosophers pick up a fork at the same time, Deadlock occurs. If a Philosopher is eating for a long time and do not put his forks back on the table, Starvation occurs. Other Philosophers who are waiting for the fork to eat will be starved. The Philosophers who are not eating are said to be in thinking state. 3.3 User and Kernel Thread Implementation Applet: In multicore and multithreaded environments, there are two type of threads to be managed by the operating system. User and Kernel Threads [4]. 20 User Threads: These threads are above the kernel and execute in user space [4]. Kernel does not support user threads. These threads work on application level and are present in the programs. Since they are not handled by kernel, the overhead of maintaining these threads are low. Kernel Threads: These threads are executed and supported with in the Kernel of the Operating system [4]. For any User thread to perform its task, an associated kernel thread has to execute the task in the kernel. In multi-threaded environment, multiple kernel threads are allowed to execute in parallel which allows us to perform simultaneous multiple tasks. The overhead of maintaining the kernel threads are higher than user threads. The user threads and the kernel threads must be mapped onto each other in order to perform any task by the user. They are done using one of the following strategies. Many to One Model: [4] In Many to One mapping model many user threads are mapped onto one kernel thread. This strategy is very efficient as threads are handled in user space. The drawback of this technique is if a blocking call is made, the entire process is blocked even if there are other threads that are free and can continue. Since there is only one kernel thread executing at all times, the process cannot be split across multiple CPU’s for execution. 21 U2 U1 U3 K1 Figure 3.2: Many to One Thread Mapping Figure 3.2 represents the Many to One mapping of user kernel threads. In the representation we have 3 user threads U1, U2 and U3 mapped onto 1 Kernel Thread K1. One to One Model: [4] In One to One mapping model, a single user thread is mapped onto a separate kernel thread. That is for ‘n’ user threads we have ‘n’ kernel threads. This technique resolves the problem of blocking, however there is a larger overhead involved due to a large number of kernel threads executing in parallel. This can potentially effect the performance of a system by causing it to slow down. 22 U2 U1 K1 K2 U3 K3 Figure 3.3: One to One Thread Mapping Figure 3.3 represents the One to One mapping model. Here three user threads U1, U2 and U3 are mapped onto K1, K2 and K3 respectively. Many to Many Model: [4] In Many to Many model many number of User threads are mapped onto many number of Kernel threads. That is, ‘n’ user threads can be mapped onto either ’n’ or ‘n-1’ number of kernel threads. This implementation makes use of the best features of One to One and Many to One models. The executing process in this implementation can be split across 23 the CPU for execution. One user thread can be allocated multiple Kernel threads for execution as well. U3 U2 U4 U1 U5 K3 K1 K2 Figure 3.4: Many to Many Thread Mapping Figure 2.3 displays the Many to Many mapping model. The user threads U1, U2, U3, U4 and U5 are mapped onto the kernel threads K1, K2 and K3. 24 Chapter 4 IMPLEMENTATION 4.1 Race Condition Applet: This applet is demonstrated using a Bank Transaction scenario. The user is asked to do ‘n’ transactions of deposit or withdraw. User can select up to 4 transactions. The bank always has a fixed amount of $1000. Any transaction of deposit or withdraw affects this amount. The user has the option to see the output with Race condition enabled or Synchronized condition enabled. We see how the transaction is affected when Race condition occurs. The applet also demonstrates how the transaction takes place when the resources are synchronized and no race condition occurs. Once the output is seen, the user can repeat the same scenario which will shuffle the transactions. This way the user can get a clear understanding of different possible outcomes that may occur during a Race condition. 4.1.1: Use Case Diagram: Once the user launches the applet, he has the option to choose between Race Condition use case and the handle Race condition use case. Each of these uses cases have similar attributes like Amount, and transaction type. The Output displayed will be specific to the use case selected by the user. 25 Input Amount Input Transaction number Race Condition Input Transaction Type Output Result Input Amount USER Input Transaction Race Condition Handle number Input Transaction Type Output Result Figure 4.1: Use case Diagram for Race Condition applet 26 Figure 4.2: Race Condition Input Screen Scenario 1: Race Condition In this scenario, multiple transactions enter the critical section at the same time. These transactions however take different amount of time to execute and finish. For example, let us consider 3 transactions occur as described in Figure 3.1. User selects Deposit for 1 st transaction, Withdraw for the 2nd transaction and Deposit again for the 3rd transaction. In an ideal scenario, we would have Deposit finish first, Withdraw next and then Deposit again. After the first transaction the amount should be $1100. After the 2nd transaction the amount should be $1000 and after the 3rd transaction the final amount that should be reflected back to the user is $1100. However when race condition occurs, all three transactions occur at the same time. All three transactions will operate on $1000. 27 START Input values Queue the Transactions No Check Race Condition Enter Critical Section 1 transaction at a time. Yes Transactions enter critical section randomly Display Result Display Result STOP Figure 4.3: Flowchart of Race Condition Algorithm 28 The algorithm for this applet is described as follows. A backend Database is created to hold the details of the user account. This account is locked and cannot be modified by the user. A work Queue is used to hold the transactions specified by the user. As the user specifies the number and type of transactions, the jobs enter the queue in the order specified. If the Handle Race condition option is not checked, the all the Jobs enter the critical section one after the other. The threads do not initiate the wait routine until the thread which is executing in the critical section is finished. Once the thread has completed its execution, it exits the critical section. The threads write their output on to a register which is overwritten by each executing thread after its completion. Final value of the register will display the value written by the last executing thread. 29 Figure 4.4: Race Condition Output 1 Figure 4.4 represents the Race condition Applet output. All three transactions enter the critical section at the same time and Load the value of Balance = 1000. The increment/Decrement operations for Deposit /Withdraw do not happen at the same time which would be the same in a real world scenario. This is achieved by using a random number generator to allocate the time slots for execution. The transactions take the same time to finish Increment or decrement and store the final result in the output variable. According to the figure 3.2, the final value stored in the register will be $1100, since the last occurring operation is the store from the deposit operation. Once the user clicks on the repeat button, the transactions get shuffled and a similar execution occurs. 30 Figure 4.5: Race Condition Output 2 Figure 3.3 describes another scenario where one transaction occurs after the completion of another transaction. In this scenario Transaction 2 and Transaction 0 enter the critical section at the same time and load the same amount of $1000. However Transaction 1: Withdraw loads after the completion of the two earlier transactions and loads the value which was written the last. Here Transaction1 loads $1100 and decrements a value of $100 on it and final output reflected to the user will be $1000. Scenario 2: Handle Race Condition: One way to handle the Race condition problem is by using synchronization techniques. Synchronization allows two or more concurrently executing threads do not execute specific portions of the program at the same time. If one thread has started the execution 31 of a serialized portion of the program, any other thread trying to execute this portion must wait until the first thread finishes. Synchronization gives access control to shared variables. The synchronization technique that is used in this applet is MUTEX. MUTEX: Mutex is short for Mutual Exclusion, is a synchronization technique or a locking mechanism used among multiple parallel executing threads that share the same resource [7]. The Mutex is usually an integer in the memory. There are 2 basic operations of Mutexs: Lock and Unlock. Once the program is started a mutex is created for a given resource at the beginning. It contains a unique name or an id for it. In order for any thread to use this resource, it must use the mutex to lock the resource form other threads while it is using that resource. If another thread needing the resource is already locked by the mutex, it will be queued and will be given the access to it once it is unlocked and becomes available. 32 Figure 4.6: Working of Mutex When the mutex is created, it starts with a value 0 which means it is unlocked. If a thread wishes to lock the mutex, it checks the value for 0 and and assigns 1. The thread now becomes the owner of the mutex. Once it unlocks the mutex, it sets it value back to 0. Using mutexs, only 1 thread is allowed to execute in the critical section. In our applet, if a transaction is operating in the critical section, the other transactions are queued and are only allowed to enter the critical section once the current transaction is completed. 33 T0 T1 1) 2) 3) Context Switch 4) 1) 2) 3) 4) 5) T1 enters critical Section T1 is executing T1 sets Mutex = 0 T1 Exits critical Section T1 Terminates 5) 1) 2) 1) 2) 3) 4) 5) 6) 3) 4) 5) 6) T0 sets Mutex = 1 T0 enters Critical Section T0 executes T0 sets Mutex = 0 T0 exits critical section T0 Terminates Figure 4.7: Critical Section Figure 4.7 explains the flow of execution in the critical section. Every time a thread finishes execution in the critical section and a new thread is supposed to enter, a context switch occurs. A thread can only terminate its execution after it exits from the critical section. In order for a thread to enter the critical section again, it must wait till all the threads in the queue has got a chance to execute in the critical section. Once this is done, the thread can request access for the shared resource again and enter the critical section. The same logic is applied in the applet to output the synchronized condition. 34 Figure 4.8: Race Condition Handle Output Figure 4.8 represents the synchronized output. Here only one transaction enters the critical section at a time. Each transaction is locks the value of 1000 using the mutex. The mutex allows the transactions to complete and exit the critical section before loading another transaction in the critical section. This allows the user to see the right value at the end of his final transaction. 35 4.2 Dining Philosophers Applet: In this applet we are demonstrating three different scenarios. The solution to the DP problem The Deadlock scenario The Starvation scenario Run Synchronous mode Run Deadlock mode USER Run Starvation mode Figure 4.9: Use Case Diagram of DP Problem The Solution: 36 START Philosopher starts to think. Hungry? YES NO YES NO Forks Available? NO Is the neighbor waiting longer? YES Pick up forks and Eat NO Full? Yes Figure 4.10: Flowchart of the DPP solution Finish 37 The solution guaranties that every philosopher sitting around the table will get his turn to eat without encountering a situation of Deadlock or Starvation. In order to achieve this, we have to ensure certain conditions are checked and satisfied. To enable sharing of the forks among philosophers: We use Mutexs to ensure that the forks are shared among the adjacent philosophers. When a philosopher tries to eat we need to set the Mutex appropriately. Before the philosopher picks up the fork, he will set the mutex to 1. Once the philosopher is done eating for the moment and is in a thinking state, he needs to set the mutex back to 0. We use a set and release function to set the mutex to 1 and 0 respectively. Void PickUp_Forks( int i) { Set(Mutex); State[i] = HUNGRY; Check(i); Release(Mutex); } Figure 4.11: Pseudo code to pick forks Figure 4.11 gives the pseudo code to pick up the forks. The explanation is as follows: 38 Int i represents the number of philosophers from 0 to n-1. In this applet we demonstrate by using n=5. That is we have 5 philosophers sitting around the roundtable. Set(Mutex) is used set the mutex to 1 and for the thread to enter the critical section. In order to eat, the philosopher has to set himself to HUNGRY state. This is done by using state[i]=HUNGRY. Once hungry, the philosopher cannot immediately pick up the forks to eat. He has to check several other conditions which are described in the later part of this section. This check is done using test(i). Release(mutex) is used to set the mutex back to 0 and exit the critical section. At this point of time the philosopher would have acquired the forks he needs to eat and has blocked them from being acquired by his neighbor. Once the philosopher has acquired the necessary forks and in order for the philosopher to start eating, he has to enter the critical section again. The pseudo code for eating is given below. 39 Void use_forks( int i) { Set(mutex); State[i]=THINKING; Check(LEFT); Check(RIGHT); Release(mutex); } Figure 4.12: Pseudo code to finish eating The pseudo code to eat is described as follows: Once the philosopher has blocked the necessary forks, he will set the mutex back to 1 and enter the critical section. Here the mutex is locked until the philosopher has finished eating. Once the philosopher has finished eating, he will have to change his state from EATING to THINKING. The philosopher will then do a check() to see if his left neighbor and his right neighbor is ready to eat. Then he sets the mutex back to 0 and releases the lock. The philosopher will then exit the critical section. 40 In order for the philosopher to eat, he will have to do a check for the current states of his neighbors. The pseudo code for that is given below. Void check(int i) { If (state[i]==HUNGRY && state[LEFT]!=EATING && state[RIGHT]!=EATING) { Set(MUTEX) } } Figure 4.13: Pseudo code to eat The pseudo code to eat is described as follows: The philosopher first checks for 3 main conditions before eating o His state is HUNGRY o His left neighbor’s state is not EATING o His right neighbor’s state is not EATING If these three conditions satisfy, the philosopher sets the mutex to 1 and locks it. He enters the critical sections and starts eating. 41 The same procedure is applied for individual philosophers sitting around the round table. Every philosopher will call these functions in order to check for forks, pick them up and start eating. The pseudo code for the overall flow is given below. Void DiningPhilosopher( int i) { While(TRUE) { PickUp_forks(i); Eat(); Use_Forks(i); } } Figure 4.14: Pseudo code for overall flow The pseudo code described in figure 3.9 will ensure that each philosopher will get a chance to eat without incurring the deadlock or the starvation situation. The use of binary semaphore or mutex allows us to lock the forks. Each philosopher will eat for a specific number of turns by which time they will change their state to FULL. 42 Figure 4.15: DP Problem Output Figure 3.10 represents the output of the dining philosopher’s problem applet with synchronized condition. The philosophers are represented by small circles. The different states of the philosophers are indicated by different colors. EATING is indicated by color Green THINKING is indicated by color Blue GRABBING is indicated by color Yellow DONE is indicated by color Grey Forks are indicated by a thick Black line. 43 Initially all the philosophers start with the state THINKING. The text box has the textual representation of the process of eating. Two forks are indicated around two green circles which shows that the two philosophers who are currently eating are holding those forks. The remaining free fork is indicated around the philosopher who’s turn is next to eat. This process is continued till all the philosophers have finished eating. Once they have finished eating and changed their state to FULL, the circles change their color to Grey. Figure 4.16: Final State Output Deadlock Situation: 44 Deadlock situation occurs when every philosopher picks up the fork at the same time. Since every philosopher needs 2 forks to eat, none of them will be able to eat. Each philosopher will be waiting for the other one to drop the fork they are holding back on the table, but none of them do. Deadlock can occur in 2 ways. Every philosopher picks up a fork to his left. Every philosopher attempts to pick up a fork to his right. Philosopher starts to think. NO Is the Philosopher hungry? YES Try to pick up the forks to eat Figure 4.17: Flowchart for Deadlock 45 do { Pickup_fork (fork[i]); Pickup_fork (fork[i+1] % 5); //eat Putdown_fork(fork[i]); Putdown_fork (fork[i+1] % 5); //think } while (true); Figure 4.18: Pseudo code for Deadlock The above pseudo code results in a deadlock situation when all the philosophers become hungry at the same time. This code will not guarantee that order of which philosopher will go first. Also this code does not lock the forks. If a philosopher becomes hungry and a fork to his left is available, he will have to lock the fork using a binary semaphore so that it won’t be acquired by the adjacent philosopher. When the fork is locked and if the adjacent philosopher is also hungry at the same time, he will go into thinking state until the fork becomes available for him to eat. If the fork is not locked, he can preempt the fork from the philosopher who is already eating. 46 Figure 4.19: Deadlock Output Starvation: Starvation can occur when one or two philosophers are always eating without letting their forks down which leads to other philosophers to starve. These philosophers will be in the EATING state for a long time. The other possibility is that these philosophers will immediately put themselves back to EATING state after they have just finished eating instead of going into THINKING state. In a multithreaded environment this is a common occurrence. Higher priority threads sometimes preempt the lower priority threads during execution. This causes the higher 47 priority threads to execute more frequently and the lower priority threads do not get any CPU time at all. One possible technique to avoid this situation is to disable preemption. Disabling preemption guarantees that higher priority threads will not preempt the lower priority threads at any given point of time during execution. This will ensure all the threads will get equal slice of CPU time irrespective of their priorities. The other possible technique that we have used in this applet is to use a lock using binary semaphore or mutex. If a thread has acquired a mutex on the shared resource it cannot be preempted by another higher priority thread. The resource can only be accessed once the lock is released by the thread holding it. 48 Philosopher starts to think. YES Is the Philosopher hungry? NO YES Are the forks free? NO Pickup the Forks to eat Figure 4.20: Flowchart to indicate starvation 49 4.3 User and Kernel Threads Implementation Applet User and Kernel Threads implementation applet clearly demonstrates the 3 different thread mapping techniques used in multi-threaded environment. We display how the One to One model, Many to One model and Many to Many model executes with different amount of CPUs used for each scenario. 50 1 User Thread 1 Kernel Thread One to One N cores m User Thread Many to One 1 Kernel Thread N cores m User Thread Many to Many n Kernel Thread N Cores Figure 4.21: Use case diagram for User, Kernel applet 51 Figure 4.22: User, Kernel Threads Input Screen The input Screen takes the Number of User Threads and number of Kernel Threads as Input. User also has an option of entering the number of Cores on which the threads get executed. The execution of the threads will be distributed among the different cores if more than 1 core is selected. The radio buttons allow the One to One, Many to One or Many to Many option to be selected. One To One: The flowchart explains the logic of the One to One thread mapping design 52 Enter the number of User and Kernel Threads Is User threads = Kernel threads? Yes No Report Error Is N cores free? No Yes Execute N User/Kernel Threads Wait No Remaining threads <= N ? Yes Execute remaining threads Figure 4.23: Flowchart of One to One Design The One to One design implementation is as follows: User enters M user threads and M kernel threads. 53 If user threads is not equal to the Kernel threads entered by the user the applet returns an error. Once the equality check is done, the applet looks for number of free cores. If there are N free cores, then N number of user and kernel threads are executed in parallel. We check if the remaining threads is less than equal to the number of free cores and execute N threads again. Same process is repeated till all the threads are done executing. 54 Figure 4.24: One To One Output with 3 User Threads, 3 Kernel Threads Figure 4.24 describes the output of the applet from One to One mapping model. This particular scenario has 3 user threads U0, U1 and U2 and 3 kernel threads K0, K1 and K2 selected by the user. Also according to the figure we see that 2 cores, Core 0 and Core 1 have been selected for execution. User threads U0 and U1 gets executed first by K0 and K1 on the 2 cores. However since we have only 2 cores in the system, We cannot have all three user threads executed at the same time. The execution of User thread U2 is not started until one of the cores has finished execution. Once a core becomes free, then the User thread U3 and Kernel Thread K3 gets loaded into it for execution. According to the scenario shown in the figure 4.24, U0 mapped to K0 gets executed in Core 0 and U1 55 mapped to K1 gets executed in Core 1 at the same time since both cores are active. U2 waits for one of the cores to finish execution and then it gets executed in Core 0 again. The Color legend is described as follows: RED describes that the thread is currently idle or not executing. BLUE indicates that the corresponding thread is currently being executed. GREY indicates the execution of that particular user thread is finished. BLACK LINES indicate to which kernel thread that particular user thread is mapped to. Many To One: The flowchart describes the Many to One Design Logic. 56 Enter the number of User threads Yes Is User threads >= Kernel threads? Report Error Execute 1 User, kernel thread User thread remaining? No Yes No Finish Execution Figure 4.25: Flowchart of Many to One Design The design logic of Many to One mapping technique is explained as follows. User enters M user threads. Applet checks if the User threads is greater than or equal to the kernel threads If successful, One user thread is executed at a particular time using the only kernel thread 57 We check if there are other user threads remaining. The remaining user threads are also executed one after the other using the single kernel thread. Figure 4.26: Many to One Output with 3 User threads, 1 Kernel Thread Figure 4.26 describes the Many to One output scenario in our applet. In this scenario, we have selected 3 user threads and 1 Kernel Thread. We have also selected only 1 Core for execution. Since we have only 1 kernel thread, only 1 user thread will be executed at any 58 point of time irrespective of the number of cores selected. U0 mapped to K0 gets executed first. Once execution of U0 has finished in Core 0, execution of U1 starts. U1 gets mapped on to K0 again and gets executed in Core 0. Similar process continues for the execution of U2. The same steps of execution will take place even if we had more than 1 core for execution as we have only 1 kernel thread to which the user threads can be mapped to. In this case we experience an overhead of idle core. Even though we have a core free for execution and user threads waiting to be executed, we cannot proceed with the execution until the kernel thread which is being used by the currently executing user thread becomes free. This is one of the drawbacks of the Many to One Model. Many To Many: The Flowchart for Many to Many design logic is explained as follows. User enters n user threads and m kernel threads. If user threads is not greater than or equal to the Kernel threads entered by the user the applet returns an error. Once the equality check is done, the applet looks for number of free cores. If there are N free cores, then N number of user and kernel threads are executed in parallel. We check if the remaining threads is less than equal to the number of free cores and execute N threads again. Same process is repeated till all the threads are done executing. 59 Enter n user threads and m kernel threads Yes Is n>=m ? No Report Error Is N cores free? No Yes Execute N User/Kernel Threads Wait No Remaining threads <= N ? Yes Execute remaining threads Figure 4.27: Flowchart of Many to Many design logic 60 Figure 4.28: Many to Many Output with 4 User threads, 3 Kernel Threads Figure 4.28 shows the output of a Many to Many thread mapping scenario. This output describes a scenario with 4 User threads mapped to 3 Kernel threads and uses 3 Cores for execution. The 3 User threads U0, U1 and U2 mapped on to K0, K1 and K2 gets executed first on Core 0, Core 1 and Core 2 respectively. User thread U3 waits till one of the Kernel thread finishes execution and one Core finishes execution. Once they are free the remaining user thread gets mapped on to the free kernel thread and gets executed on the free Core. Figure 4.28 shows U3 gets mapped onto K0 after it finishes execution of U0 and gets executed on Core 0. Here are a couple of snapshots of a few other User Kernel Mapping scenarios among many that can be achieved using this applet. 61 Figure 4.29: Many to Many Output with 4 User threads, 2 Kernel Threads 62 Figure 4.30: Many to One Output with 4 User threads, 1 Kernel Threads 63 Figure 4.31: One to One Output with 4 User threads, 4 Kernel Threads 64 CHAPTER 5 PERFORMANCE EVALUATION 5.1 Race Condition Applet: The user is able to view the output of a Race condition through the Banking algorithm scenario. The Race condition output is enabled to display 3 and 4 consecutive transactions. We can have various scenarios where each transaction enters the critical section at different time during execution. Hence we have N transactions enter the critical section at N-1 different times. Each transaction has 3 steps of execution. They are Load, Increment/Decrement and Store. Each of these steps may enter the critical section at different times. The time points at which these one of these execution paths may enter the critical section will vary depending upon the number of transactions performed in parallel. For example, during a 3 transaction scenario in our applet, we will have Load, Increment/Decrement and Store performed 3 times each. To calculate the total number of execution paths for this scenario, we will have to consider the total number of execution paths for each of the execution steps. 3 executions of load can enter the critical section at 3 different times and hence the total number of outcomes is calculated by 3*3*3 times. Similarly total number of outcomes for increment and Decrement can be 3*3*3 times. The total number of outcomes for store can be determined by 3*2*1 times. So the total 65 outcomes possible in a 3 transaction scenario is P = 3*3*3*3*3*3*3*2*1 = 4374 times. The table 5.1 shows 1 outcome among P possible outcomes. Table 5.1: Race Condition with 3 Transaction Output Scenario Time K1 Transaction T1 Load K2 K3 K4 K5 K6 K7 K8 K9 Transaction T2 Transaction T3 Load Increment Amount $1000 $1000 Increment Load Store $1000 $1100 Decrement Store Store $1100 $1000 Each transaction T1and T2 are doing Load, Increment and Store while Transaction T3 is doing Load, Decrement and Store. T1 and T2 increment $100 to the amount while T3 is decrementing $100 from the amount. The final amount displayed will be the last Store at the K9 time slot. Similarly in a 4 transaction scenario, we have 4 execution instances of Load, Increment/Decrement and Store. So the total number of possible outcomes can be calculated as follows: 4 instances of load enters the critical section at 4 different times and hence total number of outcomes possible for Load is 4*4*4*4. Total number of outcomes for Increment/Decrement is 4*4*4*4. And total number of outcomes for Store 66 is 4*3*2*1. So the total possible outcomes possible for a 4 transaction scenario is Q=4*4*4*4*4*4*4*4*4*3*2*1 = 1572864. Table 5.2 describes a 4 transaction scenario. Table 5.2: Race Condition with 4 Transaction Output Scenario Time K1 K2 K3 K4 K5 K6 K7 K8 K9 K10 K11 K12 Transaction Transaction Transaction Transaction T1 Load T2 T3 T4 Amount 1000 1000 1000 Load Load Decrement Increment Store 900 Decrement Store Load Store Increment Store 900 900 1100 1000 Table 5.2 shows 1 outcome of the Q possible outcomes. T1 and T3 are decrementing $100 to the amount while T2 and T4 are incrementing $100. T4 is loading an amount of $900 in Time K9 because a Store was done in Time K8 by T1. 67 If all the values entered during the input are correct, then the user will be able to view the output without any issue. However the applet will give an error message or fails to output the right results if the following input values are not as mentioned. The amount to withdraw must always be an integer. Floating point numbers returns an error from the applet. The Bank Balance is always $1000 during the start of any transaction. If the total number of transactions combined with the total amount becomes less than zero, the output will always display a zero and not negative number. The Number of transactions is limited from 2 to 4 for the convenience of the user. If the total exceeds 4, the input screen will refresh prompting the user to repeat the input values. 5.2 Dining Philosophers Applet: In the Dining Philosopher’s problem applet, the user will only be able to select one of the 3 output modes which he wishes to view. The radio buttons ensure that the user can only select one of the modes at a particular time. Since there are no other variables or parameters that can be manipulated while the applet is running, the applet cannot give any wrong outputs at any given point of time. The user do not have an option to input any variables in this applet. The performance of the Deadlock mode is pretty simple. We can have only 1 possible outcome. All the philosophers pick up chopsticks at the same time that result in a 68 deadlock situation. This is the only way to achieve a Deadlock with Dining philosophers’ problem. The starvation condition can be achieved when 1 philosopher among the 5 do not get a chance to eat. 4 other philosophers eat alternatively while the 5th philosopher is left hungry till the end. Table 5.3: Starvation condition Scenarios Philosopher Philosopher Philosopher Philosopher Philosopher 1 2 3 4 5 1 Eat Tn Eat Tm Eat Tn Eat Tm Hungry 2 Eat Tn Eat Tm Eat Tn Hungry Eat Tm 3 Eat Tn Eat Tm Hungry Eat Tn Eat Tm 4 Eat Tn Hungry Eat Tm Eat Tn Eat Tm 5 Hungry Eat Tn Eat Tm Eat Tn Eat Tm Scenario If a Philosopher P1 eats at a Time Tn then the philosopher next to him can only start eating at a time Tm. Tm is the time when the previous philosopher has stopped eating and has dropped his fork and the current philosopher has picked it up and is ready to eat. 69 Synchronous condition allows each philosopher to get equal opportunity to eat the food in front of him. No philosopher will be starved for a long time and the algorithm ensures that deadlock scenario won’t occur. Figure 4.10 in the Implementation chapter explains the algorithm used to display the synchronous output. 5.3 User, Kernel Thread Implementation Applet Table 5.4: Performance of User Kernel threads with P cores Number of Number of Total Cores Active Idle cores at User Threads Kernel available cores at a a time Threads One to One Many to N N time P N if P>N. P-N if P>N. Else P Else 0 P-1 N 1 P 1 N M<=N P M if P>M. P-M if P>M. One Many to Many Else P Else 0 70 Table 5.4 explains the performance of the different user and kernel threads models with P cores. In One to One mapping models, if we have N user threads then we also have N kernel threads. So the total number of cores used will be N and the total idle cores will be P-N. In Many to One model, with N user threads we can have only 1 active kernel thread. Hence we can have only 1 active core at a time and P-1 idle cores. This scenario is said to have the worst performance as we cannot do any multi-threaded execution. In Many to Many model with N user threads we can have M<=N kernel threads. Hence M user and kernel threads will be executed at a single time on M different cores. The total number of Idle cores will be P-M. The User Kernel thread implementation applet allows the user to select the number of User threads, the number kernel threads, number of cores for execution and the mode of execution in the input page. The user can view the output if the correct values are selected at the input. However, if the user selects one of the following input values, the applet returns an error or will not display the right values. In One to One implementation if the number of user threads is not equal to the number of kernel threads, the applet returns an error message, prompting the user to input the right values again. 71 In Many to One implementation if the number of Kernel threads selected is greater than 1, the applet returns an error message prompting the user to input the right values. In Many to Many implementation if the number of kernel threads is greater than the number of user threads specified the applet returns an error code prompting the user to input the right value. If the cores during any of the implementation specified is equal to zero, the applet returns an error message prompting the user to input the right value. 72 CHAPTER 6 CONCLUSION AND FUTURE ENHANCEMENTS The purpose of development of these applets is to help those users who want to learn the operating system concepts like Race Condition, Deadlocks etc. By demonstrating real world scenarios and allowing the users to actually manipulate the inputs, these applets are more intuitive and more informative than a textbook. This will help even a novice user to get a clearer picture of what is going on with the system when he encounters one of these scenarios. Future enhancements for Dining Philosopher’s Problem Applet: Use an even more optimized algorithm to solve the DP problem. This would make the execution more quick and efficient. Future Enhancements for Race Condition Applet: Use different techniques to counter the Race condition and to measure the performance of each technique to check which one is more efficient. Future enhancements for User Kernel Threads Applet: Use an algorithm to detect which core will be freed next and allocate it to the next waiting User thread in the queue. This scenario is particularly helpful in a large system with multiple User and Kernel Threads. 73 BIBLIOGRAPHY [1] Link to OSCAL from Dr. Du Zhang http://gaia.ecs.csus.edu/~zhangd/oscal/oscal.htm [2] Microsoft support page for Race Condition http://support.microsoft.com/kb/317723 [3] Wikipedia page for Dining Philosopher’s Problem http://en.wikipedia.org/wiki/Dining_philosophers_problem [4] Online tutorials for multithreading in Operating systems http://www.tutorialspoint.com/operating_system/os_multi_threading.htm [5] Online tutorial on Java applets by Oracle http://docs.oracle.com/javase/tutorial/deployment/applet/getStarted.html [6] Online tutorial on Java Virtual Machine http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-1.html#jvms-1.2 [7] Online tutorial on AWT by Oracle http://docs.oracle.com/javase/7/docs/api/java/awt/package-summary.html 74 [8] Online tutorial for AWT controls http://www.tutorialspoint.com/awt/awt_controls.htm [9] Online tutorial on Mutex by Paul Bridger http://www.paulbridger.com/mutexes/ [10] Online tutorial on Java Swings by Oracle http://docs.oracle.com/javase/tutorial/uiswing/ [11] Wikipedia page for loosely coupled and MVC http://en.wikipedia.org/wiki/Swing_(Java)#Loosely_coupled_and_MVC [12] Dining Philosopher’s problem applet implemented by UC Berkeley. http://ptolemy.eecs.berkeley.edu/ptolemyII/ptII8.0/ptII8.0.1/ptolemy/