ICT Standards and Guidelines
Segment 105
Operating Systems
Main Document
(Version 2.0)
Disclaimer
The Office of the Minister of State for Administrative Reform (OMSAR) provides the
contents of the ICT Standards and Guidelines documents, including any component or
part thereof, submission, segment, form or specification, on an 'as-is' basis without
additional representation or warranty, either expressed or implied. OMSAR does not
accept responsibility and will not be liable for any use or misuse, decision, modification,
conduct, procurement or any other activity of any nature whatsoever undertaken by any
individual, party or entity in relation to or in reliance upon the ICT Standards and
Guidelines or any part thereof. Use of or reliance upon the ICT Standards and Guidelines
is, will be and will remain the responsibility of the using or relying individual, party or
entity.
The ICT Standards and Guidelines are works in progress and are constantly being
updated. The documentation should be revisited regularly to have access to the most
recent versions.
The last date of update for this document was June 2003.
Table of Contents - Operating Systems
1.0   Executive Summary for Operating Systems
2.0   The Background of Operating Systems
      2.1   The Scope of Operating Systems
      2.2   The Benefits of Standardization
      2.3   Policies to Follow for Operating Systems
      2.4   Risks Resulting from the Standardization Activities
      2.5   Related Documents
      2.6   How to Use This Document?
      2.7   Related Terms and Acronyms
      2.8   Related Segments and Cross References
      2.9   Related International Standards
      2.10  All Segments in the ICT Standards and Guidelines
3.0   Classification of Operating Systems
      3.1   Proprietary Operating Systems
4.0   Operating Systems Requirements
      4.1   Mandatory Requirements
      4.2   Operating Systems and Hardware Platforms
      4.3   Additional Evaluation Criteria
5.0   Scalability
      5.1   The Scaling Modes
            5.1.1  Scaling Up
            5.1.2  Scaling Out
            5.1.3  Scaling Down
      5.2   Measurement Factors for Scalability
      5.3   Shared-Memory Multiprocessing Scalability
      5.4   Interconnection Strategies and OS Flexibility
      5.5   Memory Scalability
      5.6   Storage Scalability
            5.6.1  Fast and Reliable Storage – Striping and Mirroring
            5.6.2  Storage Availability
      5.7   64-Bit Support
      5.8   Scalability Clustering Options
      5.9   Low-Level Optimizations
      5.10  Measuring the System Scalability
            5.10.1  Using the TPC-C Benchmark for Online Transaction Processing
            5.10.2  Web Server Performance (SPECWeb)
6.0   RAS (Reliability, Availability and Serviceability)
      6.1   Component Failure Resilience
            6.1.1  Dynamic Processor Resilience
            6.1.2  Dynamic Memory Resilience
            6.1.3  Virtual IP Addresses
            6.1.4  Alternate I/O Pathing (Multipath I/O)
      6.2   Dynamic Reconfiguration
      6.3   RAID Solutions
      6.4   Journaling File System
      6.5   High Availability (HA) Clustering Options
      6.6   Workload-Management Tools
7.0   Memory Management
      7.1   Modeling Multiprogramming
      7.2   Virtual Memory
      7.3   Segmentation
8.0   Security in Operating Systems
      8.1   Multilevel Security
            8.1.1  Discretionary Access Control
            8.1.2  Mandatory Access Control
            8.1.3  Multilevel Security Models
      8.2   Typical Security in Operating Systems
9.0   I/O Management
      9.1   Basic I/O Techniques
      9.2   The I/O Software Levels
      9.3   Other I/O Criteria
            9.3.1  Synchronous/Asynchronous Transfers
            9.3.2  Sharable/Dedicated Devices
            9.3.3  Graphical User Interface (GUI)
10.0  System Management
      10.1  System Management Activities and Processes
      10.2  Components of Systems Management
            10.2.1  Day-to-Day Knowledge and Control Components
            10.2.2  Tools and Good Practices: Preventive Policy Setting Components
            10.2.3  Forecast for Continuous Improvement Components
      10.3  Benefits of System Management
      10.4  Basic OS System Management Features
            10.4.1  Hardware Management
            10.4.2  Operating System Management
            10.4.3  Software Management
            10.4.4  Event Management
            10.4.5  Storage Management
            10.4.6  Remote Management
11.0  Unicode and Multilingual Computing
      11.1  Multilingual Computing
      11.2  Software Internationalization
      11.3  Internationalization Framework
      11.4  Supporting the Unicode Standard
            11.4.1  Benefits of Unicode
            11.4.2  The Unicode Standard
12.0  Distributed Operating Systems
      12.1  DOS (Distributed Operating System)
      12.2  NOS (Network Operating System)
      12.3  The Middleware Models
13.0  Application Server Support
      13.1  Web-Protocol Support
      13.2  Enterprise Application Integration (EAI)
            13.2.1  Data Exchange: The XML Breakthrough
      13.3  Interoperability with Other Operating Systems
      13.4  Network Connectivity and Services
      13.5  Universal Data Access
            13.5.1  Cross-Platform File and Print Sharing
            13.5.2  Database Access
      13.6  Distributed Applications and Cross-Platform Application Development
            13.6.1  Java Support
            13.6.2  The Java Platforms: J2EE, J2ME
            13.6.3  CORBA
      13.7  The Application Server Platform
            13.7.1  The Value of the Application Server Platform
      13.8  Web Services
1.0 Executive Summary for Operating Systems
The objective of this segment is to present guidelines that can be used during the
acquisition, development and maintenance of Operating Systems.
The segment defines the basic components of operating systems. It then sets out the mandatory requirements for selecting operating systems:
• Multiuser
• Multiprocessing
• Multitasking
• Multithreading
The bulk of the segment covers criteria to be addressed when selecting operating
systems that are desirable, depending on specific conditions and requirements. These
are summarized in the following list:
• Scalability
• RAS (Reliability, Availability and Serviceability)
• Memory Management
• Security in Operating Systems
• I/O Management
• System Management
• Unicode and Multilingual Computing
• Distributed Operating Systems
• Application Server Support
The segment is related to other Standards and Guidelines segments, such as Information Integrity and Security, Database Systems and Software Applications, which can be downloaded from OMSAR's web site for ICT Standards and Guidelines at www.omsar.gov.lb/ICTSG.
2.0 The Background of Operating Systems
In order to place the products to be discussed in context, this Section classifies operating
systems according to use. For the sake of completeness, all types are presented whether
they are being used in Lebanon at the moment or not.
An operating system is the program that, after being initially loaded into the computer's memory by a boot program, manages all other programs on the computer. The other programs, called application programs, make use of the operating system by submitting requests for services through a defined application program interface (API). In addition, users can interact directly with the operating system through different interfaces; some are command-driven, others graphical.
An operating system performs the following services for applications:
• In a multitasking operating system, it determines which applications should run, in what order, and how much time should be allowed for each application before giving another application a turn.
• It manages the sharing of internal memory among multiple applications.
• It handles input and output to and from attached hardware devices, such as hard disks, printers and dial-up ports.
• It sends messages to each application, interactive user or system operator about the status of operation and any errors that may have occurred.
• It can take over the management of batch jobs, such as printing, so that the initiating application is freed from this work.
• On computers that can provide parallel processing, it can manage the division of programs so that they run on more than one processor at a time.
Linux, Windows 2000, Solaris, VMS, OS/400, AIX and z/OS are all examples of operating
systems.
2.1 The Scope of Operating Systems
All types of operating system requirements are discussed in this segment, with the exception of those that pertain to large-scale computing or that are not common in the Lebanese public sector, such as requirements for parallel computing.
2.2 The Benefits of Standardization
Standardizing Operating systems results in major benefits in the following areas:
• Reduced training
• Increased experience
• Reduced costs when bulk purchasing is followed
2.3 Policies to Follow for Operating Systems
The following policies are proposed:

• Operating Systems should be acquired in line with the mandatory criteria presented in this segment.
• The various evaluation criteria presented in this segment should be observed.
• Standardized practices should be used, as these reduce training and troubleshooting costs.

2.4 Risks Resulting from the Standardization Activities
When standardization is implemented, the following risks may arise:

• The mandatory criteria are not observed while acquiring Operating Systems
• The recommended evaluation criteria are not used
2.5 Related Documents
One document is related to this segment: a set of appendices describing various operating systems in detail. It can be downloaded from OMSAR's website for ICT Standards and Guidelines at www.omsar.gov.lb/ICTSG/OS.
It would also be useful to refer to the document associated with the Data Definition and
Exchange segment which introduces XML and Web Services. This can be downloaded
from OMSAR's website for ICT Standards and Guidelines at
www.omsar.gov.lb/ICTSG/DE.
2.6 How to Use This Document?
There are two documents. The main document defines Operating Systems and sets out mandatory and selection criteria for each of the components and features. A separate document presents a set of appendices covering such issues as specific Operating Systems and Directory Services. It can be downloaded from OMSAR's web site for ICT Standards and Guidelines at www.omsar.gov.lb/ICTSG/OS.
The Mandatory Criteria are discussed in Section 4.1.
The related selection or evaluation criteria are the following:
• Scalability: See Section 5.0
• RAS: See Section 6.0
• Memory Management: See Section 7.0
• Security Management: See Section 8.0
• I/O Management: See Section 9.0
• System Management: See Section 10.0
The following sections are included to cover the current trend for internationalization,
web and distributed applications. They deal with Application Integration issues and
solutions:
• Multilingual Computing: See Section 11.0
• Distributed Operating Systems: See Section 12.0
• Application Server Support: See Section 13.0
This main document is also supplemented by the documents mentioned in the previous
section.
2.7 Related Terms and Acronyms
API     Application Program Interface
DOS     Distributed Operating System
GB      Gigabyte (approximately 1 billion bytes)
GUI     Graphical User Interface
JVM     Java Virtual Machine
LAN     Local Area Network
LPAR    Logical Partition
LVM     Logical Volume Manager
MFLOPS  Millions of Floating-Point Operations Per Second
NAS     Network Attached Storage
OLTP    Online Transaction Processing
OS      Operating System
PCI     Peripheral Component Interconnect
RAID    Redundant Array of Inexpensive Disks
RAS     Reliability, Availability and Serviceability
RTOS    Real-Time Operating System
SAN     Storage Area Network
SMP     Symmetric Multiprocessing Platform
SPEC    Standard Performance Evaluation Corporation
TB      Terabyte (approximately 1 trillion bytes)
TPC-C   Transaction Processing Performance Council's "C" benchmark
tpmC    Transactions per minute (the TPC-C measure)
2.8 Related Segments and Cross References
The following segments are related to the Operating Systems segment:
101  Hardware Systems                    www.omsar.gov.lb/ICTSG/HW
104  Database Systems                    www.omsar.gov.lb/ICTSG/DB
202  Software Applications               www.omsar.gov.lb/ICTSG/SW
203  Evaluation + Selection Framework    www.omsar.gov.lb/ICTSG/EV
204  Information Integrity and Security  www.omsar.gov.lb/ICTSG/SC
Each page contains the main document and supplementary forms, templates and articles
for the specific subject.
2.9 Related International Standards
There are no related standards for Operating Systems.
2.10 All Segments in the ICT Standards and Guidelines
OMSAR's website for ICT Standards and Guidelines is found at www.omsar.gov.lb/ICTSG and it points to one page for each segment. The following pages will take you to the home pages of the three main project documents and the 13 segments:
     Global Policy Document              www.omsar.gov.lb/ICTSG/Global
     Cover Document for the 13 Segments  www.omsar.gov.lb/ICTSG/Cover
     Legal Recommendations Framework     www.omsar.gov.lb/ICTSG/Legal
101  Hardware Systems                    www.omsar.gov.lb/ICTSG/HW
102  Networks                            www.omsar.gov.lb/ICTSG/NW
103  Telecommunications                  www.omsar.gov.lb/ICTSG/TC
104  Database Systems                    www.omsar.gov.lb/ICTSG/DB
105  Operating Systems                   www.omsar.gov.lb/ICTSG/OS
106  Buildings, Rooms and Environment    www.omsar.gov.lb/ICTSG/EN
201  Quality Management                  www.omsar.gov.lb/ICTSG/QM
202  Software Applications               www.omsar.gov.lb/ICTSG/SW
203  Evaluation + Selection Framework    www.omsar.gov.lb/ICTSG/EV
204  Information Integrity and Security  www.omsar.gov.lb/ICTSG/SC
205  Data Definition and Exchange        www.omsar.gov.lb/ICTSG/DE
206  Risk Management                     www.omsar.gov.lb/ICTSG/RM
207  Configuration Management            www.omsar.gov.lb/ICTSG/CM
Each page contains the main document and supplementary forms, templates and articles
for the specific subject.
3.0 Classification of Operating Systems
Operating systems can be grouped into the following categories:
• Supercomputing is primarily scientific computing, usually modeling real systems in nature. Render farms are collections of computers that work together to render animations and special effects. Work that previously required supercomputers can now be done with the equivalent of a render farm. Such computers are found in public research laboratories, universities, weather-forecasting laboratories, defense and energy agencies, etc.

• Mainframes used to be the primary form of computer. Mainframes are large centralized computers; at one time, they provided the bulk of business computing through time-sharing. Mainframes and mainframe replacements (powerful computers or clusters of computers) are still useful for some large-scale tasks, such as centralized billing systems, inventory systems and database operations. When mainframes were in widespread use, there was also a class of computers known as minicomputers: smaller, less expensive versions of mainframes for businesses that could not afford mainframes.

• Servers are computers or groups of computers used for Internet serving, intranet serving, print serving, file serving and/or application serving. Clustered servers are sometimes used to replace mainframes.

• Desktop operating systems are used on standalone personal computers.

• Workstations are more powerful versions of personal computers. Often only one person uses a particular workstation, which runs a more powerful version of a desktop operating system. Workstations are usually connected to larger computer systems through a LAN.

• Handheld operating systems are much smaller and less capable than desktop operating systems, so that they can fit into the limited memory of handheld devices. Barcode scanners and PDAs are examples of devices running such systems. Currently, the PDA world is witnessing an operating system battle between several players (Microsoft's Windows CE/Pocket PC, Palm OS, etc.).

• Real-time operating systems (RTOS) are designed to respond to events that happen in real time. Computers using such operating systems may run ongoing processes in a factory, emergency-room systems, air traffic control systems or power stations. These operating systems are classified according to the response time they must meet (seconds, milliseconds, microseconds) and according to whether or not they involve systems where failure can result in loss of life. As in the case of supercomputers, there are no such systems in Lebanon today; however, given the way the technology is growing, it may be possible to use them in the future.

• Embedded systems are combinations of processors and special software inside another device, such as content switches or Network Attached Storage devices.

• Smart card operating systems are the smallest operating systems of all. Some handle only a single function, such as electronic payments; others handle multiple functions. These are often proprietary systems, but more and more smart cards are Java-oriented.

• Specialized operating systems run dedicated machines, like database computers, which are high-performance data-warehousing servers.

The above operating systems are commonly found in government agencies and private industries.
3.1 Proprietary Operating Systems
The above classification has been "rationalized" somewhat by the advent of UNIX standardization and the PC breakthrough. Today we find two classes of modern operating systems:

• Proprietary UNIX and Linux operating systems, which span many if not all of the above categories.
• Microsoft Windows, which initially targeted the desktop but is now penetrating the handheld and server markets.

Besides these two types of operating systems, there are also Virtual Machine solutions. Virtual Machine solutions facilitate portability and interoperability between these different systems by presenting a layer that hides the OS from distributed applications. The Java Virtual Machine (JVM), for instance, runs applications that span multiple computers with multiple operating systems.
4.0 Operating Systems Requirements
This Section describes the various requirements to consider when selecting operating systems.
4.1 Mandatory Requirements
When selecting an operating system, it is critical to consider what types of applications need to run and what type of hardware is available. In general, an operating system in an N-tier distributed software architecture needs, at least, the following features:
• Multiuser: Allows multiple users to run programs at the same time.
• Multiprocessing: Supports running a program on more than one CPU.
• Multitasking: Allows more than one program to run concurrently.
• Multithreading: Allows different parts of a single program to run concurrently.
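As a minimal illustration of the last requirement, the Python sketch below runs two parts of a single program concurrently; the worker function and its arguments are invented for the example.

    import threading

    def worker(name, steps):
        # Each thread executes this function independently of the other.
        for i in range(steps):
            print(f"{name}: step {i}")

    # Two threads of one program running concurrently (multithreading).
    t1 = threading.Thread(target=worker, args=("thread-1", 3))
    t2 = threading.Thread(target=worker, args=("thread-2", 3))
    t1.start(); t2.start()
    t1.join(); t2.join()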
Most available commercial operating systems satisfy these requirements:

• Microsoft Windows dominates the desktop market and has deeply penetrated the server market.
• LINUX, an Open Source UNIX-based system, is supported by an increasing number of software publishers, hardware manufacturers and communities of developers. LINUX is gaining a large following in government agencies, education and industry.
• Apple's Mac OS X brings the famous Apple look and feel to the UNIX world.
• Proprietary UNIX systems, like Solaris from Sun Microsystems and AIX from IBM, are well established in the server market, with stability and scalability recognized from their vendors' long experience in the server business.

The above requirements are discussed from Section 5.0 onwards.
4.2 Operating Systems and Hardware Platforms
As of 2002, the major commercial operating system offerings by hardware platform are:

• For basic desktops and laptops: Windows XP, Mac OS X, Linux
• For high-end desktops, entry-level servers and mid-range servers: Windows 2000, Mac OS X, Linux
• For workstations and low-end to high-end servers: proprietary UNIX, such as Solaris from Sun Microsystems, AIX from IBM and HP-UX
An additional document, "Operating Systems Appendices", provides details about the current status of specific operating systems. It can be downloaded from OMSAR's web site for ICT Standards and Guidelines at www.omsar.gov.lb/ICTSG/OS.
4.3 Additional Evaluation Criteria
The following Sections will address common evaluation criteria for choosing the
operating system:
• Scalability: See Section 5.0
• RAS: See Section 6.0
• Memory Management: See Section 7.0
• Security Management: See Section 8.0
• I/O Management: See Section 9.0
• System Management: See Section 10.0
The following sections are included to cover the current trend for internationalization,
web and distributed applications. They deal with Application Integration issues and
solutions:
• Multilingual Computing: See Section 11.0
• Distributed Operating Systems: See Section 12.0
• Application Server Support: See Section 13.0
5.0 Scalability
Ever-expanding ICT needs are pushing the limits and capabilities of existing information technology platforms, making the scalability of operating systems one of the most important evaluation criteria.
Scalability refers to how well the performance of a computer system responds to
changes in configuration variables, such as memory size or numbers of processors in an
SMP-based system (Symmetric Multiprocessing Platform). Most often, system architects
are interested in the response of a critical application to an increase or decrease in
resources available to end users.
Typical questions that ICT managers ask in this regard include:

• As the size of the database or data warehouse grows, is it possible to keep response time constant merely by adding CPU and memory resources?
• How well can the messaging server respond to a burgeoning user population?
• How well will the system respond to expanding use of the messaging system beyond mail, as the platform for workflow and workgroup collaboration services?
• How will adding clients, such as service users and other Agencies, connecting into systems through the Internet (with appropriate security controls) affect system performance?
• How will increasing complexity in database transactions to accommodate new user requirements affect response time, batch throughput or the number of simultaneous users who can be supported by a fixed server configuration?
There is no single answer to these questions. Answers depend on the complex
interaction of multiple variables such as system components of the underlying hardware,
characteristics of the network, application design and the architecture and capabilities of
the operating system. This applies to any server based system that Ministries or
Agencies may consider deploying.
Measuring scalability: There are means for measuring the ability of an operating system to scale from use on a very small personal computer to a very large network of clusters of high-powered multiprocessor servers or mainframes (refer to Section 5.10).
5.1 The Scaling Modes
In general, the OS design goal has been to architect server solutions so that customers have the flexibility to scale server-based systems up, out or down without compromising the multipurpose and price/performance advantages of the server platform.
5.1.1 Scaling Up
This is achieved by adding more resources, such as memory, processors and disk drives, to a system. This technique enables the server to support very large decision support, enterprise resource planning (ERP) and data mining applications in clustered, highly available and manageable systems on industry-standard hardware. But scalability is more than brute force: application design, database tuning, network configuration and well-developed data centre procedures matter substantially.
5.1.2 Scaling Out
This delivers high performance when the throughput requirements of an application
exceed the capabilities of an individual system. By distributing resources across multiple
systems, contention for these resources can be reduced and availability improved.
Clustering and system services, such as reliable transaction message queuing, allow
distributed Server-based applications to scale out in this manner.
5.1.3 Scaling Down
This can also deliver tangible service benefits. For example, an Agency may divest a division or move a formerly centralized function from the data centre to a division. In another instance, the ICT department may decide to transition, or possibly repartition, workloads to improve overall performance and enhance life-cycle management of server systems. In either case, distributed server-based systems make this possible without having to completely overhaul the system or reprogram applications.
5.2 Measurement Factors for Scalability
The following are the major measurement factors for scalability:
• Shared-Memory Multiprocessing scalability
• Interconnection strategies and OS flexibility
• Memory scalability
• Storage scalability
• 64-bit support
• Scalability clustering options
• Low-level optimizations
These factors will be discussed thoroughly in the following Sections.
5.3 Shared-Memory Multiprocessing Scalability
Scalable, well-designed hardware is a huge advantage for the customers of any
operating system. It provides the benefits of lower cost, broad choice and a constant
stream of innovation. These benefits are very much a part of the Shared-Memory and
the Clustered Systems solutions and a significant part of its growing popularity.
Multiprocessor systems for Server-based systems rely on many processor families. Major
processor families use sophisticated on-chip cache and bus-controller logic designed
specifically for Shared-Memory configurations.
Shared-Memory Multiprocessing boosts system performance by harnessing multiple
processors within a single server that share the same memory and I/O resources.
Shared-Memory Multiprocessing offers many advantages over other multiprocessing techniques:

• It incurs fewer penalties related to management and processing overhead
• It is relatively easy for application developers to exploit in their code
Shared-Memory Multiprocessing remains one of the most effective ways to increase
system performance for many key business applications, including:
• Database servers
• Online Transaction Processing (OLTP) functions
• Heavy scientific computing where high MFLOPS (Millions of Floating-Point Operations Per Second) are required
Since all processors in a Shared-Memory server must be able to access all system resources simultaneously, the operating system is deeply involved in the quality of a Shared-Memory Multiprocessing implementation.
Currently, the Transaction Processing Performance Council's TPC-C benchmark remains the most widely accepted method for assessing the Shared-Memory range of server systems. When using benchmarks such as TPC-C to assess the quality of Shared-Memory implementations, it is tempting to focus on the absolute tpmC (transactions per minute) reached, or on the greatest number of processors used, as proof of an operating system's SMP capabilities.

However, other factors, such as the hardware and software involved, the processor performance of the systems and the database or web server used, also need to be assessed when gauging the ability of the operating system itself to exploit Shared-Memory hardware.
5.4 Interconnection Strategies and OS Flexibility
Designing hardware for scalable computing depends on the interconnect strategy. Specifically, as the number of processors increases and individual processor speeds increase, the bottleneck preventing further growth in throughput moves to the bus architecture, which ferries information between processors and memory. Today, there are essentially two interconnect strategies:

• All processors use a single shared address space
• Each processor has a private memory, and processes communicate with one another using a message-passing mechanism

The overwhelming majority of today's multiprocessing configurations use the first strategy. Scalable clustered configurations use the second.

Today, chip designers are embedding greater "intelligence" into microprocessors. For example, pre-fetching data and instructions into the caches minimizes the time that processors must spend waiting for data to cross the bus.
OS for Servers have been designed to take advantage of these features in order to
maximize the effectiveness of these processor caches.
In the short term, many manufacturers are developing a next generation of data centre hardware for new OS server-based computing, incorporating a variety of advanced technologies. New multiprocessing technology combines the following items while eliminating some of the bottlenecks associated with traditional bus architectures:

• A third level of caching
• Directory-based cache coherency
• A fully connected point-to-point crossbar

Developers assert that this method of interconnection is likely to enable breakthroughs in both application and I/O scalability, with as much as 6.4 GB per second of overall throughput on up to 24 parallel PCI I/O buses.

Such systems can be configured either:

• As a single 32-way SMP system
• Divided into as many as eight subsystems, each with its own copy of the operating system
In the second case, each instance of the operating system has the ability to operate in
complete isolation from the other parts of the system. This allows a single system to
easily reflect the heterogeneous nature of the software needs of a data center.
5.5 Memory Scalability
Increasing physical memory capacity can significantly benefit database and Web server
applications because I/O to memory is much faster than I/O to disk.
The ability to address these memories is, of course, required; most 32-bit operating
systems can only address 4 GB of physical memory.
Some 32-bit OS, like Windows 2000, do get around this limitation and offer larger
memory capability. However such solutions usually impact performance quite severely.
The 64-bit support (Refer to Section 5.7) is by far a more adequate solution for scaling
up memory.
5.6 Storage Scalability
With the new ICT infrastructure, which introduces many applications that fill storage media quickly, data continuously expands to consume available storage space. E-services, imaging, data warehousing and enterprise resource planning (ERP) are some of these storage-consuming applications.

Data access for these applications needs to be fast and reliable, and availability is paramount.
5.6.1 Fast and Reliable Storage – Striping and Mirroring
Disk striping and disk mirroring increase the scalability of storage facilities and have to be integrated within the OS.

• Striping is a technique used to scale file systems by allowing data from a single logical disk volume to be physically stored across multiple physical disks.
• Mirroring is a technique used to enhance both file system performance and reliability by maintaining consistent copies of data across multiple disks. On read operations, either copy of a mirror set can be read, allowing for greater performance.
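The two placement rules can be sketched as follows; the four-disk set and the block numbering are arbitrary assumptions, and real volume managers add caching, parity and failure handling on top of this.

    DISKS = 4  # hypothetical number of disks in the set

    def striped_location(block_number):
        # Striping: logical block i lives on disk (i mod N), at stripe i // N.
        return block_number % DISKS, block_number // DISKS

    def mirrored_locations(block_number):
        # Mirroring: each block is kept on two disks; reads may use either copy.
        primary = block_number % DISKS
        return primary, (primary + 1) % DISKS

    print(striped_location(9))    # (1, 2): disk 1, stripe 2
    print(mirrored_locations(9))  # (1, 2): the two disks holding block 9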
These techniques offer several benefits:

• A fault-tolerant store maintains error-correction information on one disk for data stored on the remaining disks in a set.
• Caching controllers and Intelligent Input/Output can improve scalability by offloading I/O operations from the main processor, freeing it to do real user work instead of merely moving bits. This results in greater throughput and lower CPU utilization.
OS features should include base support, specialized board support, network adapters
and Redundant Array of Inexpensive Disks (RAID) cards.
Fast interconnects are also important. Support for Fiber Channel is an example. These
hardware features should also be supported by the OS.
Most OS on Servers support software RAID, as well as many third-party hardware RAID
solutions.
5.6.2 Storage Availability
Two technologies that show great promise are Storage Area Networks (SANs) and
Network Attached Storage (NAS). Both SAN and NAS allow system administrators to
collect large amounts of disk storage in highly localized and manageable units, such as
large servers full of RAID arrays. Application servers and users can then store their data
on these large bit buckets.
• NAS devices are analogous to a dedicated file server. A device includes large amounts of storage managed by a local file system. Multiple application servers can share NAS devices using standard network-level protocols.
• SAN devices are essentially large storage buckets. The file system runs on the application server and uses block-level I/O to access SAN devices. SANs typically use fiber channel and fiber switches to connect to application servers.
NAS is more appropriate for traditional LANs with heterogeneous hardware and software.
SAN is more appropriate for homogeneous hardware and software used in a computing
cluster.
Both NAS and SAN provide high-speed storage pools to a group of connected servers and high-speed workstations. To fully support these devices, operating systems need to manage file systems and files that range far beyond the 4 GB originally permitted by 32-bit systems.
To achieve support for 64-bit files, the operating system needs to provide the necessary base operating system functions and APIs. Theoretically, a 64-bit file system can address up to 16 exabytes (4 GB x 4 GB) of data. In practice, actual file system size will be limited; common operating systems should support at least 1 to 2 TB.
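These limits follow directly from the address width, as the arithmetic sketch below shows (binary units are used here; vendor literature often quotes decimal ones).

    # Bytes addressable with a given address width (binary units).
    for bits in (32, 64):
        total = 2 ** bits
        print(f"{bits}-bit address space: {total:,} bytes"
              f" = {total // 2**30:,} GiB")

    # The 4 GB x 4 GB figure quoted above: 2**32 * 2**32 = 2**64 bytes,
    # i.e. 16 exabytes (16 * 2**60).
    print(2 ** 64 == 16 * 2 ** 60)  # True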
5.7 64-Bit Support
64-bit computing can benefit many types of applications, such as simulation, digital content applications and financial applications. These applications generally use large databases and have high performance requirements. Direct 64-bit memory access makes it possible to cache entire database indexes, or even the database contents themselves, in physical memory, thereby reducing access time.
In general, operating systems can provide 64-bit capabilities at three incremental levels:
• Run on 64-bit processors such as Alpha, MIPS, PA-RISC, PowerPC, Intel Itanium or UltraSPARC
• Support large physical memory, i.e. real memory greater than 4 GB
• Provide large virtual memory, which allows applications to run in a 64-bit process address space
Only operating systems with these capabilities qualify as full 64-bit processing environments. Today, 64-bit computing is supported on all RISC-based microprocessors with their corresponding operating systems. Vendors of IA-32 systems are developing solutions for the IA-64 architecture, which must also remain IA-32 compatible.
5.8 Scalability Clustering Options
For networked computers, clustering is the process of connecting multiple systems together to provide greater overall system availability, performance, capacity, or some combination of these. Because the term clustering is so broad, other terms such as farm, parallel and Beowulf are used to describe specific cluster implementations. High Availability (HA) clustering solutions, by contrast, aim at providing enhanced availability for a service or application.
Performance clustering options typically fall into one of three broad categories:

• High-Performance Computing (HPC) clusters address computationally intensive problems
• Database clusters boost transaction throughput by spreading workload across multiple instances of a database server running in parallel
• Web-server farms (IP clusters) allow ISPs or corporate intranet sites to map all the traffic destined for a single website to a "farm" of multiple web servers across which the Internet traffic is balanced
The clustering option should be chosen according to the type of application. For example, for a huge data-warehouse type of application, database clusters are critical, and the operating system should be able to support the leading parallel-database servers. A web application, on the other hand, will require the operating system to offer web-server-farm solutions with either hardware-based or software-based IP load-balancing options.
5.9 Low-Level Optimizations
Some special critical applications can benefit from low-level operating system
optimizations, such as:
• Memory file system: Some operating systems can provide a file-system implementation that resides entirely in virtual memory, i.e., no permanent file structures or data are written to disk.
• Dynamic page sizing: Different applications require different page sizes. Applications that require large block transfers will not run well with small page sizes. Dynamic page sizing allows administrators to set the I/O page size per process; some operating systems support variable-sized virtual memory pages and dynamically adjust the page size used by an application to optimize performance.
• Kernel thread architecture: Operating systems need to support kernel threads, which are required to effectively scale multithreaded applications on SMP systems and which enable key programming techniques such as asynchronous I/O.
5.10 Measuring the System Scalability
System scalability can be measured using different parameters. Examples are:

• The size of the program
• The number of processor operations per unit time
• The number of concurrent users

The performance of the system is then measured for different values of these parameters. Scalability is expressed as the observed performance gain divided by the ideal (linear) gain; the nearer this ratio is to 1, the better the scalability.
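A hedged sketch of the calculation, using made-up throughput figures for a system stepped from 4 to 16 processors:

    # Hypothetical throughput (transactions/minute) at each CPU count.
    measurements = {4: 10_000, 8: 19_000, 16: 34_000}

    base_cpus = 4
    base_tpm = measurements[base_cpus]
    for cpus, tpm in sorted(measurements.items()):
        speedup = tpm / base_tpm      # observed gain over the baseline
        ideal = cpus / base_cpus      # gain under perfectly linear scaling
        ratio = speedup / ideal       # 1.0 would be perfect scalability
        print(f"{cpus:2d} CPUs: speedup {speedup:.2f}, ratio {ratio:.2f}")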
Many factors have a role in determining a system's overall scalability, including:

• Processor throughput and configuration
• Memory capacity
• Disk and network I/O bandwidth
• Operating system features
• Application software design
Although the most accurate comparisons are possible only when analyzing the
performance of specific, customer-relevant applications in a specific installation and
usage scenario, several industry standard benchmarks measure both relative and
absolute performance and price/performance in useful ways (the Transaction Processing
Performance Council is an example).
When used knowledgeably, these benchmarks help users to understand how hardware
and software alternatives can improve performance. They can also help compare
similarly configured servers. Benchmark data represents only one element in the
evaluation criteria. Other important factors to consider are:
• Price/performance
• Portability of the solution
• Application availability
• Total cost of ownership
• Vendor credibility
• Future technology directions
5.10.1 Using the TPC-C Benchmark for Online Transaction Processing
TPC-C is one of several benchmarks measuring relative scalability defined by the Transaction Processing Performance Council. The TPC-C benchmark is the most widely accepted measure of a transaction-processing system's cost and performance.
This benchmark emulates the characteristics of many real-world transaction-processing systems, making it an excellent measure of the scalability of online systems.
Specifically, TPC-C is a relatively complex application that models an order management
system for a wholesale supplier. The benchmark specifies the user interface, one
short-duration transaction type (New Order) and four longer-duration transaction types
(Payment, Order-Status, Delivery, Stock-Level) that access nine database tables.
TPC benchmarks focus on two measures of a system:

• Total throughput
• Price/performance
For TPC-C, throughput is the number of short-duration transactions per minute (TPM) that the system completes while it also executes longer-duration transactions. For a system to qualify under this benchmark, 90 percent of New Order requests must complete in less than 5 seconds while the system fulfills the workload of other transactions.
There are two ways to increase performance in this benchmark:

• Scaling up: A single computer system can scale up by adding memory, more processors and high-speed disks. Tuning the operating system, placing data and log files on disk and adding a TP monitor determine how these resources are used.
• Scaling out: A transaction-processing system can also "scale out" by partitioning the data and workload across more than one computer. In this case, clustered computers must maintain the degree of location transparency required by this benchmark, and they do so by using databases that can span multiple nodes of a cluster.
Price/performance is the cost of a system over three years divided by the number of
transactions it can perform in one minute. Terminals, communications equipment,
backup storage, software, the computer (or computers) and three years of maintenance
are included in the cost of a system. (Costs do not include system architecture and
ongoing maintenance and service costs.) Therefore, if the total system costs $500,000
and the throughput is 10,000 TPM, the price/performance of the entire system is $50 per
tpmC.
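The arithmetic of that example, restated as a short sketch; the figures are the illustrative ones quoted above, not real benchmark results.

    # Worked TPC-C price/performance example (figures from the text).
    total_system_cost = 500_000   # USD: hardware, software, terminals,
                                  # communications, three years' maintenance
    throughput_tpmc = 10_000      # New Order transactions per minute

    price_performance = total_system_cost / throughput_tpmc
    print(f"${price_performance:.0f} per tpmC")  # $50 per tpmC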
5.10.2 Web Server Performance (SPECWeb)
There are no Web server benchmarks as widely accepted and as thoroughly defined as those of the Transaction Processing Performance Council. However, the Standard Performance Evaluation Corporation (SPEC) develops suites of benchmarks intended to measure computer performance.
Specifically, SPEC developed SPECweb96, the first standardized World Wide Web server benchmark measuring basic Web server performance. Though not perfect, this benchmark is a standardized test that numerous vendors and users have embraced.
SPECWeb96 targets Web server systems designed to handle millions of hits per day and
hundreds or thousands of hits per second. The benchmark program uses a file mix
developed after analyzing the server logs from several Web service providers. The
SPECWeb96 benchmark measures HTTP GET commands, the command used to retrieve
a Web page. Results are reported in terms of operations per second.
Unfortunately, this benchmark does not contain several elements central to Web
application processing. For example, the benchmark only measures static page serving.
CGI calls, script execution and security event handling are not part of the test.
Generation of dynamic content would require the use of CGI or script execution. In
addition, tests are based on the HTTP 1.0 protocol. Also excluded from tests are HTTP
1.1 Keep-Alive packets, which can dramatically improve HTTP server performance.
Consequently, measurements overstate the performance of an HTTP server when CGI
calls are made, scripts are executed and security mechanisms are utilized. Similar to
other benchmarks cited, SPECweb may be more useful to assess relative performance
than it is to assess absolute performance, because the demands of Websites are highly
variable from enterprise to enterprise.
6.0 RAS (Reliability, Availability and Serviceability)
Today's globally networked economy demands powerful information systems that are
available to customers, business partners and employees around the clock, around the
world. When a company's system is down, the business costs can be enormous in lost
productivity, sales and profits, poor customer service and decreased customer loyalty.
This requires the operating system to be reliable, so that the system is less prone to errors. Availability needs to be maximized, so that downtime for planned or unplanned maintenance is minimized. Serviceability needs to be optimized. The following criteria can be examined when assessing the RAS of an operating system:
• Component failure resilience
• Dynamic reconfiguration
• RAID solutions
• Journaling file system
• High Availability (HA) clustering options
• Workload-management tools
These criteria are discussed in greater detail in the following Sections.
6.1 Component Failure Resilience
Failures can occur in critical components that are expensive and hard to replicate at the hardware level, including processors, memory and I/O devices. In such cases, the operating system needs to provide a degree of resilience, "self-healing" around the failure. The major mechanisms are:
6.1.1 Dynamic Processor Resilience
Dynamic Processor Resilience allows an operating system to adapt to processor failure by isolating failed CPU components. If a processor failure results in a system crash, the system restarts automatically after isolating the failed unit.
6.1.2 Dynamic Memory Resilience
Dynamic Memory Resilience allows an operating system to dynamically cordon off
memory that has suffered single-bit errors so that software no longer risks using
potentially unreliable areas.
6.1.3 Virtual IP Addresses
Virtual IP Addresses allow IP connections to remain unaffected if physical network
interfaces fail. System administrators define a virtual IP address for the host, which –
from a TCP connection standpoint – is decoupled from the IP address associated with
physical interfaces.
6.1.4 Alternate I/O Pathing (Multipath I/O)
Alternate I/O Pathing (Multipath I/O) allows an operating system to recover from the
failure of I/O devices such as disk or network adapters by re-routing I/O to a backup
device, while preserving logical references to the device so that applications continue
processing without interruption.
6.2 Dynamic Reconfiguration
As ICT infrastructures become increasingly web-based and globally oriented, servers
must be able to respond to requests 24/7/365. Operating systems need to reduce the
number of administrative tasks that require a system restart. Dynamic reconfiguration
will allow online addition and removal of components for repairs or upgrades without
rebooting. The following are the major areas:
• Online CPU and memory reconfiguration: Processors and memory can be added or removed without rebooting the operating system.
• Online I/O reconfiguration: I/O devices such as disk adapters and network cards can be added or removed, coupled with current hardware-reconfiguration capabilities, i.e., hot-plug Peripheral Component Interconnect (PCI).
• Capacity-on-Demand: Users can increase the processing power of systems without disrupting operations.
6.3 RAID Solutions
RAID, short for Redundant Array of Inexpensive Disks, is a method whereby information is spread across several disks, using techniques such as disk striping (RAID level 0) and disk mirroring (RAID level 1), to achieve:

• Redundancy
• Lower latency
• Higher bandwidth for reading and/or writing
• Recoverability from hard-disk crashes
Five types of array architectures, RAID-1 through RAID-5, were defined by the Berkeley
paper, each providing disk fault-tolerance and each offering different trade-offs in
features and performance. In addition to these five redundant array architectures, it has
become popular to refer to a non-redundant array of disk drives as a RAID-0 array.
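Beyond plain striping and mirroring, the intermediate RAID levels rest on parity: an XOR of the data disks is stored so that any single failed disk can be rebuilt from the survivors. A minimal sketch of that idea, using short byte strings to stand in for whole disks:

    from functools import reduce

    def xor_blocks(blocks):
        # Byte-wise XOR across equal-length blocks.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data_disks = [b"AAAA", b"BBBB", b"CCCC"]  # toy data blocks
    parity = xor_blocks(data_disks)           # stored on the parity disk

    # Simulate losing disk 1 and rebuilding it from the survivors + parity.
    rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
    assert rebuilt == data_disks[1]
    print("disk 1 rebuilt:", rebuilt)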
The approach to RAID solutions can be hardware- or software-based:

• A hardware-based system manages the RAID subsystem independently from the host and presents to the host only a single disk per RAID array, so the host does not have to be aware of the RAID subsystem(s). There are controller-based hardware solutions and external hardware solutions (SCSI-to-SCSI RAID).
• In the software approach, each OS builds its own special driver for the RAID solution, and the driver is built into the OS kernel. The software-based solution is not adequate by today's standards.
6.4 Journaling File System
A journaling file system is a fault-resilient file system in which data integrity is ensured because updates to directories and bitmaps are constantly written to a serial log on disk before the original disk blocks are updated. In the event of a system failure, a full journaling file system ensures that the data on the disk has been restored to its pre-crash configuration. It also recovers unsaved data and stores it in the location where it would have gone if the computer had not crashed, making it an important feature for mission-critical applications.
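A minimal write-ahead sketch of the journaling idea, assuming a simple append-only journal file; production journaling file systems log metadata block updates and batch their commits.

    import json, os

    JOURNAL = "fs.journal"  # hypothetical journal file

    def journaled_update(store, key, value):
        # 1. Record the intended update in the journal and force it to disk.
        with open(JOURNAL, "a") as j:
            j.write(json.dumps({"key": key, "value": value}) + "\n")
            j.flush()
            os.fsync(j.fileno())
        # 2. Only then apply the update to the real structure.
        store[key] = value

    def recover(store):
        # After a crash, re-apply every logged update; each record is
        # repeatable, so recovery is quick regardless of file count.
        if os.path.exists(JOURNAL):
            with open(JOURNAL) as j:
                for line in j:
                    record = json.loads(line)
                    store[record["key"]] = record["value"]
        return store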
Some of the criteria used to evaluate a journaling file system:

• Quick recovery: The system can be restarted very quickly after an unexpected interruption, regardless of the number of files it is managing. Traditional file systems must run special file system checks after an interruption, which can take many hours to complete.
• Fast transactions: Provides the advantages of journaling while minimizing the performance impact on read and write transactions. The journaling structures and algorithms are tuned to log transactions rapidly.
• High scalability and high bandwidth: Scales to the largest file systems and is capable of delivering near-raw I/O performance.
• Full journaling: Maintains the integrity of both the files and the file system.
6.5 High Availability (HA) Clustering Options
Today's globally networked applications demand powerful information systems that are
available to customers, Agencies, partners and employees around the clock, around the
world (embassies, for instance). When an Agency's system is down, the service disruption can be significant. Wherever advanced system availability and productivity are critical for strategic reasons, clustered multiprocessing offers essential help for such strategic services.
Clustering is the linking of two or more computers or nodes into a single, unified resource. High availability clusters enable:

• Parallel access to data
• Redundancy
• Fault resilience

These properties are required for business-critical applications.
Full-function HA clustering solutions usually include several key components:

• Failure detection
• Recovery
• Configuration tools
Clustering software monitors the health of systems and applications by running agents
that continuously probe for certain conditions. Vendors usually provide agents for
monitoring hardware, the operating system and key applications such as databases and
messaging systems.
Since the nodes of an HA cluster must be able to take over the workloads of other
nodes, they must share access to application data. Thus, HA clustering inherently
depends on the ability for multiple servers to access the same storage devices.
Currently, the industry focus is on phasing out direct-attached storage in favor of
storage devices that are connected directly to the network. Network Attached Storage
(NAS) devices are connected to the mainstream network, just like traditional servers,
whereas Storage Area Networks (SANs) involve use of a private, high-speed network
dedicated to sharing data.
6.6 Workload-Management Tools
Workload management is a key issue in SMP or cluster environments in which multiple tasks are executed on several nodes. Improper allocation of resources to these tasks can result in wasted resources, or in critical resources being allocated to less important tasks while higher-priority tasks wait. The goal of workload management is to optimize the allocation of resources to the tasks executed by such environments. The benefit is twofold:

• The customer is satisfied with an improved response time
• The system administrator has improved the utilization of resources through effective management of the system
Workload-management tools can help to overcome such problems by allowing large
numbers of resource-intensive applications to run simultaneously on a single server.
Through flexible scheduling policies, such tools are thus a key enabler for a variety of
server-consolidation tactics.
Two classes of workload-management tools are available:

• A logical partition (LPAR) is the division of a computer's processors, memory and storage into multiple sets of resources, so that each set can be operated independently with its own operating system instance and applications. The number of logical partitions that can be created depends on the system's processor model and available resources. Typically, partitions are used for different purposes, such as database operation, client/server operation, or separating test and production environments. Each partition can communicate with the other partitions as if it were a separate machine. Logical partitions allow administrators to run multiple instances of an operating system within a single server, each behaving as if it were running on a standalone machine.

• Resource management tools also work within a single operating system instance to effectively manage massive, constantly changing workloads, so that multiple dominant applications can coexist in a single environment.
7.0 Memory Management
The part of the Operating System that manages the memory hierarchy is called the memory manager. Memory management in modern desktop and server operating systems is highly sophisticated. Three major functions are crucial within the evaluation process. These are discussed in the next few sections.
7.1 Modeling Multiprogramming
Modern Operating Systems allow multiple programs to be in memory at the same time. To keep them from interfering with one another, some kind of protection mechanism is needed. While this mechanism has to be implemented in the hardware, it is controlled by the OS to ensure:

• Safe and efficient relocation and protection of memory addresses (see the sketch below)
• A well-performing multiprogramming system
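A classic form of this hardware mechanism is the base/limit register pair. The following Java sketch mimics the check the hardware performs on every memory reference; the addresses and sizes used are arbitrary:

    public class BaseLimitCheck {
        // Relocation and protection in miniature: every virtual address a
        // process issues is checked against its limit register, then offset
        // by its base register; an out-of-range access traps to the OS.
        static long translate(long virtualAddr, long base, long limit) {
            if (virtualAddr < 0 || virtualAddr >= limit) {
                throw new IllegalArgumentException("protection fault at " + virtualAddr);
            }
            return base + virtualAddr;   // relocation: physical = base + virtual
        }

        public static void main(String[] args) {
            long base = 0x40000, limit = 0x10000;   // this process owns 64 KB at 256 KB
            System.out.printf("0x1234 -> 0x%X%n", translate(0x1234, base, limit));
            translate(0x20000, base, limit);        // outside the limit: traps
        }
    }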
7.2 Virtual Memory
Operating Systems are, of course, in charge of virtual memory management. Besides organizing multilevel page tables and choosing the page replacement algorithm (illustrated below), the OS is closely involved with:

• Page fault handling
• Instruction backup
• Locking pages in memory
• Backing store
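As an illustration of one common page replacement algorithm, the Java sketch below simulates LRU (least recently used) replacement; the frame count and page reference string are chosen arbitrarily:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruPageTable {
        public static void main(String[] args) {
            final int frames = 3;   // physical frames available to this process
            // An access-ordered LinkedHashMap evicts the least recently used page.
            Map<Integer, String> resident =
                new LinkedHashMap<Integer, String>(frames, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Integer, String> e) {
                        boolean evict = size() > frames;
                        if (evict) System.out.println("evict page " + e.getKey());
                        return evict;
                    }
                };

            int[] referenceString = {1, 2, 3, 1, 4, 2};   // page references
            for (int page : referenceString) {
                if (!resident.containsKey(page)) {
                    System.out.println("page fault on " + page);
                    resident.put(page, "frame-data");     // OS loads from backing store
                } else {
                    resident.get(page);                   // touch to refresh LRU order
                }
            }
        }
    }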
7.3 Segmentation
Segmentation helps in handling data structures that change size during execution and simplifies linking and sharing. The OS should combine segmentation and paging whenever possible, providing a more efficient two-dimensional virtual memory.
8.0 Security in Operating Systems
Protecting information against unauthorized usage is a major concern of all Operating Systems. Computer systems aim to secure information by fulfilling three general protection goals. These goals should ensure:

• Data confidentiality
• Data integrity
• System availability
Kindly refer to the segment on Information Integrity and Security for additional security
requirements. This can be downloaded from OMSAR's website for ICT Standards and
Guidelines at www.omsar.gov.lb/ICTSG/SC.
8.1 Multilevel Security
8.1.1 Discretionary Access Control
Most Operating Systems allow individual users to determine who may read and write
their files and other objects. This policy is called discretionary access control. In many
environments this model works reasonably well.
8.1.2 Mandatory Access Control
Other environments where much tighter security is required need mandatory access
controls in addition to the standard discretionary access controls. These mandatory
controls regulate the flow of information to make sure that it does not leak out in a way
it is not supposed to.
8.1.3 Multilevel Security Models
Multilevel Security Models were originally designed for handling military security. They
are also applicable to other organizations. In such models, the following should be observed:

• Documents (or objects in general) have a security level, such as unclassified, confidential, secret and top secret in the military environment.
• People are also assigned these levels, depending on which documents they are allowed to see.
• A process running on behalf of a user acquires the user's security level.
These models define rules about how information can flow. For instance:
8.1.3.1 Rules Ensuring Data Confidentiality
1. A process running at security level k can read only objects at its level or lower (no read up)
2. A process running at security level k can write only objects at its level or higher (no write down)
If such a system is rigorously implemented, it can be shown that no information can leak from a higher security level to a lower one.
8.1.3.2 Rules Ensuring Data Integrity
1. A process running at security level k can write only objects at its level or lower
(no write up)
2. A process running at security level k can read only objects at its level or higher
(no read down)
Of course, some organizations want to apply both sets of rules; since the two are in direct conflict, they are hard to achieve simultaneously.
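The two rule sets above (known in the literature as the Bell-LaPadula model for confidentiality and the Biba model for integrity) can be expressed directly as comparisons on security levels, as in this illustrative Java sketch:

    public class MlsCheck {
        enum Level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET }

        // Bell-LaPadula (confidentiality): no read up, no write down.
        static boolean canReadBLP(Level subject, Level object)  { return subject.compareTo(object) >= 0; }
        static boolean canWriteBLP(Level subject, Level object) { return subject.compareTo(object) <= 0; }

        // Biba (integrity): no read down, no write up -- the mirror image.
        static boolean canReadBiba(Level subject, Level object)  { return subject.compareTo(object) <= 0; }
        static boolean canWriteBiba(Level subject, Level object) { return subject.compareTo(object) >= 0; }

        public static void main(String[] args) {
            // A SECRET process may read CONFIDENTIAL data (read down) ...
            System.out.println(canReadBLP(Level.SECRET, Level.CONFIDENTIAL));  // true
            // ... but may not write it, or information would leak downward.
            System.out.println(canWriteBLP(Level.SECRET, Level.CONFIDENTIAL)); // false
        }
    }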
8.2 Typical Security in Operating Systems
Operating Systems are required to have certain properties in order to be classified as secure enough for certain kinds of work. The following is a reasonable minimal list of properties that provide a sound security baseline when backed by a good security policy and controlled practices:

1. Secure login with anti-spoofing measures. This means that all users should have a password in order to log in and that no malicious program is able to bypass or simulate the genuine login screen.
2. Discretionary access controls, allowing the owner of a file or an object to specify which other users may use it and in what way.
3. Privileged access controls, allowing the system administrator to override the discretionary access controls when needed.
4. Address space protection per process, ensuring that a process's virtual address space is not accessible by unauthorized processes.
5. New pages must be zeroed before being mapped in.
6. Security auditing, allowing the administrator to produce a log of certain security-related events.
7. File encryption, giving the user the option to encrypt files so that, if the computer is stolen or broken into, the files will be unreadable.
9.0 I/O Management
A substantial fraction of any Operating System is concerned with I/O (Input/Output).
The main function of the OS is to control all the computer's I/O devices by:

• Issuing commands to the devices
• Catching interrupts from the devices
• Handling errors
9.1 Basic I/O Techniques
Three basic techniques are available in current computer systems. From the least efficient to the best performing, we find:

1. Programmed I/O, where the CPU is strictly tied to the I/O until it is finished.
2. Interrupt-driven I/O, with one interrupt per character transferred.
3. DMA (Direct Memory Access), a memory-controller mechanism programmed by the OS to exchange data at high rates between main memory and the I/O device. The big win with DMA is twofold: first, the number of interrupts drops from one per character to one per buffer; second, the data transfer benefits from streaming.
The Operating System is responsible for choosing and managing the right technique for
each specific I/O in order to deliver the best overall performance.
9.2 The I/O Software Levels
Besides the interrupt service procedures, the I/O software is usually structured in three other levels:

• The device drivers: OSs usually classify drivers into two categories, block devices and character devices, each having a standard interface
• The device-independent I/O software, which ensures:
  1. Uniform interfacing for device drivers
  2. Buffering
  3. Error reporting
  4. Allocating and releasing dedicated devices
  5. Providing a device-independent block size
• The I/O libraries that run in user space
9.3 Other I/O Criteria
9.3.1 Synchronous /Asynchronous Transfers
Most physical I/O is asynchronous (Interrupt-driven). The CPU starts the transfer and
goes off doing something else until the interrupt arrives.
User programs are much easier to write if the I/O is synchronous (blocking). This means that after a read operation, the program is automatically suspended until the data is available.
It is up to the OS to make operations that are actually interrupt driven look blocking to
the user programs.
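The contrast is visible in modern APIs. In the Java sketch below, an asynchronous read is started on a hypothetical file data.bin and the program keeps working; it blocks only at the point where the result is actually needed:

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.Future;

    public class AsyncReadDemo {
        public static void main(String[] args) throws Exception {
            // Interrupt-driven style: start the read, then keep computing
            // while the device works; block only when the data is needed.
            try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("data.bin"), StandardOpenOption.READ)) {
                ByteBuffer buf = ByteBuffer.allocate(4096);
                Future<Integer> pending = ch.read(buf, 0);   // returns immediately

                doOtherWork();                               // CPU is free meanwhile

                int n = pending.get();                       // synchronous-looking wait
                System.out.println("Read " + n + " bytes");
            }
        }

        private static void doOtherWork() {
            // ... anything useful while the I/O is in flight ...
        }
    }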
9.3.2 Sharable/Dedicated Devices
OS must be able to handle both shared and dedicated devices in a way that avoids
problems.
9.3.3 Graphical User Interface (GUI)
Personal computers use GUIs for their output. These are based on the WIMP paradigm: windows, icons, menus and a pointing device. GUI-based programs are generally event-driven, with keyboard, mouse and other events being sent to the program as they occur.
The GUI software can be implemented in either user-level code as in UNIX systems, or in
the OS itself, as in the case of Windows.
10.0 System Management
10.1 System Management Activities and Processes
System management is the entire set of activities and processes by which an Agency manages its information technology assets. An effective systems management process enables the Ministry's or the Agency's ICT professionals to operate efficiently on a day-to-day basis and to plan for the Agency's needs in the future.
A well-developed system management process is based on the following key elements:

• Automated productivity tools, in order to simplify the daily operations of ICT, freeing human resources from additional responsibilities
• Timely and informative reports to provide the knowledge required for effective planning
• Practical policies and procedures to enhance the relationship between ICT and end users and assist in efficient operations
• Goals supporting the Agency's mission, to raise the value of ICT within the overall organization and enable the full value of the ICT investment to be realized
10.2 Components of Systems Management
The Gartner Group has identified thirty-four components of the systems management
process. They fall within three major categories:

• The day-to-day knowledge and control components form the foundation of systems management, providing ICT professionals with the ability to automate routine tasks and receive status and event information for planning and action
• The preventive policy setting components include the decisions and processes which allow ICT professionals to operate proactively, increasing the value of the department and its individuals to the organization
• The long-range continuous improvement components foster an environment for future planning and professional growth
10.2.1 Day to Day Knowledge and Control Components
The following tools need to be available as part of the operating system. These are tools that assist the ICT unit in controlling the behaviour, efficiency, reliability, availability, maintenance and updating of a system.
They are used for systems management and are characterized by the automation of
frequent or repetitive activities and timely reporting of information relating to system
status to the ICT professional. The reports provide a historical perspective, which
permits the ICT professional to progress to preventive policy setting, the second stage of
systems management.
Asset Management: An automated process for tracking ICT assets during their life
cycles, including data on inventory, maintenance, costs and upgrades or other changes.
Reports are generated periodically and on demand.
Software Inventory: A listing kept current at all times with detailed information about
all software installed within an organization. Reports are generated periodically and on
demand.
Hardware Inventory: A listing kept current at all times with detailed information about
all hardware components, including end user computers, peripherals, servers and
network devices. Reports are generated periodically and on demand.
Software Distribution: An automated process for installing software on client
computers and network components from a centralized location. The process can
accommodate mobile devices and retry after failures. Reports detailing status can be
generated on demand.
Virus Management: An automated process that detects viruses on the network and on
individual devices, alerts system operators to attacks and repairs problems when found.
Typical reports include periodic status updates and incident reports.
Systems Event Management: An automated event management system that notifies
system operators of impending and actual failures, capacity issues, traffic issues and
other system and network events. Reports include system status and alarm tabulations.
Server-Based Client Image Control: The ability for a system administrator to create a
downloadable configuration used to set up end user computers and standardize
configurations.
User State Management and Restoration: The ability to mirror end user computers
on the server for rapid, automatic restoration after a system failure.
Unattended Power Up: The ability for an end user computer to be powered up
remotely on the network so that system administration functions can be performed even
when the user has powered off the system.
Client Hardware Event Management: The ability for devices on the network to
transmit information to systems operators about hardware-based abnormal performance
conditions and failures, such as temperature fluctuations or electrical spikes.
Automated Backup and Restore: Provides backup for all systems to a centralized data repository.
Service Desk Problem Management and Resolution: An automated process for
generating trouble tickets, tracking status, dispatching technicians and reporting on
problem resolution. A knowledge database expedites diagnosis of common problems.
Client Remote Control: The ability for a service desk technician to take control of an
end user's computer from another computer on the network.
10.2.2 Tools and Good Practices: Preventive Policy Setting Components
Once the knowledge and control components shown above are in place, the ICT
professional can address preventive policy setting components. It is essential that
logical, practical policies and procedures be implemented in order to provide the
framework for planning. In an effective systems management process, policies and
procedures are the result of a realistic understanding of the organization's requirements
and a means to achieve measurable results.
Scalable Architecture: The establishment of a technology infrastructure that is capable
of expanding both in capacity and in performance as the Agency's needs grow.
Quality Vendor/Provider Selection: The selection of low-risk, high-quality vendors
and suppliers, which offer services suitable to the Agency.
Change Management: The procedures established to review and approve changes to
end user computers, servers, peripherals and other network devices.
Vendor Standardization: The procedure that establishes the specifications for vendors
that an Agency purchases from and determines how many vendors will be on the
approved list.
Platform Standardization: The procedure that establishes the specifications for
different operating systems and hardware devices that may be purchased.
Application Standardization: The procedure that determines software applications
and release levels that may be installed throughout the Agency.
Hardware Physical Security Management: The process that protects hardware from
theft or damage, including property tags, locks and limited or guarded points of egress.
Data Security Management: The process that protects data from theft, corruption, or
destruction, including firewalls, user identification and authorization levels.
Fault Tolerance: A policy which establishes the requirement for redundancy of critical
system and network components, preferably with automatic switching in the event of
failure.
Agency’s Policy Management: An environment in which a system administrator
determines what level of access an end user will have to applications, databases and
shared network devices based on the user's profile.
Locked User Environment: An environment that prevents end users from changing
machine settings and/or installing unauthorized software
Centralized and Optimized Procurement: A set of policies and procedures to manage
ICT purchasing, including compliance with standards.
Low-Impact Upgradeability: The ability for network assets to be upgraded with minimal disruption, accomplished by standardization and implementation of automated productivity tools and soft-switch capabilities, complemented by policies designed to complete upgrades when the fewest users are on their systems.
User Training: Policies and procedures on end user training, matching the training that is delivered to the user's needs and job.

ICT Training: Policies and procedures on training for ICT professionals, including task training and professional development.
10.2.3 Forecast for Continuous Improvement Components
An Agency that has successfully implemented the day-to-day knowledge and control components and the preventive policy setting components will find that the continuous improvement components are the next logical step in the systems management process. Through continuous improvement, the ICT staff can increase its value to the Agency and enjoy an enhanced level of job satisfaction.
Better Planning, Faster Implementation: establishing a proactive environment that
allows adequate time for team members to understand, investigate, document and
communicate prior to design and implementation.
Service Level Tracking and Management: determining the specific levels of services
that ICT will provide to end users, including measures for performance, network and
system availability and problem resolution.
Capacity Planning: The process by which the capacity of the systems and network is
assessed based on current and future requirements.
TCO Lifecycle Management: The process by which overall costs of the systems and
network are managed.
Highly Motivated ICT Staff: The ability of the ICT staff to work as a team, providing a
superior level of services to users and providing backup for other team members when
needed.
Stable ICT Organization: Allows the staff to be consistent and focused.
10.3 Benefits of System Management
Through a consistent application of the right tools and practices, the ICT department and
the overall organization recognize significant benefits, including:

• Improved levels of services for users
• Increased status of ICT professionals
• Better utilization of assets
Although all of the components of systems management reviewed above are appropriate
for all Agencies, the method in which they are applied varies based on a number of
factors, with the primary determinant being the size of the Agency.
10.4 Basic OS System Management Features
The OS system management can be categorized into the following:

• Hardware Management
• Operating-System Management
• Software Management
• Event Management
• Storage Management
• Remote Administration
10.4.1 Hardware Management
Hardware maintenance represents one of the most basic system management activities.
It includes adding and replacing memory, disks and storage arrays, processors, adapters
for I/O and networking, terminals, printers and other devices.
Hardware management usually works through several phases:

• Physically installing and connecting the hardware
• Reflecting the state of installed hardware at a low level (i.e., firmware)
• Updating the operating system with appropriate device drivers
• Making hardware resources available to applications
The OS needs to have the following capabilities:

• New hardware should virtually configure itself after being attached physically; the OS enables plug-and-play hardware configuration.
• Hardware installation is simplified to the point where relatively little operator intervention is required.
• Automatic configuration of I/O systems and device drivers at boot time is supported through various mechanisms.
• All systems scan the I/O card space at boot time; whenever new hardware is discovered, if the driver is not included with the base operating system, the system administrator is prompted for the appropriate media containing the driver.
10.4.2 Operating System Management
In commercial server environments, routine administration of operating systems involves primarily:

• User account maintenance
• Security management
• Tuning the environment for particular applications software
Good GUI tools help system administrators perform their job more easily, providing effective administrative-role delegation for management functions that normally require broad administrative privileges.
10.4.3 Software Management
Software management is the most common functionality. This includes software installation, upgrades and system-software patches that sit on top of the operating system. Several functions can help to simplify this task, including:

• Software Registry: A central repository for configuration data related to hardware, the operating system and applications, which can be manipulated and searched with database-like queries.

• Software Version Control and Patch Management Tools: A rigorous mechanism to keep track of which versions of applications are installed on the system and which patches to system software have been applied.
• Two-phase Commit for Patch Installation: This patch management mechanism allows automatic roll-backs if an installation of software or a patch causes problems to the system, in which case administrators can fully back out of the installation, restoring the system to its original state.
10.4.4 Event Management
Operating Systems need event-management mechanisms, which have the ability to
track, view and notify administrators about many different types of system events using
a single consistent format, along with a unified interface. This allows a central event log
to serve as the only stored log required for debugging purposes. This approach helps
administrators manage the profusion of messages from a variety of sources that pop up during day-to-day administration.
Typically, the event-management mechanism provides administrators with a single
console for tracking the following types of events on the system:

• Disk full
• Disk failure
• CPU error
• System panic
• Configuration change
• Subsystem started/stopped
• Application started/stopped
• Application error
• Repeated failed login
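A single consistent record format is the heart of such a mechanism. The Java sketch below shows one shape such unified event records might take; the field layout is merely illustrative, not any vendor's actual format:

    import java.time.Instant;

    public class EventLog {
        // One consistent record shape for every event source, so a single
        // console (or log file) can track disk, CPU and application events alike.
        record SystemEvent(Instant time, String source, String severity, String message) {
            String format() {
                return time + " [" + severity + "] " + source + ": " + message;
            }
        }

        public static void main(String[] args) {
            SystemEvent e1 = new SystemEvent(Instant.now(), "disk0", "CRITICAL",
                                             "file system full");
            SystemEvent e2 = new SystemEvent(Instant.now(), "login", "WARNING",
                                             "repeated failed login for user x");
            System.out.println(e1.format());
            System.out.println(e2.format());
        }
    }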
While event management has been somewhat facilitated by heterogeneous system
management tools, integrating this capability within the operating system itself allows a
greater range of system-specific information to be gathered and tracked. This capability
enables further event-management for a much broader range of administrators, who
would otherwise need to purchase, install and configure a complex framework.
Complex frameworks still retain their value for managing networks of hundreds of
systems or of heterogeneous systems.
Some operating systems offer an Event Monitoring Service (EMS): a unified framework and user interface for system-wide logging and notification.
Event Management is well integrated into the base operating system. This feature allows
maximum access to key system-event information for use in managing and tuning the
system, resulting in a faster and easier diagnosis of problems.
A manager agent provides local and remote management capabilities through a dedicated, corporate-wide HTTP port. Working in conjunction with hardware and firmware, the agents:

• Export system information
• Monitor various system components such as CPU, memory and I/O devices
• Track storage, networking and environmental components such as fans and power supplies
• Provide information on CPU and file system use
The agents include a sophisticated SNMP-to-HTML rendering engine, from which
management data are displayed dynamically using smart Java scripts.
10.4.5 Storage Management
Logical Volume Managers (LVMs) are storage-management tools that allow the creation
of a “virtual disk” or “volume” – made up of one or more physical disks.
Without an LVM, file systems and individual files remain limited to a size no larger than
individual disks, which becomes a problem for data-intensive applications such as
databases, CAD/CAM/CAE and image processing.
Combining several disks to form a logical volume can increase capacity, reliability and/or
performance.
Unlike more primitive file system and physical partition approaches, logical volumes
allow administrators to manipulate them online, without requiring a reboot.
10.4.6 Remote Management
As enterprises and Agencies depend more and more on networks, the ICT infrastructure
becomes more distributed, dramatically increasing the number of servers that need to be
deployed.
Large enterprises routinely disperse servers geographically, in some cases across
different continents and time zones. Agencies should be accessible from remote
embassies. Thus, the capability to effectively manage operating systems remotely
becomes increasingly important. If an enterprise depends on a thousand servers, for example, it is simply not feasible to maintain a thousand local system administrators.
Web-Based System-Management Tools allow administrators to maintain servers
remotely over the Internet, in some cases using ordinary web browsers as entry-points.
Template-Based Installation Tools employ a "cookie-cutter" method for replicating tested configurations across large numbers of servers. Typically, a "template" server is created and tested, then replicated across multiple servers using some distribution mechanism.
11.0 Unicode and Multilingual Computing
Today, a computing environment must be in harmony with diverse cultural and linguistic requirements. Users need the following:

• Applications and file formats that they can share with colleagues and customers in other countries using other languages
• Application interfaces in their own language
• Time and date displays that they understand at a glance
Essentially, users want to write and speak at the keyboard in the same way that they
always write and speak.
A modern OS addresses these needs at various levels, bringing together the components that make a truly multilingual computing environment possible.
It begins with the internationalization framework in the OS environment. Developers
have different ways to internationalize their applications to meet the requirements of
specific cultural regions. This framework continues by incorporating the Unicode
encoding standard, a standard that provides users and developers with a universal
codeset.
Unicode is well-suited to applications such as multilingual databases, electronic
commerce and government research and reference.
A modern operating environment supports multilingual computing with multiple character sets and multiple cultural attributes.
11.1 Multilingual Computing
The concept "multilingual" in practice takes different forms. It is important to distinguish
among the following types of environments:



Multilanguage
Multiscript
Multilingual
The movement from multilanguage to multiscript to multilingual implies an increasing
level of complexity in the underlying operating environment.
A multilanguage environment means that a locale supports one writing system (or script) and one set of cultural attributes.
Thus, in a multilanguage environment, the user must launch a separate instance of an
application in different locales for the application to take advantage of differing language
and cultural attributes.
A multiscript environment means that a locale may support more than one script, but
the locale is still associated with only one set of cultural attributes.
The multiscript environment supports text written in multiple scripts but is still limited to
one set of cultural attributes. This means, for example, that text is sorted according to
the sorting rules of the current locale.
A multilingual environment means that a locale can support multiple scripts and
multiple cultural attributes. In this environment, an application can have the ability to
transparently make use of both the language and cultural attributes of the locale within a
single locale. In this case, an application can create a document in multiple scripts and
because the application has access to multiple cultural attributes, it has greater control
over how text is manipulated. For example, a document containing text in multiple
scripts can sort text according to its script rather than the sort order of the current
locale. In a Unicode-enabled locale, the application can eliminate the step of tagging script runs.
The multilingual environment brings you closest to the ideal of multilingual computing.
An application can make use of locale data from any number of locales, while at the
same time allowing the user to easily manipulate text in a variety of scripts. Every user
can communicate and work in his or her language and still understand and be
understood by other users anywhere in the world.
11.2 Software Internationalization
A modern OS defines the following levels at which an application can support a customer's international needs:

• Internationalization
• Localization
Software internationalization is the process of designing and implementing software to
transparently manage different cultural and linguistic conventions without additional
modification. The same binary copy of an application should run on any localized version
of the OS environment, without requiring source code changes or recompilation.
Software localization is the process of adding language translation (including text messages, icons, buttons, etc.), cultural data and components (such as input methods and spell checkers) to a product to meet regional market requirements.
A modern OS environment should support both internationalization and localization: a single internationalized binary can then be localized into various languages, such as French, Arabic and English, supporting the associated cultural and language conventions of each.
When properly designed, applications can easily accommodate a localized interface
without extensive modification.
One suggestion for creating easy-to-localize software is to first internationalize the software and then encapsulate the language- and culture-specific elements in a locale-specific database or file. This greatly simplifies the localization process, should a developer choose to localize in the future.
Minimally, developers are encouraged to internationalize their software. In this way,
their applications can run on any localized version of the supporting OS operating
environment. As a result, such an application can easily manage the user's language and
cultural preferences.
11.3 Internationalization Framework
A major aspect of developing a properly internationalized application is to separate language- and culture-specific information from the rest of the application code. The internationalization framework in the OS environment uses the following concepts to fulfil this aim:

• Locale
• Localizable interface
• Codeset independence
A locale is a set of language and cultural data that is dynamically loaded into memory at runtime. Users can set the cultural aspects of their local work environment by setting specific variables in a locale. These settings are then applied to the operating system and to subsequent application launches.

The OS includes APIs for developers to access the cultural data of the current locale directly. For example, an application does not need to hard-code the currency symbol for a particular region: by calling the appropriate system API, it obtains the currency symbol associated with the locale the user has specified. Applications can run in any locale without having special knowledge of the cultural or language information associated with the locale.
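Java's locale API illustrates the idea: the sketch below never hard-codes a currency symbol or digit-grouping convention, it simply asks each locale for a currency formatter (the locales and amount are arbitrary):

    import java.text.NumberFormat;
    import java.util.Locale;

    public class LocaleDemo {
        public static void main(String[] args) {
            double price = 1234.56;
            // The application never hard-codes a currency symbol or digit
            // grouping; it asks each locale's data for the right formatting.
            for (Locale loc : new Locale[] {Locale.FRANCE, Locale.US,
                                            new Locale("ar", "LB")}) {
                NumberFormat money = NumberFormat.getCurrencyInstance(loc);
                System.out.println(loc + " -> " + money.format(price));
            }
        }
    }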
Creating a localizable interface means accounting for the variations that take place
when an interface is translated into another language.
Codeset independence means designing applications that do not make assumptions
about the underlying codeset. For example, text-handling routines should not define in
advance the size of the character codeset being manipulated.
11.4 Supporting the Unicode Standard
Unicode, or the Universal Codeset, is a universal character encoding scheme developed and promoted by the Unicode Consortium, a non-profit organization whose members include all the major OS vendors. The Unicode standard encompasses most alphabetic, ideographic and symbolic characters used on computers today.
Using this one codeset enables applications to support text from multiple scripts in the
same documents without elaborate marking of text runs. At the same time, applications
need to treat Unicode as just another codeset, that is, apply the principle of codeset
independence to Unicode as well.
11.4.1 Benefits of Unicode
Support for the Unicode standard provides many benefits to application developers. These benefits include:

• Global source and binary
• Support for mixed-script computing environments
• Reduced time-to-market for localized products
• Expanded market access
• Improved cross-platform data interoperability through a common codeset
• Space-efficient encoding scheme for data storage
Unicode is a building block that designers and engineers can use to create truly global
applications. By making use of one flat codeset, end-users can exchange data more
freely without relying on elaborate code conversions to make characters comprehensible.
In adopting the internationalization framework in an OS, Unicode can be thought of as
"just another codeset". By following the concepts of codeset-independent design,
applications will be able to handle different codesets without the need for extensive code
rework to support specific languages.
11.4.2 The Unicode Standard
On most computer systems supporting writing systems such as Roman or Cyrillic, user
input (usually via keypresses) is converted into character codes that are stored in
memory. These stored character codes are then converted into glyphs of a particular
font before being passed to the application for display and printing.
Each locale has one or more codesets associated with it. A codeset includes the coded
representation of characters used in a particular language. Codesets may span one-byte
(for alphabetic languages) or two or more bytes for ideographic languages. Each codeset
assigns its own code-point values to each character, without any inherent relationship to
other codesets. That is, a code-point value that represents the letter `a' in a Roman
character set will represent another character entirely in the Cyrillic or Arabic system and
may not represent anything in an ideographic system.
In Unicode, every character, symbol and ideograph has its own unique character code.
As a result, there is no overlap or confusion between the code-point values of different
codesets. There is, in fact, no need to define multiple codesets because each character
code used in each writing system finds its place in the Unicode scheme.
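This can be seen directly in code. The Java sketch below mixes Latin, Cyrillic and Arabic letters in one string; each character carries its own unambiguous Unicode code point, with no per-script codeset tagging:

    public class CodePoints {
        public static void main(String[] args) {
            // The same string mixes Latin, Cyrillic and Arabic letters; each
            // character has a single, unambiguous Unicode code point.
            String mixed = "a\u0430\u0627";   // Latin a, Cyrillic a, Arabic alef
            mixed.codePoints().forEach(cp ->
                System.out.printf("U+%04X %s%n", cp, Character.getName(cp)));
        }
    }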
Unicode includes not only the characters of the world's languages, but also publishing characters, mathematical and technical symbols and punctuation characters.

Unicode version 2.1 contains alphabetic characters from languages including Anglo-Saxon, Russian, Arabic, Greek, Hebrew, Thai and Sanskrit. It also contains ideographic characters in the unified Han subset defined by national and industry standards for China, Japan, Korea and Taiwan.
12.0 Distributed Operating Systems
Conceptually, there are two types of distributed operating systems:
12.1 DOS (Distributed Operating System)

This is a tightly-coupled operating system for multiprocessors and homogeneous multi-computers. The main goal of a DOS is to hide and manage the multiple hardware resources.
12.2 NOS (Network Operating System)
This is a loosely-coupled operating system for heterogeneous multi-computers (LAN and WAN). The main goals of a NOS are:

• To offer local services to remote clients
• To provide distribution transparency through an additional layer, a middleware, atop the NOS, implementing general-purpose services
In contrast to distributed operating systems, network operating systems do not assume
that the underlying hardware is homogeneous. They are generally constructed from a
collection of uni-processor systems that may be different.
The easiest way to describe network operating systems is to point to some services they typically offer, such as rlogin, rcp, etc.
12.3 The Middleware Models
Neither DOS nor NOS provides the view of a single coherent system handling a collection of independent computers. The solutions in modern operating systems are constructed by means of middleware models describing distribution and communication.

Middleware models are built today for a range of operating systems. In this way, applications built for a specific distributed system become operating-system independent. These middleware models are the basic building blocks of multi-tier architectures and the client-server model.

In today's Operating Systems, multiple middleware models participate in offering solutions for distributed applications. We cover these solutions in the next section, on Application Server Support.
13.0 Application Server Support
With the current trend toward web-based and distributed applications, OSs need to support Internet and web-application services. The areas covered are the following:

• Web-Protocol Support
• Enterprise Application Integration
• Web Services
• Interoperability with other Operating Systems
• Network connectivity and services
• Universal Data Access
• Distributed Applications
• The Application Server platform
13.1 Web-Protocol Support
Operating systems represent the software backbone of computing infrastructures. Thus,
strong support is required for the basic network protocols that make up the plumbing of
the web, many of which need to be implemented at the kernel level. A variety of
extensions to the basic TCP/IP protocol underlying the web can be used to improve the
reliability, security and performance of IP-based networks. Some of the key extensions
are listed below.
IPSec is used to secure traffic that passes over the public Internet by transparently encrypting IP packets at both transmission endpoints, preventing interception, without requiring support in intervening routers or any special application coding.
IPv6, the “next generation” Internet protocol that extends the 32-bit address range of
today’s IPv4 protocol to 128 bits. IPv6 also lays the groundwork for quality-of-service
priority flags with IP, encryption security and plug-and-play auto configuration when
dynamically connecting notebook computers or other devices to the network.
A simultaneous IPv4 and IPv6 stack on the same network allows both protocols to be used by a server at the same time.
IPv6 Gateway facilities allow a server to route packets from IPv4 networks to IPv6
networks and vice-versa.
Internet Key Exchange (IKE) is an automated protocol for establishing, negotiating, modifying and deleting Security Associations (SAs) between two hosts in a network. IKE is part of IPSec.
Resource Reservation Protocol (RSVP) can be used to assign varying priority levels to
IPv4 packets, allowing networks to promise varying quality-of-service guarantees,
assuming that intervening routers support RSVP.
RSVP is a resource reservation setup protocol for the Internet. Its major features include:

• The use of "soft state" in the routers
• Receiver-controlled reservation requests
• Flexible control over sharing of reservations and forwarding of sub-flows
• The use of IP multicast for data distribution
IP Multiplexing allows a single system to be seen as multiple numeric IP addresses, even
on the same network-interface card.
IP Multicasting simultaneously transmits IP packets to multiple hosts, which enables
“subscription” messaging for audio, video and software or data streams.
TCP selective acknowledgement (SACK) allows TCP to recover from multiple losses within
transmission windows, providing superior performance in overloaded networks and
traffic crossing multiple networks.
Service Location Protocol (SLP) provides a scalable framework for the discovery and
selection of IP network services.
Asynchronous Transfer Mode (ATM) IP switching: The ATM protocol supports a wide
range of bandwidth requirements and different classes of traffic at high capacity and has
thus found widespread acceptance as a multimedia and conferencing tool.
SOCKS is a protocol that a proxy server can use to accept requests from clients so that
they can be forwarded to the web.
Multilink PPP (Point-to-Point Protocol) allows the operating system to use two or more
communications ports as if they were a single port of greater bandwidth.
Ethernet bonding allows a server to harness multiple Network Interface Cards (NICs) for
use as a single Ethernet channel, increasing the effective bandwidth available for
connections.
TCP large windows improve performance over high-bandwidth networks such as ATM or
high-delay networks such as satellite links by using windows that exceed the normal
64KB limit.
13.2 Enterprise Application Integration (EAI)
EAI is a general computing term designating plans, methods and tools aimed at
modernizing, consolidating and coordinating the computer applications in an ICT
organization. Typically, an enterprise has existing legacy applications and databases and
wants to continue to use them while adding or migrating to a new set of applications that
exploit the Internet, e-commerce, extranet and other new technologies.
EAI may involve developing a new total view of an Agency's business and its
applications, seeing how existing applications fit into the new view and then devising
ways to efficiently reuse what already exists while adding new applications and data.
EAI encompasses methodologies such as:

• Object-oriented programming (C++, Java)
• Distributed cross-platform program communication using message brokers with Common Object Request Broker Architecture (CORBA) and COM+
• The modification of enterprise resource planning (ERP) to fit new objectives
• Enterprise-wide content and data distribution using common databases and data standards implemented with the Extensible Mark-up Language (XML)
• Middleware
• Message queuing
13.2.1 Data Exchange: The XML Breakthrough
For many Agencies, the most important criterion for ICT and Internet technologies is the ability to enable cost-effective Service-to-Service transactions with partners, eliminating the inefficiencies of paper trails, duplicate data processing and re-keying of information.
XML (eXtensible Mark-up Language) based technologies are quickly becoming the
industry standard for integrating services, transforming and flowing data and structured
content over the Internet and within Agencies.
XML is described in detail in a document associated with the Data Definition and
Exchange segment which can be downloaded from OMSAR’s Standards and Guidelines
website at www.omsar.gov.lb/ICTSG/DE.
13.3 Interoperability with other Operating Systems
Interoperability is essential in today's increasingly heterogeneous computing environments. As server operating system capabilities evolve, organizations that once relied on a single Operating System for large, processing-intensive applications and end user time-sharing now face multiple distributed systems, with applications that should span all of them.

When industry analysts cite high performance, application availability, low computing costs and ease of administration as selection criteria for a server OS, they put interoperability ahead of them all.
Interoperability improves information-sharing, reduces computing costs and capitalizes on past investments; it also opens the information technology infrastructure in a way that leverages new technologies and products.
Interoperability begins with network protocols and directory security and extends to
heterogeneous, distributed enterprise applications and network and system
management. Layered in the middle are data access and sharing, application porting and
cross-platform application access.
Key features of interoperability are addressed in the context of:

• Network connectivity and services, including low-level protocols, directory services and distributed security
• Data: access to relational databases, XML-based applications and appropriate portable solutions for file transfer, file-sharing and printer-sharing
• Distributed, heterogeneous applications, including cross-platform application development and support for UNIX and Windows clients
13.4 Network Connectivity and Services
A server OS requires reliable network connectivity between multiple environments. It provides this foundation through built-in support for TCP/IP, the standard suite of
network transport protocols used in UNIX environments. By featuring TCP/IP support, a
server is able to communicate with all TCP/IP-based systems natively over enterprise
networks and the Internet. Built-in support for services such as

• Domain Name System (DNS) server
• Dynamic Host Configuration Protocol (DHCP)
• BootP
• Remote Procedure Call (RPC)

– the building blocks of TCP/IP-based enterprise networks – ensures that the server OS can provide the necessary infrastructure to deploy and manage these networks.
This infrastructure becomes even easier to manage through the use of Agency directory
and security services. It can extend this functionality to a fully interoperable distributed
services infrastructure allowing cross-platform directory access and synchronization by
means of the Lightweight Directory Access Protocol (LDAP) and interoperable
authentication by means of Kerberos.
Another benefit of the common TCP/IP infrastructure across multiple TCP/IP-based OSs is the support for services such as:

• FTP
• HTTP
• TELNET
By means of FTP and HTTP services, users can copy files across networks of
heterogeneous systems and then manipulate them locally as text files. In addition to
copying files, PC users can access character-based UNIX applications through the
support for remote logon, a UNIX service enabled by TCP/IP's network terminal protocol
(TELNET).
By running terminal emulation software built into the operating systems, a user can log
on to a UNIX timesharing server in a manner similar to a dialup connection. After
entering an authorized user name and password, users will be able to use
character-based applications residing on the remote OS workstation as if they were
logged on to the system directly.
13.5 Universal Data Access
13.5.1 Cross-Platform File and Print Sharing
Copying files and sharing character-based applications are a start toward achieving
multiple OS integration, but many organizations need to find ways to let end users—
regardless of the desktop systems they are running—access expensive networked
resources, such as network printers and file servers, across mixed environments.
By supporting the TCP/IP suite of protocols and utilities, an OS lets Agencies take full
advantage of their investments in expensive network printers. An OS that has built-in
support for TCP/IP lets employees use the printers of their choice, regardless of which
system they are running.
UNIX and Windows interoperability:
For example: UNIX users can print to Windows-based printers simply by employing "lpr,"
a TCP/IP printing utility. Similarly, clients connected to a Windows-based server can print
documents, spreadsheets, e-mails and so forth on a printer connected to a UNIX system.
13.5.1.1 NFS
PC users also can take advantage of UNIX systems as file servers, allowing organizations
to take advantage of UNIX disk space. By running Network File System (NFS) client
software on their PCs, end users can see and access a UNIX file system as if it were a
local drive.
Developed by Sun Microsystems, NFS is a file system used on most UNIX systems and it
has become the de facto standard for sharing resources across multiple vendors of UNIX
platforms. NFS client software for Windows-based PCs is available from a wide range of
vendors.
To allow UNIX users access to files on a Windows-based server, Agencies can install any commercial NFS server on Windows; it will then appear on the network like a UNIX server running NFS protocols. For the ultimate in cross-platform file server access, an NFS gateway can be installed on a Windows-based server, allowing Windows clients using the native Windows networking software to access UNIX servers running NFS. This has the added benefit of requiring no additional software to be installed on the client.
13.5.1.2 CIFS
Though NFS is a traditional protocol used to share files across Windows and UNIX
environments, an upgraded PC file-sharing protocol promises to extend heterogeneous
file-sharing across the Internet. Microsoft, in conjunction with more than 40 other
vendors—including AT&T Corp., Hewlett-Packard Co., IBM Corp. and Sun Microsystems
Inc.—has proposed the Common Internet File System (CIFS) protocol as a standard for
remote file-sharing over the Internet and corporate intranets.
Based on protocol standards, CIFS defines a common access protocol for sharing files
and data of all types—including Windows and UNIX—securely over the Internet and
corporate intranets. Microsoft submitted the CIFS specification to the Internet
Engineering Task Force (IETF) as an Internet Draft document in June 1996 and is
working with the industry to publish CIFS as an informational RFC (Request for
Comment).
CIFS, an evolution of the high-performance SMB protocols, is the native file-sharing
protocol in Windows NT. It is available on UNIX servers through AT&T's Advanced Server
for UNIX.
13.5.2 Database Access
What about making use of the important service data residing on different systems
throughout the Agency? Again, rather than moving the data, often a difficult and costly
endeavour, Agencies need the ability to access this information from key enterprise
business applications. Somehow, they need to make the data sitting on a UNIX
workstation or IBM DB2 database accessible to the business applications running on PCs.
And they need to make this happen in a way that is transparent to the PC user.
Providing universal access to data is vital to developing powerful distributed solutions for
running the business.
Two fundamental components of the universal data access strategy are:

• Open Database Connectivity (ODBC)
• Java Database Connectivity (JDBC)
13.5.2.1 ODBC
ODBC provides a unified way to access relational data from heterogeneous systems; any
application that supports ODBC can access information stored in any database that
houses relational data. For example, any ODBC-enabled application could pull and use
data from an ORACLE, SYBASE, Informix, or any other UNIX relational database without
installing any software on the UNIX system.
With ODBC, developers do not need to write separate client/server applications to access
individual UNIX databases; by simply supporting ODBC, a single application can access a
variety of UNIX relational or host databases.
Exchanging data—not just accessing it—is also possible among heterogeneous
databases.
For example, through heterogeneous replication, an SQL Server database running on
Windows can automatically send data to an Informix database running on a UNIX
system.
To tap the vast potential of the Internet and intranet to run business applications,
Agencies need access to information residing in UNIX and host databases from a Web
browser. With the Internet Information Server running on Windows NT, Agencies can
build Websites that access information from any database that supports ODBC, such as
Sybase or Oracle. The Internet Information Server supports ODBC through an Internet
Database Connector, enabling developers to build Web pages that make dynamic queries
to UNIX, IBM, or any other ODBC-compliant databases.
13.5.2.2 JDBC
Java Database Connectivity (JDBC) is an application program interface (API) specification
for connecting programs written in Java to the data in popular databases. The application
program interface lets access requests be encoded as Structured Query Language (SQL) statements that are then passed to the program that manages the database. It
returns the results through a similar interface. JDBC is very similar to the SQL Access
Group's Open Database Connectivity (ODBC) and, with a small "bridge" program, the
JDBC interface can be used to access databases through the ODBC interface. For
example, a program can be written to access many popular database products on a number of operating system platforms. When accessing a Microsoft Access database on a PC running Microsoft's Windows 2000, for instance, a program with JDBC statements would be able to access that database.
JDBC actually has two levels of interface. In addition to the main interface, there is also
an API from a JDBC "manager" that in turn communicates with individual database
product "drivers," the JDBC-ODBC bridge if necessary and a JDBC network driver when
the Java program is running in a network environment (that is, accessing a remote
database).
When accessing a remote database, JDBC takes advantage of the Internet's file
addressing scheme and a file name looks much like a Web page address (or Uniform
Resource Locator). For example, a Java SQL statement might identify the database as:
jdbc:odbc://www.somecompany.com:400/databasefile
JDBC specifies a set of object-oriented classes for the programmer to use in building SQL
requests. An additional set of classes describes the JDBC driver API. The most common
SQL data types, mapped to Java data types, are supported. The API provides for
implementation-specific support for Microsoft Transaction Server requests and the ability
to commit or roll back to the beginning of a transaction.
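A minimal JDBC example in Java follows; the connection URL, credentials and table are hypothetical, and the same code works against any database whose JDBC driver is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class JdbcSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL and table; swap in any vendor's JDBC driver.
            String url = "jdbc:postgresql://dbhost:5432/agencydb";
            try (Connection con = DriverManager.getConnection(url, "user", "secret");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name, budget FROM departments WHERE budget > ?")) {
                ps.setInt(1, 100000);                    // bind the query parameter
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + " "
                                           + rs.getLong("budget"));
                    }
                }
            }
        }
    }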
13.6 Distributed Applications and Cross-Platform Application Development
13.6.1 Java Support
Java is a programming language expressly designed for use in the distributed
environment of the Internet. It was designed to have the "look and feel" of the C++
language, but it is simpler to use than C++ and enforces an object-oriented
programming model. Java can be used to create complete applications that may run on a
single computer or be distributed among servers and clients in a network. It can also be
used to build a small application module or applet for use as part of a Web page. Applets
make it possible for a Web page user to interact with the page.
The major characteristics of Java are:
• The programs created are portable in a network. The source program is compiled into what Java calls byte code, which can be run anywhere in a network on a server or client that has a Java virtual machine. The Java virtual machine interprets the byte code into code that will run on the real computer hardware. This means that individual computer platform differences, such as instruction lengths, can be recognized and accommodated locally just as the program is being executed; platform-specific versions of the program are no longer needed.
• The code is robust, here meaning that, unlike programs written in C++ and perhaps some other languages, Java objects can contain no references to data external to themselves or to other known objects. This ensures that an instruction cannot contain the address of data storage in another application or in the operating system itself, either of which would cause the program, and perhaps the operating system itself, to terminate or "crash". The Java virtual machine makes a number of checks on each object to ensure integrity.
• Java is object-oriented, which means that, among other characteristics, an object can take advantage of being part of a class of objects and inherit code that is common to the class. Objects are thought of as "nouns" that a user might relate to, rather than the traditional procedural "verbs". A method can be thought of as one of the object's capabilities or behaviours. (A minimal sketch follows this list.)
• In addition to being executed at the client rather than the server, a Java applet has other characteristics designed to make it run fast.
• Relative to C++, Java is easier to learn.
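The object-oriented characteristics above can be illustrated with a short, self-contained
sketch; the class and method names are purely illustrative.

    // A class groups code that its subclasses inherit.
    class Document {
        private String title;

        Document(String title) {
            this.title = title;
        }

        // A method is one of the object's capabilities or behaviours.
        String describe() {
            return "Document: " + title;
        }
    }

    // Report inherits describe() from Document and adds behaviour of its own.
    class Report extends Document {
        Report(String title) {
            super(title);
        }

        String summarise() {
            return describe() + " (summary pending)";
        }
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            Report report = new Report("Annual Review");
            System.out.println(report.summarise());
        }
    }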
Java was introduced by Sun Microsystems in 1995 and instantly created a new sense of
the interactive possibilities of the Web. Both of the major Web browsers include a Java
virtual machine. Almost all major operating system developers (IBM, Oracle, Apple, HP
and others) have added Java compilers as part of their product offerings.
The Java virtual machine includes an optional just-in-time compiler that dynamically
compiles byte code into executable code as an alternative to interpreting one byte code
instruction at a time. In many cases, the dynamic JIT compilation is faster than the
virtual machine interpretation.
Because Java applets run on almost any operating system without requiring
recompilation and because Java has no operating system-unique extensions or
variations, Java is generally regarded as the most strategic language in which to develop
applications for the Web.
JavaScript should not be confused with Java. JavaScript, which originated at Netscape, is
interpreted at a higher level and is easier to learn than Java, but it lacks some of Java's
portability and the speed of byte code. However, JavaScript can be useful for very small
applications that run on the Web client or server.
13.6.2 The Java platforms: J2EE, J2ME
13.6.2.1 J2EE (Java 2 Platform, Enterprise Edition)
J2EE is a Java platform designed for the mainframe-scale computing typical of large
Agencies. Sun Microsystems (together with industry partners such as IBM, Oracle, BEA
and others) designed J2EE to simplify application development in a thin client tiered
environment. J2EE simplifies application development and decreases the need for
programming and programmer training by creating standardized, reusable modular
components and by enabling the tier to handle many aspects of programming
automatically.
J2EE includes many components of the Java 2 Platform, Standard Edition (J2SE):
• The Java Software Development Kit (JSDK) is included as the core language package.
• Write Once Run Anywhere technology is included to ensure portability.
• Support is provided for Common Object Request Broker Architecture (CORBA), a predecessor of Enterprise JavaBeans (EJB), so that Java objects can communicate with CORBA objects both locally and over a network through its interface broker.
• Java Database Connectivity 2.0 (JDBC), the Java equivalent to Open Database Connectivity (ODBC), is included as the standard interface for Java databases.
• A security model is included to protect data both locally and in Web-based applications.
J2EE also includes a number of components added to the J2SE model, such as the
following:
• Full support is included for Enterprise JavaBeans. EJB is a server-based technology for the delivery of program components in an enterprise environment. It supports the Extensible Mark-up Language (XML) and has enhanced deployment and security features.
• The Java servlet API (application programming interface) enhances consistency for developers without requiring a graphical user interface (GUI); a minimal servlet sketch follows this list.
• Java Server Pages (JSP) is the Java equivalent to Microsoft's Active Server Pages (ASP) and is used for dynamic Web-enabled data access and manipulation.
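The following sketch shows the shape of a minimal servlet, assuming the standard
javax.servlet API; the class name and the page content are illustrative. The J2EE
container calls doGet() for each HTTP GET request, so the server-side code involves no
graphical user interface.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HelloServlet extends HttpServlet {
        // Called by the container for each HTTP GET request.
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body><h1>Hello from a J2EE servlet</h1></body></html>");
        }
    }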
The J2EE architecture consists of four major elements:
• The J2EE Application Programming Model is the standard programming model used to facilitate the development of multi-tier, thin client applications.
• The J2EE Platform includes necessary policies and APIs such as the Java servlets and Java Message Service (JMS).
• The J2EE Compatibility Test Suite ensures that J2EE products are compatible with the platform standards.
• The J2EE Reference Implementation explains J2EE capabilities and provides its operational definition.
13.6.2.2 J2ME (Java 2 Platform, Micro Edition)
J2ME is a technology that allows programmers to use the Java programming language
and related tools to develop programs for mobile wireless information devices such as
cellular phones and personal digital assistants (PDAs). J2ME consists of programming
specifications and a special virtual machine, the K Virtual Machine, which allows a
J2ME-encoded program to run in the mobile device.
There are two programming specifications:
• Connected, Limited Device Configuration (CLDC)
• Mobile Information Device Profile (MIDP)
CLDC lays out the application program interface (API) and virtual machine features
needed to support mobile devices. MIDP adds to the CLDC the user interface, networking
and messaging details needed to interface with mobile devices.
MIDP includes the idea of a midlet, a small Java application similar to an applet but one
that conforms to CLDC and MIDP and is intended for mobile devices.
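As an illustration, a minimal midlet might look like the sketch below, assuming the
standard MIDP user-interface classes; the class name and the displayed text are
illustrative. The device's application manager drives the midlet's life cycle through
startApp(), pauseApp() and destroyApp().

    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Form;
    import javax.microedition.midlet.MIDlet;

    public class HelloMidlet extends MIDlet {
        // Called by the application manager when the midlet starts or resumes.
        public void startApp() {
            Form form = new Form("Hello");
            form.append("Hello from a MIDP device");
            Display.getDisplay(this).setCurrent(form);
        }

        // Called when the application manager pauses the midlet.
        public void pauseApp() {
        }

        // Called when the application manager shuts the midlet down.
        public void destroyApp(boolean unconditional) {
        }
    }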
Devices with systems that exploit J2ME are already available and are expected to
become even more widely available in the next few years.
13.6.3 CORBA
Common Object Request Broker Architecture (CORBA) is an architecture and
specification for creating, distributing and managing distributed program objects in a
network. It allows programs at different locations and developed by different vendors to
communicate in a network through an "interface broker".
CORBA was developed by a consortium of vendors through the Object Management
Group (OMG), which currently includes over 500 member companies. Both the
International Organization for Standardization (ISO) and X/Open have sanctioned CORBA
as the standard architecture for distributed objects (which are also known as
components).
CORBA 3 is the latest level.
The essential concept in CORBA is the Object Request Broker (ORB). ORB support in a
network of clients and servers on different computers means that a client program
(which may itself be an object) can request services from a server program or object
without having to understand where the server is in a distributed network or what the
interface to the server program looks like. To make requests or return replies between
ORBs, programs use the General Inter-ORB Protocol (GIOP) and, over the Internet, its
specialization, the Internet Inter-ORB Protocol (IIOP). IIOP maps GIOP requests and
replies onto the Internet's Transmission Control Protocol (TCP) layer in each computer.
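From a Java program's point of view, the client side of an ORB interaction might look
like the minimal sketch below, using the org.omg.CORBA classes bundled with J2SE; the
registered name "Greeter" is hypothetical. The client resolves a name rather than a
network location, so it never needs to know where the server object actually lives.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    public class CorbaClient {
        public static void main(String[] args) throws Exception {
            // Initialize the client-side ORB; GIOP/IIOP details are handled underneath.
            ORB orb = ORB.init(args, null);

            // Look up the CORBA naming service.
            org.omg.CORBA.Object ref = orb.resolve_initial_references("NameService");
            NamingContextExt naming = NamingContextExtHelper.narrow(ref);

            // "Greeter" is a hypothetical name a server would have registered.
            org.omg.CORBA.Object servant = naming.resolve_str("Greeter");
            System.out.println("Resolved object reference: " + servant);
        }
    }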
A notable hold-out from CORBA is Microsoft, which has its own distributed object
architecture, the Distributed Component Object Model (DCOM). However, CORBA and
Microsoft have agreed on a gateway approach so that a client object developed with the
Component Object Model will be able to communicate with a CORBA server (and
vice versa).
Distributed Computing Environment (DCE), a distributed programming architecture that
preceded the trend toward object-oriented programming and CORBA, is currently used
by a number of large companies. DCE will perhaps continue to exist along with CORBA
and there will be "bridges" between the two.
13.7 The Application Server Platform
Agencies can use the Application Server Platform to develop and deploy new applications
that feature standard and custom integrations with all of the major off-the-shelf systems
(for example, ERP, CRM and SCM systems), core computing technologies (for example,
RDBMS and middleware products) and protocols (SOAP, IIOP and HTTP).
13.7.1 The Value of the Application Server Platform
The Application Server Platform reduces the cost of application development and
deployment in three ways:
1. It puts the ability to build and deploy complex distributed applications into the hands
of a diverse population of developers.
Since the advent of distributed computing, the task of building large distributed systems
has been difficult and expensive, typically requiring the best and brightest software
engineers and system architects. The fundamental goal of such projects has always been
to present application functions as a set of reusable services.
The Application Server Platform addresses the creation of service-oriented architectures.
The ultimate goal of such an architecture is to make all application functions accessible
as reusable services, so that new applications can be created by assembling those
services rather than by building application functions from scratch. This approach opens
the building and deployment of complex distributed applications to people concerned
more with the service logic than with awkward low-level distributed programming.
2. The Application Server Platform extends the value of existing software assets and
skills to new uses.
The Application Server Platform provides the best approach to protecting this
investment. Most importantly, the inclusion of Web services integration support in the
Application Server Platform makes it easy for a broad range of services to capitalize on
these capabilities.
3. The Application Server Platform delivers enterprise deployment capabilities for
today's major application development technologies, including CORBA, J2EE and
Web services.
The Application Server Platform is neutral with respect to programming languages,
component models and network protocols: it supports J2EE and CORBA as well as Java,
C++, Visual Basic, Cobol and PL/I.
The leading Web application servers today include:
• WebLogic Server from BEA
• WebSphere from IBM
• iPlanet from Sun Microsystems
• Oracle Application Server
• many others, such as SilverStream

13.8 Web Services
The effectiveness of an operating system as a web-server environment ultimately
depends on the services that are layered on top of the basic Internet protocols to handle
HTML content, e-mail and sharing of files and printers. However, operating systems may
be differentiated through optimizations that make particular web services run better, or
by bundling key web-server software packages so that users do not have to deal with
third parties.
The web server itself assumes central importance as the core service in web
environments, serving as the primary interface for providing HTML content to browsers
that connect to a server. Currently, some operating systems bundle a web server for
production use. The operating system can maintain kernel-level caches for frequently
requested HTML pages, which can dramatically improve the user load a web server can
handle on the respective platform.
Web Services are described in detail in a document associated with the Data Definition
and Exchange segment which can be downloaded from OMSAR’s Standards and
Guidelines website www.omsar.gov.lb/ICTSG/DE.