Welingkar
V0.1/2008 - MFM
MFM– Sem III
Introduction to Computers
Parts of a Computer
(Figure: schematic diagram showing the various parts of a laptop computer.)
Performance
Features that affect the performance of the computer include:
• Microprocessor
• Operating system
• RAM
• Disk drives
• Display
• Input/output ports
• Fax/modem
• Sound cards and speakers
The microprocessor:
• Has a set of internal instructions stored in memory, and can access memory for its own use while working.
• Can receive instructions or data from you through a keyboard, in combination with another device (mouse, touchpad, trackball or joystick).
• Can receive and store data through several data storage devices (hard drive, floppy drive, Zip drive, CD/DVD drive).
• Can display data to you on computer monitors (cathode ray monitors, LCD displays).
• Can send data to printers, modems, networks and wireless networks through various input/output ports.
• Is powered by AC power and/or batteries.
A Basic Computer and its parts
A standard, fully featured desktop configuration has basically four types of featured devices:
1. Input Devices
2. Output Devices
3. Memory
4. Storage Devices

(Figure: block diagram of the system – input and output devices, CPU, main memory and external memory, connected by the system bus.)
Introduction to CPU
• CPU (Central Processing Unit)
  – The Arithmetic / Logic Unit (ALU)
  – The Control Unit
• Memory
  – Main
  – External
• Input / Output Devices
• The System Bus
Motherboard
A motherboard or main board is the physical arrangement in a computer that contains the
computer's basic circuitry and components.
Input Devices
…anything that feeds data into the computer. This data can be in alphanumeric form, which needs to be keyed in, or in its very basic natural form, i.e. what we hear, smell, touch, see, taste … and the sixth sense, feel?
Typical input devices are:
1. Keyboard
2. Mouse
3. Joystick
4. Digitizing Tablet
5. Touch Sensitive Screen
6. Light Pen
7. Space Mouse
8. Digital Stills Camera
9. Magnetic Ink Character Recognition (MICR)
10. Optical Mark Reader (OMR)
11. Image Scanner
12. Bar Codes
13. Magnetic Reader
14. Smart Cards
15. Voice Data Entry
16. Sound Capture
17. Video Capture
The Keyboard is the standard data input and operator control device for a computer. It consists
of the standard QWERTY layout with a numeric keypad and additional function keys for control
purposes.
The Mouse is a popular input device. You move it across the desk and its movement is shown
on the screen by a marker known as a 'cursor'. You will need to click the buttons at the top of
the mouse to select an option.
A trackball looks like a mouse turned over: the ball is on the top, with selection buttons on the side. It is also a pointing device used to move the cursor, and works like a mouse. To move the cursor in a particular direction, the user spins the ball in that direction. It is sometimes considered better than a mouse because it requires little arm movement and less desktop space, and it is generally used with portable computers.
Magnetic Ink Character Recognition (MICR) is used to recognize magnetically charged characters, mainly found on bank cheques. The characters are printed in a special magnetic ink. An MICR device reads the patterns of these characters and compares them with special patterns stored in memory. Using an MICR device, a large volume of cheques can be processed in a day, which is why MICR is widely used by the banking industry for the processing of cheques.
The joystick is a rotary lever. Similar to an aircraft's control stick, it enables you to move within
the screen's environment, and is widely used in the computer games industry.
A Digitising Tablet is a pointing device that facilitates the accurate input of drawings and
designs. A drawing can be placed directly on the tablet, and the user traces outlines or inputs
coordinate positions with a hand-held stylus.
A Touch Sensitive Screen is a pointing device that enables the user to interact with the
computer by touching the screen. There are three types of Touch Screens: pressure-sensitive,
capacitive surface and light beam.
A Light Pen is a pointing device shaped like a pen and is connected to a VDU.
The tip of the light pen contains a light-sensitive element which, when placed against the
screen, detects the light from the screen enabling the computer to identify the location of the
pen on the screen. Light pens have the advantage of 'drawing' directly onto the screen, but this
can become uncomfortable, and they are not as accurate as digitising tablets.
The Space mouse is different from a normal mouse as it has an X axis, a Y axis and a Z axis. It
can be used for developing and moving around 3-D environments.
Digital Stills Cameras capture an image which is stored in memory within the camera. When
the memory is full it can be erased and further images captured. The digital images can then be
downloaded from the camera to a computer where they can be displayed, manipulated or
printed.
The Optical Mark Reader (OMR) can read information in the form of numbers or letters and put
it into the computer. The marks have to be precisely located as in multiple choice test papers.
Scanners allow information such as a photo or text to be input into a computer. Scanners are
usually either A4 size (flatbed), or hand-held to scan a much smaller area. If text is to be
scanned, you would use an Optical Character Recognition (OCR) program to recognise the
printed text and then convert it to a digital text file that can be accessed using a computer.
A Bar Code is a pattern printed in lines of differing thickness. The system gives fast and error-free entry of information into the computer. You might have seen bar codes on goods in supermarkets, in libraries and on magazines. Bar codes provide a quick method of recording the sale of items.
Card Reader This input device reads a magnetic strip on a card. Handy for security reasons, it
provides quick identification of the card's owner. This method is used to run bank cash points or
to provide quick identification of people entering buildings.
Smart Card This input device stores data in a microprocessor embedded in the card. This
allows information, which can be updated, to be stored on the card. This method is used in store
cards which accumulate points for the purchaser, and to store phone numbers for cellular
phones.
Voice Data (MIC) This system accepts the spoken word as input data or commands. Human
speech is very complex, involving emphasis and facial expressions, so complete voice
recognition will not be developed for some time. However, simple commands from one user can
be used to control machines. In this way a paralysed person can operate a wheelchair or control
heating and lighting.
Voice Capture With the addition of a sound card in one of the expansion slots of your computer
you can "record" voice or music. The sound card digitises the information into a form that the
computer can understand.
With a video capture board in one of your computer's expansion slots you can capture video
(photographic) images through a video camera. The video capture board digitises the image.
In Summary
There are several ways to get new information or input into a computer. The two most common
ways are the keyboard and the mouse. The keyboard has keys for characters (letters,
numbers and punctuation marks) and special commands. Pressing the keys tells the computer
what to do or what to write. The mouse has a special ball that allows you to roll it around on a
pad or desk and move the cursor around on screen. By clicking on the buttons on the mouse,
you give the computer directions on what to do. There are other devices similar to a mouse that
can be used in its place.
A trackball has the ball on top and you move it with your finger. A touchpad allows you to
move your finger across a pressure sensitive pad and press to click. A scanner copies a
picture or document into the computer. Another input device is a graphics tablet. A pressure
sensitive pad is plugged into the computer. When you draw on the tablet with the special pen
(never use an ink pen or pencil!), the drawing appears on the screen. The tablet and pen can
also be used like a mouse to move the cursor and click.
"Never trust a computer you can't throw out a window." – Steve Wozniak
Output Devices
Output devices display information in a way that you can understand. The most common output device is a monitor. It looks a lot like a TV and houses the computer screen. The monitor allows you to 'see' what you and the computer are doing together.
Brief of Output Devices
Output devices are pieces of equipment that are used to get information, or any other response, out of the computer. These devices display information that has been held or generated within the computer in a way that you can understand. The most common output device is a monitor.
Types of Output Device
• Printing: plotter, printer
• Sound: speakers
• Visual: monitor
A Printer is another common part of a computer system. It takes what you see on the computer screen and prints it on paper. There are two types of printers: impact printers and non-impact printers.
Speakers are output devices that allow you to hear sound from your computer. Computer
speakers are just like stereo speakers. There are usually two of them and they come in various
sizes.
Types of Output Device - Visual
A Graphics Processing Unit or GPU (also occasionally called Visual Processing Unit or VPU) is
a dedicated graphics rendering device for a personal computer or game console. Modern GPUs
are very efficient at manipulating and displaying computer graphics, and their highly-parallel
structure makes them more effective than typical CPUs for a range of complex algorithms.
A computer display (also known as a computer monitor, computer screen, or computer video
display) is a device that can display signals generated by a computer as images on a screen.
Visual Display Units (VDUs), or monitors, are used to visually interface with the computer and are similar in appearance to a television.
Examples are:
• A cathode ray monitor
• A plasma monitor
Ports
The keyboard, mouse, monitor, and printer all plug into ports. There are also extra ports to plug
in extra hardware like joysticks, gamepads, scanners, digital cameras and the like. The ports
are controlled by their expansion cards which are plugged into the motherboard and are
connected to other components by cables - long, flat bands that contain electrical wiring.
Ports are the places on the outside of the computer case where you plug in hardware. On the
inside of the case, they are connected to expansion cards.
Memory or Primary Storage
Purpose of Storage
The fundamental components of a general-purpose computer are the arithmetic and logic unit, control circuitry, storage space, and input/output devices. If storage were removed, the device we had would be a simple calculator instead of a computer. The ability to store the instructions that form a computer program, and the information that the instructions manipulate, is what makes stored-program computers versatile.
• Primary storage, or internal memory, is computer memory that is accessible to the central processing unit of a computer without the use of the computer's input/output channels.
• Primary storage, also known as main storage or memory, is the main area in a computer in which data is stored for quick access by the computer's processor.
Primary Storage
Primary storage is directly connected to the central processing unit of the computer. It must be present for the CPU to function correctly, just as, in a biological analogy, the lungs must be present (for oxygen storage) for the heart to function (to pump and oxygenate the blood). Primary storage typically consists of three kinds of storage:
Processor Registers
These are internal to the central processing unit. Registers contain information that the arithmetic and logic unit needs to carry out the current instruction. They are technically the fastest of all forms of computer storage.
Main memory
It contains the programs that are currently being run and the data the programs are operating on. The arithmetic and logic unit can very quickly transfer information between a processor register and locations in main storage, known as "memory addresses". In modern computers, electronic solid-state random access memory is used for main storage, and is directly connected to the CPU via a "memory bus" and a "data bus".
Cache memory
It is a special type of internal memory used by many central processing units to increase their
performance or "throughput". Some of the information in the main memory is duplicated in the
cache memory, which is slightly slower but of much greater capacity than the processor
registers, and faster but much smaller than main memory.
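The throughput gain from a cache can be sketched with a simple average-access-time model: most accesses hit the fast cache, and only the misses pay the cost of main memory. The timing figures below are illustrative assumptions, not values from these notes:

```python
def avg_access_time_ns(hit_rate, cache_ns, memory_ns):
    """Simple two-level model: hits are served from the cache,
    misses fall through to the slower main memory."""
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

# Assumed figures: 95% hit rate, 1 ns cache, 60 ns main memory
# gives an average of roughly 3.95 ns per access instead of 60 ns.
print(avg_access_time_ns(0.95, 1.0, 60.0))
```

Even a small cache helps because programs tend to reuse the same instructions and data repeatedly, which keeps the hit rate high.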
Memory
Memory is often used as a shorter synonym for Random Access Memory (RAM). This kind of memory is located on one or more microchips that are physically close to the microprocessor in your computer. Most desktop and notebook computers sold today include at least 512 megabytes of RAM (which is really the minimum needed to install an operating system). RAM is upgradeable, so you can add more when your computer runs really slowly.
The more RAM you have, the less frequently the computer has to access instructions and data
from the more slowly accessed hard disk form of storage. Memory should be distinguished from
storage, or the physical medium that holds the much larger amounts of data that won't fit into
RAM and may not be immediately needed there.
Storage devices include hard disks, floppy disks, CDROMs, and tape backup systems. The
terms auxiliary storage, auxiliary memory, and secondary memory have also been used for this
kind of data repository.
RAM is temporary memory and is erased when you turn off your computer, so remember to
save your work to a permanent form of storage space like those mentioned above before exiting
programs or turning off your computer.
Types of RAM
There are two types of RAM used in PCs - Dynamic and Static RAM.
Dynamic RAM (DRAM): The information stored in Dynamic RAM has to be refreshed after
every few milliseconds otherwise it will get erased. DRAM has higher storage capacity and is
cheaper than Static RAM.
Static RAM (SRAM): The information stored in Static RAM need not be refreshed; it remains stable as long as the power supply is provided. SRAM is costlier but has higher speed than DRAM.
Additional kinds of integrated and quickly accessible memory are Read Only Memory (ROM),
Programmable ROM (PROM), and Erasable Programmable ROM (EPROM). These are used to
keep special programs and data, such as the BIOS, that need to be in your computer all the
time. ROM is "built-in" computer memory containing data that normally can only be read, not
written to (hence the name read only).
ROM contains the programming that allows your computer to be "booted up" each time you turn it on. Unlike a computer's random access memory (RAM), the data in ROM is not lost when the computer power is turned off. (The settings you change in the hardware setup procedure are held separately, in CMOS memory sustained by a small long-life battery called the CMOS battery.) ROM is non-volatile, but it is not suited to storage of large quantities of data because it is expensive to produce. Typically, ROM must also be completely erased before it can be rewritten.
PROM (Programmable Read Only Memory)
A variation of the ROM chip is programmable read only memory. PROM can be programmed to
record information using a facility known as PROM-programmer. However once the chip has
been programmed, the recorded information cannot be changed; i.e. the PROM becomes a ROM and the information can only be read.
EPROM (Erasable Programmable Read Only Memory)
As the name suggests, with Erasable Programmable Read Only Memory the information can be erased and the chip programmed anew to record different information, using a special PROM-programmer. When an EPROM is in use, the information can only be read, and it remains on the chip until it is erased.
Storage Devices
The purpose of storage in a computer is to hold data or information and get that data to the
CPU as quickly as possible when it is needed. Computers use disks for storage: hard disks that
are located inside the computer, and floppy or compact disks that are used externally.
• A method of storing data and information on a long-term basis, i.e. even after the PC is switched off
• It is non-volatile
• Can be easily removed, moved and attached to some other device
• Memory capacity can be extended to a great extent
• Cheaper than primary memory
Storage Involves Two Processes
a) Writing data
b) Reading data
Definitions
Storage Media – The materials on which data is stored.
Storage Devices – The hardware components that write data to, and read data from, storage media.
Categories of Storage Technology
• Magnetic storage
• Optical storage
Magnetic Storage
• Diskettes / high-capacity floppy disks
• Hard disks
• Zip drives
• Disk cartridges
• Magnetic tape
Magnetism Allows Data Storage Hard disks, diskettes, high-capacity floppy disks and tapes
have a magnetic coating on their surface that enables each medium to store data.
Floppy Disks
The floppy disk drive (FDD) was invented at IBM by Alan Shugart in 1967. The first floppy drives
used an 8-inch disk (later called a "diskette" as it got smaller), which evolved into the 5.25-inch
disk that was used on the first IBM Personal Computer in August 1981. The 5.25-inch disk held
360 kilobytes compared to the 1.44 megabyte capacity of today's 3.5-inch diskette.
The 5.25-inch disks were dubbed "floppy" because the diskette packaging was a very flexible
plastic envelope, unlike the rigid case used to hold today's 3.5-inch diskettes.
By the mid-1980s, the improved designs of the read/write heads, along with improvements in
the magnetic recording media, led to the less-flexible, 3.5-inch, 1.44-megabyte (MB) capacity
FDD in use today. For a few years, computers had both FDD sizes (3.5-inch and 5.25-inch). But
by the mid-1990s, the 5.25-inch version had fallen out of popularity, partly because the
diskette's recording surface could easily become contaminated by fingerprints through the open
access area.
When you look at a floppy disk, you'll see a plastic case that measures about 3.5 inches on a side. Inside that case is a very thin piece of plastic that is coated with microscopic iron particles. This disk is much like the tape inside a video or audio cassette. Basically, a floppy disk drive reads and writes data to a small, circular piece of magnetically coated plastic similar to audio cassette tape.
At one end of it is a small metal cover with a rectangular hole in it. That cover can be moved
aside to show the flexible disk inside. But never touch the inner disk - you could damage the
data that is stored on it. On one side of the floppy disk is a place for a label. On the other side
is a silver circle with two holes in it. When the disk is inserted into the disk drive, the drive
hooks into those holes to spin the circle. This causes the disk inside to spin at about 300 rpm!
At the same time, the silver metal cover on the end is pushed aside so that the head in the disk
drive can read and write to the disk.
Floppy disks are the smallest type of storage, holding only 1.44MB.
3.5-inch Diskettes (Floppy Disks) features:
• Spin rate: approx. 300 revolutions per minute (rpm)
• High-density (HD) disks are more common today than older double-density (DD) disks
• Storage capacity of HD disks is 1.44 MB
Floppy Disk Drive Terminology
 Floppy disk - Also called diskette. The common size is 3.5 inches.
 Floppy disk drive - The electromechanical device that reads and writes floppy disks.
 Track - Concentric ring of data on a side of a disk.
 Sector - A subset of a track, similar to wedge or a slice of pie.
The floppy disk drive consists of a read/write head and a motor rotating the disk at a high speed of about 300 rotations per minute. It can be fitted inside the cabinet of the computer, and from outside only the slit where the disk is to be inserted is visible. When the drive is closed after inserting the floppy, the motor catches the disk through the central disk hub, and then it starts rotating.
There are two read/write heads, depending on the floppy being one-sided or two-sided. The head consists of a read/write coil wound on a ring of magnetic material. During a write operation, when the current passes in one direction through the coil, the disk surface touching the head is magnetized in one direction. For reading the data, the procedure is reversed: the magnetized spots on the disk passing the read/write head induce electronic pulses, which are sent to the CPU.
The major parts of a FDD include:
 Read/Write Heads: Located on both sides of a diskette, they move together on the
same assembly. The heads are not directly opposite each other in an effort to prevent
interaction between write operations on each of the two media surfaces. The same head
is used for reading and writing, while a second, wider head is used for erasing a track
just prior to it being written. This allows the data to be written on a wider "clean slate,"
without interfering with the analog data on an adjacent track.
 Drive Motor: A very small spindle motor engages the metal hub at the center of the
diskette, spinning it at either 300 or 360 rotations per minute (RPM).
 Stepper Motor: This motor makes a precise number of stepped revolutions to move the
read/write head assembly to the proper track position. The read/write head assembly is
fastened to the stepper motor shaft.
 Mechanical Frame: A system of levers that opens the little protective window on the
diskette to allow the read/write heads to touch the dual-sided diskette media. An external
button allows the diskette to be ejected, at which point the spring-loaded protective
window on the diskette closes.
 Circuit Board: Contains all of the electronics to handle the data read from or written to
the diskette. It also controls the stepper-motor control circuits used to move the
read/write heads to each track, as well as the movement of the read/write heads toward
the diskette surface.
Electronic optics check for the presence of an opening in the lower corner of a 3.5-inch diskette
(or a notch in the side of a 5.25-inch diskette) to see if the user wants to prevent data from being
written on it.
Hard Disks
Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter
holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code
name used for a popular IBM product). They later became known as "hard disks" to distinguish
them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as
opposed to the flexible plastic film found in tapes and floppies.
At the simplest level, a hard disk is not that different from a cassette tape. Both hard disks and
cassette tapes use the same magnetic recording techniques. Hard disks and cassette tapes
also share the major benefits of magnetic storage -- the magnetic medium can be easily erased
and rewritten, and it will "remember" the magnetic flux patterns stored onto the medium for
many years.
To increase the storage capacity of large computer systems, hard disks were introduced. These are of two types: removable and fixed disks. A hard disk consists of a pack of magnetic disks, called platters, mounted around a central spindle that rotates the set of disks at a high speed of about 3,600 revolutions per minute. The hard disk has many access arms, each containing two read/write heads for the two surfaces of each individual disk; the total number of heads may be as many as 15.
IBM first introduced a rotating removable hard disk with 10 disk surfaces and a capacity of about 7 MB. Afterwards the Winchester technology was developed: the entire disk was protected from dust in the air by completely enclosing it, thereby reducing the possibility of a head crash and simultaneously allowing the head to move nearer to the disk. The read/write head in a Winchester disk never touches the surface but lies just above it on a cushion of air. These disks are available in 5.25-inch or 3.5-inch sizes, and can be removed and replaced by the user. The speed of transferring data is very high in Winchester disks compared to floppy disks. The storage capacity can be around 40, 90, 120, 240 or even 600 MB. The data is stored in the sectors of circular tracks, similar to a floppy disk.
A typical desktop machine will have a hard disk with a capacity of between 80 and 120 GB.
Data is stored onto the disk in the form of files. A file is simply a named collection of bytes. The
bytes might be the ASCII codes for the characters of a text file, or they could be the instructions
of a software application for the computer to execute, or they could be the records of a
database, or they could be the pixel colors for a GIF image. No matter what it contains,
however, a file is simply a string of bytes. When a program running on the computer requests a
file, the hard disk retrieves its bytes and sends them to the CPU one at a time.
There are two ways to measure the performance of a hard disk:
 Data rate - The data rate is the number of bytes per second that the drive can deliver to
the CPU. Rates between 5 and 40 megabytes per second are common.
 Seek time - The seek time is the amount of time between when the CPU requests a file
and when the first byte of the file is sent to the CPU. Times between 10 and 20
milliseconds are common.
The other important parameter is the capacity of the drive, which is the number of bytes it can
hold.
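These two performance figures combine into a rough estimate of how long a file takes to reach the CPU: one seek, then a transfer at the sustained data rate. A minimal sketch, with illustrative numbers drawn from the ranges above:

```python
def read_time_ms(file_bytes, seek_ms, rate_mb_per_s):
    """Approximate time to deliver a file: one seek plus the transfer
    at the drive's data rate (ignores rotational latency and caching)."""
    transfer_ms = file_bytes / (rate_mb_per_s * 1_000_000) * 1000
    return seek_ms + transfer_ms

# A 5 MB file on a drive with a 15 ms seek time and 20 MB/s data rate:
print(read_time_ms(5_000_000, 15, 20))  # -> 265.0 ms
```

Note how for large files the data rate dominates, while for many small files the per-file seek time becomes the bottleneck.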
The best way to understand how a hard disk works is to take a look inside. It is a sealed
aluminum box with controller electronics attached to one side. The electronics control the
read/write mechanism and the motor that spins the platters. The electronics also assemble the
magnetic domains on the drive into bytes (reading) and turn bytes into magnetic domains
(writing). The electronics are all contained on a small board that detaches from the rest of the
drive. Underneath the board are the connections for the motor that spins the platters, as well as
a highly-filtered vent hole that lets internal and external air pressures equalize. Removing the
cover from the drive reveals an extremely simple but very precise interior.
• The platters – These typically spin at 3,600 or 7,200 rpm when the drive is operating. These platters are manufactured to amazing tolerances and are mirror-smooth.
• The arm – This holds the read/write heads and is controlled by the mechanism in the upper-left corner. The arm is able to move the heads from the hub to the edge of the drive. The arm and its movement mechanism are extremely light and fast; the arm on a typical hard-disk drive can move from hub to edge and back up to 50 times per second.
In order to increase the amount of information the drive can store, most hard disks have
multiple platters. This drive has three platters and six read/write heads. The mechanism that
moves the arms on a hard disk has to be incredibly fast and precise.
Data is stored on the surface of a platter in sectors and tracks. Tracks are concentric circles,
and sectors are pie-shaped wedges on a track. A sector contains a fixed number of bytes -- for
example, 256 or 512. Either at the drive or the operating system level, sectors are often grouped
together into clusters.
The process of low-level formatting a drive establishes the tracks and sectors on the platter.
The starting and ending points of each sector are written onto the platter. This process
prepares the drive to hold blocks of bytes. High-level formatting then writes the file-storage
structures, like the file-allocation table, into the sectors. This process prepares the drive to hold
files.
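Because tracks, sectors and clusters are fixed-size units, locating a byte on the formatted disk is simple integer arithmetic. A sketch using the 512-byte sector from the text; the cluster size below is an assumed value, since it varies by drive and file system:

```python
SECTOR_BYTES = 512        # bytes per sector, per the text
SECTORS_PER_CLUSTER = 8   # assumed: cluster sizes vary by file system

def locate(byte_offset):
    """Return (sector index, cluster index) for a byte offset on the disk."""
    sector = byte_offset // SECTOR_BYTES
    cluster = sector // SECTORS_PER_CLUSTER
    return sector, cluster

print(locate(10_000))  # -> (19, 2): byte 10,000 sits in sector 19, cluster 2
```

Grouping sectors into clusters like this is why a file always occupies a whole number of clusters, even if its last cluster is mostly empty.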
Hard Disks
Your computer uses two types of memory: primary memory which is stored on chips located on
the motherboard, and secondary memory that is stored in the hard drive. Primary memory holds
all of the essential memory that tells your computer how to be a computer. Secondary memory
holds the information that you store in the computer.
Inside the hard disk drive case you will find circular disks that are made from polished aluminum or glass. On the disks there are many tracks, or cylinders. Within the hard drive, an electronic reading/writing device called the head passes back and forth over the cylinders, reading information from the disk or writing information to it. Hard drives spin at 3,600 rpm (revolutions per minute) or more – that means that in one minute, the hard drive spins around 3,600 or more times!
Today's hard drives can hold a great deal of information - sometimes over 160GB!
Hard Disks
• Spin rate: from 3,600 to 15,000 rpm
• Storage capacity ranges from several hundred MB to more than 160 GB
• The most common HDD interfaces are IDE (Integrated Drive Electronics) and SCSI (Small Computer System Interface)
Removable High-Capacity Magnetic Disks combine the speed and capacity of a hard disk with the portability of a diskette.
Three Kinds of Removable High-Capacity Magnetic Disks
• High-capacity floppy disks
• Hot-swappable hard disks
• Disk cartridges
Tape Drives – Commonly used for (hard disk) backup; they can store huge amounts of data at half the price of a hard disk.
PC Cards – Used to connect new components, such as memory, and to expand the storage capacity of a computer.
Zip Drives
• The Zip drive is similar in size to the floppy drive
• Its capacity varies from 250 MB to 700 MB
• The transfer rate is about 1 MB/sec, compared to the 1.44 MB floppy disk's 500 kb/sec
• It can be integrated with IDE and SCSI interfaces
• It also has a read/write lock and can be password protected
Zip Drive
A high-capacity floppy disk (Zip disk) is slightly larger than a conventional floppy disk, and about twice as thick. Zip disks can hold 100 or 250 MB of data. Because they're relatively inexpensive and durable, they have become a popular medium for backing up hard disks and for transporting large files.
The 1.44-megabyte floppy disk drives that use 3.5-inch diskettes have been around for about 15
years. At the time of their introduction, they seemed like a miracle -- they were smaller than the
standard 5.25-inch disks, but they held more data!
Here are some of the parameters that determine how much data a floppy disk can hold:
 Tracks per inch: 135
 Total tracks per side: 80
 Sectors per track: 18
 Bytes per sector: 512
 Spin rate: 360 rpm
 Head movement mechanism: worm gear and stepper motor
Two important things to notice are the low number of tracks on the disk and the fixed number
of sectors per track. Neither one of these techniques makes very good use of the surface of the
disk.
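Multiplying the parameters listed above together reproduces the diskette's quoted capacity; a quick check (the two-sided geometry is the standard 3.5-inch HD layout):

```python
# Geometry of a 3.5-inch HD diskette, from the list above
tracks_per_side = 80
sectors_per_track = 18
bytes_per_sector = 512
sides = 2

capacity = tracks_per_side * sectors_per_track * bytes_per_sector * sides
print(capacity)         # 1474560 bytes
print(capacity / 1024)  # 1440.0 KB -- marketed as "1.44 MB"
```

The marketing figure "1.44 MB" mixes units (1,440 KB read as 1.44 × 1,000 KB), which is why the true capacity is neither 1.44 million bytes nor 1.44 × 1024 × 1024 bytes.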
The main thing that separates a Zip disk from a floppy disk is the magnetic coating used on
the disk. On a Zip disk, the coating is much higher quality. The higher quality coating means that
a Zip disk read/write head can be significantly smaller than a floppy disk's (by a factor of 10 or
so).
The smaller head, combined with a head positioning mechanism similar to that used in a hard
disk, means that a Zip drive can pack thousands of tracks per inch on the track surface. Zip
drives also use a variable number of sectors per track to make the best use of disk space. All
of these things combine to create a floppy disk that holds a huge amount of data!
Optical Storage
• Compact Disk Read-Only Memory (CD-ROM)
• CD-Recordable (CD-R) / CD-Rewritable (CD-RW)
• Digital Video Disk Read-Only Memory (DVD-ROM)
• DVD-Recordable (DVD-R) / DVD-Rewritable (DVD-RW)
• Photo CD
Optical Storage Devices Data is stored on a reflective surface so it can be read by a beam of
laser light.
Two Kinds of Optical Storage Devices
• CD-ROM (compact disk read-only memory)
• DVD-ROM (digital video disk read-only memory)
Compact Disks
Instead of electromagnetism, CDs use pits (microscopic indentations) and lands (flat surfaces) to store information, much the same way floppies and hard disks use magnetic and non-magnetic storage. Inside the CD-ROM drive is a laser that reflects light off the surface of the disk to an electric eye. The pattern of reflected light (land) and scattered light (pit) creates a code that represents data.
CDs usually store about 650 MB. This is quite a bit more than the 1.44 MB that a floppy disk stores. A DVD, or Digital Video Disk, holds even more information than a CD, because a DVD can store information on two layers, in smaller pits, or sometimes on both sides.
Compact Disk (CD)
• Standard CDs store 650 MB of data or 70 minutes of audio
• New-generation CDs hold 700 MB of data or 80 minutes of audio
• CD-ROM drives are slower than hard disk drives
• CD-ROM speed is expressed in multiples, ranging from 2x to 52x
Digital Video Disk (DVD) Storage capacity ranges from 9.4 GB to 17 GB
Recordable Optical Technologies
• CD-Recordable (CD-R)
• CD-Rewritable (CD-RW)
• PhotoCD
• DVD-Recordable (DVD-R)
• DVD-RAM
Emerging Storage Technologies
• FMD-ROM (Fluorescent Multi-Layer Disc), which can store up to 140 GB
• Smart Cards
• Holographic memory
CD ROM - Compact Disc Read Only Memory.
Unlike magnetic storage devices, which store data on multiple concentric tracks, all CD formats store data on one physical track that spirals continuously from the center to the outer edge of the recording area. Data resides on a thin aluminum substrate immediately beneath the label. The data is recorded as a series of microscopic pits and lands physically embossed on the substrate. Optical drives use a low-power laser to read data from the disc without physical contact between the head and the disc, which contributes to the high reliability and permanence of this storage medium.
To write data, a higher-power laser is used to create the pits and lands on the aluminum substrate. The data is stored permanently on the disc. Such discs are called WORM (Write Once, Read Many): data written to a CD cannot subsequently be deleted or overwritten, which can be an advantage or a disadvantage depending on the user's requirements. However, if the CD is only partially filled, more data can be added later until it is full. CDs are cheap and cost-effective in terms of storage capacity and data transfer.
CDs were further developed so that data could be deleted and rewritten. These are called CD-Rewritable (CD-RW) discs; space for new data is made by erasing old data. Such discs can be written and rewritten at least 1,000 times.
CD ROM Drive
CD-ROM drives are so well standardized and have become so ubiquitous that many treat them as commodity items. Although CD-ROM drives differ in reliability, in which standards they support and in numerous other respects, there are two important performance measures:
 Data transfer rate
 Average access time
Data transfer rate: The data transfer rate is how fast the drive delivers sequential data to the interface. This rate is determined by the drive rotation speed, and is rated by a number followed by 'X'. All other things being equal, a 32X drive delivers data at twice the speed of a 16X drive. A fast transfer rate matters most when the drive is used to transfer large files or many sequential smaller files, for example gaming video.
A CD-ROM drive transfers data at some integer multiple of the basic 1X rate of 150 KB/s. Rather than designating drives by their actual KB/s output, manufacturers use a multiple of the standard 1X rate. For example, a 12X drive transfers data at 12 × 150 KB/s = 1800 KB/s, and so on.
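The X-rating arithmetic above can be sketched as a one-line conversion; the constant and function name are illustrative:

```python
BASE_CD_RATE_KBPS = 150  # the 1X CD-ROM rate, in KB/s

def cd_transfer_rate_kbps(x_rating):
    """Convert a CD-ROM X rating into a sequential transfer rate in KB/s."""
    return x_rating * BASE_CD_RATE_KBPS

print(cd_transfer_rate_kbps(12))  # prints 1800 (KB/s)
print(cd_transfer_rate_kbps(32))  # prints 4800 (KB/s)
```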
The data on a CD is saved on a track that spirals from the center of the CD to the outer edge. The portions of the track toward the center are shorter than those toward the edge. Moving the data under the head at a constant rate requires spinning the disc faster as the head moves from the center, where there is less data per revolution, to the edge, where there is more. Hence the rotation rate of the disc changes as the head progresses from the inner to the outer portions of the disc.
CD Writers
CD-Recordable and CD-Rewritable drives are collectively called CD writers or CD burners. They are essentially CD-ROM drives with one difference: they have a more powerful laser that, in addition to reading discs, can record data to special CD media.
Following are the parameters for CD writer.
Transfer rate: A CD writer has two speeds; the lower refers to the write speed and the higher to the read speed. An 8X/24X CD-R drive, for example, writes data at 1200 KB/s and reads it at 3600 KB/s. A CD-RW drive has three speeds: the lowest is how fast data can be written to a CD-RW disc, the middle how fast data can be written to a CD-R disc, and the highest how fast the drive can read data. A drive that provides 4X write performance is usually considered adequate.
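A small sketch of how a speed label such as "8X/24X" maps to write and read rates, using the 150 KB/s 1X base; the parsing helper is hypothetical:

```python
def cdr_speeds_kbps(label):
    """Parse a 'writeX/readX' CD-R label such as '8X/24X' into (write, read) KB/s."""
    write_x, read_x = (int(part.rstrip('Xx')) for part in label.split('/'))
    return write_x * 150, read_x * 150  # 1X = 150 KB/s

print(cdr_speeds_kbps("8X/24X"))  # prints (1200, 3600)
```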
Average access time: The more powerful lasers required for burning CDs need heavier heads than those in standard CD-ROM drives, which in turn means that average access times are slower. An average access time of about 200 ms is considered acceptable.
Interface: Writing CDs is less trouble-prone if a SCSI interface is used rather than ATAPI, or Windows NT rather than Windows 9x. However, if the burner is used infrequently, and nothing else is running while a CD is being written, an ATAPI model may suffice.
Buffer size: A large buffer helps avoid ruining CD-R discs through buffer under-runs. How large a buffer needs to be depends on the maximum write speed of the drive. For example, an 8X burner with a 2 MB buffer can store about 1.7 seconds of data when the drive is writing at maximum speed.
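The 1.7-second figure follows directly from the buffer size and the write rate; a minimal sketch, with a hypothetical helper name:

```python
def buffer_seconds(buffer_mb, write_x, base_kbps=150):
    """Seconds of data a write buffer holds at a given X write speed."""
    write_rate_kbps = write_x * base_kbps   # e.g. 8X -> 1200 KB/s
    return (buffer_mb * 1024) / write_rate_kbps

print(round(buffer_seconds(2, 8), 1))  # prints 1.7 (seconds)
```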
DVD Drives
DVD originally stood for Digital Video Disc, later for Digital Versatile Disc. A DVD is basically a CD on steroids. Like a CD, a DVD stores data using tiny pits and lands embossed on a spiral track on an aluminized surface. But where a CD-ROM uses a 780-nanometer infrared laser, a DVD uses a 635-nm or 650-nm laser. Shorter wavelengths can resolve smaller pits, which enables pits to be spaced more closely. This allows improved sector formatting, more efficient error-correction codes, tighter tolerances and a somewhat larger recording area, which together allow DVDs to store seven times as much data as a CD.
One significant enhancement of DVD over CD is that DVD does away with the plethora of incompatible CD formats. Every DVD disc uses the same physical file structure, promoted by the Optical Storage Technology Association (OSTA) and called the Universal Disc Format (UDF). That means any DVD drive or player can read any file on any DVD disc.
DVD-ROM Types and Capacities
DVD-ROMs are available in numerous standardized types, most of which are uncommon or not used at all. Discs may be either of two physical sizes, and may have one or two sides, each of which may store data in a single or double layer. Like CDs, single-sided DVD-ROM discs are 1.2 mm thick. Double-sided discs are simply two thin discs (0.6 mm each) glued back to back. Most DVD players and drives require manually flipping the disc to access the data on the other side.
DVD ROM Speed
Like CD drives, DVD drives use the X factor to specify throughput. However, a DVD X has a different meaning than a CD-ROM X. A 1X CD drive transfers data at 150 KB/s, but a 1X DVD drive transfers data at 11.08 million bits/s, or about 1.321 MB/s. At a given X rating, a DVD drive therefore transfers data roughly nine times faster than a CD drive.
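The "nine times" figure can be checked from the two 1X rates quoted above (rates in KB/s; a rough sketch, since the DVD rate is itself approximate):

```python
CD_1X_KBPS = 150    # 1X CD-ROM rate
DVD_1X_KBPS = 1321  # ~1.321 MB/s expressed in KB/s

ratio = DVD_1X_KBPS / CD_1X_KBPS
print(round(ratio))  # prints 9 -- a 1X DVD drive is roughly 9x a 1X CD drive
```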
Pen Drives / Flash Drives
 Pen drives / flash drives are flash-memory storage devices.
 They are fast, portable and can store large amounts of data.
 A pen drive consists of a small printed circuit board with an LED, encased in robust plastic.
 A male connector is used to connect to the host PC.
 They can also be used as MP3 players.
Computers are useless. They can only give you answers.
Pablo Picasso
Printers
Printers are hardware devices that allow you to create a hard copy of a file. Today a printer is a necessary requirement for any home user or business, allowing individuals to save their work on paper rather than only electronically.
Types of Printers
 Impact printers
o In an impact printer, an inked ribbon sits between the print head and the paper; the head striking the ribbon prints the character.
 Non-impact printers
o Non-impact printers use techniques other than the mechanical method of a head striking a ribbon.
Impact printers
Impact printers are basically divided into 2 types
 Serial/Character printers
o Dot matrix printers
 Daisy wheel printers
o Line Printers
Non-Impact Printers
Non Impact Printers are divided into 3 categories
 Thermal printers
 Ink jet printers
 Laser printers
Classification
Printers are classified by the following characteristics:
Quality of type: The output produced by printers is said to be either letter quality (as good as a
typewriter), near letter quality, or draft quality. Only daisy-wheel, ink-jet, and laser printers
produce letter-quality type. Some dot-matrix printers claim letter-quality print, but if you look
closely, you can see the difference.
Speed: Measured in characters per second (cps) or pages per minute (ppm), the speed of
printers varies widely. Daisy-wheel printers tend to be the slowest, printing about 30 cps. Line
printers are fastest (up to 3,000 lines per minute). Dot-matrix printers can print up to 500 cps,
and laser printers range from about 4 to 20 text pages per minute.
Impact or non-impact: Impact printers include all printers that work by striking an ink ribbon.
Daisy-wheel, dot-matrix, and line printers are impact printers. Non-impact printers include laser
printers and ink-jet printers. The important difference between impact and non-impact printers is
that impact printers are much noisier.
Graphics: Some printers (daisy-wheel and line printers) can print only text. Other printers can
print both text and graphics.
Fonts: Some printers, notably dot-matrix printers, are limited to one or a few fonts. In contrast, laser and ink-jet printers can print an almost unlimited variety of fonts. Daisy-wheel printers can also print different fonts, but you need to change the daisy wheel, making it difficult to mix fonts in the same document.
Dot Matrix Printers
A dot matrix printer or impact matrix printer refers to a type of computer printer with a print head
that runs back and forth on the page and prints by impact, striking an ink-soaked cloth ribbon
against the paper, much like a typewriter. Unlike a typewriter or daisy wheel printer, letters are
drawn out of a dot matrix, and thus, varied fonts and arbitrary graphics can be produced.
Because the printing involves mechanical pressure, these printers can create carbon copies and
carbonless copies. The standard of print obtained is poor. These printers are cheap to run and
relatively fast.
The moving portion of the printer is called the print head; it prints one line of text at a time. Most dot matrix printers have a single vertical line of dot-making equipment on their print heads; others have a few interleaved rows to improve dot density. The print head consists of 9 or 24 pins, each of which can move freely within its tube; the more pins, the better the quality of output. Characters are formed from a matrix of dots.
The speed is usually 30 to 550 characters per second (cps). These printers can also print graphics, though with limited color performance. Impact printers have one of the lowest printing costs per page. The machines can be highly durable, but eventually wear out. Ink invades the guide plate of the print head, causing grit to adhere to it; this grit slowly wears the channels in the guide plate from circles into ovals or slots, providing less and less accurate guidance to the printing wires. After about a million characters, even with tungsten blocks and titanium pawls, the printing becomes too unclear to read.
As of 2006, dot matrix impact technology remains in use in devices such as cash registers, ATMs, and many other point-of-sale terminals.
Nearly all inkjet, thermal, and laser printers use a dot matrix to describe each character or graphic. In common parlance, however, these are seldom called "dot matrix" printers, to avoid confusion with dot matrix impact printers.
Daisy Wheel Printer
A daisy wheel printer is a type of computer printer that produces high-quality type and is often referred to as a letter-quality printer (in contrast to high-quality dot-matrix printers, capable of near-letter-quality, or NLQ, output). There were also, and still are, daisy wheel typewriters based on the same principle. The daisy wheel printer is slower, with a speed range of 30 to 80 cps.
The system used a small wheel with each letter printed on it in raised metal or plastic. The printer turns the wheel to line up the proper letter under a single pawl, which then strikes the back of the letter and drives it into the paper. In many respects the daisy wheel is similar to a standard typewriter in the way it forms its letters on the page, differing only in the details of the mechanism (a daisy wheel rather than typebars or the typeball used in IBM's electric typewriters).
Daisy wheel printers were fairly common in the 1980s, but were always less popular than dot matrix printers (ballistic wire printers) due to the latter's ability to print graphics and different fonts. With the introduction of high-quality laser and inkjet printers in the late 1980s, daisy wheel systems quickly disappeared except in the small remaining typewriter market.
Line Printer
The line printer is a form of high-speed impact printer in which an entire line of type is printed at a time. The wheels spin at high speed while the paper and an inked ribbon are stepped (moved) past the print position. As the desired character for each column passes the print position, a hammer strikes the paper and ribbon, causing the character to be recorded on the continuous paper. Speeds range from 300 to 2,500 lines per minute (LPM). This technology is still in use in a number of applications, and is usually both faster and less expensive (in total cost of ownership) than laser printers. Line printers remain in use for printing box labels, medium-volume accounting and other large business applications.
Line printers, as the name implies, print an entire line of text at a time. Two principal designs existed. In drum printers, a drum carries the entire character set of the printer, repeated in each column that is to be printed. In chain printers (also known as train printers), the character set is arranged multiple times around a chain that travels horizontally past the print line. In either case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character passes in front of the paper. The paper presses forward against a ribbon, which then presses against the character form, and the impression of the character form is printed onto the paper.
These printers were the fastest of all impact printers and were used for bulk printing in large
computer centres. They were virtually never used with personal computers and have now been
partly replaced by high-speed laser printers.
Thermal Printers
Direct thermal printers create an image by selectively heating coated paper when the paper
passes over the thermal print head. The coating turns black in the areas where it is heated,
creating the image. More recently, two-color direct thermal printers have been produced, which
allow printing of both red (or another color) and black by heating to different temperatures.
Characters are formed by heated elements being placed in contact with special heat-sensitive paper, forming darkened dots when the elements reach a critical temperature. A fax machine uses a thermal printer. Thermal printer paper tends to darken over time due to exposure to sunlight and heat, and the standard of print produced is poor. Thermal printers are widely used in battery-powered equipment such as portable calculators.
Direct thermal printers are increasingly replacing dot matrix printers for printing cash register receipts, both because of their higher print speed and their substantially quieter operation. In addition, direct thermal printing offers the advantage of having only one consumable: the paper itself. Thus, the technology is well suited to unattended applications like gas pumps, information kiosks, and the like.
Until about 2000, most fax machines used direct thermal printing, though, now, only the
cheapest models use it, the rest having switched to either thermal wax transfer, laser, or ink jet
printing to allow plain-paper printouts. Historically, direct thermal paper has suffered from such
limitations as sensitivity to heat, abrasion (the coating can be fragile), friction (which can cause
heat, thus darkening the paper), light (causing it to fade), and water. However, more modern
thermal coating formulations have resulted in exceptional image stability, with text remaining
legible for an estimated 50+ years.
Ink-Jet Printers
Inkjet printers spray very small, precise amounts (usually a few picolitres) of ink onto the media. They are the most common type of computer printer for the general consumer due to their low cost, high output quality, capability of printing in vivid color, and ease of use, and they can print in either black and white or color.
Compared to earlier consumer-oriented printers, ink jets have a number of advantages. They are quieter in operation than impact dot matrix or daisy wheel printers. They can print finer, smoother details through higher print-head resolution, and many ink jets with photorealistic-quality color printing are widely available. For color applications, including photo printing, ink jet methods are dominant.
Working principle:
A cartridge of ink is attached to a print head with up to hundreds of nozzles, each thinner than a
human hair. The number of nozzles and the size of each determine the printer’s resolution. As
the print head moves across the paper, a digital signal from the computer tells each nozzle
when to propel a drop of ink onto the paper. On some printers, this is done with mechanical
vibrations.
Piezoelectric crystals change shape when a voltage is applied to them. As they do so, they
force ink through the nozzles onto the paper. Each pixel in the image can be made up of a
number of tiny drops of ink. The smaller the droplets, and the more of them, the richer and
deeper the colors should be.
These printers give excellent resolution, vivid color, and sharp text, and are affordable to purchase. They are great for home use, and are fairly compact and easy to transport. Inkjet printing speed varies and depends on the resolution you're using and whether you're printing images or text.
Inkjet printers use colour cartridges that combine magenta, yellow and cyan inks to create colour tones. A black cartridge is also used for crisp monochrome output. This method of printing can generate up to 200 cps and allows good-quality, cheap colour printing.
Laser Printers
A laser printer is a common type of computer printer that produces high quality printing, and is
able to produce both text and graphics. The process is very similar to the type of dry process
photocopier first produced by Xerox.
Laser Printers use a laser beam and dry powdered ink to produce a fine dot matrix pattern. This
method of printing can generate about 4 pages of A4 paper per minute. The standard of print is
very good and laser printers can also produce very good quality printed graphic images too.
Working principle:
An electric charge is first projected onto a revolving drum by a corona wire (in older printers) or a primary charge roller. The drum has a surface of a special plastic or garnet. Electronics drive a system that writes light onto the drum; the light causes the electrostatic charge to leak from the exposed parts of the drum. The surface of the drum then passes through a bath of very fine particles of dry plastic powder, or toner. The charged parts of the drum electrostatically attract
the particles of powder. The drum then deposits the powder on a piece of paper. The paper
passes through a fuser, which, with heat and pressure, bonds the plastic powder to the paper.
Each of these steps has numerous technical choices. One of the more interesting choices is
that some "laser" printers actually use a linear array of light-emitting diodes to write the light on
the drum. The toner is essentially ink and also includes either wax or plastic. The chemical
composition of the toner is plastic-based or wax-based so that, when the paper passes through
the fuser assembly, the particles of toner will melt. The paper can be oppositely charged, or not.
The fuser can be an infrared oven, a heated roller, or (on some very fast, expensive printers) a
xenon strobe.
The slowest printers of this type print about 4 pages per minute (ppm), and are relatively
inexpensive. Printer speed can vary widely, however, and depends on many factors. The fastest
print mass mailings (commonly for utilities) at several thousand pages per minute.
The cost of this technology depends on a combination of costs of paper, toner replacement, and
drums replacement, as well as the replacement of other consumables such as the fuser
assembly and transfer assembly. Often printers with soft plastic drums can have a very high
cost of ownership that does not become apparent until the drum requires replacement.
One helpful trait is that in very high volume offices, a duplexing printer (one that prints on both
sides of the paper) can halve paper costs, and reduce filing volumes and floor weight as well.
Not all laser printers, however, can accommodate a duplexing unit. Duplexing can also result in
slower printing speeds, because of the more complicated paper path.
Many printers have a toner-conservation mode, which can be substantially more economical at
the price of only slightly lower contrast.
Warm-up is the process a laser printer goes through when power is first applied to the printer. Lasers are used because they generate a coherent beam of light, giving a high degree of accuracy.
Plotter
A plotter is a vector graphics printing device that connects to a computer. Plotters print their
output by moving a pen across the surface of a piece of paper. This means that plotters are
restricted to line art, rather than raster graphics as with other printers. They can draw complex
line art, including text, but do so very slowly because of the mechanical movement of the pens.
(Plotters are incapable of creating a solid region of colour, but they can hatch an area by drawing a number of close, regular lines.)
Another difference between plotters and printers is that a printer is aimed primarily at printing text. This makes it fairly easy to control: simply sending the text to the printer is usually enough to generate a page of output. This is not the case for line art on a plotter, where a number of plotter control languages were created to send more detailed instructions, such as "draw a line from here to here". The most popular of these is probably HPGL.
Plotters are used primarily in technical drawing and CAD applications, where they have the
advantage of working on very large paper sizes while maintaining high resolution. Another use
has been found by replacing the pen with a cutter, and in this form plotters can be found in
many garment and sign shops.
Note that in many of today's environments, plotters in the traditional sense have been supplanted by (and, in many cases, made obsolete by) large-format inkjet printers. Such printers are often informally known as plotters.
Types of Plotter
Pen plotter: has an ink pen attached to draw the images.
Electrostatic plotter: works similarly to a laser printer.
Barcode printer
A barcode printer (or bar code printer) is a computer peripheral for printing barcode labels or
tags that can be attached to physical objects. Barcode printers are commonly used to label
cartons before shipment, or to label retail items with Universal Product Codes.
The most common barcode printers employ one of two different printing technologies. Direct
thermal printers use a print head to generate heat that causes a chemical reaction in specially
designed paper that turns the paper black. Thermal transfer printers also use heat, but instead
of reacting with the paper, the heat melts a waxy substance on a ribbon that runs over the label
or tag material. The heat transfers ink from the ribbon to the paper. Direct thermal printers are
generally less expensive, but they produce labels that can become illegible if exposed to heat,
direct sunlight, or chemical vapors.
Barcode printers are designed for different markets. Industrial barcode printers are used in large warehouses and manufacturing facilities; they have large paper capacities, operate faster and have a longer service life. For retail and office environments, desktop barcode printers are most common.
Label Printers
A label printer is a computer peripheral that prints on self-adhesive label material and sometimes card stock. Label printers differ from ordinary printers in that they need special feed mechanisms to handle rolled stock or tear-sheet (fanfold) stock.
Types of Label Printers
Desktop label printers are designed for light to medium duty use with a roll of stock up to 4".
They are quiet and inexpensive. Commercial label printers can typically hold a larger roll of
stock (up to 8") and are geared for medium volume printing. Industrial label printers are
designed for heavy duty, continuous operation in warehouses, distribution centers and factories.
PRINTER INTERFACES
Below is a listing of different types of computer printer interfaces.
* Firewire
* MPP-1150
* Parallel port
* SCSI
* Serial port
* USB
PRINTER CHARACTERISTICS
Quality of Print - How good is the output quality of the text the printer prints?
Speed - How fast does the printer print? Generally there are different print qualities; keep in mind that the printer will print more slowly at the highest quality.
Ink / Ribbon - What type of ink or ribbon does the printer use? How much will it cost to replace? Can it be purchased locally, or does it need to be ordered from the printer company?
Paper - Does the printer require special paper? How is the printer paper loaded?
Other types of Printers
There are other types of printers available, mostly special-purpose printers for professional
graphics or publishing organizations. These printers are not for general purpose use, however.
Because they are relegated to niche uses, their prices (both one-time and recurring
consumables costs) tend to be higher relative to more mainstream units.
Solid Ink Printers
Used mostly in the packaging and industrial design industries, solid ink printers are prized for
their ability to print on a wide variety of paper types. Solid ink printers, as the name implies, use
hardened ink sticks that are melted and sprayed through small nozzles on the print head.
The paper is then sent through a fuser roller which further forces the ink onto the paper. The
solid ink printer is ideal for prototyping and proofing new designs for product packages; as such,
most service-oriented businesses would not have a need for this type of printer.
Dye-Sublimation Printers
Used in organizations such as service bureaus — where professional quality documents,
pamphlets, and presentations are more important than consumables costs — dye-sublimation
(or dye-sub) printers are the workhorses of quality CMYK printing. The concepts behind dye-sub
printers are similar to thermal wax printers except for the use of diffusive plastic dye film instead
of colored wax as the ink element. The print head heats the colored film and vaporizes the
image onto specially coated paper.
Dye-sub is quite popular in the design and publishing world, as well as in scientific research, where precision and detail are required. Such detail and print quality come at a price, as dye-sub printers are also known for their high cost per page.
Printers
A computer printer is a computer peripheral device that produces a hard copy (permanent
human-readable text and/or graphics, usually on paper) from data stored in a computer
connected to it.
Printing technology – Types of Printer
Modern print technology:
• Toner-based printers
• Liquid inkjet printers
• Solid inkjet printers
• Dye-sublimation printers
Obsolete and special-purpose printing technologies:
• Impact printers
o Letter-quality printers
o Dot-matrix printers
o Line printers
• Thermal printers
Printing speed
The speed of early printers was measured in characters per second. More modern printers are measured in pages per minute. These measures are used primarily as a marketing tool and are not well standardised. Usually, pages per minute refers to sparse monochrome office documents rather than dense pictures, which usually print much more slowly.
SCANNERS:
Technology today is rising to new heights. To save time and move toward paperless offices, we need electronic versions of invoices, material-ordering forms, contract data and the like for filing and database management. A scanner can help with all of these tasks and more, even automating the process of logging sales data into Excel.
A scanner is an optical device that captures images, objects, and documents into a digital
format. The image is read as thousands of individual dots, or pixels. It can convert a picture into
digital bits of information which are then reassembled by the computer with the help of scanning
software. The file of the image can then be enlarged or reduced, stored in a database, or
transferred into a word processing or spreadsheet program.
Some of the key considerations for choosing the right scanner for your needs are given below.
a) How do you intend to use the scanner?
b) Which type of scanner fits that exact usage?
c) Do you require black & white or colour output?
d) What is the price, and what software is bundled?
Depending upon the usage, if one would like quality photographs or other images, then colour quality will be an important characteristic. With both a
black and white or a colour output, the bit depth, resolution and dynamic range are
essential to selecting the right scanner for one's needs.
Scanner Types:
Scanners create a digital reproduction of an image or document and come in a variety of
shapes and sizes designed to perform different types of tasks. There are three types of office
scanners usually seen in the market and the functions they serve are as follows:
a) Flatbed
The flatbed scanner consists of its own base with a flat piece of glass and cover just as
is found on most copiers. The scanning component of flatbeds runs over the length of
the image in order to gather data. Flatbeds are useful when a user needs to scan more
than single page documents. Pages from a book, for example, can easily be scanned
without having to copy each page individually first.
Scanning objects is also done by flatbeds. By placing a white sheet of paper over a
bouquet of flowers a scanner can reproduce what appears to be a stock photo onscreen.
Flatbeds have a large footprint and hence take up a lot of desk space, so if space is a
concern one may go for an alternative.
b) Sheetfed
Sheetfed scanners suit users who rarely need to scan anything other than sheets of
paper. The scanning component of a sheetfed is stationary while the document being
scanned passes over its 'eyes', similar to a fax machine. It is thin, just a couple of
inches deep, so it can easily fit between a keyboard and monitor.
Sheetfeds usually work best in conjunction with an automatic document feeder for large
projects. Pictures and other documents smaller than a full page can also be scanned
using a sheetfed scanner, though sheetfeds have been known to bend pictures and
reproduce lower-quality images.
c) Slide
There is also a need to reproduce very small images accurately. Such applications
demand very sharp resolution, and slide scanners serve this distinct segment of the
scanner market. Slides are usually inserted into a tray, much like a CD tray on one's
computer, and scanned internally. Most slide scanners can only scan slides, though
some newer models can also handle negative strips.
Scanner Uses:
A scanner can do far more than simply scan a photograph, and many of its uses could go a
long way to helping a small business. Below are indicated some of the applications for the
scanner in a business environment.
1) Graphics
Graphic images are an important part of many businesses, especially in marketing and sales
functions. Scanners, like digital cameras, enable users to convert photographs, slides, and
three-dimensional objects into files that can be pasted into a brochure, inserted into a
presentation or posted on the Internet. Using accompanying software, these images can be
edited, cropped, or manipulated to fit space and size requirements.
2) Data-Entry
Scanners automatically convert data into digital files using OCR (Optical Character
Recognition) software. This saves the time and money one would otherwise pay someone to
enter reams of data into the computer manually. In conjunction with the software, a scanner
reads each page and transfers the text to any number of programs. A form letter can be saved
to a word processing program, sales figures to a spreadsheet, even a brochure to web-editing
software.
3) Digital-Files
Offices keep numerous papers filed in three-ring binders and other kinds of manual filing
systems for their records. This manual paper flow can be avoided by scanning the documents
into digital form. Scanners can help create electronic filing cabinets for everything
from invoices to expense reports. Forms can be reproduced online, and searchable
databases can provide relevant information in seconds.
Digital camera
A digital still camera records images in digital form. Unlike the traditional analog cameras that record
infinitely variable intensities of light, digital cameras record discrete numbers for storage on a
flash memory card or optical disk. As with all digital devices, there is a fixed, maximum
resolution and number of colors that can be represented. Images are transferred to the
computer with a USB cable or via the memory card.
Advantages of Digital Cameras :
There are two distinct advantages of digital cameras.
1. The first is being able to see the final image right away, so you know you have the
picture you wanted. Bad pictures can be instantly erased.
2. The second is convenience. You can take one picture and print it without waiting to
develop an entire roll of film or wasting the whole roll for just a few pictures. In addition,
"digital film" is reusable, except for the write-once optical disc (CD-R) variety.
The film in a digital camera is made up of photo sensor chips and flash memory. The camera
records color images as intensities of red, green and blue, which are stored as variable charges
on a CCD or CMOS image sensor chip. The charges, which are actually analog, are converted
to digital and stored in one of several flash memory formats such as Compact Flash or Memory
Stick. Instead of memory cards, some still cameras use optical disc for storage, and video
cameras use discs or tape.
The size of the chip determines the resolution, but the analog-to-digital converter (ADC), which
converts the charges to digital data, determines the color depth. Digital video cameras also use
these same image sensing methods, but may also output traditional analog signals.
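The quantisation step the ADC performs can be sketched in a few lines: a continuous charge level is mapped onto one of 2^bits discrete codes, and the bit depth fixes how many codes exist. A toy illustration (the voltage scale is an assumption, not a camera specification):

```python
def quantize(voltage, v_max, bits):
    """Map an analog level in [0, v_max] onto one of 2**bits digital codes."""
    levels = 2 ** bits
    return int(voltage / v_max * (levels - 1))

# An 8-bit ADC gives 256 levels; a half-scale input lands near the middle code
print(quantize(0.5, 1.0, 8))  # 127
print(quantize(1.0, 1.0, 8))  # 255
```

A higher bit depth means more codes per colour channel, which is why the ADC, not the sensor alone, determines colour depth.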
Major Features :
Following are the major features of digital cameras.
Resolution (in Mega pixels)
The number of pixels determines the maximum size of a printed photo without sacrificing
quality. For 3x5" and 4x6" prints, 2 mega pixels is good. For 5x7" and 8x10" prints, 5 mega
pixels is preferred. For low-resolution images on the Web, almost any digital camera will suffice.
However, one can easily reduce a high-quality image to the low resolution required online. The
higher the resolution from the start, the better the results.
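The print-size guideline above can be checked with simple arithmetic: the maximum print size is the pixel dimensions divided by the printing resolution. The sketch below assumes photo-quality output at 300 dpi, a common rule of thumb not stated in the text:

```python
def max_print_size(px_wide, px_high, dpi=300):
    """Largest print, in inches, at the given dots per inch."""
    return px_wide / dpi, px_high / dpi

# A 2-megapixel camera (1600 x 1200 pixels) comfortably covers a 4x6" print
w, h = max_print_size(1600, 1200)
print(round(w, 2), "x", round(h, 2), "inches")  # 5.33 x 4.0 inches
```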
Optical Quality
Mega pixel resolution is a quantitative measurement, but the lens itself is qualitative. The optical
quality of the lens greatly contributes to the resulting picture quality as it has in analog cameras
for more than a century.
Optical vs. Digital (Interpolated) Zoom
The optical zoom is the real resolution of the lenses. The digital zoom is an interpolated
resolution computed by software. The higher the optical number, the better the result. A 10x
optical is far superior to a 10x digital. Some digital zoom numbers go into the stratosphere,
especially for video, but optical is what counts.
Storage Media
There are several types of flash memory cards used for "digital film," but no matter which type
the camera uses, the one that comes with the camera is typically undersized.
Data Transfer
Digital cameras come with a USB cable for transfer directly to the computer, and many
computers come with one or more memory card slots. Printers may also come with card slots,
allowing you to print your photos without using the computer at all.
Battery Duration
Digital cameras use either rechargeable or standard AA batteries. It can take an hour or more to
recharge a battery, so an extra one, fully charged, is always a good idea to have along. AA
batteries can be purchased almost anywhere, and rechargeable AA batteries can also be used.
Interchangeable Lenses
Digital single lens reflex (DSLR) cameras are the digital counterparts of their analog
predecessors and may use the same removable lenses that you already own. However, the
chip is often smaller in size than a 35mm frame, which means your 28mm wide angle lens may
function like a 42mm lens. Increasingly larger chips and wider angle lenses are solving the
problem.
WEB CAMS :
A web camera or a web cam is a real time camera whose images can be accessed using the
World Wide Web, instant messaging, or a PC video calling application. Generally, a digital
camera delivers images to a web server, either continuously or at regular intervals.
A webcam is a desktop or video camera whose output is placed on a webpage either by
displaying images at intervals or producing a live video stream. This webpage is then viewable
via the internet.
As webcam capabilities have been added to instant messaging text chat services such as
Yahoo Messenger, AOL Instant Messenger (AIM) and MSN Messenger, one-to-one live video
communication over the internet has now reached millions of mainstream PC users worldwide.
Web cam software allows users to share snapshots or live images, which can be made available
on the internet to everyone or only to authenticated users. The most important considerations
here are bandwidth and picture quality. Webcams are easy to connect through the USB port.
Hardware required for Webcams: The camera must be connected to the computer via a
hardware port. The ports available are the serial port, parallel port, USB port, FireWire and
so on. Serial and parallel port webcams are now old and obsolete; USB is today's required
solution for advanced users.
Network cameras do not require PCs, and wireless cameras are today's reality. PnP (Plug
and Play) camera devices are cheap and easy to install but have lower frame rates than regular
cameras. They draw power from the keyboard or USB connector, so a separate power
supply is not required. The images they provide are of high quality.
Treat your password like your toothbrush. Don't let anybody
else use it, and get a new one every six months.
Clifford Stoll
What is Ports-serial ?
Serial ports are a type of computer interface that complies with the RS-232 standard. They are
9-pin connectors that relay information, incoming or outgoing, one bit at a time; each byte is
broken up into a series of eight bits sent in sequence, hence the term serial port.
Serial ports are one of the oldest types of interface standards. Before internal modems became
commonplace, external modems were connected to computers via serial ports, also known as
communication or “COM” ports. Computer mice and even keyboards also used serial ports.
Some serial ports used 25-pin connectors, but the 9-pin variety was more common. Serial ports
are controlled by a special chip called a UART (Universal Asynchronous Receiver Transmitter).
Serial ports differ from 25-pin parallel ports in that the parallel ports transmit one byte at a time
by using eight parallel wires that each carry one bit. With data traveling in parallel, the transfer
rate was greater. A parallel port could support rates up to 100 kilobytes per second, while serial
ports only supported 115 kilobits per second (kbps). Later, enhanced technology pushed serial
port speeds to 460 kbps.
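The distinction between serial and parallel transfer can be sketched in code: a serial line emits the eight bits of each byte one after another, whereas a parallel port would present all eight at once on separate wires. This toy serialiser omits real-world framing details such as start, stop and parity bits:

```python
def serialize(data):
    """Yield the bits of each byte one at a time, least significant bit first."""
    for byte in data:
        for i in range(8):
            yield (byte >> i) & 1

def deserialize(bits):
    """Reassemble bytes from a stream of bits (LSB first)."""
    bits = list(bits)
    out = []
    for i in range(0, len(bits), 8):
        out.append(sum(b << j for j, b in enumerate(bits[i:i + 8])))
    return bytes(out)

stream = list(serialize(b"Hi"))
print(stream[:8])          # the eight bits of 'H' (0x48), LSB first
print(deserialize(stream)) # b'Hi'
```

A parallel interface would instead move each inner list of eight bits in a single step, one bit per wire, which is why its raw transfer rate was higher.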
[Optional]
[Shared Serial Ports is an advanced utility whose purpose is to share real serial ports
between multiple applications, so that all applications receive the same data from the real
serial port simultaneously. This is achieved by creating virtual serial ports that are exact
copies of the real one. Each application will think it is working with the serial port in
exclusive mode. All virtual serial ports can also send data to the real serial port, and
permissions to read, write or change control-line state can be set for every application separately.]
What is a Parallel port?
Parallel interface: an interface between a computer and a printer where the computer sends
multiple bits of information to the printer simultaneously
Devices such as printers, or external devices such as a Zip Drive or Snappy! Video
Capture, are normally hooked up to a parallel port. A parallel port allows several bits of data
to be moved at the same time along different lines. For example, a parallel interface can
transmit eight bits (a whole byte) at one time, over eight parallel lines. A serial interface
transmits only one bit at a time.
An input/output connection on a computer that sends and receives information in groups of eight
bits (binary digits) at a time, traveling at high speeds along parallel wires to a peripheral device
such as a printer. The other simple connection mechanism is a serial port, which transmits just
one bit at a time -- a data side-road compared with a multi-lane motorway.
In computing, a parallel port is an interface from a computer system where data is transferred in
or out in parallel, that is, on more than one wire. A parallel port carries one bit on each wire thus
multiplying the transfer rate obtainable over a single cable (contrast serial port). There are
usually several extra wires on the port that are used for control signals to indicate when data is
ready to be sent or received
A cable connector on the back of the CPU used to connect devices (usually printers and
scanners) to the computer. The parallel port is the one with the most pins meaning it is wider
than the other connectors. A Parallel Cable is used between Parallel Ports
What is USB? (Universal Serial Bus)
A plug-and-play interface between a computer and add-on devices (such as mobile phones,
audio players, scanners and printers).
An external bus standard that supports data transfer rates of 12 Mbps (12 million bits per
second). A single USB port can be used to connect up to 127 peripheral devices, such as mice,
modems, and keyboards. USB also supports Plug-and-Play installation.
A hardware interface for low-speed peripherals such as the keyboard, mouse, joystick, scanner,
printer, and telephony devices
A plug-and-play interface between a computer and add-on devices (such as keyboards, phones
and PDAs). With USB, a new device can be added to a computer without having to add an
adapter card or even having to turn the computer off. USB supports a data speed of 12
megabits per second and is now being incorporated in some cell phones. This is useful for
synchronizing information with a computer or downloading ring tones.
A plug-and-play interface between a computer and add-on devices such as mobile devices
(PalmOS, PocketPC), printers, keyboards etc. New devices can be added without adapter cards
or without the computer being shut off.
Typically located on the back of the computer near the expansion bay area (and sometimes
accessible through one or more front-mounted ports, too), USB ports provide an easy way to
connect USB-compatible peripherals such as scanners, cameras, joysticks, mice and keyboards
(etc.). To connect more than two USB devices, you will have to add an item with more ports
known as a USB hub. Generally, "powered" USB hubs provide better compatibility with cameras
and other USB devices than do inexpensive unpowered hubs. The 12Mbps USB 1.1 spec is
officially referred to as "full-speed" USB
USB (Universal Serial Bus) is a low-cost serial bus which can provide up to 12 Mbps.
That is about 100 times faster than the RS-232 style serial interfaces used in earlier generations
of computers. First developed in 1996, USB is now widely used in Macs, PCs and even
Linux systems. USB is typically used to connect devices such as printers, scanners, keyboards,
digital cameras, MP3 players and low-speed storage devices. In June 2002, Intel and others
started to demonstrate USB 2.0, which increases the speed of the peripheral-to-PC connection
from 12 Mbps to 480 Mbps.
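The quoted speeds are in megabits per second, so dividing by eight gives bytes; a rough calculation (ignoring protocol overhead) shows what the jump from USB 1.1 to USB 2.0 means in practice:

```python
def transfer_seconds(size_mb, rate_mbps):
    """Time to move size_mb megabytes at rate_mbps megabits per second."""
    return size_mb * 8 / rate_mbps

# Moving a 60 MB file: USB 1.1 full speed vs USB 2.0
print(round(transfer_seconds(60, 12), 1))   # 40.0 seconds
print(round(transfer_seconds(60, 480), 1))  # 1.0 seconds
```

Real-world transfers are slower than these ideal figures, since the bus is shared and carries signalling overhead.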
Modems
Modems differ in design, set-up aids, essential hardware performance, and service and support
policies. Be it for sending electronic mail, for data transfer, or for Internet surfing, a small,
Walkman-size gadget called a "modem" now accompanies the PC. Till recently part of the
corporate desktop only, modems are now becoming part of home PC configurations too, as
Internet usage at home is growing considerably.
A modem is a device that allows computers to transmit data over regular copper telephone
lines. Computers store, send, receive and process data in digital format. This means that the
data in your computer is stored as a series of binary digits, or bits. Phone lines however,
transmit data in a continuous analogue wave. A modem converts the signal from digital format
to analogue format for transmission to a remote modem - this is called modulation.
The growth in the modem market is fallout of the growth in Internet usage, increased
telecommuting, use of E-mail for communication, setting up of WANs (Wide Area Networks)
and implementation of intranets and extranets. Connectivity can be established through analog
lines, leased lines, ISDN lines and satellite.
Basically there are three types of modems which facilitate connectivity: dial-up, leased line and
ISDN (Integrated Services Digital Network). The system is so programmed that billings take
place accordingly. Dial-up modems could be external, internal and PC Card. An external or
desktop modem is a small box, equipped with a set of indicator lights, connected to the
computer using a serial cable.
External modems are more prevalent in India, primarily because internal modems are difficult
to configure and install, while an external modem is easier to troubleshoot or replace in case of
failure; furthermore, the LED indicators on an external modem help the user visually monitor
and troubleshoot the connection.
There are two ways in which modems can be configured: as data/fax modems or as
data/fax/voice modems. Data/fax modems provide only those two facilities, while the voice
capability in a modem lets it act as an answering machine. Irrespective of type, all modems are
designed to comply with the international standards.
 To communicate with another computer over copper phone lines, both the sending and
receiving computers must be connected to a modem.
 Data is sent from your computer to your modem as a digital signal.
 Your modem converts the digital signal into an analogue signal (modulation), then
transmits the data to the receiving (remote) modem.
 The remote modem converts the analogue signal into a digital signal (demodulation),
then transmits the data to the receiving computer for processing.
The data that you sent to the remote computer may then be forwarded to another computer for
processing, as is the case when you connect to VSNL, SIFY and others.
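The modulate/demodulate cycle described above can be illustrated with a toy frequency-shift keying scheme, in which each bit selects one of two tones and the receiver recovers the bit by testing which tone is present. This is a drastic simplification of what a real modem does:

```python
import math

RATE = 8  # samples per bit slot (toy value)

def modulate(bits, f0=1, f1=2):
    """Digital -> analogue: bit 0 becomes a slow tone, bit 1 a faster one."""
    samples = []
    for bit in bits:
        f = f1 if bit else f0
        samples += [math.sin(2 * math.pi * f * t / RATE) for t in range(RATE)]
    return samples

def demodulate(samples, f0=1, f1=2):
    """Analogue -> digital: pick whichever tone correlates better per bit slot."""
    bits = []
    for i in range(0, len(samples), RATE):
        chunk = samples[i:i + RATE]
        e0 = abs(sum(s * math.sin(2 * math.pi * f0 * t / RATE)
                     for t, s in enumerate(chunk)))
        e1 = abs(sum(s * math.sin(2 * math.pi * f1 * t / RATE)
                     for t, s in enumerate(chunk)))
        bits.append(1 if e1 > e0 else 0)
    return bits

data = [1, 0, 1, 1, 0]
print(demodulate(modulate(data)) == data)  # True
```

Real modems add error correction, compression and far denser encodings, but the round trip shown here is the essence of modulation and demodulation.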
Networking
Network Design
A network design can be characterised along five dimensions:
 Geography: LAN, MAN, WAN, CAN, PAN
 Topology: Bus, Ring, Star, Mesh
 Medium: Connected, Wireless
 Strategies: Client/Server, Peer-Peer
 Protocol: Ethernet, FDDI, LocalTalk, Token Ring, ATM
Advantages of Networking
Connectivity and Communication:
Networks connect computers and the users of those computers. Once connected, it is
possible for network users to communicate with each other using technologies such as
electronic mail. This makes the transmission of business (or non-business) information
easier, more efficient and less expensive than it would be without the network.
Data Sharing:
One of the most important uses of networking is to allow the sharing of data. Before
networking was common, an accounting employee who wanted to prepare a report for
the manager would have to produce it on her own PC, put it on a floppy disk, and then walk it
over to the manager, who would transfer the data to another PC's hard disk. (This sort of
“shoe-based network” was sometimes sarcastically called a “sneakernet”.)
Hardware Sharing:
Networks facilitate the sharing of hardware devices. For example, instead of giving each
of 10 employees in a department an expensive color printer (or resorting to the
“sneakernet” again), one printer can be placed on the network for everyone to share.
Internet Access:
The Internet is itself an enormous network, so whenever you access the Internet you
are using a network. The significance of the Internet to modern society is hard to
overstate, especially for those of us in technical fields.
Internet Access Sharing:
Small computer networks allow multiple users to share a single Internet connection.
Special hardware devices allow the bandwidth of the connection to be easily allocated to
various individuals as they need it, and permit an organization to purchase one high-speed
connection instead of many slower ones.
Data Security and Management:
In a business environment, a network allows the administrators to much better manage
the company's critical data. Instead of having this data spread over dozens or even
hundreds of small computers in a haphazard fashion as their users create it, data can be
centralized on shared servers. This makes it easy for everyone to find the data, makes it
possible for the administrators to ensure that the data is regularly backed up, and also
allows for the implementation of security measures to control who can read or change
various pieces of critical information.
Performance Enhancement and Balancing:
Under some circumstances, a network can be used to enhance the overall performance
of some applications by distributing the computation tasks to various computers on the
network.
Entertainment:
Networks facilitate many types of games and entertainment. In addition, many multi-player
games exist that operate over a local area network. Many home networks are set
up for this reason, and gaming across wide area networks (including the Internet) has
also become quite popular. Of course, if you are running a business and have easily
amused employees, you might insist that this is really a disadvantage of networking and
not an advantage!
Summary - Advantages of Networking
 Increased Employee Productivity
 Reduced Communication Costs
 Reduced Office Equipment Costs
 Access to Resources Anytime/Anywhere
Disadvantages of Networking
Network Hardware, Software and Setup Costs:
Computers don't just magically network themselves, of course. Setting up a network
requires an investment in hardware and software, as well as funds for planning,
designing and implementing the network.
Hardware and Software Management and Administration Costs:
In all but the smallest of implementations, ongoing maintenance and management of
the network requires the care and attention of an IT professional. In a smaller
organization that already has a system administrator, a network may fall within this
person's job responsibilities, but it will take time away from other tasks. In more
substantial organizations, a network administrator may need to be hired, and in large
companies an entire department may be necessary.
Undesirable Sharing:
With the good comes the bad; while networking allows the easy sharing of useful
information, it also allows the sharing of undesirable data. One significant “sharing
problem” in this regard has to do with viruses, which are easily spread over networks
and the Internet. Mitigating these effects costs more time, money and administrative
effort.
Illegal or Undesirable Behavior:
Typical problems include abuse of company resources, distractions that reduce
productivity, downloading of illegal or illicit materials, and even software piracy. In
larger organizations, these issues must be managed through explicit policies and
monitoring, which again, further increases management costs.
Data Security Concerns:
If a network is implemented properly, it is possible to greatly improve the security of
important data. In contrast, a poorly-secured network puts critical data at risk,
exposing it to the potential problems associated with hackers, unauthorized access
and even sabotage.
Summary - Disadvantages of Networking
 If the server develops a fault then users may not be able to run the application
programs.
 A fault in the network can cause users to lose data.
 If the network stops operating then it may not be possible to access various
computers.
 It is difficult to make the system secure from hackers, novices or industrial
espionage.
 Decisions on resource planning tend to become centralized.
 Networks that have grown with little thought can be inefficient in the long term.
 As traffic increases on a network the performance degrades unless it is designed
properly.
 The larger the network the more difficult it is to manage.
Network Topologies
Choosing the best-fit topology for a network is crucial, as rearranging computers from one
topology to another is difficult and expensive. A network configuration is also called a network
topology: the shape or physical connectivity of the network.
The network designer has three major goals when establishing the topology of a
network:
Provide the maximum possible reliability: provide alternative routes if a node fails and be
able to pinpoint the fault readily, deliver user data correctly (without errors) and recover from
errors or lost data in the network.
Route network traffic through the least cost path within the network: minimizing the actual
length of the channel between the components and providing the least expensive channel
option for a particular application.
Give the end users the best possible response time and throughput.
Network Topology – Definition
 The topology of the network can be viewed in two ways:
o The topology as seen from the layout of the cable, or the route followed by the
electrical signals. This is the physical topology.
o The connections between nodes as seen by data traveling from one node to
another - reflects the network's function, use, or implementation without regard to
the physical interconnection of network elements. This is the logical topology,
and may be different from the physical topology.
 Common patterns for connecting computers include the star and bus topologies.
Bus Topology
The bus topology is the simplest network configuration: it uses a single transmission medium,
called a bus, to connect computers together. Coaxial cable is often used to connect computers
in a bus topology, and it often serves as the backbone for a network. The cable, in most cases,
is not one length but many short strands that use T-connectors to join the ends.
T-connectors allow the cable to branch off in a third direction so that a new computer can
be connected to the network. Special hardware has to be used to terminate both ends of
the coaxial cable so that a signal travelling to the end of the bus is absorbed rather than
reflected back as a repeat data transmission.
Since a bus topology network uses a minimum amount of wire and minimum special
hardware, it is inexpensive and relatively easy to install. In some instances, such as in
classrooms or labs, a bus will connect small workgroups.
Since a hub is not required in a bus topology, the set-up cost is relatively low. One can
simply connect a cable and T-connector from one computer to the next and eventually
terminate the cable at both ends. The number of computers attached to the bus is limited, as
the signal loses strength when it travels along the cable. If more computers have to be
added to the network, a repeater must be used to strengthen the signal at fixed locations
along the bus.
The problem with bus topology is that if the cable breaks at any point, the computers on
each side lose their termination. The loss of termination causes signals to reflect and
corrupt data on the bus. Moreover, a bad network card may produce noisy signals on the
bus, which can cause the entire network to function improperly. When intact, however, bus
networks are simple, easy to use, and reliable.
Repeaters can be used to boost the signal and extend the bus. Heavy network traffic can
slow a bus considerably, and each connection weakens the signal, causing distortion when
there are too many connections.
Ring Topology
In a ring topology, the network has no end connection: it forms a continuous ring through
which data travels from one node to another. Ring topology allows more computers to be
connected to the network than the other two topologies do.
Each node in the network is able to purify and amplify the data signal before sending it to
the next node. Therefore, ring topology introduces less signal loss as data travels along
the path.
A ring-topology network is often used to cover a larger geographic area where
implementation of a star topology is difficult. The problem with ring topology is that a break
anywhere in the ring will cause network communications to stop.
A backup signal path may be implemented in this case to prevent the network from going
down. In a ring network, every device has exactly two neighbours for communication
purposes. All messages travel through a ring in the same direction (either "clockwise" or
"counter clockwise").
Star Topology
A star network is a LAN in which all nodes are directly connected to a common central
computer. Every workstation is indirectly connected to every other through the central
computer. In some Star networks, the central computer can also operate as a workstation.
The star network topology works well when the workstations are at scattered points. It is
easy to add or remove workstations.
The ring network topology may serve the intended purpose at lower cost than the star
network topology. If the workstations lie nearly along a straight line, the bus network
topology may be best.
In a star network, a cable failure will isolate the workstation that is linked to the central
computer, while all other workstations will continue to function normally, except that the
other workstations will not be able to communicate with the isolated workstation.
If any workstation goes down, the other workstations are not affected; but if the
central computer goes down, the entire network suffers degraded performance or
complete failure. The star topology can use a number of different transmission
mechanisms, depending on the nature of the central hub.
Star networks are easy to modify and one can add new nodes without disturbing the rest of
the network. Often there are facilities to use several different cable types with hubs.
Comparison of the three topologies
Feature                    Bus topology    Star topology    Ring topology
Expense                    Low             Medium           High
Reliability                Good            Excellent        Good
Geographical coverage      Poor            Good             Excellent
Ease of troubleshooting    Poor            Excellent        Good
Mesh Topology
The mesh topology has been used more frequently in recent years. Its primary attraction is
its relative immunity to bottlenecks and channel/node failures. Due to the multiplicity of paths
between nodes, traffic can easily be routed around failed or busy nodes.
A mesh topology is reliable and offers redundancy. If one node can no longer operate, all
the rest can still communicate with each other, directly or through one or more intermediate
nodes. It works well when the nodes are located at scattered points that do not lie on a
common point.
Although this approach is very expensive in comparison with other topologies such as star
and ring, some users still prefer the reliability of the mesh network to that of the others.
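The cost gap mentioned above comes down to link count: bus, ring and star need roughly one link per node, while a full mesh needs a link between every pair. A quick comparison (a full mesh is assumed here; partial meshes vary):

```python
def links_needed(nodes):
    """Point-to-point links required for each topology with the given node count."""
    return {
        "bus": nodes - 1,                  # nodes chained along one cable
        "ring": nodes,                     # each node joined to the next, closing the loop
        "star": nodes,                     # one link from each node to the central hub
        "mesh": nodes * (nodes - 1) // 2,  # every pair of nodes directly connected
    }

print(links_needed(10))  # mesh needs 45 links versus 10 or fewer for the others
```

The quadratic growth of the mesh figure is exactly why full meshes are rarely built beyond a handful of critical nodes.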
What are the considerations when selecting a topology?
Money:
A linear bus network may be the least expensive way to install a network, and you do not
have to purchase concentrators.
Length of cable needed:
The linear bus network uses shorter lengths of cable.
Future growth:
With a star topology, expanding a network is easily done by adding another
concentrator.
Cable type:
The most common cable is unshielded twisted pair, which is most often used with star
topologies.
Network Size/Geography
 Local Area Networks (LAN)
 Metropolitan Area Networks (MAN)
 Wide Area Networks (WAN)
 Campus Area Networks (CAN)
 Personal Area Network (PAN)
Local Area Network
A Local Area Network (LAN) is a group of computers and associated devices that share a
common communication line and typically share the resources of a single processor or
server within a small Geographic Area.
Specifically it has the properties:
 Limited distance (typically under a few kilometers).
 High-speed network.
 Usually the server has application and data storage that are shared in common by
multiple computers.
 Supports many computers (typically two to thousands).
 A very low error rate.
 Users can request printing and other services as needed, through applications run on
the LAN server.
 Owned by a single organization.
 A user can share files with others on the LAN server; read and write access is
maintained by a LAN administrator.
Metropolitan Area Network
A MAN is a network that interconnects users with computer resources in a geographic area
or region larger than that covered by even a large LAN, but smaller than the area covered by
a WAN. The term is applied to the interconnection of networks in a city into a single larger
network. It is also used to mean the interconnection of several LANs by bridging them with
backbone lines. A MAN typically covers an area between 5 and 50 km in diameter.
Wide Area Network (WAN)
 A Wide Area Network (WAN) is a communications network that covers a wide
geographic area, such as a state or country. A WAN can span any distance and is
usually provided by a public carrier.
 It is two or more LANs connected together, covering a wide geographical area. A
WAN may be privately owned or rented. Contrast this with a LAN (local area
network), which is contained within a building or complex, and a MAN (metropolitan
area network), which generally covers a city or suburb.
 For example, an organization will have a LAN at each of its offices, interconnected
through a WAN. The LANs are connected using devices such as bridges, routers or
gateways.
Campus Area Network
 A computer network made up of an interconnection of local area networks (LANs) within
a limited geographical area. It can be considered one form of a metropolitan area
network, specific to an academic setting.
 In the case of a university campus-based campus area network, the network is likely to
link a variety of campus buildings including academic departments, the university library
and student residence halls.
 A campus area network is larger than a local area network but smaller than a wide area
network (WAN).
Personal Area Network
A personal area network (PAN) is a computer network used for communication among
computer devices (including telephones and personal digital assistants) close to one person.
The devices may or may not belong to the person in question. The reach of a PAN is
typically a few meters.
PANs can be used for communication among the personal devices themselves
(intrapersonal communication), or for connecting to a higher-level network and the Internet
(an uplink). Personal area networks may be wired with computer buses such as USB and
FireWire.
A wireless personal area network (WPAN) can also be made possible with network
technologies such as IrDA, Bluetooth and UWB.
Comparison between the networks
Medium of Data communication
Cable type
Cable is what physically connects network devices together, serving as the conduit for
information traveling from one computing device to another.
The type of cable you choose for your network will be dictated in part by the network's
topology, size and media access method. Small networks may employ only a single
cable type, whereas large networks tend to use a combination.
Coaxial Cable
Coaxial cable includes a copper wire surrounded by insulation, a secondary conductor
that acts as a ground, and a plastic outer covering. Because of coaxial cable's two
layers of shielding, it is relatively immune to electronic noise from sources such as
motors, and can thus transmit data packets long distances.
Coaxial cable is a good choice for running the lengths of buildings (in a bus topology) as
a network backbone. LANs primarily use two sizes of coaxial cable, referred to as thick
and thin.
Thick coaxial cable can extend longer distances than thin and was a popular backbone
(bus) cable in the 1970s and 1980s. However, thick cable is more expensive than thin
and more difficult to install.
Today, thin (which looks similar to a cable television connection) is used more frequently
than thick.
Cable Type – Twisted Pair
Twisted-pair cable consists of two insulated wires that are twisted around each other and
covered with a plastic casing. It is available in two varieties, unshielded and shielded.
UTP cabling wire is grouped into categories, numbered 1-5. The higher the category
rating, the more tightly the wires are twisted, allowing faster data transmission without
crosstalk.
Since many buildings are pre-wired with extra UTP cables, and because UTP is
inexpensive and easy to install, it has become a very popular network media over the
last few years.
Shielded twisted-pair cable (STP) adds a layer of shielding to UTP. Although STP is less
affected by noise interference than UTP and can transmit data further, it is more
expensive and more difficult to install.
Cable Type – Fiber Optic
Fiber-optic cable is constructed of flexible glass and plastic. It transmits information via
photons, or light. A fiber strand is significantly smaller than the other media types; it can
be half the diameter of a human hair. Fiber-optic cable has several advantages: being
more resistant to electronic interference than the other media types, it is ideal for
environments with a considerable amount of noise (electrical interference).
Furthermore, since fiber-optic cable can transmit signals further than coaxial and
twisted-pair, more and more educational institutions are installing it as a backbone in
large facilities and between buildings.
Fiber-optic cables are also much lighter and less expensive. The cost of installing and
maintaining fiber-optic cable remains too high, however, for it to be a viable network
media connection for classroom computers.
Fiber-Optic - Advantages
SPEED:
Fiber optic networks operate at high speeds - up into the gigabits
BANDWIDTH:
Large carrying capacity
DISTANCE:
Signals can be transmitted further without needing to be "refreshed" or strengthened.
RESISTANCE:
Greater resistance to electromagnetic noise such as radios, motors or other nearby
cables.
MAINTENANCE:
Fiber optic cables cost much less to maintain.
Microwave
In case of the microwave communication channel, the medium is not a solid substance
but rather the air itself. It uses a high radio frequency wave that travels in straight lines
through the air. Because the waves cannot bend with the curvature of the earth, they
can be transmitted only over short distances.
A Wireless Microwave link can connect over a distance of up to 25km, although line of
sight is essential.
Satellite
It uses satellites orbiting above the earth as microwave relay stations. Such satellites
orbit at a precise altitude and speed above the earth, which makes them appear
stationary, so they can amplify and relay microwave signals from one transmitter on the
ground to another.
Thus they can be used to send large volumes of data. The major drawback is that bad
weather can interrupt the flow of data.
Networking Components
Hubs
A hub is a connection device for networks. Hubs are the simplest network devices, and
their simplicity is reflected in their low cost. Small hubs with four or five ports (often
referred to as workgroup hubs), together with the requisite cables, provide everything
needed to create a small network.
A hub allows multiple segments or computers to connect and share packets of
information. It has several ports to which clients are connected directly, and one or
more ports that can be used to connect the hub to the backbone or to other active
network components. A hub simply receives incoming packets and broadcasts them out
to all devices on the network, including the one that originally sent the packet. Hubs do
not read any of the data passing through them and are not aware of its source or
destination.
Hubs with more ports are available for networks that require greater capacity. Due to
the inefficiencies of the hub system and the constantly increasing demand for more
bandwidth, hubs are slowly but surely being replaced with switches.
Hubs use shared bandwidth: they must share their speed across the total number of
ports on the device, which makes hubs inferior to switches (shared bandwidth
versus dedicated bandwidth). A 10Mbps 5-port hub shares its 10Mbps speed across the
5 ports, so if 5 computers are connected to the 5 ports, each port can only transfer
data at a rate of 2Mbps, because 10 divided by 5 equals 2.
A very important fact to note about hubs is that they only allow users to share Ethernet.
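The shared-bandwidth arithmetic above can be sketched in a few lines (a simplified model; the function name is illustrative only):

```python
# Simplified model: a hub divides its total speed across its active ports,
# while a switch gives each port the full speed (dedicated bandwidth).
def per_port_bandwidth_mbps(total_mbps, active_ports, is_switch=False):
    if is_switch:
        return total_mbps                 # dedicated bandwidth per port
    return total_mbps / active_ports      # shared across all active ports

# The 10Mbps 5-port hub from the text: each port gets 10 / 5 = 2 Mbps.
print(per_port_bandwidth_mbps(10, 5))                  # 2.0
print(per_port_bandwidth_mbps(100, 5, is_switch=True)) # 100
```

The same helper shows the contrast with a switch, where each port keeps the full rated speed.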
Switches
Switches occupy the same place in the network as hubs. Switches are usually more
expensive than hubs, but the performance is better. Switches examine each packet and
process it accordingly rather than simply repeating the signal to all ports. Switches split
large networks into small segments, decreasing the number of users sharing the same
network resources and bandwidth. This helps prevent data collisions and reduces
network congestion, increasing network performance.
Switches allow dedicated bandwidth to be designated to each device on the network.
The bandwidth is not shared among the users; it is switched between them. So a
100Mbps 5-port switch with 5 computers attached would transfer data at 100Mbps over
every port. This is an obvious advantage over a hub. Hubs operate using a broadcast
model and switches operate using a virtual circuit model. Rather than forwarding data to
all the connected ports, a switch forwards data only to the port on which the destination
system is connected.
It looks at the Media Access Control (MAC) addresses of the devices connected to it to
determine the correct port. A MAC address is a unique number that is programmed into
every NIC. By forwarding data only to the system to which the data is addressed, the
switch decreases the amount of traffic on each network link dramatically. In effect, the
switch literally channels (or switches, if you prefer) data between the ports.
Additionally, switches prevent bad or misaligned packets from spreading by not
forwarding them. This helps prevent data collisions and reduces network congestion,
increasing network performance. Filtering of packets, and the regeneration of forwarded
packets enables switching technology to split a network into separate domains.
Regeneration of packets allows for greater distances and more nodes to be used in the
total network design, and dramatically lowers the overall collision rates. Switches are
self-learning: they determine the Ethernet addresses in use on each segment, building
a table as packets pass through the switch. This "plug and play" element makes
switches an attractive alternative to hubs.
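The self-learning behaviour can be sketched as follows (a minimal model; the class name, port numbers and MAC strings are illustrative only):

```python
# Minimal sketch of a self-learning switch: it records which port each
# source MAC address was seen on, forwards frames only to the port of a
# known destination MAC, and floods to all other ports when the
# destination is unknown.
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                        # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward to one port only
        # Unknown destination: flood to every port except the incoming one.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=5)
print(sw.handle_frame(0, "AA", "BB"))   # BB unknown: flood -> [1, 2, 3, 4]
print(sw.handle_frame(1, "BB", "AA"))   # AA was learned on port 0 -> [0]
```

Once both stations have sent a frame, traffic between them no longer reaches the other ports, which is exactly how a switch reduces congestion compared with a hub.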
Bridges
Bridges are networking devices that divide up networks. In the days before routers and
switches became popular, bridges were used to divide up networks and thus reduce the
amount of traffic on each network. Network switches have largely replaced them.
Bridges pass data packets from one LAN, or segment of a LAN, to another,
retransmitting the data packets flowing across a network. Bridges may be used to
segment networks using different protocols, such as Ethernet to Token Ring, or, they
can be used to segment networks and increase their efficiency by connecting users in
groups to resources most appropriate for their use.
Bridges are programmed to recognize the addresses of workstations on the network,
and whether or not a specific packet of data needs to pass over a network divider in
order to reach its destination. Should a packet of data be required to pass over a bridge,
the bridge accepts that packet and then retransmits it to another segment of the network
where its destination is located.
A bridge is a device that connects two LAN segments, which may be of similar or
dissimilar types, such as Ethernet and Token Ring. It is inserted into a network to
segment it and keep traffic contained within the segments to improve performance.
Bridges learn from experience and build and maintain address tables of the nodes on
the network. By monitoring which station acknowledges receipt of a frame, they
learn which nodes belong to which segment.
Router
Routers are an increasingly common sight in any network environment, from a small
home office that uses one to connect to an Internet service provider (ISP) to a corporate
IT environment, where racks of routers manage data communication with disparate
remote sites.
Routers make internetworking possible. A router is a physical device that joins multiple
networks together. Routers are network devices that literally route data around the
network. By examining data as it arrives, the router is able to determine the destination
address for the data; then, by using tables of defined routes, the router determines the
best way for the data to continue its journey.
Unlike bridges and switches, which use the hardware-configured MAC address to
determine the destination of the data, routers use the software-configured network
address to make decisions. This approach makes routers more functional than bridges
or switches, and it also makes them more complex because they have to work harder to
determine the information.
The basic requirement for a router is that it must have at least two network interfaces. If
these are LAN interfaces, the router can manage and route information
between two LAN segments. More commonly, a router is used to provide connectivity
across wide area network (WAN) links.
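The table-driven, best-route decision described above can be sketched with Python's standard `ipaddress` module (the prefixes and next-hop names here are made up for illustration):

```python
import ipaddress

# Toy routing table: the router picks the most specific route, i.e. the
# matching network with the longest prefix (longest-prefix match).
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "wan0",
    ipaddress.ip_network("10.1.0.0/16"): "lan1",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def route(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(route("10.1.2.3"))   # lan1 (the /16 is more specific than the /8)
print(route("8.8.8.8"))    # default-gateway (only the /0 matches)
```

Unlike the MAC table of a bridge or switch, these entries are software-configured network addresses, which is what makes the router's decisions more functional and more complex.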
Gateway
"Gateway" is a generic term for an internetworking system (a system that joins two
networks together). The term gateway is applied to any device, system, or software
application that can perform the function of translating data from one format to another.
The key feature of a gateway is that it converts the format of the data, not the data itself.
It performs a host of functions but is essentially a “gate” for transferring data. You can
use gateway functionality in many ways.
A router that can route data from an IPX network to an IP network is, technically, a
gateway. Software gateways can be found everywhere. Many companies use an email
system such as Microsoft Exchange or Novell GroupWise. These systems transmit mail
internally in a certain format. When email needs to be sent across the Internet to users
using a different email system, the email must be converted to another format, usually
to Simple Mail Transfer Protocol (SMTP). This conversion process is performed by a
software gateway.
Wireless Access Point (WAPs)
Wireless network devices gain access to the network via WAPs. WAPs are typically
deployed as part of a larger network infrastructure, but in some environments, such as
small businesses or home offices, they can operate completely independently of a
normal network.
Wired Access Points
When a WAP connects to a wired network, it is often referred to as a wired access point
because it joins the wireless portion of the network with the wired portion. WAPs are
hub-like devices; the only giveaway to their function is the antennae that protrude from
the box. Because WAPs process signals and are often connected into a wired network,
they require power, which is supplied through an external AC power adapter or a built-in
power supply.
VSATs
VSAT is an acronym for Very Small Aperture Terminal. It describes a small satellite
terminal that can be used for one-way and/or interactive communications via satellite.
Literally the term refers to any fixed satellite terminal that is used to provide interactive
or receive-only communications.
In brief, a VSAT is a one- or two-way terminal used in a star, mesh or point-to-point
network. Antenna size is restricted to being less than or equal to 1.8 m at Ka band,
3.8 m at Ku band and 7.8 m at C band.
A VSAT network consists of a large, high-performance hub earth station and a number
of smaller, lower-performance terminals that can be receive-only, transmit-only or
transmit/receive.
Note that in particular networks, for example meshed VSAT networks, all terminals are
of the same size and performance specifications. VSATs are small, cheap, and easy to
install and are used for all kinds of telecommunications applications such as: corporate
networks (for example connecting fuel station pay systems), rural telecoms, distance
learning, telemedicine, transportable satellite news gathering uplinks etc.
A VSAT consists of a transceiver placed outdoors in direct line of sight to the satellite
and a device placed indoors to interface the transceiver with the end user's
communications device, such as a PC. The computer attached to the incoming or
outgoing signal transceiver (the dish antenna) is called the hub station, to which all the
other computers of the rest of the LAN/WAN are connected. A satellite terminal is the
complete combination of equipment parts that compose the end-user communication
system: in general it consists of an outdoor part (the antenna) and an indoor part (the
user interface, display, console). In some cases the antenna and user interface are in
one single unit.
Each end user is interconnected with the hub station via the satellite, forming a star
topology. VSAT offers a number of advantages over terrestrial alternatives. For private
applications, companies can have total control of their own communication system
without dependence on other companies. VSATs are capable of supporting Internet,
data, LAN and voice/fax communications, and can provide powerful, dependable private
and public network communications solutions.
VSAT networks come in various shapes and sizes, ranging from point-to-point and
point-to-multipoint to on-demand. When it comes down to cost, making general
comparisons between VSAT services and their terrestrial equivalents is almost
impossible.
Charges for terrestrial services are nearly always distance-dependent, while VSAT
connections cost the same whether sites are 1 or 1,000 miles apart. And with most
VSAT services, the cost per connection comes down considerably when a customer
adds users. Generally, these systems operate in the Ku-band and C-band frequencies.
Virtual Private Network
A Virtual Private Network (VPN) allows you to connect branch offices, telecommuting
workers, field representatives and other users into one seamless network.
As opposed to traditional WAN solutions utilizing private leased lines or frame-relay
networks, VPNs allow you to use the public Internet to carry your information securely
and economically. VPNs can do this and still remain private by incorporating the latest
advances in encryption technology:
 Encryption scrambles data crossing the link;
 Authentication of incoming information to assure that it has not been tampered
with or corrupted, and comes from a legitimate source;
 Access control verifying the identity of the person or network address requesting
network entry.
The result is a secure 'tunnel' of information flow which utilizes the Internet but is not
accessible to anyone outside your network. When an authorized user logs off, the
tunnel between them and the network disappears.
Business today is constantly evolving and adapting. Offices are opened, moved, and
closed. Staffing rises and falls, and flows to where resources are needed on projects.
Any networking strategy you adopt must be able to grow and flex with your business.
Unlike traditional WAN solutions, VPNs are very flexible:
 Anyone in the field who needs access to your network can do so by making a
local phone call to any Internet Service Provider (ISP) - a giant cost savings over
maintaining a traditional remote dial-in network.
 The Internet is international - in any country where you can access the Web, you
can access your network.
 Expansion is painless, requiring minimum administration and hardware/software.
 Incremental cost to add users or branch offices is very low.
A virtual private network (VPN) is the extension of a private network that encompasses
links across shared or public networks like the Internet. A VPN enables you to send
data between two computers across a shared or public internetwork in a manner that
emulates the properties of a point-to-point private link. The act of configuring and
creating a virtual private network is known as virtual private networking.
To emulate a point-to-point link, data is encapsulated, or wrapped, with a header that
provides routing information allowing it to traverse the shared or public transit
internetwork to reach its endpoint. To emulate a private link, the data being sent is
encrypted for confidentiality. Packets that are intercepted on the shared or public
network are indecipherable without the encryption keys. The portion of the connection in
which the private data is encapsulated is known as the tunnel. The portion of the
connection in which the private data is encrypted is known as the virtual private network
(VPN) connection.
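The encapsulate-and-encrypt idea above can be sketched as a toy tunnel. This is NOT real VPN cryptography: the XOR "cipher", the pre-shared key, and the packet field names are all illustrative stand-ins; only the HMAC integrity check uses a real primitive from Python's standard library.

```python
import hashlib
import hmac

KEY = b"shared-secret"   # hypothetical pre-shared key, for illustration only

def xor_cipher(data: bytes) -> bytes:
    """Trivial XOR stand-in for encryption (XOR is its own inverse)."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def encapsulate(payload: bytes, outer_dst: str) -> dict:
    # Encrypt the private payload, then wrap it with a routing header and
    # an HMAC tag so the receiver can detect tampering.
    enc = xor_cipher(payload)
    tag = hmac.new(KEY, enc, hashlib.sha256).hexdigest()
    return {"outer_dst": outer_dst, "ciphertext": enc, "hmac": tag}

def decapsulate(packet: dict) -> bytes:
    expected = hmac.new(KEY, packet["ciphertext"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, packet["hmac"]):
        raise ValueError("tampered packet")
    return xor_cipher(packet["ciphertext"])

pkt = encapsulate(b"confidential report", "vpn.example.com")
print(decapsulate(pkt))   # b'confidential report'
```

An eavesdropper on the public network sees only the outer header and ciphertext; without the key, the encapsulated payload is indecipherable, which is the sense in which the tunnel emulates a private link.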
VPN connections allow users working at home or on the road to connect in a secure
fashion to a remote corporate server using the routing infrastructure provided by a
public internetwork (such as the Internet). From the user’s perspective, the VPN
connection is a point-to-point connection between the user’s computer and a corporate
server. The nature of the intermediate internetwork is irrelevant to the user because it
appears as if the data is being sent over a dedicated private link.
VPN technology also allows a corporation to connect to branch offices or to other
companies over a public internetwork (such as the Internet), while maintaining secure
communications. The VPN connection across the Internet logically operates as a wide
area network (WAN) link between the sites.
In both of these cases, the secure connection across the internetwork appears to the
user as a private network communication—despite the fact that this communication
occurs over a public internetwork—hence the name virtual private network.
VPN technology is designed to address issues surrounding the current business trend
toward increased telecommuting and widely distributed global operations, where
workers must be able to connect to central resources and must be able to communicate
with each other.
To provide employees with the ability to connect to corporate computing resources,
regardless of their location, a corporation must deploy a scalable remote access
solution. Typically, corporations choose either an MIS department solution, where an
internal information systems department is charged with buying, installing, and
maintaining corporate modem pools and a private network infrastructure; or they choose
a value-added network (VAN) solution, where they pay an outsourced company to buy,
install, and maintain modem pools and a telecommunication infrastructure.
Neither of these solutions provides the necessary scalability, in terms of cost, flexible
administration, and demand for connections. Therefore, it makes sense to replace the
modem pools and private network infrastructure with a less expensive solution based on
Internet technology so that the business can focus on its core competencies. With an
Internet solution, a few Internet connections through Internet service providers (ISPs)
and VPN server computers can serve the remote networking needs of hundreds or
thousands of remote clients and branch offices.
Common Terms
Booting
Booting is the bootstrapping process that starts the operating system when the user
turns on a computer. The boot sequence is the set of operations the computer performs
when it is switched on to load an operating system: disk blocks are read from the start
of the system disk and the code within the bootstrap is executed. This code reads
further information off the disk to bring the whole operating system online. The
bootstrap code contains device drivers that support the locally attached peripheral
devices. If the computer is connected to a network, the operating system will transfer
control to the network operating system for the "client" to log onto a server.
Graphic User Interface
Abbreviated GUI (pronounced GOO-ee). A program interface that takes advantage of
the computer's graphics capabilities to make the program easier to use. Well-designed
graphical user interfaces can free the user from learning complex command languages.
On the other hand, many users find that they work more effectively with a command-driven interface, especially if they already know the command language.
Pointer:
A symbol that appears on the display screen and that you move to select objects and
commands. Usually, the pointer appears as a small angled arrow. Text-processing
applications, however, use an I-beam pointer that is shaped like a capital I.
pointing device:
A device, such as a mouse or trackball, that enables you to select objects on the display
screen.
icons:
Small pictures that represent commands, files, or windows. By moving the pointer to the
icon and pressing a mouse button, you can execute a command or convert the icon into
a window. You can also move the icons around the display screen as if they were real
objects on your desk.
desktop:
The area on the display screen where icons are grouped is often referred to as the
desktop because the icons are intended to represent real objects on a real desktop.
windows:
You can divide the screen into different areas. In each window, you can run a different
program or display a different file. You can move windows around the display screen,
and change their shape and size at will.
menus:
Most graphical user interfaces let you execute commands by selecting a choice from a
menu.
In addition to their visual components, graphical user interfaces also make it easier to
move data from one application to another. A true GUI includes standard formats for
representing text and graphics. Because the formats are well-defined, different
programs that run under a common GUI can share data. This makes it possible, for
example, to copy a graph created by a spreadsheet program into a document created
by a word processor.
Character User Interface
Short for Character User Interface or Command-line User Interface, CUI is another
name for a command line. Early user interfaces were CUIs; that is, they could only
display the characters defined in the ASCII set. Examples of this type of interface are
the command-line interfaces provided with DOS 3.3 and early implementations of UNIX
and VMS.
This was limiting, but it was the only choice, primarily because of two hardware constraints:
 Early CPUs did not have the processing power to manage a GUI.
 Also, the video controllers and monitors were unable to display the high
resolution necessary to implement a GUI.
Protocol
In information technology, a protocol is the special set of rules that end points in a
telecommunication connection use when they communicate. Protocols exist at several
levels in a telecommunication connection. For example, there are protocols for the data
interchange at the hardware device level and protocols for data interchange at the
application program level.
In the standard model known as Open Systems Interconnection (OSI), there are one or
more protocols at each layer in the telecommunication exchange that both ends of the
exchange must recognize and observe. Protocols are often described in an industry or
international standard.
[Alternative Definition]
An agreed-upon format for transmitting data between two devices. The protocol
determines the following:
 the type of error checking to be used
 data compression method, if any
 how the sending device will indicate that it has finished sending a message
 how the receiving device will indicate that it has received a message
There are a variety of standard protocols from which programmers can choose. Each
has particular advantages and disadvantages; for example, some are simpler than
others, some are more reliable, and some are faster. From a user's point of view, the
only interesting aspect about protocols is that your computer or device must support the
right ones if you want to communicate with other computers. The protocol can be
implemented either in hardware or in software.
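The bullet points above (error checking, end-of-message signalling) can be illustrated with a made-up wire format; the framing rule and the additive checksum are illustrative choices, not any real standard protocol:

```python
# Toy protocol: a message is terminated by a newline (how the sender
# indicates it has finished) and followed by a one-byte checksum
# (the agreed error-checking rule). Messages must not contain newlines.
def checksum(data: bytes) -> int:
    return sum(data) % 256            # simple additive checksum

def frame(message: bytes) -> bytes:
    return message + b"\n" + bytes([checksum(message)])

def parse(wire: bytes) -> bytes:
    body, _, rest = wire.partition(b"\n")
    if checksum(body) != rest[0]:
        raise ValueError("corrupted message")
    return body

print(parse(frame(b"HELLO")))   # b'HELLO'
```

Both ends must agree on these rules in advance; a receiver expecting a different terminator or checksum could not interpret the bytes, which is the whole point of a protocol.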
Software
Nature of Software
 Software is intangible
 Hard to understand development effort
 Software is easy to reproduce
 Cost is in its development
 In other engineering products, manufacturing is costly
 The industry is labour intensive
 Hard to automate
 Untrained people can hack something together
 Quality problems are hard to notice
 Software is easy to modify
 People make changes without fully understanding it
 Software does not wear out
 It deteriorates by having its design changed:
 erroneously; or
 in ways that were not anticipated, leading to further complexity
Conclusions
 Much software has poor design and is getting worse
 Demand for software is high and rising
 We are in a perpetual software crisis
 We have to learn to ‘engineer software’
Computer Software
Software comprises the programs that enable a computer to perform a specific task, as
opposed to the physical components of the system (hardware). It falls into two broad
groups:
 Application software - a word processor, for example, which enables a user to
perform a task; and
 System software - an operating system
In computers, software is loaded into RAM and executed in the central processing unit.
At the lowest level, software consists of a machine language specific to an individual
processor. A machine language consists of groups of binary values signifying processor
instructions (object code), which change the state of the computer from its preceding
state.
Software is an ordered sequence of instructions for changing the state of the computer
hardware in a particular sequence. It is generally written in high-level programming
languages that are easier and more efficient for humans to use (closer to natural
language) than machine language. High-level languages are compiled or interpreted
into machine language object code.
Software may also be written in an assembly language, essentially, a mnemonic
representation of a machine language using a natural language alphabet. Assembly
language must be assembled into object code via an assembler.
Relationship to data
Software has historically been considered an intermediary between electronic hardware
and data. As computational math becomes increasingly complex, the distinction between
software and data becomes less precise.
Data has generally been considered as either the output or input of executed software.
However, data is not the only possible output or input. For example, (system)
configuration information may also be considered input, although not necessarily
considered data (and certainly not applications data).
The output of a particular piece of executed software may be the input for another
executed piece of software.
Therefore, software may be considered an interface between hardware, data, and/or
(other) software.
Types..
 Custom
– For a specific Customer/Area
 Generic
– Sold on Open Market
– Often called COTS (Commercial Off the Shelf) or shrink-wrapped
 Embedded
– Built into Hardware
– Hard to change
Practical computer systems divide software into three major classes:
 system software,
 programming software and
 application software,
although the distinction is somewhat arbitrary, and often blurred.
System software
is a generic term referring to any computer software that is an essential part of the
computer system. System software helps run the computer hardware and computer
system.
It includes operating systems, device drivers, diagnostic tools, servers, windowing
systems, utilities and more. The purpose of system software is to insulate the
applications programmer as much as possible from the details of the particular
computer complex being used, such as memory and other hardware features, and such
accessory devices as communications, printers, readers, displays and keyboards.
Programming Software
Programming software usually provides tools to assist a programmer in writing
computer programs and software using different programming languages in a more
convenient way. The tools include text editors, compilers, interpreters, linkers,
debuggers, and so on. An Integrated development environment (IDE) merges those
tools into a software bundle, and a programmer may not need to type multiple
commands for compiling, interpreting, debugging and tracing, because the IDE
usually has an advanced graphical user interface, or GUI.
Application Software
Application software allows humans to accomplish one or more specific (non-computer
related) tasks. Typical applications include industrial automation, business software,
educational software, medical software, databases and computer games. Businesses
are probably the biggest users of application software, but almost every field of human
activity now uses some form of application software. It is used to automate all sorts of
functions.
Retail software:
This type of software is sold off the shelves of retail stores. It includes expensive
packaging designed to catch the eye of shoppers and, as such, is generally more
expensive. An advantage of retail software is that it comes with printed manuals and
installation instructions, which are missing in hard-copy form from virtually every other
category of software. However, when hard-copy manuals and instructions are not
required, a downloadable version off the Internet will be less expensive, if available.
OEM software:
OEM stands for "Original Equipment Manufacturer" and refers to software sold in bulk to
resellers, designed to be bundled with hardware. For example, Microsoft has contracts
with various companies including Dell Computers, Toshiba, Gateway and others.
Microsoft sells its operating systems as OEM software at a reduced price, minus retail
packaging, manuals and installation instructions. Resellers install the operating system
before systems are sold and the OEM CD is supplied to the buyer. The "manual"
consists of the Help menu built into the software itself. OEM software is not legal to buy
unbundled from its original hardware system.
Shareware:
This software is downloadable from the Internet. Licenses differ, but commonly the user
is allowed to try the program for free, for a period stipulated in the license, usually thirty
days. At the end of the trial period, the software must be purchased or uninstalled.
Some shareware incorporates an internal clock that disables the program after the trial
period unless a serial number is supplied. Other shareware designs continue to work
with "nag" screens, encouraging the user to purchase the program.
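The internal-clock idea described above can be sketched as a simple date comparison. The function names and the serial-number check below are hypothetical placeholders, not any real product's licensing scheme:

```python
# Sketch of a shareware trial clock: compare the install date with
# today's date and refuse to run after a 30-day trial unless a serial
# number has been supplied. Names and the serial check are invented.
from datetime import date, timedelta
from typing import Optional

TRIAL_DAYS = 30

def trial_active(install_date: date, today: date) -> bool:
    """True while the trial period stipulated in the license lasts."""
    return today - install_date <= timedelta(days=TRIAL_DAYS)

def may_run(install_date: date, today: date, serial: Optional[str]) -> bool:
    # A real product would validate the serial cryptographically;
    # here any non-empty string "unlocks" the program.
    return bool(serial) or trial_active(install_date, today)

assert may_run(date(2008, 1, 1), date(2008, 1, 20), None)      # within trial
assert not may_run(date(2008, 1, 1), date(2008, 3, 1), None)   # trial expired
assert may_run(date(2008, 1, 1), date(2008, 3, 1), "ABC-123")  # unlocked
```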
Crippleware:
This software is similar to shareware except that key features will cease to work after
the trial period has ended. For example, the "save" function, the print function, or some
other vital feature necessary to use the program effectively may become unusable. This
"cripples" the program. Other types of crippleware incorporate crippled functions
throughout the trial period. A purchase is necessary to unlock the crippled features.
Demo software:
Demo software is not intended to be a functioning program, though it may allow partial
functioning. It is mainly designed to demonstrate what a purchased version is capable of
doing, and often works more like an automated tutorial. If a person wants to use the
program, they must buy a fully functioning version.
Adware:
This is free software that is supported by advertisements built into the program itself.
Some adware requires a live Internet feed and uses constant bandwidth to upload new
advertisements. The user must view these ads in the interface of the program. Disabling
the ads is against the license agreement. Adware is not particularly popular.
Spyware:
Spyware software is normally free, but can be shareware. It clandestinely "phones
home" and sends data back to the creator of the spyware, most often without the user's
knowledge. For example, a multimedia player might profile what music and video files
the software is called upon to play. This information can be stored with a unique
identification tag associated with the specific program on a user's machine, mapping a
one-to-one relationship.
The concept of spyware is very unpopular, and many programs that use spyware
protocols were forced to disclose this to users and offer a means to turn off reporting
functions. Other spyware programs divulge the protocols in their licenses, and make
acceptance of the spyware feature a condition of agreement for using the software.
Freeware:
Freeware is also downloadable off the Internet and free of charge. Often freeware is
only free for personal use, while commercial use requires a paid license. Freeware does
not contain spyware or adware. If it is found to contain either of these, it is reclassified
as such.
Public domain software:
This is free software, but unlike freeware, public domain software does not have a
specific copyright owner or license restrictions. It is the only software that can be legally
modified by the user for his or her own purposes. People are encouraged to read
licenses carefully when installing software, as they vary widely.
Malware:
Malware is software designed to infiltrate or damage a computer system, without the
owner's informed consent. The term is a portmanteau of "mal-" (or perhaps "malicious")
and "software", and describes the intent of the creator, rather than any particular
features. Malware is commonly taken to include computer viruses, worms, Trojan
horses, spyware and some adware.
Generations of Computers
The Zeroth Generation
The term zeroth generation is used to refer to the period of development of computing that
predated the commercial production and sale of computer equipment. The period might be
dated as extending from the mid-1800s. In particular, this period witnessed the emergence of
the first electronic digital computers, such as the ABC (Atanasoff-Berry Computer). The EDVAC
design introduced the idea of the stored program and serial execution of instructions, and its
development set the stage for the evolution of commercial computing and operating system
software. The hardware component technology of this period was the electronic vacuum tube.
The actual operation of these early computers took place without the benefit of an operating
system. Early programs were written in machine language and each contained code for
initiating operation of the computer itself. This was clearly inefficient and depended on the
varying competencies of the individual programmers acting as operators.
The First Generation, 1951-1956
The first generation marked the beginning of commercial computing. It was characterized by
the high-speed vacuum tube as the active component technology. Operation continued without
the benefit of an operating system for a time. The mode was called "closed shop" and was
characterized by the appearance of hired operators who would select the job to be run, perform
the initial program load of the system, run the user's program, and then select another job, and
so forth. Programs began to be written in higher-level, procedure-oriented languages, and thus
the operator's routine expanded. The operator now selected a job, ran the translation program
to assemble or compile the source program, combined the translated object program with any
existing library programs it might need as input to the linking program, loaded and ran the
composite linked program, and then handled the next job in a similar fashion. Application
programs were run one at a time and were translated with absolute computer addresses. There
was no provision for moving a program to a different location in storage for any reason.
Similarly, a program bound to specific devices could not be run at all if any of those devices was
busy or broken.
At the same time, the development of programming languages was moving away from basic
machine languages: first to assembly language, and later to procedure-oriented languages, the
most significant being the development of FORTRAN.
The Second Generation, 1956-1964
The second generation of computer hardware was most notably characterized by transistors
replacing vacuum tubes as the hardware component technology. In addition, some very
important changes in hardware and software architectures occurred during this period. For the
most part, computer systems remained card and tape-oriented systems. Significant use of
random access devices, that is, disks, did not appear until towards the end of the second
generation. Program processing was, for the most part, provided by large centralized computers
operated under mono-programmed batch processing operating systems.
The most significant innovations addressed the problem of excessive central processor delay
due to waiting for input/output operations. Recall that programs were executed by processing
the machine instructions in a strictly sequential order. As a result, the CPU, with its high-speed
electronic components, was often forced to wait for the completion of I/O operations involving
mechanical devices (card readers and tape drives) that were orders of magnitude slower.
These hardware developments led to enhancements of the operating system. I/O and data
channel communication and control became functions of the operating system, both to relieve
the application programmer from the difficult details of I/O programming and to protect the
integrity of the system. Service to users was improved by segmenting jobs and running
shorter jobs first (during "prime time") while relegating longer jobs to lower priority or night-time
runs. System libraries became more widely available and more comprehensive as new utilities
and application software components became available to programmers.
The second generation was a period of intense operating system development. Also it was the
period for sequential batch processing. Researchers began to experiment with
multiprogramming and multiprocessing.
The Third Generation, 1964-1979
The third generation officially began in April 1964 with IBM’s announcement of its System/360
family of computers. Hardware technology began to use integrated circuits (ICs) which yielded
significant advantages in both speed and economy. Operating System development continued
with the introduction and widespread adoption of multiprogramming. This was marked first by
the appearance of more sophisticated I/O buffering in the form of spooling operating systems.
These systems worked by introducing two new system programs: a system reader to move
input jobs from cards to disk, and a system writer to move job output from disk to printer, tape,
or cards. The spooling operating system in fact had multiprogramming, since more than one
program was resident in main storage at the same time. Later this basic idea of
multiprogramming was extended to include more than one active user program in memory at a
time. To accommodate this extension, both the scheduler and the dispatcher were enhanced. In
addition, memory management became more sophisticated in order to ensure that the program
code for each job, or at least the part of the code being executed, was resident in main storage.
Users shared not only the system's hardware but also its software resources and file system
disk space.
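The reader/writer arrangement just described can be sketched in a few lines. Python queues stand in for the disk spool files, and the job names are invented:

```python
# Sketch of spooling: a "system reader" moves input jobs onto disk
# (here, a queue) and a "system writer" drains output to the printer,
# so the CPU never waits on the slow card reader or printer directly.
from collections import deque

input_spool: deque = deque()    # jobs read from cards onto disk
output_spool: deque = deque()   # job output waiting for the printer

def system_reader(card_deck):
    """Move every input job from the card reader onto disk."""
    for job in card_deck:
        input_spool.append(job)

def run_jobs():
    """The CPU takes jobs from disk at full speed and spools output."""
    while input_spool:
        job = input_spool.popleft()
        output_spool.append(f"output of {job}")

def system_writer():
    """Drain spooled output to the printer (simulated with a list)."""
    printed = []
    while output_spool:
        printed.append(output_spool.popleft())
    return printed

system_reader(["job1", "job2"])
run_jobs()
assert system_writer() == ["output of job1", "output of job2"]
```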
The third generation was an exciting time, indeed, for the development of both computer
hardware and the accompanying operating system. During this period, the topic of operating
systems became, in reality, a major element of the discipline of computing.
The Fourth Generation, 1979 - Present
The fourth generation is characterized by the appearance of the personal computer and
the workstation. Miniaturization of electronic circuits and components continued and
Large Scale Integration (LSI), the component technology of the third generation, was
replaced by Very Large Scale Integration (VLSI), which characterizes the fourth
generation. However, improvements in hardware miniaturization and technology have
evolved so fast that we now have inexpensive workstation-class computers capable of
supporting multiprogramming and time-sharing. Hence the operating systems that
support today's personal computers and workstations look much like those which were
available for the minicomputers of the third generation. Examples are Microsoft's DOS
for IBM-compatible personal computers and UNIX for workstations. However, many of
these desktop computers are now connected as networked or distributed systems.
Computers in a networked system each have their operating system augmented with
communication capabilities that enable users to remotely log into any system on the
network and transfer information among machines that are connected to the network.
The machines that make up a distributed system operate as a virtual single-processor
system from the user's point of view; a central operating system controls and makes
transparent the location in the system of the particular processor or processors and file
systems that are handling any given program.
Generation of Programming Languages
A proper understanding of computer software requires a basic knowledge of
programming languages. These allow programmers and end users to develop the
programs of instructions that are executed by a computer. To be knowledgeable end
users, one should know the basic categories of programming languages. Each
generation of programming language has its own unique vocabulary, grammar and uses.
First Generation Language – 1 GL
These are the most basic level of programming languages. In the early stages of
computer development, all program instructions had to be written using binary codes
unique to each computer. This involved the difficult task of writing instructions in the
form of strings of binary digits (ones and zeros) or another number system.
Programmers had to write long series of detailed instructions even to accomplish simple
processing tasks. Programming in machine language required specifying the storage
locations for every instruction and item of data used, and instructions had to be included
for every switch and indicator used by the program. All of these requirements made
machine-language programming difficult and error prone.
Each instruction in a machine-language program consists of:
 an operation code which specifies what is to be done and
 an operand which specifies the address of the data or device to be operated
upon.
Furthermore portability is significantly reduced - in order to transfer code to a different
computer it needs to be completely rewritten since the machine language for one
computer could be significantly different from another computer.
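The opcode-plus-operand instruction format described above can be illustrated with a short sketch. The 4-bit operation code and 12-bit address split below is invented for illustration and matches no real machine:

```python
# Illustration of the two-part machine instruction: an operation code
# saying what to do, plus an operand address saying where the data is.
# The 4-bit opcode / 12-bit address layout is a made-up example.

OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

def encode(op: str, address: int) -> int:
    """Pack a 4-bit opcode and a 12-bit operand address into 16 bits."""
    return (OPCODES[op] << 12) | (address & 0xFFF)

word = encode("LOAD", 0x02A)          # LOAD the value at address 0x02A
print(format(word, "016b"))           # the string of ones and zeros a
                                      # 1GL programmer had to write
assert word >> 12 == OPCODES["LOAD"]  # opcode field
assert word & 0xFFF == 0x02A          # operand (address) field
```

Because the opcode table is specific to one machine, the same program written this way cannot run on a computer with a different instruction set, which is the portability problem noted above.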
Second Generation Language–2 GL
These were developed to reduce the difficulties of writing machine-language programs.
Use of these languages requires translator programs called assemblers, which allow a
computer to convert the instructions of such languages into machine instructions.
Assembly languages are known as symbolic languages because symbols are used to
represent operation codes and storage locations.
Convenient alphabetic abbreviations called mnemonics (memory aids) and other
symbols are used.
Advantages
 Alphabetic abbreviations, used in place of numerical addresses, are easier to
remember
 Simplifies programming
Disadvantages
 Assembler language is machine-oriented because its instructions correspond
closely to the machine-language instructions of the particular computer model
used
 e.g. mov al, 061h
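What an assembler does to a mnemonic such as the `mov al, 061h` example above can be sketched in a few lines. This toy assembler handles only that single instruction form; the opcode byte 0xB0 is the real x86 encoding of `mov al, imm8`, but everything else is simplified:

```python
# Minimal sketch of an assembler: translate a mnemonic instruction
# like "mov al, 061h" into the machine's numeric object code.
# Only one x86 instruction form is handled: 0xB0 means
# "mov into the AL register", followed by the immediate byte.

def assemble_mov_al(line: str) -> bytes:
    """Assemble the single form 'mov al, NNh' into object code."""
    mnemonic, operands = line.split(maxsplit=1)
    assert mnemonic.lower() == "mov"
    reg, value = [part.strip() for part in operands.split(",")]
    assert reg.lower() == "al"
    immediate = int(value.rstrip("hH"), 16)   # '061h' -> 0x61
    return bytes([0xB0, immediate])           # 0xB0 = 'mov al, imm8'

assert assemble_mov_al("mov al, 061h") == bytes([0xB0, 0x61])
```

The one-to-one mapping from each mnemonic line to one machine instruction is exactly why assembler language remains machine-oriented.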
Third Generation Language – 3 GL
High-level languages (HLLs) are also known as compiler languages. Instructions of an
HLL are called statements and closely resemble human language or the standard
notation of mathematics. Individual high-level language statements are actually
macroinstructions; that is, each individual statement generates several machine
instructions when translated into machine language by HLL translator programs called
compilers or interpreters.
Advantages
 Easy to learn and understand
 Have less rigid rules and forms
 Potential for error is reduced
 Machine independent
Disadvantages
 Less efficient than assembler language programs
 Require a greater amount of time for translation into machine instructions
e.g. COBOL, FORTRAN, Ada, C
Fourth Generation Language – 4 GL
The term 4GL is used to describe a variety of programming languages that are more
non-procedural and conversational than prior languages. Natural languages are 4GLs
that are very close to English or other human languages. When using a 4GL, end users
and programmers only need to specify the results they want, while the computer
determines the sequence of instructions that will accomplish those results. e.g.
FoxPro, Oracle, dBase.
Advantages
 Ease of use and technical sophistication
 Natural query languages that impose no rigid grammatical rules.
 More useful in end user and departmental applications without a high volume of
transactions to process.
Disadvantages
 Not very flexible
 Difficult for an end user to override some of the pre-specified formats or
procedures of a 4GL.
 Machine language code generated by a program developed by a 4GL is
frequently much less efficient than a program written in a language like COBOL.
 Unable to provide reasonable response times when faced with large amounts of
real-time transaction processing and end-user inquiries.
Fifth Generation Language – 5 GL
These are designed to make the computer solve the problem for you. This way, the
programmer only needs to worry about what problems need to be solved and what
conditions need to be met, without worrying about how to implement a routine or
algorithm to solve them.
 Languages using Artificial Intelligence techniques.
 Artificial Intelligence (AI) is a science and technology based on disciplines such
as:
o Computer Sciences, Biology, Psychology, Linguistics, Mathematics,
Engineering
 Prolog, OPS5, and Mercury are the best known fifth-generation languages.
Operating Systems
System Software
System Software consists of programs that coordinate the various parts of the computer
system to make it run efficiently. It performs tasks such as translating commands into a
form that the computer can understand, managing program and data files, and getting
applications software and hardware to work together.
There are 3 basic types: operating systems, utility programs, and language translators.
The Operating System
The main collection of programs that manage a computer's activities
 primary chores = management and control
 ensures that all actions requested by the user are valid and are processed in an
orderly fashion
 manages the computer system's resources to perform operations efficiently and
consistently
 considered to be the most critical piece of software in the computer system -
without it, no other program can run!
Differences Among Operating Systems
Primary Difference: whether they meet personal or network-administration needs - in
other words, is the operating system for a single user or for multiple users?
 single user products = MS DOS, Win 95, Win 98, Win ME
 multiple user products = MS Win NT, Win 2000, Unix
 lines are becoming increasingly blurred between uses - Windows XP
Functions of an Operating System
 Interacting with Users
One of the principal roles of the OS is to translate user intentions into a form that
the computer understands and to translate back any feedback from the hardware in
a form that the user can understand
 Making Resources Available
When you first turn the computer on, the OS boots up - that is, parts of the OS are
loaded into memory. Before control is turned over to the user, the OS determines
what hardware devices are online, makes sure that its own files tell it how to deal
with these devices, and reads an opening batch of directives. The user session then
begins and the OS gives some control to the applications that are launched. The
applications still allow the OS to manage storage and the availability of hardware. In
managing storage, the OS protects memory so that vital data will not be corrupted
by errors
 Scheduling Resources and Jobs
Scheduling routines in the OS determine the order in which jobs are processed on the
hardware devices. In a multi-user environment, jobs are often scheduled not on a
first-come, first-served basis but according to which user has the highest priority and
which devices are free. The OS also schedules operations throughout the computer
system so that different parts work on different portions of the same job at the same
time. Input and output devices work much more slowly than the CPU, so the OS must
make use of interleaved processing techniques such as multitasking and
multiprogramming to make sure the system devices are employed most efficiently
 Interleaved Processing Techniques
ways in which computers enhance efficiency, enable the computer to process many
programs at almost the same time so they increase the number of jobs the computer
system can handle in any given period
 multiprogramming - allows a multiuser computer system to work concurrently
on several programs from several users
 multitasking - allows concurrent execution of 2 or more programs from any
single user as well as concurrent execution of 2 or more tasks performed by a
single program
 time-sharing - a technique in which the OS cycles through all active programs
currently running in the system that need processing, giving a small slice of time
to each one on each cycle
 virtual memory - employs disk storage to extend conventional memory
 multiprocessing - links together 2 or more processors to perform work at the
same time
 spooling programs - free the CPU from time-consuming interaction with I/O
devices such as printers
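The time-sharing technique above - cycling through the active programs and giving each a small slice of time per cycle - can be sketched as a round-robin loop. The job names and work units are invented; a real scheduler tracks far more state:

```python
# Sketch of time-sharing as a round-robin cycle: each active job gets
# one small slice of "work" per pass, then goes to the back of the
# queue until it finishes. Jobs and work units are made-up examples.
from collections import deque

def time_share(jobs: dict, slice_units: int = 1):
    """jobs maps name -> units of work; returns completion order."""
    ready = deque(jobs.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= slice_units             # one time slice of CPU
        if remaining > 0:
            ready.append((name, remaining))  # back of the queue
        else:
            finished.append(name)
    return finished

# Three programs of different lengths: the short jobs finish first
# even though the long job entered the queue first.
assert time_share({"A": 3, "B": 1, "C": 2}) == ["B", "C", "A"]
```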
 Monitoring Activities
overseeing activities while processing is underway - apprising users of problems,
monitoring system performance and reporting to the user on its status
 Housekeeping
organize the hard disk and make users aware of its contents, compile records of
user log-on and log-off times, program running times, programs that each user has
run, etc. Organize files hierarchically into directories
 Security
The OS can protect the computer from unauthorized access by collecting system
usage statistics for those in charge of the computer system and by reporting any
attempts to breach system security. Many OSes contain password procedures to
prevent outside users from accessing systems. Many also provide encryption
procedures that disguise valuable programs and data
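The password procedures mentioned above can be sketched as follows. Storing only a hash of each password, never the password itself, is the usual approach; the user names are invented, and a real OS would also add per-user salt and a deliberately slow hash function:

```python
# Sketch of an OS password procedure: store only a hash of each
# password and compare hashes at log-on. Plain SHA-256 is used here
# only to show the principle; real systems salt and slow the hash.
import hashlib

users = {}  # username -> hex digest of the password

def set_password(user: str, password: str) -> None:
    users[user] = hashlib.sha256(password.encode()).hexdigest()

def log_on(user: str, password: str) -> bool:
    """Hash the attempt and compare with the stored digest."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return users.get(user) == digest

set_password("alice", "s3cret")
assert log_on("alice", "s3cret")        # correct password
assert not log_on("alice", "guess")     # wrong password rejected
assert not log_on("mallory", "s3cret")  # unknown user rejected
```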
Personal Operating Systems
DOS (Disk Operating System) –
2 versions were created: PC-DOS for IBM microcomputers and MS-DOS for PC-compatible computers. The original design was based on 16-bit CPU chips
Windows 3.x –
A graphical operating environment created by Microsoft to run in conjunction with DOS,
replacing the DOS command line with a system of menus, windows and icons. Not a full
fledged operating system - instead they merely define operating environments that form
a graphical shell around DOS
Windows 9x –
The OS that succeeded the combination of DOS with Windows, 32 bit operating system,
permits pre-emptive multitasking, longer file names and plug-and-play support, Win 98
incorporates web-browsing capabilities, more options for customizing the desktop user
interface, the ability to turn the computer on and off automatically in order to perform
tasks while the user is away, improved support for large hard disks
Windows Millennium Edition (ME)
One of the latest operating systems from Microsoft designed for the home user. It is an
upgrade for either Windows 95 or 98. Windows Me focuses on multimedia support,
usability and stability, home networking, and Internet support.
Windows XP –
The recent edition of the Windows operating system, comes in 2 versions - Home
Edition and Professional. Microsoft created Windows XP in order to update the user
interface, add new features, unify the code base between the separate families of
Windows, and give the user a more stable platform. Windows XP is heavily based on
the Windows NT and 2000 core and continues the 32-bit programming model.
Mac OS
The OS for Apple's Macintosh line of computer systems. Macintosh computers that
help manage networks often run the Unix OS.
OS/2 –
An OS designed for both desktop PC-compatible computers and office networks. A full
32-bit operating system with capabilities for preemptive multitasking and multiuser
support
Network Operating Systems
Unix
A long-standing OS for midrange computers, microcomputer networks, graphics
workstations and the Internet. Has a long and relatively successful track record as a
multi-user, multitasking OS. Most frequent choice in OS for server computers that store
information carried over the Internet. Flexibly built so it can be used on a wide variety of
machines - not built around a single family of processors - computers from micros to
mainframes can run Unix. Can easily integrate a variety of devices from different
manufacturers through network connections. Not as easy to use as Windows or Mac
OS. Several brands of Unix are available now and many are not compatible with each
other
Netware
Most widely used OS on local area networks (LANs). Developed by Novell during the
mid-1980s. Provides a shell around your personal, desktop OS through which you can
retrieve files from or save them on a shared hard disk and also print them on a shared
printer
Windows NT (New Technology)
An OS designed by Microsoft for both workstations and network applications within
organizations. Full 32-bit OS. Same GUI as with Win 9x. Can run on a variety of
computer systems, not just those with Intel chips.
There are 2 versions:
 workstation edition for ordinary users working at powerful desktop computers and
 server edition for network administration and advanced network management
tasks
Windows 2000 –
The next to last release of Windows NT comes in three flavors:
 Windows 2000 Professional (the counterpart to NT 4 Workstation, and the most
direct replacement for Windows 9x),
 Windows 2000 Server, and
 Windows 2000 Advanced Server.
Summary
Components of an Operating System
 Process Management
 Main Memory Management
 Secondary-Storage Management
 I/O System Management
 File Management
 Protection System
 Networking
 Command-Interpreter System
System Development Lifecycle
The Systems Development Life Cycle (SDLC) method is an approach to developing an
information system or software product that is characterized by a linear sequence of steps that
progress from start to finish without revisiting any previous step. The SDLC is a methodology
that has been constructed to ensure that systems are designed and implemented in a
methodical, logical, step-by-step manner. The SDLC method is one of the oldest systems
development models and is still probably the most commonly used.
The SDLC consists of the following activities:
1. Preliminary Investigation
2. Determination of Systems Requirements ( Analysis Phase)
3. Design of the System
4. Development Of Software
5. System Testing
6. Implementation and Evaluation
7. Review
1) The Preliminary Investigation
The Preliminary Investigation Phase may begin with a phone call from a customer, a
memorandum from a Vice President to the Director of Systems Development, a letter from a
customer to discuss a perceived problem or deficiency, or a request for something new in an
existing system.
The purpose of the Preliminary Investigation is not to develop a system, but to verify that a
problem or deficiency really exists, or to pass judgement on the new requirement.
Three factors are typically examined in a Feasibility Study:
a) Technical Feasibility.
i) It assesses whether the current technical resources and skills are sufficient for the
new system
ii) If they are not available, can they be upgraded to provide the level of technology
necessary for the new system
iii) It centers around the existing computer system and to what extent it can support the
proposed addition
iv) It refers to having the right technology - hardware, software and skilled
technicians - to complete the project and execute the system.
b) Economic Feasibility.
i) It examines the benefits in creating the system to make its costs acceptable. It refers
to having a project that can be completed based on considering the financial costs of
completing the project versus the benefits of completing it.
ii) It determines whether the time and money are available to develop the system.
iii) It includes the cost of purchasing new equipment, hardware and software.
c) Operational Feasibility.
i) Operational feasibility determines if the human resources are available to operate the
system once it has been installed.
ii) Whether the system will be used if it is developed and implemented? Or will there be
resistance from users?
iii) Users who do not want a new system may prevent it from becoming operationally
feasible.
Any one of these constraints, or any combination of the three, can prevent a project
from being developed any further. When a project is both desirable and feasible for the
organization, the Analysis Phase begins.
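The three feasibility factors act as a combined gate: a project proceeds only when all of them hold. A minimal sketch of this idea (the function name and parameters are illustrative, not from the notes):

```python
# Sketch: a project clears the Preliminary Investigation only if all
# three feasibility factors hold (illustrative model, not a formal method).

def is_feasible(technical: bool, economic: bool, operational: bool) -> bool:
    """Any single failing factor, or any combination, blocks the project."""
    return technical and economic and operational

# A project users would resist is not operationally feasible, so it fails
# even when the technology and the budget are both in place.
print(is_feasible(technical=True, economic=True, operational=False))  # False
print(is_feasible(True, True, True))   # True -> the Analysis Phase begins
```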
2) Determination of Systems Requirements (Analysis Phase)
Systems analysis is the study of a current business information systems application and the
definition of user requirements and priorities for a new or improved application.
The analysts study the problem, deficiency or new requirement in detail. Whatever the
size of the project being undertaken, the key to the analysis phase is gaining a rigorous
understanding of the problem or opportunity which is driving development of the new
application. The analyst has to work closely with employees and managers to gather
details about the business process, their opinions of why things happen as they do, and
their ideas for changing the process. Systems analysts should do more than study the
current problems; they should closely inspect the various documents available about the
operations and processes.
They are frequently called upon to help handle the planned expansion of a business. They
assess the possible future needs of the business and what changes should be considered to
meet those needs. The analyst has to help the user visualize the system, and usually
recommends more than one alternative for improving the situation. He makes a prototype and
conducts a walkthrough of the prototype with the prospective user.
A system analyst needs to possess strong people skills and strong technical skills. People
skills will assist in working with clients to help the team define requirements and resolve
conflicting objectives. Interpersonal skills help in communication, understanding and
identifying problems, grasping company goals and objectives, and selling the system
to the user. Technical skills will help document these requirements with process, data, and
network models. It helps to focus on procedure and techniques for operations and
computerization.
At the end of this phase, the Requirements Statement should be in development: this
provides details about what the program should do. A requirement document includes
Business Use Cases, Project Charter / Goals, Inputs and Output details to the system and
the broad process involved in the system. It can easily form the basis of a contract between
the customer and the developer. The Requirements Statement should list all of the major
details of the program.
3. Design of the System
The design of an information system produces the details that state how a system will meet
the requirements identified during systems analysis. This stage is known as logical design
phase in contrast to the process of developing program software, which is referred to as
physical design. Design in the SDLC encompasses many different elements. The different
components that are 'designed' in this phase are: Input , Output, Processing, Files
By the end of the design phase, a formal Requirements Statement for the program is made
along with a rough sketch of what the user interface will look like. To understand the
structure and working of the SDLC each phase is examined in turn.
Most programs are designed by first determining the output of the program. If you know
what the output of the program should be, you can determine the input needed to produce
that output more easily. Once you know both the output from, and the input to the program,
you can then determine what processing needs to be performed to convert the input to
output. You will also be in a position to consider what information needs to be saved, and in
what sort of file.
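The output-first design order described above can be illustrated with a tiny hypothetical report program: fix the desired output, work out what input it requires, then write the processing that converts one into the other (all names and figures below are invented for illustration):

```python
# Desired OUTPUT (decided first): the total amount due on an invoice.
# Required INPUT (derived from the output): line items with price and quantity.
# PROCESSING (decided last): convert the input records into the output figure.

def total_due(line_items):
    """Processing step: sum price * quantity over all input records."""
    return sum(price * qty for price, qty in line_items)

# Input identified from the output requirement: (unit price, quantity) pairs
items = [(10.0, 2), (5.5, 4)]
print(total_due(items))   # 42.0 -> the output we designed toward
```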
While doing the Output and Input designs, more information will be available to add to the
Requirements Statement. It is also possible that a first screen design will take shape, and at
the end of these designs a sketch will be made of what the screen will look like. At this
stage of the SDLC it isn't necessary to discuss how or what the program will do, just to
get the requirements down on paper.
Designers are responsible for providing programmers with complete and clearly outlined
software specifications. As programming starts, designers are available to answer
questions, clarify fuzzy areas, and handle problems that confront the programmers when
using the design specifications.
4. Development Of Software
During Development Phase, computer hardware is purchased and the software is
developed. That means that actual coding of the program is initiated. In the Development
phase, examination and re-examination of the Requirements Statement is needed to ensure
that it is being followed to the letter. Any deviations would usually have to be approved
either by the project leader or by the customer.
The Development phase can be split into two sections, that of Prototyping and Production
Ready Application Creation. Prototyping is the stage of the Development phase that
produces a pseudo-complete application, which for all intents and purposes appears to be
fully functional. Developers use this stage to demo the application to the customer as
another check that the final software solution answers the problem posed. When they are
given the ‘OK’ or ‘go-ahead’ from the customer, the final version code is written into this
shell to complete the phase.
5. System Testing
During systems testing, the system is used experimentally to ensure that the software runs
according to its specifications and in the way the user expects. Special test data are input
for processing, and the results examined. If necessary, adjustments must be made at this
stage. Although the programmer will find and fix many problems, almost invariably, the user
will uncover problems that the developer has been unable to simulate.
6. Implementation and Evaluation
In the Implementation Phase, the project reaches fruition. After the Development phase of
the SDLC is complete, the system is implemented. Any hardware that has been purchased
will be delivered and installed. The designed and programmed software will be installed on
any PCs that require it. Any person that will be using the program will also be trained during
this phase of the SDLC. The system is put into use. This can be done in various ways. The
new system can be phased in, according to application or location, and the old system
gradually replaced. In some cases, it may be more cost-effective to shut down the old
system and implement the new system all at once.
The implementation phase also includes training systems operators to use the equipment,
diagnosing malfunctions, and troubleshooting.
Evaluation of the system is performed to identify the strengths and weaknesses of the new
system. The actual evaluation can be any of the following
a) Operational Evaluation: Assessment of the manner in which the system functions,
including ease of use, response time, suitability of information formats, overall
reliability and level of utilization.
b) Organizational Impact: Identification and measurement of benefits to the
organization in such areas as financial concerns (cost, revenue and profit),
operational efficiency and competitive impact. Includes impact on internal and
external information flows.
c) User Manager Assessment: Evaluation of the attitudes of senior and user
managers within the organization, as well as end-users.
d) Development Performance: Evaluation of the development process in accordance
with such yardsticks as overall development time and effort, conformance to budgets
and standards, and other project management criteria. Includes assessment of
development methods and tools.
7. Review
After system implementation and evaluation, a review of the system is conducted by the
users and the analysts to determine how well the system is working, whether it is accepted,
and whether any adjustments are needed.
Review is important to gather information for maintenance of the system. No system is ever
complete. It has to be maintained as changes are required because of internal
developments such as new user or business activities, and external developments such as
industry standards or competition. The post-implementation review provides the first source
of information for maintenance requirements.
The most fundamental concern during post implementation review is determining whether
the system has met its objectives. The analysts assess if the performance level of the users
has improved and whether the system is producing the results intended. The systems output
quality has to be optimum.
System Development Lifecycle (SDLC) Models
Waterfall Model
The waterfall model is a popular version of the systems development life cycle model for
software engineering. It is a classic approach to the SDLC. It describes a development method
that is linear and sequential. The Waterfall Model has distinct goals for each phase of
development.
Once a phase of development is completed, the development proceeds to the next phase and
there is no turning back. The advantage of waterfall development is that it allows for
departmentalization and managerial control. A schedule can be set with deadlines for each
stage of development and a product can proceed through the development process, and
theoretically, be delivered on time.
Development moves from concept, through design, implementation, testing, installation,
troubleshooting, and ends up at operation and maintenance. Each phase of development
proceeds in strict order, without any overlapping or iterative steps.
The disadvantage of waterfall development is that it does not allow for much reflection or
revision. Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.
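The strict, no-turning-back ordering of the waterfall phases can be sketched as a simple loop over a fixed sequence (phase names follow the description above; the work done in each phase is stubbed out for illustration):

```python
# Waterfall: each phase completes before the next begins, with no
# overlapping or iterative steps and no way to revisit an earlier phase.
PHASES = ["concept", "design", "implementation", "testing",
          "installation", "troubleshooting", "operation and maintenance"]

def run_waterfall(do_phase):
    """Run every phase exactly once, in strict order."""
    completed = []
    for phase in PHASES:
        do_phase(phase)          # a phase cannot start until the previous one ends
        completed.append(phase)
    return completed

log = run_waterfall(lambda phase: None)
print(log == PHASES)   # True: phases ran exactly once, in order
```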
Advantages
1. Simple and easy to use.
2. Easy to manage due to the rigidity of the model – each phase has specific deliverables
and a review process.
3. Phases are processed and completed one at a time.
4. Works well for smaller projects where requirements are very well understood.
Disadvantages
1. Adjusting scope during the life cycle can kill a project.
2. No working software is produced until late during the life cycle.
3. High amounts of risk and uncertainty.
4. Poor model for complex and object-oriented projects.
5. Poor model for long and ongoing projects.
6. Poor model where requirements are at a moderate to high risk of changing.
Modified Waterfall model with feedback
Spiral Model
This model of development combines the features of the prototyping model and the waterfall
model. The spiral model is favored for large, expensive, and complicated projects.
The steps in the spiral model can be generalized as follows:
1) The new system requirements are defined in as much detail as possible.
This usually involves interviewing a number of users representing all the external or
internal users and other aspects of the existing system.
2) A preliminary design is created for the new system.
3) A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.
4) A second prototype is evolved by a fourfold procedure:
a. evaluating the first prototype in terms of its strengths, weaknesses, and risks;
b. defining the requirements of the second prototype;
c. planning and designing the second prototype;
d. constructing and testing the second prototype.
5) At the customer's option, the entire project can be aborted if the risk is deemed too
great. Risk factors might involve development cost overruns.
6) The existing prototype is evaluated in the same manner as was the previous prototype,
and, if necessary, another prototype is developed from it according to the fourfold
procedure outlined above.
7) The preceding steps are iterated until the customer is satisfied that the refined prototype
represents the final product desired.
8) The final system is constructed, based on the refined prototype.
9) The final system is thoroughly evaluated and tested. Routine maintenance is carried out
on a continuing basis to prevent large-scale failures and to minimize downtime.
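The spiral steps above amount to a loop: build a prototype, evaluate it and its risks, and either refine it, abort, or construct the final system. A schematic sketch (the risk threshold, evaluation functions, and prototype labels are placeholders, not part of the model itself):

```python
# Schematic spiral model: iterate prototypes until the customer is
# satisfied, or abort if the assessed risk is deemed too great.

def spiral(evaluate, risk_of, max_rounds=10, risk_limit=0.8):
    prototype = "prototype 1"                 # steps 1-3: first prototype
    for round_no in range(max_rounds):
        if risk_of(prototype) > risk_limit:   # step 5: customer may abort
            return None
        if evaluate(prototype):               # step 7: customer satisfied?
            return f"final system from {prototype}"   # steps 8-9
        prototype = f"prototype {round_no + 2}"       # steps 4/6: refine
    return None

result = spiral(evaluate=lambda p: p == "prototype 3",
                risk_of=lambda p: 0.2)
print(result)   # final system from prototype 3
```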
Spiral Model …. Summary
Each round consists of four phases
1) Determine objectives: product definition, determination of business objectives,
specification of constraints, generation of alternatives
2) Evaluate alternatives: risk analysis, prototyping
3) Develop product: detailed design, code, unit test, integration
4) Plan next cycle: customer evaluation, design planning, implementation, customer
delivery
Advantages of the Spiral Model
1) High amount of risk analysis
2) Good for large and mission-critical projects.
3) Software is produced early in the software life cycle.
Disadvantages of the Spiral Model
1) Can be a costly model to use.
2) Risk analysis requires highly specific expertise.
3) Project’s success is highly dependent on the risk analysis phase.
4) Doesn’t work well for smaller projects.
Alternative Diagram for the Spiral Model …. 1
Information Gathering/Analysis
Information usually originates from
External sources
Vendors, Govt. Documents, Newspapers and professional journals
Internal sources
Financial reports, Personnel staff, Professional staff, Transaction documents and reports.
Analysts use fact-finding techniques such as interviews, questionnaires, record inspections (on-site review) and observation for collecting data.
Interviews
A device to identify relations or verify information and to capture information as it exists. The
respondents are people chosen for their knowledge of the system under study. This method is
the best source of qualitative information (opinions, policies, subjective descriptions of activities
and problems). Interviews can be structured or unstructured.
Unstructured Interviews
Use a question-and-answer format and are appropriate when the analyst wants to acquire
general information about the system.
Structured Interviews
Use standardized questions in either open-response (in own words) or closed-response format
(set of prescribed answers)
Questionnaires
Questionnaires can be administered to a large number of people simultaneously. The
anonymity gives respondents confidence and leads to honest responses. It places less pressure
on the subjects for an immediate response. The use of standardized question formats can yield reliable
data. Closed ended questionnaires control the frame of reference due to the specific responses
provided. Analysts use Open-ended questionnaires to learn about feelings, opinions and
general experiences or to explore a process or problem.
Record Review
Here Analysts examine information that has been recorded about the system and users.
Records include written policy manuals, regulations and standard operating procedures used by
the organisation. These familiarize the analyst with what operations must be supported and with
the formal relations within the organization.
Observation
Allows analysts to gain information they cannot obtain by any other method. They can obtain
first-hand information about how activities are carried out. Experienced observers know what to
look for and how to assess the significance of what they observe.
Importance of documentation
Documentation
A description of the system used to communicate, instruct, and record information for historical,
operational and reference purposes. Documents establish and declare the performance criteria of
a system. Documentation explains the system and helps people interact with it.
Types of documentation
o Program documentation:
Begins in the systems analysis phase and continues during systems implementation.
Includes process descriptions and report layouts. Programmers provide documentation
with comments that make it easier to understand and maintain the program. An analyst
must verify that program documentation is accurate and complete.
o System documentation:
It describes the system’s functions and how they are implemented. Most system
documentation is prepared during the systems analysis and systems design phases.
Documentation consists of
o Data dictionary entries
o Data flow diagrams
o Object models
o Screen layouts
o Source documents
o Initial systems request.
o Operations documentation:
Typically used in a minicomputer or mainframe environment with centralized processing
and batch job scheduling. This documentation tells the IT operations group how and when to
run programs. A common example is a program run sheet, which contains information
needed for processing and distributing output.
o User documentation:
Typically includes the following items
o System overview
o Source document description, with samples
o Menu and data entry screens
o Reports that are available, with samples
o Security and audit trail information
o Responsibility for input, output, processing
o Procedures for handling changes/problems
o Examples of exceptions and error situations
o Frequently asked questions (FAQ)
o Explanation of Help & updating the manual
o Online documentation can empower users and reduce the need for direct IT
support
o Context-sensitive Help
o Interactive tutorials
o Hints and tips
o Hypertext
File design
The design of files includes decisions about the nature and content of files, such as:
o Whether the file is to be used for storing transaction details, historical data, or reference
information
o Which data items to include in the record format within the file
o The length of each record
o The arrangement of records within the file (the storage structure: indexed,
sequential or relative)
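The record-format, record-length, and arrangement decisions above can be made concrete with Python's struct module: packing a record into a fixed number of bytes is what makes relative (direct) organization possible, since record n then starts at byte offset n times the record size. The field layout below is invented for illustration:

```python
import struct

# Hypothetical record format decided at design time:
# 6-byte account id, 20-byte name, 8-byte balance (standard sizes, no padding).
RECORD = struct.Struct("<6s20sd")
print(RECORD.size)   # 34 -> the fixed length of every record

rec = RECORD.pack(b"A00042", b"R. Sharma".ljust(20), 1250.75)
acct, name, balance = RECORD.unpack(rec)
print(acct, balance)   # b'A00042' 1250.75

# Fixed-length records support relative organization:
# record n begins at byte offset n * RECORD.size within the file.
offset_of_third_record = 2 * RECORD.size
print(offset_of_third_record)   # 68
```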
User Involvement
Users (Managers and employees in business) are highly involved in systems development as:
o They have accumulated experience working with applications developed earlier. They
have better insight into what the information system should be. If they have experienced
systems failures they will have ideas about avoiding problems.
o The applications developed in organizations are often highly complex, hence systems
analysts need the continual involvement of users to understand the business functions being
studied
o With better system development tools emerging, users can design and develop
applications without involving trained systems analysts.
Contents of User Manual
Contents of a User Manual must be divided into different modules on a need-to-know basis:
o Information flow diagrams
o Flow charts
o Instructions to use the system
o Data repository
Database Management Systems (DBMS)
A database management system (DBMS) is a program that lets one or more computer users
create and access data in a database. The DBMS manages user requests (and requests from
other programs) so that users and other programs are free from having to understand where the
data is physically located on storage media and, in a multi-user system, who may also be
accessing the data. In handling user requests, the DBMS ensures the integrity of the data (that
is, making sure it continues to be accessible and is consistently organized as intended) and
security (making sure only those with access privileges can access the data).
A database is a collection of interrelated files: a system where all data are kept in one large,
linked set of files that allows access by different applications. DBMSs are produced by computer
manufacturers for use on their own systems and by independent companies for
use over a wide range of machines.
There are three main features of a database management system that make it attractive to use
a DBMS in preference to more conventional software. These features are:
1. centralized data management,
2. data independence, and,
3. systems integration.
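These features can be seen in miniature with Python's built-in sqlite3 module: the application issues logical requests, and the DBMS decides how and where the data is physically stored and retrieved. A minimal sketch (the table and rows are invented for illustration):

```python
import sqlite3

# The DBMS (SQLite here) owns physical storage; the program only states
# WHAT data it wants, not WHERE on the storage media the data lives.
conn = sqlite3.connect(":memory:")   # one centralized data store
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Asha', 'Mumbai')")
conn.execute("INSERT INTO customer VALUES (2, 'Ravi', 'Pune')")

# Different applications share the same integrated data through queries,
# with no knowledge of file layout or record placement.
rows = conn.execute("SELECT name FROM customer WHERE city = 'Mumbai'").fetchall()
print(rows)   # [('Asha',)]
```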
In a DBMS, all files are integrated into one system, thus reducing redundancies and making
data management more efficient. In addition, a DBMS provides centralized control of the
operational data. Some of the advantages of data independence, integration and centralized
control are:
1. Redundancies and inconsistencies can be reduced
In conventional data systems, an organization often builds a collection of application
programs created by different programmers. The data in conventional data systems is
often not centralized. Some applications may require data to be combined from several
systems. These several systems could well have data that is redundant as well as
inconsistent (that is, different copies of the same data may have different values).
Data inconsistencies are often encountered in everyday life. For example, we have all
come across situations when a new address is communicated to an organization that we
deal with (e.g. a bank), we find that some of the communications from that organization
are received at the new address while others continue to be mailed to the old address.
Combining all the data in a database would involve reduction in redundancy as well as
inconsistency. It also is likely to reduce the costs for collection, storage and updating of
data. With DBMS, data items need to be recorded only once and are available for
everyone to use.
2. Better service to the users
A DBMS is often used to provide better service to the users. In conventional systems,
availability of information is often poor since it normally is difficult to obtain information
that the existing systems were not designed for. Once several conventional systems are
combined to form one centralized database, the availability of information and its up-to-dateness is likely to improve, since the data can now be shared and the DBMS makes it
easy to respond to unforeseen information requests.
Centralizing the data in a database also often means that users can obtain new and
combined information that would have been impossible to obtain otherwise. Also, use of
a DBMS should allow users that do not know programming to interact with the data more
easily. The ability to quickly obtain new and combined information is becoming
increasingly important.
An organization running a conventional data processing system would require new
programs to be written (or the information compiled manually) to meet every new
demand.
3. Flexibility of the system is improved
Changes are often necessary to the contents of data stored in any system. These
changes are more easily made in a database than in a conventional system in that these
changes do not need to have any impact on application programs. Thus data processing
becomes more flexible and can respond more quickly to the expanding needs of
the business.
4. Cost of developing, implementing and maintaining systems is lower
It is much easier to respond to unforeseen requests when the data is centralized in a
database than when it is stored in conventional file systems. Although the initial cost of
setting up a database can be large, the input/output routines normally coded by the
programmers are now handled through the DBMS, so the amount of time and money spent
writing an application program is reduced. Since the programmer spends less time
writing applications, the amount of time required to implement new
applications is reduced.
5. Standards can be enforced
Since all access to the database must be through the DBMS, standards are easier to
enforce. Standards may relate to the naming of the data, the format of the data, the
structure of the data etc.
6. Security can be improved
In conventional systems, applications are developed in an ad hoc manner. Often
different systems of an organization access different components of the operational
data. In such an environment, enforcing security can be quite difficult.
Setting up of a database makes it easier to enforce security restrictions since the data is
now centralized. It is easier to control who has access to what parts of the database.
However, setting up a database can also make it easier for a determined person to
breach security.
7. Integrity can be improved
Since the data of the organization using a database approach is centralized and would
be used by a number of users at a time, it is essential to enforce integrity controls.
Integrity may be compromised in many ways.
For example, a student may be shown to have borrowed books but has no enrolment.
Salary of a staff member in one department may be coming out of the budget of another
department. If a number of users are allowed to update the same data item at the same
time, there is a possibility that the result of the updates is not quite what was intended.
Controls therefore must be introduced to prevent such errors from occurring because of
concurrent updating activities. However, since all data is stored only once, it is often
easier to maintain integrity than in conventional systems.
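The concurrent-update problem described above is the classic lost-update anomaly: two users read the same value, both add to it, and one addition vanishes. The sketch below shows one such control, serializing each read-modify-write inside a transaction (SQLite is used purely as an illustration; the table and amounts are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.commit()

def deposit(amount):
    # The read-modify-write is a single atomic statement inside a
    # transaction, so two concurrent deposits cannot overwrite each
    # other (no lost update).
    with conn:   # transaction: commit on success, roll back on error
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = 1",
                     (amount,))

deposit(50)
deposit(25)
balance = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
print(balance)   # 175 -> both updates preserved
```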
8. Data model must be developed
Perhaps the most important advantage of setting up a database system is the
requirement that an overall data model for the enterprise be built. In conventional
systems, it is more likely that files will be designed as needs of particular applications
demand. The overall view is often not considered.
Building an overall view of the enterprise data, although often an expensive exercise, is
usually very cost-effective in the long term.
It is easier to find a score of men wise enough to discover the truth than to find one
intrepid enough, in the face of opposition, to stand up for it.
~A.A. Hodge
Office Automation
The term “Office Automation” is generally used to describe the use of computer systems to
perform office operations, such as desktop application suites, groupware systems and workflow. An
Office Automation Team is the domain team responsible for selecting product standards,
defining standard configurations, collaborating on component architecture design principles with
the architecture team, and planning and executing projects for office automation.
Scope
The Office Automation Team will take ownership of issues related to Desktop Application Suites
and Groupware Systems. Although Workflow is considered a part of office automation, the
Document Management Domain Team will cover it separately. Responsibility for some
Advanced Features will be shared with the Security Domain Team.
Desktop Application Suites
Desktop is a metaphor used to describe a graphical user interface that portrays an electronic file
system. Desktop application suites generally include:
 Word Processors to create, display, format, store, and print documents.
 Spreadsheets to create and manipulate multidimensional tables of values arranged in
columns and rows.
 Presentation Designers to create highly stylized images for slide shows and reports.
 Desktop Publishers to create professional quality printed documents using different
typefaces, various margins and justifications, and embedded illustrations and graphics.
 Desktop Database Support to collect limited amounts of information and organize it by
fields, records, and files.
 Web Browsers to locate and display World Wide Web content.
Groupware Systems
Groupware refers to any computer-related tool that improves the effectiveness of person-to-person processes. Simply put, it is software that helps people work together. Groupware systems
generally include:
 Email to transmit messages and files.
 Calendaring to record events and appointments in a fashion that allows groups of users
to coordinate their schedules.
 Faxing to transmit documents and pictures over telephone lines.
 Instant Messaging to allow immediate, text-based conversations.
 Desktop Audio/Video Conferencing to allow dynamic, on-demand sharing of information
through a virtual “face-to-face” meeting.
 Chat Services to provide a lightweight method of real-time communication between two
or more people interested in a specific topic.
 Presence Detection to enable one computer user to see whether another user is
currently logged on.
 White-boarding to allow multiple users to write or draw on a shared virtual tablet.
 Application Sharing to enable the user of one computer to take control of an application
running on another user’s computer.
 Collaborative Applications to integrate business logic with groupware technologies in
order to capture, categorize, search, and share employee resources in a way that makes
sense for the organization.
Workflow
Workflow is defined as a series of tasks within an organization to produce a final outcome.
Sophisticated applications allow workflows to be defined for different types of jobs. In each step
of the process, information is automatically routed to the next individual or group that is
responsible for a specific task. Once that task is complete, the system ensures that the
individuals responsible for the next task are notified and receive the data they need to execute
their stage of the process. This continues until the final outcome is achieved.
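The routing behaviour described above can be sketched as a fixed sequence of tasks handed from one responsible party to the next, with a notification at every hand-off. The stage names and owners below are invented for illustration:

```python
# Schematic workflow engine: each completed task routes the work and its
# data to the party responsible for the next stage, until the final outcome.

STAGES = [("draft", "author"), ("review", "editor"),
          ("approve", "manager"), ("publish", "webmaster")]

def run_workflow(document, notify):
    for stage, owner in STAGES:
        # the system notifies the responsible party and hands over the data
        notify(owner, stage, document)
    return "final outcome"

trail = []
result = run_workflow("policy.doc",
                      lambda owner, stage, doc: trail.append((owner, stage)))
print(result)     # final outcome
print(trail[0])   # ('author', 'draft')
```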
Although workflow applications are considered part of Office Automation, workflow itself is part
of a larger document management initiative. Therefore, the Document Management Domain
Team will take responsibility for it.
Advanced Features
In any large environment, the responsibility for advanced features is generally shared among
various workgroups. Although the Security Domain Team is responsible for the overall security
of the enterprise, the Office Automation team needs to work closely with them to implement
secure technology. Some examples of advanced features include:
 Anti-Virus Protection to identify and remove computer viruses.
 Anti-Spam Protection to identify and remove unsolicited commercial email.
 Open-Relay Protection to ensure that email servers within our environment are not used
by outside parties to route spam.
 Enterprise Directories to provide a single repository of user accounts for authentication,
access control, directory lookups, and distribution lists.
 Digital Signatures and Encryption to allow for authentic and secure transfers of data.
Principle A - Minimize System Complexity

Definition
o The Office Automation system will be designed to balance the benefits of an enterprise
deployment against the user’s need for flexibility.

Benefits/Rationale
o Avoids duplication in system resources and support issues.
o Increases interoperability.
o Enterprise investments will be better managed (Increased ROI).
o Increases ease of use for end user.
o Solutions will meet end-user needs and expectations.
o Projects a consistent view of state government to the public.
o Leverages enterprise licensing.

Implications
o In order to achieve the long-term benefits of enterprise systems, short-term migration
investments will need to be made.
o Will be limiting the number of supported software versions, products, and
configurations.
o Existing technologies need to be identified.
o Must understand user’s work process before applying technology.
o Requires coordination to implement enterprise-level technology.
o Enterprise-level solutions require enterprise-level budgets.
Principle B - Maximize Agency Interoperability

Definition
o Office Automation systems will be deployed across the enterprise in a standardized
fashion so that complete interoperability among agencies exists.

Benefits/Rationale
o Scheduling of meetings and resources across agencies.
o Instant messaging, desktop audio/video conferencing, white boarding, chat,
application sharing, and presence detection across agencies.
o Unified Inbox for email, fax, and telephone messages.
o Collaborative applications can be developed for the enterprise.
o Electronic information can be transferred without the need for conversion utilities.
o Makes information more accessible to users.

Implications
o A single, standardized suite of desktop software across the enterprise.
o A single, standardized set of technologies deployed across the enterprise for
groupware functionality.

Counter Arguments
o Agency business requirements supersede enterprise concerns when deploying
technology solutions.
Principle C - Responsive Training

Definition
o The overall investment in OA will include the responsive training of end users.

Benefits
o More knowledgeable and efficient users.
o Maximize technology ROI through appropriate feature use.
o Reduces support burden.

Implications
o Creation of an enterprise-level training plan for OA systems.
o Agencies may need to develop supplemental training and survey users to determine
their level of knowledge.
o May require higher training investments.
Groupware
 Groupware is technology designed to facilitate the work of groups. This technology may
be used to communicate, cooperate, coordinate, solve problems, compete, or negotiate.
While traditional technologies like the telephone qualify as groupware, the term is
ordinarily used to refer to a specific class of technologies relying on modern computer
networks, such as email, newsgroups, videophones, or chat.
 Some industry observers define groupware as any networked application.
 Others have a narrower definition, requiring, for example, a higher level of user-to-user
interaction via the application for a product to be deemed true "Groupware".
Groupware is built around three key principles: communication, collaboration and coordination.
Groupware technologies are typically categorized along two primary dimensions:
1. Whether users of the groupware are working together at the same time ("real-time" or
"synchronous" groupware) or different times ("asynchronous" groupware), and
2. Whether users are working together in the same place ("collocated" or "face-to-face") or
in different places ("non-collocated" or "distance").
                                 Same time               Different time
                                 ("synchronous")         ("asynchronous")
Same place ("collocated")        Voting, presentation    Shared computers
                                 support
Different place ("distance")     Videophones, chat       Email, workflow
Asynchronous
• Email is by far the most common groupware application (besides of course, the
traditional telephone). While the basic technology is designed to pass simple messages
between 2 people, even relatively basic email systems today typically include interesting
features for forwarding messages, filing messages, creating mailing groups, and
attaching files with a message.
•
Newsgroups and mailing lists are similar in spirit to email systems except that they are
intended for messages among large groups of people instead of 1-to-1 communications.
In practice the main difference between newsgroups and mailing lists is that newsgroups
only show messages to a user when they are explicitly requested (an "on-demand"
service), while mailing lists deliver messages as they become available (an "interrupt-driven" interface).
•
Workflow systems allow documents to be routed through organizations through a
relatively fixed process. A simple example of a workflow application is an expense report
in an organization: an employee enters an expense report and submits it, a copy is
archived then routed to the employee's manager for approval, the manager receives the
document, electronically approves it and sends it on and the expense is registered to the
group's account and forwarded to the accounting department for payment.
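The expense-report route just described can be sketched as a fixed sequence of tasks. This is a minimal illustration of the idea, not any particular workflow product; the `Workflow` class and the step names are invented for the example.

```python
# A minimal sketch of a fixed-route workflow, as in the expense-report
# example above. Class and step names are illustrative only.

class Workflow:
    """Routes a document through an ordered series of tasks."""

    def __init__(self, steps):
        self.steps = list(steps)      # ordered task names
        self.position = 0             # index of the current task
        self.history = []             # completed tasks, in order

    @property
    def done(self):
        return self.position >= len(self.steps)

    @property
    def current_task(self):
        """The task whose responsible party should be notified next."""
        return None if self.done else self.steps[self.position]

    def complete_current(self):
        """Mark the current task done and route to the next step."""
        self.history.append(self.steps[self.position])
        self.position += 1

# The route described in the text:
wf = Workflow(["employee submits report", "archive copy",
               "manager approves", "register to group account",
               "accounting pays"])
while not wf.done:
    wf.complete_current()
```

In a real system each `complete_current` call would also notify the next responsible individual and hand them the data they need, as the text describes.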
•
Hypertext is a system for linking text documents to each other, with the Web being an
obvious example. Whenever multiple people author and link documents, the system
becomes group work, constantly evolving and responding to others' work.
•
Group calendars allow scheduling, project management, and coordination among many
people, and may provide support for scheduling equipment as well. Typical features
detect when schedules conflict or find meeting times that will work for everyone. Group
calendars also help to locate people. Typical concerns are privacy (users may feel that
certain activities are not public matters), completeness and accuracy (users may feel
that the time it takes to enter schedule information is not justified by the benefits of the
calendar).
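The conflict detection such calendars perform reduces to checking whether time intervals overlap. A minimal sketch, assuming meetings are simple (start, end) pairs on a shared clock:

```python
def conflicts(meetings):
    """Return index pairs of meetings whose (start, end) intervals overlap.

    Two half-open intervals [s1, e1) and [s2, e2) overlap exactly when
    s1 < e2 and s2 < e1.
    """
    clashes = []
    for i in range(len(meetings)):
        for j in range(i + 1, len(meetings)):
            s1, e1 = meetings[i]
            s2, e2 = meetings[j]
            if s1 < e2 and s2 < e1:
                clashes.append((i, j))
    return clashes

# Hours of the day as simple numbers: the first two meetings clash.
print(conflicts([(9, 10), (9.5, 11), (13, 14)]))
```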
•
Collaborative writing systems may provide both real-time support and non-real-time
support. Word processors may provide asynchronous support by showing authorship
and by allowing users to track changes and make annotations to documents. Authors
collaborating on a document may also be given tools to help plan and coordinate the
authoring process, such as methods for locking parts of the document or linking
separately authored documents.
Synchronous
• Shared whiteboards allow two or more people to view and draw on a shared drawing
surface even from different locations. This can be used, for instance, during a phone
call, where each person can jot down notes (e.g. a name, phone number, or map) or to
work collaboratively on a visual problem.
•
Video communications systems allow two-way or multi-way calling with live video,
essentially a telephone system with an additional visual component. Cost and
compatibility issues limited early use of video systems to scheduled videoconference
meeting rooms. Video is advantageous when visual information is being discussed, but
may not provide substantial benefit in most cases where conventional audio telephones
are adequate.
•
Chat systems permit many people to write messages in real-time in a public space. As
each person submits a message, it appears at the bottom of a scrolling screen. Chat groups
are usually formed by listing chat rooms by name, location, number of people, topic of
discussion, and so on.
•
Many systems allow for rooms with controlled access or with moderators to lead the
discussions, but most of the topics of interest to researchers involve issues related to
unmoderated real-time communication including: anonymity, following the stream of
conversation, scalability with number of users, and abusive users.
•
Decision support systems are designed to facilitate groups in decision-making. They
provide tools for brainstorming, critiquing ideas, putting weights and probabilities on
events and alternatives, and voting. Such systems enable presumably more rational and
even-handed decisions. Primarily designed to facilitate meetings, they encourage equal
participation by, for instance, providing anonymity or enforcing turn taking.
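The weighting-and-voting feature described above can be illustrated with a toy aggregator; the function and the ballot data are hypothetical, not taken from any real decision support system.

```python
def rank_alternatives(scores):
    """Rank alternatives by the mean of participant ratings.

    `scores` maps an alternative name to a list of individual ratings;
    publishing only the aggregate supports the anonymity the text mentions.
    """
    means = {alt: sum(ratings) / len(ratings) for alt, ratings in scores.items()}
    return sorted(means, key=means.get, reverse=True)

# Hypothetical anonymous ballots from three participants:
ballots = {"plan A": [3, 4, 5], "plan B": [5, 5, 4], "plan C": [2, 3, 2]}
print(rank_alternatives(ballots))   # 'plan B' ranks first
```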
•
Multi-player games have always been reasonably common in arcades, but are
becoming quite common on the Internet. Many of the earliest electronic arcade games
were multi-user, for example, Pong, Space Wars, and car racing games. Games are the
prototypical example of "non-cooperative" multi-user situations, though even competitive
games require players to cooperate in following the rules of the game. Other
communication media, such as chat or video systems, can enhance games.
Groupware vs. Single-User Systems
•
Groupware offers significant advantages over single-user systems. These are some of
the most common reasons people want to use groupware:
 to facilitate communication: make it faster, clearer, more persuasive
 to enable communication where it wouldn't otherwise be possible
 to enable telecommuting
 to cut down on travel costs
 to bring together multiple perspectives and expertise
 to form groups with common interests where it wouldn't be possible to gather a
sufficient number of people face-to-face
 to save time and cost in coordinating group work
 to facilitate group problem-solving
 to enable new modes of communication, such as anonymous interchanges or
structured interactions
Groupware design issues
User testing of groupware is often significantly more difficult than for single-user
systems, for the following reasons:
 Organizing and scheduling for groups is more difficult than for individuals.
 Group interaction style is hard to select for beforehand, whereas individual
characteristics are often possible to determine before a study is conducted.
 Pre-established groups vary in interaction style, and the length of time they've
been a group affects their communication patterns.
 New groups change quickly during the group formation process.
 Groups are dynamic; roles change.
 Many studies need to be long-term, especially when studying asynchronous
groupware.
 Modifying prototypes can be technically difficult because of the added complexity
of groupware over single-user software.
 In software for large organizations, testing new prototypes can be difficult or
impossible because of the disruption caused by introducing new versions into an
organization.
Design Issues
Adoption and Acceptance
•
Many groupware systems simply cannot be successful unless a critical mass of users
choose to use the system. Having a videophone is useless if you're the only one who
has it. Two of the most common reasons for failing to achieve critical mass are lack of
interoperability and the lack of appropriate individual benefit.
Interoperability
•
In the early 90s, AT&T and MCI both introduced videophones commercially, but their two
systems couldn't communicate with each other. This lack of interoperability/compatibility
meant that anyone who wanted to buy a videophone had to make sure that everyone
they wanted to talk to would buy the same system. Compatibility issues lead to general
wariness among customers, who want to wait until a clear standard has emerged.
Perceived benefit
•
Even when everyone in the group may benefit, if individuals make the choice, the
system may not succeed. An example is with office calendar systems: if everyone enters
all of their appointments, then everyone has the benefit of being able to safely schedule
around other people's appointments. However, if it is not easy to enter appointments, users
may perceive it as more beneficial to leave their own appointments off while viewing other
people's appointments.
•
This disparity of individual and group benefit is discussed in game theory as the
prisoner's dilemma or the commons problem. To solve this problem, some groups can
apply social pressure to enforce groupware use (as in having the boss insist that it's
used), but otherwise it's a problem for the groupware designer who must find a way to
make sure the application is perceived as useful for individuals even outside the context
of full group adoption.
Avoiding Abuse
•
Most people are familiar with the problem of spamming with email. Some other common
violations of social protocol include: taking inappropriate advantage of anonymity,
sabotaging group work, or violating privacy.
The Commons Problem
•
If a village has a "commons" area for grazing cattle then this area can be a strong
benefit to the community as long as everyone uses it with restraint. However, individuals
have the incentive to graze as many cattle as possible on the commons as opposed to
their own private property. If too many people send too many cattle to the commons, the
area will be destroyed, and the whole village is worse off as a result.
•
There are a couple of straightforward solutions to the Commons Problem: an appropriate
fee can be charged for each head of cattle or a limit can be imposed on the number of
cattle any individual may bring. These solutions are an appropriate starting point for
solving problems of abuse in groupware.
Customization and Grounding
•
When groups are working together with the same information, they may individually
desire customized views. The challenge of customized views is to support grounding:
the establishment of a common ground or shared understanding of what information is
known and shared between the different users.
•
Take for example a healthcare setting. When a physician talks to a lab technician about
a patient, they may both have access to the same patient record, but because of their
different interests, each may want a view on their computer screen which selects and
emphasizes different pieces of information. This may cause confusion when a given
piece of information, and therefore an obvious inference about a patient's condition, is
readily available to one person and not the other. Another concern is if one user chooses
to display exceptional values in red and another chooses to display exceptional values in
blue, different users may be confused. When working together on the same screen, this
inconsistency can result in dangerous miscommunication.
Session Control
• A session is a situation where a group of people is in a conversation together at a given
time, such as a group of people together in a chat room or people talking together over
the telephone.
•
Session control issues include finding out what rooms are available, determining who
can enter and exit the room, and when and how.
Floor Control
• Once people have joined a conversational session, it must be decided what kind of
access each person has to shared artifacts, or conversational props. For instance, when
using a shared whiteboard, can everyone draw on it at the same time (simultaneous
access), can only one person access it at a time (by passing a token, or baton), is there
a moderator who controls access, and is there a time limit for each person?
Privacy
• Privacy, Security, and Anonymity: Whenever using groupware, some information
needs to be shared, and there is a concern that all other information remain private, and
that critical information be secure even against aggressive attempts to obtain the
information. In many situations, users choose to be anonymous or use a consistent
pseudonym. Anonymity can be crucial in encouraging fair participation in discussions
and is useful for providing protection from harassment.
•
Sharing Information, Identification, and Accountability: On the other hand, there is
continuing pressure to share more information. The more information gets shared, the
more easily common ground can be achieved. Sharing information about yourself
enables many systems to provide more useful customization and matching to your
interests. Furthermore, while anonymity can protect an individual, there are also quite
legitimate reasons for identifying people for accountability, especially where security and
the risk of abusive behavior are involved.
•
Control and Reciprocity: To resolve these conflicting needs, it's important to give
users as much control as possible over what information gets shared and what remains
private. Let users decide how much information to share, and use that to determine what
kinds of information they can access. One example of privacy policy is the principle of
reciprocity: if a user wants information about another user, then they must provide the
equivalent information about themselves. Reciprocity isn't always the right policy, but
serves as a useful starting point.
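The reciprocity principle can be expressed as a one-line policy check; the field names and the `visible_fields` helper below are illustrative, not from any real system.

```python
def visible_fields(requester_shared, target_shared):
    """Apply the reciprocity principle: a user may see only those
    categories of information about another user that they have
    chosen to share about themselves."""
    return requester_shared & target_shared

# Hypothetical sharing choices:
alice = {"calendar", "location", "phone"}
bob = {"calendar", "phone"}
print(visible_fields(bob, alice))   # Bob sees only what he also shares
```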
Awareness
• In addition to explicit communication, such as sending a message or speaking to
someone, many group work situations benefit from implicit communication, such as
indirect gestures, information about people's environment (whether their office door is
open or closed), or biographical information about people in a conversation (what their
job position is and what they had for lunch).
•
This information helps people to establish common ground, coordinate their activities,
and helps avoid surprises.
•
Awareness information takes many forms. In videoconferencing, simply providing a
wide-angle camera lens can provide a greater degree of environmental awareness.
•
In email, simple information about the time and date of the message or the signature file
of the sender (i.e. with contact info, company info, etc.) gives context for making sense
of the message. Awareness tools can be designed for letting others know when you're in
the office or not, letting them know what document you're working on, or how you're
feeling at any given time.
If you have integrity, nothing else matters. If you don't have integrity, nothing else
matters.
~Alan Simpson
Web Technologies/ Internet
Internet Infrastructure
One of the greatest things about the Internet is that nobody really owns it. It is a global
collection of networks, both big and small. These networks connect together in many different
ways to form the single entity that we know as the Internet. In fact, the very name comes from
this idea of interconnected networks.
Basics
Since its beginning in 1969, the Internet has grown from four host computer systems to tens of
millions. The Internet Society, a non-profit group established in 1992, oversees the formation of
the policies and protocols that define how we use and interact with the Internet.
A Hierarchy of Networks
Every computer that is connected to the Internet is part of a network, even the one in your
home. At work, you may be part of a local area network (LAN), but you most likely still connect
to the Internet using an ISP that your company has contracted with.
When you connect to your ISP, you become part of their network. The ISP may then connect to
a larger network and become part of their network. The Internet is simply a network of networks.
Most large communications companies have their own dedicated backbones connecting various
regions. In each region, the company has a Point of Presence (POP). The POP is a place for
local users to access the company's network, often through a local phone number or dedicated
line. There is no overall controlling network. Instead, there are several high-level networks
connecting to each other through Network Access Points or NAPs.
The Network Example
Company A is a large ISP. In each major city, Company A has a POP. The POP in each city is a
rack full of modems that the ISP's customers dial into. Company A leases fiber optic lines from
the phone company to connect the POPs together.
Company B is a corporate ISP. Company B builds large buildings in major cities and
corporations locate their Internet server machines in these buildings. Company B is such a
large company that it runs its own fiber optic lines between its buildings so that they are all
interconnected.
In this arrangement, all of Company A's customers can talk to each other, and all of Company
B's customers can talk to each other, but there is no way for Company A's customers and
Company B's customers to intercommunicate.
Therefore, Company A and Company B both agree to connect to NAPs in various cities, and
traffic between the two companies flows between the networks at the NAPs.
In the real Internet, dozens of large Internet providers interconnect at NAPs in various cities.
The Internet is a collection of huge corporate networks that agree to all intercommunicate with
each other at the NAPs. In this way, every computer on the Internet connects to every other.
Bridging the Divide
All of these networks rely on NAPs, backbones and routers to talk to each other. What is
incredible about this process is that a message can leave one computer and travel halfway
across the world through several different networks and arrive at another computer in a fraction
of a second! The routers determine where to send information from one computer to another.
Routers are specialized computers that send your messages and those of every other Internet
user speeding to their destinations along thousands of pathways. A router has two separate,
but related, jobs:
1. It ensures that information doesn't go where it's not needed. This is crucial for keeping
large volumes of data from clogging the connections of "innocent bystanders."
2. It makes sure that information does make it to the intended destination.
In performing these two jobs, a router is extremely useful in dealing with two separate computer
networks. It joins the two networks, passing information from one to the other.
It also protects the networks from one another, preventing the traffic on one from unnecessarily
spilling over to the other. Regardless of how many networks are attached, the basic operation
and function of the router remains the same. Since the Internet is one huge network made up of
tens of thousands of smaller networks, its use of routers is an absolute necessity.
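A common way routers make the "where to send it" decision is longest-prefix match against a routing table. This is a sketch using Python's standard `ipaddress` module; the table entries and the `next_hop` helper are made up for illustration.

```python
import ipaddress

def next_hop(dest, routing_table):
    """Pick the route whose prefix matches `dest` most specifically
    (longest-prefix match). `routing_table` is a list of
    (prefix, next-hop) pairs; the entries here are hypothetical."""
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix, hop in routing_table:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

table = [("0.0.0.0/0", "upstream"),        # default route to the backbone
         ("10.0.0.0/8", "internal"),
         ("10.1.0.0/16", "branch-office")]
print(next_hop("10.1.2.3", table))   # branch-office (most specific match)
print(next_hop("8.8.8.8", table))    # upstream (only the default matches)
```

The two jobs in the list above fall out naturally: traffic for a local prefix never leaves for the backbone, and everything else is forwarded toward its destination.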
Backbones
Backbones are typically fiber optic trunk lines. The trunk line has multiple fiber optic cables
combined together to increase the capacity. Fiber optic cables are designated OC for optical
carrier, such as OC-3, OC-12 or OC-48. An OC-3 line is capable of transmitting 155 Mbps while
an OC-48 can transmit 2,488 Mbps (2.488 Gbps).
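The OC designations follow a simple rule: each OC-n level carries n times the OC-1 base rate of 51.84 Mbps, which is where the 155 Mbps and 2,488 Mbps figures above come from.

```python
def oc_rate_mbps(n):
    """Optical-carrier levels are multiples of the OC-1 base rate
    of 51.84 Mbps, so OC-n carries n * 51.84 Mbps."""
    return n * 51.84

for level in (3, 12, 48):
    print(f"OC-{level}: {oc_rate_mbps(level):.2f} Mbps")
```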
Today there are many companies that operate their own high-capacity backbones, and all of
them interconnect at various NAPs around the world. In this way, everyone on the Internet, no
matter where they are and what company they use, is able to talk to everyone else on the
planet. The entire Internet is a gigantic, sprawling agreement between companies to
intercommunicate freely.
Internet Protocol: IP Addresses
Every machine on the Internet has a unique identifying number, called an IP Address. The IP
stands for Internet Protocol, which is the language that computers use to communicate over the
Internet. A protocol is the pre-defined way that someone who wants to use a service talks with
that service. The "someone" could be a person, but more often it is a computer program like a
Web browser.
A typical IP address looks like this:
216.27.61.137
To make it easier for us humans to remember, IP addresses are normally expressed in decimal
format as a dotted decimal number like the one above. But computers communicate in binary
form.
Look at the same IP address in binary: 11011000.00011011.00111101.10001001
The four numbers in an IP address are called octets, because they each have eight positions
when viewed in binary form. If you add all the positions together, you get 32, which is why IP
addresses are considered 32-bit numbers. Since each of the eight positions can have two
different states (1 or 0), the total number of possible combinations per octet is 2^8, or 256.
So each octet can contain any value between zero and 255. Combine the four octets and you
get 2^32, or 4,294,967,296 possible unique values! Out of the almost 4.3 billion possible
combinations, certain values are restricted from use as typical IP addresses. For example, the
IP address 0.0.0.0 is reserved for the default network and the address 255.255.255.255 is used
for broadcasts. The octets serve a purpose other than simply separating the numbers. They are
used to create classes of IP addresses that can be assigned to a particular business,
government or other entity based on size and need.
The octets are split into two sections: Net and Host. The Net section, made up of the leading
octet(s), identifies the network that a computer belongs to. The Host section (sometimes
referred to as Node), made up of the remaining octet(s), identifies the actual computer on that
network. How many octets fall in each section depends on the address class. There are five IP
classes plus certain special addresses: Class A, Class B, Class C, Class D and Class E.
Default Network - The IP address of 0.0.0.0 is used for the default network.
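The dotted-decimal/binary relationship described above can be checked directly; a short sketch:

```python
def ip_to_binary(ip):
    """Show each octet of a dotted-decimal IPv4 address in its
    8-bit binary form, as in the example in the text."""
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

def ip_to_int(ip):
    """Pack the four octets into the single 32-bit number the
    computer actually works with."""
    value = 0
    for octet in ip.split("."):
        value = (value << 8) | int(octet)
    return value

print(ip_to_binary("216.27.61.137"))  # 11011000.00011011.00111101.10001001
print(ip_to_int("255.255.255.255"))   # 4294967295, the broadcast address
```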
IP: Domain Name System
When the Internet was in its infancy, it consisted of a small number of computers hooked
together with modems and telephone lines. You could only make connections by providing the
IP address of the computer you wanted to establish a link with. For example, a typical IP
address might be 216.27.22.162. This was fine when there were only a few hosts out there, but
it became unwieldy as more and more systems came online.
The first solution to the problem was a simple text file maintained by the Network Information
Center that mapped names to IP addresses. Soon this text file became so large it was too
cumbersome to manage. In 1983, Paul Mockapetris devised the Domain Name
System (DNS), which maps text names to IP addresses automatically. This way you only need
to remember http://yahoo.com, for example, instead of yahoo.com's IP address.
Uniform Resource Locators
When you use the Web or send an e-mail message, you use a domain name to do it. For
example, the Uniform Resource Locator (URL) "http://www.yahoo.com" contains the domain
name yahoo.com. So does this e-mail address: email@yahoo.com. Every time you use a
domain name, you use the Internet's DNS servers to translate the human-readable domain
name into the machine-readable IP address.
Top-level domain names, also called first-level domain names, include .COM, .ORG, .NET,
.EDU and .GOV. Within every top-level domain there is a huge list of second-level domains.
For example, in the .COM first-level domain there is:
HowStuffWorks
Yahoo
Microsoft
Every name in the .COM top-level domain must be unique. The left-most word, like www, is the
host name. It specifies the name of a specific machine (with a specific IP address) in a domain.
A given domain can, potentially, contain millions of host names as long as they are all unique
within that domain. DNS servers accept requests from programs and other name servers to
convert domain names into IP addresses.
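Splitting a URL into the host name and domain levels described above can be done with Python's standard `urllib.parse` module:

```python
from urllib.parse import urlparse

# Break a URL into the pieces discussed in the text.
url = urlparse("http://www.yahoo.com/index.html")
host = url.hostname               # 'www.yahoo.com'
labels = host.split(".")

print("host name:", labels[0])            # www
print("second-level domain:", labels[1])  # yahoo
print("top-level domain:", labels[2])     # com
```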
When a request comes in, the DNS server can do one of four things with it:
1. It can answer the request with an IP address because it already knows the IP address
for the requested domain.
2. It can contact another DNS server and try to find the IP address for the name requested.
It may have to do this multiple times.
3. It can say, "I don't know the IP address for the domain you requested, but here's the IP
address for a DNS server that knows more than I do."
4. It can return an error message because the requested domain name is invalid or does
not exist.
Domain Named Servers
Suppose you type the URL www.altavista.com into your browser. The browser contacts a DNS server
to get the IP address. A DNS server would start its search for an IP address by contacting one
of the root DNS servers. The root servers know the IP addresses for all of the DNS servers that
handle the top-level domains (.COM, .NET, .ORG, etc.).
Your DNS server would ask the root for www.altavista.com, and the root would say, "I don't
know the IP address for www.altavista.com, but here's the IP address for the .COM DNS
server."
Your name server then sends a query to the .COM DNS server asking it if it knows the IP
address for www.altavista.com. The DNS server for the COM domain knows the IP addresses
for the name servers handling the www.altavista.com domain, so it returns those.
Your name server then contacts the DNS server for www.altavista.com and asks if it knows the
IP address for www.altavista.com. It actually does, so it returns the IP address to your DNS
server, which returns it to the browser, which can then contact the server for www.altavista.com
to get a Web page.
One of the keys to making this work is redundancy. There are multiple DNS servers at every
level, so that if one fails, there are others to handle the requests. The other key is caching. Once
a DNS server resolves a request, it caches the IP address it receives. Once it has made a
request to a root DNS server for any .COM domain, it knows the IP address for a DNS server
handling the .COM domain, so it doesn't have to bug the root DNS servers again for that
information. DNS servers can do this for every request, and this caching helps to keep things
from bogging down.
Even though it is totally invisible, DNS servers handle billions of requests every day and they
are essential to the Internet's smooth functioning. The fact that this distributed database works
so well and so invisibly day in and day out is a testimony to the design.
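From a program's point of view, this entire recursive walk is hidden behind a single resolver call. A sketch using Python's standard `socket` module; the resolver it talks to, not this code, does the root-to-authoritative legwork:

```python
import socket

def lookup(name):
    """Translate a human-readable name into an IP address via the
    system resolver, or return None if the name does not resolve."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None   # invalid or non-existent domain name

print(lookup("localhost"))   # 127.0.0.1 on most systems
```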
Clients and Servers
Internet servers make the Internet possible. All of the machines on the Internet are either
servers or clients. The machines that provide services to other machines are servers. And the
machines that are used to connect to those services are clients. There are Web servers, e-mail
servers, FTP servers and so on serving the needs of Internet users all over the world. When you
connect to www.google.com to read a page, you are a user sitting at a client machine. You are
accessing the Google Web server. The server machine finds the page you requested
and sends it to you. Clients that come to a server machine do so with a specific intent, so clients
direct their requests to a specific software server running on the server machine. If you are
running a Web browser on your machine, it will want to talk to the Web server on the server
machine, not the e-mail server. A server has a static IP address that does not change very
often. A home machine that is dialing up through a modem typically has an IP address
assigned by the ISP every time you dial in. That IP address is unique for your session -- it may
be different the next time you dial in. This way, an ISP only needs one IP address for each
modem it supports, rather than one for each customer.
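The client side of this arrangement can be sketched with a raw socket speaking minimal HTTP: connect to the server machine's address, direct the request to the Web server software listening there, and read back the page. This is an illustration of the client/server exchange, not production code.

```python
import socket

def fetch_page(host, port=80):
    """Connect to the Web server on `host` and request its front page
    using a bare-bones HTTP/1.0 exchange."""
    with socket.create_connection((host, port), timeout=5) as conn:
        request = f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n"
        conn.sendall(request.encode())
        response = b""
        # Read until the server closes the connection.
        while chunk := conn.recv(4096):
            response += chunk
    return response.decode(errors="replace")

# Example (requires network access):
# print(fetch_page("www.google.com")[:80])
```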
Web Sites and Portals
A collection of related Web pages is called a Web site. Web sites are housed on Web servers,
Internet host servers that often store thousands of individual pages. Popular Web sites receive
millions of hits or page views every day. When you visit a Web page – that is download a page
from the Web server to your computer for viewing – the act is commonly called “hitting” the Web
site.
Web sites are now used to distribute news, interactive educational services, product information
and catalogs, highway traffic reports, and live audio and video among other items. Interactive
Web sites permit readers to consult databases, order products and information, and submit
payment with a credit card or other account number.
A Web portal is a free, personalized start page, hosted by a Web content provider, which you
can personalize in several ways. Your personalized portal can provide various content and links
that simply cannot be found in typical corporate Web sites. By design, a portal offers two
advantages over a typical personal home page:
 Rich, Dynamic Content
Your portal can include many different types of information and graphics, including news, sports, weather, entertainment news, financial information, multiple search engines, chat room access, email and more.

 Customization
You customize a portal page by selecting the types of information you want to view. Many portal sites allow you to view information from specific sources, such as CNN, Time, and others. Some portals even provide streaming multimedia content. You can also choose the hyperlinks that will appear in your portal, making it easy to jump to other favorite sites. Most portal sites let you change your custom selections whenever you want.
Intranet & Extranet
Intranet is a term used to describe an organization's private network that can be used to access one or more internal websites (in practice the term frequently refers not to the network itself, but to the use of Web technology – browsers and servers – to provide access to enterprise applications and information). The sites can be located centrally or distributed among departments or divisions. The primary purpose of an intranet is to provide access to information and computing resources and to allow for collaborative work. An enterprise can provide secure remote access to its intranet via the Internet using Virtual Private Network (VPN) technology, such as the Point-to-Point Tunneling Protocol (PPTP).
An extranet, like an intranet, is a controlled-access network based on Internet protocols (or, in practice, the Web sites accessible through that network) that allows access to designated people outside the enterprise, such as customers and suppliers. An extranet user's access to enterprise information resources is greater than that offered to the casual Web surfer, but less than that given to employees over the corporate intranet. Security features such as passwords and digital certificates are used to restrict access to intranets and extranets.
A single enterprise may maintain an intranet, extranet and a public Internet presence
simultaneously. Whether someone refers to the public Web, an intranet or an extranet, the
same functional model holds: the exchange of Web pages between servers and clients.
Static & Dynamic IP Addresses
IP addresses are assigned by ISPs to identify and locate each computer uniquely on the Internet. This is similar to our telephone numbers: each person knows which telephone number to call when they want to reach a certain individual.

Static IP address: Static IP addresses, as the name suggests, are 'static' – they remain constant for a particular user. These addresses are assigned by ISPs for a fee. They are used when the user wants to offer web-based services, where it is required that other users know the server's location; that location is identified by the static IP address.
Dynamic IP addresses: When the Internet was first conceived, its architects did not foresee the need for an unlimited number of IP addresses. To get around that problem, many Internet service providers limit the number of static IP addresses they allocate, and economize on the remaining IP addresses they possess by temporarily assigning an IP address to a requesting Dynamic Host Configuration Protocol (DHCP) computer from a pool of IP addresses. The temporary IP address is called a dynamic IP address. This IP address is valid for a particular session only; after the session is over, the IP address is added back to the pool of available addresses so that it can be reused for other computers.
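The pooling idea behind dynamic addresses can be sketched as follows. This is a hypothetical illustration only – the addresses and client names are invented, and a real DHCP server also handles lease durations, renewals and address conflicts.

```python
class AddressPool:
    """Toy DHCP-style pool: hand out a free address per session, reclaim it after."""
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}   # client id -> address currently assigned

    def connect(self, client):
        address = self.free.pop()        # temporarily assign a dynamic IP
        self.leases[client] = address
        return address

    def disconnect(self, client):
        # Session over: the address goes back into the pool for reuse.
        self.free.append(self.leases.pop(client))

pool = AddressPool(["10.0.0.1", "10.0.0.2"])
a = pool.connect("alice")      # alice dials in and gets an address
pool.disconnect("alice")       # alice hangs up; the address is released
b = pool.connect("bob")        # bob receives the address alice just released
print(a, b)
```

This is why the pool only needs as many addresses as there are simultaneous sessions, not one per customer.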
Search Engine
A user enters a key word or key phrase. The search engine then returns a result set of links that it "thinks" are most relevant to the key word or phrase.

"Search engine" – obviously there are the main three (Google, Yahoo, MSN), but a search engine is any application that has a simple front-end interface and a huge database of words and websites.

"Returns a result set" – a list of websites ranked in order based upon the search engine's algorithm.

"Algorithm" – each search engine has its own special code, or algorithm, that sorts and ranks the database of websites. Needless to say, the algorithm is very, very complex; it is made up of thousands and thousands of variables that determine the sort order for the key words entered into the search engine.

"Links" – basically, each link is a website that has been indexed, or spidered, into the search engine's database.

"Thinks" – the search engine's brain, if you will, is its algorithm. It has rules and values that help determine the sort order of a result set.

"Most relevant" – again determined by the algorithm; the search engine's job is to give you a list of websites it thinks match what you want.

Sometimes it works; sometimes the search engine fails to return what the user is looking for. Search engines have special programs called spiders that constantly crawl the web for new pages and log their addresses – that is what the meta tags in the HTML source code are for.
A wise man can see more from the bottom of a well than a fool can from a mountain
top.
~Author Unknown
Key differences between applications designed for the web/internet
vis-à-vis conventional applications. [For reference only]
Architecture Differences
Although Internet and client/server applications have basic architectural similarities—such as
application servers, database servers, and a graphical user interface—there are several key
differences that have increased the popularity of Internet applications over client/server
applications:
 Client Processing
 Network Protocols
 Security
 Content Management
 Extensibility and Integration
These architecture differences fundamentally change the characteristics of applications that
developers build, how people use those applications, and the collaborative business nature of
the enterprise.
Client Processing
Both Internet and client/server architectures consist of multiple tiers, but the client tier is very
different. This is probably the single biggest technology difference between Internet and
client/server applications. Client/server is extremely client intensive, often referred to as a “fat
client” architecture.
Very simply, with Internet applications, the web browser or client access device is the client. The Internet application's user interface is generated at the server, delivered to the web browser as HTML, and rendered at the client device. All business rules are executed on the server.
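The "business rules on the server, HTML to the browser" split can be sketched as below. The product data, discount rule and function names are hypothetical; the point is only that the client never sees the rule, just the finished HTML.

```python
# All "business rules" run on the server; the client only renders HTML.
PRICES = {"widget": 10.0, "gadget": 25.0}   # hypothetical server-side data

def apply_discount(total):
    """A business rule that lives only on the server (10% off orders over 20)."""
    return total * 0.9 if total > 20 else total

def render_order_page(item, quantity):
    """Generate the user interface as HTML, ready to deliver to the browser."""
    total = apply_discount(PRICES[item] * quantity)
    return (f"<html><body>"
            f"<h1>Order: {quantity} x {item}</h1>"
            f"<p>Total: {total:.2f}</p>"
            f"</body></html>")

html = render_order_page("gadget", 2)
print(html)
```

A "fat client" would instead ship `PRICES` and `apply_discount` to every desktop, which is exactly what the thin-client architecture avoids.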
Network Protocols
Internet applications use standard Internet protocols and are compatible with Internet security
models. The client uses secure HTTP and Secure Sockets Layer (SSL)—which is supported by
the web browser. This protocol is familiar to firewall administrators and is supported by firewalls.
Other protocols, especially non-standard protocols, are typically blocked by firewalls.
Client/server applications do not use HTTP to communicate between the client and server. They
use proprietary protocols not natively supported by the web browser. This prevents client/server
applications from executing over the Internet, seriously limiting access to the application.
The use of HTTP in Internet applications is the key to opening up access to your application
over the Internet so that anyone in any location with a web browser can interact with your
application and collaborate in your business process. Convenient access is a key reason why
the Internet has been successful and grown so rapidly.
Security
In addition to working with the security provided by a firewall, Internet applications are
compatible with the emerging Internet security model. This model is based on user
authentication—using user names and passwords over an SSL connection—along with digital
certificates or tokens. User administration and access control are performed at the application server level, based on the Lightweight Directory Access Protocol (LDAP). Organizations are increasingly using LDAP as a central directory in the enterprise to maintain the growing number of user profiles. Internet applications use LDAP for end-user authentication.
The traditional client/server approach has not leveraged many of these new Internet security
technologies. User profiles are maintained in the database. System administrators must
maintain user profiles in many places instead of a central LDAP directory. End users must
remember numerous user IDs and passwords.
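The central-directory idea can be sketched with a plain dictionary standing in for the LDAP server. The username and the SHA-256 password hashing here are illustrative assumptions, not how any particular directory product works; the point is that every application defers to the same store instead of keeping its own user table.

```python
import hashlib

# Hypothetical central directory: one user profile, shared by every application
DIRECTORY = {
    "asmith": hashlib.sha256(b"s3cret").hexdigest(),
}

def authenticate(username, password):
    """Every Internet application calls the same central check,
    so the user has one ID and password instead of many."""
    stored = DIRECTORY.get(username)
    if stored is None:
        return False
    return stored == hashlib.sha256(password.encode()).hexdigest()

print(authenticate("asmith", "s3cret"))   # accepted
print(authenticate("asmith", "wrong"))    # rejected
```

In the client/server world this check would be duplicated in each application's own database, which is the administration burden the paragraph describes.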
Structured and Unstructured Data
A client/server application deals with structured, relational data that resides in the application database. Client/server typically does not deal with data outside its own database, or with unstructured data. A client/server application may store attachments such as documents or spreadsheets in a database, or it may invoke third-party systems that display data from other systems, but this is nowhere near the degree to which Internet applications accept unstructured data and data from outside systems.
Portals are an increasingly popular piece of Internet applications and use a very different data
delivery approach than client/server. Portal technology provides common services such as
simple navigation, search, content management, and personalization. The portal delivers
content to end users from a wide variety of sources and this content is usually unstructured
data. The Internet applications accessed from the portal support both structured and
unstructured data through the use of popular technologies such as HTML, HTTP, and XML.
Internet applications can be designed with the assumption that the data can be structured or
unstructured and can reside outside the database from any type of Internet enabled content
provider. This results in the delivery of much richer content to the end user.
Extensibility & Integration
The Internet offers unlimited sources of information because of HTTP, HTML, and XML
standards. This standard technology enables the open flow of information between people and
systems. Internet applications leverage this open flow of information to enable system to system
collaboration in several ways:
 An Internet application can host content from other Internet applications, and can also act as a content provider, using HTTP, HTML, and XML.
 Systems can pass transactions between themselves using XML messaging. This is similar to traditional Electronic Data Interchange (EDI), but has been extended beyond the American National Standards Institute (ANSI) standard set of EDI transactions.
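XML messaging between two systems can be sketched with Python's standard library. The `<order>` element and its fields are invented for illustration; real B2B exchanges agree on a shared schema first.

```python
import xml.etree.ElementTree as ET

# System A builds an XML "transaction" message...
order = ET.Element("order", id="1001")
ET.SubElement(order, "item").text = "bolt"
ET.SubElement(order, "quantity").text = "500"
message = ET.tostring(order, encoding="unicode")

# ...and System B, which may be built on completely different technology,
# parses the same message using nothing but the shared XML standard.
received = ET.fromstring(message)
item = received.find("item").text
qty = int(received.find("quantity").text)
print(received.get("id"), item, qty)
```

Because both sides only depend on the XML text, neither needs the proprietary network or rigid transaction set of classic EDI.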
Integration is important with client/server technology, but it only moved data in and out of the application database. Internet technologies were not used with client/server to address integration, and the idea of hosting content from other systems was not really considered. EDI has also been part of client/server applications, but it uses proprietary, expensive networks and has a limited, rigid transaction set. Another type of integration made popular by client/server is desktop integration with applications such as Microsoft Excel and Microsoft Word. This has proved to be costly, due to client configuration, and limited in functionality.
Leveraging new technologies, Internet applications can be extended and integrated to much
greater degrees than client/server applications. Integration never before considered with
client/server is being created today with Internet applications.
Of course there's a lot of knowledge in universities: the freshmen bring a little in; the
seniors don't take much away, so knowledge sort of accumulates.
Abbott Lawrence Lowell
Application Service Providers
ASPs are a completely new way to sell and distribute software and software services. Although
ASPs were possible before the advent of the Web, the Web makes them so easy to create that they
have proliferated hugely in the last several years.
The Web and the Internet began to really heat up and receive significant media exposure starting
around 1994. Initially, the Web started as a great way for academics and researchers to distribute
information; but as millions of consumers flocked to the Internet, it began to spawn completely new
business models. Three good examples of innovative models include:
 Amazon - Amazon (which opened its doors in July 1995) houses a database of millions of products that anyone can browse at any time. It would have been impossible to compile a list this large in any medium other than the Web.
 Ebay - Online auctions make it easy and inexpensive for millions of people to buy and sell any imaginable item. It would be impossible to do this at a reasonable cost or in a timely manner with any medium other than the Web.
 Epinions - Thousands of people contribute to a shared library of product reviews. One of the Web's greatest strengths is its worldwide view and collaborative possibilities.
An Internet ASP
Even though airlines fit the model for an ASP, we generally do not refer to airlines as ASPs. The
terms "ASP" and "Application Service Provider" are applied specifically to companies that provide
services via the Internet. In most cases, the term ASP has come to denote companies that supply
software applications and/or software-related services over the Internet.
Here are the most common features of an ASP:
 The ASP owns and operates a software application.
 The ASP owns, operates and maintains the servers that run the application. The ASP also employs the people needed to maintain the application.
 The ASP makes the application available to customers everywhere via the Internet, either in a browser or through some sort of "thin client."
 The ASP bills for the application either on a per-use basis or on a monthly/annual fee basis. In many cases, however, the ASP can provide the service for free or will even pay the customer.
Advantages of ASPs
The ASP model has evolved because it offers some significant advantages over traditional
approaches. Here are some of the most important advantages:
 Especially for small businesses and startups, the biggest advantage is a low cost of entry and, in most cases, an extremely short setup time.
 The pay-as-you-go model is often significantly less expensive for all but the most frequent users of the service.
 The ASP model, as with any outsourcing arrangement, eliminates head count. IT headcount tends to be very expensive and very specialized (like pilots in the airline example), so this is frequently advantageous.
 The ASP model also eliminates specialized IT infrastructure for the application as well as supporting applications. For example, if the application you want to use requires an Oracle or MS-SQL database, you would otherwise have to support both the application and the database.
 The ASP model can shift Internet bandwidth to the ASP, who can often provide it at lower cost.
One thing that led to the growth of ASPs is the high cost of specialized software. As the costs grow,
it becomes nearly impossible for a small business to afford to purchase the software, so the ASP
makes using the software possible.
Another important factor leading to the development of ASPs has been the growing complexity of
software and software upgrades. Distributing huge, complex applications to the end user has
become extremely expensive from a customer service standpoint, and upgrades make the problem
worse. In a large company where there may be thousands of desktops, distributing software (even
something as simple as a new release of Microsoft Word) can cost millions of dollars. The ASP
model eliminates most of these headaches.
To get something done a committee should consist of no more than three people, two
of whom are absent.
Robert Copeland
Distinguish between
Main/ Primary Memory
1. Used to store a variety of critical
information required for processing by
CPU.
2. Two types of memory in the Immediate
Access Store of the computer, RAM and
ROM
3. Made up of a number of memory locations or cells.
4. Measured in terms of capacity and speed
5. Storage capacity of main memory is
limited.
6. Cost is high for high speed storage and
hence high for primary storage.
7. The main memory stores the program
instructions and the data in binary
machine code.
8. Offers temporary storage of data
(Volatile)
Secondary Memory
1. Essential to any computer system to provide backup storage.
2. The two main ways of storing data are serial access and direct access, e.g. economical storage of large volumes of data on magnetic media such as floppy disk and magnetic disk.
3. Made up of sectors and tracks.
4. Measured in terms of storage space.
5. Storage capacity of secondary memory is huge.
6. Cost is comparatively low for secondary memory and hence for secondary storage.
7. The secondary memory stores data in the form of bytes made up of bits.
8. Offers permanent storage of data. (Non-volatile)

An Ordinary Desktop
1. Desktop has one processor.
2. Normal memory is in MB.
3. Fewer slots available for connecting devices.
4. Used for low-performance applications.
5. e.g. Mail server.

Professional Grade Server
1. Server has more than one processor.
2. Normal memory is in GB.
3. More slots available for connecting devices.
4. Used for high-performance applications.
5. e.g. Data server, networking server, proxy server.
Graphical User Interface
1. Generally used in multimedia applications.
2. Control lies in graphical features such as toolbars, buttons or icons.
3. Used to create animations or pictures.
4. A variety of input devices are used to manipulate text and images as visually displayed.
5. Employs a graphical interface, e.g. web pages and image maps, which helps the user navigate sites. E.g. Windows, Mac.

Character User Interface
1. Generally used in programming languages.
2. Control lies in character features such as textual elements or characters.
3. Used to create words and sentences.
4. Enables users to specify desired options through function keys.
5. Can create popup / pull-down menus; scrolling text is possible. E.g. Unix, Cobol, FoxPro.
Welingkar
MFM– Sem III
Compilers
1. A compiler is a translation program that translates the instructions of a high-level language into machine language.
2. The compiler merely translates the entire source program into an object program and is not involved in execution.
3. The object code is permanently saved for future use and is used every time the program is to be executed.
4. Compilers are complex programs.
5. Require large memory space.
6. Less time consuming.
7. Runs faster, as no translation is required every time the code is executed, since it is precompiled.
8. Needs the source code to be recompiled after any changes are made for them to take effect.
9. Slow for debugging and testing.

Interpreters
1. An interpreter is another type of translator, used for translating a high-level language into machine code.
2. The interpreter is involved in execution also.
3. No object code is saved for future use, because translation and execution alternate.
4. Interpreters are easy to write.
5. Does not require large memory space.
6. More time consuming.
7. Each statement requires translation every time the source code is executed.
8. Faster response to changes made in the source code, as it eliminates the need to recompile the program.
9. Good for faster debugging.

Batch Processing
1. Applicable for high-volume transactions – payroll / invoicing.
2. Data is collected over time periods and processed in batches.
3. No direct access to the system for the user.
4. Files are online only when processing takes place.

Online Processing
1. Suitable for business control applications – railway reservation.
2. Random data input as events occur.
3. All users have direct access to the system.
4. Files are always online.
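The compiler/interpreter distinction above can be mimicked with Python's built-in `compile()` and `eval()`. This is only an illustrative sketch – a real compiler emits machine code rather than Python bytecode – but the translate-once-versus-translate-every-time trade-off is the same.

```python
# "Compiler" approach: translate the source once, then reuse the object code.
source = "3 * x + 1"
code_object = compile(source, "<expr>", "eval")   # translation happens once
results_compiled = [eval(code_object, {"x": x}) for x in range(3)]

# "Interpreter" approach: translate the source again on every execution.
results_interpreted = [eval(compile(source, "<expr>", "eval"), {"x": x})
                       for x in range(3)]

print(results_compiled)      # [1, 4, 7]
print(results_interpreted)   # same answers, but translated three times
```

Both runs produce identical results; the compiled version simply pays the translation cost once instead of on every execution, which is why precompiled programs run faster but must be recompiled after each source change.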
Centralised Data Processing & Distributed Processing
Historically, mainframe computers were widely used in business data processing. In this kind of system, several dumb terminals are attached to a central mainframe computer. Dumb terminals are machines through which users can input data and see the results of processed data; however, no processing takes place at the dumb terminal. In earlier days, individual organisations processed large amounts of data, usually at the head office. The main advantages of such systems were that the design was much more straightforward and the organisation could keep tighter control over the main database.
In such systems, one or more processors handle the workload of several distant terminals. The central processor switches from one terminal to another and does a part of each job in a time-phased mode. This switching from the task of one terminal to another continues till all tasks are completed. Hence such systems are also called time-sharing systems.
The biggest disadvantage of such a system is that if the main computer fails, the whole system fails: all remote terminals have to stop working. Also, all end users have to format data based on the format of the central office. The cost of communicating data to the central server is high, as even the minutest of processes has to be done centrally.
Distributed Processing
A true distributed data processing system is a system of computers connected together by a communication network. Each computer handles its local workload, and the network is designed to support the system as a whole. Distributed data processing systems enable the sharing of hardware and significant software resources among several users who may be located far away from each other.
Advantages
A distributed system offers the benefits of both a centralised and a decentralised system: each computer can be used to process data locally, like a decentralised system, and in addition a computer at one location can transfer data and processing jobs to and from computers at other locations.
a) Flexibility : Greater flexibility in placing true computer power at locations where it is
needed.
b) Better utilisation of resources : Computer resources are easily available to the end
users.
c) Better accessibility : Quick and better access to data and information, especially where distance is a major factor.
d) Lower cost of communication : Telecommunication costs can be lower when much of the local processing is handled by on-site mini and micro computers rather than by distant central mainframe computers.
Disadvantages
Lack of proper security controls – for protecting the confidentiality and integrity of user programs and data that are stored online and transmitted over network channels (it is easy to tap a data communication line).
Linking of different systems – due to the lack of adequate computing/communication standards, it is not always possible to link equipment produced by different vendors. Thus several good resources may not be available to users of a network.
Maintenance difficulty – due to the decentralisation of resources at remote sites, management from a central control point becomes very difficult. This normally results in increased complexity, poor documentation and non-availability of skilled computer/communication specialists at the various sites for proper maintenance of the system.
Computer with Multiple Processors and Parallel Processors
In a computer with multiple processors, the calculations are divided among the processors. Since each processor now has less work to do, the task can be finished more quickly. It is sometimes possible to get a 210% speed increase from a computer with 2 processors.
The speed increase obtained by a multiprocessor computer depends greatly on the software being used to perform the calculations, which needs to be able to co-ordinate the calculations between the multiple processors. In practice, quite a bit of effort is required to divide the calculations between the processors and to re-assemble the results into a useful form. This is known as "overhead", and explains why computers with multiple processors are sometimes slower than those with a single processor.
Superlinear speedup is possible only on computers with multiple processors. It occurs because modern processors contain a piece of high-speed memory known as a cache, which is used to accelerate access to frequently used data. When the processor needs some data, it first checks whether it is available in the cache; this can avoid having to retrieve the data from a slower source such as a hard disk. In a computer with 2 processors, the amount of cache is doubled, because each processor includes its own cache. This allows a larger amount of data to be quickly available, and the speed can increase by more than what is expected.
Computers with parallel processors rely on dozens to thousands of ordinary microprocessors – integrated circuits identical to those found in millions of personal computers – that simultaneously carry out identical calculations on different pieces of data. Massively parallel machines can be dramatically faster and tend to possess much greater memory than vector machines, but they tax the programmer, who must figure out how to distribute the workload evenly among the many processors. Massively parallel machines are especially good at simulating the interactions of large numbers of physical elements, such as those contained within proteins and other biological macromolecules – the types of molecules that computational biologists are interested in modeling.
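The divide/compute/reassemble pattern – and the "overhead" it creates – can be sketched sequentially. This sketch runs on one processor for clarity; a real parallel program would hand each chunk to a separate processor, e.g. via a process pool, and the squaring function is just a stand-in workload.

```python
def split_work(data, n_processors):
    """Divide the calculations between the processors (part of the overhead)."""
    return [data[i::n_processors] for i in range(n_processors)]

def process_chunk(chunk):
    # Each processor carries out identical calculations on different data.
    return [x * x for x in chunk]

def parallel_square(data, n_processors=2):
    chunks = split_work(data, n_processors)
    partials = [process_chunk(c) for c in chunks]  # in real code: a process pool
    # Re-assembling the results into a useful form is also overhead.
    result = [0] * len(data)
    for i, part in enumerate(partials):
        result[i::n_processors] = part
    return result

print(parallel_square([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

If the split and reassembly steps cost more than the computation they save, the multiprocessor version ends up slower, which is exactly the overhead effect described above.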
The difference between intelligence and education is this: intelligence will make you
a good living.
~Charles F. Kettering
Note: The section on Information Security (Pages 145 to 156) is to be understood and framed into a relevant answer. This section is meant for reference only, and is included as a comprehensive note on Information Security, Disaster Recovery and Business Continuity Planning.
Constituents of a network security policy. Reference to Corporate
Governance, IT Governance, IT Security Governance, IT Policy, IT Security
Policy.
Network security policy
A security policy is a formal statement of the rules by which people who are given access to an
organization's technology and information assets must abide.
One generally accepted approach to developing a security policy includes the following steps:
1. Identify what you are trying to protect.
2. Determine what you are trying to protect it from.
3. Determine how likely the threats are.
4. Implement measures which will protect your assets in a cost-effective manner.
5. Review the process continuously and make improvements each time a weakness is found.
However, there are two elements of a risk analysis that need to be carried out for the above steps:
1) Identifying the assets, like hardware, software, data, people, documentation and suppliers.
2) Identifying the threats, like unauthorized access to resources and/or information, unintended and/or unauthorized disclosure of information, and denial of service.
However, you cannot make good decisions about security without first determining what your security goals are. Until you do, you cannot make effective use of any collection of security tools, because you simply will not know what to check for and what restrictions to impose.
Your goals will be largely determined by the following key tradeoffs:
a) services offered versus security provided
b) ease of use versus security
c) cost of security versus risk of loss.
Your goals should be communicated to all users, operations staff, and managers through a set of security rules, called a "security policy." We use this term, rather than the narrower "computer security policy", since the scope includes all types of information technology and the information stored and manipulated by that technology.
Who should be Involved When Forming Policy?
In order for a security policy to be appropriate and effective, it needs to have the acceptance
and support of all levels of employees within the organization.
1. Site security administrator
2. Information technology technical staff (e.g., staff from computing center)
3. Administrators of large user groups within the organization (e.g., business divisions,
computer science department within a university, etc.)
4. Security incident response team
5. Representatives of the user groups affected by the security policy
6. Responsible management
7. Legal counsel (if appropriate)
The characteristics of a good security policy are:
(1) It must be implementable through system administration procedures, publishing of
acceptable use guidelines, or other appropriate methods.
(2) It must be enforceable with security tools, where appropriate, and with sanctions, where
actual prevention is not technically feasible.
(3) It must clearly define the areas of responsibility for the users, administrators, and
management.
The components of a good security policy include:
(1) Computer Technology Purchasing Guidelines which specify required, or
preferred, security features. These should supplement existing purchasing policies
and guidelines.
(2) A Privacy Policy which defines reasonable expectations of privacy regarding such
issues as monitoring of electronic mail, logging of keystrokes, and access to users'
files.
(3) An Access Policy which defines access rights and privileges to protect assets from
loss or disclosure by specifying acceptable use guidelines for users, operations staff,
and management. It should provide guidelines for external connections, data
communications, connecting devices to a network, and adding new software to
systems. It should also specify any required notification messages (e.g., connect
messages should provide warnings about authorized usage and line monitoring, and
not simply say "Welcome").
(4) An Accountability Policy which defines the responsibilities of users, operations
staff, and management. It should specify an audit capability, and provide incident
handling guidelines (i.e., what to do and who to contact if a possible intrusion is
detected).
(5) An Authentication Policy which establishes trust through an effective password
policy, and by setting guidelines for remote location authentication and the use of
authentication devices (e.g., one-time passwords and the devices that generate
them).
(6) An Availability statement that sets users' expectations for the availability of
resources. It should address redundancy and recovery issues, as well as specify
operating hours and maintenance down-time periods. It should also include contact
information for reporting system and network failures.
(7) An Information Technology System & Network Maintenance Policy which
describes how both internal and external maintenance people are allowed to handle
and access technology. One important topic to be addressed here is whether remote
maintenance is allowed and how such access is controlled. Another area for
consideration here is outsourcing and how it is managed.
(8) A Violations Reporting Policy that indicates which types of violations (e.g., privacy
and security, internal and external) must be reported and to whom the reports are
made. A non-threatening atmosphere and the possibility of anonymous reporting will
result in a greater probability that a violation will be reported if it is detected.
(9) Supporting Information which provides users, staff, and management with contact
information for each type of policy violation; guidelines on how to handle outside
queries about a security incident, or information which may be considered
confidential or proprietary; and cross-references to security procedures and related
information, such as company policies and governmental laws and regulations.
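The one-time password devices mentioned under the Authentication Policy above typically implement HOTP (RFC 4226). A minimal sketch using only the Python standard library (the function name and 6-digit default are the RFC's conventions, not something this text prescribes):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Event-based one-time password per RFC 4226 (HMAC-SHA1 + dynamic truncation)."""
    # HMAC-SHA1 over the 8-byte big-endian counter value
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A hardware token and the authentication server share the secret and the counter; each button press advances the counter on both sides, so every password is valid only once.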
Information Security
 Definitions
 Information infrastructure
 The IT organization
 Information asset oversight
 Information systems access
 Contingency planning
 Incident handling
Definitions
“You can have security without privacy, but you cannot have privacy without security”
 Information Security is the protection of information to prevent loss, unauthorized access or
misuse. It is also the process of assessing threats and risks to information and the
procedures and controls to preserve:
o Confidentiality: Access to data is limited to authorized entities
o Integrity: Assurance that the data is accurate and complete
o Availability: Data is accessible, when required, by those who are authorized to
access it
Security Controls
 Security controls are the set of organizational structures, policies, standards, procedures,
and technologies which support the business functions of the enterprise while reducing risk
exposure and protecting information
o Preventative: Designed to keep errors or irregularities from occurring
o Detective: Designed to detect errors and irregularities which have already occurred
and to report to appropriate personnel
o Responsive: Designed to respond to errors or irregularities to restore operations and
prevent future issues
o Administrative: Processes and procedures
o Technical: Software and hardware technologies
o Physical: Facility and environmental security
Information Infrastructure – Data Management
 Security protection of personal information starts with strong data management practices
o Database Management
 User access controls
 Database administrator access controls
 Restrictions on view, update, modification, or deletion of data
 Appropriate usage guidelines for data
 Use of real personal information in development and test environments
o Backups
 Backup media should be secure
 Backups should be reliable for recovery purposes
 Backup and restore processes should be controlled to avoid errors and
unauthorized access
 Backup media should be tested regularly to ensure integrity
o Recovery
 Recovery plans should be documented and tested
 Data recovery is usually integrated with disaster recovery and business
continuity plans
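The point above about testing backup media for integrity can be illustrated with a checksum comparison. This is a minimal sketch that assumes the backup is a single file; real backup systems verify whole media sets:

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(original: str, backup_copy: str) -> bool:
    """A backup is only useful if it still matches the source it claims to protect."""
    return file_digest(original) == file_digest(backup_copy)
```

Running such a check on a schedule is one way to satisfy "backup media should be tested regularly to ensure integrity".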
Information Infrastructure – Hardware
Know where personal information resides within your organization and how to protect the
repositories
 Mainframe / Servers / Storage Systems
o Large computing hardware installations, generally housed within a defined building
area with good physical security
o Usually have defined security processes and controls
o Access is typically controlled and data may be classified
 Desktops / Laptops / Handheld Devices
o Each of these computing platforms provides additional challenges for security and
privacy as both functionality increases and control decreases
o Personal information stored on local systems is an issue due to greater exposure
and lack of appropriate backups
o Personal information on laptops and handhelds presents additional concerns due to
portability
o Hardware theft is a common occurrence, and some thefts specifically target the
data stored on the device
o Encryption technologies can be utilized to lower risk exposure
 Media / Mass Storage Devices
o Increasing capacity and availability
o Difficult to track location
o Easy to steal and lose
Information Infrastructure – Networks
Networks allow for communication between systems and people, but introduce significant
challenges from a privacy and security perspective
 Local Area Networks (LANs)
o Within the operational facility
o Considered within local operational control and easier to manage
 Wide Area Network (WANs)
o Considered outside of local operational controls and are more difficult to manage
o May involve coordination between several groups
 Internet
o Public resource used and accessed by anyone
o Generally considered untrusted and requires additional network security controls
such as encryption
 Network Topologies
o Ethernet
o Optical
o Wireless
 Remote Access
o Provides connectivity with employees and partners from outside of local operational
control
o Requires additional measures such as access controls
 Mobile and Wireless Networks
o Susceptible to eavesdropping and unauthorized access
o Use caution and implement encryption where possible
 Telecom – Voice over Internet Protocol (VoIP)
o Utilizes Internet and WAN connectivity
o Susceptible to Internet attacks
 Broadband
o Always on, high bandwidth connections often targeted by attackers
o Digital Subscriber Line (DSL) – Dedicated connection to Internet
o Cable Internet – Local network shared with other users
 Virtual Private Networks (VPN)
o Uses encryption technology to set up private connections across public networks
o Adds layer of security to communications
Information Infrastructure – Internet
Sharing and accessing personal information over the Internet requires special controls
 Web-Based Applications
o Accessible from anywhere in the world, hence open to attacks from anywhere
o Transfers of personal information should be encrypted using SSL
 E-Commerce
o Online commercial transactions require personal financial information to be
exchanged
o Transmission and storage of financial information poses additional risks with the rise
of identity theft
 E-Business
o Transmission of personal information as part of e-business transactions is common
o Data sharing between business partners via e-business channels should follow the
same procedures and controls as other data sharing mechanisms
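As one concrete illustration of "encrypted using SSL", Python's standard `ssl` module builds a client context that enforces certificate verification and hostname checking by default, so personal data is only sent over an authenticated, encrypted channel:

```python
import ssl

# A default client context refuses unverified certificates and checks that the
# server certificate matches the hostname being contacted.
ctx = ssl.create_default_context()

# The context would then wrap a TCP socket before any personal data is sent:
#   with socket.create_connection((host, 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname=host) as tls:
#           tls.sendall(request_bytes)
```

Disabling either check (verify_mode or check_hostname) reopens the connection to man-in-the-middle attacks, which is why the secure settings are the defaults.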
Information Infrastructure - Email
The ubiquitous and ad hoc nature of email communications makes personal information difficult
to protect
o Information sent in email can be intercepted, read and manipulated unless there is
network or application level encryption
o Standard email communication sent outside of the business is analogous to sending
a postcard (unless encrypted)
o Information transmitted via email is no longer under your control
o Email phishing schemes to steal personal information are on the rise
The IT Organization – Information Technology Management
Good information security starts with sound information technology management practices
o Information technology and information security should be formal, budgeted
functions, supporting the business operation
o Information security must be included in the business life cycle from design through
retirement
o Information technology infrastructure must be built to include information security
through all interfaced systems
o Project management must be formalized and include change management and
security controls
o Outsourcing must include security controls and be managed internally to ensure
protection of personally identifiable information
The IT Organization – Roles & Responsibilities
To maintain security within the organization, roles and responsibilities must be clearly
understood
 Chief Executive Officer & Executive Committee
o Oversee overall corporate security strategy
o Lead by example and sponsor adoption of security
 Chief Security Officer
o Sets security strategy and policies
o Facilitates the implementation of security controls
o Undertakes security risk assessments
o Designs risk management strategy
o Coordinates independent audits
 Security Personnel
o Implement, audit, enforce, & assess compliance
o Advise and validate security designs and maintenance
o Keep abreast of new security developments (vulnerabilities, exploits, patches)
o Communicate policies, programs & training
o Monitor for security incidents
o Respond to security breaches
 Outsourced Security Functions
o Supplements internal security personnel
o Should be overseen by internal security personnel
 Managers & Employees
o Implement security controls
o Report security vulnerabilities and breaches
o Maintain awareness of security in action
The IT Organization – Outsourced Activities
The security requirements of an organization engaging in outsourcing should be addressed in a
contract agreed upon between the parties
It should reflect:
o Security roles and responsibilities
o Requirements for data protection to achieve comparable levels of security
o Data ownership and appropriate use
o Physical and logical access controls
o Security control testing of the service provider
o Continuity of services in the event of disaster
o Incident coordination process
o Right to conduct audits
o Respective liabilities
The IT Organization – Security Awareness Training
Technology alone cannot provide information security – education and awareness of personnel
is key
Ensure that all employees understand:
o The value of security and are trained to recognize and report incidents
o Their roles and responsibilities in fulfilling their security responsibilities
o Security policies and procedures, including password protection, data sensitivity,
information protection
o Basic security issues such as virus, hacking, and social engineering
o The importance of compliance with regulatory requirements such as HIPAA,
Sarbanes-Oxley and Gramm-Leach-Bliley
Information Asset Oversight – Asset Management
To effectively manage information security, an organization must understand which data assets
are critical to the function of the company
 Locate and identify the information to be protected
o Develop tracking for data assets and the systems which house them
 Differentiate between owned vs. used assets
o Owners create and change
o Users access and execute
 Record Retention
o Retention schedules should address record types and retention periods
o Retention should be based on business need or regulatory requirement
o Inventories of key information should be maintained
o Controls should be implemented to protect essential records and information from
loss, destruction and falsification
Information Asset Oversight – Asset Classification Criteria
Data should be protected in accordance with the value of the asset—the higher the value, the
greater the security needed
 Value should be evaluated based on:
o Sensitivity
o Confidentiality
o Potential liability
o Intelligence value
o Criticality
 Effective risk management balances the potential for loss with cost of security protection
and management
Information Asset Oversight – Data Classification
A data classification scheme, like the one in the example below, provides the basis for
managing access to and protection of data assets
 Confidential
o Data whose loss, corruption or unauthorized disclosure would be a violation of
federal or state laws/regulations
o Examples of this data type are social security numbers and credit card numbers
 Proprietary
o Data whose loss, corruption or unauthorized disclosure would tend to impair the
function of the business, or result in any business, financial, or legal loss
o Examples of this data type could be customer financial records
 Internal Use Only
o Data whose audience is intended to be those who work within the organization
o Examples of this data type could be phone lists or email distribution lists
 Public
o Data whose audience may include the general public
o Examples of this data type could be press releases or marketing materials
 Additional Classification Types
o Depending on the amount and types of data collected by the organization, additional
classifications may be advised due to current regulatory requirements
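The four-tier scheme above can be expressed as a lookup from classification to handling rules. The specific controls shown here (encryption at rest, external sharing) are illustrative assumptions, not something the text prescribes:

```python
# Hypothetical handling rules keyed by the classification tiers described above.
HANDLING = {
    "confidential": {"encrypt_at_rest": True,  "external_sharing": False},
    "proprietary":  {"encrypt_at_rest": True,  "external_sharing": False},
    "internal":     {"encrypt_at_rest": False, "external_sharing": False},
    "public":       {"encrypt_at_rest": False, "external_sharing": True},
}

def controls_for(classification: str) -> dict:
    """Unknown or missing labels default to the strictest tier (fail safe)."""
    return HANDLING.get(classification.lower(), HANDLING["confidential"])
```

Defaulting unknown labels to "confidential" reflects the principle that unclassified data should be over-protected rather than under-protected.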
Information Systems Security – Authentication
Authentication identifies an individual based on some credential (e.g., password, smartcard,
biometric)
 Identification of the individual
o Individual account requirement
o Separate administration and user accounts
o No anonymous or shared accounts for access to personal information
o Special care for system accounts
 Passwords
o Encryption in transfer and storage
o Complexity requirements
o Change frequency requirements
o Repetition restrictions
o Storage guidelines
o No-sharing policy
o Disabling on change, termination or departure
 Non-repudiation
o The ability to ensure that neither the originator nor the receiver can dispute the
validity of a transaction
 Public key infrastructure (PKI)
o System of digital certificates, Certificate Authorities, and other registration
authorities that verify and authenticate the validity of each party involved in an
electronic transaction using cryptography
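The password requirements above (complexity, encrypted storage) can be sketched as follows. The specific length and character-class rules are illustrative assumptions; the salted, slow hash shows why "encryption in storage" means never keeping the password itself:

```python
import hashlib
import os
import string

def meets_policy(pw: str, min_len: int = 10) -> bool:
    """Hypothetical complexity rule: minimum length plus three character classes."""
    classes = [any(c.islower() for c in pw),
               any(c.isupper() for c in pw),
               any(c.isdigit() for c in pw),
               any(c in string.punctuation for c in pw)]
    return len(pw) >= min_len and sum(classes) >= 3

def store(pw: str) -> tuple[bytes, bytes]:
    """Store only a random salt and a slow PBKDF2 hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 200_000)
    return salt, digest

def check(pw: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 200_000) == digest
```

Because each user gets a fresh salt, identical passwords produce different stored digests, which defeats precomputed lookup tables.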
Information Systems Security - Authorization
Authorization is the process of determining if the user, once identified, is permitted to have
access to the resource and may be based on:
 Organizational role
 Job function
 Group membership
 Level of security clearance
 Purchased access
Information Systems Security – Access
Access defines the intersection of identity and data; that is, who can do what to which data
 Role-based: need to know / have
o Based on the principle of the least possible access required to perform the function
o Periodic review of accounts and access rights
o Deletion or termination of access based on work or position changes
o Periodic review of idle accounts
 Workstation locking mechanisms
o Password-protected screen savers
o Time-activated lockouts
 Access policy for email, Internet and portable devices
 Identity management solutions
o Authoritative source
o Single or reduced sign-on
o Segregation of duties
o Ease of access with controls
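The "least possible access required to perform the function" principle above can be sketched as a role-to-permission lookup. The roles and permission names here are hypothetical examples:

```python
# Hypothetical role definitions: each role gets only what the job needs.
ROLE_PERMS = {
    "clerk":   {"read:customer"},
    "manager": {"read:customer", "update:customer"},
    "dba":     {"read:customer", "update:customer", "delete:customer"},
}

def can(role: str, action: str) -> bool:
    """Default deny: an unknown role or an unlisted action grants nothing."""
    return action in ROLE_PERMS.get(role, set())
```

The important property is the default: anything not explicitly granted is refused, which is what makes periodic review of accounts and rights tractable.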
Information Systems Security – Intrusion Prevention
Prevention is the best possible cure
 Firewalls
 Anti-virus
 Content scanning
 Security patches
 Emerging intrusion prevention systems
 User awareness
Contingency Planning – Threats and Vulnerabilities
Risk is a function of the likelihood of a threat exploiting a security vulnerability with a resulting
impact
 Potential threats
o Emergency situations or natural events
o Organized or deliberate malicious actions
o Internal accidents, carelessness, or ignorance
o Malicious code (virus, worms, spyware, malware)
o Loss of utilities or services
o Equipment or systems failure
o Serious information security events
 Security vulnerabilities
o Unsecured accounts
o Unpatched systems
o Insecure configurations
o Network perimeter weaknesses
o Inappropriate trust models
o Untrained users and administrators
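The definition of risk as a function of threat likelihood and impact is often operationalized as a simple product on ordinal scales. The 1-5 scales and band thresholds below are a common convention, used here as an illustrative assumption:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Risk = likelihood x impact, each rated 1-5, giving scores from 1 to 25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings run from 1 to 5")
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a score onto treatment bands (hypothetical thresholds)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"
```

Ranking threats this way lets an organization spend its security budget where likelihood and impact intersect, rather than evenly across every vulnerability.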
Contingency Planning – Disaster Recovery
A disaster recovery plan allows an organization to respond to an interruption in services and
restore critical business functions and data
 Disaster recovery plan
o Conditions for activating
o Emergency procedures
o Fallback procedures
o Resumption procedures
o Roles and responsibilities
o Awareness and education
o Maintenance schedule
 Backups
o Systems, applications and information
o Cold, warm or hot sites
o Redundant or recoverable
o Secure and reliable
Contingency Planning – Business Continuance
The ability of an organization to ensure continuity of service and support for its customers and to
maintain its viability before, after, and during an event
 Business continuity plan
o Business process recovery
o Human resource management
o Facilities availability
o Information systems recovery
o Customer service restoration
o External business partner functions
Incident Handling – Incident Response
 Indications of an incident
o Failed login
o Dormant account login
o Nonwork-hour activity
o Presence of new accounts (unknown)
o System log gaps
o Unfamiliar programs or files
o Unexplained elevation of user privileges
o Unexplained change in file permissions
o Unexpected malfunction or error message
o System failures
 Early alerts from multiple sources
o User community
o System administration staff
o Security team
o Intrusion detection systems
o Vendors & security companies
 Formal incident response plans
o Identification
o Communications
o Containment
o Backup
o Recovery
o Post-mortem
Incident Handling – Incident Documentation
 Forensic evidence
o Admissibility of evidence
o Quality of evidence
o Preservation of integrity
 Post-mortem
o Document incident, response activities, and results
o Gather lessons learned
o Measure metrics such as type, frequency, cost
o Monitor for trends
o Incorporate into future security controls design
The real danger is not that computers will begin to think
like men, but that men will begin to think like computers.
Sydney J. Harris
Virus
 Virus Basics
 How Viruses Get into Computers and Spread
 Virus Symptoms
 Virus Defense
 Microsoft Office protection
 Anti-Virus Software
 Links for Information on Current Viruses
What is a virus?
A virus is a software program that piggybacks on other programs and self-replicates whenever
those programs are run. A virus is not data. An e-mail virus moves around in e-mail messages.
The virus replicates itself by automatically mailing itself to people in the victim’s e-mail address
book. You can ONLY catch a virus by running a program, and your computer runs many kinds of
programs. All computer viruses are man-made.
Basic virus terminology
Virus
A Program that is self-replicating and attaches itself to other programs. " An e-mail virus is
computer code sent to you which, if activated, will cause some unexpected and usually harmful
effect…”
Worm
Special type of virus that can replicate itself and use memory, but cannot attach itself to other
programs. Uses computer networks and security holes to replicate itself. “A self-replicating
virus that does not alter files but resides in active memory and duplicates itself…”
Trojan Horse
A computer program that claims to do one thing (such as a game) but instead does damage
when you run it. Trojan Horses do not replicate automatically. “A Trojan horse is a program in
which malicious or harmful code is contained inside apparently harmless programming…”
How viruses get into computers:
The origin of the four most common virus infections:
 File – A virus type that infects existing files on the computer
 Macro – A virus that runs as a macro in a host application; i.e., MS Office applications
such as Word or Excel
 VBScript – A virus that uses Windows Visual Basic Script
 Internet Worm – A virus that is primarily characterized by its replication across the
Internet
How viruses spread
 By downloading infected files or programs from a network. If you download and run
software from the Internet, or receive e-mail attachments, there is a chance that you can
contract a computer virus. Once you RUN an infected program, the virus can spread
rapidly, especially on networks. That is why the Internet, the largest network, is a fertile
breeding ground for viruses.
 By inserting infected disks into your computer.
 Computers do get viruses from e-mail. You must be aware of the fact that you CANNOT
get a computer virus from simply the text of an e-mail. The virus will come in the form of
some kind of attachment. Opening the attachment can give your computer a virus.
Virus Symptoms
 Unusual messages or displays on your monitor.
 Unusual sounds or music played at random times.
 A file name has been changed.
 A change in dates against the filenames in a directory.
 Programs or files are suddenly missing.
 Unknown programs or files have been created.
 Reduced memory or disk space.
 Unexpected writes to a drive.
 Bad sectors on your floppy disk.
 Your entire system crashing.
 Some of your files become corrupted – meaning that the data is damaged in some way –
or suddenly don’t work properly.
 Programs take longer to load, they may hang the computer or not work at all.
Basic virus defense
 Don’t open files that you are not expecting.
 Many viruses automatically send files without the e-mail account owner’s knowledge.
 Ask the sender to confirm unexpected files.
 If you don’t know who the message is from, don’t open it.
 Messages that appear more than once in your Inbox can be suspect for a virus.
 If you receive a suspicious message, delete it.
 Don’t use or share floppies without scanning with anti-virus software first.
 Learn file extensions.
 Your computer will display both an icon and a file extension for files you receive. Open
only file extensions you know are safe.
 If you are not sure, ask your Technical Support person.
 Never double-click to open an executable file that arrives as an e-mail attachment.
 Regularly back up your files.
 Do not install pirated software, especially computer games.
 Make sure your computer runs anti-virus software. If you don’t have it, buy and install it
immediately.
 If you have anti-virus software on your computer, it has to be updated at least weekly, as
new viruses appear daily.
 Scan the entire hard disk twice a month.
Examples of potentially unsafe file types
 The following file types should not be opened unless you have verified the sender and
the reason sent:
 .EXE
 .PIF
 .BAT
 .VBS
 .COM
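A mail gateway or user script might flag the extensions listed above before an attachment is ever opened. A minimal sketch:

```python
import os

# The potentially unsafe extensions listed above.
UNSAFE_EXTENSIONS = {".exe", ".pif", ".bat", ".vbs", ".com"}

def is_suspect(filename: str) -> bool:
    """Case-insensitive check on the final extension of an attachment name."""
    _, ext = os.path.splitext(filename.lower())
    return ext in UNSAFE_EXTENSIONS
```

Note that only the final extension matters: a name like "photo.jpg.vbs" is a VBScript file dressed up as a picture, a trick many e-mail worms have used.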
Microsoft Office
 Microsoft Office files are mostly data with some program code.
 Office macros are programs, which can be viruses.
 Office will prompt you to enable macros.
 Enable macros only when you know why Office is asking. Never run macros in a
document unless you know what they do.
Outlook E-Mail File Security
 Outlook will automatically block some kinds of executable files, but not all.
 Do not assume that the file is safe if it made it through Outlook.
Anti-Virus Software
 Anti-Virus Software watches for viruses, identifies them, and kills the ones on your
computer.
 Virus detection software regularly scans the files on your disk looking for infected files.
When a virus is spotted, the virus software will inform you and allow you to choose what
action to take.
 All major anti-virus software includes an “e-mail scan” feature that will check your e-mail
attachments for viruses before you open the attachments.
 Information on Current Viruses
 Symantec site which lists hoaxes. Refer to this page whenever you receive what
appears to be a bogus message regarding a new virus, or promotion that sounds too
good to be true. www.symantec.com/avcenter/hoax.html
 Symantec site containing Virus Description Database along with a list of the latest virus
threats, the risk level, the date the virus was discovered, and the date of protection.
http://www.symantec.com/avcenter/vinfodb.html
 McAfee Site containing a Virus Information Library which has detailed information on
where viruses come from, how they infect your system, and how to remove them.
http://vil.nai.com/vil/default.asp
 Computer Associates site which is an up-to-the-minute resource containing detailed
information on computer viruses, worms, Trojan Horses, and hoaxes.
http://www3.ca.com/virusinfo/
SPAM
What is “SPAM”?
The “official” definition of spam is “unsolicited commercial email”. Technically, SPAM is the
online version of bulk (junk) mail. But unlike bulk mail, which keeps the price of stamps down
and is paid for by the sender, SPAM shifts its costs onto everyone else: productivity, online
fees, bandwidth, etc.
Is there “legitimate” spam? Yes and no.
Often when we sign up to a newsletter, register a product, or join a forum or other online service
(like Yahoo), we “agree” to get email advertisements and other “marketing products”. Why
would we agree to such a thing? Easy- we didn’t read the “agreement” that says “we reserve the
right to send you junk and sell your address to whomever we please…”. This is known as both
“opt-in” and “permission-based" email marketing. So while these are spam to us, we did sign up
for it, therefore “technically” and by law, it is not “SPAM”.
Why is SPAM so popular if everyone hates it?
Spam is “popular” amongst the get-rich-quick set, scammers, and lately identity thieves:
Spammers can send a piece of e-mail to one, 100, or a distribution list in the millions for
roughly the same cost to them. Spammers expect only a tiny number of readers will respond
to their offer.
7% of email users have bought something from a spam message (STOP IT!). Spammers
say, “We get paid to deliver the mail; we don't care if they read it.” For the most part,
spammers aren’t even really trying to sell you anything anymore. They just want to know if
your address is real, so they can sell it to someone else.
How did they get my email address?
Spammers use a variety of tactics to get into your inbox and many hi-tech techniques to get
past filters and aggravate us!
 Email lists – buying, stealing, renting, trading (see “opt-in”)
 Trickery – e-greeting cards, freeware, and anything else that asks you to enter your
email address
 Spambots, harvesters – search the Internet for email addresses on forums, web
pages, newsgroups, blogs, etc.
 Dictionary attacks – send out emails to guessed/random addresses
 Blanket attacks – “send this to anyone@csusb.edu”
What can I do about SPAM?
 Preventative measures – to reduce or avoid it:
 Read Privacy Policies
 Never "send this article to a friend"…
 Create an alternate email account…
 Don't post to Newsgroups, Forums, blogs, guestbooks, etc.
 Use Bcc! hide your email address and your friends...
 Turn OFF HTML in your email…
 Use FILTERS FILTERS FILTERS!…
 You can't put the genie back in the bottle!
 CAN-SPAM – “Official” but does it work?
 “OPT-out” - often all this does is let spammers know your address works and you check
it!
 COMPLAIN to domain owners.
 FILTERS FILTERS FILTERS!!
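"FILTERS FILTERS FILTERS" can be as simple as a keyword score. Real spam filters use statistical (e.g., Bayesian) models, so this is only a toy sketch with made-up keywords and weights:

```python
# Toy keyword weights - production filters learn these from training mail.
SPAM_HINTS = {"free": 1, "winner": 2, "viagra": 3, "click here": 2, "unsubscribe": 1}

def spam_score(message: str) -> int:
    """Sum the weights of every hint phrase found in the message."""
    text = message.lower()
    return sum(weight for hint, weight in SPAM_HINTS.items() if hint in text)

def is_spam(message: str, threshold: int = 3) -> bool:
    """Flag the message once its score crosses the (hypothetical) threshold."""
    return spam_score(message) >= threshold
```

The threshold is the usual tuning knob: lower it and more spam is caught but more legitimate mail is falsely flagged, which is exactly the trade-off real filters manage.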
SCAMS, HOAXES AND OTHER THREATS
What is a (email) scam, hoax, etc., and why do I care?
 Scams – The sole purpose of the scam in any format is to get your money.
 Hoaxes - "Whenever you receive what appears to be a bogus message regarding a new
virus, or promotion that sounds too good to be true, it is probably a hoax…”
 Chain letters - "Chain letters, like their printed ancestors, generally offer luck or money if
you send them on…”
 Phishing – “Uses spoofed/hijacked email addresses and fake but authentic looking
messages/websites to trick us into giving up our personal info.”
 So we don’t get victimized
 So we sound smart when we tell our friends how not to get victimized
 Lost Productivity. If everyone on the Internet were to receive one hoax message and
spend one minute reading and discarding it, the cost would be something like:
50,000,000 people * 1/60 hour * $50/hour = $41.7 million
 Identity theft. The fastest growing crime in the U.S.
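The lost-productivity estimate above is straightforward arithmetic and can be checked directly:

```python
people = 50_000_000
hours_each = 1 / 60   # one minute reading and discarding the hoax
rate = 50             # dollars per hour

cost = people * hours_each * rate
print(f"${cost / 1e6:.1f} million")   # $41.7 million
```

The figure scales linearly, so doubling either the audience or the time spent doubles the cost, which is why mass-forwarded hoaxes are expensive even when each reader loses only a minute.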
Why do people send chain letters, hoaxes, etc.?
 silly children
 ego, "fame" (demented and sad script-kiddies)
 well-intentioned do-gooders (ex: amber alert)
 uninformed newbies
 malicious intentions (spam-collectors)
How do I avoid chain letters, hoaxes, etc.?
 You can't. But you can help others:
 Don't fwd them, delete them!
 Filter them
 Yell @ your friends
 Do your own Research (to find out what is real and what is a hoax/scam)
Treat your password like your toothbrush. Don't let anybody
else use it, and get a new one every six months.
Clifford Stoll
Note: General Questions – The following section has been provided as a
guideline for the answers. Please use the material carefully and review the same
before use. I appreciate the effort put in by the teams from MFM(2007) and
MMM(2007) to collate and submit material for these questions.
Q1. Write a note on how IT can help an organisation in gaining a competitive advantage.
(16 marks-2002; 20 marks – 2005)
Introduction
- IT has been a critical part of the process of running a business since the early days of
the IBM 360.
- IT supports every aspect of the business from human resources to operations. IT has
been extremely important to the success of the show, but it is not seen as a key actor
who is centre stage and critical to determining the final outcome.
- IT organizations have been mostly perceived as one of many business support
functions, just like finance, manufacturing, or HR.
- It is only recently that IT has been recognized as a determining factor in a business's
ability to gain or lose market share or as the driver of the competitive dynamics of
organisations and even entire industries.
IT and market share
In a recent Wall Street Journal article titled "Dog Eat Dog," researchers from Harvard Business
School and MIT's Sloan School of Management share their research findings regarding IT and
market share. They divided the US private sector into 61 industries and determined the IT
intensity of each one by the amount of spending on computer hardware and software as a
percentage of total spending on fixed assets, grouping them into high-IT, medium-IT, and low-IT
industry groups. Their study focused on the period after 1996 when technology investments
increased sharply in the high-IT industries. Some of their key findings include:
- Market share increases were greatest in industries that used IT most extensively.
- High-IT industries experienced different competitive dynamics than other industries.
- Sales turbulence (i.e., the amount of shifting in where a company ranks in sales within
an industry from year to year) was substantially higher in the high-IT industries than in
the other two categories.
This is a clear indication that there is a direct link between IT and market share. It also
highlights the role that IT plays in helping an organisation gain or lose competitive advantage.
How exactly has IT caused this to happen?
It has been through the use of IT in process innovation and replication.
1. Enterprise-wide software applications have enabled companies to integrate across
functions, and the Internet has made IT-enabled tasks widely accessible.
2. Improvements to algorithms, software updates, and new capabilities can be made very
quickly and distributed broadly in a seamless manner.
3. A company's leadership position today can be disrupted by a competitor the very next
day if the competitor is more insightful, more connected to its customers, and quicker to
deliver solutions.
This is the new norm in this highly competitive and dynamic environment, and the turbulence
will only become more intense and more pervasive as IT usage expands across all industries.
IT and competitive advantage
It is not sufficient to merely react to the competition's last move. Companies have to drive the competitive dynamics of their industries to be successful in keeping and/or gaining market share, and IT has come to play a major role in this.
- As companies realize how important IT is in determining competitive advantage, they are making changes where necessary so that they can engage IT as a true business partner.
- As a result, IT is being forced to be much more agile and increasingly more engaged and integrated with both internal and external customers and suppliers.
- IT is back in the hot seat but with a much better negotiating position. Its performance and capability are being directly linked to the revenue-and-profit side of the financial equation.
- It is no longer a mere overhead expense; IT is being recognized as a true asset in determining a company's competitive position. However, the IT mindset has to shift in a number of ways to effectively meet the challenge.
In enabling an organisation to gain competitive advantage, IT organizations face a new challenge:
1. How to capitalize on the growing demand for IT-enabled innovation. CEOs are giving the IT functions in their organizations a new, broader mission: enable business strategy, drive productivity, and facilitate company-wide innovation. In fact, a CEO from one of the largest technology companies has recently stated that he wants every company initiative to move through its IT organization, eliminating the shadow or hidden IT functions across the company.
2. In response to the pull from the businesses and CEOs, CIOs must transition their
organizations from utility provider to business partner while shifting a significant portion
of IT spending from supporting the business to growing the business.
3. CIOs are increasingly being challenged to deliver value by driving common processes
across the enterprise, optimizing IT spending with a shift to value-add, partnering to
define enabling technical solutions, and utilizing these enabling technologies to generate
revenue.
4. IT organizations are discovering that a "copycat" portfolio and IT strategy will not suffice in this intensely competitive environment. Over the last 10 years, IT organizations have taken on a new leadership role and responsibility in the areas of process transformation, back-office consolidation, and shared services.
5. The new IT is not only about automation, not just about process transformation, not
primarily cost-focused, not necessarily focused on new technologies and not about IT
alone. The IT world is becoming more encompassing with both an external and an
internal focus. Today's IT perspective is shifting to take on a joint ownership of the
business's extended value network by helping businesses connect with customers in
innovative ways and providing intelligent communication networks that serve their entire
enterprise.
This overall trend is affecting IT in many ways, including its portfolio of projects, people,
organizational structure, processes, corporate role, governance, investments, technologies,
partnerships, and suppliers.
- IT must construct a balanced portfolio focused on maintaining IT operational excellence while investing in the business goals for growth, profit, and innovation.
- IT-enabled transformation projects will be more and more integrated across sales and marketing, engineering, and manufacturing, where they can drive their company's profit.
- IT is becoming a necessary and welcomed partner in the quest for innovation across the corporation as it leads efforts to take advantage of new and emerging technologies.
- In addition, the people in IT organizations are required to be broader, more flexible, versatile, and collaborative, blending IT technical skills with product development and marketing skills.
- Processes are being forced to become more agile and risk-tolerant to meet the demand for speed, creativity, and flexibility.
- At some leading-edge companies, IT investments are starting to be seen as a cost of doing business with longer-term payback. And finally, IT will be moving into more complex multidimensional relationships with internal functional departments, suppliers, and third-party technology organizations such as universities.
Conclusion
- While pressure to control costs and maintain operational efficiency is still a priority for businesses of every size and across every industry, most are reporting a renewed emphasis on top-line growth.
- Globalisation and technology advances are giving rise to an unprecedented level of competition while creating extraordinary opportunities to differentiate. For many companies, growth – perhaps even survival – will depend on innovation.
- Today’s leading organisations know that execution is critical in order to gain competitive advantage. They know that profit and opportunity lie ahead for those who can step outside their traditional comfort zone and deliver innovations that can truly differentiate their business.
- A study has found that today CEOs are focusing nearly 30 percent of their innovative efforts on business model innovation.
- These important perspectives on innovation are changing the way companies view their business operation and map their future strategy. They are causing business leaders to rethink long-held business models and envision change from the ground up.
- IT is recognised as an integral element of this process and is being called upon to determine how exactly these changes can be implemented. IT is one of the critical players in a company’s business model innovation efforts and, voluntarily or involuntarily, is becoming part of the company’s strategy and vision.
- To enable and drive business model innovation within the organisation, and thus help the organisation gain a competitive edge, IT organisations need to take action in three focus areas:
1. Deepen business understanding by leveraging componentisation techniques. (Systematically breaking down the business into its component processes and focusing on the business value of those individual processes helps IT to understand the needs and objectives of the business at a granular, more in-depth level.)
2. Innovate the IT business model first and implement processes to run IT like a business
3. Implement a flexible, responsive infrastructure capable of supporting innovation
initiatives.
Source:
1) ComputerWeekly.com, IT Management, 7 October 2007. Cutter paper: "IT's role in creating business advantage". Author: Christine Davis, Fellow, Cutter Business Technology Council. Posted: 05 Oct 2007. (for introduction and main body)
2) IBM Global Technology Services, September 2006. "Business model innovation – the new route to competitive advantage". Part of the CIO implications series. (for Conclusion)
Using specific examples, describe how computerization would help your department
perform more efficiently and effectively. Also explain how computerization will help
better decision making, analysis and planning? (10 marks – 1998)
Introduction
- Information Technology, like language, affects us on many levels and has fast become integral to all of our lives. In this course we aim to strike a balance in studying both the social and commercial forces of Information Technology, and networking in particular.
- The role of computers in business has risen to the point where computer networks, even more than personnel, are synonymous with the corporate entity.
- Information technology (IT), defined as computers as well as related digital communication technology, has the broad power to reduce the costs of coordination, communications, and information processing. Thus, it is not surprising that the massive reduction in computing and communications costs has engendered a substantial restructuring of the economy. Virtually every modern industry is being significantly affected by computerization.
Role of Computerisation in increasing efficiency and effectiveness across departments
The need today is to match organizational structure to technology capabilities. Thus the challenge is to make a transition to an IT-intensive production process. The role of computers in increasing departmental efficiency and effectiveness can be shown as follows, with the help of three of the several factors important for the success of a business:
1. Changing Interactions with Suppliers
- Due to problems coordinating with external suppliers, large firms often produce many of their required inputs in-house. General Motors is the classic example of a company whose success was facilitated by high levels of vertical integration.
- Technologies such as electronic data interchange (EDI), Internet-based procurement systems, and other interorganizational information systems have significantly reduced the cost, time and other difficulties of interacting with suppliers.
- Firms can place orders with suppliers and receive confirmations electronically, eliminating paperwork and the delays and errors associated with manual processing of purchase orders.
- However, even greater benefits can be realized when interorganizational systems are combined with new methods of working with suppliers. An early successful interorganizational system is the Baxter ASAP system, which lets hospitals electronically order supplies directly from wholesalers.
- Technological innovations related to the commercialization of the Internet have dramatically decreased the cost of building electronic supply chain links.
- Computer-enabled procurement and online markets reduce input costs through a combination of: reduced procurement time and more predictable deliveries, which lowers the need for buffer inventories and reduces spoilage for perishable products; lower prices due to increasing price transparency and the ease of price shopping; and reduced direct costs of purchase order and invoice processing. These innovations are estimated to lower the costs of purchased inputs by 10% to 40%, depending on the industry.
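As a minimal, hypothetical sketch of the processing-cost savings described above, a purchase order can travel as structured data so that the supplier's system validates and confirms it without any re-keying. All names, fields, and figures below are illustrative assumptions, not taken from any real procurement system:

```python
# Hypothetical sketch of electronic purchase-order exchange.
# A PO travels as structured data (JSON), so the supplier's system
# can validate and acknowledge it without manual re-keying.
import json
from dataclasses import dataclass, asdict

@dataclass
class OrderLine:
    sku: str          # supplier's stock-keeping unit (illustrative)
    quantity: int
    unit_price: float

def build_purchase_order(po_number, buyer, lines):
    """Buyer side: serialise a purchase order for electronic transmission."""
    return json.dumps({
        "po_number": po_number,
        "buyer": buyer,
        "lines": [asdict(line) for line in lines],
        "total": round(sum(l.quantity * l.unit_price for l in lines), 2),
    })

def confirm_order(message):
    """Supplier side: parse, validate, and acknowledge the order."""
    po = json.loads(message)
    if not po["lines"]:
        raise ValueError("order must contain at least one line")
    return {"po_number": po["po_number"], "status": "confirmed",
            "total": po["total"]}

# One round trip: no paperwork, no transcription errors.
msg = build_purchase_order("PO-1001", "General Hospital",
                           [OrderLine("GLOVES-L", 200, 0.15),
                            OrderLine("SYRINGE-5ML", 500, 0.08)])
ack = confirm_order(msg)
```

The point of the sketch is the workflow, not the format: because both sides read the same structured message, confirmations, invoices, and inventory updates can be driven from it directly, which is where the savings on manual purchase-order processing come from.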
2. Changing Customer Relationships
- The Internet has opened up a new range of possibilities for enriching interactions with customers. Dell Computer has succeeded in attracting customer orders and improving service by placing configuration, ordering, and technical support capabilities on the web. It coupled this change with systems and work practice changes that emphasize just-in-time inventory management, build-to-order production systems, and tight integration between sales and production planning.
- Dell has implemented a consumer-driven build-to-order business model, rather than the traditional build-to-stock model of selling computers through retail stores, which gives Dell as much as a 10 percent advantage over its rivals in production cost.
- Some of these savings represent the elimination of wholesale distribution and retailing costs. Others reflect substantially lower levels of inventory throughout the distribution channel.
- However, a subtle but important by-product of these changes in production and distribution is that Dell can be more responsive to customers. When Intel releases a new microprocessor, as it does several times each year, Dell can sell it to customers within seven days, compared with eight weeks or more for some less Internet-enabled competitors.
- This is a non-trivial difference in an industry where adoption of new technology and obsolescence of old technology is rapid, margins are thin, and many component prices drop by 3-4% each month.
3. IT and Productivity
- Many micro-level studies have focused on the use of computerized manufacturing technologies. Several studies have found that the most productive metalworking plants use computer-controlled machinery. Researchers have also found, in a sample containing multiple industries, that plants where a larger percentage of employees use computers are more productive.
- Taken collectively, these studies suggest that IT is associated with substantial increases in output.
Role of Computerisation in decision making, analysis and planning
- Strategic planning helps an organization to stay afloat in the market despite competition. It facilitates making strategic moves in order to sustain competitive advantage in the market.
- The IT world is becoming more encompassing, with both an external and an internal focus. Today's IT perspective is shifting to take on joint ownership of the business's extended value network by helping businesses connect with customers in innovative ways and providing intelligent communication networks that serve their entire enterprise.
Conclusion
- Computers and computer networks act as the central nervous system of today’s enterprise. Today's regular business people aren’t just relying on them; they're directly administering, monitoring, and configuring them.
- While IT staff with specialized skills may focus on application development, integration, and support, today’s business professional requires information technology knowledge to navigate and operate IT systems, to design, customize, and test systems for competitive advantage, and to seek out and identify new solutions that can transform their business.
Thus a significant component of the value of IT is related, first, to the ability of computers to enable complementary organizational investments such as business processes and work practices; second, these investments in turn lead to productivity increases by reducing costs and, more importantly, by enabling firms to increase output quality in the form of new products or improvements in intangible aspects of existing products such as convenience, timeliness, quality, and variety.
Giving suitable examples, explain how and why IT and computers have contributed to increasing the productivity and competitiveness of organizations in doing business?
Information Technology is the study, design, development, implementation, support or
management of computer-based information systems, particularly software applications and
computer hardware.
IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit and retrieve information securely.
Today Information Technology and computerization are playing a major role in increasing the productivity and competitiveness of organizations in doing business.
Information technology (IT) impacts organizational characteristics and outcomes.
There are two principal performance-enhancing benefits of IT:
 Information efficiencies
 Information synergies
IT plays a major role in moderating the relationship between organizational characteristics (including structure, size, learning, culture, and inter-organizational relationships) and the most strategic outcomes: organizational efficiency and innovation.
A good example to take here is ATM technology.
ATM (Automated Teller Machine) technology is fast emerging as an important IT investment for banks.
World ATM installations are set to rise by 45 percent over the next few years till 2004, according
to a new report by Retail Banking Research. Currently there are over 800,000 machines
operating worldwide and this figure will rise to over 1,150,000 by 2004. Of the global figure, the
largest market was Asia-Pacific, which accounted for 253,000 installations, nearly a 32 percent
global share. Of these machines, the majority—142,500—were in Japan. North America has
become the second largest region with 221,000 machines. Western Europe has fallen to third
position with 219,000 ATMs. Both regions each hold over 27 percent of the world total. During
the past two years Latin America has significantly increased its share in the world market and
now has 82,500 installations, over 10 percent of the global total. There are a further 14,500
machines in the Middle East and Africa. Finally, the emerging markets of Eastern Europe
account for 11,500 terminals.
In a move to earn greater revenues, every vendor worth his salt is trying out innovative strategies. For example, Diebold HMA offers a total managed service called Total Implementation Solution (TIS). This gives banks a single window for procuring all their ATM-related needs. The basket of solutions includes ATM monitoring, software distribution for ATMs, cash management and network management.
Many Indian banks that were hampered by a lack of technology knowledge are now actively talking to ATM vendors about outsourcing their needs. For example, Bank of India recently signed an agreement with India Switch Company, a Diebold HMA group company, for outsourcing the setting up of ATMs. Other banks, especially PSU and co-operative banks, are expected to follow this trend.
As part of its strategy in offering innovative services, NCR is talking to the Railways in Mumbai
for deploying an ATM which could be used to dispense railway tickets. The focus is on letting
the customer use the ATM as a medium which can be used for non-cash transactions like
payment of bills, insurance payments, printing of statements or accessing the Internet. Adds
Rao, “The key idea is to get the customer used to these channels and then migrate him to
different low cost channels like the Internet. For example, a customer using a Web-enabled
ATM would be more likely to go in for, say, a service like Internet banking. Also, from the bank’s
point of view this would be more cost effective as a transaction over the Internet would only cost
the bank approximately Rs 10-12 per transaction.”
But in spite of all the positive signals, there are problems galore which, if not set right, can get in the way of ATM growth rates in India. One is the familiar infrastructure problem. Others are the many different permissions required from different authorities: municipal authorities, building society permission, permission for locating VSATs on top of a building, permission from the local telecom provider, etc. The earlier rapid deployment of ATMs was possible because no permission was required from the Reserve Bank of India, but today this is mandatory. Industry experts point out that this was done because a number of banks set up ATMs without adequate funds. The RBI wanted to check the status of banks before allowing them to set up ATMs.
Most banks today are looking at ATMs not only as a delivery channel that brings in customers in droves but also as one that significantly reduces transaction costs. But whatever form the ATM assumes in future, one thing cannot be ignored by any bank: ATMs have come to stay.
In what ways will the use of IT and the Internet enhance your job function as a middle manager? Discuss with examples, with respect to either the HRD function, the Marketing function or the Finance function.
Human Resource Management Systems (HRMS, EHRMS), Human Resource Information Systems (HRIS), HR Technology or, as they are also called, HR modules, form an intersection between human resource management (HRM) and information technology. They merge HRM as a discipline, and in particular its basic HR activities and processes, with the information technology field, as the planning and programming of data processing systems evolved into standardised routines and packages of enterprise resource planning (ERP) software. On the whole, these ERP systems have their origin in software that integrates information from different applications into one universal database. The linkage of the financial and human resource modules through one database is the most important distinction from the individually and proprietarily developed predecessors, and makes this software application both rigid and flexible.
The HR function's reality:
All in all, the HR function is still to a large degree administrative and common to all organizations. To varying degrees, most organizations have formalised selection, evaluation, and payroll processes. Efficient and effective management of the "Human Capital" Pool (HCP) has become an increasingly imperative and complex activity for all HR professionals. The HR function consists of tracking innumerable data points on each employee, from personal histories, skills, capabilities and experience to payroll records. To reduce the manual workload of these administrative activities, organizations began to electronically automate many of these processes by introducing innovative HRMS/HCM technology. Due to the complexity of programming, capability requirements and limited technical resources, HR executives rely on internal or external IT professionals to develop and maintain their Human Resource Management Systems
(HRMS). Before the "client-server" architecture evolved in the late 1980s, HR automation came largely in the form of mainframe computers that could handle large volumes of data transactions. Because of the high capital investment necessary to purchase or program proprietary software, these internally developed HRMS were limited to medium and large organizations that could afford internal IT capabilities. The advent of client-server HRMS allowed HR executives for the first time to take responsibility and ownership of their systems. These client-server HRMS are characteristically developed around four principal areas of HR functionality: 1) payroll, 2) time and labour management, 3) benefits administration, and 4) HR management.
The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic paycheques and employee tax reports. Data is generally fed from the human resources and time-keeping modules, supporting automatic deposit and manual cheque-writing capabilities. Sophisticated HCM systems can set up accounts payable transactions from employee deductions or produce garnishment cheques. The payroll module sends accounting information to the general ledger for posting subsequent to a pay cycle.
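The steps above (gather time and attendance data, apply deductions and taxes, produce a periodic paycheque) can be sketched as a minimal, hypothetical calculation; the flat tax rate and fixed benefits deduction are illustrative assumptions, not rules from any real payroll module:

```python
# Hypothetical payroll sketch: time-keeping data in, net paycheque out.
# The rates below are illustrative, not real tax or benefit rules.
def run_payroll(hours_worked, hourly_rate, tax_rate=0.20, benefits=50.0):
    """Compute one pay period's cheque from time and attendance data."""
    gross = hours_worked * hourly_rate
    tax = gross * tax_rate            # flat withholding, for illustration
    net = gross - tax - benefits      # benefits: fixed per-period deduction
    return {"gross": round(gross, 2), "tax": round(tax, 2),
            "net": round(net, 2)}

# Data "fed from the time-keeping module" for one employee:
cheque = run_payroll(hours_worked=160, hourly_rate=12.5)
```

A real payroll module would layer multiple deduction types, tax tables, and general-ledger postings on top of exactly this gross-to-net skeleton.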
The time and labour management module applies new technology and methods (time collection devices) to cost-effectively gather and evaluate employee time and work information. The most advanced modules provide broad flexibility in data collection methods, as well as labour distribution capabilities and data analysis features. This module is a key ingredient in establishing organizational cost accounting capabilities.
The benefits administration module permits HR professionals to easily administer and track employee participation in benefits programs, ranging from healthcare providers, insurance policies and pension plans to profit-sharing or stock option plans.
The HR management module is a component covering all other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records and other related activities. Leading-edge systems provide the ability to "read" applications and enter relevant data into applicable database fields, notify employers and provide position management and position control. The human resource management function involves the recruitment, placement, evaluation, compensation and development of the employees of an organisation. Initially, businesses used computer-based information systems to: (1) produce paycheques and payroll reports; (2) maintain personnel records; (3) analyse the use of personnel in business operations. Many organisations have gone beyond these traditional functions and developed human resource management information systems, which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, training and development, and health, safety and security.
An Integrated Companywide Computerization is the only way of deriving the benefits of
information technology - discuss
Yes – this is true.
Computers have brought in a lot of automation. They have made our work simpler and easier.
When a company first starts up, it usually takes some time to stand on its feet. The first steps are usually focused on getting processes right and then on breaking even so that profits can be made. The only way to do that is to grow.
As time goes by, the company invariably grows. As this happens, the way things are done needs to change. A process that was fine for a group of 20 would not be fine for a group of 2,000. The way things are done then changes. The company expands its presence to many locations. However, to succeed, all the units need to make an effort in the same direction. There have to be proper systems in place which will help the organization keep a check and control on the things that are and should be happening.
IT systems help connect this huge number of people. Communication is fast, and as we all know, that is a very crucial aspect. IT helps a company make efforts in the right direction and gives it the ability to gauge those efforts. It also enables the top management to guide their ship in the right direction.
A major benefit can be derived by implementing an integrated company-wide system. This would help do away with the other, smaller systems which, by their nature, cause compatibility and functional issues. Such systems do not encourage a smooth flow of data, which has a direct impact on turnaround times and quality. MIS reports are also difficult to generate from them.
Computers and Communications seem to merge together seamlessly
Computers are all about people communicating with other people, in any way they can and for many purposes: to exchange pleasantries and argue, engage in intellectual discourse, conduct commerce, exchange knowledge, share emotional support, make plans, brainstorm, gossip, feud, fall in love, find friends and lose them, play games, etc.
The activities carried out today over computers and the Internet are many and complex, in a way that could not have been imagined at the beginning. Certainly one of the emerging features is their relational and communicative nature: the initial centrality of information exchange is moving to the building of online information and communication networks.
The relationship between the communications and computing industries, which began almost at the birth of the modern computing industry, has grown stronger and closer over the years. Just as computing devices are becoming more communications-driven, communication networks are becoming more computing-centric. Computer-enabling services such as broadband and high-speed wireless data will succeed or fail based on how well they connect many millions of people in useful ways.
For those who use computers as a routine element of their work, the impact of computer
media on communication with others is increasingly obvious. For these individuals, the
computer now mediates a large percentage of their daily interaction, in part because
computer media facilitate contacts with people they might not otherwise communicate with or
even know of.
Today's keyboard-oriented computer conferencing and electronic mail are just two of a variety of existing and prospective computer media that may enhance and possibly change the ways in which people communicate with each other. Computer media, including hypermedia, multi-modal documents incorporating combinations of text, graphics, image, voice, video and other presentation formats, voice-into-text concurrent interaction, and virtual reality, have brought radical changes in the way we communicate.
Computer-mediated communication is the exchange of data across two or more networked computers, via formats such as instant messages, e-mails and chat rooms, between two or more individuals. The way humans communicate in professional, social, and educational settings varies widely, depending not only upon the environment but also upon the medium in which the communication occurs. Anonymity, and in part privacy and security, depend more on the context and the particular program being used.
Computer-aided communication is improving the productivity of the workforce in organizations, saving time and money. Computers have certainly improved the speed at which we can communicate. Education, medicine, sports and the personal lives of human beings have all been impacted by the seamless transfer of information between individuals through the use of computers.
Continuing progress in the area of computer voice recognition opens up possibilities for additional computer media, including one that might be called "voice-into-text concurrent interaction", in which people talk to each other verbally but read a real-time transcript of the text rather than listening. The major advantage of voice over text, given highly accurate transcription of voice into a textual transcript, is the relative speeds with which people can talk and write, read and listen. Other advantages can be found in reduced requirements for synchrony and turn-taking, and increased opportunities for review relative to purely verbal (face-to-face and telephone) media.
Perhaps the most ambitious goals for computer mediation of human interaction are found in
recent efforts to create "virtual realities" via computer. Virtual reality is, in some sense, a
covering term for a wide range of experiments in computer interfaces, including video
goggles, motion sensing data gloves, and other technologies that attempt to bring an
observer into a dynamic and high bandwidth frame.
The role of the computer in human communication is one of an integrator. Where there were
once clearly interpersonal media that were obviously different from what were clearly mass
media there is now a growing continuum that allows any individual to interact, almost as if on
a one to one basis, with a large audience. The effect of this growing range of media will be
an increasingly complex media environment in which individuals will have many choices
depending on the kind of interaction they want to have or are constrained to having, the kind
of message they want to deliver or receive, and the kind of audience they want to reach or be
a part of.
Internet technology and electronic commerce have brought manufacturers and customers very close to each other. This should result in better customer relationship management and supply chain management. Please explain.
Electronic commerce and the Internet are fundamentally changing the nature of supply chains,
and redefining how consumers learn about, select, purchase, and use products and services.
The result has been the emergence of new business-to-business supply chains that are
consumer-focused rather than product-focused. They also provide customized products and
services.
E-commerce impacts supply chain management in a variety of key ways. These include:
 Cost efficiency: E-commerce allows transportation companies of all sizes to exchange
cargo documents electronically over the Internet. E-commerce enables shippers, freight
forwarders and trucking firms to streamline document handling without the monetary and
time investment required by the traditional document delivery systems.
By using e-commerce, companies can reduce costs, improve data accuracy, streamline
business processes, accelerate business cycles, and enhance customer service. Ocean
carriers and their trading partners can exchange bill of lading instructions, freight
invoices, container status messages, motor carrier shipment instructions, and other
documents with increased accuracy and efficiency by eliminating the need to re-key or
reformat documents. The only tools needed to take advantage of this solution are a
personal computer and an Internet browser.
 Changes in the distribution system: E-commerce will give businesses more flexibility
in managing the increasingly complex movement of products and information between
businesses, their suppliers and customers. It will also tighten the link between
customers and distribution centers, allowing customers to manage the increasingly
complex movement of products and information through the supply chain.
 Customer orientation: E-commerce is a vital link in the support of logistics and
transportation services for both internal and external customers. E-commerce will help
companies deliver better services to their customers, accelerate the growth of the ecommerce initiatives that are critical to their business, and lower their operating costs.
Using the Internet for e-commerce will allow customers to access rate information, place
delivery orders, track shipments and pay freight bills.
E-commerce makes it easier for customers to do business with companies: Anything
that simplifies the process of arranging transportation services will help build companies'
business and enhance shareholder value. By making more information available about
the commercial side of companies, businesses will make their web site a place where
customers will not only get detailed information about the services the company offers,
but also where they can actually conduct business with the company.
Ultimately, web sites can provide a universal, self-service system for customers.
Shippers can order any service and access the information they need to conduct
business with transportation companies exclusively online. E-commerce functions are
taking companies a substantial step forward by providing customers with a faster and
easier way to do business with them.
 Shipment tracking: E-commerce will allow users to establish an account and obtain
real-time information about cargo shipments. They may also create and submit bills of
lading, place a cargo order, analyze charges, submit a freight claim, and carry out many
other functions. In addition, e-commerce allows customers to track shipments down to
the individual product and perform other supply chain management and decision support
functions. The application uses encryption technology to secure business transactions.
 Shipping notice: E-commerce can help automate the receiving process by
electronically transmitting a packing list ahead of the shipment. It also allows companies
to record the relevant details of each pallet, parcel, and item being shipped.
 Freight auditing: This will ensure that each freight bill is efficiently reviewed for
accuracy. The result is a greatly reduced risk of overpayment, and the elimination of
countless hours of paperwork, or the need for a third-party auditing firm. By intercepting
duplicate billings and incorrect charges, a significant percent of shipping costs will be
recovered. In addition, carrier comparison and assignment allows for instant access to a
database containing the latest rates, discounts, and allowances for most major carriers,
thus eliminating the need for unwieldy charts and tables.
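The duplicate-billing and rate checks described under freight auditing can be sketched in a few lines of code. The following Python sketch is purely illustrative: the bill numbers, carrier names, and rate table are invented, and a real audit system would draw contracted rates from a carrier database.

```python
# Hypothetical freight-audit sketch: flag bills presented twice and bills
# charged above the contracted rate. All names and figures are invented.
def audit_freight_bills(bills, rate_table):
    """Return (duplicates, overcharges) for a list of freight bills.

    bills      -- list of dicts with 'bill_no', 'carrier', 'amount'
    rate_table -- dict mapping carrier name -> contracted rate for the lane
    """
    seen, duplicates, overcharges = set(), [], []
    for bill in bills:
        if bill["bill_no"] in seen:              # same bill presented twice
            duplicates.append(bill["bill_no"])
        seen.add(bill["bill_no"])
        contracted = rate_table.get(bill["carrier"])
        if contracted is not None and bill["amount"] > contracted:
            overcharges.append((bill["bill_no"], bill["amount"] - contracted))
    return duplicates, overcharges

bills = [
    {"bill_no": "FB-101", "carrier": "OceanCo", "amount": 1200.0},
    {"bill_no": "FB-101", "carrier": "OceanCo", "amount": 1200.0},  # duplicate
    {"bill_no": "FB-102", "carrier": "TruckCo", "amount": 950.0},   # over rate
]
rates = {"OceanCo": 1200.0, "TruckCo": 900.0}
dups, overs = audit_freight_bills(bills, rates)
print(dups, overs)   # ['FB-101'] [('FB-102', 50.0)]
```

Intercepting such exceptions automatically is what eliminates the hours of manual review mentioned above.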
 Shipping Documentation and Labeling: There will be less need for manual
intervention because standard bills of lading, shipping labels, and carrier manifests will
be automatically produced; this includes even the specialized export documentation
required for overseas shipments. Paperwork is significantly reduced and the shipping
department will therefore be more efficient.
 Online Shipping Inquiry: This gives instant shipping information access to anyone in
the company, from any location. Parcel shipments can be tracked and proof of delivery
quickly confirmed. A customer's transportation costs and performance can be analyzed,
thus helping the customer negotiate rates and improve service.
Giving suitable examples explain how and why IT & Computers have contributed in
increasing productivity and competitiveness of organizations in doing business.
The Role of Computers in Business
Information Technology, like language, affects us on many levels and has fast become integral
to all of our lives. In this course we aim to strike a balance in studying both the social and
commercial forces of Information Technology, and networking, in particular.
I am quite certain that each and every one of you has witnessed first-hand the impact that
computers and computer networks have had on business.
In fact, by now the role of computers in business has risen to the point where computer
networks, even more than personnel, are synonymous with the corporate entity. Is this not
true?
The following example of Dell Computers illustrates how IT and computers have acted as a
catalyst for productivity. Dell Computers is not so much a company in which a group of people
makes and sells personal computers as it is a collection of loosely affiliated computer systems
that, upon receiving an order or customer service request (all online!), come together in a
linear process to do a job. Cisco Systems, likewise, isn't so much a manufacturer of switches
as it is a trusted brand name and expert marketer that happens to use the Internet, and a
sophisticated network of networks, to weave together suppliers, manufacturers, and
distributors into the coordinated, fully branded, fully customized virtual entity that we know as
Cisco. During the recession period of 1999, Cisco's response involved rationalizing its supply
base, leaving capital-intensive subcontractors to squeeze already razor-thin margins just to
participate in the new, leaner, and ever-responsive sales network.
Computers and computer networks act as the central nervous system of today’s enterprise.
Today's regular business people aren’t just relying on them, they're directly administering,
monitoring, and configuring them. While IT staff with specialized skills may focus on application
development, integration, and support, today’s business professional requires information
technology knowledge to navigate and operate IT systems, to design, customize, and test
systems for competitive advantage, and to seek out and identify new solutions that can
transform their business.
Planning & Decision-Making Techniques - the Role the Computer Plays
A number of changes have occurred in recent years in the structure, policies and operations of
many organizations as a direct result of the use of computers.
Planning, decision making and organizing activities within an organization can be enhanced
with the use of computers. Each of these activities can be briefly explained.
1) Planning with computers: Businesses have expanded in recent years. As firms have
geared themselves to the various complexities that arise within organizations due to this
growth, the need of the hour is better planning tools and techniques.
2) Causing faster awareness of problems and opportunities: Computers can quickly signal
out-of-control conditions, and the corrective action to be taken, when actual performance
deviates from the plan. Exhaustive current and historical, internal and external data can
be analysed with computers.
3) Enabling managers to devote more time to planning: Computers can relieve the
manager of clerical data-gathering tasks so that more time and attention can be given to
analytical and intellectual matters.
4) The computer gives the manager the ability to evaluate more of the possible alternatives
that may have a bearing on the outcome. It enables managers to do a better job
of identifying and assessing the economic and social effects of different courses of
action. Computers can furnish managers with planning information that would not have
been possible some years ago.
5) Computer information systems now regularly support the planning and decision-making
activities of managers in a number of business areas. For example, in marketing, data may
be gathered that show consumer preferences from consumer surveys, results of market
testing in a limited geographic area, and past sales data on similar products in the
industry.
With the introduction of quick-response computer systems, however, information may be
processed and communicated to top personnel quickly; thus reaction time may be
drastically reduced.
Prior to the introduction of computers, data processing activities were generally handled
by manufacturing, marketing and finance departments on a decentralized basis. But with
the introduction of computers it has become feasible for businesses to use a centralized
approach to data processing.
Computers can also be used to apply decision-making techniques such as
PERT/CPM, linear programming and simulation, especially in complex
problem areas like research. In fact, computers have made it possible to eliminate routine
procedures and to let managers use their creative abilities in more challenging and
rewarding ways.
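To make the PERT/CPM reference concrete, here is a minimal sketch of the critical-path calculation on a tiny, made-up project network. The task names and durations are invented for illustration; real CPM tools also compute slack and handle calendars.

```python
# Illustrative CPM sketch: find the longest (critical) path through an
# acyclic task network. Tasks and durations are invented for the example.
def critical_path(tasks):
    """tasks: dict name -> (duration, [predecessor names]).
    Returns (project length, critical path as a list of task names)."""
    finish = {}  # earliest finish time per task

    def ef(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = dur + max((ef(p) for p in preds), default=0)
        return finish[name]

    for name in tasks:
        ef(name)
    # walk back from the latest-finishing task along the latest predecessors
    path = [max(finish, key=finish.get)]
    while tasks[path[-1]][1]:
        _, preds = tasks[path[-1]]
        path.append(max(preds, key=lambda p: finish[p]))
    path.reverse()
    return finish[path[-1]], path

project = {
    "design": (3, []),
    "build":  (5, ["design"]),
    "test":   (2, ["build"]),
    "docs":   (1, ["design"]),
}
length, path = critical_path(project)
print(length, path)   # 10 ['design', 'build', 'test']
```

Here a manager sees at once that "docs" has slack, while any delay in design, build or test delays the whole project.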
Before the advent of computers, companies had to depend heavily on the expertise of their
personnel. The core functions of distribution, invoicing and providing market data were done
manually, and there was a heavy dependency on the calibre of the accounts staff, sales
coordinator and warehouse in-charge. It was time- and energy-consuming. With the advent of
computers, the broad functions of distribution, compilation of data for analytical purposes,
invoicing and planning of dispatches, which hitherto used to be done manually at a slower
pace and higher cost, have been simplified and speeded up. Take the airline industry, for
example: before the advent of desktop computers, if the entire process of ticketing,
reservations and onward bookings had to be done manually, there would have been a loss of
precious man-hours which could otherwise have been deployed to increase business.
What are the primary functions of the Marketing, Finance or Human Resource department of
an organization?
Explain how IT can help different departments in the organization interact?
Basic Marketing Functions:
Marketing encompasses many different parts of your business. Most people equate marketing
with advertising. All advertising is a form of marketing, but marketing is much more
than just advertising. That's why we need to cover the basic marketing functions, so you can
see why you'll spend 80% of your time marketing your business. The whole reason for
marketing is to bring people to your business and exchange a product or service for more than
its cost to you. Many businesses that don't understand the basic functions of marketing stop
doing it once the ads are placed in the local media. Advertising is merely the first step in the
marketing process.
Primary functions of the Human Resource department of an organization:
Despite the widespread use of computer-based human resource information systems (HRIS),
the availability of internal support for users also represented a critical condition. Overall, the
findings of this study provide support for a model of HRIS success and present a basis for
planning, designing, and implementing successful systems. Finally, this study brings with it new
questions for HRIS research.
Information technology has been cited as a critical driver of HR's transition from a focus on
administrative tasks to a focus on serving as a strategic business partner. This strategic role not
only adds a valuable dimension to the HR function, but also changes the competencies that
define the success of HR professionals. Interviews were conducted with HR representatives
from 19 firms to examine the linkage between electronic human resources (e-HR) and the
reshaping of professional competence in HRM. Based on the findings, we draw implications for
the development of HR competencies and identify learning strategies that HR professionals can
utilize to fulfill their changing roles and responsibilities. Dynamic trends in the external business
environment, in the challenges that companies face, and in the nature of HR itself demand that
HR departments develop new capabilities and that HR professionals develop new
competencies. BAE Systems acted on that understanding by providing a comprehensive HR
professional development program to enhance the competencies of its HR professionals in
order to encourage better business performance. Pre- and post-program measures and
extensive qualitative interviews about HR's impact on business performance evidence the
effectiveness of this comprehensive approach to the development of HR professional
competencies.
If HR were really strategically proactive: Present and future directions in HR's contribution to
competitive advantage
Current business conditions mandate greater competitive advantage from HR agendas and
processes. To add greater competitive advantage, HR must contribute strategic value against
criteria from customer and capital markets. HR can add strategic value either reactively or
proactively. In its strategically reactive mode, HR assumes the existence of a business strategy
and adds value by linking HR practices to the business strategy and by managing change. In its
strategically proactive mode, HR creates competitive advantage by creating cultures of creativity
and innovation, by facilitating mergers and acquisitions, and by linking internal processes and
structures with ongoing changes in the marketplace. This article defines and describes these
specific practices through which HR can contribute to greater competitive advantage.
Information Technology Implementation:
Given constant innovation in technology, there is considerable pressure on most organizations
to make their operational, tactical and strategic processes more efficient and effective. An
increasingly attractive means of improving these processes lies in today's wide variety of
information technologies. The term information technology (IT) is used here in a broad sense:
it refers to any artifact whose underlying technological base is comprised of computer or
communications hardware and software. In many organizational environments, such as
manufacturing firms, over half of a firm's capital expenditures involve IT.
How can IT help different departments in the organization interact?
The key information processing building blocks for yesterday's organizations were typewriters,
carbon paper, filing cabinets, and a government mail service. The constraints of these crude
information processing technologies often required workers to be located under one roof and
organizations to arrange themselves as efficient, but relatively change-resistant, management
hierarchies. Those legacy organization designs have persisted despite fundamental changes in
information processing technology. Tomorrow's successful organizations will be designed
around the building blocks of advanced computer and communications technology. The success
of these organizations will come from the ability to couple to, and decouple from, the networks
of knowledge nodes. These networked organizations will link, on an as-needed basis, teams of
empowered employees, consultants, suppliers, and customers. These ad hoc teams will solve
one-time problems, provide personalized customer service, and then, as lubricant for
subsequent interactions, evaluate one another's performance. In the network organization,
structure will dominate strategy, credentials will give way to performance and knowledge, and
human resources will be the only sustainable advantage. Despite the promise, networked
organizations present difficult information management challenges. Among these are
developing a flexible and efficient information architecture, establishing new values, attitudes,
and behaviors concerning information sharing, building databases that can provide integrated
customer support on a worldwide basis, and protecting personal freedoms and privacy. Here,
we explore the opportunities and challenges that networked organizations will present for
information technology management.
The key characteristics of information technology are: (1) it is steadily increasing in value;
(2) academic demand for information technology and computing power is virtually unlimited;
(3) the per-unit price of information technology is declining rapidly; and (4) the total cost of
owning and maintaining these systems is steadily rising. In other words, the potential benefits
are truly revolutionary.
The Value of IT is Increasing
Information technology has tremendous potential. Computers can already talk; they process
visual images; and they will even have the capability to sense smells in a few short years. It
would not be unreasonable within the decade to have our personal computers wake us up in
the morning, read us the newspaper, report on the weather, and download the traffic report to
our car before we leave for work. Scholarly scenarios have computers assessing prospective
students' knowledge base for course placement; managing curricula, interactions, data, and
visualizations; and building lifelong connections to scholarship through distance learning
technologies. The potential value of information technology is limited only by our imagination
and our willingness to invest in change. What was optional only a decade ago is now so
valuable it is a necessity. Neither campus libraries, nor laboratories, nor research facilities would
be viable today without computers.
Over the last decade we've witnessed revolutionary changes in the level of computing and
networking power that resides on the faculty desktop. The technology does so much more than
it did a few years ago--the computer is already indispensable.
The problem is that many people don't realize its increasing value because they have
incorporated the expectation for constant improvements into the very nature of information
technology. For example, the Commerce Department estimates
that 70 percent of America's top 500 companies use artificial intelligence (AI) in their computing.
The quandary is that this innovation doesn't get the credit it's due. Whenever artificial
intelligence works, it ceases to be called AI; instead, it becomes an integral part of the system
and is then taken for granted. This phenomenon appears to be common whenever an explicit
valuation of information technology is called for. Nevertheless, the implicit evaluation is
changing. Just as we would be very reluctant to give up our heating, air-conditioning, or phone,
we are quickly becoming equally loath to give up our computers.
This article was written on a computer that corrects my spelling as I type, monitors my e-mail
communications in the background, reminds me of important appointments, travels easily in my
briefcase, and scans the Wall Street Journal daily for articles relating to information
technology--and it cost less to buy than my first computer, purchased ten years ago. More to
the point, that original computer wasn't able to do any of these things. This computer is not
just more valuable to me than my previous ones; it has become critical to what I do.
Each successive generation of information technology brings new levels of performance and
functionality that weren't there previously. There is very little information technology on
campuses today that couldn't be replaced with something that is both less expensive and
superior in performance and function. It seems clear that the value of information technology is
increasing from year to year, as well as its respective value to our institutions. IT supports
teaching, learning, communications, and collaboration in ways that simply weren't available only
a few years ago.
The Aggregate Value of IT
The total value of information technology is greater than the sum of its parts. To the extent that
enterprise-wide systems function in aggregate, like ecosystems, much of IT's value grows
exponentially as its supporting infrastructure and interconnections grow richer. For example, the
value of a departmental e-mail system is enhanced if the entire campus community is also on
the network, and is greater still if the campus is connected to the Internet. Similarly, connecting
faculty to a campus network would be valuable, but the value of connecting the entire campus
community of faculty, staff, and students would be much greater still.
In these cases there is a multiplying effect on the value accrued to the institution that goes
beyond the sheer number of users. There is a synergistic aspect to this aggregation of users
and resources. It appears that the cost/benefit curve for technology investments is a step
function, where particular levels of investment can produce superior value.
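The multiplying effect described above can be shown with a toy calculation: the number of possible one-to-one links among connected users grows roughly with the square of the user count, so each new group of users adds disproportionate value. The figures below are purely illustrative.

```python
# Toy illustration of the multiplying effect of connecting more users:
# the count of distinct one-to-one links grows much faster than user count.
def possible_pairs(users: int) -> int:
    """Number of distinct one-to-one links among `users` connected people."""
    return users * (users - 1) // 2

for n in (10, 100, 1000):
    print(n, "users ->", possible_pairs(n), "possible links")
# 10 users -> 45 links; 100 -> 4950; 1000 -> 499500
```

Connecting the entire campus community rather than the faculty alone thus multiplies the number of possible interactions far beyond the increase in head count.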
The challenge for financial planners is to target the specific level of functionality desired and
identify the minimum investments needed to move from one plateau to the next. The demand
for information technology is driven by more than just the need to answer questions. Successful
implementation almost always creates new demand and expectations that grow exponentially.
Computationally intense researchers can bring any quantity of CPU power to its knees simply
by relaxing a few restrictions in their models. The challenge is to accept this exponential growth
in demand and work to develop financial and management strategies to accommodate it. The
academic value of IT systems is growing--it is only natural to expect individuals, departments,
schools, and institutions to desire more of it. The fact that they do is an affirmation that our
scholarly values are strong and that our campuses are vigorous.
Question: How and why IT and computers have contributed in increasing productivity
and competitiveness of organizations in doing business?
Giving suitable examples, explain how and why IT and computers have contributed in
increasing productivity and competitiveness of organizations in doing business.
Information Technology has occupied a very important place in today’s world. Be it any
industry, there is no one who wants to stay out of the race. Everyone is fighting for their survival
and the bottom line would be ‘Let the best man win’.
Before the advent of Information Technology:
 Increased paperwork
 Lack of storage space
 Communication problems - high telephone costs, etc.
With computerized transactions, one single database is used by all the branches of a
particular bank, so retrieval of customer information and of the transactions carried out can
take place in no time, resulting in quick dealings and negligible loss of time.
Let’s take the simple example of the ATMs (Automated Teller Machines) deployed at various
locations in India. ATMs reduce the workload of the banking staff to a great extent:
previously, customers had to deposit and withdraw cash or cheques at the counters. Now the
ATMs do the job, and the staff can be utilized for more substantial work.
E-mail, video conferencing etc. have also brought different branches of organizations closer
to each other as communication media have become much more advanced. Work can be
done at a brisk pace, as reminders and other details can be mailed easily, and savings on
huge telephone bills are possible.
Printouts of customer transactions, e.g. the transaction summary, can be given out as per the
customer’s requirement instantly, without delay. This results in customer satisfaction and the
avoidance of long queues.
Analysis of data and comparison of balance sheets etc. is possible in no time, as all the data
is present in the computer; accurate information and comparisons can be produced, and the
unique selling points of the company, as well as its weaknesses, can be detected.
Previously, all the work, say in Finance, Marketing, Human Resources or any other
department for that matter, would be done manually. A lot of paperwork was involved.
Repetitive information could not be avoided, and communication between departments was
possible only through telephone lines. Now, for example, the Human Resource department
keeps all the information about an employee in the computer, available at its fingertips.
Retrieval and updating of the data is much faster. One branch, say in London, can access the
details of an employee whose records are kept in another, say Mumbai.
There is hardly any paperwork required. Bank transactions passed through the computer can
be retrieved as and when required using different criteria. Less manpower is required, as one
operator can handle the work of multiple workers, resulting in financial savings.
Computers can calculate amounts and figures accurately. The work is never monotonous for the
computer.
It can give accurate results every time, something which a human being is never able to do all
the time. Time punching machines and computerized attendance musters can be implemented
so that accurate attendance sheets can be produced without manipulation.
Contributed by - MFM – SEM-III ------------------Roll Nos. 21 - 30
-- Space for Notes --
Question: Contribution of IT and Computers in increasing productivity and to get a
competitive edge
Introduction
Information Technology (IT), a particular component of a computer-based information system,
is rapidly making inroads into the financial industry. The framework for the development of
computer-based revenue systems was conceived in the last few decades of the past century.
It encompasses all required functions, including fraud-control mechanisms. Much software
has been developed based on these concepts.
Information Technology in the Banking Industry
While the banking fraternity has shown phenomenal growth in terms of revenue and profit
margins, the fact that the banking environment has always been a paper-ridden industry has
been haunting one and all. It has also been accepted that, due to these traditional methods of
functioning, inefficiencies remained rampant throughout the banking channel, resulting in
higher costs and obstacles to true customer relationship building. To remain competitive and
speed money movement within their organizations, banks started to focus on automation tools
that could drive further efficiencies - for their customers and for their internal operations. The
advancement of Information Technology has challenged the industry to think outside the box
and develop concepts such as the following.
E-banking
E-banking has enabled banks to offer instant solutions to customer needs to a great extent.
Customers have migrated to this mode of banking from branch banking, which has freed up a
substantial number of staff at the branches, who can be utilized to garner more business.
CRM Tools
Having an effective CRM tool has played a major role in enhancing sales opportunities and in
directing employee attention to tasks that drive revenue.
Image based document processing
In the traditional method of document processing, banks would collect documents from
customers, prepare files and then circulate the files between departments, which was
time-consuming. Under the new process, documents, once received, are scanned and input
into a workflow system, which is then updated by the processing departments accordingly.
This has led to a substantial reduction in processing time.
Image based Cheque deposit slip
In the traditional method, customers depositing cheques through ATMs were given slips
mentioning the amount only; hence the customer had to retain the slip along with the cheque
details separately. With the new system in place, when the customer deposits a cheque, the
system scans the instrument and prints a copy of it on a slip, along with the details of the
cheque. This is more convenient and reassuring for the customer.
Internet Security Device
Growth in technology can be seen in both a positive and a negative light. The positive aspect
is greater customer access to banking systems; the negative aspect is the vulnerability of the
system to fraud and hacking. To counter such situations, banks have come out with security
devices which generate additional passwords over and above the customer's login details, in
order to add more security while remaining customer-friendly.
Conclusion
Use of information technology is becoming more and more common in every field, and the
banking industry is not lagging behind; rather, it is just making its entry. We cannot tell just
now what lies in store. Day by day, new horizons are opening up to help everyone embrace
the technology and make banking easier, more productive and more competitive. Splendour
and excellence will be achieved through practice, patience and perseverance.
Contributed by - MFM – SEM-III ------------------ Roll No – 81 to 90
-- Space for Notes --
Question: Using specific examples, describe how computerization would help your
department perform more efficiently and effectively. Also explain how computerization
will help better decision making, analysis and planning?
Industry No. 1: TELECOM
Department: Credits & Collections
Activity: Payment reconciliation for a Corporate
Methodology:
In the current scenario, to identify unpaid invoices for a particular corporate, payment
reconciliation is done manually by outsourced personnel from raw data retrieved from the
system. This is a time-consuming process and in certain cases could take up to a few days,
depending upon the number of cellular connections the corporate has acquired. Only on the
basis of this information can the front-end personnel revert to the corporate on the outstanding.
The issue here is that, in the majority of cases, it is difficult to identify the actual unpaid
invoice from the raw data, which in turn affects our response time to the corporate. Since such
data is critical to a corporate, a dedicated team is assigned to complete this activity. This has
a direct impact on cost, time and productivity.
Snapshot: [removed]
Objective: To design a system that automates detailed reconciliation for multiple accounts
without the task having to be done manually, thus saving time, improving productivity and
reducing cost.
Snapshot: [removed]
Benefits:
Decision making: Information or confirmation regarding outstanding or payment history can be
provided to a corporate in a short period of time thus enabling the team to take decisions in
terms of settlement etc.
Analysis: This automated system can provide data not only for our company but also for the corporate, to analyze the following:
 Ageing of the outstanding
 How many months are unpaid?
 How many such companies have an old outstanding?
Planning:
Depending on the ageing of the outstanding, discuss the same with the company, thus avoiding any accounts having a very old outstanding in the future.
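The ageing analysis described above can be sketched in code. This is a minimal illustration with made-up invoice records and bucket boundaries; the operator's actual billing schema and rules would differ.

```python
from datetime import date

# Hypothetical invoice records: (corporate, invoice_date, amount, paid).
# Illustrative data only, not a real billing extract.
invoices = [
    ("Acme Ltd", date(2008, 3, 10), 12000, False),
    ("Acme Ltd", date(2008, 6, 10), 15000, False),
    ("Beta Corp", date(2008, 7, 10), 9000, True),
    ("Beta Corp", date(2007, 12, 10), 4000, False),
]

def ageing_report(invoices, as_of):
    """Bucket unpaid invoice amounts by age in days (0-30, 31-60, 61-90, 90+)."""
    buckets = {"0-30": 0, "31-60": 0, "61-90": 0, "90+": 0}
    for corporate, inv_date, amount, paid in invoices:
        if paid:
            continue  # only unpaid invoices count toward the outstanding
        age = (as_of - inv_date).days
        if age <= 30:
            buckets["0-30"] += amount
        elif age <= 60:
            buckets["31-60"] += amount
        elif age <= 90:
            buckets["61-90"] += amount
        else:
            buckets["90+"] += amount
    return buckets

print(ageing_report(invoices, as_of=date(2008, 7, 31)))
# → {'0-30': 0, '31-60': 15000, '61-90': 0, '90+': 16000}
```

A report like this answers the bullets directly: how old the outstanding is, and which accounts carry very old unpaid amounts.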
Other benefits include:
 Reduced outsourcing cost
 Quicker payment processing
 Data security
 Improved customer service
 Increased productivity
 Enhanced business relations
 Time saved
Industry No. 2: BANKING
Department: Credits Administration
Activity: Management of various facilities provided to corporates
Banks play a very significant role and are key infrastructure of the financial sector of any economy. Different types of computer packages are available to automate the various activities of any department in the bank. Still, there are some problems, created by system constraints in my department, which affect day-to-day functional activities.
The same can be illustrated by an example as under:
1. Current situation:
In the department, we monitor the portfolios of various corporate clients. There is a system in which we maintain online the various facilities given to a particular client. Among these is a loans facility (short tenor and long tenor). Clients are given various long-tenor loans which can be drawn in tranches, each with its own repayment schedule, e.g. 24, 60 or 84 monthly/quarterly installments.
2. Problem:
The system we currently use for processing loans supports only short-term/tenor loans (where the tenor of the loan is at most 1 year). However, the various term loans sanctioned to clients (where the tenor runs beyond 1 year, to 5 years or more) need to be processed individually. E.g. a client has a sanctioned limit of INR 100 million, with repayment in 20 monthly installments.
On 01/07/07 the client requires INR 20 million out of the entire limit (INR 100 million), so due to the system constraint, 20 separate loans have to be processed.
Then on 10/07/07 the client wants to draw INR 50 million out of the remaining unutilized limit, which again has to be processed as 20 different loans per the approved repayment schedule.
So out of the total INR 100 million limit the client utilizes INR 70 million, which is booked in the system as 40 separate loans.
Hence, on the day the monthly interest application happens, the client's account initially shows these 40 disbursal entries, and an additional 40 interest application entries now occur on the individual loans. It is therefore a big task for the client and the bank's relationship manager to identify how much interest has been applied, how much the client needs to fund, and which loan each entry pertains to.
The above is just one instance; there are various clients who have not just one but 4 to 5 term loans, each processed under various repayment schedules as individual loans. Hence, at month end, anywhere from 100 to n entries occur in a client's account, and it is very difficult to identify which loan each interest application entry relates to.
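The booking arithmetic described above can be sketched as follows. The function and figures are illustrative, not the bank's actual system: under the short-tenor-only constraint, each drawdown of a multi-installment term loan must be booked as one separate loan per installment.

```python
def entries_per_month_end(drawdowns, installments):
    """Under the short-tenor-only constraint, each drawdown is booked as
    one separate loan per installment; each such loan then attracts its
    own monthly interest application entry at month end."""
    loans = len(drawdowns) * installments
    return loans  # also the number of interest entries at month end

# The example from the text: two drawdowns (INR 20 mln and INR 50 mln)
# against a limit repayable in 20 monthly installments.
loans = entries_per_month_end([20_000_000, 50_000_000], installments=20)
print(loans)  # → 40 separate loans, hence 40 interest entries
```

With 4 to 5 such term loans per client, the month-end entry count quickly runs into the hundreds, which is exactly the identification problem the text describes.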
3. IT support:
We would definitely require IT support to design or modify the current system to suit the term-loan structure and perform more efficiently and effectively:
1) Long-tenor loans can be processed in one go, and at month end there is only one interest application entry on the entire loan outstanding.
2) The system should have an inbuilt trigger to debit the client's account on the various repayment dates, instead of processing individual loans as per the repayment schedule.
3) The system should have an option to clearly identify which loan an interest application pertains to (e.g. loan details like start date and approved schedule).
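A minimal sketch of the proposed consolidated structure (an assumed design, not an existing bank product): one loan record holds all tranches, so the month-end interest application is a single entry on the total outstanding, tagged with identifying details as point 3) asks. Interest here uses a flat monthly rate for simplicity; a real system would pro-rate by days and net off repayments.

```python
class TermLoan:
    """One consolidated term loan; tranches are drawdowns against it."""

    def __init__(self, loan_id, start_date, annual_rate):
        self.loan_id = loan_id
        self.start_date = start_date
        self.annual_rate = annual_rate
        self.tranches = []  # list of (date, amount) drawdowns

    def draw(self, on, amount):
        self.tranches.append((on, amount))

    def outstanding(self):
        return sum(amount for _, amount in self.tranches)

    def month_end_interest_entry(self):
        """One interest application entry on the entire outstanding,
        identifiable by loan_id and start_date."""
        interest = self.outstanding() * self.annual_rate / 12
        return {"loan_id": self.loan_id, "start": self.start_date,
                "interest": round(interest, 2)}

# The worked example from the text: two tranches against one sanctioned limit.
loan = TermLoan("TL-001", "2007-07-01", annual_rate=0.12)  # rate assumed
loan.draw("2007-07-01", 20_000_000)
loan.draw("2007-07-10", 50_000_000)
print(loan.month_end_interest_entry())
# one entry on the INR 70 mln outstanding instead of 40 separate entries
```

The design choice is that drawdowns attach to a single loan identity, so both the disbursal history and the interest entry reconcile to one record in the client's account.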
4. Benefits if the required changes are made:
Decision making: If the above changes are made, they would definitely help in creating a clear picture of the transaction/loan/interest entries in the customer's account. The actual interest charged on different loans can be identified easily, which helps not only the department monitor the customer's account but also gives the customer an easy reference for the transactions in his account.
Planning: With upgraded systems, proper planning is also possible. E.g. the department can plan how much further facility the client can avail after payment of his interest. Non-payment of interest for 3 months can classify the client as an NPA (non-performing asset).
Analysis: Computerization will help to effectively analyze the client's portfolio. At any point of time, the client or the bank should be able to identify which loan the interest application entries in the account pertain to.
Cost savings are also possible, since this process currently requires three full-time people, which is a cost to the bank.
If the cost of one person is, say, INR 15,000 per month, then for 3 people the annual cost to the company due to the system constraint is INR 540,000, which can be eliminated only with a further upgrade of the system to support the term-loan structure.
Hence not only is the bank's productive time saved, but also the cost incurred to retain those employees and the infrastructure cost (computers, electricity etc.) incurred for the additional resources.
Such a plan requires periodic updating, preferably at the same time as the business plan is updated. It also gives a budget profile for future IT applications that are in conformity with the business plan.
Such a budget proposal should also be acceptable to top management, because it is based on a business plan in whose formulation they were involved.
Contributed by - MFM – SEM-III ------------------ Roll nos 11 to 20.
-- Space for Notes --
 Information Technology Importance
 Information Technology has occupied a very important place in today's world.
 Be it any industry, no one wants to stay out of the race. Everyone is fighting for survival, and the bottom line is 'let the best man win'.
 Before the advent of Information Technology:
 Excessive paper work
 Lack of storage space
 Communication problems, e.g. high telephone costs
 Examples where IT is used
 Computerized transactions
 ATMs. (Automatic Teller Machines) deployed at various locations in India.
 Emails, video conferencing etc
 Printing of customer transactions
 How IT influences various sectors
 Previously all the work, say in Finance, Marketing, Human Resources or
any other department for that matter would be done manually.
 A lot of paper work would be involved. Repetitive information couldn't be
avoided.
 Communication between departments was possible only through
telephone lines.
 Now, for example:
 the Human Resources department keeps all the information about an employee in the computer, available at their fingertips. Retrieval and updating of the data is much faster.
 IT influence in HRD functions
 For an HRD manager, the functions relate to recruitment, induction, payroll, etc.
 Information such as whether a particular candidate has appeared for an interview earlier, along with his other personal details, can be stored together and retrieved easily and fast.
 Details such as the employee's salary, bonus, dearness allowance etc. can be fed into the computer a single time.
 Special software packages are available for this purpose.
 IT influence in Finance functions
 For a Finance manager, the functions relate to balance sheets, petty cash statements, etc.
 Packages such as Tally are really helpful once the entries are fed in on a daily basis.
 Balance sheets can be compared with the previous years' figures.
 Profits and losses can be determined easily without delay.
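The year-over-year comparison described in the bullets above can be sketched as follows. The figures are made up for illustration; packages such as Tally provide comparable reports built in.

```python
# Hypothetical year-end figures (INR); illustrative only.
figures = {
    2007: {"income": 900_000, "expenses": 750_000},
    2008: {"income": 1_200_000, "expenses": 800_000},
}

def profit(year):
    """Profit is simply income minus expenses for the year."""
    f = figures[year]
    return f["income"] - f["expenses"]

for year in sorted(figures):
    print(year, "profit:", profit(year))
print("change vs previous year:", profit(2008) - profit(2007))
```

Once daily entries are captured, such comparisons come out of the stored data immediately, with no re-keying and no delay.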
 IT influence in Marketing sector
 As a Marketing manager, the functions relate to sales, turnover, client details, purchases, etc.
 Information about a particular item's sales and purchases can be tracked.
 Outstanding amounts receivable from debtors and payable to creditors, etc. can be monitored.
 Decision making becomes easier by analyzing the data using IT.
Contributed by - MFM – SEM-III ------------------ Roll nos 91 to 99.
-- All the best --