Computer Technician Training Course

By: Peter Paskowsky
Contents
Section One: Computer Hardware
Section Two: Computer Hardware Troubleshooting
Section Three: Computer Software
Section Four: Computer Software Troubleshooting
Section Five: Computer Networking
Afterword
Section One: Computer Hardware
1.1 Microprocessors
The microprocessor is the part of the computer that executes instructions, carries out data manipulation, and
generally handles the main tasks of the computer.
Microprocessors are based on instruction set architectures, which determine how the microprocessor functions
and how it interacts with other system components, such as memory. Normally there is no software compatibility
between two architectures; that is to say, a program compiled for execution on one architecture cannot run on
another without recompilation or emulation. Below is a table of common instruction set architectures and
their common uses.
Table 1 - Instruction Set Architectures

Architecture   Common Uses
ARM            Cell phones, PDAs, tablets, digital media players, consumer electronics
PowerPC        Older Apple PCs, game consoles, servers
SPARC          Servers
x86            Windows PCs, servers
The most common instruction set architecture (ISA) for personal computers is x86, created by Intel. However, for every
microprocessor made for a PC, roughly 99 are made for embedded purposes, such as cell phones, digital media players,
digital cameras, and printers. So by far most microprocessors are used in consumer devices, but in the world of PCs
Intel's x86 ISA is the most popular. There are three major vendors of x86 microprocessors, listed in the table below.
Table 2 - Major x86 Vendors

Vendor                        Market Share   Uses
Intel                         81%            Desktop, Laptop, Server
AMD (Advanced Micro Devices)  18%            Desktop, Laptop, Server
VIA                           1%             Low Power Devices
Figure 1 - Various Microprocessors
When examining and evaluating microprocessors, there are several characteristics to keep in mind, including:

• Microprocessor architecture - The internal design of the microprocessor, including the number of functional
units and the instructions supported.
• Clock speed (measured in Hertz) - The number of clock cycles per second of the microprocessor. A useful
measure of performance, although a higher clock speed does not always mean higher performance. (For example,
Pentium III processors are faster clock-for-clock than Pentium 4s, so a Pentium 4 at 1.4 GHz will be slower
than a Pentium III at 1.4 GHz. The main advantage of the Pentium 4 architecture was that it allowed higher
clock frequencies while sacrificing efficiency.)
• Number of microprocessors on the package - More than one microprocessor is useful for multithreaded
applications and multitasking. More processors do not necessarily mean more performance: applications must be
written to take advantage of multiple processors, and some tasks are better suited to multiprocessing than others.
• Cache memory (measured in Bytes) - A type of memory located on the CPU which helps speed up memory access
by storing commonly used information. Cache memory is many times faster than system memory (RAM).
• Power usage - Very important for mobile computers (laptops, notebooks), which run on limited battery power.
Also important for servers, where the heat generated by many computers can become a problem.
• Number of bits - Early x86 processors were 8 or 16 bits wide, most processors from the 386 to the Pentium 4
were 32-bit, and modern processors are 64 bits wide. A wider design mostly allows more memory to be addressed,
but also brings small improvements in some types of applications. The microprocessor must run an operating
system designed for the correct number of bits: a 32-bit processor can run 16-bit and 32-bit operating
systems but not a 64-bit OS, while a 64-bit processor can run both 64-bit and 32-bit operating systems, so
backward compatibility is available. A 32-bit OS can address a maximum of 4 GB of memory, while a 64-bit OS
can address up to 2^64 bytes. 64-bit x86 processors also have double the number of registers of 32-bit designs.
• Fabrication technology - The process used to build the microprocessor, usually measured in nanometers
(90 nm, 65 nm, 45 nm, etc.). Generally, smaller fabrication processes allow for lower power consumption
and higher clock speeds at reduced cost.
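The addressing limits mentioned above follow directly from the word size. A quick sketch (in Python, used here purely for illustration):

```python
# An n-bit processor can address at most 2^n bytes of memory.
def max_addressable_bytes(bits: int) -> int:
    return 2 ** bits

gib = 2 ** 30  # bytes per gigabyte as the OS counts it

print(max_addressable_bytes(32) // gib)  # -> 4, the 4 GB ceiling of a 32-bit OS
print(max_addressable_bytes(64) // gib)  # -> 17179869184 GB, far beyond any real machine
```

In practice, real 64-bit machines expose fewer physical address lines than 64, but the 4 GB ceiling of 32-bit systems is exact.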
Below is a table listing the history of Intel's and AMD's product lines. As you can see, the Intel processors began with the
8086 (16 bit), followed by the 80286 (also 16 bit), the 80386 (the first 32-bit microprocessor built by Intel, and thus the
forebear of all modern x86 microprocessors), and the 80486. This is why the architecture is known as x86: each of the
processor names ends in 86. There have generally been many architectural improvements between microprocessor
generations, as well as improvements in manufacturing processes. This allows for big improvements between
generations.
Table 3 – Microprocessor History (Comparing Intel and AMD)

Intel: 8086 → 80286 → 80386 (first 32-bit design by Intel) → 80486 → Pentium (aka 80586, from "penta" = 5) →
Pentium Pro (first out-of-order design by Intel) → Pentium II → Pentium III → Pentium 4 (radically new
architecture) → Core 2 (great increase in performance) → Core ix (Core i3, i5, i7)

AMD: K5 → K6 / K6-2 / K6-III → Athlon → Athlon / Athlon XP → Athlon XP / Athlon 64 → Phenom → Phenom II
Examples of architectural improvements include in-order versus out-of-order architectures. In-order
architectures execute all instructions in the order they are received, whereas out-of-order architectures allow the
processor to choose the most efficient order in which to execute them. Newer generations often include new
instructions for speeding up modern applications, such as video encoding or video gaming.
Laptop microprocessors typically have smaller dimensions and use less power. This is to accommodate the demands for
small system size and long battery life in a laptop.
Table 4 – Partial Product Line History (Intel)

Name                  Date  Transistors   Fabrication (µm)  Clock speed  Data width            MIPS
8080                  1974  6,000         6                 2 MHz        8 bits                0.64
8088                  1979  29,000        3                 5 MHz        16 bits (8-bit bus)   0.33
80286                 1982  134,000       1.5               6 MHz        16 bits               1
80386                 1985  275,000       1.5               16 MHz       32 bits               5
80486                 1989  1,200,000     1                 25 MHz       32 bits               20
Pentium               1993  3,100,000     0.8               60 MHz       32 bits (64-bit bus)  100
Pentium II            1997  7,500,000     0.35              233 MHz      32 bits (64-bit bus)  ~300
Pentium III           1999  9,500,000     0.25              450 MHz      32 bits (64-bit bus)  ~510
Pentium 4             2000  42,000,000    0.18              1.5 GHz      32 bits (64-bit bus)  ~1,700
Pentium 4 "Prescott"  2004  125,000,000   0.09              3.6 GHz      32 bits (64-bit bus)  ~7,000
Intel divides its x86 products into three main lines: Celeron for low cost computers, Pentium or Core for mainstream
or performance computers, and Xeon for servers. These three are usually based on the same architecture, so the
differences lie in clock speed, cache memory, number of microprocessors, and power usage. Below is a
comparison of the three product lines.
Table 5 - Intel Product Family Comparisons

Line            Uses                           Clock frequency  Price      Cache Memory
Celeron         Low cost computers             Normally lower   Very low   Very low
Pentium / Core  Midrange to high end desktops  Higher           Medium     Medium
Xeon            Servers                        Higher           Very high  High
1.2 Random Access Memory (or Computer Memory)
Random Access Memory (RAM) stores programs which are in execution, as well as the data used by those programs.
Increasing the amount of RAM in a computer can often drastically increase performance by reducing the need for
slower virtual memory (disk drives used as memory).
Like microprocessors, memory should be evaluated on several characteristics, for example:

• Architecture - The type of memory used (SDRAM, DDR, etc.). Each architecture has a different slot type and
different performance characteristics.
• Capacity (measured in Bytes) - The overall storage capacity of the memory; more is better.
• Speed (measured in Hz) - The speed at which the memory operates; higher is better.
• Timings (measured in clock cycles) - The number of clock cycles required to access a given address in memory;
lower is better.
There are many different types of memory you may find in a computer. The types listed below are in order from oldest
to newest:

• EDO RAM
• SDRAM
• RDRAM (or "RAMBUS")
• DDR
• DDR2
• DDR3

These types of memory differ in the physical package (number of pins used), speed, capacity, voltage used, etc. Speed
and capacity have generally increased from generation to generation, voltage requirements have typically decreased, and
each generation has required different slots and thus different motherboards to support them. Laptop
memory is architecturally identical to desktop memory, but it comes in a smaller form factor and uses less power.
Memory speeds are typically listed as "PC100" or "PC2100"; normally, the higher the number, the higher the
performance. The actual clock speed can be calculated from this number using a formula that depends on the
architecture: for SDRAM the number is the clock speed itself (PC100 means 100 MHz), while for DDR the number is
the peak bandwidth in MB/s (PC3200 means 400 MHz).
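The two naming schemes can be sketched as simple conversions (Python, with function names of my own choosing, for illustration only):

```python
# SDRAM modules are named by clock speed: "PC100" runs at 100 MHz.
def sdram_clock_mhz(pc_rating: int) -> int:
    return pc_rating

# DDR modules are named by peak bandwidth in MB/s over an 8-byte-wide bus:
# "PC3200" moves 3200 MB/s, i.e. 3200 / 8 = 400 million transfers per
# second, commonly quoted as "400 MHz".
def ddr_effective_mhz(pc_rating: int) -> int:
    return pc_rating // 8

print(sdram_clock_mhz(100))     # PC100  -> 100 MHz
print(ddr_effective_mhz(3200))  # PC3200 -> 400 MHz
```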
In terms of performance, faster memory speeds and tighter timings often make very little real-world difference. Given
the choice between a small amount of fast memory and a large amount of slow memory, the higher capacity will almost
always be the better choice.
1.2.1 Memory Hierarchy
There are several types of memory in a computer, and they are arranged in a hierarchy ranging from fastest but
smallest to slowest but largest. They are also arranged from most expensive to least expensive.
On the microprocessor itself, there is cache memory. This type of memory is the fastest available, and it is used
to store information which has been used repeatedly. For example, if data in main memory has just been accessed,
it will be stored in cache memory so that the next time it is needed it can be read from the relatively fast cache
rather than from main memory, saving the long access time. There are three levels of cache: Level One (L1),
Level Two (L2), and Level Three (L3). L1 is the smallest but fastest, and L3 is the largest but slowest. Not all
microprocessors have all three levels of cache.
Main memory, or RAM, is where the majority of running applications and the data used by them are stored.
Virtual memory is the process of using other storage media (flash memory, hard drives, solid state drives) as
computer memory. This is by far the slowest form of memory, but it can be very large (and cheap).
Figure 2 - Memory Hierarchy (a pyramid: data is manipulated at the top; L1, L2, and L3 cache are found on the
microprocessor, followed by RAM, with virtual memory on the HDD at the base)
Table 6 - Typical cache memory sizes for x86 processors

Registers: 8 in 32-bit processors, 16 in 64-bit processors
L1 cache:  64–128 KB
L2 cache:  64 KB–8 MB+
L3 cache:  2–16 MB+ (typically larger than L2)
1.3 Hard disk drives
Hard disk drives are a form of computer storage. Files and folders are stored on them. They are used to store
documents, video, music, applications, and the operating system itself.
The characteristics of hard disk drives are:

• Spindle speed - The speed at which the platters rotate, measured in rotations per minute (RPM); faster is
better. This measurement applies only to mechanical drives. (5400 or 7200 RPM for desktops and 10,000 or
15,000 RPM for servers are typical speeds.)
• Cache size - The amount of memory onboard the drive, measured in Bytes, which increases access speeds.
• Capacity - The storage capacity of the drive, measured in Bytes. Hard drive manufacturers measure
capacity in decimal units, which leads to a difference between the advertised size and the size the operating
system reports. For example, a manufacturer advertises 1 GB as 10^9 = 1,000,000,000 bytes, but the operating
system counts 1 GB as 2^30 = 1,073,741,824 bytes. The reported capacity of hard drives is therefore about 7%
less than advertised.
• Interface - The method used to connect the drive to the computer (typically IDE or SATA).
• Size - 3.5 inch or 2.5 inch. 3.5 inch drives are used primarily in desktops and servers; 2.5 inch drives are used
primarily in laptops and SSDs.
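The advertised-versus-reported capacity gap can be verified with a short calculation (a Python sketch, for illustration):

```python
# Manufacturers count 1 GB as 10^9 bytes; operating systems count
# 1 GB (strictly, 1 GiB) as 2^30 bytes.
def reported_capacity_gib(advertised_gb: int) -> float:
    return advertised_gb * 10**9 / 2**30

print(round(reported_capacity_gib(1), 3))    # a "1 GB" drive   -> about 0.931 GB reported
print(round(reported_capacity_gib(500), 1))  # a "500 GB" drive -> about 465.7 GB reported
print(round((1 - 10**9 / 2**30) * 100, 1))   # -> 6.9, i.e. roughly 7% less than advertised
```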
There are two types of disk drives available today: mechanical hard disk drives (HDDs) and solid state drives (SSDs).
Conventional mechanical drives feature rotating magnetic platters and magnetic heads to store data, while solid
state drives use flash memory. In general, SSDs are much faster but offer less storage capacity for much
more money. SSDs are a much newer technology and are, as of yet, very rare in Cameroon.
Table 7 - HDD vs. SSD

                 Mechanical Hard Drives  Solid State Drives
Access times     Slow                    Fast
Transfer speeds  Slow                    Fast
Capacity         High                    Low
Price            Low                     High
There are two main interface types for hard drives: IDE and SATA. IDE uses a ribbon cable with 40 or 80 wires and can
support up to two devices per channel. The two devices on a channel are called master and slave; you may have only
one master and one slave per channel, and IDE devices are configured as master or slave using jumper caps located on
the back of the drive. SATA uses a seven-wire cable and supports only one device per channel. In general, most new
computers use SATA, which allows higher transfer speeds than IDE.
Figure 3 - IDE Master / Slave configuration
Note: 80 wire IDE cables should be used for Hard disk drives, as they allow for higher transfer speeds. Save 40 wire
cables for low speed devices such as optical drives.
There are several other interfaces such as SCSI and Fibre Channel which are usually used in servers. SCSI and Fibre
Channel can both connect multiple devices per channel, and allow easy configuration of many hard drives as well as
data preservation methods.
1.4 Optical Drives
Optical drives are used to read optical media, such as CDs, DVDs, and Blu Ray Discs. These types of discs are often used
to store music, videos, and computer software.
Optical drives function by reflecting a laser off the disc, whose surface has regions of different reflectivity, one
representing zero and the other representing one. The newer standards use lasers with smaller wavelengths and thus
can fit more data per disc.
In general there are two types of drives, disc readers and disc readers/writers. Disc readers can only read data on discs,
they cannot create new discs. However, disc readers/writers can both read and write to blank media.
Newer drives are backwards compatible with older media types, but older drives are not compatible with newer
media types. Thus a CD-RW drive can read/write CDs but not DVDs or Blu-ray Discs, whereas a BD-RW drive can
read/write CDs, DVDs, and Blu-ray Discs.
Table 8 - Optical Disc Types

Type                      Capacity  Uses
CD (compact disc)         700 MB    Music, Data
DVD (digital video disc)  4.7–9 GB  Video, Music, Data
BD (Blu-ray Disc)         25–50 GB  Video, Data
1.5 Motherboards
The motherboard is a printed circuit board which connects all the critical computer components together while also
providing connections to other peripherals. This is where the CPU, memory, expansion cards, hard drives, optical
drives, USB devices, etc. are connected.
Internally, the CPU is connected to the northbridge, through which it can access the main memory and graphics card.
The northbridge is connected to the southbridge, where all other peripherals are connected, such as SATA, IDE, and
USB. These two chips together are known as the chipset.
Newer microprocessors have moved components typically found on the northbridge onto the microprocessor die
(piece of silicon) itself. For example, a modern Intel Core i5 has 4 CPUs, a memory controller, a graphics card, and PCI
Express lanes all built on die. Because of this, performance and features depend less on the motherboard and more
on the CPU used.
Below is an example diagram of a motherboard layout.
Figure 4 - Motherboard Functional Diagram
There are several different form factors, or physical layouts, of motherboards. The form factor determines the
computer case which can be used with the motherboard. The most common form factor today is called ATX. There are
versions of ATX which are created for compact systems, such as mini ITX, nano ITX, and pico ITX which allow for much
smaller system footprints. For older machines (8086 - Pentium era) the standard form factor was AT.
Although many computers use the ATX form factor, you will likely run across many systems with non-standard form
factors. These form factors are not compatible with ATX, which can make switching out motherboards difficult: you
need a replacement of the same non-standard form factor, which will most likely be very difficult to find.
Figure 5 - Various ATX sizes
The motherboard contains many connections. On the motherboard one can find:

• The CPU socket (often called a ZIF, "Zero Insertion Force", socket), where the CPU is connected
• Memory slots, where the RAM is inserted
• Expansion slots (ISA, PCI, PCI-X, PCI Express), which accept add-on cards that increase the functionality of
the system (sound cards, graphics cards, network cards, etc.)
• Disk drive connectors (IDE, SATA), which connect hard disk drives and optical drives for storage
• The power connectors, which power the motherboard
• Back panel connections such as USB, FireWire, PS/2, serial ports, parallel ports, and audio, which allow the
addition of other peripherals such as mice, keyboards, printers, and speakers
Figure 6 - Motherboard Diagram
1.6 Power Supply
A power supply unit (PSU) converts the alternating current from the wall (220 volts in Cameroon) into the direct
current used by digital electronics, such as a computer. Computers use several voltage levels, commonly +3.3 V, +5 V,
+12 V, -5 V, and -12 V DC.
Figure 7 - Power Supply Unit (PSU)
There have been several types of power supplies used over the history of computing. Early computers used AT power
supplies, which provided direct power: the physical power switch was the only way to turn the computer on or off.
When you "shut down" the PC, the power supply kept running until you switched it off. This was later fixed with ATX
power supplies, which allowed for soft power: when you press the power button of an ATX system, the motherboard
sends a signal to the power supply to turn on the machine. This allows software to control the power supply, so now
when you "shut down" the machine, the motherboard sends a signal to the PSU to power off.
With modern computers, power demands have grown substantially, and the number of pins required to power the
motherboard has increased as well. Original ATX power supplies have 20-pin headers, used on most PCs before the
Pentium 4. Pentium 4 systems require not only the 20-pin ATX connector but also a supplementary 4-pin 12-volt
connector. Even newer machines require a 24-pin connector as well as a supplementary 4- or 8-pin connector.
Just as with motherboards, some computer makers use non-standard designs. These are not compatible with normal
ATX power supplies, and thus a replacement can be difficult to find.
Aside from the main power connector, you can also find:

• 4-pin Molex connectors, for connecting IDE hard drives and optical drives
• SATA power connectors, for connecting SATA hard drives and optical drives
• 6- or 8-pin PCI-E connectors, used for powering high-end graphics cards
• Floppy connectors, for connecting floppy drives (now obsolete)
Figure 8 - Power Supply Connectors
1.7 Expansion Cards
An expansion card is a printed circuit board that can be added to an expansion slot on a computer motherboard to add
extra functionality to the system.
There are four main types of expansion slots you will see in the field. The oldest is ISA, found mostly in computers from
before the Pentium III era. Next is PCI, which has been in use for a very long time and is still common even on new
motherboards. AGP connectors were used for graphics cards until superseded by PCI Express. The newest standard is
PCI Express, which can be found in all modern computers. PCI Express comes in 1x, 4x, 8x, and 16x slots; the higher the
number, the more data lanes are available and the larger the physical connector. Most graphics cards use 16x slots,
and most other add-in cards use 1x slots.
Table 9 - Expansion Slots

Slot Type    Description
ISA          General connectivity, 16-bit interface, now obsolete
PCI          General connectivity, 32-bit interface, very common even in new computers, many devices available
AGP          Special-purpose slot for graphics cards, superseded by PCI Express
PCI Express  General connectivity, found in all new PCs, many devices available; comes in several physical
             connectors: 1x, 4x, 8x, and 16x

(Pictured: PCI Express x8 and x1 slots)
Laptop computers have two common expansion slots: PCMCIA (or CardBus), an older standard, and ExpressCard, a
newer standard found on high-end laptops.
Figure 9 - Typical ExpressCard sizes, compared to PCMCIA
Figure 10 - Typical PCMCIA Cards
Table 10 - Comparison between Laptop and Desktop Expansion Slots

Laptop:  PCMCIA; ExpressCard (two widths: 34 mm and 54 mm)
Desktop: ISA (obsolete); PCI; PCI Express (1x, 4x, 8x, and 16x; increasing in speed)
Common uses for expansion cards include:

• Network cards (for connecting computers together for file sharing or internet access)
• Graphics cards, which create the images on the computer monitor (for video gaming or photo/video editing)
• Sound cards (for improved sound quality and extra inputs/outputs)
• TV tuners (for watching/recording television)
• Peripheral expansion (adding USB, FireWire, IDE, SATA, etc.)
Networking cards come in two main types: wired (Ethernet) and wireless (802.11). Wired Ethernet cards come in three
speeds, which are backwards compatible: 10 Mbps, 100 Mbps, and 1000 Mbps. Wireless (802.11) cards come in four
main variants, 802.11a, 802.11b, 802.11g, and 802.11n, with top speeds of 54 Mbps, 11 Mbps, 54 Mbps, and 150
Mbps respectively. Higher speeds are backwards compatible with older standards; however, using 1 Gbps Ethernet
requires a 1 Gbps switch, and using 802.11n requires an 802.11n access point. Below is a table summarizing the
different standards.
Figure 11 - Ethernet Card
Figure 12 - Wireless Card
Table 11 - Ethernet and Wireless Standards

Wired Ethernet: 10 Mbps, 100 Mbps, and 1000 Mbps (1 Gbps). Cheap, but less flexible.

Wireless (802.11): not as reliable and slower than wired, but more flexible.
  802.11a - 54 Mbps,  5 GHz band (shared with some cordless phones)
  802.11b - 11 Mbps,  2.4 GHz band
  802.11g - 54 Mbps,  2.4 GHz band
  802.11n - 150 Mbps, 2.4 or 5 GHz band
Graphics cards come in two varieties: integrated and discrete. Integrated graphics are built onto the computer
motherboard, while discrete graphics come in the form of an expansion card. Most low cost computers use
integrated graphics because of the price, although performance is also very low.
There are three main vendors of graphics cards: Intel, NVIDIA, and AMD. Intel supplies only integrated graphics,
and because most computers use Intel processors, most computers also use Intel integrated graphics. AMD and
NVIDIA supply both integrated and discrete cards, and focus on high performance cards for gaming and photo/video
editing.
Figure 13 - Graphics Card
There are five main graphics connector types. The oldest is VGA, a 15-pin blue connector used on most CRT (cathode
ray tube) monitors. The replacement for VGA is DVI, a digital connector used for LCD monitors. S-Video is a connector
used specifically to connect a graphics card to an analog television. HDMI is a newer connector which is compatible
with DVI and is used to connect to digital TVs. The newest connector type is DisplayPort, which aims to standardize
connectors across PC monitors and televisions.
Table 12 - Graphics Connectors

Type         Uses              Description
VGA          Analog monitors   Very common (analog)
DVI          Digital monitors  For high resolution digital images; carries the same digital signal as HDMI
S-Video      TVs               Analog TVs
HDMI         TVs               Digital; video and sound on one cable; prevents recording in between
DisplayPort  TVs / monitors    An attempt for computers and TVs to use the same connector
Figure 14 - Different video connection types
Figure 15 - A modern Graphics card with HDMI and DisplayPort
1.8 – Hardware trends
Microprocessors today are growing more and more integrated. Things like graphics cards, memory controllers, and
expansion lanes are being moved onto the microprocessor chip itself. This can improve performance because these
components are now directly on the same piece of silicon as the CPU, so communication is much faster than
speaking through the north bridge.
(Diagram: the GPU & CPU die, northbridge, memory, PCI Express, and southbridge in the integrated layout described
above.)
Parallelism is a huge trend in today’s microprocessors. Most microprocessors have at least two and up to four or
eight CPUs on chip. Some processors also implement SMT (simultaneous multithreading) which allows two threads
to be run on a single CPU, causing each CPU to appear as two CPUs to your operating system.
Software today is being written with this in mind, allowing code to take advantage of the added resources.
However, more microprocessors do not necessarily mean more performance; it depends on how well the
application can use the added resources. Certain applications, such as video editing and password cracking, can
easily make use of multiple microprocessors. Others, such as older software, cannot, and gain no performance from
multiple microprocessors. New software is taking more and more advantage of these extra cores, so they
will become more and more useful over time.
Processors today are not making large gains in clock speed; in the last five years microprocessor clock speeds have
barely increased. However, microprocessors now typically use less power for more performance. A shift in thinking is
taking place in which one measures performance per watt of power instead of raw performance: an improvement to
a CPU that increases performance by 1% should increase power consumption by no more than 1%.
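As a sketch of the performance-per-watt metric (the chip numbers below are hypothetical, purely for illustration):

```python
# Performance per watt: a higher score per watt means a more efficient design.
def perf_per_watt(performance: float, watts: float) -> float:
    return performance / watts

# Hypothetical chips: the newer one is both faster and more efficient.
old_chip = perf_per_watt(100, 65)  # about 1.54 performance units per watt
new_chip = perf_per_watt(120, 70)  # about 1.71 performance units per watt
print(new_chip > old_chip)         # -> True: a worthwhile improvement
```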
Most microprocessors today are 64 bit, allowing the use of 64 bit operating systems and software.
Memory today is gaining capacity and speed while using less power and costing less. All trends in this
department are great for consumers.
Hard drives are getting higher capacities, larger cache sizes, and faster interfaces. Storage is cheaper than ever
before. SSDs are becoming more and more popular, especially among enthusiasts for their large performance
gains, and their prices are falling sharply.
Optical drives are becoming less and less important as data is more often stored and transported using USB flash
drives, external hard drives, and the internet (cloud storage), which is more convenient and avoids problems such as
scratched discs.
Motherboards are becoming less complex as components typically found on the motherboard move onto the
CPU. This means the performance difference between motherboards is often negligible, and motherboards can be
chosen for features rather than raw performance.
Power Supplies are growing in wattage and efficiency. PSUs are also gaining more and more connectors to meet
the demand for higher power components such as newer motherboards and graphics cards.
Expansion Slots are getting faster and faster, allowing higher connection speeds for expansion cards. PCI express
has undergone many revisions increasing speed.
Video Cards are getting faster, having more execution units, and holding more memory than ever before. There is a
new trend towards GPGPU (general-purpose computing on graphics processing units) which allows GPUs to
perform tasks much like a typical microprocessor. A GPU is much more efficient than typical CPUs for certain tasks
such as video editing and scientific or mathematical applications.
Network Cards such as 1 Gbps Ethernet networks and 802.11n are becoming more and more common, replacing
older 100 Mbps and 802.11g networks everywhere, providing higher bandwidth for file sharing / video or audio
streaming in the home or office.
Section Two: Hardware Troubleshooting
2.1 Hardware Troubleshooting Basics
Hardware troubleshooting is the process of determining the cause of a problem in computer hardware through
analysis and research. It deals with problems which occur before the operating system has been loaded.
Therefore a problem with the machine turning on, unrecognized hardware, or blurry, distorted, or blank images on the
screen all fall into this category. Hardware troubleshooting requires applying your knowledge of computer hardware
and of how a computer system functions to determine where the problem lies.
In general there are a few steps to take when troubleshooting. First, define the problem, for example "The machine
does not power on" or "The machine powers on but I see nothing on the screen". Second, do research on the
problem. This section includes some common hardware problems and their solutions, but research can also come
from many other sources, such as the internet, other repair manuals, or your own experience and knowledge. Once
you have some understanding of the problem, you should be able to narrow down the possible causes using the
process of elimination. Once you think you have determined where the problem lies, you can test your hypothesis by
switching out the suspect parts with known working parts. So in short, the steps to be taken are, in order:

o Define the problem
o Do research
o Use the process of elimination
o Switch out parts
2.2 Common problems and solutions
There are many common problems when it comes to hardware troubleshooting. Most symptoms cause an error during
the boot-up process when the machine is being powered on. We will be focusing right now on problems that prevent
the operating system from loading.
Common problems range from the machine simply not responding when the power button is pressed, to the machine
starting up with no visuals on the screen, sometimes accompanied by beeps. Other problems can include constant
rebooting, seemingly undetected hardware components, or problems with the monitor. We are going to step through
these problems one by one and determine their possible causes and thus their probable solutions.
The main idea here is to find and isolate the part causing the problem, so that you can verify where the problem lies
and correct it.
General advice
For a computer to turn on it requires a motherboard, power supply, processor, memory, and video card. If the
machine does not work with only these components connected, then one of them is broken or poorly connected.
When troubleshooting hardware problems, it is a good idea to disconnect all components other than these vital ones
to narrow down the problem. This may solve the problem by itself, in which case you can add the other components
back one by one to find the culprit.
You should clean computers regularly. Dust buildup inside of computers, especially on CPU heat sinks, can cause
overheating and erratic behavior.
If possible, a voltage regulator should be used to protect the computer from power spikes and low voltage. A voltage
regulator is a good investment: it can save you the cost of replacing many power supplies.
The BIOS (Basic Input/Output System) is used for configuring basic options on your computer, such as boot order,
power options, and enabling/disabling onboard components. Many problems can be solved through making changes in
the BIOS or by resetting the BIOS to its factory defaults.
Problem 1 – Computer does not start (no lights, fans are not moving)
If the machine does not seem to respond at all to pushing the power button, there is most often trouble with the
power coming to the computer. This can manifest itself in many ways, it could be as simple as a power outage, an
unplugged power cord, incorrect setting on the power supply, a broken power button, or as serious as a broken power
supply or motherboard.
To troubleshoot this problem, first verify that power is working correctly. If other machines are running fine, then the
power from the wall is likely OK. Remember that the problem could be caused by low voltage; having a voltage
regulator will rule that out. Next, check that the cable and socket you are using are both working properly and well
connected. If the problem is not this simple, it is time to examine the power supply.
If available, the best troubleshooting solution is to try another known working power supply. Open another machine
that has a functioning power supply of the same type (ATX, ATX +4, etc.) and swap the parts. If the known working
power supply can power on the troubled machine, then you know the problem was with the old power supply.
If you don’t have other known parts to work with you can still try to assess the problem by testing the power supply
directly. To test the power supply separate from the computer, you must first disconnect the power supply from all
other devices (motherboard, optical drives, hard drives, etc).
Next, connecting the green wire to any black wire on a standard ATX power supply will cause it to turn on. Note that
you can break non-standard supplies using this method! This connection can be made quite easily with a paper clip.
Make sure to unplug the power before connecting black and green. Afterwards, connect power and see if the fan
spins. If the fan spins, the power supply is likely functional, although this is not a guarantee.
If you have not found a problem with the power supply, then the fault unfortunately most likely lies with the
motherboard.
Figure 16 - Testing an ATX power supply
Problem 2 – Computer starts, nothing is on the screen, and the machine is beeping
Normally, when a computer is beeping it is giving you an error code; the motherboard is trying to communicate that
there is a problem with one or more system components. These error codes, sometimes called beep codes, identify
the problem. An example of a beep code is one long beep followed by two short beeps, which could indicate a
problem with the system memory, microprocessor, or video card. Unfortunately, beep codes differ from manufacturer
to manufacturer, so you must search the internet or the manual for each specific computer.
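Because every BIOS maker defines its own codes, a technician's beep-code cheat sheet is really just a lookup table. The sketch below shows that idea in code; the patterns and meanings are made-up examples in the style of common BIOS codes, not a real manufacturer's table, so always consult the board's manual.

```python
# Illustrative beep-code lookup table. The (long, short) patterns and
# their meanings below are hypothetical examples; real codes vary by
# BIOS manufacturer and must be checked against the board's manual.

BEEP_CODES = {
    (0, 1): "System OK (a single short beep on many boards)",
    (0, 3): "Memory error: reseat or replace the RAM",
    (1, 2): "Video error: reseat or replace the graphics card",
    (0, 5): "Processor error: check CPU seating and cooling",
}

def diagnose(long_beeps, short_beeps):
    """Return a probable fault for a (long, short) beep pattern."""
    return BEEP_CODES.get((long_beeps, short_beeps),
                          "Unknown pattern: consult the motherboard manual")

print(diagnose(1, 2))  # one long beep, two short beeps
```

The process of elimination described below remains the reliable fallback when the pattern cannot be matched.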
Even without the manual, you can troubleshoot this problem. It is most likely a problem with the CPU,
memory, or graphics card. First disconnect all parts besides the CPU, memory, and graphics card. If the problem
persists, check that the CPU, memory, and graphics card are well connected. Next, try swapping out the memory,
graphics card, and CPU with known working components. If one of the replacements works, then you have found the
problem. Remember to always power off the computer before changing hardware components!
If changing out the components does not work, it is often a problem with the motherboard.
Problem 3 – Computer starts, nothing is on the screen, and there is no beeping
If there is no beeping, it is often a problem with the motherboard or power supply. However, this problem should
be treated the same as the problem above, because the motherboard may have no speaker or its speaker may be
disconnected. Therefore, follow the instructions above to solve this problem.
Sometimes the motherboard is not beeping because there is a problem with the board itself, rendering it incapable of
even emitting an error code. If you cannot find the problem, the motherboard is probably to blame.
Problem 4 – Computer reboots / freezes often
If the computer is rebooting often, there are three likely culprits: poor cooling, low voltage, or bad memory.
It is good practice to clean your computer regularly by removing dust and other debris from inside the computer case.
Special care should be taken to clean out the heat sinks on the motherboard and/or graphics card. When these are
clogged with dirt, they are much less effective at cooling. When the computer overheats, a temperature sensor shuts
the machine off to prevent damage. So if your machine keeps rebooting, it could be caused by poor airflow due
to dust or perhaps a broken fan. Make sure all fans are working and replace any that are not
functioning.
In addition, you should test your memory, since reboots and freezes can be caused by memory errors. Luckily, it is very
simple to test computer memory. Windows Vista and later come with a memory testing program; just press F8 while
booting to see the memory test option. Otherwise, there are many other programs for testing memory, such as
Memtest86. Memtest86 can be run on any computer from the CD-ROM drive. Simply place the disc in the drive while
the computer is booting and the application will run automatically. It will report whether there are memory errors.
These may be solved as simply as reseating your memory, or they may be caused by defective memory chips. Chips
that show errors sometimes function without errors in another machine, but often they should simply be discarded.
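The core idea behind testers like Memtest86 is simple: write a known bit pattern to every location, read it back, and flag any mismatch. The toy sketch below illustrates that write-then-verify loop over a byte array standing in for the memory under test; real testers write many different patterns directly to physical RAM, which this sketch cannot do.

```python
# Toy sketch of the write-then-verify idea behind memory testers such
# as Memtest86. A bytearray stands in for the physical RAM under test.

def pattern_test(memory, pattern=0xAA):
    """Write a pattern to every byte, read it all back, and return
    the list of offsets whose contents do not match (bad cells)."""
    for i in range(len(memory)):
        memory[i] = pattern
    return [i for i, byte in enumerate(memory) if byte != pattern]

ram = bytearray(1024)       # pretend this is the module being tested
errors = pattern_test(ram)  # a healthy "module" reports no bad offsets
print("errors found:", len(errors))
```

A real tester repeats this with alternating patterns (0xAA, 0x55, walking ones, and so on) to catch cells that are stuck high or low.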
A voltage regulator should be used to eliminate problems caused by low voltage.
Problem 5 – Computer cannot find components (Hard drives, Memory, etc.)
Often this is caused simply because of poor connections. Make sure all connections are well attached, including the
IDE/SATA cables, power cables, memory modules, etc.
If reseating the component in question isn’t working, try replacing the component with another known working part. If
the new part works, then the problem lies with the component, not with the connections.
Problem 6 – Monitor Problems (Fuzzy images, mixed up colors, etc.)
If your computer seems to be booting fine but you are seeing problems with your monitor, there are several possible
causes.
Most likely it is simply a problem with the connection between the video card and the monitor. Try firmly connecting
the cables. If this does not solve the problem, it could be caused by the monitor itself. Check the brightness/contrast
settings through the menu on the monitor.
If the monitor is a CRT model and the image is fuzzy, it is often just a problem caused by age. There is no solution to this
problem aside from getting a new monitor.
If the monitor sometimes turns black, or shows strange moving dots or lines, it could be a problem with the graphics
card. Try another graphics card you know is working to see if the problem is caused by the GPU.
In summary, here are some common problems and their solutions:
o Computer doesn't start (no lights, fans)
 Possible causes: power supply, power cord, outlet power, power button, cables
 Try new cords, check connections, try a known working PSU
o Computer starts, but nothing on screen (audible beeps)
 Possible causes: poorly installed or bad components
 Check connections, check beep codes, use process of elimination, try known working parts
o Computer starts, but nothing on screen (no beeps)
 Possible causes: motherboard, sometimes power supply
 Try known working parts
o Computer reboots often
 Possible causes: heat, memory
 Clean out dust, check memory (using Memtest86)
o Computer doesn't see components
 Possible causes: bad connections, bad hardware
 Check connections, try known working parts
o Monitor problems
 Check power, cables, brightness/contrast, which video card is in use, backlight problems; try a known working
monitor
Section 3: Computer Software
3.1 Operating Systems
An operating system (OS) is software which manages computer hardware resources and provides common services for
the execution of applications. The OS operates as an intermediary between computer software and computer
hardware, handling tasks such as memory management, disk management, microprocessor scheduling, etc.
Figure 17 - Operating System Interaction Diagram
The main advantage an operating system offers is the added layer of abstraction between computer programs and
computer hardware. Abstraction is the process of hiding complex low-level details behind a simpler interface. This
allows the programmer to focus only on interacting with the operating system, not the hardware; that is the
operating system designer's problem.
An operating system allows a program written for it to work on any hardware the operating system supports. A
programmer only has to write a program for one operating system, and the program can then function on any
hardware that runs this operating system. This means computers with vastly different hardware configurations can
run the same programs without the need to rewrite the program for each specific configuration. The software
designer does not have to know anything about the underlying hardware for their programs to function as desired.
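A small sketch makes this concrete. The Python snippet below writes and reads a file purely through operating system services; the same code runs unchanged on Windows, Mac OS, or Linux because the OS, not the program, deals with the disk controller, file system, and caching. The file name used here is just an example.

```python
# The program asks the OS to create, write, read, and delete a file.
# Hardware-specific details (disk controller, file system, caching)
# are handled entirely by the operating system, so this code is
# portable across Windows, Mac OS, and Linux.
import os
import tempfile

def round_trip(text):
    """Store text in a temporary file via OS services and read it back."""
    path = os.path.join(tempfile.gettempdir(), "os_abstraction_demo.txt")
    with open(path, "w") as f:
        f.write(text)
    with open(path) as f:
        data = f.read()
    os.remove(path)  # deletion is also an OS service, file system agnostic
    return data

print(round_trip("hello from any hardware"))
```

Without this abstraction, the program would need separate code for every disk controller and file system it might encounter.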
3.2 Operating Systems Basics
Today there are three main operating systems in use on desktop computers: Microsoft Windows, Apple Macintosh
Operating System (Mac OS), and GNU/Linux. The most commonly used is Microsoft Windows by a large margin,
followed by Mac OS, and then by Linux. Windows has about 90% of the PC OS market share, followed by Apple at
around 9% and Linux at about 1%. Other operating systems, such as UNIX and BSD, are designed for and often run on
servers.
Figure 18 - OS Market share
For the most part, an application written for one operating system cannot be executed on another. Therefore a
Windows application (such as WinZip) cannot be installed on Linux or Mac OS. Some applications are cross-platform,
meaning they can be run on several of these operating systems, although you do need the correct installer for your
operating system. This is the case for most common internet browsers (such as Firefox, Chrome, and Opera) and some
audio/video players (such as VLC and MPlayer).
Different Operating Systems also use different file systems, so data stored by one operating system is often unreadable
by another operating system. However, certain third party software can allow interoperability between different file
systems. Modern Windows versions use NTFS, modern Mac OS versions use HFS+, and modern Linux versions use Ext 4.
Table 13 - File System Comparisons
Windows:
 FAT32 (4 GB maximum file size)
 NTFS (Windows NT and newer)
Mac:
 HFS
 HFS+
Linux:
 Ext2 (cannot easily recover from errors without a file system check)
 Ext3 (can easily recover from errors)
 Ext4 (improved Ext3)
3.3 Example Operating Systems
3.3.1 – Microsoft Family
Microsoft Windows is by far the most common desktop operating system in use today. Microsoft was founded by Bill
Gates, who designed early Microsoft products. Microsoft and its early operating system (DOS) were able to win out
over competing operating systems such as IBM's OS/2, despite Microsoft being a very small and new company at the
time.
Figure 19 - Microsoft Disk Operating System (DOS)
The Microsoft operating system family began with an operating system named MS-DOS (Microsoft Disk Operating
System). MS-DOS uses a command line interface (CLI), meaning that all functions of the operating system are
performed by typing commands into a text-only interface. The default interface has no images, only text.
Windows 3.1 came with a graphical user interface (GUI) like the ones we are accustomed to today. Windows 3.1 was
based on DOS but added graphical features. Windows 95, 98, and ME (Millennium Edition) followed Windows 3.1 on
the same software base but with improvements to usability and performance.
Windows NT (New Technology) was Microsoft's attempt at a completely redesigned operating system. It is no longer
based on DOS but on a newer, high-performance kernel. Because of this, not all legacy (old) applications run properly
on Windows NT based systems. Windows NT is the basis for all subsequent operating systems, including Windows
2000, Windows XP, Windows Vista, and Windows 7, the most recent Microsoft operating system.
Windows is known for its near domination of the industry; everywhere one goes, one sees Microsoft Windows.
Because of this, Windows has the best hardware support, and many applications have been written for it.
3.3.2 – Apple Family
The Apple Macintosh Operating System (Mac OS) is designed by Apple Computer. Mac OS can only be run on specific
Apple branded hardware. That is to say that a machine which was not built by Apple cannot run Mac OS.
Figure 20 - Mac OSX
Apple has used several computer architectures over the years, and each switch completely eliminated compatibility
with older software. Early Macs used Motorola 68000-series microprocessors, second generation Macs, such as the
eMac, used IBM PowerPC processors, and current third generation Macs use x86 processors, the same processors
found in most PCs.
Similarly, Mac OS underwent great changes. Early versions of Mac OS (before version 10) were based on a custom
Apple design. The current versions of Mac OS (Mac OSX – the X is for 10) are based on BSD, which again broke software
compatibility with legacy applications. There are currently 8 versions of Mac OS X, 10.0 – 10.7.
Apple is less afraid to break software compatibility with legacy applications than Microsoft, thinking that this is a
sacrifice which has to be made to improve the user experience. These drastic changes often come with great rewards.
Because Apple computers come directly from Apple, their hardware library is small and tightly controlled. This makes
configuration much easier, because there are only a handful of different hardware configurations, unlike what is seen
with Windows or Linux computers.
OSX is known for its ease of use and intuitive interface. For example, it utilizes intuitive features such as hand gestures
to improve the user experience, and it uses a streamlined system for application installation, where an application is
simply dragged into the Applications folder to install it. It has a smaller software library than Windows, but it is also
better protected from viruses by its design and because of its smaller market share.
3.3.3 – UNIX / UNIX-like family
Linux is a free operating system created by Linus Torvalds as an alternative to UNIX, an operating system used mostly
on servers. It can be used at no cost and also its code is open to analysis and editing by anyone. It is often very good at
running on older, outdated hardware because it uses less memory than Windows or OSX. Linux is also much
more adaptable and flexible than Windows or OSX, allowing the user to change almost anything and to choose
whatever software they would like. Unlike Windows or OSX, the options for how the user interacts with their
computer are almost limitless.
Figure 21 - Ubuntu Linux Operating System
Unlike Windows or Mac OS, there is not one provider of Linux. Instead there are many different organizations which
create different “distributions” of Linux. All of these distributions can run the same applications. Differences between
the distributions can range from package management systems to different graphical user interfaces. Popular
distributions include Ubuntu, Fedora, SUSE, and Debian.
Many Linux distributions have a package management system, which allows programs to be installed at the click of a
button directly from the internet. This one system can manage, organize, and update all of the currently installed
applications. This way, installation files are rarely needed and installations are quick and painless.
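Under the hood, a graphical package-manager front-end just builds and runs the right command-line tool for the distribution. The sketch below shows a simplified version of that mapping; the tool names (apt-get, yum, zypper) are real, but the mapping itself is an illustration, not how any particular front-end is actually written.

```python
# Simplified illustration of how a front-end might choose the install
# command for a distribution family. The tools are real, but this
# mapping is a sketch, not a complete or authoritative one.

PACKAGE_TOOLS = {
    "debian": ["apt-get", "install", "-y"],   # Debian, Ubuntu
    "fedora": ["yum", "install", "-y"],       # Fedora, older Red Hat
    "suse":   ["zypper", "install", "-y"],    # SUSE, openSUSE
}

def install_command(distro_family, package):
    """Return the command list a front-end would hand to the shell."""
    return PACKAGE_TOOLS[distro_family] + [package]

print(install_command("debian", "vlc"))
```

On a real Ubuntu machine the equivalent action is simply running `apt-get install vlc` in a terminal, which also resolves and downloads any libraries the program depends on.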
Linux is also known for its stability and security. Because of its design, it is less prone to viruses, and because of its small
market share very few viruses are written for Linux. Linux is also often used for servers, such as web servers, because of
its stability, security, and adaptability. Linux machines are known for their ability to run for years without requiring
rebooting, a feature very important on enterprise servers.
3.4 Advantages and Disadvantages of Operating Systems
Below is a chart summarizing some advantages and disadvantages of different operating systems. There are many more
differences which are not covered in this chart.
Table 14 - Advantages / Disadvantages of OS
Windows
 Advantages: large user base; large software library; backwards compatibility; excellent hardware support
 Disadvantages: malware/viruses; price; instability
GNU/Linux
 Advantages: price (free); package management; stability; security; configurability; easy second OS
 Disadvantages: can be complicated; less hardware/software support
Macintosh
 Advantages: simple and intuitive interface; not a target for viruses; easy program installations
 Disadvantages: costly hardware; less hardware/software support; proprietary hardware
3.5 Installation of Operating Systems
Operating systems are commonly distributed on optical discs (CDs, DVDs), but they are starting to be distributed on
flash drives. When an operating system is installed on a hard drive partition all data on that partition is lost. Therefore
when installing an operating system, it is important to back up all the data on the disk beforehand.
To install an operating system you must first boot from the installation medium. This may be as simple as inserting the
disc in the CD/DVD drive and rebooting the computer. However, if this does not work you may have to change the boot
order. Changing the boot order requires accessing the BIOS (Basic Input/Output System). This is done by pressing
Delete, F1, F2, F12, or another key while the system is booting. Often there is a message during the booting process
telling you which button to press, such as "Press F10 to enter setup."
Once you have entered the setup, look for the boot tab. On this tab you will find a setting for boot order.
Unfortunately, the BIOS setup is not the same on all machines, although they are all similar. The boot order
determines the order in which devices are allowed to boot. For installing an operating system, the CD-ROM drive
should be first in the order, followed by the hard drive. This way, the CD-ROM drive will be checked before the hard
drive. If a bootable CD is found, the system boots from the CD; if not, it loads the OS from the hard drive.
Figure 22 - BIOS Boot order menu
Installations for different operating systems vary, although they share many common features. It is necessary to
choose the location where the OS will be installed and to partition (divide) the hard drive accordingly. This can be
done automatically by the installer, which often results in using the entire disk as one partition. Note that this method
destroys any existing data on the hard drive.
A more advanced method of installing an operating system involves using two disk partitions. This creates two logical
drives on your one physical hard drive. That is to say you can have a C: and a D: drive on the same physical hard drive
by allocating a certain amount of space to one and the remainder to another.
By using one partition solely for the operating system, data can safely be stored on the other. If the operating system
becomes corrupt, one can reinstall the OS only, keeping all data intact on the second partition.
3.5.1 Installing multiple operating systems
More than one operating system can be installed on a computer; however, only one operating system may run at a
time. To install multiple operating systems, at least two partitions must be made on the disk, one for each OS. As
stated before, Mac OS can only run on Apple branded computers, so unless the computer is an Apple Mac, one is
limited to Windows and Linux.
This method can be used to install Windows and Linux together, multiple Windows versions, or multiple Linux
distributions.
The order of installation is important, as it can avoid problems with the boot loader. The boot loader is a piece of
software that selects which operating system will be loaded. Normally, operating systems should be installed in order
of oldest to newest. This is because newer versions of Windows can recognize older versions and update the boot
loader accordingly, but older ones cannot.
When installing both Windows and Linux, it's best to install Windows first. Windows does not recognize Linux
partitions by default, whereas Linux will find any Microsoft operating systems and update the boot loader accordingly.
If Linux is installed first, more work is required to edit the Windows boot loader to recognize the Linux partitions.
To dual boot, simply install the operating systems to different partitions. It is important to partition your hard drive
accordingly when installing the first OS (leaving space for the second OS), so that it will not be necessary to
repartition the disk later.
3.6 – Computer Software
Applications are sequences of instructions written to perform a specified function on a computer. There are two main
categories of computer software: system software and application software.
 System software is the operating system, which manages computer hardware and allows the execution of
application software. Utilities are also considered system software, as they help manage and configure the computer.
 Application software is all other software on the computer. This includes software for word processing,
audio/video playback, scientific programs, internet browsers, programming software, video games, etc.
3.7 – Software Trends
Software is growing more and more parallel in nature, taking advantage of modern multi-core microprocessors.
Operating systems are borrowing the strengths of their competitors, choosing the best features and incorporating
them into their own designs. For example, Microsoft Windows is changing its security approach to take advantage of
the more secure architecture used by OSX and Linux. Windows and Linux are incorporating Exposé-like graphical
features found in OSX into their own desktops. OSX has adopted multiple workspaces, which have been commonplace
in Linux for years.
Many different hardware platforms are being supported by modern software through cross-platform programming.
This is because of the trend towards, and growing importance of, low power devices, which often use different
underlying hardware than normal PCs.
The cloud, or internet storage, is quickly becoming a more important part of computer software. Most software now
has some online component allowing users to sync data, store data online to access anywhere, or share it with their
friends. Because of the many conveniences offered by these services and the proliferation of high speed internet
connections in the developed world, they are growing rapidly. However, where bandwidth is severely restricted and
internet access is costly, these services are not yet practical.
Section 4: Software Troubleshooting
4.1 – Software Troubleshooting
Software troubleshooting is the process of identifying and solving problems with computer software. This involves
using many types of utilities designed to aid in software maintenance, such as antivirus software, anti-malware
software, firewalls, and performance tune-up software.
4.2 – Common software problems
Windows computers often begin to slow down with age and use. Common problems include general slowdown and
unresponsiveness. The machine could take minutes to boot to the desktop or minutes to launch a simple program such
as Microsoft Word. Other common problems include infestation by malicious software such as viruses, malware, or
adware. These can manifest themselves through popup ads, strange messages, or general erratic behavior.
Other aspects of software troubleshooting deal with the operating system failing to boot properly, or with various
hardware resources being nonfunctional.
4.4 – Software troubleshooting good practices
When managing a computer system, there are several activities that can drastically reduce the need for software
maintenance. These include installing antivirus/antimalware software, disabling startup items, using ad blocking, etc.
4.4.1 - Antivirus/Antimalware software
Antivirus and antimalware software can find and remove malicious software from a computer. The term "computer
virus" is often used as a catch-all phrase for all types of malware: software that performs some unwanted function on
the computer, such as damaging a system's data or performance. Malware includes computer viruses, computer
worms, Trojan horses, most rootkits, spyware, dishonest adware, and other malicious and unwanted software.
In general, antivirus software should always be installed and kept up to date. There are many free antivirus programs,
including AVG, Avira, Avast, ClamWin, Comodo, Microsoft Security Essentials, and more. For malware removal, a
specialized tool such as Malwarebytes should be used. Scans with this software should be run regularly to ensure that
the system is free of viruses and malware.
Antivirus and Antimalware should be used in conjunction with one another. Antimalware is not a replacement for
antivirus, but an addition to it.
4.4.2 - System Cleanup Software
System cleanup software is used to remove old, unused files from a computer. This can free up hard drive space and
increase performance. Windows comes with a basic cleanup program that can be accessed by going to
Start Menu  Accessories  System Tools  Disk Cleanup.
Another third party software package with more advanced features is called CCleaner. This software can clean up even
more temporary files and other unneeded data than the Windows Disk Cleanup tool.
4.4.3 – Hard Drive Defragmenting
Hard drives store data on rotating magnetic platters. When data is written to the hard disk, it is best that the data is
placed sequentially; that is, all of a file's data should be stored together. Sometimes, however, a file's data is written
to several parts of the disk, which slows down reading and writing. Disk defragmentation helps solve this problem by
consolidating files on the disk, pulling the fragments together into one complete piece.
To access the disk defragmentation tool in windows, click on Start Menu  Accessories  System Tools  Disk
Defragmenter.
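The effect of fragmentation can be pictured with a toy model: treat each file as the list of disk block numbers that hold its data, where each break in the sequence means an extra head seek when reading. This is a deliberately simplified sketch of the concept, not how any real defragmenter measures fragmentation.

```python
# Toy model of file fragmentation. Each file is represented as the
# list of block numbers holding its data; every break in the sequence
# of consecutive blocks is one more fragment (one more head seek).

def count_fragments(blocks):
    """Count runs of consecutive block numbers in a non-empty file.
    A fully sequential file has exactly 1 fragment."""
    fragments = 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur != prev + 1:
            fragments += 1
    return fragments

print(count_fragments([10, 11, 12, 13]))  # stored sequentially
print(count_fragments([10, 11, 50, 51]))  # split across two regions
```

Defragmentation is, in these terms, the process of moving data so that every file ends up with a fragment count of 1.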
4.4.4 – Disabling boot up items
One cause of poor computer performance is having too many running applications. Applications running in the
background use computer memory even when not actively in use. Many programs, when installed, set themselves to
load into memory as the computer powers on, which can be good if you use the program often, but bad if the
program is rarely used. By disabling certain boot items, you can free up memory and thus increase performance.
Figure 23 - MSCONFIG startup utility
To disable startup programs, click on Start Menu  Run and enter MSCONFIG into the run dialog box. Go to the
Startup tab. Here are all the applications that are loaded when Windows starts. You can disable them one by one, or
press the Disable All button and then check only those applications you want to load on startup.
Figure 24 - MSCONFIG Services
One can also use the Services tab to disable startup services. To do this, click on the Services tab and check "Hide all
Microsoft services"; disabling Microsoft services could render the machine unusable, so they should be hidden from
the list first. The services that remain can be disabled one by one, or one can press the Disable All button and then
re-enable only the services which should remain.
Using the MSCONFIG utility can greatly increase performance and should be part of any technician's toolkit.
4.4.5 - Ad blocking and Firewalls
If the computer in question is going to be connected to the internet, a software firewall should be used. Windows XP
Service Pack 2 (SP2) and higher include a competent firewall by default. A firewall controls which connections from
the computer out to the internet, and from the internet in to the computer, are allowed. For some applications,
firewall rules must be defined allowing access in or out, circumventing the default rules, which are set up for
maximum protection. Commercial firewall alternatives are also available and offer more control.
In general the internet connections in Cameroon are very slow, so bandwidth should be conserved. One method to
conserve bandwidth is to use ad blocking software. This will prevent advertisements seen on the internet from being
downloaded, thus freeing up the bandwidth for legitimate traffic.
For the Mozilla Firefox and Google Chrome browsers, there is a simple ad blocking add-on named Adblock. Adblock
simply leaves blank areas where ads would otherwise appear. It is very easy to install and is under a megabyte in size.
Search for the Firefox "add-on" or the Chrome "extension" and click a button to install it.
For system-wide protection, the HOSTS file can be modified. This file is used to map domain names to IP addresses,
but it can also be used for ad blocking. By replacing the HOSTS file with one that redirects the domain names of many
ad servers, these sites can be blocked entirely, preventing any data from being transferred from them.
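The trick works because each HOSTS line simply pairs an address with a hostname; pointing an ad server's name at an unroutable address like 0.0.0.0 means the browser's request goes nowhere. The sketch below generates such lines; the domain names are made-up placeholders, not a real blocklist.

```python
# Sketch of how an ad-blocking HOSTS file works: each unwanted domain
# is mapped to 0.0.0.0, so lookups resolve to nowhere and no ad data
# is downloaded. The domains below are made-up placeholder examples.

AD_DOMAINS = ["ads.example.com", "tracker.example.net"]

def hosts_entries(domains, sink="0.0.0.0"):
    """Return HOSTS-file lines redirecting each domain to the sink."""
    return ["{} {}".format(sink, d) for d in domains]

for line in hosts_entries(AD_DOMAINS):
    print(line)
```

Prepared blocklists such as the MVPS HOSTS file are built on exactly this format, just with thousands of real ad-server entries.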
To install the HOSTS file, unzip the package and double click on the mvps.bat file. In Windows 7 and Vista, it is
necessary to run the program as administrator by right clicking on the mvps.bat file and choosing "Run as
administrator".
Figure 25 - Updating the HOSTS file
4.4.6 – Advanced performance options
Windows performance can be fine-tuned using the advanced performance options found in the Control Panel. To
access them, first open System Properties: enter the Control Panel and double click on the System icon. This will show
the System Properties box. To change the performance options, click on the Advanced tab.
Here, performance settings can be altered. The first tab, Visual Effects, allows choosing between best appearance and
best performance. If the machine is slow and unresponsive, adjust for best performance.
The Advanced tab also allows changing the size of the page file. To change these settings, click on the Advanced tab
and then press the Change... button under Virtual Memory.
Figure 26 - System Properties
Figure 28 - Adjusting Visual Effects
The page file is the name Microsoft uses for virtual memory: hard drive
space used as computer memory. By default, Windows uses an adjustable
paging file, meaning the size can change during use of the computer.
When the page file changes size, performance can suffer due to
fragmentation. It is best to create a static page file by setting the
initial size and the maximum size to the same value
(initial size = maximum size).
Figure 27 - Adjusting the paging file
Normally, the page file should be 1.5 to 2 times the amount of system memory. However, if you have little system
memory (for example, less than 512 MB), try to create a page file of at least 512 or 768 MB.
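The sizing rule above can be written as a small calculator. This sketch applies the 1.5x factor and uses 768 MB as a floor for low-memory machines, which is one reading of the advice above; adjust the factor and floor to taste.

```python
# The page-file sizing rule as a small calculator: 1.5 to 2 times
# system memory, with a 768 MB floor for low-memory machines (one
# reading of the "512 or 768 MB" advice above).

def page_file_mb(ram_mb, factor=1.5):
    """Recommended static page file size (initial = maximum), in MB."""
    return max(int(ram_mb * factor), 768)

print(page_file_mb(1024))  # 1 GB of RAM -> 1536 MB page file
print(page_file_mb(256))   # low memory  -> floor of 768 MB
```

Whatever value comes out, enter it as both the initial and the maximum size so the page file stays static.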
4.4.7 – Software Updates
In Cameroon, internet connections are often very low bandwidth, so efforts should be taken to maximize the available
speed. By default, Microsoft Windows will attempt to download updates whenever it is connected to the internet.
These updates can be hundreds of megabytes in size and will noticeably slow down your internet connection.
Additionally, each computer in your lab will try to do this, resulting in potentially gigabytes of unwanted traffic! In
general, it is best to disable automatic updates and install updates to your lab computers manually.
To disable Windows updates, enter the Control Panel and choose "Windows Update" (Windows 7 and Vista) or
"Automatic Updates" (Windows XP). Next, click on "Change settings" and choose "Never check for updates" or "Check
for updates but let me choose whether to download and install them".
Figure 28 - Windows Updates Options
To install windows updates without an internet connection you can use the AutoPatcher tool. This is a program which
allows you to download windows updates one time, and then put them on a flash drive, CD, or network share to install
on other machines without using the internet.
To install updates using AutoPatcher, first install the newest service pack for your operating system. Next run the
AutoPatcher application and allow the updates to be installed. Once this is finished it will be necessary to reboot the
computer. Next, run AutoPatcher one more time (this is because some updates depend on others to be installed first).
Reboot again and your computer should be up to date.
4.4.8 – Other general advice
• In general, operating system updates should be installed and kept up to date.
• If the machine is used to store any important data, backups should be made regularly to ensure no data is lost in
case of hardware or software problems.
• The computer should be scanned regularly for viruses and malware, and the virus definitions should be
updated regularly as well.
• Disk cleanup should be performed periodically to free up disk space and improve performance.
• The hard drive should be defragmented regularly to improve hard disk performance.
Section 5: Computer Networking
5.1 – Computer Networking Basics
A computer network is a collection of computers connected to one another. This allows information and
resources to be shared among these devices. The internet is a collection of interconnected networks, that is to say, a network of networks.
Networks are very important to computing: a computer with access to the resources of other computers is much
more useful than a computer on its own.
5.2 – Computer Network Topologies
A network topology is the physical method used for connecting computers together. There are four main types of
network topology: star, ring, bus, and mesh.
Figure 29 - Network Topologies
A star network is by far the most common today. It is very rare to see other types of networks outside of obsolete
hardware or enterprise environments. A star network functions by having each machine connect to one centralized
point on the network. This centralized point can be a hub or a switch. Modern Ethernet and 802.11 wireless networks
are based on star topologies. The cost of this network is low and a break in the cable will only compromise one
machine. However, there is one centralized point of failure. If the hub/switch is compromised, the entire network will
cease to function.
In a ring network, each computer is connected to its two neighbors, forming a circle with all the machines. Because
each computer is connected to the next, a fault (break) anywhere in the cable will cause the entire network to fail.
A bus network simply uses one backbone cable to connect all machines together. This means that the communication
medium is shared, because all the machines have only one wire on which to communicate. This can cause many
problems when many computers want to use the network at the same time. If the bus cable is compromised the entire
network will fail. The two ends of the bus cable require terminators; without terminators installed on both ends the
network will not function.
A mesh network is formed when each computer in the network has a direct connection to each other computer. This
allows for great speed of communication between the machines, but also requires much more hardware. This is the
most expensive type of network, but for machines which will be communicating regularly it is the fastest and the most
fault tolerant.
Table 15 - Network Topology Comparison

Topology   Number of wires   Uses                                    Point of failure
Star       n                 Most home/office networks               Hub/switch
Ring       n                 Older obsolete networks, fiber optics   Any computer
Bus        n + 1             Older obsolete networks                 Backbone cable
Mesh       n(n - 1) / 2      Military applications, datacenters      Almost impossible
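The cable counts in Table 15 can be checked with a short Python sketch. The function name is ours; the formulas are the ones given in the table:

```python
def wires_needed(topology, n):
    """Cable count for n machines, using the formulas from Table 15."""
    if topology in ("star", "ring"):
        return n                   # one cable per machine (star: each runs to the hub/switch)
    if topology == "bus":
        return n + 1               # backbone segments, including the two terminated ends
    if topology == "mesh":
        return n * (n - 1) // 2    # every pair of machines gets its own direct link
    raise ValueError("unknown topology: " + topology)

for t in ("star", "ring", "bus", "mesh"):
    print(t, wires_needed(t, 8))
```

Note how quickly the mesh count grows: 8 machines already need 28 cables, which is why mesh networks are the most expensive.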
5.3 – Networking Devices
In a common star network, there are three main devices in use. These are the hub (now obsolete), the switch, and the
router.
Both hubs and switches are used to connect computers together in a star network. Hubs are an older form of
technology. They work by directly connecting all of the machines together. That is to say a hub in actuality creates a bus
network internally because all the machines are directly connected and share the same medium for communication.
If two machines try to use the network at the same time, there will be a collision. The collision will make all the data on
the line unreadable and thus useless. The computers will then wait a random amount of time before starting again,
hoping to avoid more collisions. Collisions can bring a network to its knees when many machines are sharing the same
medium. With a small network of three computers, the odds that any two would like to speak at the same time are not
high. However, if you have 32 computers on a hub, the likelihood of collisions is very high.
Therefore, hubs should not be used in large networks, or where performance is needed. Hubs are much cheaper than
switches because they use very simple hardware. They come in two types, powered and non-powered. Powered hubs
boost the signal of all incoming and outgoing data. Non-powered hubs do not, and are thus cheaper.
Figure 30 - NetGear Ethernet Switch
Switches are a more advanced form of networking technology. When one computer wants to speak to another on a
switch, the switch will create a direct connection between the two machines. The switch is smart enough to understand
the destination and will create a direct connection so that there can be no collisions. This way, many machines may be
speaking at the same time without any problems. Switches have mostly replaced hubs as their prices have fallen
sharply. When building large networks or where performance is needed, a switch should be used.
A router is a device which sends data between networks. A router finds the correct path for any information flowing on
the network by reading the packet headers; it then determines the best route and forwards the data to the next
network on the way to the destination host. Routers are used to connect networks together, and they are often intelligent
enough to determine the best routes by communicating with their neighbors.
[Diagram: four routers, each serving its own network through a switch. Router 1 connects to Router 2, Router 2 to
Router 3, and Router 3 directly to Router 4.]
For example, in the network above, a computer connected to router 3 which wishes to speak with a computer on router
1 would know the best route is through router 2. It would also know the best route to router 4 is direct and not
through router 2.
5.4 – Wired vs. Wireless networks
Modern networks often use a combination of these two networking types. Wired Ethernet networks are used where
reliability and speed are very important. In general, wired networks are fuss-free and simply work. They also have higher
transmission speeds than their wireless counterparts. Most computers come with integrated Ethernet cards, so the
price of installing a wired network is low: simply the cost of cables and a switch.
Speed      Wiring              Compatibility
10 Mbps    Uses 4 of 8 wires   -
100 Mbps   Uses 4 of 8 wires   Backwards compatible with 10 Mbps devices
1 Gbps     Uses all 8 wires    Backwards compatible with 10 and 100 Mbps devices
Figure 31 - Ethernet Standards
802.11 wireless networks are used where portability is needed. For example, education inspectors who go into the
office or out into the field would benefit from a wireless network. When the inspectors are in the field, they can bring
their work laptops to take notes. When they return to the office, they can use the wireless network to work and share
files. This way the same computer can be used for tasks in the office and in the field, without having to worry about
wires. Wireless networks can have more problems with reception due to environmental obstacles (walls, doors, rain,
etc.) and can sometimes not function as reliably as wired networks. Even in the best-case scenario, wireless networks
are slower than their wired counterparts. The cost is often higher as well, as laptops are more expensive than desktops
and most desktops do not come with wireless cards installed by default; therefore both a wireless card and a wireless
access point are needed.
Standard   Max speed   Frequency
802.11a    54 Mbps     5 GHz
802.11b    11 Mbps     2.4 GHz
802.11g    54 Mbps     2.4 GHz
802.11n    150 Mbps    2.4 / 5 GHz
Figure 32 - Common 802.11 Wireless Standards
5.5 – Ethernet cable wiring
Ethernet uses either category 5 or category 6 cables. These cables contain 8 wires, arranged as 4 twisted pairs. For this
reason, category 5/6 cable is referred to as twisted pair cable. Category 5 and 6 are very similar, and for all intents and
purposes interchangeable. Category 6 is used for permanent home installations (in the walls).
Ethernet cables come in two varieties, shielded and unshielded. Shielded cables have a layer of foil around the
wires. This helps reduce EMI (electromagnetic interference). Other devices such as radios, fluorescent lights, cell
phones, electrical machines, other networking equipment, etc. can create interference which can damage the integrity
of data transmitted across the cable. In most situations this is not a problem, which is where cheaper unshielded
cables are used.
The end connectors used on Ethernet cables are called RJ-45 connectors. These are normally clear plastic, have 8 pins,
and include a small plastic tab for snapping into an Ethernet card. For shielded cables, the connectors are made of metal
to allow for grounding the shielding inside the cable.
Figure 33 - Ethernet Cable
There are two main types of Ethernet cables: straight-through cables and crossover cables. Straight-through, or patch, cables
are used for connecting a machine to a switch, hub, or router. Crossover cables are used to connect two machines directly
to each other. Straight-through cables are generally used unless you are directly connecting two machines together.
Normal Ethernet cables contain orange, green, blue, and brown wires; half are striped with white. To create a normal
straight-through or crossover cable, the wires are arranged as in the images below.
Figure 34 - Ethernet Cable Wiring
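The relationship between the two cable types can be expressed as a pin mapping. Below is a minimal Python sketch assuming the common T568B color order (the figure may show a different convention); a crossover cable is simply a cable wired T568B on one end and T568A on the other:

```python
# Pin-to-color order for the T568B standard (pins 1 through 8).
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

# A crossover cable swaps the transmit pair (pins 1, 2) with the
# receive pair (pins 3, 6); all other pins stay in place.
CROSSOVER_MAP = {1: 3, 2: 6, 3: 1, 4: 4, 5: 5, 6: 2, 7: 7, 8: 8}

# Derive the color order for the far end of a crossover cable.
other_end = [T568B[CROSSOVER_MAP[pin] - 1] for pin in range(1, 9)]
print(other_end)  # this is exactly the T568A color order
```

This is why a straight-through cable has identical color orders on both RJ-45 connectors, while a crossover cable visibly differs between its two ends.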
5.6 – Network Addressing
For computers to communicate with one another, they must have a way to refer to each other. Just as humans use names
to address one another, computers use various addresses.
In Ethernet networks, all Ethernet cards use MAC addresses. MAC addresses are 48 bits long and are unique to every
Ethernet card. These are usually written as 6 pairs of hexadecimal numbers, for example 60-4E-9A-16-2C-11.
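A short Python sketch shows how a 48-bit value maps to the six hexadecimal pairs; the helper name is illustrative:

```python
def format_mac(raw48):
    """Render a 48-bit integer as the familiar six hex pairs."""
    assert 0 <= raw48 < 2**48, "MAC addresses are exactly 48 bits"
    # Take the bytes from most significant to least significant.
    return "-".join(f"{(raw48 >> shift) & 0xFF:02X}" for shift in range(40, -1, -8))

# The example address from the text, written as a single integer first:
print(format_mac(0x604E9A162C11))  # 60-4E-9A-16-2C-11
```

Because the address is just 48 bits, there are 2^48 (about 281 trillion) possible MAC addresses, which is how every Ethernet card can receive a unique one.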
When computers communicate on a local network, they refer to one another by their MAC addresses. For example
when computer A wants to send data to computer B, computer A sends its address as the sender and sends computer
B’s address as the recipient. The switch will then send the data to the correct computer. These addresses are known as
layer 2 addresses.
IP (Internet Protocol) addresses are a human-readable method for addressing computers on a network. These
addresses work at a higher level than MAC addresses and are made to be simpler to read and use. These
addresses are known as layer 3 addresses.
IP addresses are used to refer to computers both on the local network and on the internet. They allow the routing of
data between hosts. There are two versions of IP addresses in use today, IPv4 and IPv6. IPv4 uses 32 bit addresses,
creating a total of about 4.29 billion addresses. Because there are so many computers in use today, the world has run out of
IPv4 addresses. IPv6 offers a solution to this problem. IPv6 is still new and not extensively used at this time, but its use
should increase. It uses 128 bit addresses, which is more addresses than will probably ever be needed.
IPv4 addresses are read as 4 numbers separated by periods. An example IP address is 192.168.0.1. Each number can be
between 0 and 255.
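Python's standard ipaddress module can demonstrate the relationship between the dotted-quad notation and the underlying 32-bit number:

```python
import ipaddress

# The example address from the text, as both notations.
addr = ipaddress.IPv4Address("192.168.0.1")
print(int(addr))                           # the same address as one 32-bit number
print(ipaddress.IPv4Address(3232235521))   # and converted back again

# Each of the 4 numbers is one byte (0-255), so the whole space is:
print(2**32)                               # 4294967296, the ~4.29 billion addresses
```

Each of the four numbers is simply one byte of the 32-bit address, which is why none of them can exceed 255.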
Certain address ranges are reserved for private use. This way each internal network can reuse certain IP addresses
without using an IP used for the public internet. Each computer on the public internet must use a unique IP address.
Computers on a local network do not need to have globally unique IP addresses; only one public internet address is
needed for the internet connection. All other machines can use private IP addresses to conserve the total number of IP
addresses available. One public IP can be shared among many computers using NAT (network address translation), as is
commonly done in most home/office routers.
IP Range                      Uses
127.x.x.x                     Loopback address (this allows a computer to speak with itself)
192.168.x.x                   Local use
10.x.x.x                      Local use
172.16.0.0 – 172.31.255.255   Local use
Figure 35 - Special IP Ranges
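The reserved ranges in Figure 35 can be checked with the standard ipaddress module, which already knows which ranges are private:

```python
import ipaddress

# The first three are in the local-use ranges from Figure 35; 8.8.8.8 is public.
for ip in ("192.168.0.3", "10.1.2.3", "172.16.5.5", "8.8.8.8"):
    a = ipaddress.ip_address(ip)
    print(ip, a.is_private)

# The 127.x.x.x range is additionally flagged as loopback:
print(ipaddress.ip_address("127.0.0.1").is_loopback)  # True
```

A quick check like this is handy when diagnosing a lab machine: an address starting with 192.168 or 10 means the computer is behind a router, not directly on the public internet.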
[Diagram: several LAN PCs with local addresses such as 192.168.0.3 sit behind a router whose public IP address is
67.9.34.7.]
Above is an example of Network Address Translation. One public internet address is used by several computers behind
a router. The machines in the local network use local addresses to conserve the total number of IP addresses
available.
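The translation the router performs can be illustrated with a toy lookup table in Python. The port numbers and helper names here are purely illustrative; a real router also tracks protocol state, timeouts, and much more:

```python
# A toy NAT table: the router rewrites a LAN machine's (address, port) pair
# to its single public address, remembering the mapping so replies can be
# delivered back to the right machine inside.
PUBLIC_IP = "67.9.34.7"        # the router's public address, as in the diagram

nat_table = {}                 # public port -> (private ip, private port)
next_port = 40000              # arbitrary starting point for assigned ports

def outbound(private_ip, private_port):
    """Translate a LAN packet's source to the shared public address."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    """Deliver a reply back to whichever LAN machine opened the connection."""
    return nat_table[public_port]

src = outbound("192.168.0.3", 51000)
print(src)              # ('67.9.34.7', 40000) -- what the internet sees
print(inbound(src[1]))  # ('192.168.0.3', 51000) -- where the reply goes
```

From the internet's point of view, all the lab machines appear to be the single address 67.9.34.7; only the router knows which internal machine each connection belongs to.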
5.7 – Setting IP addresses manually and Network Testing
To edit the IP address in Windows, first open the Control Panel and choose Network Connections. Next, right click on the
network connection you would like to edit and choose Properties. In the next window choose IPv4 and click
Properties. Next click "Use the following IP address" and enter the IP address and default gateway desired. The subnet
mask will be filled in automatically.
The default gateway is the IP address of your router which has access to the internet. The subnet mask is used to
create subnetworks, and will not be explained in detail here.
You can also choose to set DNS servers manually, in this example I am using 8.8.8.8, Google’s open DNS service.
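The standard ipaddress module can illustrate how the address, subnet mask, and gateway fit together, assuming the example address 192.168.3.11 with the common mask 255.255.255.0:

```python
import ipaddress

# The address and mask you would type into the IPv4 properties dialog.
# 255.255.255.0 is the common home/office mask; writing /24 is equivalent.
iface = ipaddress.ip_interface("192.168.3.11/255.255.255.0")
print(iface.network)   # 192.168.3.0/24 -- every host in this range is "local"

# The default gateway must sit inside the same subnet, or the machine
# cannot reach it to send traffic out to the internet.
gateway = ipaddress.ip_address("192.168.3.1")
print(gateway in iface.network)  # True
```

This is also a common troubleshooting check: if a manually entered gateway falls outside the computer's own subnet, the connection will not work.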
Figure 36 - Network Connections
Figure 37 - Connection Properties
Figure 38 - IPv4 Properties
To see your current IP settings, open a command prompt by clicking Start > Run and entering cmd, then type
ipconfig to view the current IP configuration settings. For example, in the image below the Ethernet adapter has the IP
address 192.168.3.11 with a default gateway of 192.168.3.1.
Figure 39 - IP Configuration Utility
To test network connectivity, open a command prompt using the above method and use the ping command. To test
internet access, try pinging google.com by typing ping google.com. If you receive a reply, then internet access is
functioning.
Figure 40 - Ping Network Connectivity Test
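True ping uses ICMP packets, which programs normally need administrator rights to send. A hedged approximation in Python instead checks whether a TCP connection can be opened to a service; the function name is ours:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Rough connectivity check: can we open a TCP connection to host:port?

    Not ICMP ping (which needs raw sockets / admin rights), but it answers
    the same practical question for services such as web servers.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With a working internet connection, checking a web server on port 80,
# e.g. tcp_reachable("google.com", 80), should return True.
```

Unlike ping, this also confirms that the service itself is answering, not just that the host is switched on.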
5.8 – OSI Model
The Open Systems Interconnection model (OSI model) is used to understand how computer networks function. It divides
the functionality of a network into different layers, each with their own function.
The lowest layer is the physical layer: the physical connections used to interconnect computers, for example
category 5 cables and Ethernet cards. This refers to the actual bits being passed over a wire, represented as
high and low voltages.
The second layer is the data link layer. This layer involves physical addressing, such as MAC addresses in Ethernet
networks.
The third layer is the Network layer, which uses IP addresses to determine routing paths.
The fourth layer is the transport layer, which uses protocols such as TCP to ensure reliable connections are established
from end to end.
The last 3 layers (session, presentation, and application) correspond to the end applications themselves, such as HTTP for
serving web pages and FTP for serving files.
OSI Model

Layer              Data unit         Function
Host layers
7. Application     Data              Network process to application
6. Presentation    Data              Data representation, encryption and decryption, converting machine
                                     dependent data to machine independent data
5. Session         Data              Interhost communication
4. Transport       Segments          End-to-end connections and reliability, flow control
Media layers
3. Network         Packet/Datagram   Path determination and logical addressing
2. Data Link       Frame             Physical addressing
1. Physical        Bit               Media, signal and binary transmission
5.9 – Application Layer Protocols
Protocols at the application layer are used for a variety of common tasks on the internet. This layer is seen and
interacted with directly by the user. These include the protocols used for transmitting web pages and data, protocols
used for file sharing, and protocols for looking up websites and giving commands to remote computers. Below are
examples of popular application layer protocols in use today.
Protocol                                     Uses
DNS (Domain Name System)                     DNS is used to translate host names used by people (www.google.com)
                                             to IP addresses used by computers (74.125.230.80).
DHCP (Dynamic Host Configuration Protocol)   DHCP is used to automatically assign computers IP addresses. This
                                             allows for easier network configuration, as each machine does not
                                             need to be assigned an IP address manually.
FTP (File Transfer Protocol)                 FTP is used to transmit files over the internet.
HTTP (Hypertext Transfer Protocol)           HTTP is used to transmit web sites over the internet.
SSH (Secure Shell)                           SSH allows remote terminal access to a Unix-like system.
SMTP (Simple Mail Transfer Protocol)         SMTP is used for transmitting email over the internet; mail clients
                                             such as Outlook use it to send messages.
Gnutella / BitTorrent                        These are examples of peer-to-peer file sharing protocols, which
                                             allow users to share files on the internet.
SMB (Server Message Block)                   SMB is used by Windows computers for file, folder, and printer
                                             sharing.
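The DNS row above can be demonstrated with one line of Python; "localhost" is used here so the example works without an internet connection, but any host name can be substituted:

```python
import socket

# Resolve a host name to an IPv4 address, just as DNS does for a browser.
# With a working connection, socket.gethostbyname("www.google.com") would
# return one of Google's public addresses instead.
print(socket.gethostbyname("localhost"))  # 127.0.0.1
```

This is the same lookup every application performs behind the scenes before it can open a connection, since the network itself only understands IP addresses, not names.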
5.10 – File, folder, and printer sharing
In Windows networks, sharing files and folders is simple and can be incredibly useful. For example, storing one copy of
an installer application on a server can allow easy installation on all the other machines through the network. This can
save time and avoids the hassle of moving a CD or USB flash drive around.
To share files in Windows, simply navigate to the folder to be shared, right-click on it, and select Properties.
Next navigate to the Sharing tab and press the Share button. Here you can choose whom to share the folder with. If it
is for use by everyone, select "Everyone" and click Add. The permissions can be changed here as well. By default
users can read the files but not edit them. To allow other users to edit, modify, and delete files/folders, choose
read/write permissions.
To access the shared files from another computer, simply open an Explorer window and choose Network. Here you will
see the names of all the machines on the network. Double click on the computer you wish to connect to, and you will
find a listing of all the files and folders shared with you. You can copy these files to your computer or even execute
them over the network if need be.
There are some caveats with Windows file sharing. For example, older versions of Windows may have trouble
connecting to newer versions, such as a Windows XP machine attempting to connect to a Windows 7 shared folder.
Often, the permissions just need to be tweaked to allow access from anyone.
Figure 41 - Sharing a folder
Figure 42 - Sharing permissions
5.11 – Internet sharing
The most readily available option for internet connectivity for schools in Cameroon is the USB wireless modem.
These keys are intended to be used by one computer; however, the internet connection can be shared across an entire
network using Windows. One machine will have the key for connecting to the internet, and this machine will share its
connection with the others over the network.
To share a network connection in Windows, open Network Connections from the Control Panel. Right click on the
connection to be shared and choose Properties, then choose the Sharing tab. To share the connection, select "Allow
other computers to connect through this computer's internet connection." The local interface also needs to be selected,
normally the Ethernet card in the computer. This will share the wireless key's connection through the Ethernet network.
Select OK to finish.
To connect to the internet from the other machines, connect all the computers to the local network and establish a
connection with the wireless key. The other machines should be assigned an IP address automatically, and they will
use the computer with the key as their default route to the internet.
Figure 43 - Internet Connection Sharing
Afterword
The information in this text is intended to be used by computer science teachers and inspectors in Cameroon. This
manual should help train teachers to better plan, build, and maintain their computer networks and improve teaching
practices in these subjects by encouraging practical exercises. The material covered is by no means complete, but it
should serve as a good starting point and reference for both educational purposes and as a field repair manual.
This text was written for the Teacher Resource Center, Bamenda, North West Region, Cameroon by Peter Paskowsky,
United States Peace Corps Volunteer with the aid of the TRC staff.