We are living through a digital revolution. A super-connected world in which technology
engulfs every aspect of our lives.
Since the end of the Second World War, humanity has been on a relentless pursuit of innovation
and technological progress. The proportion of people living in extreme poverty has dropped
from almost three quarters in 1950 to less than an eighth, a testament to this progress.
Of course, this rapid advancement doesn't just come out of nowhere, and one of the
key drivers was the microprocessor. The ability to shrink an entire computer to a chip
the size of a finger has allowed for the mass adoption of both home and mobile
computers.
They have also had far-reaching implications, helping to advance every industry,
from manufacturing and finance to retail and healthcare.
The last 75 years have seen computer technology grow at a truly incredible rate.
We're going to look at the complete journey: from early vacuum tube machines to the
birth of home computers, from the multimedia madness of the 1990s
to the multicore mindset of the 2000s and 2010s. And finally: what lies ahead.
To understand how all of this came about, we need to go back to the very beginning
of digital electronics, and look at how early post-war computers were designed.
In 1945, with Allied victory imminent, it could easily have been assumed that the military use
of computer technology would subside, leaving the endeavour to have a purely
academic role.
However, the slow emergence of the bitter rivalry between the Soviet Union and the
United States meant that this was far from the case.
Indeed, the first digital computer, the Electronic Numerical Integrator and Computer (or
ENIAC for short), was used by the United States Army for calculating firing tables for
artillery, and later for research into the Hydrogen Bomb.
Outperforming mechanical computers by a factor of 1000, ENIAC was a revolution in
computer technology. Press articles at the time referenced how it could vastly
outperform existing computers.
The US Army was so proud of it, they even used it in their own recruitment
advertisements.
Overall, it was met with huge critical acclaim, despite being challenging to program for.
The inventors of ENIAC, John Mauchly and John Eckert, proposed a successor shortly after,
called EDVAC, or Electronic Discrete Variable Automatic Computer.
Unlike ENIAC, which used a decimal representation for numbers, EDVAC used a
binary representation, making data storage much more compact.
Mathematician John von Neumann wrote an influential report on the EDVAC, detailing how the computer could be extended by storing the programs
alongside the data, rather than having the two in separate parts of memory. This idea
would become known as the von Neumann architecture, which virtually every microprocessor has since been built on.
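To make the stored-program idea concrete, here is a minimal sketch in C of a fetch-decode-execute loop in which instructions and data share a single memory array; the three opcodes are invented purely for illustration and do not model EDVAC or any real instruction set.

```c
#include <stdio.h>

/* Toy von Neumann machine: program and data share the same memory array.
 * The opcodes below are invented for illustration only. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void) {
    int mem[16] = {
        OP_LOAD, 8,        /* acc = mem[8]  */
        OP_ADD,  9,        /* acc += mem[9] */
        OP_HALT, 0, 0, 0,
        40, 2              /* data lives in the same memory as the program */
    };
    int pc = 0, acc = 0;

    for (;;) {                              /* fetch-decode-execute cycle */
        int op = mem[pc], operand = mem[pc + 1];
        if (op == OP_HALT) break;
        else if (op == OP_LOAD) acc = mem[operand];
        else if (op == OP_ADD)  acc += mem[operand];
        pc += 2;                            /* advance to the next instruction */
    }
    printf("result: %d\n", acc);            /* prints 42 */
    return 0;
}
```

Because the program is just data in memory, it can be loaded, replaced, or modified like any other data, which is what made stored-program machines so much more flexible than earlier designs.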
Britain's loss of superpower status in the early 20th century meant that its efforts were
far more research focused than in America.
The Manchester Baby, developed a couple of years after ENIAC, was the first
computer to be equipped with random access memory, meaning it could store
programs alongside data.
This was achieved by using a set of four modified CV1131 cathode-ray tubes, known as Williams tubes.
The Baby was slow, even for the time, but its use of random access memory was
pioneering, and it would lay the groundwork for the Ferranti Mark 1, the world's first commercially
available computer.
During development of the Baby, another computer, called the EDSAC, was being
developed at the
University of Cambridge. The EDSAC hardware was not as technologically advanced
as the Baby,
featuring only serial memory, but nonetheless correctly implemented the von
Neumann architecture, and had a more advanced instruction set.
Featuring 14 instructions, compared to the Baby's 7, it allowed for easier software
development,
particularly later in its life. One such example was OXO, an implementation of tic-tac-toe,
which is widely considered to be the first video game ever created.
It was becoming increasingly clear that scientific and commercial applications of
computers would far outstrip the military use of the mid-to-late-40s.
By 1954, 95% of computers were used for non-military purposes.
Despite their increased use, there were still limitations preventing wider adoption of
computers.
The defining characteristic of all these early machines was the use of vacuum tubes or
valves
for their main processing elements. Vacuum tubes were unreliable, hot, power hungry,
and heavy;
even a small computer like the Manchester Baby weighed in excess of a tonne. The
solution
was the transistor. The first transistor was manufactured at the great Bell Labs in
1947.
It performed the same job as a vacuum tube but could be smaller, more reliable, and
less power hungry due to its lack of an electron beam.
The first computer to use these was the Manchester Transistor Computer, the great
grandson of the Manchester Baby. It used 250 transistors and consumed only
150 watts of power, and in the shadow of vacuum tube domination, seemed like a
miracle.
Transistor technology took another leap when engineer Mohamed Atalla proposed covering the silicon wafer with a thin layer
of silicon dioxide, which allowed the electric field to pass more easily through the silicon.
This process led to the creation of the MOSFET (metal-oxide-semiconductor field-effect transistor) in 1959,
a transistor with high manufacturability and extremely low power cost.
MOSFETs made it possible to develop high-density integrated circuits, which allowed
for cheaper and smaller systems, thus drastically increasing adoption numbers.
Early MOSFET machines, such as the IBM System/360 family of computers, used
a hybrid integrated circuit approach, combining integrated circuits with traditional
electronic components.
The System/360 family was particularly influential due to IBM's decision to separate the
system architecture from the implementation. This meant that all computers in the
family,
from the cheap Model 30 ($150,000) to the expensive Model 75 ($3,000,000), could run the
same software,
a concept that would become widely accepted in the industry.
Its successor, the System/370 made full use of monolithic integrated circuits.
The adoption of these integrated circuits created a new type of computer entirely: the
minicomputer.
These systems were scaled-down versions of large mainframe computers, designed to
provide computing
power to organisations who wanted to use computers but couldn't afford or justify a
large mainframe.
Largely accepted as the first commercial minicomputer, the DEC PDP-5 was one fifth
the price and one quarter the weight of the PDP-1, released 4 years earlier.
It was less powerful, featuring a 12-bit word length compared to the 18 bits of the larger PDP
machines,
but it was hugely successful, selling more than 1,000 units.
This entry point was continually lowered throughout the early 1970s, allowing even
more widespread adoption. The Data General Nova, with its 200 kHz clock speed and
8K
of RAM, cost just $8,000, and was one of the most popular minicomputers of the decade.
Some however, knew that the idea of miniaturization could be taken to even more
extremes. The first use of a microprocessor was in the F-14 Tomcat fighter jet,
and was developed by Garrett AiResearch.
The company was asked by the US Navy to build a flight computer that could compete
with the
electromechanical system that was being used during development. The processor
created
to do this took up 1/20th the space of the existing system, and was much more
reliable.
Impressed by its capabilities, the Navy used this chip in all early F-14s. But
all information regarding its development was classified until 1997, so its effect on
the wider industry was little, if any.
The story of the first commercial microprocessor involves Japanese company Busicom,
who were
developing a range of programmable calculators but were struggling to produce the
chipset required.
The company contacted Intel, who were large producers of computer memory for
mainframe and minicomputers, and asked whether they could produce a 7-chip
design for their calculator.
Intel accepted, but project lead Ted Hoff proposed that this could be reduced to 4 chips to lower costs. Hoff, not being a chip designer, moved on to other projects, and was
succeeded by Italian engineer Federico Faggin, whom Intel had hired from Fairchild
Semiconductor.
While working at Fairchild, Faggin invented a new transistor technology called a self-aligned gate;
this allowed for the gate of the transistor to have a much smaller overlap with
the source and drain regions by automatically generating it within the mask process.
Removing this bottleneck in transistor design meant that performance could easily be
increased when moving to a smaller lithography.
Fairchild were reluctant to adopt this technology, so when Faggin moved to Intel he
immediately put it to use in the Busicom project.
This led to the creation of the Intel 4004, which was delivered to Busicom in 1971.
Busicom would go on to sell more than 100,000 4004-powered calculators. The world's
first
commercial microprocessor was born, and a new era of computing was about to begin.
The year is 1971. While microprocessors were starting to be seen in calculators,
many had seen how MOSFETs had given rise to the minicomputer and wondered
whether microprocessors could be used to make an all new type of computer: a home
computer.
One such person was former NASA engineer Gus Roche, who set up a company in 1968 called
Computer Terminal Corporation (CTC) to manufacture terminals for mainframes and minicomputers.
Roche was determined to produce a personal computer, one that was small enough
to fit on a desk but could do more than just act as a terminal. His designs called for a single 8-bit microprocessor,
the CTC 1201, rather than the more traditional approach of a collection of TTL logic chips.
There was just one problem: at the time, no-one had ever manufactured one.
Roche knew this, and so approached two chipmakers to ask if they could
manufacture the microprocessor. One of these was Texas Instruments, who were a
large producer of
integrated circuits and logic chips. The other was Intel, who mainly produced memory
chips.
Texas Instruments attempted to produce the chip, but it suffered from reliability
problems
and thus development was not continued. Intel, who were in the early days of
developing the 4004,
didn't see much point to the project; for them, memory was a far bigger priority.
Nonetheless, they decided to take on the project and got to work implementing CTC's
design.
Initial progress was good, but it suffered delays and Intel could not meet CTC's time
schedules.
As a result, CTC decided to abandon the single-processor design in favour
of a multiple-chip approach. Intel continued working on the chip and a year later
released it
as the Intel 8008, the world's first 8-bit microprocessor. Since the 8008
design was based on CTC's specifications, their computer, dubbed the Datapoint 2200,
was completely instruction compatible with the Intel 8008, despite not using the
processor.
The 8008 was soon replaced with the 8080, which improved compatibility with existing
TTL chips,
had a much higher clock speed, and included a larger 16-bit address bus.
This caught the attention of calculator manufacturer MITS, who had used the 4004
processor in some of its designs. Ed Roberts, the company's founder,
knew the 8080 was powerful enough to be used in a home computer, but the chip's
high price meant
that it wasn't commercially viable to do so. However, as one of Intel's largest
customers,
Roberts was able to negotiate the price down from $350 to $75 per unit.
This led to the creation of the Altair 8800, generally considered the first commercially
successful personal computer.
The 8800 was popular with a generation of computer scientists and engineers who
had learnt computer programming at college, but no longer had any hardware to use.
Several other home computers followed suit, and the chip also appeared in early
arcade games such as TAITO's Space Invaders.
Soon after, more designs appeared on the market. The Motorola 6800 was released
just 4 months after the 8080, and saw use in several early home computers.
Motorola had been a huge player in developing transistor-powered electronics, and
were confident that their chip would be successful.
However, poor decisions by management led most of the design team to walk out around the time of its
release. The team, led by engineers Chuck Peddle and Bill Mensch, would join a
small semiconductor
company called MOS Technology, who were best known for creating the processor for
Atari's Pong.
They got to work by taking the 6800 design and removing features that were deemed
unnecessary
in order to drive cost down; they removed one of the accumulators, simplified the
memory bus,
and shrank the instruction set by 25 percent. Another dramatic cost saving
was achieved by modifying the masking process in the silicon to significantly improve
yields.
This allowed MOS to have a defect rate of less than 30%, compared to the 70%
defect rate of competitor foundries.
The result was the MOS Technology 6501 and 6502 processors, first shown at the WESCON 75 trade show
in San Francisco. The 6501 was pin-compatible with the Motorola 6800,
whereas the 6502 was a modification of the 6501 design with an on-chip
clock oscillator. The result of the cost-saving meant that the 6502 could be sold for just $25,
compared to the $175 of the 6800 and $350 of the Intel 8080. When first unveiled,
many people could not believe the price and thought the chip was a scam. Despite this, the
news was covered in great detail by the technology press and the product was soon in
high demand.
However, Motorola were furious that the 6501 was pin-compatible with the 6800,
and sued MOS Technology, citing patent infringement.
MOS, being a small company, settled with Motorola and stopped selling the 6501.
This wasn't an issue, since the 6502 was far more popular and Motorola made no such
demands
regarding that chip. In fact, Motorola knew that the 6502 would be successful
regardless; two
months after the chip was released they reduced the cost of the 6800 from $175 to $69.
Nine months later, it was dropped again to $35. But it was too late. Everyone knew the 6502 was king.
The chip was used in a whole host of early home computers, including the Apple II,
Commodore PET and VIC-20, Atari 400 and 800,
and the BBC Micro. It also found success in the emerging home video game market,
powering the
Atari 2600, the Atari Lynx, the TurboGrafx, and the NES. Despite the success, the legal battles
legal battles
with Motorola had bled the company of funds, and so they were taken over by
Commodore in late 1976.
Commodore would fully utilise MOS Technology's expertise with the Commodore 64,
which used a lightly modified 6502 chip, the 6510.
The 6502 had massively disrupted the market with its high performance and low cost,
but it soon faced competition of its own. The head engineer of the Intel 8080 project,
Federico Faggin, soon left the
company after its completion and started Zilog microcomputers with fellow Intel
engineers.
The team took the 8080 design and added several improvements:
a whole host of new instructions, a reduced number of required support chips,
and new index registers that made repeated execution faster.
This chip was released as the Zilog Z80, and while its intended use was in embedded
systems,
it was soon widely adopted in the home computer market. The chip was featured in the
Amstrad CPC,
various MSX computers, the RadioShack TRS-80, and the Sinclair ZX80, 81, and
Spectrum.
Like the 6502, the Z80 also found its fair share of success in the video game industry,
powering the ColecoVision, Sega Master System, and Sega Game Gear.
In addition to home systems, the Z80 was the go-to processor for the emerging video
arcades;
it was the chip that powered Donkey Kong, Galaxian, Galaga,
Pac-Man, Mario Bros., Frogger and Berzerk.
But this was just the tip of the iceberg. The Z80's success would come not from its
use in home computers, or in video games. It came
from software. The Z80 was the de facto processor to run CP/M,
the first successful cross platform operating system. CP/M was originally built for the
Intel 8080, but being fully compatible and cheaper, the Z80 was the obvious choice.
Due to the cross-platform nature, CP/M quickly gained a large software library, which
made it the dominant operating system in the late 1970s and early 1980s.
One such example was WordStar, one of the earliest word processors to gain
commercial success. Its
'What you see is what you get' interface gave it a clear advantage over existing fully
text-based word processors. Another example was Microsoft's Multiplan,
an early spreadsheet application that would become Microsoft Excel 3 years later.
In fact, Microsoft were a key developer of CP/M software, and recommended using the
CP/M
operating system to partner IBM, who were in the process of developing their own
home computer.
However, Digital Research, the company who created CP/M, were not happy with
IBM's contract, leading Microsoft to abandon this plan.
They instead purchased 86-DOS, an imitation of CP/M that had been designed to take
advantage of Intel's new 16-bit 8086 processor, and renamed it to MS-DOS.
This was then licensed to IBM, who used it in their 1981 product, the IBM PC.
Powered by a cheaper version of the 8086, the 8088, it was an instant success,
and the MS-DOS operating system quickly overtook CP/M in market share.
The proliferation of different microprocessors and architectures created a golden age
of home computer technology in the early 1980s.
The adoption of computers rapidly increased in the home, in education, and in the workplace.
However, by 1985, IBM were clearly ahead;
more than half of the home computers in America were IBM's. The original IBM PC had
sold well,
but the recently released PC/AT led the industry with its blazing fast Intel 80286 CPU.
The PC/AT
was so fast that IBM had to purposely slow down the chip to not cannibalise its
minicomputer line.
Faced with dwindling sales of 8-bit computers, others in the industry needed
a microprocessor that could keep up.
Thankfully, this already existed.
Introduced in 1980, the Motorola 68K was first used in high-performance systems,
such as the HP 9000 series and Silicon Graphics IRIS workstations.
However, by the middle of the decade, prices had become low enough that use in a
personal computer
would be cost effective. The 68K was perfect for the next-generation of home
computers; its forward
thinking 32-bit design and high clock speed meant that it was suited to graphics-heavy
workloads.
It would be the chip to power the Apple Macintosh, Commodore Amiga, and Atari ST
home computers.
With IBM PC-compatibles dominating the office space, these new machines would
position themselves as creative workstations.
The Macintosh, by far the most popular of the three, had Aldus PageMaker,
a desktop publishing application which allowed the Mac to rule the desktop publishing
space.
The Amiga had DeluxePaint, a digital graphics editor, and the Video Toaster, a video
production suite. And the Atari, with its built in MIDI ports,
was a music production powerhouse, with software such as Cubase and Logic.
By the late 1980s, the market was made up almost entirely of PC compatibles and 68K
powered
machines. IBM, who had led the market with the PC and PC/AT, did not want to lose its influence,
so released a new lineup of computers in 1987 called the PS/2 series.
The more powerful PS/2 machines used the new Intel 80386 CPU, and offered a range
of new features.
One such feature was a new expansion bus called 'Micro Channel Architecture', which
was designed
to replace the existing ISA bus that had been used on previous IBM machines and PC-compatibles.
IBM had patented the new bus and charged license fees to companies who wanted to
use it.
The move was met with a poor reception though; it was seen as a desperate attempt to regain control
of standards at a time when IBM's grip on the personal computer market had been steadily slipping.
PS/2 machines sold poorly, and PC vendors worked together in a consortium
to create standards outside IBM's influence.
By 1990 it was clear that IBM had lost control of the industry.
Management issues at Atari and Commodore had meant that both companies
struggled in the early 1990s,
effectively ending support for the Amiga and ST computer lines. This led to a bloom of
Mac adoption, with Apple becoming America's number one computer manufacturer during this time.
However, even the newest Apple machines could not compete with the latest PC
technology.
The Intel 80486, released in 1990, was twice as fast as the outgoing 386.
This was achieved by using an in-built floating-point unit, an on-chip Level 1 cache,
and advanced pipelining.
The Motorola 68040, which drove the most powerful Macintosh systems, could beat the
486 clock
for clock, but simply couldn't reach the same clock speeds without furnace-like heat output.
The market was now segregated into two sectors: the more niche Motorola powered
Macintosh,
and the widespread Intel powered PC. One thing was for certain though: the PC was
winning.
As computing power increased throughout the late 80s and early 90s, the potential
use cases of home computers grew at a similar rate. The 1990s saw an
explosion of multimedia software and hardware which took advantage of the new
technology.
At the heart of it was the CD-ROM, packing huge amounts of data into a single optical
disk.
Software titles such as Microsoft Encarta and Myst made full use of the storage that
CD-ROMs offered.
To power these, a fast processor was a must. It was an age of Multimedia madness.
Intel, by far the market leader in the PC space, knew that they would need to continue
their advancements in order to power this new multimedia age. By late 1992 the
company had
spent 3 years developing the 486's successor, and didn't want anyone else to profit off
of it.
Up until this point, second sourcing was an almost universal staple of the industry -
multiple manufacturers would produce variants of a company's microprocessor so that
supply was always
ensured. However, as Intel entered near monopoly levels of market share, the
chipmaker didn't want
other companies to profit off its new products, and cut all second sourcing agreements. In addition,
In addition,
it also didn't want other CPU manufacturers to use its branding for their advantage.
Other companies, such as Cyrix and AMD, had created products which used the
'386' and '486' taglines in their brand names.
The result of this meant that their new chip would have a new name: the Intel Pentium.
The Pentium was launched in early 1993 and was an immediate hit.
The chip was blazing fast and blew the doors off anything else on the market.
Its dual-pipeline design allowed the chip to process simple and complex instructions
simultaneously, known as a superscalar architecture.
The chip also benefited from a turbocharged floating-point unit, which was up to 15
times faster than the one used in the 486.
BYTE magazine put it best when they ran a double page spread saying 'Pentium
changes the PC',
indicating that the Pentium was so fast it would be bottle-necked by other system
components.
In fact, this was the Pentium's biggest problem. The other components (RAM,
Storage,
Display Adapter) weren't ready to take advantage of the Pentium's true performance
and the need
for the latest hardware meant the cost of Pentium machines was prohibitively
expensive.
This created heightened demand for 486 machines, and allowed Intel's competition to
flourish.
The AMD Am486 and Cyrix Cx486 were two microprocessors that had been reverse
engineered
from Intel's part. These processors performed at roughly the same level as Intel's 486,
but were cheaper. For example, a 40 MHz Am486 cost the same as a 33 MHz Intel 486.
This competition continued, and although it took several years, both AMD and Cyrix
came up with Pentium competitors. The AMD K5 and Cyrix 6x86
were both released in 1996, delivering Pentium performance at a lower price.
Things were looking good for the market. Until one application changed everything.
Quake was the hottest PC game of 1996, and was a visual marvel.
Quake's levels guided players through a fully texture-mapped 3D world, rendered in
real time. To power such a feat, a Pentium processor was a must.
id Software had taken maximum advantage of Pentium specific optimisations, and the
game heavily relied on floating point calculations. Up until this time, almost all
applications relied exclusively on integer maths, where the 6x86 and K5 could beat the
Pentium.
In Quake, however, they couldn't come close.
Cyrix was hit particularly hard, as its floating point performance was extremely weak in
comparison
to Intel's. The chipmaker was soon relegated to producing low cost products for entry
level
machines, and the company merged with National Semiconductor in 1997, ending its
x86 development.
This issue didn't hurt AMD to anywhere near the same extent, though.
This was due to the chipmaker acquiring NexGen, another semiconductor company
who had produced 486-class CPUs for PCs.
NexGen had been working on a successor to their own Pentium competitor, the Nx586, and this design would become the basis for AMD's new processor.
The result was the K6, which would release in 1997.
Despite launching 4 years after the Pentium, the chip could match the very best on the
market.
To keep on top of the industry, Intel had intended the Pentium to be replaced with the
Pentium Pro in 1995, thus extending their performance lead.
The Pentium Pro used an advanced architecture called P6, which put heavy emphasis
on improved
32-bit performance, but suffered in 16-bit applications. This was a problem, since
Intel had overestimated the amount of 32-bit code that would be used in Windows 95 and its applications,
resulting in the Pentium Pro delivering no real performance gain over the Pentium.
This meant that by early 1997, 4 years after the Pentium had launched, it was still
the company's fastest chip.
The bad news for Intel was that AMD's K6 was fast, faster than an equivalent
Pentium,
and cost less. The K6 architecture was vastly superior to the K5's, and the chip was a
huge success.
Feeling the heat, Intel released the Pentium II the same year, which used the same
architecture
as the Pentium Pro, but with a number of improvements and cost reductions.
The Pentium II was fast, but it wasn't the leap forward that the Pentium had been.
While the chip was faster in floating point applications, AMD's K6 (and its successor,
the K6-2) were faster in integer performance. This meant that for gaming, Intel was
ahead,
but for everyone else, you could just save the money and get a K6.
Intel's competition was not limited to the x86 space. Apple, who were the last
remaining
manufacturer of 68k computers in the early 90s, saw the rapid performance increases
of Wintel PCs,
and knew that they had to look to a new architecture in order to keep up.
Motorola, who had built the 68k, also knew that their architecture would not last the
next 10 years. So, to combat Intel, Apple and Motorola teamed up with IBM
to produce a brand new RISC-based micro-architecture called PowerPC.
The RISC philosophy, standing for Reduced Instruction Set Computing,
aimed to simplify the low-level micro-operations that made up computer programs.
The RISC instruction set had far fewer instructions than the CISC instruction set found
in x86 machines, meaning that, in theory,
architectural improvements would be easier to design and implement.
Many in the 1990s believed that RISC instruction sets were the future, and that the
complex nature of a CISC architecture would limit its potential.
It was during this time that many RISC architectures were developed, including
PowerPC, UltraSPARC from Sun, and Intel's Itanium.
PowerPC was first introduced on the desktop in 1994 with Apple's Power Macintosh line,
and provided a much needed performance boost to Apple's family. The first PowerPC chip,
the 601, was up to 3 times faster than the Motorola 68040 CPU that it replaced.
The performance improvements continued with the PowerPC 750, launched in 1997.
The chip, branded by Apple as the G3, was the fastest CPU in the world,
and solidified the idea that RISC was the architecture of the future.
The chip was up to 30% faster than the Pentium II in some applications,
and due to its low power usage, meant that Apple's laptops would easily outperform
Intel equivalents.
Intel was starting to feel pressure from competition on both the PC and Mac side,
and by early 1999, AMD had overtaken them in sales. The response? The Pentium III.
Considered by many to be a rebrand, early Pentium IIIs were nothing more than Pentium IIs
but with a new cache controller and Intel's new SSE instructions.
SSE allowed the CPU to drive the floating point unit as a vector processor, allowing a
program to
do a calculation on 4 floating-point numbers in the time it would take to do 1. A similar technology
had been introduced by AMD on the K6-2 as 3DNow!, but it was not widely adopted,
and Intel's adaptation would become the industry standard.
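As a rough sketch of what this packed floating-point arithmetic looks like to a programmer, the example below uses the SSE intrinsics exposed by today's C compilers on x86 (a modern interface, not the assembly programmers would have hand-written in 1999): a single _mm_add_ps call adds four floats at once.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics (x86 only) */

int main(void) {
    float a[4]   = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4]   = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    /* One packed instruction adds four single-precision floats at once,
     * instead of four separate scalar additions. */
    __m128 va   = _mm_loadu_ps(a);
    __m128 vb   = _mm_loadu_ps(b);
    __m128 vsum = _mm_add_ps(va, vb);
    _mm_storeu_ps(out, vsum);

    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```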
The Pentium III was the fastest x86 CPU on release, but only briefly.
AMD would release the AMD Athlon in late 1999, a chip that was definitively faster
than the Pentium III.
The Athlon was ahead in integer calculations, but even further ahead in floating point.
Intel responded with a whole host of modifications to the Pentium III core, including
introducing an on-board L2 cache.
These second-generation Pentium III CPUs were known as 'Coppermine', and while
they were not
faster than the Athlon clock for clock, they had far better motherboard support
and used less power. They should have been big sellers. But they weren't.
Intel was in the process of transitioning from the 250nm to 180nm manufacturing
node,
and this transition was fraught with problems. A 1 GHz Coppermine Pentium III would
outperform
a first generation Athlon, but you just couldn't buy the Intel part. Stock was severely
limited,
and faced with no other choice, retailers and OEMs were forced to build Athlon PCs.
To further boost performance, AMD refreshed the Athlon by moving the cache on-die,
which allowed them to crank up the clock speeds to 1.4 GHz.
With the continued lack of supply of Pentium IIIs, AMD invested in another
manufacturing plant in Germany to keep up with demand.
The news got worse for Intel, because it was around this time that the PowerPC 7400,
more commonly known as the G4, was released.
The G4 was the first PowerPC chip to implement SSE-like features, called 'AltiVec',
and performance easily overtook x86 CPUs in PCs, even at low clock speeds.
Intel released the Pentium 4 in late 2000, but it was clear that the product had been
rushed to
market, as it was power hungry, hot, and slow. Not only could the Pentium 4 not match
the Athlons,
it couldn't even match AMD's budget friendly Duron CPUs.
In early 2002, the product was updated to include more cache, support faster
memory,
and included higher clock speeds. Using Intel's new 130nm process,
the chip could now clock fast enough to beat AMD's flagship, which had been renamed
Athlon XP.
However, with the upcoming launches from PowerPC and AMD, it was clear that the
performance crown
would not be held for long.
The multimedia driven 1990s had seen a paradigm shift in the design of CPUs.
An emphasis on floating-point performance meant that by the turn of the millennium,
processors were equipped to handle high resolution displays, complex creative
applications and rich
3D graphics. But, with a clock speed wall soon approaching, the next 20 years would
be defined
by packaging multiple CPU cores onto a single die.
While the previous 25 years had been built around ever-growing clock speeds, these new
designs would require a whole new way of thinking.
This was the era of the multicore mindset.
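As a small illustration of that new way of thinking, here is a hedged sketch (the workload and thread count are arbitrary choices for the example) that splits a summation across two POSIX threads, so a dual-core processor can work on both halves at once.

```c
#include <pthread.h>
#include <stdio.h>

/* Splitting one workload across two threads so that a dual-core CPU can run
 * both halves simultaneously. Illustrative only. */

#define N 1000000

static double data[N];

struct slice { int start, end; double sum; };

static void *partial_sum(void *arg) {
    struct slice *s = arg;
    double total = 0.0;
    for (int i = s->start; i < s->end; i++)
        total += data[i];
    s->sum = total;                 /* each thread reports its own partial sum */
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    struct slice halves[2] = { {0, N / 2, 0.0}, {N / 2, N, 0.0} };
    pthread_t workers[2];

    for (int t = 0; t < 2; t++)
        pthread_create(&workers[t], NULL, partial_sum, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(workers[t], NULL);

    printf("sum = %.0f\n", halves[0].sum + halves[1].sum);
    return 0;
}
```

Compiled with something like cc -pthread, the two halves can genuinely run in parallel on two cores, which is exactly the kind of restructuring software needed once clock speeds stopped climbing.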
The year is 2002, and with Intel's mainstream and workstation performance behind its
rival AMD,
the company attempted to boost performance with a new technology called 'Hyper-Threading'.
Hyper-threading was an implementation of SMT or simultaneous multithreading, a
technique that created an additional virtual processor
which allowed applications to take advantage of underused parts of the processing
pipeline.
This was first seen in new Intel Xeon workstation processors, which replaced the
Pentium III Xeons which occupied the space.
The technology was used to close the performance gap to the AMD Athlon MP, a dual
socket version
of the Athlon XP. The feature was brought into the desktop with high end Pentium 4
CPUs in May 2003.
This, combined with a super fast 800 MHz front side bus,
meant that the Pentium 4 pulled ahead of the Athlon XP.
AMD responded by releasing the Athlon 64, the first 64-bit x86 CPU.
The processor was based on AMD's Opteron server CPU, which had been released
earlier in the year.
At first, the achievement didn't seem that impressive. There were other 64-bit CPUs
around;
many RISC architectures had reached the 64-bit milestone far earlier.
It wasn't even the first 64-bit desktop CPU - the PowerMac G5, powered by the
PowerPC 970,
had been released 3 months prior. But none of this mattered, because AMD's
architecture could do
what no other architecture could - run both 64 and 32-bit x86 applications at full
speed.
There was another architecture that could run both - Intel's Itanium,
which was a RISC based 64-bit CPU that featured a 32-bit x86 compatibility layer.
The Itanium however suffered a huge performance penalty when running 32-bit x86
code - the Athlon 64 architecture didn't.
Intel tried to combat AMD's press coverage by announcing a fully unlocked version of
the
Pentium 4, the Pentium 4 Extreme Edition. This quickly gained the nickname
'Emergency Edition',
due to its sudden announcement being conveniently a week before the Athlon 64's
release.
The initial launch of the Athlon 64 was somewhat mixed - the chip was the fastest out there, no question, but it was expensive,
and the requirement to use buffered memory on high end Athlon 64 FX products added
further costs.
There were also production issues - at launch AMD could only produce 100,000 chips a
month.
These problems were eventually ironed out, and the Athlon 64 sold well. While the
Pentium 4
was fast for content creation workloads, in gaming the Athlon 64 and 64 FX were far
ahead.
Revisions of the Pentium 4 and Athlon 64 chips were rolled out throughout 2004 and
early 2005,
but while the Pentium 4 remained competitive, the product was suffering from high
power draw
and temperatures as Intel cranked the clock speeds higher and higher.
The true step forward came in mid-2005 with the release of the dual-core Pentium
D,
and Athlon 64 X2. The Pentium D was essentially 2 Pentium 4 processors on the same
package, whereas
the Athlon was a monolithic design, again coming from AMD's work in the server
space. The Athlons,
just like their single-core counterparts, were faster out of the box, but AMD charged a
heavy price premium, and with a good overclock the Pentium could trade blows for less
money.
It was clear by now however that the Pentium 4 was not a smart design.
In an effort to try and continuously improve clock speeds, questionable design
decisions were made that would harm the architecture in the long run.
The P4's notoriously long 31-stage instruction pipeline made the processor inefficient,
and when it became clear that going beyond 4 GHz would be impossible, Intel knew
that they had to change the architecture. But things were already looking up for the
company.
Apple had transitioned away from PowerPC processors to Intel's,
making AMD Intel's only competitor. In addition, AMD had become complacent with their performance,
using revenue to buy graphics company ATI rather than invest in R&D.
Intel decided to base their new architecture on the Pentium M, a mobile chip based
originally on
the Pentium III design. The Pentium M was designed to be as power efficient as
possible,
and while being based on the Pentium III, took many of the improvements from the
Pentium 4
and incorporated them into the core. To show that the company was moving away
from the Pentium 4 approach,
they created a new brand name for their new products - Intel Core.
The first Intel Core products were the Intel Core Solo and Intel Core Duo, released in early 2006.
These chips were laptop exclusives, and were nothing more than Pentium
M processors with minor tweaks. However, ported to a new manufacturing process,
they provided class leading performance, and were successful.
Their biggest weakness was that they were still only 32-bit chips, but Intel knew that
this would be fixed in the next design. And boy, was it a big one.
The Core 2 Duo family, released in July 2006, was nothing short of a game changer.
The chips, based on Intel's new Conroe architecture, were up to twice as fast
as the Pentium D's they replaced, and crushed the performance of competing Athlon
64 CPUs.
The midrange E6600 was a particular gem; it cost around $320, but was faster than
the $900 Athlon FX-52, which was AMD's flagship.
The chips received glowing reviews in the technology press, citing the high
performance,
low power consumption, and great value. The architecture had been given a wealth of
upgrades, which each gave
small single-digit performance increases. The chip itself was given a huge 4MB L2 cache,
could fuse instructions together to decode more instructions per cycle, and had beefy
out-of-order instruction capabilities. When combined together,
along with a whole host of other improvements, the chip was a quantum leap in
performance.
And crucially, AMD had nothing to match it.
The company refreshed its lineup with slightly faster Athlons, with much lower prices,
but they just weren't fast enough. The launch of Core 2 Quad,
a dual die version of the Core 2 Duo, pushed Intel's technology lead even further
ahead.
In an age where dual cores were only just being utilised, most people had no need for a
quad core
processor, and at a launch price of $851 for the cheapest version, the chip was a niche
product.
But this didn't matter, because the halo chip was a demonstration that Intel was king.
The Core 2 Quads rapidly dropped in price throughout 2007.
Three months after release, the Q6600, the most popular Core 2 Quad part,
had its price reduced by $300. By July, the price had dropped to just $266.
AMD finally responded to Intel's lineup in late 2007 with the launch of Phenom, but the
chips were
overpriced and hit with an architectural bug which dropped performance further. The
main problem
with Phenom was that it simply couldn't clock fast enough; the chips struggled to get past 2.5 GHz.
The product was relaunched 6 months later, with the architectural bug fixed,
increased clock speeds, and a lower price. The Phenom was now competitive, but only
just.
Towards the end of 2008 and early 2009, successors from both companies were
released. The Core i7
from Intel was launched in late 2008, and offered noticeable performance gains over
the Core 2 Quad.
But the product was hampered by the requirement for expensive DDR3 memory,
and a reduction in L2 cache size meant that some software saw no performance gain.
AMD released its new architecture, the Phenom II, in January 2009 to a warm reception. The chip
The chip
wasn't as fast as the Core i7, but was competitive with the Core 2 Quads, and could be
dropped into
the now 3-year old AM2 motherboards for a cost effective upgrade. 6-core CPUs came
in early 2010,
with the Core i7 980X being released by Intel, and the Phenom II X6 being released by
AMD.
The i7 was a chip that could do it all - it had impressive multithreaded
performance
when applications could use all 6 cores, and blistering single core speed when they
couldn't.
The Phenom wasn't so impressive, but thanks to its low price, it was at least
competitive.
Going into 2011, both AMD and Intel had brand new micro-architectures on the
horizon,
with Intel detailing its new Sandy Bridge architecture in late 2010.
On the surface, Sandy Bridge was an evolution of the Nehalem architecture of the
previous Core i7.
But under the hood, there were a variety of improvements, which gave it a boost of
around 30% over the original Core i7.
This was impressive on its own, but the biggest advantage with Sandy Bridge was the
affordability.
The i7-2600K could match the previous flagship, the 6-core i7 980X,
but was 700 dollars cheaper at only $317. But the real star was the Core i5-2500K,
delivering 90% of the performance of the 2600K at roughly half the price.
The chips were given glowing reviews in the tech press - AnandTech called them a 'no-brainer',
and AMD would have to deliver something special to keep up.
Early in development, AMD had reported that its upcoming 'Bulldozer' architecture was
up to
50 percent faster than the original Core i7, and many hoped for a return to top-tier
performance.
Bulldozer was a radical departure from previous designs. In prior architectures, each
core got
its own caches, execution units, and interconnects, and was largely independent of the others.
In Bulldozer, AMD grouped two cores together into a module, which shared some
resources such as the Level 2 cache and floating point unit.
This compromise was done to try and boost clock speeds of the part, something
which had always troubled AMD with the original Phenom.
If clock speeds were high, then the chip would easily outperform its predecessor.
There was one problem. They weren't. The flagship FX-8150 was barely faster than the
Phenom II X6,
despite having 8 cores compared to the Phenom's 6. In single-threaded workloads,
the FX chips were slower than Phenom, and compared to Sandy Bridge, they were
hopeless. The chips sold poorly, which meant that revenue decreased. At first it wasn't a problem, with strong GPU
sales
and sales of Phenom enabling the company to bring in a net income of $491m in 2011.
But a year later, the company made a net loss of $1.18bn. AMD were forced to sell their foundries,
lay off staff, drastically cut down on R&D, and even sell their headquarters.
The company tried to fix the broken architecture, first by cranking up the clock
speed, and then gradually modifying the core design.
But this did little to help, because Intel's architecture was so far ahead.
Faced with no competition, Intel spent almost all of its focus on the mobile market,
beefing up the integrated graphics and improving power consumption.
This was mainly achieved using the company's unmatched lead in silicon
manufacturing.
Even if AMD could make a chip as powerful, it would be slower due to Intel's foundry
advantage.
Intel were dominant for the next 5 years, and faced with no competition,
rolled out endless quad-cores with slightly higher performance.
Sandy Bridge was followed by Ivy Bridge, essentially a die shrink to the 22nm process.
This was followed by Haswell, Broadwell, and then Skylake, each being minor
improvements.
Many people started to feel like these new chips were lazy upgrades, but with AMD so
far behind, people had no choice but to buy them.
Faced with bankruptcy, AMD took drastic action.
First, most resources were put behind developing a brand new CPU architecture, called
'Zen'.
This left AMD's graphics lineup vulnerable, but AMD had never really broken through
with graphics, and so this was deemed a necessary sacrifice.
They beefed up efforts with Microsoft and Sony to produce the hardware for the Xbox
One and PlayStation 4 consoles, helping to diversify revenue streams.
The company also gained a new CEO in late 2014, Lisa Su, who had joined the company in 2012
from Freescale Semiconductor. With restructuring in place, and the company stable,
AMD went into 2017 with great hope. Their new CPU architecture was ready.
The Zen core was far more traditional than the Bulldozer concept, reverting to an
independent core design over Bulldozer's modules. It brought new features such as a
micro-op cache,
simultaneous multi-threading, and a dramatically improved floating point engine. The
architecture
was also built on the new 14nm FinFET process, which yielded further efficiency
gains.
The biggest advantage Zen had however, was its high core count, featuring up to 8
cores on a single Zen die.
Launched in early 2017, Ryzen, which was the brand of the Zen product family, was
met with praise from the tech press.
The top-end chips were equal in performance to the Intel Core i7-6900K, but were a third of the price.
While single threaded performance and clocks were still behind Intel's latest
architecture,
Kaby Lake, you got twice the cores for the same price.
Consumers, who had been stuck with a maximum of 4 cores for the last 11 years,
were suddenly shown just how dangerous a monopoly could be.
Intel quickly responded, releasing a 6-core mainstream processor, the i7-8700K, 6
months
later. The Ryzen family was refreshed in April 2018 with minor tweaks and higher clock
speeds,
being released as the Ryzen 2000 series, and in October, Intel further increased core
count
with the Core i9-9900K, which brought 8 cores to Intel's mainstream platform.
Intel had hoped to transition products across to their new 10nm node in 2015,
but it had been plagued with problems, and even 5 years later it still wasn't ready for
the desktop.
The company mitigated this by making continuous improvements to its 14nm process,
but AMD, who used TSMC and Samsung foundries, could leap ahead and gain a node
advantage.
While TSMC's 7nm was only slightly better than Intel's 10nm process,
it had great yields, and could be used for desktop sized dies.
In July 2019, AMD fully utilised the new node with their Ryzen 3000 family, based
on the Zen 2 architecture. The core design was the same as the original Ryzen but
with
an improved memory controller and double the L3 cache, significantly improving
gaming performance.
The Zen 2 architecture also innovated by using a chiplet design, which separated the
I/O and compute portions of the chip into two distinct dies.
This was done because the compute portion gained large benefit when moving to the
new expensive process, whereas moving the I/O die saw little benefit.
Thus, by separating the dies, the I/O could be manufactured using an older and
cheaper lithography without impacting performance. This chiplet design allowed
AMD to double the core count, bringing 16-cores to the mainstream desktop. Stuck on
its 14nm process,
Intel could only increase core count by 25%, releasing its 10-core i9-10900K in May
2020.
So what's next? Since 2005, we've hit a wall with clock speeds,
being limited by power draw and heat dissipation. Furthermore, as the process nodes
shrink,
the cost to develop new chips is growing at an exponential rate. In 2001, when the
Pentium 4 was
first introduced, there were 23 foundries who were manufacturing the bleeding edge
130nm process.
With the latest 7nm node, only 3 are left - Intel, Samsung, and TSMC.
With these limitations, the future of x86 looks underwhelming.
More cores. Bigger caches. Minor architectural improvements. But 50% increases in
performance?
You can forget it. To keep pushing boundaries, it will require truly groundbreaking
innovation.
But that's exactly how we got here in the first place.
Today, the PC processor industry is dominated by AMD and Intel,
with essentially 100% market share in laptops, home computers, and servers.
But, in televisions, infotainment systems, smart devices, and particularly in
smartphones,
they have almost zero market influence. So why is there such a divide between these
two markets?
It's not performance; modern smartphones are just as powerful as some home computers.
Discovering the answer is critical to understanding why the home computer industry
is becoming ARMed and dangerous. To find it, we need to go back to 1981.
This was at the beginning of the home computer revolution, and Acorn Computers
were a key player in the UK.
Their first computer, the Atom, was a minor hit, but real success came with the follow-up.
The company had signed a deal with the BBC to produce a computer for a series of
educational programmes as part of the broadcaster's Computer Literacy Programme.
The computer was developed quickly, and was released as the BBC Micro, which went on
to become one of the best selling home computers of the period.
Its success was linked to the education market - the government subsidised computers in schools
by offering 50% off, provided that they chose one of three models: the BBC Micro,
Sinclair ZX Spectrum, or RM 380Z, from Oxford company Research Machines. The
Micro was the obvious choice however, since it was the machine that was featured in
the BBC television programmes.
To capitalise on their success, and to address the lower end of the market dominated
by Sinclair products, Acorn released the Electron, a cost reduced version of the BBC
Micro.
During this time, Acorn engineers began to think about a next generation machine,
but found existing 16-bit microchips, such as the Motorola 68k, to be lacking. The
company
therefore decided to start development on an in house solution, called the Acorn RISC
Machine,
which was based on the reduced instruction set idea that would become popular in the late 1990s.
The project resulted in the ARM1 microprocessor, which was used mainly as a coprocessor to speed
up development of the ARM2. The ARM2 would be used in the Acorn Archimedes,
the successor to the BBC Master, a more powerful version of the BBC Micro.
The machine saw success in education, but most people chose to upgrade their 8-bit machines with either an Amiga or an Atari ST, and the Archimedes never saw mainstream success.
With Acorn struggling amid poor sales of the Archimedes, they were asked to find partners to use the ARM chip.
This led to the venture being spun off as a separate company - ARM Holdings.
The venture was split 3 ways - one third was controlled by Acorn, another third by
VLSI,
and the final third by Apple, who wanted to use the chip in its upcoming Newton
MessagePad PDA.
Despite the venture, sales of the ARM chips were initially poor, with the manufacturers
hesitant to
use its expensive 32-bit design. In 1993, the company was approached by Texas
Instruments,
who wanted to license the ARM instruction set and produce their own version.
ARM accepted, and this idea led the company to create its licensing business model,
where the company would produce the instruction set and core designs, but let other
companies produce the final hardware.
The business model turned out to be a smart move, as the company was able to
capitalise on the emerging mobile phone market. In 1998 the
company was floated on the stock market, and were generating a net income of £3m a
year.
Despite a dip in the dotcom crash, the company continued to grow. The advantage of
the licensing model was that companies could
modify the architecture to suit particular products, allowing a huge amount of
flexibility
for device manufacturers. The rise of the early smartphone market, led by devices
such as the BlackBerry Quark and Palm Treo, further grew ARM's market share.
The launch of the Apple iPhone in 2007, and the Android operating system in 2008,
led to an explosion in the smartphone market, which dramatically improved the
company's profit.
By 2012, the company had a 96% share of the mobile phone market.
So, in the mobile space you've got a familiar architecture with consistent instruction set
improvements and cheap licensing costs.
On the desktop, you've got an x86 duopoly where the market leader has a broken 10nm process,
and its competitor has an inconsistent record of delivering performance.
If you want to develop a new in-house processor for a smartphone, the only cost is
licensing the architecture and modifying the chip.
If you want a new processor for a laptop, you've gotta pay a hefty price for the newest
Intel chip,
which is slow and suffers from high power draw compared to the ARM designs.
It's unsurprising then, to learn that the home and server markets are slowly moving to
ARM.
Cost is not the only advantage. ARM has been a pioneer of heterogeneous
computing,
the idea that different types of processor cores should be used for different purposes.
The big.LITTLE concept, which has been widely adopted in smartphones, splits the
CPU into two or more clusters, which scale from low power to high performance.
Additionally, the flexibility of the ARM instruction set means that manufacturers
can tailor chips from device to device.
Buying an Intel CPU means you get a predetermined amount of compute performance,
power consumption,
SIMD capabilities and graphics horsepower. With ARM, all of these can be easily
modified.
Moving from x86 to ARM won't be easy. The instruction sets are not compatible with
each other, and software will need to be recompiled,
or run in software emulation, where performance will be drastically reduced. But
many manufacturers of computers also have strong mobile divisions, which will aid this transition.
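As a sketch of what that recompilation means in practice, the same portable C source below can be built for either architecture just by changing the compiler target; the clang invocations in the comment assume a cross-toolchain and an AArch64 sysroot are installed, and are illustrative rather than instructions for any particular product.

```c
/* hello.c - the same C source can target either architecture; only the
 * compiler invocation changes. Example invocations (assuming a clang
 * toolchain with an aarch64 sysroot installed; toolchain names vary):
 *
 *   clang --target=x86_64-linux-gnu  hello.c -o hello-x86_64
 *   clang --target=aarch64-linux-gnu hello.c -o hello-arm64
 *
 * Code written in assembly, or using x86-specific intrinsics such as SSE,
 * is what actually has to be rewritten or run under emulation. */
#include <stdio.h>

int main(void) {
    printf("hello from %s\n",
#if defined(__aarch64__)
           "ARM (AArch64)"
#elif defined(__x86_64__)
           "x86-64"
#else
           "another architecture"
#endif
    );
    return 0;
}
```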
Apple, LG, Microsoft, Samsung and Lenovo all have strong experience in developing
for ARM platforms,
making the transition easier. ARM powered laptops have been an emerging market for
several years,
particularly spurred on by Microsoft's port of Windows 10 to the ARM platform.
The Surface Pro X, powered by the Microsoft SQ1 SoC, is a notable example.
However, the desktop landscape has been void of ARM powered systems
since Acorn's death in the late 1990s. But even this is changing.
In June 2020, Apple announced that it would be moving its entire Mac family away from
Intel
to Apple designed ARM silicon. The A12Z, which powers the Developer Transition Kit,
is the first desktop ARM CPU
in the modern era. Furthermore, all new Apple computers will be running ARM chips by
2022.
The significance of this event, combined with the advantages of using ARM instruction
sets,
means that it is likely more and more vendors will start transitioning across.
The CISC x86 instruction set may have killed off its RISC competition 20 years prior,
but now is its time to be retired. It may seem like a drastic move, but
as microprocessor development costs increase, and
progress becomes ever more uncertain, big changes are needed. ARM may be what
the industry needs.
We've experienced 75 years of rapid technological development,
and the progress of the microprocessor has changed the world.
While companies have come and gone, the pace of innovation has been relentless.
From the dawn of post-war computing, to a home computer revolution.
A multimedia frenzy followed by an explosion of multicore computing.
But with costs spiralling and facing a brick wall of power consumption, we must ask: is
this it?
Thank you very much for watching.