History of RAM Technology & Industry

A History of RAM Technology & Industry
CNIT 176
Prof. Raymond Hansen
By Nathan Morin
Introduction
When we think of computing systems, RAM is one of the first and most important components that comes to mind. It would be hard, if not impossible, to imagine a computing world without RAM serving as main memory. However, its development and architectures are often overlooked in favor of "flashier" CPU architectures. After all, the CPU is the core. This paper seeks to explain the development and technology behind this sometimes overlooked part of computing. I will describe the theories that led to RAM's development beginning in the 1970s. Along the way, I will describe the companies that made RAM a mature technology and the steps leading to our modern RAM systems. Finally, I will take a look at what the future may hold. Throughout this journey, I hope to show how RAM technology has developed into the integral and complex part of computing we rely on every day. In addition, I hope to show how future technologies may totally change the basic RAM designs created in the 1970s and used to this day.
Early RAM Development (1970-1975)
In the early days of computing, data was stored in magnetic memory systems. These early data storage systems were slow and unreliable, but because transistor technology was so expensive, they were used as main memory to record and access data. This all began to change as transistors became cheaper and the theoretical MOSFET transistor design became a reality.
Although the MOSFET design is a given in current memory systems, early MOSFET memory designs faced competition from other memory designs. Most important among these was bipolar memory. In fact, early MOSFET systems were actually slower than bipolar memory. Although bipolar memory was originally faster, MOSFET gained traction because of its "processing simplicity and layout advantages." The basic design of early MOSFET circuits was relatively simple. They consisted of a source, a drain, and a gate that decides whether electricity flows from the source to the drain. As designers began building MOSFET chips, two main circuit designs developed: NFET and PFET. The main difference between NFET and PFET designs is the charge carrier: NFET devices conduct using electrons, while PFET devices conduct using holes (see Figure 1).

Figure 1 - NFET & PFET (Hu 199)

This difference makes NFET and PFET circuits complementary: when an NFET is on, the matching PFET is off. Each circuit type had an advantage: NFET is faster, but PFET leaks far less electricity. Early designs used PFET memory because leakage was too large an obstacle in the NFET designs. As designers developed technology to improve the NFET design, they started combining PFET and NFET devices in the same circuit. These circuits were called CMOS. These new designs used the complementary relationship between NFET and PFET devices to cut the current flowing through MOSFET circuits, saving power in the process (Hu 196-200). These designs allowed the MOSFET memory design to become a reality and eventually displace bipolar and magnetic memory.
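To make the complementary behavior concrete, here is a minimal sketch (my own illustration in Python, not something from the sources, with made-up names) of an idealized CMOS inverter. For any stable input, exactly one of the two transistors conducts, so there is no steady path from the supply to ground and essentially no static current flows.

    # A minimal, idealized model of a CMOS inverter (illustrative sketch only).
    # The NFET conducts when its gate is high; the PFET conducts when its gate is low.
    def cmos_inverter(gate_is_high: bool) -> dict:
        nfet_on = gate_is_high             # NFET pulls the output down to ground when on
        pfet_on = not gate_is_high         # PFET pulls the output up to the supply when on
        output_is_high = pfet_on           # exactly one device drives the output at a time
        static_path = nfet_on and pfet_on  # both on at once would leak current from supply to ground
        return {"output_high": output_is_high, "static_current_path": static_path}

    for gate in (False, True):
        state = cmos_inverter(gate)
        print(f"input high={gate}: output high={state['output_high']}, "
              f"static current path={state['static_current_path']}")
    # In both cases the static current path is False; CMOS circuits draw power
    # mainly while switching, not while holding a value, which is the power
    # saving described above.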
Throughout this time period, the market was dominated by a handful of key US players. Chief among these was Intel. The original Intel founders pioneered the change from magnetic memories to RAM chips (Kang J. 8). Although volatile memory is now accepted, at the time it was a major leap to move from relatively stable, cheap, and fast magnetic core memory to volatile, expensive RAM. Because of this, Intel's first RAM products were small SRAM chips: 256-bit MOS and 64-bit bipolar products. According to Gordon Moore, an Intel founder, "they [these early RAM chips] still did not approach cost competitiveness with the established magnetic cores. The bipolar one filled the need for very fast, small memories, while the MOS product was competitively priced with magnetic core memories in small memory sizes". The first product Intel produced that proved RAM, and especially MOSFET DRAM, was a viable alternative to magnetic memories was the i1103, a 1K (1024-bit) DRAM chip. This was a major leap in an industry that had serious concerns about the volatility of even SRAM. However, Intel developed a refresh method that allowed these DRAM chips to be produced cheaply. In addition, the smaller size and speed advantages finally made RAM an attractive alternative to traditional magnetic technologies (Moore 6). Intel was far from the only player in the early RAM industry, however.
Another early player in the US market was Texas Instruments. Although TI did not create the initial revolutionary innovations Intel produced, they were not far behind (Kang J. 8). TI actually had the opportunity to create a 1K chip but declined, citing "insufficient design resources" (TI 3.3). After Intel developed their 1K chip, TI started development of the next generation of DRAM chips: 2K and beyond. By 1974, both Intel and TI had developed 4K DRAM memory chips. The DRAM industry was now a race to larger capacities and faster speeds. "Each new generation of DRAM stores four times as many bits as does the previous one; new generations are introduced approximately every three years" (Moore 10). In this race, Mostek, another American company founded by former TI employees, overtook both companies (Phipps 24-25). Mostek gained the lead in the third generation of DRAM, the 16K chip. While Intel developed a chip using a three-transistor design, Mostek was able to create a cheaper chip with only one transistor per cell (Moore 71). This innovation propelled Mostek into the lead in the late 1970s.

Throughout this period, US companies dominated the development and production of RAM chips. However, this all changed in the late 1970s and into the 1980s. During this period the RAM industry matured into an integral part of the computing industry, an industry dominated by Japanese innovators who carried RAM into the PC age.
The Japanese Takeover (1975-1980)
Although the US dominated early computing development, and RAM development in particular, foreign competitors began making significant innovations in the late 1970s and 1980s that propelled them to the forefront of the RAM industry. In particular, Japanese companies focused on reducing the cost of RAM and producing the large quantities needed for massive market growth.
One of the difficulties of developing RAM, or any transistor technology, is the capital-intensive nature of the business. This requires producers to minimize production and development costs to stay competitive. Starting in the late 1970s, Japanese businesses did just that. A few factors gave Japanese businesses the advantage. First, the Japanese government encouraged collaboration on research, reducing development costs. Second, Japanese businesses had access to the capital necessary to make those innovations. Third and finally, Japanese businesses tuned their manufacturing processes to produce larger quantities of higher quality products. This focus, along with the developmental advantages, allowed Japanese firms to achieve 70% to 80% yields compared to the 50% to 60% yields US businesses achieved (Kang J. 10). These impressive innovations pushed most US companies out of the RAM manufacturing industry. As Figure 2 shows, by 1987 the vast majority of the most profitable companies were foreign.

Figure 2 - RAM Industry 1975 (qtd. in Kang J. 11)
In the US, Intel continued developing DRAM chips as the industry moved toward 64-Kbit and 256-Kbit designs. However, they were losing money trying to compete with Japanese companies' low prices. By 1985, they reached a turning point: they could continue DRAM development by constructing a new $400 million plant, or they could leave the market for more profitable products. As they looked toward the 1-Mbit DRAM chip, they decided to sell their intellectual property and look toward other designs. According to Gordon Moore, "we decided to abandon the largest semiconductor product type, one that we had created and that had been critical to our early success, and focus our efforts on other areas where there was a greater chance to succeed" (Moore 74). Instead, they looked toward SRAM and EPROM (Erasable Programmable ROM) as less saturated markets where they could find profitable products. EPROM was an especially profitable area for Intel. Before their shift away from DRAM, they had marketed EPROM as a niche product only a few designers would use. However, they realized that the ability to reprogram ROM had implications far beyond design mockups and started marketing it to a larger audience in the late 1980s (Moore 73-74).
What of TI, the other large US innovator in the early development of RAM? TI took a more aggressive strategy in response to the increased competition in the RAM market. They continued production through the 1980s, testing one of the first 4-Mbit DRAM chips in 1988. However, the costs of development proved too large to continue developing alone. In 1988, when developing the 16-Mbit DRAM chip, they partnered with Hitachi to cover development costs. According to TI, this made "it possible for both companies to produce new products faster and at a lower cost" (TI 3.4). This allowed them to continue production into the early 1990s. However, the early days of US domination were over. As the computer industry moved into the PC age, the industry's leaders would be global.
The PC Age (1980-2000)
As the computer industry moved from the mainframes and industrial computing that defined early computing to the PC age, when every business and many individuals had a computer, the need for RAM expanded greatly. This spurred a new generation of RAM technologies and standards. These technologies would allow PCs to hold much more information and use graphics in ways early mainframe designers could only dream of. They also allowed these larger chips to fit inside newly created laptops while saving valuable power.
Going back to the early 1980s, when the industry standard for DRAM was 64 Kbit, three key development areas improved the reliability and cohesiveness of DRAM chips: array noise reduction, voltage normalization, and CMOS designs. The first of these, array noise reduction, was accomplished with a combination of two techniques: folded bit lines (Folded-BL) and half-VDD pre-charging. Folded-BL, first developed in the 1970s, differs from the then-standard open bit lines (Open-BL) in that folded bit lines run in parallel and close together, while open bit lines are routed more independently. The integrated nature of Folded-BL allows the sense amplifier to cancel noise across the bit lines. Half-VDD pre-charging, in which the bit lines are charged to half the supply voltage before sensing, was implemented to reduce the bit-line voltage swing and therefore the array noise on the bit lines. The second development area, voltage normalization, was important both to reduce the complexity of supporting devices and to allow for miniaturization of designs. Before the 1980s, many different voltage inputs were used to power DRAM chips. At this point, all those different inputs were standardized into a single 5-V supply. Inside the DRAM chip, developers also created Voltage Down Converters (VDC). These allowed the internal voltage to decrease as the circuits became smaller while the external supply voltage stayed constant, letting developers miniaturize DRAM circuits without worrying about high input voltages. The third and final major advancement was the actual implementation of the CMOS design mentioned earlier. The first CMOS designs were implemented in the 1980s with the 1-Mbit DRAM (Kiyoo 28-29). These advances in chip technology were complemented by new RAM I/O standards.
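As a rough illustration of why pre-charging to half the supply helps, the following sketch (my own example; the capacitance values are hypothetical, order-of-magnitude figures rather than numbers from the paper) computes the charge-sharing signal a sense amplifier sees when a cell is read onto a bit line pre-charged to VDD/2. A stored 1 and a stored 0 produce equal and opposite signals, so the amplifier only has to resolve a small, symmetric difference.

    # Charge sharing between a DRAM cell capacitor and its bit line (illustrative values).
    C_CELL = 30e-15      # cell capacitance in farads (hypothetical ~30 fF)
    C_BITLINE = 300e-15  # bit-line capacitance in farads (hypothetical ~300 fF)
    VDD = 5.0            # the single 5-V supply standardized in the early 1980s
    V_PRECHARGE = VDD / 2

    def sense_signal(cell_voltage: float) -> float:
        """Voltage change on the bit line after the cell shares its charge."""
        v_final = (C_BITLINE * V_PRECHARGE + C_CELL * cell_voltage) / (C_BITLINE + C_CELL)
        return v_final - V_PRECHARGE

    print(f"reading a stored 1: {sense_signal(VDD) * 1000:+.0f} mV")  # about +227 mV
    print(f"reading a stored 0: {sense_signal(0.0) * 1000:+.0f} mV")  # about -227 mV
    # The two signals are symmetric around VDD/2, and the bit line never has to
    # swing the full 0-to-VDD range, which is where the noise and power savings
    # come from.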
The first of these new I/O standards was SDRAM, developed in the mid-1990s. The basic idea of SDRAM is very simple: perform memory reads/writes synchronized to the rising edge of the clock pulse, allowing for more efficient data transfer (Ikeda 686). According to Steve Cullen, a memory analyst for the market research firm Cahners, "there was a fast transition to synchronous RAM" (Paulson 17). However, as developers continued increasing clock speeds to increase data flow rates, they found RAM clock speeds would not be able to keep pace with the significant improvements processors had been making. This led to the development of DDR SDRAM. DDR is also, on the surface, a fairly simple manipulation of the clock cycle: instead of using just one clock edge for reads/writes, it uses two. A request is sent on the rising edge, and data can be transferred on either edge. This improvement alone doubled data bandwidth, helping DRAM keep up with processor requests. It also helped reduce the need for caching to bridge the gap between processor and memory speeds (Cosoroaba 388). As Figure 3 shows, DDR SDRAM quadrupled the DRAM data access rate in just a few years.

Figure 3 - RAM Technology Development (Cosoroaba 391)
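To see why transferring on both clock edges doubles bandwidth, the short calculation below (a sketch with an assumed 100 MHz memory clock and a 64-bit module data path, chosen only for illustration) compares peak transfer rates for single-data-rate and double-data-rate operation.

    # Peak bandwidth = clock rate * transfers per clock * bus width (illustrative numbers).
    CLOCK_HZ = 100e6       # assumed 100 MHz memory clock
    BUS_WIDTH_BYTES = 8    # assumed 64-bit (8-byte) module data path

    def peak_bandwidth_mb_per_s(transfers_per_clock: int) -> float:
        return CLOCK_HZ * transfers_per_clock * BUS_WIDTH_BYTES / 1e6

    print(f"SDR SDRAM (1 transfer/clock): {peak_bandwidth_mb_per_s(1):.0f} MB/s")   # 800 MB/s
    print(f"DDR SDRAM (2 transfers/clock): {peak_bandwidth_mb_per_s(2):.0f} MB/s")  # 1600 MB/s
    # Same clock, same pins: using both clock edges doubles the peak data rate,
    # which is the improvement the DDR standard introduced.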
During this period, improvements were also made to the physical interfaces RAM chips used to interact with the system bus. In 1988, Wang Laboratories patented the SIMM (Single In-line Memory Module) interface standard. This original design placed 9 DRAM chips on a single board, allowing the SIMM to interface as a whole with the system bus. As the original patent image (Figure 4) shows, this early design used a 30-pin architecture (Wang Labs 2-6).

Figure 4 - Original Patent Image for SIMM (Clayton)

As DRAM capacity and processor speeds increased, simply increasing the number of pins on a SIMM module was not enough. In response, the DIMM (Dual In-line Memory Module) was created. This used separate pins on the front and back of the memory module, effectively allowing for double the data transfer rate. In addition, it allowed DRAM chips to be placed on both sides of the module, greatly increasing data density. This DIMM construction is the current standard for DRAM modules. JEDEC, the standards body for DIMM designs, provides this graphic (Figure 5) for a current DDR3 memory module. It clearly shows how the SIMM module design has been essentially mirrored across the module to double both data storage and access bandwidth.

Figure 5 - DDR3 DIMM (JEDEC DDR3 25)
All these improvements were developed in conjunction with one another. Simplification and miniaturization of RAM chips allowed for the creation of effective DIMM modules. Improved reliability allowed developers to create DDR modules that doubled data throughput. Together, these advances allowed PC makers to build compact, powerful computers that revolutionized the computing industry.
The Modern Age (2000-2014)
At the start of the 21st century, Windows XP was new, Google was just a small company, and the Internet was something many people still did not use. This new period did not bring revolutionary new RAM technologies or standards. Instead, companies and organizations refined and expanded the previous RAM technologies. Although there were no large shifts, there were some interesting developments in embedded systems that pushed RAM to interesting new places.
In keeping with the theme of continued improvement, Samsung, now the dominant RAM manufacturer in the world, created the first 512-Mbit DRAM chip in 2000. Samsung continued making significant improvements. According to Samsung, they "developed the worlds' first 50nm 1G DRAM" in 2006 and "the world's first 2Gb 50 NANO" in 2008 (Samsung). These impressive increases, quadrupling DRAM capacity in under a decade, along with the company's well-known business acumen, placed Samsung in the top spot in the DRAM market. According to iSuppli, Samsung held a 34% share of DRAM sales in 2009 (qtd. in Kang J. 13).
In parallel with this big-business investment in ever-larger DRAM modules, embedded DRAM (eDRAM) was developed to overcome some of the problems associated with commodity DRAM. First, commodity DRAM is relatively expensive. Although creating non-standard eDRAM systems may seem more expensive, eDRAM systems remove the significant bussing and interfacing costs that standardized systems require. In addition, commodity DRAM must be added to a system in relatively large blocks; eDRAM reduces both of these costs. Second, DRAM is power hungry. All those busses require larger input voltages than the core DRAM architecture: 3.3 V vs. 1.8 V. By removing these interfaces, the system can operate entirely on the lower voltage. Third and finally, DRAM is slow. eDRAM improves speed by allowing for variable bus size; larger busses mean greater bandwidth and transfer rates, and simplified eDRAM architectures mean less time to access data. Finally, eDRAM can be customized for specific applications (Keitel-Schulz 10). Despite these improvements in cost, power use, and speed, eDRAM has its disadvantages: higher development costs and less flexibility. So why would a designer choose eDRAM over commodity DRAM? eDRAM is important for less traditional computing systems: networking, embedded systems, hard disks, and mobile technologies. For the vast majority of traditional computing applications, the development costs and low flexibility make eDRAM impractical. Basically, if the savings from high-volume production justify the higher development costs, eDRAM is the right choice. If power is limited or performance is imperative, eDRAM may also be a good choice.
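The volume argument above is essentially a break-even calculation. The sketch below uses entirely hypothetical cost figures (neither the paper nor its sources give any) to show the shape of the decision: eDRAM carries a higher one-time development cost but a lower per-unit cost once the external buses and interfaces are gone, so past some volume it becomes the cheaper option.

    # Break-even volume between commodity DRAM and eDRAM (all figures hypothetical).
    COMMODITY_DEV_COST = 1_000_000   # one-time design/integration cost, commodity DRAM
    COMMODITY_UNIT_COST = 12.00      # per-system memory plus bus/interface cost
    EDRAM_DEV_COST = 5_000_000       # one-time design cost for a custom eDRAM macro
    EDRAM_UNIT_COST = 7.00           # per-system cost once interfaces are integrated

    def total_cost(dev_cost: float, unit_cost: float, volume: int) -> float:
        return dev_cost + unit_cost * volume

    # eDRAM wins once its extra development cost is repaid by the per-unit savings.
    break_even = (EDRAM_DEV_COST - COMMODITY_DEV_COST) / (COMMODITY_UNIT_COST - EDRAM_UNIT_COST)
    print(f"break-even volume: {break_even:,.0f} units")  # 800,000 units with these numbers

    for volume in (100_000, 1_000_000, 5_000_000):
        edram = total_cost(EDRAM_DEV_COST, EDRAM_UNIT_COST, volume)
        commodity = total_cost(COMMODITY_DEV_COST, COMMODITY_UNIT_COST, volume)
        cheaper = "eDRAM" if edram < commodity else "commodity DRAM"
        print(f"at {volume:>9,} units, {cheaper} is cheaper")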
So far, the first decades of the 21st century have produced three new DDR standards. DDR2 doubled the access speed of DRAM chips, just as DDR did for the chips that came before it. This was accomplished through a number of changes. JEDEC, the standards organization in charge of DDR, envisioned a single DDR2 chip becoming as large as 4 Gb. Its official standard defined a "minimum set of requirements for JEDEC-compliant 256 Mb through 4 Gb for x4, x8, and x16 DDR2 SDRAM devices" (JEDEC DDR2 1). This long, detailed requirements document boils down to a few basic changes. Most significantly, DDR2 improves how the DRAM chip communicates over the bus. In the DDR standard, there is a delay between asserting the RAS (Row Address Strobe) and the CAS (Column Address Strobe). This caused the bus accessing the RAM module to operate at less than half capacity (Kirihata). The DDR2 standard increased the internal clock speed and accessed the CAS directly after the RAS (Fujisawa 38). This improvement approximately doubled the access speeds of DDR2 over the original DDR. In conjunction with these speed improvements, DDR2 introduced lower voltage and more accurate signaling.
Even as DDR2 was entering production, JEDEC and its partners started development of the next generation of DDR, aiming to yet again double the access speed of DRAM modules. In 2004, Samsung produced the first DDR3 prototype. With access rates of roughly 1600 Mbps, Samsung met JEDEC's requirements for the DDR3 standard (Chang 8). Similar to DDR2, DDR3 decreased the input power over the previous generation, dropping the supply from 1.8 V (±0.1 V) to 1.5 V (±0.075 V). In addition, the internal data prefetch was increased to x8 (Chang 4). Samsung's published comparison, produced in partnership with JEDEC, shows the across-the-board improvements in DDR3 (Figure 6).

Figure 6 - DDR Comparison (Chang 4)
DDR3, now the current industry standard, will in turn be replaced by the next generation of DDR. As was perhaps inevitable, DDR4 breaks the tradition of doubling the access rate each generation. Although it does not double access rates, it increases DRAM speeds by 50% (to roughly 2400 Mbps). Like previous generations, it decreases power usage, by about 30%, running at roughly 1.2 V. This new generation should enter production soon, with Samsung and other manufacturers demonstrating their designs and Intel readying its processors for the new memory (Shah par. 1). However, DDR4 may be the last generation of DDR DRAM. Although speeds continue to improve, DDR DRAM is still a volatile memory type. Other non-volatile memories such as phase-change memory, RRAM (Resistive RAM), and MRAM (Magnetoresistive RAM) may become the standard primary memory type in future computing systems (Shah par. 4). Although the first decade of the 21st century may not have brought radical change to the RAM industry, it set the stage for the memory used in every server, laptop, and iDevice.
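The generation-to-generation figures cited above can be sanity-checked with a little arithmetic. The sketch below uses only the numbers given in this section (DDR3 at roughly 1600 Mbps and 1.5 V, DDR4 at roughly 2400 Mbps and 1.2 V) plus the common rule of thumb that dynamic power scales roughly with the square of the supply voltage; the estimate lands near the ~30% power saving reported by Shah.

    # Relative speed and power between DDR3 and DDR4, using the figures cited in the text.
    DDR3_RATE_MBPS, DDR3_VOLTS = 1600, 1.5
    DDR4_RATE_MBPS, DDR4_VOLTS = 2400, 1.2

    speedup = DDR4_RATE_MBPS / DDR3_RATE_MBPS - 1        # 0.5 -> the quoted 50% speed increase
    power_ratio = (DDR4_VOLTS / DDR3_VOLTS) ** 2         # rough rule of thumb: power ~ V^2
    print(f"transfer-rate increase: {speedup:.0%}")                 # 50%
    print(f"estimated dynamic power vs. DDR3: {power_ratio:.0%}")   # 64%, i.e. roughly a third less
    # The estimated ~36% reduction is in the same ballpark as the ~30% figure
    # reported for DDR4, given that voltage is only one factor in total power.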
The Future (2014-)
RRAM is one of the promising technologies that may replace the current DRAM standard. There are three main reasons to move toward a technology such as RRAM: stability, efficiency, and CMOS compatibility (Lee B. 3270). Stability is perhaps the greatest advantage of these new RAM technologies. RRAM is a non-volatile memory technology: it retains data even without power. This could potentially allow a computer to be powered off and resumed from the exact same spot, instantly. Efficiency, both in speed and power, is another significant reason to move toward a new RAM technology. Tom Coughlin, founder of Coughlin Associates, said "RRAM will eventually deliver 20 times faster write performance, 20 times less power consumption and 10 times more durability than NAND flash memory" (qtd. in Williams par. 5). Finally, compatibility with CMOS allows RRAM to work seamlessly with existing systems.
So what is the technology behind RRAM and other potential DRAM replacements? RRAM is a classification covering many different technologies that implement the same basic idea: use the switchable resistance of a material sandwiched between two electrodes to store data. This overall category is divided into many different types by material, operating mechanism, and polarity. RRAM is not the only promising non-volatile RAM technology. MRAM, another non-volatile RAM technology, uses magnetism to store data. MRAM technologies, used in combination with spin-transfer torque (STT) magnetic technology, have the potential to greatly increase speed and capacity (Shah par. 4). According to Wang Kang of the University of Paris-Sud, these spintronics technologies have the capacity to continue Moore's law past what CMOS designs can achieve (Kang W. 1). Although both RRAM and MRAM are more reliable, faster, and more power efficient, they have one significant disadvantage: they are currently produced only at small scales and at high cost. The challenge is to design manufacturing practices for these new technologies that can break Rock's law and reduce production costs.
Even as these technologies promise improvements over the next few decades, other technologies may totally revolutionize how we design computing systems. One such technology is quantum dot memory. Basically, as transistor designs continue to decrease in size, quantum mechanical rules start to apply and conventional designs cannot simply be made smaller. Quantum dot memory aims to solve this size problem by using self-organizing quantum dot semiconductors, allowing billions of these devices to be placed on a circuit without using complicated printing methods (Nowozin 183-185). In another class of memory system, quantum states themselves are used to store and manipulate data. Just last year, Oxford University developed a system that held a quantum state for almost 40 minutes at room temperature, smashing the previous record of under a minute. This memory not only has the potential to hold vast amounts of data, but because of quantum laws, any single bit can be both 1 and 0, allowing these systems to perform multiple calculations simultaneously (Morgan par. 1). Stephanie Simmons of the Oxford University Department of Materials said, "[These] qubits [dual value bits] could prove very helpful for anyone trying to build a quantum computer" (qtd. in Morgan par. 2). As we continue to develop the DRAM technologies first created in the 1970s, these new technologies may one day create entirely new computing architectures, just as revolutionary as those first MOSFET designs.
Conclusion
RAM technology has come a long way from those early MOSFET designs created in the 1960s and 1970s. Those early designs allowed for a revolution in computing, letting designers create computing systems that managed data quickly and reliably. This would have been impossible with the alternative tape and disk memory systems. The designers who made RAM a cornerstone of modern computing systems built upon those early designs by increasing RAM size and speed. As RAM systems became larger, new standards for data transfer and architecture such as DIMM and DDR allowed DRAM, the most common RAM type, to enter new fields and systems.
All along this development path, designers and developers in the US and abroad drove the race toward greater reliability, speed, and efficiency. In the early days, US designers and manufacturers dominated the industry. However, as the industry matured, new competitors from abroad, and especially from Japan, made technical and manufacturing innovations that propelled them to the forefront of the RAM industry. Samsung, the current industry leader, now dominates the production and development of RAM chips.

Throughout all these changes in technology and business, the basic concepts of RAM have stayed the same. However, this may change. New technologies are pushing the boundary of how small we can design circuits and how we fundamentally interact with computing systems. These changes may usher in a new era as revolutionary to the computing industry as those original RAM designs.
References
Chang, Jaci. Design Considerations for the DDR3 Memory Sub-system. JEDEX San Jose 2004, San Jose: Samsung Electronics, 2004. Print.
Clayton, James. "Single In-Line Memory Module." United States Patent 4727513. Feb. 23,
1988. Print.
Cosoroaba, A.B. "Double Data Rate Synchronous DRAMs in High Performance Applications." 1997. 387-391. Print.
Fujisawa, Hiroki, et al. "1.8-V 500-Mb/s/pin DDR2 and 2.5-V 400-Mb/s/pin DDR1 Compatibly Designed 1Gb SDRAM with Dual Clock Input Latch Scheme and Hybrid Multi-Oxide Output Buffer." 2004 Symposium on VLSI Circuits Digest of Technical Papers (2004). Print.
Hu, Chenming. Modern Semiconductor Devices for Integrated Circuits. University of
California, Berkeley, 2010. Print.
Ikeda, H. and H. Inukai. "High-speed DRAM architecture development." Solid-State
Circuits, IEEE Journal of, 34. 5 (1999): 685-692. Print.
JEDEC. DDR3 Unbuffered SO-DIMM Reference Design Specification. 2013. 4.20.18. Print.
JEDEC. DDR2 SDRAM Specification. 2009. 79.2-F. Print.
Kang, Wang, Weisheng Zhao, Zhaohao Wang, Yue Zhang, J Klein, Claude Chappert and
D Ravelosona. "DFSTT-MRAM: Dual Functional STT-MRAM Cell Structure for
Reliability Enhancement and 3D MLC Functionality." IEEE, Print.
Kang, Joonkyu. "A study of the DRAM industry." Diss. MIT, 2010. Print.
Keitel-Schulz, D. and N. Wehn. "Embedded DRAM development: Technology, physical
design, and application issues." Design Test of Computers, IEEE, 18. 3 (2001): 7-15.
Print.
Kirihata, Toshiaki, et al. A 113mm2 600Mb/s/pin 512Mb DDR2 SDRAM with
Vertically-Folded Bitline Architecture. ISSCC 2001 / SESSION 24 / DRAM, ISSCC
2001, 2001. 24.3. Print.
Kiyoo Itoh. "The History of DRAM Circuit Designs - At the Forefront of DRAM Development." Solid-State Circuits Society Newsletter, IEEE, 13.1 (2008): 27-31. Print.
Lee, Byoungil and H-SP Wong. "Fabrication and characterization of nanoscale NiO resistance change memory (RRAM) cells with confined conduction paths." Electron Devices, IEEE Transactions on, 58.10 (2011): 3270-3275. Print.
Moore, Gordon E. "Intel - memories and the microprocessor." Daedalus 125.2 (1996): 55+.
Biography in Context. Web. 22 Feb. 2014.
Morgan, James. "Quantum 'world record' smashed." BBC News, 2014. Web. 12 Apr 2014.
<http://www.bbc.com/news/science-environment-24934786>.
Nowozin, Tobias, Martin Geller and Dieter Bimberg. "Quantum Dot-Based Flash Memories." CRC Press, (2013): 183-200. Web. <http://dx.doi.org/10.1201/b16236-11>.
Paulson, Linda Dailey. "Faster RAM tackles data and marketplace bottlenecks."
Computer, 35. 4 (2002): 17-19. Print.
Phipps, Charles. "Oral History of Charles Phipps." Interview by Rosemary Remacle. 28 May 2009. Web. 22 Mar. 2014.
Samsung Corp. "SAMSUNG." Samsung, 2014. Web. 23 Feb 2014.
<http://www.samsung.com/us/aboutsamsung/samsung_group/history/>.
Shah, Agam. "Intel set to bring next-gen DDR4 DRAM memory to computers later this year | PCWorld." PCWorld, 2014. Web. 4 Mar 2014. <http://www.pcworld.com/article/2085880/intel-set-to-bring-ddr4-dram-to-computers-in-third-quarter.html>.
TI Corp. "Texas Instruments History of Development." Ti.com, 2014. Web. 23 Feb 2014.
<http://www.ti.com/corp/docs/company/history/timeline/popup.htm>.
Wang Laboratories, Inc. v. Mitsubishi Electronics America, Inc. and Mitsubishi Electric Corporation. United States Court of Appeals for the Federal Circuit. 2007. Print.
Williams, Martyn. "New types of RAM could revolutionize your PC." PCWorld, 2014.
Web. 2 Mar 2014. <http://www.pcworld.com/article/2084240/new-types-of-ram-could-revolutionize-your-pc.html>.