
21 November 2011
Nanocomputing – Trends, Directions and Applications
Author
Dr. T V Gopal
Professor
Department of Computer Science and Engineering
College of Engineering, Anna University
Chennai – 600 025, India
&
Chairman, CSI Division II [Software] and
Advisor – CSI Communications [CSIC]
e-mail : gopal@annauniv.edu
Preamble:
This article is based on the content prepared for:
1. A Lecture Series “Next Generation Information Technology – A Quantum Leap”
for M/s Tata Consultancy Services Limited during 2005.
2. Invited Lecture for C-FACT’10, Loyola College, Chennai during 2010
3. Invited Lecture for BIT's 1st Annual World Congress of Nano-S&T, China during
2011 [Not Presented]
Abstract
The quest for smaller and faster computers has been on ever since the first computer was
made. The journey from SSI to VLSI and the consequent reduction in the physical size of
the computer has been well documented. The making of faster computers is also well
documented. Today we have very fast personal computers on the desktops and
supercomputers for scientific computing and other complex applications.
Demand for PCs and leading-edge performance was so strong for so long that we came to
believe that Moore’s law created the industry’s success. But Moore’s law is just an
aggressive supply curve. We forgot that demand is a force of its own. For decades we had
solutions based on generic micro-processor designs and brute-force miniaturization of
transistors to improve the performance. Recently, ASICs and re-configurable
microprocessors are finding their way into the market. However, they have not shaken off
the basic computing model.
Nanocomputing is a new paradigm that is promising to provide the speed and power of
supercomputers at very small physical size. This paper explores the new paradigm of
computing and its applications. The paper discusses five major trends in Nanocomputing.
1. Introduction
Each new decade sees a new wave of innovative technology. Digital computers were
invented in the mid-1940s. The inventions of the transistor and the integrated circuit
resulted in their explosive growth. Experts in the 1950s thought that the world's
computing needs could be supplied by half a dozen computers. The demand for
Computers grew rapidly and the focus began to shift towards High Performance and
High Return on Investment.
Digital computers represent the world around them using ones and zeros. To deal with
even simple data, such as the colors on a computer screen, a digital computer must use
many ones and zeros. To manipulate this data, a digital computer uses a program, a
sequence of tiny steps. Animating a picture may require a digital computer to execute
millions of steps every second.
But digital computers are limited by the "clock rate," the number of steps that can be
executed in a second. Here physics takes its toll: the clock rate cannot be made
arbitrarily fast, for eventually the ones and zeros would have to travel through the
computer faster than the speed of light. Electronics also limit digital computers. As
digital chips are built with tens of millions of transistors, they consume increasing
amounts of power, and the odds increase that a chip will be defective.
Supercomputers are Computers designed solely for high performance. They are
expensive and use cutting edge technology. The target applications are complex and
highly parallel. With the availability of high performance multiprocessors, the distinction
between supercomputers and parallel computers is blurred. The applications of
supercomputers range from high quality graphics, complex simulations, high speed
transaction processing systems to virtual reality. Supercomputers are made using multiple
high speed processors. The architectures include pipelined vector processors, array
processors and systolic array processors.
The traditional supercomputing model was one where users on workstations submitted
their compute-intensive jobs to a large, expensive, common, shared compute resource in
a batch environment. While the actual time it took for a task to complete may have been
very small, the turn-around time for a task to be submitted, progress to the head of the
batch queue, and finally get executed could be very large. The economies of these large,
shared resources often meant that there were a large number of users, which in turn meant
very long turn-around times. In fact, the utilization of the common, shared compute
resource is inefficient since even compilation must be done frequently for lack of an
object-code compatible environment on the user's desktop. Expensive vector hardware
brings no value to compiling, editing, and debugging programs.
Computer system vendors build machines in several different ways. There are good
things and bad things about each type of architecture. The characteristics of typical
supercomputing architectures include:
• Vector & SMP architecture: the shared-memory programming model, which is the most intuitive way to program systems and the most portable way to write code.
• Message Passing: the traditional massively parallel architectures offer scalability for problems that are appropriate for this loosely coupled architecture.
• Workstation Clusters: the economies of computing. Workstation clusters have the tremendous appeal of functioning as individual workstations by day and as a compute cluster at night.
Supercomputing thus stems from innovative machine architectures. One of the major
problems with parallel computing is that we have too many components to consider and
that all these components must work in unison and efficiently if we are to get the major
performance potential out of a parallel machine. This presents a level of complexity that
makes this field frustrating and difficult.
Figure 1: The Fixed Costs (as diagonal lines), Norman Christ, Columbia University, USA
The 1990s saw digital processors become so inexpensive that a car could use dozens of
computers to control it. Today just one Porsche contains ninety-two processors.
According to Paul Saffo, director of the Institute for the Future, changes just as dramatic
will be caused by massive arrays of sensors that will allow computers to control our
environment. In Saffo's vision of the future, we will have smart rooms that clean
themselves with a carpet of billions of tiny computer-controlled hairs. Airplanes will
have wings with micromachine actuators that will automatically adjust to handle
turbulence. These smart machines will require computers that can process vast amounts
of sensor data rapidly, far faster than today's digital computers.
High Performance Computing had several bottlenecks, listed below.
1. Host Computer Bottlenecks: CPU utilization, Memory limitations, I/O Bus
Speed, Disk Access Speed
2. Network Infrastructure Bottlenecks: Links too small, Links congested, Scenic
routing with specific path lengths, Broken equipment, Administrative restrictions
3. Application Behavior Bottlenecks: Chatty protocol, High reliability protocol,
No run-time tuning options, Blaster protocol which ignores congestion.
The innovations in High Performance Computing facilities ushered in the Petaflops Era.
1.1 Analog Computing – Everything Old is Becoming New
Analog computing systems, which include slide rules, have been used for more than a
century and a half to solve mathematical equations. In the 1950s and 1960s electronic
analog computers were used to design mechanical systems from bridges to turbine
blades, and to model the behavior of airplane wings and rivers. The analog equivalent of
the digital computer is a refrigerator-sized box that contains hundreds of special
electronic circuits called operational amplifiers. On the front of the box is a plugboard,
similar to an old-fashioned telephone switchboard that is used to configure the analog
computer to solve different problems. The analog computer is not programmed, like a
digital computer, but is rewired each time a new problem is to be solved.
An analog computer can be used to solve various types of problems. It solves them in an
“analogous” way. Two problems or systems are considered analogous if certain or all
of their respective measurable quantities obey the same mathematical equations.
Digital computing attempts to model the system as closely as possible by abstracting
the seemingly desired features into a different space.
Most general purpose analog computers use an active electrical circuit as the analogous
system because it has no moving parts, a high speed of operation, good accuracy and a
high degree of versatility.
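
To make the notion of "analogous" systems concrete, the short Python sketch below (with arbitrary, hypothetical parameter values) integrates the same second-order equation twice: once read as a mass-spring-damper system, and once read as a series RLC circuit whose inductance, resistance and inverse capacitance have been matched to the mass, damping and stiffness. Because both systems obey the same equation, the two trajectories coincide, which is exactly what an analog computer exploits.

    # A minimal sketch (plain Python, hypothetical parameter values) of "computing
    # by analogy": a mass-spring-damper system and a series RLC circuit obey the
    # same second-order equation  a*y'' + b*y' + c*y = 0, so either system can
    # stand in for the other.

    def simulate(a, b, c, y0, v0, dt=1e-4, steps=50000):
        """Integrate a*y'' + b*y' + c*y = 0 with simple explicit Euler steps."""
        y, v = y0, v0
        samples = []
        for i in range(steps):
            acc = -(b * v + c * y) / a
            v += acc * dt
            y += v * dt
            if i % 10000 == 0:
                samples.append(round(y, 4))
        return samples

    # Mechanical reading: mass m = 2.0, damping c_d = 0.5, spring constant k = 8.0.
    mechanical = simulate(a=2.0, b=0.5, c=8.0, y0=1.0, v0=0.0)

    # Electrical reading: inductance L = m, resistance R = c_d, 1/C = k.
    # Same coefficients, hence the same equation and the same behaviour.
    electrical = simulate(a=2.0, b=0.5, c=8.0, y0=1.0, v0=0.0)

    print(mechanical)                  # displacement samples
    print(electrical == mechanical)    # True: the circuit "solves" the mechanics
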
Digital computers replaced general purpose analog computers by the early 1970s because
analog computers had limited precision and were difficult to reconfigure. Still, for the
right kind of problem, such as processing information from thousands of sensors as fast
as it is received, analog computers are an attractive alternative to digital computers.
Because they simulate a problem directly and solve it in one step, they are faster than
digital computers. Analog computer chips use less power and can be made much larger
than digital computer chips. Analog computation models the system behavior and relates the various parameters directly, unlike the discreteness [which enforces a local view] inherent in digital computing.
The author strongly opines that going back to Analog Computing may help us understand
better and resolve the challenges posed by rapidly increasing complexity of the
Computing Systems being developed. As a matter of fact, Artificial Neural Networks are
being dubbed as "Super-Turing" computing models.
1.2 Artificial Intelligence – A Paradigm Shift
Artificial Intelligence has been a significant Paradigm Shift which brought in brilliant
researchers working on:
• Building machines that are capable of symbolic processing, recognition, learning, and other forms of inference
• Solving problems that must use heuristic search instead of an analytic approach
• Using inexact, missing, or poorly defined information, and finding representational formalisms to compensate for this
• Reasoning about the significant qualitative features of a situation
• Working with syntax and semantics
• Finding answers that are neither exact nor optimal but in some sense "sufficient"
• The use of large amounts of domain-specific knowledge
• The use of meta-level knowledge (knowledge about knowledge) to effect more sophisticated control of problem solving strategies
Table 1 below describes the components of Intelligence, which eventually proved to be very restrictive. Artificial Neural Networks and Genetic Algorithms are providing some innovative implementation methodologies but remain far from the original goals of Artificial Intelligence.
The internal world: cognition
1. processes for deciding what to do and for deciding how well it was done
2. processes for doing what one has decided to do
3. processes for learning how to do

The external world: perception and action
1. adaptation to existing environments
2. the shaping of existing environments into new ones
3. the selection of new environments when old ones prove unsatisfactory

The integration of the internal and external worlds through experience
1. the ability to cope with new situations
2. processes for setting up goals and for planning
3. the shaping of cognitive processes by external experience

Table 1: Components of Intelligence
Nanocomputing is a totally new paradigm to enhance the computing speeds at tiny sizes.
There are five perceptible trends in Nanocomputing. They are:
1. Quantum Computing
2. Molecular Computing
3. Biological Computing
4. Optical Computing
5. Nanotechnology Approach
The following sections briefly explain these trends and the applications of
Nanocomputing.
2. Quantum Computing
Traditional computer science is based on Boolean logic and algorithms. The basic
variable is a bit with two possible values 0 or 1. The values of the bit are represented
using the two saturated states off or on. Quantum mechanics offers a new set of rules that
go beyond the classical computing. The basic variable now is a Qubit. A Qubit is
represented as a normalized vector in two dimensional Hilbert space. The logic that can
be implemented with Qubits is quite distinct from Boolean logic, and this is what has
made quantum computing exciting by opening up new possibilities.
The Quantum computer can work with a two-mode logic gate: XOR and a mode we'll
call QO1 (the ability to change 0 into a superposition of 0 and 1, a logic gate which
cannot exist in classical computing). In a quantum computer, a number of elementary particles such as electrons or photons can be used (in practice, success has also been achieved with ions), with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit; the nature and
behavior of these particles form the basis of quantum computing. One way to think of
how a qubit can exist in multiple states is to imagine it as having two or more aspects or
dimensions, each of which can be high (logic 1) or low (logic 0). Thus if a qubit has two
aspects, it can have four simultaneous, independent states (00, 01, 10, and 11); if it has
three aspects, there are eight possible states, binary 000 through 111, and so on.
The two most relevant aspects of quantum physics are the principles of superposition and
entanglement.
2.1 Superposition
A qubit is like an electron in a magnetic field. The electron's spin may be either in
alignment with the field, which is known as a spin-up state, or opposite to the field,
which is known as a spin-down state. Changing the electron's spin from one state to
another is achieved by using a pulse of energy, such as from a laser - let's say that we use
1 unit of laser energy. But what if we only use half a unit of laser energy and completely
isolate the particle from all external influences? According to quantum law, the particle
then enters a superposition of states, in which it behaves as if it were in both states
simultaneously. Each qubit utilized could take a superposition of both 0 and 1. Thus, the
number of computations that a quantum computer could undertake is 2^n, where n is the
number of qubits used. But how will these particles interact with each other? They would
do so via quantum entanglement.
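
The 2^n bookkeeping can be made concrete with a few lines of NumPy. The sketch below is not a simulation of any physical device, just linear algebra on the state vector: starting from n qubits in the |0> state and applying a Hadamard gate to each one produces a state with 2^n equal amplitudes, i.e., a superposition of every n-bit value at once.

    import numpy as np

    # A sketch of the state-vector bookkeeping (not a physical simulation).
    # The Hadamard gate H sends |0> to (|0> + |1>)/sqrt(2).
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    n = 3                              # number of qubits
    state = np.zeros(2 ** n)
    state[0] = 1.0                     # start in |000>

    # Applying H to every qubit is the n-fold Kronecker (tensor) product.
    op = np.array([[1.0]])
    for _ in range(n):
        op = np.kron(op, H)
    state = op @ state

    print(state)                                        # 2**n equal amplitudes
    print(np.allclose(state, 1 / np.sqrt(2 ** n)))      # True
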
2.2 Entanglement
Particles (such as photons, electrons, or qubits) that have interacted at some point retain a
type of connection and can be entangled with each other in pairs, in a process known as
correlation. Knowing the spin state of one entangled particle - up or down - allows one to
know that the spin of its mate is in the opposite direction. Even more amazing is the
knowledge that, due to the phenomenon of superposition, the measured particle has no
single spin direction before being measured, but is simultaneously in both a spin-up and
spin-down state. The spin state of the particle being measured is decided at the time of
measurement and communicated to the correlated particle, which simultaneously assumes
the opposite spin direction to that of the measured particle. This is a real phenomenon
(Einstein called it "spooky action at a distance"), the mechanism of which cannot, as yet,
be explained by any theory - it simply must be taken as given. Quantum entanglement
allows qubits that are separated by incredible distances to interact with each other
instantaneously (not limited to the speed of light). No matter how great the distance
between the correlated particles, they will remain entangled as long as they are isolated.
Taken together, quantum superposition and entanglement create an enormously enhanced
computing power. Where a 2-bit register in an ordinary computer can store only one of
four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a
quantum computer can store all four numbers simultaneously, because each qubit
represents two values. As more qubits are added, the capacity expands exponentially.
Quantum computing is thus not a question of merely implementing the old Boolean logic
rules at a different physical level with a different set of components. New Software and
Hardware that take advantage of the novel quantum features can be devised.
There is no direct comparison between the information content of a classical bit that can take
two discrete values and a qubit that can take any value in a two dimensional complex
Hilbert space. Quantum gates represent general unitary transformations in the Hilbert
space, describing interactions amongst qubits. Reversible Boolean logic gates are easily
generalized to quantum circuits by interpreting them as the transformation rules for the
basis states. In addition there are gates representing continuous transformations in the
Hilbert space. Almost any two qubit quantum gate is universal.
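
As an illustration of the last two points, the NumPy sketch below writes down the CNOT gate (a reversible Boolean gate lifted to a permutation of the basis states |00>, |01>, |10>, |11>) and a continuous single-qubit rotation, and checks that both are unitary. It is a minimal sketch of the linear-algebra view of gates, not of any particular hardware.

    import numpy as np

    # CNOT: a reversible Boolean gate written as a permutation of the basis
    # states |00>, |01>, |10>, |11> (flip the second bit when the first is 1).
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # A continuous single-qubit rotation, a gate with no Boolean counterpart.
    def rotation(theta):
        return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                         [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

    # Both are unitary: U multiplied by its conjugate transpose is the identity.
    for U in (CNOT, rotation(0.3)):
        print(np.allclose(U @ U.conj().T, np.eye(U.shape[0])))   # True, True
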
Some of the problems with quantum computing are as follows:
• Interference - During the computation phase of a quantum calculation, the slightest disturbance in a quantum system (say a stray photon or wave of EM radiation) causes the quantum computation to collapse, a process known as decoherence.
• Error correction - Because truly isolating a quantum system has proven so difficult, error correction systems for quantum computations have been developed. Qubits are not digital bits of data, thus they cannot use conventional (and very effective) error correction, such as the triple redundant method. Given the nature of quantum computing, error correction is ultra critical - even a single error in a calculation can cause the validity of the entire computation to collapse. There has been considerable progress in this area, with an error correction algorithm developed that utilizes 9 qubits (1 computational and 8 correctional). More recently, there was a breakthrough by IBM that makes do with a total of 5 qubits (1 computational and 4 correctional).
• Output observance - Closely related to the above two, retrieving output data after a quantum calculation runs the risk of corrupting the data.
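
For contrast, the "triple redundant" method mentioned above is easy to sketch classically: copy the bit three times, send the copies through a noisy channel, and recover the value by majority vote. The Python toy below (with an assumed 10% flip probability) does exactly that; the no-cloning theorem forbids this direct copying of qubits, which is why dedicated quantum codes such as the 9-qubit and 5-qubit schemes mentioned above are needed.

    import random

    # Classical triple redundancy (a sketch with an assumed 10% bit-flip noise):
    # send three copies of a bit through a noisy channel and take a majority vote.
    def triple_redundant_send(bit, flip_prob=0.1):
        received = [bit ^ (random.random() < flip_prob) for _ in range(3)]
        return int(sum(received) >= 2)

    random.seed(0)
    errors = sum(triple_redundant_send(1) != 1 for _ in range(10000))
    print(errors / 10000)   # residual error rate, roughly 3*p**2 = 0.03 for p = 0.1
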
The foundations of quantum computing have become well established. Everything else
required for its future growth is under exploration. That includes quantum algorithms,
logic gate operations, error correction, understanding dynamics and control of
decoherence, atomic scale technology and worthwhile applications.
Quantum computers might prove especially useful in the following applications:
• Breaking ciphers
• Statistical analysis
• Factoring large numbers
• Solving problems in theoretical physics
• Solving optimization problems in many variables

3. Molecular Computing
Feynman realized that the cell was a molecular machine and the information was
processed at the molecular level. All cellular life forms on earth can be separated into two
types, those with a true nucleus, eukaryotes (plants, animals, fungi and protists) and
those without a true nucleus, prokaryotes or bacteria. Bacteria are usually unicellular and
very much smaller than eukaryotic cells. Prokaryotic cells are much better understood
than the eukaryotic cells. The fundamental unit of all cells is the gene.
A gene is made up of DNA which acts as the information storage system. DNA consists
of two antiparallel strands of alternating sugar (deoxyribose) and phosphate held together
by hydrogen bonds between nitrogenous bases, attached to the sugar. There are four
bases : Adenine (A), Guanine (G), Cytosine (C) and Thymine (T). A typical DNA
structure is shown in Figure 2. Hydrogen bonding can occur only between specific bases
i.e. A with T and G with C. DNA encodes information as a specific sequence of the
nitrogenous bases in one strand. The chemical nature of DNA is such that it is very easy
to make a precise copy of the base sequence.
The process of DNA replication is not spontaneous. There have to be nucleotides present.
The agents of synthesis of nucleotides and DNA are enzymes which are made up of
proteins. There is an intermediate molecule involved in the transformation. This molecule
is messenger RNA (mRNA). The DNA is converted into RNA by a process called
transcription. RNA in turn generates proteins through a process called translation.
A very important part of cell as a molecular machine is the fact that gene expression is
controlled. In bacteria all aspects of gene expression can be subject to control.
Transcription can be switched on or switched off. In bacteria the two major control points
are regulation of transcription initiation and control of enzyme activity.
Figure 2: DNA Structure
An aspect of molecular machines is the intrinsic unreliability. One cannot predict the
behaviour of one molecule. In 1994, Adleman demonstrated how a massively-parallel
random search may be implemented using standard operations on the strands of DNA.
For molecular computers to become a reality the molecules have to be intrinsically self-repairing, i.e., they have to be living.
Consider the traveling salesman problem (strictly, the Hamiltonian path problem). Suppose you want to start at Atlanta and end
up at Elizabeth, NJ, while visiting three other cities (Boston, Chicago, Altoona) once
each. The number of connections between cities is variable: any given city might or
might not be connected to any other city. The question is: in what order do you visit these
cities such that you visit each one once and only once?
The first step is to give each city two four-letter names, each representing a sequence
of four nucleotides (adenine, guanine, cytosine, and thymine; hereafter, a, g, c, and t).
The bases are chosen at random. For the sake of clarity, the first name is in upper case
and the second in lower case. The change in case has no other meaning.
Atlanta = ACTTgcag
Boston = TCGGactg
Chicago = GGCTatgt
...
The second step is to assign 'flight names' for those cities that have direct connections,
using the first set of names to signify arrivals and the second to signify departures:
Atlanta -> Boston = gcagTCGG
Boston -> Atlanta = actgACTT
Boston -> Chicago = actgGGCT
Atlanta -> Chicago = gcagGGCT
...
Each of the four nucleotides bonds with one and only one other 'complement' base.
Specifically, a bonds with t, and c with g. Each of the citynames therefore has a
complement:
Atlanta (original name) ACTTgcag
Atlanta (complement) TGAAcgtc
Boston (original name) TCGGactg
Boston (complement) AGCCtgac
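
In software terms, the encoding rule just described is a simple string operation. The Python sketch below (reusing the example names above) builds a flight name from the departure half of the origin's name and the arrival half of the destination's name, and computes a cityname complement base by base; it is only a notational aid, not part of the laboratory protocol.

    # A sketch (Python) of the naming rule above: a flight name is the departure
    # half (last four letters) of the origin's name plus the arrival half (first
    # four letters) of the destination's name; complements are taken base by base.
    CITY = {"Atlanta": "ACTTgcag", "Boston": "TCGGactg", "Chicago": "GGCTatgt"}

    COMP = {"A": "T", "T": "A", "C": "G", "G": "C",
            "a": "t", "t": "a", "c": "g", "g": "c"}

    def flight_name(origin, destination):
        return CITY[origin][4:] + CITY[destination][:4]

    def complement(name):
        return "".join(COMP[base] for base in name)

    print(flight_name("Atlanta", "Boston"))     # gcagTCGG
    print(complement(CITY["Atlanta"]))          # TGAAcgtc
    print(complement(CITY["Boston"]))           # AGCCtgac
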
The flight names and the cityname complements are then ordered from a gene vendor and
mixed together. Now imagine that a molecule coding for a flight from Atlanta to Boston - gcagTCGG - bumps into a complement for the Boston cityname.
You get a structure that looks like:
gcagTCGG
    ||||
    AGCCtgac
If that assemblage bumps into a molecule representing a flight from Boston to Chicago
(actgGGCT) the structure will grow as follows:
gcagTCGGactgGGCT
    ||||||||
    AGCCtgac
And so on and so on.
The next step is to find and read the strand(s) encoding the answer. First, techniques are
available that allow strands to be filtered according to their end-bases, allowing the
removal of strands that do not begin with Atlanta and end with Elizabeth. Then the
strands are measured (through electrophoretic techniques) and all strands not exactly 8x5
bases long are thrown out. Third, the batch of strands is probed for each cityname in turn.
After each filtration the strands not containing that cityname are discarded. At the end
you have a small group of strands that begin with Atlanta and end with Elizabeth, have
every cityname in their length, and are five citynames long. These strands represent the
solution, or equivalent solutions. The answer is read by sequencing the strand.
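
The filtration procedure above can be mimicked in ordinary software. The Python toy below (using the city names of this example, an assumed set of direct connections, and random choices in place of real chemistry) generates a large "tube" of random flight concatenations and then applies the same three filters: correct start and end cities, correct length, and every city present. It is only an illustration of the logic of Adleman's search, not of the laboratory steps.

    import random

    # A toy of Adleman's search (Python; random choices in place of chemistry).
    # Strands are built by chaining flights end-to-start, then filtered exactly
    # as described above.  The flight list is an assumed example.
    cities = ["Atlanta", "Boston", "Chicago", "Altoona", "Elizabeth"]
    flights = [("Atlanta", "Boston"), ("Atlanta", "Chicago"), ("Boston", "Atlanta"),
               ("Boston", "Chicago"), ("Chicago", "Altoona"), ("Chicago", "Elizabeth"),
               ("Altoona", "Elizabeth")]

    def random_strand(max_cities=6):
        path = list(random.choice(flights))
        while len(path) < max_cities:
            options = [f for f in flights if f[0] == path[-1]]
            if not options:
                break
            path.append(random.choice(options)[1])
        return path

    random.seed(1)
    soup = [random_strand() for _ in range(100000)]        # massive parallelism

    # Filter 1: strands that begin at Atlanta and end at Elizabeth.
    soup = [p for p in soup if p[0] == "Atlanta" and p[-1] == "Elizabeth"]
    # Filter 2: strands of exactly five citynames (8 x 5 bases in the DNA version).
    soup = [p for p in soup if len(p) == len(cities)]
    # Filter 3: strands that contain every cityname.
    soup = [p for p in soup if set(p) == set(cities)]

    print(set(map(tuple, soup)))     # the surviving strand(s) encode the answer
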
We could use two aspects of bacteria to carry out computations : gene switch cascades
and metabolism. We need to generate such cascades or metabolic pathways that do not
interfere with the normal cellular functions. At present it is possible to generate gene
switch cascades at will but not metabolic pathways.
4. Biological Computing
Life is nanotechnology that works. It is a system that has many of the characteristics
theorists seek. The role of biology in the development of nanotechnology is key because
it provides guiding principles and suggests useful components. A key lesson from
biology is the scale of structural components.
The lower energies involved in non-covalent interactions make it much more convenient
to work on the nanometer scale utilizing the biological principle of self-assembly. This is
the ability of molecules to form well structured aggregates by recognizing each other
through molecular complementarity. The specificity, convenience and programmability
of DNA complementarity are exploited in Biological computing.
There are two major technical issues. One is positional control (holding and positioning molecular parts to facilitate assembly of complex structures) and the other is self-replication.
The Central Dogma of Molecular Biology describes how the genetic information we
inherit from our parents is stored in DNA, and that information is used to make identical
copies of that DNA and is also transferred from DNA to RNA to protein.
A property of both DNA and RNA is that the linear polymers can pair one with another,
such pairing being sequence specific. All possible combinations of DNA and RNA
double helices occur. One strand of DNA can serve as a template for the construction of a
complementary strand, and this complementary strand can be used to recreate the original
strand. This is the basis of DNA replication and thus all of genetics. Similar templating
results in an RNA copy of a DNA sequence. Conversion of that RNA sequence into a
protein sequence is more complex. This occurs by translation of a code consisting of
three nucleotides into one amino acid, a process accomplished by cellular machinery
including tRNA and ribosomes.
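
The three-nucleotides-to-one-amino-acid rule can be sketched in a few lines of Python. Only a handful of codons are shown below (the full genetic code has 64 entries, and is written here with DNA letters for simplicity), so the table is an illustrative fragment rather than a complete codon table.

    # A sketch of the triplet code (Python; only a handful of codons shown,
    # written with DNA letters for simplicity; the full genetic code has 64 entries).
    CODON = {"ATG": "Met", "TTT": "Phe", "AAA": "Lys", "TGG": "Trp", "TAA": "STOP"}

    def translate(coding_sequence):
        """Read the sequence three bases at a time until a stop codon."""
        protein = []
        for i in range(0, len(coding_sequence) - 2, 3):
            amino_acid = CODON.get(coding_sequence[i:i + 3], "???")
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return "-".join(protein)

    print(translate("ATGTTTAAATGGTAA"))    # Met-Phe-Lys-Trp
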
Although it is possibly true in theory that given a protein sequence one can infer its
properties, current state of the art in biology falls far short of being able to implement this
in practice. Current sequence analysis is a painful compromise between what is desired
and what is possible. Some of the many factors which make sequence analysis difficult
are discussed below.
As noted above, the difficulty of sequencing proteins means that most protein sequences
are determined from the DNA sequences encoding them. Unfortunately, the cellular
pathway from DNA to RNA to Protein includes some features that complicate inference
of a protein sequence from a DNA sequence.
• Many proteins are encoded on each piece of DNA, and so, when confronted with a DNA sequence, a biologist needs to figure out where the code for a protein starts and stops. This problem is even more difficult because the human genome contains much more DNA than is needed to encode proteins; the sequence of a random piece of DNA is likely to encode no protein whatsoever.
• The DNA which encodes proteins is not continuous, but rather is frequently scattered in separate blocks called exons. Many of these problems can be reduced by sequencing of RNA (via cDNA) rather than DNA itself, because the cDNA contains much less extraneous material, and because the separate exons have been joined in one continuous stretch in the RNA (cDNA). There are situations, however, where analysis of RNA is not possible and the DNA itself needs to be analyzed.
• Although a much greater fraction of RNA encodes protein than does DNA, it is certainly not the case that all RNA encodes protein. In the first case, there can be RNA up- and down-stream of the coding region. These non-coding regions can be quite large, in some cases dwarfing the coding region. Further, not all RNAs encode proteins. Ribosomal RNA (rRNA), transfer RNA (tRNA), and the structural RNA of small nuclear ribonucleoproteins (snRNA) are all examples of non-coding RNA.
By and large, global, complete solutions are not available for determining an encoded
protein sequence from a DNA sequence. However, by combining a variety of
computational approaches with some laboratory biology, people have been fairly
successful at accomplishing this in many specific cases.
There are two primary approaches:
1. Cellular gates
2. Sticker based computation
4.1 Cellular gates
In this method, we can build a large number of logic gates within a single cell. We use
proteins produced within a cell as signals for computation and DNA genes as the gates.
The steps of producing a protein are as follows:
1. The DNA coding sequence (gene) contains the blueprint of the protein to produce.
2. When a special enzyme complex called RNA polymerase is present, portions of the gene are copied to an intermediate form called mRNA. The RNA polymerase controls which portions of the gene are copied to mRNA. mRNA copies, like proteins, rapidly degrade. Consequently, mRNA copies must be produced at a continuous rate to maintain the protein level within a cell.
3. Ribosome of the cell produces the protein using the mRNA transcript.
DNA Gene: A gene is a DNA coding sequence (a blueprint). It is accompanied by a
control region, which is composed of non-coding DNA sequences as shown below. The
control region has three important regions:
1. RNA polymerase binding region: This is the region where the special enzyme
RNA polymerase binds. When this enzyme binds, it triggers the production of
mRNA transcript.
2. Repressor binding region(s): These regions typically overlap with the RNA
polymerase binding region, so that if a repressor protein is bound to this region, it
inhibits production of mRNA (since RNA polymerase cannot bind now).
3. Promoter binding region(s): These regions attract promoter proteins. When these
proteins are bound, they attract RNA polymerase so mRNA production is
facilitated.
4.1.1 How to model a gate using a cell

• Signal: The level of a particular protein is used as the physical signal. This is analogous to the voltage in a conventional gate. There can be many signals (e.g., to represent variables x1, x2, etc.) within one cell, since there are many proteins within a cell.
• The gate: A gene and its control sequence are used as a gate. A gene determines which protein (signal) is produced. The control sequence associated with the gene determines the type of gate (i.e., how to control it). There can be a large number of gates, since a cell can accommodate many genes. The following examples show two gates.

Example 1: Inverter. The input is protein B (x1). The gate is a gene that produces protein A and carries a repressor binding region for protein B. The output is protein A (x2 = NOT x1). When protein B is not present the gate will be producing protein A (assuming that RNA polymerase is always there). If we produce protein B, the gate will stop producing protein A. Therefore, this gate acts as a NOT gate (inverter).

Example 2: NOR gate. The inputs are protein B (x1) and protein C (x2). The gate is a gene that produces protein A, with one repressor binding region for protein B and another repressor binding region for protein C. The output is protein A (x3 = x1 NOR x2). Protein A is produced only when neither protein B nor protein C is present, so the gate acts as a NOR gate.
• Input: A gate within a cell can use two kinds of inputs:
  - A gate (gene) can use a signal (protein) produced by another gate (gene) as its input. This is how the gates within the cell are networked.
  - A gate (gene) can use a signal (protein) produced within the cell as a response to a stimulus (like illumination, a chemical environment, or the concentration of a specific intracellular chemical) outside the cell. This is how gates in different cells can be connected.
• Output: A gate within a cell can produce two kinds of outputs:
  - A gate (gene) can produce a signal (a DNA-binding protein) that can be used by other gates as input.
  - A gate (gene) can produce enzymes which effect reactions like motion, illumination or chemical reactions that can be sensed from outside the cell.
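
The gate model above boils down to a simple rule: a gene produces its output protein only when RNA polymerase is present and none of its repressor proteins is present. The Python sketch below (a Boolean abstraction with made-up protein names, not a biochemical model) applies that rule to reproduce the inverter and the NOR gate described above.

    # A Boolean abstraction of the gene-as-gate idea (a sketch with made-up
    # protein names, not a biochemical model): a gene produces its output protein
    # only when RNA polymerase is present and none of its repressors is present.

    def gene(repressors, present, rna_polymerase=True):
        """Return True if the gene's output protein is produced."""
        return rna_polymerase and not any(p in present for p in repressors)

    # Example 1: inverter.  Gene for protein A repressed by protein B.
    for b in (False, True):
        present = {"B"} if b else set()
        print("B =", b, "-> A =", gene(repressors=["B"], present=present))

    # Example 2: NOR gate.  Gene for protein A repressed by proteins B and C.
    for b in (False, True):
        for c in (False, True):
            present = {name for name, level in (("B", b), ("C", c)) if level}
            print("B =", b, "C =", c, "-> A =",
                  gene(repressors=["B", "C"], present=present))
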
4.2 Sticker Based Model
This method uses DNA strands as the physical substrate to represent information. A DNA strand is composed of a sequence of bases, termed A, T, C, and G. Each base shows affinity to exactly one other base, as shown below:

A pairs with T
C pairs with G
Consequently, if we have two DNA strands containing the forms ATCGG and TAGCC,
they will stick together as:
……T A G C C ……
……A T C G G……
We use the above fact to perform computing as described below.
4.2.1 Information Representation
We divide a DNA strand into K bit regions, each of size M bases, as shown below. We can decide the size of M depending on the number of combinations we want to represent.
DNA memory strand with no stickers attached (bits 0 0 0):

    atcgg tcata gcact

The same DNA memory strand with stickers attached to the first and third bit regions (bits 1 0 1):

    atcgg tcata gcact
    tagcc       cgtga

Figure 3: (top) DNA memory strand with no stickers attached; (bottom) the same DNA memory strand with stickers attached to the first and third bit regions
If the corresponding sticker of a bit region is attached to it, as shown in Figure 3, we treat that bit as set (i.e., bit = 1). Otherwise the bit is cleared (bit = 0). The figure shows three bit regions, each of length 5, for both memory strands. The top DNA memory strand with no stickers represents the bit sequence 000, and the strand with stickers placed on the first and third regions represents the bit sequence 101.
We do computation by performing a series of four operations on the DNA strands: combine, separate, bit set, and bit clear.
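
A toy software analogue of these four operations is sketched below in Python, with lists of bit-vectors standing in for tubes of DNA memory strands (the real operations are, of course, wet-lab procedures rather than function calls).

    # A toy software analogue of the sticker model (Python lists of bit-vectors
    # standing in for tubes of DNA memory strands; not the biochemistry itself).

    def combine(tube1, tube2):
        """Pour the contents of two tubes together."""
        return tube1 + tube2

    def separate(tube, i):
        """Split a tube by whether bit region i carries a sticker."""
        on = [s for s in tube if s[i] == 1]
        off = [s for s in tube if s[i] == 0]
        return on, off

    def bit_set(tube, i):
        """Attach the sticker for region i on every strand in the tube."""
        return [s[:i] + [1] + s[i + 1:] for s in tube]

    def bit_clear(tube, i):
        """Remove the sticker for region i on every strand in the tube."""
        return [s[:i] + [0] + s[i + 1:] for s in tube]

    # The two strands from Figure 3: 000 and 101 (K = 3 bit regions).
    tube = [[0, 0, 0], [1, 0, 1]]
    on, off = separate(tube, 0)              # on = [[1, 0, 1]], off = [[0, 0, 0]]
    tube = combine(bit_set(off, 1), on)
    print(tube)                              # [[0, 1, 0], [1, 0, 1]]
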
Biological computing does not scale well. It is very good for problems up to a certain, very large, size, but the amount of DNA needed grows exponentially. So biological computing offers vast parallelism only up to a certain problem size. It is also expensive, not very reliable, and it takes a long time to obtain the result. By altering DNA, which humans do not fully understand, we could possibly generate a disease or mutation by accident. There are also ethical issues to be considered.
Two important areas of technology have been inspired by Biological Computing. They
are Cyborgs and Emotion Machines.
4.3 Cyborgs
[Excerpted from: Gopal T V, "Community Talk", CSI Communications, April 2007. Theme: Cyborgs; Guest Editor: Daniela Cerqui]
The quest to understand the human brain entered the realms of Computer Science several decades back. Not entirely unrelated is the following set of really big questions one must answer to hopefully replicate the activities of the human brain using technology in some form.
1. What is Intelligence?
2. What is life about?
3. What is Thought?
4. How did Language Evolve?
5. What is Consciousness?
6. Does GOD exist?
Even preliminary answers to these questions amenable to automation using technology
are proving to be hard to come by. If the answers are available to such questions, we can
have very interesting extensions to several human endeavors, which go beyond the
present day activities done using computer-based automation.
The technology support provided outside the human system in these areas may take a
longer time. “Cyborg” is an innovation to enhance the capabilities of the human being in
some manner. The term “cyborg” is used to refer to a man or woman with bionic, or
robotic, implants. There are two popular methods of making a Cyborg. One method is to
integrate technology into organic matter resulting in robot-human. The other method is to
integrate organic matter into technology resulting in human-robot. Cyborgs were well
known in science fiction much before they became feasible in the real world.
4.4 Emotion Machine
Why Can't…
• We have a thinking computer?
• A machine that performs about a million floating-point operations per second understand the meaning of shapes?
• We build a machine that learns from experience rather than simply repeating everything that has been programmed into it?
• A computer be similar to a person?
The above are some of the questions facing computer designers and others who are
constantly striving to build more and more ‘intelligent’ machines. Human beings tend to
express themselves using:
• Body language
• Facial expressions
• Tone of voice
• The words we choose
Emotion is implicitly conveyed. In psychology and common use, emotion is the language
of a person's internal state of being, normally based in or tied to their internal (physical)
and external (social) sensory feeling. Love, hate, courage, fear, joy, and sadness can all
be described in both psychological and physiological terms.
“There can be no knowledge without emotion. We may be aware of a truth, yet until
we have felt its force, it is not ours. To the cognition of the brain must be added the
experience of the soul.”
- Arnold Bennett (British novelist, playwright, critic, and essayist, 1867-1931)
There are two major theories about Emotion. They are:
• Cognitive Theories: Emotions are a heuristic to process information in the cognitive domain.
• Two Factor Theory: Appraisal of the situation and the physiological state of the body create the emotional response. Emotion, hence, has two factors.
Three major areas of Intelligent activity are influenced by emotions:
• Learning
• Long-term Memory
• Reasoning
The Somatic Marker Hypothesis is proving to be very effective in understanding the relationship between Emotion and Intelligence.
• Real-life decision making situations may have many complex and conflicting alternatives: the cognitive processes alone would be unable to provide an informed option.
• Emotion (by way of somatic markers) aids us, and can be visualised as a heuristic: a reinforcing stimulus induces a physiological state, and this association gets stored (and later biases cognitive processing).
The Iowa Gambling Experiment was designed to demonstrate emotion-based learning. People with a damaged Prefrontal Cortex (where the somatic markers are stored) did poorly.
Marvin Minsky wrote a book titled "The Emotion Machine" which describes such a machine as follows: an intelligent system should be able to describe the same situation in multiple ways (resourcefulness) – such a meta-description is a "Panalogy". We then need meta-knowledge to decide which description is "fruitful" for our current situation and reasoning. Emotion is the tool in people that switches these descriptions "without thinking". A machine equipped with such meta-knowledge will be more versatile when faced with a new situation.
Minsky outlines the book as follows:
1. "We are born with many mental resources."
2. "We learn from interacting with others."
3. "Emotions are different Ways to Think."
4. "We learn to think about our recent thoughts."
5. "We learn to think on multiple levels."
6. "We accumulate huge stores of commonsense knowledge."
7. "We switch among different Ways to Think."
8. "We find multiple ways to represent things."
9. "We build multiple models of ourselves."
Machines of today don't need emotion. Machines of the future would need it to survive, interact with other machines and humans, learn, and adapt to circumstances. Emotions are a basis for humans to do all of the above. Understanding the Biology of the Brain is the crux of building biological computation models that thrive on the Emotions in Human Beings and their impact on Intelligence and Reasoning.
An Emotion Machine named WE-4RII (Waseda Eye No. 4 Refined II) is being developed at Waseda University, Japan. This machine simulates six basic emotions: Happiness,
Fear, Surprise, Sadness, Anger and Disgust. It recognizes certain smells and detects
certain types of touch. It uses three personal computers for communication and it is only
a preliminary model of the Emotion Machine envisaged by Marvin Minsky.
5. Optical Computing
Compared to light, electronic signals in chips travel very slowly. Moreover, there is no
such thing as a short circuit with light. So beams could cross with no problem after being
redirected by pin-point sized mirrors in a switchboard. Optical computing was a hot
research area in the 1980s. However, progress slowed down due to the non-availability of materials to make optochips. Currently, electro-optical devices are available with the limitations imposed by their electronic components.
Optical Computing is back in the reckoning thanks to the availability of new conducting
polymers to make components many times faster than their silicon counterparts. Light
does not need insulators. One can send dozens or hundreds of photon streams
simultaneously using different color frequencies. Light beams are immune to
electromagnetic interference or cross talk. Light has low loss in transmission and
provides large bandwidth. Photonic devices can process multiple streams of data
simultaneously. A computation that requires 11 years on electronic computers could
require less than one hour on an optical one.
Figure 4 and Figure 5 give the all-optical building blocks for computing.
Figure 4: The schematic of an all-optical AND Gate
Figure 5: The schematic of an all-optical NAND Gate
6. Nanotechnology Approach
Carbon nanotubes, long, thin cylinders of carbon, were discovered in 1991 by S. Iijima.
A carbon nanotube is a long, cylindrical carbon structure consisting of hexagonal
graphite molecules attached at the edges. The nanotube developed from the so-called fullerene, a structure built in a manner similar to the geodesic domes originally conceived by R. Buckminster Fuller. Because of this, nanotubes are sometimes called buckytubes.
A fullerene is a pure carbon molecule composed of at least 60 atoms of carbon. Because a
fullerene takes a shape similar to a soccer ball or a geodesic dome, it is sometimes
referred to as a buckyball, after Buckminster Fuller, the inventor of the geodesic dome, for whom the fullerene is more formally named. Current work on the fullerene is
largely theoretical and experimental.
Some nanotubes have a single cylinder [single wall nanotube]; others have two or more
concentric cylinders [multiple wall nanotube]. Nanotubes have several characteristics:
wall thickness, number of concentric cylinders, cylinder radius, and cylinder length.
Some nanotubes have a property called chirality, an expression of longitudinal twisting.
Because graphite can behave as a semiconductor, nanotubes might be used to build
microscopic resistors, capacitors, inductors, diodes, or transistors. Concentric nanotubes
might store electric charges because of capacitance among the layers, facilitating the
construction of high-density memory chips.
Much progress has been achieved in the synthesis of inorganic nanotubes and fullerene-like nanoparticles of WS2 and MoS2 [Tungsten and Molybdenum Sulphides] over the
last few years. Synthetic methods for the production of multiwall WS2 nanotubes by
sulfidizing WO3 nanoparticles have been described and further progress is underway. A
fluidized-bed reactor for the synthesis of 20-50 g of fullerene-like WS2 nanoparticles has
been reported. The detailed mechanism of the synthesis of fullerene-like MoS2
nanoparticles has been elucidated.
There are two big hurdles to overcome for nanotube-based electronics. One is
connectibility - it's one thing making a nanotube transistor, it's another to connect
millions of them up together. The other is the ability to ramp up to mass production.
7. Applications
Nanocomputing is an inter-disciplinary field of research. Carbon nanotubes hold promise
as basic components for nanoelectronics - they can be conductors, semiconductors and
insulators. IBM recently made the most basic logic element, a NOT gate, out of a single
nanotube, and researchers in Holland have demonstrated a variety of more complex structures built out of collections of tubes, including memory elements.
By choosing materials so that they naturally bond with each other in desired
configurations you can, in theory, mix them up in a vat under carefully controlled
conditions and have the electronic components assemble themselves. Carbon nanotubes
do not lend themselves to such approaches so readily, but can be reacted with or attached
to other substances, including antibodies, so that self-assembly becomes a possibility.
However, it is not that simple. The sorts of structures that constitute an electronic circuit
are far too complex, varied and intricate to be easily created through self-assembly.
Nanocomputers have wide ranging applications in Life Sciences, Robotics and Power
systems.
8. Conclusions
Nanocomputing is an inflection point in the advancement of technology. The research in
this area is progressing rapidly. To date, the basic principles have been crystallized. It may be a decade from now before the fruits of Nanocomputing reach the common man.
9. References

1. http://doc.union.edu/237/Notes/lect14.PDF
2. http://meseec.ce.rit.edu/eecc756-spring2002/756-3-12-2002.pdf
3. http://www.go.dlr.de/wt/dv/bitmap/0896/sgi.html
4. http://130.94.24.217/2001/0901issue/0901stix.html
5. http://www.sciam.com/explorations/2000/112700nano/index.html
6. http://130.94.24.217/2001/0901issue/0901whitesides.html
7. http://130.94.24.217/2001/0901issue/0901skeptic.html
8. http://www.sciam.com/explorations/091696explorations.html
9. http://www.sciam.com/news/031301/4.html
10. http://www.mitre.org/research/nanotech/
11. http://www.zyvex.com/nanotech/feynman.html
12. http://www.ee.udel.edu/~kolodzey/courses/ELEG86701F/nanolinks.htm
13. http://www.atomicscaledesign.net/microelectronics/links/index.shtml
14. http://www.chem.rochester.edu/~rocacs/section/symposium/nanosites.htm
15. http://www.foresight.org
16. http://www.foresight.org/SciAmDebate/index.html
17. http://www.nano.org.uk/
18. http://www.cnsi.ucla.edu/
19. http://www.inst.phys.ust.hk/
20. http://www.techfak.uni-bielefeld.de/bcd/Curric/welcome.html
21. http://science.nasa.gov/headlines/images/nanosecond/thepaper.PDF
22. http://www.rutherfordjournal.org/article020106.html
23. http://www.eetimes.com/electronics-news/4036696/Analog-computer-trumpsTuring-model
24. http://www.comdyna.com/gp6intro.htm
25. http://courses.engr.illinois.edu/ece486/labs/lab1/analog_computer_manual.pdf
26. http://www.indiana.edu/~rcapub/v21n2/p24.html
27. http://www.nsa.gov/public_info/_files/tech_journals/why_analog_computation.pdf
28. http://cacm.acm.org/blogs/blog-cacm/135154-analog-computing-time-for-acomeback/fulltext
29. K S Lakshminarayanan, Usha Devi, R Bhavani Shankar and T V Gopal (Editors),
“NanoComputing – Technology Trends”, Allied Publishers, Chennai, 2001
30. Marvin Minsky, “The Emotion Machine: Commonsense Thinking, Artificial
Intelligence, and the Future of the Human Mind”, Simon & Schuster, 2006.
31. Michael A Nielsen and Isaac L Chuang, “Quantum Computation and Quantum
Information”, Cambridge University Press, UK, 2002.
32. Nick Tredennick and Brion Shimamoto, “The Death of Microprocessors”,
Embedded Systems Programming, September 2004
33. Rich Belgard, “Reconfigurable Illogic”, Embedded Systems Programming,
September 2004.