Convergent assembly

by Ralph C. Merkle, Xerox PARC
DRAFT
This article has been published in
Nanotechnology 8, No. 1, March 1997, pages 18-22.
It is available on the WWW at http://www.zyvex.com/nanotech/convergent.html. The
web version differs in some respects from the published version.
Abstract
Nanotechnology can make big things as well as small things. An attractive approach is to
use convergent assembly, which can rapidly make products whose size is measured in
meters starting from building blocks whose size is measured in nanometers. It is based on
the idea that smaller parts can be assembled into larger parts, larger parts can be
assembled into still larger parts, and so forth. This process can be systematically repeated
in a hierarchical fashion, creating an architecture able to span the size range from the
molecular to the macroscopic.
Convergent assembly covers a class of many different architectures. Drexler described
one member of this class (see figure 14.4 from Nanosystems (Drexler, 1992)). The
present proposal is simpler while retaining the desirable performance characteristics
described in Nanosystems.
Convergent assembly
It's common to make bigger things by assembling them from smaller things. A finished
assembly which is about 1 meter in size might be made from eight smaller subassemblies
each of about 0.5 meters in size. To make a 1 meter cube, we'd need to assemble eight
smaller cubes, each of 0.5 meters in size. This relationship holds approximately for many
products.
We could assemble our eight subassemblies into a finished assembly by using one or
more robotic arms (or other positional devices) in an assembly module (depicted
abstractly in figure 1). The present paper is focused on architectural issues; the reader
interested in details of how positional devices can be used in manufacturing is referred to
other sources -- one example is A new family of six degree of freedom positional devices
(Merkle, submitted to Nanotechnology).
Figure 1: a single assembly module able to accept subassemblies from the four input
ports to the right, and which produces a finished assembly to the left through the
single output port. If the output port has a size of one meter by one meter, each
input port would have a size of 0.5 meters by 0.5 meters.
This process could be repeated. We might take 64 sub-subassemblies, each with a size of
about 0.25 meters, assemble them into 8 subassemblies, each with a size of about 0.5
meters, and finally assemble the 8 subassemblies into the finished assembly, with a size
of about 1.0 meters. This could be done using the system shown in figure 2, which
depicts a 1.0 meter assembly module connected to four 0.5 meter assembly modules.
(Note that we are designating an "assembly module" by the size of its output port. The
module size is larger -- in this case, the 1.0 meter assembly module has a size of 2.0
meters. This additional size provides sufficient room to handle subassemblies and the
final 1.0 meter assembly).
Figure 2: two stages of assembly starting with 64 sub-subassemblies with a size of
about 0.25 meters, producing 8 subassemblies with a size of about 0.5 meters, and
producing a finished product with a size of about 1.0 meters.
Let's assume that the final stage (which assembles the subassemblies into the final
product) takes about 100 seconds. (It's not critical exactly how long it takes, but it's
convenient to pick a specific time.) If the 1.0 meter assembly module takes 100 seconds,
then the 0.5 meter assembly module should complete one subassembly in 50 seconds.
This is because an object which is half the size can finish a movement of half the length
in half the time: its frequency of operation is doubled (see (Drexler, 1992) for a
discussion of scaling laws). Each 0.5 meter assembly module can therefore produce two
subassemblies in 100 seconds. The four 0.5 meter assembly modules can finish eight
subassemblies in 100 seconds, which means that both the 1.0 meter assembly module and
the four 0.5 meter assembly modules must work for 100 seconds to produce the final
product.
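The bookkeeping in this paragraph can be checked with a short sketch (Python; the function name and the 100-second final-stage time are illustrative, the latter taken from the example above). Every stage, being half the size and twice the speed of the next, finishes its full share of assemblies in the same wall-clock time.

```python
# Illustrative sketch: each assembly module's cycle time scales with its size.
T_FINAL = 100.0  # seconds per assembly for the 1.0 meter module (example value)

def stage_throughput(stage):
    """Per-stage figures, counting back from the final 1.0 m stage (stage 0)."""
    size = 1.0 / 2**stage        # output size of each module, meters
    cycle = T_FINAL / 2**stage   # seconds per assembly (half the size, twice the speed)
    modules = 4**stage           # modules at this stage (four feed each module above)
    assemblies = 8**stage        # assemblies this stage must deliver in total
    # Each module makes assemblies/modules = 2**stage outputs, `cycle` seconds each,
    # so every stage finishes its share in the same wall-clock time.
    wall_clock = (assemblies / modules) * cycle
    return size, cycle, modules, wall_clock

for s in range(3):
    size, cycle, modules, wall = stage_throughput(s)
    print(f"{size:5.3f} m modules: {modules:2d} of them, "
          f"{cycle:5.1f} s/assembly, stage done in {wall:.0f} s")
```

All three stages report the same 100 seconds, matching the balance described above.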
This process can, of course, be continued. Figure 3 shows three stages of this process, in
which 512 sub-sub-subassemblies are assembled in 16 0.25 meter assembly modules,
making 64 sub-subassemblies; these 64 sub-subassemblies are assembled into 8
subassemblies in 4 0.5 meter assembly modules. Finally, the 8 subassemblies are
assembled into the final product in the single 1.0 meter assembly module.
Figure 3: three stages of convergent assembly produce a final product of ~1.0 meters
in size from 512 sub-sub-subassemblies each of which is ~0.125 meters in size.
Again, the 0.25 meter assembly modules operate twice as fast as the 0.5 meter assembly
modules, and four times as fast as the 1.0 meter assembly module. The 16 0.25 meter
assembly modules can make 64 sub-subassemblies in the same time that the 4 0.5 meter
assembly modules can make 8 subassemblies, which is the same amount of time that the
single 1.0 meter assembly module takes to produce the finished product.
We can keep adding stages to this process until the inputs to the first stage are as small as
we might find convenient. For the purposes of molecular manufacturing, it is convenient
to assume that these initial inputs are ~one nanometer in size.
Summary of convergent assembly
The following rules hold approximately for a convergent assembly system which doubles
the size of product produced at each stage, which starts with building blocks with a
characteristic length of lambda meters, and which has a characteristic assembly time to
manipulate those building blocks of tau seconds. Values for lambda and tau might be
10^-9 meters (one nanometer) and 10^-7 seconds (100 nanoseconds).
1. The size of product that can be made at stage N is 2^N x lambda meters.
2. The time required for an assembly module in the Nth stage to produce an
assembly of size 2^N x lambda meters, given that eight subassemblies of size
2^(N-1) x lambda meters are already available, is 2^N x tau seconds. We will
further assume that the assembly process can be started when only four
subassemblies are available, effectively allowing stage N to begin operation after
stage N-1 has produced only half the required subassemblies.
3. The time required to produce a product of size 2^N x lambda meters, including
the time taken by all preceding stages (i.e., assuming all stages start empty, with
no assemblies or subassemblies in them, but that they have been provided with
appropriate instructions for the manufacture of the desired product), is 2^(N+1) x
tau seconds.
4. Such a system can be pipelined, in which case a steady stream of finished
products of size 2^N x lambda meters can be produced at a rate of one new
product every 2^N x tau seconds.
5. The area of the output port of an assembly module equals the sum of the areas of
its input ports. The total area of all output ports at stage N is equal to the total area
of all output ports at stage N-1.
6. The average speed with which material flows through the system is approximately
equal at all stages, and is approximately lambda / tau meters per second. For the
example values of lambda and tau chosen here, this speed is roughly 0.01
meters/second.
The example values selected for tau and lambda imply that the assembly of a 1.0 meter
product from eight 0.5 meter subassemblies will take about 100 seconds. There seems to
be no particular reason to assume that the characteristic time tau could not be made
shorter, thus increasing the speed of the manufacturing process.
The claim that it requires 2^(N+1) x tau seconds for the Nth stage to produce a single
product of size 2^N x lambda meters, made in item 3 above, can be shown by induction.
We simply postulate that stage 1 requires 4 x tau seconds, thus providing the base for the
induction. Assume that the hypothesis is true for stage N-1 and we wish to show that it is
true for stage N. The assembly modules at stage N-1 will produce their first output in 2^N
x tau seconds, at which point (by item 2) the assembly process can begin in stage N. The
assembly process in stage N adds an additional 2^N x tau seconds (by item 2) before it
produces its first output, so the total time before stage N produces an output is (2^N +
2^N) x tau seconds, or 2^(N+1) x tau seconds, as desired.
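The recurrence used in this induction can also be checked numerically. A minimal sketch (Python; `first_output` is a hypothetical name), using the base case of 4 x tau seconds for stage 1:

```python
TAU = 1e-7  # example characteristic assembly time, seconds

def first_output(n):
    """Time for stage n to produce its first output, starting empty."""
    if n == 1:
        return 4 * TAU                 # base case postulated in the text
    # stage n-1 delivers its first outputs after first_output(n-1) seconds,
    # then the stage-n module needs a further 2**n * tau seconds (item 2)
    return first_output(n - 1) + 2**n * TAU

# agrees with the closed form 2**(n+1) * tau at every stage
for n in range(1, 31):
    assert first_output(n) == 2**(n + 1) * TAU
```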
If we assume that stage N is unable to overlap any operations at all with stage N-1, i.e.,
we assume that stage N-1 must produce all eight subassemblies before stage N can even
begin, then we will increase the time needed before the Nth stage can produce its first
output to 3 x 2^N x tau seconds. This is only 50% longer.
A 30 stage system (N=30) should be able to produce a single meter-sized product in ~200
seconds. It should be able to produce a steady stream of meter-sized products at a rate of
roughly one new product every 100 seconds.
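These figures follow directly from rules 1-4 and 6. A sketch (Python; the function and its name are illustrative) with the example values lambda = 10^-9 m and tau = 10^-7 s:

```python
LAMBDA = 1e-9  # building-block size, meters (example value)
TAU = 1e-7     # characteristic assembly time, seconds (example value)

def stage(n):
    """Rules 1-4 and 6 evaluated for stage n."""
    product_size = 2**n * LAMBDA       # rule 1: meters
    assembly_time = 2**n * TAU         # rule 2: seconds per assembly
    total_time = 2**(n + 1) * TAU      # rule 3: time starting from empty stages
    pipelined_period = 2**n * TAU      # rule 4: seconds between finished products
    flow_speed = LAMBDA / TAU          # rule 6: stage-independent, ~0.01 m/s
    return product_size, assembly_time, total_time, pipelined_period, flow_speed

size, t_asm, t_total, period, speed = stage(30)
print(f"stage 30: {size:.2f} m product, first one in {t_total:.0f} s,")
print(f"then one every {period:.0f} s; material flow ~{speed:.2f} m/s")
```

For N = 30 this reproduces the ~200 second first product and roughly one product per 100 seconds quoted above.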
Assembly module failures
If an assembly module fails (which we assume can be detected, if not immediately then
by the subsequent assembly module), then other assembly modules can take over the task
of producing the output which should have been produced by the failed module. As each
assembly module accepts parts from four subassembly modules, the failure of a single
subassembly module implies that the assembly module will receive parts from only three
subassembly modules. Provided that at least one of the three surviving subassembly
modules can be reprogrammed to build the subassemblies that would have been produced
by the failed subassembly module, then the manufacturing process can continue, although
with modest delays. Provided that module failure rates aren't too high (a failure rate of
0.1 percent or even somewhat higher could be tolerated, which is a very high failure rate
in a molecular manufacturing system) the system could continue to function.
To show that assembly module failures need not result in a major disruption of the
manufacturing process, we consider a very simple (and inefficient) method of scheduling:
we slow the manufacturing process by a factor of two. This implies that each assembly
module is producing output 50% of the time and is idle 50% of the time. The total time to
manufacture a product is doubled. With this assumption, the output of a failed assembly
module could be entirely replaced by running an adjacent assembly module 100% of the
time instead of 50% of the time. If two adjacent modules are used to replace the output of
a failed assembly module, then they only need to operate 75% of the time.
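The duty-cycle arithmetic can be made concrete. A minimal sketch (Python; names are illustrative), assuming the half-speed schedule just described:

```python
BASE_DUTY = 0.5  # each module runs half the time under the slowed schedule

def duty_after_failure(neighbors):
    """Duty cycle each of `neighbors` surviving modules needs in order to
    absorb a failed module's 50% workload on top of its own."""
    return BASE_DUTY + BASE_DUTY / neighbors

print(duty_after_failure(1))  # one adjacent module takes over: 1.0
print(duty_after_failure(2))  # two adjacent modules share the load: 0.75
```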
Alignment of parts
Alignment of parts to within a fraction of an atomic diameter can be accomplished
directly in the earlier stages. As the parts become larger some aids in alignment might be
necessary, particularly if alignment to within a fraction of an atomic diameter is desired.
Several approaches for doing this are feasible. One particularly straightforward approach
(discussed in Nanosystems, (Drexler, 1992)) is to use alignment pegs. If two surfaces are
to be joined, one surface might have protruding pegs while the other would have
corresponding holes. By tapering both the end of the peg and the start of the hole, even if
the two surfaces are initially misaligned, the peg will enter the hole. As the two surfaces
approach more closely, they will be forced into alignment by the peg and hole.
Precise alignment will be particularly important if wires or other fine structures must be
precisely joined. A variety of alignment methods are feasible.
Accurate alignment of parts in vacuum should also allow reactive surfaces to be bonded
together with great precision. While a wide range of possible surfaces exist, one surface
that might be worth further investigation is the diamond (110) surface. This surface does
not reconstruct, and so bringing together two diamond (110) surfaces in vacuum might be
an effective method of directly bonding two diamond blocks to each other. Modeling of
the energy released during this process and its possible adverse effects on the diamond
lattice in the vicinity of the plane joining the two blocks would be useful. If the energy
released during bonding causes significant damage, inclining the two surfaces with
respect to each other at a small angle and then joining them slowly should create a more
controlled release of energy at the line of joining.
The first few stages
The analysis given here is based on simple scaling laws. Various issues influence the
validity of the scaling laws in the first few stages. At the smallest size molecular "parts"
might well be found in solution. It will be necessary to bind and orient suitable feedstock
molecules at a speed commensurate with the needs of the rest of the system if the
indicated speeds of manufacturing are to be achieved. Nanosystems proposed the use of
variable affinity binding sites to orient molecules from the feedstock and bring them into
the system. High concentrations of feedstock molecules could be employed to ensure that
this process takes place rapidly. In the extreme case, the "solvent" could itself be the
feedstock molecule used in further assembly steps. More generally, if the speed of
operation of the binding sites is insufficient to provide feedstock molecules at a sufficient
rate, then more binding sites can be employed. The area exposed to the feedstock solution
can be increased by using relatively straightforward methods (e.g., multiple thin sheets
which have binding sites on their surfaces, etc). It is not expected that this will pose a
fundamental problem.
Another issue that arises in the first few stages is the scaling of the control system. At the
macroscopic scale, we are used to the idea that a computer is much smaller than the
robotic arm which it controls. For the very smallest scales, this assumption is dubious. A
simple 8-bit processor might occupy a cube about 100 nanometers on a side. If we
assume the linear dimensions of each assembly module are only a few times larger than
the size of the components to be assembled, then our 8-bit processor will dominate the
volume requirements of the assembly module for the first few stages. If the first stage
handles one nanometer parts, then our simple scaling laws suggest that the first stage
assembly module will have linear dimensions perhaps three times that size (e.g., 3
nanometers). As our 8-bit control processor is over 30 times larger than this (in linear
dimensions) the validity of our scaling laws is evidently open to serious question.
One approach to this problem would be to "special case" the first few stages. By
employing hard automation and eliminating their flexibility, the control requirements
could be substantially reduced or effectively eliminated. This seems like a good idea, as
the advantages of hard automation over more flexible methods should be substantial. By
combining a relatively modest number of standard 100 nanometer parts we could make a
remarkably wide range of products.
If, however, we want a manufacturing process which can more closely approach the
ultimate limit of being able to manufacture almost any product consistent with the laws of
chemistry and physics, then we could adopt a more flexible approach: we could allow the
first few stages to deviate from the simple scaling laws used for the rest of the stages.
This is feasible because the first few stages occupy very little volume: if we continued to
follow the same scaling laws as used for the later stages, the first few stages would be
little more than a very thin "skin" at the beginning of the assembly process. We can
deviate from the scaling laws adopted for larger stages by increasing the thickness of this
"skin."
We could eliminate the first few stages and replace them with special assembly modules
that (a) accept as inputs parts of about 1 nanometer but (b) produce as output parts of
about 100 nanometers in size. This means that the volume occupied by the control system
is roughly similar in volume to the rest of the special assembly module. Of course, this
means that the first 6 to 7 "normal" stages have been eliminated. If we simply had one
layer of special assembly modules, we would find that they were too slow to feed the rest
of the system. As suggested above, we can increase the number of special assembly
modules and let them occupy a larger volume.
We have been assuming that each assembly module would assemble about 8 parts while
operating at a certain characteristic speed. Our special assembly modules will operate at
the characteristic speed of stage 6 or 7, but will be handling (2^6)^3 / 8 = 32,000 to
(2^7)^3 / 8 = 262,000 times as many parts. Another way of phrasing this is to point out
that a cube 100 nanometers on a side is composed of about 1,000,000 one-nanometer
parts. As a consequence, our special assembly module will produce output parts at a speed roughly
100,000 times slower than it would produce output parts which had only 8 sub-parts in
them. This implies that we need roughly 100,000 times more special assembly modules
to produce parts at the same rate as would otherwise have been produced by the 7th stage
modules if we had kept using our original scaling laws. As the 6th or 7th stage modules
are a few hundred nanometers in size, we must increase the thickness of the first 6 to 7
stages to a few centimeters (100,000 times a few hundred nanometers).
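The part counts and the 100,000-fold multiplier above can be verified directly. A sketch (Python; the function name is illustrative) giving the exact values behind the rounded 32,000 and 262,000 figures:

```python
def special_stage_factors(n):
    """A special module assembling a 2**n nanometer output from 1 nanometer
    parts handles (2**n)**3 parts per output instead of the usual 8."""
    parts = (2**n) ** 3
    slowdown = parts // 8   # extra work relative to an 8-part assembly
    return parts, slowdown

for n in (6, 7):
    parts, slowdown = special_stage_factors(n)
    print(f"stage {n}: {parts:,} parts per output, {slowdown:,}x slower")
```

The stage 6 and 7 values (32,768x and 262,144x) bracket the roughly 100,000-fold increase in the number of special assembly modules estimated in the text.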
It should be emphasized that this increase in volume is based on the assumption that we
wish to retain the greatest possible flexibility in control over the first few stages by using
one computer per assembly module.
Limitations
Convergent assembly is likely to prove most useful in the manufacture of relatively rigid
structures made from rigid components. This should include computers with a large
number of molecular logic gates densely packed into a three dimensional volume;
structural elements useful in load bearing applications in cars, airplanes, bicycles,
buildings, furniture, etc.; high density memories; and many more. In aerospace
applications the great strength-to-weight ratio of diamond (over 50 times that of steel)
should prove particularly useful.
Although each stage in our example system was a factor of two larger than the preceding
stage, it is not clear that a factor of two is the most appropriate value for all (or even
most) manufacturing applications: larger factors might prove more convenient for some
operations. If we wish to bolt two parts together, the bolt might be an order of magnitude
smaller than either part. This can be accommodated by passing the bolt through several
stages of binary convergent assembly and finally using it in a stage which can
accommodate much larger parts. More generally, small parts can, if necessary, be passed
through several stages unchanged if this should prove convenient or necessary. The
inefficiencies of this approach might be offset by making many small parts and passing
them through succeeding stages in a batch (e.g., one might make a box with many bolts
in it, rather than making a single bolt and passing it alone through several successively
larger and increasingly underutilized stages).
Alternatively, it might be advantageous to increase the size of the succeeding stage by a
factor larger than two. Complex patterns in which different stages or even different
assembly modules in each stage increase in size by individually selected amounts might
prove useful.
The greatest limitation of convergent assembly as presented here is the requirement that
the product be relatively stiff. Easily deformed, liquid or gaseous components are most
problematic. An inflated balloon (with thanks to Josh Hall for suggesting this example)
does not appear well suited to convergent assembly. Many products, however, could be
initially manufactured in a more rigid form (e.g., at lower temperature or with supports or
scaffolding to provide the desired rigidity) and then later allowed to assume the desired
less rigid state (by removing the scaffolding, warming, etc).
Other approaches
We have considered a relatively straightforward system in which parts move only in one
direction. Parts could also move laterally and backwards, which might be useful in
bypassing failed modules.
Convergent assembly is only one approach to making large objects from small objects.
An oak tree does not use convergent assembly, but quite effectively makes something
large starting with only a small seed. We have considered an approach in which relatively
rigid "parts" have approximately similar X, Y and Z sizes. It would be possible to
consider "parts" which are basically one dimensional. This might be useful in the
manufacture of products similar to cables, in which long thin strands are woven together
into a final product. Two dimensional "parts," e.g., sheets, could also be produced and
then pressed together into a final product.
Conclusions
Convergent assembly is based on the idea that smaller parts can be successively
assembled into ever larger parts. The particular architecture proposed should be able to
produce meter-sized products in a few minutes from nanometer-sized parts while going
through about 30 stages. At each stage the parts are approximately double the size of the
parts from the previous stage and half the size of the parts in the succeeding stage. These
conclusions are based primarily on simple scaling laws. Deviations from simple scaling
laws that might occur when dealing with smaller parts should not substantially change
these conclusions.
Convergent assembly has many clear advantages but seems ill-suited to the manufacture
of flexible, liquid or gaseous products. It is not the only method of making large products
from small parts. Further research into both convergent assembly and other approaches
appears worthwhile.
References

Drexler, K.E. (1992) Nanosystems: Molecular Machinery, Manufacturing, and
Computation. Wiley.

Merkle, R.C. (1997) "A new family of six degree of freedom positional devices,"
Nanotechnology 8, pages 47-52.