|Life \ Repair Time|1 Hour|10 Hours|100 Hours|
|1 Month (30 days)|7.413| | |
|1 Year (365 days)|1094.52| | |
|10 Years (3650 days)|109,549.9| | |
On-board repair can dramatically extend total system life beyond what is feasible with simple redundancy. Very long life depends on having a very low probability of failure for the remaining manufacturing or infrastructure subsystems during the period it takes to rebuild and reinstall a failed subsystem.
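To make this dependence concrete, the effect of the repair window on system life can be sketched with a small Monte Carlo model. This is an illustrative toy, not the analysis behind the table above: it assumes a two-unit redundant system with exponentially distributed failures, where the system dies only if the surviving unit fails while its partner is being rebuilt. The function name and all parameter values are assumptions.

```python
import random

def mean_system_life(mtbf, repair_time, trials=10000, seed=42):
    """Monte Carlo estimate of mean life for a two-unit redundant system
    with on-board repair. Units fail independently (exponential, mean
    `mtbf`); a failed unit is rebuilt in `repair_time`; the system dies
    only if the surviving unit fails before the rebuild completes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            t += rng.expovariate(2.0 / mtbf)   # first failure of the pair
            s = rng.expovariate(1.0 / mtbf)    # survivor's remaining life
            if s < repair_time:                # second failure during rebuild
                total += t + s                 # system life ends here
                break
            t += repair_time                   # rebuild done; pair restored
    return total / trials
```

With a 1,000 hour subsystem MTBF and a 10 hour rebuild, the estimated mean system life comes out near mtbf^2/(2 x repair_time) = 50,000 hours, versus roughly 1,500 hours for the same redundant pair with no repair at all, showing how strongly achievable life depends on keeping the repair window short.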
One of the most general concepts of the logical core architecture is "surge" performance. By using on-board manufacturing, large amounts of physical hardware to fully accomplish some function can be built, creating a significant temporary increase in the system's ability to perform that function.
To illustrate surge potential, the simplest case is used, of a subsystem with capacity related linearly to subsystem mass. To help make the idea concrete, the function of computation of a problem of known computational demand, running an algorithm that implicitly is massively parallelizable, is shown.
Equations of the form below are applicable to any function where capacity equals subsystem mass. This general approach of surging capacity to provide extra capacity at certain times in a mission can be used for many other purposes. In many cases, the relationship between system mass and system performance will be non-linear, even step functions in some cases, and equations of surge function equivalent to those below will have significantly different form.
Assume a computer of power (instructions per second) of Cg, and a problem of estimated size P (instructions). The estimated run time is merely:

Trun = P / Cg (11)
The general manufacturing available in the logical core architecture and other potential MNT-based systems, however, can be used to build additional computer power. If there is G general manufacturing capacity (in mass of product per unit time), used to produce more general computing power with a density d of instructions per second per gram, and taking a period of Tcycle from start until the first manufacturing is complete, then the period to solve a problem of size P can be estimated as:

Trun = P / Cg for P <= Cg x Tcycle

or

Trun = Tcycle + [sqrt(Cg^2 + 2 G d (P - Cg x Tcycle)) - Cg] / (G d) for P > Cg x Tcycle (12)
If P >> Tcycle x Cg, then there is a strong incentive to build significant additional computational capacity to help solve the problem. If building additional computational capacity to solve a particular problem, however, it makes sense to build special-purpose computers tailored to that problem. While the details vary by problem, [Moravec, 1988] suggests that a 1000-to-1 advantage for special-purpose computation is a reasonable rule of thumb.
Assume, therefore, that the additional computational capacity built is special purpose, with the ability to perform on this problem an equivalent number of instructions per second per gram of s. (This probably represents a smaller number of instructions per second, but significantly more powerful instructions.) Estimating Trun then only requires replacing the general computing density in equation (12) above with s:

Trun = P / Cg for P <= Cg x Tcycle

or

Trun = Tcycle + [sqrt(Cg^2 + 2 G s (P - Cg x Tcycle)) - Cg] / (G s) for P > Cg x Tcycle (13)
The strategy embodied in equation (13) is to begin working on the problem, and meanwhile devote the fabrication capacity fully to building further computational capacity, which begins to contribute to the calculations as soon as it is built. The result is a linear increase in computational capacity with time, and a parabolic increase in computations over time.
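Because built capacity grows linearly, completed work by time T is Cg x T plus the integral of that capacity, (G x density / 2) x (T - Tcycle)^2, and setting this equal to P gives a quadratic in T. A minimal sketch, where `density` is a stand-in symbol for the instructions-per-second-per-gram figure and all numbers in the usage example are assumptions:

```python
import math

def surge_run_time(P, Cg, G, density, Tcycle):
    """Run time to complete P instructions when a base computer of power
    Cg is supplemented, after a startup lag Tcycle, by new computing
    capacity built at rate G (mass/time) x density (instructions/s/gram).
    Solves P = Cg*T + (G*density/2) * (T - Tcycle)**2 for T >= Tcycle."""
    if P <= Cg * Tcycle:           # finished before new capacity arrives
        return P / Cg
    a = G * density                # growth rate of computing capacity
    tau = (math.sqrt(Cg**2 + 2 * a * (P - Cg * Tcycle)) - Cg) / a
    return Tcycle + tau
```

For example, with Cg = 10 instructions per time unit, G = 1 gram per time unit, density = 2, and Tcycle = 100, a problem of 10^6 instructions completes in about 1,095 time units, versus 100,000 time units for the base computer alone.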
Within an MNT-based system such as the logical core architecture, the manufacturing can be used to surge the amount of any subsystem. This includes manufacturing itself. By first building more manufacturing capacity, and only then, after some exponential buildup, building more computational capacity, the time to complete some very large problems can be greatly reduced. Let Tcyclem be the period it takes for the existing manufacturing capacity to build an equal amount of additional manufacturing capacity (its doubling time). Let Tbuildm be the period spent building up manufacturing capacity before switching over to building computational capacity. The problem size, P, that is solved in time Trun, is then approximately:
The actual function of productive capacity with time, buried within the second term of (14), may be stepwise discontinuous, but for small enough steps (which seem reasonable for large systems built using molecular nanotechnology) the continuous exponential growth implied in (14) is a reasonable approximation.
Since the specialized computation expressed by s is more productive than the general-purpose computation expressed as Cg, and furthermore, when solving large problems, the produced computational equipment will greatly exceed the pre-existing computational equipment, the first term in (14) is small for large problems. Dropping that term allows an analytic solution for the optimum Tbuildm:
Substituting that solution into (14) and solving for Trun gives an estimate of the run time under this build-then-solve strategy, as a function of problem size:
With an algorithm based on building additional computing capability, however, and especially when the algorithm relies on exponential growth, there is a real danger that a plan for a very large computation will outgrow the resources available for building computers. This can be addressed by setting a total mass, Mbudget, available for use. Assume for simplicity that this mass is directly interchangeable between manufacturing and computers. Equation (16) then remains valid as long as:
This now raises an interesting question. If the desired surge manufacturing and computational capacity to solve a problem exceeds the mass budget, how should the budget be partitioned over time between manufacturing and computation? For somewhat oversized problems, the likely strategy is to build less manufacturing capacity and spend a longer time building computational capacity, but to build a higher total of computational capacity by using more of the total Mbudget. In this approach, the amount of additional built computational capacity is:
and the time then spent building that capacity is:
A more detailed model would also consider disassembly capacity. Once the entire Mbudget is expended, the surge manufacturing capability would be disassembled, and converted into further computational capacity. For very large problems, the time to surge computation will be small compared to the total run time, and thus Trun can be approximated as:
While this introduction assumed a simple model of problem computability, actual approaches must be algorithm-specific, depending on the problem to be solved. Additional computational mass will be most useful in running purely parallel algorithms. Inherently sequential problems may have algorithm elements, such as pivoting a large matrix, that can be accelerated with special-purpose hardware, but that hardware will be sized to the problem and may provide no further benefit from additional mass.
Furthermore, this analysis has assumed that the run size of a problem is known. In general, it is not known ahead of time how long an algorithm must run before it stops. In these cases, a mixed strategy of building additional computational capacity and additional fabrication capacity in parallel may be best. In addition, there are trade-offs between algorithms, and between specialized and general computational devices, across the span of computations during the mission. Special-purpose hardware that seems likely to be used again soon should not be disassembled at the end of a calculation. An agoric system [Miller & Drexler, 1988] may be the most reasonable way to handle these complexities.
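The build-then-solve strategy, including the Mbudget cap and the final disassembly of manufacturing into computers, can be explored with a short discrete-time simulation. This is a sketch under simplifying assumptions (each gram of manufacturing fabricates k grams of product per time step, conversion at the budget limit is instantaneous and lossless, and all parameter values in the example are invented):

```python
def build_then_solve(P, Cg, M_man0, k, density, T_build, M_budget, dt=1.0):
    """Time to complete P instructions under the build-then-solve strategy.
    For T_build time units the manufacturing mass M_man grows exponentially;
    afterwards all output becomes computer mass; once the mass budget is
    reached, the manufacturing itself is disassembled into computers."""
    M_man, M_comp, done, t = M_man0, 0.0, 0.0, 0.0
    while done < P:
        output = k * M_man * dt                # mass fabricated this step
        if t < T_build:
            M_man += output                    # phase 1: grow manufacturing
        else:
            M_comp += output                   # phase 2: grow computers
        if M_man + M_comp > M_budget:          # budget hit: disassemble the
            M_comp, M_man = M_budget, 0.0      # manufacturing into computers
        done += (Cg + density * M_comp) * dt   # computation this step
        t += dt
    return t
```

Runs of this sketch show the qualitative behavior described above: for large P, spending an initial period growing manufacturing before switching to computers sharply reduces total run time, while a tight mass budget lengthens it again.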
Similar calculations can be made for power estimates. Indeed, the power (and cooling) budget is often likely to be more restrictive of computational capacity than the mass budget.
While this analysis did not consider the details of the feedstock mass and how it supported the possible manufacturing and computer components, that also must be properly handled in a working system. In physical surging there is always a trade-off between storing more elemental, widely usable feedstocks such as small, simple molecules, and storing more specialized units that can be rapidly combined for functional purposes, such as prefabricated logic units.
Even for mass fully dedicated to a built computer, some mechanical rearrangement for specific computations may be promising. A mechano-electric computer could involve the mechanical arrangement or rearrangement of pre-defined logic units, creating a special-purpose electric computer for the problem at hand much more quickly than building it entirely from scratch, a process nearly as fast as having the special-purpose computer already on hand. Quantum computing, where the algorithm depends finely on the physical structure, may be particularly well suited to this hybrid approach.
Particular instantiations of the logical core architecture will have their own physical framework. Many different physical styles can be imagined, but, with the ability to create stable scaffolding, it should be possible to move between radically different physical styles over the course of a mission.
For example, imagine a system physically composed of a modular space frame with various subsystems attached to it as "boxes": computers, molecular manufacturing systems similar to the exemplar architecture in chapter 14 of [Drexler, 1992a], disassemblers, storage tanks, external sensors, solar panels, cooling panels, engines, etc. Pipes and cables could run along the frame, or in conduit within the frame. Locomotion within the system could use tracks, frame-walking, or both.
A radical physical change would be into a cloud of utility fog [Hall, 1996], a cloud of small, discrete, inter-grasping units, plus some additional items in the cloud that perform logical core architecture functions that pure utility fog omits. To begin the transition, the space frame system disassembles its least necessary components into raw material, while simultaneously producing utility foglets. Software for those foglets is loaded from the Archive, and over time the contents of the space frame Archive are copied into the growing mass of data storage spread across the foglet computers. The foglets link, holding to each other, while some attach to what will be the last bit of the old space frame system. The foglets also distribute power and communications through and across their cloud. Initially, the space frame equipment moves parts to the space frame's disassembler, but at some point in the process, the utility fog can take over the task of transporting elements to the disassembler. Since foglets cannot perform general fabrication or fine disassembly, special subsystems designed to perform these functions, and embedded in the utility fog, must also be produced. Tiny amounts of material storage can be provided by each foglet. If the total mass contains too much of some element(s) to be fully stored across the foglets, then special-purpose storage containers will also have to be included in the cloud.
The end state is a cloud of utility fog with some general manufacturing device(s), some foglet disassembly device(s) able to disassemble even damaged foglets, and possibly some foglet repair devices, embedded in the cloud.
There are many possible frameworks, and corresponding examples of potential transformations between frameworks.
The system may go through mission phases where there is very little the system can or should do. For example, if riding a long-period comet, the system may wish to shut down for the long period when it is far from the sun.
The system could hibernate during these times, dropping to near-zero or zero operations and power for an extended period, and later restart. To restart, the system must either continue some function at a low trickle-power level, such as a clock or a watch for restart conditions, or it must be actively restarted by its environment.
Before simply shutting down, the system may want to surge Data Storage, reducing nearly everything to data. After hibernation, the system would reboot from the Archive. This might be a useful technique when crossing interstellar space near the speed of light, and thus experiencing a massive radiation load for a fixed period of time. Since the system is not reviewing and correcting the Archive data, the data will eventually degrade beyond recovery, setting a limit on how long the system can hibernate. To maximize potential hibernation time, the data should be encoded into stable media using an error-correcting code. Since computational capacity can be surged after hibernation, the error-correcting code should use context-heavy redundancy rather than simple data repetition.
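As a toy illustration of redundancy richer than simple repetition, consider a Hamming(7,4) block code, which protects 4 data bits with 3 parity bits (75% overhead, versus 200% for triple repetition) and corrects any single flipped bit at the cost of a small decoding computation. A real archive encoding would use far stronger codes; this sketch only shows the principle:

```python
def hamming_encode(d):
    """Encode 4 data bits (list of 0/1) into a Hamming(7,4) codeword.
    Classic layout: parity bits at positions 1, 2, and 4 (1-based)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Correct any single flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based index of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]
```

Decoding costs more computation than majority-voting repeated copies, which is exactly the trade suggested here: accept decode-time computation (surgeable after hibernation) in exchange for denser, more robust storage.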
Three example multi-phase space mission profiles, illustrating different capabilities, are presented. They are asteroid settlement, Martian terraforming, and an interstellar seed. Each is described narratively.
Start by launching the system conventionally from Earth. Once beyond the atmosphere, the system reconfigures so that most of the mass is dedicated to solar power and ion engines. This will allow propulsion with very high specific impulse and high acceleration [Drexler, 1992b], allowing the system to quickly reach any asteroid in the solar system. Travel to a carbonaceous asteroid. Once there, begin excavating the asteroid, using the mass to build up extractive and processing capability. The system then grows to consume the mass of the asteroid. This larger system can now be put to many purposes.
For example, if 10% of a 1 km diameter asteroid orbiting at 2.4 AU is converted into the equivalent of a 1 micron thick sheet of solar panels, computers and cooling, then the structure can present an area of 5.2 x 10^7 square km to the sun, and should be able to generate roughly 10^41 bits of computation per second, if not more with aggressive use of reversible computation. Note that this physical layout is compatible with the reversible 3-D mesh structure that Frank considers "an optimal physically-realistic model for scalable computers." [Frank, 1997] Using the estimates in [Moravec, 1988], this is computationally equivalent to roughly the total brain power of 10^26 people, which significantly exceeds the 1.4 x 10^16 people that Lewis estimates could be supported as biological inhabitants on the resources of the main belt asteroids [Lewis, 1997]. Even with significant software inefficiencies, such capacity could quickly perform large amounts of automated engineering (as defined in [Drexler, 1986a]). An option is to surge the computers, develop a number of designs, save those in compact form in the Archive, and then disassemble most of this computer for other purposes.
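The quoted sheet area follows from straightforward geometry, assuming the 10% figure applies to the asteroid's volume; a few lines to check it:

```python
import math

# Sanity-check the asteroid-conversion figure: 10% of a 1 km diameter
# sphere, spread into a sheet 1 micron thick.
volume_m3 = (4.0 / 3.0) * math.pi * 500.0**3   # sphere of radius 500 m
sheet_m3 = 0.10 * volume_m3                    # 10% of the volume
area_km2 = sheet_m3 / 1e-6 / 1e6               # 1 um thickness; m^2 -> km^2
print(f"{area_km2:.2e} square km")             # ~5.2e7, matching the text
```

The later figures (bits of computation per second, equivalent brain power) depend on assumed computer efficiencies and are not checked here.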
Whether or not this quantity of pre-computation is useful, the system eventually reconfigures itself into a settlement, along the lines proposed by O'Neill [O'Neill, 1974], [O'Neill, 1976], [Johnson, 1977], [McKendree, 1996]. It would be convenient for the settlement to be arranged so that its rotational axis is parallel to its orbital axis. One advantage is that the system could then grow cables and provide momentum transfer for solar system traffic to, from, and through this logical core architecture system.
Over long periods of time, the system might use solar sailing to move to a more favorable orbit. This could be less inclined to the plane of the ecliptic, to ease travel to and from the rest of the solar system, or more inclined to it, if there is contention for solar output. It would probably be more circular, to regularize the received solar flux, and perhaps closer to the sun, to increase the density of solar flux. Finally, it would be managed to avoid collisions.
Launch the system from Earth. Once above the atmosphere, it reconfigures itself, dedicating much of the system mass to solar sails; using MNT these can have significant capabilities [Drexler, 1992b], [McKendree, 1997]. This allows the vehicle to transit to Mars. On approach the system builds an aerobrake, and is captured into a polar orbit by the Martian atmosphere. The system then converts the bulk of its mass into a large aperture mass spectrometer, to map the elemental composition of Mars in detail. The system might abort its mission at this point, notifying Earth and waiting for instructions for a new mission, if it does not find suitable conditions to continue, such as adequate water.
Assuming the system continues, it then selects one or more promising landing sites, and reconfigures itself into one or more atmospheric entry capsules that land on the surface. Building up from local resources, the system grows itself into an assembly of inflatable cells that cover the Martian surface with a thin plastic canopy holding a growing atmosphere, as in [Morgan, 1994]. This allows terraforming Mars while supporting early shirtsleeve habitability.
This mission profile uses a number of different platforms, which need not be, but could be, implemented using the logical core architecture.
Launch a system from Earth, which then travels to Uranus and enters its atmosphere. The system configures itself into a balloon structure floating in the atmosphere, mines He3 from the atmosphere, and ultimately rockets out with a payload of liquefied He3, as sketched in chapter 13 of [Lewis, 1996].
One approach for the next step is to divide into several daughter systems at this point, each traveling to a different asteroid. For reasons that follow, these should include at least one main-belt asteroid, potentially a Trojan asteroid, at least one Kuiper-belt object, and possibly a long-period comet, if one can be found in an appropriate orbit. Indeed, it might be helpful if every asteroid selected were an extinct comet, as these should contain significant quantities of hydrogen. Bootstrap the system at each location using each asteroid's resources.
Launch another system from Earth, which transforms the majority of its mass into a solar sail. That solar sail decelerates, falling close towards the sun, and then accelerates strongly during and after its solar fly-by.
Use the asteroids as bases to beam acceleration to the outgoing vehicle. Many schemes are possible. The one suggested here is to shine lasers from each asteroid, powered by H-He3 fusion, toward the departing system's solar sail, similar to [Forward, 1984].
The vehicle travels out of the solar system, accelerating to a significant fraction of the speed of light on a trajectory to a nearby star system. Once accelerated, it reconfigures itself, converting the significant mass fraction of the solar sail into shielding against the radiation bombardment the vehicle experiences as it travels at high speed through the interstellar medium. The system hibernates, with the Archive held in a highly recoverable redundant encoding. Most of the Archive should be so structured from the beginning.
Later in its flight, the system reactivates and reconfigures itself into a magsail [Zubrin & Andrews, 1991], a superconducting loop of current that generates thrust by deflecting the charged particles in space. This allows the vehicle to slow down, incidentally extracting energy (which allows the system to resume maintenance functions). The system may release probes before fully slowing down, which would fly through the target system gathering data for broadcast back to the main system, which can use that information to prepare for arrival. The system arrives at the target star system, surveys it, and travels to a small body, probably similar to a carbonaceous asteroid or a comet.
Once there the system bootstraps itself from the local resources. It then builds a receiver, and listens for data and instructions to update its mission plan. Meanwhile, it may send daughter systems to other small bodies, and bootstrap local capabilities significantly. Subsequent potential actions include building up information and products based on received instructions, possibly building local settlements, and building daughter seeds and sending them on to further star systems.
The "seed" that crosses interstellar space may well mass significantly less than a human being, and nonetheless be able to support the transmission of significant numbers of humans as information content across the interstellar divide [Reupke, 1992], by bootstrapping equipment that receives and assembles them.
This is similar to the Freitas strategy for galactic exploration [Freitas, 1980]. Beyond the technical details of implementation, the major difference is that the Freitas probes were meant to be search probes, with fixed programming replicated across the galaxy, whereas in this mission each vehicle arrives as the first step in creating a local homestead, fully benefiting from the resources of each system and creating a spacefaring civilization of communicating, settled solar systems. Inhabited civilization closely follows the frontier, and over time can refine the programming of the leading-edge probes.
A logical core architecture system is similar to a computer, in that it has the intrinsic capability for a tremendous range of function beyond any particular preplanned functions specifically intended for a given mission. Thus, when confronted by an unplanned challenge, the logical core architecture may be able to take action outside its original mission scope to adequately respond to the challenge. This could involve producing novel components for its interface layer, beyond what is simply catalogued in the Archive, or even reconfiguring the system to implement some novel tactic.
Given sufficient advances in automated engineering [Drexler, 1986a], novel components for the interface layer could be designed by the system, in response to the challenge, and even more radical responses might be autonomously derivable.
These mission profiles have been described above as if they ran entirely autonomously, without human intervention. While such a capability may be desirable, especially for interstellar missions, it will often be unnecessary, and adds to the technical challenge. In many cases, the missions could run under manned mission control.
When doing so, the data archive effectively includes all the data and software that mission control has or can generate. JPL has demonstrated massive reprogramming of vehicles in space, and in concert with logical core architecture vehicles a highly skilled mission control could achieve significant results at a distance.
Also, the interface layer could provide life support, allowing the vehicle to be manned, and under local control.
For all the above operations, the purpose of the system is ultimately implemented by the interface layer, with the inner layers merely providing the support necessary for the system to survive and implement its interface layer. Another potential purpose for a logical core architecture system, however, is the maintenance, or the operational continuity, of particular information held within the logical core. The interface layer then implements a subsidiary purpose, survival within the environment of the system.
This purpose could be applied to many elements of information that are valued. To take an extreme, the idea has been suggested [Moravec, 1988], [Minsky, 1994] that the total informational content of a human's mental processes could be abstracted from their biological substrate, and implemented in computers. Estimates for the necessary computational power [Moravec, 1988], [Merkle, 1989], fall well below the 10^15 MIPS Drexler conservatively estimates [Drexler, 1992a] is feasible for an MNT-based mechanical computing system, let alone more advanced computers MNT may enable [Merkle & Drexler, 1996].
Note, if maintenance or the operational continuity of an uploaded mental system is a desired purpose, then one should also consider the costs and benefits of multiple dispersed copies.
The definition of the logical core architecture sets the framework for many open questions. Entire flowdowns of increasingly detailed designs remain. Indeed, since systems should be able to reconfigure between radically different physical implementations of the logical core architecture, a range of different versions needs to be designed.
[McKendree, 1996] discusses how to grapple with the potential of the future through a hierarchy of technology, system designs, operational concepts, and grand strategy. As various implementations of the logical core architecture are developed, we need corresponding operational concepts, describing how those approaches would execute desirable missions. Validation of those operational concepts by analysis should then prompt thinking about the sorts of strategies such systems enable or require.
Two specific questions raised by the architecture: what is the minimal system mass needed to implement a logical core architecture system, and does a robot bush [Moravec, 1988] contain sufficient intrinsic capability to count as an implementation of the logical core architecture?
In the stepwise refinement [Rechtin & Maier, 1997] of this architecture, general operating policy questions loom large. Surge performance with time remains to be derived for various non-linear functions of performance with subsystem mass. Beyond replacement on failure, maintenance policy must consider scheduled replacement, which can be very powerful when using subsystems with failure rates that increase with time, like the exemplar architecture in [Drexler, 1992a].
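The advantage of scheduled replacement for wear-out (increasing-hazard) subsystems can be illustrated with a small Monte Carlo sketch; the Weibull model, the shape value, and all other numbers are assumptions for illustration only:

```python
import random

def in_service_failures(shape, replace_at, horizon, trials=20000, seed=1):
    """Expected number of in-service failures over `horizon` time units
    for one subsystem slot, when each unit has a Weibull life (scale 1.0,
    increasing hazard for shape > 1) and is preventively replaced at age
    `replace_at`. Set replace_at >= horizon for run-to-failure."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        t = 0.0
        while t < horizon:
            life = rng.weibullvariate(1.0, shape)
            if life < replace_at:
                failures += 1       # failed before scheduled replacement
                t += life
            else:
                t += replace_at     # replaced on schedule, no failure
    return failures / trials
```

With shape = 3 (a strongly increasing failure rate), replacing units at 30% of characteristic life cuts in-service failures by roughly an order of magnitude; with shape = 1 the lifetime is memoryless, like the exponential model used earlier, and scheduled replacement buys nothing. This is why the policy choice depends on the subsystem's failure-rate curve.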
Efficient computation algorithms that integrate the potential for surge computation and surge data caching remain to be designed. Heuristics for good practice, let alone optimal algorithms, are unclear. Algorithms should be examined that use in-progress fabrication of selectable special-purpose manufacturing capacity. For example, the simple model presented in 4.2 only works when the same special-purpose computational devices are useful throughout the run of the algorithm, whereas the utility of a particular piece of computational hardware may not be evident until partway through an algorithm. Furthermore, some computational devices will scale in an algorithm, whereas other devices (e.g., a device to invert a matrix sized to the problem in one clock cycle) may have no utility until a large increment of computational hardware is complete.
There are many operational trade-offs that are expressions of the general trade-off between using less efficient general-purpose equipment and carrying various amounts of more efficient special-purpose equipment. These trade-offs become more complicated when also considering the possibility of surging various capacities. They require balancing mass budgets and quantities of various equipment, along with power and response timelines, and still leave the question of balancing short-term use of specialized equipment against potential reuse of more generalized equipment.
If uploading is possible, and uploading into a logical core architecture is desirable, then an open policy question is "would it be preferable to devote one Logical Core Architecture system each to one upload, or to host a colony of uploads within one system?"
Looking more generally at the field of MNT-based system design, new functions require the definition of new performance metrics. Many forms of performance have standard metrics: resistance, which characterizes the conduction of current; yield stress, which measures a material's ability to carry force; capacitance, which measures ability to store charge; maximum velocity; vehicle range; and, for estimating computing speed, instructions per second. An important category of MNT-based performance is "morphability," the ability of the system to reconfigure itself, but no standardized metrics are defined for it. Many different polymorphic systems are possible, and metrics are needed to quantify them. More than one metric may be required. Issues that should be addressed include how quickly the system can change form, how many different forms it can change between, and how different in dimension, characteristics, and function those forms are. At least some metrics should integrate across forms, to characterize, for example, a system that can quickly change among a small set of forms and more slowly change among a wider set of forms.
Given a capability for fairly general-purpose manufacturing and disassembly, as molecular nanotechnology promises, along with sufficient control, systems with a logical core architecture are feasible. The four-layer architecture defined here maintains a purely informational center, hosts that and other core system functions in replaceable and radically reconfigurable hardware, is able to surge the capacity of any core function, and interacts with its environment through instantiated components drawn from a potentially vast set of virtual components. The architecture offers the potential of long life through remanufacture of subsystems. It is "a pile of atoms. And when you want something else, it rearranges itself into another pile of atoms" [Merkle, 1994].
The logical core architecture forms a basis for developing system concepts that exploit the particular capabilities of MNT. As such, it provides another step to help in thinking about the future that will unfold from the development of MNT.
Finally, this architecture is applicable for purposes well beyond space operations.