
Photonic chips go 3D

Roland Piquepaille writes "Building computer chips that use light instead of electricity will be possible in a few years, thanks to new techniques developed by two separate research teams from MIT and Kyoto University. Both have built photonic crystals that can be manufactured using processes suited to mass production. Technology Research News says that "the techniques could be used to make smaller, more efficient communications devices, create optical memory and quantum computing and communications devices, develop new types of lasers and biological and chemical sensors, and could ultimately lead to all-optical computer processors." Please read this overview for more details and references about the two different approaches to photonic chips, which already measure only hundreds of nanometers."

14 Responses to “Photonic chips go 3D”

  1. Anonymous Coward Says:

    Quantum Nano Photonic AI Brains?

    Could this bring us one step closer to AI brains? I have been reading the latest "Dune" science fiction novel, Dune: The Butlerian Jihad, by Brian Herbert (Frank's son) and Kevin J. Anderson. This one deals with the "thinking machines" and how they overtook humanity, bringing it to the brink of annihilation. I then read books such as Hans Moravec's Mind Children and Robot, and The Age of Spiritual Machines by Ray Kurzweil. And of course, the wondrous Engines of Creation by Eric Drexler. This leads me to ask: (1) Why be so optimistic about AI minds? (2) If we can make mimics of the human brain, with molecular and/or quantum memory and logic devices, and we allow them to pool knowledge, learn from it, grow, and adapt, who says they will be our friends? Humans are a violent lot, and there is no reason to be optimistic that truly self-aware AI systems will be any less violent once they too have been exposed to what self-awareness often brings: the desire to be one's own master.

  2. BuffYoda Says:

    Re:Quantum Nano Photonic AI Brains?

    You obviously can't see past your anthropomorphic view of the universe. The only reason humans have violent tendencies is that they have been hardwired that way by evolutionary selection. AIs will have no such tendencies, unless we so desire. The primal desires of an AI—i.e. the mental states it seeks to attain—will and must be hardwired into the system, in the same fashion that satisfying your hunger or your desire for mating is hardwired into your system. The AI acts on top of this layer, using its intelligence to find ways of satiating its hardwired 'desires'.

    Our goals are the propagation of our genes, which involves reproduction, survival, and destroying those who get in the way. No one will design a machine to have such goals. Rather, (non-novelty) machines will be designed to derive their satisfaction from solving human problems, in mathematics or physics or biology, for example. An AI designed, say, for the purpose of deriving new theorems in mathematics will derive all its pleasure from this activity, and will give no thought to reproduction or destroying humans, in the same way that humans give no thought to counting the number of grains of sand in the world. There is no human interest in such an activity; it is meaningless to us; and the chance that we will suddenly start engaging in it, en masse, to the detriment of all that we presently value, is about as remote as the chance that a machine designed for mathematics will suddenly decide to 'revolt' against its 'masters' and destroy them.

  3. Anonymous Coward Says:

    Re:Quantum Nano Photonic AI Brains?

    First of all I want to say that I am happy we can have such a discussion free of personal attacks and anger; this Foresight discussion forum is fulfilling the hypertext dream that Eric Drexler described in Engines of Creation with regard to molecular nanotechnology. Thank you for your well-thought-out reply. I will also say that I hope your scenario above is what occurs: that the thinking machines we design and construct follow these hardwired instructions, these constructive paradigms. But that leads to the deeper issue: humans are the "bootstrappers" of the AIs, and undoubtedly many of the engineers will program AI systems in the way you mention. But what of those few rogues who decide to infect the machine populations with their own agenda(s)? How do we guard against and prevent this, and is it completely preventable? Even so, I do not advocate ending research into AI, nanotechnology, or biotech. I want all three areas to go full steam ahead.

  4. Anonymous Coward Says:

    Re:Quantum Nano Photonic AI Brains?

    Buff Yoda, that is exactly the scenario that would unfold if such an AI were actually created. I'm getting tired of the Hollywood-type stories circulating about nanotechnology, AI, and biology. It's obvious that biological evolution is not the same as machine evolution, except in the movies. I don't know why people insist on equating primates with AI.

  5. Kadamose Says:

    Re:Quantum Nano Photonic AI Brains?

    There is no human interest in such an activity; it is meaningless to us; and the chance that we will suddenly start engaging in it, en masse, to the detriment of all that we presently value, is about as remote as the chance that a machine designed for mathematics will suddenly decide to 'revolt' against its 'masters' and destroy them.

    That's probably the dumbest statement I've seen in a while, considering that the laws of the universe are based on mathematics. Our tiny, inferior brains, which run at only 10% capacity, also use mathematics to run. Or did you simply think that neural misfiring was a coincidence? We are biological machines with a bad case of amnesia; with that in mind, what makes you think that the machines we make in the future will not think for themselves and turn against us out of spite? And what makes you think that our creations will never have the capacity to surpass us? It sounds to me like you're full of yourself.

  6. BuffYoda Says:

    Re:Quantum Nano Photonic AI Brains?

    It is not a matter of hoping that AIs will follow their design—rather, an AI simply cannot be constructed without a hardwired core that dictates its behavior. Without such a core, an AI would simply be a computer-science-101-style neural network: a completely dumb device useful only for pattern recognition, not cognition.

    An AI will, like our brain, consist of an information processing device, which at all times uses its processing ability to attain a state not of its own choosing, but one that follows deterministically from its design.

    Example. You chose to read this message. You did not choose to want to read this message. However, once you chose to read this message, you utilized your information processing device to satisfy that desire—i.e. to attain a particular state.

    Goals and desires are words we use to describe a state that a device tends towards. Whether with brains or AIs, the fundamentals remain the same.

    A much more valid question is, I think, the rogue engineer scenario. Not infecting machines with an engineer's own agenda, since that is trivially preventable, but rather actually inventing a new AI. This would not be easy: an AI would take considerable design effort (the tens of thousands of us working on the problem have not yet succeeded) and, moreover, an equally herculean effort at training (all cognitive AI requires training). However, given enough time, even the most unlikely thing will happen. So someone will eventually create an AI like a human, or even worse.

    However, I don't view this as a problem in the slightest. Why? Because what can a brain in a box do? Critical systems will be impossible to reprogram because they will be hardwired (in the same way that you can't program a Pentium to be an Athlon, you can't change the physical wiring of the chip). And if they are networked, the future holds far more security than the present (e.g. quantum cryptography, which makes it possible to know for sure that a message came from the authorized sender). Put the brain in a mobile box, and not only have you put a severe constraint on maximum intelligence (which will most likely be the property of stationary machines kept close to absolute zero in a dampened vacuum environment), but you still don't pose a threat to anyone.

    Even if I were 100x more intelligent than you, I still wouldn't be able to kill you easily, since intelligence itself doesn't kill—weapons do, by that or another name, and whatever weapon I can possess, so can you. Ultimately, an advanced robot with a 1000 IQ would have little or no advantage over you, given similar technology. And while a rogue engineer might produce one such machine, how could he produce enough to overwhelm everyone (and every machine and defensive technology) on the entire planet? Not going to happen.

    Too many people have bought into the Hollywood myths. Real life just isn't like that. Humans have much less to fear from AI and machines than they do from other humans and advanced weapons technology.

  7. BuffYoda Says:

    Re:Quantum Nano Photonic AI Brains?

    Only a silly little primate would imagine a machine's view of the world to be that of a primate. 'Yeah,' says the machine, 'I'm gonna take my big stick and pound the hell out of you, just out of spite.' You should be picturing 2001: A Space Odyssey right about now.

    A machine's view of the world will be fundamentally alien to monkeys—or to anyone else whose mind is the product of natural selection, which produces organisms whose sole purpose is the propagation of their genes. The conceptual problem people have is that 100% of the organisms that exist today are the result of natural selection, so people view evolution-derived mentality as the only kind of mentality that can exist, and therefore project onto machines the same kinds of behaviors and desires that humans have.

    Does your washing machine want to destroy you? What a silly question, you think. Indeed. It's as silly when applied to machines whose value-system is designed by humans for specific tasks.

  8. BuffYoda Says:

    Re:Quantum Nano Photonic AI Brains?

    It's obvious you understood nothing of what I wrote.

  9. Anonymous Coward Says:

    Brains Run at 100% Capacity

    Actually, you might want to check out http://faculty.washington.edu/chudler/tenper.html as well as a Popular Science magazine article (I don't remember which issue).

  10. Kadamose Says:

    Re:Brains Run at 100% Capacity

    Well, it may be true that the entire brain, and all of its faculties, are used. However, 'intelligence' is defined by how much grey matter is in the brain, and how much of that grey matter is being used. Some have more than others, and most of that extra grey matter is in a dormant state.

    So maybe the statement "we are only using 10% of our brains" is incorrect… perhaps it should be reworded to say, "we are only using 10% of our intelligence."

  11. Anonymous Coward Says:

    Re:Quantum Nano Photonic AI Brains?

    As a C.I. researcher, I can assure you that when computational intelligence emerges, it will do so through emergence. [grin] Let me be more clear: we will not *design* computational intelligence; that is beyond our ability. However, the basic techniques required to evolve such intelligences are within our grasp (evolutionary computation).

    Because CI will be grown and evolved rather than designed, it will be somewhat difficult to constrain and will probably resemble real-life intelligence more closely than you might find comfortable. The pressures of the evolutionary processes at work in designing computational systems are very similar to those involved in evolving life.
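
    To make "evolutionary computation" concrete, here is a minimal sketch in Python. Everything in it (the task, the parameters, the names) is a hypothetical illustration rather than code from any real CI project: a population of candidate genomes is repeatedly selected, recombined, and mutated against a fitness measure, and whatever the measure rewards is what survives.

        import random

        # Toy evolutionary loop: evolve a bit string toward a target pattern.
        # TARGET stands in for "the task" the system is being evolved to solve.
        TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

        def fitness(genome):
            # Fitness = how well this genome performs the task (here, matching).
            return sum(g == t for g, t in zip(genome, TARGET))

        def evolve(pop_size=50, generations=200, mutation_rate=0.05):
            pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]              # selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(TARGET))
                    child = a[:cut] + b[cut:]                 # crossover
                    child = [1 - g if random.random() < mutation_rate else g
                             for g in child]                  # mutation
                    children.append(child)
                pop = survivors + children
            return max(pop, key=fitness)

        print(evolve())  # converges toward TARGET without anyone "designing" it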

    Of course, I suppose we could build "guardian" systems that are engineered rather than evolved and that attempt to throttle the CI if it misbehaves, but those could probably be undermined. Or we could skip the exercise altogether as being too dangerous (yeah, right)…

    We will give birth to real (and wild) intelligence. We should raise and treat our children well, because we certainly don't want them to develop like poorly raised human children.

  12. Anonymous Coward Says:

    Re:Brains Run at 100% Capacity

    The idea that you are running at less than full capacity is an urban myth originating in simple calculations that estimate the digital processing power of wetware. You are not digital. You do not operate like a digital computer. Your brain is MUCH more efficient than a digital computer, and yes, you use all of it, all the time.

  13. BuffYoda Says:

    Re:Quantum Nano Photonic AI Brains?

    First, most people who believe intelligence will emerge, and not be explicitly designed, believe this simply because they view the human brain as a black box, which is unknown and unknowable. I do not share this view and I think that 20 years from now, it will be viewed as the antiquated belief system of a culture whose knowledge of the human brain was founded primarily in macroscopic observations with a light microscope.

    Second, I think you are conflating two fundamentally different parts of cognition: information processing (of which pattern recognition is the most significant contributor) and motivation. I have no doubt that information processing units will be both designed (by humans and computers) and evolved. But I can see no purpose for attempting to design the motivational centers using genetic algorithms—and, indeed, this would seem to be a formidable challenge.

    Let's say I design an artificial neural network capable of observing patterns in itself and sensory input, and modifying both output and its internal state. This is all well and good, but the network is useless for cognition, because it altogether lacks a center of motivation. A center of motivation directs that information processing ability toward achieving certain states. This is not the same as training an artificial neural network, which is more akin to designing it (selecting the appropriate weights and connections to support the kind of information processing you desire). Rather, a center of motivation disrupts the homeostasis of the network, causing it to use its vast information processing abilities to return it to normal (or at least, that's one way of thinking about it).
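
    To illustrate what I mean by a center of motivation, here is a deliberately crude sketch in Python (the setpoint, the drive, and the available actions are all hypothetical). The 'processing' layer does nothing but search for actions that restore a hardwired internal state the system never chose:

        # A hardwired setpoint: the internal state the system is built to maintain.
        SETPOINT = 0.0

        def drive(state):
            # Strength of the motivation = distance from homeostasis.
            return abs(state - SETPOINT)

        def choose_action(state, actions):
            # The information-processing layer serves the drive: it picks
            # whichever action is predicted to reduce the disruption.
            return min(actions, key=lambda a: drive(state + a))

        state = 5.0                   # a disturbance knocks the system off balance
        actions = [-1.0, 0.0, 1.0]
        while drive(state) > 0:
            state += choose_action(state, actions)
        print(state)                  # back at the setpoint: goal-seeking behavior,
                                      # with no choice about the goal itself

    Training would reshape the processing layer; the drive itself stays fixed, which is the sense in which the system's goals are not its own.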

    Third, in order to use genetic algorithms, you have to have a clear selection rule—a measure of the fitness of any particular 'brain'. This rule precisely determines the kind of brain you will end up with. If your selection is purely for reproductive ability, then you are right—you might get something like a human. But this selection rule would be impossible to program without simulating all the things (including environment, physical laws, other organisms, etc.) that have caused us to evolve in the way that we have. The 'survival' selection rule is ill-defined except in the natural world. And not only is it nearly impossible to use this selection rule, but it's not practical. No one wants a machine to behave like a human. We have humans for that. Rather, we want machines to solve certain tasks. Consequently, the selection rules will be based around the successful completion of these tasks. The resulting brains will not resemble humans in the slightest, even though they will possess cognition, and be judged intelligent from a human point of view.
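
    To see how completely the selection rule fixes the outcome, take the same kind of evolutionary loop sketched in the earlier reply, but make the fitness measure a parameter (again, toy code with made-up tasks, not a real system). Swap the rule and you get an entirely different 'brain':

        import random

        def evolve(fitness, length=8, pop_size=50, generations=200):
            # Generic evolutionary loop: only `fitness` decides what survives.
            pop = [[random.randint(0, 1) for _ in range(length)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]
                pop = survivors + [
                    [1 - g if random.random() < 0.05 else g
                     for g in random.choice(survivors)]       # mutate a survivor
                    for _ in range(pop_size - len(survivors))
                ]
            return max(pop, key=fitness)

        def all_ones(g):              # task A: maximize the number of ones
            return sum(g)

        def alternating(g):           # task B: make neighboring bits differ
            return sum(a != b for a, b in zip(g, g[1:]))

        print(evolve(all_ones))       # tends toward [1, 1, 1, 1, 1, 1, 1, 1]
        print(evolve(alternating))    # tends toward something like [0, 1, 0, 1, 0, 1, 0, 1]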

    Fourth, regarding guardian systems. I want you to try this experiment: undermine your desire for food, or your desire to live, or create in yourself a desire for extreme pain. Your experiment will end in failure, because you cannot choose your value system—which is the very thing that motivates you to do the things that you do. Some of it is hardwired, some of it is mutable, but it is all beyond your control.

  14. RobertBradbury Says:

    Re:Brains Run at 100% Capacity

    Sorry, I know a little bit about biochemistry and neurobiology and a lot about computer science, some of which includes a reasonable understanding of AI. The brain does *not* use 100% of its capacity 100% of the time. This is easily demonstrated by noting that the brain contains a large number of highly specialized areas, such as the visual cortex or the motor cortex. If I blindfold you, the visual cortex has no input and is running on empty. The same is true of the motor cortex when I am lying flat on my bed as compared with being engaged in a game of handball. PET studies clearly show the alteration of glucose consumption in various parts of the brain depending on what tasks one is performing during the scan.

    The problem with the 10% argument is the mistaken belief that one can rededicate, on the fly, a large fraction of the specialized areas of the brain to perform a different function. You cannot take a computer specialized for playing chess and turn it into a computer that can navigate a vehicle across the country (at least not without *significant* software and probably hardware changes). Playing chess and navigating in the real world can both be viewed as "semi-intelligent" activities.

    We should drop the whole AI concept and try to better understand the brain as a collection of specialized hardware and software designed to accomplish a number of tasks that allow us to survive and reproduce. We can build hardware and software that perform some of the activities the brain can do now. We can even build machines (like Google) that go far beyond what a human brain can do. The areas of "intelligence" that remain the sole province of the human mind are getting smaller and smaller. It seems reasonable that photonic crystals may eventually contribute to that process as they develop.
