MESSAGE FROM ASSOC. PROF. PAMELA S. EVERS, ATTORNEY AT LAW
This article has been offered by web posting to UNCW students for educational purposes only. Articles posted may have been edited for clarity and format by Pamela S. Evers.

Anthropomimetic Robot Copies Inner Structures of Human Body

A consortium of European robotics labs is developing a humanoid robot by copying not only the overall form of the human body but also its inner structures: bones, joints, muscles, and tendons. The goal of the ECCEROBOT project is to create an anthropomimetic robot whose body moves and interacts with the physical world in the same way our own flesh-and-blood bodies do. The researchers used thermoplastic polymer, elastic cords, and other soft, flexible materials to build the torso, arms, and hands. One potential advantage, according to the researchers: shake hands with ECCE and it won't crush your bones.

The result is fascinating, if a bit creepy. The robot looks eerily organic, with parts that look like bone and muscle. The researchers say that humanoids built with metal parts, electric motors, and actuators have limitations in the kinds of interactions they can have with humans and the environment. Indeed, they say, these limitations may affect their ability to perceive and "internalize" the world around them.

The big challenge now is devising methods for controlling such flexible (the technical term is compliant) robots. The researchers say there's a lot of work to do in terms of understanding intrinsic movement patterns and being able to model and control these movements. Once they make progress in that direction, their ultimate goal is to use the robot's human-like characteristics to explore human-like cognitive features.

The consortium, led by the University of Sussex (UK), includes Technische Universität München (Germany), Universität Zürich (Switzerland), Elektrotehnicki Fakultet Universitet u Beogradu (Serbia), and the Robot Studio (France).
http://www.youtube.com/watch?v=cI9H4FoA0b4

Exoskeletons Are on the March
Cyberdyne is shipping nearly 100 more exoskeletons this fall
Photo: Yoshikazu Tsuno/AFP/Getty Images
BY YU-TZU CHIU // AUGUST 2009

17 August 2009—An army of exoskeletons is coming. And according to their inventor, Professor Yoshiyuki Sankai of the University of Tsukuba, in Japan, they’re making a difference in the lives of disabled people. Speaking at the International Conference on Intelligent Robotic Technology and Business, held earlier this month in Taipei, Taiwan, Sankai proudly described how the robotic exoskeleton suit HAL (short for Hybrid Assistive Limb) helped a 46-year-old man whose left leg was withered by polio when he was 11 months old.

HAL reads electric signals at the surface of the skin that are generated by the muscle beneath and then uses them to guide the movement of robotic limbs strapped to a person’s real limbs, thereby multiplying their strength. The polio patient’s withered left leg generated extremely weak bioelectric signals at first, and the robotic limb remained unmoved. Ten days later, with HAL’s assistance, the patient moved his left leg based on his own intention. “He cried,” says Sankai.

Sankai suspects that in the past 45 years, the patient’s brain had rarely generated the signals needed to move his left leg. After the patient used HAL, the signal levels strengthened and became detectable. Sankai says that similar phenomena were observed when applying the HAL suit to patients with spinal cord injuries. Starting in late April, his team began measuring bioelectric signals in polio and stroke patients before and after using HAL. They hope to record data over a period of 8 to 12 months. An analysis of how the brain adapts to HAL will be taken into account to improve the exoskeleton’s operation, says Sankai.

In Japan, more than 20 sets of various HAL exoskeletons are in use at hospitals and rehabilitation centers, Sankai says.
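The control idea the article describes (read a weak surface bioelectric signal, and assist the limb only when it clears a detection floor) can be sketched in a few lines of Python. The noise floor, gain, and torque limit below are illustrative assumptions, not Cyberdyne's actual parameters:

```python
# Toy sketch of an EMG-driven assist loop of the kind the article describes:
# a surface bioelectric signal is read, and once it rises above a detection
# floor it is scaled into an assistive joint torque. All numbers are invented
# for illustration; they are not HAL's real design values.

def assist_torque(emg_uv, noise_floor_uv=5.0, gain_nm_per_uv=0.8, max_torque_nm=40.0):
    """Map a surface-EMG amplitude (microvolts) to an assistive torque (N*m)."""
    if emg_uv <= noise_floor_uv:       # signal too weak to detect: the limb
        return 0.0                     # stays unmoved, as in the first trials
    torque = (emg_uv - noise_floor_uv) * gain_nm_per_uv
    return min(torque, max_torque_nm)  # clamp to the actuator's safe limit

# A withered limb emitting 3 uV produces no motion; a stronger 30 uV signal does.
print(assist_torque(3.0))   # 0.0
print(assist_torque(30.0))  # 20.0
```

The clamp matters in practice: an assistive device must cap its output torque regardless of how strong the measured signal becomes.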
The facilities lease the robots from Sankai’s company, Cyberdyne, for about US $1700 per month on average. “It’s worthwhile, because a suit can be used for eight patients per day,” he says, adding that the service could possibly be cheaper once the market for the exoskeletons increases. Sankai, who is Cyberdyne’s CEO, expects to supply 80 to 90 suits in Japan in October. At the end of September, 10 sets of HAL suits will be delivered to Denmark to be used by nurses who care for elderly people. The suits should enhance the nurses’ strength, helping them to move patients.

More versions of HAL are in the works, says Sankai. After a man injured in a car wreck used HAL to climb the 4164-meter Breithorn Mountain, in Switzerland, the company decided to develop a weather-resistant outdoor exoskeleton. Sankai says the company will also be introducing a HAL with significantly smaller and lighter batteries this fall at an event in Kyoto.

About the Author
Yu-Tzu Chiu is a Taipei-based reporter. In the August 2009 issue of IEEE Spectrum, she explained TSMC’s new interest in solar cells and LEDs.

To Probe Further
There are several other exoskeleton designs out there, including those by Berkeley Bionics, Raytheon, and MIT. Other efforts to give the disabled back some mobility include Dean Kamen's Luke Arm and brain-machine interfaces.

Augmented Reality in a Contact Lens
A new generation of contact lenses built with very small circuits and LEDs promises bionic eyesight
BY BABAK A. PARVIZ // SEPTEMBER 2009

The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection. But why stop there?

In the Terminator movies, Arnold Schwarzenegger’s character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg’s scan of a scene.
In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes.

These visions (if I may) might seem far-fetched, but a contact lens with simple built-in electronics is already within reach; in fact, my students and I are already producing such devices in small numbers in my laboratory at the University of Washington, in Seattle [see sidebar, “A Twinkle in the Eye”]. These lenses don’t give us the vision of an eagle or the benefit of running subtitles on our surroundings yet. But we have built a lens with one LED, which we’ve powered wirelessly with RF. What we’ve done so far barely hints at what will soon be possible with this technology.

Conventional contact lenses are polymers formed in specific shapes to correct faulty vision. To turn such a lens into a functional system, we integrate control circuits, communication circuits, and miniature antennas into the lens using custom-built optoelectronic components. Those components will eventually include hundreds of LEDs, which will form images in front of the eye, such as words, charts, and photographs. Much of the hardware is semitransparent so that wearers can navigate their surroundings without crashing into them or becoming disoriented. In all likelihood, a separate, portable device will relay displayable information to the lens’s control circuit, which will operate the optoelectronics in the lens.

These lenses don’t need to be very complex to be useful. Even a lens with a single pixel could aid people with impaired hearing or be incorporated as an indicator into computer games. With more colors and resolution, the repertoire could be expanded to include displaying text, translating speech into captions in real time, or offering visual cues from a navigation system.
With basic image processing and Internet access, a contact-lens display could unlock whole new worlds of visual information, unfettered by the constraints of a physical display.

Besides visual enhancement, noninvasive monitoring of the wearer’s biomarkers and health indicators could be a huge future market. We’ve built several simple sensors that can detect the concentration of a molecule, such as glucose. Sensors built onto lenses would let diabetic wearers keep tabs on blood-sugar levels without needing to prick a finger. The glucose detectors we’re evaluating now are a mere glimmer of what will be possible in the next 5 to 10 years.

Contact lenses are worn daily by more than a hundred million people, and they are one of the few disposable, mass-market products that remain in contact, through fluids, with the interior of the body for an extended period of time. When you get a blood test, your doctor is probably measuring many of the same biomarkers that are found in the live cells on the surface of your eye—and in concentrations that correlate closely with the levels in your bloodstream. An appropriately configured contact lens could monitor cholesterol, sodium, and potassium levels, to name a few potential targets. Coupled with a wireless data transmitter, the lens could relay information to medics or nurses instantly, without needles or laboratory chemistry, and with a much lower chance of mix-ups.

Three fundamental challenges stand in the way of building a multipurpose contact lens. First, the processes for making many of the lens’s parts and subsystems are incompatible with one another and with the fragile polymer of the lens. To get around this problem, my colleagues and I make all our devices from scratch. To fabricate the components for silicon circuits and LEDs, we use high temperatures and corrosive chemicals, which means we can’t manufacture them directly onto a lens.
That leads to the second challenge, which is that all the key components of the lens need to be miniaturized and integrated onto about 1.5 square centimeters of a flexible, transparent polymer. We haven’t fully solved that problem yet, but we have so far developed our own specialized assembly process, which enables us to integrate several different kinds of components onto a lens.

Last but not least, the whole contraption needs to be completely safe for the eye. Take an LED, for example. Most red LEDs are made of aluminum gallium arsenide, which is toxic. So before an LED can go into the eye, it must be enveloped in a biocompatible substance.

So far, besides our glucose monitor, we’ve been able to batch-fabricate a few other nanoscale biosensors that respond to a target molecule with an electrical signal; we’ve also made several microscale components, including single-crystal silicon transistors, radio chips, antennas, diffusion resistors, LEDs, and silicon photodetectors. We’ve constructed all the micrometer-scale metal interconnects necessary to form a circuit on a contact lens. We’ve also shown that these microcomponents can be integrated through a self-assembly process onto other unconventional substrates, such as thin, flexible transparent plastics or glass. We’ve fabricated prototype lenses with an LED, a small radio chip, and an antenna, and we’ve transmitted energy to the lens wirelessly, lighting the LED. To demonstrate that the lenses can be safe, we encapsulated them in a biocompatible polymer and successfully tested them in trials with live rabbits.

Photos: University of Washington
Second Sight: In recent trials, rabbits wore lenses containing metal circuit structures for 20 minutes at a time with no adverse effects.

Seeing the light—LED light—is a reasonable accomplishment. But seeing something useful through the lens is clearly the ultimate goal. Fortunately, the human eye is an extremely sensitive photodetector.
At high noon on a cloudless day, lots of light streams through your pupil, and the world appears bright indeed. But the eye doesn’t need all that optical power—it can perceive images with only a few microwatts of optical power passing through its lens. An LCD computer screen is similarly wasteful. It sends out a lot of photons, but only a small fraction of them enter your eye and hit the retina to form an image. But when the display is directly over your cornea, every photon generated by the display helps form the image. The beauty of this approach is obvious: With the light coming from a lens on your pupil rather than from an external source, you need much less power to form an image.

But how to get light from a lens? We’ve considered two basic approaches. One option is to build into the lens a display based on an array of LED pixels; we call this an active display. An alternative is to use passive pixels that merely modulate incoming light rather than producing their own. Basically, they construct an image by changing their color and transparency in reaction to a light source. (They’re similar to LCDs, in which tiny liquid-crystal “shutters” block or transmit white light through a red, green, or blue filter.) For passive pixels on a functional contact lens, the light source would be the environment. The colors wouldn’t be as precise as with a white-backlit LCD, but the images could be quite sharp and finely resolved.

We’ve mainly pursued the active approach and have produced lenses that can accommodate an 8-by-8 array of LEDs. For now, active pixels are easier to attach to lenses. But using passive pixels would significantly reduce the contact’s overall power needs—if we can figure out how to make the pixels smaller, higher in contrast, and capable of reacting quickly to external signals.

By now you’re probably wondering how a person wearing one of our contact lenses would be able to focus on an image generated on the surface of the eye.
After all, a normal and healthy eye cannot focus on objects that are closer than about 10 centimeters from the corneal surface. The LEDs by themselves merely produce a fuzzy splotch of color in the wearer’s field of vision. Somehow the image must be pushed away from the cornea.

One way to do that is to employ an array of even smaller lenses placed on the surface of the contact lens. Arrays of such microlenses have been used in the past to focus lasers and, in photolithography, to draw patterns of light on a photoresist. On a contact lens, each pixel or small group of pixels would be assigned to a microlens placed between the eye and the pixels. Spacing a pixel and a microlens 360 micrometers apart would be enough to push back the virtual image and let the eye focus on it easily. To the wearer, the image would seem to hang in space about half a meter away, depending on the microlens.

Another way to make sharp images is to use a scanning microlaser or an array of microlasers. Laser beams diverge much less than LED light does, so they would produce a sharper image. A kind of actuated mirror would scan the beams from a red, a green, and a blue laser to generate an image. The resolution of the image would be limited primarily by the narrowness of the beams, and the lasers would obviously have to be extremely small, which would be a substantial challenge. However, using lasers would ensure that the image is in focus at all times and eliminate the need for microlenses.

Whether we use LEDs or lasers for our display, the area available for optoelectronics on the surface of the contact is really small: roughly 1.2 millimeters in diameter. The display must also be semitransparent, so that wearers can still see their surroundings. Those are tough but not impossible requirements. The LED chips we’ve built so far are 300 µm in diameter, and the light-emitting zone on each chip is a 60-µm-wide ring with a radius of 112 µm. We’re trying to reduce that by an order of magnitude.
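The microlens trick described above can be checked with the thin-lens equation: a pixel placed just inside a microlens's focal length appears as a magnified virtual image far from the eye. The 360-µm spacing and the half-meter image distance come from the text; the focal length below is an illustrative assumption chosen to reproduce them:

```python
# Thin-lens sketch of the microlens focusing scheme. A pixel sitting just
# inside the focal length of its microlens forms a distant virtual image
# (negative image distance) that a normal eye can focus on. The 360-um
# spacing is from the article; the focal length is an assumed value.

def virtual_image_distance(d_object_m, focal_m):
    """Solve 1/f = 1/d_o + 1/d_i for d_i; a negative result is a virtual image."""
    return 1.0 / (1.0 / focal_m - 1.0 / d_object_m)

d_pixel = 360e-6     # pixel-to-microlens spacing (from the article)
f_lens = 360.26e-6   # assumed focal length, slightly longer than the spacing

d_image = virtual_image_distance(d_pixel, f_lens)
print(d_image)  # roughly -0.5: a virtual image about half a meter away
```

Because the image distance is extremely sensitive to the difference between spacing and focal length, a real design would need micrometer-level control of the microlens geometry, which is consistent with the fabrication challenges the article describes.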
Our goal is an array of 3600 10-µm-wide pixels spaced 10 µm apart.

One other difficulty in putting a display on the eye is keeping it from moving around relative to the pupil. Normal contact lenses that correct for astigmatism are weighted on the bottom to maintain a specific orientation, give or take a few degrees. I figure the same technique could keep a display from tilting (unless the wearer blinked too often!).

Like all mobile electronics, these lenses must be powered by suitable sources, but among the options, none are particularly attractive. The space constraints are acute. For example, batteries are hard to miniaturize to this extent, require recharging, and raise the specter of, say, lithium ions floating around in the eye after an accident. A better strategy is harvesting power from the environment: converting ambient vibrations into energy (inertial power scavenging) or receiving solar or RF power. Most inertial power scavenging designs have unacceptably low power output, so we have focused on powering our lenses with solar or RF energy.

Let’s assume that 1 square centimeter of lens area is dedicated to power generation, and let’s say we devote the space to solar cells. Almost 300 microwatts of incoming power would be available indoors, with potentially much more available outdoors. At a conversion efficiency of 10 percent, these figures translate to 30 µW of electrical power available indoors to run the subsystems of the contact lens.

Collecting RF energy from a source in the user’s pocket would improve the numbers slightly. In this setup, the lens area would hold antennas rather than photovoltaic cells. The antennas’ output would be limited by the field strengths permitted at various frequencies. In the microwave bands between 1.5 gigahertz and 100 GHz, the exposure level considered safe for humans is 1 milliwatt per square centimeter.
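The power budget above is easy to verify. The 300 µW of indoor optical power, the 10 percent solar conversion efficiency, and the 1 mW/cm² RF exposure limit are from the text; the RF conversion efficiency below is an assumed figure, included only to show how an estimate of roughly 100 µW could arise:

```python
# Back-of-envelope check of the lens's power budget over 1 cm^2 of lens area.
# Incoming light level, solar efficiency, and the RF exposure limit are from
# the article; the RF conversion efficiency is an illustrative assumption.

AREA_CM2 = 1.0

solar_in_w = 300e-6 * AREA_CM2        # optical power available indoors
solar_eff = 0.10                      # conversion efficiency (from the article)
solar_out_w = solar_in_w * solar_eff  # electrical power from solar cells

rf_limit_w = 1e-3 * AREA_CM2          # safe microwave exposure: 1 mW/cm^2
rf_eff = 0.10                         # assumed antenna-plus-rectifier efficiency
rf_out_w = rf_limit_w * rf_eff        # electrical power from RF harvesting

print(round(solar_out_w * 1e6), round(rf_out_w * 1e6))  # 30 100 (microwatts)
```

This is why the article calls the RF route only a slight improvement: even at the full exposure limit, a plausible harvesting chain yields power of the same order as indoor solar cells.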
For our prototypes, we have fabricated the first generation of antennas that can transmit in the 900-megahertz to 6-GHz range, and we’re working on higher-efficiency versions. So from that one square centimeter of lens real estate, we should be able to extract at least 100 µW, depending on the efficiency of the antenna and the conversion circuit.

Having made all these subsystems work, the final challenge is making them all fit on the same tiny polymer disc. Recall the pieces that we need to cram onto a lens: metal microstructures to form antennas; compound semiconductors to make optoelectronic devices; advanced complementary metal-oxide-semiconductor silicon circuits for low-power control and RF telecommunication; microelectromechanical system (MEMS) transducers and resonators to tune the frequencies of the RF communication; and surface sensors that are reactive with the biochemical environment.

The semiconductor fabrication processes we’d typically use to make most of these components won’t work because they are both thermally and chemically incompatible with the flexible polymer substrate of the contact lens. To get around this problem, we independently fabricate most of the microcomponents on silicon-on-insulator wafers, and we fabricate the LEDs and some of the biosensors on other substrates. Each part has metal interconnects and is etched into a unique shape. The end yield is a collection of powder-fine parts that we then embed in the lens.

We start by preparing the substrate that will hold the microcomponents, a 100-µm-thick slice of polyethylene terephthalate. The substrate has photolithographically defined metal interconnect lines and binding sites. These binding sites are tiny wells, about 10 µm deep, where electrical connections will be made between components and the template. At the bottom of each well is a minuscule pool of a low-melting-point alloy that will later join together two interconnects in what amounts to micrometer-scale soldering.
We then submerge the plastic lens substrate in a liquid medium and flow the collection of microcomponents over it. The binding sites are cut to match the geometries of the individual parts so that a triangular component finds a triangular well, a circular part falls into a circular well, and so on. When a piece falls into its complementary well, a small metal pad on the surface of the component comes in contact with the alloy at the bottom of the well, causing a capillary force that lodges the component in place. After all the parts have found their slots, we drop the temperature to solidify the alloy. This step locks in the mechanical and electrical contact between the components, the interconnects, and the substrate.

The next step is to ensure that all the potentially harmful components that we’ve just assembled are completely safe and comfortable to wear. The lenses we’ve been developing resemble existing gas-permeable contacts with small patches of a slightly less breathable material that wraps around the electronic components. We’ve been encapsulating the functional parts with poly(methyl methacrylate), the polymer used to make earlier generations of contact lenses.

Then there’s the question of the interaction of heat and light with the eye. Not only must the system’s power consumption be very low for the sake of the energy budget, it must also avoid generating enough heat to damage the eye, so the temperature must remain below 45 °C. We have yet to investigate this concern fully, but our preliminary analyses suggest that heat shouldn’t be a big problem.

Photos: University of Washington
In Focus: One lens prototype [left] has several interconnects, single-crystal silicon components, and compound-semiconductor components embedded within. Another sample lens [right] contains a radio chip, an antenna, and a red LED.

All the basic technologies needed to build functional contact lenses are in place.
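The shape-keyed self-assembly described above (triangular parts seat only in triangular wells, circular parts in circular wells) can be illustrated with a toy simulation. The site names, part names, and shapes below are invented for illustration; the real process relies on capillary forces and a low-melting-point alloy rather than explicit matching logic:

```python
# Toy model of fluidic self-assembly by shape matching: components flowed
# over the template settle only into binding-site wells whose outlines match
# their own. All names and shapes here are hypothetical examples.

import random

wells = {"led_site": "circle", "radio_site": "square", "antenna_site": "triangle"}
parts = [("LED", "circle"), ("radio chip", "square"), ("antenna", "triangle")]

def flow_assembly(wells, parts, rng):
    """Return a mapping of binding site -> seated part."""
    placed = {}
    loose = list(parts)
    rng.shuffle(loose)  # parts arrive over the template in random order
    for part, shape in loose:
        for site, well_shape in wells.items():
            if site not in placed and well_shape == shape:
                placed[site] = part  # geometric match: the part seats here
                break
    return placed

result = flow_assembly(wells, parts, random.Random(0))
print(result["led_site"])  # LED
```

Because each well shape is unique, every part ends up at its intended site no matter the order of arrival, which is the property that lets the real process work without any part-by-part placement.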
We’ve tested our first few prototypes on animals, proving that the platform can be safe. What we need to do now is show all the subsystems working together, shrink some of the components even more, and extend the RF power harvesting to higher efficiencies and to distances greater than the few centimeters we have now. We also need to build a companion device that would do all the necessary computing or image processing to truly prove that the system can form images on demand. We’re starting with a simple product, a contact lens with a single light source, and we aim to work up to more sophisticated lenses that can superimpose computer-generated high-resolution color graphics on a user’s real field of vision.

The true promise of this research is not just the actual system we end up making, whether it’s a display, a biosensor, or both. We already see a future in which the humble contact lens becomes a real platform, like the iPhone is today, with lots of developers contributing their ideas and inventions. As far as we’re concerned, the possibilities extend as far as the eye can see, and beyond.

The author would like to thank his past and present students and collaborators, especially Brian Otis, Desney Tan, and Tueng Shen, for their contributions to this research.

About the Author
Babak A. Parviz wakes up every morning and sticks a small piece of polymer in each eye. So it was only a matter of time before this bionanotechnology expert at the University of Washington, in Seattle, imagined contact lenses with built-in circuits and LEDs. “It’s really fun to hook things up and see how they might work,” he says. In “For Your Eye Only,” Parviz previews a contact lens for the 21st century.

To Probe Further
You can find details about the fabrication process using self-assembly in “Self-Assembled Single-Crystal Silicon Circuits on Plastic,” by Sean A. Stauth and Babak A. Parviz, in Proceedings of the National Academy of Sciences, 19 September 2006.
OPINION

Brave Neuro World
Using drugs to neuroenhance memory and mental stamina engenders new controversies -- and new words
BY PAUL MCFEDRIES // AUGUST 2009

We live in an information society. What’s the next form of human society? The neuro-society.
--Zack Lynch & Byron Laursen, The Neuro Revolution: How Brain Science Is Changing Our World

In April 2009, the journal Nature published the results of a poll that asked people whether they were using beta blockers and drugs like Ritalin—not for their original medical purposes but to boost their brain power. Of the 1400 people from 60 countries who responded, one in five—an eyebrow-raising proportion—reported they had done so. About 80 percent said that healthy adults should be able to take such drugs for nonmedical purposes. These surprising results touched off a flurry of off- and online harrumphing and tut-tutting, which one scientist dismissed as mere neurogossip.

The mental, physical, legal, and ethical pros and cons have, of course, been well debated over the past year or so, but all this talk about brain boosting has also generated a tidy collection of new words and phrases (such as brain boosting). The use of pharmaceuticals to enhance memory, focus, and mental stamina in healthy brains is known generally as cognitive enhancement; the pharmaceuticals themselves are often called cognitive enhancers. As you can imagine, the neuro- prefix gets quite a workout in these circles, with the equivalent terms being neuroenhancement and neuroenhancer, a word that rhymes with Neuromancer, the title of a seminal 1984 book by cyberpunk novelist William Gibson. The term neuro made the leap from prefix to adjective recently with the publication last month of The Neuro Revolution, coauthored by Zack Lynch, who also coined the phrase neurosociety.
In a letter to Nature published in December 2007 (under the terrific title “Professor’s Little Helper,” a clever shout-out to “Mother’s Little Helper,” the Rolling Stones’ paean to housewives on prescription drugs), the scientists Barbara Sahakian and Sharon Morein-Zamir wrote, “The drive for self-enhancement of cognition is likely to be as strong if not stronger than in the realms of ‘enhancement’ of beauty and sexual function.” In other words, forget cosmetic surgery; the next fad is likely to be cosmetic neurology.

The “off label” (that is, outside its original scope) use of a drug such as methylphenidate (aka Ritalin), which is normally prescribed to treat attention-deficit hyperactivity disorder (aka ADHD), is sometimes called brain customization, but these days it is more likely to be referred to as mind hacking. In this kind of neuroenhancing, the drugs are usually administered in the form of smart pills, and the effect they create has been called smart-in-a-pill. Mind hackers use these drugs, marketed euphemistically as “study aids,” to increase their mental horsepower.

Some people want their brains neuroenhanced because they find themselves forgetting things or aren’t quite as sharp as they used to be. They don’t necessarily want supercognition; they just want their brains to be normal again. However, the neurodiversity movement is based on the belief that there is no such thing as “normal” when it comes to the human mental landscape—the neurotypical person simply does not exist. People display a wide variety of neurological behaviors and abilities, and most of us exhibit some form of mental “disorder” from time to time, albeit in nondebilitating—or subclinical—form: mild depression, temporary anxiety, and so on. We accept that the world is populated with people who are tall and small; big boned and bird boned; ecto-, meso-, and endomorphic; and, the theory goes, also diverse when it comes to neurological traits like forgetfulness.
Are we so concerned with eternal youthfulness (perma-youth in the vernacular) that we need to turn to smart drugs to achieve at least the mental side of that goal? Other naysayers reject these pharmacological tricks as cheating. They deride them as academic steroids and the practice as brain doping. Rather than welcome an ever more efficient and productive neurosociety, they wring their hands at the prospect of a stressed-out, always-on, 24/7 society. And neuroethicists worry that the goal of better brains, while worthy in itself, may create social disparities if only the well-off can afford nootropic (literally “mind affecting”) drugs.

Who’s right? I guess I’ll go have another espresso and think about it.

Synthetic Skin Gets a Second Life
German automation could make engineered skin affordable
PHOTO: FRAUNHOFER-GESELLSCHAFT
BY JOHN BLAU // JULY 2009

13 July 2009—Producing synthetic skin for grafts and testing the safety of drugs and chemicals is possible today, but it is a highly complex process requiring extensive manual work. A number of ventures that have tried to produce synthetic skin in large quantities have failed, largely due to a lack of automation in their manufacturing. But a team of scientists and engineers from several units of Germany’s Fraunhofer-Gesellschaft believe they can make engineered tissue widely available using a fully automated process they recently demonstrated.

Researchers at the Fraunhofer Institute for Interfacial Engineering and Biotechnology worked with colleagues at the Fraunhofer institutes for Production Technology, Manufacturing Engineering and Automation, and Cell Therapy and Immunology to develop what they claim to be the first fully automated system to produce artificial skin, consisting of two layers with different cell types. It’s an “almost perfect copy of the human skin,” says Professor Heike Mertsching, one of the coordinators of Fraunhofer’s Automated Tissue Engineering on Demand project.
That system, to be made commercially available by the end of 2010, is expected to produce about 5000 skin “equivalents” per month, each with a diameter of roughly one centimeter, at an estimated cost of about €35 (US $49) per piece. In a next step, the Fraunhofer researchers plan a fully automated system capable of producing synthetic skin with blood vessels in it. That system could hit the market as early as 2013 and would represent a big step forward in efforts by the medical industry to provide safe—and affordable—skin transplants.

The Fraunhofer technology relies on advanced sensors, control systems, and techniques, such as Raman spectroscopy, to create and monitor the biochemical and mechanical environments that cause the skin to mature. It also encompasses robotics and other advanced automation processes that may make human intervention in the artificial-tissue-growing process unnecessary. One part of the system is a fully automated cutting device for preparing biopsied skin for use in the tissue engineering process. Another is a scalable bioreactor system that boosts the yield of usable skin cells by using an integrated suite of sensors designed to detect contaminations instantly. A third innovation is the use of optical coherence tomography, a nondestructive three-dimensional imaging technique for testing the quality of the finished skin. So far, at least 19 patents have emerged from the project.

Current demand for artificial skin to test creams, cleaning agents, bandages, and drugs far exceeds the industry’s ability to produce it. The work at Fraunhofer could lead to manufacturing processes that not only help meet current demand but also take it to another level. Once the production of synthetic skin containing blood vessels is fully automated, for instance, it would allow companies to assess the risk of substances in their products entering a person’s blood stream.
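The production figures quoted above (5000 one-centimeter skin equivalents per month at about €35 each) imply a modest but concrete scale, which a quick calculation makes explicit. The annual extrapolation is a simple multiplication, not a figure from the Fraunhofer team:

```python
# Quick arithmetic on the quoted production figures: 5000 skin equivalents
# per month at roughly EUR 35 apiece. The annual numbers are straightforward
# extrapolations for illustration, not claims made by the researchers.

pieces_per_month = 5000
price_eur = 35

monthly_value_eur = pieces_per_month * price_eur  # value of a month's output
annual_pieces = pieces_per_month * 12             # equivalents per year

print(monthly_value_eur, annual_pieces)  # 175000 60000
```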
“Today these tests are done on rats or mice, but they have different skin,” Mertsching says. “A vascularized skin model would definitely be a step forward.”

Mertsching and her colleagues believe the Fraunhofer system for manufacturing vascularized tissue will someday produce a whole portfolio of human tissue products in significant quantities. These products may be used to track the pathway of substances through the entire human metabolism and gain valuable information for new drug candidates. And, no less important, it could give people with damaged tissue on their faces or elsewhere the opportunity to feel good in a new layer of affordable skin.

About the Author
John Blau writes about technology from Düsseldorf, Germany. In January 2009 he reported on the insolvency of Europe’s only major dynamic RAM maker, Qimonda.