For 100 million people around the globe who suffer from macular degeneration and other diseases of the retina, life is a steady march from light into darkness. The intricate layers of neurons at the backs of their eyes gradually degrade and lose the ability to snatch photons and translate them into electric signals that are sent to the brain. Vision steadily blurs or narrows, and for some, the world fades to black.
Until recently some types of retinal degeneration seemed as inevitable as the wrinkling of skin or the graying of hair—only far more terrifying and debilitating. But recent studies offer hope that eventually the darkness may be lifted. Some scientists are trying to inject signaling molecules into the eye to stimulate light-collecting photoreceptor cells to regrow. Others want to deliver working copies of broken genes into retinal cells, restoring their function. And a number of researchers are taking a fundamentally different, technology-driven approach to fighting blindness. They seek not to fix biology but to replace it, by plugging cameras into people’s eyes.
Scientists have been trying to build visual prostheses since the 1970s. This past spring the effort reached a crucial milestone when European regulators approved the first commercially available bionic eye. The Argus II, made by Second Sight, a company in California, includes a video camera housed in a special pair of glasses. The device wirelessly transmits signals from the camera to a 6-by-10 grid of 60 electrodes attached to the back of a subject’s eye. The electrodes stimulate the neurons in the retina, which send secondary signals down the optic nerve to the brain.
A 60-pixel picture is a far cry from HDTV, but any measure of restored vision can make a huge difference. In human clinical trials, patients wearing the Argus II implant were able to make out doorways, distinguish eight different colors, or read short sentences written in large letters. And if the recent history of technology is any guide, the current $100,000 price tag should fall quickly even as the device’s resolution rises. Already researchers are testing artificial retinas that do not require an external camera; instead, photons strike light-sensitive arrays inside the eye itself. The Illinois-based company Optobionics has built experimental designs containing 5,000 light sensors.
Commercial digital cameras hint at how much more improvement might lie just ahead. Our retinas contain 127 million photoreceptors spread over 1,100 square millimeters. State-of-the-art consumer camera detectors, by comparison, carry 16.6 million light sensors spread over 1,600 square millimeters, and those counts have climbed rapidly in recent years. But simply piling on the pixels will not be enough to match the rich visual experience of human eyes. To create a true artificial retina, says University of Oregon physicist and vision researcher Richard Taylor, engineers and neuroscientists will have to come up with something much more sophisticated than an implanted camera.
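A bit of arithmetic on the photoreceptor and pixel counts above shows how wide the density gap is (a back-of-the-envelope calculation, using only the figures quoted in this article):

$$\frac{127\times10^{6}\ \text{photoreceptors}}{1{,}100\ \text{mm}^{2}}\approx 115{,}000\ \text{per mm}^{2},\qquad \frac{16.6\times10^{6}\ \text{photodiodes}}{1{,}600\ \text{mm}^{2}}\approx 10{,}000\ \text{per mm}^{2}.$$

By this rough measure, the retina packs about eleven times as many detectors into each square millimeter as the camera sensor does.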
It is easy to think of eyes as biological cameras, and in some ways they are. When light from a scene passes through the pupil, it produces a flipped image on the retina; the light that enters a camera does the same thing. Eyes and cameras both have lenses that adjust the path of the incoming light to bring an image into sharper focus. The digital revolution has made cameras even more eye-like: Instead of catching light on film, digital cameras use an array of light-sensitive photodiodes that function much like the photoreceptors in an eye.
But once you get up close, the similarities break down. Cameras are boringly Euclidean. Engineers typically build photodiodes as tiny square elements and spread them out in regularly spaced grids. Most existing artificial retinas have the same design, with impulses conveyed from the photodiodes to neurons through a rectangular grid of electrodes. The network of neurons in the retina, on the other hand, looks less like a grid than a set of psychedelic snowflakes, with branches upon branches filling the retina in swirling patterns. This mismatch means that when surgeons position the grid on the retina, many of the electrodes fail to contact a neuron. As a result, their signals never make it to the brain.
Some engineers have suggested making bigger, more tightly spaced electrodes to create a larger area for contact, but that approach faces a fundamental obstacle. In the human eye, neurons sit in front of the photoreceptors, yet because of their snowflake-like geometry, plenty of space remains for light to slip through. An artificial retina with big electrodes, by contrast, would block out the very light it was trying to detect.
Natural photoreceptors are quirky in another way, too: They are bunched up. Much of what we see comes through a pinhead-size patch in the center of the retina known as the fovea. The fovea is densely packed with photoreceptors. The sharp view of the world that we simply think of as “vision” comes from light landing there; light that falls beyond the fovea produces blurry peripheral images. A camera, by contrast, has light-trapping photodiodes spread evenly across its entire image field.
The reason we don’t feel as if we are looking at the world through a periscope is that our eyes are in constant motion; our gaze jumps around so that our foveas can capture different parts of our field of view. The distances of those jumps have a hidden mathematical order: The shorter a jump, the more often it occurs. In other words, we make big jumps from time to time, many more small jumps, and far more tiny ones. This rough, fragmented pattern, known as a fractal, is an effective means of sampling a large space, and it bears a striking resemblance to the path of an insect flying around in search of food. Our eyes, in effect, forage for visual information.
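That statistical signature, many short jumps punctuated by rare long ones, is easy to sketch in code. The short Python program below is offered purely as an illustration (the power-law exponent and the jump-distance bounds are assumed values, not measurements of real eye movements); it draws jump distances whose frequency falls off as distance grows:

```python
import random

def powerlaw_jump(alpha=2.0, d_min=0.1, d_max=10.0):
    """Draw a jump distance d with probability proportional to d**(-alpha),
    via inverse-transform sampling: short jumps vastly outnumber long ones."""
    a = 1.0 - alpha                       # exponent of the integrated distribution
    lo, hi = d_min ** a, d_max ** a
    return (lo + random.random() * (hi - lo)) ** (1.0 / a)

# Simulate 100,000 gaze jumps and tally how often each scale occurs.
jumps = [powerlaw_jump() for _ in range(100_000)]
for label, low, high in [("tiny   (< 0.3)", 0.0, 0.3),
                         ("small  (0.3-1)", 0.3, 1.0),
                         ("medium (1-3)  ", 1.0, 3.0),
                         ("large  (> 3)  ", 3.0, float("inf"))]:
    print(label, sum(low <= d < high for d in jumps))
```

Run it and the tallies drop steeply from one scale to the next: thousands of tiny jumps for every large one, the same skewed mix of scales seen in a foraging insect’s flight path.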
Once our eyes capture light, the neurons in the retina do not relay the information directly to the brain. Instead, they process visual information before it leaves the eye, inhibiting or enhancing the signals of neighboring neurons to adjust the way we see. They sharpen the contrast between regions of light and dark, a bit like photoshopping an image in real time. This image processing most likely evolved because it allowed animals to perceive objects more quickly, especially against murky backgrounds. A monkey in a forest squinting at a leopard at twilight, struggling to figure out exactly what it is, will probably not live to see another leopard. Unlike a camera that passively takes in a picture, our eyes are honed to actively extract the most important information we need to make fast decisions.
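A toy version of that contrast-sharpening trick, a sketch of the general principle rather than of the retina’s actual circuitry, fits in a few lines of Python. Each simulated cell reports its own input minus a fraction of its neighbors’ (the fraction k here is an arbitrary illustrative choice), which exaggerates the response on either side of a light/dark border:

```python
def lateral_inhibition(signal, k=0.4):
    """Each cell's output is its input minus a fraction k of its
    neighbors' average input, sharpening light/dark boundaries."""
    out = []
    for i, center in enumerate(signal):
        left = signal[i - 1] if i > 0 else center
        right = signal[i + 1] if i < len(signal) - 1 else center
        out.append(center - k * (left + right) / 2)
    return out

# A dark region (0.2) abutting a bright one (1.0).
edge = [0.2] * 5 + [1.0] * 5
print([round(v, 2) for v in lateral_inhibition(edge)])
# [0.12, 0.12, 0.12, 0.12, -0.04, 0.76, 0.6, 0.6, 0.6, 0.6]
```

The cells just inside the border undershoot and overshoot their neighbors, so the edge registers more strongly than the raw input warrants, much as our own retinas exaggerate boundaries.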
Right now scientists can only speculate about what it might be like to wear an artificial retina with millions of photoreceptors in a regular grid, but one thing seems clear: Such a device would not restore the experience of vision, no matter how many electrodes it contains. Without the retina’s sophisticated image processing, it might supply nothing but a rapid, confusing stream of information to the brain.
Taylor, the Oregon vision researcher, argues that simplistic artificial eyes could also cause stress. He reached this conclusion after asking subjects to look at various patterns, some simple and some fractal, and then describe how the images made them feel. He also measured physiological signs of stress, such as electrical activity in the skin. Compared with simple images, fractal images lowered stress levels by up to 60 percent. Taylor suspects the calming effect stems from the fact that our eye movements are fractal, too. Natural images, such as forests and clouds, are often fractal as well: Trees have large limbs off which sprout branches, off which grow leaves. Our vision is matched to the natural world.
An artificial retina that simply mirrors the detector in a digital camera would presumably allow people to see every part of their field of view with equal clarity. There would be no need to move their eyes around in fractal patterns to pick up information, Taylor notes, so there would be no antistress effect.
The solution, Taylor thinks, is to build artificial retinas that behave more like real eyes. Light sensors could be programmed with built-in feedbacks to sharpen the edges of objects, or clumped together to provide more detail at the center of the visual field. Even the mismatch between regular electrodes and irregular neurons may be surmountable: Taylor is developing new kinds of circuits that he hopes to incorporate into next-generation artificial eyes. His team builds these circuits so that they spontaneously branch, creating structures that Taylor dubs nanoflowers. Although nanoflowers do not exactly match the eye’s neurons, their geometry would similarly admit light and allow the circuits to contact far more neurons than a simple grid can.
Taylor’s work is an important reminder of how much progress scientists are making toward restoring lost vision, but also of how far they still have to go. The secret to success will be remembering not to take the camera metaphor too seriously: There is a lot more to the eye than meets the eye.