The Visual System and the Brain: Hubel and Wiesel Redux
I don't think many neuroscientists would dispute the statement that the work David Hubel and Torsten Wiesel began in the late 1950s and continued for the next 25 years provided the greatest single influence on the ways neuroscientists thought about and prosecuted studies of the brain during much of the second half of the twentieth century. Certainly, what they were doing had never been very far from my own thinking, even while working on the formation and maintenance of synaptic connections in the peripheral nervous system. To explain the impact of their work and to set the stage for understanding the issues discussed in the remaining chapters, I need to fill in more information about the visual system, what Hubel and Wiesel actually did, and how they interpreted it.
Presumably because we humans depend so heavily on vision, this sensory modality has for centuries been a focus of interest for natural philosophers and, in the modern era, neuroscientists and psychologists. By the time Hubel and Wiesel got into the game in the 1950s, a great deal was already known about the anatomy of the system and about the way light interacts with receptor cells in the retina to initiate the action potentials that travel centrally from retina to cortex, ultimately leading to what we see. The so-called primary visual pathway (Figure 7.1) begins with the two types of retinal receptors, rods and cones, and their transduction of light energy.
Figure 7.1 The primary visual pathway carries information from the eye to the regions of the brain that determine what we see. The pathway entails the retinas, optic nerves, optic tracts, dorsal lateral geniculate nuclei in the thalamus, optic radiations, and primary (or striate) and adjacent secondary (or extrastriate) visual cortices in each occipital lobe at the back of the brain (see Figures 7.2 and 7.3). Other central pathways to targets in the brainstem (dotted lines) determine pupil diameter as a function of retinal light levels, organize and motivate eye movements, and influence circadian rhythms. (After Purves and Lotto, 2003)
The visual processing that rods initiate is primarily concerned with seeing at very low light levels, whereas cones respond only to greater light intensities and are responsible for the detail and color qualities that we normally think of as defining visual perception. However, the primary visual pathway is anything but simple. Following the extensive neural processing that takes place among the five basic cell classes found in the retina, information arising from both rods and cones converges onto the retinal ganglion cells, the neurons whose axons leave the retina in the optic nerve. The major targets of the retinal ganglion cells are the neurons in the dorsal lateral geniculate nucleus of the thalamus, which project to the primary visual cortex (usually referred to as V1 or the striate cortex) (Figure 7.2).
Figure 7.2 Photomicrograph of a section of the human primary visual cortex, taken in the plane of the face (see Figure 7.1). The characteristic myelinated band, or stria, is why this region of cortex is referred to as the striate cortex (myelin is a fatty material that invests most axons in the brain and so stains darkly with reagents that dissolve in fat, such as the one used). The primary visual cortex occupies about 25 square centimeters (about a third of the surface area of a dollar bill) in each cerebral hemisphere; the overall area of the cortical surface for the two hemispheres together is about 0.8 square meters (or, as my colleague Len White likes to tell students, about the area of a medium pizza). Most of the primary visual cortex lies within a fissure on the medial surface of the occipital lobe called the calcarine sulcus, which is also shown in Figure 6.1B. The extrastriate cortex that carries out further processing of visual information is immediately adjacent (see Figure 7.3). (Courtesy of T. Andrews and D. Purves)
Although the primary visual cortex (V1) is the nominal terminus of this pathway, many of the neurons there project to additional areas in the occipital, parietal, and temporal lobes (Figure 7.3). Neurons in V1 also interact extensively with each other and send information back to the thalamus, where much processing occurs that remains poorly understood. Because of the increasing integration of information from other brain regions in the visual cortical regions adjacent to V1, these higher-order cortical processing regions (V2, V3, and so on) are called visual association areas. Taken together, they are also referred to as extrastriate visual cortical areas because they lack the anatomically distinct layer that creates the striped appearance of V1 (see Figure 7.2). In most conceptions of vision, perception is thought to occur in these higher-order visual areas adjacent to V1 (although note that what occurs means in this statement is not straightforward).
Figure 7.3 The higher-order visual cortical areas adjacent to the primary visual cortex, shown here in lateral (A) and medial (B) views of the brain. The primary visual cortex (V1) is indicated in green; the additional colored areas with their numbered names are together called the extrastriate or visual association areas and occupy much of the rest of the occipital lobe at the back of the brain (its anterior border is indicated by the dotted line). (After Purves and Lotto, 2003)
By the 1950s, much had also been learned about visual perception. The seminal figures in this aspect of the history of vision science were nineteenth-century German physicist and physiologist Hermann von Helmholtz, and Wilhelm Wundt and Gustav Fechner, who initiated the modern study of perception from a psychological perspective at about the same time. However, Helmholtz gave impetus to the effort to understand perception in terms of visual system physiology, and his work was the forerunner of the program Hubel and Wiesel undertook nearly a century later.
A good example of Helmholtz's approach is his work on color vision. At the beginning of the nineteenth century, British natural philosopher Thomas Young had surmised that three distinct types of receptors in the human retina generate the perception of color. Although Young knew nothing about cones or the pigments in them that underlie light absorption, he nevertheless contended in lectures he gave to the Royal Society in 1802 that three different classes of receptive "particles" must exist. Young's argument was based on what humans perceive when lights of different wavelengths (loosely speaking, lights of different colors) are mixed, a methodology that had been used since Isaac Newton's discovery a hundred years earlier that light comprises a range of wavelengths. Young's key observation was that most color sensations can be produced by mixing appropriate amounts of lights from the long-, middle-, and short-wavelength regions of the visible light spectrum (mixing lights is called color addition and is different from mixing pigments, which subtracts particular wavelengths from the stimulus that reaches the eye by absorbing them).
Young's theory was largely ignored until the latter part of the nineteenth century, when it was revived and greatly extended by Helmholtz and James Clerk Maxwell, another highly accomplished physicist interested in vision. The ultimately correct idea that humans have three types of cones with sensitivities (absorption spectra) that peak in the long, middle, and short wavelength ranges, respectively, is referred to as trichromacy, denoting the fact that most human color sensations can be elicited in normal observers by adjusting the relative activation of the three cone types (see Chapter 9). The further hypothesis that the relative activation explains the colors we actually see is called the trichromacy theory, and Helmholtz spotlighted this approach to explaining perception. Helmholtz's approach implied that perceptions (color perceptions, in this instance) are a direct consequence of the way receptors and the higher-order neurons related to them analyze and ultimately represent stimulus features and, therefore, the features of objects in the world. For Helmholtz and many others since that era, the feature that color perceptions represent is the nature of object surfaces conveyed by the spectrum of light they reflect to the eye.
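The logic of trichromacy lends itself to a small numerical sketch. In the code below, the cone peak wavelengths are roughly right, but the Gaussian spectral shapes and the bandwidth are assumptions made purely for illustration. The point is that the retina reduces any light, however complex its spectrum, to just three numbers, so two physically different lights that produce the same triplet of cone activations are perceptually indistinguishable:

```python
import math

# Illustrative Gaussian absorption spectra for the long- (L), middle- (M),
# and short-wavelength (S) cone classes; the peak wavelengths are roughly
# correct, but the shapes and bandwidth are assumptions of this sketch.
CONE_PEAKS = {"L": 560.0, "M": 530.0, "S": 420.0}  # nanometers
BANDWIDTH = 50.0  # nm; assumed spectral width

def sensitivity(cone, wavelength_nm):
    """Relative absorption of one cone class at a given wavelength."""
    peak = CONE_PEAKS[cone]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * BANDWIDTH ** 2))

def cone_activations(light):
    """Reduce a light (a list of (wavelength_nm, intensity) components)
    to the three numbers the cone classes report to the brain."""
    return {cone: sum(intensity * sensitivity(cone, wavelength)
                      for wavelength, intensity in light)
            for cone in CONE_PEAKS}

# A single-wavelength light and a two-component mixture each collapse to
# one (L, M, S) triplet; a color-matching experiment amounts to adjusting
# the mixture's intensities until the two triplets coincide.
print(cone_activations([(575.0, 1.0)]))
print(cone_activations([(650.0, 0.6), (530.0, 0.7)]))
```

On this view, Helmholtz and Maxwell's extension of Young's idea is a statement about dimensionality: whatever the physical spectrum, human color matching requires only three adjustable lights.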
This mindset that sensory systems represent the features of objects in the world was certainly the way I had supposed the sensory components of the brain to be working—and as far as I could tell, it was how pretty much everyone else thought about these issues in the 1960s and 1970s. By the same token, I took exploring the underlying neural circuitry (the work Hubel and Wiesel were undertaking) to be the obvious way to solve the problem of how the visual system generates what we see. The step remaining was the hard work needed to determine how the physiology of individual visual neurons and their connections in the various stations of the visual pathway were accomplishing this feat.
Using the extracellular recording method they had developed in Kuffler's lab at Johns Hopkins, Hubel and Wiesel were working their way up the primary visual pathway in cats and, later, in monkeys. At each stage in the pathway—the thalamus, primary visual cortex, and, ultimately, extrastriate cortical areas (see Figures 7.1–7.3)—they carefully studied the response characteristics of individual neurons in the type of setup that Figure 7.4 illustrates, describing the results in terms of what are called the receptive field properties of visual neurons. Their initial studies of neurons in the lateral geniculate nucleus of the thalamus showed responses that were similar to the responses of the retinal output neurons (retinal ganglion cells) that Kuffler had described. Despite this similarity, the information the axons carried from the thalamus to the cortex was not exactly the same as the information coming into the nucleus from the retina, indicating some processing by the thalamus. The major advances, however, came during the next few years as they studied the responses of nerve cells in the primary visual cortex. The key finding was that, unlike the relatively nondescript responses to light stimuli of visual neurons in the retina or the thalamus, cortical neurons showed far more varied and specific responses. On the surface, the nature of these responses seemed closely related to the features we end up seeing. For example, the rather typical V1 neuron illustrated in Figure 7.4 responds to light stimuli presented at only one relatively small locus on the screen (defining the spatial limits of the neuron's receptive field), and only to bars of light. In contrast, neurons in the retina or thalamus respond to any configuration of light that falls within their receptive field.
Moreover, many V1 neurons are selective for orientation and direction of movement, responding vigorously to bars only at or near a particular angle on the screen and moving in a particular direction. These receptive field properties were the beginning of what has eventually become a long list, including selective responses to the lengths of lines, different colors, input from one eye or the other, and the different depths indicated by the somewhat different views of the two eyes. Based on this rapidly accumulating evidence, it seemed clear that visual cortical neurons were indeed encoding the features of retinal images and, therefore, the properties of objects in the world.
Figure 7.4 Assessing the responses of individual neurons to visual stimuli in experimental animals (although the animal is anesthetized, the visual system continues to operate much as it would if the animal were awake). A) Diagram of the experimental setup showing an extracellular electrode recording from a neuron in the primary visual cortex of a cat (which is more anterior in the brain than in humans). By monitoring the responses of the neuron to stimuli shown on a screen, Hubel and Wiesel could get a good idea of what particular visual neurons normally do. B) In this example, the neuron being recorded from in V1 responds selectively to bars of light presented on the screen in different orientations; the cell fires action potentials (indicated by the vertical lines) only when the bar is at a certain location on the screen and in a certain orientation. These selective responses to stimuli define each neuron's receptive field properties. (After Purves, Augustine, et al., 2008)
Important as these observations were, amassing this foundational body of information about the response properties of visual neurons was not Hubel and Wiesel's only contribution. At each stage of their investigations, they used imaginative and often new anatomical methods to explore the organization of the thalamus, the primary visual cortex, and some of the higher-order visual processing regions. They also made basic contributions to understanding cortical development as they went along, work that might eventually stand as their greatest legacy. Hubel and Wiesel knew from the studies just described that neurons in V1 are normally innervated by thalamic inputs that can be activated by stimulating the right eye, the left eye, or both eyes (Figure 7.5). What would happen to the neural connections in the cortex if one eye of an experimental animal was closed during early development, depriving the animal of normal visual experience through that eye? Although most of the neurons in V1 are activated to some degree by both eyes (Figure 7.5A), when they closed one eye of a kitten early in life and studied the brain after the animal had matured (which takes about six months in cats), they found a remarkable change. Electrophysiological recordings showed that very few neurons could be driven from the deprived eye: Most of the cortical cells were now being driven by the eye that had remained open (Figure 7.5B). Moreover, the cats were behaviorally blind to stimuli presented to the deprived eye, a deficit that did not resolve even if the deprived eye was subsequently left open for months. The same manipulation in an adult cat—closing one eye for a long period—had no effect on the responses of the visual neurons. Even when they closed one eye for a year or more, the distribution of V1 neurons driven by one eye and the animals' visual behavior tested through the reopened eye were indistinguishable from normal (Figure 7.5C). 
Therefore, between the time a kitten's eyes open (about a week after birth) and a year of age, visual experience determines how the visual cortex is wired, and does so in a way that later experience does not readily reverse.
Figure 7.5 The effect on cortical neurons of closing one eye in a kitten. A) The distribution observed in the primary visual cortex of normal adult cats by stimulating one eye or the other. Cells in group 1 are activated exclusively by one eye (referred to here as the contralateral eye), and cells in group 7 are activated exclusively by the other (ipsilateral) eye. Neurons in the other groups are activated to varying degrees by both eyes (NR indicates neurons that could not be activated by either eye). B) Following closure of one eye from one week after birth until about two and a half months of age, no cells could be activated by the deprived (contralateral) eye. C) In contrast, a much longer period of monocular deprivation in an adult cat (from 12 to 38 months of age in this example) had little effect on ocular dominance. (After Purves, Augustine, et al., 2008)
The clinical, educational, and social implications of these results are hard to miss. In terms of clinical ophthalmology, early deprivation in developed countries is most often the result of strabismus, a misalignment of the two eyes caused by deficient control of the direction of gaze by the muscles that move the eye. This problem affects about 5% of children. Because the resulting misalignment produces double vision, the response of the visual system in severely afflicted children is to suppress the input from one eye (it's unclear exactly how this happens). This effect can eventually render children blind in the suppressed eye if they are not treated promptly by intermittently patching the good eye or intervening surgically to realign the eyes. A prevalent cause of visual deprivation in children in underdeveloped countries is a cataract (opacification of the lens) caused by diseases such as river blindness (an infection caused by a parasitic worm) or trachoma (an infection caused by a small, bacteria-like organism). A cataract in one eye is functionally equivalent to monocular deprivation in experimental animals, and this defect also results in an irreversible loss of visual acuity in the untreated child's deprived eye, even if the cataract is later removed. Hubel and Wiesel's observations provided a basis for understanding all this. In keeping with their findings in experimental animals, it was also well known that individuals deprived of vision as adults, such as by accidental corneal scarring, retain the ability to see when treated by corneal transplantation, even if treatment is delayed for decades.
The broader significance of this work for brain function is also readily apparent. If the visual system is a reasonable guide to the development of the rest of the brain, then innate mechanisms establish the initial wiring of neural systems, but normal experience is needed to preserve, augment, and adjust the neural connectivity present at birth. In the case of abnormal experience, such as monocular deprivation, the mechanisms that enable the normal maturation of connectivity are thwarted, resulting in anatomical and, ultimately, behavioral changes that become increasingly hard to reverse as animals grow older. This gradually diminishing cortical plasticity as we or other animals mature provides a neurobiological basis for the familiar observation that we learn anything (language, music, athletic skills, cultural norms) much better as children than as adults, and that behavior is much more susceptible to normal or pathological modification early in development than later. The implications of these further insights for early education, for learning and remediation at later stages of life, and for legal policy are self-evident.
Hubel and Wiesel's extraordinary success (Figure 7.6) was no doubt the result of several factors. First, as they were always quick to say, they were lucky enough to have come together as fellows in Kuffler's lab shortly after he had determined the receptive field properties of neurons in the cat retina—the approach that, with Kuffler's encouragement, they pursued as Kuffler followed other interests (an act of generosity not often seen when mentors latch on to something important). Second, they were aware of and dedicated to the importance of what they were doing; the experiments were difficult and often ran late into the night, requiring an uncommon work ethic that their medical training helped provide. Finally, they respected and complemented each other as equal partners. Hubel was the more eccentric of the two, and I always found him somewhat daunting. He had been an honors student in math and physics at McGill, and whether solving the Rubik's Cube that was always lying around the lunchroom or learning how to program the seemingly incomprehensible PDP-11 computer that he had purchased for the lab, he liked puzzles and logical challenges. He asked tough and highly original questions in seminars or lunchroom conversations and made everyone a little uneasy by taking snapshots with a miniature camera about the size of a cigarette lighter that he carried around. He was hard to talk to when I sought him out for advice as a postdoc, and I couldn't help feeling that his characterization of lesser lights as "chuckleheads" was probably being applied to me. These quirks aside, he is the neuroscientist I have most admired over the years.
Figure 7.6 David Hubel and Torsten Wiesel talking to reporters in 1981, when they were awarded that year's Nobel Prize in Physiology or Medicine. (From Purves and Lichtman, 1985)
Although Wiesel shared Hubel's high intelligence and dedication to the work they were doing, he was otherwise quite different. Open and friendly with everyone, he had all the characteristics of the natural leader of any collective enterprise. Torsten became the chair of the Department of Neurobiology at Harvard when Kuffler stepped down in 1973 and, after moving to Rockefeller University in 1983, was eventually appointed president there, a post he served in with great success from 1992 until his retirement in 1998 at the age of 74. In contrast, Hubel had been appointed chair of the Department of Physiology at Harvard in 1967, but he quit after only a few months and returned to the Department of Neurobiology when he apparently discovered that he did not want to handle all the problems that being a chair entails. (Other reasons might have contributed, based on the response of the Department of Physiology faculty to his managerial style, but if so, I never heard them discussed.)
This brief summary of what Hubel and Wiesel achieved gives some idea of why their influence on the trajectory of "systems-level" neuroscience in the latter decades of the twentieth century was so great. The wealth of evidence they amassed seemed to confirm Helmholtz's idea that perceptions are the result of the activity of neurons that effectively detect and, in some sense, report and represent in the brain the various features of retinal images. This strategy seems eminently logical; any sensible engineer would presumably want to make what we see correspond to the real-world features of the objects that we and other animals must respond to with visually guided behavior. This was the concept of vision that I took away from the course that Hubel and Wiesel taught us postdocs and students in the early 1970s. However, I should hasten to add that feature detection as an explicit goal of visual processing was never discussed. Hubel and Wiesel appeared to assume that understanding the receptive field properties of visual neurons would eventually explain perception, and that further discussion would be superfluous.
In light of all this, it will seem odd that the rest of the book is predicated on the belief that these widely accepted ideas about how the visual brain works are wrong. The further conclusion, that trying to understand what we see by learning more about the responses of visual neurons is likely to be a dead end, might seem even stranger. Several things conspired to sow seeds of doubt after years of enthusiastic, if remote, acceptance of the overall program that Hubel and Wiesel had been pursuing. The first problem was the increasing difficulty that they and their many acolytes were having when trying to make sense of the electrophysiological and anatomical information that had accumulated by the 1990s. In the early stages of their work, the results obtained seemed to beautifully confirm the intuition that vision entails sequential and essentially hierarchical analyses of retinal image features leading to the neural correlates of perception (see Figure 7.3). The general idea was that the luminance values, spectral distributions (colors), angles, line lengths, depth, motion, and other features were abstracted by visual processing in the retina, thalamus, and primary visual cortex, and subsequently recombined in increasingly complex ways by neurons at progressively higher stages in the visual cortex. These combined representations in the extrastriate regions of the visual system would lead to the perception of objects and their qualities by virtue of further activity elicited in the association cortices in the occipital lobes and adjacent areas in the temporal and parietal lobes.
A particularly impressive aspect of Hubel and Wiesel's observations in the 1960s and 1970s was that the receptive field properties of the neurons in the lateral geniculate nucleus of the thalamus could nicely explain the properties of the neurons they contacted in the input layer of the primary visual cortex, and that the properties of these neurons could explain the responses of the neurons they contacted at the next higher level of processing in V1. The neurons in this cortical hierarchy were referred to as "simple," "complex," and "hypercomplex" cells, underscoring the idea that the features abstracted from the retinal image were progressively being put back together in the cortex for the purpose of perception. Although I doubt Hubel and Wiesel ever used the phrase, the rationale for the initial abstraction was generally assumed to be engineering or coding efficiency.
These findings also fit well with their anatomical evidence that V1 is divided into iterated modules defined by particular response properties, such as selectivity for orientation (see Figure 7.4) or for information related to the left or right eye (see Figure 6.2A). By the late 1970s, Hubel and Wiesel had put these several findings together in what they called the "ice cube" model of visual cortical processing (Figure 7.7). The suggestion was that each small piece of cortex, which they called a "hypercolumn," contained a complete set of feature-processing elements. But as the years passed and more evidence accumulated about visual neuronal types, their connectivity, and the organization of the visual system, the concept of a processing hierarchy in general and the ice cube model in particular came to seem like a square peg being pounded into a round hole.
Figure 7.7 The ice cube model of primary visual cortical organization. This diagram illustrates the idea that units roughly a square millimeter or two in size (the primary visual cortex in each hemisphere of a rhesus monkey brain is about 1,000 square millimeters) each comprise superimposed feature-processing elements, illustrated here by orientation selectivity over the full range of possible angles (the little lines) co-mapped with right and left eye processing stripes (indicated by L and R; see Figure 6.2A). (After Hubel, 1988)
A second reason for suspecting that more data about the receptive field properties of visual neurons and their anatomical organization might not explain perception was the mountain of puzzling observations about what people actually see, coupled with philosophical concerns about vision that had been around for centuries. Taking such things seriously was a path that a self-respecting neuroscientist followed at some peril. But vision has always demanded that perceptual and philosophical issues be considered, and the cracks that had begun to appear in the standard model of how the visual brain was supposed to work encouraged a reconsideration of some basic concerns. One widely discussed issue was the question of "grandmother cells," a term coined by Jerry Lettvin, an imaginative and controversial neuroscientist at MIT who liked the role of intellectual and (during the Vietnam War era) social provocateur. If the features of retinal images were being progressively put back together in neurons with increasingly more complex properties at higher levels of the brain, didn't this imply the existence of nerve cells that would ultimately be ludicrously selective (meaning neurons that would respond to only the retinal image of your grandmother, for example)? Although the question was facetious, many people correctly saw it as serious. The ensuing debate was further stimulated by the discovery in the early 1980s of neurons in the association areas of the monkey brain that did, in fact, respond specifically to faces (an area in the human temporal lobe that responds selectively to faces has since been well documented). A related question concerned the binding problem. 
Even if visual neurons don't generate perceptions by specifically responding to grandmothers or other particular objects (which most people agreed made little sense), how are the various features of any object brought together in a coherent, instantaneously generated perception of, for example, a ball that is round, chartreuse, and coming at you in a particular direction from a certain distance at a certain speed (think tennis)? Although purported answers to the binding problem were (and still are) taken with a grain of salt, most neuroscientists recognized that such questions would eventually need to be answered. Although a lot of my colleagues were not very interested in debates of this sort, I had always had a weakness for them and was glad to see these issues raised as serious concerns in neuroscience. After all, I had been a philosophy major in college and had left clinical medicine because I wanted to understand how the brain worked, not just how to understand its maladies or the properties of its constituent cells.
By the mid-1990s, I began to be bothered by another philosophical issue relevant to perception that was ultimately decisive in reaching the conclusion that mining the details of visual neuronal properties would never lead to an understanding of perception or its underlying mechanics. Western philosophy had long debated how the "real world" of physical objects can be "known" by using our senses. Positions on this issue had varied greatly, the philosophical tension in recent centuries being between thinkers such as Francis Bacon and René Descartes, who supposed that absolute knowledge of the real world is possible (an issue of some scientific consequence in modern physics and cosmology), and others such as David Hume and Immanuel Kant, who argued that the real world is inevitably remote from us and can be appreciated only indirectly. The philosopher who made these points most cogently with respect to vision was George Berkeley, an Irish nobleman, bishop, tutor at Trinity College in Dublin, and card-carrying member of the British "Empiricist School." In 1709, Berkeley had written a short treatise entitled An Essay Towards a New Theory of Vision in which he pointed out that a two-dimensional image projected onto the receptive surface of the eye could never specify the three-dimensional source of that image in the world (Figure 7.8). This fact and the difficulty it raises for understanding the perception of any image feature is referred to as the inverse optics problem.
Figure 7.8 The inverse optics problem. George Berkeley pointed out in the eighteenth century that the same projected image could be generated by objects of different sizes, at different distances from the observer, and in different physical orientations. As a result, the three-dimensional source of any projected image is inevitably uncertain. Note that the problem is not simply that retinal images are ambiguous; the deeper issue is that the real world is directly unknowable by means of any logical operation on a projected image. (After Purves and Lotto, 2003)
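Berkeley's geometrical point can be made concrete with a few lines of arithmetic. The visual angle subtended by an object of height h at distance d is 2·arctan(h/2d), so any family of objects whose height scales in proportion to distance projects exactly the same image onto the eye; the particular sizes and distances below are arbitrary examples chosen for illustration:

```python
import math

def visual_angle(height, distance):
    """Angle (in degrees) subtended at the eye by an object of the given
    height at the given distance, from similar-triangle geometry."""
    return math.degrees(2 * math.atan(height / (2 * distance)))

# Three physically different objects that are indistinguishable on the
# retina: height grows in proportion to distance, so the projected
# angle is identical in every case.
for height, distance in [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0)]:
    print(f"{height:5.1f} m object at {distance:6.1f} m subtends "
          f"{visual_angle(height, distance):.3f} degrees")
```

Running the projection forward is trivial; the inverse problem is that no computation on the resulting angle alone can recover which of the infinitely many (height, distance) pairs produced it.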
In the context of biology and evolution, the significance of the inverse problem is clear: If the information on the retina precludes direct knowledge of the real world, how is it that what we see enables us to respond so successfully to real-world objects on the basis of vision? Helmholtz was aware of the problem and argued that vision had to depend on learning from experience in addition to the information supplied by neural connections in the brain determined by inheritance. However, he thought that analyzing image features was generally good enough and that a boost from empirical experience (empirical experience, for him, was what we learn about objects in life through trial-and-error interactions) would contend with the inverse problem. This learned information would allow us to make what Helmholtz referred to as "unconscious inferences" about what an ambiguous image might represent. Some vision scientists seemed to take Helmholtz's approach to the inverse optics problem as sufficient, but many simply ignored it. The problem was rarely, if ever, mentioned in the discussions of vision I had been party to over the years. In particular, I had never heard Hubel and Wiesel mention it or seen it referred to in their papers.
At the same time, I was increasingly aware in the 1990s, as anyone who delves into perception must be, of an enormous number of visual illusions. An illusion refers to a perception that fails to match a physical measurement made by using an instrument of some sort: a ruler, a protractor, a photometer, or some more complex device that makes direct measurements of object properties, thereby evading the inverse problem. In using the term illusion, the presumption in psychology texts and other literature is that we usually see the world "correctly," but sometimes a natural or contrived stimulus fools us so that our perception and the measured reality underlying the stimulus fail to align. But if what Berkeley had said was right, analysis of a retinal image could not tell the brain anything definite about what objects and conditions in the world had actually generated an image. It seemed more likely that all perceptions were equally illusory constructions produced by the brain to achieve biological success in the face of the inverse problem. If this was the case, then the evolution of visual systems must have been primarily concerned with solving this fundamental challenge. Surprisingly, no one seemed to be paying much attention to this very large spanner that Berkeley had tossed into logical and analytical concepts of how vision works.
I didn't have the slightest idea of how the visual wiring described by Hubel and Wiesel and their followers might be contending with the inverse problem. But I was pretty sure that it must be by means of a very different strategy from the one that had been explicitly or implicitly dominating my thinking (and most everyone else's) since the 1960s. If understanding brain function was going to be possible, exploring how vision contends with the inverse problem seemed a very good place to start.