Chapter Two

The Mind-Brain Conundrum

We have a genuine dilemma on our hands, the logician tells us, when we can assert the truth of two mutually incompatible propositions or statements. By this standard, science could have been in a perilous philosophical position, had any of its critics seized upon memory research. On the one hand, specific functions, and presumably memory, seemed to be localized in particular parts of the brain. On the other, memory defied careful attempts to isolate or fraction it with a scalpel. Two equally convincing and opposite conclusions had emerged from the clinics and the laboratories. Because these conclusions represented the entire memory-brain universe of discourse, their simultaneous validity created a conundrum.

I don't mean to suggest that scientists sat around in smoking parlors, speakeasies, or faculty clubs lamenting (or trying to solve) the dilemma that memory posed. Judging from my own former frame of mind, I doubt that anyone was fully aware of the philosophical problem. If scientists assumed any position at all, it was at either of two poles, structuralists at the one, holists at the other, atomization and localization the summum bonum of the former, distributiveness and equipotentiality that of the latter.

Holism, as a generic doctrine, asserts that a universe as a whole cannot be reduced to the sum of its parts; try to dissect the parts and, as with the value of 22/7, or roughly 3.14, you don't come out with discrete numbers or elements but with a left-over fraction. Holism entered the study of the brain in the 1820s, following the experiments and speculations of Pierre Flourens.[1]

A structuralism, idealistic or materialistic, can be identified by the idea that wholes are indeed the sums of discrete parts--atoms. Thus the neural structuralist would insist that memories reduce to individual units or bits--like 22 or 23 rather than 22/7. And to a structuralist, a memory would be a structure of such discrete elements. Add materialism to the theory and you'd want to store those elements, each in its own structure of the brain.

To the holist, mind depends on the brain as a whole; the mental cosmos cannot be mapped like the surface of the earth or broken into subunits, this bit going here, that bit over there. Historically, holists have based their beliefs on the survival of cognition and the retention of memory after massive injury to the brain. Structuralists conceded that the brain's programs are not easily found; but holists consistently failed to link their theories to physical reality. To me, structuralist that I was, holism looked at best like metaphysics, and at worst like magic.


Let me say a few words about the genesis of my former faith in structuralism, and about anatomy as it is practiced in the latter half of the twentieth century. Anatomy of course includes what rests on the dissecting table, but the scope of the science goes far beyond this. Anatomy is an attempt to explain living events by observing, analyzing, and, if necessary, conceptualizing the body's components, whether the object of study happens to be a genital organ or the genes within its cells, whether the search calls for a sophisticated Japanese electron microscope or the stout crucible-steel blade of a Swedish butcher knife. Anatomy rests upon a belief shared by many in our culture, in and out of science. Robert Traver's Anatomy of a Murder and Ashley Montagu's Anatomy of Swearing express metaphorically what many in our day embrace epistemologically: in order to find out how something really works, take it apart. What could seem more reasonable than that?

As a student entering science in the 1950s, during some of the most exciting moments in intellectual history, I could see no basic philosophical difference between what anatomists were seeking to discover and what other scientists, with different titles, were actually finding out about the cell and the molecular side of life. In the 1950s and 1960s, scientists in large numbers and from diverse fields had begun accepting the anatomist's already ancient credo: physiological functions explicitly and specifically reduce to the interplay between discrete structural entities. Structure was suddenly being used to account for events that only a generation before seemed beyond reason: how genes maintain a molecular record of heredity; how muscles contract, cells divide, the sperm penetrates the egg; how a cell's membrane actively picks and chooses from the body's milieu what shall and shall not pass across its boundary; how an irritated nerve cell generates and propagates a neural signal and then transmits the message to the next cell in the network; how cells fuel and refuel their insatiable demands for energy. Those investigators who abided by the structural faith were coming up with answers any child of our thing-bound culture could easily comprehend--and they were winning the Nobel Prizes in the process. The intellectual environment in which I grew up vindicated every fundamentalism of my chosen field, virtually everywhere anyone chose to look. Everywhere except the brain.


Judging from artifacts, club-wielding cave men seemed to know that something essential to behavior existed inside the skull of a foe or quarry. Physicians of ancient Egypt correlated malfunctioning minds with diseased brains. Gladiators wore helmets, and those who lost them sometimes contributed personally to the early anecdotal wisdom about the brain's biology. Phrenologists, seeking to map the facets of the human personality over the surface of the cerebrum, laid the very foundations for modern neuroanatomy. High-velocity rifle bullets, which could inflict discrete wounds, afforded mid-nineteenth-century battlefield surgeons insights into the brain that they and others pursued at the laboratory bench. And the study of the nervous system in our own times can be traced directly to the science and surgery of Victorian and Edwardian Europe.

Today there are entire libraries, whole university departments, and specialized learned societies devoted exclusively to storing, disseminating, and promoting the wisdom of the "neurosciences." So vast is knowledge about the nervous system that the study of human neuroanatomy alone requires a completely separate course. Facts abound on the brain's chemical composition, anatomical organization, and electrophysiological activities. The main routes for incoming sensory messages, for example, have been plotted and replotted--the signals enabling us to see a sunrise, hear a sparrow, smell a rose, taste a drop of honey, feel the sting of a wasp, appreciate the texture of another human hand. The images of these words, for instance, land on the retinas of the reader's eyes and trigger well-worked-out photochemical reactions, which, in turn, detonate electrical signals within the receptor cells--the rods and cones. The retina itself begins sorting, integrating, and encoding the signals into messages, which it transmits through highly specific routes via the optic nerves and optic tracts to relays in the core of the brain. From the relays, the message moves to specific cells in what are called the occipital lobes of the cerebrum, and there establishes point-for-point communication between loci out in the visual fields, and particular input stations in the brain.

Much is known, too, of outflow pathways used in carrying direct orders to the effectors of our overt behavior--the muscles and glands that let us walk, talk, laugh, blush, cry, sweat, or give milk. In spite of admittedly vast gaps among the facts, enough is known today to fill in many of the blanks with plausible hypotheses about circuits used in language, emotions, arousal, and sleep--hypotheses for many of our actions and even a few of our feelings and thoughts. Damage to a known pathway yields reasonably predictable changes or deficits in behavior, perception, or cognition. Neurological diagnoses would be impossible, otherwise. For example, a person with partial blindness involving the upper, outer sector of the visual field, with accompanying hallucinations about odors and with a history of sudden outbursts of violence, quite likely has a diseased temporal lobe of the cerebrum--the forward part of the temporal lobe on the side opposite the blindness, in fact. Or a person who suffers a stroke, cannot speak but understands language, and is paralyzed on the right side of the body almost assuredly has suffered damage at the rear of the cerebrum's frontal lobe--the left frontal lobe, to be precise.



Up to a point, in other words, the brain fits neatly and simply into the anatomical scheme of things. But throughout history, the battle-ax, shrapnel, tumors, infections, even the deliberate stroke of the surgeon's knife, have paralyzed, blinded, deafened, muted, and numbed human beings, via the brain, without necessarily destroying cognition, erasing memory, or fractionating the mind. It wasn't that anatomists couldn't link specific functions to particular parts of the brain. Far from it. But when we reached for the dénouement, for an explanation of the most pivotal features of the brain, the structural argument teetered under the weight of contradictory evidence.


Consider a paradox about vision known as macular sparing. No part of the human brain has been worked on more exhaustively and extensively than the visual system. Nor, seemingly, could any structural realist ask for a more explicit relationship between form and function than one finds there. Every locus in our fields of view corresponds virtually point-for-point with microscopic routes through our visual pathways. As you can demonstrate to yourself with gentle pressure on an eyelid, you form an image of right and left fields in both retinas. (The nose blocks the left eye's view of the far right periphery, and the right eye's view of the far left.) For optical reasons, however, the images of the field do a 180-degree rotation in projecting onto the two retinas. Thus the left field registers on the left eye's inner half and the right eye's outer half; vice versa for the right field. The fibers from the retina, which form the optic nerve, strictly obey the following rule: those from the inner half of the retina cross to the opposite side of the brain; those from the outer half do not. Thus all information about the visual fields splits precisely down the middle and flashes to the opposite side of the brain. Corresponding fibers from the two eyes join each other in the centers of our heads (at a structure known as the optic chiasm) and form what are called the optic tracts--the right tract carrying messages about the left field exclusively, and the left tract carrying information about the right field. If an optic tract is totally destroyed, we become blind to the entire opposite visual field.
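The crossing rule is simple enough to state as a few lines of code. What follows is only a toy sketch of the anatomy described above; the function name and labels are invented for illustration:

```python
def target_hemisphere(eye, retinal_half):
    """Which side of the brain a retinal fiber reaches.

    Nasal (inner-half) fibers cross at the optic chiasm;
    temporal (outer-half) fibers stay on their own side.
    """
    if retinal_half == "nasal":
        return "right" if eye == "left" else "left"   # crosses over
    return eye                                        # stays on the same side

# The left visual field lands on the left eye's nasal half and the
# right eye's temporal half -- both routes end in the right hemisphere:
assert target_hemisphere("left", "nasal") == "right"
assert target_hemisphere("right", "temporal") == "right"
```

The point of the sketch is that the rule leaves no ambiguity: each half of the visual field is delivered, whole, to the opposite side of the brain.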

Optic tracts end where they make connections with a highly organized collection of cells known as the LGB (lateral geniculate body). The LGB has the job of communicating visual signals to the visual cortex of the occipital lobe. Now there is every anatomical reason to predict that destruction of one occipital lobe will split a visual field map into seen and blank halves, as sometimes occurs.

Usually, though, a person with a lesion beyond the LGB will lose the peripheral parts of the opposite field but retain a whole, un-split view of the central field. The macula, a yellowish spot on the center of the retina, receives the projection from the central field. Thus the term macular sparing means that an otherwise split visual field remains un-split on both sides of the central zone, which is precisely as it should not be.
If the visual pathways were haphazardly arranged, with fibers coursing everywhere, macular sparing would be understandable. But clinical records, autopsy reports, the results of direct stimulation of conscious human brains during surgery, and probings into ape and monkey brains with minute electrodes--all means of gathering evidence--consistently show that the visual system is minutely precise in organization. For a while, some authors explained away macular sparing by assuming that central retinal fibers violate the crossing rule. But in 1934, a famous ophthalmologist, Stephen Polyak, studied the chimpanzee's visual pathways and found that central fibers do obey the crossing rules, just like fibers from the rest of the retina: nasals cross, temporals don't! And repeated searches of human pathways have led to an identical conclusion--namely, that crossing doesn't explain macular sparing.[2]

Until 1940, one could assume either or both of two additional hypotheses to explain macular sparing: that of partial survival of the visual pathways, and/or that of sloppy examination of the visual fields. But in that year, Ward Halstead and his colleagues published data in the Archives of Ophthalmology that eliminated these simple hypotheses as well.

Halstead's group reported the case history of a twenty-five year old filing clerk who arrived at a clinic in Chicago in the autumn of 1937 with a massive tumor in her left occipital lobe. Summarizing what the surgeons had to cut out to save the woman's life, Halstead et al. wrote, "The ablation had removed completely the left striate [visual] cortex and areas 18 and 19 of the occipital lobe posterior to the parieto-occipital fissure." Translated, this means that the young woman lost her entire left occipital lobe--the entire half of her visual brain onto which the right visual field projects and in which information is processed into higher-order percepts. Visual field maps showed that the operation caused blindness in the young woman's right visual field (homonymous hemianopsia, as it is called), but with macular sparing.


If macular sparing always occurred after occipital-lobe damage, one might explain the phenomenon by assuming that the macular-projection area of one LGB somehow sends fibers to both occipital lobes. But the Halstead article nullified this explanation, too, with an almost identical case history of a twenty-two year old stenographer. A patient in the same hospital, she also had a massive tumor, but in her right occipital lobe. After surgery, visual field mapping showed that she was totally blind to the left field of view--without macular sparing!

In other words, not only did Halstead's group document macular sparing as a genuine anatomical paradox; they even showed that one cannot apply simple, linear cause-and-effect reasoning to it: in the case of the two young women, the same antecedents had produced decidedly different consequences.

In no way does macular sparing detract from the orderliness of the visual system. Indeed, this was part of the mystery. Specific places on the retina excite particular cells in both the LGB and the visual cortex. When stimulated, the macular zone on the retina does excite specific cells of the occipital cortex--in the rear tip of the lobe, to be exact--on the side opposite that half of the visual field. But the phenomenon of macular sparing (and thousands of people have exhibited the sign) shows that there is not an exclusive center in the brain for seeing the central field of view. If the message can make it into the LGB, it may make it to the mind.


But what about the mind after the loss of a visual lobe of the brain? Halstead's group had something to say about this, too. The twenty-two year old secretary had scored 133 points on an IQ test before surgery. A month after the operation, she again scored 133. And five weeks after the operation, she left the hospital and returned to her job--as a secretary, no less! About the filing clerk, whose IQ also remained unchanged, Halstead et al. wrote, "Immediately on awakening from the anesthetic, the patient talked coherently and read without hesitation. At no time was there any evidence of aphasia [speech loss] or alexia [reading deficits]."

Thus, in spite of the loss of half the visual areas of their cerebrums, despite a halved, or nearly halved, view of the external world, both young women retained whole visual memories. They are far from unique. Three floors below where I sit, there is an eye clinic whose filing cabinets contain thousands of visual-field maps and case upon case documenting the survival of a complete human mind on the receiving end of severely damaged human visual pathways.

The structuralists attempted to dodge Halstead's evidence by insisting that visual cognition and memory must lie outside the occipital lobe--somewhere! Others just plain ignored it (and still do.)


Nor is vision the sole brain function whose story begins true to an anatomist's expectations only to end in uncertainty. Take language. Certain speech and reading deficits correlate with damage to particular areas of the brain (and provide important diagnostic signs). Broca's motor speech aphasia most often results from blockage or hemorrhage of the arteries supplying the rear of the frontal lobe, and occurs in the left cerebral hemisphere about 80 to 85 percent of the time. In Broca's aphasia, a person understands language, communicates nonverbally, and writes (if not also paralyzed), but cannot articulate or speak fluently. (A sudden drop in fluency may, in fact, signal an impending stroke.) In contrast, another speech aphasia is associated with damage to the temporal lobe. Known as Wernicke's aphasia, this malady is characterized not by apparent loss of fluency but by absence of meaning in what the person says. The words don't add up to informative sentences; or the person may have problems naming familiar objects, and call a cup an ashtray, for instance, or be unable to name a loved one.

Broca's and Wernicke's speech areas intercommunicate via a thick arching bundle (called the arcuate bundle). When damage to this pathway disconnects the two speech areas, language fluency and comprehension are not affected; however, the sufferer cannot repeat newly presented phrases.

Alexia, the inability to read, and its partial form, dyslexia, may suggest a tumor or arteriosclerosis in an area directly in front of the occipital lobe. Or, if a person begins to have problems writing down what he or she hears, a lesion may be developing in a span of brain between the occipital lobe and Wernicke's area.
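The diagnostic correlations in the last few paragraphs amount to a rough lookup table. The sketch below is an illustrative paraphrase of the text, not a clinical reference; the variable name and wording of the entries are my own:

```python
# Lesion site -> typical language deficit, as summarized in the text above.
aphasia_correlates = {
    "rear of the frontal lobe (usually left)":
        "Broca's aphasia: fluency lost, comprehension and writing spared",
    "temporal lobe (Wernicke's area)":
        "Wernicke's aphasia: fluent but meaningless speech, naming errors",
    "arcuate bundle":
        "fluency and comprehension intact, but cannot repeat new phrases",
    "area directly in front of the occipital lobe":
        "alexia or dyslexia: reading impaired",
    "between the occipital lobe and Wernicke's area":
        "trouble writing down what is heard",
}
```

A table like this is exactly what makes the structural case look strong at first; the trouble, as the next paragraph notes, is how often individual cases refuse to follow it.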

In other words, anatomy functions in language as it does in vision. And those who tend to our health ought to be well informed about what a particular malfunction may portend. But aphasias do not supply evidence for a theory of mind. Damage to a specific cerebral area does not always produce the anticipated deficit. Individuals vary. Many malfunctions correlate with no detectable anatomical lesion (this is often true in dyslexia). And whereas massive cerebral damage (for instance, surgical removal of an entire cerebral hemisphere) may have only marginal effects on one person, a pinprick in the same area may destroy another's personality. Scientific law, qua law, cannot be founded on maybes and excuses. Yet in every bona fide case the structuralist has been able to make for the anatomy of memory, the holist has managed to find maybes--and excuses. One of the best illustrations of this occurs in what is called "split-brain" research.


The two cerebral hemispheres intercommunicate via a massive formation of nerve fibers called the corpus callosum. A splitting headache marks roughly where the corpus callosum crosses the midline (although pain signals travel along nerves in blood vessels and connective tissue wrappings of the brain). A feature of mammals, the corpus callosum develops in our embryonic brain as we start acquiring mammalian form. On occasion, however, a person is born without a corpus callosum.

In spite of its relatively large mass--four inches long, two inches wide, and as thick as the sole of a shoe--the corpus callosum received surprisingly little attention until the 1950s. But in the 1960s, it made the newspapers. When surgeons split the corpus callosum, they produced two independent mentalities within one human body.

Surgeons had cut into the corpus callosum many years earlier, in an attempt to treat epilepsy. In fact, brain surgery developed in the 1880s, after Sir Victor Horsley found that cutting into the brains of laboratory animals could terminate seizures. Until the drug Dilantin came along in the 1930s, surgery, when it worked at all, was the only effective therapy for epilepsy. In epilepsy, convulsions occur when electrical discharges sweep the surface of the brain. A diseased locus may initiate the discharges, and removal of the zone may reduce or even eliminate seizures. Often, just an incision works, possibly by setting up countercurrents and short-circuiting the discharge. At any rate, splitting the entire corpus callosum seemed too drastic a measure. What would two half-minds be like?

In the 1950s, Ronald Meyers, a student of Roger Sperry's at California Institute of Technology, showed that cats can lead a fairly normal life even after total disconnection of their cerebral hemispheres. Sperry and his associates soon extended their investigations to include the monkey. The ensuing success prompted two California neurosurgeons, Joseph Bogen and P. J. Vogel, to try the split-brain operation on human beings.

Bogen and Vogel's first patient was an epileptic middle-aged World War II veteran. When he awoke from surgery, he couldn't talk. No doubt to the relief of everyone concerned, his speech did return the next day. His seizures could be controlled. And to outward appearances, he and others who have undergone the operation are "just folks," as Michael Gazzaniga, another former student of Sperry's, said during a lecture.

But the split-brain operation has profound effects, although it took careful observation to detect them. Recall that an object in the left visual field signals the right hemisphere, and vice versa. Taking advantage of this, and presenting visual cues in one field at a time, Gazzaniga discovered that most people who had undergone split-brain operations could read, write, and do arithmetic normally, but only with their left cerebral hemispheres. When tested in their right hemispheres, they seemed illiterate, unable to write, and incapable of adding simple sums. Addressing a symposium a few years ago, Gazzaniga described a typical experiment. He held up the word HEART in such a way that H and E, presented in the left visual field, signaled the nonreading right hemisphere, while the rest of the word cued the left hemisphere. "What did you see?" Gazzaniga asked. His subject responded, "I saw ART." The right hemisphere seemed blind to words. But was the right hemisphere really blind? Worse, did it simply lack intelligence? Or even a human mind?
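The logic of the HEART demonstration can be modeled in a few lines. This is only a sketch under one simplifying assumption: a briefly flashed word splits cleanly at the fixation point, letters to the left going to the right hemisphere and the rest to the left. The function name is invented:

```python
def hemisphere_input(word, fixation_index):
    """Split a briefly flashed word at the fixation point.

    Letters left of fixation fall in the left visual field and so
    reach the right hemisphere; the remainder reaches the left.
    """
    return {
        "right_hemisphere": word[:fixation_index],
        "left_hemisphere": word[fixation_index:],
    }

seen = hemisphere_input("HEART", 2)
# Only the speaking left hemisphere can report what it saw:
print("I saw", seen["left_hemisphere"])   # prints: I saw ART
```

The "H" and "E" are not lost; they simply landed in a hemisphere that cannot speak for itself, which is the whole puzzle the next paragraphs take up.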

Gazzaniga soon found that the right side of the cerebrum functioned admirably in nonverbal situations. For instance, when shown a picture of a cup, in such a way that it cued the right hemisphere, the person could reach behind a screen, feel among a collection of objects, and find a cup. In fact, the right hemisphere could manifest profound intelligence and sardonic wit. When presented with a picture of a smoldering cigarette, one subject, instead of matching it with a cigarette, brought forth an ashtray.

Not only is the right side capable of humor, but various studies indicate that people tend to use this hemisphere to comprehend geometric form, textures, and music. It's as though, in most of us, the dominant left side does the mundane jobs of reading, writing, and arithmetic, leaving the right hemisphere free to create and appreciate art.

Lateralization, as hemispheric differentiation is called, need not be investigated with the knife.[3] The psychologist Victor Milstein showed me a visual field-testing rig that he and his colleagues use in screening for brain damage. In fact, some of the best evidence of musical tendencies in the right hemisphere came from a test used by Bogen's group prior to actual surgery. Called the amobarbital test, it was perfected by Bogen in collaboration with another member of Sperry's group, Harold Gordon. Amobarbital is an anesthetic. The test involves injecting anesthetic into either the left or the right common carotid artery in the neck, thus anesthetizing one hemisphere at a time. (Actually, blood from a carotid artery on one side will reach the other side of the brain, through a channel called the circle of Willis. But the volume of blood crossing over is small in relation to what flows to the same side.) Gordon compared audio tapes of Bogen's patients singing before and after either the right or left hemisphere had been put to sleep. With the left hemisphere unconscious and the right one awake, most people sang well. But, with some exceptions, the subjects sang flat and off-key when the right hemisphere was unconscious.

Laboratory animals display interesting behavior after split-brain surgery. Two disconnected hemispheres may learn to respond to what would otherwise be conflicting stimuli. The animals can even learn at a faster pace. (There are, after all, two intelligences instead of one.) One side of the brain may be taught to avoid a stimulus that the other side responds to favorably. A split-brain monkey, for instance, may lovingly fondle a toy doll with its right hand and angrily beat it with the left. (Each arm is under the voluntary control of the opposite hemisphere.) Sperry has even reported that persons with split brains sometimes maintain two entirely different attitudes toward the very same object--simultaneously.


At first glance, and when the results were new, split-brain research looked like a powerful case for a structural theory of mind-brain. (I used to refer to it in the classroom.) Language memory, for example, seemed to be housed in the dominant hemisphere (along with handedness). Music memories seemed to be stored over on the nondominant side. But as more facts emerged, and as all the evidence was carefully weighed, what seemed like such a clear-cut case became fuzzy again.

As I mentioned earlier, some people are born without a corpus callosum. Sperry's group studied one such young woman extensively.[4] Unlike persons who have undergone split-brain surgery, those born without a corpus callosum don't show lateralization: both hemispheres reveal similar linguistic ability. Children who have had split-brain operations show much less lateralization than adults. A few years ago, after I'd written a couple of feature articles on hemispheric differences, a student who had read one of them came to see me, puzzled. If the left side of the brain stores language, he asked, how do people taking an amobarbital test know the lyrics of a song when only the right hemisphere sings?

It was a perceptive question. Clearly, no natural law confines language to one and only one side of the brain. Otherwise, no one with complete separation of the cerebral hemispheres could handle language on the right side; and children would show the same degree of lateralization as adults. Nor would Bogen and Gordon have found individual variations in music or language during the amobarbital test.

Gazzaniga has conducted a great deal of research on children. Before the age of two or three, they exhibit little if any lateralization. Hemispheric differences develop with maturity. We are not born with lateralized brains. How do most of us end up that way?

Circuitries in the visual system can be altered by the early visual environment.[5] There's direct evidence about this for laboratory animals, and a good circumstantial case has been made for humans. Environment has a much more profound effect on even relatively uncomplicated reflexes than anyone had ever suspected. Maybe culture and learning play critical roles in lateralizing. Maybe as we mature, we unconsciously learn to inhibit the flow of information into one side of the brain or the other. Maybe we train ourselves to repress memories of language in the right hemisphere. Maybe the formation of language and the routines in arithmetic proceed more efficiently when carried out asymmetrically--unless we are singing.

Inability of a right hemisphere to read doesn't necessarily preclude memory there, though. Maybe the right hemisphere has amnesia. Or, relying on the left side to handle language, the right hemisphere may simply not remember how it is done. We do repress and do not remember all sorts of things, all the time. I cannot recall my third-grade teacher's name, although I'll bet I could under hypnosis. With regard to repression, consider something like functional amblyopia, for instance--blindness in an eye after double vision, even when there are no structural defects in the eye. It is as though the mind prefers not to see what is confusing or painful--as double vision can be. But with correction of the double view, that same blind eye sometimes regains 20/20 vision.

Thus, we really cannot turn the results of split-brain research into a conclusive argument in favor of a structural theory of mind. We do not know whether split brains show us the repository or the conduits of memory. We do not know if what is coming out flows directly from the source or from a leak in the plumbing.

But the split human brain raises still another question: What does the operation not do? Why didn't the knife create half-witted individuals? Why were both personalities "just folks," as Gazzaniga said? Why two whole personalities? Isn't personality part of the mind? Why doesn't personality follow the structural symmetry of the brain? If we split this page in two, we wouldn't have whole messages in both halves.

It's not that a structuralist cannot answer such a question. But the structuralist's thesis--my old argument--must be tied together with an embarrassingly long string of maybes.


The mind-brain conundrum has many other dimensions and extends to virtually every level of organization and discourse, from molecules to societies of animals, from molecular biophysics to social psychology. Name the molecule, cell, or lobe, or stipulate the physiological, chemical, or physical mechanism, and somebody, someplace, has found memory on, in, around, or associated with it. And, in spite of the generally good to splendid quality of such research, there's probably someone else, somewhere, whose experiments categorically deny any given conclusion or contradict any particular result.

Among those who believe, as I did, that memory is molecular, there are the protein people, the RNA people, the DNA people, the lipid people. And they're often very unkind to each other. Why? Most scientists, consciously or unconsciously, practice the principle of causality--every cause must have one and only one effect, or a causal relationship hasn't been established. If you are an RNA person and somebody finds memory on fat, that's unpleasant news. For RNA and fat cannot both be the cause of memory.

Some investigators believe that memories can be transferred from animal to animal in chemical form; that it's possible to train a rat, homogenize its brain, extract this or that chemical, and inject the donor's thought into another rat or even a hamster. The disbelievers vastly outnumber the believers, for a variety of rational and irrational reasons. Not everyone has been able to reproduce the results;[6] but memory transfer is in the literature, implicating quite a variety of alleged transfer substances.

Some research on memory does not implicate molecules at all. And while some data suggest that memories depend on reverberating circuits to and from vast regions of the brain, other evidence places memory in individual cells.

Who's right? Who's wrong? As we shall see later in the book, this is not the question.


Dynamics of the learning process have suggested to psychologists that two distinct classes of memory exist: short-term memory and long-term memory. Short-term memory is, for example, using the telephone number you look up in the directory and forgetting it after you have put through the call. Long-term memory operates in the recollection of the date of New Year's, or in remembering the telephone number you don't have to look up. Can we find any physiological evidence to support the psychologists' claim? The reader probably knows that electroconvulsive shock (ECS) can induce amnesia. ECS can totally and permanently obliterate all signs of short-term memory, while producing only temporary effects on long-term memory.[7] Certain drugs also induce convulsions, with results very similar to those produced in experiments with ECS. Taken together, the evidence does indicate that short-term memory and long-term memory depend on different physiological mechanisms.

Some investigators employ a very interesting theory in dealing with the two classes of memory.[8] According to this theory, short-term memory is the active, working memory, and it exists in an idealized "compartment." Long-term memory is stored memory, and the storage "depot" differs from the working compartment. According to the theory, the working compartment receives incoming perceptual data, which create short-term memories. The short-term memories, in turn, make the long-term memories. In other words, in the learning process, information from experience moves into the working compartment, becomes short-term memory, and then goes on to the storage depot. But what good would the memory be if it were confined to storage? According to the theory, the working compartment has two-way communication with the storage depot. In this theory, when we use long-term memory, we in effect create a short-term working memory from it.

And there's more. Learning doesn't depend simply on what comes into the mind. The remembered past has a profound effect on what we're learning in the present. Cognition--understanding--can't be divorced from the learning process. The working-memory theory maintains that the active memory in the working compartment is a blend of perception from the senses and memory drawn from storage. When we forget, the active memory "goes off" in the working compartment.
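For readers who think in programs, the traffic the theory describes can be caricatured in a toy sketch. The class and method names here are my own illustrative inventions, not part of the theory, and a pair of sets is of course nothing like a brain:

```python
# Toy caricature of the working-compartment / storage-depot theory.
# All names and structure are illustrative assumptions.

class TwoCompartmentMemory:
    def __init__(self):
        self.working = set()   # short-term, active memory
        self.storage = set()   # long-term depot

    def perceive(self, item):
        # Incoming perceptual data become short-term memories
        # in the working compartment.
        self.working.add(item)

    def consolidate(self):
        # Short-term memories, in turn, make long-term memories.
        self.storage |= self.working

    def recall(self, item):
        # Two-way communication: using a long-term memory
        # re-creates a short-term working copy of it.
        if item in self.storage:
            self.working.add(item)
            return True
        return False

    def forget(self):
        # Forgetting: the active memory "goes off" in the working
        # compartment; the storage depot is untouched.
        self.working.clear()

m = TwoCompartmentMemory()
m.perceive("Butterfield 8")
m.consolidate()
m.forget()
assert m.recall("Butterfield 8")  # the stored copy survives forgetting
```

The sketch at least makes the theory's central commitment visible: the same item of information lives, at different moments, in two mechanistically different places.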

But the concept of two classes of memory gives rise to imponderables in the mind-brain connection. If different physiological mechanisms handle short-term and long-term memories, how do we explain their informational identities? After all, Butterfield 8 is Butterfield 8 whether we forget it immediately or remember it to the end of our days. There are other problems. The useful working-memory theory requires a more general theory to link it to reality.


Nevertheless, there is a great deal of empirical evidence of a piece of the human brain that is involved in short-term memory. This structure is known as the hippocampus. Shaped like a zucchini, but about the size of a little finger, the hippocampus (Greek for sea-horse) is buried deep within the cerebrum's temporal lobe. A person with a damaged hippocampus exhibits defective short-term memory, whereas his or her long-term memory shows signs of being intact. One clinical sign of a lesion in the hippocampus is that a person can't repeat a name or a short sequence of randomly presented numbers but can, for instance, recite the Gettysburg Address. I will say more about the hippocampus in chapter 10. For now, I want to make this point: If we take the holist's classical position, we will have to dismiss important facts about the hippocampus.

Well, then, why can't we consider the hippocampus the seat of short-term memory? I've been asked sophisticated versions of this very question by several persons who work with the brain. There are correspondingly sophisticated reasons why we can't. But let me indicate some simple ones.

First of all, there are entire phyla of organisms whose brains lack hippocampi. Yet these same creatures often have splendid working, short-term, memories. I can give another example from my own laboratory. Salamanders whose cerebrums, and therefore hippocampi, have been amputated learn as well as normal animals. Perhaps salamanders and various other forms of life are simply too lowly to count? Later in the book, I will summarize experiments whose results show that cats can learn, and thus exhibit working memory, without their hippocampi. The point, once again, is that structuralism is no more enlightening than holism in regard to the role of the hippocampus.


I mentioned earlier that we humans require the visual cortex in order to see. But on a summer forenoon, when I look out my office window, I sometimes observe a hawk, perhaps 600 feet up, gliding in circles above the meadowed and hardwood-forested Indiana University campus, searching the ground for a target less than a foot long. Why doesn't the hawk dive after that discarded Hershey bar wrapper or the tail of that big German shepherd? The hawk is up there in the clouds doing complicated data processing with its visual system. It's certainly seeing. Yet that hawk, unlike a human being, doesn't employ a visual cortex. It doesn't even have one. For the visual cortex in the occipital lobe is a mammalian characteristic.

Birds process their visual sensations in what is called the midbrain tectum. (Barn owls, in addition, handle depth perception in a mound of brain, on the front of the cerebrum, called the Wulst, the German word for pastry roll. Indeed, the Wulst[9] does look like something on a tray in a Viennese bakery.)

Mammals, humans included, also have tectums, which they use in pupillary light reflexes. A human who suffers complete destruction of both occipital lobes, losing the entire visual cortex as a consequence, becomes blind, although some evidence indicates that this person may be able to sense very strong light. Firm evidence shows that rats, rabbits, and even monkeys can sense diffuse light following complete destruction of their occipital lobes. Do the tectum and the visual cortex (and the Wulst, too, of course) constitute the seat of vertebrate vision? If a vertebrate lacks some, but not all, of these structures, it may lack certain special features of vision. If the creature lacks a tectum, a visual cortex, and a Wulst, will it have no vision at all?

The argument works, up to a point. Specific lesions in a frog's tectum produce specific deficits in its visual perception. But let me tell you a little anecdote from my own laboratory in the days before my experiments with shufflebrain.

I was doing experiments with larval salamanders. For control purposes, I had to have a group of neurologically blinded animals. That would be a cinch, I thought, since the tectum is the seat of vision in animals below mammals (the function of the Wulst hadn't been worked out yet). All I had to do, I thought, was go in and remove the tectum, which I did. Was I in for a surprise when the animals came out of anesthesia! Every single animal could see! I didn't even consider publishing the results, feeling certain that I must have goofed up somewhere. But a few years later, the animal behaviorist G. E. Savage reported basically the same thing, except in adult fish.

It's not that the visual cortex and the tectum (or the Wulst) aren't important. And it's not that the vision of a squid is identical to yours and mine. The fact is that we really can't assign all vision to a single anatomical system. We can relate specific features of visual perception to certain structures in particular organisms. But we can't generalize. And if we can't generalize, we can't theorize. Which leaves mind-brain open to the holist--or the magician.


I can't think of anyone who has contributed more to our knowledge of functional human neuroanatomy than the late Wilder Penfield. Yet the mind-brain question eventually forced him into mysticism. A neurosurgeon who began his career early in this century, Penfield developed and made routine the practice of exploring and mapping a region of the brain before cutting into it. Good doctor that he was, he was preoccupied by the question of whether the treatment would be worse than the disease.

The cerebrum doesn't sense pain directly. Painful sensations from inside the skull travel on branches of peripheral nerves. These nerve fibers actually leave the skull, join their parent trunks, reenter the cranium, and fuse into the brainstem, the lower region of the brain between the cerebrum and spinal cord. Local anesthesia deadens the skull, scalp and coverings of the brain; intracranial surgery can thus be performed on a fully conscious person, which is what Penfield usually did. With electrodes, he stimulated a suspicious area and found out, firsthand, the role it played in his patient's actions and thoughts. In this way, Penfield confirmed many suspected functions and discovered new ones as well.

During the early and middle phases of his career, Penfield was a staunch advocate of the anatomical point of view. In some of his last written words, he related how he had spent his life trying "to prove that brain accounts for the mind." But he had seen too many paradoxes over the years.

Take, for example, a patch of cerebral cortex you're probably using this very moment as your eyes scan this page. The patch is the size of a postage stamp, on the rear of your frontal lobe, about where a Viking's horn emerges from the side of his helmet. It's in a place called area 8 alpha, beta, gamma; or, alternatively, the frontal eye fields; or just plain area 8 for short. Penfield explored area 8 with electrodes and found that it is indeed associated with voluntary eye movements. What do you suppose happens if area 8 is cut out? The person may lose the ability to move his or her eyes, willfully, toward the opposite side of the head (smooth, involuntary eye movements are handled by the occipital lobes). But the voluntary eye movements usually return a few days after surgery. And sometimes the function doesn't disappear at all.

Memory is even more puzzling. Penfield could often elicit vivid recollections of scenes from his patient's distant past by stimulating the temporal lobe. Had Penfield tapped the seat of long-term memory? Removal of the area frequently had no demonstrable effect on the person's memory.

For Penfield, the discrepancies eventually became overwhelming. Shortly before he died, he came to the conclusion that "our being consists of two fundamental elements."[10] For him, those elements had become "brain and mind" (my italics). Even the most faithful of the faithful have had trouble with mind-brain.


Holism does not rest its case on the structuralist's dubious dialectical position, but on prima facie evidence from some of the finest research ever conducted in psychology or biology--thirty furious years of exhaustive, imaginative, and carefully controlled laboratory investigations by Karl Lashley, the founder of the entire field of physiological psychology.

Lashley investigated memory in a wide variety of species, ranging from cockroaches to chimpanzees. But his favorite subject was the rat. His basic experiment was to train an animal to run a maze. Then he would injure the animal's brain at a particular location and in a specific amount. Finally, he would retest the animal's ability to run the maze postoperatively, comparing its performance with that of control rats whose skulls had been opened but whose brains hadn't been injured.

Lashley found that destruction of 20 percent or more of a rat's cerebrum could dim its memory of the maze. And increasing the damage would proportionately decrease the animal's recall. But (and this is the single biggest "but" in the history of brain research!), the critical thing was not where he made the wound but how much of the area he destroyed. Lashley got the same results by destroying the same percentages of different lobes. Anticipating hologramic theory, he even analogized memory to interference patterns.[11] He had borrowed the name of his cardinal principle--equipotentiality--from the embryologist Hans Driesch. The term, which I'll expand on shortly, means that engrams, or memory traces, are distributed all over the region.

From chemistry, Lashley borrowed the principle of mass action[12] to explain how increased brain damage dulled performance. The less engram the brain had to work with, the dumber the animal seemed.

Equipotentiality and mass action became Lashley trademarks. He and his students and followers produced, reconfirmed, and extended their evidence. More recently, the physiologist E. Roy John has developed an extensive new line of evidence to support the principle of equipotential distribution of memory.

John and his colleagues, working with cats, perfected electrophysiological methods to monitor the learning brain. Electrical activities in the animal's brain assume the form of waves on the recording device. As an animal learns to distinguish flickering lights of different frequencies, the waves begin to change form; and after the animal has learned, the harmonic features of the waves assume distinctive characteristics, which John and his colleagues take to signify memory. And these same waves--and presumably the memory underlying the animal's reaction--show up throughout widely dispersed regions of the brain.[13]

There is always some extraneous "noise" associated with electronic waves--"blips" that are independent of the main waves. Information theorists call the main waves the signal, and an important aspect of electronic communications is the signal-to-noise ratio. John and his group have found that although the main waves are the same all over the brain, signal-to-noise ratio varies. John believes that variations in signal-to-noise ratio account for specific functions of different regions of the brain and explain why, for example, the occipital lobe works in vision and the temporal lobe works in hearing.
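The idea is easy to put in numbers: the same signal can appear everywhere while its prominence over the background noise differs from region to region. A minimal illustration, with invented power values that are not John's data:

```python
import math

def snr_db(signal_power, noise_power):
    # Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
    return 10 * math.log10(signal_power / noise_power)

# The same hypothetical signal power in two regions, but a
# different noise background in each (values are made up):
region_a = snr_db(4.0, 0.5)  # quiet background: signal stands out
region_b = snr_db(4.0, 2.0)  # noisy background: same signal, less distinct
assert region_a > region_b
```

On John's view, it is this kind of difference, not the presence or absence of the signal itself, that distinguishes one region's role from another's.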

How might a structuralist explain John's research? One way is to argue that he really did not tap stored memory but instead tapped communications from long-term to short-term compartments. Another is to assume that the alleged noise is really the memory, and that the signals represent some nonspecific nerve-cell activity. I'm not faulting John's work here, incidentally, but merely giving examples of structuralist explanations of his findings.


Lashley did not resolve the mind-brain conundrum. His work sharpened its intensity, extended its dimensions, and made a whole generation of psychologists afraid even to think of behavior along physiological lines.

As I mentioned before, Lashley took the term (and the concept of) equipotentiality from Hans Driesch. Driesch espoused equipotentiality because dissociated two- and four-celled frog and salamander embryos don't form halves or fractions of animals but whole frogs or salamanders. Driesch's research led him to embrace entelechy, a doctrine of vitalism--the belief that the first principles of life cannot be found in nonliving matter.

Driesch was a man of the nineteenth century. By the time Lashley came along, biology had fallen madly in love with chemistry and physics, and with the belief that life obeys the laws of Nature generally. Lashley had a thorough background in microbiology and chemistry. True to a twentieth-century scientist's view of things, he resisted vitalism and sought to explain his findings by physical and chemical examples. Yet to me, structuralist and materialist that I was, Lashley's principles seemed like dissembling--a cover-up job! I believed that he engaged in a limp form of metaphysics, disguised to sound like science but lacking the practicing metaphysician's depth and scope. Until my shufflebrain research, I thought Lashley had concocted his doctrines as a verbal means of escape from the powerful vitalistic implications of his position. Lashley's ideas seemed like substations on the way to pure vitalism. The best thing to do was ignore him, which is what I did until hologramic theory emerged.


As we shall see later on, though, the hologram cannot be strictly equated with equipotentiality. As I said in the first chapter, the hologram concerns that property of waves called phase. Phase makes for equipotentiality (when it is a feature of a hologram at all), not the other way around.

As a general theory, derived from the generic phase principle, hologramic theory does not make champions of the holists and chumps of the structuralists. Instead, hologramic theory breaks the mind-brain conundrum by showing that one need not choose between holism and structuralism. Hologramic theory will supply us with the missing idea--the thought that Hegel would have said allows thesis and antithesis to become synthesis.

But before we take our first glimpse at hologramic theory, let us consider holograms as such.


