WE RETURN TO THE IMPERFECT but comfortable realm of experience. In chapters 5 and 6, we sought out predictions in hologramic theory. Now we shift emphasis to explanations, to the use of hologramic theory for making rational sense of certain equivocal and seemingly unbelievable observations. And we start with the behavior of bacteria.
Bacilli, rod-shaped bacteria, propel themselves through fluid with whip-like appendages called flagella. Flagellar motion depends on a contractile protein hooked to the base of each microbial appendage. An individual flagellum rotates, in the process executing wave motion. Locomotion of a bacillus, therefore, is an algebraic function of the phase and amplitude spectra collectively generated by its flagella. We might even regard the overt behavior of a bacillus as a direct consequence of periodic activity--the phase and amplitude spectra--reflecting the rhythm of its contractile proteins. Thus if a hologramic mind exists in the bacillus, it shows up literally and figuratively right at the surface of the cell.
But is there really a mind in a creature so primitive? We can describe a stretching and recoiling spring with tensor transformation. But our intuitions would balk, and rightfully, if we tried to endow the spring on the back porch door with a hologramic mind. Thus before we apply hologramic theory to the bacillus, we need evidence only experiments and observations can supply.
Single cells of many sorts can be attracted or repelled by various chemicals. The reaction is called chemotaxis. In the 1880s, when bacteriology was still a very young science, a German botanist named Wilhelm Pfeffer made an incredible observation in the common intestinal bacillus, Escherichia coli (E. coli)--more recently of food-poisoning fame. Other scientists had previously found that meat extract entices E. coli whereas alcohol repels them: the bacteria would swim up into a capillary tube containing chicken soup or hamburger juice but would avoid one with ethanol. What would happen, Pfeffer wondered, if he presented his E. coli with a mixture of attractants and repellents? He found that meat juice, if concentrated enough, would attract E. coli even though the concentration of ethanol was enough, by itself, to have driven the bacteria away.
Did Pfeffer's observations mean that bacteria make decisions? Naturally, the critics laughed. But in comparatively recent times biochemists have begun to rethink Pfeffer's question. And in rigorously quantified experiments with chemically pure stimulants, two University of Wisconsin investigators, J. Adler and W-W Tso, came to the conclusion, "apparently, bacteria have a 'data processing' system." Courageous words, even within the quotation marks. For in a living organism, 'data processing' translates into what we ordinarily call thinking.
Adler and Tso established two important points. First, the relative--not absolute--concentrations of attractant versus repellent determine whether E. coli will move toward or away from a mixture. Second, the organisms do not respond to the mere presence of stimulants but instead follow, or flee, a concentration gradient. And it was the consideration of concentration gradients that led biochemist D. E. Koshland to establish memory in bacteria.
Koshland became intrigued by the fact that a creature only 2 micrometers long (.000039 inches) could follow a concentration gradient at all. The cell would have to analyze changes on the order of about one part in ten thousand--the rough equivalent of distinguishing a teaspoon of Beaujolais in a bathtub of gin, a "formidable analytical problem," Koshland wrote.
Did the cell analyze concentration variations along its length? To Koshland's quantitative instincts, 2 micrometers seemed far too short a length for that. Suppose, instead, that the bacterium analyzes over time rather than over space (length)? What if the cell could remember the past concentration long enough to compare it with the present concentration? Koshland and company knew just the experiment for testing between the two alternative explanations.
When a bacillus is not responding to a chemical stimulus, it tumbles randomly through the medium. (Its flagella crank randomly.) In the presence of a stimulus, though, the bacterium checks the tumbling action and swims in an orderly fashion. What would happen, Koshland wondered, if he tricked them? What if he placed the organisms in a chemical stimulus (though with no gradient) and then quickly diluted the medium? If the bacteria indeed analyze head-to-tail, they should go right on tumbling, because there'd be no gradient, just a diluted broth. But if bacteria remember a past concentration, diluting the medium should fool them into thinking they were in a gradient. Then they'd stop tumbling.
"The latter was precisely what occurred," Koshland wrote.[1] The bacterial relied on memory of the past concentration to program their behavior in the present.
There was more. Koshland called attention to another feature of bacterial behavior. He pointed out that in responding to a chemical stimulus--in checking the tumbling action-- "the bacterium has thus reduced a complex problem in three-dimensional migration to a very simple on-off device."[2]
When a human being simplifies a complicated task, we speak about intelligence. Thus bacteria show evidence of rudimentary minds. And we can use hologramic theory to account for it.
***
Adler and Tso discovered that attractants induce counterclockwise rotation in E. coli's flagella. Repellents crank the appendages clockwise. In terms of hologramic theory in its simplest form, the two opposite reactions are 180 degrees (or pi radians) out of phase. By shifting from random locomotion to movement toward or away from a stimulus, the organism would be shifting from random phase variations in its flagella to harmonic motion--from cacophony to a melody if they were tooting horns instead of churning their appendages.
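For readers who want to see the wave talk in concrete form, here is a minimal numerical sketch--a toy, not a model of real flagellar biochemistry. Everything in it (the common frequency, the ten "flagella," the sampling) is invented for illustration. The point is only that waves of random phase largely cancel one another, waves locked to a common phase reinforce one another, and a shift of pi radians simply reverses the coherent motion:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)   # one second of "flagellar" time (arbitrary)
freq = 10.0                   # a single common frequency (arbitrary)

# Ten waves with random phases: their sum largely cancels (cacophony).
random_phases = rng.uniform(0, 2 * np.pi, 10)
random_sum = sum(np.sin(2 * np.pi * freq * t + p) for p in random_phases)

# Ten waves locked to a common phase: coherent, large-amplitude motion (melody).
coherent_sum = sum(np.sin(2 * np.pi * freq * t) for _ in range(10))

# A pi (180-degree) shift reverses the coherent waveform--the toy analogue
# of "toward" versus "away."
reversed_sum = sum(np.sin(2 * np.pi * freq * t + np.pi) for _ in range(10))

print(round(np.max(np.abs(random_sum)), 2))      # modest peak: phases interfere
print(round(np.max(np.abs(coherent_sum)), 2))    # about 10: phases reinforce
print(np.allclose(reversed_sum, -coherent_sum))  # True: the pi shift inverts the motion
```

In this toy picture, the shift from cacophony to melody is nothing more than the alignment of phases, and "toward" versus "away" is the pi flip.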
Adler and Tso identified the bacterium's sensory apparatus. Like the biochemical motor, it also turned out to be a protein. A search eventually turned up strains of E. coli that genetically lack the protein for a specific attractant. (Specific genes direct the synthesis of specific proteins.)
At the time they published their article, Adler and Tso had not isolated the memory protein (if a unique one truly exists). But the absence of that information doesn't prevent our using hologramic theory to explain the observations: Phase spectra must be transformed from the coordinates of the sensory proteins through those of the contractile proteins to the flagella and into the wave motion of the propelling cell. Amplitudes can be handled as local constants. The chemical stimulus in principle acts on the bacterium's perceptual mechanism in a manner analogous to the reconstruction beam's decoding of an optical hologram. As tensors in a continuum, the phase values encoded in the sensory protein must be transformed to the coordinate system representing locomotion. The same message passes from sensory to motor mechanisms, and through whatever associates the two. Recall that tensors define the coordinates, not the other way around. Thus, in terms of information, the locomotion of the organism is a transformation of the reaction between the sensory protein and the chemical stimulus, plus or minus the effects of local constants. Absolute amplitudes and noise, products of local constants, would come from such things as the viscosity of the fluid (e.g., thick pus versus sweat), the age and health of the organisms, the nutritional quality of the medium (better to grow on the unwashed hands of a fast-food hamburger flipper than on the just-scrubbed fingernails of a surgical nurse), or whatever else the phase spectrum cannot encode. As for the storage of whole codes in small physical spaces, remember that phase has no prescribed size in the absolute sense. A single molecule can contain a whole message.
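The tensor claim--that "the same message" survives a change of coordinates--can also be put in a small numerical sketch. The phase values, the "readout" vector, and the transformation matrix below are all invented; the sketch shows only the bookkeeping, not any real sensory or motor chemistry. A quantity built the tensor way, with the message components transformed one way and the readout transformed by the inverse, comes out identical in the "sensory" and "motor" coordinate systems:

```python
import numpy as np

rng = np.random.default_rng(1)

# The "message": phase values treated as components of a vector in an
# abstract "sensory" coordinate system (numbers invented for illustration).
phase_sensory = np.array([0.2, 1.1, 2.9, 0.7])

# An invertible change of coordinates from "sensory" to "motor" axes.
J = rng.normal(size=(4, 4))
phase_motor = J @ phase_sensory            # same message, new components

# A dual (covariant) "readout" vector must transform with the inverse matrix
# if the underlying information is to stay invariant.
readout_sensory = np.array([1.0, -0.5, 0.25, 2.0])
readout_motor = readout_sensory @ np.linalg.inv(J)

# The contraction (readout applied to message) is coordinate-independent:
print(readout_sensory @ phase_sensory)   # 1.775
print(readout_motor @ phase_motor)       # the same number, up to rounding error
```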
***
Evidence of memory in single-celled animals dates back at least to 1911, to experiments of the protozoologists L. M. Day and M. Bentley on paramecia.[3] Day and Bentley put a paramecium into a snug capillary tube--one whose diameter was less than the animal's length. The paramecium swam down to the opposite end of the tube, where it attempted to turn around. But in the cramped lumen, the little fellow twisted, curled, ducked, bobbed....but somehow managed by accident to get faced in the opposite direction. What did it do? It immediately swam to the other end and got itself stuck again. And again it twisted, curled, ducked...and managed to get turned around only by pure luck. Then, after a while, Day and Bentley began to notice something. The animal was taking less and less time to complete the course. It was becoming more and more efficient at the tricky turn-around maneuver. Eventually, it learned to execute the move on the first attempt.
Day and Bentley's observations didn't fit the conventional wisdom of their day, nor the criteria for learning among some schools of thought in our own times. Their little paramecia had taught themselves the trick, which in some circles doesn't count as learning. But in the 1950s an animal behaviorist named Beatrice Gelber conditioned paramecia by the same basic approach Pavlov had taken when he used a whiff of meat to make a dog drool at the ringing of a bell.
Gelber prepared a pâté of her animals' favorite bacteria (a single paramecium may devour as many as 5 million bacilli in a single day[4]); then she smeared some of it on the end of a sterile platinum wire. She dipped the wire into a paramecium culture. Immediately her animals swarmed around the wire, which was not exactly startling news. In a few seconds, she withdrew the wire, counted off a few more seconds and dipped it in again. Same results! But on the next trial, Gelber presented the animals with a bare, sterilized wire, instead of with bacteria. No response! Not at first, anyway. But after thirty trials--two offers of bacteria, one of sterile wire--Gelber's paramecia were swarming around the platinum tip, whether it proffered bacterial pâté or not.[5]
Naturally, Gelber had her critics, those who dismiss the idea that a single cell can behave at all, let alone remember anything. I must admit, a mind isn't easy to fathom in life on such a reduced scale. Yet I've sat entranced at my stereoscopic microscope for hours on end watching protozoa of all sorts swim around in the water with my salamanders. I've often wondered if Gelber's critics had ever set aside their dogmas and doctrines long enough to observe for themselves the truly remarkable capabilities of one-celled animals. Let me recount something I witnessed one Saturday afternoon many years ago.
On occasion, a fungal growth infects a salamander larva's gills. To save the salamander, I remove the growth mechanically. On the Saturday in question, I discovered one such fungal jungle teeming with an assortment of protozoa. What were those beasts? I wondered. Instead of depositing the teased-off mass on the sleeve of my lab coat, I transferred it to a glass slide for inspection under the much greater magnification of the compound phase microscope.[6]
Several different species of protozoa were working the vine-like hyphae of the fungus. I was soon captivated by the behavior of a species I couldn't identify. They were moving up and down the hyphae at a brisk pace. At the distal end of a strand an animal's momentum would carry it out into the surrounding fluid. It would then turn and swim back to its "own" hypha, even when another one was closer. Something spatial or chemical, or both, must be attracting these critters, I thought almost out loud. Just as I was thinking the thought, one animal attracted my attention. It had just taken a wide elliptical course into the fluid; but along the return arc of the excursion, another hypha lay directly on its path. And my little hero landed on it. After a few pokes at the foreign strand, the animal paused as though something didn't seem quite right. Meanwhile its sibs were busily working the territory. After a few tentative pokes, my animal moved away. But now it landed on a third hypha, shoved off after a brief inspection and landed on still another hypha. Soon it was hopelessly lost on the far side of the microscopic jungle.
But then something happened. As I was anticipating its departure, the protozoan hesitated, gave the current hypha a few sniffs and began slowly working up and down the shaft. After maybe five or six trips back and forth along the strand, my animal increased its speed. Within a few minutes, it was working the new hypha as it had been working its old one when it first attracted my attention. I couldn't escape the thought that the little creature had forgotten its old home and had learned the cues peculiar to its new one.
Had I conducted carefully controlled experiments, I might have discovered a purely instinctive basis for all I saw that Saturday. Maybe Gelber's or Day and Bentley's observations can be explained by something other than learning, per se. But, instinctive or learned, the behavior of protozoa--or bacteria--doesn't fit into the same class of phenomena as the action-reaction of a rubber band. Organized information exists in the interval between what they sense and how they respond. We employ identical criteria in linking behavior to a human mind.
***
But higher organisms require a "real" brain in order to learn, don't they? If posing such a question seems ridiculous, consider an observation a physiologist named G. A. Horridge made some years ago on decapitated roaches and locusts.
In some invertebrates, including insects, collections of neurons--ganglia--provide the body with direct innervation, as do the spinal cord and brainstem among vertebrates. Horridge wondered if ganglion cells could learn without the benefit of the insect's brain. To test the question, he devised an experiment that has since become famous enough to bear his name: "The Horridge preparation."
In the Horridge preparation, the body of a beheaded insect is skewered into an electrical circuit. A wire is attached to a foot. Then the preparation is suspended directly above a salt solution. If the leg relaxes and gravity pulls down the foot, the wire dips into the salt solution, closing the electrical circuit and-- zappo! A jolt is delivered to the ganglion cells within the headless body. In time, the ganglion cells learn to avoid the shock by raising the leg high enough to keep the wire out of the salt solution.
An electrophysiologist named Graham Hoyle went on to perfect and refine the Horridge preparation. Working with pithed crabs, he used computers to control the stimuli; he made direct electrophysiological recordings from specific ganglion cells; and, because he could accurately control the stimuli and record the responses, Hoyle was able to teach the cells to alter their frequency of firing, which is a very sophisticated trick. How well did the pithed crabs learn? According to Hoyle, "debrained animals learned better than intact ones."[7]
I'm not suggesting that we replace the university with the guillotine. Indeed, later in this chapter we'll bring the brain back into our story. But first-rate evidence of mind exists in some very hard-to-swallow places. Brain (in the sense of what we house inside our crania) is not a sine qua non of mind.
Aha, but does the behavior of beheaded bugs really have any counterpart in creatures like us?
***
In London, in 1881, the leading holist of the day, F. L. Goltz, arrived from Strasbourg for a public showdown at the International Medical Congress with his arch-rival, the Englishman David Ferrier, who'd gained renown for investigating functional localization within the cerebral cortex.
At the Congress, Ferrier presented monkeys with particular paralyses following specific ablations of what came to be known as the motor cortex. Ferrier's experiments were so dramatically elegant that he won the confrontation. But not before Goltz had presented his "Hund ohne Hirn" (dog without a brain)--an animal who could stand up even though his cerebrum had been amputated.[8]
The decerebrated mammal has been a standard laboratory exercise in physiology courses ever since. A few years ago, a team of investigators, seeking to find out if the mammalian midbrain could be taught to avoid irritation of the cornea, used the blink reflex to demonstrate that "decerebrate cats could learn the conditioned response."[9]
Hologramic theory does not predict that microbes, beheaded bugs or decerebrated dogs and cats necessarily perceive, remember and behave. Experiments furnish the underlying evidence. Some of that evidence, particularly with bacteria, has been far more rigorously gathered than any we might cite in support of memory in rats or monkeys or human beings. But the relative nature of the phase code explains how an organism only 2 micrometers long--or a thousand times smaller than that, if need be--can house complete sets of instructions. Transformations within the continuum give us a theory of how biochemical and physiological mechanisms quite different from those in intact brains and bodies of vertebrates may nevertheless carry out the same overall informational activities.
Yet hologramic theory does not force us to abandon everything else we know.[10] Instead, hologramic theory gives new meaning to old evidence; it allows us to reassemble the original facts, return to where our quest began, and with T. S. Eliot, "know the place for the first time."
In the last chapter, I pointed out that two universes developed according to Riemann's plan would obey a single unifying principle, curvature, and yet could differ totally if the two varied with respect to dimension. Thus the hologramic continuum of both a salamander and a human being depends on the phase code and the tensor transformations therein. But our worlds are far different from theirs by virtue of dimension. Now let's take this statement out of the abstract.
***
It's no great surprise to anyone that a monkey quickly learns to sit patiently in front of a display panel and win peanut after peanut by choosing, say, a triangle instead of a square. By doing essentially the same thing, rats and pigeons follow the same trend Edward Thorndike first called attention to in the 1890s. Even a goldfish, when presented with an apparatus much like a gum machine, soon learns that bumping its snout against a green button will earn it a juicy tubifex worm while a red button brings forth nothing. Do choice-learning experiments mean that the evolution of intelligence is like arithmetic: add enough bacteria and we'd end up with a fish or an ape? In the 1950s a man named Bitterman began to wonder if it was all really that simple. Something didn't smell right to him. Bitterman decided to add a new wrinkle to the choice experiments.
Bitterman began by training various species to perform choice tasks. His animals had to discriminate, say, A from B. He trained goldfish, turtles, pigeons, rats and monkeys to associate A with a reward and B with receiving nothing.
Then Bitterman played a dirty trick. He switched the reward button. Chaos broke out in the laboratory. Even the monkey became confused. But as time went by, the monkey began to realize that now B got you the peanut, not A. Then the rat began to get the idea. And the pigeon too! Meanwhile over in the aquarium, the goldfish was still hopelessly hammering away at the old choice. Unlike the other members of the menagerie, the fish could not kick its old habit and learn a new one.
What about his turtle? It was the most interesting of all Bitterman's subjects. Confronted with a choice involving spatial discrimination, the turtle quickly and easily made the reversal. But when the task involved visual recognition, the turtle was as bad as the goldfish; it couldn't make the switch. It was as though the turtle's behavior lay somewhere between that of the fish and the bird. Turtles, of course, are reptiles. During vertebrate evolution, reptiles appeared after fishes (and amphibians) but before birds and mammals.
Now an interesting thing takes place in the evolution of the vertebrate brain. In reptiles, the cerebrum begins to acquire a cortex on its external surface. Was the cerebral cortex at the basis of his results? Bitterman decided to find out by scraping the cortex off the rat's cerebrum. Believe it or not, these rats successfully reversed the habit when given a spatial problem. But they failed when the choice involved visual discrimination. Bitterman's rats acted like turtles!
***
Bitterman's experiments illustrate that with the evolution of the cerebral cortex something had emerged in the vertebrate character that had not existed before. Simple arithmetic won't take us from bacterium to human being.
As embryos, each of us re-enacts evolution, conspicuously in appearance but subtly in behavior as well. At first we're more like a slime mold than an animal. Up to about the fourth intrauterine month, we're quite salamander-like. We develop a primate cerebrum between the fourth and sixth months. When the process fails, we see in a tragic way how essential the human cerebral cortex is to the "Human Condition," in the Barry N. Schwartz sense of the term.
Mesencephalia is one of the several terms applied to an infant whose cerebrum fails to differentiate a cortex.[11] A mesencephalic infant sometimes lives for a few months. Any of us (with a twist of an umbilical cord, or if mom took too long a swig on a gin bottle) might have suffered the same fate. Like its luckier kin, the mesencephalic child will cry when jabbed with a diaper pin; will suckle a proffered breast; will sit up; and it can see, hear, yawn, smile and coo. It is a living organism. But human though its genes and chromosomes and legal rights may be, it never develops a human personality, no matter how long it lives. It remains in the evolutionary bedrock out of which the dimensions of the human mind emerge. It stands pat at the stage where the human embryo looked and acted like the creature who crawls from the pond.
Yet there's no particular moment in development when we can scientifically delineate human from nonhuman: no specific minute, hour or day through which we can draw a neat draftsman's line. Development is a continuous process. The embryo doesn't arrive at the reptilian stage, disassemble itself and construct a mammal, de novo. In embryonic development, what's new harmoniously integrates with what's already there to move up from one ontogenetic step to the next.[12] The embryo's summations demand Riemann's nonlinear rule: curvature!
What is arithmetic, anyway? What do we mean by addition and subtraction? At the minimum, we need discrete sets. The sets must equal each other--or be reducible to equal sets by means of constants; and their relationships must be linear. The correct adjective for describing a consecutive array of linear sets is contiguous (not continuous), meaning that the successive members touch without merging into and becoming a part of each other--just the opposite of Riemann's test of continuity. This may seem utterly ridiculous. But if the sets in 1 + 1 + 1 surrender parts of themselves during addition, their sum will be something less than 3. We literally perform addition and subtraction with our fingers, abacus beads, nursery blocks and digital computers--any device that uses discrete, discontinuous magnitudes.
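Here is one way to see the point with numbers--a toy calculation, with the interval positions invented for illustration. Three unit "sets" laid end to end (contiguous) cover a length of exactly 3; let them surrender parts of themselves by overlapping, and the total they cover falls below 3:

```python
def covered_length(intervals, step=0.001):
    """Approximate the total length covered by a list of (start, end) intervals."""
    lo = min(s for s, _ in intervals)
    hi = max(e for _, e in intervals)
    n = int((hi - lo) / step)
    covered = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * step
        if any(s <= x < e for s, e in intervals):
            covered += step
    return covered

contiguous = [(0, 1), (1, 2), (2, 3)]        # touching, not merging
overlapping = [(0, 1), (0.5, 1.5), (1, 2)]   # each gives up part of itself

print(round(covered_length(contiguous), 2))   # 3.0: the sum behaves arithmetically
print(round(covered_length(overlapping), 2))  # 2.0: something less than 3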
Continuity is essential to the theory of evolution. Try to imagine a tree whose branches are not and never were continuous, back to the main trunk. Continuity in fact during embryonic development is our prima facie evidence for continuity in theory among the species. Evolution is inconceivable in a simple, discontinuous arithmetic system. In the light of Bitterman's turtle, a straight-line theory of the natural history of intelligence would predict discontinuity among the species and render the theory of evolution itself no more defensible on formal grounds than the events depicted in Genesis. Bitterman's investigations deny a simple, linear progression from fish to human and provide experimental evidence for the evolution of intelligence.
We have not constructed hologramic theory along linear lines. If we had, we wouldn't be able to reconcile what we find. We would be forced to ignore some facts in order to believe others. Without the continuum, we would be unable to explain not only the differences and similarities of the species, but also those in ourselves at various stages of our own embryonic development.
The hologramic continuum, by nature, allows new dimensions to integrate harmoniously with those already present. It lets us explain how our biological yesterday remains a part of today within a totally changed informational universe. Even though we share the same elemental rule--the phase code--with all other life forms, we're not reducible to what we once were, nor to bacteria or beheaded bugs. We are neither a linear sum of what we were nor a linear fraction of what we used to be. And our uniquely human inner world begins to unfold with the advent of the cerebral cortex.
***
Physiologically, the cerebral cortex was a near-total enigma until comparatively recent times. But two physiologists, David Hubel and Torsten Wiesel, spent decades exploring pattern recognition in the visual cortex, first of cats and eventually of monkeys.[13] They identified three basic types of neurons there, cells that would respond to visual targets of different degrees of complexity. Hubel and Wiesel called these neurons simple, complex and hypercomplex cells.
The visual system must handle a great deal of information in addition to patterns: it must deal with color, motion, distance, direction--with a variety of independent abstract dimensions that have to be compiled into a single, composite picture, just as bars and rectangles and edges must be compiled to make a unified percept. Hubel and Wiesel analogized the problem to a jigsaw puzzle: the shape of the pieces is independent of the picture or potential picture they bear. We recognize a checkerboard whether it's red and black or black and blue. And we don't usually confuse a Carmen-performing diva flitting around the stage in a red and black dress with the red-black checkerboard.
We're always assembling informational dimensions into a single composite scene. If we go to a three-dimensional object, or if the object is moving, or if we attach some emotional significance to the input, we must either integrate the data into a percept or keep the subsets sorted into groups. And the integration (or segregation) must be quick. Vary the beast's capability for handling dimensions and we change its perception in a nonlinear way, as Bitterman did when he took the knife to his rat's cortex. Interestingly, it was not until Hubel and Wiesel began studying the monkey (versus the cat) that they discovered hypercomplex cells in sufficient numbers to analyze their complicated physiological details.
What we hear, touch, taste and smell may also be multidimensional. For instance, we may recognize a melody from The Barber of Seville, but our understanding of the lyrics may depend on a knowledge of Italian. And whether we're enthralled or put to sleep will depend on factors other than just the words or the music.
We also harmoniously integrate diverse sensory data. Thus silent movies disappeared quickly after talkies came along. We not only have the capacity to combine sight and sound, but we like (or hate) doing it, which is a whole other constellation of dimensions.
The cerebral cortex is far from the only dimensional processor in the brains of organisms (although it may very well be the most elegant one in there). The frog, for instance, has a well-developed roof on its midbrain--the tectum I mentioned in chapter 3: the part that helps process visual information among non-mammalian vertebrates. While the tectum comes nowhere close to the capabilities of a primate visual cortex, it nevertheless integrates different dimensions of the frog's visual perception.[14]
***
Another structure I mentioned in chapter 3 was the zucchini-shaped hippocampus, lesions of which induce short-term memory defects and make it difficult for persons so afflicted to repeat newly presented phrases or sequences of numbers. In rats, however, the hippocampus appears to assist navigation.
The function of the rat's hippocampus became evident in studies involving what is called the radial maze.[15] This kind of apparatus typically consists of a dozen alleys leading off a central choice point like the spokes radiating from the hub of a wagon wheel. To win a reward, the rat must go through the alleys in a predetermined sequence. To carry out the task, the rat must remember which alley he's in and which turn to take for the next one. The rat must compile at least two sets of memories: one of positions, the other of direction. Lesions in the hippocampus erode the animal's efficiency at the radial maze.
Now think of what happens when we recite the lines of a poem. We must remember not only the individual words but, so as to place correct emphases, their location in the sequence.[16] Although the task assumes verbal form in a human being, its informational aspects seem quite like those a rat uses to organize memories of geographic locations and sequences.
At the same time, clinical and laboratory evidence doesn't prove that the hippocampus is the exclusive seat of such short-term memory processing. Lesions do not totally nullify the rat's ability to run the radial maze: performances dropped from 7 correct turns out of 8 to 5 or 6, a statistically significant drop but far from the total loss of ability that would follow from destruction of the seat of the neural information.[17] I once watched a filmed record of a man with a damaged hippocampus. He made errors when repeating phrases, but he wasn't always wrong. In addition, he often employed subtle tricks to recall items. When he was allowed to count on his fingers, he could often correctly repeat phrases that he couldn't handle without them. Other parts of the brain can compile memories of position and distance but would seem to do so with much less efficiency than the hippocampus.
The success or failure of a particular behavior may depend on how fast an organism can assemble different memories. Out in the wild, the navigational problems a rat confronts are much more difficult--and potentially perilous--than anything in the laboratory. Ethologists are students of behavior in the wild. And ethologists Richard Lore and Kevin Flanders undertook the frightening job of digging up a rat-infested garbage dump in New Jersey to see how the beasts live out there. To Lore and Flanders's surprise, they found that wild rats live in family groups, each with its own burrow. The dump wasn't honeycombed with one communal rat flop, the animals randomly infesting a labyrinth and eating, sleeping or mating wherever the opportunity presented itself. Now the wild rat is one vicious creature, as a child of the inner city can often testify first hand. A strange rat who ends up in the wrong hole isn't welcomed as an honored dinner guest but may very well become the pièce de résistance. Thus when the rat ventures into the night and turns around to bring home a half-eaten pork chop, it by-god better know which burrow to choose out of hundreds in a multidimensional array. It would quickly succumb to its own social psychology if not for the superb navigational system resident in its hippocampus. The use to which a rat puts its hippocampus seems at least as complicated as ours. Appreciate, though, that while the qualitative features of the behaviors mediated by the homologous brain structures can differ greatly between them and us, the abstract attributes--the analogous logic--can be quite similar. Let's remind ourselves of this from hologramic theory.
We can construct two continua that have identical numbers of dimensions yet produce different universes. How? The shape of a universe depends not only on how many dimensions it has, but on how they connect up and which part connects with what. Recall in the last chapter we envisaged adding a dimension by converting a figure 8 into a snowman.
What if we take a snowman apart and reassemble it with the members in a different order? Call our initial snowman A and our reshuffled one B. Now let's send an imaginary insect crawling on each figure at the same time and at the same speed. Of course, both insects will complete the round-trip excursion together. But relative to the starting point (the phase variation between the two), the two bugs will never once have traveled an equal distance until both have arrived back at the start-finish point. The similarities and differences don't trick us. Likewise, the differences between our hippocampus and the rat's shouldn't mask the similarities.
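A small numerical sketch may make the reshuffled snowman easier to picture. The three "members" below (circumferences 3, 2 and 1) and the sampling times are invented; the sketch shows only that two bugs crawling at the same speed over the same members, assembled in different orders, finish together yet never match up along the way:

```python
# The same three "members," assembled in two different orders.
snowman_a = [3.0, 2.0, 1.0]   # order of members in figure A
snowman_b = [1.0, 3.0, 2.0]   # same members, reshuffled, in figure B

def position(members, distance):
    """Return (member index, fraction of that member completed) after crawling `distance`."""
    for i, length in enumerate(members):
        if distance < length:
            return i, distance / length
        distance -= length
    return len(members) - 1, 1.0   # back at the start-finish point

total = sum(snowman_a)            # 6.0, identical for both figures
for t in [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.0]:
    print(t, position(snowman_a, t), position(snowman_b, t))
# The two columns disagree at every intermediate time and agree only at t = 6.0,
# when both bugs are back where they started.
```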
As I mentioned earlier, some psychologists conceptualize short-term or working memory by means of an idealized compartment.[18] The hippocampus, as I've suggested, may be a location of a portion of that compartment, at least in higher animals. Of course, as evidence from, for instance, decerebrated cats and many other sources indicates, we can't consider the hippocampus as the exclusive repository of short-term memory. But hippocampal functions may yet reveal critical details about the dynamics of perception and reminiscence in higher animals. Circumstantial evidence from rats as well as humans indicates that an active memory in the hippocampus is short term. Now short-term memory in general is very sensitive to electroconvulsive shock (ECS). And relatively mild electrical stimulation of the hippocampus or the surrounding brain can evoke a violent convulsive seizure.[19]
A temporary and erasable working memory would be valuable to us as well as to rats. When we no longer need a telephone number, for example, we simply expunge it. Yet we wouldn't want to forget all telephone numbers. After a journey from the garbage pile, the neural map in the rat's hippocampus could become a liability. But the rat wouldn't want to have to relearn every map. ("That's the pipe where the coyote got my brother!") What might control the shift from short-term to long-term storage? I raise the latter question not to signal a final answer. But a little speculation from a few facts will allow me to illustrate how we can use hologramic theory to generate working hypotheses (the kind from which experimentation grows). Also, the hippocampus exhibits an important feature of the brain that Norbert Wiener predicted many years ago.
The human hippocampus interconnects with vast regions of the central nervous system. Its most conspicuous pathways lead to and from subdivisions of what is called the limbic system. The limbic system is most conspicuously associated with emotions.
One hippocampal circuit in particular connects the hippocampus to a massive convolution called the cingulate gyrus. Draped like a lounging leopard on the corpus callosum, the cingulate gyrus was the first part of the limbic system ever designated[20] and became a favorite target of psychosurgery in the post-prefrontal lobotomy era.
The hippocampal-cingulate circuit has (among many other things) three features that are highly germane to our discussion.
- First, a number of relay stations intervene between the hippocampus and the cingulate gyrus. These relay stations are locations where the signal can be modified; where messages can be routed to and from other areas of the brain and spinal cord and blended with (or stripped from) other data.
- Second, the circuit consists of parallel pathways (co- and multi-axial) all the way around. The significance of this is that, when activated, the circuit preserves phase information in principle in the way Young and Fresnel did when making interference patterns--by starting from the same source but varying the specific course between the referencing waves. The hippocampal-cingulate circuitry looks designed to handle phase differentials, and on a grand scale!
- Third, the overall circuit makes a giant feedback loop. Feedback is at the heart of the communications revolution Norbert Wiener touched off in 1947 with the publication of his classic, Cybernetics, and that you're living through as you read these very words.
***
Biofeedback became a popular subject a couple of decades ago, when people were hooking electrodes to their heads and stopping and starting electric trains by altering their thoughts. "EEG with electric trains," a friend of mine used to call what seemed to him a stunt wherein amplified electrical signals from the scalp are fed into the transformer of an electric train set instead of a polygraph. However one wants to characterize feedback, it is central to much that goes on chemically and physiologically in our bodies--including the brain.
Wiener spent World War Two trying to develop methods for anticipating where a German bomber was headed so that an Allied antiaircraft gun crew could aim properly to shoot it down, evasive actions and the trembling ground notwithstanding. The problem led Wiener to appreciate the importance of the feedback loop in controlling the output from a system where the output device itself is subject to continual changes. Feedback became the central notion in his cybernetic theory.
What a feedback loop does is relate input to output in a dynamic instead of a static way. With moment-to-moment monitoring, tiny corrections can be made in the output to compensate for last-minute changes not only in a target but also in the readout mechanisms.
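A minimal sketch of the idea, in the spirit of Wiener's gun-aiming problem but with every number invented: an output is nudged, pass after pass around the loop, toward a target that keeps moving.

```python
import math

target = 0.0      # where the "bomber" actually is
output = 5.0      # where the "gun" is currently pointed
gain = 0.3        # fraction of the error corrected on each pass around the loop

for step in range(30):
    target = math.sin(step / 5.0)      # the target keeps moving
    error = target - output            # moment-to-moment monitoring
    output += gain * error             # a tiny correction fed back into the output
    print(f"step {step:2d}  target {target:+.2f}  output {output:+.2f}  error {error:+.2f}")

# Without the feedback term the output would simply stay at 5.0;
# with it, the output settles in and tracks the moving target.
```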
Wiener the mathematician used to joke about his ignorance of the brain. Nevertheless, in 1947, from pure cybernetic theory, he predicted one of the brain's (or computer's) most important functional attributes: reverberating circuits. (Reference to the "reverberating circuit" is to be found even in standard textbooks.[21]) Even though he had never actually seen a cerebellum, he predicted both its function and the effects of damage to it (dysfunction in dynamic motor control).[22]
In a system like that conceived by Wiener, a short-term memory would reverberate around the feedback loop and remain active until other input modified it or shut it off. The hippocampus seems elegantly designed for just that kind of activity.
The specific types of memories associated with the human hippocampus are especially susceptible to emotions. Tell a class of students that a particular fact will appear on the final examination and it's almost as though their limbic systems suddenly opened the gates to the permanent storage compartment. On the other hand, if you become scared or angry or amorous just before you dial a newly looked-up telephone number, you'll probably have to consult the directory again before putting in the call.
Karl Pribram has suggested that neural holograms exist in microcircuits within neurons. A cogent argument can be made for his thesis, but not for the short-term memory in the hippocampus. The evidence suggests that short-term hippocampal memory depends on a vast number of neurons, perhaps even the entire feedback loop. Let me repeat an important maxim of hologramic theory: the theory won't predict the biochemical or physiological mechanisms of memory; the absolute size of a whole phase code is arbitrary, meaning that it may be (but doesn't have to be) tiny, as in Pribram's microcircuits or in oscillations within a pair of molecules on a cell membrane; or gigantic, as in the entire hippocampus or even the whole nervous system. Recall that the same code may exist simultaneously in many different places or sizes and in many specific mechanisms. To make the memory of a telephone number permanent, what has to transform is not a protein or a voltage but a set of variational relationships--the tensors of our hologramic continuum.
In 1922, Einstein wrote The Meaning of Relativity. In it, he demonstrates that his special relativity theory (E=mc²) can be derived from the tensors of his general relativity theory, from his four-dimensional space-time continuum. Although the hologramic continuum is not the same as Einstein's theory, we do use Riemann's ideas, as did Einstein. The logic is similar. Einstein demonstrated the same relativity principle both in the very small atom and in the universe at large. Hologramic theory, too, tells us that the same elemental rules operate in mind in the small or the large. If we find a memory in, say, the resonance of two chemical bonds within a hormone molecule, we should not throw up our hands in disgust and abandon science for magic simply because someone else discovers the same code in the entire feedback loop of the hippocampus. Thus if microcircuits turn out to play no role in the short-term memory of the hippocampus, they could still very well exist elsewhere.
***
The cortex on the cerebrum's frontal lobe may also be essential to the uniquely human personality, and some of the data collected with reference to it are not only useful to our discussion but also rather interesting in their own right. One of the most important contributors to knowledge about the frontal lobe is no less than Karl Pribram.
Human beings who have undergone prefrontal lobotomy often cannot solve problems consisting of many parts. Pribram, who was familiar with these clinical signs as a neurosurgeon, had a hunch about why, a hunch he pursued in the lab as a neurophysiologist.
Signals from the frontal lobes often have an inhibitory effect on other areas of the brain. (One theory behind prefrontal lobotomy was that it relieved the uptights.) Pribram began wondering if such inhibitory activity might represent something akin to parsing a message-- breaking a sequence of letters or words into meaningful chunks. Maybe to lobotomized patients words such as "hologramic theory" seem like "ho logramichteory"; or "ho log ram ich teory."
He tested his hypothesis on monkeys with what he called "the alternating test," a modified version of the shell game as he describes it.[23]
In the alternating test, the monkey sits in front of two inverted cups, one with a peanut under it, the other not. To win the peanut, the monkey must turn up the cup opposite the winning choice on the last trial (thus the name 'alternating'). At the end of a trial a screen descends, blocking the monkey's view of the cups and remaining down anywhere from a few seconds to several hours. Monkeys find the game easy, and, after a little experience, readily win peanuts, even after the screen has been down all day long. But following frontal lobotomy, Pribram found the monkey "will fail at this simple task even when the interval between trials is reduced to three seconds."[24]
It occurred to Pribram that "perhaps the task appears to these monkeys much as an unparsed passage does to us." What would happen, he wondered, if he organized the message for the lobotomized monkey? He thought he could do the parsing by alternating short and long pauses between trials. In the original test the interval between trials was randomly selected (for statistical prudence). Now, though, the animal would sit in front of the cups, as before, and would again win a peanut by selecting the cup opposite the correct choice on the previous trial. But the screen would remain down for either 5 or 15 seconds. Alternating short and long pauses was akin to converting an amorphous sequence, as for example LRLRLRLR, into sets like (L+R) (L+R) (L+R), the 5-second pauses representing the plus signs and the 15-second pauses the parentheses.
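For concreteness, here is a toy rendering of that parsing move. The 5- and 15-second pauses come from Pribram's design as described above; the rest (the sequence length, the notation) is only illustration:

```python
# The same L/R alternation, once as an unparsed run and once chunked by
# alternating 5- and 15-second pauses.
trials = ["L", "R"] * 4                      # the amorphous sequence LRLRLRLR

# Unparsed: every inter-trial interval looks the same to the animal.
unparsed = " ".join(trials)

# Parsed: a 5-second pause binds a pair together (the "+"), and a
# 15-second pause closes the pair (the parentheses).
pairs = [f"({trials[i]}+{trials[i + 1]})" for i in range(0, len(trials), 2)]
parsed = " ".join(pairs)

print(unparsed)   # L R L R L R L R
print(parsed)     # (L+R) (L+R) (L+R) (L+R)
```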
How did the lobotomized monkeys make out? In Pribram's words, "immediately the monkeys with frontal cortex damage performed as successfully as the control animals whose brains were intact."[25]
Pribram's findings have practical implications that go far beyond the frontal lobe. It might be possible to develop strategies to assist a damaged brain in carrying out functions it has only apparently lost. Such clinical strategies could conceivably evolve from exercises like finding the common denominator in the functions of the rat versus the human hippocampus.
***
Primates seem to parse information rather well in comparison to other highly intelligent organisms. The forward growth of the cerebrum, a hominoid characteristic, would seem to coincide with parsing capabilities. But potential pitfalls await us if we oversimplify any brain function. Consider this anecdote Alexander Luria relates about the English neurologist Sir William Gowers.
Gowers had a patient with a speech aphasia that surfaced when the person tried to repeat certain words, 'no' being one of them. During one session, after several failed attempts to repeat Gowers's sentence, the patient became exasperated and declared: "No, doctor, I can't say 'no.'"[26]
It was as though Gowers's patient housed at least two disconnected mental universes. (The observation seems akin to the split brain.[27]) Obviously, if the two independent universes can't communicate, they can't integrate (or parse, for that matter). The psychologist Julian Jaynes[28] has proposed that human consciousness evolved, and history dawned en passant, following the development of new connections between the left and right temporal lobes.[29] A capacity for acute self-awareness, Jaynes cogently argues, is a cardinal characteristic of present-day human beings. According to Jaynes, when a prehistoric person reported, for instance, hearing the voice of the gods, he or she probably heard himself or herself without recognizing the actual source. Our ancestors, in Jaynes's theory, had "bicameral" minds--minds like the U. S. Congress, divided into a Senate and a House.
I don't know if Jaynes is right or wrong. (And I have no advice as to how to test his hypothesis experimentally.) But the harmonious blending of mentalities, like Buster and the animals in the looking-up experiments, is consistent with the hologramic principle of independence--but on a cosmic scale. The smooth, continuous blending of one universe with another is something the hologramic continuum can do very easily, in theory. Nor would we need a lobotomy to separate dimensions; that could be done with signals producing the equivalent of destructive interference--jamming!
***
In chapter 8, when we were optically simulating calculations, I demonstrated that identical phase-dependent memories can be produced both a priori and a posteriori. Thus one principle we can infer directly from hologramic theory is that the ancient argument between rationalists and empiricists (i.e., whether ideas are innate or learned) is a phony issue in the quest of mind-brain. On the one hand, we can use learning rates to evaluate memory formation but can't define or reduce the hologramic mind to learning and experience. Conversely, the question of whether nature or nurture or a combination of both create(s) a given memory is something we have to establish from empirical evidence, not a priori with our theory. Yet the empirical evidence contains many surprises that often appear contradictory and preposterous in the absence of a theoretical explanation. Let's consider two examples: language and social behavior.
***
Until about thirty years ago, nothing seemed less innate and more dependent on experience than the grammars of human languages. Yet one fundamental tenet of contemporary theoretical linguistics is that all human languages develop from common, universal, a priori rules of syntax.[30] Since many specific languages and cultures have emerged and developed in total isolation from others, yet exhibit common rules, those rules must be present at birth, so the reasoning goes.
If we accept the latter idea, does it necessarily follow that the genes determine grammar? Maybe. But in terms of constructing a phase code, there's at least one other possibility: imprinting during intrauterine life. If this sounds wacky, consider the fact that the behavior of ducklings can be profoundly influenced by sounds they hear while still unhatched, in the egg.[31]
We, as fetuses, receive sonic vibrations set up by our mother's voice and her heartbeat. The rhythm of the heart and the resonance of the human vocal apparatus share common features independent of culture, and the modulating effects of the human body would be the same whether Mom lives in Madagascar or Manhattan. Now, I'm not asserting that this heartbeat hypothesis is correct. Hologramic theory really doesn't provide the answer. But it would be very useful to know if intrauterine experiences can set the stage for the development of language. Science aside, imagine what a poet could do with the idea that language has its genesis not in our embryonic head but in the heart of our original host.
***
A sizable and controversial body of literature exists on the language capabilities of gorillas and chimpanzees.[32] Some critics of the idea define human language in such a way that, by definition, the sign language and symbolic capabilities exhibited by apes are 'behavior,' not 'language.' Hologramic theory won't resolve the controversy. But if a gorilla and a man encode and transform the same phase spectrum, they hold a similar thought, whether it's the sensation of an itchy back or the statement, "I'm famished!" Still, if we invoke the hologramic principle of dimension and also admit the existence of local constants, we would not reduce the ape's and the man's behaviors to "the same thing."
***
Turning to social behavior, nothing seemed less learned and more instinctive to the experts of a generation ago than the social behaviors of nonhuman animals. Herds, coveys, packs, bands, prides, schools....tend to exhibit rigid, stereotypic order, showing the same attributes today as when the particular strain of the species first evolved. But much evidence of social ambiance within animal groups has come to the fore, especially since Jane Goodall's work with chimps and George Schaller's studies of mountain gorillas put ethology on the map in the 1960s.[33] One study I'm particularly fond of, by an ethologist named Gray Eaton, involved a troop of Japanese monkeys.
Eaton had relocated the troop from southern Japan to a large fenced-in field in Oregon. Social behavior in these animals is highly structured, with a dominance order among females as well as males. At the top of the entire troop is the so-called alpha male, which a collaborator of mine, Carl Schneider, used to call "the Boss." What makes for the Boss? Is he the monkey with the sharpest fangs, quickest fists, meanest temperament, highest concentration of blood testosterone? In Eaton's troop, the Boss was a monkey known as Arrowhead. And nobody messed with Arrowhead. Nobody!
Yet Arrowhead was a one-eyed, puny little guy who would have had a tough time in a fight with most females, let alone any of the other alphas. Eaton even found lower blood levels of male sex hormone in Arrowhead than in some of his subordinates. Nor was he aggressive; he didn't strut around displaying his penis; didn't beat up on other monkeys to show everybody else who was in charge. The troop didn't respect Arrowhead's primacy because of a machismo he really didn't have but because of a highly complex interplay of social behaviors and group learning! And the circumstantial evidence suggested that his position in the troop was an outcome of the females' dominance order. The play of baby monkeys often turns into furious if diminutive combat. Now a mother monkey is highly protective. Howls for help from two small disputants usually bring two angry females head-to-head. Of course, the weaker mother usually takes her little brat by the arm and scrams, leaving the field to the stronger female and her infant. Monkeys with weak mothers grow up learning to run away. And the male monkey with the dominant mother grows up expecting others to yield to his wishes, which they tend to do.
Maternalism doesn't end as the male monkey matures. Eaton described Red Witch, who helped establish her grown son as the second-ranking member of the troop. The son had challenged the second-ranking male, who was the tougher of the two. But when her son cried out for help, Red Witch came running. She jumped into the fight, and together they established sonny as the number two monkey.
Curiously, Red Witch didn't make her son the Boss. Scrawny Arrowhead was no match for her. The study didn't really answer the question of why not. But the prospect of challenging Arrowhead seemed like one of those critical taboos that, violated in a human society, often lead to the downfall of the community.
Look at it like this. The job of the Boss is to lead the troop from imminent danger. He must know the direction from which a hungry leopard may suddenly appear and the quickest possible escape route. Should a threat come, the Boss must direct an orderly and efficient retreat: mothers and infants first, he and his hefty lieutenants last, to put up a fight and, if necessary, die to save the troop (and their gene pool). Somewhere in the intricate behavior of the group, even Red Witch learned not to mess with the Boss of her troop.
***
June Goodfield is an historian who has raised insightful questions about science and scientific ideas. "Why, with very few exceptions," she once wrote, "have these themes or these people never stimulated great works of literature or art?"[34] She went on to observe, "somehow science manages to extract the warmth and beauty from the world."[35]
June Goodfield's words make me feel guilty, and I wish I could deny her contention. But I can't. And I used to wonder if hologramic theory would so perfect our understanding of mind-brain as to leave no place in the human psyche for art. Has science finally claimed the last major mystery in Nature? Is mind now fated to become perfected and boring--and dehumanized?
I'm not really sure. But I began to phrase the question in terms of what hologramic theory suggests about intelligence. The result was the final two chapters of this book.