(written by Lucas A. C. Derks & Dr. John David Sinclair in 2000)
A new taste in general psychology
We can think what we like about our brain; our brain doesn’t mind. It even allows us to conceive, remember and believe the silliest of theories about itself. Neuroscience is full of scattered statistics and measurements; what we lack are the grand dreams that encompass all the data and that also enable us to better understand the neuro in NLP.
Working with NLP demonstrates that for humans a single experience may be sufficient to build an entirely new line of thought and behavior. To understand how ‘one trial learning’ can occur within NLP change work, we need to look at the neuronal basis of learning.
Most contemporary neuroscientists agree that learning takes place at the neural switch points – the synapses. Research has shown that experience can change synapses: they can become either stronger or weaker, and this change can persist for a lifetime. Changes in synaptic conduction are without doubt the basis for learning and long-term memory. But how exactly does this work?
Generations of psychologists mistakenly adhered to the so-called ‘Hebb rule’, which maintained that using a neural connection automatically strengthens it. In other words, the more you think of something, the better you learn it. But Sinclair (1981) turned this Hebb rule upside down.
According to Sinclair synaptic connections get stronger while they are resting, right after they have been used. Just using a synapse only weakens it. In other words, thinking about the same thing continuously reduces the neural connections involved. And such weakening of connections means extinction, amnesia or reverse learning. Only when we stop, after thinking about the same thing for a while, do we provide the used neural connections with a period of rest. During this rest the connection is strengthened again. And provided that the synaptic connection was not overused and that the duration of rest was sufficient, the synapse can reach a new level of strength, higher than before.
Thus the rate at which learning occurs is influenced by the firing pattern of the neurones involved. For one trial learning to take place, the human brain must perform many rehearsal trials in imagination, and/or have powerful ways to rest its synapses.
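The rest principle can be caricatured in a few lines of code. The sketch below is purely illustrative – the function name, the parameters and every number in it are invented for the example and have no physiological basis – but it shows the qualitative claims of the principle: continuous use weakens a connection, moderate use followed by rest strengthens it, and an overused connection no longer profits from rest.

```python
# Toy caricature of Sinclair's 'rest principle'. All parameters are
# invented for illustration; this is not a biophysical model.

def simulate(events, strength=1.0, use_cost=0.05, rest_gain=0.15, overuse_limit=5):
    """events: sequence of ('use', n_firings) or ('rest', 1) steps."""
    recent_use = 0
    for kind, amount in events:
        if kind == 'use':
            strength -= use_cost * amount       # firing itself weakens the synapse
            recent_use += amount
        elif recent_use and recent_use <= overuse_limit:
            strength += rest_gain * recent_use  # rest after moderate use strengthens
            recent_use = 0
        else:
            recent_use = 0                      # overused: rest only resets, no gain
    return round(strength, 3)

print(simulate([('use', 4), ('use', 4)]))   # continuous use: weakened (0.6)
print(simulate([('use', 2), ('rest', 1)]))  # use then rest: strengthened (1.2)
print(simulate([('use', 8), ('rest', 1)]))  # overuse: rest no longer helps (0.6)
```

Only the last case ends below the starting strength of 1.0 despite having rested, mirroring the proviso above that the synapse must not have been overused.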
Brakes to provide rest
One method to ensure that synapses get uninterrupted rest is to provide excitatory neurones with inhibitory neurones that block their firing. In other words, to have specialized brakes: neurones that switch on after a synapse has been used, to make sure it gets time off to rest. And such braking mechanisms do indeed exist in our brains.
Ever since Pavlov (1907) discovered it, inhibition has been regarded primarily as a means to prevent undesired responses, and to keep previously conditioned trains of thought on track (Shallice, 1978). These, and some other functions, are rightfully ascribed to it, but Sinclair was the first to emphasize its prominent role in learning, in the maintenance and consolidation of memories.
From this perspective it becomes clear that in the course of evolution the organization of inhibitory neurones became more and more refined. From the first development of the autonomic nervous system, in which the parasympathetic and sympathetic systems antagonize each other, the neurones used by one system were provided with rest when the opponent system was active.
In the human cortex, a number of very precise inhibitory systems have evolved. One of them is organized into small units that apply their inhibitory influence directly onto the cells that need rest, enabling them to strengthen their synapses (Eccles, 1980).
Sleeping and dreaming
Most psychologists came to believe in Pavlov’s theory that sleep equals inhibition of the brain. This ‘very obvious’ idea has misled many researchers ever since. Another popular, related misconception is that memory consolidation takes place during sleep. These ideas fit within common sense thinking; but as we shall see, both are totally wrong.
To understand this, it is important to note that the cortical inhibition systems – the brakes – can be switched off and on. When they are switched on, learning is fast. But when the cortical inhibitory neurones are switched off, learning slows down to a very low rate and the acquisition of new memories can be stopped almost completely.
When cortical inhibition is switched off, sleeping and dreaming emerge. And, in general, no memory of most dreams is stored. Only when we wake up in the middle of a dream do we tend to remember its contents. Awakening amounts to switching on the cortical inhibition, which functions like the record button of our mental tape recorder. The inhibition provides rest for just-used synapses. Thus the synapses used in the dream just before awakening are strengthened and become memories.
This also explains why we remember the knock on the door that woke us, even though we were asleep when the knock occurred. It is not the use of synapses alone – when our ears are stimulated by the knock – that determines whether a memory is stored. Instead, what happens after the knock is critical. If sleep continues, no memory is formed; but if sleep stops, the activated inhibition allows the just-used synapses to grow stronger, causing us to remember the knock.
This example brings us to the behavioristic concept of reinforcement. It was discovered that not the response itself but rather the subsequent consequences of the response determine whether strengthening of the response occurs. It is not the begging for an ice cream that makes us beg more and more, but getting one, that increases the probability we will beg for it in the future.
Thus generations of psychologists learned that reinforcement consisted of some rewarding stimulus: ice cream, food, sweets, attention, money, etc. That punishment could also sometimes reinforce behavior seemed to be contradictory and strange, but explanations were found easily enough in the so-called ‘law of effect,’ that said that every effect of a behavior could reinforce it. Yet Sinclair’s ‘rest principle’ clearly indicates that it is ‘rest’ that causes the reinforcement. And to be more accurate, rest caused by the activity of inhibitory neurones.
Being awakened from a sound sleep is not an event we usually associate with positive reinforcement, but just like getting food when we are hungry, it does improve memory storage of the immediately preceding events.
Altered states of consciousness
The research of Lopes da Silva and others indicates that cortical inhibition (switching the inhibitory systems off and on) is regulated by thalamic pulses in the alpha frequency. But the mental switchboard is not a matter of all or nothing; cortical inhibition does not have to be switched on or off in its entirety; some parts of the cortex may sleep while other parts are awake. In the areas where the thalamic alpha activates the cortical inhibitory neurones, one finds waking-state learning and thinking, while at the same time other regions may be asleep. In the latter regions there is amnesia, whereas the regions that are awake can store memories; this explains many phenomena related to altered states of consciousness.
The function of sleep
Do we need to sleep to rest our bodies? Maybe. Do we need to sleep to solve our traumas? No. Do we need to sleep to enhance our memories? No again: as we saw, memory consolidation does not take place during sleep.
Sinclair proposes that sleep functions primarily to provide rest for inhibitory interneurones: after a day of long hard labor – slowing down the firing rates of excitatory neurones – the inhibitory neurones get tired; more specifically, their synapses get weaker as a result of overuse. In other words, after braking the brain for some time, the brakes themselves need rest to restore their braking capacity.
During sleep the inhibitory neurones restore their ability to release inhibitory transmitters and, especially, restore the number of receptors for this purpose. Up-regulating the number of receptors is a slow process, requiring several hours. That is why we need several hours of sleep. The amount of sleep a person needs is an indicator of the restoration rate of his brain. Of course, it also can be influenced by other factors, like taking naps during the day. The amount of sleep needed is also generally proportional to the amount of overuse inflicted on the inhibitory neurones. For example, long periods of high attention and alertness produce more overuse, and thus require more sleep afterwards. The more you brake the more rest the brakes need.
When cortical inhibition is disrupted – mechanically, by lack of sleep, or by intoxication – there are direct behavioral consequences. Not only is the ability to learn impaired, but orderly linear thinking that follows ingrained paths of association will also suffer from the lack of the restraining influence of inhibition: reveries, short dreams and epileptic seizures may occur. Acute psychosis can also be understood as the result of exhausted inhibitory systems. Exhausted brakes may cause hallucinations to occur; as these can exceed normal waking-state perception in intensity, they may appear ‘super realistic’.
The function of dreams
Sleep is a wonderful way to give rest to all inhibitory mechanisms, but not at all to the excitatory ones. In sleep, the firing rate of excitatory neurones is increased, up to even six times that occurring during wakefulness, as was discovered by Evarts (1967). This unbridled excitatory activity results in dreams. The lack of inhibition causes dreams to be very intense experiences.
Because neural activity without rest causes a weakening of neural connections, dreaming reduces connections that have become strong through training in daytime. Dreams tend to follow recently activated thought patterns. That is why, in the morning, everything we learned the day before has moved to the background of our experience.
The Freudian error of calling dream activity ‘unconscious’ has misled generations of psychologists. We immediately forget the very intense conscious experiences that we have while dreaming. Dreams are thought patterns with few inhibitory boundaries, which gives them their bizarre nature, for every concept can be combined and activated in every possible order. Fortunately, the high level of amnesia ensures that we cannot confuse our dreams too much with reality.
Consciousness and unconsciousness
Erickson’s concept of ‘The Unconscious Mind’ suggests that there are two modes of thinking, conscious and unconscious: what we are aware of and what we are not aware of. But John Miller (1941) suggested that, although there exists only one type of consciousness, there are many different unconscious neural phenomena, from forgotten, repressed, extinguished, autonomous, amnestic, automatic, pre-attentive, subconscious, subliminal, sleep, somnambulistic, comatose, post mortem, all the way up to latent memories. Conclusion: the unconscious mode is in fact multiform. Outside of our awareness, many different things may happen that we don’t know about.
We could say that consciousness is imposed on a multitude of unconscious functions. According to Baars (1988) consciousness is something that an organism needs more of, as the complexity of its mental activity increases. He also assumes that humans are the most conscious creatures we know of. Arrogance?
Consciousness monitors our mental software for bugs, and at the same time offers a mode for fast debugging. Or formulated differently: consciousness is a trouble-shooting device.
Pioneering research on ‘unconsciousness’ and ‘consciousness’ was carried out by Schneider and Shiffrin (1977). They saw fast and parallel thinking as a faculty of unconscious processes, while serial controlled thinking was the relatively slow way of consciousness. In this way Schneider and Shiffrin can be said to be the experimental researchers who opened the door of academic cognitive psychology to the Ericksonian concept of the mind. For Schneider and Shiffrin’s experiments proved that Erickson’s view makes sense: we learn our learnings until we know them so well that they have passed before we become aware of them – too fast to be seen on the monitor of the brain.
One of the greatest misconceptions about consciousness is its confusion with ‘self consciousness’. Psychologists who defined consciousness as something social, or as self-awareness, have misled cohorts of their colleagues. What they should have stated is that ‘self consciousness’ is just an instance of consciousness, a subset of it. Self consciousness arises from making the self the content of consciousness, just as attention can be focussed on one’s own thought processes, resulting in reflective (meta-level) thinking.
Everything we learn ultimately becomes part of our unconscious automatic rapid processing. The brain increases its processing speed with each repetition until it reaches its maximum execution rate. The speed difference between conscious thoughts and very well learned unconscious mental activities is quite significant. The difference is noticeable whenever we learn a complex task. For instance, when a person starts a typing course, it takes them ages to find the location of each key, as the keys are not arranged in any previously familiar order.
Is there really only one consciousness? Yes, most of the time there is. But it seems that in hypnotic and other trances, drug-induced states, and above all in life-threatening situations, some persons report double awareness; this is probably caused by independent hemispheric functioning, or by two dissociated cerebral areas in parallel focus during extreme vigilance. In normal situations, most researchers see room for only one conscious process at a time.
Most one-trial learning in NLP involves conscious processing: resources are often found and applied within the focus of attention.
Is there an explanation for the accelerated speed of learning in conscious awareness? There are some ways to think about it:
(1) In consciousness the intensity of the excitation is heightened. Consciousness is more like a high gear: a high level of activity with rapid firing of many neurones. Although there is little physiological evidence, it seems likely that neural activity is more intense, and that this improves consolidation. How? Perhaps the firing tends to occur in large rapid bursts, and such tetanic stimulation triggers greater strengthening during the subsequent rest.
(2) Or as an alternative explanation, perhaps more synapses are used in consciousness, leading to the memory being stored in more places.
(3) Or maybe rest is more complete in consciousness. Attention seems to involve high levels of inhibition that block activity in all but the areas being attended to; as soon as attention moves on to the next thought, this inhibition would assure uninterrupted rest to the just-used brain cells.
(4) Consciousness also seems to be related to a higher state of alertness; it may represent specific brain regions being in a highly alert state. In its turn high alertness is related to higher levels of the neural modulator, norepinephrine, which suppresses spontaneous firing. As a result, rest is not interrupted by spontaneous firing, and synaptic strengthening is more effective.
Anatomically, everything points toward the limbic system being involved in this fast conscious learning. Damage to this part of the brain may impair the ability for one-trial learning. The part of the limbic system that consolidates information – by means of a strong inhibition – causes dramatic cases of forgetfulness when damaged.
Consciousness and mental control are not the same thing. The ultra-fast, well-learned unconscious processes are generally the ones in control. They may ‘swarm’ around the new evolving lines of thought under construction. By blocking and repressing all faulty turns that the new thought pattern may try to take, they ensure that this new mental pathway under construction will make sense. The new thought will automatically be integrated by the activity of numerous old habitual thinking patterns that are triggered during the construction period of the fresh one. And where the new one conflicts with an old one, a conscious experience of inner conflict is caused. When an NLPer is testing for ecology, he amplifies this involvement of unconscious processes in the creation of new lines of thought.
Concepts and neural networks
Generally speaking we can say that an organism will function best when the right behaviors are coupled to the perception of the right stimuli in its environment. Since all stimuli are different, being in reality unique events, an organism must be able to generalize them into categories or, in other words, to develop concepts.
Recognition of a stimulus is based on the reactivation of essentially the same collection of nerve cells that were fired when the stimulus was initially presented. When I see an apple, my brain responds in a way similar to what it did when I first learned to recognize such a fruit. Eccles (1980) and Edelman (1987) made it clear that ‘meaning’ in the central nervous system results from the temporal and spatial patterns of activated nerve cells. Various names have been given to this theoretical construct: ‘Gestalt’, ‘informational nodes’, ‘cognitive elements’, ‘engrams’, etc. Here I shall call it a ‘specific neural network’.
A specific neural network starts to exist after a collection of cells have been simultaneously activated at a particular moment, and then allowed to rest. During the rest the connections between the cells are strengthened. The coherence of a specific neural network depends on the strength of the mutual connections and is increased by repetition, by firing the same group of cells at the same time again.
However, the question of simultaneous versus sequential activation in a network needs to be considered. Even at relatively low levels in the visual system, there are neurones that fire only when they receive simultaneous input from a certain set of retinal cells. Others fire only when there is sequential activation of the retina cells that provide them with input: this is interpreted as motion. Thus the ‘meaning’ of a stimulus is not just the spatial pattern of activation, nor just the temporal pattern, but the combined spatial and temporal pattern of activation.
In the formation of specific neural networks, inhibition plays a dual role: (1) providing rest, and (2) helping to suppress populations of neurones that do not belong to the network, thus providing it with boundaries. The latter effect is also called ‘trans-marginal inhibition’.
Every concept that we can differentiate in our minds necessarily corresponds to a specific neural network. However, any one neurone may participate in many networks; it is the combination of large numbers of cells that makes up the characteristics of a network and determines its meaning. Think of a TV screen: the same pixels can make up a limitless number of meaningful pictures. The central nervous system is a three-dimensional structure, and neural networks are three-dimensional structures too. We must assume that a specific neural network may extend over huge regions of the brain, and that body neurones, brain stem cells, midbrain cells and cortical cells can be part of one and the same network.
In the language of NLP, a concept is a constellation of submodalities. Thus a combination of submodalities is the experience of a specific neural network. And the interface between neural network theory and subjective experience can be found in this relationship.
The location of networks
Scientists looking for specialized locations for certain memories and meanings miss the point that the brain is very much a multi-purpose apparatus. That certain regions are involved in body sensations, language or motor activity merely indicates the sites of the ‘input’ and ‘output’ gates of the brain. Besides these areas that process information with a certain type of sensory or motor content, a lot of space can be used for just about anything.
The NLP jargon of ‘transferring resources’ suggests that we are able to copy networks from one site in the brain to another. But such copying and multiplying is probably not involved in what NLPers call ‘transfer’. We had better think of ‘transfer’ as a metaphor for ‘connection’. In other words, the resources stay where they are, but are linked to the requisite present state networks that they were not yet connected to.
Networks and anchoring
Once they are learned, specific neural networks are easily activated. If only part of the neurones in it are activated by the senses or other networks, the coherence of the network takes care of the activation of the rest of it. Or in other words: the activation of a part is sufficient to start activity in the whole – a law recognized by Gestalt psychologists before World War II. We thus could say that specific neural networks need be activated only up to the threshold of perception in order to set them off. Classical conditioning is a result of this. The conditioned stimulus becomes part of the network, and when just this stimulus is activated it triggers the whole network. When an NLPer connects an anchor to some experience, the anchor becomes part of the networks that make up the experience. The anchor, as a part, has the power to activate all the rest of the network it belongs to.
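This part-activates-whole behavior can be illustrated with a minimal Hopfield-style network – a standard textbook model of associative memory, not something proposed by the authors – in which the ‘experience’ is stored as one binary pattern and the ‘anchor’ is a single component of it. All variable names are ours, chosen for the example.

```python
import numpy as np

# Minimal Hopfield-style sketch of 'activation of a part sets off the whole'.
# The experience is one binary (+1/-1) pattern; the anchor is one component.

def store(pattern):
    """Hebbian-style storage of one binary pattern in a weight matrix."""
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0)      # no self-connections
    return W

def recall(W, cue, steps=3):
    """Let activity spread until the partial cue fills in the whole pattern."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0  # break ties deterministically
    return state

experience = np.array([1, -1, 1, 1, -1, 1, -1, 1])  # the stored network
W = store(experience)

anchor = np.zeros(8)
anchor[0] = experience[0]        # fire only the anchor component

print(np.array_equal(recall(W, anchor), experience))  # the part recalls the whole
```

In this toy setting a cue consisting of a single component is enough to reinstate the entire stored pattern, which is the mechanism the Gestalt law above appeals to.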
Although there is little direct evidence, it seems likely that from this moment on, the level of excitatory activity inside a specific neural network increases due to reverberation, and builds up quite rapidly. Thus the activity within the network almost immediately exceeds that of the background neurones that do not belong to the network.
Association and connection
When we talk about the neurological mechanisms behind our work with NLP, we must look particularly at the laws governing the activation of neural networks.
Thinking is rapid when a specific neural network takes only a brief moment to get started, and the faster it activates subsequent specific networks, the faster thought proceeds. The increasing efficiency with which networks activate each other in successive learning trials implies the existence of mechanisms that automatically enhance the processing speed, as we witness during the ‘swish pattern’ or ‘chaining’.
Derks and Goldblatt (1986) proposed the concept of ‘engram simplification’. This means that specific neural networks lose all unnecessary reverberatory loops in the course of repeated use, thus allowing neural activity to proceed linearly and much more rapidly. The number of neurones that take part in a network is reduced with repeated activation, while at the same time the links between the network’s remaining units get stronger. This type of ‘learning’ is not confined to conscious processing; it goes on far beyond the time a thought pattern can be processed consciously. ‘Engram simplification’ permits a huge reduction in processing time.
Between awareness and unawareness
When an NLPer says to a person, ‘Ask your unconscious mind. . .’ this implicitly suggests that there exist sources of information other than unconscious ones. Although this suggestion may provide NLPers with the response they have in mind, theoretically it is nonsense since, in our view, all mental activity starts unconsciously. Thus any thought process starts pre-attentively (Neisser, 1967), may turn into something conscious, and ends unconscious again. Conscious processing must be regarded as an intermediate stage in learning, but not even a necessary one, since the brain is also capable of unconscious learning.
Unconscious learning, as we suggested before, is much slower than conscious learning in the sense that it takes many more learning trials. Pavlov’s dogs needed over twenty trials to learn unconsciously to connect food with a bell. Unconscious learning from a model, a person we learn to copy behavior from, will take considerable time when we do it without awareness. But it can go very fast when executed with full attention, as most NLPers know from their work. But learning can also take place totally outside a person’s control: people may unconsciously acquire curious and at times unproductive thoughts and behaviors that are difficult for them to identify with, so that they may sometimes wonder: ‘Is that me who is thinking this?’
Derks’s (1998) work with social representations (‘Social Panorama’) demonstrates many aspects of the unconscious acquisition of information that may disturb a person in mysterious ways, as when they discover that they have representations of dead people that they have never met. Sometimes such ‘ghost images’ are placed in central locations in their model of the social world and are surprisingly influential.
In many instances where people claim to have supernatural abilities, they just call them that because they have trouble identifying themselves with skills they acquired by unconscious learning.
The selection for consciousness
Thus, given the fact that both conscious and unconscious learning exist, we should ask: what mechanism takes care of the selection? What makes some learning take place in consciousness while other processes pass unnoticed? We need to go back in history to answer this.
Sometime before 1907 Pavlov discovered the ‘orientation reaction’ (OR). He noticed that animals react more intensely to a novel stimulus than to a familiar one. And humans react in exactly the same way. In 1960 another Russian psychologist, Sokolov, formulated a theory about this that delayed the development of psychology by decades. Although his theory had a number of very obvious inconsistencies, it worked like a sort of conceptual trap – once caught in it, escaping proved hard.
Sokolov stated that all incoming sensory perception is compared with pre-existing memories. The latter he called ‘neural models of the stimuli’. When there is a match between the perceived and the remembered, there is no OR; and Sokolov decided that the stimulus was ‘habituated’. But when the memory and the perception mismatch each other, an OR occurs. In this case habituation has not yet taken place.
What makes this idea attractive is the fact that the physiological (objective) OR always goes together with the (subjective) conscious awareness of the stimulus. In the laboratory this means that when brain waves are recorded during the evocation of an OR, the moment of the onset of awareness goes together with a clearly detectable positive peak. This peak seems to indicate the start of conscious awareness of the stimulus. In the experiments with the habituation of the OR, subjective experience and objective measurement do indeed coincide.
The peak in the measured brain waves can occur from 200 to 600 milliseconds after the stimulus is presented, with a mean of about 300 milliseconds. According to Sokolov’s reasoning, this implies that it takes about a third of a second for the sensory perception to be transmitted to the cortex, to be compared to neural models (memory), and in case of a mismatch, to break into awareness.
If Sokolov’s theory is right, any mismatch between a stimulus and its memory image will result in conscious awareness. And until a matching neural model of the stimulus is created, conscious awareness will be triggered at every stimulus presentation. But once a matching model is completed, the stimulus will be habituated and will pass unattended for ever more.
Why was Sokolov wrong?
It is obvious that it is not changes in the stimulus but changes in the nervous system that cause habituation; but does comparison occur?
The first question Sokolov could not answer was: what happens if a stimulus is totally new, and no neural model whatsoever exists in memory? Does the stimulus go unnoticed, which seems logical, or does this work like a mismatch? If the answer is ‘unnoticed’, how then do we start creating mental models in the first place?
Reputed ‘theoretical escape artists’ and critics of Sokolov’s theory were Mandler and Naatanen. They stated that by positing three elements – a perception, a neural model and a comparison between the two – Sokolov had made things very complex. It was hard to imagine a nervous system that could accomplish such a task, given the huge number of stimuli that reached it at every moment. And as stated above, it was hard to imagine how neural models would be created from scratch. It was also very difficult to imagine the evolutionary steps that would have led such a mechanism to develop.
This critique was fuelled when Sokolov postulated the existence of what he called ‘feature detectors’ in the brain: small mechanisms that scanned every stimulus for differences and similarities. A lot of research data could be interpreted in the light of these feature detectors, and Sokolov’s supporters grew in number; but they made his already complicated model even more complex and less realistic. For instance, could he explain how a tiger that had never seen a rifle showed no orientation reaction to one, while an arms dealer, who had a very refined and elaborate neural model in his mind, could really freak out when a gun was pointed at him?
Theory takes a turn
Mandler (1984) formulated an alternative view, that consciousness results from an interruption in an ongoing thought process. Derks and Goldblatt (1986) extended this view when they formulated the ‘feedforward theory of consciousness’. This says that from the first moment of activation, a specific neural network automatically and compellingly starts searching for another neural network to connect itself to. It does so by sending excitatory activity into the surrounding neural tissue. The purpose of this radiating of excitation, along the numerous dendritic branches of all the cells composing the network, is to single out one other specific neural network from all the potential networks in memory, as a successor. When enough excitatory energy is transmitted to one of these potential networks, this network starts to be activated and will be the next thing in mind.
It is in this process of one network searching for another network to succeed it that consciousness may arise. This happens when finding a successor network proves difficult. As the milliseconds tick by, the searching activity intensifies by increasing the amount of excitation which impinges upon the surrounding neural tissue.
As the neural network searches harder, the limbic system also becomes involved, or one could say ‘alarmed’, and in reaction it amplifies the activity of the searching network, pushing it over the threshold of consciousness. This whole process takes about 300 milliseconds and will show as a positive peak in the brain wave records.
When an active network manages to activate a successor quickly, this follow-up network becomes active before consciousness is called in. Then the latter helps to inhibit the former, and in doing so consolidates the connection between them by providing a period of rest.
In our theory, the brain is full of chains of specific neural networks feeding activity forward to each other. Only if such a chain is interrupted for lack of connectedness between the last activated network and its potential successor does an orientation reaction occur and involve consciousness.
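As a purely illustrative sketch – the function, the thresholds and the time constants below are all invented – the feedforward search can be caricatured like this: a searching network radiates increasing excitation toward its candidate successors, and if no successor reaches threshold within roughly 300 milliseconds, limbic amplification pushes the process into awareness.

```python
# Toy sketch of the 'feedforward theory of consciousness'. Every number
# here is invented for illustration; only the ~300 ms figure echoes the
# brain-wave peak discussed in the text.

def next_network(successor_strengths, threshold=1.0, step_ms=50,
                 gain_per_step=0.25, awareness_at_ms=300):
    """successor_strengths: connection strength toward each candidate network.

    Assumes at least one candidate will eventually reach threshold.
    Returns (winner, elapsed_ms, became_conscious).
    """
    t, gain, conscious = 0, 1.0, False
    while True:
        t += step_ms
        gain += gain_per_step                 # the search intensifies over time
        if t >= awareness_at_ms:
            conscious = True                  # limbic amplification kicks in
        for name, strength in successor_strengths.items():
            if strength * gain >= threshold:
                return name, t, conscious

# A well-worn chain: a strong successor is found fast, below awareness.
print(next_network({'habit': 0.9, 'novelty': 0.2}))
# Only weakly connected successors: the search drags on and turns conscious.
print(next_network({'habit': 0.1, 'novelty': 0.3}))
```

In the first call the strongly connected ‘habit’ network wins within one step and the chain runs on unconsciously; in the second, no successor reaches threshold before the 300 ms mark, so the search crosses into awareness before the weakly connected ‘novelty’ network finally wins.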
Derks and Goldblatt believe that the searching process stops when a successor network is found. And that immediately after a successor is found, processes of inhibition reduce the level of excitation and thus the intensity of awareness. At the same time this inhibition takes care of the consolidation of the new-found connection and it suppresses the activation of ‘wrong’ networks.
Right after a successor network has been found, the limbic system shows a radical change in activity. From amplified excitation it switches over to amplified inhibition. This inhibition breaks off the experience and also helps to assure rest, thus firmly welding the newly-made connection. Furthermore, the more intense the preliminary search, the more powerful this burst of inhibition seems to be, and the more solid this new connection. At such moments the subject experiences a feeling of great satisfaction.
Because this view builds largely on Sinclair’s rest principle, an idea that by itself proved very hard to grasp, the combination with the ‘feedforward theory of consciousness’ is even harder to communicate. However, dear reader, if you believe you understand it right now, you must congratulate yourself on your extraordinary level of imaginative power!
But what happens when no successor network is found?
While the search for a successor network goes on and on, it affects huge parts of the central nervous system, and has a strong influence on the behaviour of the organism. This is best thought of as a chain reaction. Or as an explosion, or storm of excitatory activity, that originates in the searching network.
Due to the intensity with which the synapses are used within a searching network during such an explosion, they progressively weaken. So the longer they search, the weaker these synapses get.
Sinclair also noted a general correlation between changes in synaptic strength and affect: those situations which, according to his theory, should produce a net increase in synaptic strength were the ones in which we generally experience pleasant emotional reactions, while those predicted to produce a net weakening of synapses were the ones in which we experience negative feelings. According to this relationship, when fruitless search for a successor network reaches its peak and many synapses are being weakened, the subject might be expected to experience strong dysphoric emotions, such as fear, anxiety, sadness, or panic. Clinical experience, of course, supports this conclusion.
The function of negative emotions
If there is one misconception in psychology that NLP has brought to the surface, it is the function of negative emotions in psychotherapy. Even though there are many counterexamples readily available, many psychologists and psychotherapists, and most of mankind, still believe that psychotherapeutic change causes negative emotions to be discharged. A statement like ‘The emotional energy that was blocked in the body by the trauma must be released’ can make whole tribes of therapists nod in agreement; except most NLPers. But although many NLPers know the practical side of the coin, the theoretical side is rarely understood.
Emotions, for sure, do not only occur in therapy; they are part of human nature. So why do people cry, and why do babies cry most of all? Is there a mechanism behind all these tears?
Our reasoning starts with the commonsense notion that all emotions finally end (Frijda, 1986). On the Sinclair, Derks and Goldblatt view, the negative emotion comes from the inability to find the next step in a chain of thought. And the termination of dysphoric emotions is explained by the weakening of the synapses involved in the search process. After some minutes of intense emotional activity, caused by fruitless search, the excitation finally comes to a halt. This must be because the substances (neurotransmitters and receptors) that are necessary for the conduction of excitatory activity are finally used up. At that time the synaptic connections that have been used most intensely will be ‘burned out’. This results in their extinction, and the subject will have amnesia for the concepts mediated by the burned-out networks.
Is this the way in which the brain deletes information? In a way it is. While sleeping and dreaming, and in emotional outbursts, the brain reduces the strength of synapses. If we could not reduce synaptic strength we would end up doing and thinking only one and the same thing: whatever was mediated by the strongest connections. So sleeping, dreaming and emotions are forms of negative feedback for the brain.
As most mothers and fathers know, babies cry a lot. We must assume that in so doing they are cleaning up their ‘mental hard disk’. But all this crying is a waking-state affair. Many babies cry themselves to sleep, and in sleep they go on weakening their ‘over-learnings’ even further by dreaming about them.
The resources vs extinction debate
If emotional discharge is the easiest way to get rid of traumatic and fear-inducing memories, then why do NLPers not make more use of it? What is wrong with extinction?
The main problem is that during the rest that follows extinction, there will be spontaneous recovery of synaptic strength. In the hours following an emotional discharge that almost took out a network, restoration sets in. Examples from Janov’s primal therapy, in which some clients did not benefit from over 70 intense emotional extinction sessions, show how easily networks can recover from near extinction.
Nevertheless, and even more misleadingly, during the recovery and thus the re-strengthening of the ‘traumatic’ networks, the client will enjoy positive affect: a dramatic change of mood from suffering to a state of peaceful clarity. Psychotherapists who make use of this principle in flooding or implosive desensitization procedures know the addictive impact that this emotional upswing can have on depressed clients.
Accurate clinical observations, however, have shown that in the post-extinction period the client is often able to perform valuable creative problem solving; finding successor networks becomes easy after an extensive emotional outburst. Furthermore, finding follow-up networks (or resources) will often happen without the therapist’s involvement; it may just happen, in which case the therapy works. Types of extinction therapy owe their popularity to clients’ ability to find resources on their own. And also to the placebo effect of emotional discharge: the suggestive power of therapy is never shown more dramatically than when the client experiences and expresses strong emotions.
So we must conclude that extinction is indeed a way for the mind to delete data it cannot process; but when no resources are found, there will always be new ‘backup files’ that may be similarly disturbing.
Experienced therapists know how long it takes from the confrontation with an unprocessable stimulus to the onset of crying. From the moment of perception to the moment of emotion there is a small time window in which thinking is conscious and clear. But as soon as intense emotions take over, the chances of carrying out controlled mental construction work diminish. In that case there is nothing for a therapist to do but wait until the crying or screaming has waned and the post-extinction period has set in, when the search for a solution can continue.
NLP’s technique of V-K dissociation helps to keep the emotion at bay and enables the client to go on problem-solving; V-K dissociation extends the client’s ability to carry out controlled conscious construction work. It is a way of going directly for the resources – at the price of the placebo effect of strong emotions.
On the other hand, positive emotions are closely connected with the process of finding a successor network after a period of search. When we find a successor, a new association, we tend to feel good. NLPers learn to calibrate when their clients find new connections: heads that automatically nod ‘yes’ indicate the moments of finding and connecting new networks. The basic difference between positive and negative emotions comes from the difference between searching (in vain) and finding (at last), where searching is excitatory and finding inhibitory.
As early as 1855 Herbert Spencer wrote that what characterizes an emotion is the location in the brain that it comes from. Similarly, we can say that the storm of excitation radiating from a specific searching neural network has a characteristic impact on the central nervous system, cortically and subcortically. It acts like a storm blowing from a certain direction, or like an earthquake having an epicenter in a particular location in the brain. How and where the sensory-motor cortex is hit by such a storm will give shape to the precise emotional behavior that a subject displays and his or her accompanying feelings. The corners of the mouth are pulled down when the motor cortex is hit by an excitatory storm; while the opposite movement comes when an inhibitory wind blows from the same corner.
Avoiding negative emotions
When a person is in the midst of a strong negative emotion, their interaction with the environment is often inadequate; sensory input and motor output seem to be momentarily disconnected. Blind rage may scare some offenders off, but the opposite is also likely. Fear can immobilize; flight can make one an easy target.
Scientists who believe all emotions are means of inter-human communication have difficulty formulating what it is that people intend to communicate by crying. It is even harder for them to believe that crying may have no communicative function at all, and be just a result of uncontrollable neural storms in the brain.
Practitioners of martial arts demonstrate that the ability to respond without negative emotion provides us with the best means for handling difficult situations. Keeping your cool works best for the survival of the fittest. The only type of excitation-driven emotion that has a clear (communicative) value for the survival of a species is sexual arousal!
As explained above, negative emotions are the result of a shortcoming in an individual’s ‘software’; and they are functional insofar as they motivate the person to solve these problems. Dysphoric emotions push people to learn in areas where their mental representation of the world is inadequate. When a person responds to the negative emotion by searching for follow-up steps for his ‘stuck’ mental programme, he will be rewarded with the pleasant emotion that comes when the newly found connections are consolidated.
Avoidance of negative emotions proves to be a strong drive in both animals and men. But avoidance of an emotion leaves the underlying software problems unsolved. As the Russian psychiatrist Vadim Rotenberg (1984, 1990) puts it, ‘the renunciation of search activity is the cause of every type of psychological problem.’
Relaxation and coping
The speed of problem solving is mainly determined by the efficacy of a person’s access to all the specific neural networks in his memory. Or as NLPers would say, problem solving is dependent on the accessibility of a person’s resources. High states of arousal, poor organization of information, and all kinds of temporary amnesias (dissociation, state dependent amnesia) reduce the rate at which adequate successor networks can be found.
Primarily, relaxing means decreasing the activity in the frontal (motor) cortex. This may proceed until there is no need for cortical inhibition in this region any more. In fact the frontal part of the cortex may fall asleep separately from the rest of the brain. Speaking becomes hard with a sleeping Broca’s area, and at the same time muscle tone diminishes in the entire body because the motor output gates fall asleep. Together these all lead to deep relaxation.
When we consider the systems of inhibitory neurones as the ‘brakes’ of the brain, we can classify them in different types.
(1) General brakes that inhibit huge brain areas, like the frontal lobes.
(2) Detailed brakes that respond automatically to single-cell activity, and protect specific synapses.
(3) Brakes that protect general areas that are in danger of being overused or even burned out.
(4) Emergency brakes that can act upon specific neural networks to stop them from becoming conscious and leading to action, and to stop them from being extinguished by emotional outbursts.
The cortex does indeed possess the last-mentioned type of brakes; they seem to be the newest ABS system in our evolution. These brakes help us to accomplish two things: they (a) interrupt an ongoing behavior in order to synchronize it with the state of affairs in the world (delayed response; the timing of action), and (b) prevent emotions (repressing) by blocking the searching network before high intensities are reached.
This type of brake was probably not originally designed for repressing, but rather to enable actions to be accurately timed. Repressing seems to be an evolutionary extension of motor control. Higher animals need such fancy timing brakes, because without them some behaviors would run in a fixed (and too brief) span of time. Remember, automatic unconscious processes tend to be very rapid. The ticking rate of a chain of networks may involve thirty networks or more per second. Although this enables behavior to be executed very fast, it is useless if it is not accurately timed. A hunter may be able to shoot twenty arrows in a minute at a deer, but this is only effective if the animal is in the right position; so the hunter needs to be able to wait, with the shots in mind, until the right opportunity is there.
The fancy brakes of the brain consist of inhibitory mechanisms located beside the motor output centers in the frontal lobes. This mechanism is capable of blocking an ongoing sequence of specific neural networks, and of letting go at a previously established moment. The arrow is released instantaneously when the planned state of affairs eventually shows up in reality.
Derks and Goldblatt called this braking function in the brain ‘process inhibition’: the ability to inhibit ongoing mental processes. In humans, process inhibition contributes greatly to a broad variety of other cognitive operations. If we calculate six times eight plus twelve, we keep the ‘forty-eight’ in mind – process inhibition – until we release it as we add the twelve.
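The arithmetic example can be sketched as a hold-and-release routine. The explicit ‘held’ stack below is purely our illustrative assumption — a programmer’s metaphor for process inhibition, not a claim about how the cortex actually stores the intermediate result.

```python
def six_times_eight_plus_twelve():
    held = []                  # networks kept waiting under process inhibition
    held.append(6 * 8)         # 'forty-eight' is held, not yet acted upon
    partial = held.pop()       # the brake releases it when it is needed
    return partial + 12        # ...and only then do we add the twelve

print(six_times_eight_plus_twelve())   # prints 60
```

The point of the metaphor is the separation of steps: the first result is computed, parked, and operated on only after an explicit release — exactly the hold-then-release pattern the text ascribes to the brain’s braking mechanism.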
George Mandler (1984) also suggests that many operations in conscious awareness are directed by the same system. Most often seen as forms of short-term memory, these operations in fact rely on the same braking mechanism, which works by holding a network and then releasing it when we need to operate on it. Mandler sees selective attention as belonging to the same class of mental operations: selection means stopping a searching network and keeping it outside of awareness.
The process Freud called ‘repression’ is a clear example of the use of process inhibition, even if the term is overused. When we apply our mind’s brakes again and again to the same network, this action itself begins to function automatically. Repression, as encountered in clinical practice, is the result of repeated avoidance of negative emotions by making use of the brakes. If an unprocessable stimulus is blocked just before the searching intensity reaches the threshold of consciousness, no emotion will result.
Many therapists indicate that this causes problems in the long run. For instance, after a mental process is broken off, it must be held steady by prolonged inhibitory activity. And clinical observation suggests that the held (repressed) network maintains some searching activity at a low (sub-conscious) level of intensity. This low level of searching activity, consisting of radiating excitation, may have a pernicious influence on many parts of a person’s central nervous system. It may cause all kinds of psychosomatic and other complaints. It makes people function like cars with the hand brake left on: they feel mentally tired, cannot concentrate for very long, and often suffer from depression. Since the central location of the brakes of the brain is in the frontal cortex, relaxation – putting this structure to sleep – immediately reduces our mental ‘braking power’. Therapists make use of this when they enable a client to become aware of usually repressed mental contents by first allowing them to relax.
Sometimes, and in certain individuals, the brakes suddenly collapse. Acute psychosis seems to result from a person’s inability to tame numerous incomplete, heavy searching mental processes. Searching without finding results in emotional outbursts. At the same time many neural connections are burned out; reverse learning is the epiphenomenon of negative emotion. When the brakes of the brain give up, many of the individual’s problematic cognitions will be ‘deleted’. Thus acute psychosis functions as a purification, helping to remove information that impaired the subject’s behavior.
If one and the same action is repeated rapidly and uninterruptedly – as in dancing or running, for instance – the restoration rate of the synapses involved may be insufficient, resulting in the weakening of these neural connections. Such overuse of neural connections produces aversive feelings of fatigue, a demand to stop. If the action continues nonetheless, the ability to perform it will ultimately be terminated, and even after recovery and restoration there may be a permanent deficit in the ability to carry out the action. Sports people call this phenomenon overtraining, and for behavior therapists it is the method of ‘negative exercise’ that they learned from Wolpe: repeat a compulsion as fast and as long as you can, until you cannot do it any more.
There is a particular rate of neural activity at which the synaptic strength remains constant; it’s like running on the spot. Any level of activity above this rate causes a net weakening; all activity below this level is rest. The same seems to hold true for both excitatory and inhibitory neurones. The inhibitory cortical neurones can go on without rest when they are running at a steady 8 to 13 pulses a second (alpha frequency). Any activity that causes them to work harder leads to fatigue.
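The break-even rate can be written as a simple update rule: activity above the rate yields a net weakening, and activity below it counts as rest and yields a net strengthening. The linear form, the zero floor and the 10 Hz break-even point (the middle of the 8 to 13 Hz alpha band) are assumptions chosen purely for illustration — a rough sketch of the rest principle, not a physiological model.

```python
BREAK_EVEN_HZ = 10.0   # assumed rate at which synaptic strength stays constant

def update_strength(strength, rate_hz, gain=0.001):
    """One time step: net change is proportional to (break-even - rate).

    Firing above the break-even rate weakens the synapse; firing below
    it is effectively rest and strengthens it. Strength is floored at 0,
    which stands in for extinction ('burning out') of the connection.
    """
    return max(0.0, strength + gain * (BREAK_EVEN_HZ - rate_hz))

overused = rested = 1.0
for _ in range(100):
    overused = update_strength(overused, rate_hz=25.0)  # sustained overuse
    rested = update_strength(rested, rate_hz=2.0)       # mostly rest
# 'overused' has collapsed to zero; 'rested' has grown past its start value
```

Run long enough, the overused connection in this toy model is extinguished outright — the analogue of the ‘negative exercise’ and overtraining effects described above — while the mostly resting connection keeps consolidating.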
But the brain seems to possess a mechanism that enables it to make use of a kind of ‘turbo’ in the form of neurotransmitters called endorphins. They can brake and block synaptic activity before overuse occurs. By doing so these endorphins protect neural connections that are in danger of burning out. Their inhibitory activities help reduce the weakening of hard-working synapses in the midst of action. During a marathon it is the endorphins that consolidate the connections involved in running; without them the race would be too long to sustain.
In effect, endorphins produce reinforcement; they strengthen the heavily used synapses and at the same time make the subject feel fine. Thus, in a way, they enable us to exceed our mental speed and endurance limits. Endorphins block the aversive feeling by which the brain tries to stop us from exhausting ourselves, and replace it with the euphoric ‘high’ that accompanies a net strengthening of synapses.
However, running with the turbo on has its price: one needs more sleep afterwards and one faces the danger of inflicting damage on the body – a danger that pain and aversive feelings would otherwise have prevented. Over the last few decades it has become clear that the self-administration of drugs that mimic or promote the turbo effect on the brain also reduces the discomfort of searching processes. But here the price one pays is much higher. Breaking the speed limit of the central nervous system probably impairs its ability to coordinate parallel activity. Ultimately the whole system gets messed up.
Dreaming about this turbo brain led to Sinclair’s idea that most addictions have to do with the release of endorphins by the brain itself, or with the self-administration of endorphin-like substances (like heroin). Alcohol drinking, he argued, was no exception. Part of the kick from alcohol comes from the endorphins that the brain releases when it faces the danger of burning out all the connections involved in the digestion of alcohol. This kick induces the person to become an alcoholic. The reinforcement of the neural connections that mediate alcohol consumption makes sure the person learns to drink better and better. Absence of negative feedback may result in alcohol drinking becoming the best learned behavior of all.
In a long series of animal experiments and an impressive number of highly significant clinical trials with humans, Sinclair showed that his hypothesis was correct. When alcoholics were given naltrexone along with their drinks, the drug blocked the effect of the endorphins so that drinking didn’t give them a kick any more. In a matter of weeks the alcoholics saw their drinking return to ‘normal’ levels. And if there are still unsatisfied positive intentions left, NLP methods for changing behavior can be applied.
The ‘neuro’ in NLP is difficult terrain. Besides, one can practice NLP without having any understanding of the ‘neuro’. It took us about nine years to complete this article. We hope it serves those NLPers who want deeper exploration of the basic concepts that they make use of every day.
Baars, B. (1988). A Cognitive Theory of Consciousness, Cambridge University Press, Cambridge.
Derks, L.A.C. (1998). The Social Panorama Model: Social Psychology Meets NLP. SON-repro, Eindhoven.
Derks, L.A.C. (1988). Psychotherapie: een kwestie van wennen. Stichting voor Educatieve en Therapeutische Hypnose, Den Haag.
Derks, L.A.C. & Goldblatt, R.B. (1986). The Feedforward Conception of Consciousness: A Bridge Between Therapeutic Practice and Experimental Psychology. The William James Foundation, Amsterdam/Berlin.
Derks, L. & Hollander, J. (1996). Essenties van NLP. Servire, Utrecht.
Eccles, J.C. (1980). The Human Mystery. Springer, New York.
Edelman, G.M. (1987). Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, New York.
Evarts, E.V. (1967). ‘Mind Activity: Sleep and Wakefulness.’ In Quarton, Melnechuk & Schmitt (eds.), The Neurosciences: A Study Program. Rockefeller University Press, New York.
Frijda, N. (1986). The Emotions. Cambridge University Press, Cambridge.
Lopes da Silva, F.H., Vos, J.E., Mooibroek, J. & Van Rotterdam, A. (1980). ‘Partial Coherence Analysis of Thalamic and Cortical Alpha Rhythms in the Dog.’ In G. Pfurtscheller et al. (eds.), Rhythmic EEG Activity and Cortical Function.
Mandler, G. (1984). Mind and Body, Psychology of Emotion and Stress. Norton, New York.
Miller, J.G. (1941). Unconsciousness. Wiley, New York.
Näätänen, R. (1982). ‘Processing Negativity: An Evoked-Potential Reflection of Selective Attention: Review and Theory.’ Psychological Bulletin, 92, 605-40.
Neisser, U. (1967). Cognitive Psychology. Appleton-Century-Crofts, New York.
Pavlov, I.P. (1927). Conditioned Reflexes. Oxford University Press, Oxford.
Rotenberg, V. (1984). ‘Search Activity in the Context of Psychosomatic Disturbances, of Brain Monoamines and REM Sleep Function.’ Pavlovian Journal of Biological Sciences, January-March 1990.
Schneider, W. & Shiffrin, R.M. (1977). ‘Controlled and Automatic Human Information Processing: I. Detection, Search, and Attention.’ Psychological Review, 84, 1-66.
Shallice, T. (1978). ‘The Dominant Action System: An Information-Processing Approach to Consciousness.’ In K. Pope & J.L. Singer (eds.), The Stream of Consciousness: Scientific Investigations into the Flow of Human Experience. New York.