Modelling as a misleading ideology in NLP


(written by Lucas Derks in 2006)


In 1985, Leslie Cameron-Bandler, David Gordon and Michael Lebeau published The Emprint Method. It was the first book on modelling, although the authors used the term 'mental aptitude patterning'.

Although their approach was state of the art and the book well written, it had only a minor impact on NLP. Already in 1985 this raised the question: what is the real status of modelling in NLP? Over the last decade, David Gordon and Graham Dawes have continued to offer trainings in improved versions of the Emprint method; this is probably the highest-level modelling training available, but it has proven difficult to sell.

Modelling is said to be NLP’s unique research method, in the same way that observation of the night sky is unique to astronomy. The many existing NLP techniques are presented as the products of this in-depth analysis of the patterns found in the thought and behaviour of experts who developed their skills intuitively.

Bandler and Grinder chose the term 'modelling' around 1976 to characterise their own approach. I imagine this was part of their search for features that distinguished them from other psychological traditions.

The term modelling resulted from reflecting on what they did. The word 'modelling' was already in use in connection with social learning and the construction of mathematical models. Bandler and Grinder added a partly new meaning to the word, without giving it a formal definition: to them it meant observing experts in action, identifying with these experts, copying their skills and describing them in step-by-step formats.

An intriguing question is: were Bandler and Grinder at that time familiar with the so-called focussing method, developed by Eugene Gendlin with a rather similar approach? Gendlin studied the mental patterns that distinguished good psychotherapy clients from bad ones. His focussing therapy is a step-by-step recipe based on what his good clients did.

Modelling, as defined above, became part of many NLP master practitioner trainings, and there are also several special modelling courses. We should not underestimate the value of the concept of modelling in giving shape to NLP. The modelling attitude of open curiosity is a great prerequisite for all learning. Even if one just tries to accomplish the difficult task of modelling someone on only a minor skill, this usually results in very fruitful experiences. It opens our eyes to the existence of tacit knowledge and the structure of unconscious cognition.

NLP as a unity

Modelling in NLP is described in many ways, but most often it has to do with a modeller observing an expert with the aim of extracting parts of his artistry. The goal is always to learn the expertise oneself and to pass it on to others.

This definition nicely describes what Bandler and Grinder so successfully did in the seventies. However, Bandler and Grinder only partly linked their techniques to their individual sources. The clearest examples where they did are the Satir categories and the Milton Model.

It would have been totally congruent with the concept of modelling to present a clear set of techniques linked to their origins. Instead, the techniques were presented as if they had some basic elements in common; by 1982 NLP was presented as if it were a unity, as if there was one NLP consisting of techniques that were modelled from many sources. At that time it was often said that the common denominator was that these techniques had proven to work.

This, at the time (and still today), was totally acceptable to all. However, it gave the methodology of modelling a fuzzy start. The reason was, of course, that a concept like rapport was a general, overall thing, while 'change personal history' was a concrete intervention. Just as ecology and positive intentions were general, but chaining was one single technique. Today we would say: the modelled pieces were of different logical levels, or of different chunk size or scope. Respect for the client's model reaches up to the level of identity, while applying a kinaesthetic anchor is behaviour.

At the end of the eighties, some people discovered that other methods existed that also worked but conflicted with some general aspects of NLP. For instance, provocative therapy as developed by Frank Farrelly focused a lot on content (often introduced by the therapist), did not respect the client's model of the world that much, and gave a new meaning to rapport. Although Bandler himself had sponsored the publication of Farrelly's book, it clearly conflicted with the unity of NLP. A decade later, in Europe, the same happened in relation to Hellinger's family constellation therapy.

Besides the examples from expert therapists, many others came from sales and management; sometimes these were persuasion methods that failed to respect the client's model or were not ecological. Not everything that works can easily be integrated into NLP.

The modelling debate

In 2005, a discussion forum on the internet was devoted to the distinction between NLP-modelling and analytical modelling. John Grinder and Carmen Bostic St Clair came to agree with Robert Dilts that this distinction was vital. Dilts is the author of the book Modelling with NLP (1998). In this volume he described the method by which he arrived at models for leadership, based on his study of successful managers at Fiat.

This comprehensive book, however, did not distinguish between NLP-modelling and analytical modelling. Grinder and Bostic defined NLP-modelling on the basis of two criteria: 1) postponing any attempt to structure the information that the modeller extracts from the expert, until 2) the modeller is able to demonstrate the modelled skill himself. All other ways of arriving at models are called analytic.

This discussion brought to light that the procedures for arriving at a new piece of NLP had not yet been defined. Only Grinder and Bostic made an attempt to formalize them, in 2001. If these criteria were applied to all the material that is commonly regarded as genuine NLP, most of it would fall short. See also Steve Andreas's article 'Modelling Modelling' on this. For instance, Robert Dilts' modelling of geniuses must be categorized as analytic modelling. So must the very popular Disney strategy and logical levels.

One wonders: NLP-ers have produced hundreds of books, but where are the NLP books that present new models based on single experts? Where are the examples of real NLP modelling, and who is teaching that method? Are Grinder and Bostic St Clair the only ones?

James Lawley and Penny Tompkins have written one: their Symbolic Modelling is derived from the New Zealand therapist David Grove.

In retrospect, some argue that Leslie Cameron-Bandler modelled Richard Bandler to produce meta programs. But did Connirae and Tamara Andreas model some expert to get to core transformations? Is Grinder's own New Code modelled after an expert NLP-er who was more effective than the others? Did Bandler model some expert for Design Human Engineering? Michael Hall said he modelled someone during his master practitioner training to arrive at his meta-states model. But after that?

Wyatt Woodsmall claims to be one of the few NLP-ers doing actual modelling: did he base his Time Line book on one expert? Was that expert perhaps Steve Andreas? Did Woodsmall NLP-model all the dimensions of personality that are also included in his book?

In the discussion forum on the above-mentioned website, the contrast between 'real or pure modelling' and 'false modelling' was raised. Since 'real' is itself a classic meta-model violation, this distinction was invalidated. (What makes this modelling more real than other modelling?)

The question is: do we need a clear definition of what modelling means in NLP?

The answer is YES only if we want to define 'NLP' on the basis of 'modelling'.

NLP pollution

The need to define 'pure NLP' becomes urgent when one is confronted with NLP practice that diverges widely from the original formula. Since more and more people do business in NLP, the diversity increases. Partly this is inspired by competition in the NLP market; every dealer is in search of his own unique selling propositions. But another part comes from people who are enthusiastic about models that were developed outside of NLP. Some NLP trainers started to include 'highly fascinating stuff' or 'extremely useful stuff' from all kinds of sources in their regular NLP programs. Nearly every existing brand of western, eastern or shamanistic psycho-technology was at some time combined with NLP or sold as NLP. At NLP conferences, half of the presentations were about NLP and X, X being some other, non-NLP method. This was called NLP pollution by its critics and NLP enrichment by its supporters.

The question is: Why is a free mixture of everything that is positive, useful, fascinating or marketable a problem for NLP? Why must it stay pure?

One reason for keeping NLP pure lies in marketing itself: products that lack clear profiles are difficult to sell. So it is vital to NLP's brand image that it is not just a nice mixture.


Every school of knowledge encounters purity questions at some stage of its development, because there are always people exploring the margins and trespassing beyond them. However, there is also the role-identity issue. Mediators are not coaches. Coaches are not therapists. Artists are not scientists. Astronomers are not astrologers. Identification with what you do means becoming what you do: 'I am an NLP-er.' On the level of identity this results in the strongest commitment. 'I am an NLP-er' helped me to master a difficult trade like NLP. But this motivation comes at a price.

Identification with one's trade and one's training can lead to violent debates between professionals. People may develop a sort of religious orthodoxy when they identify with what they do: and the more philosophical it is, the more bitterly they tend to quarrel. Psychotherapy and psychology offer many great examples of schools in conflict.

If you identify with NLP, you are an NLP-er. But when it is unclear what NLP is, you have an identity problem. Pure NLP is logically the NLP you can identify with. You may not mind the difference between Shi'ite and Sunni Muslims. But when your identity is built on this distinction, you will care a lot, and discussions may flame high about the rights and wrongs of such doctrines. You may consider quitting NLP altogether when you feel you don't fit in anymore on the identity level.

It is obvious that many new things in NLP did not come from 'real modelling projects'. They came from existing formats that were absorbed without any exploration of the subjective experience, beliefs, meta programs or criteria of the originators. For instance, Spiral Dynamics and the enneagram were introduced to the field of NLP without anyone modelling Graves or the Sufis. One could argue that this modelling process had already taken place, carried out by these non-NLP-ers: by Graves himself, for instance. However, by such reasoning every model can be introduced as a genuine part of NLP. Family constellation therapy, for instance, can be said to have been modelled by Bert Hellinger; the fact that this model is unrelated to many other NLP models does not have to be a problem. However, when NLP is considered to be one thing, the question arises: does this new model fit in?


Other new contributions are the results of NLP-ers playing around with the already accepted concepts. Re-modelling is the best word here. The Scottish NLP-er John McWhirter did an extensive remodelling of NLP. The criteria on which he restructured something were not always so clear. Was the new form more efficient? Easier to teach? Easier to learn? More effective? More ethical? Easier to sell? Or was it the need to create something with a personal signature? Michael Hall is another extreme case of remodelling NLP. He re-modelled and re-named most of NLP; even NLP itself he tried to rename. Is it his desire to improve NLP, or a more personal agenda? We saw the same renaming ten years earlier by Anthony Robbins; the introduction of his own brand of NLP (NAC) paid off for him.

Epistemological rules govern the better part of academic science: when the statistics are correct and the experimental design is valid, it is good research. As long as you stick to the right procedures, you are supposed to produce something worthwhile.

As already stated above, for a model to be accepted as a piece of NLP it takes more than having followed the correct modelling procedure. A great number of values belonging to NLP, stemming from the seventies, also decide whether a model fits or not. Think of ecology, sensory acuity, positive intentions, respect for the model of the world, avoiding psychiatric diagnostics and valuing the unconscious mind. If the correct modelling procedure were decisive for what belongs to NLP, we could have models for religious brainwashing, voodoo death or zombification that fulfil these norms.

Is all of that a problem?

The answer is YES if we want NLP to be true to its ideology. If we are just pragmatic, the answer is NO, because, modelling or not, NLP is a great success.

If we do see it as a problem, the question arises of how it can be solved. Since the NLP community lost its international journals (NLP World and Anchor Point), the only channels left are platforms on websites. Who will be able to lead the NLP community towards a proper methodology? As far as I am aware, there is no structure, panel or individual with enough authority to change anything in the way NLP-ers do their business. Neuro-linguistic programming is not evolving on a stream of publications, nor is it steered by forums at world congresses. NLP just follows the market. The transfer of NLP knowledge happens by means of books and workshops. Instead of spending their time on modelling, most NLP-ers search for workshop participants. Analogously, academics spend most of their energy writing research proposals and getting the work financed. The research itself is often left to students; in NLP, most of the modelling is done by master practitioner students.

A pragmatic attitude to modelling

Pragmatism as a background philosophy was always part of NLP. 'Truth is what works,' stated William James in the 1890s. 'NLP is what works' is the same statement. The test for a model is found in whether one can produce the same results as the expert.

Social learning (learning from observing others) is of great significance for the transfer of skills and knowledge. Educators have wrongly underestimated this mode of learning for ages. In the wake of Bandura and cognitive behaviour therapy, NLP has helped to put it at the centre of attention. Learning by identification and learning by going into second perceptual position are expressions of the same phenomenon. Being in the presence of an expert and observing his skills functions largely unconsciously, with the aid of pre-wired mirror neurons, in an almost automatic fashion. This implicit modelling takes an open-minded attitude and sharp perception. In all social, artistic and sports skills, this is the major mode of learning.

It is a twofold process:

1) Create a dissociated image of somebody doing a skill.

2) Step into that image and associate with it; imagine doing it yourself.

An implicit model is an unconscious cognitive construct that contains sequences of steps, criteria, beliefs, values, pictures, sentences and everything else the nervous system is able to capture. The owner of such a model may even be totally unaware of having obtained it.

The success of NLP does not result from its formal procedures; it comes from the ability of NLP-ers to simplify complex psychological phenomena to a level at which one can make use of them. Application is the weak point in academic psychology; application is exactly where the strength of NLP lies. The fact-finding methodology of psychology results in theoretical considerations too complex for practical use. By limiting itself to concepts that fit subjective experience and general psychodynamics, NLP has provided far better tools. By leaving out much unnecessary content and interpretation, the application can be made to fit many relevant psychological problems. NLP's mixture of abstraction, structure and connection to the operator's own perception far surpasses statistical statements.

We may expect that in the history of social science, NLP will not be defined by its unique method of modelling. Modelling will just be included in the set of new distinctions, views and values that NLP introduced to the field of psychology.

Just as the stimulus-response distinction made behaviour therapy possible, and the rational-irrational distinction helped RET come to life, NLP's rich vocabulary has created a subculture within applied social science. NLP provides us with a language that enables us to see, hear and feel different things and, as a consequence, makes us do other things. Whoever identifies with NLP, an NLP-er, speaks the language of the subculture he has become a member of.

NLP is a tradition of rituals, one of which is called modelling. Modelling is a formalization of our natural social learning potential. When social learning is based on NLP conceptual filters, the results are great.

Population modelling

By applying NLP concepts to areas of special interest, unknown worlds can be mapped out. My own example is human social behaviour. The concept of perceptual positions was already helpful in understanding certain forms of communication. However, the deep structure of human social life asked for more. Since 1992 I have taken on the quest of applying NLP concepts to this area. What I expected did indeed happen: an extension of social psychology appeared, with a very practical inclination.

I had not one expert to model, but hundreds of people who each had their own level of social skill. At the most basic level, the question was: 'How do people think of people?'

Not every NLP colleague understood the important implications of this project, but I was totally fascinated by what I started to discover.

The results of this project consist of many common patterns that I found in the social experience of many subjects. One could call this qualitative research. But since it was only possible on the basis of NLP concepts, I started to call my method population modelling. I used the attitude and perspective of NLP modelling; however, it was not one expert but an army of experts that I modelled. To make the findings comprehensible to others, I had to devote my attention to these commonalities. In a way this resulted in what all psychologists aim for: knowledge about how HUMANS in GENERAL think.

In fact, we must say that the work of James and Woodsmall on the personal timeline, based on Steve and Connirae Andreas's discoveries, which resulted in Time Line Therapy, used a similar methodology. With the help of the concept of submodalities, they explored general patterns in the experience of time.

With NLP modelling one can explore the unique abilities of individuals. When one repeats this process with several experts who master similar skills, insight will emerge into the common patterns these experts share. This will filter out idiosyncrasies and leave us with the essence of the skill.

An interesting by-product of this method is its automatic link to traditional social science. The question of how humans in general think is answered this way. Statistical underpinning is only a matter of effort. To me, the value of the data will not increase if I calculate in how many cases a certain pattern occurs; but for those who hunger for numbers, it can be done. In social panorama research one can count how many subjects have counter-identification personifications straight in front of them, a little higher than their own eye level, at a distance of between 10 and 50 mental metres. It can be done, but it will not help, except when one needs to convince an academic audience that only wakes up when one speaks in numbers and significances.

The population projects that I know of are:

  • Longevity (Dilts and Hollander)
  • The time line (Andreas & Andreas; James and Woodsmall; Walker)
  • The social panorama (Derks)
  • Educational metaphors (Lloyd Yero)

But there are more themes on which it would be worthwhile to start population modelling. For some of these themes, a start has already been made.

  • The environment and nature.
  • Value and money, economics.
  • Right and wrong; good and evil.
  • Markets.
  • Political conflicts and their resolutions.
  • Travelling and tourism.
  • Education.

What makes new NLP today?

Within the NLP community there is no formal board of approval that judges new models. If there were one, on what criteria could it base its decisions?

It is my observation that new models are chosen on the following criteria. I have placed them in the order in which I believe they are weighted in the real NLP world:

1) Marketing: A model adds market value to my training program. It is unique to my institute. It is appealing to workshop participants. (More success, health, power, sex and money with NLP.)

2) From an NLP authority: It was introduced by a person with a name in NLP. (Bandler, Grinder, Dilts, James, Robbins.)

3) Usefulness: A model works. It has proven to be useful in my own work.

4) Conceptual aesthetics: It is chosen because it is a beautiful model. It satisfies my need for everything fitting in harmony. (NLP and Buddhism, Ken Wilber in NLP.)

5) NLP congruent: It fits NLP's explicit and implicit values. It is ecological. It makes use of the client's potential. It takes the larger system into account. And it can be translated into existing NLP vocabulary.

6) NLP modelled: It is modelled from an expert in the correct manner.
