Neuro-linguistic Programming (NLP) and Eye-accessing Cues (EAC) by Jane Mauret

Can Results Obtained in Historical Testing of the NLP-EAC Model be Duplicated in New Zealand 2012?

Research Report Presented in Partial Fulfilment of the Requirements for the Degree of Master of Management (Communication Management) Massey University, Wellington, New Zealand – November 2012 (received A grade)

ABSTRACT

Bandler and Grinder developed Neuro-linguistic Programming (NLP) in 1979. One of NLP’s basic tenets is that practitioners can identify clients’ preferred ways of processing information via the eye and language cues that individuals manifest. Some historical studies have examined the language aspect, but most have investigated the eye-cue component, known as the NLP-EAC model. The NLP-EAC model was also said not to apply to left-handers, due to differing brain-hemisphere organisation. The wide variety of testing into eye-movements (EMs) over the last thirty years has in the main failed to support the theory. The current project incorporated the test materials from a well-known 1991 study. However, a thorough analysis of the results did not produce a positive outcome in favour of the model.

(Please note that tables and Excel spreadsheets, etc., are not copied here due to formatting issues, but if any reader wishes to view them, they can have them emailed by placing a comment/contact at the end of this post.)

The main finding was that EMs were not dictated by mental processes responding to stimuli related to sensory modality. Secondly, it was found that three particular EMs (stares, and left- and right-lateral looking) were produced the majority of the time by both the right- and left-handed in response to all types of questions. This investigation highlighted that each handedness reacted in the same way to a range of verbal stimuli, but subjects did not exhibit the predicted eye-accessing cues (PEACs). The present study was in agreement with historical works that failed to support the NLP-EAC model, i.e. that EMs do not appear to be linked to cognitive function. It is suggested that future research look more closely at left-handed subjects, children, and across cultures.

1. INTRODUCTION

The Neuro-linguistic Programming (NLP) model was promoted by the mathematician Richard Bandler and the linguist John Grinder in their book Frogs into Princes (1979). The neuro in the concept refers to brain function; the linguistic to language; and the term programming references learned behaviour. They maintained that a person’s preferred way of communicating could be discerned from aspects of their language – predicates. Examples of predicates are: “I see; it appears to me, I have a clear picture [and] I hear; it sounds to me, I tell myself [and] I feel; he’s out of touch; and it’s heavy going” (Heap, 1988, p.269).

They maintained that semantic preferences, taken together with eye-movements (EMs), would indicate if someone was visually (V), auditory (A) or kinaesthetically (K) orientated. Together, these overt tendencies were called the Preferred Representational System (PRS); and someone could have a V-, A-, or K-PRS. More precisely, the two Americans asserted that therapists and communicators, having identified a client’s PRS, could then exhibit a corresponding PRS which would invigorate two-way communication. This report deals with only the EM aspect of the NLP hypothesis.

Decoding EMs is not straightforward, since they are both fleeting and constant. Nevertheless, Bandler and Grinder conceived the eye-map below as a guide to what EMs meant in relation to the three sensory modalities. Thus, if looking up denotes visual cognitive function, then a client displaying upward eye-movements at a particular time was signalling (unknowingly) that they were processing the world via a visual modality.

Table 1: NLP Eye-Map Key (Dilts, 1998, n.p.n.)

The NLP advocates endowed their model with biological origins; further, Bandler and Grinder stated that the responses depicted in the eye-map above were innate and therefore universal – even cross-culturally. The study of EMs generally had already been of interest for some time, and Bandler and Grinder extrapolated the phenomenon of involuntary EMs into the NLP hypothesis. Eye-movement had been referred to by authorities as lateral eye movement (LEM) (Kinsbourne, 1972); Bandler and Grinder (1979) relabelled LEMs as eye-accessing cues (EACs) (Bridgeforth, 2009).

However, the NLP-EAC model had codicils attached: firstly, the formula was applicable to approximately 75% of right-handers (RH); and secondly, not to the left-handed (LH) at all (Bandler & Grinder, 1979). Bandler and Grinder said this was because “a person with a different cerebral organization than most of the population is automatically going to have outputs which are novel and different from the rest of the population” (p.35). It was also unclear what to make of looks aimed straight ahead, which were classed as “quick access of almost any sensory information; usually visual” (Dilts, 1998, n.p.n.).

If EMs are actually biological responses to particular stimuli, then it should be possible to observe them by presenting right-handed subjects (RHSs) with specific tasks designed to elicit each of the three modalities. To be clear, semantically speaking, the eyes in terms of the NLP-EAC model are not performing a visual task: Bandler and Grinder were linking EACs with cognitive function, not literal vision. “The EAC model suggests that non-visual eye-movements … indicate which representational system a person is currently using” (Diamantopolous, 2010, p.16).

Since the emergence of the NLP-EAC model, a number of researchers have sought to validate the EAC part of the hypothesis. The present account surveys the trends found in historical testing and, against that backdrop, examines the outcome of a recent study based on a 1991 NLP-EAC enquiry by the Australians Mark Baddeley and John Predebon.

2. LITERATURE REVIEW

2.1 Eye-Movement Studies

That eyes have a function other than vision was remarked upon by the American psychologist and philosopher William James in his book Principles of Psychology (1890): “I cannot think in visual terms … without feeling a fluctuating play of pressures, convergences, divergences, and accommodations in my eyeballs … when I try to remember or reflect” (pp. 193-195; cited in Dilts, 1998, n.p.n.).

In fact, eye-motion has an important biological basis. Eye-movements are known technically as saccades, and in 1965 the Soviet scientist Alfred Yarbus demonstrated that “saccades are essential to vision [for] when the eye is artificially fixed in place … the quick result is blindness” (cited in Archibald, 2008, n.p.n.).

An overview of NLP-EAC literature encompasses a look at historical LEM findings, since that field also searched for links between EMs and mental processes (Baddeley & Predebon, 1991).

Studies pertaining to EMs’ connection to cognitive function began relatively recently, even though “people have always been willing to acknowledge the existence of eye movements occurring for purposes other than visual perception” (Micic, Ehrlichman & Chen, 2010, p.211). From the early 1970s, psychologists were proposing that there was a connection between eyes and brain-hemispheres (Kinsbourne, 1972; Galin & Ornstein, 1974; cited in Dilts, 1998, n.p.n.). Kinsbourne (1972) used questions in experiments and noted that RHSs looked to the right in verbal tasks and to the left in numerical and spatial tasks. He also believed that left-handed subjects (LHSs) did not follow a pattern. His conclusion was that these differing eye-directions were related to “the lateralization of the underlying cerebral activity” (p.539). This idea foreshadows the statement a few years later by Bandler and Grinder regarding the “different cerebral organization” attributed to the left-handed.

Ehrlichman, Weiner and Baker (1974) also tested for links between hemisphere division and EMs. This study included trials with the Experimenter and Subject facing each other, and with the Subject alone (but filmed). This research took note of the Subject’s first EM following questions. The results did not replicate Kinsbourne’s findings and the authors surmised that this might be because, regardless of the nature of the stimuli, all questions and responses were verbal. This latter point was considered by Galin and Ornstein (1974) who also wondered if stares, what the present writer will term eyes directed straight ahead (EDSA), resulted from competing tendencies to look right and left. For example, the task may have been spatial but the response was delivered verbally.

Meskin and Singer (1974) considered the notion that people shift their gaze from a speaker’s face to help them formulate answers. However, when the researchers placed a lively painting by Brueghel within the Subjects’ eye-line, the Subjects “acted as if they could look right through it” (p.70). This echoes the occasions when eyes are working in the non-visual sense. It appeared that eyes will disregard even novel and startling imagery in their eye-line if alternative occupation (e.g., thinking) is required.

Gur (1975) also found that Subjects behaved differently depending on whether the Experimenter was facing them: when the Experimenter was facing the Subject, the latter would look to the left or right when giving answers. “The results indicate that the cerebral hemispheres, though specialized for problem type, are also preferentially activated within the same individuals” (p.751).

Gur recalled Teitelbaum (1954) and Day (1964) who had observed that individuals displayed a preference for looking left or right. This led Duke (1968) to suggest the terms ‘left- and right-movers’ (cited in Gur, 1975, p.752). Gur wondered if Subjects altered their eye-patterns when face-to-face with an Experimenter out of anxiety and noted that “a significant majority (70%) of a subject’s eye movements were to one direction” (p.754) regardless of question or task type. The experiment and result were duplicated the same year by Gur, Gur and Harris (1975) with thirty-two RH and seventeen LHSs.

Ehrlichman and Weinberger (1978) discussed weaknesses inherent in LEM testing. These were mainly the unknowns of when to score an EM, how many EMs were relevant, and even what constituted an EM. They reported that little data had emerged to confirm the theory that hemispheres dictated EMs. They also made the prosaic, yet illuminating, point that “there are, after all, only a limited number of directions in which eyes can move” (p.1096). Eye-directions are limited compared with the number of cognitive functions performed by the brain.

A word about scoring EMs. Some of the researchers above took note of the first EM and this was still favoured in 1985, when Jamieson and Sellick recorded the first EM following the end of verbal and spatial questions but recorded ‘no EM’ when stares occurred. “Lateral eye movements (left or right) were recorded for 75.8% of the questions; 2.9% were up, 6.8% were down, 8.8% were unscoreable, and 5.6% were stares” (p.157). Cheney, Miller and Rees (1982) related that studies often only noted the first lateral EM, which they believed was insufficient.

Just as the seven universal facial expressions are displayed by those blind from birth (Ekman, 2003), the present writer wondered if congenitally blind persons could also display EM patterns. A blind nystagmoid female (her eyes moved involuntarily and she was unable to override this state at will) was tested with forty each of verbal and spatial tasks. The results were assessed by no fewer than twenty-four raters, who agreed highly that there was “a close interdependence of eye movement directionality and cognitive mode in the subject [and] on the basis of evidence from the repeat test, the association between LEMs and cognition appeared to be a stable or habitual feature of S’s behavioural pattern” (Griffiths & Woodman, 1985, p.260).

So far, then, we know that people (including one blind person) look left and right when presented with stimuli, at least in a laboratory context. Whether these EMs are linked to mental processing was not clear, but at least one pair of researchers was prepared to take the view that “lateral eye movement literature revealed that cognitive activities were not reliably related to direction of eye movements” (Ehrlichman & Weinberger, 1978, cited in Micic, Ehrlichman & Chen, 2010, p.211). Around this time, due to those and other unreliable findings (MacDonald & Hiscock, 1992; Raine, 1991; cited in Ehrlichman, Micic, Sousa & Zhu, 2007), “research on lateral eye movements ceased” (p.7).

The NLP-EAC model was launched thirty years ago, at a time when there was much contradictory evidence from LEM studies that had tried to determine whether cognitive function regulated EMs. As already noted, LEM studies are very closely aligned with the precept behind NLP-EAC, but this did not deter Bandler and Grinder from speaking as though there was a connection – and an easily observable one. The lack of support for the NLP-EAC model, coupled with the hiatus in the field of EM study, has been the catalyst for this contemporary inquiry, which uses the same stimuli as an experiment from 1991.

2.2 Non-Visual Gaze Behaviour

More recently, researchers again looked into EMs, this time examining why eyes move during sleep, in the dark, and when no-one else is present (Ehrlichman, Micic, Sousa & Zhu, 2007), in the face of Yarbus’ (1965) long-established observation that the eyes cannot not move. And others were again investigating why people move their eyes when they think (Ehrlichman & Micic, 2012).

It is certainly agreed that eyes ‘see’ in two different ways, i.e. “visual eye-movements may be defined as eye-movements whose purpose is to change the visual stimulus falling on the fovea and non-visual as those eye-movements that are a result of neuro-physiological events and are not associated with vision” (Diamantopolous, 2010, p.9).

That people look away during conversation and when answering questions could simply be related to social convention, in that it is not considered polite to stare (‘weirdo’) into another’s eyes (intimate encounters notwithstanding). Similarly, we find it disconcerting if people avoid (‘dodgy’) meeting our gaze. Persons also take turns at ‘gaze-shifting,’ which seems to depend on whether one is the listener or the speaker (Bavelas, Coates, & Johnson, 2002; Kendon, 1967; cited in Ehrlichman, Micic, Sousa and Zhu, 2007, p.8). Some researchers proposed that faces are distracting when preparing answers which is why Subjects look away (Argyle & Cook, 1976, cited in Ehrlichman, Micic, Sousa & Zhu, 2007, p.8). So faces may be distracting but detailed and flamboyant artwork is not.

Duke (1968) had noticed that EMs were less likely to occur if the answer to a question could be yes or no; it is more complex questions that elicit EMs. In line with this, Ehrlichman and Micic (2012) recorded that humans “move their eyes about twice as often when searching through long-term memory as they do when engaged in [simpler] tasks” (p.96).

One further dictum from Bandler and Grinder (1979) was that their NLP-EAC model applied cross-culturally. McCarthy, Lee, Itakura, and Muir (2008) compared Canadians with Japanese in experiments where Subjects had to answer twenty questions. The experiments occurred with the Experimenter in front of, and behind, the Subject.

The Japanese Subjects looked down in both set-ups, whilst the Canadians looked up only when facing the Experimenter and down when not being observed. From this, it was concluded that “eye movement patterns [are] consistent with the view that thinking-related gaze behaviours are modulated by cultural display rules and social contexts” (p.716). (This also links back to Gur (1975), who found that Subjects’ eye behaviour altered depending on whether they were ‘alone’ or not.)

The early and more recent EM literature has perhaps raised more questions than it has answered. Initially it seemed that scientists came up with a theory and tried to find evidence to support it, demonstrated by the contradictory LEM results. It remains that eyes move – because they have to – and EMs have been found to vary, both within individuals and between cultures.

It is useful to survey the EM literature because these studies were looking for links between EMs and hemisphere function. They often also involved questions put to Subjects. The fact that LEM study suddenly ceased by the end of the 1970s suggests that the assumed link between the two was not justified. However, these studies from the 1970s could have been testing the NLP-EAC theory – if it had been developed then – and most of those studies were similar to the emerging NLP-EAC investigations which often used questions to test the Bandler and Grinder model.

2.3 Neuro-linguistic Programming Eye-accessing Cues Research

Empirical studies testing the NLP-EAC model commenced from 1980. In the main they used questions but some used imagery or real objects. To be clear, they were attempting to show that certain cues, generally VAK-loaded questions, actually did elicit the corresponding EAC as proposed by Bandler and Grinder (1979).

The following table is a summary of thirty years (1980-2012) of testing, which has been compiled from Sharpley (1984); Einspruch and Forman (1985); Baddeley and Predebon (1991); Bridgeforth (2009); Diamantopolous (2010); and Witkowski (2012).  (not available in this version)

Fifteen studies between 1980 and 2012 failed to support the NLP-EAC hypothesis. An example of a non-supportive outcome was that from Thomason, Arbuckle and Cady (1980), who did notice EMs were “not entirely random,” but in no sense was this evidence in favour of the model. Elich, Thompson and Miller (1985) found that, whilst control questions were not meant to evoke imagery, they did elicit EMs “very similar to the percentages for the questions from the three sensory modalities” (p.624). If any stimuli invoke predicted eye-accessing cues (PEACs), then that voids the premise of identifying sensory modalities in individuals for therapy, etc.

Of the studies that found some support, no further details are to hand from Hernandez (1981), except that VAK-statements were used rather than questions (Sharpley, 1984). Both Wertheim, Habib and Cumming (1986) and Dooley and Farmer (1988) repeated the unsupportive 1985 study from Farmer, Rooney and Cunningham, which employed objects such as oranges. The former concluded that “overall, subjects most often exhibited eye-movements suggested in the model to be ‘auditory’ in nature” (p.528). (NB Auditory displays are left- and right-lateral EACs.) The latter group used aphasic Subjects who did display some EACs “designated by neuro-linguistic programming” (p.234); the EACs in question were downward and straight ahead, which raised the (unanswered) question as to why being aphasic produced a partially supportive outcome.

Could it be that once language skills are diminished, the eyes take over as the dominant transmitter, possibly echoing an ancient evolutionary response? On the other hand, EACs that look straight ahead have not been properly corralled as to their meaning (see Table 1 above). Also, looking down could simply be submissive body-language on the part of people made less confident due to, what is likely to be, a socially inhibiting illness.

Buckner, Meara, Reese and Reese (1987) relied on Subject self-report and found that, regarding V and A modalities “interrater agreement … supports the NLP claim that specific eye movement patterns exist and that trained observers can reliably identify them” (1987, p.283).

Diamantopolous (2010) thought that this provided some support, while Bridgeforth (2009) said these 1987 results, along with Nate (2000) “demonstrated that eye-accessing cues are a consistent, predictable, and reliable indicator of cognitive information retrieval and processing” (pp.75-76) (my italics).

The work from Burke, Meleger, Schneider, Snyder, Dorvlo and Al-Adawi (2003) identified that EACs were “idiosyncratic and person-specific” (p.1330). (NB Burke et al. also sought to elicit a gustatory response which explains the abbreviation ‘VKG.’)

Schleh (1987), unsupportive, and Nate (2000), supportive, both used questions and were the only two researchers to mention that children were involved. Nate (2000) is noteworthy since, apparently, all fifty children always looked up, sideways, straight ahead and down in conjunction with retrieving VAK information. (Unfortunately, no further details were available within the timeframe of the present report.) Why fifty children provided so much support is, for the time being, a matter of conjecture. Perhaps it is a case of some evolutionary response that we are born with but which becomes lost – similar to the dive-reflex that mammals had when they were still water-living, which is exhibited by babies today but lost at about six months.

From the above results, it seems that questions as study stimuli may or may not result in support. Most of the studies used RHSs, for whom Bandler and Grinder say – 75% of the time – the NLP-EAC model can be applied. Overall, it would seem that the NLP-EAC studies have gone the way of the earlier LEM research, i.e. the contradictory evidence far outweighs the supporting literature.

2.4 Interpretation of the NLP-EAC Model

Over a similar timeframe (1981-2000) other researchers were trying to reveal PRS by matching predicates with EACs. This approach was inappropriate according to Buckner, Meara, Reese and Reese (1987) for “the basic tenets of the eye movement model … state that specific eye movements are indicative of when a person is thinking visually, aurally, and/or kinaesthetically” (p.283) (my italics). And in fairness, Bandler and Grinder did not say that PRS could be divined from EACs or predicates alone. Some studies that sought to do so and are thus deemed irrelevant came from Falzett (1981); Gumm, Walker and Day (1982); Ellickson (1983); Dorn, Atwater, Jereb and Russell (1983); Fromme and Daniell (1984); Jupp (1989b); Sandhu (1991); Lichtenberg and Moffitt (1994) and Turan and Stemberger (2000) – cited in Diamantopolous (2010) and Witkowski (2012).

Other writers championed the NLP-EAC theory by widening the range of acceptable responses: “right-handed people will occasionally violate the usual pattern, and many people have partial or full reversals of this pattern. The salient feature of the concept of patterning is that whatever pattern a person may have, he or she will follow that pattern consistently” (Einspruch & Forman, 1985, p.591).

They go on to add that EACs are “an important, but small, part of NLP” (p.594). This seems to retreat somewhat from the importance of EACs put forward by the originators who, after all, drew up the eye-map because they were so confident about the meaning behind six out of seven possible EACs. Beck and Beck (1984), writing about Thomason, Arbuckle and Cady (1980), suggested that the latter did find support for NLP when their Subjects displayed patterns, for Beck and Beck allowed that EACs occur “regardless of specific stimuli” (p.175).

These generous interpretations of EACs appear to weaken the premise behind the NLP-EAC model rather than add support, since they seem to be saying that observed eye-directions in a dyad can tell us nothing – except that someone’s eyes moved.

Another factor contributing to the lack of confirmation over thirty-odd years has been the differing procedures. These include the nature of the testing material (at times questions, at other times images or statements); whether answers required verbalisation or not; the method of capturing Subjects’ EACs; which EACs, and how many, to record; and the skill of the raters (some naïve, some with a little training, others NLP practitioners). Ellickson (1983) thought the genders of the Experimenter and Subject might affect results, but the present writer suggests that is a confound too far (it perhaps might apply if the testing were done in the 19th century). Cheney, Miller and Rees (1982) wondered if getting people to stare into a camera restricted the EACs that Subjects could make. It might initially, but the present writer feels it is unlikely that anyone could sustain looking straight ahead for long, as the natural urge to move the eyes would take precedence.

Diamantopolous (2010) reckoned that there is no way of telling whether question content is processed by the receiver as the transmitter intended. He uses the example of a question posed by Baddeley and Predebon in 1991: What colour are the walls in your bathroom? Diamantopolous says the response could be engendered visually (seeing the walls) or kinaesthetically (lying in warm bathwater).

Bridgeforth (2009) proposed that the overall poor show of supporting results “may be attributable to study design rather than eye movement theory” (p.76). One particular historical EAC study had taken note of previous design-issues and set out to test the NLP-EAC model as accurately as possible, i.e. Baddeley and Predebon (1991).

2.5 Baddeley & Predebon (1991)

In 1989, Mark Baddeley, a psychologist, wrote that the NLP-EAC model had been widely criticised because “it attempts to synthesize theoretical and practical borrowings from many sources and some of these sources are theoretically antagonistic” (cited in Dowlen, 1996, p.31); in a word, the model was pseudo-scientific. Up to this time, as has been seen, there had been scant confirmation that EACs would correspond in a predictable way to VAK-loaded stimuli.

In 1991 Mark Baddeley teamed up with John Predebon, a university lecturer, in an attempt to find links between EACs and appropriate stimuli. Einspruch and Forman (1985) had observed that due to “methodological errors” in NLP-EAC studies it was “not possible … to determine the validity of … NLP concepts” (cited in Baddeley & Predebon, 1991, p.17). This prompted Baddeley and Predebon to design and execute “more rigorous investigations of the NLP eye-movement hypothesis than had been conducted previously” (p.18).

The Australian duo noted that questions in past studies sometimes drew EMs only 50% of the time. ‘No EMs’ is taken to mean occasions when Subjects continue to look at the interrogator. This ‘look’ is often referred to as a ‘stare’ in the literature (or ‘eyes directed straight ahead’ (EDSA) by the present writer); it is also the ‘seventh look’ that was not satisfactorily categorised in the eye-map from 1979. This led the Australians to hone their questions after the Pilot Study, when they realised that questions that were too easy generated stares. For example, Imagine the sound of a dog barking was replaced with Imagine a sound you have heard late at night (1991, p.18). This refers back to Duke (1968), mentioned above, who realised that it is more complex questions that elicit EMs.

Baddeley and Predebon logged that Thomason, Arbuckle and Cady (1980) and Poffel and Cross (1985) reported the first EM, but the 1991 researchers did not consider this sufficient (reinforced possibly by the fact that neither of those studies supported the NLP-EAC model). They decided to record several EACs displayed during the verbalisation period over two studies. This would certainly reduce the incidence of ‘no EMs.’ Study I allowed Subjects to have a reflective period before answering, but in Study II, Subjects had to answer immediately; Study II was thought to more closely resemble a “clinical face-to-face setting” and a more appropriate test of the NLP-EAC model. Baddeley and Predebon had the Experimenter and Subject face each other, noting that only Cheney, Miller and Rees (1982), Elich, Thompson and Miller (1985) and Buckner, Meara, Reese and Reese (1987) also employed this set-up (with only Buckner et al. finding support out of the three teams).

Baddeley and Predebon got Subjects to first complete Humphrey’s (1951) Edinburgh Handedness Inventory (EHI) (Appendix I) to eliminate left-handed Subjects. The EHI comprises twenty-two situations where the Subject has to self-report the use of handed-dominance. The tasks range from using a fork to looking through a telescope. Oldfield (1971) had examined the EHI and concluded “it is simple and provides one quantitative measure of handedness … for screening purposes” (p.110).
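Oldfield’s “one quantitative measure of handedness” is conventionally the laterality quotient, which scales the balance of right- and left-hand responses to a value between −100 (exclusively left-handed) and +100 (exclusively right-handed). A minimal sketch follows, assuming each of the twenty-two EHI items is self-reported as a simple right- or left-hand preference (the real inventory also allows mixed responses, which this sketch ignores):

```python
def laterality_quotient(right_items, left_items):
    """Oldfield-style laterality quotient:
    +100 = exclusively right-handed, -100 = exclusively left-handed."""
    total = right_items + left_items
    if total == 0:
        raise ValueError("no handedness responses recorded")
    return 100.0 * (right_items - left_items) / total

# e.g. a respondent preferring the right hand on 20 of the 22 items
print(round(laterality_quotient(20, 2), 1))  # -> 81.8
```

For screening purposes a positive quotient would mark a Subject as right-handed; the precise cut-off used in the 1991 study is not stated in the sources above.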

Virtually every other historical study had excluded LHSs; the idea being that, following Bandler and Grinder, LHSs were unlikely to display PEACs. So in this respect, the experiment was weighted to favour the hypothesis. Thirty-one and then thirty Subjects participated in the two question-based tests (Appendix VII). There were 28 questions comprising seven groups of four; i.e. six sets designed to elicit the six EACs depicted on the eye-map (and indicating either a V, A, or K sensory modality) and one set of control questions.

The overall results from 1991 are set out below.

Table 3: Comparison of Observed and Expected Outcomes for Individual Question Types with Data Collapsed Across Eye-Movement Instances (Baddeley & Predebon, 1991, p.15)

Notice for now that the highest incidences were for Ar (Left-lateral) and Ac (R-lateral) with EACs totals of 62/45 and 44/23 respectively.
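The observed-versus-expected comparison in Table 3 lends itself to a Pearson goodness-of-fit statistic, summing (O − E)²/E over the cells. The sketch below applies it only to the two EAC totals quoted above, purely as an illustration of the calculation; it is not a reconstruction of Baddeley and Predebon’s full four-level analysis.

```python
def pearson_chi_square(observed, expected):
    """Pearson goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# The two EAC totals quoted above: observed 62 and 44
# against expected 45 and 23.
stat = pearson_chi_square([62, 44], [45, 23])
print(round(stat, 2))  # -> 25.6
```

Whether such a statistic is significant depends on the degrees of freedom and the complete contingency table, which is why the published analysis in Table 4 proceeds over four levels rather than cell pairs.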

Table 4: Statistical Outcomes of Four Levels of Analysis for Two Studies (Baddeley & Predebon, 1991, p.16)

From these findings, the researchers determined that “the results failed to support the Neuro-Linguistic Programming hypothesis” (Baddeley & Predebon, 1991, p.1). Despite this, the same study tool was used as the framework for the present writer’s 2012 research.

3. METHODOLOGY

3.1 Sourcing the Tool

Noting that questions had been used in many historical experiments (see Table 2 above), the present writer decided to follow suit in effecting the current study. However, past researchers had not included the test-material in published works, and efforts by the present writer to retrieve same were unsuccessful for several reasons. These ranged from the difficulty in contacting authors of works from thirty-plus years ago (e.g., Thomason et al., 1980); the works being in foreign languages (e.g., Bliemeister, 1988); works not forthcoming despite library inter-loan requests (e.g., Nate, 2000); authors contacted had not retained the questions (e.g., Poffel & Cross, 1985); or the document came to hand too late in relation to the testing schedule (e.g., Bridgeforth, 2009).

In the event, the 28 questions from Baddeley and Predebon (1991) were unearthed in good time and were available from the Pilot Study phase. Although these authors had failed to find support for the NLP-EAC model, they had attempted to overcome past methodological issues and made it a goal to organise “rigorous investigations” via a more realistic “clinical face-to-face setting.”

This research is well known and has been referred to in many works, including major projects around the NLP-EAC model by Bridgeforth (2009), Diamantopoulos (2010) and Witkowski (2012).

3.2 Pilot Study 2012

The 2012 Pilot Study was carried out on 27 August 2012 with one RHS for whom English is the first language (as per the 1991 study), using all 28 questions from 1991 (which included four control questions (CQs)) plus four familiarisation questions (FQs), bringing the total to 32. The Subject filled out the EHI beforehand to confirm handedness dominance.

No camera equipment was used in the Pilot Study (as there was in 1991). The Experimenter and Subject faced each other in a plain room, with only the first EAC at the end of each question being noted, as per the LEM studies from Ehrlichman, Weiner and Baker (1974) and Jamieson and Sellick (1985), and the NLP-EAC testing by Thomason, Arbuckle and Cady (1980) and Poffel and Cross (1985). This was for the very good reason that to note several EACs would be too complex for the lone Experimenter (conducting the experiment; observing several EACs; listening/talking; and taking notes).

However, the simple approach replicated a normal dyadic setting, as noted by Diamantopoulos (2010): "direct-viewing was used in NLP studies because the EAC model is taught to be useful in real-time human interaction where the practitioner observes the eye-movements without technological aids" (p. 24).

Four familiarisation questions were asked of the Subject before the 28 NLP Study Questions (six sets of four NLP questions and one set of four control questions). If the Subject continued to look at the Experimenter immediately following a question, that was recorded as eyes directed straight ahead (EDSA, i.e. a stare). Eye-movements that matched the question were called Predicted Eye-accessing Cues (PEACs), and EMs in the absence of PEACs were termed Alternative Eye-accessing Cues (AEACs).

3.3 Field Study 2012

The main study was conducted over 10 days in late September 2012.

Due to the time it took to ask 32 questions in the Pilot Study, and to elements of repetition and general suitability (e.g., being asked to recite The Lord's Prayer), the present writer, in consultation with the project supervisor, redesigned the test so that only two questions from each group were asked: six sets of two VAK-questions, two CQs and four FQs (18 questions in total).

Some of the original question wording from 1991 was changed slightly. For example, "Imagine in detail that you can see the building three doors down from where you live" was changed to "Imagine in detail a house near where you live", since that gave the Subjects the opportunity to recall a house they may have noticed for some reason.

Three familiarisation questions from 1991 were also altered slightly. "Who is your favourite author?" / "What is your favourite film?" became "Who is a favourite author?" / "What is a favourite film?" after the Pilot Study Subject agonised over having to pick out an all-time favourite.

The 1991 familiarisation question What's the name of one of your tutors? was altered to What is the name of a current or former colleague? to cover both Subjects who were not students and those who had retired. The 2012 study questions are attached in full as Appendix II. (NB The present writer did not devise any new NLP-weighted questions.)

The present writer had decided beforehand to record responses to the familiarisation questions since the results of the Pilot Study had demonstrated that these elicited specific EACs comparable to those elicited by NLP-weighted questions.

All Subjects were given an Information Sheet to read, which detailed the study aims, content, etc (Appendix III), and then a Consent Form (Appendix IV) to sign once they agreed to participate. Prior to this, the present writer had applied for, and been given, formal permission from the Massey University Human Ethics Committee for the study to take place (Appendix III). The Committee adjudged the 2012 study as Low Risk for the Subjects.

Each Subject then filled out the EHI so the Experimenter could confirm individual handedness. The Experimenter had previously decided to include LHSs in the study for comparison purposes.

As per the Information Sheet, Subjects had the option of not answering questions or withdrawing from the testing at any time.

Most of the interviews took place in Subjects’ homes and all knew the Experimenter. The Experimenter made changes to the room in terms of shutting curtains, turning on lights or swapping places with Subjects, etc, if necessary to aid the observations. The Subjects were asked to face the Experimenter during the question and answer phases (NB At no time did this appear to inhibit EACs although two Subjects did guess that EACs were being observed).

The Field Study was conducted in Christchurch and Wellington and tested 27 Subjects (10 male and 17 female), ranging in age from 20 to 91 years and engaged in a wide variety of occupations. Twenty-one Subjects were RH and six were LH.

Instructions Given to Subjects: This is not an IQ or personality test. / I am going to give you a series of questions. / Please do not rehearse the answer. / If you cannot think of an answer, please say ‘pass’ or ‘I don’t know,’ or similar. / You can elect to not answer any question, or withdraw from the testing, at any stage. / Please note that there are no right or wrong answers!

4. RESULTS

4.1 Pilot Study 2012

The table below shows the shaded areas where predicted eye-accessing cues (PEACs) would appear if they occurred. It also shows the alternative eye-accessing cues (AEACs) manifested by the Pilot Study Subject.

Table 5: Results of 2012 Pilot Study (1 Subject) Eye-Accessing Cues Made by Pilot Study Subject (PSS)

There were 32 questions to answer:

24 NLP-weighted questions (6 sets of 4), 4 familiarisation (FQ) and 4 control (CQ) questions.

The PSS scored only one PEAC out of a possible 24 (a left-lateral, relating to Auditory Remembered, Ar) and made 17 EDSA/stares across the 32 questions.

The PSS made 11 left-lateral (LL) AEACs which correspond to Auditory Remembered and four left-down (LD) AEACs which correspond to Auditory Digital (inner dialogue).

There were four control and four familiarisation questions (8 in total) which were not designed to elicit any particular sensory modality (or therefore PEACs).

Whilst four of these elicited EDSA (by way of a default EAC), the other four elicited left-lateral EACs (associated with Ar – Auditory Remembered).

The results for control and familiarisation questions were thus deemed useful for comparison purposes.

Overall, the PSS made EDSA/stares and leftward-looking EACs, but made no right- or upwards eye-directions at all.

4.2 Field Study 2012

The total EACs possible from 27 Subjects responding to 18 questions is 486. The table below illustrates the breakdown in terms of question numbers for RHSs and LHSs.

Table 6: Eye-accessing Cues (EACs) Possible for 18 Questions to 21 RHSs & 6 LHSs (Shaded Area = Left-handed Subjects)

There were 486 possible EACs but since ten were miscellaneous (e.g., eye-shutting) the remaining 476 were assessed by the present writer.

The full results for each Subject are attached as Appendix IX and show which EAC each Subject employed to answer each of the 18 questions.

Table 7 below shows Subjects’ profiles and the totals of each EAC direction that each Subject displayed: left-looking (LEACs), right-looking (REACs), or eyes directed straight ahead (EDSA)/stares.

If a Subject, e.g., made more left-looking eye-movements (LEACs) than any other, they were described as having an L-tendency (‘left-mover’), and so on.

Table 7: Subject Profiles & Strongest EAC Tendency (Left, Right, EDSA) (Shaded Rows = Left-handed Subjects)

The following group of tables shows various configurations by Subject data profiles.

Table 8: Subjects by Ascending Age (Shaded Rows = Left-handed Subjects)

No noteworthy trends by age emerged; Subjects were just as likely to have a spread of EACs at any age.

Table 9: Subjects by Female-Male (Shaded Rows = Left-handed Subjects)

Most LHSs were male, but this appeared to be coincidental.

Table 10: Subjects by Descending ‘Left-Movers’ (Shaded Rows = Left-handed Subjects)

Subjects who looked left frequently tended not to look right, opting instead for EDSA/stares.

These Subjects (# 1-7) made 9 or more LEACs (out of a possible 18 EACs).

Table 11: Subjects by Descending ‘Right-Movers’ (Shaded Rows = Left-handed Subjects)

Subjects who looked right frequently tended not to look left, opting instead for EDSA/stares.

These Subjects (#1-6) made 9 or more REACs (out of a possible 18 EACs).

This configuration appears to highlight, after the right-movers, a bunching of EDSA/stares and then a bunching of left-movers.

Table 12: Subjects by Descending ‘EDSA/stare (non!)-Movers’ (Shaded Rows = Left-handed Subjects)

10 Subjects (# 1-10) made 9 or more EDSA/stares (out of a possible 18 EACs).

Table 13: Shows All EACs Ranked by Descending Occurrence for Right- and Left-handed Subjects (RH & LH)

Table 13 shows RHSs achieved 37 PEACs out of a possible 252:

21 Subjects x 12 NLP-weighted questions.

LHSs achieved 5 PEACs out of a possible 72:

6 Subjects x 12 NLP-weighted questions.

Thus, 42/324 (252+72 = 324) PEACs were achieved overall.
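The arithmetic behind these totals can be verified with a short calculation (a sketch only; the counts are those reported from Table 13, and the variable names are illustrative rather than part of the study's materials):

```python
# Sketch: check the PEAC totals reported from Table 13.
# Counts are taken from the text; names are illustrative only.
rh_peacs, rh_possible = 37, 21 * 12   # 21 RHSs x 12 NLP-weighted questions = 252
lh_peacs, lh_possible = 5, 6 * 12     # 6 LHSs x 12 NLP-weighted questions = 72

total_peacs = rh_peacs + lh_peacs          # 42
total_possible = rh_possible + lh_possible # 324

print(f"RH: {rh_peacs}/{rh_possible} = {100 * rh_peacs / rh_possible:.2f}%")
print(f"Overall: {total_peacs}/{total_possible} = {100 * total_peacs / total_possible:.2f}%")
```

The right-handed rate works out at 14.68%, and the overall rate at 12.96% of possible PEACs.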

(NB EDSA scores are n/a for the PEACs groups as EDSA/stare is not classed as a PEAC.)

The PEAC achieved most often was left-lateral (12 + 2), followed by right-lateral (8 + 2).

Table 13 also shows the AEACs displayed in place of PEACs and the EACs displayed in response to six FCQs.

Table 13 shows that all question types elicited EDSA/stares most often (RH 73-67 & LH 21-21).

Table 13 shows that next after EDSA/stares, both RHSs and LHSs displayed firstly, left-lateral and secondly, right-lateral EACs (72-66 and 21-18 respectively).

Overall, the most frequent EACs (after EDSA 182) were left-lateral (93) and right-lateral (84).

The least frequent AEACs overall were left-down (25) and right-down (22).

Some slight transpositions are seen in Table 13:

RHSs had more right-lateral EACs (21) than left-lateral EACs (15) and more right-down EACs than left-down and left-up EACs for the control and familiarisation questions (FCQs).

LHSs also had a similar transposition for FCQs as well as some variation in looking up (left and right).

The table below shows the percentages of PEACs achieved by all Subjects.

Table 14: Numbers & Percentages of Predicted EACs (PEACs) Achieved

Each Subject had the opportunity to achieve a maximum of 12 PEACs.

However, no Subject scored more than 4/12 (33.33%).

Most Subjects scored 3, 2, 1 or no PEACs at all.

The table below shows numbers/percentages achieved for all EACs.

Table 15: Numbers & Percentages of All EACs in Descending Order of Occurrence

The most recurring EACs in descending order are:

EDSA-LL-RL-RU-LU-LD-RD (182-93-84-36-34-25-22).

This trend applies when scores for RHSs and LHSs are taken separately (except for a slight transposition for LHSs’ AEACs between right-up/left-up).

The totals for EACs in descending ranking are seen more clearly in Table 16. After the initial three eye-directions, all Subjects looked up (right and left), then down (left and right); the latter the least exhibited direction.

Table 16: Descending Numbers/Percentages of EACs for All Subjects

Also note that the total numbers/percentages of EACs drop away significantly after Right-lateral (RL).

Appendix X shows the Alternative EACs (AEACs) displayed by each Subject to each question, i.e. when Predicted EACs (PEACs) were not displayed; a summary table is below.

Table 17 (in sections): Alternative EACs (AEACs) to NLP Questions; NB Comments on Trends Pertain to AEACs Displayed After EDSA (shaded areas show where PEACs should have occurred)

The AEACs displayed most often (after EDSA/stare) in lieu of PEACs were left-lateral and right-lateral, the EACs associated with Auditory Remembering and Auditory Construction respectively.

The table below shows a little more clearly (by colours) the AEAC trends that were displayed overall by all Subjects in response to NLP-weighted questions.

Table 18: 273 AEACs Out of 324 [252 + 72] Possible PEACs

Table 18 shows that Subjects were slightly less likely to display alternative EACs (AEACs) when the predicted EAC (PEAC) was left-lateral (39) or right-lateral (42).

Table 18 shows that Subjects were slightly more likely to display AEACs when the PEAC was left-up or right-down (50).

EDSA/stare was the leading EAC in 5 question types; however, EDSA/stare was the third choice for kinaesthetic questions, where Subjects favoured left-lateral (14), then right-lateral (9) before EDSA/stare (8).

The table below shows the tendencies for all Subjects to answer control and familiarisation questions (FCQs).

Table 19: EACs in Response to FCQs

Table 19 shows that after EDSA/stare (88), all Subjects displayed right-lateral (29), then left-lateral (20) EACs in response to FCQs.

The table below shows the numbers and ranking of all EACs over all question types and scenarios.

Table 20: All EACs Ranked by Cluster Totals

Table 20a: Table 20 Reconfigured to Show Cluster Trending of All EACs

Table 20a shows the common trends over a variety of cluster scenarios.

The FCQs also followed the trend, with a slight transposition: right-lateral came before left-lateral (in the other columns it is left-lateral followed by right-lateral).

This table also shows that after 3rd place, numbers drop away markedly from 4th place onwards:

55-28, 66-29, 84-36, 74-31, 45-22, 20-8, etc.

Table 21: How Bandings Might Have Turned Out

Table 21 is a demonstration of how Tables 20/20a might have looked based on the theory of how handedness/question-type affects eye-directions: if left-handers really did have a divergent patterning and if non-NLP questions elicited a small range of EACs.

5. DISCUSSION

5.1 Pilot Study 2012

The Pilot Study Subject (PSS) was right-handed. After opting for EDSA/stares (17), the PSS favoured left-lateral (11) and left-down (4); the latter two groups of EACs are associated with Auditory Remembering (Ar) and Auditory Digital (Ad, inner dialogue). The large count of EDSA/stares was an unexpected finding, and the PSS resorted to EDSA/stares over all question-types. The control and familiarisation questions (FCQs) might have been expected to elicit EDSA/stares, since they were not meant to be connected with the three sensory modalities (VAK). However, the Subject responded half the time to FCQs as if they were NLP-orientated questions. Either those questions were harder for the Subject to answer, or the Subject was 'torn between two hemispheres' (after Galin & Ornstein, 1974). It might simply be that the PSS had a preference for looking left when thinking – a 'left-mover' after Duke (1968).

5.2 Field Study 2012 (Rider)

Please note that all discussion of the following results is made in the knowledge that the study had 21 RHSs and only 6 LHSs; given the small number of LHSs, and of Subjects overall, no significant comparisons can be claimed. However, the trends displayed in the results suggest that testing of further LHSs, and of Subjects generally, could add weight to the current speculative findings.

5.3 EAC Trends by Subject Handedness

Tables 7-12 in the Results section illustrate that RHSs and LHSs were just as likely to look left or right in response to the same questions, or to produce EDSA/stares – the leading EAC displayed by all Subjects. Ten Subjects exhibited more stares than other EACs; seven Subjects were described as left-movers and six as right-movers. The other Subjects tended to spread their EACs across all three directions. Even if LHSs had not been included, the results from the RHSs alone indicate much variation within that group, which did not 'fall into line' in terms of displaying cues in tune with the eye-map, i.e. they did not look in the predicted direction when answering Visual-, Auditory- or Kinaesthetic-weighted (NLP) questions.

From a total of 18 questions, five RHSs looked left nine times or more (11, 12, 12, 14, 17). Similarly, five RHSs looked right nine times or more (9, 9, 12, 13, 13). Seven RHSs opted for EDSA/stares nine times or more (9, 10, 10, 10, 11, 11, 12). The RHSs were evenly spread over the three main EAC directions, as were the LHSs. The EACs for LHSs (in this small sample) were not as irregular as suggested by Bandler and Grinder (1979).

The results from the Field Study demonstrate that the Subject in the Pilot Study was not atypical in always staring or looking left: similar person-specific trends were exhibited by the Field Study Subjects. This adds weight to theorists who surmised that people develop preferences in eye-direction when thinking/answering, rather than having their eye-movements dictated by cerebral function.

Table 13 of the present study shows (albeit barely!) that when PEACs were achieved, the occurrences were the same for both the LHSs and RHSs, i.e. both 'got correct' first left- and then right-lateral PEACs. However, due to the small sample, those PEACs could well have been flukes all round. There were some slight transpositions in Table 13 (noted above), but the totals overall set the pattern that emerged in various configurations.

Table 19 of the present study also showed that both handedness groups responded to control and familiarisation questions (FCQs) in a similar fashion, in that both utilised first right- and then left-lateral EACs. So even the questions not meant to be related to VAK responses elicited similar patterns. This suggests that people look left or right when processing all kinds of information, i.e. not just when dealing with 'sensory' data as the NLP-EAC model would have it. None of this supports the model if people are working in an auditory mode nearly all the time and for any purpose.

5.4 EAC Trends Overall

What arises from the numbers for the seven EACs overall is the trend for all Subjects to answer all questions in a similar vein as illustrated by Table 13. All Subjects opted for EACs in this order: EDSA, LL, RL, RU, LU, LD, RD. This trend was observed almost 100% across all EAC instances (except for very slight transpositions noted earlier). If Subjects are answering all question-types in a similar fashion, it voids the supposition that specific question content elicits corresponding sensory modalities. This trend was demonstrated in several configurations of the data. What this indicates is that both RH and LH people employ EDSA/stares and look left- and right-lateral in greater numbers before employing upwards and down eye-movements.

Some questions elicited almost only stares, i.e. the control questions, most likely because they were very easy for the Subjects to answer: they did not have to delve too deeply. The familiarisation questions were a little harder, which was demonstrated by Subjects' reliance on right-lateral (Auditory Constructed) followed by left-lateral (Auditory Remembered) EACs (Table 17). Were Subjects really constructing when asked to recall their high school, or a book and film they like?

Table 17 demonstrated how Subjects answered questions (after EDSA/stares) by resorting to left-lateral and right-lateral, time and again. There was only one occasion when EDSA/stares were the second choice of EAC, when Subjects opted for left-lateral (Auditory Remembering) in response to the Kinaesthetic-weighted questions. The Experimenter recalls that only one Subject seemed to understand the questions and answer them ‘correctly.’ That so many Subjects were baffled by these questions was perhaps why overall they ditched the popular EDSA/stare and opted for looking left or right; perhaps looking left and right really helped them focus their thoughts. Table 17 also shows that in response to the Auditory Digital questions (inner dialogue), the Subjects still chose to look left- and right-lateral. The predicted EAC of looking down-left was virtually ignored.

Tables 20/20a show how several configurations of the experimental data illustrate the persistent patterning, i.e. for Subjects to 'answer' by supplying mostly EDSA/stares, then left-lateral and right-lateral EACs, followed by looking up (right and left) and then down (left and right). That all Subjects displayed all six/seven eye-directions, whilst also exhibiting the same order of preference (lateral, up, down), may simply indicate that this is what people do. This much was established by the results of the current study.

Also note that the trend in the current study for Subjects to move their eyes (after EDSA/stares) in left- and right-lateral directions was reflected in the 1991 work from Baddeley and Predebon (Table 3) for ‘Data Collapsed Across Eye-Movement Instances.’

5.5 Comparing PEAC & AEAC Occurrences

The NLP-EAC model allows that 75% of right-handers (RH), but no left-handers (LH), will adhere to the eye-map script, but neither of these predictions was observed even slightly. The RHSs achieved 37 PEACs out of a possible 252, or 14.68% (Table 14). The LHSs achieved PEACs 8.33% of the time; while that was not a great result, it was not much 'worse' than the RHSs.

Tables 13, 14 and 15 showed percentages of EACs overall. Alternative EACs (AEACs) such as EDSA/stares occurred 38.23% of the time. Left-lateral and right-lateral were the next most common AEACs, at 19.53% and 17.64% respectively. This trend was found for LHSs as well as RHSs, which suggests either that the eye-map breakdown applies to all handedness groups – or to none at all. At present, it seems very likely that the latter is the case. Both handedness groups displayed the same high percentage rates for the first three EACs, and both showed a dropping-off after right-lateral, with the final four EACs (up and down, left and right) much reduced.

Table 21 showed how the banding in Tables 20/20a might have looked if Subjects had responded to the three question-types in the divergent manner believed to prevail, as discussed in the Literature Review: that questions to RHSs would produce predictable EACs; that LHSs would be unlikely to follow any pattern; and that the FCQs would not produce EACs in line with NLP-weighted questions. In terms of confirming the NLP-EAC model, what has been observed here is that 27 people had the same tendencies when answering questions – regardless of the sensory prompt or handedness. The finding that looking left and right is very popular (after EDSA/stares) also harks back to the LEM research of Gur (1975), Teitelbaum (1954) and Day (1964), who all found that people seemed to have a personal preference for looking left or right.

5.6 This Study in Context of NLP-EAC Studies

Some writers above (e.g., Cheney, Miller & Rees, 1982; Dilts, 1998) did suggest that LHSs would exhibit a mirror-reversal, but this was not found (still bearing in mind there were just six LHSs). However, had handedness not been recorded, the present writer suggests it would not have been possible to distinguish the LHSs from the RHSs based on their EAC responses.

This study also found that questions not intended to elicit sensory EACs very often did, as noted by Elich, Thompson and Miller (1985) above. At least one EM study above (Gur, Gur & Harris, 1975) included a large number of LHSs, and both handedness groups were found to have left- and right-looking tendencies. As with the Subject in the Pilot Study, the Field Study participants could be classed as left- and right-movers (Duke, 1968), with a third group who might be termed EDSA non-movers, on account of their continuing to stare at the Experimenter when answering questions.

Tables 20/20a showed the trend for EACs after the top three (EDSA, LL, RL) to drop away drastically. Looking up and down, to the left and to the right, was favoured much less by all Subjects over all scenarios. Whatever the reason behind Subjects looking to the side so much, it appears that such EACs suffice for many purposes for most people. At this stage, the argument for eye-aversion in social contexts seems a strong contender as an explanation of why people move their eyes in dyads. Certainly no NLP-VAK tenets seemed to drive EACs in specific (predicted) directions.

The poor showing of 42 PEACs out of a possible 324 was well below chance, and nowhere near the result expected under the NLP-EAC model. Buckner et al. (1987) above said the NLP-EAC model revealed when people were in the V-, A- or K-zone. By that standard, all the Subjects were in the Auditory zone a large percentage of the time (37.17%), and everyone spent most of their time in the no-zone of EDSA/stares, or 38.23% of the time (Table 15). No information was elicited that would help therapists, etc, tailor to individual needs if everybody always 'acts' the same way. That virtually all the Subjects could not answer the Kinaesthetic-weighted questions seems odd when counselling and so on are about feelings.
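The chance comparison can be made concrete. Assuming, purely for illustration (this simplification is not part of the original analysis), that each answer were equally likely to produce any one of the six directional EACs, with EDSA excluded since it is not classed as a PEAC, the expected number of PEACs would be:

```python
# Sketch: a naive chance baseline for PEACs. Assumes each of the six
# directional EACs is equally likely on every question (an illustrative
# simplification; EDSA/stares are excluded as they are not classed as PEACs).
total_questions = 324      # 27 Subjects x 12 NLP-weighted questions
num_directions = 6         # the six directional EACs in the eye-map
expected = total_questions / num_directions   # 54.0
observed = 42

print(f"Expected by chance: {expected:.0f}, observed: {observed}")
print(f"Chance rate: {100 / num_directions:.2f}%, observed rate: {100 * observed / total_questions:.2f}%")
```

Under this simple baseline, roughly 54 PEACs (16.67%) would be expected by chance, against the 42 (12.96%) observed.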

It was surprising to find so many EDSA/stare responses, but then Baddeley and Predebon (1991) did remark that other studies had found ‘no EMs’ about half the time. Could so many studies, including this one, have been designed with faulty methodology?

Other researchers above (e.g., Wertheim, Habib & Cumming, 1986) had found most EMs to be “auditory in nature,” which was also the case here. However, this study did not find support for V and A modalities as did Buckner, Meara, Reese and Reese (1987). In this study, looking up (V-modality) was secondary to looking-laterally (A-modality). This study is in agreement with Burke et al. (2003) who found EACs to be “idiosyncratic and person-specific.”

5.7 Influence of Testing Conditions

EAC studies are generally carried out under laboratory conditions to avoid Subjects being distracted. The interviews in this study were carried out in Subjects’ homes. Naturally décor would be in evidence, but the Subjects live with these arrangements and so it is unlikely that anyone was distracted by the ‘non-laboratory’ surroundings. As noted in the Methodology section, the Experimenter made changes in terms of lighting, etc, to improve visibility when scanning Subjects’ EACs.

(It is thought that because Subjects knew the Experimenter, anxiety would have been reduced also.)

However, recall the novel painting that was deliberately placed in one test but which may as well have been non-existent to Subjects. Gur (1975) was among those who established that Subjects feel the need to look laterally once faced with a human Experimenter. This alludes once more to the notion that faces – or more precisely eyes – are distracting. So social etiquette in face-to-face experiments must play a part. As for the suggestion that stares are a compromise (Galin and Ornstein, 1974), stares (or EDSA) are seen just as often in daily life as lateral-looking.

Regarding the recording of just the first EAC by the Experimenter: many studies in the past had done the same, or counted several (as did Baddeley and Predebon, 1991), but no method of testing proved better or worse than another when it came to (not) supporting the NLP-EAC model.

Diamantopoulos (2010) above made the point that it is difficult to know whether the stimulus is being received as intended. Researchers have tried many different testing methods over thirty years (an important one being the capturing of EACs at different stages of the experiment) but have not succeeded in proving the NLP-EAC model (Table 2). Among these are Thomason, Arbuckle and Cady (1980), Cheney, Miller and Rees (1982), Elich, Thompson and Miller (1985), Salas, de Groot and Spanos (1989), and Baddeley and Predebon (1991), who all used questions, with the last looking to lessons learned in historical studies in the attempt to prove the model.

Two of the studies that found support used questions, while the rest used other stimuli. And of course there are all the historical eye-movement studies that began about a decade before NLP emerged. Some note should be taken of this field, especially as, overall, those studies could find no linkage between brain hemispheres and eye-directions either.

6. CONCLUSIONS

The major finding here is that three default EACs exist, namely EDSA/stares, left-lateral and right-lateral. These were displayed by all Subjects regardless of handedness or question-type. This study has thus supported other studies that, over a thirty-year period, tried but failed to find support for the NLP-EAC model. Other research also identified that both LH and RH looked laterally in preference to looking up and down, right and left (Gur, 1975; Gur, Gur & Harris, 1975; Jamieson & Sellick, 1985). Other writers reported what they called 'no EMs', which was the case here, evidenced by the large proportion of EDSA/stares.

Sharpley (1987) surveyed forty-four pieces of work on NLP and found six did provide support, eleven found some support, but twenty-seven failed to back up NLP (cited in Witkowski, 2012, p. 31). Soon afterwards, another reviewer of sixty-three studies recommended that "it may well be appropriate now to conclude that there is not, and never has been, any substance to the conjecture that people represent their world internally in a preferred mode which may be inferred from their choice of … eye movements" (Heap, 1988, p. 275).

Stephen Poffel (2012), co-author of a 1985 study into the NLP-EAC model, wrote in personal correspondence to the present writer: "I have reviewed several international pertinent studies and results during the past 10 years. Reviewers (peers and myself) all came to the same conclusion, that being: there seems to be no reason to pursue this topic any further" (Poffel, 2012, n.p.n.).

Another formal review was carried out by Witkowski (2012) who after scrutinising thirty-three studies opined, “18.2% show results supporting the tenets of NLP; 54.5% results non-supportive of the NLP tenets and 27.3% brings uncertain results [contradicting] the claim of an empirical basis of NLP” (p.58).

Some NLP proponents are at pains to clarify that the NLP-EAC model says that individuals have their own individual patterns, which can be said to have occurred here. However, it is hard to see how any results found here would inform counsellors, etc. The object of this study, as with so many similar studies dating back to 1980, was to prove that certain stimuli would elicit particular EACs. If that were true, then communicators and therapists could use this tool to ‘get inside’ and ‘get with’ clients and customers. This did not occur.

And it probably did not occur because mental processes do not seem to be linked with EACs. (Note that the present writer is not referring here to extremes, e.g., eyes widening in horror at a traumatic sight; Ekman's (2003) seven universal facial expressions all encompass the eyes.) If more testing is carried out in the future, it would be very useful to survey a large proportion of LHSs, children and other cultures. That McCarthy et al. (2008) above found differences cross-culturally adds weight to the 'social convention' theory that exists within and between cultures: we should look at each other – but not too much – and not, not at all.

The present writer knows of six NLP-EAC studies that gave some support, one of which may have given substantial support (Nate, 2000), but the present writer also knows of about twenty-five studies that went no way towards supporting the model. This is not counting the separate but linked history of EM research.

But the present writer has come to understand from the current testing that EM is another way people express themselves, and wonders how much of a part evolution has played. Could it be that hundreds of thousands of years ago, before language began, humans had advanced to a point where EMs were a reliable form of communication? The advent of language would then have made this usage less important, leaving it now more or less defunct. What we do now know is that eyes move all the time – because they have to.

This could tie in with Micic, Ehrlichman and Chen's (2010) notion that EMs may be not functional but epiphenomenal, i.e. that whilst EMs occur alongside cognitive tasks, they do not necessarily contribute to them. These authors conclude that "at the present time, it is not clear … what is responsible for the large individual differences in the tendency to move one's eyes during non-visual tasks" (p. 211).

7. REFERENCES

Archibald, D. (2008). Ways of seeing. In Cabinet, 30.

Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. Cambridge: Cambridge University Press.

Baddeley, M. (1989). Neuro-linguistic Programming: The academic verdict so far. The Australian Journal of Clinical Hypnotherapy and Hypnosis, 10(2), 73-81.

Baddeley, M., & Predebon, J. (1991). Do the eyes have it? A test of neuro-linguistic programming’s eye movement hypothesis. Australian Journal of Clinical Hypnotherapy and Hypnosis, 12(1), 1–23.

Bandler, R., & Grinder, J. (1979). Frogs into princes. Moab, UT: Real People Press.

Bavelas, J. B., Coates, L., & Johnson, T. (2002). Listener responses as a collaborative process: The role of gaze. Journal of Communication, 52, 566–580.

Beck, C.E., & Beck, E. A. (1984). Test of the eye-movement hypothesis of neuro-linguistic programming: A rebuttal of conclusions. Perceptual and Motor Skills, 58(1), 175–176.

Bliemeister, J. (1987). An empirical test of basic assumptions of NLP. Integrative Therapie, 13, 397-406.

Bliemeister, J. (1988). An empirical test of theoretical constructs essential to NLP. Zeitschrift für Klinische Psychologie, 17, 21-30.

Bridgeforth, B.W. (2009). Leadership as role and relationship in social dynamics: An exploratory study seeking a leadership archetype. Dissertation Abstracts International Section A: Humanities and Social Sciences, 70(4-A), 1340.

Buckner, M., Meara, M.N., Reese, E.J., & Reese, M. (1987). Eye movement as an indicator of sensory components in thought. Journal of Counselling Psychology, 34(3), 283-287.

Burke, D.T., Meleger, A., Schneider, J.C., Snyder, J., Dorvlo, A.S.S., & Al-Adawi, S. (2003). Eye-movements and on-going processing. Perceptual and Motor Skills, 96(3), 1330-1338.

Carbonell, D. A. (1985). Representational systems: An empirical approach to neuro-linguistic programming (eye movements). Dissertation Abstracts International, 46(8), 2798B.

Cheney, S., Miller, L., & Rees, R. (1982). Imagery and eye movements. Journal of Mental Imagery, 6, 113-124.

Day, M. E. (1964). An eye movement phenomenon relating to attention, thought and anxiety. Perceptual Motor Skills, 19, 443–446.

Diamantopoulos, G. (2010). Novel eye feature extraction and tracking for non-visual eye-movement applications. A thesis submitted to the University of Birmingham for the Degree of Doctor of Philosophy.

Dilts, R. (1998). Eye movements and NLP. Retrieved 27 July 2012 from http://www.nlpu.com/Articles/artic14.htm

Dooley, K.O., & Farmer, A. (1988). Comparison for aphasic and control subjects of eye movements hypothesized in neuro-linguistic programming. Perceptual and Motor Skills, 67, 233-234.

Dorn, F.J., Atwater, M., Jereb, R., & Russell, R. (1983). Determining the reliability of the NLP eye-movement procedure. American Mental Health Counsellors Association Journal, 5(3), 105–110.

Dowlen, A. (1996). NLP – help or hype? Investigating the uses of neuro-linguistic programming in management learning. Career Development International, 1(1), 27-34.

Duke, J.D. (1968). Lateral eye movement behaviour. Journal of General Psychology, 78, 189-195.

Ehrlichman, H., Weiner, S., & Baker, H. (1974). Effects of verbal and spatial questions on initial gaze shifts. Neuropsychologia, 12, 265–277.

Ehrlichman, H., & Weinberger, A. (1978). Lateral eye movements and hemispheric asymmetry: A critical review. Psychological Bulletin, 85, 1080-1101.

Ehrlichman, H., Micic, D., Sousa, A., & Zhu, J. (2007). Looking for answers: Eye movements in non-visual cognitive tasks. Brain and Cognition, 64(1), 7–20.

Ehrlichman, H., & Micic, D. (2012). Why do people move their eyes when they think? Current Directions in Psychological Science, 21(2), 96-100.

Einspruch, E.L, & Forman, B.D. (1985). Observations concerning research literature on Neuro-Linguistic Programming. Journal of Counselling Psychology, 32(4), 589-596.

Ekman, P. (2003). Emotions revealed. Understanding faces and feelings. London: Weidenfeld & Nicolson.

Elich, M., Thompson, R.W., & Miller, L. (1985). Mental imagery as revealed by eye movements and spoken predicates: A test of neuro-linguistic programming. Journal of Counselling Psychology, 32(4), 622-625.

Ellickson, J.L. (1983). Representational systems and eye movements in an interview. Journal of Counselling Psychology, 30(3), 339-346.

Falzett, W. (1981). Matched versus unmatched primary representational systems and their relationship to perceived trustworthiness in a counselling analogue. Journal of Counseling Psychology, 28(4), 305–308.

Farmer, A., Rooney, R., & Cunningham, J.R. (1985). Hypothesized eye movements of neuro-linguistic programming: A statistical artefact. Perceptual and Motor Skills, 61, 717-718.

Fromme, D.K., & Daniell, J. (1984). Neuro-linguistic programming examined: Imagery, sensory mode, and communication. Journal of Counselling Psychology, 31(3), 387-390.

Galin, D., & Ornstein, R. (1974). Individual differences in cognitive style: Reflective eye movements. Neuropsychologia, 12, 376-397.

Griffiths, P., & Woodman, C. (1985). Conjugate lateral eye movements and cognitive mode: Blindness as a control for visually-induced oculomotor effects. Neuropsychologia, 23(2), 257-262.

Gumm, W. B., Walker, M. K., & Day, H. D. (1982). Neurolinguistics programming: Method or myth? Journal of Counselling Psychology, 29, 327-330.

Gur, R.E. (1975). Conjugate lateral eye movements as an index of hemispheric activation. Journal of Personality and Social Psychology, 31(4), 751-757.

Gur, R.E., Gur, R.C., & Harris, L.J. (1975). Cerebral activation, as measured by subjects’ lateral eye movements, is influenced by experimenter location. Neuropsychologia, 13(1), 35-44.

Heap, M. (1988). Neuro-linguistic programming: An interim verdict. In M. Heap (Ed.) Hypnosis: Current clinical, experimental and forensic practices. pp. 268–280, London: Croom Helm.

Hernandez, V. (1981). A study of eye movement patterns in the neuro-linguistic programming model. Dissertation Abstracts International, 42(4), 1487B.

Humphrey, M.E. (1951). Handedness and cerebral dominance. BSc Thesis, Oxford University.

James, W. (1890). Principles of psychology. London: Macmillan & Co.

Jamieson, J.E., & Sellick, T.B. (1985). Effects of subject-to-experimenter distance and instructions on lateral eye movement. Perceptual and Motor Skills, 60, 155-159.

Jupp, J. J. (1989a). A further empirical evaluation of neuro-linguistic primary representational systems (PRS). Counselling Psychology Quarterly, 2(4), 441-450.

Jupp, J.J. (1989b). Neuro-linguistic programming: An experimental test of the effectiveness of ‘leading’ in hypnotic inductions. British Journal of Experimental and Clinical Hypnosis, 6(2), 91-97.

Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22–63.

Kinsbourne, M. (1972). Eye and head turning indicates cerebral lateralization. Science, 176(4034), 539-541.

Lichtenberg, J.W., & Moffit, W.A. (1994). The effect of predicate matching on perceived understanding and factual recall. Journal of Counselling & Development, 72(5), 544-548.

McCarthy, A., Lee, K., Itakura, S., & Muir, D.W. (2008). Gaze display when thinking depends on culture and context. Journal of Cross-Cultural Psychology, 39, 716-729.

MacDonald, B. H., & Hiscock, M. (1992). Direction of lateral eye movements as an index of cognitive mode and emotion: A reappraisal. Neuropsychologia, 30, 753–755.

Meskin, B.B., & Singer, J.L. (1974). Day-dreaming, reflective thought, and laterality of eye movements. Journal of Personality and Social Psychology, 30(1), 64-71.

Micic, D., Ehrlichman, H., & Chen, R. (2010). Why do we move our eyes while trying to remember? The relationship between non-visual gaze patterns and memory. Brain and Cognition, 74, 210–224.

Monguio-Vencino, I., & Lippman, L.G. (1987). Image formation as related to visual fixation point. Journal of Mental Imagery, 11, 87-96.

Nate, S. R. (2000). Eye accessing cues: A study of eye movements while retrieving internal information. Dissertation Abstracts International: Section B: The Sciences and Engineering, 61(1-B), 518.

Oldfield, R.C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9, 97–113.

Poffel, S.A., & Cross, H. J. (1985). Neuro-linguistic programming: A test of the eye movement hypothesis. Perceptual and Motor Skills, 61, 1262.

Poffel, S. A. (2012). Personal correspondence to the present writer, 4 September.

Radosta, R. (1982). An investigation of eye-accessing cues. Dissertation Abstracts International, 43(3-B), 883.

Raine, A. (1991). Are lateral eye-movements a valid index of functional hemispheric asymmetries? British Journal of Psychology, 82, 129–135.

Salas, J. A., de Groot, H., & Spanos, N. P. (1989). Neuro-linguistic programming and hypnotic responding: An empirical evaluation. Journal of Mental Imagery, 13(1), 79-89.

Sandhu, D.S. (1991). Validation of eye-movements model of NLP through stressed recalls. Paper presented at the annual meeting of the American Association for Counseling and Development, April 21–24, Reno, NV.

Schleh, M.N. (1987). An examination of the Neuro-linguistic Programming hypothesis on eye movements in children. Dissertation Abstracts International, 48(2), 584-B.

Sharpley, C.F. (1984). Predicate matching in NLP: A review of research on the preferred representational system. Journal of Counselling Psychology, 31(2), 238–248.

Sharpley, C.F. (1987). Research findings on neuro-linguistic programming: Non-supportive data or untestable theory? Journal of Counselling Psychology, 34(1), 103-107.

Teitelbaum, H. A. (1954). Spontaneous rhythmic ocular movements: Their possible relationship to mental activity. Neurology, 4(5), 350-354.

Thomason, T.C., Arbuckle, T., & Cady, D. (1980). Test of the eye-movement hypothesis of neuro-linguistic programming. Perceptual and Motor Skills, 51, 230.

Turan, B., & Stemberger, R.M. (2000). The effectiveness of matching language to enhance perceived empathy. Communication & Cognition, 33, 287-300.

Wertheim, E.H., Habib, C., & Cumming, G. (1986). Test of the neuro-linguistic programming hypothesis that eye-movements relate to processing imagery. Perceptual and Motor Skills, 62, 523-529.

Wiseman, R., Watt, C., ten Brinke, L., Porter, S., Couper, S-L., & Rankin, C. (2012). The eyes don’t have it: Lie detection and Neuro-Linguistic Programming. PLoS ONE, 7(7), e40259.

Witkowski, T. (2012). Thirty-five years of research on neuro-linguistic programming. NLP research data base. State of the art or pseudoscientific decoration? Polish Psychological Bulletin, 41(2), 58-66.

Yarbus, A.L. (1967). Eye movements and vision. Translated by Basil Haigh. New York: Plenum Press.

8. APPENDICES

Appendix I Edinburgh Handedness Inventory (Oldfield, 1971)

Age ___ Occupation ____________ M / F Are you right- or left-handed? _____

Please indicate your preferences in the use of hands in the following activities by ticking either the Right or Left column.

If you do not really have a preference, i.e. can use either hand effectively, then please put a tick in both the Right and Left columns.

Some of the activities require both hands, e.g. a tennis racket. In these cases the part of the task, or object, for which hand preference is wanted is indicated in brackets.

Please try to answer all the questions, and only leave a blank if you have no experience at all of the object or task.

Appendix II 18 Study Questions (12 NLP; 2 CQs; 4 FQs)

Imagine in detail that you can see a house near where you live. Vr

What colour are the walls in your bathroom? Vr

Imagine a sound you have heard late at night. Ar

What does a car running over a gravel driveway sound like? Ar

Say the words to Mary Had a Little Lamb. Ad

Say a line from any poem or song. Ad

Imagine in detail what a Martian would look like. Vc

Imagine in detail looking at the cross between a dog and a bird. Vc

Imagine that you can hear the sound of a shooting star. Ac

Imagine that you can hear the sound of an ant eating. Ac

What are the physical sensations that you have when you feel angry? K

What are the physical sensations that you have when you feel elated? K

Who is the current Prime Minister of New Zealand? CQ

What country are you in? CQ

What is the name of a current or previous colleague? FQ

Which high school did you attend? FQ

Who is a favourite author? FQ

What is a favourite film? FQ

Appendix III Study Permission Letter Human Ethics Committee, Massey University

Appendix IV Participant Information Sheet

Appendix V Participation Consent Form (Individual)

Appendix VI Personal Correspondence (September 2012) to the Present Writer from Dr S A Poffel, co-author (with Herbert Cross) of Neuro-linguistic Programming: A Test of the Eye-Movement Hypothesis (1985)

Appendix VII 28 Study Questions (& Four Familiarisation Questions) Baddeley & Predebon (1991)

VISUALLY REMEMBERED

What colour are your mother’s eyes?

Imagine in detail that you can see the building three doors down from where you live.

What colour are the walls in your bathroom?

Imagine in detail that you are looking at your favourite childhood toy.

AUDITORILY REMEMBERED

What does the voice of your best friend sound like?

Imagine a sound you have heard late at night.

Think of an unusual sound that you have heard recently.

What does a car running over a gravel driveway sound like?

AUDITORY DIGITAL

I want you to say the prayer The Our Father or The Lord’s Prayer to yourself.

Say the words of Mary Had a Little Lamb to yourself.

Say the words of a poem or song to yourself.

Count from one to ten to yourself.

VISUALLY CONSTRUCTED

Imagine in detail looking at a plant that you have never seen before.

Imagine in great detail what a Martian would look like.

Imagine in detail looking at the cross between a dog and a bird.

Imagine in detail looking at a painting that you have never seen before.

AUDITORILY CONSTRUCTED

Make up the sound of a new word in a new language.

Imagine that you can hear a sound that you have never heard before. It is the sound made by a strange bird from another planet.

Imagine that you can hear the sound of a shooting star.

Imagine the sound of an ant eating.

KINAESTHETIC

What are the physical sensations that you have when you feel angry?

What are the physical sensations that you have when you feel frustrated?

What are the physical sensations that you have when you feel sad?

What physical sensations do you have when you feel elated?

CONTROL QUESTIONS

Who is the current Prime Minister?

What country are you in?

What’s your address?

How old are you?

FAMILIARISATION QUESTIONS

What’s the name of one of your tutors?

What high school did you go to?

Who is your favourite author?

What’s your favourite film?
