The Tomato Effect,
The Placebo Effect, and Science
I would ask whether the focus on the disease process rather than on the patient is scientific in the best sense of the word. Have such clinical investigators and scientists not fallen into the trap called by Alfred North Whitehead “the fallacy of misplaced concreteness,” which results from neglecting factors that should not be excluded from the concrete situation? Many unfortunate consequences result when an abstract idea, called a disease, is considered as if it were separated from the human being with the changes of the disease.
Mark Lipkin, M.D.
JAMA, July 11, 1985
And true of biofeedback as well.
The Tomato Effect
The tomato effect is the rejection of an effective treatment because it does not fit an established model. Goodwin and Goodwin (1984) state: “The tomato effect in medicine occurs when an efficacious treatment for a certain disease is ignored or rejected because it does not ‘make sense’ in the light of accepted theories of disease mechanism and drug action” (p.2387). Goodwin and Goodwin call this tendency in medicine the “tomato effect” because it is reminiscent of the rejection of the tomato as edible. The tomato was not eaten in America until 1820 because it did not make sense to eat something poisonous: the belief at the time was that the tomato, being a member of the nightshade family, is poisonous. This belief was maintained in America in spite of the fact that Europeans had been eating tomatoes for years without harm. The evidence was in favor of the tomato, but the belief prevented acceptance of the evidence. The tomato effect in biofeedback training occurs when an efficacious treatment or training program is ignored, or rejected for publication, or criticized, because it does not make sense in the light of accepted theories about biofeedback training.

Shellenberger and Green 86
Many researchers have rejected the evidence from successful biofeedback studies because of mistaken beliefs about the nature of biofeedback training. In The American Psychologist and Clinical Biofeedback and Health, editors accepted derogatory articles on biofeedback that include the following statements: “I would like to turn now from history to the current status of clinical biofeedback research. The current status is, in a word, dismal” (Roberts, 1985, p.940); “The snake-oil approach is one that has been adopted by many biofeedback workers” (Furedy, 1985, p.156).
How did a therapy shown to be effective in the treatment of many disorders become a tomato? Why are the data from successful biofeedback studies ignored, discounted and criticized? Because so many researchers and reviewers of the field believe that biofeedback is something more than feedback of information: they believe in a ghost in the box with specific effects. Researchers and reviewers who believe that biofeedback has a specific effect set up the following polarity: either biofeedback has a specific drug-like effect and follows the laws of operant conditioning or it is not effective. Therefore, any study or clinical case that is not designed to demonstrate, or does not demonstrate, this nonexistent specific effect is rejected.
These beliefs about biofeedback training lead to several interrelated concepts that enhance the tomato effect: confounding variables, specific vs. nonspecific effects, the placebo effect, and scientific vs. nonscientific, all brought together in Furedy’s (1985) recent article, “Specific vs. Placebo Effects in Biofeedback: Science-based vs. Snake-oil Behavioral Medicine.” Based on these concepts, researchers and reviewers of the field commit “tomato errors.” Tomato errors are conceptual errors based on the “ghost in the box” approach to biofeedback. Researchers are unable to accept the value of an efficacious treatment because their model of biofeedback, and of science, prevents accurate assessment of the data from successful studies.
Ghost in the Box 87
The most common tomato error occurs when researchers insist that there must be one, and only one, active ingredient, one independent variable, to account for the effects of biofeedback training: physiological change or symptom reduction. Any other variable contributing to the results is a “confounding variable.” The active ingredient is variously called “biofeedback,” “biofeedback stimulus,” “reinforcer,” or “contingency” and is thought to have “specific” effects, meaning that results are causally related to the active ingredient. Confounding variables are thought to produce “nonspecific” effects and to contaminate or “confound” the results through the “placebo effect.” In the official doctrine view, the presence of confounding variables is reason to reject the results of a study.
Yates (1980) defines a confounding variable as “. . . an independent variable which is not under experimenter control [emphasis added] and may account for significant results which are thereby incorrectly assigned to the variations in an independent variable which is under experimenter control” (p.31). The official doctrine insists that such variables must be strenuously eliminated or controlled for in biofeedback research. And what are these confounding, nonspecific, placebo-inducing, unscientific variables that are not under experimenter control and contaminate results and mask the pure specific effect of “biofeedback”? According to researchers, they are homework, relaxation training, instructions, motivation, even the information from the biofeedback instrument.
Yates (1980) discounts the research studies of Patel and associates because: “In all these studies, the relaxation training was confounded with the provision of feedback” (p.233). Alexander and Smith (1979) discount the research studies of Budzynski and associates and other successful studies because “the unique contribution of EMG feedback has been consistently confounded with both the inclusion of other relaxation methods during training and regular home practice of nonfeedback relaxation” (pp.124-125).
Isolation of these nonspecific effects is critical to the establishment of a strong scientific foundation for biofeedback and for the acceptance of biofeedback therapy by the health professions. Effects due to adaptation, habituation, suggestibility, instructions, patient motivation, and treatment credibility all can be expected to affect response to a relaxation task, and the specific effects of biofeedback cannot be unambiguously assessed unless these potentially confounding variables are in fact held constant (Hatch et al., 1983, p.410).
Hatch confuses variables like adaptation, which are not part of the training, with important variables like instructions and motivation, calling them all “confounding variables.”
Furedy (1979) writes:
Rather the evidence for informational biofeedback’s efficacy has to be in the form of control conditions that show that an appreciable amount of increased control can indeed be attributed to the information supplied and not to other placebo-related effects such as motivation, self-instruction, relaxation and subject selection (Furedy, 1979, p.206).
Here, the independent variable is information, and all other variables are “confounding.” On the other hand:
Coursey (1975) compared a group given contingent feedback with control groups given a constant tone, with or without specific instructions on how to relax. This put the controls at a distinct disadvantage, since the contingent feedback stimulus itself provided sufficient information to enable the trained subjects to discover the response of interest (Alexander and Smith, 1979, p.117).
In this case, the information is a confounding variable, as suggested by Hatch in describing a hypothetical study comparing two treatment packages, one using EMG feedback and the other using progressive relaxation:
Since nonspecific effects [emphasis added] of the two packages differ in many respects, there are many compelling explanations for the observed differences. One possibility is that the biofeedback provided subjects with information [emphasis added] about their muscles that the relaxation group was denied, and this produced the differential effect (Hatch, 1982, p.379).
Hatch and Coursey seem to believe that the signals from the machine should have power to create physiological change and account for differences between groups, independently of the information provided by the signals: there is a ghost in the box.
“Confounding variable” seems to be defined as anything that the reviewer believes might contaminate the results of a study. The inclusion of “not under experimenter control” in the definition is particularly interesting, since in biofeedback training with humans that can include almost everything. When experimenters attempt to have everything under their control, including the feedback, they create conditions in which learning cannot occur, as in bare-bones, double-blind, and ABAB designs.
Should physiological change and symptom reduction data that result from motivation, expectation, practice, instructions, and all the other variables that are thought to be “confounding” be discounted in biofeedback research and clinical practice? Certainly not: there is nothing else to study. These variables are called “confounding,” and the effects “placebo,” only because the drug and operant conditioning models imply that some other variable is supposed to be creating the results, some active ingredient under the control of the experimenter. But in biofeedback training there is only one variable over which the experimenter may have absolute control, and that is the feedback characteristics of the machine, which have no impact on physiology. There are relaxation techniques, breathing techniques, imagery techniques; there is hope and positive interaction with the instructor; there are unending varieties of internal learning strategies, cognitions and beliefs; there is feedback of information to hasten learning; and that is it. We call these “compounding” variables, not “confounding” variables.
Because “biofeedback” is nothing more than a mirror, with no specific power of its own, the successful biofeedback studies described in chapter 4 are successful precisely because they incorporated these supposedly “confounding” variables, and are therefore tomatoes, ignored. On the other hand, the unsuccessful studies noted throughout this manuscript have failed precisely because they strenuously eliminated or controlled for these variables, hoping to find the ghost in the box. By accepting poor research, and creating tomatoes, official doctrine researchers inevitably conclude:
There is, in my opinion, no convincing evidence even to suggest, let alone to establish, that biofeedback methods represent a reasonable therapeutic procedure for the treatment of migraine headache. Most of the published literature is not relevant for a critical evaluation of the effects of biofeedback procedures on migraine (Beatty, 1982, p.220).
Until recently, no study investigating the treatment of Raynaud’s disease with skin temperature feedback used feedback to the exclusion of suggestion, autogenic training, or other relaxation procedures. However, Guglielmi (1979) recently conducted a group outcome study employing the double-blind design. . . This study, in combination with the results previously presented, argues strongly against biofeedback being the essential ingredient in the therapeutic effects that are often attributed to it (Surwit, 1982, p.231).
There is absolutely no convincing evidence that biofeedback is an essential or specific technique for the treatment of any condition (Roberts, 1985, p.940).
Statements like these are common in the biofeedback literature. The irony is that researchers like Roberts and Hatch believe that biofeedback must have drug-like properties with specific effects, use research methodology such as the double-blind design in an attempt to demonstrate that power, fail to demonstrate that power since it is not there, and then claim that biofeedback is not effective. “The results of the present investigation clearly indicate that the best treatment for Raynaud’s disease is warm weather” (Guglielmi et al., 1982, p.118). And to add to the confusion, these researchers claim that studies that do not attempt to demonstrate the specific ghost in the box effect of biofeedback (and thus incorporate such tools as relaxation training and instructions), or that fail to use appropriate control groups to demonstrate the specific effect, are confounded, unscientific, and represent “snake-oil” approaches to biofeedback training (Furedy, 1985).
Specific vs. Non-specific Effects
In drug studies the attempt to determine the specific effect of chemicals on physiology is legitimate, and this effect must be demonstrated to be independent of the “nonspecific” effects of human variables such as expectation. The terms “specific” and “nonspecific” are used appropriately in drug research. The isolation of specific and nonspecific effects is needed because pharmaceutical companies can market only chemicals that are shown to have specific physiological effects. Because the biofeedback instrument and the signals from it are not chemicals and have no power in themselves to create physiological change, it seems obvious that these elements of biofeedback training produce nonspecific effects, if it can be said that they produce effects at all. And because relaxation, motivation, expectations, and beliefs do have the power to change physiology via neurochemical links between cortex, limbic system, hypothalamus and the pituitary-adrenal axis, these variables do have specific effects. It is no wonder that there has been such confusion among researchers, and between researchers and clinicians. The official doctrine researchers have been searching on the wrong path, while clinicians have known for years that “biofeedback” has no specific effects. These researchers have totally reversed the correct referential meaning of “specific” and “nonspecific” in relation to biofeedback. The accurate referent of “nonspecific effect” is whatever impact the biofeedback machine might have, and the accurate referent of “specific effect” is the effect that relaxation, expectation, instructions, and all other training variables and cognitions have on physiological change and symptom reduction.
No wonder there have been so many juicy tomatoes: so many good studies rejected because the reviewers thought that they failed to prove the specific effect of biofeedback by confounding it with “nonspecific effects.” And no wonder it has been repeatedly shown that the biofeedback machine and the signals coming from it have no specific effect, and that relaxation is just as powerful, if not more powerful, than “biofeedback.”
We hope that this discussion will end the confusion of specific and nonspecific effects in biofeedback training, and that in the future it will be understood that the specific effects of biofeedback training are related to the training procedures and not to the machine and the signals from it.
The Placebo Effect
In drug research the placebo effect refers to the degree of physiological change or symptom reduction that results from any variable other than the chemical being studied. Because placebo control groups always show some level of symptom reduction, but are not given the active ingredient, it is assumed that symptom reduction results from the subject’s beliefs and expectations about the “drug” being administered. If symptom reduction in the active ingredient group is not statistically or clinically different from symptom reduction in the placebo group, the drug is considered to be ineffective and cannot be marketed. This is the approach of official doctrine research in biofeedback.
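The drug-trial decision rule described above can be sketched in a few lines of code. Everything below is invented for illustration (the group data and the use of a simple Welch t statistic); a real trial would involve proper clinical criteria and statistical review:

```python
# Sketch of the drug-trial decision rule described above: a drug is
# judged effective only if symptom reduction in the active group is
# reliably different from symptom reduction in the placebo group.
# All numbers are invented for illustration.
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Two-sample Welch t statistic for independent groups."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

active  = [42, 38, 45, 40, 44, 39, 41, 43]   # % symptom reduction, drug group
placebo = [30, 28, 33, 29, 31, 27, 32, 30]   # placebo group also improves

t = welch_t(active, placebo)
print(f"Welch t = {t:.2f}")   # a large |t| suggests an effect beyond placebo
```

Note that the placebo group shows substantial improvement on its own, which is exactly the phenomenon described here: some symptom reduction appears even when no active ingredient is given.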
Official doctrine researchers attempting to demonstrate the specific drug-like effects of biofeedback have a problem. They must eliminate the placebo effect, or account for it in the data, so that the specific effect of biofeedback can be determined. Hours of discussion, pages of written material, and numerous studies have been devoted to this problem. There are two official doctrine solutions: either eliminate the confounding variables (expectation, motivation, relaxation) or make sure that the same confounding variables operate equally upon the experimental and control groups. The former approach leads to double-blind designs and “bare bones” studies in which subjects are given minimal information about how to proceed. The latter approach has led to a variety of bizarre control groups and conditions, including false feedback. Furedy (1986) writes:
Many clinicians believe that, in the clinical context, it is unfeasible to provide the science-based, pharmacology-type, double-blind, specific-effects oriented control for biofeedback. However, the specific-effects control can be modified in such a way that it is both practical to use, and still retains the ability to make valid evaluations of specific biofeedback effects.
Here is Furedy’s suggestion for an appropriate control:
. . . In the control condition the contingency or accuracy of the feedback does not have to be completely removed, because this will often lead to the discovery by the patient and/or the therapist that the condition is a nonfeedback one. Rather, the contingency or accuracy of the feedback may simply be degraded rather than being completely removed (Furedy, 1985, p.161).
This suggestion for an appropriate control in clinical settings, and the perceived need for it, are the epitome of confused thinking about biofeedback training, as if smudging over the mirror would enable us to better isolate the “specific effect” of the mirror.
Many excellent studies conducted by clinicians have been rejected or discounted because they failed to include appropriate controls. Researchers in every field know that to determine the effect of one variable independently of the effect of other variables, control groups must be used. To isolate the effect of an ingredient, whether in dog food, in a social environment, in a classroom or in a drug, the study must include matched control subjects who receive everything that the experimental subjects receive, except the independent variable being studied. This is not the case in biofeedback training. As we have noted repeatedly, there is no independent variable with specific effects that can be isolated and studied independently of “nonspecific” effects. The independent variable in biofeedback training is self regulation: self regulation of psychophysiological processes such as blood flow in hands, self regulation of low arousal, or self regulation of the self. Consequently, the need for a control group, and the nature of the control group, in biofeedback research is dramatically different from other research.
Yet researchers who believe that “biofeedback” has a specific effect that can be isolated automatically assume that particular types of control groups must be used, and criticize clinicians for not doing so. Varieties of control groups have been invented for comparison to “biofeedback,” such as relaxation control groups (Error #8), and groups receiving bizarre and misnamed procedures such as “false feedback,” “pseudofeedback,” and now “degraded feedback.”
The first question to ask is not, “What does a control group control for?” (Hatch, 1982), but, “Is a control group appropriate?” Control groups are not necessary when:
(1) we want to know whether or not an individual has achieved a level of mastery, such as 95°F finger temperature in a cold room. Using the sports analogy, we do not need a control group to determine whether or not a runner can run the 100-yard dash in 9.6 seconds; we need only a stopwatch. We do not need a control group to determine a training effect when mastery is demonstrated.
(2) the training goal and the treatment goal are the same, such as lowering blood pressure in direct blood pressure feedback, increasing hand temperature in Raynaud’s disease and vascular complications of diabetes, or increasing sphincter control in fecal continence training. When a Raynaud’s patient can consistently increase blood flow in her hand, and can abort vasospasms, the training effect is the treatment effect. A control group is superfluous. (And an ABAB design is totally inappropriate, as discussed in Error #6.)
(3) there is a high correlation between the training and the treatment effects, and when there is a high correlation between minimal training and minimal treatment effects (Libo & Arnold, 1983b; Budzynski et al., 1973; Acerra et al., 1984). In this case, patients who fail to demonstrate learning and symptom reduction act as a legitimate post-hoc control group.
(4) a single case study design is used; here we refer to the subject as her/his own control, meaning that pretreatment data are the “control” data.
(5) long term effects of single case studies or multiple systematic case studies are reported. Many long term follow-up studies by clinicians are discounted for lack of control groups. These group studies are, however, compilations of single cases, and, as noted above, each patient acts as her/his own control.
(6) we are not interested in determining the “specific effect” of a particular element in a complex treatment protocol. Rather, we are demonstrating the effects of a multi-component training program on the basis of pretreatment baselines and long term follow-up.
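Points (1) and (3) above lend themselves to a simple computational sketch. The helper functions, the 95°F criterion, and all data below are invented for illustration:

```python
# (1) Mastery: no control group is needed to verify that a criterion was
#     reached -- only a measurement, the equivalent of a stopwatch.
# (3) When training effects and treatment effects are highly correlated,
#     patients who fail to learn serve as a post-hoc comparison.
# All numbers are invented for illustration.

def reached_mastery(temps_f, criterion=95.0):
    """True if the trainee ever reached the criterion finger temperature."""
    return max(temps_f) >= criterion

def pearson_r(x, y):
    """Pearson correlation between a training measure and a treatment measure."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

session_peaks = [88.0, 91.5, 93.2, 95.4, 96.1]   # degrees F, one per session
print(reached_mastery(session_peaks))            # True: criterion was reached

temp_gain    = [1.0, 2.5, 4.0, 5.5, 7.0, 8.0]    # training effect per patient
symptom_drop = [5, 12, 22, 30, 41, 48]           # % fewer vasospasms
print(f"r = {pearson_r(temp_gain, symptom_drop):.2f}")
```

The mastery check needs no comparison group at all, and the high correlation between hand-warming gains and symptom reduction is what licenses treating non-learners as a post-hoc control.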
Control groups are appropriate when:
(1) comparisons of one type of treatment are made with another type of treatment. For example, comparison of biofeedback treatment to a medication control group is appropriate.
(2) the elements of a treatment protocol are compared. For example, in a successful treatment for migraine headache, both hand temperature and forehead EMG feedback may be used. If the researcher wants to determine the relative effect of each feedback modality, then a comparison of the combined treatment with an EMG control group and a temperature control group may be appropriate. Whether or not this would be useful research is another issue. In clinical practice a variety of feedback modalities are used in conjunction with a variety of training techniques, and the “usefulness” of any single technique is determined by the individual patient.
The attempt to determine the specific effect of “biofeedback” is inappropriate, and the use of control groups for this purpose is misleading. When control groups are used to determine the specific effect of biofeedback, the results usually indicate that “biofeedback” contributes little to the effects. This is because (1) subjects in the “biofeedback” group failed to learn due to the methodologies of the official doctrine, (2) subjects in the control group did learn to relax, or (3) both experimental and control groups were adequately trained, and the addition of feedback did not significantly enhance learning.
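Case (3) can be sketched numerically. The scores below are invented for illustration; the point is only that a small between-group difference can coexist with substantial learning in both groups:

```python
# Invented mean symptom scores (higher = worse) illustrating case (3):
# both groups learn, so the between-group difference at post-treatment
# is small even though each group improves substantially.
pre  = {"biofeedback": 60, "relaxation_control": 61}
post = {"biofeedback": 22, "relaxation_control": 27}

for group in pre:
    print(f"{group}: improved by {pre[group] - post[group]} points")

between = post["relaxation_control"] - post["biofeedback"]
print(f"between-group difference at post-treatment: {between} points")
```

A reviewer looking only at the between-group comparison would conclude that "biofeedback" added little, while both groups in fact achieved large symptom reduction.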
When subjects in both experimental and control groups are well trained, small differences may be expected. Certainly information is a tool for learning, and information feedback is particularly important in neuromuscular rehabilitation and fecal continence training, for example. But the information from a biofeedback machine has no power in itself, and with good training humans can learn many self regulation skills without the aid of external feedback. Particularly when generalized low arousal is the goal, humans can use their own mind/body feedback to learn.
In conclusion, the use of control groups has added little to our knowledge of biofeedback training and only confuses the field by implying that the specific effects of biofeedback can be studied independently of other variables. When this is not demonstrated on the basis of experimental vs. control group comparisons, false conclusions arise. The rejection of data from successful studies on the basis of inadequate control groups is often inaccurate.
There is No Sugar Effect
In addition to the passion to eliminate or control for the placebo effect in order to determine the specific effect of the “active ingredient” in drug research and biofeedback research, the “placebo effect” has been treated as if it were not real or genuine. The term “placebo effect” carries a negative connotation of “not legitimate” or “unscientific.”
Perhaps this arises from the fact that in drug studies the placebo itself is not the real or genuine drug, but is instead sugar or saline solution; thus, the placebo effect can hardly be “real.”
The “placebo effect,” meaning physiological change and symptom reduction, is, however, as specific and real and scientific as any effect produced by a drug. And the variables that produce these real physiological effects–motivation, expectations, hope, instructions, relaxation, imagery–are just as real as any chemical compound. (Although these variables may be more difficult to study than chemical compounds, they are no less real or “scientific.”)
The term, and the concept, “placebo” should be used only in the appropriate context, drug research. The term “placebo effect,” if taken literally, is a misnomer even in that context. This can be seen by substituting the literal terms: “sugar” and “sugar effect.” We know that sugar has no sugar effect; the physiological effects result from the belief and expectation about the “drug” being consumed. In drug research the placebo is indeed an inactive ingredient. The inaccurately termed “placebo effect,” however, is a specific effect resulting not from the inactive ingredient sugar, but from the “active” and powerful ingredients of positive beliefs and expectations.
Are hope and positive expectation “placebos”? Certainly not in the sense of the inactive “placebo” compound that cannot create physiological change in drug studies. Therefore, it is fallacious to claim that the effects of such variables are “placebo effects.” Actually, they are “motivation effects” or “expectation effects.” This is not merely a semantic problem; it is a conceptual problem that arises out of the drug model and has led to considerable confusion in biofeedback research.
If Miller had used the sports training model to understand biofeedback training, the placebo effect would not have been an important issue. In sports training the issue of placebo is not problematical because the “effect” is useful and encourages the development of the skill at the beginning of training. We find it curious that researchers using the model of operant conditioning with laboratory animals were untrue to the model regarding motivation and enthusiasm. Rats and pigeons are routinely kept at 80% ad lib body weight to ensure enthusiasm for learning, yet due to the fear of the placebo effect, humans are denied enthusiasm for successful learning in biofeedback training.
It is obvious, however, that enthusiasm and belief alone will not enable the athlete to run a four minute mile. Effective training and continued enthusiasm are needed to accomplish such goals. (Elevated enthusiasm before a meet, however, may give an athlete the leading edge, and is not a placebo effect.) The same is true of biofeedback training.
Physiological change and symptom reduction that result from hope and positive expectation are impressive and motivating and are excellent examples to the patient of the powerful interaction of mind and body. Yet these effects may not be sustained because they are not the result of psychophysiological training and self regulation skills.
Biofeedback training, as the term implies, is training: training in deep relaxation, training in short relaxation techniques for maintaining homeostasis throughout the day, training in cognitive/perceptual skills, and training in behavioral skills. Ultimately, physiological change and symptom reduction must result from such training in order to be sustained. Nonetheless, in biofeedback training it is important to create hope, positive beliefs, and positive rapport, knowing that these are powerful ingredients of therapy that will help the patient toward recovery. We do not refer to these ingredients as “placebos,” nor do we refer to their effects as “placebo effects.” In this regard, referring to biofeedback training as “the ultimate placebo” (Stroebel & Glueck, 1973) is inaccurate, even though it is meant to suggest that the mind is powerful and plays an important role in therapy. When effective training is provided, the “placebo effect” is irrelevant, or nonexistent.
We conclude that there are no “placebo effects” in biofeedback training. There are only the effects of the variables that are involved in learning: good or poor training, motivated or unmotivated students, good or poor coaching. These variables are not confounding variables but compounding variables that are essential ingredients in learning any skill.
Scientific vs. Nonscientific

As American psychology came under pressure to be “scientific” and attempted to be as much like physics as possible, it adopted a model of “science” in which the mind is conceptualized as “unscientific.” The mind is viewed as unmeasurable and unobservable and therefore cannot be “scientifically” studied in psychology. Biofeedback training creates a problem for those who espouse this model of science, because the key principle that underlies biofeedback, the principle without which biofeedback as an aid to learning would not work, is that mind and body continually interact, and that mental events have physiological correlates, and the reverse. The mind must play a key role if biofeedback training is to be successful, but according to the official doctrine, the mind must be ruled out or “controlled for” if biofeedback research is to be “scientific.” Throughout the history of biofeedback training, researchers and reviewers of the field have criticized their own colleagues and clinicians for being “unscientific” or not following “scientific principles.” Sometimes this has been suggested in very condescending tones. For example:
. . . It is, however, the responsibility of educational programs to teach students to think critically enough to be able to avoid the pitfall of allowing fallible clinical judgment to supplement scientifically derived conclusions. . . . What is needed most in training of biofeedback clinicians is a stronger dose of experimental science and its interpretation. . . . If we are not an applied science then we have little more to offer than any number of other groups that want to work with clients and “make them better.” (Roberts, 1985, p.940).
. . . There is a powerful desire among health and health-related professionals to be able to provide treatment. It is incumbent upon the serious scientist to temper that noble desire with an equally noble appreciation for the value of hard evidence, and the need for caution and patience (Katkin, Fitzgerald and Shapiro, 1978, p.286).
The failure to adopt a science-based approach to biofeedback technology means that behavioral medicine is unable to evaluate whether biofeedback of a particular system does or does not work . . . The more scientific a treatment is, the more efficacious it will be (Furedy, 1985, p.156).
There is not one well controlled scientific study of the effectiveness of biofeedback and operant conditioning in treating a particular physiological disorder (Shapiro and Surwit, 1976, p.113).
The term “unscientific” carries a negative connotation and if a study or clinician can be labeled “unscientific” the work can be ignored. This is one of the chief tools for enhancing the tomato effect. It is time to carefully examine the issues.
What is meant by “science,” “hard scientific evidence,” and “controlled scientific studies”? Apparently “science” means adopting models and research methodology appropriate to one area of science, such as drug research or operant conditioning with animals, and applying them to whatever is of interest to the researcher, in this case biofeedback training. This is not scientific. That concepts and methods may indeed be scientific in one domain, such as the physical sciences, does not mean that they are “scientific” when applied to another domain, such as the complex area of mind/body interaction. For example, the double-blind design, as used by Guglielmi et al. (1982), is not scientific in biofeedback research, and the data from such studies are neither accurate nor scientific.
The recent article by Furedy (1985) “Specific vs. Placebo Effects in Biofeedback: Science-based vs. Snake-oil Behavioral Medicine” so well illustrates the confusion about what is scientific and what is not, and what is “specific” and what is not, that we examine it in detail.
As the title of the article suggests, the essence of Furedy’s argument is that any physiological change that can be attributed to a “specific” cause, such as a drug, is a “specific effect” and is scientific; any physiological change that seems to be “nonspecific” (having no identifiable specific cause) is a placebo effect and is due to “snake-oil.” And we all know how unscientific snake-oil is. To illustrate this point, Furedy uses the example of death by bone-pointing in an aboriginal society. He writes: “. . . It is quite possible that the bone in the hands of a witchdoctor with
superb bedside, or rather graveside, manners would have been superior to even a modern gun, as a killing instrument. Moreover, this superiority would have been due solely to the placebo effects of the bone, rather than to any demonstrable specific effects” (p. 156).
The implication that death by bone-pointing is not a specific effect is rather amusing. What Furedy means, of course, is that the bone does not have anything like a bullet that has specific effects on the body, so death by bone-pointing cannot be a specific effect (and by his own logic, he would have to conclude that such a death is unscientific). Furedy fails to consider that certain beliefs about the bone are so powerful and so specific that they can have a very specific effect: death. And these effects and their causes are well understood by anyone who has studied the psychophysiology of stress and illness. Again, we see the confusion between what is “specific” and what is not.
Furedy continues with another example: “Again, as in the case of the witchdoctor example, it is more than likely that in that society, and administered by a master salesman, snake oil through its placebo effects would have been more efficacious than a drug like aspirin with demonstrated specific effects” (p.156). (Actually, aspirin is a poor choice here, because according to Furedy and others who contend that there are “specific” and “nonspecific” effects, specific effects are those that can be clearly attributed to a specific mechanism, and so far the mechanisms through which aspirin has its effects are unknown.) In any case, Furedy then contends that “what renders a technology superstitious rather than science-based is when the evaluation of the treatment is solely in terms of placebo, when, that is, there is no genuine role for science in the evaluation” (p. 156). This is Furedy’s personal definition of the role of science; we learn later in the article that the role of science is the application of double-blind designs: “The function of the double-blind arrangement is that it separates placebo effects from specific effects” (p.159). So Furedy is a proponent of the superstitious, ghost in the box mythology of biofeedback training. Stating this clearly, he writes:
. . . There is no question that in the pharmacological evaluation of any drug, it is the specific rather than the
placebo effects that are of interest for the science-based technology of pharmacological medicine. The argument applies in the same way to that brand of behavioral medicine that seeks to employ biofeedback in a science-based rather than snake-oil or superstitious fashion (Furedy, 1985, p.159).
According to Furedy, then, science-based biofeedback research means using methodologies, such as the double-blind design, in an effort to determine the specific effect of biofeedback. Since this is not done in successful biofeedback studies, he concludes: “The snake-oil approach is one that has been implicitly adopted by many biofeedback workers” (p.156). Furedy’s “science-based biofeedback research” is not possible, because the biofeedback mirror has no specific effect and because the double-blind design has no “scientific” place in biofeedback research. Such concepts and methods are not useful and will eventually be discarded.
Science and scientific methodology evolved from a need to understand nature and dispel dogma. Yet biofeedback researchers who claim to be “scientific” are blinded by a dogma that makes it impossible for them to look at the facts and recognize that their concepts and methodologies have failed. Many researchers have conducted ghost in the box research with numerous errors, failed to demonstrate the specific effect of “biofeedback,” and concluded that biofeedback fails. These researchers have failed to critically examine their data and methodologies, and have failed to examine the “hard scientific evidence” that makes it clear that their concepts and methodologies are inappropriate to the study of biofeedback training. This is not scientific.
In summary, we see that the concepts and constraints of official doctrine research make it easy to reject good biofeedback studies. Clinicians have been repeatedly admonished by official doctrine researchers to think critically and to be skeptical about the efficacy of biofeedback training. We think that this is good advice and suggest that researchers do the same regarding official doctrine theories and research results.
[End of Chapter 5]
Attention: The Ghost in the Box is “Shareware”!
Please Register Your Copy Today
This internet publication of the historic monograph “From the Ghost in the Box to Successful Biofeedback Training” is itself an historic event: the first known publication of a previously published masterpiece as “shareware”.
For years computer software has been published under this “honor system”. You can download a program for free, try it out, and see if you like it. And, if you continue to use the program, you are honor-bound to send the modest registration fee to the author.
If you download and read this Internet Edition of GHOST, either the whole book or any one or more of its chapters, and if you allow its message to influence your thinking about Biofeedback, you are asked to submit the modest sum of FIVE DOLLARS directly to the book’s authors, Bob and Judy. That’s even a bargain, since the original 1986 publication, which is herewith reproduced in its entirety, cost $9.95! Unfortunately, it has been out of print for several years, but would surely cost more if reprinted today.
License. Payment of the $5.00 registration fee entitles the reader to print one (1) copy of the entire text, or any part of the entire text, for personal use. Up to ten additional copies may be printed, provided that this notice is always included (once) and each recipient understands that the shareware fee applies to each and every printed copy of the book or any chapter of the book. Reproduction beyond the scope of this license is a violation of US and International Copyright Laws.
To Register your copy of “Ghost”, print this page and send it with your check or US$ 5.00 cash to:
Bob Shellenberger & Judy Green
c/o Psychology Department
Aims Community College
PO Box 69
Greeley, CO 80632 USA
Dear Bob and Judy:
Thanks for making “The Ghost in the Box” available again.
Email Address: __________________________________
Remember, send only one registration per person, regardless of how many chapters you have.
A revised edition of Ghost is in the planning stages; if you do include your name and address, you will be notified if and when it becomes available.
A brief description of your professional involvement in biofeedback would be of great interest to the authors.
Comments, suggestions and criticism are most welcome. Please feel free to make your suggestions or comments here.