Monday, January 5, 2009

The Quest for Efficacy Research

There has been much focus in the last 15-20 years on the efficacy of sensory integration treatment and, of course, on the validity of research on the topic. To some extent, it seems reasonable to expect the same standard of rigor on the part of those demanding 'evidence.' I found it interesting, therefore, when the National Academy of Sciences issued a report from the Institute of Medicine based on a study of the state of American health care (Knowing What Works in Health Care: A Roadmap for the Nation. http://www.nap.edu/catalog/12038.html). After extensive study they concluded that less than 50% of clinical medical practice in the US is based on good, unbiased, scientific research. Talk about a case of the 'pot calling the kettle black!' Add to that the fact that those who don't wish to accept the notion of a non-medical, drug-free answer to dealing with some undiagnosable symptomatology can ALWAYS find ways to refute even the best research results. I don't think I need to spell them out, because we've been dealing with them for decades (Heilbroner; Kaplan, Polatajko, et al., 1993; Hoehn & Baumeister, 1994; Shaw, 2002, to cite just a few). Given this climate of resistance to a different approach to treatment, it has sometimes seemed futile to persist in trying to convince detractors who wouldn't accept the clinical truth of children making improvements, battling to find funds and to engage in costly research to provide evidence of efficacy. Yet many of our colleagues have persisted in producing a variety of fine research that, at the very least, supports sensory integration intervention by demonstrating positive results.

Accepting that no research is perfect, can we be proud of their efforts and celebrate the information gathered and reported? I don't think we need to apologize for our standard of practice or the quality of research in the area of sensory integration. It's an improving process.

However, some in our profession (OT) have essentially done just that. Last year a group of self-selected individuals decided they should determine what qualifies as 'fidelity' (validity) in sensory integration treatment research and then proceeded to evaluate previous research against their criteria (Parham, Cohn, et al., 2007). Never mind that no such criteria existed when that research was carried out and published. Never mind that in 1994 several major figures in occupational therapy, well respected in the field of sensory integration theory and treatment, expressed fairly diverse views of what SI is, along with its scope, limitations, and so on (Roley & Wilbarger, 1994). The group in question (Parham, Cohn, et al., 2007) indulged in an academic exercise in which they determined the 'core elements of sensory integration intervention processing' and 'therapist qualifications to provide sensory integration treatment,' and referred to these as 'structural features of sensory integration.'

While one can applaud the effort behind this endeavor, there are a number of problems with the project and its results. It would take a lot of web space to go into all of them, so I will just highlight a few.

1. Throughout the study, the authors rely on arbitrary parameters based on what one has to assume are their own biases. Examples: the selection of studies included for review; core elements decided by committee opinion (made up of whom?). What about elements that were identified but didn't reach the level of concurrence? This makes the whole process thoroughly subjective. Further, there are intermittent references to occupational therapy using a sensory integration approach, but no discussion or differentiation with respect to the core elements, some of which could be claimed as occupational therapy tenets, not just sensory integration intervention elements. Why should this be considered valid?

2. "To determine the extent to which previous investigators attended to fidelity . . .(pg. 219)" How is it helpful or worthwhile to apply specific parameters to research carried out years before anyone even thought about adhering to them? How could those investigators have predicted what this particular group of people would determine was valid? What's the old saying: "changing the rules in the middle of the game?" Despite many of the studies having proven validity according to measures at the time, these authors conclude "Validity of sensory integration outcomes studies is threatened by weak fidelity in regard to therapeutic process (pg. 216)." Because past research didn't meet criteria (subjective criteria) recently established, publishing evaluations of it as weak is hardly fair and does not improve the public view of the perfectly respectable body of research generated in the past (deficient, as all research is). Won't it be lovely to see this quoted in the future reviews of sensory integration research? And all for an esoteric purpose. . .or was it?

3. Prominent instances of self-interest: When the authors identify "therapist qualifications, which include professional background, formal education, clinical experience, post-professional training, supervision and certification in sensory integration or in sensory integration clinical assessment . . ." (p. 219), they neglect to say who measures professional background, clinical experience, and the rest, or how. Who determines what qualifies? Apparently they do. The reference to certification in sensory integration is an obvious reference to the USC-WPS course series, which is the only one that 'certifies' in sensory integration. Then there is the discussion of a "sound sensory integration fidelity instrument" that would be helpful for future valid research. The authors will be providing one. Forget all the instruments that already exist; they apparently aren't adequate.

4. One glaring omission from the article is a standard element of all published studies: a discussion of its own weaknesses and limitations. I'm guessing that every study these authors deemed invalid included such a discussion. How was this accepted for publication without it?

While this is obviously an editorial view of one of the recent documents regarding sensory integration, I'm of the mind that there should be a place where we can express opinions about the state of the field and discuss current issues. I am hopeful others will engage in a dialogue. I'm sure there are other concerns about the fidelity document, and I'm sure there are arguments that support the work presented in this study. Let's discuss it all.

Eileen Richter

Heilbroner, P. L. (n.d.). Why "Sensory Integration Disorder" is a dubious diagnosis. http://quackwatch.org/01QuackeryRelatedTopic/sid.html

Hoehn, T., &amp; Baumeister, A. (1994). A critique of the application of sensory integration therapy to children with learning disabilities. Journal of Learning Disabilities, 27(6), 338-350.

Kaplan, B., Polatajko, H., et al. (1993). Reexamination of sensory integration treatment: A combination of two efficacy studies. Journal of Learning Disabilities, 26(5), 342-347.

Parham, L. D., Cohn, E. S., et al. (2007). Fidelity in sensory integration intervention research. The American Journal of Occupational Therapy, 61(2), 216-227.

Roley, S.S., & Wilbarger, J. (1994). What is sensory integration? Sensory Integration SSIS Newsletter, 17 (2), 1-7.

Shaw, S. R. (2002, October). A school psychologist investigates sensory integration therapies: Promise, possibility, and the art of placebo. National Association of School Psychologists Communiqué, 31(2), 5-6.

4 comments:

  1. I think we should consider our new U.S. president's perspective, to the tune of "unclenching the fists to reach out and work together." We (OTs using SI) must find a way to work together by finding common ground. Otherwise, we cannot move this whole process forward in order to help children and families.

  2. That's a great sentiment, Kris. Although I hadn't thought of going in a political direction here, why not? This blog would be a good place to post ideas for ways of working together toward such a goal. One could argue that critiquing each other's work and expressing diverse opinions and observations is one way to move the process forward (to use your analogy: like Obama, and not the Bush "yes-men" mentality). I hope you will be sure to put your ideas in a post (as opposed to a comment). It's easy to do. If you have any trouble, let me know.

  3. Hi:
    I have been dancing with the idea of a collaborative research project in SI for the past two years, but feel intimidated by the knowledge that no project will ever be "perfect." However, I do feel that some of the criticisms mentioned in the article were valid. For example, sample size is important for validity, and the few articles that I've read that had a substantial sample size reported poor outcomes because the population was too diverse (and probably included those for whom SI was never intended). Another point of confusion is what exactly constitutes SI treatment. Are we talking about the simple use of sensory tools (like weighted vests), or a planned series of regularly scheduled, child-directed treatment sessions using floor and/or suspended equipment?

    I am working with a local college to initiate a collaborative research project whose purpose is to demonstrate the efficacy of SI treatment. I will work on posting the idea, and look forward to your responses.

  4. Excellent comments, Eileen. At least you are aware of your own biases. I'd prefer a "what can we learn from this research" approach, rather than a black/white evaluation. Each effort is a step forward in terms of learning something that is helpful, or learning what is not so helpful.
