Educators wrestle with an array of challenges, such as raising achievement, reducing dropout rates, and helping students with disabilities or students learning English. At the same time, researchers study education problems, develop practices or programs, and assess their effectiveness. A more productive interaction between the two is the topic here.

If we start from the view that improvement in current practice is desirable, two courses of action suggest themselves: (1) Identify effective programs or practices based on current research and disseminate these practices to educators, and (2) Identify programs or practices that need more or better research and initiate research to study them. Both courses can be adopted at the same time, with effective practices emerging under (2) folded into the dissemination efforts in (1).

Researchers can readily tackle the second course of action. We are aware of trends in the topics being studied, where gaps exist, and the kinds of research that represent the technical and analytical frontier. The sense of moving forward will be reassuring, but the reassurance may be false comfort: we may think we are making progress when we aren’t.

The reason is that the premise of the first course of action is shaky. The premise is that educators are eager to embrace the answers and that they want to (and can afford to) implement the programs and practices identified as effective. If there is evidence to support either of these assumptions, I’ve not seen it.

Anyone working in schools or districts knows that education is a practice-based field. I am speculating here, but I would venture that 90 percent or more of what teachers do in classrooms reflects their own notions of effective practice rather than what current research would suggest is effective practice. Findings from research articles published within the last decade are unlikely to be driving what happens in the classroom. Research affects practice only through slow and meandering channels: new “research-based” approaches might appear in textbooks, teachers might encounter research in professional development workshops, and What Works Clearinghouse practice guides might sit on the desks of some administrators or teachers. Disseminating research findings in this situation is pushing on a string.

Often when presenting research findings to educators, I am asked a variant of this question: “The research you presented was done in these places…but what I want to know is what will work in my school/district/state.” At the root of this question is the view that implementing a program or practice outside the context in which it was studied is risky. With many contextual variables affecting any program, educators are really asking whether researchers understand the conditions under which the program is effective. The question acknowledges that implementing a program or practice derived from research means channeling a quantitatively simple statement, for example “the program increased graduation rates by 10 percent,” into a complex system with many actors and interactions. If the pathways by which the result can be replicated are unknown, why should educators trust that it will replicate?

The problem is that answering the context question requires the same program or practice to be studied in a variety of contexts, or at least in contexts that are plausibly related to its effectiveness. Given the longstanding lack of funding for education research (in the past fiscal year, the National Institutes of Health received more than fifty times the funding received by the Institute of Education Sciences, $31 billion compared to $0.59 billion), and with school districts less open to hosting research in the face of increasing accountability demands, the likelihood that programs will be replicated in various contexts is slim. We have a dilemma: educators want an answer we cannot afford to give them. And until the context question can be answered, education research cannot make the case for more funding, because there are not many “customers” who want its product.

Having painted myself into a corner, let me add thoughts on how to get out of it. Research dissemination is sometimes viewed as translation, in which researchers or a third party turns findings from journals or studies into practical steps educators can follow. As noted above, unless educators are asking for translation, the audience for these efforts may be limited. Practice guides prepared by the What Works Clearinghouse are its most popular product, but the number of practice-guide downloads is small compared to the scale of the preK to 12 education sector.

Another approach is brokering. Here, educators and researchers work interactively to define problems for research to tackle. Researchers feed findings back to educators, the group identifies new courses of action, and the cycle continues. The approach used by the Consortium on Chicago School Research and the Research Alliance for New York City Schools fits this model, as does the engineering approach promoted by Bryk and Gomez (2008) and the recent creation of research alliances within the regional lab network. In these models, problems are defined by educators, which ensures that educators are interested in the solutions and that schools are more likely to want to participate in the needed research, because the solutions bear on their everyday problems. Any researcher who has tried to implement an evaluation in a school receiving grant funding from a federal or state agency or a philanthropy will recognize that accountability is a weak basis on which to build school support for a study.

We need more brokering. Educators and researchers should not be in different orbits operating under different incentive systems (educators: raise test scores; researchers: publish). Aligning the incentives will support more brokering activity.

Five other observations relate to brokering:

1) Conducting smaller coordinated studies has appeal compared to the “one big study” approach we often see today. Smaller coordinated studies are better able to explore the role of context, can examine more questions, and can still be combined into one big study using the statistical tools of meta-analysis (see the sketch after this list). I think educators would rather hear that seven studies of a program showed positive effects in different settings than that one really large study of the program showed positive effects. While large studies can detect small effect sizes, educators may be more interested in the robustness of findings across contexts than in statistical power.

2) The one answer is wrong. It’s not ‘let’s implement after-school programs’ or ‘let’s use more technology in the classroom,’ or ‘teachers need professional development on the topic.’ If recent IES studies have shown us anything, it’s that some of these efforts might work somewhere, but on average they are not likely to do much. The real solutions are likely to need a combination of programs and practices, and the combinations probably vary with context.

3) The choice of study methodology is a red herring. When questions are salient, schools are open to experiments and to the random assignment of students, classrooms, or schools that experiments require. When questions are not salient, arguing that an experiment is unethical or inappropriate becomes a convenient basis for not supporting it, or for actively hindering it.

4) Cost is less an issue for experiments than commonly claimed. Though I often read that experiments are expensive, studies are costly when they are prospective and involve data collection, not when they use experimental methods. A case can be made that prospective nonexperimental studies requiring active data collection are likely to be more expensive than experiments: such studies need to construct an equivalent comparison group to have validity, and the usual methods for doing so, like propensity scoring, work best when a large pool of potential comparison cases is available at the outset, which adds to cost.

5) “Practices” are more important than “programs.” Though I have used the two terms interchangeably here, they are different. Practices, such as teaching reading using a guided approach, or differentiating instruction, are ways in which educators do their work. Programs are bundles of practices organized by a conceptual model. Unlike practices, however, programs often have proprietary content, require procurement, and raise complex issues of fidelity of implementation.
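To make the meta-analysis point in (1) concrete, here is a minimal sketch in Python of how effect estimates from several small coordinated studies could be pooled with a simple fixed-effect, inverse-variance average. The effect estimates and standard errors are entirely hypothetical; a real synthesis would likely use a random-effects model and examine how effects vary with context, but the mechanics are no more elaborate than this.

    # Fixed-effect, inverse-variance meta-analysis (illustrative only).
    # The (effect estimate, standard error) pairs below are hypothetical,
    # standing in for results from seven small coordinated studies.
    import math

    studies = [
        (0.12, 0.08), (0.05, 0.10), (0.20, 0.09), (0.08, 0.07),
        (0.15, 0.11), (0.02, 0.12), (0.10, 0.09),
    ]

    # Weight each study by the precision (inverse variance) of its estimate.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
    print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")

The point of the sketch is that the pooled estimate carries the combined precision of all seven studies while the individual results remain available for examining how effects differ across settings.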

There is an old joke about losing money on every sale but making up for it with volume. In a way, doing more studies to identify effective programs or practices runs that risk. Each study is costly, and doing more of them may still not move the needle. Brokered studies may move the needle but involve unfamiliar organizational roles and possibly awkward first steps. But better that than more of the same.

 
