Sally Dalton Robinson Professor of History, African American Studies and Sociology
Belatedly, I am writing in response to Michael McPherson’s letter. By way of preface, I should say that my own interests center on urban schools and particularly on bottom-tier urban schools, so what I say reflects my sense of research and practice in that context. Also, let me confess that I have pretty strong biases in this area, since my current project deals with the overlapping problem of why so few reforms take root in urban schools. Part of what I stress is that we typically do not appreciate the salience of social and political factors in preventing change. As applied to research, this means that a great many bottom-tier schools are such low-capacity institutions that they have a great deal of difficulty making use of any information coming from the “outside.” There are a number of reasons for this, and I can only mention a few here. The general attitude toward research in many low-functioning schools is that it is simply irrelevant; it is not understood to be part of the answer to any question that matters. In the very worst schools, that attitude is part of a broader despair, a sense that nothing works with “these kids” anyway. Although they may express it differently, similar attitudes about the pointlessness of research can be found among some administrators and policymakers.
A related issue is that various constituencies bring their own epistemologies to the table and what constitutes evidence of success to one may not be convincing at all to another. Academic research attesting to the success of some approach may not be very convincing to a principal who thinks in more particularist terms; a testimonial from a fellow principal in a similar school may mean a great deal more.
For policymakers, the non-authoritative character of much educational research is an obvious problem. On many questions (Which comprehensive school reform is best? Is progressive pedagogy superior to didactic pedagogy?), research doesn’t allow a confident answer, notwithstanding plenty of research on these issues. This is partly because of the narrow conceptualization of issues, partly because of poor study design, and partly because of the sharply ideological character of the discussion. Much of the work I am most hopeful about comes from projects that have something of the Manhattan Project about them. That is, teams of researchers, often place-based, representing a variety of disciplines and methodologies, making a long-term commitment to understanding the larger problem, rather than doing little studies of some part of it in the hope that one day all the little studies will add up to something. To varying degrees, I think the Consortium on Chicago School Research, Research in Action and MDRC reflect some of this, although MDRC is not place-based, of course, nor is it as close to practitioners as the Consortium and RIA are. Still, it tends to do multi-method, longitudinal studies with much stronger-than-usual grounds for causal inference and generalizability.
Policymakers, including leaders of foundations, bear some culpability for what I think has proven to be an ill-advised emphasis on outcome studies. The question they really want answered is, Did it work? Questions about context and process get pushed to the side. Thus, even when we know that such-and-such a program has “worked” somewhere, we may not understand its operations well enough to help it work somewhere else. Put another way, foundations may be paying for more implementation research, but it is not clear they are attending to it in the way they attend to outcomes research. Given the amount of research done in the nineties, one might expect the foundation community to have a much stronger sense of good implementation, but I’m not sure that’s the case at all.
As to examples of cases where I think research has made some difference:
The Consortium studies on ending social promotion. My sense is that the last round of studies, showing significant negative effects of the policy on younger students, took most of the steam out of what seemed a few years earlier like a national movement around the idea of ending social promotion. These were methodologically sophisticated studies on an issue that ordinary people know about and understand; they were well covered by the press and were timely. (In fact, I may be wrong, but my memory is that the press in Chicago – and maybe New York – was actually provocative in the way it used the research to pit researchers and policymakers against one another.)
Research on small schools. Fifteen years ago, this was a fringe model, advocated by a small group of liberal reformers, many of them in New York. If we see this body of work as beginning with the dropout prevention studies, we must have 20-25 years of studies saying that you get better results from urban youngsters if they have stronger relationships with adults, and one way to facilitate those relationships is to make schools smaller. I don’t remember any of this work as methodologically sophisticated; the samples were sometimes small and there was no random assignment. Nevertheless, I think what made a difference may have been the consistency across studies and across time, and that so many studies found positive effects on both social and academic outcomes. It may also have helped that, insofar as high school reform is concerned, this was the only game in town. It’s not as if there were some other popular model of high school reform, supported by its own research. It was a marketplace without much competition. Interestingly, people across the country are now worried about whether the model hasn’t been oversold, whether cities like New York and Chicago aren’t opening small schools at an unrealistic pace.
The “Islands of Success” study from MDRC. My understanding is that this is the single most requested study they’ve done, which says something. I’m not saying it has yet had traceable impact on policy, but I think it has played a major role in pushing the discussion away from “good schools” toward “good districts.” I think discussion was headed in that direction anyway, but the presence of this visible and accessible study pushed it faster. I also suspect it hit the market at the right time, a moment when many people were frustrated with the slow pace of making change school by school, and perhaps with the fact that once change is made at that level it is often undermined by district-level errors of omission or commission. Ironically, this is not the kind of authoritative research for which MDRC is known. MDRC has specialized in random assignment, but this is pretty much a descriptive study. While MDRC was careful to frame the study as a set of hypotheses, it was clearly seized on by some as offering definitive answers, suggesting how eager people are to have clarity.
Richard Elmore’s study on District Two. I think this work has been a large part of what has stimulated the national move toward instructional coaching, potentially a major shift in how the profession operates. It was also a major influence on the thinking of leaders in Boston as they shaped their relatively successful initiatives in literacy. Some of the most successful regions in the reorganized New York City system are those which are trying to preserve and extend District Two. I know there have been charges (Larry Cuban) that Elmore cooked the books, but I haven’t looked closely at them. It doesn’t matter that much what actually happened. District Two has mythic status.
Speaking of Boston reminds me that my impression is that the Consortium’s work on authentic instruction has had pretty wide interest at the school level, leading to discussions in which school people talk directly about the intellectual content of their assignments. I have been asked about this work in Boston among other places but I haven’t followed it as much as I have some of their other work.
Your questions made me think of another: Which are the best bodies of research that have had little impact? One might put the literature on pre-school for low-income children here, a very convincing literature, I think. And there are some places where it has had impact – the Abbott decision in New Jersey, North Carolina state policy. Maybe the best body of literature that still gets ignored is the literature on professional development. Most districts still seem committed to the isolated, disconnected workshops that the literature condemns. Maybe we need one or two dramatic, highly visible studies. Actually, I fear that part of the problem is that many school leaders have such a shallow understanding of instruction that they cannot appreciate the work. They just don’t see a problem with workshops.
I won’t try to draw many grand lessons from this except to note that research can have impact – I need to be reminded of that from time to time – that many different types of research can matter, and that the social and political context matters. I take it to be one of the major points of the implementation literature that change needs to be facilitated somehow, that there has to be someone taking care of the details at key points in the process. Similarly, it seems safe to say that in urban contexts research use ordinarily has to be facilitated. Few districts and schools have the capacity to simply take information and do something good with it.