Data Use and Educational Improvement Major Grants

 


2008 Grant Summaries


Susanna Loeb
Linda Darling-Hammond

Multi-District Collaboration for Evidence-Based Reform
Stanford University

This action research study addresses the important question of how school districts can generate and analyze data and effectively use these data in decision-making, with specific attention to decisions bearing on attracting, developing, and retaining well-qualified personnel. In this project, the Stanford team is working closely with eleven districts that are diverse with respect to size, location, and student population: Albuquerque, Austin, Indianapolis, Knox County (TN), Mapleton (CO), Miami-Dade, Milwaukee, New Haven (CT), North East (TX), San Diego, and San Francisco. By collaborating with the districts, the researchers can understand in situ how districts use information and make decisions, help them to develop models of organizational improvement that serve their unique needs, and ultimately generate findings that can be used by non-participating district leaders, policy makers and other researchers.

There are two main aspects to the project: working with the districts to collect and analyze data (or to make sense of existing data), and synthesizing data and information across the participating districts to create a larger set of findings that answer specific research questions. The research questions include an initial set that is primarily descriptive (e.g., What are the career paths of principals and teachers, and how do these patterns differ across school and student types?), and a second set related to causal effects (e.g., What are the effects of educator characteristics and mobility patterns on outcomes such as school stability and student achievement?). To answer these questions, a range of data are being collected on principals, teachers, and students, and several analytic methods are used, including value-added modeling.
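For readers unfamiliar with the term, a value-added model relates a student’s current achievement to prior achievement and background characteristics, attributing the remaining systematic variation to teachers or schools. The display below is a minimal, generic sketch of such a specification; the notation is illustrative only and does not represent the Stanford team’s actual model.

\[
A_{it} = \lambda A_{i,t-1} + X_{it}\beta + S_{it}\gamma + \theta_{j(i,t)} + \varepsilon_{it}
\]

Here \(A_{it}\) is student \(i\)’s achievement in year \(t\), \(X_{it}\) and \(S_{it}\) are student and school covariates, \(\theta_{j(i,t)}\) is the estimated contribution (the “value added”) of the teacher or school \(j\) serving student \(i\) in year \(t\), and \(\varepsilon_{it}\) is an error term.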

2009 Grant Summaries


Alexander C. McCormick
Jillian Kinzie

Learning to Improve: A Study of Evidence-Based Improvement in Higher Education
Indiana University at Bloomington

Although institutions of higher education are settings where research is highly valued, surprisingly few conduct regular research on their own performance, and how best to assess performance and outcomes in higher education is a long-standing problem. Furthermore, many efforts to assess higher education have been driven by calls to demonstrate accountability and quality, in response to legislatures trying to determine what they are getting for state investments, to advocacy groups concerned about costs, and to businesses interested in the skills of graduates. A focus on assessment for improvement is less visible but quite important. In this study, McCormick and Kinzie attempt to discover how institutions of higher education improve the education they provide and how such improvement is informed by assessment data.

Kinzie and McCormick employ quantitative and qualitative methods to answer these questions. In the quantitative portion of their work, they analyze longitudinal data from about 600 US and Canadian institutions that have participated at least three times in the National Survey of Student Engagement (NSSE). NSSE is administered to a random sample of first-year and senior students and assesses the time and effort respondents put into their studies and the extent to which they are exposed to and engage in empirically proven educational practices. In this study, quantitative analyses are used to identify institutions, across a variety of institution types, that demonstrate improvement on scales constructed from NSSE data. Based on a subset of institutions identified in these analyses, the researchers are also conducting qualitative case studies, which delve into how data have actually been used on each campus. Their approach includes comparisons of cases representing “planned change”—i.e., institutions that have used NSSE campus reports as a basis for systematic changes and subsequently shown improvement—and cases involving “serendipitous change,” where improvements have occurred, but not as a result of a targeted plan.


Jonathan A. Supovitz
Leslie Nabors Oláh

Linking Instruction to Student Performance
University of Pennsylvania

Districts expend a great deal of effort and resources providing student performance data to teachers to inform instruction, and they use those same data in organizational decision-making. Yet it is rarely clear how these outcome data are connected to what goes on in classrooms. This three-year study involves the development and experimental testing of an intervention in 4th-grade and 5th-grade math classes. Through facilitated conversations, teachers are given data on their students’ learning and on their own instructional practices. Following implementation of the intervention, the researchers examine the extent to which it becomes embedded in and influences district practices.

Data on student and teacher performance are gathered using existing instruments—for instruction, the Instructional Quality Assessment toolkit; for student performance, Learnia. Teachers randomly selected for the intervention receive both student data and instructional data, along with support from a trained coach in improving instruction and student performance, while teachers in the control group receive only student data and no coaching support. Across the study’s three years, Supovitz and Oláh aim to address the following research questions:

  • Beyond information on student academic achievement, what is the added value of information on instructional practice and on the relationship between practice and performance for teachers and for school and district leaders?
  • What is the additional value of providing targeted instructional support based on data on practice in relation to performance?
  • How do data on practice in relation to performance influence the participating districts as learning organizations?
  • In what ways do particular components of the intervention work, and how does the design of the intervention change during the course of implementation?


Jeffrey C. Wayman

The Data-Informed District: Implementation and Effects of a District-Wide Data Initiative
University of Texas at Austin

Current education policy gives districts, schools, and educators a difficult task: to take the abundance of school data generated each year and turn it into information that can help improve educational practice. Wayman argues that most districts lack the capacity to use data effectively and broadly. In previous work with K-12 school districts, Wayman and his team have evaluated districts’ information needs, designed and implemented data systems, and assessed the results. In the current study, they are going into three Texas school districts to help them become “data-informed districts.” The relevant data are conceived of broadly to include not only the familiar student learning assessments, but also student background data, human resource information, and educator judgment. The ultimate goal is for all players to integrate data use into their practice in service of the larger goals of the district. Underlying the research are three primary questions:

  1. Which data use practices throughout a particular district are common to other districts, and which are dependent on a particular context?
  2. What is a scalable framework for establishing a data-informed district?
  3. What are the effects on student achievement and educator practice of establishing a data-informed district?


This is a mixed-methods study that proceeds in four phases. Quantitative and qualitative data are being collected from a wide range of stakeholders in the district, in schools, and in the community (e.g., school board members and/or parent groups) and include artifacts, interview and focus group data, educator surveys, and student achievement data. In the first two phases, the research team gathers data, evaluates the districts’ needs, and develops recommendations, drawing on district artifacts and an initial round of interviews and focus groups. In the third, implementation phase, the team works with district personnel to design a plan for implementing the recommendations; this phase also includes data collection through interviews, focus groups, and observation of meetings. The final, evaluation phase looks at early effects of the implementation, using qualitative and quantitative measures of change.

 

2010 Grant Summaries


James Kemple, New York University
John Tyler, Brown University

Study of ARIS Usage and Lessons from the ARIS-Local Rollout

New York City’s Achievement Reporting and Innovation System (ARIS) can be described as both a system for storing and accessing student data and a knowledge management tool that draws from student biographical and enrollment information, formative assessment and state test results, and custom reports. By studying how ARIS is used and the impact of that use on student achievement, Kemple and Tyler hope to learn more about the challenges of implementing school data systems and their effectiveness in improving teaching and learning.

There are three distinct parts to the study, which spans two years. A first part analyzes use of ARIS data through automatically generated activity logs. In this part of the study, the PIs will try to answer questions about usage quantity, type of information used, and association of use with student achievement gains, controlling for teacher, student, and school background characteristics. A second part of the study considers what content is most or least useful to educators, their capacity to use data, and other conditions that affect use (e.g., time, technical support, and professional development), drawing on data from teacher surveys (500 teachers in 25 schools) and administrator interviews, teacher focus groups, and observations in 10 case study schools. A third and final part of the study focuses on the implementation and effects of a new, more interactive version of ARIS, ARIS-Local, based on a new round of data gathering from activity logs, interviews, surveys, and focus groups.
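To make the first strand of the analysis concrete, the association being examined can be sketched in generic form; the notation below is illustrative only and is not the PIs’ actual specification.

\[
\Delta A_{ijt} = \beta\,\mathrm{Use}_{jt} + X_{ijt}\gamma + W_{jt}\delta + \varepsilon_{ijt}
\]

Here \(\Delta A_{ijt}\) is the achievement gain of student \(i\) taught by teacher \(j\) in year \(t\), \(\mathrm{Use}_{jt}\) is a measure of teacher \(j\)’s ARIS activity derived from the system’s logs, and \(X_{ijt}\) and \(W_{jt}\) stand in for the student, teacher, and school background characteristics mentioned above.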


Julie A. Marsh, University of Southern California
Jennifer McCombs, RAND

Bridging the Data-Practice Divide: How Coaches and Data Teams Work to Build Teacher Capacity to Use Data

Although schools are expected to be “data driven” in their decisions and practice, little is known about how to use data effectively in education. A central concern is teachers’ capacity to make sense of data and to apply them to practice. This study seeks to discover how different types of capacity-building agents (CBAs)—data coaches, literacy coaches, or data teams—affect middle-school teachers’ ability to use data in practice.

Marsh and McCombs are using six case studies—based in middle schools in Virginia, Florida, and California—to examine English language arts and social studies teachers’ use of literacy data in interaction with different types of CBAs. Two schools will have data coaches; two will have literacy coaches; and two will use the data team model. Data for the study will be gathered through interviews with teachers (three case teachers at each site), principals, and CBAs; focus groups with non-case teachers; observations (shadowing coaches); logs completed by CBAs and teachers; and school documents. Analysis will be guided by socio-cultural theory, with particular attention to cognitive apprenticeship (in which practitioners model behavior and activities) and communities of practice (in which individuals build relationships and negotiate meaning in a group).

 

2011 Grant Summaries


Morva McDonald
Charles Peck

Evidence and Action: Investigating the Organizational Contexts of Data-use in Programs of Teacher Education
University of Washington

Co-PIs Morva McDonald and Charles Peck plan to examine and document how outcome data are used in teacher education programs. As they explain, “The purpose of the research we propose here is to generate useful new empirical knowledge about the conditions under which programs of higher education successfully use data on program outcomes as a resource for program improvement.” The PIs will partner with the American Association of Colleges for Teacher Education for the first phase of the study—a survey of 750 teacher preparation programs. From that sample, they will select and conduct site visits at ten campuses with programs that they identify as high users of data. Finally, the PIs will write five in-depth case studies that focus on the specific organizational context and activities of those programs.

The research questions they propose to answer are:

  1. To what extent do teacher education programs systematically use program outcome data to make decisions about program improvement?
  2. What organizational policies, structures, and practices are associated with systematic evidence-based decision making in “high data use” teacher education programs?
  3. What are the outcomes of data-based decision making in “high data use” programs?

 


Robert J. Thompson

A Study of the Use of General Education Assessment Findings for Educational Improvement
Duke University

There have been increasing calls for a better accounting of what and how much students are learning in our colleges and universities, though fewer calls to improve teaching and learning in ways that contribute to better outcomes for students. While there has been growing attention to how student learning is assessed in colleges and universities, there has been almost no attention to how those assessment data might inform improvement at various levels, from course to department to institution. This study proposes to better document the factors and processes that affect the use of student assessment data to improve educational practices and student learning, and to test whether participation in a “sense-making” simulation exercise enables faculty and key decision makers to integrate and improve the use of general education assessment findings in their educational practices and decisions. The study brings together a team of researchers—Robert J. Thompson and Kristen Neuschel, Duke University; Daniel Bernstein and Andrea Follmer Greenhoot, University of Kansas; Nancy Mitchell and Jessica Jonson, University of Nebraska-Lincoln—who will carry out multiple cycles of the simulation protocol in general education courses on the three campuses.

The research team proposes to answer the following questions:

  1. To what extent does student learning assessment evidence influence educational decision-making about general education practices/policies?
  2. To what extent do the organizational context, personal factors, and information characteristics of the “sense-making” conceptual framework, independently and collectively, contribute to the use of student assessment data for educational decision-making and improvement in student learning?
  3. Does implementing a “sense-making” simulation exercise improve the use of student assessment data for educational decisions?