EdWorkingPapers
Many dimensions of teacher working conditions influence both teacher and student outcomes, yet analyses of schools’ overall working conditions are challenged by high correlations among the dimensions. Our study overcame this challenge by applying latent profile analysis to Virginia teachers’ perceptions of school leadership, instructional agency, professional growth opportunities, rigorous instruction, managing student behavior, family engagement, physical environment, and safety. We identified four classes of schools: Supportive (61%), Unsupportive (7%), Unstructured (22%), and Structured (11%). The patterns of these classes suggest schools may face tradeoffs, such as accepting less instructional rigor or weaker discipline in exchange for greater teacher autonomy. Teachers’ satisfaction and stated retention intentions were correlated with their school’s working conditions class, and school contextual factors predicted class membership. By identifying previously unseen profiles of teacher working conditions and considering the implications of being a teacher in each, decisionmakers can provide schools with targeted supports and investments.
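Latent profile analysis groups schools by their pattern of scores across the eight working-conditions dimensions. As a rough illustration of the idea only (not the authors' code, data, or software; LPA is usually run in dedicated tools such as Mplus or R's tidyLPA), a Gaussian mixture model with diagonal covariance is a close analogue. The sketch below fits 1- to 6-class solutions to placeholder school-level scores and compares them by BIC.

```python
# Stylized sketch of latent profile analysis via a Gaussian mixture model.
# The data are simulated placeholders; dimension names follow the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

dimensions = ["leadership", "agency", "growth", "rigor",
              "behavior", "family", "environment", "safety"]

rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(dimensions)))   # placeholder standardized school scores

# Compare 1- to 6-class solutions by BIC, as is typical in LPA model selection.
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(X)
    print(f"{k} classes: BIC = {gmm.bic(X):.1f}")

# Class membership for a preferred solution (e.g., 4 classes, as in the paper).
best = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(X)
profiles = best.predict(X)
print(np.bincount(profiles) / len(profiles))  # share of schools assigned to each class
```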
While policymakers have demonstrated considerable enthusiasm for “science of reading” initiatives, the evidence on the impact of related reforms when implemented at scale is limited. In this pre-registered, quasi-experimental study, we examine California’s recent initiative to improve early literacy across the state’s lowest-performing elementary schools. The Early Literacy Support Block Grant (ELSBG) provided teacher professional development grounded in the science of reading as well as aligned supports (e.g., assessments and interventions), new funding (about $1,000 per student), spending flexibility within specified guidelines, and expert facilitation and oversight of school-based planning. We find that ELSBG generated significant (and cost-effective) improvements in ELA achievement in its first two years of implementation (0.14 SD) as well as smaller spillover improvements in math achievement.
Using data from nearly 1.2 million Black SAT takers, we estimate the impacts of initially enrolling in a Historically Black College or University (HBCU) on educational, economic, and financial outcomes. We control for the college application portfolio and compare students with similar portfolios and levels of interest in HBCUs and non-HBCUs who ultimately make divergent enrollment decisions, often enrolling in a four-year HBCU in lieu of a two-year college or no college. We find that students initially enrolling in HBCUs are 14.6 percentage points more likely to earn a BA degree and have 5 percent higher household income around age 30 than those who do not enroll in an HBCU. Initially enrolling in an HBCU also leads to $12,000 more in outstanding student loans around age 30. We find that some of these results are driven by an increased likelihood of completing a degree from relatively broad-access HBCUs and of completing relatively high-earning majors (e.g., STEM). We also explore new outcomes, such as credit scores, mortgages, bankruptcy, and neighborhood characteristics around age 30.
We examine access to high school Ethnic Studies in California, a new graduation requirement beginning in 2029-30. Data from the California Department of Education and the University of California Office of the President indicate that roughly 50 percent of public high school students in 2020-21 attended a school that offered Ethnic Studies or a related course, but as of 2018-19, only 0.2 percent of students were enrolled in such a course. Achieving parity with economics, a current graduation requirement, would require more than doubling the number of Ethnic Studies teachers relative to 2018-19. We also examine school and community factors that predict offering Ethnic Studies and provide descriptive information about the Ethnic Studies teaching force across the state.
High-quality preschool programs are heralded as effective policy solutions to promote low-income children’s development and lifelong wellbeing. Yet evaluations of recent preschool programs produce puzzling findings, including negative impacts and results that diverge from, and are weaker than, those of demonstration programs implemented in the 1960s and 70s. We provide potential explanations for why modern preschool programs have become less effective, focusing on changes in instructional practices and counterfactual conditions. We also address popular theories that likely do not explain weakening program effectiveness, such as lower preschool quality and low-quality subsequent environments. The field must take seriously the smaller positive, null, and negative impacts from modern programs and strive to understand why effects differ and how to improve program effectiveness through rigorous, longitudinal research.
COVID-19 upended schooling across the United States, but with what consequences for the state-level institutions that drive most education policy? This paper reports findings on two related research questions. First, what were the most important ways state government education policymakers changed schools and schooling from the moment they began to reckon with the seriousness of COVID-19 through the first full academic year of the pandemic? Second, how deep did those changes go – are there indications the pandemic triggered efforts to make lasting changes in states’ education policymaking institutions? Using multiple-methods research focused on Colorado, Florida, Louisiana, Michigan, and Oregon, we documented policies enacted during the period from March 2020 through June 2021 across states and across sectors (traditional and choice) in three COVID-19-related education policy domains: school closings and reopenings, budgeting and resource allocation, and assessment and accountability systems. We found that states quickly enacted radical changes to policies that had taken generations to develop. They mandated sweeping school closures in Spring 2020 and then adopted a diverse array of school reopening policies in the 2020/2021 school year. States temporarily modified their attendance-based funding systems and allocated massive federal COVID-19 relief funds. Finally, states suspended annual student testing, modified the wide array of accountability policies and programs linked to the results of those tests, and adapted to new assessment methods. These crisis-driven policy changes deeply disrupted long-established patterns and practices in education. Despite this, we found that state education governance systems remained resilient, and that at least during the first 16 months of the pandemic, stakeholders showed little interest in using the crisis to trigger more lasting institutional change. We hope these findings enable state policymakers to better prepare for future crises.
Educational researchers often report effect sizes in standard deviation units (SD), but SD effects are hard to interpret. Effects are easier to interpret in percentile points, but conversion from SDs to percentile points involves a calculation that is not intuitive to educational stakeholders. We point out that, if the outcome variable is normally distributed, simply multiplying the SD effect by 37 usually gives an excellent approximation to the percentile-point effect. For students in the middle three-fifths of a normal distribution, the approximation is always accurate to within 1.6 percentile points (and usually accurate to within 1 percentile point) for effect sizes of up to 0.8 SD (or 29 to 30 percentile points). Two examples show that the approximation can work for empirical effects estimated from real studies.
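To make the rule of thumb concrete, the sketch below (not the paper's code) compares the exact and approximate conversions, assuming a normally distributed outcome. It shows only the simplest case of a student starting at the median, where an effect of d SD moves the student to the 100*Phi(d) percentile, so the exact gain is 100*(Phi(d) - 0.5); the paper's accuracy claims cover students throughout the middle three-fifths of the distribution.

```python
# Compare the exact percentile-point gain (median student, normal outcome)
# with the "multiply by 37" approximation for several effect sizes.
from scipy.stats import norm

def exact_gain_from_median(effect_sd):
    """Exact percentile-point gain for a student starting at the 50th percentile."""
    return 100 * (norm.cdf(effect_sd) - 0.5)

def rule_of_37(effect_sd):
    """Approximate percentile-point gain: multiply the SD effect by 37."""
    return 37 * effect_sd

for d in (0.10, 0.25, 0.50, 0.80):
    print(f"{d:.2f} SD: exact {exact_gain_from_median(d):5.1f} pp, "
          f"rule of 37 {rule_of_37(d):5.1f} pp")
```

For an effect of 0.8 SD, the exact gain is about 28.8 percentile points and the approximation gives 29.6, consistent with the 29 to 30 percentile points cited above.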
School principals are viewed as critical actors in improving student outcomes, but there remain important methodological questions about how to measure principals’ effects. We propose a framework for measuring principals’ contributions to student outcomes and apply it empirically using data from Tennessee, New York City, and Oregon. As commonly implemented, value-added models misattribute to principals changes in student performance caused by unobserved time-varying factors over which principals exert minimal control, leading to biased estimates of individual principals’ effectiveness and an overstatement of the magnitude of principal effects. Based on our framework, which better accounts for bias from time-varying factors, we find that little of the variation in student test scores or attendance is explained by persistent effectiveness differences between principals. Across contexts, the estimated standard deviation of principal value-added is roughly 0.03 student-level standard deviations in math achievement and 0.01 standard deviations in reading.
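As a stylized illustration of the bias described in this abstract (simulated data and a deliberately naive estimator, not the authors' framework, data, or model specification), the sketch below generates school-year shocks that principals do not control and shows how attributing school-year performance to the principal overstates the true spread of principal effects.

```python
# Naive principal "value-added" on simulated data: school-year shocks outside
# the principal's control inflate the apparent variation in principal effects.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_schools, n_years, n_students = 200, 6, 100

rows = []
for s in range(n_schools):
    principal_effect = rng.normal(0, 0.03)       # true principal effect (SD = 0.03)
    for t in range(n_years):
        school_year_shock = rng.normal(0, 0.10)  # time-varying factor, not the principal
        scores = principal_effect + school_year_shock + rng.normal(0, 1, n_students)
        rows.append({"school": s, "year": t, "mean_score": scores.mean()})

df = pd.DataFrame(rows)

# Naive estimator: average school-year means by school (one principal per school here).
naive_va = df.groupby("school")["mean_score"].mean()
print("True SD of principal effects:      0.030")
print(f"SD of naive value-added estimates: {naive_va.std():.3f}")
```

Because the naive estimates absorb the school-year shocks, their standard deviation is roughly twice the true 0.03 SD in this simulation, illustrating why a framework that accounts for time-varying factors yields smaller estimated principal effects.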