Measuring Development: New Tools

Paper Session

Sunday, Jan. 4, 2026 8:00 AM - 10:00 AM (EST)

Philadelphia Marriott Downtown, Grand Ballroom Salon E
This session will be streamed live.
Hosted By:
  • Chair: Pascaline Dupas, Princeton University

A New Perspective on Spatial Heterogeneity in African Development

Pascaline Dupas, Princeton University
Nicolás Suárez Chavarría, Stanford University
Zhongyi Tang, Boston University

Abstract

Access to basic infrastructure is a critical component of quality of life and an important measure of economic development. However, on-the-ground data about infrastructure access, especially in low-income countries, are often sparse and costly to collect. We leverage satellite imagery and survey data to train a machine learning model that predicts access to infrastructure for each 6.72 x 6.72 km area of Africa. The model achieves accuracy levels of 77.1% to 84.7%. We demonstrate the value of this novel dataset with two applications. First, we use a spatial regression discontinuity design to study how much of the heterogeneity in infrastructure access across countries comes from differences in institutional quality, finding a positive effect of modest magnitude that reconciles previously contradictory results in this literature. Second, we study the role of political favoritism in explaining within-country heterogeneity, finding that areas with political ties to current or former presidents have better access to infrastructure.
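
As an illustration only, the prediction step described above could look something like the minimal sketch below. The feature names (nighttime lights, built-up share, road distance), the gradient-boosting classifier, and the synthetic data are assumptions made for exposition, not the authors' actual pipeline or results.

# Hypothetical sketch: predicting binary infrastructure access for grid cells
# from satellite-derived features. All feature names and the model choice are
# illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder features one might extract per 6.72 x 6.72 km cell:
# nighttime light intensity, built-up area share, distance to nearest road.
n_cells = 5000
X = rng.normal(size=(n_cells, 3))
# Synthetic labels standing in for surveyed infrastructure access (0/1).
y = (X @ np.array([1.2, 0.8, -0.5]) + rng.normal(size=n_cells) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")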

How Much Does the Measure Matter? Surveys, Observation, and (Remote) Sensing in Development

Jenny Aker, Cornell University
Jen Burney, University of California-San Diego
B. Kelsey Jack, University of California-Berkeley
Alison Campion, Tufts University
Chuan Liao, Cornell University

Abstract

How much does measurement error matter? Development programs and research initiatives often rely on surveys to measure outcomes. In the context of randomized controlled trials, it is often assumed that this error is classical. But what if such measurement error is correlated with the treatment and hence biases the results? How much does direct observation versus remote sensing help? We use self-reports, direct observation, and remote sensing to assess the direction and magnitude of measurement error in the context of a randomized controlled trial designed to increase adoption of an agricultural technology in Niger. By comparing self-reported adoption with direct enumerator observation, we find that respondents are more likely to overstate adoption when it is directly incentivized. In such cases, direct observation can remove the bias from self-reported measures. Yet direct observation may be costly or infeasible in other settings. We therefore examine the potential for novel data sources, namely remotely sensed imagery, to address reporting bias. Such data do not allow for (easy) direct recovery of levels of agricultural technology adoption, but they do offer "signals" via measurement of downstream outcomes, such as agricultural productivity. We conclude with a framework for assessing the costs and benefits of alternative approaches to addressing measurement error in studies with self-reported outcomes.
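
A minimal sketch of the kind of tabulation this comparison implies is given below. The variable names and toy data are hypothetical; the sketch only shows how an over-reporting rate by treatment arm might be computed, not the paper's estimation.

# Hypothetical sketch: tabulating over-reporting (self-reported adoption that
# the enumerator did not observe) by treatment arm. Data are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "treated":     [1, 1, 1, 0, 0, 0, 1, 0],
    "self_report": [1, 1, 0, 1, 0, 0, 1, 0],
    "observed":    [1, 0, 0, 1, 0, 0, 0, 0],
})

# Over-reporting: respondent claims adoption that was not directly observed.
df["over_report"] = ((df["self_report"] == 1) & (df["observed"] == 0)).astype(int)
print(df.groupby("treated")["over_report"].mean())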

What Types of Poverty Can (and Can't) Be Measured with Mobile Phone Data?

Emily Aiken, University of California-San Diego
Josh Blumenstock, University of California-Berkeley
Sveta Milusheva, World Bank
M. Merritt Smith, University of California-Berkeley

Abstract

Policymakers and researchers are increasingly relying on mobile phone metadata, in conjunction with machine learning algorithms, to estimate the economic characteristics of individuals in settings where survey data are scarce. Yet we still lack systematic guidance on which economic indicators can be recovered accurately and what data are necessary to do so. We provide cross-country evidence on the potential and limits of phone-based prediction through a harmonized analysis of mobile phone data from five countries. In each country, we supplement large databases containing billions of mobile transaction records with a small number of targeted surveys of individual subscribers. We document the accuracy with which a range of economic outcomes can be predicted and illustrate how several key design decisions influence the performance of the predictive model.
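
The sketch below illustrates the general workflow the abstract describes, under the assumption that metadata have already been aggregated to per-subscriber features. The feature names, the ridge regression, and the synthetic outcome are assumptions for illustration and do not reflect the authors' data or model choices.

# Hypothetical sketch: predicting a survey-measured welfare index from
# per-subscriber features derived from call-detail records. Features and data
# are illustrative placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder per-subscriber features: call volume, unique contacts,
# nighttime activity share, average top-up amount.
n_subscribers = 2000
X = rng.normal(size=(n_subscribers, 4))
# Synthetic stand-in for a surveyed welfare index (e.g., an asset index).
y = X @ np.array([0.5, 0.3, -0.2, 0.4]) + rng.normal(scale=0.5, size=n_subscribers)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2_scores.mean():.3f}")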

Representation in Mobility Data in Low and Middle Income Countries

Daniel Bjorkegren, Columbia University

Abstract

How people move has implications for many facets of economic activity. Data from mobile phones have enabled much finer tracking of mobility. However, there are multiple ways to measure mobility using mobile phones; some can only be gathered from smartphones, which may omit many people in emerging economies. What are the tradeoffs in measuring mobility with different sources of data?

Discussant(s)
Stephan Heblich, University of Toronto
Christopher Barrett, Cornell University
James Foster, George Washington University
Noam Angrist, Oxford University