Bnaya Dreyfuss

I am a Ph.D. candidate in Economics at Harvard University. In Fall 2026, I will join the Department of Economics at the University of Pittsburgh as an Assistant Professor. My research applies insights from behavioral economics to study policy design and technology adoption.

bdreyfuss@g.harvard.edu
Littauer Center, 1805 Cambridge Street, Cambridge, MA 02138

Working Papers

Equilibrium Neglect and Political Feasibility

Job Market Paper
When voters underappreciate the equilibrium effects of policies, efficient reforms can become politically infeasible. I develop a general framework of equilibrium neglect and use it to construct a portable, generally applicable remedy. Using off-path contingent rebates, a social planner can design a policy that (i) implements the social optimum, (ii) is budget balanced, and (iii) is politically feasible even in the presence of equilibrium-neglecting voters. In a survey experiment on congestion pricing with a probability sample from six US metros, I show that respondents are overly pessimistic about the traffic-reduction effects of congestion pricing, and that this pessimism correlates with strong opposition to the policy. Adding contingent compensation that pays out if traffic remains high significantly increases public support, especially among potential compensation recipients. In surveys of civil servants and state legislators, I show how equilibrium neglect affects the policymaking process.

Calibrated Coarsening: Designing Information for AI-Assisted Decisions

with Ruru Hoong
Artificial intelligence (AI) is increasingly used to aid human decision-making across critical applications, but errors in human probabilistic reasoning often undermine its effectiveness. This raises a central design question: how should AI input be provided to humans so as to improve decision-making outcomes? We propose calibrated coarsening—partitioning the signal space into fewer cells at chosen thresholds—as an approach that (i) ensures humans retain final decision authority, (ii) modifies signals without deception, and (iii) adapts flexibly to diverse biases and contexts. In a randomized experiment with professional loan specialists, we show that coarsening AI signals at the theory-derived threshold significantly improves decision-making outcomes relative to both the human-only and uncoarsened-AI benchmarks.

Human Learning About AI

with Raphaël Raux
Extended abstract at EC '25
We study how people form expectations about the performance of artificial intelligence (AI) and the consequences for AI adoption. Our main hypothesis is that people rely on human-relevant task features when evaluating AI, treating AI failures on human-easy tasks, and successes on human-difficult tasks, as highly informative about its overall performance. In lab experiments, we show that this projection of human difficulty onto AI predictably distorts subjects' beliefs and can lead to suboptimal adoption, since failing human-easy tasks need not imply poor overall performance for AI. We find evidence of projection in a field experiment with an AI that gives parenting advice: potential users draw strong inferences from answers that are equally uninformative but less similar to the humanly expected answer, significantly reducing trust and future engagement.

On the Workings of Tribal Politics

with Assaf Patir and Moses Shayo
We study economies in which an endogenous subset of voters supports certain ("tribal") candidates regardless of their policies, and politicians choose whether to run on a tribal ticket. Non-tribal politics is characterized by centrist policies, while tribal politics produces extreme policies, typically from the right, even though the tribal base comes from the lower middle class. Allowing policy in one period to determine the income distribution in the next, the economy either converges to a steady state or cycles between tribal and non-tribal regimes, depending on the vote share of the minority group, the scope for redistributive policy, and the salience of inter-group disparities.

Published Papers

Deferred Acceptance with News Utility

with Ofer Glicksohn, Ori Heffetz, and Assaf Romm
Management Science, 72(3), 2090–2110, 2026
Can incorporating expectations-based reference dependence (EBRD) reduce seemingly dominated choices in the Deferred Acceptance (DA) mechanism? We run two experiments (total N = 500) where participants are randomly assigned into one of four DA variants—{static, dynamic} × {student proposing, student receiving}—and play ten simulated large-market school-assignment problems. While a standard, reference-independent model predicts the same straightforward behavior across all problems and variants, a news-utility EBRD model predicts stark differences across variants and problems. As the EBRD model predicts, we find that (i) across variants, dynamic student receiving leads to significantly fewer deviations from straightforward behavior, (ii) across problems, deviations increase with competitiveness, and (iii) within specific problems, the specific deviations predicted by the EBRD model are indeed those commonly observed in the data.

Additive vs. Subtractive Earning in Shared Human-Robot Work Environments

with Ori Heffetz, Guy Hoffman, Guy Ishai, and Alap Kshirsagar
Journal of Economic Behavior & Organization, 217, 692–704, 2024
The performance of robots working alongside humans might positively or negatively affect humans' earnings, depending on the economic setting. In a new real-effort lab experiment, we study the impact of economic conditions in hybrid human-robot workplaces on workers' effort provision and attitudes. In a previous subtractive-earnings experiment (Kshirsagar et al., 2019), subjects' expected earnings depended negatively on a robot's performance, while in our new additive-earnings experiment, they depend on it positively. Both experiments are guided by a past human-human experiment and by a model of expectations-based reference-dependent preferences. As the theory predicts and as previously found, increasing robot performance discourages effort under subtractive earnings—but, as the theory also predicts and as we find here, this effect disappears and perhaps reverses under additive earnings. Additionally, increasing robot performance negatively affects subjects' perceptions of themselves and of their robotic coworker under subtractive earnings, but we find that these effects weaken or reverse under additive earnings. These findings suggest a relationship between workers' earning structures and robots' performance that should be considered when designing hybrid workplaces.

Expectations-Based Loss Aversion May Help Explain Seemingly Dominated Choices in Strategy-Proof Mechanisms

with Ori Heffetz and Matthew Rabin
American Economic Journal: Microeconomics, 14(4), 515–555, 2022
Deferred acceptance (DA), a widely implemented algorithm, is meant to improve allocations: under classical preferences, it induces preference-concordant rankings. However, recent evidence shows that—in both real, large-stakes applications and experiments—participants frequently play seemingly dominated, significantly costly strategies that avoid small chances of good outcomes. We show theoretically why, with expectations-based loss aversion, this behavior may be partly intentional. Reanalyzing existing experimental data on random serial dictatorship (a restriction of DA), we show that such reference-dependent preferences, with a degree and distribution of loss aversion that explain common levels of risk aversion elsewhere, fit the data better than no-loss-aversion preferences.

Monetary-Incentive Competition Between Humans and Robots: Experimental Results

with Alap Kshirsagar, Guy Ishai, Ori Heffetz, and Guy Hoffman
Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 95–103, 2019
In a controlled experiment, participants competed against an autonomous robot in a monotonous task for real monetary incentives. We manipulated the robot's performance and the monetary incentive level across ten rounds. Consistent with theories of loss aversion, participants exerted less effort when their robotic competitor performed better. The results have implications for designing workplaces in which teams of people and robots work together.

Curriculum Vitae