I'm a PhD candidate in Management Science at the MIT Sloan School of Management and am on the job market in Fall 2023.
My research focuses on the economics of digitization with an emphasis on content platforms, social influence, and Human-AI interaction.
Prior to graduate school, I was a research assistant at Microsoft Research and an analyst at Wells Fargo Securities. I hold an SM in Management Research from the MIT Sloan School of Management in addition to a BS in Business Administration and BA in Economics from the University of North Carolina at Chapel Hill.
A copy of my CV is available here.
Job Market Paper
Digital platforms increasingly curate their content through personalized algorithmic rankings. Given users' limited attention and platforms' reliance on advertising, platforms have an incentive to promote content that maximizes each user's predicted engagement. However, managers must also balance total engagement against the quality of content promoted on the platform, both to address advertiser concerns over brand safety and to satisfy policymakers. This paper studies how maximizing each user's engagement affects the quality of the content users engage with, in order to understand the extent to which engagement-maximizing algorithms promote and incentivize low-quality content. In addition, I evaluate how the ranking algorithm itself can be designed to promote and encourage engagement with high-quality content. To do so, I study the Reddit politics community and exploit a novel discontinuity, revealed in Reddit's code repository, in how the ranking algorithm orders posts to identify the effect of a post's rank on the number of comments it receives. I use this discontinuity to identify a discrete choice model of users' commenting decisions and estimate the distribution of news that users are exposed to and comment on under a personalized algorithm that maximizes engagement. This counterfactual demonstrates that personalization drives a wedge between users in the quality of content (measured by the credibility rating of an article's publisher) that they are exposed to and engage with. Under the personalized ranking algorithm, users who ordinarily engage with high-credibility publishers continue to do so. However, users who ordinarily engage with lower-credibility publishers are exposed to and engage with an even larger share of low-credibility publishers under the personalized engagement-maximizing algorithm.
Finally, I evaluate a credibility-aware algorithm that explicitly promotes credible news publishers and find that moving to the credibility-maximizing algorithm reduces total engagement by 5.0%, a meaningful decline. Yet platforms can increase the share of the average user's engagement with high-credibility publishers by 6.8 percentage points at the cost of only a 2.0% decrease in engagement. These findings suggest that algorithmic interventions can be a useful tool for promoting higher-quality content, helping to satisfy both advertisers and policymakers.
Information Frictions and Heterogeneity in Valuations of Personal Data (with Avinash Collis, Ananya Sen, and Alessandro Acquisti), 2021
Combining Human Expertise with Artificial Intelligence: Experimental Evidence from Radiology (with Nikhil Agarwal, Pranav Rajpurkar, and Tobias Salz), 2023
Although artificial intelligence (AI) algorithms have matched the performance of human experts on several predictive tasks, humans may have access to valuable contextual information that is not incorporated into AI predictions. Humans who combine AI predictions with their own information could therefore outperform both humans alone and AI alone. Using an experiment with professional radiologists that varies the availability of AI support and contextual information, we show that (i) providing AI predictions does not uniformly increase diagnostic quality, and (ii) providing contextual information does increase quality. We find that radiologists do not realize the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating. Radiologists' errors in belief updating can be explained by a model in which they partially underweight the AI's information relative to their own and do not account for the correlation between their own information and the AI's. We then design a collaborative system between radiologists and AI. Our results show that, unless the mistakes we document can be corrected, the optimal solution involves delegating cases either to humans or to AI, but rarely to a human assisted by AI.
I study how the introduction of a new non-personalized news feed impacts user engagement quantity, quality, and diversity on Reddit. In June 2018, Reddit introduced the News tab on iOS devices that surfaces popular content from a curated list of news-related communities. I leverage this natural experiment to identify the causal effects of the News tab on iOS user engagement in a difference-in-differences design. I find that the News tab increases the share of iOS devices that engage with news-related content and the new engagement is not meaningfully different in quality from existing engagement. Additionally, I find that the diversity of engagement within news categories and within articles from publishers across the political spectrum increases as a result of the News tab. These results suggest that non-personalized feeds can be an important tool to mitigate algorithmic filter bubbles.
Social Influence and News Consumption (with Carlos Molina), 2023
Populations in several countries have become decidedly more polarized in recent decades. Many believe that social media, which facilitates interactions within echo chambers, is partly to blame. These interactions can trigger two distinct effects on the demand for biased news. First, individuals can be influenced by their peers' news consumption, for example, because they value keeping a news diet that is ideologically congruent with that of their peers. Second, individuals might purposefully skew their news consumption in anticipation that their peers will observe these choices. We design a field experiment on Twitter (renamed X in 2023) to separately identify the importance of both mechanisms. Our main result documents that, through these two mechanisms, online interactions with like-minded peers are not a major contributor to the demand for polarized news content. Our experiment induces variation in an individual's perceptions of the political leanings of their peers' news consumption and the visibility of their own news consumption to their social media followers. We track participants' sharing behavior and news consumption, proxied by the news outlets they follow. We find no evidence to support the first channel: our experimental variation influences respondents' beliefs about the news diets of their peers, but they do not respond by changing their own news diets. In contrast, we find that participants alter their news diet considerably when they believe their peers will observe these choices, as in the second channel. Interestingly, individuals primarily wish to present themselves as following a balanced set of news. Therefore, our paper uncovers one mechanism through which social media can attenuate the demand for polarizing content: as these platforms amplify the visibility of user interactions, which increases the importance of social image concerns, users adjust their news consumption to be more balanced.
Providing normative information increases intentions to accept a COVID-19 vaccine (with Avinash Collis, Kiran Garimella, M Amin Rahimian, Sinan Aral, and Dean Eckles) Nature Communications, 2023
Global survey on COVID-19 beliefs, behaviors, and norms (with Avinash Collis, Kiran Garimella, M. Amin Rahimian, Stella Babalola, Nina Gobat, Dominick Shattuck, Jeni Stolow, Dean Eckles, and Sinan Aral) (project website) Nature Human Behaviour, 2022
Interdependence and the Cost of Uncoordinated Responses to COVID-19 (with David Holtz, Michael Zhao, Seth G. Benzell, Cathy Cao, M. Amin Rahimian, Jeremy Yang, Jennifer Allen, Avinash Collis, Tara Sowrirajan, Dipayan Ghosh, Yunhao Zhang, Paramveer Dhillon, Christos Nicolaides, Dean Eckles and Sinan Aral). Proceedings of the National Academy of Sciences, 2020