Research Projects

Look Who's Talking

Pragmatics and Ethics of Large Language Models in Democracy

The rapid pace of technological innovation challenges established knowledge-production and dissemination structures. We propose a philosophical investigation of Large Language Models (LLMs) and their impact on democratic societies. Capable of generating vast quantities of human-like text, LLMs are likely to have significant epistemic effects. Our research question is: What epistemic effects can be expected of LLMs in democratic settings, and how can we improve the odds that this technology will serve democracy rather than harm it? We focus on three areas: (1) LLMs' impact on democratic discourse; (2) their implications for the epistemic authority of expertise and legitimacy of democratic institutions; and (3) the ethical responsibilities of governmental bodies and citizens in shaping and using LLMs.

Duration: 1/2024 - 12/2026

Funding: The Czech Science Foundation (GACR)

Principal Investigator: Miroslav Vacura

Team members: Petr Špecián, Eugenia Stamboliev

Project website: https://llm4dem.vse.cz/

Perspectives of Paternalism in a Democratic Society

Lessons from Behavioral Sciences for Political Philosophy

The project focuses on the implications for political philosophy of the growing body of evidence documenting systematic deviations of people's choices from instrumentally rational behavior. This evidence challenges the assumption that people, as consumers and voters, can be expected to act in their own best interest. Today, various strains of paternalism play a prominent role in the philosophical discussion of the implications of limited rationality. The project examines proposals based on epistemic paternalism and libertarian paternalism. It contributes to the debate on whether, and to what extent, these proposals have the potential to support the long-term sustainability of liberal democratic institutions or to harm it. It explores the opportunities to use paternalist measures to reduce the risk of the spread of dangerous memes, such as conspiracy theories, but also the threats that follow from the potential abuse of these same measures by authorities who could exploit limited rationality to pursue their self-interested ends.

Duration: 1/2019 - 6/2023

Funding: The Czech Science Foundation (GACR)

Principal Investigator: Petr Špecián

Co-Investigators: Filip Tvrdý (2019 - 2022), Petra Chudárková (2022 - 2023)

Media and the Selection of Experts

The project addresses the media filtering of experts. We consider a media firm that asks experts to assess whether a given problem is major or minor so that it can report the problem's type. The media firm decides between generalist and specialist experts. The specialist can identify the type of the problem with certainty; however, finding one is costly. We analyze how equilibria depend on the media firm's search costs and the probability that the true state ultimately reveals itself to the public. We aim to demonstrate that the probability that the true state will ultimately be revealed in a way obvious to the lay public is the critical determinant of the accuracy of the reported expert testimony. Our chief hypothesis is that if the revelation probability is low, strategizing by the experts and media bias may prevent accurate expertise from being broadcast.

Duration: 1/2021 - 6/2022

Funding: IREF - The Institute for Research in Economic and Fiscal Issues

Investigators: Marek Hudík and Petr Špecián

Behaviorally Informed Paternalism and Democratic Values

The project researched the philosophical implications of the expanding evidence that people's choices deviate from instrumentally rational behavior in a systematic manner. The aim was to examine the compatibility of paternalist proposals with the democratic idea of individual normative sovereignty, i.e., with the commitment to accept the individual value judgements incorporated in political choices as binding. I explored the opportunities to use paternalist measures to reduce the risk of the spread of dangerous memes, such as 'fake news,' but also the risks that follow from the potential abuse of these same measures by authorities who could exploit bounded rationality to undermine liberal democracy.

Duration: 2/2021 - 6/2021

Funding: AKTION Austria - Czech Republic

Investigator: Petr Špecián