ORCID

Jeremy Bertomeu, https://orcid.org/0000-0001-6746-5767

Edwige Cheynel, https://orcid.org/0000-0001-5763-3253

Radhika Lunawat, https://orcid.org/0000-0002-3501-1426

Mario Milone, https://orcid.org/0009-0007-3839-0525

Language

English (en)

Publication Date

4-18-2026

Abstract

This study examines how human participants and large language models resolve ethical dilemmas in financial reporting. Participants act in the role of a CFO deciding whether to discontinue a prior policy of biased reporting; the bias is known to and corrected by investors, whereas a change in policy may temporarily mislead them. We find that models are less amenable to competing ethical considerations than humans and exhibit a greater preference for truthful reporting. Moreover, they respond more consistently to institutional ethical guidance, while humans become more indecisive under pressure from management. The models exhibit more internal coherence between their moral judgments and their policy prescriptions, and they are judged more persuasive by humans. Finally, humans follow model advice when it is accompanied by an explanation, but they discount (and sometimes react against) advice offered without one. Our findings offer evidence on the misalignment between artificial intelligence and humans in tackling subjective reporting dilemmas and can guide the incorporation of such tools into corporate governance.

Document Type

Working Paper

Author's School

Olin Business School
