About me

I’m a Member of Technical Staff on Anthropic’s Alignment team, working on alignment stress-testing. I am on leave from my PhD in computer science at UC Berkeley, where I was supervised by Stuart Russell.
Previously, I received a master’s in computer science from the University of Toronto (advised by Roger Grosse), and studied mathematics and computer science at the University of Oxford and the Technical University of Berlin. I am grateful for support from fellowships from Coefficient Giving, the Future of Life Institute, and the Center on Long-Term Risk Fund.
My great passion is playing the double bass. Before my research career, I studied the double bass in Stuttgart and Nuremberg and played with the Munich Philharmonic.
Email | CV | Google Scholar | Alignment Forum | Twitter | Send me feedback
Research
Rowan Wang, Johannes Treutlein, Fabien Roger, Evan Hubinger, Sam Marks. Evaluating honesty and lie detection techniques on a diverse suite of dishonest models. Anthropic Alignment Science Blog, 2025. [Tweets].
Mia Taylor, James Chua, Jan Betley, Johannes Treutlein, Owain Evans. School of Reward Hacks: Hacking harmless tasks generalizes to misaligned behavior in LLMs. arXiv preprint, 2025. [Blog, Tweets, Dataset].
Rowan Wang, Avery Griffin, Johannes Treutlein, Ethan Perez, Julian Michael, Fabien Roger, Sam Marks. Modifying LLM Beliefs with Synthetic Document Finetuning. Anthropic Alignment Science Blog, 2025.
Samuel Marks†, Johannes Treutlein†, Trenton Bricken, Jack Lindsey, Jonathan Marcus, Siddharth Mishra-Sharma, Daniel Ziegler, Emmanuel Ameisen, Joshua Batson, Tim Belonax, Samuel R. Bowman, Shan Carter, Brian Chen, Hoagy Cunningham, Carson Denison, Florian Dietz, Satvik Golechha, Akbir Khan, Jan Kirchner, Jan Leike, Austin Meek, Kei Nishimura-Gasparian, Euan Ong, Christopher Olah, Adam Pearce, Fabien Roger, Jeanne Salle, Andy Shih, Meg Tong, Drake Thomas, Kelley Rivoire, Adam Jermyn, Monte MacDiarmid, Tom Henighan, Evan Hubinger†. Auditing language models for hidden objectives. arXiv preprint, 2025. [Blog, Tweets].
Nathan Hu, Benjamin Wright, Carson Denison, Samuel Marks, Johannes Treutlein, Jonathan Uesato, Evan Hubinger. Training on Documents About Reward Hacking Induces Reward Hacking. Anthropic Alignment Science Blog, 2025.
Ryan Greenblatt†, Carson Denison†, Benjamin Wright†, Fabien Roger†, Monte MacDiarmid†, Sam Marks, Johannes Treutlein, Tim Belonax, Jack Chen, David Duvenaud, Akbir Khan, Julian Michael, Sören Mindermann, Ethan Perez, Linda Petrini, Jonathan Uesato, Jared Kaplan, Buck Shlegeris, Samuel R. Bowman, Evan Hubinger†. Alignment faking in large language models. arXiv preprint, 2024. [Blog, Tweets, Video, Code].
Johannes Treutlein*, Dami Choi*, Jan Betley, Samuel Marks, Cem Anil, Roger Grosse, Owain Evans. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. NeurIPS 2024. [Blog, Tweets, Poster, Code].
Caspar Oesterheld, Johannes Treutlein, Roger Grosse, Vincent Conitzer, Jakob Foerster. Similarity-based cooperative equilibrium. NeurIPS 2023.
Johannes Treutlein. Modeling evidential cooperation in large worlds. arXiv preprint, 2023.
Caspar Oesterheld*, Johannes Treutlein*, Emery Cooper, Rubi Hudson. Incentivizing honest performative predictions with proper scoring rules. UAI 2023. [Tweets, Poster].
Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, Kate Woolverton. Conditioning Predictive Models: Risks and Strategies. arXiv preprint, 2023.
Cem Anil*, Ashwini Pokle*, Kaiqu Liang*, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, J. Zico Kolter, Roger Grosse. Path Independent Equilibrium Models Can Better Exploit Test-Time Computation. NeurIPS 2022.
Timon Willi*, Alistair Letcher*, Johannes Treutlein*, Jakob Foerster. COLA: Consistent Learning with Opponent-Learning Awareness. ICML 2022. [Slides, Code].
Julian Stastny, Maxime Riché, Alexander Lyzhov, Johannes Treutlein, Allan Dafoe, Jesse Clifton. Normative disagreement as a challenge for Cooperative AI. NeurIPS 2021 StratML and Cooperative AI workshops.
William MacAskill, Aron Vallinder, Caspar Oesterheld, Carl Shulman, Johannes Treutlein. The Evidentialist’s Wager. The Journal of Philosophy, Volume 118, Issue 6, June 2021. [Penultimate Draft].
Johannes Treutlein, Michael Dennis, Caspar Oesterheld, Jakob Foerster. A New Formalism, Method and Open Issues for Zero-Shot Coordination. ICML 2021. [Poster, Video, Code].
Johannes Treutlein, Caspar Oesterheld. A typology of Newcomblike problems. Manuscript, 2017.
*Equal contribution. †Core research contributor.