Hello! I’m a Staff Research Scientist at Google DeepMind, where I work in the Ethics Research Team. My work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics, and human rights.

More generally, I’m interested in AI and human values, and in ensuring that technology works well for the benefit of all. I’ve contributed to several projects that promote responsible innovation in AI, including the creation of the ethics review process at NeurIPS.

Before joining DeepMind, I taught moral and political philosophy at Oxford University and worked for the United Nations Development Programme in Lebanon and Sudan.

Research

AI Ethics

  • Author(s): Stevie Bergman, Nahema Marchal, John Mellor, Shakir Mohamed et al

    Journal: Scientific Reports

  • Author(s): Iason Gabriel, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks et al

    Journal: arXiv

  • Author(s): A Stevie Bergman, Lisa Anne Hendricks, Maribeth Rauh, Boxi Wu et al

    Journal: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency

  • Author(s): Laura Weidinger, Kevin R McKee, Richard Everett, Saffron Huang et al

    Journal: Proceedings of the National Academy of Sciences

  • Author(s): A Kasirzadeh, I Gabriel

    Journal: Philosophy & Technology

  • Author(s): I Gabriel, V Ghazavi

    Journal: The Oxford Handbook of Digital Ethics, ed. Carissa Veliz (OUP, 2022)

  • Author(s): A Birhane, W Isaac, V Prabhakaran, M Díaz, MC Elish, I Gabriel et al

    Journal: ACM EAAMO

  • Author(s): V Prabhakaran, M Mitchell, T Gebru, I Gabriel

    Journal: ACM EAAMO poster

  • Author(s): Iason Gabriel

    Journal: Daedalus 151 (2), 218-231

  • Author(s): I Gabriel

    Journal: Minds and Machines 30 (3), 411-437


Technical Reports and Papers

  • Author(s): Laura Weidinger, Joslyn Barnhart, Jenny Brennan et al

    Journal: arXiv

  • Author(s): Laura Weidinger, Maribeth Rauh, Nahema Marchal, Arianna Manzini et al

    Journal: arXiv

  • Author(s): Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong et al

    Journal: arXiv

  • Author(s): M Rauh, J Mellor, J Uesato, PS Huang, J Welbl, L Weidinger et al

    Journal: NeurIPS – arXiv preprint arXiv:2206.08325

  • Author(s): Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin et al

    Journal: 2022 ACM Conference on Fairness, Accountability, and Transparency, 214-229

  • Author(s): Amelia Glaese, Nat McAleese, Maja Trębacz et al

    Journal: arXiv preprint arXiv:2209.14375

  • Author(s): Z Kenton, T Everitt, L Weidinger, I Gabriel et al

    Journal: arXiv preprint arXiv:2103.14659

  • Author(s): Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican et al

    Journal: arXiv preprint arXiv:2112.11446


General Philosophy

  • Author(s): I Gabriel

    Journal: Utilitas 30 (1), 32-53

  • Author(s): H Lazenby, I Gabriel

    Journal: The Philosophical Quarterly 68 (271), 265-285

  • Author(s): I Gabriel

    Journal: Journal of Applied Philosophy 34 (4), 457-473


Talks and Podcasts

  • Author(s): Iason Gabriel

    Host: Schwartz Reisman Institute

    Summary: The development of general-purpose foundation models such as Gemini and GPT-4 has paved the way for increasingly advanced AI assistants. While early assistant technologies, such as Amazon’s Alexa or Apple’s Siri, used narrow AI to identify and respond to speaker commands, more advanced AI assistants demonstrate greater generality, autonomy and scope of application. They also possess novel capabilities such as summarization, idea generation, planning, memory, and tool-use—skills that will likely develop further as the underlying technology continues to improve.

    Advanced AI assistants could be used for a range of productive purposes, including as creative partners, research assistants, educational tutors, digital counsellors, or life planners. However, they could also have a profound effect on society, fundamentally reshaping the way people relate to AI. The development and deployment of advanced assistants therefore requires careful evaluation and foresight. In particular, we may want to ask:

    What might a world populated by advanced AI assistants look like? 

    How will people relate to new, more capable, forms of AI that have human-like traits and with which they’re able to converse fluently? 

    How might these dynamics play out at a societal level, in a world with millions of AI assistants interacting with one another on their users’ behalf?

    This talk will explore a range of ethical and societal questions that arise in the context of assistants, including value alignment and safety, anthropomorphism and human relationships with AI, and questions about collective action, equity, and overall societal impact.

    Read
  • Author(s): John Danaher

    Host: Philosophical Disquisitions

    Summary: With John Danaher for the podcast Philosophical Disquisitions (1 hr 8 mins)

    Listen
  • Author(s): I Gabriel

    Host: UC Berkeley Social Science Matrix

    Summary: Author meets critics event with David Robinson and Deirdre Mulligan at the UC Berkeley Social Science Matrix (October 2022)

    Watch
  • Author(s): Matt Clifford

    Host: Thoughts in Between

    Summary: Matt Clifford for the podcast Thoughts in Between (48 mins)

    Listen
  • Author(s):

    Host:

    Summary: UCL x DeepMind Deep Learning Lecture with Chongli Qin (27 min)

    Watch
  • Author(s): Lucas Perry

    Host: Podcast – The Future of Life Institute

    Summary: With Lucas Perry for The Future of Life Institute Podcast (1 hr 45 mins)

    Listen
  • Author(s):

    Host: Princeton University

    Summary: A public lecture at Princeton University in November 2019

    Listen

Media

  • Author(s): Alison Snyder

    Host: Axios

    Summary:

    Read
  • Author(s): Iason Gabriel

    Host: DeepMind Technical Blog

    Summary: DeepMind Blog exploring work on value alignment and language models.

    Read
  • Author(s): Matthew Hutson

    Host: The New Yorker

    Summary: Exploration in The New Yorker of the ethics requirements introduced at NeurIPS and wider questions surrounding responsibility in the AI industry.

    Read
  • Author(s): Davide Castelvecchi

    Host: Nature

    Summary: Write-up in Nature of the requirement to include social impact statements alongside research submissions at NeurIPS in 2020.

    Read
  • Author(s): Iason Gabriel

    Host: DeepMind Technical Blog

    Summary: DeepMind Blog exploring value alignment research and approaches that draw upon political theory.

    Read
  • Author(s): Iason Gabriel

    Host: Medium

    Summary: An early exploration of the way in which insights from political philosophy, in particular those of intersectional analysis, can cast light on the challenge of algorithmic injustice.

    Read
  • Author(s): World Bank Blog

    Host: Let’s Talk About Development

    Summary: Blog exploring the psychology of charity fund-raising and competitive dynamics within the sector.

    Read
  • Author(s): Derek Thompson

    Host: The Atlantic

    Summary: Article in The Atlantic exploring what it means to “do the most good” and whether a focus on systemic change could be relevant to this project.

    Read