My research focuses on key areas in ethics
I work on a wide range of issues in ethics. I am interested in fundamental theoretical questions in normative ethics as well as more applied issues, particularly as they arise in organizations. Some of my current research interests include:
Ethics of AI and Technology: AI ethics and digital ethics; responsible AI governance, AI assurance and auditing, operationalization of AI ethics and responsible innovation, robot-human relationships and relationship goods;
Normative Ethics: The justification of partiality to intimates and adversaries; in particular, the justification of parental partiality and the content of parental rights, moral options, associative duties, and the connection between partiality and meaning in life;
Ethics of Information: Misinformation and disinformation; ethical and epistemic risks in combating disinformation; ethical risks of AI and machine learning approaches for targeting disinformation;
Business and Organizational Ethics: Organizational ethics, speak-up and voice, and epistemic injustice; (remote) ethical culture and purpose; ethical decision-making (under uncertainty); AI risk mitigation and effective change management for risk mitigation.
You can also find my work on PhilPeople and Google Scholar. A condensed overview of my published papers can be found here.
Ethics of AI and Technology
We Need Accountability in Human-AI Agent Relationships
npj Artificial Intelligence (forthcoming), with Geoff Keeling, Arianna Manzini, and Amanda McCroskery.
Abstract: We argue that accountability mechanisms are needed in human-AI agent relationships to ensure alignment with user and societal interests. We propose a framework according to which AI agents’ engagement is conditional on appropriate user behaviour. The framework incorporates design strategies such as distancing, disengaging, and discouraging.
Epistemic Deference to AI
In Bridging the Gap Between AI and Reality: Second International Conference, AISoLA 2024, Crete, Greece, October 30–November 3, 2024, Selected Papers (forthcoming).
Abstract: When should we defer to AI outputs over human expert judgment? Drawing on recent work in social epistemology, I motivate the idea that some AI systems qualify as Artificial Epistemic Authorities (AEAs) due to their demonstrated reliability and epistemic superiority. I then introduce AI Preemptionism, the view that AEA outputs should replace rather than supplement a user’s independent epistemic reasons. I show that classic objections to preemptionism – such as uncritical deference, epistemic entrenchment, and unhinging epistemic bases – apply in amplified form to AEAs, given their opacity, self-reinforcing authority, and lack of epistemic failure markers. Against this, I develop a more promising alternative: a total evidence view of AI deference. According to this view, AEA outputs should function as contributory reasons rather than outright replacements for a user’s independent epistemic considerations. This approach has three key advantages: (i) it mitigates expertise atrophy by keeping human users engaged, (ii) it provides an epistemic case for meaningful human oversight and control, and (iii) it explains the justified mistrust of AI when reliability conditions are unmet. While demanding in practice, this account offers a principled way to determine when AI deference is justified, particularly in high-stakes contexts requiring rigorous reliability.
Relational Norms for Human-AI Cooperation
arXiv (manuscript): 1–76, with Brian D. Earp, Sebastian Porsdam Mann, et al.
Abstract: How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions – such as assistant, mental health provider, tutor, or romantic partner – it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI's capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.
Digital Duplicates and Collective Scarcity
Philosophy and Technology (2025): 1–8.
Abstract: Digital duplicates reduce the scarcity of individuals and thus may impact their instrumental and intrinsic value. I here expand upon this idea by introducing the notion of collective scarcity, which pertains to the limitations faced by social groups in maintaining their size, cohesion and function.
Moral Imagination for Engineering Teams: The Technomoral Scenario
The International Review of Information Ethics 34 (1) (2024): 1–8, with Geoff Keeling, Amanda McCroskery, David Weinberger, Kyle Pedersen, and Ben Zevenbergen. https://informationethics.ca/index.php/irie/article/view/527.
Abstract: “Moral imagination” is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. We here present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we have applied it in our work with product and engineering teams. Technomoral scenarios are fictional narratives that raise ethical issues surrounding the interaction between emerging technologies and society. Through facilitated role-playing and discussion, participants are prompted to examine their own intentions, articulate justifications for actions, and consider the impact of decisions on various stakeholders. This process helps developers to re-envision their choices and responsibilities, ultimately contributing to a culture of responsible innovation.
The Ethics of Advanced AI Assistants
arXiv (manuscript): 1–256, with Iason Gabriel, Arianna Manzini, Geoff Keeling, et al.
Abstract: This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user, across one or more domains, in line with the user's expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders.
A Framework for Assurance Audits of Algorithmic Systems
Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (2024): 1–21, with Khoa Lam, Borhane Blili-Hamelin, Jovana Davidovic, Shea Brown, and Ali Hasan.
Abstract: An increasing number of regulations propose the notion of AI audits as an enforcement mechanism for achieving transparency and accountability for AI systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has few agreed-upon practices, procedures, taxonomies, and standards. We propose the criterion audit as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values. We discuss the necessary conditions for the criterion audit, and provide a procedural blueprint for performing an audit engagement in practice. We illustrate how this framework can be adapted to current regulations by deriving the criteria on which bias audits for hiring algorithms can be performed, as required by the recently effective New York City Local Law 144 of 2021. We conclude by offering critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing, where robust guardrails against quality assurance issues are only starting to emerge. Our discussion, informed by our experience of performing these audits in practice, highlights the critical role that an audit ecosystem plays in ensuring the effectiveness of this methodology.
The impact of intelligent decision-support systems on humans’ ethical decision-making: A systematic literature review and an integrated framework
Technological Forecasting & Social Change 204 (2024): 1–19, with Franziska Poszler.
Abstract: With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. In particular, we identify resulting consequences on an individual level (i.e., deliberation enhancement, motivation enhancement, autonomy enhancement and action enhancement) and on a societal level (i.e., moral deskilling, restricted moral progress and moral responsibility gaps). We carve out two distinct methods/operation types (i.e., process-oriented and outcome-oriented navigation) that decision-support systems can deploy and postulate that these determine to what extent the previously stated consequences materialize. Overall, this study holds important theoretical and practical implications by establishing clarity in the conceptions, underlying mechanisms and (directions of) influences that can be expected when using particular IDSSs for ethical decisions.
Engaging Engineering Teams Through Moral Imagination: A Bottom-Up Approach for Responsible Innovation and Ethical Culture Change in Technology Companies
AI and Ethics (2023): 1–15, with Amanda McCroskery, Ben Zevenbergen, Geoff Keeling, Sandra Blascovich, Kyle Pedersen, Alison Lentz, and Blaise Aguera y Arcas.
Abstract: We propose a 'Moral Imagination' methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted over 40 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering a culture of ethical awareness, deliberation, and decision-making in technology design such as company principles, ethics and privacy review procedures, and compliance controls. We characterize some distinctive benefits of our methodology for the technology sector in particular.
The Current State of AI Governance
Whitepaper, Algorithmic Bias Lab (2023), with J. Davidovic, A. Hasan, K. Lam, M. Regan, and S. Brown.
Abstract: As AI, machine learning algorithms, and algorithmic decision systems (ADS) continue to permeate every aspect of our lives and our society, the question of AI governance becomes increasingly important. This report examines the current state of internal governance structures and tools across organizations in the private and public sectors, both large and small. It provides one of the first robust and broad insights into the state of AI governance in the United States and Europe.
Algorithmic Bias and Risk Assessments: Lessons from Practice
Digital Society 1, 14 (2022): 1–15, with A. Hasan, S. Brown, J. Davidovic, and M. Regan.
Abstract: In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of assessments: an ethical risk assessment and a narrower, technical algorithmic bias assessment. We explain how the two assessments depend on each other, highlight the importance of situating the algorithm within its particular socio-technical context, and discuss a number of lessons and challenges for algorithm assessments and, potentially, for algorithm audits. The discussion builds on our team’s experience of advising and conducting ethical risk assessments for clients across different industries in the last four years. Our main goal is to reflect on the key factors that are potentially ethically relevant in the use of algorithms, and draw lessons for the nascent algorithm assessment and audit industry, in the hope of helping all parties minimize the risk of harm from their use.
Combating Disinformation with AI: Epistemic and Ethical Challenges
IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (2021): 1–5, with Ted Lechterman.
Abstract: AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
Normative Ethics
Partiality and Meaning
Ethical Theory and Moral Practice 28 (2025): 79–92.
Abstract: Why do relationships of friendship and love support partiality, but not relationships of hatred or commitments of racism? Where does partiality end and why? I take the intuitive starting point that important cases of partiality are meaningful. I develop a view whereby meaning is understood in terms of transcending self-limitations in order to connect with things of external value. I then show how this view can be used to distinguish central cases of legitimate partiality from cases of illegitimate partiality and how it puts pressure on the traditional way of thinking about partiality.
The Enmity Relationship as Justified Negative Partiality
In Monika Betzler & Jörg Löschke (eds.), The Ethics of Relationships: Broadening the Scope. Oxford University Press (forthcoming): 1–27, with J. Brandt.
Abstract: Existing discussions of partiality have primarily examined special personal relationships between family, friends, or co-nationals. The negative analogue of such relationships – for example, the relationship of enmity – has, by contrast, been largely neglected. This chapter explores this adverse relation in more detail and considers the special reasons generated by it. We suggest that enmity can involve justified negative partiality, allowing members to give less consideration to each other’s interests. We then consider whether the negative partiality of enmity can be justified through projects or the value inherent in the relationship, following two influential views about the justification of positive partiality. We argue that both accounts of partiality can be conceptually extended to the negative analogue, but doing so brings into focus the problems with such accounts of the grounds of partiality.
Partiality, Asymmetries, and Morality’s Harmonious Propensity
Philosophy and Phenomenological Research, 109 (2024): 1–43, with J. Brandt.
Abstract: We argue for asymmetries between positive and negative partiality. Specifically, we defend four claims: i) there are forms of negative partiality that do not have positive counterparts; ii) the directionality of personal relationships has distinct effects on positive and negative partiality; iii) the extent of the interactions within a relationship affects positive and negative partiality differently; and iv) positive and negative partiality have different scope restrictions. We argue that these asymmetries point to a more fundamental moral principle, which we call Morality's Harmonious Propensity. According to this principle, morality has a propensity toward preserving positive relationships and dissolving negative ones.
A Project View of the Right to Parent
Journal of Applied Philosophy, 41 (2024): 804–826.
Abstract: The institution of the family and its importance have recently received considerable attention from political theorists. Leading views maintain that the institution’s justification is grounded, at least in part, in the non-instrumental value of the parent-child relationship itself. Such views face the challenge of identifying a specific good in the parent-child relationship that can account for how adults acquire parental rights over a particular child—as opposed to general parental rights, which need not warrant a claim to parent one’s biological progeny. I develop a view that meets this challenge. This Project View identifies the pursuit of a parental project as a distinctive non-instrumentally valuable good that provides a justification for the family and whose pursuit is necessary and sufficient for the acquisition of parental rights. This view grounds moral parenthood in a normative relation as opposed to a biological one, supports polyadic forms of parenting, and provides plausible guidance in cases of assisted reproduction.
The Ethics of Partiality
Philosophy Compass, 17 (8) (2022): 1–15.
Abstract: Partiality is the special concern that we display for ourselves and other people with whom we stand in some special personal relationship. It is a central theme in moral philosophy, both ancient and modern. Questions about the justification of partiality arise in the context of enquiry into several moral topics, including the good life and the role in it of our personal commitments; the demands of impartial morality, equality, and other moral ideals; and commonsense ideas about supererogation. This paper provides an overview of the debate on the ethics of partiality through the lens of the domains of permissible and required partiality. After outlining the conceptual space, I first discuss agent-centred moral options that concern permissions not to do what would be impartially optimal. I then focus on required partiality, which concerns associative duties that go beyond our general duties to others and require us to give special priority to people who are close to us. I discuss some notable features of associative duties and the two main objections that have been raised against them: the Voluntarist and the Distributive objections. I then turn to the justification of partiality, focusing on underivative approaches and reasons-based frameworks. I discuss the reductionism and non-reductionism debate: the question whether partiality is derivative or fundamental. I survey arguments for ‘the big three’, according to which partiality is justified by appeal to the special value of either projects, personal relationships, or individuals. I conclude by discussing four newly emerging areas in the debate: normative transitions of various personal relationships, relationships with AI, epistemic partiality, and negative partiality, which concerns the negative analogue of our positive personal relationships.
Other-Sacrificing Options
Philosophy and Phenomenological Research, 101 (3) (2020): 612–629.
Abstract: I argue that you can be permitted to discount the interests of your adversaries even though doing so would be impartially suboptimal. This means that, in addition to the kinds of moral options that the literature traditionally recognises, there exist what I call other-sacrificing options. I explore the idea that you cannot discount the interests of your adversaries as much as you can favour the interests of your intimates; if this is correct, then there is an asymmetry between negative partiality toward your adversaries and positive partiality toward your intimates.
Restricted Prioritarianism or Competing Claims?
Utilitas, 29 (2) (2017): 137–152.
Abstract: I here settle a recent dispute between two rival theories in distributive ethics: Restricted Prioritarianism and the Competing Claims View. Both views mandate that the distribution of benefits and burdens between individuals should be justifiable to each affected party in a way that depends on the strength of each individual’s separately assessed claim to receive a benefit. However, they disagree about what elements constitute the strength of those claims. According to restricted prioritarianism, the strength of an individual’s claim is determined in ‘prioritarian’ fashion by both what she stands to gain and her absolute level of well-being, while, according to the competing claims view, the strength of a claim is also partly determined by her level of well-being relative to others with conflicting interests. I argue that, suitably modified, the competing claims view is more plausible than restricted prioritarianism.
Business Ethics
Beyond the Ivory Tower? The Practical Role of Ethicists in Business
In Christian Hoffmann (ed.), Artificial Intelligence, Entrepreneurship and Risk Management: Reflections and Positions at the Crossroads between Philosophy and Management. Springer (2025): 1–22.
Abstract: “AI Ethics”, “Digital Ethics” or “Corporate Digital Responsibility” – ethics in business, especially with the rise of Artificial Intelligence (AI), is now in vogue. But how, if at all, can ethicists meaningfully contribute to practical business challenges? I examine the value that resources from moral philosophy can bring to ethical issues in business, particularly the technology sector. I show that there is a specific need for sharpened ethical acumen in so-called “grey areas”, in which laws and regulation do not provide definite answers to the ethical challenges businesses face. I argue that ethicists can distinctively help businesses navigate grey areas by strengthening their ethical capabilities and functions, which concern an organization’s ethical awareness, deliberation, decision-making, and commitment. I conclude by discussing some practical examples of how ethicists can strengthen these capabilities.
Getting Clear on Corporate Culture
Journal of the British Academy, 6 (s1) (2018): 155–184, with N. Hsieh, D. Rodin, and M. L. A. Wolf-Bauwens.
Abstract: This article provides a review of existing literature on corporate culture, drawing on work from the disciplines of business ethics, management studies, psychology, anthropology, and economics, as well as interviews with business leaders. It surveys different definitions of corporate culture and proposes a framework for capturing their commonalities. It then discusses the importance of culture so conceived, as well as widely used frameworks for measuring it. The article also presents different views on how culture can be operationalised and moulded within an organisation. The article concludes by discussing the relationship between corporate culture and corporate purpose and highlighting gaps in the literature which would profit from further research.
Medical Ethics
Moral Parenthood and Gestation: Replies to Cordeiro, Murphy, Robinson and Baron
Journal of Medical Ethics 51 (2025): 100–101.
Abstract: I reply to comments and objections to my arguments against gestationalist accounts of moral parenthood.
Moral Parenthood: Not Gestational
Journal of Medical Ethics 51 (2025): 87–91.
Abstract: Parenting our biological children is a centrally important matter, but how, if at all, can it be justified? According to an influential contemporary line of thinking, the acquisition by parents of a moral right to parent their biological children should be grounded by appeal to the value of the intimate emotional relationship that gestation facilitates between a newborn and a gestational procreator. I evaluate two arguments in defence of this proposal and argue that both are unconvincing.
Reviews
Moral Desert and Parental Rights
Economics & Philosophy, 35 (2) (2019): 339–347.
Review of The Moral Foundations of Parenthood, by Joseph Millum. Oxford University Press, 2018, ix + 158 pages.
