Asymmetries in (AI) Ethics
Why?
Because they give us insights into key structural features of morality.
An asymmetry often points towards some more foundational principle or morally significant fact and distinction. Something is different and we have to take a closer look.
In our new paper, my colleague Joshua Brandt and I defend a new asymmetry. We wonder: can you be as nice to your friends and family as you can be mean to your bullies and enemies?
We don't think so - these two kinds of personal relationships are distinctively asymmetrical. Why? Because, we argue, morality has a harmonious propensity to preserve positive relationships and dissolve negative ones.
Read more here: https://lnkd.in/eDS-fqEB
Another good recent example in AI ethics comes from Sven Nyholm et al., who show that generative AI gives rise to a credit-blame asymmetry.
In a nutshell, we can blame people for harmful errors in their use of generative AI - even if they put very little effort into their work. But people do not seem to deserve credit for text generated without much skill or effort, such as ChatGPT-generated exam papers.
Upshot?
Existing work on responsibility in AI does not straightforwardly apply to GenAI - something normatively different is going on. Check out their paper here: https://lnkd.in/er4m-GNA
#ethics #asymmetries #philosophy
Full article can be found here: https://www.hr-brew.com/stories/2023/05/17/train-employees-to-work-alongside-ai-not-replace-them-with-it-two-ai-ethicists-say
