Evolving Morality: Features and Bugs

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

Speakers: Joshua Greene
Festival: 2017
