Evolving Morality: Features and Bugs

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

Speakers: Joshua Greene
Festival: Aspen Ideas 2017
