Evolving Morality: Features and Bugs

Human morality is a set of cognitive devices designed to solve social problems. The original moral problem is the problem of cooperation, the “tragedy of the commons” — me vs. us. But modern moral problems are often different, involving what Harvard psychology professor Joshua Greene calls “the tragedy of commonsense morality,” or the problem of conflicting values and interests across social groups — us vs. them. Our moral intuitions handle the first kind of problem reasonably well, but often fail miserably with the second kind. The rise of artificial intelligence compounds and extends these modern moral problems, requiring us to formulate our values in more precise ways and adapt our moral thinking to unprecedented circumstances. Can self-driving cars be programmed to behave morally? Should autonomous weapons be banned? How can we organize a society in which machines do most of the work that humans do now? And should we be worried about creating machines that are smarter than us? Understanding the strengths and limitations of human morality can help us answer these questions.

Speakers: Joshua Greene
Festival: 2017