Emotions and Tech: What Happens When We Code Our Inner Lives?
What if technology could read and respond to emotions the way humans do? It turns out that it already can, and in this session you’ll see a mind-bending live demonstration of software that recognizes our feelings based on facial cues. We’ll also explore how everyday tech like text messaging can get in the way of important social connection, but also how applications like Crisis Text Line and Talkspace help people in grave danger get quick access to mental health care. This Deep Dive tackles questions of ethics and social interaction, and examines the good, the bad, and the surprising ways tech is in our heads.
Nicholas Epley: Behavioral science professor, University of Chicago
Elizabeth Dunn: Psychology professor, University of British Columbia
Rana el Kaliouby: CEO of Affectiva
Nancy Lublin: Founder of Crisis Text Line
Neil Leibowitz: Chief medical officer of Talkspace
Adrienne LaFrance: Executive editor of The Atlantic
Artificial intelligence (AI) is going to revolutionize how we interact with everything and everyone around us, and innovators like Rana el Kaliouby, CEO of Affectiva, want to make sure AI development is done right. As the IQ of machines advances rapidly, el Kaliouby and Affectiva are working to make sure AI’s EQ (emotional quotient) keeps up.
Big Idea: “The merger of IQ and EQ in technology is inevitable.” (Rana el Kaliouby)
The ability of machines to read your thoughts is still a long way off, but el Kaliouby and her team are finding ways to read the outward manifestations of emotions and moods: facial expressions, body cues, and linguistic patterns. If AI can glean even a rough sense of what we’re feeling, that emotional insight can complement its already vast knowledge base. Check out how Affectiva’s software can already read dozens of facial expressions in real time:
We’re already getting glimpses of what a world dominated by AI will look like, says el Kaliouby. Although successful facial recognition software is today mostly limited to unlocking your smartphone, companies are deep into researching applications that will touch every part of our lives.
Did you know?
AI could help you land a job by analyzing your speech patterns, it could stop you from falling asleep at the wheel, and it could help autistic children better read social cues.
This is very exciting, says el Kaliouby, but we need to take a step back and contemplate the ethical implications of AI technologies. How do people consent to their data being used once they step out of their houses? Should consumers get compensated if companies monetize the data collected through AI technologies? It’s high time we start having those conversations, says el Kaliouby.
Big Idea: “We as a community need to get together and decide on the rules for ethically developing and deploying [AI] technology.” (Rana el Kaliouby)
There’s a problem, says el Kaliouby, when we build facial recognition software trained mostly on data collected from the faces of white men. AI engineers don’t set out to create biased software, but el Kaliouby explains how it happens anyway:
“We solve for what we know,” says el Kaliouby. If we’re not forward thinking about who’s making AI software and how it’s being made, bias will continue to creep into AI.
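To make the mechanism concrete, here is a deliberately simple, entirely hypothetical sketch. It uses one invented feature and made-up numbers (none of this comes from Affectiva): a nearest-centroid “smile detector” whose training set is 95% group A. Because the learned class centroids are dominated by group A, a typical group-B smile lands on the wrong side of the decision boundary.

```python
# Toy illustration (hypothetical numbers): how a skewed training set
# biases even a trivial classifier. One made-up feature, "lip-corner
# raise", with different typical values for two demographic groups.

# Hypothetical mean feature values per (group, expression).
SMILE_MEAN = {"A": 1.0, "B": 0.4}    # group B's smiles register lower
NEUTRAL_MEAN = {"A": 0.0, "B": 0.0}

# Training mix: 95% group A, 5% group B ("we solve for what we know").
# Class centroids are the mix-weighted averages of the group means.
MIX = {"A": 0.95, "B": 0.05}
smile_centroid = sum(MIX[g] * SMILE_MEAN[g] for g in MIX)      # 0.97
neutral_centroid = sum(MIX[g] * NEUTRAL_MEAN[g] for g in MIX)  # 0.0

def classify(x: float) -> str:
    """Label a feature value by whichever class centroid is closer."""
    if abs(x - smile_centroid) < abs(x - neutral_centroid):
        return "smile"
    return "neutral"

# A typical group-A smile (1.0) is classified correctly...
print(classify(SMILE_MEAN["A"]))
# ...but a typical group-B smile (0.4) falls below the decision
# boundary (midpoint ~0.485) and is misread as neutral.
print(classify(SMILE_MEAN["B"]))
```

The point of the toy model is only that the error is structural, not malicious: nothing in the code singles out group B, yet the under-representation in training data alone produces a systematically worse outcome for that group.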
Voice is a powerful messenger. Beyond relaying basic facts, voices carry a lot of information about what someone is really thinking. Will we lose our ability to read tones of voice if texting keeps replacing voice conversations? Social psychologist Elizabeth Dunn, behavioral scientist Nicholas Epley, and The Atlantic executive editor Adrienne LaFrance discuss what individuals and society could lose as machines work their way deeper into our everyday lives:
This excerpt has been lightly edited for clarity.
Elizabeth Dunn: Phones make things easy. People love easy stuff — easy is good. But the massive benefit of that convenience was largely undercut by a loss of social connection. So one thing that tells us is that phones aren't just doing one thing. They're changing our lives in multiple ways at the same time. Those effects may partially cancel each other out, and whether the phone ultimately hurts you or helps you may depend on the particular features of the situation.
Adrienne LaFrance: Looking even farther into the future, as we outsource more of our human-to-human work... Do you think we'll end up forgetting how to have conversations or forgetting how to read facial cues? What will we lose as machines do more of the work for us?
Nicholas Epley: One thing I'm worried about is that we'll lose things that we're not aware we're losing... Pretty much anytime new tech comes on the scene, there's a period of years where we fumble around with it until we figure out how to use it. So when cars were first built, they were deadly because we didn't design them with all sorts of safety features. Now they're quieter and safer than they used to be. One concern I have about some of this [AI] technology, and in particular about the social consequences of the technology, is that it's not always obvious to us that we're losing those social effects.
New technologies are emerging faster than governments can figure out how to regulate them, so it’s often up to individual companies to determine best practices to protect privacy, compensate users, and integrate their products into society. And the stakes are high — companies are collecting data at alarming rates, and we’re already seeing AI in places that seemed impossible just a couple of years ago. Talkspace chief medical officer Neil Leibowitz and Crisis Text Line CEO Nancy Lublin discuss how they think about society’s responsibility to rein in tech’s influence on our lives: