By Ian Bogost
This article originally appeared on The Atlantic.
Years ago, when I still used Facebook a lot, I posted a really mean reply to a friend’s post. A real friend, too, not just a “Facebook friend.” The aftermath was ruinous. My friend forgave me, thank goodness, but I was so horrified at my gaffe that I purged my account, shedding thousands of friends. I stopped posting my own personal updates, too. The whole experience made me not trust myself with social media. I’ve had similar experiences on Twitter, where it’s easy to go off the rails and regret it later.
Everyone who uses social media has probably had an experience like this. You say something that you regret, which hurts someone you know or—worse—upsets a whole mess of people you don’t.
Rana el Kaliouby, the CEO of Affectiva, an artificial-intelligence start-up, has an idea that might help remedy that. Speaking on a panel at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic, she suggested: What if, when you posted something on Twitter or Instagram or another service, the platform gave you feedback? For example, “You just upset 10,000 people.”
There’s some precedent. Twitter and Instagram have heart buttons, and Facebook lets you express (emoji-level) sorrow or anger at posts. But those are self-reported reactions, and they don’t capture the emotional response of everyone who chooses not to take explicit action. Reading laughter or anger directly off your Facebook friends’ real faces, in real time, would be a different story.
El Kaliouby’s company makes AI systems that analyze human facial expressions and attempt to draw inferences about people’s emotional states. Using machine learning, the systems try to automatically identify “facial action units” based on categories devised by the psychologists Paul Ekman and Wallace V. Friesen in the 1970s.
After analyzing millions of faces, el Kaliouby’s team has applied its technology to a number of fields. In the automotive industry, computer-vision systems mated to Affectiva’s algorithms track drivers’ emotional states in the hopes of reducing distracted driving. In marketing, the company offers advertisers a way to automatically judge consumer reactions to products and services. And in health, Affectiva has contributed to the development of autism therapies that can help children learn emotional cues.
“I really think there’s an opportunity for a nonverbal social-media system,” el Kaliouby told me. The company tried to make a consumer-facing service itself, but shut it down for lack of traction—a social network really requires a massive network effect to work, and launching a brand-new one in a crowded market is difficult. Imagine a version of Twitter or Facebook, for example, that uses something like Affectiva’s facial-analysis system, fed by front-facing smartphone cameras, to judge emotional responses to the posts you make on those services. In addition to—or maybe instead of—like or share counts, the app could offer an abstract rendition, maybe in emoji form, of the approximate emotional reaction of people who viewed your post.
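In the simplest case, the aggregation el Kaliouby imagines could be a tally of per-viewer emotion labels rendered as emoji. Here is a minimal sketch of that idea; the emotion categories, emoji mapping, and function names are entirely hypothetical, since the article does not specify Affectiva’s actual output format:

```python
from collections import Counter

# Hypothetical mapping from detected emotion categories to emoji.
# Affectiva's real category names and output format may differ.
EMOJI = {"joy": "😂", "anger": "😠", "sadness": "😢", "surprise": "😮"}

def summarize_reactions(reactions):
    """Aggregate per-viewer emotion labels into an emoji summary.

    `reactions` is a list of emotion labels, one per viewer whose
    face was analyzed while the post was on screen.
    """
    counts = Counter(reactions)
    # Render the most common reactions first.
    return " ".join(
        f"{EMOJI.get(emotion, '?')} x{n}"
        for emotion, n in counts.most_common()
    )

print(summarize_reactions(["joy", "joy", "anger", "joy", "sadness"]))
```

A real system would of course sit atop the much harder computer-vision step of producing those labels from camera frames; the sketch only covers the final, app-facing summary.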
The potential rewards of this technology are no less significant than its privacy risks, which raise the question of how much social or safety benefit is worth the trade-off of surveillance. In health care, in advertising, and in automobiles, AI systems operate behind the scenes, amplifying both their power and their perilousness. But el Kaliouby’s notion of a consumer-facing version of her company’s technology points to other ways it, and similar systems, could have a positive impact—if only because social-media apps are so awful as they stand.
The views and opinions of the author are his own and do not necessarily reflect those of the Aspen Institute.