You should be afraid of people, not A.I. The bad actors are us.
Like all technology, artificial intelligence can be used for good, and it can be used for evil. What little federal regulation the United States has governing technology and the internet was written before artificial intelligence existed in its current form, and as a society, we're flying blind and in way over our heads as we enter this next phase of digital life. What could we possibly do to help point these constantly evolving tools in the right direction, anticipate the biggest risks, and avoid repeating the overblown optimism of social media's early days? Philanthropist and former Google CEO Eric Schmidt and the head of MIT's College of Computing, Daniel Huttenlocher, explain how generative A.I. is built and taught to create content, and where it could go wrong. The two co-authors of the book "The Age of A.I.: And Our Human Future" point out the human biases built into A.I. systems, and the dystopian (and some utopian) use cases of these tools in politics, warfare and other societal realms. Biographer and former Aspen Institute President Walter Isaacson moderates the conversation.