There are widespread fears that artificial intelligence will harm our social and emotional intelligence, empathy and sense of individual agency by 2035, according to a new survey published Wednesday by Elon University's Imagining the Digital Future Center.
The national survey asked 1,005 US adults to rate how they think AI will impact human capacities and behaviors, including moral judgment, self-identity and confidence. In every area, respondents believed the effect of AI tools and systems over the next decade would be more negative than positive.
(Chart: The general public rated AI's impact on key human traits as negative. Source: Elon University's Imagining the Digital Future Center)

In terms of the bigger picture, the researchers found that US adults expected AI to have a mixed impact on "the essence of being human" over the coming decade. Two in five (41%) said AI will provide as much good as it will harm, with 25% believing AI changes will be mostly for the worse. Only 9% said AI will change humanity for the better.
"The grand narratives about AI have gone in both directions," said Lee Rainie, director of the Imagining the Digital Future Center and one of the report's authors. For as many stories as there are about AI's outstanding abilities, many more show how it can hurt people. The respondents' mixed views on the technology could reflect that. "They do have a sense of these warring narratives," Rainie told CNET in an interview.
And the stories are everywhere, as AI grows to play a bigger role in education, workplaces and health care. Tech companies are spending billions of dollars to develop the most advanced AI. Google has integrated its Gemini AI into every part of its business, and ChatGPT's weekly active users reached a record high of 700 million in August. As AI tools and systems become more capable and integrated into our lives, it's important to evaluate their impact on how we think, act and get things done.
Concerns over critical thinking, mental health
The same survey questions were given to a group of tech pioneers, builders and analysts earlier this year, and there were observable differences in how those experts perceive AI's impact on humanness compared with the public. In general, the experts were less pessimistic about AI's effect on human traits, while the public expressed more concern about AI harming our intelligence and cognitive abilities, such as the ability to think critically, make decisions and solve problems.
Interest in how AI affects the brain's learning processes is not new. An MIT study in July found significant differences in brain activity between people who wrote with AI and those who didn't. Those who used AI reported a "superficial fluency" but didn't retain a deep understanding or sense of ownership over their knowledge. The study renewed uneasiness over the role AI could play in education and learning.
A key theme in recent studies is the concern that people could increasingly delegate important thought processes, like decision-making and problem-solving, to AI. AI technology is getting better at handling work tasks, and the rise of agentic AI makes it easier for chatbots to complete tasks independently. These semi-autonomous tools can be more efficient than humans in some cases. However, AI isn't foolproof and can hallucinate, or make up false information, so letting it take the reins on important decisions can have negative consequences.
Another major concern is the impact of AI on users' mental health. Individual well-being has been a growing point of conversation as more examples emerge of AI proving an inadequate replacement for therapists. Teenagers and children are particularly vulnerable, with more than a few high-profile cases of AI chatbots enabling self-harm and suicide. The issue has drawn the attention of Congress and advocacy groups, which are examining the effectiveness of AI guardrails in preventing misuse and abuse.