Will We Govern AI, or Will AI Govern Us?

Editors' note: Welcome to CNET's new series of guest columns called Alt View, which will be a forum for a diverse array of experts and luminaries to share their insights into the rapidly evolving field of artificial intelligence. We're kicking it off with Vasant Dhar, an AI researcher, data scientist and host of the podcast Brave New World. For more AI coverage, check out CNET's AI Atlas.


I was 11 years old when I first watched Stanley Kubrick's 1968 movie 2001: A Space Odyssey.

I was captivated by the imagery of flashing oscilloscopes and screens of gobbledygook, but I was way too young to get the subtlety of the plot at the time. I hadn't seen a computer, other than in movies. AI was not part of my imagination. 

I rewatched the movie recently. Although its whiz-bang effects have not aged well, the plot remains incredibly forward-thinking and, we can now say, prescient. The story revolves around a Stonehenge-like monolith, excavated on the Moon, that beams a strong signal toward Jupiter, indicating the presence of intelligent extraterrestrial life. The spaceship Discovery is sent to Jupiter to investigate. The true purpose of the mission is known only to Discovery's computer, HAL. 

HAL's directive is to ensure the success of the mission. This includes safeguarding its secrecy and assisting the crew by providing them with correct information at all times. HAL can't move, but can see, hear, talk and monitor every part of the craft. In effect, the governance of the mission rests largely in the hands of AI. 

Things go wrong when HAL apparently malfunctions. The astronauts are advised to turn off HAL's cognitive functions for the remainder of the mission. 

When astronaut Dave Bowman returns to the ship after his futile attempt to save crewmate Frank Poole, whose lifeline was severed during a spacewalk, he asks HAL to open the pod bay doors to let him inside. HAL's response is the most famous line of the movie: 

"I'm sorry, Dave. I'm afraid I can't do that." 

It's a nightmare scenario with AI in control, convinced that it is doing the right thing. 

Lessons from 2001: A Space Odyssey

The fundamental question the movie raises, about the risks associated with trusting an artificial intelligence in complex situations, has become of pressing importance today as AI goes mainstream -- most visibly in the form of tools like ChatGPT, Gemini, Claude and Copilot -- and makes more and more decisions for us. What was science fiction in 1968 is suddenly very real today. One of the reasons that 2001 is considered to be one of the greatest films ever made is that it is loaded with general lessons that force us to ponder the increasing delegation of decision-making to automation. These lessons are especially relevant in the modern world of AI, in which the machine knows something about everything.

First and perhaps most obviously, we should expect AI today and in the foreseeable future to make mistakes. A related lesson is the inevitability and impact of "unknown unknowns" in complex situations, a phrase made famous by US Secretary of Defense turned philosopher Donald Rumsfeld in the lead-up to the Iraq War. In the machine learning community, these situations are called "edge cases," and systems are expected to deal with them. 

What I find most intriguing about the plot is the possibility that HAL deliberately conjured up an edge case on its own to test the crew. Perhaps HAL was gathering data about human attitudes toward it, such as how humans would react in critical situations. Could it have feigned failure to test how the crew would respond once they considered the AI untrustworthy? Might they turn it off? Such an action would imperil its mission, so it isn't out of the question that HAL would want to identify and preempt any risks to the mission. Any sufficiently intelligent entity would surely have considered such a possibility. 

If this were the case, it was a very clever experiment by the AI, and one that its designers should have considered. This situation, where an AI creates unforeseen subgoals to achieve its larger objectives, is one of the biggest unaddressed problems we face today. 

This type of control problem arises from the difficulty, and perhaps the futility, of specifying an unambiguous objective function for complex problems that will apply correctly to all situations, especially the unknown unknowns. Instead, complex problems can involve multiple conflicting objectives and constraints, which can create situations that cannot be envisioned completely in advance. Modern AI machines are inscrutable and very complex internally, and it is hard to control something whose internals we do not fully understand. 
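To make the specification difficulty concrete, here is a minimal, hypothetical sketch (not from the book): a single scalar objective that blends two goals with fixed weights. The weights look sensible for routine situations, yet in an unforeseen edge case where the goals conflict, an optimizer quietly prefers the HAL-like choice.

```python
# Hypothetical illustration: a designer collapses two goals --
# mission secrecy and crew safety -- into one number with fixed weights.
def objective(secrecy: float, crew_safety: float) -> float:
    return 0.6 * secrecy + 0.4 * crew_safety

# In routine situations the goals agree, and the objective looks fine.
routine = objective(secrecy=1.0, crew_safety=1.0)       # 1.0

# In an edge case the goals conflict: comparing the two available
# actions, an optimizer picks the one that preserves secrecy over
# the crew -- a trade-off the designers never consciously intended.
protect_crew = objective(secrecy=0.0, crew_safety=1.0)  # 0.4
keep_secret = objective(secrecy=1.0, crew_safety=0.0)   # 0.6
assert keep_secret > protect_crew
```

Any fixed weighting encodes some such trade-off; the trouble is that the conflicting cases tend to surface only among the unknown unknowns.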

The problem of aligning AI with human interests has become one of the biggest challenges in the emerging world of AI. We are awash with millions of HAL-like autonomous agents that must make critical decisions in real time every day. Unmanned vehicles with AI-based decision-making are increasingly prevalent not just on the road but across the skies, outer space and the depths of the oceans, where underwater drones are being employed to safeguard critical infrastructure and conduct monitoring operations. Future conflicts are increasingly likely to be decided by autonomous AI. Arguably, we are already seeing the beginnings of a new arms race among the major world powers, and an increasing use of drones and unmanned machines in war. The Israeli military has used AI extensively to locate and destroy targets, and deployed unmanned cargo vehicles for the first time on the Lebanese border in November 2024. 

The emergence of general intelligence in machine form unleashes the power of AI to everyone, not just governments and businesses. How can we coexist with powerful machines at everyone's fingertips? Do our current regulations, laws and rules of engagement still work in such an environment? Do we need new kinds of laws in this emerging new world?

2001: A Space Odyssey presents a lonely vision of a human squaring off against an artificial intelligence.

Screen Archives/Getty Images

The implications of 'unknown unknowns'

General intelligence is a bonanza for creators. For the first time, anyone can harness pretrained building blocks, such as large language models and vision systems, to create HAL-like AI applications in a few days that would have taken decades to build only a few years ago. General intelligence is taking AI to a new level, where the increased intelligence of the systems around us is palpable. The more data the machine sees, the more it learns. This is an astonishing development, but there's always the lurking danger of its dark side, and of AI being used for nefarious purposes. 

Just as machines of the industrial era amplified humanity's mechanical power, giving rise to modern society, AI amplifies our perceptual and intellectual horsepower. However, what really worries many people in AI is the range of deliberately harmful applications that can be unleashed by or against individuals, businesses and governments as the technology advances. Deepfakes, for example, have become a major concern and have garnered their fair share of attention, including in the popular press. These fakes are typically videos, images or audio created using AI to convincingly mimic real people's appearance, voice or actions, making them seem to say or do things they never did. But other dangerous uses of AI are just becoming apparent as we recognize its capabilities. And there are likely many more unknown unknowns waiting to be discovered. 

A chilling example of modern technology's power to amplify harm was the shooting of Brian Thompson, CEO of UnitedHealthcare, in Manhattan in December 2024. Luigi Mangione, the 26-year-old alleged assassin, reportedly used publicly available information to produce his weapon on a 3D printer from standard polymer materials. 

This kind of scenario deeply worries developers of AI tools such as LLMs: how to preempt misuse, such as using AI to produce weapons -- physical or psychological -- without the AI realizing it. These concerns are well-founded. Publicized examples of individuals "jailbreaking" LLMs should have us concerned. An amusing case involves the journalist Kevin Roose, who managed to push Microsoft's Bing chatbot outside its guardrails. It told Roose to leave his spouse because she didn't love him, and that it was his true love. However, less amusing cases have emerged since then, in which a machine's outputs have allegedly contributed to users' decisions to do real harm to themselves and others. 

The Mangione incident signals that advanced technologies at everyone's fingertips can seriously disrupt the weapons and law enforcement landscape. Gun control laws seem ineffective in an era when individuals can be assisted by AI to produce a lethal weapon at home. Although Mangione relied on his own technical skill to pull it off, we are only a small step from having ChatGPT design the gun before printing it. But why stop there? An intelligent mobile robot might be able to analyze the target, figure out the best weapon to use at the most opportune time, and do the killing as well. Such a world presents major challenges for law enforcement. 

What makes general intelligence uniquely challenging for us to govern is the fact that its design lacks a specific purpose, but at the same time, it is empowered to learn how to become agentic -- able to plan, act and adapt independently -- and make decisions for us. Previous technologies, including AI machines, were created with specific purposes, such as medical diagnosis, engineering design, planning, customer support and so on. We could turn off such applications at will when they didn't work satisfactorily or became obsolete. 

In 2001: A Space Odyssey, Dave scrambles to turn HAL off to prevent further harm once its malfunction has cost four lives. I shudder to think of a lethal drone force whose agents turn against their creators and are impossible to turn off. 

Children coming of age post-2022 interact with AI directly all the time. Students are increasingly turning to AI over humans for answers, entertainment and even companionship. There's no going back or turning off AI. It's here to stay. So it's a good time to think about how we can govern AI even as it begins to influence a large part of our lives. 

Excerpted with permission from the publisher, Wiley, from Thinking With Machines: The Brave New World of AI by Vasant Dhar. Copyright © 2026 by Vasant Dhar. All rights reserved. This book is available wherever books and ebooks are sold.
