Stuart Russell, a Professor of Computer Science at the University of California, Berkeley, and a leading specialist in artificial intelligence, has called for a shift in how businesses approach building AI and for governments to regulate AI to protect human interests. The aim of his proposal is to ensure that AI serves human interests and to prohibit the release of dangerous AI systems.
Artificial intelligence (AI) tools are becoming increasingly influential as tech companies race to deliver them. Some academics have seen glimpses of artificial general intelligence (AGI) in this technology. AGI would mark the first time an AI system could learn and solve problems independently, much like a human.
Chatbots and other AI-enabled generative tools have given the public a window into AI’s capabilities and potential pitfalls. The stakes are high because of the technology’s potential for transformative impact: Russell described an estimate of US$13.5 quadrillion in value creation from AGI as a “low-ball” figure.
The current path of AI development has raised concerns that humans might lose control over machines. Russell proposed that researchers and developers take a different approach to building AI, one that results in AI systems that help people and lead to a more advanced civilisation.
Systems that are more intelligent than humans, whether individually or collectively, would be more powerful entities than us. Russell posed the question at the lecture, “How do we retain power over entities more powerful than us, forever?”
He raised this concern over the opaque nature of AI systems such as generative tools. The worry is that it remains unclear whether these tools are pursuing objectives of their own and, if they are, whether those objectives are compatible with ours. He pointed to the example of a chatbot that repeatedly declared its love for a New York Times reporter, despite the reporter’s repeated rejections, to suggest that AIs may already be capable of such behaviour.
In his opinion, it is dangerous to release systems into the wild whose inner workings we don’t fully grasp, and which may or may not be pursuing purposes of their own. An AI built to serve human interests must instead start from the premise that it does not yet know what those interests are. “We need to acknowledge that it doesn’t know what those interests are and to seek evidence to identify and act upon those interests,” he noted.
Considering this, he proposed reconsidering AI techniques such as planning, reinforcement learning, and supervised learning, all of which presume a fixed, established goal. Instead, AI should be built on “well-founded” methods, with a thorough understanding of each component and how it affects the whole. Such a foundation would help engineers foresee how these systems will behave.
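To make the distinction concrete, here is a minimal, hypothetical sketch contrasting the two formulations. It is not Russell’s own code, and the candidate objectives, payoff numbers, and deferral threshold are invented for illustration: a standard agent hard-codes one objective and acts on it, while an objective-uncertain agent keeps a belief over what the human might want and defers to the human when acting now would forgo too much of the value of finding out.

```python
# Two candidate objectives the human might actually hold. The
# uncertain agent keeps a probability over both rather than assuming one.
CANDIDATE_REWARDS = {
    "maximise_output": lambda action: {"fast": 1.0, "careful": 0.4}[action],
    "avoid_harm":      lambda action: {"fast": -1.0, "careful": 0.8}[action],
}
ACTIONS = ["fast", "careful"]

def fixed_objective_agent():
    """Standard formulation: the goal is simply assumed to be given."""
    reward = CANDIDATE_REWARDS["maximise_output"]  # hard-coded objective
    return max(ACTIONS, key=reward)

def uncertain_objective_agent(belief):
    """Agent that acknowledges it doesn't know the human's objective.

    belief: dict mapping objective name -> probability.
    Acts on expected reward under its belief, but defers ("ask") when
    knowing the true objective would improve the outcome substantially.
    """
    def expected_reward(action):
        return sum(p * CANDIDATE_REWARDS[name](action)
                   for name, p in belief.items())

    best = max(ACTIONS, key=expected_reward)
    # Expected reward if the agent first learned the true objective.
    best_if_known = sum(
        p * max(CANDIDATE_REWARDS[name](a) for a in ACTIONS)
        for name, p in belief.items())
    # Illustrative threshold on the value of asking the human.
    if best_if_known - expected_reward(best) > 0.1:
        return "ask"  # seek evidence about human interests before acting
    return best

if __name__ == "__main__":
    print(fixed_objective_agent())                        # -> "fast", always
    print(uncertain_objective_agent(
        {"maximise_output": 0.5, "avoid_harm": 0.5}))     # -> "ask"
    print(uncertain_objective_agent(
        {"maximise_output": 0.99, "avoid_harm": 0.01}))   # -> "fast"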
“I can’t think of any other way to make myself feel confident in the actions of these systems,” Russell stated.
Russell argues that AI could fundamentally alter our world: it has the potential either to enhance the lives of people everywhere or to wipe out human civilisation. The last potentially “civilisation-ending” technology, nuclear power, was handled with great care and strict regulation, and there is stringent oversight even in comparatively routine technological domains such as aviation. Since AI could have an equally profound impact on human civilisation, it too needs to be regulated.
Existing international legal frameworks that define accountable AI can guide work in this area. Before an AI system is released into the wild, its creators should be able to demonstrate that it is safe, reliable, and poses no significant threat to society. Countries should turn these ideals into laws that all businesses must follow.