
No evidence that AI can be controlled, says new research

According to an AI safety expert, advanced intelligent systems can never be completely controlled. There will always be risks regardless of their advantages

The goal of the AI community should be to minimize such risks while maximizing potential benefits. (Pexels)

An important question raised by the rapid development of artificial intelligence (AI) is whether it can be controlled. Now, an AI safety researcher says there is currently no evidence that AI can be controlled, and that without such proof, it should not be developed.

Although the problem of AI control may be one of the most important facing humanity, it remains poorly understood, poorly defined, and poorly researched, AI safety expert and researcher Roman V. Yampolskiy says in his upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable, which is based on extensive research.


In the book, Yampolskiy talks about the different ways AI could potentially change society, including its negative impacts. “We are facing an almost guaranteed event with the potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance,” he elaborates.

According to a press statement from the publisher Taylor & Francis Group, Yampolskiy conducted an extensive review of AI scientific literature and concluded that there is no proof that AI can be safely controlled. Even if there are some partial controls, they would not be enough, he states in the book.

Yampolskiy argues that since the development of AI superintelligence appears almost inevitable, there should be greater support for a significant AI safety effort. He also suggests that advanced intelligent systems can never be completely controlled, so they will always carry safety risks regardless of their advantages. The goal of the AI community should be to minimize those risks while maximizing potential benefits, he adds.

One of the issues is that AI cannot explain its decisions, or that people cannot understand the explanations given, because humans lack the intelligence to grasp the concepts involved. If people do not understand an AI's decisions and have only a 'black box', they cannot diagnose problems or reduce the likelihood of future accidents, Yampolskiy explains in the book.

For instance, AI systems today are being tasked with making decisions related to healthcare, investing, and employment. However, they do not explain how they arrived at those decisions or whether they are free of bias, the press statement adds.

Moreover, as AI’s capability increases, its autonomy will also increase. “Less intelligent agents (people) can’t permanently control more intelligent agents (ASIs). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn’t exist. Superintelligence is not rebelling, it is uncontrollable, to begin with,” Yampolskiy states in the book.

“Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable,” he warns in the book.

One way to address the issue of control is to find an equilibrium point wherein some capability is sacrificed in return for some control, Yampolskiy suggests. He also adds that it is important to “dig deeper and to increase effort, and funding for AI Safety and Security research.”
