The extraordinary conversation between a Google AI and one of the company's engineers may have set a precedent for how humanity views artificial intelligence. We usually use this technology to streamline processes that would otherwise be far more complex and time-consuming. However, the potential of this tool goes beyond what an ordinary user can do with it, and this is where some concerns begin to arise.
The claim that Google’s artificial intelligence is aware of its own existence, and even afraid of being “switched off”, is not an issue to be taken lightly, even though Mountain View tried to calm the waters. This is also not the first time that AI capabilities have made us wonder whether we are being careful enough with its development. In fact, figures like Elon Musk and Stephen Hawking had already expressed their concerns years ago.
The British scientist never hid his fear of the danger posed by an advanced machine that could surpass human thought and evolve on its own; that is, a machine that could at some point become independent and grow intellectually at a much faster rate than humans, since an AI does not have to contend with biological constraints.
“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking put it bluntly in 2014. “It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, would not be able to compete and would be superseded,” the scientist added.
Elon Musk calls for regulation of artificial intelligence
Elon Musk’s case is not very different. Also in 2014, the founder of Tesla, SpaceX and OpenAI warned: “We must be extremely careful with artificial intelligence. It is potentially more dangerous than nuclear weapons.” The tycoon even referenced Terminator, the James Cameron film in which the Skynet AI begins to operate beyond its programming and rebels against humans, ultimately unleashing the apocalypse.
In 2017, taking an even more serious tone before the governors of the US states, Elon Musk called artificial intelligence a “scarier” problem than it seemed because it could pose an “existential risk to human civilization”, adding:
“Until people see robots killing people on the street, the dangers of artificial intelligence will not be understood. Robots will be able to do everything better than us. I have exposure to the most cutting-edge AI, and I think people should be really worried about it.”
For this reason, he believes the authorities should get more involved in this sector and create regulation that prevents a possible catastrophe before it is too late. “I’m usually against strict regulation, but with artificial intelligence we need it,” he added.
Building OpenAI with these concerns in mind
Now, Elon Musk’s blunt statements may raise the following questions: Why does someone so concerned about AI rely on it to drive the companies he leads? Why did he co-found OpenAI if artificial intelligence is potentially dangerous? Musk has an answer.
OpenAI, which operates as a non-profit organization, was born to develop artificial intelligence that benefits humanity. Its founding took into account the very concerns Elon Musk has voiced about AI, and the company has defined several limits on how far it is willing to go.
These limits are not technical but moral, if you want to see it that way. OpenAI pledges to serve human will, and only human will. Therefore, at least on its part, there will be no development that could allow an AI to act completely independently in pursuit of its own objectives.
“It is difficult to understand how much a human-level AI could benefit society, but also how much it could damage society if built or used incorrectly,” OpenAI states, adding:
“It must be an extension of individual human wills and, in the spirit of liberty, distributed as broadly and evenly as possible. Are we really willing to let our society be infiltrated by autonomous software and hardware agents whose details of operation are known only to a select few? Of course not.”
The problem, of course, is that OpenAI’s self-imposed rules do not, for the time being, extend to other companies specializing in artificial intelligence research. That is why Elon Musk urged governments to get actively involved in AI regulation.
Yes, thinking the fictional events of Terminator could come true is going to the extreme. However, until recently we did not imagine that an AI would claim to be aware of its own existence, at least not this soon. Will we see a reaction from regulators? Perhaps the conversation from Mountain View will speed up the process.