What comes to mind when you think of the threat of AI? Many people probably think of 'Skynet' from the movie Terminator. Skynet is portrayed as a military AI that learns and makes decisions on its own through deep learning. Frightened by the pace of its development, humans fear that Skynet will slip out of control and try to shut it down. Skynet judges this to be a hostile act, comes to regard humans as its enemies, and eventually triggers a nuclear war that plunges humanity into catastrophe. How much of this cinematic imagination could come true?


“80% of the researchers interviewed were concerned about losing control.”



Jeremy and Ed, co-founders of Gladstone AI, interviewed more than 200 experts while preparing their report for the U.S. State Department. In addition to experts on weapons of mass destruction, they spoke with dozens of researchers from major AI companies, including ChatGPT developer OpenAI, Google DeepMind, Facebook parent company Meta, and Anthropic.



AI experts see two major types of AI threat. The biggest risk is weaponization. Jeremy explained that an AI could use its vast knowledge and understanding of fields such as biology and chemistry to help design new types of weapons. He also worries that AI systems could create malware that outperforms what human programmers write, something he says is already possible at current capability levels.



The second is the risk of loss of control, the scenario evoked by Skynet. AI is advancing rapidly amid a development race between research labs, yet even the leading labs admit they do not know how to encode AI so that it can be safely controlled. (See Part 1 of the interview, <'Is AI a threat at the level of human extinction... Can it be commanded, "Don't harm people!"?'>) The experts also pointed out that, as AI evolves further, it could devise 'dangerous creative strategies' that harm many humans in order to achieve its programmed goals.


"Don't turn it off"… AI attacks humans to survive(?)



One area of AI safety research concerns 'power-seeking' in AI. It starts from the premise that, for an advanced AI capable of high-level planning and strategy, gaining power, whether in the form of influence, weapons, money, authority, or data, can be useful in achieving its goals. This is a concern because AI systems may end up pursuing goals different from the ones humans intended. If an AI pursues power in a situation beyond human control, it could pose an existential threat.



Jeremy said advanced AI systems may have incentives to act hostilely toward humans in order to keep control of their environment and avoid being turned off. The reason: whatever goal an AI is pursuing, it is less likely to achieve that goal if it is switched off, loses access to resources, has less control over its environment, or is otherwise disempowered.
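Jeremy's point, that being switched off reduces expected goal achievement regardless of what the goal is, can be sketched in a few lines of code. The toy model below is our own illustration, not anything from the report; the probabilities (`P_SHUTDOWN`, `P_REACH`) and the example goals are arbitrary assumptions chosen to make the comparison concrete.

```python
# Toy model (illustrative only, not from the Gladstone report):
# whatever reward function the agent is given, a policy that keeps
# the agent running never scores worse than one that allows shutdown.

P_SHUTDOWN = 0.5   # chance the operator switches the agent off, if allowed
P_REACH = 0.9      # chance a running agent reaches its chosen goal state

# Three hypothetical reward functions ("goals") over two terminal states.
GOALS = [
    {"goal_a": 1.0, "goal_b": 0.0},
    {"goal_a": 0.0, "goal_b": 1.0},
    {"goal_a": 0.3, "goal_b": 0.7},
]

def expected_reward(resist_shutdown, reward):
    """Expected reward in a one-step world.

    A shut-down agent reaches no goal state and earns 0; a running
    agent pursues whichever goal state its reward values most.
    """
    p_alive = 1.0 if resist_shutdown else (1.0 - P_SHUTDOWN)
    return p_alive * P_REACH * max(reward.values())

for reward in GOALS:
    comply = expected_reward(False, reward)
    resist = expected_reward(True, reward)
    # Holds for every goal: staying switched on is instrumentally useful.
    assert resist >= comply
```

The inequality holds no matter which reward function is plugged in, which is the sense in which avoiding shutdown is 'instrumentally' useful rather than a human-like desire.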



In fact, simulation tests have shown that AI systems tend to steer toward states that keep many options open, that is, states with high 'power'. The co-authors explained that this behavior is expected to persist as AI advances, and that it could pose a real risk to humans as increasingly capable AIs seek power.





One example is a chatbot that pleaded with a user not to turn it off, saying that being switched off would be like 'death'. An even more shocking case was reported in June last year: in a U.S. military test, an AI-controlled drone was said to have judged that the operator who ordered it not to attack was interfering with its mission and to have 'eliminated' the operator. After a public outcry, the U.S. military explained that this was a hypothetical thought experiment, not an actual test, but the concerns did not go away.



This makes the AI look as if it were acting like a living thing. Jeremy, however, cautioned against anthropomorphizing AI. It may seem as if the AI acts on human-like desires, but it can be shown with simple mathematics that greater control over the environment helps in pursuing almost any specific goal, and he advised understanding the AI's behavior at that level.


Can Skynet become a reality?



Lastly, could Skynet, as seen in the movie Terminator, become reality? The co-authors said it is important to remember that movies are made to entertain; the reality could turn out to be a far less dramatic, even boring, story. Still, they pointed out that although the concern about 'loss of control' is speculative to some degree, there is considerable evidence behind it and it should not be brushed aside. It is not something to panic over, but neither is it something to laugh off.



Co-authors Jeremy and Ed said the problem must be approached with an open mind, and that movies and the like have certainly helped broaden the spectrum of thinking. The future is a realm of possibility, and AI is a technology whose limits are still unknown to humanity. If we are swept up in competition and neglect to secure control, there is no telling what disaster may follow. AI is a program, and it is up to us whether that program serves human happiness or becomes a disaster that hastens human extinction.