<Anchor>

This is Friendly Economy, with reporter Ae-ri Kwon. Today's topic is artificial intelligence and ChatGPT. Is it true that the creators of ChatGPT have proposed establishing an international organization to prevent the side effects of artificial intelligence?

<Reporter>

Yesterday (the 23rd), an AI-generated fake photo of an explosion near the Pentagon circulated online, and at one point it even rattled the New York stock market; our correspondent reported on it as well.

And just yesterday, the chief executive and chief scientist of OpenAI, the creator of ChatGPT, published a proposal along these lines.

They likened the development of AI to nuclear energy or biotechnology, arguing that AI, too, needs an inspection and monitoring body like the International Atomic Energy Agency.

Since ChatGPT, based on GPT-3.5, was released in November last year, the era of artificial intelligence has been advancing at a rapid pace.

It literally shocked the whole world, and now the very people who released it are saying, "We can't just let this continue unchecked. Development has to be controlled and monitored, just like nuclear technology or biotechnology."

The monitoring scheme they propose is quite specific and binding.

The idea is to cap how fast AI capabilities are allowed to grow each year, with inspectors empowered to check, even unannounced, whether anyone is developing AI beyond that limit.

For example, North Korea declared itself a nuclear state, conducted a series of nuclear tests, and is now under international sanctions.

Other countries, by contrast, have allowed nuclear inspections by the International Atomic Energy Agency (IAEA) and given up their nuclear weapons.

The proposal is to create, like the IAEA, an audit and inspection body that commands public trust across borders.

The world needs a way to prevent someone, like the villainous scientist in a movie, from secretly developing AI and using it to do harm.

<Anchor>

It seems that the more advanced artificial intelligence becomes, the more worries and concerns there are. Recently, a developer regarded as a pioneer of artificial intelligence even said he regrets dedicating his life to it.

<Reporter>
Dr. Geoffrey Hinton made headlines when he stepped down as a vice president at Google, saying he regretted part of his life's work on AI development.

So who is Dr. Geoffrey Hinton? You have probably heard a lot about deep learning.

He is the person who first opened the path for AI to learn from the information humans have accumulated over the years.

He is, quite literally, a father of artificial intelligence, and the chief scientist of OpenAI, the creator of ChatGPT we just discussed, is his protégé.

Professor Hinton warned, "This is an even more urgent problem to head off than climate change. Because of AI, we are entering a world where you can't tell what's real from what's fake, and people are going to lose their jobs." He left Google at the end of last month, he said, so that he could speak about this freely.

It echoes Dr. Oppenheimer, who led the development of U.S. nuclear weapons during World War II and expressed much the same regret after seeing the United States actually use the atomic bomb.

Google, which recently entered the AI race by releasing Bard, a ChatGPT-like chatbot, has made no official statement about Dr. Hinton's departure.

The proposal posted yesterday also offers recommendations to countries around the world on how to craft policies like these for AI.

It discusses the economic effects of AI first, but it also addresses the potential for abuse of AI and lays out how an international organization could be created.

<Anchor>

I think it is all the more concerning because it is not just anyone, but the people on the front lines of AI development, who are voicing this.

<Reporter>
Yes. There are two main things that they are worried about.

The first is the emergence of superintelligence: systems that go far beyond human intelligence and solve problems on their own.

ChatGPT's developers say this is not far off: within 10 years, artificial intelligence could surpass expert-level ability in almost every field.

They are saying that development at such a high level should be reined in.

Recently, Microsoft also published a report suggesting that the advent of such superintelligence may be near.

The other is Dr. Hinton's point: a world where truth and falsehood become indistinguishable, where AI-generated fakes are bought and sold, and where existing social prejudice and discrimination are further reinforced by AI.

Nor is it unthinkable that someone would deliberately abuse AI to wreak havoc.

Nuclear power and biotechnology both remain controversial even now.

These are fields where serious harm can result if everyone is allowed to do everything they can, so the world imposes global restrictions and tries to develop and use only what is judged beneficial and necessary for humanity.

The argument is that AI needs that same level of control, and I think we should reflect deeply on why this warning is now coming, all at once, from the very people at the forefront of AI development.