
Pause Cutting-Edge AI Research for Six Months: Elon Musk and Apple Co-Founder Among the Signatories

The Future of Life Institute, a nonprofit that studies the risks posed by advanced AI research, has published an open letter calling for a pause of at least six months on the development of AI systems more powerful than GPT-4. Elon Musk and Apple co-founder Steve Wozniak are among the signatories, and as of March 30 more than 1,300 people from the tech industry had signed.

According to the letter, developing cutting-edge AI requires a high level of planning and management, but the planning and management happening today is insufficient. The letter also states that current AI has reached a level that even its developers cannot fully understand.

It therefore calls for AI development to be paused for at least six months, during which experts should draw up protocols for developing and handling advanced AI. It also argues that if such a pause cannot be enacted quickly, governments should step in and impose a moratorium.

Artificial Intelligence, which encompasses everything from recommender algorithms to self-driving cars, is racing forward. Today we have 'narrow AI' systems which perform isolated tasks. These already pose major risks, such as the erosion of democratic processes, financial flash crashes, or an arms race from autonomous weapons.

Looking ahead, many researchers are pursuing 'AGI', general AI which can perform as well as or better than humans at a wide range of cognitive tasks. Once AI systems can themselves design smarter systems, we may hit an 'intelligence explosion', very quickly leaving humanity behind. This could eradicate poverty or war; it could also eradicate us.

That risk comes not from AI's potential malevolence or consciousness, but from its competence - in other words, not from how it feels, but what it does. Humans could, for instance, lose control of a high-performing system programmed to do something destructive, with devastating impact. And even if an AI is programmed to do something beneficial, it could still develop a destructive method to achieve that goal.

AI doesn't need consciousness to pursue its goals, any more than heat-seeking missiles do. Equally, the danger is not from robots, per se, but from intelligence itself, which requires nothing more than an internet connection to do us incalculable harm.

Misconceptions about this still loom large in public discourse. However, thanks to experts speaking out on these issues, and machine learning reaching certain milestones far earlier than expected, an informed interest in AI safety as a major concern has blossomed in recent years.

Superintelligence is not necessarily inevitable, but neither is it impossible. It might be right around the corner; it might never happen. But either way, civilisation only flourishes as long as we can win the race between the growing power of technology and the wisdom with which we design and manage it. With AI, the best way to win that race is not to impede the former, but to accelerate the latter by supporting AI safety research and risk governance.

Since it may take decades to complete this research, it is prudent to start now. AI safety research prepares us better for the future by pre-emptively making AI beneficial to society and reducing its risks.

Meanwhile, policy cannot possibly form and reform at the same pace as AI risks; it too must therefore be pre-emptive, inclusive of dangers both present and forthcoming.


"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before - as long as we manage to keep the technology beneficial."

Max Tegmark, President of the Future of Life Institute

What is AI?

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Why research AI safety?

In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.
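The intelligence-explosion argument above is, at its core, a compounding-growth claim: if each round of self-improvement scales with the system's current capability, capability grows geometrically rather than linearly. The sketch below is a minimal toy model of that dynamic, not part of the FLI material; the scalar `capability` and the per-cycle improvement factor `k` are purely hypothetical assumptions made for illustration.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: a single "capability" number grows each design cycle in
# proportion to its current value, so growth compounds geometrically.

def capability_after(cycles: int, c0: float = 1.0, k: float = 0.5) -> float:
    """Return capability after `cycles` rounds of self-improvement.

    c0 -- starting capability (arbitrary units, assumed)
    k  -- per-cycle improvement factor (hypothetical)
    """
    c = c0
    for _ in range(cycles):
        c += k * c  # the more capable the system, the larger its next improvement
    return c

if __name__ == "__main__":
    for n in (1, 5, 10, 20):
        print(f"after {n:2d} cycles: capability = {capability_after(n):,.1f}")
```

Under these assumptions capability roughly quintuples every four cycles; the point of the sketch is only that proportional self-improvement compounds, which is what distinguishes the "explosion" scenario from steady, linear progress.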

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.


