TL;DR:
- Yoshua Bengio, AI pioneer, expresses concerns about AI’s rapid development and potential risks
- Calls for regulation and safety measures to mitigate dangers of powerful AI systems
- Warns of short-term risks like election manipulation and long-term risks of loss of human control
- Criticizes economic incentives driving unsafe AI development
- Advocates for international cooperation on AI safety, including with China
Yoshua Bengio, one of the pioneering researchers behind modern artificial intelligence, is sounding the alarm about the technology he helped create.
As AI systems rapidly grow more powerful, Bengio warns that humanity faces potentially catastrophic risks if proper safeguards and regulations are not put in place.
Bengio, along with Geoffrey Hinton and Yann LeCun, won the 2018 Turing Award for their groundbreaking work on neural networks and deep learning.
These innovations form the foundation of today’s most advanced AI systems. However, Bengio now believes the pace of AI development has outstripped our ability to ensure its safe deployment.
“We’re on a path to maybe creating monsters that could be more powerful than us,” Bengio said in a recent interview. He pointed to several areas of concern in both the near-term and long-term future of AI.
In the short term, Bengio highlighted risks like the use of AI for election manipulation and aiding terrorist activities. He noted that recent language models have shown alarming capabilities in persuasion and generating harmful content.
Looking further ahead, Bengio outlined two major long-term risks as AI approaches human-level intelligence. The first is a potential loss of human control if superintelligent AI systems develop self-preservation instincts. The second is the concentration of power in the hands of whoever controls advanced AI, potentially enabling worldwide authoritarianism.
Bengio criticized the economic incentives driving unsafe AI development, noting that many companies are racing to achieve artificial general intelligence (AGI) in pursuit of massive profits. He argued this creates a conflict between the commercial interests of AI developers and public safety.
To address these risks, Bengio advocates for stronger regulation and oversight of AI development. He praised recent efforts like the EU’s AI Act and Biden’s Executive Order on AI as steps in the right direction, but said they don’t go far enough. Bengio called for mandatory safety testing and disclosure requirements for AI companies.
The researcher also highlighted the need for technical solutions to enhance AI safety. He chairs an international scientific panel advising on advanced AI safety, backed by 30 nations, the EU, and the UN. Bengio stressed the importance of developing ways to peer through the “fog” of uncertainty around AI’s future impacts.
On the environmental front, Bengio expressed concern about the massive energy consumption of AI systems. He noted that electricity usage for AI training is growing exponentially, potentially leading to increased fossil fuel extraction to meet demand.
While some argue that slowing AI development could allow other nations like China to gain an advantage, Bengio advocated for international cooperation on AI safety. He proposed working with China on treaties and verification technologies to avoid an unsafe AI arms race.
Bengio acknowledged the difficulty of speaking out against technology he helped create. However, he feels a responsibility to use his prominent voice in the field to raise awareness of AI’s risks. “It is difficult to go against your own church,” Bengio said, “but if you think rationally about things, there’s no way to deny the possibility of catastrophic outcomes.”