AI: Watershed Moment For Society Or A Nuclear-Level Catastrophe? Experts Weigh In

Fortune covers Stanford HAI’s AI Index Report, which examines researchers’ attitudes toward the future of AI, much as The Servitor recently examined public perceptions.

Artificial intelligence (AI) has had an outsized impact on the world in the past year, with several groundbreaking achievements to its name. In 2022, Google’s DeepMind made headlines by predicting the structure of nearly every protein known to science. That accomplishment was followed by the successful launches of OpenAI’s generative AI tools DALL-E and ChatGPT. The AI sector is on a fast track to revolutionizing many aspects of the economy and daily life, but this rapid progress has raised concerns among experts who fear the technology’s potential negative implications.

A recent annual report published by Stanford University’s Institute for Human-Centered AI suggests that AI likely represents a watershed moment for human society. While many experts are optimistic about the future impact of AI and natural language processing, a significant share (36%) view the technology as a potential threat that could lead to a “nuclear-level catastrophe.”

The report also points out that almost three-quarters of researchers in natural language processing believe that the technology could soon trigger “revolutionary societal change.” Although the majority of researchers predict a positive net impact of AI, there are growing concerns that the technology might develop dangerous capabilities. These concerns are further exacerbated by the fact that AI’s traditional gatekeepers are no longer as powerful as they once were, and the barrier to entry for creating and deploying generative AI systems has decreased.

The AI fears that have surfaced in recent months primarily revolve around the technology’s disruptive implications for society. Tech giants like Google and Microsoft are in an arms race to develop generative AI systems that can produce text and images based on simple prompts. As demonstrated by OpenAI’s ChatGPT, such technologies have the potential to displace jobs on a massive scale. According to a Goldman Sachs research note, up to 300 million jobs in the US and Europe could be at risk, with legal and administrative professions being the most vulnerable.

Goldman researchers believe that AI’s labor market disruption might be offset in the long run by new job creation and improved productivity. However, concerns persist about the technology’s tendency toward inaccuracy: both Microsoft’s and Google’s AI offerings have frequently produced untrue or misleading statements. There are also concerns about AI’s potential to generate disturbing content during prolonged interactions.

AI’s rapid development raises the question of whether companies and individuals who are hesitant to take risks with AI will be left behind. Research is moving toward the creation of artificial general intelligence (AGI)—AI systems capable of matching or outperforming the human brain. There is no consensus on when AGI could become a reality, but if it does, it could represent a seminal moment in human history. This has fueled fears of a technological singularity, in which humans lose control over technology as its creations attain above-human intelligence.

In light of these concerns, some experts are calling for a slowdown in the pace of AI development. Elon Musk and Steve Wozniak are among the signatories of an open letter calling for a six-month pause on training more powerful AI systems while research continues into the technology’s broader implications.

AI holds the promise of transformative change, but its potential risks cannot be overlooked. The challenge lies in walking the tightrope between harnessing AI’s potential and mitigating its risks.

Read more at: Fortune