This week has seen a whirlwind of controversy surrounding artificial intelligence (AI) and its future, as a group of well-known AI ethicists penned a powerful response to a letter calling for a six-month “pause” on AI development. The original letter, signed by thousands, including tech icons Steve Wozniak and Elon Musk, was published by the Future of Life Institute and proposed pausing the training of AI systems more powerful than GPT-4 to avoid the “loss of control of our civilization” and other potential threats.
In their counterpoint, published through the DAIR Institute, leading AI ethicists Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell fired back with a strong rebuke. They argued that the original letter’s focus on hypothetical future risks was misguided and failed to engage with the real, immediate harms of AI misuse happening today. “Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote.
The ethicists stressed that the true issues at hand are worker exploitation, data theft, synthetic media that props up existing power structures, and the concentration of power in ever fewer hands. Rather than worrying about a Terminator-like apocalypse, they pointed to harms already documented, such as reports of police using tools like Clearview AI to frame innocent individuals. “No need for a T-1000 when you’ve got Ring cams on every front door accessible via online rubber-stamp warrant factories,” they quipped.
For the DAIR Institute team, action is urgently needed, and it must be directed at today’s problems using remedies already available. “What we need is regulation that enforces transparency,” the ethicists argued. They called for clear documentation and disclosure of training data and model architectures, and for the companies building AI to be held accountable for their products’ outputs. The DAIR crew also criticized the profit-driven race toward ever larger “AI experiments,” emphasizing the need for regulation that protects people’s rights and interests.
Rather than getting lost in science fiction, the ethicists urged us to “focus on the very real and very present exploitative practices of the companies claiming to build [powerful digital minds], who are rapidly centralizing power and increasing social inequities.”
With powerful voices on both sides, one thing is clear: AI’s future is not preordained, and our choices in shaping it will have a lasting impact.