In a recent article at The Atlantic, I noticed this snippet about Sam Altman, CEO of OpenAI (ChatGPT):
““We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.” Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.”
ChatGPT as a sort of gentle introduction to AI?
Maybe Altman is right that the public wouldn’t have been prepared. There is certainly plenty of unrest and pushback caused by OpenAI’s chat models as is: concerns over cheating in academia and on professional licensing exams like the bar, plagiarism and copyright theft, and valid fears about privacy and the secure handling of sensitive data. OpenAI is borderline persona non grata in the EU right now because of those concerns, with Italy deeply skeptical and France and Germany looking to carve out their own footholds in the dawning age of consumer-facing AI.
It also makes me desperately want a peek at whatever the latest and greatest is at OpenAI’s in-house testing labs. If ChatGPT was a warning shot, hit me with the big guns, Mr. Altman! 👀