Here’s GPT-3.5 with browsing, asked to tell me about The Servitor:
The website The Servitor covers various topics related to artificial intelligence (AI) and ethics. It has published articles on subjects such as the adoption of ChatGPT by criminals, the importance of being polite to AI systems like ChatGPT, and the future development of AI models like GPT-4. Some of the specific articles published on The Servitor include “Google Gives Bard New Coding Capabilities,” “The Servitor Proudly Announces Future News: A Fictional Universe,” “Dark GPT: ChatGPT Is Rapidly Being Adopted By Criminals,” “Are You Polite To AIs Like ChatGPT?,” and “The Servitor given GPT-4 API Access”.
And here is GPT-4 with browsing:
The website “The Servitor” appears to be a blog or news platform that provides content about various topics related to artificial intelligence (AI). The topics range from AI ethics to updates on specific AI models and technologies.
For example, some recent articles include:
“Google Gives Bard New Coding Capabilities”: Discusses a recent update to Google’s AI model, Bard, to make it more useful for both new and experienced developers.
“The Servitor Proudly Announces Future News: A Fictional Universe”: The details of this article are not clear from the title alone.
“Dark GPT: ChatGPT Is Rapidly Being Adopted By Criminals”: An article discussing the ethical implications of AI technologies being used for criminal activities.
“Are You Polite To AIs Like ChatGPT?”: This article likely explores the question of how humans interact with AI.
It went on, but you get the idea. Notably, it took three tries, each considerably slower than GPT-3.5 with browsing, before it successfully read the front page. Further experimentation revealed an annoying pattern: at least for now, and at least for me, GPT-4 with browsing fails on a large share of prompts that invoke its browsing feature. But that is what alpha versions do. By all means, do not let me discourage OpenAI from deploying new models and features simply because they are at an early stage of development. I suspect progress and iteration will continue to be very rapid, even if LLMs built on the more-parameters-is-better approach are already peaking, as Sam Altman has suggested.