Have you heard about Google’s new chatbot, Bard? They’ve just released it to a limited audience in the US and Britain, and it’s designed to compete with similar technologies like Microsoft’s Bing chatbot and OpenAI’s ChatGPT.
Bard is an experimental system, not meant to be a search engine but rather a creative and helpful collaborator. It’s quite open about its flaws and limitations, and even encourages user feedback to help it improve.
The chatbot is designed for a range of casual uses such as generating ideas, writing blog posts, or answering questions with facts or opinions. Bard generates new text each time you type in a prompt, so you may get a different answer even when you ask the same question twice. It also annotates some of its responses, allowing you to review the sources it used.
However, Bard is not perfect. Sometimes, it makes mistakes and may provide a less-than-reliable source for its answers. For example, it cited a blog with a mix of English and Chinese when asked about the most important moment in American history.
Interestingly, Bard seems to be more cautious than ChatGPT. Google's vice president of research, Eli Collins, explained that the bot often refuses to answer questions about specific people or provide certain types of information (like medical, legal, or financial advice) to avoid generating incorrect information, a failure mode known in the AI research world as "hallucination."
(Partly sourced from: What Google Bard Can Do (and What It Can’t))