Google's AI Chatbot Bard Stumbles with Factual Error in Debut Demo

The recent misstep by Google's new AI chatbot, Bard, highlights a critical challenge facing the integration of such artificial intelligence systems into search engines: their tendency to generate inaccurate information.

Google officially announced Bard, its direct competitor to OpenAI's ChatGPT, on Monday, with plans for a wider public release in the coming weeks. However, its initial public demonstration quickly drew scrutiny from experts who identified a significant factual error.

In a GIF shared by Google, Bard was shown responding to the query: "What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?" Among its three bullet-point answers, Bard incorrectly stated that the James Webb Space Telescope (JWST) "took the very first pictures of a planet outside of our own solar system."

This claim was swiftly challenged by a number of astronomers on Twitter. Astrophysicist Grant Tremblay pointed out the inaccuracy, tweeting, "Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system’." He and others confirmed that the first image of an exoplanet was actually captured in 2004, a fact corroborated by NASA's own website.

Bruce Macintosh, who directs the University of California Observatories at UC Santa Cruz, also highlighted the error. He tweeted, "Speaking as someone who imaged an exoplanet 14 years before JWST was launched, it feels like you should find a better example?"

The factual blunder and the subsequent expert criticisms were initially reported by Reuters and New Scientist.

As Tremblay further elaborated in a follow-up tweet, AI models like ChatGPT and Bard, while impressive, are often "very confidently wrong." This tendency for AI chatbots to "hallucinate" – that is, invent information – is a major concern. These systems are fundamentally autocomplete programs, trained on immense datasets of text to predict the most probable next word in a sequence. They are probabilistic, not deterministic, meaning they do not query a database of verified facts. This characteristic has led one prominent AI professor to label them "bullshit generators."
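To make that "probabilistic, not deterministic" point concrete, here is a minimal toy sketch in Python. This is not Bard's or ChatGPT's actual code; the candidate words and their probabilities are invented purely for illustration. It shows the core mechanism the paragraph describes: the model samples its next word in proportion to how likely it looks, with nothing checking the result against a store of verified facts.

```python
# Toy illustration of autoregressive text generation (not any real model's
# code): given a context, the model scores candidate next words and samples
# one in proportion to its probability -- optimizing for likelihood, not truth.
import random

# Hypothetical distribution over next words after a prompt about JWST;
# the words and numbers here are made up for illustration only.
next_word_probs = {
    "first": 0.6,     # statistically likely continuation, but factually wrong
    "latest": 0.3,
    "sharpest": 0.1,
}

def sample_next_word(probs: dict) -> str:
    """Draw one word at random, weighted by its assigned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Repeated runs can yield different words; no step verifies them as facts.
for _ in range(3):
    print(sample_next_word(next_word_probs))
```

Because generation optimizes for plausibility in this way, a fluent but false claim like the one Bard made about JWST can simply be the most probable continuation of the prompt.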

While the internet already contains a wealth of misleading information, the issue becomes particularly problematic when tech giants like Microsoft and Google envision these tools replacing traditional search engines. In such a context, the chatbots' responses can acquire an unwarranted air of authority, as if emanating from an all-knowing machine.

Microsoft, which recently demonstrated its own AI-enhanced Bing search engine, has attempted to mitigate these concerns by placing some onus on the user. Its disclaimer states: "Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts, and share feedback so we can learn and improve!"

In response to the Bard incident, Jane Park, a spokesperson for Google, provided a statement to The Verge: "This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information."
