Google said it has made “more than a dozen technical improvements” to its artificial intelligence systems after its search engine feature was found to provide users with false information.
The tech company released the feature, known as AI Overviews, in mid-May; it would often show users AI-generated summaries at the top of their Google search results.
Shortly after the feature rolled out, users noticed it was providing erroneous answers, The Associated Press reported.
Google has largely defended the feature, saying it’s normally accurate and was tested “extensively” before it was released.
But in a blog post Friday, Liz Reid, the head of Google’s search business, said that while the product was tested, “there’s nothing quite like having millions of people using the feature with many novel searches.”
“We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” she wrote. “Separately, there have been a large number of faked screenshots shared widely.”
Reid noted that some of the faked screenshots have been “obvious and silly,” but others have carried more serious implications, suggesting Google returned dangerous results on topics such as leaving dogs in cars or smoking while pregnant.
“In a small number of cases, we have seen AI Overviews misinterpret language on webpages and present inaccurate information,” she wrote. “We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies.”
Reid said the company has worked on updates that can assess broad sets of inquiries, including new questions that may arise.
“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” she said. “We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.”