Google employees say Bard AI was pushed out despite ethical flaws
A group of Google LLC employees working on Bard says the company rushed to release the chatbot in spite of obvious ethical issues, according to a report today by Bloomberg.
Google may have been slow out of the blocks amid the explosion of generative artificial intelligence built on large language models, or LLMs. With OpenAI LP’s ChatGPT taking the world by storm and Microsoft Corp.’s Bing chatbot following hot on its heels, questions were raised about whether Google was lagging behind. At the same time, there have been calls to pause the development of such powerful tools, including from Elon Musk, who’s now raising funds for his own AI startup.
Although Google may have seemed slow, a number of its employees who worked on Bard now say the chatbot was pushed out too fast. One of them called Bard a “pathological liar.” To be fair to Google, anyone who has used ChatGPT or the Bing chatbot for research knows that these tools often sound very authoritative while producing outright misinformation.
Nonetheless, the language the Google employees used suggested Bard was more than just an occasional liar. One of them said in an internal discussion, “Bard is worse than useless: please do not launch.”
According to Bloomberg, these employees raised their concerns with Google’s internal safety team, but the company apparently “overruled a risk evaluation.” This was in spite of some of the employees warning that the AI’s answers could “result in serious injury or death.” Google still went ahead and released Bard last month in what it called an “experiment.”
And if the employees are to be believed, it was a risky experiment. Not only did Bard give them mendacious, sometimes apparently “cringe-worthy” answers, but in one case it offered instructions on how to land a plane that the employees said would have resulted in a crash. They also said it gave scuba diving safety advice that could have killed anyone who followed it.
“The trusted internet-search giant is providing low-quality information in a race to keep up with the competition while giving less priority to its ethical commitments,” Bloomberg wrote, summing up the testimonials.
“We are continuing to invest in the teams that work on applying our AI Principles to our technology,” Google spokesperson Brian Gabriel told the Washington Post. He said that releasing a responsible AI was Google’s “top priority.”
Bard may well be a long way from ready, but criticism of Google alone may seem unfair to anyone who has experienced the outright lies that OpenAI’s and Microsoft’s chatbots are quite capable of spewing out. We may soon be asking whether those calling for a pause were right, but the fact is that companies need a period of experimental public testing to polish their products. In the meantime, people would do well to be cautious about believing everything generative AI produces.
Photo: Rock’n Roll Monkey/Unsplash