Google won’t launch ChatGPT rival because of ‘reputational risk’

The launch of ChatGPT has prompted some to speculate that AI chatbots could soon take over from traditional search engines. But executives at Google say the technology is still too immature to put in front of users, with problems including chatbots’ bias, toxicity, and their propensity for simply making information up.

According to a report from CNBC, Alphabet CEO Sundar Pichai and Google’s head of AI Jeff Dean addressed the rise of ChatGPT in a recent all-hands meeting. One employee asked if the launch of the bot — built by OpenAI, a company with deep ties to Google rival Microsoft — represented a “missed opportunity” for the search giant. Pichai and Dean reportedly responded by saying that Google’s AI language models are just as capable as OpenAI’s, but that the company had to move “more conservatively than a small startup” because of the “reputational risk” posed by the technology.

“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date,” said Dean. “But, it’s super important we get this right.” Pichai added that Google has “a lot” planned for AI language features in 2023, and that “this is an area where we need to be bold and responsible so we have to balance that.”

Google has developed a number of large language models (LLMs) equal in capability to OpenAI’s ChatGPT. These include BERT, MUM, and LaMDA, all of which have been used to improve Google’s search engine. Such improvements are subtle, though, and focus on parsing users’ queries to better understand their intent. Google says MUM helps it recognize when a search suggests a user is going through a personal crisis, for example, and directs those individuals to helplines and information from groups like the Samaritans. Google has also launched apps like AI Test Kitchen to give users a taste of its AI chatbot technology, but has constrained those interactions in a number of ways.

OpenAI, too, was previously relatively cautious in developing its LLM technology, but changed tack with the launch of ChatGPT, throwing access wide open to the public. The result has been a storm of beneficial publicity and hype for OpenAI, even as the company absorbs huge costs to keep the system free to use.

Although LLMs like ChatGPT display remarkable flexibility in generating language, they also have well-known problems. They amplify social biases found in their training data, often denigrating women and people of color; they are easy to trick (users found they could circumvent ChatGPT’s safety guidelines, which are supposed to stop it from providing dangerous information, simply by asking it to imagine it’s a bad AI); and, perhaps most pertinent for Google, they regularly offer false and misleading information in response to queries. Users have found that ChatGPT “lies” about a wide range of issues, from making up historical and biographical data to justifying false and dangerous claims, such as telling users that adding crushed porcelain to breast milk “can support the infant digestive system.”

In Google’s all-hands meeting, Dean acknowledged these many challenges. He said that “you can imagine for search-like applications, the factuality issues are really important and for other applications, bias and toxicity and safety issues are also paramount.” He said that AI chatbots “can make stuff up […] If they’re not really sure about something, they’ll just tell you, you know, elephants are the animals that lay the largest eggs or whatever.”

Although the launch of ChatGPT has triggered new conversations about the potential of chatbots to replace traditional search engines, the question has been under consideration at Google for a long time — sometimes causing controversy. AI researchers Timnit Gebru and Margaret Mitchell were fired from Google after publishing a paper outlining the technical and ethical challenges associated with LLMs (the same challenges that Pichai and Dean are now explaining to staff). And in May last year, a quartet of Google researchers explored the same question of AI in search, and detailed numerous potential problems. As the researchers noted in their paper, one of the biggest issues is that LLMs “do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over.”

There are ways to mitigate these problems, of course, and rival tech companies will no doubt be calculating whether launching an AI-powered search engine, even a dangerous one, is worth it just to steal a march on Google. After all, if you’re new on the scene, “reputational risk” isn’t much of an issue.

For its part, OpenAI seems to be attempting to damp down expectations. As CEO Sam Altman recently tweeted: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”


