How Google became cautious of AI and gave Microsoft an opening

Researchers developed a powerful chatbot years before rival ChatGPT went viral. After management stalled its release, they quit.

More than two years ago, a pair of Google researchers started pushing the company to release a chatbot built on technology more powerful than anything else available at the time. The conversational computer program they had developed could confidently debate philosophy and banter about its favorite TV shows, while improvising puns about cows and horses.

The researchers, Daniel De Freitas and Noam Shazeer, told colleagues that chatbots like theirs, supercharged by recent advances in artificial intelligence, would revolutionize the way people searched the internet and interacted with computers, according to people who heard the remarks. 

They pushed Google to give outside researchers access to the chatbot, tried to get it integrated into the Google Assistant virtual helper and later asked Google to make a public demo available. 

Google executives rebuffed them at multiple turns, saying in at least one instance that the program didn’t meet company standards for the safety and fairness of AI systems, the people said. The pair quit in 2021 to start their own company to work on similar technologies, telling colleagues that they had been frustrated they couldn’t get their AI tool at Google out to the public. 

Now Google, the company that helped pioneer the modern era of artificial intelligence, finds its cautious approach to that very technology being tested by one of its oldest rivals. Last month Microsoft Corp. announced plans to infuse its Bing search engine with the technology behind the viral chatbot ChatGPT, which has wowed the world with its ability to converse in humanlike fashion. ChatGPT was developed by OpenAI, a seven-year-old startup co-founded by Elon Musk, and piggybacked on early AI advances made at Google itself. 

Months after ChatGPT’s debut, Google is taking steps toward publicly releasing its own chatbot based in part on technology Mr. De Freitas and Mr. Shazeer worked on. Under the moniker Bard, the chatbot draws on information from the web to answer questions in a conversational format. Google said on Feb. 6 it was testing Bard internally and externally with the aim of releasing it widely in coming weeks. It also said it was looking to build similar technology into some of its search results.

Google’s relatively cautious approach was shaped by years of controversy over its AI efforts, from internal arguments over bias and accuracy to the public firing last year of a staffer who claimed that its AI had achieved sentience. 

Those episodes left executives wary of the risks public AI product demos could pose to its reputation and the search-advertising business that delivered most of the nearly $283 billion in revenue last year at its parent company, Alphabet Inc., according to current and former employees and others familiar with the company. 

"Google is struggling to find a balance between how much risk to take versus maintaining thought leadership in the world," said Gaurav Nemade, a former Google product manager who worked on the company’s chatbot until 2020.

Messrs. De Freitas and Shazeer declined requests for an interview through an external representative. 

A Google spokesman said their work was interesting at the time, but that there is a big gap between a research prototype and a reliable product that is safe for people to use daily. The company added that it has to be more thoughtful than smaller startups about releasing AI technologies. 

Google’s approach could prove to be prudent. Microsoft said in February it would put new limits on its chatbot after users reported inaccurate answers and, at times, unhinged responses when pushing the app to its limits. 

In an email to Google employees last month, Sundar Pichai, chief executive of both Google and Alphabet, said some of the company’s most successful products weren’t the first to market but earned user trust over time.

"This will be a long journey—for everyone, across the field," Mr. Pichai wrote. "The most important thing we can do right now is to focus on building a great product and developing it responsibly." 

Google’s chatbot efforts go as far back as 2013, when Google co-founder Larry Page, then CEO, hired Ray Kurzweil, a computer scientist who helped popularize the idea that machines would one day surpass human intelligence, a concept known as "technological singularity." 

Mr. Kurzweil began working on multiple chatbots, including one named Danielle based on a novel he was working on at the time, he said later. Mr. Kurzweil declined an interview request through a spokeswoman for Kurzweil Technologies Inc., a software company he started before joining Google.

In 2014, Google also purchased DeepMind, a British artificial-intelligence company with a similar mission of creating artificial general intelligence, or software that could mirror human mental capabilities.

At the same time, academics and technologists increasingly raised concerns about AI—such as its potential for enabling mass surveillance via facial-recognition software—and pressured companies such as Google to commit not to pursue certain uses of the technology. 

Partly in response to Google’s growing stature in the field, a group of tech entrepreneurs and investors including Mr. Musk formed OpenAI in 2015. Initially structured as a nonprofit, OpenAI said it wanted to make sure AI didn’t fall prey to corporate interests and was instead used for the good of humanity. (Mr. Musk left OpenAI’s board in 2018.)

Google eventually promised in 2018 not to use its AI technology in military weapons, following an employee backlash against the company’s work on a U.S. Department of Defense contract called Project Maven that involved automatically identifying and tracking potential drone targets, like cars, using AI. Google dropped the project. 

Mr. Pichai also announced a set of seven AI principles to guide the company’s work, including that AI tools should be accountable to people, "built and tested for safety" and designed to avoid creating or reinforcing unfair bias. 

Around that time, Mr. De Freitas, a Brazilian-born engineer working on Google’s YouTube video platform, started an AI side project. 

As a child, Mr. De Freitas dreamed of working on computer systems that could produce convincing dialogue, his fellow researcher Mr. Shazeer said during a video interview uploaded to YouTube in January. At Google, Mr. De Freitas set out to build a chatbot that could mimic human conversation more closely than any previous attempts.

For years the project, originally named Meena, remained under wraps while Mr. De Freitas and other Google researchers fine-tuned its responses. Internally, some employees worried about the risks of such programs after Microsoft was forced in 2016 to end the public release of a chatbot called Tay after users goaded it into problematic responses, such as support for Adolf Hitler.

The first outside glimpse of Meena came in 2020, in a Google research paper that said the chatbot had been fed 40 billion words from social-media conversations in the public domain.

OpenAI had developed a similar model, GPT-2, based on 8 million webpages. It released a version to researchers but initially held off on making the program publicly available, saying it was concerned it could be used to generate massive amounts of deceptive, biased or abusive language.

At Google, the team behind Meena also wanted to release their tool, even if only in a limited format as OpenAI had done. Google leadership rejected the proposal on the grounds that the chatbot didn’t meet the company’s AI principles around safety and fairness, said Mr. Nemade, the former Google product manager.

A Google spokesman said the chatbot went through many reviews and was barred from wider release for various reasons over the years.

The team continued working on the chatbot. Mr. Shazeer, a longtime software engineer at the AI research unit Google Brain, joined the project, which they renamed LaMDA, for Language Model for Dialogue Applications. They injected it with more data and computing power. Mr. Shazeer had helped develop the Transformer, a widely heralded new type of AI model that made it easier to build increasingly powerful programs like the ones behind ChatGPT.

However, the technology behind their work soon led to a public dispute. Timnit Gebru, a prominent AI ethics researcher at Google, said in late 2020 she was fired for refusing to retract a research paper on the risks inherent in programs like LaMDA and then complaining about it in an email to colleagues. Google said she wasn’t fired and claimed her research was insufficiently rigorous.

Google’s head of research, Jeff Dean, took pains to show Google remained invested in responsible AI development. The company promised in May 2021 to double the size of the AI ethics group. 

A week after the vow, Mr. Pichai took the stage at the company’s flagship annual conference and demonstrated two prerecorded conversations with LaMDA, which, on command, responded to questions as if it were the dwarf planet Pluto or a paper airplane.

Google researchers prepared the examples days before the conference following a last-minute demonstration delivered to Mr. Pichai, said people briefed on the matter. The company emphasized its efforts to make the chatbot more accurate and minimize the chance it could be misused.

"Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks," two Google vice presidents said in a blog post at the time. 

Google later considered releasing a version of LaMDA at its flagship conference in May 2022, said Blake Lemoine, an engineer the company fired last year after he published conversations with the chatbot and claimed it was sentient. The company decided against the release after Mr. Lemoine’s conclusions began generating controversy internally, he said. Google has said Mr. Lemoine’s concerns lacked merit and that his public disclosures violated employment and data-security policies.

As far back as 2020, Mr. De Freitas and Mr. Shazeer also looked for ways to integrate LaMDA into Google Assistant, a software application the company had debuted four years earlier on its Pixel smartphones and home speaker systems, said people familiar with the efforts. More than 500 million people were using Assistant every month to perform basic tasks such as checking the weather and scheduling appointments.

The team overseeing Assistant began conducting experiments using LaMDA to answer user questions, said people familiar with the efforts. However, Google executives stopped short of making the chatbot available as a public demo, the people said.

Google’s reluctance to release LaMDA to the public frustrated Mr. De Freitas and Mr. Shazeer, who took steps to leave the company and begin working on a startup using similar technology, the people said.

Mr. Pichai personally intervened, asking the pair to stay and continue working on LaMDA but without making a promise to release the chatbot to the public, the people said. Mr. De Freitas and Mr. Shazeer left Google in late 2021 and incorporated their new startup, Character Technologies Inc., in November that year.

Character’s software, released last year, allows users to create and interact with chatbots that role-play as well-known figures such as Socrates or stock types such as psychologists. 

"It caused a bit of a stir inside of Google," Mr. Shazeer said in the interview uploaded to YouTube, without elaborating, "but eventually we decided we’d probably have more luck launching stuff as a startup."  

Since Microsoft struck its new deal with OpenAI, Google has fought to reassert its identity as an AI innovator.

Google announced Bard in February, on the eve of a Microsoft event introducing Bing’s integration of OpenAI technology. Two days later, at an event in Paris that Google said was originally scheduled to discuss more regional search features, the company gave press and the broader public another glimpse of Bard, as well as a search tool that used AI technology similar to LaMDA to generate textual responses to search queries.

Google said that it often reassesses the conditions to release products and that because there is a lot of excitement now, it wanted to release Bard to testers even if it wasn’t perfect. 

Since early last year, Google has also had internal demonstrations of search products that integrate responses from generative AI tools like LaMDA, Elizabeth Reid, the company’s vice president of search, said in an interview. 

One area where the company sees generative AI as most useful in search is a category of queries with no one right answer, which Google calls NORA, where the traditional blue Google link might not satisfy the user. Ms. Reid said the company also sees potential search use cases for other types of complex queries, such as solving math problems. 

As with many similar programs, accuracy remained an issue, executives said. Such models have a tendency to invent a response when they don’t have sufficient information, something researchers call "hallucination." Tools built on LaMDA technology have in some cases invented fictional restaurants or given off-topic answers when asked for recommendations, said people who have used them.

Microsoft called the new version of Bing a work in progress last month after some users reported disturbing conversations with the chatbot integrated into the search engine. It introduced changes, such as limiting the length of chats, aimed at reducing the chances the bot would spout aggressive or creepy responses. Both Google’s and Microsoft’s previews of their bots in February included factual inaccuracies produced by the programs. 

"It’s sort of a little bit like talking to a kid," Ms. Reid said of language models like LaMDA. "If the kid thinks they need to give you an answer and they don’t have an answer, then they’ll make up an answer that sounds plausible."

Google continues to fine-tune its models, including training them to know when to profess ignorance instead of making up answers, Ms. Reid said. The company added that it has improved LaMDA’s performance on metrics like safety and accuracy over the years. 

Integrating programs like LaMDA, which can synthesize millions of websites into a single paragraph of text, could also exacerbate Google’s long-running feuds with major news outlets and other online publishers by starving websites of traffic. Inside Google, executives have said the company must deploy generative AI in results in a way that doesn’t upset website owners, in part by including source links, according to a person familiar with the matter.  

"We’ve been very careful to take care of the ecosystem concerns," said Prabhakar Raghavan, the Google senior vice president overseeing the search engine, during the event in February. "And that’s a concern that we intend to be very focused on."

Sarah Krouse contributed to this article.