Former Google employee: How a 'code red' meeting and ChatGPT led execs to take 'shortcuts' in Gemini AI launch

The former employee said Google's AI was pushed up to leadership despite warnings and execs abandoned 'fairness' to launch the product

Google abandoned "fairness" and took major "shortcuts" to launch the Gemini artificial intelligence (AI) chatbot despite internal concerns, according to a former high-level employee.

Before release, Gemini, formerly known as Bard, underwent an AI Principles Review. The experts responsible for reviewing the model said it was unsafe and Google should not turn the AI on.

According to the source, then-Director of Responsible Innovation (RESIN) Jen Gennai allegedly edited the responses and pushed the product up to leadership despite these warnings.

Before it was disbanded, RESIN reviewed internal projects to ensure compliance with Google's AI principles.

A former high-level Google employee has expressed concern about the internal policies and decision-making inside the company that may have led to issues with the Gemini artificial intelligence (AI). (SEBASTIEN BOZON/AFP/Michael M. Santiago/Steve Taylor/SOPA Images/LightRocket via Getty Images / Getty Images)

Gennai said her decision was sound because it was a preview product and review steps for demos were not typical. Still, the Google source, who spoke with Fox News Digital on the condition of anonymity, said her claim was "100% bull---."

Some employees wanted to examine Bard's data sets and embeddings because initial results were "concerning." One employee even suggested building a tool to determine what the model had learned from its data, the source claims.

"They said no, f--- it. We got to get to market because we are losing to ChatGPT," the source claimed.

In a statement to Fox News Digital, a Google spokesperson said, "These are old claims that reflect a single former employee’s view and are an inaccurate representation of how Google launches its AI products. We are committed to developing this new technology in a responsible way."

Up until November 2022, Google was the dominant leader in AI. In fact, Google was so successful that it could give away its tools for other people to build entire companies on. Much of OpenAI's business model, including the technological bones of ChatGPT, was developed using research that Google had published.

The release of ChatGPT was seen as a massive threat to Google's business model. YouTube's ad revenue had been shrinking year over year as TikTok slowly carved into the earnings of Google's flagship video-sharing platform, a trend that was negatively impacting Google Search as well. The company's core revenue products were, and still are, being harmed by competitors in the market.

Out of nowhere, ChatGPT came onto the scene with better AI. Even more alarming, Google had no competing product.

The news sent the company into a tailspin, with Google declaring a "code red," according to The New York Times.

"The thing that has sustained them and made them who they are--they are no longer the market leader in. [ChatGPT] is a real, genuine threat to their whole business," the source said.

A screen displays the logos of Bard AI, a conversational artificial intelligence software application developed by Google, and ChatGPT. (LIONEL BONAVENTURE/AFP via Getty Images / Getty Images)

Google CEO Sundar Pichai was involved in numerous meetings with executives to determine Google's response to ChatGPT, upending various groups inside the company and reorienting internal strategies.

Teams within Google Research and other vital departments were reassigned. YouTube Ads, Search and generative AI became priority number one. They needed to catch up, and they took "shortcuts" to get there, the former employee said.

"They basically made a strategy decision to say, generative AI; we have to get on it. We don't care about fairness anymore. We don't care about bias, ethics. As long as it's not producing child sexual abuse material or doing something harmful to a politician that could potentially affect our image, we're going to throw s---out there," the former Google employee said.

After Gennai's RESIN team was absorbed by the Office of Compliance and Integrity and the trust and safety teams under President of Global Affairs Kent Walker, things began to change quickly.

"Even the AI principles review shifted. Even the risk assessment that we were using shifted from looking at how might these models impact our users to what is the business risk on us for these models?" the source told Fox News Digital.

Sundar Pichai speaks onstage at Vox Media's 2022 Code Conference - Day 1 in Beverly Hills, California, on Sept. 6, 2022. (Jerod Harris/Getty Images for Vox Media / Getty Images)

But the changes that resulted from Google's new AI strategy grew from a culture at the company that promotes one goal above all else: launching and landing new products, the source said.

A product launch typically involves optimizing technology, marketing, roadshows and exhibitions, security and more. A launch is meant to verify that the technology has been properly and thoroughly tested and that early adopters or beta testers have been brought into the mix.

Products are also expected to "land," meaning the rollout to the public goes smoothly and the company has a plan for potential issues.

The higher up Google's ladder an employee goes, the more they are expected to demonstrate leadership. This often means an employee's "impact," essentially an assessment documenting their contributions to Google, is measured by creating new things rather than maintaining what already exists or improving the performance of established products.

"You want to get promoted, you have to launch and you have to land," the former Google employee said.

According to the source, this mentality at the company, combined with rapid department changes and a lack of proper coordination, was a likely factor in the ensuing chaos once Gemini was released to the public.

Furthermore, there was no "forcing function" to make, for example, the generative AI team or the image search team work with the fairness team or those working on responsible data, because they sat under different leadership. Those in charge of reviews could ask the teams to test products in development, and the teams could simply say no, the ex-Google employee said.

Jen Gennai was one of several employees accused of pushing progressive policies at Google. (Smith Collection/Gado/Rafael Henrique/SOPA Images/LightRocket via Getty Images/Twitter/Screenshot / Getty Images)

Even the way reviews were assigned appeared to lack any consistent methodology, the source claims.

Google had a system called Case Assessment and Operation Sync, ironically abbreviated "CHAOS," in which people would look at submissions related to issues cropping up across teams and allocate them to different employees.

There was no deadline to complete a review unless someone specified that the product was trying to launch. An employee could submit an AI principles review to someone who might then be out of the office for a significant period, and no alert system existed, because CHAOS was not a ticketing system but rather a Google form.

This led to situations where reviews were completed by employees who lacked an AI or technical background, the former employee said.

"You have people reviewing these things who don't understand how they fundamentally work," the source said.

The source added that this issue was advantageous to people who wanted to climb the corporate ladder. Those who led departments could change the scope of who was completing reviews and technical assessments, and people were moved around and asked to complete reviews for things they did not understand.

If reviews were not thoroughly completed by people who deeply understood the technology, they could be left unable to troubleshoot problems or to stop a product with substantial issues from launching.

The Google AI logo is displayed on a smartphone with Gemini in the background. (Jonathan Raa/NurPhoto via Getty Images / Getty Images)

But those who took credit for leading the teams behind new product launches could expedite their timelines and ensure they registered a high "impact."

Following backlash to Gemini, Prabhakar Raghavan, a senior vice president at Google, said two major things went wrong with the AI.

"First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive," he wrote in a Feb. 23 blog post.

Raghavan said Gemini's image generation feature would undergo "extensive testing" before being turned back on.

However, the former employee said that without outside regulation and significant overhauls to the existing structures that should have required extensive testing in the first place, issues are likely to persist in Google products.

Editor's Note: This piece has been updated to include comment from a Google spokesperson.