ChatGPT could increase 'threat vector' for cyberattacks and misinformation, experts warn: 'Deepfake generator'

Experts say criminals can develop new code, malware and attacks with less time and complexity using AI

Cybersecurity and artificial intelligence (AI) experts speaking with Fox News Digital warned that tools like ChatGPT could reduce the time and resources necessary for criminals to engage in cyberattacks and misinformation campaigns. 

Kaseya Chief Information Security Officer Jason Manar said AI is particularly troubling because of its ability to generate convincing but completely fake videos and images of real people, known as deepfakes.

In 2022, the Department of Homeland Security (DHS) released a document highlighting the "Increasing Threat of Deepfake Identities" to national security, including examples of false videos featuring former President Obama, Meta's Mark Zuckerberg, podcaster Joe Rogan, actor Tom Cruise and more. Many of these videos had been viewed millions of times each.

Manar said to imagine a political campaign where you have a deepfake of a candidate with the right background and even the right people for a particular event. Now imagine this politician says something off-color or divisive, something that would hurt the campaign. Now imagine that happens the week before an election.

In this photo illustration, the welcome screen for the OpenAI "ChatGPT" app is displayed on a laptop screen on February 03, 2023, in London, England. (Leon Neal/Getty Images)

"You get to the point where you either believe everything you see or have what you call a liar's dividend where maybe it's actually true, but because of these deep fakes, the person comes out and goes 'absolutely not it's a deep fake,' and then the liar gets the benefit of the doubt," Manar said. "It becomes really hard to prove even what we see with our own eyes and a recording."

NASA JPL Chief Technology and Innovation Officer Dr. Chris Mattmann, who also serves as the laboratory's division manager for artificial intelligence, pointed to recently circulated images of former President Trump being arrested and the Pope sporting a Balenciaga coat as examples of how artificial intelligence can create deepfakes that fool internet users.

Mattmann, whose work has previously been funded by DHS and the Defense Advanced Research Projects Agency (DARPA), said the impact of these deepfakes will vary from person to person, and that older generations or people with busy lives may be less likely to successfully discern fact from fiction.

He also noted that these types of deepfakes will likely become more frequent and more realistic with the help of tools like ChatGPT, which Mattmann described as a "widely available and easy to use deepfake generator."

According to a UBS study, ChatGPT reached 100 million monthly active users in January, making it the fastest-growing consumer application ever. With that in mind, Mattmann said the mass appeal of the tool increases the "threat vector space."

"Now anyone for $20 a month can do this," Mattmann said.

"Any one of them becomes someone with the capacity, like a media company, to push out a vast amount of content that does not truly exist. That's wild from a national security perspective," he added.

A man holds a laptop computer as cyber code is projected on him in this illustration picture taken on May 13, 2017. (REUTERS/Kacper Pempel/Illustration)

Neatsun Ziv, co-founder and CEO of OX Security, which bills itself as the first end-to-end software supply chain security solution, agreed that ChatGPT's mainstream adoption, which gives millions access to its data and capabilities, could lead to trouble.

"If we thought the fake news era on Twitter and Facebook were really bad, I think that right now we're headed towards another level," he said.

For example, ChatGPT's dataset comprises knowledge that already exists in the market. A company that is not keenly aware of the knowledge these machines possess, and that is unable to build systems to identify it, could be susceptible to attacks, Ziv said.

"It is just a matter of time until they will be hacked. This is what we're seeing right now on a weekly basis," he added.  

Ziv also noted that once cybergangs fully understand and acquire these tools, they will be able to abuse them to target companies dealing with insurance and fraud cases, as well as critical government infrastructure.

"You can write code simply by describing what you want to do," he said. "So the effort to create new things, new code, new malware, new attacks is being reduced and it is not just reduced in the time, but it is also reduced in the complexity."

Recently, security researchers used ChatGPT to create a new strain of polymorphic malware.

According to a technical report by CyberArk security researchers, the malware could easily evade "security products and make mitigation cumbersome with very little effort or investment by the adversary."

To create the malware, the team bypassed the content filters that prevent the chatbot from producing dangerous tools, simply by posing the same question over and over in a more authoritative fashion or by using the API version of the program.

"You can do good things much more at scale with artificial intelligence," CyberSheath CEO Eric Noonan said. "On the flip side of that coin, a bad actor can do bad things more efficiently and at scale."

"Many times, it doesn't matter really where the attack came from, be it a nation-state and where it came from, be it artificial intelligence or traditional means, because as a defender, in that moment, what you're focused on really is defense, recovery and resilience," he added.

Noonan, who served on the Council on Cyber Security expert panel, said AI would likely become a more significant concern as it matures. For now, he highlighted the importance of ensuring critical sectors are adequately protected and have the proper mitigation strategies in place.

"As we look at the breaches we've seen, whether it's the Office of Personnel Management, Solar Winds, Colonial Pipeline, we know that these critical infrastructure sectors are vulnerable and can be breached today and so AI is potentially another tool adversaries can use to be more effective at scale," he said.

Manar noted that most attacks are due to management or configuration changes and human error, such as when an employee clicks on a phishing or whaling attempt. Phishing attacks masquerade as legitimate emails to trick recipients into handing over access; whaling attacks are phishing attempts aimed specifically at senior executives.

The Welcome to ChatGPT lettering of the US company OpenAI can be seen on a computer screen. (Silas Stein/picture alliance via Getty Images)

Manar, who previously served as the FBI assistant special agent in charge overseeing all cyber, counterintelligence, intelligence and language service programs for the San Diego office, said that if a criminal could get access to samples of a person's writing or emails, AI could tailor an attack to however the target is used to communicating or receiving information.

"AI will help this out tremendously by crafting intelligent, pinpointed, targeted responses," he said.

As an example, Manar noted that during his time working healthcare fraud for the FBI, criminals would often pop up on the government's radar because they used Centers for Medicare & Medicaid Services (CMS) billing codes that were out of the ordinary, very expensive and not frequently used.

He said that today, someone could ask ChatGPT for the most frequently used CMS home healthcare billing codes, and the chatbot would list the top five.

"It will help you stay protected through obfuscation, right? You're going to blend into the crowd," Manar said. "It makes the criminal element a little smarter and allows them information that they otherwise wouldn't have or would require them to do an amount of research that they just typically don't put into it."

Noonan also noted that a bad actor's "data poisoning" of a benevolent AI system within a company could shut down a business in the same way an attack on a payroll system could shut down the manufacturing side of a business.

A cybersecurity IT engineer works on protecting a network against cyberattacks from hackers on the internet. (iStock)

"If they're using AI to make decisions in their business and within the critical infrastructure sectors, that AI is then another vulnerability," he said. "So, they may be using it for an adjacent business purpose. But if an adversary can get in and corrupt benevolent AI to make nefarious decisions, that's another risk that I don't know a lot of thought has gone into."

To prevent these types of attacks, Noonan said enforcing and implementing mandatory minimum cybersecurity requirements is paramount.

Noonan also pointed to President Biden's May 12, 2021, executive order on cybersecurity, which pushes for implementing "zero trust" architecture and improves the reporting of cyber threats to government entities, as a good first step.