Artificial intelligence is creating new, more complex cybersecurity challenges. It may also hold the solutions.
The uses for artificial intelligence (AI) are seemingly endless. AI can anticipate the next song we’d like to hear, break down advanced concepts into easy-to-understand terms and help companies operate more efficiently with automated processes.
But it can also be used to support unethical or illegal practices, from cheating in school to outright fraud. Generative AI in particular, which creates new content such as text and images through deep learning, can be turned toward gaining access to money or sensitive data. The ubiquity of generative AI may account for a projected rise in cybercrime to $10.5 trillion in 2025, from less than $3 trillion in 2020¹.
“Hackers are using AI in increasingly inventive ways,” Raymond James Vice President of Technology Jeff Griffith said while discussing the threats and opportunities AI creates for cybersecurity specialists. “But so are we.”
AI is making familiar scams more elaborate and easier than ever to launch. Phishing emails, for example, used to be relatively easy to spot thanks to the spelling and grammatical errors typical of humans posing as someone they're not. Today, cybercriminals can prompt AI to write an email, in any language, that sounds natural and relatable. They can also use AI to hyper-personalize a scam, making it more relevant to each recipient.
Another concern is the emergence of “deepfakes.” Hackers can easily clone voices and make them say virtually anything. “It’s turnkey and easy to do,” Griffith said, “and not expensive. I cloned a well-known voice for a demo. I didn’t even have to say who it was. I just played it, and everyone knew right away.”
The same technology can be used for video. Right now, you may be able to recognize a deepfake by finding errors in the details, like people with extra fingers or misplaced limbs, but the technology is developing rapidly.
“Compared to where it was two years ago, you can expect not to be able to tell two years from now,” Griffith said.
Fortunately, this same technology can also be used to mitigate these threats. Many companies – Raymond James included – are already using sophisticated AI cybersecurity tools to defend their systems and protect their data.
For example, AI can quickly analyze communications and scan vast records at a speed and accuracy no human reader could match. Businesses can also use AI to simulate the attack scenarios cybercriminals might use, so they can understand how to defend against them.
“It’s a never-ending race between the good guys and the bad guys,” said Griffith. “You build a castle and fortify it, and criminals look for a way in. That’s why it’s important to use AI – because the bad guys do.”
Some organizations use generative AI tools to help them write stronger code, faster. Cybercriminals use those tools, too – to build malware with the speed of a hundred engineers working simultaneously.
“We are using AI to help our developers write better code faster, identify vulnerabilities before they happen and protect our systems in real time,” Griffith said. “It amplifies our experts’ abilities to secure the environment and deliver great functionality – keeping us at the forefront of innovation and safety.”
Source: ¹Cybersecurity Ventures