
Criminals’ Adoption of AI Remains Cautious, Report Finds

Despite public concerns about the potential for cyberattacks powered by artificial intelligence (AI), criminals have been slow to fully embrace the technology, according to a new report from cybersecurity firm Trend Micro.

Eight months after the company first reported on criminals using generative AI, an updated analysis found that while malicious actors continue to exploit the technology, they are largely focused on simpler applications rather than developing advanced AI-enabled malware.

“Criminals are using generative AI capabilities for two main purposes: developing malware and improving social engineering tricks,” Trend Micro said in the report. In particular, criminals are leveraging AI language models to craft more convincing phishing emails and scam scripts.

Jailbreaking Chatbots

Hackers have begun offering “jailbreak-as-a-service,” using prompts designed to trick commercial AI chatbots like ChatGPT into generating content they typically prohibit, such as instructions for illegal activities or explicit material, according to the report. Some services, like BlackhatGPT, falsely market themselves as original AI models but, upon inspection, merely provide an interface for sending jailbreaking prompts to existing systems like OpenAI’s API.

Customized models on platforms like flowgpt.com, which allow users to create AI agents that follow specific prompts, are also being abused for criminal purposes. Meanwhile, fraudulent services continue to proliferate, with scams like FraudGPT promising AI capabilities they never deliver.

Cybercriminals are also beginning to offer “deepfake” services to help fraudsters bypass identity verification systems at banks and other institutions. Using stolen ID photos, they generate synthetic images to fool know-your-customer (KYC) checks. These services are advertised on forums and chat apps, and prices range from $10 per image to $500 per minute of video.

However, the technology still struggles to convincingly impersonate people, particularly to targets who know the individual being imitated. Deepfake audio may prove more effective in scams like fake kidnappings. Broader deepfake-powered attacks impersonating executives remain a concern but have yet to materialize at scale.

Trend Micro expects the adoption of AI attacks to remain gradual over the next 12 to 24 months as criminals weigh costs and risks against existing methods that already work. The high cost and technical difficulty of training criminal AI models on malware data, as with WormGPT, deter most would-be attackers.

The firm said fortifying cyberdefenses now, before AI-enabled attacks become more severe, should be a priority for organizations looking to get ahead of the evolving threat. Proactively strengthening security postures and monitoring criminal forums can help prepare for worst-case scenarios involving AI.
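
As a rough illustration of the forum-monitoring side of that preparation, the sketch below scans a batch of posts for terms tied to the offerings described in the report. The sample posts, watch-list terms, and flag_posts helper are hypothetical assumptions made for this example, not part of Trend Micro’s tooling.

```python
# Minimal sketch of keyword-based monitoring of underground-forum posts.
# The sample posts, watch terms, and flag_posts helper are hypothetical and
# for illustration only; a real pipeline would ingest threat-intel feeds.

WATCH_TERMS = [
    "jailbreak-as-a-service",
    "deepfake kyc",
    "fraudgpt",
    "wormgpt",
    "blackhatgpt",
]


def flag_posts(posts: list[dict]) -> list[dict]:
    """Return posts whose title or body mentions any watched term."""
    flagged = []
    for post in posts:
        text = f"{post.get('title', '')} {post.get('body', '')}".lower()
        hits = [term for term in WATCH_TERMS if term in text]
        if hits:
            flagged.append({**post, "matched_terms": hits})
    return flagged


if __name__ == "__main__":
    sample_posts = [  # hypothetical sample data
        {"title": "Selling deepfake KYC bypass, $10/image", "body": "DM for samples"},
        {"title": "Gaming rig for sale", "body": "barely used"},
    ]
    for post in flag_posts(sample_posts):
        print(post["title"], "->", post["matched_terms"])
```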

AI Arms Race

The findings underscore AI’s growing yet uneven adoption in the cyber realm, foreshadowing an emerging arms race between defenders and malicious actors. As generative AI matures and becomes more accessible, its appeal to cybercriminals will likely grow, increasing the need for robust countermeasures.

As PYMNTS recently reported, AI is transforming how security teams handle cyberthreats by automating the initial stages of incident investigation, analyzing vast amounts of data, and identifying complex patterns, allowing security professionals to begin their work with a clear understanding of the situation and speeding up response times.
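
As a rough, rule-based stand-in for that kind of first-pass automation (the AI model itself is out of scope here), the sketch below groups raw security events by host and ranks them by a simple severity score, so an analyst starts from a prioritized summary rather than raw logs. The event fields, weights, and triage helper are hypothetical assumptions for illustration, not any vendor’s API.

```python
# Minimal sketch of automated first-pass triage of security events.
# Event fields and scoring heuristics are hypothetical; real deployments would
# pull events from a SIEM and use far richer models than hand-written rules.

from collections import defaultdict

# Hypothetical weights for common event types.
SEVERITY_WEIGHTS = {
    "failed_login": 1,
    "malware_signature": 5,
    "data_exfiltration": 8,
}


def triage(events: list[dict]) -> list[dict]:
    """Group events by source host and return hosts ranked by total risk score."""
    scores = defaultdict(lambda: {"score": 0, "events": []})
    for event in events:
        weight = SEVERITY_WEIGHTS.get(event["type"], 2)  # default for unknown types
        host = scores[event["host"]]
        host["score"] += weight
        host["events"].append(event["type"])
    ranked = [{"host": h, **info} for h, info in scores.items()]
    return sorted(ranked, key=lambda item: item["score"], reverse=True)


if __name__ == "__main__":
    sample_events = [  # hypothetical sample data
        {"host": "web-01", "type": "failed_login"},
        {"host": "db-02", "type": "data_exfiltration"},
        {"host": "db-02", "type": "malware_signature"},
    ]
    for summary in triage(sample_events):
        print(summary)
```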

Trend Micro’s analysis provides a valuable glimpse into the current state of play. While AI-powered attacks have yet to materialize fully, the groundwork is being laid through jailbreaking services, deepfakes, and malware development. The trajectory points to a future where AI is increasingly harnessed for defensive and offensive purposes in cyberspace.

For companies, this means investing in technical defenses, AI-focused cybersecurity talent, and threat intelligence. Staying ahead of criminals’ adoption curve will require proactive strategies, agile responses, and a commitment to ongoing research and innovation. As the cyberthreat landscape evolves, preparedness will hinge on the ability to anticipate and mitigate AI’s growing role in the hands of malicious actors.
