Criminal business and malicious AI models on the darknet

Security (Pexels, general use)

Check Point has published its new AI Security Report 2025. In it, the security researchers examine how artificial intelligence is changing the cyber threat landscape. The topics range from generative AI models built specifically for hackers (custom GPTs) sold on the darknet, through deepfake attacks, data poisoning and account trading, to the abuse of mainstream generative AI models for cyber attacks and ransomware.

As quickly as artificial intelligence (AI) is being integrated into companies' business processes, it is also bringing about explosive changes in the development of cyber threats. The same technologies that help companies work more efficiently and automate decision-making are being weaponized by hackers.

The first edition of the Check Point Research AI Security Report examines how cyber criminals are not only exploiting mainstream AI platforms, but also developing and distributing tools specifically designed for malicious purposes. The most important findings from the report in brief:

  • AI usage in organizations: 51 percent of all organizations use AI services on a monthly basis. In corporate networks, around 1.25 percent (1 in 80) of all AI prompts contain highly sensitive data.
  • Criminal AI-as-a-service on the dark web: The spam and phishing service GoMailPro, integrated with ChatGPT, costs 500 US dollars (around 442 euros) per month. AI-based telephone services for scam calls cost around 20,000 US dollars (around 17,662 euros), or are offered for a base price of 500 US dollars (around 442 euros) plus 1.50 US dollars (around 1.32 euros) per minute.
  • Trading in AI accounts has increased significantly: Credentials for popular AI services such as ChatGPT, stolen via credential stuffing, phishing and infostealers, are traded so that buyers can anonymously generate malicious content and circumvent usage restrictions (a basic defensive check against this is sketched after this list).
  • AI-powered malware automates and simplifies attacks: Malware groups such as FunkSec already use AI tools in at least 20 percent of their operations to develop malware and analyze stolen data more efficiently.
  • Disinformation campaigns and AI manipulation (poisoning): The Moscow-based disinformation network Pravda produced around 3.6 million articles in 2024 alone to manipulate AI systems. This disinformation surfaced in the answers of leading Western AI systems in around 33 percent of queries.
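
Stolen or resold AI accounts are often just recycled passwords in disguise, so one basic, widely used defense against the credential trade described above is to screen passwords against a breach corpus at login or registration. The following minimal sketch uses the public Have I Been Pwned "Pwned Passwords" range API; the `password_breach_count` helper is our own illustrative wrapper, not something taken from the Check Point report.

```python
import hashlib
import urllib.request

def password_breach_count(password: str) -> int:
    """Check a password against the Have I Been Pwned 'Pwned Passwords'
    range API. The API uses k-anonymity: only the first five characters
    of the SHA-1 hash ever leave this machine."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; match our suffix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = password_breach_count("hunter2")
    print(f"Seen in {hits} known breaches" if hits else "Not in known breaches")
```

The k-anonymity design matters here: because only a five-character hash prefix is transmitted, the breach-check service never learns which password is being checked.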

In an increasingly AI-driven digital world, defenders should be aware of the following growing threats when securing systems and users. An analysis of data collected by Check Point's GenAI Protect shows that 1 in 80 GenAI prompts poses a high risk of sensitive data loss. The data also shows that 7.5 percent of prompts – about one in thirteen – contain potentially sensitive information, posing critical security, compliance and data integrity challenges. As organizations increasingly integrate AI into their operations, understanding these risks is more important than ever.
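
Check Point does not disclose how GenAI Protect classifies a prompt as high risk, but the underlying idea can be illustrated with a naive pattern-based screen applied before a prompt leaves the corporate network. The sketch below is a minimal, hypothetical example; the patterns and the `screen_prompt` helper are our own and stand in for the far richer detection (ML classifiers, validators, context analysis) a real DLP engine would use.

```python
import re

# Naive indicators of sensitive content, purely for illustration.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of all sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize this: contact jane.doe@example.com, "
                             "card 4111 1111 1111 1111")
    if findings:
        print("Blocked, prompt contains:", ", ".join(findings))
```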

As artificial intelligence evolves, the techniques used by threat actors change with it. Autonomous, interactive deepfakes exacerbate the threat of social engineering by enabling deceptively real impersonations of people the victim knows. Text and audio imitation was only the first stage and is already advanced enough to generate convincing new content in real time; real-time video impersonation is therefore only a matter of time.

Security researchers are also concerned about LLM (Large Language Model) poisoning, a cyber security threat in which training datasets are altered to contain malicious content that the AI model then reproduces. Despite the stringent data validation measures taken by major AI vendors such as OpenAI and Google, there have been successful poisoning attacks, including the upload of 100 compromised AI models to Hugging Face (an open source platform for AI models).
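
The report itself does not prescribe countermeasures, but a basic supply-chain safeguard against tampered model artifacts, such as those uploaded to Hugging Face, is to pin and verify file hashes before loading anything. A minimal sketch using Python's standard hashlib, assuming the publisher provides a trusted SHA-256 digest (file name and digest below are placeholders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact does not match the pinned hash."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(
            f"Hash mismatch for {path}: expected {expected_sha256}, got {actual}")

if __name__ == "__main__":
    # Placeholder values: pin the digest published by a source you trust.
    verify_model(Path("model.safetensors"),
                 "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
```

Verifying a hash only proves the file is the one the publisher signed off on; it does not prove the model itself was trained on clean data, so it complements rather than replaces dataset validation.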

The use of AI in cybercrime is no longer just theoretical. It is evolving alongside the general adoption of AI and, in many cases, faster than traditional security controls can adapt. The findings of Check Point's AI Security Report suggest that defenders must now assume that AI will be used not only against them, but also against their systems, platforms and the identities they manage. The security researchers have compiled many details from the AI report in this Check Point blog post.
