
Since its discovery in May 2024 by the Sysdig Threat Research Team (TRT), LLMjacking has emerged as a significant security concern for organizations that rely on large language models (LLMs). The latest target is DeepSeek, a fast-rising AI lab whose models were exploited by cybercriminals within days of release.
What Is LLMjacking?
LLMjacking is a form of credential abuse in which attackers steal API keys and cloud access credentials to run costly AI models without authorization. The stolen credentials often power illicit AI services whose access is resold on underground markets, leaving the legitimate account holders to foot the bill.
Why DeepSeek Became a Target
DeepSeek launched its DeepSeek-V3 model in December 2024, quickly gaining popularity. Within days, cybercriminals had integrated it into unauthorized proxy services. The same pattern repeated in January 2025 with DeepSeek-R1, underscoring how attackers track new AI models to exploit them as soon as they gain traction.
The Role of OpenAI Reverse Proxies (ORP)
A key tool in these operations is the OpenAI Reverse Proxy (ORP): reverse-proxy software that fronts stolen API keys behind a single endpoint while hiding its operators behind dynamic domains and masked IPs. These proxies let cybercriminals monetize stolen keys, with access often sold on illicit marketplaces.
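At its core, an ORP is just the standard reverse-proxy pattern: clients talk to the proxy, and the proxy replays their requests upstream with a key the clients never see. The Python sketch below shows that pattern at its simplest. It is illustrative only; the upstream URL, route, and placeholder key are assumptions, and it is not the code of any actual ORP, which layers key pools, rotation, and access controls on top.

```python
# Minimal sketch of the reverse-proxy pattern an ORP is built on.
# The upstream URL and placeholder key are illustrative assumptions.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

UPSTREAM = "https://api.openai.com"  # provider the proxy fronts
SERVER_SIDE_KEY = "sk-..."           # key held by the proxy operator;
                                     # proxy clients never see it

@app.route("/v1/<path:endpoint>", methods=["POST"])
def proxy(endpoint: str) -> Response:
    # Replay the client's request upstream, swapping in the server-side key.
    # (Streaming responses are omitted to keep the sketch short.)
    upstream = requests.post(
        f"{UPSTREAM}/v1/{endpoint}",
        headers={
            "Authorization": f"Bearer {SERVER_SIDE_KEY}",
            "Content-Type": "application/json",
        },
        data=request.get_data(),
        timeout=60,
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )
```

This is the same mechanism legitimate API gateways use; what makes an ORP abusive is that the server-side key is stolen and the usage lands on someone else's bill.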
One such proxy, hosted at vip[.]jewproxy[.]tech, sold access for $30 per month. In just a few days, its logs showed millions of tokens processed, translating into tens of thousands of dollars in unauthorized cloud fees. Claude 3 Opus, one of the most expensive models, alone accounted for nearly $39,000 in stolen usage.
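To see how quickly those charges accumulate, here is a back-of-the-envelope estimate in Python. The per-token prices match Anthropic's published list pricing for Claude 3 Opus ($15 per million input tokens, $75 per million output tokens); the token volumes are hypothetical, chosen only to show what it takes to reach a bill of roughly $39,000.

```python
# Back-of-the-envelope cost estimate for Claude 3 Opus usage,
# using Anthropic's published list prices ($15 / 1M input tokens,
# $75 / 1M output tokens). Token volumes below are hypothetical.
INPUT_PRICE_PER_M = 15.0
OUTPUT_PRICE_PER_M = 75.0

def usage_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# ~1.2B input and ~280M output tokens is enough to hit ~$39,000.
print(f"${usage_cost(1_200_000_000, 280_000_000):,.0f}")  # -> $39,000
```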
How Attackers Evade Detection
LLMjacking communities thrive on platforms like Discord and 4chan, where members exchange tools and techniques. Attackers hide their infrastructure behind TryCloudflare tunnels, and some obscure their front ends with CSS tricks or password walls. API key theft remains central to these operations: stolen credentials are tested for validity, then resold for further exploitation.
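Because TryCloudflare quick tunnels always resolve under trycloudflare.com, defenders can get cheap signal by scanning DNS or egress logs for that suffix. A minimal sketch follows; the log filename and plain-text format are placeholder assumptions, so adapt it to your own logging pipeline.

```python
# Flag log lines referencing *.trycloudflare.com, a telltale of
# TryCloudflare quick tunnels. Log file name and format are assumptions.
import re
from typing import Iterable, Iterator

TUNNEL_RE = re.compile(r"\b[\w-]+\.trycloudflare\.com\b", re.IGNORECASE)

def suspicious_lines(log_lines: Iterable[str]) -> Iterator[str]:
    for line in log_lines:
        if TUNNEL_RE.search(line):
            yield line

with open("dns.log") as fh:  # hypothetical DNS/egress log
    for hit in suspicious_lines(fh):
        print("possible tunnel:", hit.strip())
```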
Steps IT Teams Can Take to Prevent LLMjacking
To safeguard against LLMjacking, IT professionals should:
- Secure API keys with secrets management tools such as AWS Secrets Manager or Azure Key Vault (see the first sketch after this list).
- Use short-lived, temporary credentials instead of long-lived access keys (also shown in the first sketch).
- Monitor usage patterns to detect abnormal API activity (see the second sketch after this list).
- Regularly scan for exposed credentials with tools such as TruffleHog and GitHub Secret Scanning.
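The first two recommendations fit together naturally on AWS. The sketch below, assuming boto3 with hypothetical secret and role names, fetches the LLM API key from Secrets Manager at runtime rather than hardcoding it, and obtains short-lived STS credentials instead of long-lived access keys.

```python
# A sketch of the first two recommendations, assuming AWS and boto3.
# The secret ID and role ARN are hypothetical placeholders.
import boto3

def get_llm_api_key(secret_id: str = "prod/llm-api-key") -> str:
    # Fetch the key at runtime; never hardcode it or commit it to a repo.
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretString"]

def get_temporary_session(role_arn: str, duration: int = 3600) -> boto3.Session:
    # Short-lived STS credentials expire automatically, so a leaked set
    # is far less valuable to an attacker than a long-lived access key.
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn,
        RoleSessionName="llm-usage",
        DurationSeconds=duration,
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```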
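For the third recommendation, even a crude statistical baseline catches the kind of spike LLMjacking produces, since stolen keys tend to be driven far beyond normal volumes. A minimal sketch, assuming you can export daily token counts from your provider's usage dashboard (the history values and the three-sigma threshold are illustrative):

```python
# A minimal sketch of usage-pattern monitoring: compare today's token
# count against a rolling baseline and alert on large spikes.
# Data values and the sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(daily_tokens: list[int], today: int, sigmas: float = 3.0) -> bool:
    baseline, spread = mean(daily_tokens), stdev(daily_tokens)
    return today > baseline + sigmas * max(spread, 1)

history = [2_100_000, 1_950_000, 2_300_000, 2_050_000, 2_200_000]
print(is_anomalous(history, 48_000_000))  # True: ~20x the daily baseline
```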
Conclusion
As AI adoption grows, so do cyber threats targeting these systems. LLMjacking is a costly and evolving risk, with attackers adapting quickly to new AI models like DeepSeek. IT teams must remain vigilant, implementing strong access controls and continuous monitoring to protect against these emerging threats.