Securing large language models: Lessons from the DeepSeek AI breach

By Travis Deforge, Abacus Group

Published: 24 March 2025

As the world increasingly turns to artificial intelligence (AI) and large language models (LLMs) for enhancing business operations, cybersecurity risks have escalated. The recent breach at DeepSeek AI, where over a million critical records were exposed, serves as a stark reminder of the vulnerabilities LLMs pose to organisations. This breach underscores the importance of robust cybersecurity measures, particularly penetration testing, to secure these powerful technologies.

The importance of penetration testing for LLMs

Penetration testing, a process where security experts simulate cyberattacks to identify weaknesses, is essential for securing AI systems, just as it is for traditional applications. However, securing LLMs requires a unique approach. Traditional methods used for websites or mobile apps are not sufficient to address the complexities and specific risks posed by LLMs.

The DeepSeek breach highlights that LLMs, while transformative, are prone to exploitation when not properly tested. Malicious actors can manipulate these systems to access sensitive data, alter information, or even bypass security measures. The critical question organisations must ask is: How can they safeguard their LLMs from these emerging threats?

Unique vulnerabilities in LLMs

LLMs introduce several security challenges that differ from traditional software. One of the most concerning vulnerabilities is prompt injection, where an attacker manipulates input to influence the AI’s output. In the case of DeepSeek, this kind of exploitation could have played a role in the breach, as prompt injection allows attackers to manipulate responses, potentially exposing sensitive information.

Other risks associated with LLMs include insecure plugin designs that could allow unauthorised access to critical systems, as well as data poisoning, where attackers influence the training data to compromise the integrity of the model. The DeepSeek breach emphasises the potential for LLMs to expose data and underscores the need for proactive security measures.

Penetration testing for LLMs: A specialised approach

Penetration testing for LLMs goes beyond traditional web app security testing. LLMs are powered by complex neural networks that require a tailored approach to identify vulnerabilities. At Abacus Group, our cybersecurity experts use specialised methodologies to assess LLM systems for security weaknesses, including prompt injection, flawed plugin configurations, and other AI-specific vulnerabilities.

We work closely with clients to understand the unique architecture of their LLM systems, ensuring that we identify potential threats early in the development process. This proactive approach helps us pinpoint not only existing vulnerabilities but also emerging risks, ensuring that organisations can secure their LLM applications before they are exploited.

Lessons from the DeepSeek AI breach

The DeepSeek breach offers valuable lessons for businesses deploying LLM technologies. First, it emphasises the importance of proactive cybersecurity measures. Delaying security testing or assuming that off-the-shelf solutions are secure can lead to catastrophic consequences. Organisations must invest in regular penetration testing to ensure their systems are protected from emerging threats.

Second, the breach highlights the need for specialised expertise. As LLMs are a new frontier in cybersecurity, traditional penetration testing approaches may not suffice. Firms should collaborate with cybersecurity experts who understand the nuances of AI systems and have experience in securing LLMs specifically.

Lastly, the DeepSeek breach shows that vulnerabilities can exist not just externally but also within the system’s configuration. Improper access controls, poor data handling, and insecure integrations can all open the door for attackers. Organisations must implement robust security practices throughout the lifecycle of their AI applications.

The road ahead: Future-proofing LLM security

As LLM technologies continue to evolve, the cybersecurity landscape will need to keep pace. AI systems, including LLMs, will only become more integral to businesses, and securing them will become increasingly challenging. Organisations can no longer treat LLMs like traditional software; they must take a proactive, tailored approach to security.

Conclusion

The breach at DeepSeek AI is a critical reminder of the vulnerabilities that LLMs present. As organisations continue to deploy AI-powered systems, it is essential that they take cybersecurity seriously and adopt a comprehensive, tailored approach to penetration testing. By doing so, they can mitigate risks, protect sensitive data, and confidently leverage the power of LLMs to drive business success. 

Delaying security testing or assuming that off-the-shelf solutions are secure can lead to catastrophic consequences.