
17 February 2025
If properly managed, artificial intelligence (AI) can be a source of infinite opportunities. Misused, however, it can become a dreadful Trojan Horse. The growing adoption of artificial intelligence (AI) in businesses, public institutions and critical infrastructures increases the attack surface for hackers, and raises concerns on many levels. From the unauthorized usage of an LLM (“Large Language Model”) tool to the exploitation of vulnerabilities by cyber attackers, it is vital to implement end-to-end AI security protocols.
Artificial intelligence (AI), particularly generative AI, is now being explored in many professional sectors and activities, transforming working methods and processes.
Let's take a look at the measures you can set up.
It's a good idea to keep a watch on all the AI systems used within your company or organization. Below are two strategies to consider:
Addressing the Shadow AI issue:
The growing popularity of generative artificial intelligence applications available to the wider public brings emerging cybersecurity challenges, often referred to as “Shadow AI”.
To manage this issue, organizations must be able to oversee their employees' use of these tools, identifying unauthorized applications that could compromise data security. Implementing Cloud Access Security Broker (“CASB”) solutions makes it possible to monitor and control access to these services, while defining clear usage policies (Total block, Conditional Authorization, Authorization).
At the same time, it is crucial to integrate “Data Leak Protection” (DLP) mechanisms to protect sensitive information and prevent data leaks through employees' interactions with these consumer applications.
Monitor the security posture of internal applications based on LLM:
Through the AI-SPM (“AI Security Posture Management”) approach, you maintain visibility over all AI applications and their components used internally:
Having this global vision not only strengthens security, but also compliance with current standards and regulations. By embracing these strategies, businesses can better navigate the complex AI landscape while protecting their assets and data.
While global visibility of AI systems is essential, it is equally crucial to understand and mitigate the specific risks associated with AI-based applications.
Securing LLM-based applications requires a comprehensive, structured approach, starting with a thorough awareness of the specific risks. Teams - business, development and security - need to be aware of and vigilant against prompt injections, data poisoning, data extraction and leaks of sensitive information... These threats differ from traditional cybersecurity risks. Threat Intelligence is therefore a valuable source of information for keeping up to date, anticipating attacks and adjusting defense strategies accordingly.
To address these challenges, LLM security must rely on a mix of traditional security tools, new additional components and specifically designed solutions.
AI-SPM and DSPM (Data Security Posture management) solutions form the first line of defense, overseeing the security posture of models and protecting training data. These tools are complemented by Data Leak Protections (“DLP”), essential for preventing sensitive data leaks, a particularly critical risk with LLMs, which can unwittingly expose confidential information in their responses.
Securing exchanges and access is another fundamental pillar. API Security and CASB solutions enable fine-grained control of interactions with AI services, while AI Firewalls, specifically designed for LLMs, filter malicious prompts and adversarial attack attempts. These tools are particularly relevant to the emergence of new attack techniques such as the injection of hostile prompts.
Securing AI chats and access is another fundamental pillar. API Security and CASB solutions enable a precise control level of interactions with AI services, while AI Firewalls, specifically designed for LLMs, filter malicious prompts and adversarial attack attempts. These tools are particularly relevant to the emergence of new attack techniques such as the injection of hostile prompts.
Last but not least, LLM-dedicated penetration testing (also dubbed “Pentest” or “Pentesting”) enables us to assess system resilience and adapt defense strategies. This practice, which is still in its infancy and combines classic pentest techniques with social engineering in interaction with the LLM chatbot interface, is becoming crucial as attack techniques against language models rapidly become more sophisticated.
This multi-layered defensive approach is essential in a context where threats are constantly evolving. Security teams must maintain an active watch and adopt a proactive posture to fully exploit the potential of LLMs for businesses, while guaranteeing their security.
From training to crisis management, Orange Cyberdefense experts are there to support companies and institutions in securing the use of artificial intelligence applications:
To find out more, master your risks and lead your future, read our Security Navigator 2025 report.
Written by Emilie Brochette
Business Development AI & Cybersecurity at Orange Cyberdefense
17 February 2025
11 February 2025
Security Navigator 2025 Research-driven insights to build a safer digital society, Adopt a proactive security posture based on cyber threat intelligence.