Select your country

Not finding what you are looking for, select your country from our regional selector:

Search

How to secure AI applications

If properly managed, artificial intelligence (AI) can be a source of infinite opportunities. Misused, however, it can become a dreadful Trojan Horse. The growing adoption of artificial intelligence (AI) in businesses, public institutions and critical infrastructures increases the attack surface for hackers, and raises concerns on many levels. From the unauthorized usage of an LLM (“Large Language Model”) tool to the exploitation of vulnerabilities by cyber attackers, it is vital to implement end-to-end AI security protocols.

Adopt a controlled and secure use of AI

Artificial intelligence (AI), particularly generative AI, is now being explored in many professional sectors and activities, transforming working methods and processes.

  • Vulnerabilities specific to AI systems are causing concern among IT systems and security departments of companies and organizations, as they are willing to secure their expanding use;
  • That's why it is essential to ensure that the adoption of generative AI solutions is onboarding strict cybersecurity measures, related to access, prompts, data and the overall infrastructure in which the LLM is integrated and interacting;
  • This requires a proactive, risk-aware approach and continuous updating of security practices to effectively protect AI systems and the data they process.

Let's take a look at the measures you can set up.

Establishing a global view of AI systems to optimize risk management

It's a good idea to keep a watch on all the AI systems used within your company or organization. Below are two strategies to consider:

Addressing the Shadow AI issue:

The growing popularity of generative artificial intelligence applications available to the wider public brings emerging cybersecurity challenges, often referred to as “Shadow AI”.

To manage this issue, organizations must be able to oversee their employees' use of these tools, identifying unauthorized applications that could compromise data security. Implementing Cloud Access Security Broker (“CASB”) solutions makes it possible to monitor and control access to these services, while defining clear usage policies (Total block, Conditional Authorization, Authorization).

At the same time, it is crucial to integrate “Data Leak Protection” (DLP) mechanisms to protect sensitive information and prevent data leaks through employees' interactions with these consumer applications.

Monitor the security posture of internal applications based on LLM:

Through the AI-SPM (“AI Security Posture Management”) approach, you maintain visibility over all AI applications and their components used internally:

  • Inventory of LLM (“Large Language Models”) APIs, data and training, inference data sets, hosting, plugins, number of users;
  • End-to-end AI pipeline detection to identify AI-related risks, such as misconfigurations, presence of vulnerabilities and prioritizations, agent over-privileges, identification of sensitive data across datasets...).

Having this global vision not only strengthens security, but also compliance with current standards and regulations. By embracing these strategies, businesses can better navigate the complex AI landscape while protecting their assets and data.

Securing your LLM applications

While global visibility of AI systems is essential, it is equally crucial to understand and mitigate the specific risks associated with AI-based applications.

Securing LLM-based applications requires a comprehensive, structured approach, starting with a thorough awareness of the specific risks. Teams - business, development and security - need to be aware of and vigilant against prompt injections, data poisoning, data extraction and leaks of sensitive information... These threats differ from traditional cybersecurity risks. Threat Intelligence is therefore a valuable source of information for keeping up to date, anticipating attacks and adjusting defense strategies accordingly.

To address these challenges, LLM security must rely on a mix of traditional security tools, new additional components and specifically designed solutions.

AI-SPM and DSPM (Data Security Posture management) solutions form the first line of defense, overseeing the security posture of models and protecting training data. These tools are complemented by Data Leak Protections (“DLP”), essential for preventing sensitive data leaks, a particularly critical risk with LLMs, which can unwittingly expose confidential information in their responses.

Securing exchanges and access is another fundamental pillar. API Security and CASB solutions enable fine-grained control of interactions with AI services, while AI Firewalls, specifically designed for LLMs, filter malicious prompts and adversarial attack attempts. These tools are particularly relevant to the emergence of new attack techniques such as the injection of hostile prompts.

Securing AI chats and access is another fundamental pillar. API Security and CASB solutions enable a precise control level of interactions with AI services, while AI Firewalls, specifically designed for LLMs, filter malicious prompts and adversarial attack attempts. These tools are particularly relevant to the emergence of new attack techniques such as the injection of hostile prompts.

Last but not least, LLM-dedicated penetration testing (also dubbed “Pentest” or “Pentesting”) enables us to assess system resilience and adapt defense strategies. This practice, which is still in its infancy and combines classic pentest techniques with social engineering in interaction with the LLM chatbot interface, is becoming crucial as attack techniques against language models rapidly become more sophisticated.

This multi-layered defensive approach is essential in a context where threats are constantly evolving. Security teams must maintain an active watch and adopt a proactive posture to fully exploit the potential of LLMs for businesses, while guaranteeing their security.

Securing AI with Orange Cyberdefense is about protecting the future

From training to crisis management, Orange Cyberdefense experts are there to support companies and institutions in securing the use of artificial intelligence applications: 

  • Training sessions to make business, development and security teams aware of the risks associated with the use of AI applications and GenAI; 
  • Audit and risk analysis of your GenAI applications;
  • Pentesting (“penetration testing”) of your GenIA and LLM applications; 
  • Securing your AI applications; 
  • Deployment of a secure AI code generation assistant; 
  • Support for secure integration of Microsoft 365 Copilot; 
  • SOC, MSSP, CERT (“Security Operational Center”, “Managed Security Services provider”, “Computer Emergency Response Team”) ;
  • Red Team, Blue Team and Purple Team;
  • CTI (“Cyber Threat Intelligence”)
  • Intrusion crisis management. 

To find out more, master your risks and lead your future, read our Security Navigator 2025 report.

Written by Emilie Brochette

Business Development AI & Cybersecurity at Orange Cyberdefense

Incident Response Hotline

Facing cyber incidents right now?

Contact our 24/7/365 world wide service incident response hotline.

CSIRT