
13 February 2025
The increasing adoption of AI and GenAI (or Generative AI) applications within organizations exposes them to new vulnerabilities and cybersecurity risks. The use of LLM embedded platforms redefines the scope of audit methods and ushers a new era in pentesting.
Pentesting (or "Pentest") is short for "Penetration testing". For cybersecurity purposes, the Pentester takes on the role of the cyberattacker. A member of the ethical hacking community - also known as "Ethical Hackers" and "White Hats" - the Pentester sets up and engages cyberattack scenarios. His goal? Putting the cybersecurity lines of defense of a company to the test. By identifying vulnerabilities in existing IT protocols and equipment, the Pentester helps to strengthen the organization's cybersecurity. His proactive contribution will help updating both the hardware and software infrastructure. The role of the Pentester is therefore preventive.
The Pentester sets several goals in identifying the vulnerabilities in an information system:
The Pentester can trigger attack scenarios on the various hardware and software layers of a company's IT and network infrastructure:
Pentesting approaches can also be used to test electronic payment equipment and IT/OT hybrid systems.
Beyond IT equipment, infrastructure and software layers, the Pentester can also evaluate the responsiveness of dedicated cybersecurity teams, to help them improve their processes.
As the risk of intrusion can be purely and simply physical, the Pentester can try to circumvent the usual reception procedures of companies as well. He can pretend to be an employee or try to enter the building via an unsecured access door.
While each of these components can be the target of a cyberattack, the growing adoption of AI and generative AI platforms exposes organizations to new weaknesses and vulnerabilities. A situation which redefines the role and methodology of the Pentester.
Because language models are very complex, they require specific AI and machine learning skills. The Pentester must be aware of their weak spots and try to exploit them preemptively in order to evaluate the robustness of the used LLM ("Large Language Model"). Here are some of these vulnerabilities that are now part of the Pentesting scope:
It is therefore essential to combine traditional security audits with artificial intelligence methods to ensure a comprehensive risk assessment. The Pentester can thus rely on AI to hunt vulnerabilities more effectively, before they are exploited by malevolent hackers (also called “black hats”).
AI can thus facilitate the automation of Pentesting tasks. AI’s 24/7 ability to analyze a large volume of data, and alert eventual anomalies (intrusions, abnormal behaviors, data leaks, etc.) within a company's network is a valuable asset. The time saved on traffic auditing allows the Pentester to focus on other, more complex tasks. The AI will also be able to produce an intrusion report detailing the flaws and vulnerabilities which allowed the Pentester to penetrate the infrastructure.
More specifically, AI can, for example, be specifically trained to detect vulnerabilities in computer code. When combined with a data lake, it can effectively identify the presence of malware.
The Pentester provides essential know-how to ensure data security and privacy. At Orange Cyberdefense, our cybersecurity expertise is based on the combination of human intelligence, artificial intelligence and Cyber Threat Intelligence (CTI). To learn more, master your risks, and lead your company’s future, you can check out our 2025 Security Navigator report.
Sources
OWASP Top 10 for LLM Applications 2025 - OWASP Top 10 for LLM & Generative AI Security
Written by Geoffrey Sauvageot Berland
Computer Engineer & Pentester at Orange Cyberdefense