Select your country

Not finding what you are looking for, select your country from our regional selector:

Search

AI and cybersecurity: a double-edged sword

The rapid evolution of AI ushers a new era in cybersecurity. As it brings a wider attack perimeter, artificial intelligence is both a challenge and an opportunity for both sides of the law.

Artificial intelligence: a new tool for cyberattacks

AI never sleeps and certainly knows no borders. With artificial intelligence, the removal of linguistic and cultural barriers means that cyberattacks can be amplified, accelerated and automated. We can therefore legitimately speak of an AI-enhanced threat. Here are a few examples of illicit uses.

AI and social engineering: a powerful lever for cyberattackers

Artificial intelligence, and particularly generative AI (or “GenAI”), makes identity theft easier. Facial (“Deepfakes”) and even voice mimicking tech (“Deepvoices” or “Deepfakes audio”) are no longer Hollywood’s playground. These become more and more advanced and available to the wider public by the day. While fake Tom Cruises and Keanu Reeves’ may have initially raised a few smiles, this technology is now raising many brows, as it is now being used for malicious purposes. In 2024, the identity theft of a financial director via a deepfake led to the embezzlement of HKD 200 million, or $25 million1. The illicit use of artificial intelligence can also serve disinformation purposes, turning the web into an information warzone, echoing fierce military, financial and geopolitical conflicts.

The misuse of generative AI systems

“Prompt Injection” involves manipulating an LLM (also known as “Large Language Model”) with malicious requests. By twisting the request (or “prompt”) in a certain way, the cyberattacker or hacker will attempt to bypass the platform's security filters to achieve his ends. 

Data poisoning during the design and training stage of the LLM can be used to introduce vulnerabilities. Through this “backdoor” the model will be easily hijacked later on. More specifically, the introduction of cognitive, ideological or moral biases during this phase can skew the output results and give false information to the user.  

During the model supply chain, vulnerabilities can also be introduced by external datasets or plug-ins.

In addition to the misuse of market applications, we are witnessing the rise of “Dark LLMs”. A Dark LLM, such as WormGPT, FraudGPT or WolfGPT is specifically created and trained to facilitate illicit activities: phishing, creating fake competitions or loan applications, or generating malicious code and creating fake websites. 

AI in the service of cybersecurity

Artificial intelligence is also an asset for cybersecurity experts. It accelerates and strengthens the anticipation, detection and identification of threats.

24/7 monitoring and enhanced detection capabilities

AI allows the continuous analysis of suspicious behavior and optimized incident management. Its capacity to handle large volumes of data from various network resources and layers (web traffic, logs, databases, software) can be used 24/7 to analyze network traffic to identify suspicious behavior, detect anomalies and intrusion attempts. These enhanced and accelerated detection capabilities can detect the use of fraudulent credentials (“Credential leaks”), the presence of malware to counter them, and identify data theft attempts. 

AI can also scan and analyze e-mails in order to alert users in case of phishing attempts.

Predictive AI and cybersecurity: thwarting the threat through advanced anticipation

AI can continuously learn and adapt to counter new and emerging threats. Predictive AI is transforming cybersecurity by anticipating and preventing threats before they cause any damage. Predictive analysis combines historical data, statistical modeling and machine learning to enable cybersecurity teams to take preventive action.

The rise of augmented analysts boosted by generative AI

Thanks to AI, the processing and investigation of cybersecurity alerts benefits from a real boost. AI-assisted threat identification can help separate true positives (a real threat) from false positives (false alerts). 

The use of a single or multi-agent GenAI assistant can boost the efficiency of cyber experts dedicated to monitoring and managing alerts (as part of a SOC or “Security Operations Center” dedicated to a business or public organization, for example). A unique GenAI assistant for ticketing management can cross-reference incoming tickets with the database history, instantaneously come up with correlated old tickets, assess the severity of the alert and make recommendations. A multi-agent supervision system, bringing together AI assistants with specific roles and scopes, can accelerate investigation across various resources (Ticketing, CTI, SIEM, EDR...2). Generative AI can also help to automatically classify customer feedback and deal with massively false positives amount of data.

Orange Cyberdefense: the alliance of augmented experts and cyber intelligence

Our teams at Orange Cyberdefense firmly believe that by combining human intelligence, Cyber Threat Intelligence (CTI) and artificial intelligence, we can develop end-to-end trusted AI that complies with European standards. Our aim: to help make the lives of businesses and individuals safer. To find out more, master your risks, and lead your company’s future, you can check out our 2025 Security Navigator report.

Sources : 

(1) AI: deepfake scam cost Hong Kong company $26 million», France Inter, 5 feb. 2024.

(2) CTICyber Threat Intelligence; SIEMSecurity, Information and Event Management; EDR: End Detection and response. 

Incident Response Hotline

Facing cyber incidents right now?

Contact our 24/7/365 world wide service incident response hotline.

CSIRT