Select your country

Not finding what you are looking for, select your country from our regional selector:

Search

Part 1 - Forging forward with GenAI

Introduction

It is truly an amazing time to be alive in terms of technological advancement and the promise of what these marvels may hold. The last 25 years saw explosive growth in the information technology sector that accelerated globalization even further. This was largely driven by the internet and mobile phone technologies.

Advancements in Artificial Intelligence (AI) since 2017 has sparked numerous innovations since then. Breakthroughs such as the transformer architecture for neural networks birthed generative pre-trained transformer (GPT) models, resulting in generative AI (GenAI) apps such as ChatGPT. The world has been enthralled ever since seeing major investments being made to tap into this “magical” technology with spending expected to surpass $200 billion over five years starting from 2025.

Keeping pace with this is exhausting but also exhilarating. We are planning a series of blogs that we will release under the umbrella of AI. This introductory blog is split into two parts. We start with this blog post where we will set the stage by recapping what we published in Security Navigator 2025. The second part of this blog post will explore at a high level what the impact of AI could be for the world in terms of productivity gains. You can access part two here.

If you have already read Security Navigator 2025 on AI, then feel free to skip the recap section.

Recap of Security Navigator 2025

In the Security Navigator 2025 chapter titled “Research: Artificial Intelligence What’s All the Fuss” we painted a high-level picture examining the implications of AI for cyber security and what it means for defensive and offensive use. This was framed in terms of impact on business, technical, and society at large.

Mankind has always benefited from creating tools and GenAI is promising to amplify our capabilities, whether it’s for good or bad. The Security Navigator 2025 article makes the point that new technologies have an asymmetrical impact on security, thus favoring the attacker initially. The argument is not about cybersecurity but the fact that any new technology is trying to push features and functionality to grow adoption while safety and security must play catch up.

Attackers with ties to governments have already been using LLMs for nefarious actions to social engineer or better understand content. These activities have spilled over into the cybercrime world as well as criminals who use real-time deepfake capabilities to synthesize video and voice to fool victims.

GenAI can also be used to protect and secure. In the Security Navigator 2025 article we speculate that there could be a use case where a cohort of LLMs work together to analyze vulnerabilities in systems and then build patches to protect against exploitation. This idea is not novel by itself (Cf DARPA awards and DARPA AI Cyber challenge) and has lots of potential benefits for creating secure software. Hopefully this type of technology is available before it is put to malicious use. Security Navigator 2025 has the following positive comment on this future:

The notion that adversaries may execute such activities more often or more easily is a cause for concern, but it does not necessarily require a fundamental shift in our security practices and technologies.

Sticking to good industry practices by performing rigorous risk assessment and threat management to complement vulnerability management practices will serve business well. The pace at which this will happen will however increase and the response latency must match this. Fighting fire with fire – the AI power tech arms race has begun.

AI adoption

Most large businesses are eager to employ AI, and this eagerness seems to be driven by the promise of efficiency gains, discovering new opportunities, and appearing relevant in between all the marketing activities of competitors trumpeting ‘AI-powered slogans.

This excitement around early AI adoption does not come cheap. Large corporations with established IT service procurement should be experienced and capable of onboarding AI driven software-as-a-service (SaaS) subscriptions. The risks associated with sharing proprietary and sensitive information with a third-party GenAI SaaS is non-trivial exercise for companies and those lacking understanding of compliance and regulations in this space may experience challenges later down the line.

On the other hand, organizations with existing commercial agreements with large cloud providers such as GCP, AWS, and Azure make AI a compelling business case. The challenge is however that data needs to be labelled and classified appropriately to ensure that the least privileged and need-to-know style controls remain in place.

Corporate social responsibility and emissions commitments also makes adopting GenAI tricky as these services are known to be energy hungry. Businesses may also want to pay attention to this increased demand as it may increase their carbon footprint overall.

New threats associated with LLM

We always need to consider what downsides or negative impact new technology can bring. This is not necessary just the explicit malicious actions but also the unimagined side effects of using LLMs or building these models. The SN25 section on this topic defines two subjects namely ‘consumers’ and ‘producers’ of LLMs. A consumer is a user of LLMs, and a producer is responsible for creating LLMs that consumers use.

We highlighted the following concerns facing consumers of LLMs:

  • Data leaks.
  • Hallucinations.
  • Intellectual property rights.

Producers or providers of LLM also face threat or risks that they must consider or actively mitigate such as:

  • Theft of the model.
  • Poisoning of the model that can steer it to some bias or make it perform poorly.
  • Destruction or disruption of a model.
  • Legal liability due to misrepresentation, misleading, inappropriate, or unlawful content.

We expect new threats targeting LLM producers and consumers to emerge over time. The technology stacks of vendors are complex and will continue to become even richer over time. This growing attack surface is attractive and the rewards for a successful breach may be a trophy worth perusing for some attackers. Many of the attacks will probably involve attacks that will work against classical web applications and not just attacks tailored targeting LLMs. Systems are interlinked with internal datasets that are linked to the internet making for a maze of a threat model to navigate.

Broader impacts

The challenge with GenAI is to produce systems that are trustworthy and that require some form of assurance. Security does play a role in this, but this does not transcend or override building solid foundations. The following four categories will have to deal with broader potentially negative impacts:

Business risks

  • Data privacy and sovereignty.
    • Businesses will continue to grapple with data privacy and sovereignty challenges even more as these systems will demand more data to grow and improve.
  • Platform provider dependencies
    • System designers will have to make conscious design choices to avoid platform lock-in or being tightly coupled with LLM services.
  • Adoption Fatigue.
    • Business will be pressured to adopt AI technologies to stay relevant and compete not to fall behind. Shareholders are expecting boards to show productivity gains by using new AI technologies. Executives will have to shift from a reactive response to a strategic response that will balance the justified expense and shift to focus on the medium and long-term business goals.

Technical risks

  • LLM accelerate social engineer
    • Attackers will be more efficient at generation content to scam or social engineer victims. It’s not clear if GenAI content is more effective, but attackers will hone and refine the usage of these tools to improve its effectiveness.
  • Threat globalization
    • Attackers will be able to engage targets that would naturally be excluded due to language and culture barriers that GenAI can bridge. This goes beyond just simple translation as GenAI can factor in tone, structure, and colloquialism that would normally be lacking.
  • Acceleration of existing threats
    • One of the big appeals of LLMs and GenAI is the claimed productivity gains. This will translate into basic level attackers operating at much higher levels than they would naturally have achieved. This could translate into teaching or just outright automating technical skills that were missing.
  • Data aggregation risks
    • Data hoarding and the challenges to keep these large data stores safe will be important in the future as attackers will focus on this.
  • AI as an attack proxy
    • Attackers can use LLMs that can connect to the internet and then use techniques to manipulate the model to perform malicious tasks on the attacker’s behalf. This will be a new layer at which attacks will keep evolving.

Societal risks

  • Privacy and personal information risks
    • LLMs will and have already found its way into social media, instant messaging, productivity, customer support, content creation platforms, and more. More data will be hoovered up to train and advance models.
  • Copyright and fair use issues
    • Content shared on the internet is used to train AI models without acknowledging or compensating creators while benefiting the owners of the models that sell access to their features.
  • Gradual degradation of quality of research, creative content, reporting and other output.
    • Generated content must be marked to enable future users to judge the trustworthiness and the ethical basis on which the content was produced.
    • Risks of mistakes in introduced by LLMs when generating code, research, technical and legal documents, etc. can find a permanent home on the internet, only to be included in the next iteration of a model. This runs the risk of perpetuating inaccuracies or falsehoods into the future.
  • Risks associated with cultural and geopolitical over-influence by large businesses that control influential LLMs.
    • Bias in AI models and platforms will result in output that is shaped or distorted by those that created the model.
  • Using LLMs to create deepfakes to discredit, bully, or terrorize individuals or minorities.
    • LLMs will further empower vindictive and abusive actions on social media and instant message platforms. These actions could allow for a new form of harassment, bullying and shaming of individuals through deep-fake images and videos.

Defending (against) AI

New technologies that are built on the internet are normally propped up by or wrapped in layers of code and systems that have seen widespread adoption. This means that ‘new platforms’ could suffer from common vulnerabilities associated with implementation or design mistakes associated with the present technology.

Building solid security foundations requires expertise and determination. Systems must be built with confidentiality, integrity, and availability in mind from the start. This

requires considerable thought to be applied ensuring architecture, implementation, deployment, and ongoing maintenance including secure good practices.

Senior management, including the CISO, should ensure employees have access to LLM-based services that meet regulatory and compliance standards for safe and responsible use. This includes:

  • Educating and training staff to critically evaluate opportunities and risks present in LLM solutions to select appropriate services and engage in a cautious manner.
  • Data leaks are always a possibility, and this will require further investment in assurance programs and technologies that can minimize deliberate or inadvertent disclosure of sensitive information.
  • Data security and how that relates to information in the organization will become more important than ever. Data labelling and classification must be enforced inside the organization to restrict LLM capability as appropriate based on the user role.

Tricking the AI

LLMs are complex mathematical and statistical constructs that take input and then predict what the next output will be based on the input. The ability of the LLMs to ‘understand’ natural language and produce an appropriate response seems magical. Most GenAI platforms have special guardrails, in the form of an alignment policy in place with the intent to prevent the LLM generating potentially harmful, dangerous, misleading, or inappropriate responses.

Researchers and attackers alike have discovered a technique called ‘prompt injections’ that can bypass or disable the guardrails resulting in a ‘jailbroken LLM’. The jailbroken LLM session can now generate content that the creators originally did not intend or potentially result in excessive resource consumption. This highlights the non-deterministic nature of LLM.

Prompt injections techniques are nothing new and creators of LLMs are actively working to counter bypasses. Examples include attacks that involve context switching, obfuscation, denial of service, or multimodal approaches that ‘confuses’ the model.

Defenses against these attacks include:

1. Limiting the size of responses

2. Human intervention for sensitive operations

3. Tracking LLM actions

4. Frequent updates

5. Security testing

For more detail on this topic see Security Navigator 2025 section titled “Tricking the AI: How to outsmart LLMs by using their ability to ‘think’”.

AI-driven detection

Pulling a weak signal out of the noise and identifying something abnormal is highly prized by the cybersecurity industry. Application and security logs are potentially rich in patterns that could be used to feed data hungry AI models.

Attacks use tools that try to blend into the background thus fading into the noise. Some of these tools are called command and control (C2) frameworks that offer a rich and highly capable feature set underpinned by a robust and flexible architecture. The C2 frameworks uses a signaling aspect called ‘beaconing’ that acts like a heartbeat between the stealthy software running on the compromised victim host and the command center. Cleaver attackers will hide their C2 beacons by associating the C2 network traffic with legitimate websites that they compromised previously. This makes it difficult for humans to detect suspicious activity in large volumes of data.

AI models are constructed to identify anomalous network traffic by looking for repetitive requests and deviations from established traffic – in other words anything that deviates from a baseline. This automated detection is combined with low latency response to drastically reducing the dwell time of attackers in an environment.

For more details on this topic see the Security Navigator 2025 section titled “Enhancing Beaconing Detection with AI-driven proxy log analysis”.

Summary

Artificial intelligence (AI) is a topic that will find its way into diverse discussions, more than we would ever have thought. Our recap of the Security Navigator 2025 chapter on AI sets the stage for many future discussions. Using AI in a secure and safe manner will always be challenged by the specter of attacker lurking in the shadows. The impact of new technologies that rushes forward at breakneck speeds to launch new features will always favor attackers as the world catches up to understand the real impact.

Fortunately, AI also favors those that use it for good and can be a force multiplier when seeking to identify suspicious or malicious activity.

For part two of this blog post is available here.

Glossary

Artificial Intelligence (AI)

AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as decision-making and problem-solving. AI is the broadest concept in this field, encompassing various technologies and methodologies, including Machine Learning (ML) and Deep Learning.

Machine Learning (ML)

ML is a subset of AI that focuses on developing algorithms and statistical models that allow machines to learn from and make predictions or decisions based on data. ML is a specific approach within AI, emphasizing data-driven learning and improvement over time.

Deep Learning (DL)

Deep Learning is a specialized subset of ML that uses neural networks with multiple layers to analyze and interpret complex data patterns. This advanced form of ML is particularly effective for tasks such as image and speech recognition, making it a crucial component of many AI applications.

Large Language Model (LLM)

LLMs are a type of AI model designed to understand and generate human-like text by being trained on extensive text datasets. These models are a specific application of Deep Learning, focusing on natural language processing tasks, and are integral to many modern AI-driven language applications.

Generative AI (GenAI)

GenAI refers to AI systems capable of creating new content, such as text, images, or music, based on the data they have been trained on. This technology often leverages LLMs and other Deep Learning techniques to produce original and creative outputs, showcasing the advanced capabilities of AI in content generation.

Incident Response Hotline

Facing cyber incidents right now?

Contact our 24/7/365 world wide service incident response hotline.

CSIRT