It is truly an amazing time to be alive in terms of technological advancement and the promise of what these marvels may hold. The last 25 years saw explosive growth in the information technology sector that accelerated globalization even further. This was largely driven by the internet and mobile phone technologies.
Advancements in Artificial Intelligence (AI) since 2017 have sparked numerous innovations. Breakthroughs such as the transformer architecture for neural networks gave rise to generative pre-trained transformer (GPT) models, resulting in generative AI (GenAI) applications such as ChatGPT. The world has been enthralled ever since, with major investments being made to tap into this “magical” technology and spending expected to surpass $200 billion over the five years starting from 2025.
Keeping pace with all of this is exhausting but also exhilarating. We are planning a series of blog posts under the umbrella of AI, and this introductory post is split into two parts. In this first part we set the stage by recapping what we published in Security Navigator 2025. The second part explores, at a high level, what the impact of AI could be for the world in terms of productivity gains. You can access part two here.
If you have already read Security Navigator 2025 on AI, then feel free to skip the recap section.
In the Security Navigator 2025 chapter titled “Research: Artificial Intelligence What’s All the Fuss” we painted a high-level picture of the implications of AI for cybersecurity and what it means for defensive and offensive use. This was framed in terms of the impact on business, technology, and society at large.
Mankind has always benefited from creating tools, and GenAI promises to amplify our capabilities, whether for good or bad. The Security Navigator 2025 article makes the point that new technologies have an asymmetrical impact on security, initially favoring the attacker. The argument is not specific to cybersecurity: any new technology pushes features and functionality to grow adoption, while safety and security must play catch-up.
Attackers with ties to governments have already been using LLMs for nefarious actions, such as social engineering or better understanding content. These activities have spilled over into the cybercrime world, where criminals use real-time deepfake capabilities to synthesize video and voice to fool victims.
GenAI can also be used to protect and secure. In the Security Navigator 2025 article we speculate that there could be a use case where a cohort of LLMs works together to analyze vulnerabilities in systems and then builds patches to protect against exploitation. This idea is not novel in itself (cf. the DARPA AI Cyber Challenge and related DARPA awards) and has a lot of potential for creating secure software. Hopefully this type of technology becomes available before it is put to malicious use. Security Navigator 2025 has the following positive comment on this future:
“The notion that adversaries may execute such activities more often or more easily is a cause for concern, but it does not necessarily require a fundamental shift in our security practices and technologies.”
Sticking to good industry practices, performing rigorous risk assessment and threat management to complement vulnerability management, will serve businesses well. The pace at which this happens will however increase, and response latency must match it. Fighting fire with fire: the AI-powered tech arms race has begun.
Most large businesses are eager to employ AI, and this eagerness seems driven by the promise of efficiency gains, the discovery of new opportunities, and a desire to appear relevant amid competitors’ marketing activities trumpeting ‘AI-powered’ slogans.
This excitement around early AI adoption does not come cheap. Large corporations with established IT service procurement should be experienced in and capable of onboarding AI-driven software-as-a-service (SaaS) subscriptions. Assessing the risks of sharing proprietary and sensitive information with a third-party GenAI SaaS is a non-trivial exercise, and companies lacking an understanding of compliance and regulation in this space may experience challenges later down the line.
On the other hand, for organizations with existing commercial agreements with large cloud providers such as GCP, AWS, and Azure, AI can be a compelling business case. The challenge, however, is that data needs to be labelled and classified appropriately to ensure that least-privilege and need-to-know controls remain in place.
Corporate social responsibility and emissions commitments also make adopting GenAI tricky, as these services are known to be energy hungry. Businesses may want to pay attention to this increased demand, as it may increase their overall carbon footprint.
We always need to consider what downsides or negative impact a new technology can bring. This is not necessarily just explicit malicious action but also the unimagined side effects of using or building LLMs. The SN25 section on this topic defines two subjects, namely ‘consumers’ and ‘producers’ of LLMs. A consumer is a user of LLMs, and a producer is responsible for creating the LLMs that consumers use.
We highlighted the following concerns facing consumers of LLMs:
Producers or providers of LLMs also face threats or risks that they must consider or actively mitigate, such as:
We expect new threats targeting LLM producers and consumers to emerge over time. Vendor technology stacks are complex and will only become richer. This growing attack surface is attractive, and the rewards for a successful breach may be a trophy worth pursuing for some attackers. Many of these attacks will probably be the kind that work against classical web applications, not just attacks tailored to LLMs. Systems are interlinked with internal datasets that are in turn linked to the internet, making for a maze of a threat model to navigate.
The challenge with GenAI is to produce systems that are trustworthy, which requires some form of assurance. Security plays a role in this, but it does not transcend or override building solid foundations. The following categories will have to deal with broader, potentially negative impacts:
Business risks
Technical risks
Societal risks
New technologies built on the internet are normally propped up by, or wrapped in, layers of code and systems that have seen widespread adoption. This means that ‘new platforms’ can suffer from the common vulnerabilities caused by implementation or design mistakes in existing technology.
Building solid security foundations requires expertise and determination. Systems must be built with confidentiality, integrity, and availability in mind from the start. This requires considerable thought applied to architecture, implementation, deployment, and ongoing maintenance, following good security practices throughout.
Senior management, including the CISO, should ensure employees have access to LLM-based services that meet regulatory and compliance standards for safe and responsible use. This includes:
LLMs are complex mathematical and statistical constructs that take input and predict what the next output should be based on that input. The ability of LLMs to ‘understand’ natural language and produce an appropriate response seems magical. Most GenAI platforms have special guardrails, in the form of an alignment policy, intended to prevent the LLM from generating potentially harmful, dangerous, misleading, or inappropriate responses.
Researchers and attackers alike have discovered a technique called ‘prompt injection’ that can bypass or disable the guardrails, resulting in a ‘jailbroken’ LLM. The jailbroken LLM session can then generate content that the creators never intended, or potentially cause excessive resource consumption. This highlights the non-deterministic nature of LLMs.
Prompt injection techniques are nothing new, and creators of LLMs are actively working to counter bypasses. Examples include attacks that involve context switching, obfuscation, denial of service, or multimodal approaches that ‘confuse’ the model.
Defenses against these attacks include the following (see the sketch after this list):
1. Limiting the size of responses
2. Human intervention for sensitive operations
3. Tracking LLM actions
4. Frequent updates
5. Security testing
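To make a few of these defenses more concrete, here is a minimal Python sketch of how an LLM call could be wrapped with a response-size cap, a human-approval gate for sensitive operations, and action logging. The call_llm() placeholder, the keyword list, and the size limit are illustrative assumptions, not part of any specific product or the Security Navigator material.

```python
# Minimal sketch (not production code) of wrapping an LLM call with some of the
# defenses listed above. call_llm() and SENSITIVE_KEYWORDS are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-guard")

MAX_RESPONSE_CHARS = 4000
SENSITIVE_KEYWORDS = {"delete", "transfer funds", "disable logging"}  # illustrative only


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    raise NotImplementedError


def guarded_llm_call(prompt: str, human_approved: bool = False) -> str:
    # Human intervention for sensitive operations (defense 2)
    if any(k in prompt.lower() for k in SENSITIVE_KEYWORDS) and not human_approved:
        raise PermissionError("Sensitive request: human approval required.")

    response = call_llm(prompt)

    # Limiting the size of responses (defense 1)
    if len(response) > MAX_RESPONSE_CHARS:
        response = response[:MAX_RESPONSE_CHARS]

    # Tracking LLM actions for later review (defense 3)
    log.info("prompt_len=%d response_len=%d approved=%s",
             len(prompt), len(response), human_approved)
    return response
```

Frequent updates and security testing (defenses 4 and 5) sit outside the code path and belong in the operational process around such a wrapper.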
For more detail on this topic see Security Navigator 2025 section titled “Tricking the AI: How to outsmart LLMs by using their ability to ‘think’”.
Pulling a weak signal out of the noise and identifying something abnormal is highly prized by the cybersecurity industry. Application and security logs are potentially rich in patterns that could be used to feed data-hungry AI models.
Attackers use tools that try to blend into the background, fading into the noise. Some of these tools are command and control (C2) frameworks, which offer a rich and highly capable feature set underpinned by a robust and flexible architecture. C2 frameworks use a signaling mechanism called ‘beaconing’ that acts like a heartbeat between the stealthy software running on the compromised victim host and the command center. Clever attackers hide their C2 beacons by routing the C2 network traffic through legitimate websites that they compromised previously. This makes it difficult for humans to detect the suspicious activity within large volumes of data.
AI models can be built to identify anomalous network traffic by looking for repetitive requests and deviations from established traffic patterns, in other words, anything that deviates from a baseline. This automated detection is combined with low-latency response to drastically reduce the dwell time of attackers in an environment.
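To illustrate the intuition, the following minimal Python sketch flags beacon-like traffic in proxy logs by looking for destinations contacted many times at near-constant intervals. The field names and thresholds are assumptions for illustration only and are not the detection approach described in Security Navigator 2025.

```python
# Minimal beaconing-detection sketch: beacons tend to call home at near-constant
# intervals, so per (source, destination) pair we look for many requests whose
# inter-arrival times show very little variation.
from collections import defaultdict
from statistics import mean, pstdev


def find_beacon_candidates(events, min_requests=20, max_jitter=0.1):
    """events: iterable of (timestamp_seconds, src_ip, dest_host) tuples."""
    by_pair = defaultdict(list)
    for ts, src, dest in events:
        by_pair[(src, dest)].append(ts)

    candidates = []
    for (src, dest), times in by_pair.items():
        if len(times) < min_requests:
            continue
        times.sort()
        intervals = [b - a for a, b in zip(times, times[1:])]
        avg = mean(intervals)
        if avg == 0:
            continue
        # Coefficient of variation: low values mean metronome-like, beacon-like traffic.
        jitter = pstdev(intervals) / avg
        if jitter < max_jitter:
            candidates.append((src, dest, avg, jitter))
    return candidates
```

A real pipeline would add baselining against known-good traffic and feed the flagged candidates into an analyst or automated-response workflow rather than treating them as confirmed detections.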
For more details on this topic see the Security Navigator 2025 section titled “Enhancing Beaconing Detection with AI-driven proxy log analysis”.
Artificial intelligence (AI) is a topic that will find its way into more diverse discussions than we would ever have thought. Our recap of the Security Navigator 2025 chapter on AI sets the stage for many future discussions. Using AI in a secure and safe manner will always be challenged by the specter of attackers lurking in the shadows. New technologies that rush forward at breakneck speed to launch new features will always initially favor attackers, as the world catches up to understand their real impact.
Fortunately, AI also favors those who use it for good and can be a force multiplier when seeking to identify suspicious or malicious activity.
Part two of this blog post is available here.
Artificial Intelligence (AI)
AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as decision-making and problem-solving. AI is the broadest concept in this field, encompassing various technologies and methodologies, including Machine Learning (ML) and Deep Learning.
Machine Learning (ML)
ML is a subset of AI that focuses on developing algorithms and statistical models that allow machines to learn from and make predictions or decisions based on data. ML is a specific approach within AI, emphasizing data-driven learning and improvement over time.
Deep Learning (DL)
Deep Learning is a specialized subset of ML that uses neural networks with multiple layers to analyze and interpret complex data patterns. This advanced form of ML is particularly effective for tasks such as image and speech recognition, making it a crucial component of many AI applications.
Large Language Model (LLM)
LLMs are a type of AI model designed to understand and generate human-like text by being trained on extensive text datasets. These models are a specific application of Deep Learning, focusing on natural language processing tasks, and are integral to many modern AI-driven language applications.
Generative AI (GenAI)
GenAI refers to AI systems capable of creating new content, such as text, images, or music, based on the data they have been trained on. This technology often leverages LLMs and other Deep Learning techniques to produce original and creative outputs, showcasing the advanced capabilities of AI in content generation.