5 April 2023
Authors: Ivanche Dimitrievski, Adam Ridley, and Diana Selck-Paulsson
This is the sixth piece in a series exploring how cyber extortion and ransomware threat actors use “neutralization techniques” to legitimate their malicious behavior.
Sykes and Matza (1957) argue that people who engage in criminal activity make use of rhetorical techniques to neutralize the guilt that comes with committing a crime. In so doing, they can commit crime while remaining committed to the dominant normative system and interpret their deviant actions as acceptable.
We addressed this approach conceptually in the first part of this series, where we also outlined our research material. Part two, then, explored the neutralization technique called “denial of injury”, showing how threat actors try to reframe and reposition their malicious behavior as a business service.
In part three, we examined “denial of the victim”, which, in this case, involved villainizing the victim while presenting the threat actor as a vigilante figure. In part four, this was followed by an examination of threat actors’ attempts to justify their behavior by condemning those who oppose it.
Part five looked at threat actors’ “appeals to higher loyalties”. Using this neutralization technique, threat actors seek to display an image of themselves as sacrificing social norms in favor of a more pressing value, such as loyalty to family or “the greater good”. We deliberated on this as an organizational branding strategy, but also as a motivational force behind criminal activity.
This is the last piece in the series, and we will be focusing on the neutralization technique called “denial of responsibility”. According to Sykes and Matza, offenders deny responsibility by claiming that their behaviors are unintended or the result of forces that are beyond their control. Thus, they present themselves as victims of circumstance and as products of their environment.
We have noted in previous parts of this series that Sykes and Matza’s approach is not concerned with the validity of such orientations, but rather with their function of “deflecting blame attached to violations of social norms” (1957: 667). By learning to view themselves as being acted upon rather than acting, the offenders prepare the way for “deviance from the dominant normative system without the necessity of a frontal assault on the norms themselves” (ibid).
In what follows, we look at diverse materials, including negotiation chats, published interviews with threat actors, and leak pages to uncover some of the discursive strategies threat actors use to assign and distribute relations of responsibility. On this basis, we deliberate on the potential effects of this process. We will explore threat actors’ attempts to fashion a view of themselves as “regulated” and “trustworthy” entities, to diffuse responsibility for a crime by “blaming the victim”, and to incite fear by positing zones of no responsibility.
Please note that the quotes from threat actors included below have not been edited for grammar or syntax.
Trust plays an important role in cyberattacks. Data is locked, and a ransom is demanded in exchange for unlocking it. But the victim needs some form of assurance that the data will indeed be unlocked after the ransom is paid. And since this process is not normally regulated, i.e., there are no official entities who would ensure data retrieval, the victim’s trust in the threat actor is imperative.
If the threat actor loses that trust, they may not get paid, which is clearly not their preferred outcome.
One way in which threat actors generate trust is by crafting a public image of themselves as socially responsible entities. In the following excerpt, Cl0p declares a principle of not attacking certain types of organizations and commits to doing right by such organizations in the event of an accidental attack.
We have never attacked hospitals, orphanages, nursing homes, charitable foundations, and we will not. Commercial pharmaceutical organizations are not eligible for this list; they are the only ones who benefit from the current pandemic. If an attack mistakenly occurs on one of the foregoing organizations, we will provide the decryptor for free, apologize and help fix the vulnerabilities. (Announcement_Cl0p_1a_txt, Pos. 4)
The excerpt projects an image of the threat actor as a moral agent that follows a certain ethical principle as a basis for their attacks. We can observe a similar effect in the following excerpt from an interview with ALPHV. Responding to a journalist’s question about the ways they control their affiliates, the ALPHV representative says:
(…) we do not run an active advertising campaign and easily cut ties with non-compliant partners, but no matter how hard we try to filter people when creating an account — shit happens. There was already one episode with, I quote, “not the neighboring countries.” Decryption keys were issued automatically with the affiliate getting banned. (Interview_ALPHV_1a txt)
Lösch, Heil and Schneider (2017) describe “responsibilization” as a visioning process which involves the ascription of responsibilities among the actors that collectively shape sociotechnical arrangements. The statement above can be seen as an instance of such a process, producing an image of the threat actor, ALPHV, as a responsible entity that keeps others in its network in check. We are led to believe that these “partners” are held to a moral standard, and that any violation of that standard is met with adequate sanctions.
In these excerpts, internal breaches of the stated attack policies are described as “accidents” and “mistakes”. Cl0p and ALPHV are represented as assuming responsibility for these accidents/mistakes: not only do they cut ties with the guilty affiliate, but they also issue decryption keys, apologize to the victim, and help them fix the vulnerabilities in their security systems to prevent potential future attacks.
It is important to emphasize, however, that assuming responsibility for such “accidental” attacks does not automatically suggest accepting responsibility.
Rather, responsibility is distributed by representing the threat actor as a constellation of sub-actors and locating the cause of these attacks in a “faulty” part of that allegedly unified whole. In this way, a part, rather than the whole, is rendered accountable for the attacks, and the removal of the “faulty” part can thus be understood as an act of reconstituting the threat actor as a trustworthy entity.
A cyberattack can be interpreted in multiple ways, depending on the perspective of the person or group involved. Consider the following excerpt from a Babuk ransom note:
If you see this note, your company've been randomly chosen for security audit and your company haven't passed it. (Ransom note_Babuk_4a txt)
The statement instructs us to read the attack as the result of the victim company’s inadequate ability to protect itself.
Certainly, this is not the only way to read what has happened, and statements such as the above characteristically downplay the complexity of cyberattacks. Cyberattacks normally involve highly sophisticated social engineering techniques: identifying weaknesses in the victim’s cybersecurity system and creating a situation (such as a phishing email) that exploits those weaknesses, thus increasing the likelihood of a breach.
Statements such as the above thus align with a broader discourse surrounding online fraud victimization, which, as Cross (2015) finds, is largely founded upon notions of blame and responsibility leveled towards the victims themselves for their failure to avoid victimization in the first place. In their public addresses, threat actors frequently make appeals to actual and potential victims’ responsibility to ensure that people’s data is safe:
Our third volume of different companies working in different industries and who failed to protect their data (…) (Leak Page_Karakurt_3a-b img)
Due to poor security of your networks, we have downloaded your critical information with a total volume of more than 50 GB. (Negotiation chat_Conti_11a-b txt)
[REDACTED VICTIM COMPANY NAME] can’t be proud of using the latest technologies and providing security and confidentiality of documents. The company has failed to secure their data, including documents related to customers and partners. (Leak page_Marketo_1a img)
Again, these attacks are represented not as, say, “successful conquests” in which another’s data is hacked, stolen, and leaked, but as failures of the victims to protect their own and other people’s data. As has also been explored in previous papers in this series, this “failure to protect” is often elaborated as revealing internal priority setting at the victim organization:
Unfortunately there are still a lot of companies that are don't want to take responsibility for the personal information that gathered and don't want to improve security measures. (Leak page_Ragnar Locker_2a txt)
Move along people, nothing much to see here. Just one other law firm collecting big fees yet forgetting to protect personal information of clients. (Leak page_Cl0p_1a txt)
We think that everyone should know about this company is not working on their security measures and represents an unsafe partner. All employees and clients also should know that they better not trust any personal information to the company. (Leak page_Conti_1a txt)
These statements can be interpreted as acts of making the victim responsible for the attack. This responsibilization process can perhaps be understood as an act of self-exoneration, but more directly, it works to incentivize victims’ compliance, for instance by appealing to the victims’ sense of obligation to make amends by paying the ransom and thereby retrieving the stolen data.
The social aspects addressed above are therefore not simply features of the context in which an attack occurs. Rather, they are key elements in the broader sociotechnical processes that constitute cyberattacks. This can further be seen in the following excerpt from a negotiation chat between the ransomware group Babuk and a victim organization:
Victim organization: Tell me how you decided 4 million for this? It seems extremely high for a public sector entity
Babuk: You are funded by the state, the state has to pay, and not your employees, you are not a private company, but please do not assure us that you are poor, you are not some kind of police station from the village, you are PD of the Capitol (Negotiation chat_Babuk_1c img)
Babuk’s insistence that the victim can pay the ransom involves an organization of responsibility, where “the state” is made accountable for the entity that is part of it. Furthermore, this is not any kind of entity – “not some kind of police station from the village” – but “PD of the Capitol”, suggesting availability of means to pay the ransom. The re-specification of the victim entity in terms of its relationship to the state serves to counter the victim’s self-portrayal as just “a public sector entity” that cannot pay a ransom as “extremely high” as the one being demanded.
Our analysis thus suggests that ransomware attacks can be seen as unfolding through the discursive arrangement of the actors involved in the attacks. This includes specifying the actors’ identities as part of those attacks, as well as the relations of responsibility between them.
In an earlier paper in this series, we proposed, following Van Lente and Rip (1998), that threat actors’ statements can productively be understood as “forceful fictions”. While they cannot be taken as objective portrayals of cybersecurity scenarios, they can be said to have real effects in the world.
To an important degree, the forcefulness of these statements relies on their reading as “threats”. According to Cavelty (2013), an important feature of cybersecurity discourse is the construction of threat through the evocation of a shady, invisible, but powerful foe. Our research shows that this trope is often mobilized by adversaries during cyberattacks. Consider this excerpt:
Victim organization: I do not fear your threats!
Egregor: That is not the threat, but the algorithm of our actions. (Ransom note_Egregor_1a txt)
Egregor’s response describes the nature of their actions as algorithmic. This projects an image of Egregor as a powerful invisible force that acts according to its own inner logic, unaffected by any actions the victim might initiate to stop it.
In other words, the threat, in this case, consists of representing Egregor as a kind of entity that lacks response-ability, which is another way to think about “responsibility”: namely, the capacity to respond to situational demands or the demands of others.
Thus, despite explicitly denying the characterization, Egregor’s response to the victim’s attempt to confront them is organized to accentuate the sense of threat.
The following excerpt is from a negotiation chat involving the threat actor Babuk:
We are still waiting for your price. We all realize that if this info will be uploaded to the public sources, you will lose much more. (Negotiation chat_Babuk_2a-j txt)
The excerpt alludes to the dangers of having one’s data exposed to the public. At the same time, the excerpt positions Babuk – the threat actor who currently has control of the data – as outside the constellation of actors that make “the public” a dangerous space.
The latter is a common move amongst threat actors:
Who knows how the third party can use those documents, we are not responsible for that. (Ransom note_Babuk_2a txt)
Companies under attack of Ragnar_Locker can count it as a bug hunting reward, we are just illustrating what can happens. But don't forget there are a lot of peoples in internet who don't want money - someone might want only to crash and destroy. (About us_Ragnar Locker_1a txt)
At this point, all off your files are about to get public on our blog and will be available for anyone, including darknet criminals who are eager to abuse your information for their own evil purposes like social engineering attacks against your customers and vendors, spamming, and other bad actions. (Negotiation chat_Conti_17a-k txt)
In the segments above, threat actors deny responsibility for the data once it is no longer in their possession. Just as in the case of Babuk above, these denials of responsibility invariably allude to the existence of a third party, made available in general terms as “peoples in internet” or “anyone, including darknet criminals”, with whom the “real danger” resides.
Crucially, the image of a disembodied, anonymous, out-of-control third party is juxtaposed with that of the threat actor, who is made to appear accountable and in control. Delimiting responsibility in this way enacts the existence of a looming threat out there, positioning the threat actor and the victim as collaborators who, by working together, can avoid that threat.
We can see, then, that the threat trope is used in these excerpts in conjunction with statements of responsibility to shape the unfolding of the sociotechnical process that constitutes the cyberattack.
In this paper we looked at different texts from threat actors and how these texts enact relations of responsibility as constituents of cyberattacks. Specifically, we examined how these actors might be denying responsibility for these attacks.
Our analysis shows that, for the most part, these are not explicit “denials” of responsibility and, importantly, that the end of such actions is not necessarily exoneration or self-justification. Rather, they constitute shifts, distributions, assumptions, and attributions of responsibility, with different rhetorical consequences.
Sometimes, threat actors use responsibilization to emphasize harm, and sometimes to generate trust, in ways that incentivize particular kinds of action, typically beneficial for the threat actor, such as victim compliance.
More broadly, we can think of the statements analyzed in this piece as “scripts” (Van Lente & Rip 1998) that position the various actors as characters in a story to be played out. As such, these statements are potentially powerful tools for coordinating cyberattacks. Understanding the social dynamics surrounding these scripts is crucial if we are to determine why some of them fail to play out in specific instances while others succeed.
Cavelty, M. D. (2013). From Cyber-Bombs to Political Fallout: Threat Representations with an Impact in the Cyber-Security Discourse. International Studies Review, Vol. 15, pp. 105-122.
Cross, C. (2015). No laughing matter: Blaming the victim of online fraud. International Review of Victimology, Vol. 21, No. 1, pp. 187-204.
Lösch, A., Heil, R. & Schneider, C. (2017). Responsibilization through visions. Journal of Responsible Innovation, Vol. 4, No. 2, pp. 138-156.
Sykes, G. M. & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, Vol. 22, No. 6, pp. 664-670.
Van Lente, H. & Rip, A. (1998). Expectations in technological developments: An example of prospective structures to be filled in by agency. In C. Disco & B. J. R. van der Meulen (eds), Getting New Technologies Together. Berlin/New York: Walter de Gruyter, pp. 195-220.