Cybersecurity is usually a serious business, and for good reason. If you’re reading this article, you likely don’t need reminding about the very real risks of falling victim to cybercrime: financial and reputational loss, disruption, and psychological and physical harm.
But, just occasionally, Sophos X-Ops comes across threat actors who change the tone – whether that’s through sheer audacity, genuine wit, or complete and utter ineptitude.
We wanted to share a few of the weirdest examples with you, gathered mostly from the work of our Incident Response teams and our expeditions on the dark web. So let the strangeness begin – but, any levity aside, we’ll also discuss the learning points from these case studies, and why they matter.
Encryption goes like this: the fourth, the fifth
In an attack we covered in our 2022 Multiple Attackers whitepaper, an organization was hit with three separate – and, we assess, unrelated – ransomware attacks, two of them in the space of two hours.
The groundwork for the attack began in December 2021, when a threat actor – possibly an initial access broker (IAB) – established an RDP session on the organization’s domain controller. The session lasted for 52 minutes.
Almost five months later, in April 2022, a threat actor gained access to the organization, likely through the exposed RDP instance – and exfiltrated data to Mega, a cloud storage service.
Just over a week later, the threat actor executed Mimikatz and obtained domain administrator access.
At the start of May, the threat actor created two batch scripts to distribute a ransomware binary via the legitimate tool PsExec. Ten minutes later, the binary was executed on 19 hosts, encrypting data – and a LockBit ransom note was dropped on each infected machine.

Figure 1: The LockBit ransom note
Less than two hours later, another threat actor used the legitimate tool PDQ Deploy to distribute a Hive ransomware binary across the network via GPO. Around 45 minutes later, the binary was executed, encrypting data on 16 hosts. We traced the initial access back to the same account exploited in the LockBit attack, but this threat actor used different machines and different accounts, and obtained domain administrator access by their own means – suggesting that the two attacks were unrelated.

Figure 2: The Hive ransom note
Two weeks later, in mid-May, yet another threat actor deployed ALPHV/BlackCat ransomware via PowerShell. It was at this point that Sophos’ IR team was engaged to assist, and within six hours of the investigation commencing, we observed something fascinating: files that had been encrypted five times.
Some files were encrypted first by LockBit ransomware, then Hive. Each round of encryption resulted in filenames acquiring a new extension (unique to each group) – meaning that the LockBit ransomware, which was still running, assumed that the Hive-encrypted files were new, and encrypted them again. Ditto for Hive – and two weeks later, the BlackCat ransomware binary encrypted the files for a fifth time.
So, to decrypt the files, the targeted organization would first have had to contact BlackCat and pay to get the LockBit and Hive ransom notes decrypted – before then contacting those groups in turn to finally decrypt the rest of their files. Without a doubt, an unusual incident, and an extremely difficult situation for the organization in question.
The serious side
While this was an extreme example – and, we stress again, not at all amusing for the victim – incidents like this may become more common in the future. As the ransomware ecosystem becomes increasingly crowded, threat actors find themselves competing for targets, and sometimes stumble across the same vulnerable organization. This is exacerbated by certain features of the underground economy, such as IABs reselling accesses (as may have happened here), and ransomware leak sites providing data that other threat actors can later weaponize.
Historically, threat actors have been protective of their infections – to the extent of kicking rivals off compromised systems – but there are indications that this is changing, at least in the ransomware world. In this case, the threat actors either didn’t know or didn’t care that the organization had already been attacked.
More recently, while ransomware groups may still target each other, they have also expressed an interest in creating alliances and mutually beneficial agreements. The two linked examples both concern the same group (DragonForce), illustrating just how fluid attitudes can be.
See our article on multiple attackers for some practical guidance on prevention, and our full whitepaper for a more in-depth look at this aspect of the threat landscape.
Scammers scamming the scammers who scammed them
We spent a few weeks in late 2022 immersed in a fascinating niche of the cybercrime ecosystem: threat actors who target other threat actors (for a more recent example of such shenanigans, see this strange tale from 2025).
We examined multiple different types of scams targeting threat actors on several cybercrime forums, how threat actors police themselves, and the benefits for defenders (insights into culture, and rich, specific intelligence about individual threat actors).
Amongst all the schadenfreude, there were also some laughs – predominantly as a result of scammers attempting to scam the scammers scamming them. As we noted in the original article:
Unsurprisingly, while scamming threat actors might be lucrative, it can also be a dangerous game. We observed a few instances where threat actors weren’t just indignant about being scammed – they wanted to get even.
We saw one such case on BreachForums – a forum and marketplace specializing in stolen data – when now-convicted administrator Conor Fitzpatrick (AKA ‘pompompurin’) brought something interesting to users’ attention. A scammer was pretending to be Omnipotent (the erstwhile founder and administrator of RaidForums, the predecessor to BreachForums) to trick users into paying $250 USD to join the ‘new RaidForums.’
(This sort of scam is very common; in the same series of articles, we uncovered a scammer – possibly a moonlighting methamphetamine dealer – who operated a network of at least twenty fake cybercrime marketplaces, charging users a $100 ‘deposit’ to sign up.)
Having shared this news, Fitzpatrick invited BreachForums users to participate in a contest, with a $100 prize going to whoever “trolls this man the hardest.”
The eventual winner convinced the scammer that their website was leaking sensitive information by sending them a screenshot of the site’s Apache Server Status page. Convinced that their fake marketplace had been compromised, the alarmed scammer promptly took the site down.
In another incident, Fitzpatrick scammed a scammer personally. An individual had registered the Telegram channel ‘breachforums’ to offer already-public databases for sale (another common scam).
In addition to warning BreachForums users about this scam attempt, Fitzpatrick also contacted the scammer directly, offering to buy the handle.

Figure 3: The scammer expresses an interest in selling the Telegram handle
After some negotiating, a price of $10,000 was agreed, and Fitzpatrick asked the scammer to transfer the handle to his account. Naturally, the scammer was concerned about doing this before receiving payment, but received some reassurance from Fitzpatrick:

Figure 4: The BreachForums administrator provides some reassurance
In a plot twist that will surprise no one, Fitzpatrick removed the scammer’s permissions and banned him, all without paying – making this a classic example of a ‘rip and run’ scam.
The serious side
It’s undoubtedly entertaining to watch threat actors attacking each other instead of organizations. But there are two key practical learning points here:
- Unintended fallout. While scammers will sometimes set up fake marketplaces and sites with the intention of deceiving threat actors, it’s possible that others – such as researchers, analysts, journalists, and law enforcement personnel – may also fall victim to these schemes. That may involve paying to access ‘closed’ forums which are actually scam sites, or giving away credentials on fake marketplaces. Learning about the different types of scams and how they work may help such groups avoid becoming victims.
- A rich source of intelligence. Many criminal marketplaces and forums have dedicated sections for ‘arbitration,’ where users can bring complaints and allegations to moderators. As we explained in Part 4 of this series, these sections are often a rich source of intelligence, because many threat actors are so indignant about being scammed that they will happily post screenshots, chat logs, and other details as proof when opening an arbitration case. In the examples we looked at, we found cryptocurrency addresses, usernames, transaction IDs, email addresses, IP addresses, victim names, source code, screenshots of desktops (which included open browser tabs, chats, conversations, weather conditions, and timezones), and other information.
One hat, two hat, Black Cat, blue hat?
We’ve noted in previous research that some ransomware groups like to refer to themselves as ‘pentesters’, to lend themselves an air of undeserved legitimacy – with some even offering ‘security reports’ to their victims (after getting paid, naturally). Our IR team observed a particularly egregious example of this practice in 2022.
An organization in the US was targeted in an ALPHV/BlackCat ransomware attack. There was nothing particularly unusual about the attack itself, and, having negotiated a significant discount, the organization paid the ransom.
Along with the decryptor, the organization also received a ‘security report’ from the threat actor, in the form of a short text file complete with spelling mistakes, grammatical errors, and profanity. (We suspect the report would not pass muster with any self-respecting pentesting outfit – no screenshots, diagrams, or professionally-produced PDFs here.)
In the ‘report,’ the threat actor described the vulnerability they used to obtain initial access:
These points should answer most questions, if you want to know anything else or have problems with decryption, let us know. 1. You had an old critical Log4j vulnerability not fixed on Horizon, this is how we were able to get in initially, it was a bulk scanning, not like we were targetting [sic] you intentionally.
The threat actor went on to (very briefly) explain how they moved through the network, and provided some basic, and not necessarily practical, security recommendations:
2. Once inside your horizon VM, we dumped credentials, got some Domain admin, cracked the hash and [were] able to move laterally. Its [sic] absolutely [sic] madness to have 3k computers on the same domain, you should split all the machines between different domains, like …ONLINE\, FINANCIAL\.. you get the idea, so if one domain is infilitrated [sic] somehow not all the infrastructure will be compromised. Also you should routinely review sensitive information like passports, bank accounts and so on and have them on a different domain, even more secured. Try search on fileservers for the word passport, driving license, background… put all those files on [sic] a safe.
The threat actor proceeded to advise the victim not to “use any massively used backup software…Cant [sic] go deeper on that.” We assess that the threat actor was admitting here that they’re familiar with popular backup applications and know how to extract data from them.
4. Once network was scanned, we went for the backup servers which you should have on a different domain under 5 different keys and 2FAs. Dont [sic] use any massively used backup software, its [sic] a goldmine for us. Cant [sic] go deeper than that, just don’t [sic] use any of the big names.
Interestingly – and we mention this only because it’s fascinating to get a threat actor’s opinion on the subject – the threat actor then provided further security advice on monitoring logs, noting that “sophos is a good AV.”
5. At this point it should be clear to you that having a database of passwords for all services and services on the local network, it took us a few days to verify the credentials and make a plan of attack. We have to say sophos is a good AV, however no one monitors the logs on your network or at least they dont [sic] do on weekends. […] 8. General Recommendation. 8.1. Install a hard-to-override antivirus, Sophos Obligatory 2FA function to be enabled on the input, at least for domain admins. The most expensive is not always the best one, EDRs or MSPs if they are not being watched the logs 24/7 [sic] are useless and no MSP does that even if they tell you otherwise.
The threat actor also recommended the use of two-factor authentication, and suggested that the organization “install sophos on servers which are DC, fileservers, backups or critical and monitor…logs each 24hrs.”
The threat actor ended their report by saying, apparently without any trace of irony: “Its [sic] been an honor working with you” and concluded with a bizarre exhortation to “keep up the good maple syrup and trucker protests” – suggesting that they mistakenly believed the victim was based in Canada.
The serious side
As we noted in a 2023 article exploring the relationship between ransomware groups and the media, ransomware is becoming increasingly professionalized and commoditized. One symptom of this is ransomware groups attempting to paint a picture of themselves as legitimate security professionals and outfits, by publishing ‘press releases’ and ‘security reports’ (as in the BlackCat case described above).
This kind of rebranding is a tactic borrowed from legitimate industries, and it’s perhaps not unreasonable to speculate that ransomware groups may do this more in the future – perhaps as a recruitment tool, or to try to deflect negative media coverage and unwanted attention from law enforcement.
Practically speaking, so-called ‘security reports’ from ransomware groups may contain valuable information about how the threat actor gained access and pivoted through the network – particularly if it can be verified independently, as our IR team did in this case study. This is potentially very useful to know when it comes to recovery and remediation.
We would, however, advise that any organizations in possession of such reports do verify the information, and take any security recommendations from threat actors with a generous pinch of salt before acting on them.
Ice cream, e-liquid, Russian convicts, and Ancient Egypt
In our 2025 five-part series on what cybercriminals do with their ill-gotten gains – based on discussions in obscure areas of criminal forums – we found posts suggesting threat actors are involved or interested in fraud, theft, money laundering, shell companies, stolen and counterfeit goods, counterfeit currency, pornography, sex work, stocks and shares, pyramid schemes, gold, diamonds, insider trading, construction, real estate, drugs, offshore banking, hiring money mules (people hired by launderers to physically or virtually transport or transfer money) and smurfs (people hired to conduct small transactions in order to launder a larger amount), tax evasion, affiliate advertising and traffic generation, restaurants, education, wholesaling, tobacco and vaping, pharmaceuticals, gambling – and, believe it or not, cybersecurity companies and services.
But perhaps the most bizarre discussion we saw was a thread on a Russian-language cybercrime forum. It started innocently enough, with a user asking if it would be feasible to open an ice cream stall with 200,000 roubles (around $2,400 as of this writing).
Without any prompting, another user – who described themselves as “the master of the ice cream business” – confessed to arson.
In a very detailed post, they described setting fire to a competitor’s ice cream kiosk in the early 2000s, apparently reassured by the fact that “the statute of limitations…has already passed.” They went on to explain exactly what they used to commit the crime:
“…a crowbar, a plastic bottle with gasoline, a wick on an extension cord, matches…I noticed a vertical hollow pipe sticking out of the roof [of the kiosk]…I poured the whole bottle into it, stuffed a wick soaked in gasoline, and set it on fire…I never saw that business or that stall again.”
That threat actors are involved in both legitimate and criminal business activities came as no shock to us (although we didn’t expect there to be such scale and diversity). We were, however, a little taken aback to discover that the ice cream business can be so cut-throat (although any Scottish readers may be less surprised).
If ice cream arson won the title for ‘Weirdest Finding’, there were several other worthy contenders:
- A proposal to outsource software development, hardware manufacturing, and cybersecurity to Russian prison inmates. Some threat actors suggested that this could work in some cases (e.g., development of crude malware), while many others were sceptical and – perhaps surprisingly, given the proliferation on cybercrime forums of fenya, a dialect popular in Russian prisons – somewhat disparaging about the abilities of prisoners
- A threat actor who shared details of a controversial moneymaking scheme: selling e-liquid to schoolchildren. Another user took them to task (“I’m reading this as a parent…don’t you fucking have children?”). To the amusement of other threat actors in the thread, the two began a good old-fashioned flame war (“In the stores there is alcohol, cigarettes…maybe you should go to the mommies’ forum?”; “LEAVE YOUR ADDRESS…WE’LL COME NOW, WHEREVER YOU ARE”; “I don’t give a fuck about other people’s children”, and so on)
- In a case more alarming than amusing, we saw one threat actor advise others to invest in a prominent cybersecurity vendor, noting that the vendor may soon be acquiring another company. Irony aside, this raises the troubling possibility that threat actors could – if they’re not already – be shareholders (and therefore able to vote on corporate actions, receive dividends, etc.) of a company that tracks and disrupts threat actors
- But the runner-up spot surely went to an enterprising threat actor who claimed to have “found some pharaonic and coptic monuments [i.e., Ancient Egyptian artifacts]…only two people know about its location. We want to sell it, but we don’t know how…to handle the shipment and the right place to sell in an auction (black market).” The user uploaded two photos of what appeared to be a sarcophagus lying on bubble wrap. To our surprise, some users expressed an interest in purchasing the ‘artifacts,’ with one even offering to put the sellers in touch with a buyer “who will buy it immediately after his expert confirms.”

Figure 5: A threat actor advises fellow forum users to invest in a well-known security vendor

Figure 6: A threat actor poses an unusual question to their fellow forum users
The serious side
During our ‘Beyond the Kill Chain’ investigation, we found hundreds of business activities being discussed on criminal forums. The examples above are certainly bizarre and entertaining, but what stood out to us most was the sheer diversity and plurality of both legitimate and criminal enterprises, online and in the real world, that cybercriminals may be involved in.
As we noted in the concluding chapter of the series, threat actors diversifying into other industries and criminal activities has troubling implications. It can make disrupting those threat actors more difficult, particularly when it comes to seizing assets, and can mean that investigations – ‘following the money’ – are necessarily more complex. Moreover, it can increase threat actors’ wealth, power, and influence, which again can complicate investigations. And it means that their crimes can affect more victims, directly or indirectly.
In the security industry, we often treat cybercrime as being in a silo – a distinct, specialist, isolated activity – and we focus on the kill chain, threat intelligence, and defensive measures. In the wake of attacks, our attention – not unreasonably – is usually focused on the victims. Meanwhile, the perpetrators slip back into the shadows, and we don’t typically think about what they do once an attack is over, or where the money goes.
But perhaps we should spend more time looking into how cybercriminals are using and investing their profits. Doing so can lead to additional investigative and intelligence opportunities around attribution, motivation, connections, and more.
Moreover, some of the activities we uncovered strongly suggest that we should not put threat actors on any kind of pedestal. They are not just cybercriminals – they are criminals, full stop. They should not be glorified, or celebrated, or portrayed as anything other than what they are: people who make money at the expense of victims. Our investigation suggests that at least some threat actors are engaged in exploitative, harmful, and illegal activities, both online and in the real world, from which they are actively profiting.
Much like the arbitration sections in our ‘Scammers Scamming Scammers’ series, the forum rooms in which such activities are discussed can also be a useful source of intelligence. We noted threat actors sharing details of geographic locations; biographical information; screenshots showing profile pictures, names and addresses; photographs of specific locations; references to specific amounts of money, sometimes accompanied by dates and times; references to previous convictions; detailed discussions of legal and illegal schemes; and details of advice received from lawyers, accountants, and peers.
AI: The script kiddie playground
Whatever you may think about the benefits, drawbacks, risks, and implications of generative AI with regard to cybersecurity, we can probably all agree on one thing: inexperienced threat actors (‘script kiddies’) will try to use it as a shortcut whenever possible.
In a 2023 post exploring attitudes to generative AI on cybercrime forums (followed by a 2025 update), we noted that many script kiddies were scraping the barrel, to put it mildly. One such individual on a low-tier cybercrime forum shared a script which performs the following actions:
1. Ask ChatGPT a question
2. If the response begins with “As an AI language model…” (a ChatGPT disclaimer indicating that the platform is unable to answer the question for whatever reason), search Google using the original question as the search query
3. Copy the Google search results and save them
4. Ask ChatGPT the same question again, stipulating that the answer should come from the scraped Google results
5. If ChatGPT still replies with “As an AI language model…”, ask ChatGPT to rephrase the question as a Google search, execute that search, and repeat steps 3 and 4
6. Repeat this entire process up to five times, or until ChatGPT provides a viable answer
We didn’t test the script ourselves, because life is too short – but we suspect that we, along with the vast majority of people, would find it more convenient to just use Google in the first place.
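For the curious, here is a rough sketch, in Python, of the logic described above. To be clear, this is our own reconstruction for illustration only, not the threat actor’s code; ask_chatgpt() and google_search() are placeholder helpers standing in for whichever APIs the original script actually called.

```python
# A sketch of the fallback loop described above. This is our reconstruction,
# not the original script; ask_chatgpt() and google_search() are placeholders
# for whatever API calls the threat actor actually used.

DISCLAIMER = "As an AI language model"
MAX_ATTEMPTS = 5


def ask_chatgpt(prompt: str) -> str:
    """Placeholder: send `prompt` to ChatGPT and return the reply text."""
    raise NotImplementedError


def google_search(query: str) -> str:
    """Placeholder: run a Google search and return the scraped result snippets."""
    raise NotImplementedError


def answer(question: str) -> str | None:
    query = question
    for _ in range(MAX_ATTEMPTS):
        # Step 1: ask ChatGPT directly
        reply = ask_chatgpt(question)
        if not reply.startswith(DISCLAIMER):
            return reply

        # Steps 2-3: ChatGPT declined, so fall back to Google and save the results
        results = google_search(query)

        # Step 4: ask again, telling ChatGPT to answer from the scraped results
        reply = ask_chatgpt(
            f"{question}\n\nAnswer using only these search results:\n{results}"
        )
        if not reply.startswith(DISCLAIMER):
            return reply

        # Step 5: still refused, so have ChatGPT rephrase the question as a
        # search query and try again on the next pass
        query = ask_chatgpt(f"Rephrase this as a Google search query: {question}")

    return None  # Step 6: give up after five attempts
```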
Another user, on a higher-tier cybercrime forum, came up with an idea that initially seemed more practical, albeit still niche: a novel malware distribution method using ChatGPT.
The idea, the user explained, was to create a site where visitors can take a selfie, which is then turned into a downloadable ‘AI celebrity selfie image.’ Naturally, this downloadable image would be disguised malware.
The threat actor claimed that ChatGPT had helped them turn this idea into a proof-of-concept, and to demonstrate it in action, they shared several screenshots – one of which was an unredacted photograph of what we suspect was the threat actor’s face.

Figure 7: A screenshot of the ‘Celebrity Selfie Generator’ shared by its creator
Finally, our favourite example: a prominent Russian-language cybercrime forum announced the launch of a ChatGPT chatbot to answer users’ questions. In the announcement, the forum administrator specified that:
You can create topics exclusively on the topics of our forum. No need to ask questions about the weather, biology, economics, politics, and so on. Only the topics of our forum, the rest is prohibited, the topics will be deleted.
Despite some forum users responding enthusiastically to this news, the chatbot was not particularly well-suited for use on a cybercrime forum, as it repeatedly told frustrated users that it would not “provide assistance or guidance on coding malware or engaging in any illegal activities” or “violate ethical and legal principles.”

Figure 8: A user on a cybercrime forum asks the forum chatbot how to code malware, and is told the chatbot can’t help

Figure 9: Another unsuccessful attempt to use the cybercrime forum’s chatbot
Amusingly, at least one user attempted to jailbreak the forum bot to make it more compliant, and failed.

Figure 10: A user tries, unsuccessfully, to jailbreak the forum bot
The serious side
In our 2025 update to the original article, we found that while a minority of threat actors on criminal forums may be dreaming big and have some (possibly) dangerous ideas when it comes to AI, their discussions remained theoretical and aspirational for the most part. There was some evidence that a handful were seeking to incorporate generative AI into some toolkits – mostly for spamming and social engineering – but not a great deal more.
However, as shown in the examples above, threat actors – particularly those who are less skilled – are not shy when it comes to experimentation. While a lot of their ideas are unlikely to pose a threat, that’s not to say that inexperienced threat actors can’t still cause damage. In our 2024 post on ‘junk gun ransomware,’ for example, we discovered a significant amount of cheap, crude ransomware available on criminal forums, distinct from larger affiliate programs, that could enable lower-tier threat actors to target smaller companies and individuals.


