ChatGPT Already Being Used For Malicious Purposes
ChatGPT is the new AI chatbot with some pretty cool capabilities, and Microsoft is reportedly negotiating to invest in it to the tune of $10 billion. It can write college term papers for students, for example. It does such a good job that professors are concerned they won't be able to detect it, and the current generation of plagiarism-detection tools won't work against it because each paper it writes is unique.
But ChatGPT also does good things. It could probably write my blog posts for me; some might say better than I do. Unfortunately, its training data cuts off in 2021, so I would have to write about historical cybersecurity.
However, hackers have decided that they, too, can use ChatGPT.
Cybersecurity researchers at Check Point say that members of underground hacking communities are already experimenting with how ChatGPT might be used to facilitate cyber attacks and support malicious operations.
Security folks call low-skill hackers "script kiddies." ChatGPT could dramatically up their game. Corporate developers are already using ChatGPT to write code snippets for them; that same technique should work for malicious code.
In one forum thread that appeared toward the end of December, the poster described how they were using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. They are using it to create Python-based malware that searches for common document, image, and PDF files and even steals them.
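To show how low the bar is, here is a benign sketch of the file-search portion of such a script, the kind of snippet ChatGPT can generate in seconds. The directory and extension list are illustrative assumptions, and this version only lists matching files; it does not copy or transmit anything.

```python
# Benign illustration: recursively find files with "interesting" extensions.
# The extension list below is an assumption for demonstration purposes.
from pathlib import Path

TARGET_EXTENSIONS = {".docx", ".pdf", ".jpg", ".png", ".xlsx"}

def find_documents(root: str) -> list[Path]:
    """Recursively collect files whose extension is in TARGET_EXTENSIONS."""
    return [
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in TARGET_EXTENSIONS
    ]

if __name__ == "__main__":
    # Only prints the paths it finds -- nothing is exfiltrated.
    for path in find_documents("."):
        print(path)
```

A dozen lines of boilerplate like this, plus a network call, is essentially the malware the forum posters described. That is why defenders should assume the skill floor for this kind of attack has dropped.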
The hackers who wrote these posts appear to be writing them to train less skilled hackers, so that they have more help to do evil. Credit: ZDNet
Researchers have demonstrated how to use ChatGPT to create better phishing and business email compromise emails. It is only a matter of time before hackers do the same thing, assuming they are not already doing it.
The researchers say ChatGPT can create unique, one-off phishing emails with grammatically correct, human-like text, and can build entire email chains to make the attack look more realistic. Attackers can even feed it stolen emails to learn the boss's writing style and mimic it. Credit: CSO Online
What this means is that businesses are going to need to up their security game if they want to stay ahead of the AI-fueled hackers.
Alternatively, they can just be the next ransomware victim in the news cycle.
Of course, they could also hope that the anti-hacker-fairy will protect them. Based on the statistics I have seen, I think that fairy is on an extended vacation.
If you need help with your endpoint security program, please contact us.