AI-Generated Malware and How It’s Changing Cybersecurity
An AI-generated malware dubbed BlackMamba was able to bypass cybersecurity technologies such as industry-leading EDR (Endpoint Detection and Response) in an experimental project led by researchers at HYAS.
While the BlackMamba malware was only tested as a proof of concept and has not been observed in the wild, its existence means that AI will unequivocally change the threat landscape for individuals and organizations alike.
“Since a platform like ChatGPT can simulate human-like responses, it can be used to trick people into divulging sensitive information or clicking on malicious links.”- Shomiron Das Gupta, Cybersecurity Entrepreneur and Threat Analyzer
Cybersecurity providers already take advantage of AI to detect unusual data patterns within a network and discover cyberattacks.
In fact, organizations taking advantage of this technology have a "74-day shorter breach life cycle," according to an IBM study. This means that AI and automation help stop a breach before it causes further damage.
If you’d first like to review the elements of a solid cybersecurity strategy for businesses, download Impact’s eBook: What Makes a Good Cybersecurity Defense for a Modern SMB?
With the popularization of AI tools such as ChatGPT, which are able to generate code, malicious actors have more capabilities to create different types of attacks. The following are prominent examples of attacks powered by AI.
AI-Generated Videos: Malware Spread Through YouTube
AI is also helping cybercriminals deliver malware through trusted social media platforms such as YouTube. Malicious actors are creating AI-generated videos that appear to be tutorials for popular software programs like Photoshop, Premiere Pro, and others.
The description section of these videos offers viewers a free version of these otherwise expensive tools, tempting them to click on links which spread stealer malware.
Stealer malware works by infecting a system and stealing data from it. Data such as login usernames and passwords are taken from the target computers and sent back to cybercriminals.
To prevent such attacks on your organization's network, implement employee cybersecurity training. When your workforce is aware of the dangers of clicking malicious links or downloading pirated software, they will be prepared to avoid those dangers.
AI-Powered Phishing Attacks: More Enticing Lures
Another use bad actors have found for AI is creating extremely targeted phishing emails to lure recipients into clicking malicious links or downloading malware.
Usually, cybercriminals use information people post about themselves on social media or data acquired from a breach to craft emails that seem to come from a trusted source.
The emergence and increasingly widespread use of AI language tools like ChatGPT allow bad actors to feed someone's personal or company data into the AI and ask it to craft an email targeting that person.
Because the AI writes convincing, human-sounding emails far faster than any person could, phishing attacks improve not only in accuracy but also in speed and volume.
Prompt your employees to check any email coming from an external source against phishing red flags and to report suspicious messages to your IT or cybersecurity team. Additionally, enforcing multi-factor authentication (MFA) across your company helps keep accounts secure even if credentials are exposed.
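The red-flag checks above can also be partly automated. The sketch below is a minimal, illustrative heuristic filter, not a production mail gateway: the function name, the urgency-phrase list, and the example addresses are all hypothetical, and real deployments would rely on vetted tooling rather than a few string checks.

```python
import re

# Illustrative heuristics only; phrases and checks are examples, not a vetted list.
URGENT_PHRASES = ["verify your account", "password expires", "act now", "urgent"]

def phishing_red_flags(sender: str, reply_to: str, subject: str, body: str) -> list[str]:
    """Return a list of simple red flags found in an email."""
    flags = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    # Mismatched Reply-To is a classic sign the answer goes somewhere unexpected
    if sender_domain != reply_domain:
        flags.append("Reply-To domain differs from sender domain")
    text = (subject + " " + body).lower()
    for phrase in URGENT_PHRASES:
        if phrase in text:
            flags.append(f"urgency phrase: {phrase!r}")
    # Links that point at a bare IP address instead of a named host
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("link points to a bare IP address")
    return flags

print(phishing_red_flags(
    "it-support@examp1e.com", "collector@another-domain.net",
    "Urgent: password expires today", "Click http://192.168.0.7/reset now"))
```

A filter like this catches only the crudest lures; its real value in training is showing employees concretely what "red flags" means.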
BlackMamba: An Ever-Changing AI-Generated Malware
BlackMamba, mentioned above, works by using a large language model (LLM)—a deep learning algorithm that can summarize and generate text—to create a polymorphic keylogger. This means that every time the BlackMamba malware runs, it mutates, making it able to slip through predictive cybersecurity software.
Think of this AI malware as a virus that is continuously mutating. It would be difficult to have a permanent cure since the malware is able to change on the fly.
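Why does mutation defeat signature matching? A benign sketch makes the idea concrete: if a harmless code snippet is rewritten with fresh identifiers on every "generation," its cryptographic hash (the classic antivirus signature) changes even though its behavior does not. This is an assumption-laden toy illustration of polymorphism, not a reproduction of BlackMamba's technique.

```python
import hashlib
import random
import string

TEMPLATE = "VAR = 21\nprint(VAR * 2)"  # harmless snippet with a placeholder name

def mutate(snippet: str) -> str:
    """Rewrite the snippet with a random identifier; behavior is unchanged."""
    new_name = "".join(random.choices(string.ascii_lowercase, k=8))
    return snippet.replace("VAR", new_name)

gen1, gen2 = mutate(TEMPLATE), mutate(TEMPLATE)
sig1 = hashlib.sha256(gen1.encode()).hexdigest()
sig2 = hashlib.sha256(gen2.encode()).hexdigest()

# Same behavior, different "signatures" (with overwhelming probability)
print(sig1 != sig2)
```

A scanner that only compares hashes or byte patterns never sees the same sample twice, which is why behavior-based monitoring matters against polymorphic threats.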
The BlackMamba malware can be delivered through an executable, a type of file that carries instructions to alter a device's system. Cybercriminals could create malware similar to the experimental BlackMamba and deliver it through what seems like an innocuous piece of software.
This is another instance where extreme care and attention need to be taken by the whole organization to prevent clicking on malicious links, downloading unlicensed software, and spreading malware across a network.
How Is AI Changing Cybersecurity?
The above examples illustrate the influence AI will have on the future of cybersecurity. While AI has been used by cybersecurity professionals to analyze data and detect anomalies in order to catch an attack, this tool now serves cybercriminals as well.
Since AI language tools like ChatGPT can simulate human writing, it will be much harder for users to identify whether an email is a phishing lure or a legitimate message.
We have reached an age in which AI will aid bad actors in increasing the volume and efficacy of cyberattacks.
Shomiron Das Gupta, a cybersecurity entrepreneur and threat analyzer, recommends that organizations take advantage of technologies such as EDR, which monitors individual end-user devices, and SIEM, a solution that recognizes suspicious events and alerts you to them (for example, a suspicious login or excessive failed logins on a company account).
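The "excessive failed logins" alert mentioned above boils down to a sliding-window count per account. The sketch below shows the idea in miniature; the class name, threshold, and window size are illustrative choices, not the settings of any particular SIEM product.

```python
from collections import deque
from datetime import datetime, timedelta

class FailedLoginMonitor:
    """Toy SIEM-style rule: alert when one account exceeds a threshold of
    failed logins inside a sliding time window. Defaults are illustrative."""

    def __init__(self, threshold: int = 5, window: timedelta = timedelta(minutes=10)):
        self.threshold = threshold
        self.window = window
        self.events: dict[str, deque] = {}

    def record_failure(self, account: str, when: datetime) -> bool:
        """Record a failed login; return True if the account should alert."""
        q = self.events.setdefault(account, deque())
        q.append(when)
        # Drop failures that have aged out of the window
        while q and when - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold

monitor = FailedLoginMonitor(threshold=3, window=timedelta(minutes=5))
start = datetime(2023, 5, 1, 9, 0)
for i in range(5):
    alert = monitor.record_failure("jdoe", start + timedelta(minutes=i))
print(alert)  # prints True: 5 failures in 5 minutes exceeds the threshold of 3
```

Real SIEM platforms layer correlation, enrichment, and alert routing on top of rules like this, but the core detection logic is this simple counting pattern.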
- Phishing attacks are increasing in number and efficacy due to AI language tools such as ChatGPT, which make messages seem more human.
- Bad actors use social media platforms such as YouTube to spread AI-generated videos that trick viewers into downloading malware-loaded software.
- Researchers created a type of malware that uses AI to continuously morph in order to bypass cybersecurity tools.
- Organizations can improve their security standing by implementing employee awareness training and using advanced tools such as EDR and SIEM to monitor their networks.
To check whether your business has a strong cybersecurity posture, download the eBook: What Makes a Good Cybersecurity Defense for a Modern SMB?