Cyber Crime's New Best Friend: AI-Based Attacks
Cybersecurity firm Cowbell recently discussed the intense cyberattacks targeting manufacturing, public administration, education, healthcare, and supply chain channels, and how budget restrictions in the public education sector have delayed cybersecurity updates, leaving a gap for hackers to exploit.
As always, an organization’s data remains a hacker’s most valuable target, whether the business is large or small. Large businesses are targeted because they host substantial amounts of data, while small businesses are easy targets because they have limited cybersecurity resources.
What is behind cybersecurity firms’ growing concern over these recent attacks? AI. AI’s rapidly expanding capabilities give cybercriminals countless opportunities to manipulate systems and conduct various schemes. In fact, according to Maria Korolov from CSO, 51% of companies fear that AI-powered attacks are their largest threat. Because AI adapts so quickly, experts are working to identify the ways it is being repurposed for attacks.
‘AI Poisoning’ - The usual methods of detection are no longer enough. Antivirus tools and machine learning models are already designed to block basic malware, and security tools now use AI as an added layer of defense. But with these looming threats, those tools must be trained to detect far more sophisticated attacks. How do cybercriminals manipulate AI? They target an organization’s AI-trained defenses by feeding the model bad data, a technique known as AI poisoning. Once the model begins behaving contrary to its original design, attacks become easier to carry out and harder to pinpoint.
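To make the idea concrete, here is a minimal, purely illustrative sketch of how poisoning works. It assumes nothing about any real security product: the "model" is just a toy word-count spam filter, and the attacker's access to the training data is hypothetical. The point is only that flipping labels in the training set makes the resulting model wave malicious messages through.

```python
from collections import Counter

def train(samples):
    """Build a toy model by counting word frequencies per label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Label a message by which class's word counts match it better."""
    words = text.lower().split()
    spam_score = sum(model["spam"][w] for w in words)
    ham_score = sum(model["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

clean_training_set = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting agenda for monday", "ham"),
    ("quarterly report attached", "ham"),
]

# The poisoning step: an attacker relabels spam samples as legitimate,
# so the model learns that spam-like wording is harmless.
poisoned_training_set = [
    (text, "ham" if label == "spam" else label)
    for text, label in clean_training_set
]

clean_model = train(clean_training_set)
poisoned_model = train(poisoned_training_set)

message = "claim your free prize"
print(classify(clean_model, message))     # spam
print(classify(poisoned_model, message))  # ham - the poisoned model lets it through
```

Real poisoning attacks target far larger models and subtler label or feature manipulations, but the mechanism is the same: corrupt what the model learns from, and its decisions flip.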
On top of that, these corrupted AI models open the door to AI-based malware. Using generative AI to build malware campaigns gives cyberhackers even faster access to endpoint devices, combing through a program's code to find a faulty section of the script that can be reconfigured for malicious purposes.
Even worse, hackers can turn spam filters to their advantage: by modifying a message and revamping its code until it slips past the filters, they make it appear legitimate, like an ad campaign, a sales pitch, or a realistic headline. Unfortunately, unsuspecting employees end up clicking on those convincing messages, which are actually laced with malware. Phishing emails may also embed AI-simulated audio of a real person's voice, lending false credibility; a fake audio message can easily be mistaken for a real one.
Hackers Using Machine Learning - How does a hacker avoid triggering alerts? They can use AI platforms, like ChatGPT, to learn how to create credible-looking emails, becoming more adept at avoiding spam and phishing triggers. On top of crafting discreet phishing emails, hackers also use machine learning to uncover users' passwords. Former EY partner Adam Malone states, "They're also using machine learning to identify security controls, so they can make fewer attempts and guess better passwords, increasing the chances that they'll successfully gain access to a system."
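The "fewer attempts, better guesses" idea can be sketched in a few lines. This is a hedged illustration, not any real attack tool: the breach corpus is made up, and the "learning" is simply frequency ranking, the most basic form of learning from data. Ordering guesses by how often passwords appear in past breaches means common passwords fall in the first handful of attempts.

```python
from collections import Counter

# Hypothetical breach corpus an attacker has collected (illustrative only).
breached_passwords = [
    "123456", "password", "123456", "qwerty",
    "123456", "password", "letmein",
]

# Rank guesses by observed frequency: the most common passwords get tried first.
guess_order = [pw for pw, _ in Counter(breached_passwords).most_common()]

def attempts_needed(target):
    """How many guesses before the target password is found (None if never)."""
    for attempt, guess in enumerate(guess_order, start=1):
        if guess == target:
            return attempt
    return None

print(guess_order)                   # ['123456', 'password', 'qwerty', 'letmein']
print(attempts_needed("password"))   # 2 - found before any lockout threshold
```

Real attackers use far richer models trained on billions of leaked credentials, but the principle Malone describes is the same: informed guessing keeps the attempt count low enough to stay under alert thresholds.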
The firm Cowbell urges businesses to become more proactive in protecting their data infrastructure and network. Businesses must now assume hackers use AI too. As a baseline, organizations should employ services that help with patch management and endpoint security. Managed IT Services companies can help organizations shrink the gateway hackers use to infiltrate your company's data, devices, and overall network.
At SpaceBound Solutions, we are always focused on implementing strict cybersecurity measures, especially since AI-based attacks will only grow. We offer security services tailored to your network infrastructure, such as Endpoint Security.
Sources:
Risk and Insurance: https://riskandinsurance.com/manufacturing-most-vulnerable-rising-cybersecurity-risks-across-industries-report/
CSO Online: "10 ways hackers will use machine learning to launch attacks"