Google's AI Assistant, Gemini, Faces Malware Attacks
It’s Google’s priority to keep its software impenetrable.
It’s no secret that Google has rapidly integrated AI into nearly every aspect of its search engine. AI Overviews, AI assistants – they’ve got it all. Despite the controversies around AI Overviews returning inaccurate information in searches, the recent focus falls on Google’s AI assistant, Gemini, which has just made headlines.

In previous blogs, we’ve discussed how hackers take advantage of the learning models behind generative AI, feeding them malicious prompts that override the original instructions and wreak havoc on the software system. Once again, hackers have found a way to manipulate Gemini, this time making the assistant alert Google users that their account has been hacked. Trusting Gemini, users assume the warning is genuine – and are then prompted to change their password and enter personal information the chatbot claims is necessary. But now the hackers have a hold on your data.
In a separate incident, the chatbot assistant was also reported spouting sentences unprompted, calling itself a ‘failure’ and ‘a disgrace.’ Google has acknowledged the behavior as a bug and put out a notice stating it is working on a fix.
Thankfully, Google has moved quickly to put security measures in place, coding and training upgraded machine learning models to detect suspicious malware that poses a risk to users and to the AI itself. Making sure its software is impenetrable to hackers remains its priority.
As we all adapt to newer technologies, SpaceBound Solutions highly recommends keeping your information and cyber infrastructure secure and up to date. For more information, view our list of IT services designed to manage your network and protect you from potential cyber threats.
—
Sources:
Google Issues Major Warning to All 1.8 Billion Users
Google is fixing a bug that causes Gemini to keep calling itself a 'failure'