You Know Who’s Good at Writing Malware? ChatGPT, Apparently

At the time of this writing, ChatGPT is not even 3 months old. Yet in its infancy, it’s already become one of the most interesting and polarizing technologies in recent memory. The AI chatbot can do some pretty incredible things, like answer your questions in a split second or write an entire email for you if you don’t have the time. But it’s also been heavily criticized in the education sector for having the ability to help students cheat their way through classes.

Another “con” to add to the list: ChatGPT can apparently write malware, and as it turns out, it’s pretty good at it.

According to a report from security firm CyberArk, the chatbot can write sophisticated “polymorphic” malware. The malware ChatGPT is capable of creating can apparently wreak havoc on a system’s hardware.

Security professionals are warning that the OpenAI-developed chatbot could fundamentally change how threat actors conduct cybercrime by handing them a new method of malware development.

CyberArk researchers warn that code written with the aid of ChatGPT displayed “advanced capabilities” and was able to “easily evade security products.” What they are describing is a class of malware known as “polymorphic.”

What is “polymorphic” malware? According to CrowdStrike, “A polymorphic virus, sometimes referred to as a metamorphic virus, is a type of malware that is programmed to repeatedly mutate its appearance or signature files through new decryption routines. This makes many traditional cybersecurity tools, such as antivirus or antimalware solutions, which rely on signature-based detection, fail to recognize and block the threat.”

What CrowdStrike is essentially saying is that polymorphic malware can shapeshift its way around traditional security programs that rely on recognizing a fixed signature to detect it.
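To see why signature-based detection struggles with code that mutates, here is a minimal, deliberately harmless Python sketch (it hashes ordinary strings, not actual malware): a scanner that matches exact SHA-256 hashes catches a sample it has seen before, but misses a functionally identical copy that differs by a single byte.

```python
import hashlib

# A harmless stand-in for a program's bytes -- NOT real malware.
payload_v1 = b"print('hello world')  # variant A"
payload_v2 = b"print('hello world')  # variant B"  # one byte "mutated"

# A naive signature database: the hash of the variant seen before.
known_signatures = {hashlib.sha256(payload_v1).hexdigest()}

def signature_match(data: bytes) -> bool:
    """Flag data only if its hash exactly matches a known signature."""
    return hashlib.sha256(data).hexdigest() in known_signatures

print(signature_match(payload_v1))  # True  -- the known variant is caught
print(signature_match(payload_v2))  # False -- the mutated copy slips past
```

Real polymorphic malware automates that mutation on every copy, which is why signature-only tools keep coming up empty.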

ChatGPT does have filters in place that are supposed to block malware creation from happening, but according to researchers, they were able to bypass these security measures by simply insisting that the chatbot follow their orders. Basically, the researchers “bullied” the chatbot into creating the malware – something that has been seen before, when researchers managed to generate toxic content with ChatGPT despite guardrails set up to block such requests.

In addition to making it much easier for seasoned cybercriminals to carry out their attacks, if OpenAI does not remedy this quickly, the practice could open the door for amateur criminals without much experience or know-how to launch malicious attacks on unsuspecting victims.

“As we have seen, the use of ChatGPT’s API within malware can present significant challenges for security professionals,” CyberArk’s report says. “It’s important to remember, this is not just a hypothetical scenario but a very real concern.”


Story via Gizmodo
