Cybercriminals Using ChatGPT to Build Hacking Tools and Write Code

Cybercriminals have already begun to use OpenAI's chatbot ChatGPT for malicious purposes, according to security analysts.
In one documented example, a hacker described on an underground hacking forum how he had experimented with the chatbot to recreate known malware strains.
The hacker shared Android malware code that he said was written with ChatGPT's help. The program could locate files of interest on a compromised device, compress them, and exfiltrate them.
The same hacker showcased another malicious tool that could install a backdoor on a computer and be used to deliver additional malware to the infected machine.

Check Point has noted that some hackers are using ChatGPT to create their first scripts. In one forum post, a user shared code he claimed he had written in Python with the chatbot's help. The code is said to be able to encrypt files and is believed to be the first script its author had ever created.
Check Point researchers say that, with small modifications, code like this could be used to encrypt a victim's machine without their knowledge or consent, effectively functioning as ransomware.
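The forum script itself was not published, so as a rough illustration of the technique Check Point describes, here is a minimal, benign sketch of a Python file-encryption routine. The library choice (the open-source cryptography package) and all file paths and function names are our assumptions, not details from the report.

```python
# Illustrative sketch only: a minimal file-encryption routine of the kind
# described in the Check Point report. The actual forum script was not
# published; the library and names here are assumptions.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(path: Path, key: bytes) -> None:
    """Encrypt a single file in place using symmetric (Fernet) encryption."""
    f = Fernet(key)
    data = path.read_bytes()
    path.write_bytes(f.encrypt(data))

if __name__ == "__main__":
    key = Fernet.generate_key()  # without this key, the files are unrecoverable
    for p in Path("demo_files").glob("*.txt"):  # hypothetical target folder
        encrypt_file(p, key)
```

The concern Check Point raises is exactly this simplicity: a few lines of generated boilerplate handle the cryptography, and without the key the encrypted files cannot be recovered.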
The security company emphasized that while ChatGPT-coded hacking tools appear "basic," this is only a temporary stage. Threat actors will soon improve their use of AI-based tools for malicious purposes.
A third instance of ChatGPT being used for fraudulent activity was detected by Check Point: a cybercriminal posted on an underground forum that he had used the AI chatbot to build a Dark Web marketplace script with a third-party API. The hacker explained that the code pulls up-to-date cryptocurrency prices to run the market's payment system.
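The post does not name the third-party API involved. As a hedged sketch of the general technique, the snippet below fetches live prices from CoinGecko's public price endpoint (a real, freely accessible service); the function names and the choice of CoinGecko are our assumptions for illustration only.

```python
# Illustrative sketch: pulling live cryptocurrency prices from a public
# third-party API (CoinGecko here; the forum post did not name the service).
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def get_price_usd(coin_id: str = "bitcoin") -> float:
    """Return the current USD price for the given coin."""
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": coin_id, "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[coin_id]["usd"]

def usd_to_coin(amount_usd: float, coin_id: str = "bitcoin") -> float:
    """Convert a USD listing price into the equivalent coin amount."""
    return amount_usd / get_price_usd(coin_id)

if __name__ == "__main__":
    print(f"$100 is currently {usd_to_coin(100):.6f} BTC")
```

By delegating price discovery to a public API like this, a marketplace script can keep its payment system current without maintaining any exchange-rate data of its own.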
OpenAI, the developer of ChatGPT, has implemented safeguards to prevent users from directly asking the AI to create malware. However, security analysts and journalists have noted that the chatbot can still be abused in other ways, for example to write grammatically flawless phishing emails.
We were unable to reach OpenAI for comment.
