An article released this week by BlackBerry Global Research suggests that OpenAI's ChatGPT may already be in use in nation-state cyberattacks.
In a poll of 1,500 "IT decision makers," 51% of respondents said they believe there will be a successful cyberattack credited to ChatGPT within the year, and 95% felt that governments need to regulate such technologies.
The findings suggest that the software's capabilities, available only since November, could be leveraged by cybercriminals to launch highly sophisticated and convincing phishing attacks, and that organizations need to remain vigilant without falling for "hype and scaremongering."
Much of the risk assessment around ChatGPT has focused on its ability to generate highly convincing phishing emails and perform rapid code analysis. There have also been extensive attempts to determine whether the language model can generate malicious code.
If asked outright, the software flags the request and refuses to respond.
An investigation by Check Point Research showed that, when asked indirectly, ChatGPT could construct a full infection flow: it generated a phishing email prompting users to download an Excel file containing malicious VBA code. This was achieved with some refinement and iteration done directly within the program itself.
That said, the software also has plenty of potential for defensive applications. In fact, if you ask ChatGPT itself how it can be used to supplement your cybersecurity, you’ll get the following response (or something similar):