Updated: Jun 10
AI-generated Phishing Scams: ChatGPT's ability to converse fluently and convincingly gives attackers a way to craft phishing emails that read like genuine human correspondence and slip past both filters and recipients. Cybersecurity leaders must equip their IT teams with tools that can distinguish ChatGPT-generated text from human-written text, aimed specifically at incoming "cold" emails. The IT infrastructure should integrate AI-detection software that automatically screens and flags suspected AI-generated emails. All employees should be routinely trained and retrained in up-to-date cybersecurity awareness and prevention skills, with specific attention paid to AI-assisted phishing scams.
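To make the screening idea concrete: production AI-text detection relies on trained classifiers, but the plumbing of "score an incoming cold email and flag it for review" can be sketched with simple stand-in heuristics. The phrase list, scoring weights, and threshold below are illustrative assumptions, not a real detector.

```python
import re

# Hypothetical heuristic signals; a real deployment would replace this
# scoring with a trained AI-text/phishing classifier.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click the link below",
    "update your payment information",
]

def score_email(subject: str, body: str, sender: str, known_senders: set) -> int:
    """Return a simple risk score for an incoming email."""
    score = 0
    if sender not in known_senders:
        score += 2  # "cold" email: sender has no prior history with us
    text = f"{subject} {body}".lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 3  # link to a raw IP address, a classic phishing tell
    return score

def flag_for_review(subject, body, sender, known_senders, threshold=3):
    """Flag the email for human/IT review when the score crosses a threshold."""
    return score_email(subject, body, sender, known_senders) >= threshold
```

Wiring a function like `flag_for_review` into the mail gateway lets flagged messages be quarantined for analyst review rather than silently dropped, which also generates training examples for employee awareness sessions.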
Duping ChatGPT into Writing Malicious Code: With carefully worded prompts, attackers can trick the model past its safeguards and into generating malicious code. Cybersecurity professionals need the training and resources to respond to these ever-growing threats, AI-generated or otherwise, and that training should also cover how ChatGPT can serve as a tool in the defender's own arsenal. Software developers should pursue generative AI that is purpose-built for human-staffed Security Operations Centers (SOCs), potentially even more capable than ChatGPT.
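One concrete way a generative model can fit into a human-staffed SOC is as an alert-triage assistant that summarizes alerts and suggests next steps for the analyst. A minimal sketch, assuming a hypothetical alert schema and deliberately leaving the model call stubbed (a real deployment would route it through a vetted, logged API client with output filtering):

```python
# Illustrative sketch: the alert field names and prompt wording are
# assumptions, and no real model endpoint is called here.

def build_triage_prompt(alert: dict) -> str:
    """Format a security alert into an analyst-assist prompt with guardrails."""
    return (
        "You are assisting a human SOC analyst. Summarize the alert below, "
        "rate its severity (low/medium/high), and suggest next investigative "
        "steps. Do not produce exploit code.\n\n"
        f"Source: {alert['source']}\n"
        f"Rule: {alert['rule']}\n"
        f"Details: {alert['details']}"
    )

def triage(alert: dict) -> str:
    """In production this would send the prompt to a vetted model endpoint
    and return the model's summary for the analyst to act on."""
    prompt = build_triage_prompt(alert)
    # response = model_client.complete(prompt)  # hypothetical client call
    return prompt  # stub: return the prompt itself so the flow is testable
```

Keeping the guardrail instruction in a fixed template, rather than letting analysts free-type prompts, is one way to reduce the same prompt-manipulation risk the paragraph above describes.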
Regulating AI Usage and Capabilities: ChatGPT itself can be hacked or manipulated into providing biased or distorted information, turning it into a potent propaganda machine. Enhanced government oversight is necessary to ensure that OpenAI and other companies launching generative AI products regularly review their security features and reduce the risk of compromise. New AI models should be required to meet a minimum threshold of security measures before being open-sourced.