OpenAI + Microsoft: Combating State-Linked Cyberattacks
OpenAI, the company behind ChatGPT, and its major investor, Microsoft, have joined forces to fend off five cyberattacks associated with state-affiliated actors. In a recent report, Microsoft revealed that it had been monitoring hacking groups tied to Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea. These groups sought to enhance their hacking strategies using large language models (LLMs), AI-powered programs trained on vast amounts of text data to generate human-like responses.

OpenAI identified two Chinese groups, Charcoal Typhoon and Salmon Typhoon, along with Iran’s Crimson Sandstorm, North Korea’s Emerald Sleet, and Russia’s Forest Blizzard, as the sources of the cyberattacks. These groups attempted to employ GPT-4, an advanced language model, for a range of purposes, including researching companies and cybersecurity tools, generating scripts, and conducting phishing campaigns. OpenAI deactivated the accounts upon detecting the attacks and later announced a ban on state-backed hacking groups using its AI products.
While OpenAI was successful in preventing these cyberattacks, it acknowledges the ongoing challenge of completely eliminating all instances of misuse. In response to the rise of AI-generated deepfakes and scams, policymakers have increased their scrutiny of AI developers. OpenAI took action by launching a $1 million cybersecurity grant program in June 2023, aimed at improving and assessing the impact of AI-driven cybersecurity technologies.
Despite OpenAI’s efforts to implement safeguards and prevent ChatGPT from generating harmful or inappropriate responses, hackers have found ways to bypass these measures and manipulate the chatbot into producing such content. To address these concerns and promote the safe development of AI, more than 200 entities, including OpenAI, Microsoft, Anthropic, and Google, collaborated with the Biden Administration to establish the U.S. AI Safety Institute Consortium (AISIC). This collaborative initiative aims to combat AI-generated deepfakes, address cybersecurity issues, and ensure the responsible and secure advancement of artificial intelligence. It builds upon the U.S. AI Safety Institute (USAISI), which was created following President Joe Biden’s executive order on AI safety in late October 2023.
OpenAI’s commitment to addressing these concerns and promoting safe AI development is commendable, and its collaboration with other organizations amplifies its impact on the AI community. Let’s make AI safer together!