OpenAI + Microsoft: Combating State-Linked Cyberattacks

OpenAI, the company behind ChatGPT, and its major investor, Microsoft, have joined forces to fend off five cyberattacks linked to various states. In a recent report, Microsoft revealed that it had been tracking hacking groups tied to Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea. These groups sought to refine their hacking strategies using large language models (LLMs), AI programs that draw on vast amounts of text data to generate human-like responses. OpenAI identified two Chinese groups, Charcoal Typhoon and Salmon Typhoon, as well as Iran’s Crimson Sandstorm, North Korea’s Emerald Sleet, and Russia’s Forest Blizzard, as the sources of the attacks. These groups attempted to use GPT-4, an advanced language model, for purposes including researching companies and cybersecurity tools, generating scripts, and running phishing campaigns. OpenAI deactivated the accounts as soon as the activity was detected and later announced a ban on state-backed hacking groups using its AI products.

While OpenAI successfully blocked these cyberattacks, it acknowledges that eliminating every instance of misuse remains an ongoing challenge. In response to the rise of AI-generated deepfakes and scams, policymakers have increased their scrutiny of AI developers. OpenAI responded by launching a $1 million cybersecurity grant program in June 2023, aimed at improving and assessing the impact of AI-driven cybersecurity technologies.

Despite OpenAI’s efforts to implement safeguards that prevent ChatGPT from generating harmful or inappropriate responses, hackers have found ways to bypass these measures and manipulate the chatbot into producing such content. To address these concerns and promote the safe development of AI, more than 200 entities, including OpenAI, Microsoft, Anthropic, and Google, joined the Biden Administration in forming the U.S. AI Safety Institute Consortium (AISIC). The initiative aims to combat AI-generated deepfakes, address cybersecurity issues, and ensure the responsible and secure advancement of artificial intelligence. It builds on the U.S. AI Safety Institute (USAISI), which was created following President Joe Biden’s executive order on AI safety in late October 2023.
