
EU Commission Probes Big Tech on AI’s Electoral Integrity Risks



The European Commission has formally requested information from several major platforms, including Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, about how they manage risks from generative artificial intelligence (AI) that could mislead voters. The Commission is particularly concerned about AI “hallucinations,” the viral spread of deepfakes, and the automated manipulation of services in ways that could sway voter perception. The requests are made under the Digital Services Act (DSA), the EU’s updated rulebook for e-commerce and online governance, which designates these services as very large online platforms (VLOPs) or, in the case of Bing and Google Search, very large online search engines (VLOSEs). Under those designations, they are required to assess and mitigate systemic risks and comply with the DSA’s other provisions.

The Commission is asking these platforms for further detail on their measures to address the risks associated with generative AI. This includes their risk assessments and mitigation measures covering generative AI’s impact on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, and mental well-being. The EU has made election security a primary focus of DSA enforcement against Big Tech: it has been consulting on election security measures and is drawing up formal guidance in the area, and the responses to these requests are intended to feed into that guidance.

The platforms have until April 3 to provide the requested information on election protection, which the Commission has categorized as “urgent.” The EU aims to finalize the election security guidelines by March 27. The Commission has highlighted the decreasing cost of generating synthetic content, which increases the threat of deceptive deepfakes being circulated during elections. Therefore, it is focusing its attention on major platforms that have the capability to widely disseminate political deepfakes.

Under Article 74(2) of the DSA, the Commission can levy fines on platforms that provide inaccurate, incomplete, or misleading information in response to these requests, and a failure to reply by VLOPs and VLOSEs can trigger periodic penalty payments. Notably, the requests for information come despite the tech industry accord on combating deceptive AI use in elections reached at the Munich Security Conference in February, an agreement backed by several of the platforms now receiving RFIs from the Commission.

