OpenAI prepares AI technology to reduce disinformation during the 2024 elections
Jan 16, 2024 / GMT+6
OpenAI, the maker of ChatGPT, has announced plans to reduce disinformation ahead of the many elections taking place this year, which will affect about half of the global population.
Ahead of upcoming elections in countries such as the United States, India, and Britain, OpenAI said it will prohibit the use of its technology, including ChatGPT and the image generator DALL-E 3, for political campaigns.
OpenAI emphasized that it aims to prevent any misuse of its technology that could undermine the democratic process. The company is also evaluating how effective its tools are at personalized persuasion, and until more is known, it will not permit applications built for political campaigning or lobbying.
On Monday, OpenAI announced plans to develop tools that would provide reliable attribution for text generated by ChatGPT and enable users to detect whether an image was created with DALL-E 3. The company said it intends to implement digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) this year. This approach encodes cryptographic details about a piece of content's origin and is aimed at improving the identification and tracking of digital content. C2PA's members include Microsoft, Sony, Adobe, Nikon, and Canon.
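The core idea behind such provenance credentials is that a signed manifest travels with the content, so anyone can later check who generated it and whether it has been altered. The sketch below is purely illustrative and is not the C2PA format: real C2PA manifests use X.509 certificate chains and COSE signatures embedded in the file itself, and the field names and shared-secret HMAC here are invented for this example.

```python
import hashlib
import hmac
import json

# Illustrative only: a real provenance system would use public-key
# signatures and certificate chains, not a shared secret like this.
SIGNING_KEY = b"example-shared-secret"


def attach_provenance(content: bytes, generator: str) -> dict:
    """Bundle content with a signed record of its origin."""
    manifest = {
        "generator": generator,  # e.g. the tool that produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the manifest signature and that the content is unmodified."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # manifest was tampered with or forged
    # Re-hash the content: any pixel- or byte-level edit changes the digest.
    return record["manifest"]["content_sha256"] == hashlib.sha256(content).hexdigest()
```

Verification fails both when the attached manifest is forged and when the content itself is edited after signing, which is what makes such credentials useful for spotting manipulated media.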
The massive popularity of ChatGPT, a text generator, has sparked an AI revolution worldwide. However, concerns have been raised about the potential for these tools to flood the internet with disinformation and manipulate the opinions of voters.
According to experts, concerns about election disinformation began years ago, but the proliferation of powerful AI text and image generators has significantly aggravated the threat. This is particularly concerning when users cannot easily differentiate between genuine and manipulated content.
In response to this issue, OpenAI said that ChatGPT will direct users to authoritative websites when asked procedural questions about US elections, such as where to vote. The company also said that insights gained from this work will inform its approach in other countries and regions.
OpenAI further emphasized that its DALL-E 3 technology incorporates safeguards to prevent users from generating images of real people, including political candidates.
This announcement from OpenAI follows similar actions taken by major US tech companies like Google and Facebook's parent company, Meta, to curb election interference, specifically through the use of AI.