Tech companies are promising to fight election-related deepfakes as pressure from policymakers increases.
Today at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe, and IBM signed an accord signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs, and Stability AI and social media platforms X (formerly Twitter), TikTok, and Snap, joined chipmaker Arm and security firms McAfee and TrendMicro in signing the agreement.
The undersigned said they will use methods to detect and label misleading political deepfakes when they are created and distributed on their platforms, share best practices with one another, and deliver "swift and proportionate responses" when such content begins to spread. The companies said they would pay particular attention to context in responding to deepfakes, aiming to "[safeguard] educational, documentary, artistic, satirical, and political expression" while maintaining transparency with users about their policies on misleading election content.
The agreement is effectively toothless and, some critics might say, amounts to little more than virtue signaling: its measures are voluntary. Still, it reflects wariness in a tech sector that finds itself in the regulatory crosshairs over elections, in a year when 49% of the world's population will head to the polls in national elections.
"There is no way the technology sector can protect elections by itself from this new type of election abuse," Brad Smith, vice chair and president of Microsoft, said in a press release. "As we look to the future, it seems to those of us who work at Microsoft that we will also need new forms of multistakeholder action… It is absolutely clear that the security of elections [will require] that we all work together."
No federal law in the US bans deepfakes, election-related or otherwise. But 10 states across the country have enacted laws criminalizing them, with Minnesota being the first to target deepfakes used in political campaigning.
Elsewhere, federal agencies have taken what enforcement action they can to combat the spread of deepfakes.
This week, the FTC announced that it is seeking to modify an existing rule that bans the impersonation of businesses or government agencies so that it covers all consumers, including politicians. And the FCC has moved to make AI-voiced robocalls illegal by reinterpreting a rule that prohibits artificial and pre-recorded voice message spam.
In the EU, the bloc’s AI Act would require all AI-generated content to be clearly labeled. The EU is also using its Digital Services Act to force the tech industry to curb deepfakes in various forms.
Meanwhile, deepfakes continue to spread. According to data from Clarity, a firm that detects deepfakes, the number of deepfakes being created has increased 900% year over year.
Last month, an AI robocall imitating the voice of US President Joe Biden tried to discourage people from voting in New Hampshire's primary election. And in November, just days before Slovakia's elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election.
In a recent poll from YouGov, 85% of Americans said they were very or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the spread of false and misleading information during the 2024 US election cycle.