Hundreds of people in the artificial intelligence community have signed an open letter calling for stricter regulation of AI-generated impersonations, or deepfakes. Although this is unlikely to lead to actual legislation (despite the House's new task force), it serves as an indication of where experts lean on this controversial issue.
The letter, signed by over 500 people in and adjacent to the AI sector at the time of publication, declares that "Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes."
The signatories call for the outright criminalization of deepfake child sexual abuse material (CSAM, aka child pornography), regardless of whether the figures depicted are real or fictional. They call for criminal penalties in any case where someone creates or spreads harmful deepfakes. And they call on developers to prevent harmful deepfakes from being made using their products in the first place, with fines as a possible consequence if their preventative measures are inadequate.
Among the more prominent signatories of the letter are:
- Jaron Lanier
- Frances Haugen
- Stuart Russell
- Andrew Yang
- Marietje Schaake
- Steven Pinker
- Gary Marcus
- Oren Etzioni
- Genevieve Smith
- Yoshua Bengio
- Dan Hendrycks
- Tim Wu
Beyond those, the signatories include hundreds of academics from around the world and across many disciplines. If you're curious, one person from OpenAI signed on, a few people from Google DeepMind, and at press time no one from Anthropic, Amazon, Apple, or Microsoft (except Lanier, whose position there is nonstandard). Interestingly, the signatories are ordered by "notability" in the letter.
This is far from the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU's willingness to deliberate and follow through that prompted these researchers, creators, and officials to speak out.
Or maybe it is the slow progress of the Kids Online Safety Act toward passage – and its lack of protections against this type of abuse.
Or maybe it's the threat (as we've already seen) of AI-generated scam calls that could sway elections or defraud innocent people of their money.
Or maybe it's the newly announced House task force, which has no particular agenda beyond perhaps writing a report on what some AI-based threats might be and how they might be legislatively restricted.
As you can see, there's no shortage of reasons for those in the AI community to be out here waving their arms and saying, "Maybe we should, you know, do something?"
Whether anyone will pay attention to this letter is anyone's guess – hardly anyone paid attention to the infamous letter calling on everyone to "pause" AI development, but this one is a bit more practical. If legislators decide to take up the issue – an unlikely event, given that this is an election year with a sharply divided Congress – they will at least have this list to take the temperature of the worldwide AI academic and development community.