Artificial intelligence has revolutionized the way we create media, making it easier than ever to manipulate images and videos. As a result, deepfakes — falsified videos or images that depict people saying or doing things they never did — have become increasingly prevalent. These manipulated media can serve a variety of nefarious purposes, from scamming consumers to tarnishing the reputations of public figures such as politicians.
One of the most common types of deepfakes superimposes a celebrity's face onto someone else's body in explicit videos and images. This type of deepfake is particularly harmful because it is created and shared without the subject's consent and can severely damage their reputation. Other deepfakes are used to spread fake news and misinformation about a person or organization.
Governments around the world are taking steps to combat this growing threat. In the United States, the Federal Communications Commission (FCC) recently outlawed the use of AI-generated voices in robocalls. This decision was prompted by an incident in which an audio deepfake of President Joe Biden was used to discourage New Hampshire residents from voting in the state's presidential primary. While some states have enacted laws specifically targeting deepfake pornography, there is currently no federal legislation in the US that directly addresses deepfakes. This patchwork of state laws makes it difficult for victims of deepfake attacks to hold perpetrators accountable.
Efforts to combat this emerging threat are also gaining traction internationally. The European Union's proposed AI Act would require that deepfakes be clearly labeled as AI-generated content, a sign that regulators abroad are treating the issue with comparable seriousness.