Deepfake Laws in the US

Second, Section 5709 also requires the DNI to notify the congressional intelligence committees “whenever” the DNI determines there is credible information that a foreign entity possesses or is using deepfakes that “target U.S. elections or domestic political processes.” The DNI must also inform Congress if the disinformation campaign can be attributed to a foreign government, institution, or individual. In addition, the directors of the NSF and NIST must submit to Congress, no later than one year after the law’s passage, a report on the feasibility of research partnerships with the private sector, along with any policy recommendations that could improve communication and coordination among the private sector, the NSF, and relevant federal agencies in implementing deepfake detection approaches.

States have acted as well. In June 2019, Texas enacted SB 751, which criminalizes the creation or distribution of a deepfake video within 30 days of an election with the intent to injure a candidate or influence the outcome of the election. Many state laws focus on pornographic deepfakes, which can cause emotional and psychological harm, violence, harassment, and extortion. Deepfake porn targeting a journalist in 2018, for example, led to her hospitalization with heart palpitations.

The technology’s origins are recent. In 2017, a software developer using the pseudonym “Deepfakes” published his creations on Reddit, swapping the faces of Hollywood celebrities onto the bodies of adult-film performers. After the videos spread quickly, deepfakes became a new trend.

Over the past four years, many applications for creating deepfake content have emerged. Because these apps are easily accessible and free, many people create deepfakes featuring themselves or celebrities such as politicians, actors, artists, and singers; famous examples involve Obama, Trump, Zuckerberg, and Dalí. The results are often convincing enough that it is difficult to tell which footage is real. It can be entertaining to hear your favorite artist address you in a video, or to see a favorite actor in a movie in which they never appeared. But the technology has a darker side. “A side effect of using deepfakes for disinformation is the decrease in citizens’ trust in authority and the news media,” one report notes, adding that “perhaps one of the most damaging aspects of deepfakes is not disinformation per se, but the principle that any information could be falsified.” The mere existence of deepfake technology has already contributed to political instability. In 2018, the president of the small African country of Gabon, Ali Bongo, who had not been seen in public for months, gave a video address to quell rumors that he was sick or dead.

His political opponents dismissed the video as a deepfake, which helped spark an attempted coup, the country’s first in more than 50 years. Britain’s Channel 4 demonstrated how credible deepfakes can be by airing a video at Christmas manipulated to give the impression that Queen Elizabeth was addressing the nation.

Deepfake content initially focused on celebrities, but ordinary people can now create their own. Pornography quickly became the most widespread use case: as of September 2019, 96% of all deepfake videos online were pornographic, according to a report by the startup Deeptrace. Meanwhile, recent congressional legislation pushing for more research into deepfake technologies is a welcome change, Patrini says. With the spread of deepfake content, issues such as public manipulation, attacks on privacy rights, violations of intellectual property rights, and the protection of personal data are becoming more common, and lawmakers and big tech companies are looking for effective responses. According to State Net’s legislative tracking system, lawmakers in at least six states have introduced bills during their current sessions addressing “deepfake” videos and other digital content modified with artificial intelligence. One of those states, Georgia, adopted a deepfake measure (SB 78), joining at least four other states that did the same in 2019 or 2020.

The NDAA caps a busy year for legislation in this emerging area. In 2019, two states enacted laws criminalizing certain deepfakes. Virginia was the first state in the country to impose criminal penalties on the distribution of non-consensual deepfake pornography. The law, which took effect on July 1, 2019, made the distribution of explicit, non-consensual “falsely created” images and videos a Class 1 misdemeanor punishable by up to one year in jail and a $2,500 fine.
