Handling the next frontier of false news and deepfakes

February 24, 2022 | 15:00
In today’s technological era, computers are becoming increasingly adept at simulating reality. Among other innovations, deepfake technology, the 21st century’s groundbreaking successor to photoshopping, has lately been making front-page news.
Manh Hung Tran - Head of IP and Technology Practice BMVN

To put it simply, “deepfake” is a portmanteau of “deep learning” and “fake”, referring to the use of deep learning AI to create synthetic media (images, audio, and video) depicting events that never occurred. Common applications include swapping one’s face with that of a movie character or synthetically generating audio content.

Among other concerns, deepfake technology has pushed the fake news debate into a new wave of misinformation, one involving not only misleading or blatantly false news reports but also fabricated audio and visual content. Countries around the world are fighting the misrepresentation of speech and behaviour that this technology can cheaply produce. China’s Cyberspace Administration, complementing its toolbox for controlling cutting-edge technologies, recently released a draft rule on Deep Synthesis Internet Information Services to regulate technologies that use generative sequencing algorithms to create text, images, audio, video, virtual scenes, or other information, as represented by deep learning and VR.

Under the draft, deepfake providers are required to verify the identity of each user and establish a database of characteristics used to identify illegal and harmful deep synthesis information. Where specific deep synthesis services (such as voice imitation and face-swapping) are provided, service providers must clearly alert the public to the synthetic nature of the content.

In a recent legislative move, the EU unveiled a proposal for a regulation laying down harmonised rules on AI and amending certain legislative acts. The proposal requires users who employ AI to create deepfakes to disclose that the content has been artificially generated or manipulated. Exceptions apply where the use is authorised by law to detect, prevent, investigate, and prosecute criminal offences, or where it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, subject to appropriate safeguards for the rights and freedoms of third parties.

Manipulated content falsely portraying House Speaker Nancy Pelosi as ill or inebriated brought the malicious use of video-manipulation technologies, including deepfakes, into wider discussion in the United States. In response, states have stepped up their attention by enacting deepfake regulations that provide victims with avenues for recourse. Some laws, such as California’s, even made it illegal to create or distribute videos, images, or audio of politicians doctored to resemble real footage ahead of the 2020 election.

Unlike the specific prescriptions that China, the EU, and the US have tailored to the provision and use of deepfake technologies, Vietnam’s prevailing regulations approach the matter in a more generic and inclusive manner. In particular, various legal instruments generally prohibit the use of cyberspace to spread false information. Since deepfake videos inherently depict unreal events, spreading such videos without any indication of their synthetic nature is theoretically equivalent to spreading false information and is therefore potentially subject to those laws, as well as to pecuniary sanctions.

In addition to misinformation, deepfakes also threaten the right of publicity, which safeguards the recognisable aspects of a person’s persona from unauthorised exploitation. As elaborated, deepfake technology can produce believable video or audio of almost anyone doing just about anything, undermining the very concept of human agency.

Under Vietnam’s Civil Code 2015, the right of publicity is not recognised by that name but is instead protected as part of the personality right. Accordingly, the use of a person’s image requires that person’s consent; when such use serves commercial purposes, that person is entitled to remuneration unless otherwise agreed. For a deceased person, the use of his or her image must be approved by a spouse or adult child or, in the absence of these individuals, by a parent.

Exceptions to the consent requirement apply when a personal image is used to serve the public interest, or when the image was captured during public activities, provided that such use does not impair the honour, dignity, or reputation of the image’s subject.

Unlike Vietnam’s protection of the publicity right in its Civil Code, such a right in the US is largely state law-based, and its recognition varies from state to state: it may be guaranteed as part of the right to privacy, through the law of unfair competition, or under the intellectual property regime, which one court described as “an intellectual property right of recent origin which has been defined as the inherent right of every human being to control the commercial use of his or her identity.”

Having said that, the right of publicity faces constraints under Section 230 of the Communications Decency Act, which immunises platforms from most liability.

By Manh Hung Tran
