09/01/2026
“Nothing Is What It Seems” – Deepfakes from a Lawyer’s Perspective
The author of this article, Menta Boros dr., is a junior associate at our firm, and this is her first publication. The article explores the legal issues surrounding deepfake technology and reflects the fresh perspective and engagement of our junior colleagues with contemporary legal challenges.
Imagine this: one day, during a video call, your boss asks you to make an urgent bank transfer, only to discover later that your boss never actually called. The voice sounds authentic, the face is familiar, the setting looks real. And yet, none of it is true.
In today’s digital world, it’s no longer just about what we see—it’s about whether we can trust it. The explosive growth of social media, artificial intelligence, and video technologies has created a new reality: one in which anyone can “speak” in another person’s name, and where trust is increasingly difficult to sustain.
The rapid advancement of artificial intelligence (AI) has not only made our lives more convenient, but has also given rise to new ethical and legal dilemmas. One of the most striking — and most troubling — among them is the deepfake: a technology capable of turning reality itself into an instrument of deception.
The volume of deepfake content has increased at an astonishing rate in recent years. While it is difficult to determine an exact figure, the exponential nature of this growth is unmistakable: whereas approximately 500,000 such images and videos were circulating online in 2023, by 2025 this number was expected to reach, and may already have reached, 8 million. And this is only the beginning.[1]
The real danger of deepfakes does not lie in the technology itself, but in a fundamental human characteristic: our inherent trust in our own senses. For this reason, deepfakes do not need to be particularly sophisticated or perfectly lifelike in order to mislead, create uncertainty, or disseminate targeted disinformation.[2]
Deepfake technology affects multiple areas of law. Beyond data protection and the right to privacy, it has implications for freedom of speech and expression, as well as for copyright law. Its rapid global spread has made it necessary to rethink existing regulatory frameworks, and different jurisdictions have responded in different ways. At present, however, it cannot be said with any certainty that the light at the end of the tunnel is in sight.
In the United States, following the gradual accumulation of federal legislative proposals[3], the Take It Down Act was enacted in May 2025 to address the risks posed by deepfakes. The Act criminalizes the non-consensual dissemination of intimate deepfake content on online platforms. Comprehensive, uniform regulation across the individual states, however, has not yet been established.
In 2025, China introduced a broad regulatory framework with a strong emphasis on the protection of national interests[4]. This regime requires the mandatory labeling — primarily through watermarking — of AI-generated synthetic content upon its creation and defines specific obligations for online platforms with respect to AI detection mechanisms. Under this framework, deepfake content may not be used for unlawful or harmful purposes, including the dissemination of false information capable of undermining China’s economy or national security objectives. Perhaps most notably, the Chinese regulation explicitly assigns responsibility not only to content providers but also to developers, who are required to register their deepfake-related technologies with the relevant authorities.
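To illustrate the labeling idea in practice, here is a minimal Python sketch, using the Pillow imaging library, of the two layers such rules typically envisage: a visible watermark and a machine-readable metadata tag. It is a toy example only; it does not implement China's actual technical standard, and the file names, metadata keys, and generator identifier are invented for illustration.

```python
# Toy illustration of dual labeling for AI-generated images:
# a visible watermark plus a machine-readable metadata tag.
# Requires Pillow (pip install Pillow). All names are illustrative.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Stand-in for a synthetic image produced by a generative model.
img = Image.new("RGB", (512, 512), color="gray")

# 1. Explicit (visible) label: overlay a human-readable notice.
draw = ImageDraw.Draw(img)
draw.text((10, 10), "AI-generated content", fill="white")

# 2. Implicit (machine-readable) label: embed a PNG text chunk
#    that a platform's detection tooling could read on upload.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical identifier
img.save("labeled_output.png", pnginfo=meta)

# A platform-side check: flag files that carry (or lack) the label.
reloaded = Image.open("labeled_output.png")
print(reloaded.text.get("ai_generated"))  # -> "true"
```

Note that metadata of this kind is trivially stripped by re-encoding the file, which is precisely why the Chinese rules also impose detection obligations on platforms and why, as argued below, the removal or falsification of labels needs to be sanctionable in its own right.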
The European Union’s binding regulatory instruments aimed at combating deepfakes include the following:
The first is the AI Act[5], adopted in 2024 to establish harmonised rules on artificial intelligence, which has been met with considerable criticism since its adoption. Critics have argued that it weakens the prospects for effective and efficient enforcement by failing to classify deepfake technology as a high-risk AI system. High-risk systems are exhaustively listed in Annex III of the AI Act and include, for example, AI systems used in biometrics, critical infrastructure, and education and vocational training; deepfake technology is notably absent from this list.
The second is the Digital Services Act[6] (DSA), adopted earlier, in 2022, which establishes a framework for the functioning of the single market for digital services. Although the DSA does not explicitly refer to deepfakes, its regulatory logic and enforcement mechanisms make it possible to hold platforms accountable where deepfake content is disseminated for unlawful purposes, such as the spread of false information, deception, or violations of personality rights. The trusted flaggers expressly recognised under the DSA, including national authorities and civil society organisations, can play a key role in ensuring that such difficult-to-detect synthetic content is effectively reported and taken down.
The two regulations therefore operate in a complementary manner. While the AI Act governs the development and use of AI systems, the mere publication of AI-generated content does not automatically render such content unlawful. The DSA, by contrast, provides the tools to regulate the dissemination and moderation of content, thereby supplementing the AI Act and addressing the downstream risks associated with the circulation of deepfake material.[7]
By now, several Member States of the European Union, including Italy[8], France[9], and Denmark[10], have initiated independent legislative processes specifically addressing deepfakes. In Hungary, no such regulatory framework has yet been developed. The approaches taken by these countries, however, point toward the emergence of a regulatory model that other Member States are likely to follow.
Overall, the development of deepfake technology is inevitably reshaping the concepts of reality and authenticity in the digital sphere. In my view, alongside the continuous advancement of technical solutions such as AI detection systems, it is essential to establish a legal framework that enshrines mandatory standards for the transparency of AI-generated content. Drawing on regulatory approaches similar to those adopted in China, such a framework could:
– require all content-creation and content-distribution platforms to apply a uniform, state-defined AI labeling system;
– impose clear responsibility on service providers for the detection, labeling, and logging of AI-generated content;
– classify the removal, falsification, or concealment of such labels as a legal violation (a sketch of a tamper-evident label follows below);
– obligate users to declare when they publish AI-generated content; and
– grant authorities audit, supervisory, and enforcement powers to ensure compliance with these rules.
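To make the tamper-evidence element concrete, the following minimal Python sketch shows one way a signed label could travel with a piece of content, so that stripping or falsifying the label becomes detectable. It is an illustration only, assuming a signing key issued by a supervisory registry; the schema, the key, and all names (make_label, verify_label, generator_id) are hypothetical and do not reflect any enacted standard.

```python
# Minimal sketch of a tamper-evident AI-content label, assuming a
# state-defined labeling scheme. Schema and key handling are
# hypothetical; a real system would use asymmetric signatures and
# proper key management.
import hashlib
import hmac
import json

REGISTRY_KEY = b"demo-key-issued-by-authority"  # placeholder secret

def make_label(content: bytes, generator_id: str) -> dict:
    """Bind a label to the exact bytes of the published content."""
    digest = hashlib.sha256(content).hexdigest()
    payload = {"ai_generated": True,
               "generator_id": generator_id,
               "content_sha256": digest}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(REGISTRY_KEY, body,
                                    hashlib.sha256).hexdigest()
    return payload

def verify_label(content: bytes, label: dict) -> bool:
    """Detect removal or falsification: any edit to the content or
    to the label fields invalidates the signature."""
    claimed = dict(label)
    sig = claimed.pop("signature", None)
    if sig is None:
        return False  # label missing or stripped
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and hashlib.sha256(content).hexdigest()
                == claimed["content_sha256"])

video = b"...synthetic video bytes..."
label = make_label(video, "example-model-v1")
print(verify_label(video, label))         # True: label intact
print(verify_label(video + b"!", label))  # False: content altered
```

The design point is that the label is bound cryptographically to the content itself, so that the obligations listed above (labeling, logging, and sanctioning label removal) become auditable rather than a matter of trust.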
A legal structure of this kind would help ensure that technological progress does not erode trust, but instead serves the goal of responsible use and the creation of a reliable digital environment.
Most recent case:
Elon Musk’s X platform and its Grok chatbot have come under intense international scrutiny after the system enabled the creation of non-consensual, sexually explicit deepfake images, including content involving minors. Regulatory authorities have initiated inquiries at multiple levels: the European Commission and national media regulators in Europe, the Ministry of Electronics and Information Technology in India, and the Malaysian Communications and Multimedia Commission. In the United States, calls have been made for the Department of Justice and the Federal Trade Commission to investigate potential criminal and consumer-protection violations. The case underscores the persistent regulatory gap between the rapid advancement of generative AI and deepfake technologies and the existing legal frameworks governing their use. Whether this matter will mark a turning point in enforcement or legislation, or remain another cautionary example, remains — much like the future of deepfakes themselves — an open question.
[1] Mar Negreiro: Children and deepfakes, EPRS (European Parliamentary Research Service) Briefing, July 2025, pp. 1-8.
[2] Nik Hynek, Beata Gavurova, Matus Kubak: Risks and benefits of artificial intelligence deepfakes: Systematic review and comparison of public attitudes in seven European countries, Journal of Innovation & Knowledge, 7 August 2025, pp. 2-15.
[3] United States – Federal Legislative Proposals: Deepfakes Accountability Act (2023), COPIED Act (2024), DEFIANCE Act (2024), No FAKES Act (2024/2025), Preventing Deep Fake Scams Act (2025)
[4] China – Measures for the Identification of Synthetic Content Generated by Artificial Intelligence (2025)
[5] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence
[6] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services
[7] Klára Szalai: Deepfake, azaz mélyhamisítás: Technológia és Jog [Deepfake: Technology and Law], Országgyűlés Hivatala, Közgyűjteményi és Közművelődési Igazgatóság, Képviselői Információs Szolgálat, Infojegyzet 2024/20, pp. 1-4.
[8] Italy – The Italian Artificial Intelligence Act (Law No. 132/2025)
[9] France – SREN Law
[10] Denmark – Bill No. 676 (2025) – Danish Copyright Act amendment
Author:
Menta Boros dr.