In response to a recent viral deepfake video featuring popular actress Rashmika Mandanna, the Ministry of Electronics and Information Technology (MeitY) has issued a directive to all major social media platforms, including Facebook, Instagram, and YouTube, ordering the prompt removal of deepfake content within 36 hours. This advisory comes as the government seeks to combat the proliferation of deceptive content generated by artificial intelligence.
The controversial video depicted actress Rashmika Mandanna’s face superimposed onto the body of British-Indian social media personality Zara Patel, who appeared in the original clip entering a lift in a revealing onesie. The video’s rapid spread triggered immediate action from the government.
The advisory underlines the legal obligations of online platforms as intermediaries and references Section 66D of the Information Technology Act, 2000, and Rule 3(1)(b) of the IT Rules, 2021. These provisions mandate the removal of such content within specified time frames.
Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, emphasised the government’s commitment to addressing the challenges posed by misinformation and deepfakes.
He stated, “Deepfakes are a major violation and harm women in particular. Our Government takes the responsibility of safety & trust of all nagriks (sic) very seriously, and more so about our children and women who are targeted by such content.” The minister encouraged individuals impacted by deepfakes to file a First Information Report (FIR) at their nearest police station.
Soumen Datta, Associate Partner, Digital Transformation, BDO India, highlighted the far-reaching consequences of deepfake technology, emphasising its potential to blur the lines between truth and deception. “Deep-fakes are going to create deep troubles, challenging the age-old adage, ‘Seeing is believing’.
“The consequences of such manipulative content are profound, often causing social and psychological distress to its victims, along with significant personal and professional ramifications. The issue becomes even more concerning when applied to the average individual, as the damage is typically done by the time the deception is discovered,” he added.
In 2019, a staggering 15,000 deepfake videos surfaced online, with 99% of them featuring morphed faces of celebrities.
Datta stressed the need for a multifaceted approach to mitigate the negative impacts of deepfake technology, including technological advancements, policy interventions, public awareness campaigns, and ethical considerations. He noted that some countries have already enacted laws making deepfakes illegal, and called for specific legislation in India to address this growing concern.
Expressing concern over the misuse of deepfake technology, Gaurav Sahay, Partner, SNG & Partners, Advocates & Solicitors, acknowledged the challenges posed by deepfake content and discussed the legal provisions mandating preventive measures.
“The legal provision calls for a preventive measure; had it been mandating a proactive measure to be at the spot before a crime occurs, we would have been discussing a very different set of laws and compliances,” he said.
He recognised that ongoing compliance can be cumbersome but said it is vital for preventing online crimes, protecting users, maintaining public trust, curbing misinformation, safeguarding reputations, and ensuring responsible and ethical use of the technology.
“The deployment of deep AI and algorithms will prove a further handy tool to reduce slips and human error. The social media platforms, rather than counting the difficulties of countering, should rather look at preventing them, for which they do already have the resources and capabilities,” he said.