
Nora Fatehi falls prey to deep fake video, takes a stand on Instagram


In a recent development that continues to underscore the unsettling trend of digitally manipulated media in the entertainment industry, Nora Fatehi, a prominent figure in Bollywood, has become the latest celebrity to confront the realities of deep fake technology. Joining the ranks of Rashmika Mandanna, Katrina Kaif, Kajol, Alia Bhatt, and Priyanka Chopra, Fatehi’s image was co-opted without her consent, leading to widespread circulation of a fraudulent video online.

The deep fake video in question features the actress appearing to endorse a fashion brand, with the fraudulent content convincing enough to set social media abuzz. With an alarming level of sophistication, the video reproduces not only Fatehi’s likeness and voice but also her distinctive mannerisms and expressions, posing a serious challenge in separating fact from doctored fiction.

Reacting promptly, Nora Fatehi took to her Instagram stories, where she shared a snapshot from the video, denounced the misuse of her image, and warned her millions of followers about the deceptive nature of the content.

The emergence of deep fake technology has become a major concern in India, with incidents of such fabrications causing not only distress to public figures but also raising questions about the veracity of online content. The case of Rashmika Mandanna, another victim of deep fake exploitation, has highlighted the severity of the situation.

In a decisive move against this growing menace, the Delhi Police arrested an individual believed to be responsible for the creation and dissemination of a deep fake video involving Mandanna. The arrest occurred in southern India, as Bangalore Mirror reports, with the suspect being transported to Delhi for interrogation.

The arrest followed the registration of a First Information Report (FIR) on November 10th, under the stringent sections of the Indian Penal Code — namely, 465 (punishment for forgery) and 469 (forgery for the purpose of harming reputation) — and the Information Technology Act’s sections 66C and 66E. The IFSO Unit of Delhi Police’s Special Cell, which undertook the case, also engaged with Meta to obtain details that would lead to the identification of the perpetrator.

The offending video in the Mandanna case depicted the face of a social media influencer seamlessly replaced with that of the actress, creating a stir across various digital platforms. The forgery was so convincing that it warranted a clarification from a journalist, who took to social media to dispel the misconception surrounding the video's legitimacy.

The journalist’s post not only served as a correction but also sounded the alarm for the need for a robust legal and regulatory framework to address the implications of deep fake technology. “You might have seen this viral video of actress Rashmika Mandanna on Instagram. But wait, this is a deepfake video of Zara Patel,” the journalist highlighted, exposing the deceptive nature of the content.

As incidents of deep fake exploitation rise, the call for appropriate legislation grows louder. The ease with which public personas can be imitated and content forged poses a threat to personal reputations and, more broadly, to the trustworthiness of digital media.

Nora Fatehi's experience echoes a growing sentiment among public figures and lawmakers that, even with swift and determined responses from law enforcement, reactive measures may not suffice. The entertainment industry, regulators, and technology platforms may need to collaborate closely to establish standards and safeguards that can prevent the propagation of deep fake content, thus preserving the integrity of individual identities and the digital landscape at large.