
Why Amitabh Bachchan Is Furious About Rashmika's AI Fake Video

Monday, November 6, 2023

In a shocking turn of events, fans of the renowned actress Rashmika Mandanna were deeply disturbed when an AI-generated deepfake video of her went viral on social media. The clip showed a woman in a form-fitting outfit entering an elevator; the original footage was of Zara Patel, a British-Indian influencer, but her face had been manipulated with deepfake technology to resemble the actress.

Amidst the uproar, the legendary Bollywood actor Amitabh Bachchan expressed strong outrage and demanded stern action against those responsible for the deepfake video, after netizens pointed out that the video was indeed fake. Rashmika Mandanna herself also reacted to the situation, sharing her apprehensions and concerns. In a heartfelt note, she stated, "I feel deeply hurt to have to share and discuss this deepfake video of me circulating online. Something like this is genuinely, extremely scary, not only for me but for anyone who is vulnerable to potential harm due to the misuse of technology today. As a woman and as an actor, I am grateful for my family, friends, and well-wishers who provide me with protection and support. However, if this had happened to me during my school or college days, I truly cannot fathom how I would have coped. It is imperative that we address this issue as a community and with a sense of urgency to prevent more individuals from falling victim to such identity theft."

Now, it is essential to understand why Amitabh Bachchan is so deeply concerned about AI-generated deepfake videos and the fears associated with them:

What Are AI Deepfakes, and What Are Their Consequences?

AI deepfakes represent a form of digital manipulation that employs artificial intelligence (AI) algorithms to craft highly convincing fake content, whether in the form of images, videos, or audio recordings. These deceptive media are meticulously designed to appear as though they were created by or feature real individuals, when, in reality, they are entirely fabricated.

The process of creating deepfakes begins with gathering a large amount of data about the targeted individual, often sourced from social media and other online platforms. AI models are then used to analyze and learn from this data, identifying and mapping attributes specific to the target, such as facial features, expressions, voice patterns, and other distinctive characteristics. The learned likeness is finally superimposed onto another person's face or body in videos or images.
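To make the final superimposition step concrete, here is a minimal, purely illustrative sketch in Python using OpenCV's classical face detector and seamless blending. Real deepfakes rely on trained neural networks (autoencoders or GANs) rather than this cut-and-paste approach, and the file names used here are hypothetical.

```python
# Toy illustration of the "superimpose" step only. Real deepfakes use
# trained neural networks; this sketch just pastes and blends one face
# region onto another with classical OpenCV tools.
import cv2
import numpy as np

# Hypothetical inputs: a frame from the original footage and a photo of
# the person whose likeness is being transplanted onto it.
target_frame = cv2.imread("original_frame.jpg")
source_photo = cv2.imread("source_face.jpg")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def largest_face(image):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

src, dst = largest_face(source_photo), largest_face(target_frame)
if src is not None and dst is not None:
    sx, sy, sw, sh = src
    dx, dy, dw, dh = dst
    # Resize the source face to the target face region and blend it so
    # that lighting and tone roughly match the surrounding frame.
    patch = cv2.resize(source_photo[sy:sy + sh, sx:sx + sw], (dw, dh))
    mask = np.full(patch.shape, 255, dtype=np.uint8)
    center = (dx + dw // 2, dy + dh // 2)
    result = cv2.seamlessClone(patch, target_frame, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("swapped_frame.jpg", result)
```

The sketch also hints at why so much of a target's online imagery is gathered in the first place: the more frames and angles a system has to learn from, the more convincingly the likeness can be mapped onto new footage.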

The emergence of AI deepfakes has raised considerable concerns due to their potential for malicious use, including spreading misinformation, generating fake news, or impersonating individuals for illicit purposes. Consequently, there is a growing interest in developing tools and regulations aimed at detecting and mitigating the possible harms linked to this technology.
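Detection research often looks for statistical traces left behind by the generation process. As one purely illustrative example (not a reliable detector), the sketch below measures how much of an image's energy sits in the high-frequency part of its Fourier spectrum, a signal some studies have used to flag synthetic imagery; the file name and cutoff are hypothetical, and any real decision threshold would have to be learned from labeled data.

```python
# Sketch of one research heuristic for spotting generated images: unusual
# energy in the high-frequency part of the Fourier spectrum. This only
# computes the statistic; it is illustrative, not a working detector.
import numpy as np
from PIL import Image

def high_frequency_energy(path, cutoff=0.75):
    """Fraction of spectral power beyond `cutoff` of the maximum radius."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)

    high = spectrum[radius > cutoff * radius.max()].sum()
    return high / spectrum.sum()

# Hypothetical usage: compare a suspect frame against known-real footage.
print(high_frequency_energy("suspect_frame.jpg"))
```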

Instances of People Falling Victim to AI Deepfakes

In a recent incident in July 2023, a retired government employee from Kerala fell victim to a scam in which AI was used to impersonate a former colleague. The fraudster not only replicated the colleague's voice but also conducted a video call imitating his appearance to gain trust and solicit money. Similarly, an individual in China was ensnared in a deepfake scam, losing a significant sum to an impostor who posed as a friend.

These incidents illustrate the real and pressing dangers associated with AI deepfakes, leading to strong reactions from public figures like Amitabh Bachchan and government authorities. It is evident that this issue warrants serious attention and action to protect individuals from the potential harm caused by this technology.
