Deepfakes
News: Deepfakes recently resurfaced in the news when a deepfake video of actor Rashmika Mandanna went viral on social media, raising concerns about the technology’s potential misuse.
What are Deepfakes? How do they work?
• Deepfakes are a type of synthetic media in which a person’s image or video is swapped with another person’s likeness. The term “deepfake” is a blend of “deep learning” and “fake”.
• Deepfakes usually employ a deep-learning network called a variational auto-encoder, a type of artificial neural network that is normally used for facial recognition. Autoencoders encode and compress input data into a lower-dimensional latent space, and then reconstruct output data from that latent representation.
• To make the results more realistic, deepfakes also use Generative Adversarial Networks (GANs). A GAN trains a “generator” to create new images from the latent representation of the source image, and a “discriminator” to judge how realistic the generated images are. If the generator’s image does not pass the discriminator’s test, the generator is pushed to produce new images until one “fools” the discriminator (a minimal code sketch of this autoencoder and GAN setup follows this list).
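Illustration: the encoder-decoder and generator-discriminator ideas above can be sketched in a few lines of Python (PyTorch). This is only a toy illustration under assumed layer sizes and names, not a real face-swap pipeline.

# Toy sketch of the deepfake building blocks: an autoencoder that compresses
# an input into a latent vector and reconstructs it, plus a GAN-style
# generator/discriminator pair. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

LATENT = 64        # size of the compressed latent space (assumed)
IMG = 32 * 32      # a toy "image" flattened to a vector (assumed)

# Autoencoder: encode -> lower-dimensional latent space -> reconstruct
encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Sigmoid())

# GAN: the generator maps a latent code to an image; the discriminator
# outputs the probability that an image is real rather than generated.
generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(16, IMG)      # stand-in for a batch of real images

# Autoencoder step: reconstruct the input from its compressed latent code
reconstruction = decoder(encoder(real_images))
recon_loss = nn.functional.mse_loss(reconstruction, real_images)
opt_ae.zero_grad()
recon_loss.backward()
opt_ae.step()

# Discriminator step: learn to tell real images from generated ones
fake_images = generator(torch.randn(16, LATENT))
d_loss = bce(discriminator(real_images), torch.ones(16, 1)) \
       + bce(discriminator(fake_images.detach()), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to "fool" the discriminator into scoring fakes as real
g_loss = bce(discriminator(fake_images), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

In practice, face-swap tools typically train two decoders that share one encoder (one per face), and the adversarial loss is what makes the swapped output look photorealistic.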
What are the challenges being posed by Deepfakes?
• Misinformation and Disinformation: Deepfakes can be used to create false narratives that appear to originate from trusted sources. They can spread disinformation to manipulate public opinion towards a desired effect, such as influencing a particular election outcome.
• Security Threats: Deepfakes present challenges to security and democracy, including heightened potential for fraud, propaganda and disinformation, military deception, and the erosion of trust in institutions and fair election processes.
• Legal and Regulatory Challenges: Governments are grappling with how to regulate deepfakes. For instance, the Indian government has issued an advisory to social media intermediaries to identify misinformation and deepfakes, and remove any such content when reported within 36 hours.
• Targeted Attacks on Women: Deepfakes are being used as a weapon to attack the dignity of women. According to a report by the AI company Deeptrace, over 90% of deepfake videos online are pornographic in nature.
• Means of ‘Hybrid Warfare’ or ‘Grey-Zone Tactics’: For example, deepfakes of injured Indian soldiers were used by the Chinese army during the Galwan clash.
• Non-state actors often use such technology to stoke anti-state sentiment among people, for example through fake videos purporting to show armed forces committing crimes in conflict areas.
How is Deepfake regulated in major countries?
• United States: The US has introduced the bipartisan Deepfake Task Force Act to counter deepfake technology. The Deepfakes Accountability Act, introduced in 2019, would require deepfakes to be watermarked for the purpose of identification.
• China: China has issued guidelines requiring service providers and users to ensure that any content doctored using deepfake technology is explicitly labelled and can be traced back to its source.
• India: In India, there is no explicit law banning deepfakes. However, Sections 67 and 67A of the Information Technology Act, 2000 punish the publication of sexually explicit material in electronic form, and Section 500 of the Indian Penal Code, 1860 punishes defamation; these provisions are insufficient to tackle the various forms in which deepfakes exist. The Personal Data Protection Bill, 2019 provides for the protection of the personal data of individuals, which includes data relating to a natural person who is directly or indirectly identifiable.
Way Forward
• Technological Solutions: Invest in technology to detect and counter deepfakes, including more sophisticated detection algorithms and digital watermarking techniques (see the watermarking sketch after this list).
• Assess Authenticity: Learn how to assess the authenticity of media content. For example, if a call seems suspicious, end it and call back the number to verify the person’s authenticity.
• Awareness and Education: Individuals should be aware of the existence of deepfakes and how they work. They should be educated on how to spot deepfakes and verify the authenticity of media content.
• The upcoming Digital India Act must include penal provisions for malicious deepfakes.
• Using blockchain technology for media creation would allow individuals to trace the origin and modification history of media, which is likely to discourage malicious deepfakes (a minimal hash-chain sketch follows below).
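Illustration for the ‘Technological Solutions’ point above: a minimal watermarking sketch in Python (NumPy) that hides a short identifier in the least-significant bits of an image and reads it back. This is an assumed, simplified scheme for illustration only; real watermarking systems use robust, cryptographically signed marks that survive compression.

# Minimal LSB (least-significant-bit) watermark sketch: embed a short ID tag
# in the lowest bit of each pixel and verify it later. Illustrative only.
import numpy as np

def embed_watermark(image: np.ndarray, tag: str) -> np.ndarray:
    """Write the bits of `tag` into the least-significant bits of the pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = image.flatten()                                  # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits     # replace the lowest bit
    return flat.reshape(image.shape)

def read_watermark(image: np.ndarray, length: int) -> str:
    """Recover `length` characters previously embedded with embed_watermark."""
    bits = image.flatten()[:length * 8] & 1
    return np.packbits(bits).tobytes().decode()

frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # toy image
marked = embed_watermark(frame, "SOURCE-ID")
print(read_watermark(marked, len("SOURCE-ID")))    # prints: SOURCE-ID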
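Similarly, the blockchain point above can be illustrated with a minimal hash chain: each version of a media file is recorded with the hash of the previous record, so any later tampering with the history becomes detectable. The record fields and the in-memory list are assumptions for illustration; a real system would use a distributed, signed ledger.

# Minimal provenance hash chain (not a real blockchain: no signatures,
# no consensus, no distribution). Each record links to the previous hash,
# so rewriting history invalidates every later record.
import hashlib
import json
import time

def record_version(chain: list, media_bytes: bytes, note: str) -> dict:
    """Append a provenance record for the current version of the media."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "timestamp": time.time(),
        "note": note,
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list) -> bool:
    """Recompute every link; return False if any record was altered."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

chain = []
record_version(chain, b"original video bytes", "original upload")
record_version(chain, b"edited video bytes", "colour correction")
print(verify_chain(chain))        # True
chain[0]["note"] = "tampered"     # altering history breaks verification
print(verify_chain(chain))        # False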