Take It Down Act
The Take It Down Act criminalises non-consensual sharing of intimate images, including AI deepfakes, and requires platforms to remove such content within 48 hours. Victims of explicit deepfakes will now be able to take legal action against people who create them.
What Is a Deepfake?
- Deepfakes are synthetic media (videos, audio, or images) generated with deep learning algorithms to produce realistic forgeries. The term combines “deep learning” and “fake”; such media can manipulate a person’s face, voice, and likeness.
- Deep learning: A subset of machine learning that uses multilayered neural networks to simulate the complex decision-making power of the human brain.
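To make “multilayered neural networks” concrete, here is a minimal sketch of a two-layer network’s forward pass in plain Python. The weights and inputs are illustrative assumptions (a real network learns its weights from data), but the structure, stacked layers of weighted sums passed through a non-linearity, is exactly what “deep” refers to.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums of the inputs,
    each passed through a sigmoid activation (a common non-linearity)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid squashes to (0, 1)
    return outputs

# Illustrative weights only -- in practice these are learned during training.
hidden_w = [[0.5, -0.2], [0.3, 0.8]]   # 2 inputs -> 2 hidden units
hidden_b = [0.1, -0.1]
output_w = [[1.0, -1.0]]               # 2 hidden units -> 1 output
output_b = [0.0]

x = [0.7, 0.2]                                 # example input features
hidden = layer(x, hidden_w, hidden_b)          # first ("hidden") layer
score = layer(hidden, output_w, output_b)[0]   # second ("output") layer
print(round(score, 3))
```

Stacking more such layers lets the network represent progressively more complex mappings, which is what enables deepfake generators to model faces and voices.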
Threats Posed by Deepfakes
- Deepfakes can be used to impersonate executives, tricking companies into transferring funds, or to create fake videos of political leaders that spread misinformation. E.g., in Gabon, a deepfake video of the president raised suspicions of a coup.
- The proliferation of deepfakes erodes trust in the media and creates doubt about the authenticity of legitimate video content, thereby weakening public trust.
How to Determine if Something Is a Deepfake?
- Facial Inconsistencies: Deepfakes often struggle with certain facial expressions, lighting, and micro-movements. For instance, the eyes in a deepfake video may not blink naturally.
- Unnatural Movements: They sometimes exhibit awkward movements, e.g., jerky head turns.
- Distortions: They often show blurring, especially during fast movements.
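One of these cues, unnatural blinking, can be quantified. The sketch below uses the eye aspect ratio (EAR) heuristic from facial-landmark research: the ratio of an eye’s vertical to horizontal landmark distances drops sharply when the eye closes, so counting dips below a threshold approximates a blink count. The landmark coordinates and the 0.2 threshold here are illustrative assumptions; a real pipeline would extract the six eye landmarks per frame with a library such as MediaPipe or dlib.

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six (x, y) eye landmarks ordered p1..p6:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    d = math.dist  # Euclidean distance between two points
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

def count_blinks(ear_series, threshold=0.2):
    """Count closed -> open transitions in a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            closed = True            # eye just closed
        elif ear >= threshold and closed:
            blinks += 1              # eye reopened: one full blink
            closed = False
    return blinks

# Hypothetical landmark coordinates for an open and a closed eye.
open_eye   = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

print(round(eye_aspect_ratio(open_eye), 2))    # noticeably larger
print(round(eye_aspect_ratio(closed_eye), 2))  # small when the eye is shut
```

An implausibly low blink count over a stretch of video (humans blink roughly every few seconds) is one signal, among the others listed above, that footage may be synthetic.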