Social media has recently been flooded with AI-generated videos featuring some of Nigeria's influential figures promoting dubious health products and delivering fake endorsements, raising a fresh round of questions about the ethics and deception made possible by increasingly powerful AI platforms.
Some of these AI-generated videos are exceptionally believable and can be quite difficult to detect, given how convincingly the technology now clones voices and likenesses.
This is a growing concern for companies, where hackers and scammers use such videos to fool staff into handing over confidential information and security credentials such as passwords. For consumers, AI deep-fake videos are the new digital illusionists, using the faces and voices of respected personalities to hawk “miracle cures”.
The Nobel laureate Wole Soyinka was recently depicted in a digitally altered video endorsing a hypertension drug he never actually discussed, and the image of cancer researcher Samuel Achilefu was manipulated to promote a fake hypertension cure.
Tomorrow, who knows? Musicians, celebrities and respected public figures can simply appear in a video promoting a product they never endorsed, rendered completely believable by a voice and likeness that sound and look just like them.
The exploitation of public trust and the ever-increasing risks of digital fraud and health-related harm are becoming a worldwide headache. The implications are massive: phony healthcare products raise life-threatening concerns, and such scams also have the potential to destroy trust in public figures.
Digital deception is evolving, and the case for globally adopted AI standards and regulations is stronger than for most digital developments of the past. In the wrong hands, AI holds the potential to be a weapon that could deceive even the smartest financiers, destroy individuals and risk the health of millions.
Four Methods of Defeating Deep-fakes
Technology
Multiple technology-based detection systems can be utilised to identify deep-fake images and videos; they apply machine learning, neural networks and forensic analysis to scan digital content for inconsistencies. Forensic methods that examine signs of facial manipulation can help verify the authenticity of a piece of content, as sketched below. However, creating and maintaining automated detection tools that perform inline, real-time analysis remains a challenge.
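To make the idea concrete, the Python sketch below illustrates one crude forensic heuristic: comparing the high-frequency detail of a detected face against the rest of the frame, since a synthesised face often carries different noise and sharpness characteristics than its surroundings. The filename is hypothetical and the heuristic purely illustrative; production detectors rely on trained neural networks rather than a hand-built statistic like this.

```python
# Illustrative forensic heuristic for deep-fake screening, assuming
# OpenCV and NumPy are installed (pip install opencv-python numpy).
# "suspect.mp4" is a hypothetical input file.
import cv2
import numpy as np

def sharpness(gray_region):
    # Variance of the Laplacian: a simple proxy for high-frequency
    # detail, which often differs between a synthesised face and the
    # untouched background around it.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def scan_video(path, sample_every=15):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(path)
    ratios, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % sample_every == 0:  # sample frames rather than scan all
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                face = gray[y:y + h, x:x + w]
                # Compare the face's detail level against the whole frame.
                ratios.append(sharpness(face) / (sharpness(gray) + 1e-9))
        frame_no += 1
    cap.release()
    return ratios

if __name__ == "__main__":
    ratios = scan_video("suspect.mp4")  # hypothetical file name
    if ratios:
        print(f"face/frame sharpness ratio: median={np.median(ratios):.2f}, "
              f"spread={np.std(ratios):.2f}")
        # A ratio that swings wildly between frames is a red flag worth
        # a closer look, not proof of manipulation.
```

Even this toy example shows why real-time detection is hard: every frame must be decoded, searched for faces and analysed, and a determined forger can tune their output to pass any single hand-picked statistic.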
Policy Efforts
Executive orders and policy development around AI by various governments are attempting to introduce a level of accountability and trust into the AI value chain by signalling to users whether content is authentic. They require online platforms to detect and label content generated by AI, and developers of Gen-AI to build safeguards that prevent malicious actors from using the technology to create deep-fakes; a sketch of what such a machine-readable label can look like follows below. Work is already underway to achieve international consensus on responsible AI and to demarcate clear red lines.
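As a small illustration, the Python sketch below scans a media file for two real provenance markers: the byte signature of a C2PA content-credentials manifest and the IPTC "trainedAlgorithmicMedia" digital-source-type value. The filename is hypothetical, and a genuine verifier would cryptographically validate the manifest rather than merely spot its presence.

```python
# Minimal sketch of checking a downloaded file for embedded AI-provenance
# labels. This only looks for marker bytes; real verification tools
# parse and cryptographically validate the full C2PA manifest.
MARKERS = {
    b"c2pa": "C2PA provenance manifest present",
    b"trainedAlgorithmicMedia": "IPTC digital source type: AI-generated",
}

def find_provenance_markers(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    return [label for marker, label in MARKERS.items() if marker in data]

if __name__ == "__main__":
    hits = find_provenance_markers("downloaded_clip.mp4")  # hypothetical file
    if hits:
        print("Declared provenance:", "; ".join(hits))
    else:
        # Absence of a label proves nothing: malicious actors simply
        # strip metadata, which is why regulation targets platforms too.
        print("No provenance label found.")
```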
Public awareness
Public awareness and media literacy will be critical in ensuring better safety and countermeasures against AI-empowered manipulation attacks. Starting from early education, individuals should be equipped with the skills to distinguish real from fabricated content, to understand how deep-fakes are distributed, and to recognise the psychological and social-engineering tactics used by malicious actors.
Zero-trust Mind-set
In cybersecurity, the zero-trust approach means trusting nothing by default and verifying everything. Applied to people consuming information online, it calls for a healthy dose of skepticism and constant verification. This mind-set aligns with mindfulness practices that encourage individuals to pause before reacting to emotionally triggering content and to engage with digital material intentionally and thoughtfully.