Deepfakes and Bangladesh general election 2024
Certainly, the powerful potential of deepfake technology can be leveraged for political gains
Publish : 25 Sep 2023, 04:44 PM Update : 25 Sep 2023, 04:44 PM
The US Republican Party recently released an AI-generated (artificial intelligence) ad, titled "Beat Biden," depicting a scenario in which China invades Taiwan, illegal immigration runs rampant, financial institutions shut down, and the US city of San Francisco is closed due to escalating crime. As a caveat, a small-print disclaimer in the upper right corner of the video reads "created entirely with AI imagery."
But is it acceptable to paint such a frightening picture of the world? AI-generated videos like this can frighten or worry people who do not view them critically, and they can have a major influence on what people think and talk about.
Given these realities in developed countries, it is important to prepare for the potent threat of AI manipulation in the 2024 Bangladeshi general election. With only a few months left for the political parties to secure the backing of superpowers and regional powers, arguably the most important facet of the upcoming election, the major parties are ready to win voters with their traditional and digital campaign strategies. They seem ready to do it all in this watershed election. But are they ready to combat the potential mal-intelligence (bad or harmful intelligence) of AI, which is increasingly becoming central to every aspect of our lives? One notable capability of modern AI is that it can create deepfakes fairly easily: videos in which a person's face or body has been digitally altered. As a result, you might see a video that appears to show a certain person saying or doing something that has in fact been fabricated by a computer.
Certainly, the powerful potential of deepfake technology can be leveraged for political gains. Consider a hypothetical situation in which a deepfake video portrays a major party leader in Bangladesh advocating violence. Suppose this video almost immediately goes viral on social media. The fallout would be immense and immediate, but who would bear responsibility for the chaos that would likely ensue? If it happened during the election, would the government or the election commission be accountable?
This issue of AI-generated content poses an even more serious threat to people who lack basic media literacy. Oxford Reference defines media literacy as the ability to think critically about various media. According to the Management and Resources Development Initiative, as many as 76% of Bangladeshi people lack news literacy, a critical component of overall media literacy. What would happen to them when deepfakes become omnipresent in political communication here?
The election commission, government, technology companies, educational institutions, and media organizations should take serious account of the potential impact of AI and deepfakes. The bottom line is that it takes a minute to identify a deepfake video and a second to create one. And between seconds and minutes, a serious incident can happen. Hence, resistance to deepfakes, misinformation and disinformation needs to be efficient, effective and credible.
Most importantly, and as a general rule of thumb, we need basic media literacy for all voters, which means training for adults as well. Compared to today's teenagers, adults are more vulnerable because they are less adept at handling media and technology. Be it through public awareness campaigns or workplace training, we need to alert adults to the fact that false news and deepfakes exist, that they should not share everything they see on Facebook, and, above all, that they should not automatically believe everything they encounter. The government, for its part, needs to recognize the threat of misinformation, disinformation and deepfakes and invest in research and development in the field of media literacy. It also needs to understand that media-illiterate citizens are of no use; rather, they threaten the very "development" we aspire to achieve.
With the 2024 general election knocking at the door, the concerned authorities should launch a comprehensive media literacy initiative focused on the threat of AI and deepfakes. This program should educate voters on critical aspects of media consumption, including the analysis of a message's origin, its intended and implied meanings, and the authenticity of its content. All the stakeholders of democracy should come together to make it effective. In the face of emerging digital threats to our democracy, there is no substitute for media literacy and awareness. By equipping citizens with the tools to differentiate between genuine political messages and manipulated deepfakes, we can foster a resilient democratic process that is less susceptible to the disruptive potential of AI-generated content. In this AI century, media literacy is not merely an option but an essential undertaking to safeguard our democracy.
Sakir Mohammad is a graduate of the South Asian Institute of Policy and Governance, North South University.