China could deploy Artificial Intelligence including AI anchors and memes to disrupt 2024 Lok Sabha poll in India: Microsoft

Representational Image via Daily Sabah

Tech giant Microsoft has warned that after a test run in the Taiwan presidential election, China could potentially utilise Artificial Intelligence (AI)-generated content to influence the Lok Sabha poll in India. According to a Microsoft Threat Intelligence report, state-sponsored cyber groups in China would also attempt to target polls in South Korea and the United States with assistance from North Korea.

The report pointed out, “With major elections taking place around the world this year, particularly in India, South Korea, and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests. Despite the chances of such content affecting election results remaining low, China’s increasing experimentation in augmenting memes, videos, and audio will likely continue and may prove more effective down the line. We can expect to see North Korea continue to steal cryptocurrency to fund space, missile, and nuclear programs as well as launch supply-chain attacks on the defence sector.”

The research stated that in the early fall and winter of 2023, India, the Philippines, Hong Kong, and the United States were the targets of a Chinese cyber actor named Flax Typhoon, which often targets the telecommunications industry. According to the Microsoft study, Storm-1376, an actor associated with the Chinese Communist Party (CCP), shared videos featuring an AI-generated anchor in both Mandarin and English and claimed that the upheaval in Myanmar was caused by the United States and India.

The technology giant further asserted that the communist country, by creating bogus social media profiles, is polling people on divisive issues to sow discord and potentially sway the outcome of the United States presidential election. “China has also increased its use of AI-generated content to further its goals around the world. North Korea has increased its cryptocurrency heists and supply chain attacks to fund and further its military goals and intelligence collection. It has also begun to use AI to make its operations more effective and efficient,” the blog post read.

It mentioned, “The Taiwanese presidential election in January 2024 saw a surge in the use of AI-generated content to augment influence operations (IO) by CCP-affiliated actors. This was the first time that Microsoft Threat Intelligence has witnessed a nation-state actor using AI content in an attempt to influence a foreign election. The group we call Storm-1376, also known as Spamouflage and Dragonbridge, was the most prolific.”

The post added, “Storm-1376 has promoted a series of AI-generated memes of Taiwan’s then-Democratic Progressive Party (DPP) presidential candidate William Lai, and other Taiwanese officials as well as Chinese dissidents around the world. These have included an increasing use of AI-generated TV news anchors that Storm-1376 has deployed since at least February 2023.”

In February, a cyber gang with ties to the Chinese state claimed to have attacked the Home Ministry, the PMO (Prime Minister’s Office), and companies including Reliance and Air India. The investigation revealed that the hackers also claimed access to 95.2 terabytes of Indian official immigration data. The compromised data was shared on GitHub.

A seven-phase Lok Sabha election is scheduled to commence on 19th April and conclude on 1st June. On 4th June, the poll results will be announced. Interestingly, Bill Gates, the co-founder of Microsoft, met with Prime Minister Narendra Modi last month to talk about the use of artificial intelligence and the threat posed by deepfake content created with a variety of AI techniques. PM Modi remarked, “In a vast country like India, there is always a possibility of misguiding through deepfake. What if somebody puts out an obnoxious piece on me? People may believe it initially.” 

He highlighted, “I think that there is a significant risk of misuse when a powerful technology like AI is placed in unskilled hands, in untrained hands. I suggest that we should start with clear watermarks on AI-generated content to prevent misinformation. This isn’t to devalue AI creations but to recognise them for what they are. Also, in the case of deepfakes, it is crucial to acknowledge and present that a particular deepfake content is AI-generated along with the mention of its source. These measures are really important, especially in the beginning. We, thus, need to establish some Dos and Don’ts.”


OpIndia Staff: Staff reporter at OpIndia