Q&A: Taiwan AI Labs Founder Warns of China’s Generative AI Influencing Election

FILE - Ethan Tu, founder of Taiwan AI Labs in Taipei on March 24, 2023.

China has ramped up efforts to influence Taiwan’s January 13 presidential and legislative election in recent weeks, with some experts pointing to the dangers of disinformation campaigns created by generative artificial intelligence – which can produce text and other types of content.

Taiwan AI Labs founder Ethan Tu's company has used generative AI to identify and publicize social media information manipulation trends. He told VOA during a video interview on Tuesday that Taiwan must, in the long run, increase efforts to counter cognitive warfare – attempts to influence Taiwanese attitudes – driven by generative AI.

VOA: How serious is the impact of China’s election interference through cognitive warfare in this Taiwan election?

Ethan Tu: Information manipulation is getting more serious online in this year’s Taiwan election, and the techniques that specific actors use are getting more sophisticated. In the past, it was easy to identify fake Chinese accounts that were posting a lot of similar comments, but in this election, due to the emergence of generative AI, these accounts began to differentiate their roles. The comments they make are becoming more diverse.

It’s harder to determine that all these online troll accounts are saying the same thing. Generative AI is helping to amplify the impact of information manipulation. The team at Taiwan AI Labs is observing election-related news and online user conversations and we can see what roles these online troll accounts are playing.

For example, in this year’s election, short videos on YouTube and TikTok are playing very important roles. While these videos have different scripts and actors, they are talking about the same topic. Based on these short videos, we can probably determine that the content produced by these YouTubers aligns with the narratives promoted by Chinese state media.

These YouTube channel owners will pretend to be neutral, but when they talk about things in the videos, they will mix in information that reflects the Chinese government’s interests. The information in these short videos is often pro-China while denigrating Japan and the U.S. They will frame China as a peacemaker while characterizing the U.S. as dragging other countries into wars.

Another type of account that we see on Facebook is responsible for spreading these videos to influential communities. For example, two particular accounts will leave comments that are aligned with Chinese state media’s narratives under the three presidential candidates’ Facebook accounts.

Apart from actively commenting on issues related to the Taiwan election in Mandarin, these two accounts are also actively using English to spread messages that aim at increasing the public’s distrust of the Biden administration in the U.S.

When dealing with information manipulation empowered by generative AI, you may try to debunk a rumor, only to have generative AI take the correct information that the authorities are trying to amplify and use it to create more rumors. The fact-checking mechanism that Taiwan currently uses to combat cognitive warfare has proven ineffective in this particular case.

The best approach to cope with disinformation campaigns created by generative AI is to use generative AI technology to identify the trends. When there is a spike in relevant activities, the research team should share relevant information with the public.

In my view, these cognitive operations have become more effective in this election, but through efforts from different research teams in Taiwan, Taiwan’s public opinion can be more immune to the effect of cognitive warfare.

VOA: Will cognitive warfare initiated by China and Russia become more prevalent in the future?

Ethan Tu: My observation shows that social media platforms and generative technologies are becoming great weapons for authoritarian states to influence people’s free will. The overall trend is not optimistic. Some people used to be willing to speak out against certain troll accounts online, but nowadays, these people will be attacked by troll accounts online until they take down those comments.

The situation now is even worse than in 2018 and 2020. There are fewer comments online challenging cognitive warfare because people are afraid to come forward. On the other hand, some abnormal censorship is taking place on social media platforms. This is why credible media reports or relatively neutral and objective people are not able to speak out on social media.

VOA: In the long run, what should Taiwan do to establish an effective mechanism to address these challenges?

Ethan Tu: Our team understands how to use generative AI to defend against threats extending from cognitive warfare created by generative AI. However, with limited resources, we can only try to educate other teams about how to address these challenges. This is not enough.

As probably one of the first teams in the world to use generative AI to detect information manipulation created by generative AI, we hope international partners will be interested in joining the effort to prevent authoritarian states from using generative AI to influence people around the world, creating an alliance among democratic countries.

If countries around the world are interested in doing this, Taiwan should come forward and share our ideas and experiences. Taiwan should work with them to research cutting-edge technologies that can be used to combat cognitive operations.