Voters are being warned to be vigilant about the growth of artificial intelligence-enabled political content on social media before the next federal election.
The Australian Electoral Commission has warned that AI-generated disinformation, such as deepfake videos or robocalls impersonating politicians, could be legal under current regulations.
AAP FactCheck has already debunked deepfakes of Prime Minister Anthony Albanese and Treasurer Jim Chalmers created by financial scammers.
A deepfake of then-Queensland premier Steven Miles dancing was posted on TikTok in July, and in September Senator David Pocock sounded a warning about AI content by creating a deepfake of Mr Albanese. Scammers also used AI to impersonate Sunshine Coast Mayor Rosanna Natoli in a fake Skype call in May.
"We should be vigilant. That is the smartest move," she told AAP.
Dr Nuisha Shafiabady, a computational intelligence expert at Charles Darwin University, says regulations or standards could reduce the risk of popular AI tools, such as chatbots, being used to generate political disinformation. But she warns that even with rules, individuals wanting to spread disinformation online would likely develop their own AI tools.
"AI content could 'change your view without you even knowing it', and it's up to individuals to be wary about online content," Dr Shafiabady explained.
Many people use social media platforms for entertainment, she noted, which can make them more vulnerable to deceptive information.
A study by the UK's Alan Turing Institute that analysed AI-enabled content during the US presidential election found no evidence it had affected the result, but noted this was mainly because there was insufficient data on how such content influenced real-world voting behaviour.
"Despite this, deceptive AI-generated content did shape US election discourse by amplifying other forms of disinformation and inflaming political debates," the study concluded. "From fabricated celebrity endorsements to allegations against immigrants, viral AI-enabled content was even referenced by some political candidates and received widespread media coverage."
Dr Marian-Andrei Rizoiu, a behavioural data scientist at the University of Technology Sydney, said more accessible AI content tools also allowed more people to generate higher-quality deceptive content. He said users who engaged with AI-enabled political disinformation were more likely to be shown such content again by social media platforms' recommendation systems.
"The way it's doing that is by profiling me and predicting what type of content would interest me," Dr Rizoiu said.
However, he said Australians should not be overly worried about the impact of AI disinformation on elections, drawing a parallel with past technological advances:
"When the printing press was invented...people didn't automatically start believing everything they saw written...if you see a video online...it may be true or maybe it's false," he said. "But we will have ways to check if something is true and correct."