Dangers in ‘deep fake’ battle, says Ciaran Martin
Melbourne, Australia – 25 July, 2023
The head of the Albanese government’s global expert advisory panel on cyber security, Ciaran Martin, says there is a risk of measures to combat misinformation and disinformation becoming “authoritarian”, but that a conversation about how to stamp out fake online content is critical.
The former UK National Cyber Security Centre chief executive said the advent of artificial intelligence and the invention of deep fakes threatened the “veracity of information” online and made it necessary to differentiate between what was real and what was fake in the digital world.
“You can’t disinvent the ability to do deep fakes and the ability to artificially generate images but you can, I think, find ways of saying … ‘This is a purely altered image’ or ‘This has been modified in the following ways’,” he said.
On the government’s proposal to extend the powers of the Australian Communications and Media Authority to counter misinformation and disinformation online, Professor Martin said such a measure would need to be managed carefully.
“There is a challenge for policymakers … how do you verify information and get it trusted without creating a sort of ‘committee of public safety’ that would say ‘This is true and this isn’t’.
“It’s a really, really hard problem,” he said. “I’m not sure there’s a perfect solution … having the conversation is a good thing.
“We might find that people will say ‘Well you do need to ban X’ or so forth. I can see ways this could become authoritarian, but equally I don’t think that means people should do nothing.”
Labor’s proposal to beef up ACMA’s powers and allow it to issue multimillion-dollar fines to platforms for not effectively combating misinformation was met with a wave of criticism from tech giants, legal experts and the Coalition over fears it could lead to censorship and impinge on freedom of speech.
Communications Minister Michelle Rowland stressed ACMA “would work with industry” on the implementation of a voluntary code to counter misinformation, which it would have the power to enforce if self-regulation proved inadequate.
In a speech to the Tech Council of Australia last week, Professor Martin canvassed the opportunities and risks presented by AI, along with the need to address the increased threat of ransomware, which was deployed in large-scale hacks including the Medibank and Optus breaches.
“Ransomware, which has been around for ages, is actually starting to do more sort of social harm. Five years ago, ransomware was mostly about quietly extorting rich American companies … now it’s disrupting healthcare and education and stuff that people care about,” he said.
Professor Martin – a member of the CyberCX global advisory board – said he was “disappointed” countries like the US had ruled out making the payment of ransoms illegal, and that this was something Australia needed to consider.
“It’s worth considering very seriously,” he said.
“But as a standalone measure a ban would not work … it could be (accompanied by) an insurance model, expanding the way in which you could source a government-run capability you could try and call on to fix it.”
While he noted the relationship between Canberra and Beijing was improving, Professor Martin said Australia needed to “guard against large scale commercial espionage” conducted by China.
“We have to engage with China on a whole range of issues, but we should continue to press for better behaviour from China in cyberspace,” he said.
Australia has this year been forced to ban CCP-linked surveillance cameras from government buildings and suspend use of drones manufactured by a controversial Chinese company, with Professor Martin warning there would be more technology with links to China operating in Australia.
“10 years ago, there were few restrictions … by and large anybody could make anything and anyone could buy it,” he said.
“So I think we’ll see more of it … quite a lot of stuff we’ll see will be the uncovering of legacy stuff from the last decade.”