Ciaran Martin addresses the Victoria University Centre for Strategic Studies

 

Victoria University, Wellington – 26 July 2023
Address to the Victoria University Centre for Strategic Studies

 

It is a pleasure to be back in Wellington. It’s been too long. I thank everyone for coming, presumably – hopefully – of your own free will. Other options are available.

This is my first trip to New Zealand since I left Government service three years ago. It is a chance for me to pay tribute to the many friends and partners whose support I was fortunate to have in my time in government. I would like to commend the excellent work of Lisa Fong and her team at the New Zealand National Cyber Security Centre, who were in many respects pioneers of new ways of thinking about cyber security. New Zealand has long been a country that punches above its weight in key areas of cyber security and is a strong and reliable friend to us many thousands of miles away.

 

Beware the hype

The first thing I want to talk about today is why we shouldn’t overhype technological security threats and the dangers of doing so. Therefore, I am going to talk about killer robots.

Bear with me.

You may dimly recall that on 1 June this year, the much-respected US tech website Motherboard ran a story saying that in a US military exercise, an AI drone had killed its human operator because the human was trying to stop it from carrying out its evil plan. (I exaggerate, but only just.)

The story was apparently based on a presentation given to a conference in London by a senior American military officer.

As I think is obvious, this would have had huge implications for our understanding of AI and the risks associated with it.

The world’s media and social media users took a similar view. The story exploded and was covered prominently and quickly by many household news names.

I don’t want to libel the newspapers by saying which ones ran it and got it wrong – it’s the newspapers’ job to libel other people – but I did read about it online in The Times in the UK, it ran on Fox News in the US, and you can still find it on The Daily Beast and Sky News websites, and it was covered by many other outlets. It was, in the parlance of our time, one of those stories that was “huge if true”.

I recall reading the story and was shaken by it. It would have led to a comprehensive re-evaluation of so much of what we thought about tech security.

However, it was then subject to three of the most spectacular – let us call them clarifications – I have ever seen.

The first clarification came at 8:37pm (all times local) on the day the story was published. That ‘clarified’ that no actual human being had been killed; it was a ‘simulation’. I don’t know about you, but to me that’s an important point to clear up.

The second ‘clarification’ came at 12:55am – media hands will know that something is up when you’re putting out statements at that time of night. This carried a statement from the US military saying that no such formal exercise had taken place. The importance of this is less easy to spot, save for the aficionados: the term ‘exercise’ means something in military parlance; it means a serious exercise consuming considerable time and money, which will be the subject of formal evaluation that will feed into doctrine. This hadn’t happened.

The third and final clarification came from the US military officer who had presented in London: the scenario was not based on an actual simulation of any kind, but was more of a “thought experiment.” Again, an important clarification. Cue furious rewriting of stories in newsrooms across the world.

I tell this story not to have a go at anyone: miscommunications happen. I use it to demonstrate just how easy it is to scare people about technological threats, and how quickly such scares can be amplified globally.

This is bad, and we need to get better at resisting it.

It’s bad for two reasons.

The first is that it scares people about technology. Technology is a good thing and it’s going to get better. If you don’t believe me that technology is fundamentally a good thing, imagine your life in the pandemic without it. Secure tech is a public good.

The second reason it’s bad is that it sends people chasing after the wrong problems. We’ve seen this movie before in cyber security. A decade or two ago we were scaring ourselves with predictions of cyber Pearl Harbors and cyber 9/11s: these phrases were used directly by senior Western government figures. Even the venerable Economist had a cover in 2010 with a city skyline of skyscrapers tumbling in flames due to ‘cyberwar’.

All this sent us chasing after protections from the spectacular exception instead of the mundanely insidious. We only began to make progress when we started to break down individual bits of the problem set, and implement imperfect but useful measures to take parts of the problem away.

And what we’ve learned, and what my other key message is today, is that the key to our secure technological future is to mitigate the old problems and secure the new technologies.

 

Mitigating the old problems

What is remarkable about cyber security is just how enduring some of the old problems have been. If I’d been making this speech during my first visit to Wellington in 2013, I’d have been talking about data breaches, commercial espionage by China, strategic intrusions into critical infrastructure by Russia, disruptive attacks by Iran and North Korea, and this emerging phenomenon called ransomware, where Russian-based criminals were starting to lock people out of their networks and charge them for a decryptor key. Fast forward ten years and we’re still talking about all of those things, except that, discouragingly, ransomware has exploded, and its consequences are getting more and more dangerous.

However, more encouragingly, we also have far more examples of good practice and success stories to learn from.

Let’s look at some parts of this enduring problem set and how we’re getting to grips with them.

Across the Tasman in Australia these past twelve months, we’ve been reminded that the era of the mass data breach is still with us. But we are also learning – importantly – that some data matters more than others. Up to now there have been, in my view, two approaches to data regulation: do nothing; or a sledgehammer.

I think the government of Australia, whom I am proud to advise as part of their review of cyber security strategy, are looking intelligently at this. Across the world we owe Australia a debt of gratitude over the way the country collectively – corporately, in government, the media, and wider society – held its nerve over the Medibank extortion demand. Here was an extremely sensitive mass dataset. If ever there was a case of data extortion where an organisation might be tempted to pay, this was it.

Yet by holding its nerve and managing the risk to vulnerable people successfully, Australia showed how – quite literally – to devalue the currency of a dataset for extortion.

Now, as the Cl0p group demand money from nearly 400 organisations after the massive MOVEit supply chain hack, it’s much easier to say to organisations that if Australia can see off the Medibank extortion, you can hold your nerve over your payroll data. By the same token, the New Zealand Government’s decision to make it clear that the NZ state will never pay ransoms is to be welcomed.

The biggest threat to most organisations remains disruptive ransomware. And, tragically, some of this is becoming more dangerous.

The Colonial Pipeline hack in the US in 2021 showed it was possible to cripple a vital piece of hard infrastructure without actually hacking it: it was Colonial, not the criminals, who turned off the pipeline, because their ordinary enterprise IT was so badly damaged that they felt they couldn’t viably operate the pipeline. There’s a real lesson there for our critical infrastructure protection.

Worse was the crisis in Irish healthcare in 2021. Then, Russian-based criminals locked up systems used to book hospital and doctor appointments, which led to the nationwide postponement of cancer operations, diagnostics and other critical healthcare. This incident demonstrated that cyber attacks can endanger human safety, if not in the way imagined by Hollywood or The Economist a decade or so earlier.

Again, we are learning from this. Organisations across the world are reviewing how they can keep their critical systems going if their software goes down. Because of the type of crisis that hit Ireland, countries are reviewing their posture.

One of the most remarkable parts of the story in Ireland was that after three days of chaos, it was not until some personal data was leaked that regulatory procedures were triggered. In effect, Irish law had incentivised cyber defenders to prioritise patient data over patient services, or, more colloquially, in the words of someone who led the response: “I’m afraid we can’t schedule your potentially life-saving operation, but your email is safe.”

Now, laws across Europe prioritise resilience, and the Irish state’s publication of a painfully detailed review of what went wrong is a model of how to use transparency to help others improve.

Perhaps the most striking example of the full range of ‘old’ problems and the potential to mitigate them comes from the horrors of Russia’s murderous invasion of Ukraine. Although it pales into insignificance compared to the physical aspect of the war, Ukraine is experiencing the most sustained cyber assault of any nation in history, from one of the most capable actors – and it is doing very well.

Well before the war, Russia had shown it could, given enough time, money, and luck, execute the very complicated operation of taking out power stations, albeit briefly and with fairly limited impact. Just ahead of the invasion, they used cyber attacks to harass the Ukrainian population, taking out government websites and messing up some digital transactions. At the time of the invasion, Russia showed the potency of military/cyber coordination by taking out the Viasat satellite communications system over Ukraine, complicating the communications of Ukraine’s military commanders.

But since then, in the words of my friend Paul Chichester, Director of Operations at the UK’s National Cyber Security Centre, Ukraine has shown that “the defender always has a vote.”

Ukraine’s own preparations ensured more resilient infrastructure.

It used partnerships with governments, especially the US and UK, to add to its capabilities. It harnessed the private sector to block threats at scale and see off specific attempts to disrupt it. Perhaps most importantly, the incredible work to move its rickety digital government infrastructure onto US-based cloud services in a matter of days moved the dial: instead of trying to hack the on-premises IT of one of Europe’s poorer countries, Russia was now taking on Microsoft, Amazon and others – a completely different proposition.

The harms endure, but the tools to fight back are there, and we should use them.

Here, though, is a brief departure from my general optimism about safety and security in cyberspace. Everything I have talked about so far is about existing tech, which, famously, was built without security in mind. Many of our current problems therefore can only be mitigated, not strategically fixed. The mitigations are getting better, but they are still tactical. That is why the long-term solution is framing the tech of the future with security built in.

 

Fixing the new technologies

But back to good news: we are getting better at this. Already.

Take the security of the Internet of Things, or IoT.

One fun ‘thought experiment’ I like to do that is not about killer robots is to go back in time and look at apocalyptic predictions of the future of tech security.

Cast back to the mid-2010s and you will see lots of predictions that, as the IoT dawned on us, the security consequences would be serious. By 2023, we were told, the number of internet-connected devices would rise from around 5 or 6 billion – take your pick – to 25 or 30 billion, so there’d be a five- or six-fold increase in risk and harm. Well, it is 2023 now and there are 25 or 30 billion or so internet-connected devices, but the commensurate increase in harm hasn’t happened.

Why?

It is partly because we thought about this and started to make IoT safer. In 2016, much of the internet on the east coast of the United States went down. Twitter (or whatever it is called today), Amazon, CNN and a bunch of other household names were inaccessible. This was because they depended on a DNS service called Dyn. Dyn was taken out by some hackers who hijacked several hundred thousand IoT devices, most of which were CCTV cameras. Subsequent investigations revealed that the cameras had default passwords like ‘12345’ or ‘password’. If, as a responsible operator, you noticed that, it didn’t matter, because you couldn’t change it.

Selling an IoT product with an unchangeable default password is now unlawful in many countries.

This tells us something.

Our use of tech is evolving away from a model of going on a website, paying nothing, and giving away lots of data, towards one increasingly based on paid products and services. Products and services are easier to make rules for, and to enforce those rules in the normal ways that we have regulated trade for decades, if not centuries.

We need to apply this approach to all emerging tech: the uses of AI and, in particular, quantum computing.

One could argue we are already applying it to AI.

Another ‘thought experiment’ is to look back at all the predictions that there would be no drivers on the roads in the 2020s. We are well into the 2020s, and there are millions of drivers. That is not because driverless cars aren’t technologically possible – we’ve all seen the videos. It is that we haven’t managed to develop a model where our safety can be guaranteed by anything more robust than the maintenance of a mobile phone connection. So, for now, someone in a driverless car must be ready to take the wheel: sober, not working, not watching TV, awake, and so on. In short, there is no point. This is a good thing.

We should celebrate the fact that we are approaching new technology with caution about security and safety before becoming dependent on it. This is the model of the future.

With both the lessons of the pandemic and the US-led de-risking or de-coupling from China in mind, we are thinking more constructively about the security of our digital infrastructure and not just our networks. The internet is not virtual: it’s a massive physical beast made of cables, data centres, base stations, fibre, and microchips. Securing those, and securing a reliable supply of them, is now extremely high on the national security agendas of the Five Eyes and other allies.

Technological security is now about much more than just computer network security: the simple days when cyber security was about preventing cheating on a single model of the internet built by the West are over.

But we are getting to grips, gradually, with what this means.

Perhaps the most encouraging trend, on which I will finish, is the new emphasis on trying to clean up the digital environment and putting the onus on doing so where it belongs: with those who shape our technological environment.

For far too long we have placed far too great a burden on the humble end user: the harassed healthcare worker or retailer who is not a cyber security expert but is supposed to know which of the 300 or so emails they receive each day is safe to open.

But that is changing and I am proud that the UK has played some role in that: our 2018 NCSC blog “People are not the weakest link” has made the lovely journey from heresy to orthodoxy.

I do not normally recommend that busy people read national government strategies, because, frankly, life is too short. But occasionally they really are worth paying attention to. In March, the Biden Administration published its new National Cybersecurity Strategy. One of its five pillars is about reshaping the market to bring about better cyber security.

Here’s a quote from it:

“We ask individuals, small businesses, and local governments to shoulder a significant burden for defending us all. This isn’t just unfair, it’s ineffective. The biggest, most capable, and best positioned actors in our digital ecosystem can and should shoulder a greater share of the burden for managing cyber risk and keeping us all safe. This strategy asks more of industry, but also commits more from the federal government.”

Unpacked, this means greater responsibility for hardware and software suppliers to put out more secure products, and government and industry measures to reduce the likelihood of the rogue email getting to the poor end user in the first place.

It means critical systems on which we all depend having better resilience so they can keep going. It means government having capabilities to shape all this.

It is very, very hard to do, and not all of it will be done right, or done at all.

But as we’ve seen with IoT, as we’ve seen with ransomware, and – particularly – as we’ve seen with Ukraine, the defender has a vote.

We have a vote in our free and open societies, and we should cast it in favour of free and open, but safer tech. And I passionately believe we can get there, and in the future when we get scared about killer robots it will be easier to remind people that technological advances are actually a very good thing.

Thank you.

 

ENDS
