The smarter our lives, homes and cities become, the greater the risk that they could be taken over by hackers and used against us, says Peter Richards. It’s a scary thought … so why aren’t we more worried about it?
There’s a perception that cyber-security is an IT problem alone and that the solution is purely technical. It’s not: it’s a human behaviour problem. I’ve just completed a master’s thesis on the relationship between people’s awareness of cyber-security policy and whether or not they comply with it. The disturbing thing is that there is no correlation. It’s human nature to believe that we’ll never experience these threats, a phenomenon known as “optimism bias”. However much you may know about cyber-security, you probably put yourself and your organisation at risk every day by doing things that run counter to its policy, even if they are socially or culturally acceptable.
The cyber-security threat is increasing exponentially, and it is becoming cyber-physical: a cyber attack can have a direct consequence in the real world. As smart buildings, smart cities and the internet of things become a reality, every object in our homes, streets and cities will be network-enabled so that it can communicate over the internet. That means they could potentially be taken over by hackers and used against us.
One of the biggest threats is the creation of a botnet, a robot network made up of many compromised devices. Once the virus gets onto one device, it continues to replicate itself, infecting everything that device connects to. It lies dormant until, at some point in the future, the person who controls the botnet activates all those devices at once. A botnet in a road traffic system or the Uber app could turn all the street lights red, or tell every car to go to the same address and lock down a whole area of a city. Hackers could effectively block access to a hospital by gridlocking the entire neighbourhood, preventing first responders from attending emergencies and resulting in loss of life. And then the controller of the botnet can hold us to ransom. A significant proportion of our computers are likely to be infected already, but we will only know when they are activated.
People don’t take cyber-security seriously yet because there has not been a major event. There will be: a “cyber 9/11” is all but inevitable. If somebody turned the internet off for a week (and it’s possible) people might take more notice. But my research showed that fear doesn’t work as a deterrent. Even when we fully understand the risks, it doesn’t change our behaviour. We still download files and apps without knowing where they come from, and plug in USB sticks that haven’t been virus-scanned. Everyone knows that their company has a cyber-security policy, but very few employees are aware that they’ve signed it, let alone understand it. They very rarely follow it. But then most companies don’t audit the policy or keep a record of how many violations there have been.
A simple example of bad practice is storing personal files on a work laptop. If you store a lot of information about yourself in one place, a criminal can build a profile around you. This will greatly increase the chances that they can crack your passwords or pass security checks to access company networks and sensitive information.
Cyber-security is linked to physical and operational security, so we have to look at it holistically. One of the easiest ways to hack a company is to go into the office and insert a USB drive containing malicious code into a network-connected machine. But it’s not just about physical protection.