Just because you can connect a device to the internet doesn’t mean you should. While connected devices (commonly called the “Internet of Things”) can deliver new services, there needs to be security, privacy and clear ownership of data.
The endgame for the Internet of Things (IoT) is a world where data flows between organisations and individuals from multiple sources, and can be analysed to provide a better understanding of what’s happening in any aspect of our personal, community and business lives. This can drive efficiency, better service and massive positive societal change.
There is enormous potential to change lives for the better: using data to manage power consumption in homes, to improve home security, and to connect any device in a building. It can also support individuals in social care and provide additional information when setting insurance premiums. The full manifestation of the Internet of Things is, however, some way off. Right now, the challenge is taking the first steps and making a business case for deploying connected devices in isolated use cases. IoT is currently about finding specific opportunities with a business case in their own right to create greater efficiencies, whilst keeping an eye on a bigger vision for the future.
But where there’s data, there’s risk from criminal activity and human error
This is exacerbated with the IoT because of the volume of information being generated, and because permission for the use of data is not treated as rigorously as it should be. That poses a risk if the data reveals something about an individual’s routine. A hypothetical example: if information is transferred from a domestic boiler to the energy company as part of a monitoring service for leaks or fuel inefficiency, that’s fine provided the information can’t be used to identify when the person is in the house. Otherwise, anyone who can access the information can infer when the house is empty and vulnerable to a break-in. That is where the potential danger lies, even though organisations are gathering information with the best intentions. Therefore, the data needs to be encrypted and secured against outside hackers, with authorisation for internal access strictly limited.
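One way to blunt the boiler-data risk is to aggregate on the device before anything is transmitted, so the supplier receives the efficiency figures it needs without the within-day usage pattern from which occupancy could be inferred. A minimal Python sketch, assuming a hypothetical telemetry format of (ISO timestamp, litres used) pairs; real device schemas will differ:

```python
from collections import defaultdict
from datetime import datetime

def daily_totals(readings):
    """Collapse timestamped fuel readings into per-day totals.

    `readings` is a list of (ISO timestamp, litres used) pairs — a
    hypothetical format for illustration. Only the daily total leaves
    the device, so usage patterns within a day (and hence when the
    house is occupied) cannot be inferred downstream.
    """
    totals = defaultdict(float)
    for ts, litres in readings:
        day = datetime.fromisoformat(ts).date().isoformat()
        totals[day] += litres
    return dict(totals)
```

The design choice here is data minimisation at the edge: the fine-grained readings never leave the home, so there is nothing sensitive for an attacker or an over-curious insider to intercept.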
People need to understand the extent to which data is already collected via things like home security systems detecting whether you’re in or out, acoustic monitors (which indicate when someone is in a building), and the ability for others to infer an individual’s movements, such as not being at home. There is potentially a fairly painful learning curve in how data is handled in order to avoid making people vulnerable.
Therefore, data needs to be transmitted and stored securely, encrypted and anonymised so that it’s not possible to infer something about an individual. That said, when it comes to people it is very difficult to entirely prevent the risk of human error.
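As a sketch of what “anonymised” can mean in practice, one common approach is keyed pseudonymisation: replacing the real identifier with a keyed hash, so records can still be linked for analysis without naming the household. The key and identifier format below are hypothetical, and in a real deployment the key would live in a secrets store, not in code:

```python
import hmac
import hashlib

def pseudonymise(household_id: str, secret_key: bytes) -> str:
    """Replace a real household identifier with a keyed pseudonym.

    An HMAC (rather than a plain hash) is used so that someone without
    the key cannot simply hash a list of known IDs and match them up.
    The same ID always maps to the same pseudonym, so records remain
    linkable for analysis without revealing who they belong to.
    """
    return hmac.new(secret_key, household_id.encode(), hashlib.sha256).hexdigest()
```

Note that pseudonymisation alone is not full anonymisation: whoever holds the key can reverse the mapping, which is exactly why internal access to that key must be limited.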
For example, while car manufacturers are keen to equip their internet-connected vehicles with apps in order to provide extra services to drivers, anything a user does on the internet – such as accidentally downloading malware into the car – could allow information about, and control of, the car to fall into the wrong hands.
A massive education process is needed to tackle user apathy around the use of their data. Meanwhile, organisations collecting data need to know what they’re exposing via the data they hold. Indeed, they should focus on collecting only the data that is essential, rather than gathering everything they can.
And, within companies, data needs to be kept as far away as possible from unauthorised users, while people with access to the data need to be properly trained in how to handle it.