As we share more and more data online, questions are increasingly being asked about how businesses handle that information, and whether they are doing so ethically.
‘There’s no such thing as a free lunch’ is quite possibly the most pertinent – yet the most ignored – piece of advice in the internet era.
Google and Facebook are two of the largest and most famous companies in the world, positions they reached by offering ‘free lunches’. Whether it’s a free platform for connecting with friends and family or an email service replete with on-demand office software, billions of people have bought into the idea that useful applications really can be free in this new, digital age.
Of course, there is a pay-off. It’s just one that most people don’t consider expensive – access to our most personal information, including the way we use our devices and the internet.
To the majority, this has seemed like no payment at all. After all, what does it matter if Google’s bots know which websites we visit or when we’re most likely to be reading and responding to emails?
The reality is that it matters a lot, but people haven’t understood this, and so they’ve given tech companies free rein to use their data for almost any purpose. In isolation, much of our data is unimportant, but when it’s pooled into mass datasets and subjected to detailed analysis, people’s routines, the way they live their lives and even their innermost thoughts can be uncovered.
As we’ve seen with Facebook’s admission that its user data was shared with third parties without the users knowing, there’s an enormous amount of information hidden amongst all those clicks and likes. In fact, there’s so much information that governments are using it to target specific groups and individuals with a view to manipulating the way they vote.
Statistics such as 44 billion gigabytes per day don’t really convey the incredible scale of how much data is being created on the internet. It’s only by looking at the number of people involved that we can begin to grasp the magnitude.
There are believed to be more than four billion internet users in the world, which is well over half of the global population, and Facebook has 2.23 billion monthly active users. That gives the firm a unique insight into the minds and activities of people throughout the world.
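Taken together, the two figures above allow a rough per-person estimate. The following back-of-the-envelope calculation is purely illustrative and assumes the article’s quoted numbers (44 billion gigabytes created per day, four billion internet users), which vary considerably by source:

```python
# Illustrative arithmetic using the figures quoted above.
# Both inputs are the article's estimates, not precise measurements.
daily_data_gb = 44e9      # ~44 billion gigabytes of data created per day
internet_users = 4e9      # ~4 billion internet users worldwide

# Average share of the daily data flood attributable to each user.
gb_per_user_per_day = daily_data_gb / internet_users
print(f"{gb_per_user_per_day:.0f} GB per user per day")  # prints "11 GB per user per day"
```

Even spread across every internet user on the planet, that is around 11 gigabytes of new data per person every day, which gives some sense of the scale involved.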
Of course, this is just internet data. Much of the information we provide to businesses is given up front and voluntarily – we’ve learned to trust companies to look after our credit card, bank account, social security information and more. However, as the data breaches at global giants Sony and Target have shown, that trust may be misplaced.
Wake-up call
As people have woken up to the real consequences of data sharing, governments have begun to legislate more strictly. The EU’s much-heralded General Data Protection Regulation (GDPR) is the most recent and onerous example of new legislation brought in to protect our individual rights in the digital era.
As Philip Harker, Analytics General Manager at DXC Technology, explains: “Data needs to be respected and treated as an asset – or a liability, if disrespected – and this affects all walks of life and business.
“As consumers, we’re happy to get something for nothing with our loyalty cards, and we trust that that data will benefit us as consumers. Yet when we give up our data in exchange for free access to information services, we should be rightfully unhappy when things go awry.
“And in business, this is data Darwinism: we must adapt with the data-as-an-asset mentality, respect its source, its lifecycle and its value, or perish if we don’t.”
In light of the recent Facebook revelations, the question of whether legislation alone is sufficient is now being asked, and some people suggest it’s time for a specifically ethical debate about the way organisations handle data.
“In terms of legislative framework, over the past 30 years, the environment hasn’t changed a great deal,” says Paul Vane, Deputy Information Commissioner for Jersey. “Compliance with GDPR, while important, isn’t going to work on its own. We need to have a debate about respect and dignity in the digital environment and it’s the responsibility of businesses to bring this into their organisations.”
Discussions about ethics are always difficult because they are relative. However, Emma Martins, Data Protection Commissioner for Guernsey, sees them as essential to the island’s future, not only because they’re the right thing to do, but because they’re also likely to create a competitive advantage.
“The next chapter of the data economy will be about the ethics of data handling,” she says. “The more that businesses can see that the ethical handling of data can help them, the more they’ll be successful.”
One Guernsey-based company that’s acknowledged this is motor insurance provider First Central Group.
“We do see that there’s a competitive advantage to dealing with data properly and carefully,” says Chief Information Officer John Davison. “First Central Group uses a lot of data and we take our cyber security and information security responsibilities very seriously, including having a policy of not selling customer data.”
In this case, Davison says that the decision against selling data was made on the grounds of not wanting to give away intellectual property rather than ethics. But it’s an excellent example of the fact that, when it comes to data, commercial and ethical considerations can sometimes coincide.
The same can be said for the effect that GDPR’s stricter approach to data storage has had at an operational level. “One thing that’s happening is that compliance is getting people to ask more questions [about their operations],” says Davison. “GDPR states that you can only use data for the contracted reason given for using it. And this has highlighted a number of cases where businesses have found that they were holding data they didn’t need to keep.”
The bigger picture
These examples also reveal one of the most contentious issues in data-handling ethics – the question of ownership. Until GDPR, data had largely been viewed as a wholly-owned corporate asset rather than something still under the control of individuals, an attitude that’s under growing pressure to change.
“Autonomy and control over data are important aspects of any conversation,” says Emma Martins. “[Individuals] having autonomy isn’t optional from a governmental perspective because an enlightened, intelligent society doesn’t ignore issues of autonomy and rights.”
The debate about data ethics has widened as people have begun to understand the capabilities of big data analysis. The insights being gained are far more intimate than many imagined, and with machine learning and artificial intelligence (AI) taking their first steps into the mainstream, acceptance of the need for ethical debate has become more widespread.
“Why are we using AI? Is there a need in the organisation? People think the technology is sexy but don’t think of the consequences. This is where the question of ethics comes in,” says Alexis Wintour, Managing Director of Channel Island consultancy firm, Marbral Advisory.
Marbral has been involved in the creation of HumAIn Resources, an HR consultancy that helps businesses and their boards prepare for the advent of AI in the workforce.
“Small jurisdictions are likely to be hugely impacted by AI, but what does that mean for islands such as ours?” she asks. “We have a responsibility to bring together all of the information we gather to build a local ethical framework.”
Paul Vane agrees about the need for information to be brought together and put into the public domain, but he also sees a risk in going too far. “There’s a danger that too much information can turn people off,” he says. “The whole drive of GDPR is to put people back in control.”
It’s here that the regulator’s dilemma can be found. Whilst regulators have a key role in the debate, they can’t do everything themselves. “There’s a balance between institutions’ duty to give information to boost understanding, but there’s also a responsibility on the individual to make sure they understand,” says Vane.
Engaging individuals in the debate is essential, and it needs to come from a position of diversity. As big data and AI are teaching us, technology isn’t neutral: it contains the implicit biases of its developers, a fact that ethical approaches must account for.
“If you have balanced, diverse teams, then you stand a better chance of developing appropriate frameworks,” says Wintour.
As the EU continues with its legislative approach, albeit guided by an ethical agenda, there’s a growing understanding that the Channel Islands could capitalise on their data sovereignty by adopting a more overtly ethical approach that creates commercial advantage.
“We’re embedding ethical principles into our strategy,” says Martins. “We want businesses to comply because they want to, because it’s good for them and their brand, rather than through fear of penalties.
“We have the opportunity to stand out by embracing legal standards and looking at what more we can add, because if you look at the qualities we have – compliance, skills, stability, a good legal system – we have all of the requirements needed to look after the data of others.”