Where we need responsible technology

Written by Laura James, Technical Director at Doteveryone

People often ask me what technologies Doteveryone cares about. (Is it about broadband access? Privacy? AI? Data ethics?) The real answer is that our work is not so much about specific technologies as about specific challenges. It’s about systems — how people, organisations, society and technology interact — not individual technologies in and of themselves. (You can see this in our thinking around the different aspects of responsible technology, which cut across technical and business issues, as the two are often intertwined.)

So we aim to tackle issues with technology today (and tomorrow!), some of which relate to specific technology fields, and some of which are quite general. There’s no shortage of interesting work to be done, and luckily we are not alone — many other great organisations and projects are thinking about many of these topics too.

Here’s some of what we’re thinking about, now and into 2018.

Dominant network platforms

A small number of platforms are hugely dominant in their areas — Google, Facebook, Apple, and Amazon, among others like WeChat and Alibaba. They wield incredible power in the markets they choose to be active in (including in buying potential competing companies, in recruitment, and in R&D). With the internet’s network effects, big platforms may be inevitable — but their governance and accountability are far from ideal today. How do we manage these massive companies, each with their own corporate strategies and management styles?

Universal access to good and useful information

The internet offers the potential for information to be freely available to all, and for everyone to create and share information too. This creates incredible opportunities for learning, fulfilment, new ideas, innovation, and art. However, much information still isn’t available online: academic papers are locked behind paywalls, and government and public data isn’t published in many places. Other information is hard to find because search tools are weak or biased, because there’s so much misinformation (accidental or deliberate), or because of censorship, manipulation, social bubbles and more. As a result, it’s hard to know what’s right and what’s wrong. The way information is presented also often excludes people: apps require new smartphones, websites fail basic accessibility standards, servers get turned off and their information lost. How do we make the most of the internet’s potential while still making sure the right information gets to the right people at the right times?

Security, safety, and resilience of internet products and services

Too many internet services are insecure and/or unreliable, as well as poorly architected, designed and maintained. As digital starts to affect even more of our lives, this becomes increasingly problematic. Insecure systems and technologies involved in the internet of things, connected cars and cities don’t just affect their users — they affect others too. (Think of hacked webcams forming a botnet, or leaks of children’s data from internet-connected toys.) How can we motivate good practice and the use of appropriate standards for safe and secure technologies?

Unequal vulnerabilities to fraud, surveillance, and inequalities around pricing and access

While internet technologies may feel “bright and shiny,” they also operate within old systems that disenfranchise already vulnerable and marginalised people. (A £1,000 smartphone will almost certainly have stronger encryption than a £50 smartphone; visions of autonomous car futures rarely include or account for people who can only afford older, second-hand cars. And when things go wrong, well-off people are more likely to secure refunds and compensation.) How do we prevent safety, security and consumer protection from becoming luxury services?

Surveillance capitalism and the attention economy

The ad- and data-fuelled internet isn’t working; it’s invasive, and it can only sustain itself by becoming ever more attention-seeking, manipulative, and pervasive. What are the pros and cons of other models? Is it even possible to think about how we might get there from here?

The impact of algorithms and AI on our lives

“Algorithms” — automated processes, in the broadest sense — are used in all kinds of activity to inform and to make decisions, by public sector organisations, companies, educational institutions and so on. This isn’t particularly new; however, the amount of data available is much greater, the algorithms are much more powerful, the machine learning systems are increasingly complex and opaque, and the potential uses of these technologies have greater impact on our lives — through judicial sentencing, autonomous vehicles, and more. How do we deal with algorithms and AI that can be unaccountable, biased, overhyped and difficult to understand?

Agency around personal information

Ever more interconnected systems are changing the value of data about us, and making that data harder to understand and control. The downsides of information sharing — what happens when information reaches people you would rather didn’t have it — are often hard to perceive, and also vary depending on who you are, your situation, the data in question, and how it’s being used by others. The long-term impact of information being available — legally or otherwise — is very hard to assess. How can we ensure appropriate levels and types of control, with protections and redress where needed, and reward and collective value where beneficial?

Resourcing maintenance and infrastructure

Technology can be frustrating: it can change frequently, demand to be upgraded, or stop working entirely. This state of affairs leads to greater insecurity, increased obsolescence, and disproportionate effects on poorer communities and individuals. The rapid pace of development compounds the problem: investment in new things doesn’t last long. More importantly, though, we don’t have incentive models which resource digital infrastructure and the maintenance of useful tools and systems. Huge services rely on tiny open source projects maintained by handfuls of volunteer developers; everyone wants the benefits of well-maintained software, but few are willing to pay for it. How do we find ways to ensure our digital infrastructure is maintained and supported?


You’ll see that these issues are a mix of hot topics currently in the news and topics which have been studied and worked on for a while — in some cases, decades. That’s because responsible technology isn’t just a reaction to “big tech” (Google, Apple, Facebook and Amazon), or surveillance capitalism, or the dominance of advertising and personal data markets as the basis of consumer internet technology today.

The world of technology development is maturing and, as with other innovations before it, this is a time for reflection, stabilisation, and building good practice and getting it used. It will take time and many changes — a shift in culture, a change in the relationships between tech and government and society, new practices, new ways of measuring things.

Building technology responsibly is simply what we should be doing: as far as possible, in big corporates and startups and SMEs, in nonprofits and co-ops and communities, when building electronics or software, for consumers or B2B, for infrastructure and products, for toys and tools.


This article was originally published here and was reposted with permission.
