Ethics and digital transformation

Written by Dani Ochangco, Programme Manager, ServiceNow

The role of ethics has always been to give people guidelines for how to behave and treat one another. As emerging technologies change every aspect of how businesses operate and compete today, companies need an ethical framework for the era of intelligent machines.

Technologies like artificial intelligence, advanced data analytics, and process automation create ethical challenges. How do companies weigh individual privacy rights against the need to understand their customers well enough to deliver the best products and services?

How do they balance the efficiency and productivity gains of automation against potential job losses and labour market disruption? How do they define the relative rights and responsibilities of people and machines in a world where AI‑powered algorithms decide who gets a mortgage, a job, or a prison sentence?

Business leaders should consider a utilitarian approach to help them work through these questions. Most closely associated with the work of Jeremy Bentham (1748‑1832), a British philosopher, jurist, and social reformer, utilitarianism seeks to minimise harm and maximise good for the greatest number of people.

Corporations always need to balance economic and ethical imperatives. The economic imperative is to increase efficiency and productivity. The ethical imperative is to minimise harm and maximise benefits for customers, employees, and society.

Here are three rules for business leaders who seek to transform their companies in an ethically responsible manner. First, define clear goals and guidelines for any new tech deployment. Second, minimise job disruption by helping workers acquire skills that complement machine intelligence. Third, always act transparently and in a way that respects the privacy rights of customers and employees.

Rule #1: Build AI guardrails

Start by putting clear boundaries around your technology and processes. When implementing new tech, engineers and leaders must strike a balance between achieving maximum efficiency and minimising the potential for harm.

The mathematician and computer scientist John McCarthy, who coined the term “artificial intelligence”, argued that teaching a machine to think means giving it a belief system and thus a worldview. The better companies get at defining goals for AI, the better they can understand what protections they need and how to avoid releasing AI products that produce socially undesirable outcomes.

We should ensure that our machines reflect human‑centred design, meaning that they share our worldview and ethical principles. This is especially important when companies introduce powerful AI tools that can act in unpredictable ways. In 2016, for example, Microsoft researchers launched a Twitter chatbot named “Tay” to study social interactions among Twitter users. Within hours, some of those users had trained Tay to use racist and sexist language.

The researchers hastily withdrew the bot from circulation, but the damage was done. Tay became a PR debacle for Microsoft, which could have avoided the mess by applying keyword or content filters and algorithms that monitored the bot’s sentiment more closely.
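
What might such a filter look like? Here is a minimal sketch in Python of the kind of outbound-message gate a chatbot team might add. It is hypothetical: the blocklist, the sentiment lexicon, and the threshold are invented placeholders, not Microsoft’s actual safeguards.

```python
import string

# Minimal sketch of an outbound-content guardrail for a chatbot.
# The blocklist, sentiment lexicon, and threshold are illustrative
# placeholders, not any vendor's actual safeguards.

BLOCKLIST = {"badword1", "badword2"}           # stand-in policy terms
NEGATIVE_WORDS = {"hate", "stupid", "awful"}   # toy sentiment lexicon


def score_negativity(text: str) -> float:
    """Crude sentiment heuristic: fraction of words that are negative."""
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    return sum(w in NEGATIVE_WORDS for w in words) / len(words) if words else 0.0


def safe_to_post(reply: str, max_negativity: float = 0.2) -> bool:
    """Withhold any reply that trips the blocklist or reads as hostile."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    return score_negativity(reply) <= max_negativity


if __name__ == "__main__":
    for reply in ["Happy to help with that!", "I hate you, that idea is awful."]:
        verdict = "post" if safe_to_post(reply) else "withhold"
        print(f"{verdict}: {reply}")
```

The specific heuristic matters less than the architecture: every generated message passes through an explicit, reviewable policy gate before it reaches the public.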

Leaders can also create organisational roles to help ensure that AI is deployed in commercially and socially beneficial ways. For example, companies can employ ethics officers whose job is to identify potential harm and bias in products and to ensure that AI serves workers and customers.

Rule #2: Focus on people and skills

As business process automation and AI spread through organisations, concern is rising that these technologies will put people out of work. A recent Pew Research Center survey found that roughly 75% of Americans expect economic inequality to rise if machines take over many jobs currently done by humans. Only a quarter of Americans expect a more automated economy to create many new, better‑paying jobs for humans.

It’s a false choice to frame the relationship between people and machines as a zero‑sum competition over jobs. In fact, AI and automation technologies are created by people and exist to serve them. To the extent that AI changes work, however, business leaders should start developing programmes that help employees acquire the skills they need to thrive in a world where career success increasingly requires working productively with intelligent machines.

Consider robots in the automotive industry. When car companies first deployed robots on factory floors in the early 1960s, the machines were confined to separate enclosures to eliminate the risk that they would injure or kill workers who got in their way. Today, companies are investing in robotic arms equipped with sensors that can identify people and objects in their path, avoiding injury to humans and damage to parts.

These arms let workers and engineers focus on making fine adjustments without having to hold parts in place. Machines also take on dangerous and repetitive tasks, freeing human workers to focus on more variable work that requires creativity and foresight. As Kim Tingley argued in a New York Times Magazine article, companies need to focus on preparing people to work with these technologies.
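
To make that safety logic concrete, here is a minimal, hypothetical sketch of the speed-and-separation rule a collaborative arm might apply on every control cycle. The distances and readings are invented for illustration, though the pattern echoes standards such as ISO/TS 15066.

```python
# Hypothetical sketch of a collaborative robot arm's safety gate.
# Distances and readings are invented; real systems follow standards
# such as ISO/TS 15066 for speed-and-separation monitoring.

SLOW_DISTANCE_M = 1.0   # begin slowing when anything is this close (metres)
STOP_DISTANCE_M = 0.3   # halt entirely inside this protective radius


def choose_speed(nearest_object_m: float, full_speed: float = 0.5) -> float:
    """Scale arm speed down as a person or object approaches,
    stopping completely inside the protective zone."""
    if nearest_object_m <= STOP_DISTANCE_M:
        return 0.0
    if nearest_object_m <= SLOW_DISTANCE_M:
        # Linear ramp between the stop and slow radii.
        span = SLOW_DISTANCE_M - STOP_DISTANCE_M
        return full_speed * (nearest_object_m - STOP_DISTANCE_M) / span
    return full_speed


if __name__ == "__main__":
    for reading in [2.0, 0.8, 0.25]:   # simulated proximity readings, metres
        print(f"object at {reading:.2f} m -> arm speed {choose_speed(reading):.2f} m/s")
```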

Rule #3: Be transparent

AI tools harvest enormous volumes of data about users. Companies that use those tools have a responsibility to protect the privacy of customers and employees. While that’s a nuanced challenge that requires collaborative input from companies, regulators and civil society, there are steps companies can take now.

The first is transparency, which helps people understand, and therefore trust, a company’s decisions. Companies need to define which tasks are entrusted to machines and which to people, and they should explain how those decisions will affect employees and customers.

To practise transparency, businesses need leaders who are committed to building a culture of trust in AI. They must be sensitive to biases hidden in data and work tirelessly to ensure their AI is fair.
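
One practical starting point is to audit outcomes across groups. The sketch below is hypothetical: it computes approval rates per group from a handful of invented decision records and flags large gaps, a crude version of the demographic-parity checks that fairness toolkits automate.

```python
# Hypothetical sketch of a simple fairness audit: compare how often an
# automated decision favours each group. The records are invented; a
# real audit would draw on production logs and richer fairness metrics.

from collections import defaultdict

# Each record: (group label, whether the system approved the request).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} approved")

# Flag large gaps for human review; the 20-point threshold is arbitrary.
gap = max(rates.values()) - min(rates.values())
if gap > 0.20:
    print(f"Warning: approval rates differ by {gap:.0%}; review for bias.")
```

A gap flagged this way is not proof of unfairness, but it tells leaders where to look, which is exactly what transparency requires.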

As we reimagine the workplace, we must ask ourselves what world we want to create for customers, employees and partners. How will this new world of work be different from the one we live in now? Framing these challenges through an ethical lens might just be the tool we need to build a future that makes sense for both people and machines.


This article originally appeared on Workflow.
