Blog | December 11, 2019

An Action Plan for Practicing Ethical Machine Learning

Ben Harden

The future of machine learning will involve an ecosystem of algorithms within organizations that aim to make life easier, but there are also serious business concerns about what it means to get there ethically. When you start to build these algorithms and train machines to do some of the tasks that humans do, you find gray areas you may not have expected. Unconscious biases become apparent, and these biases can be subtle and complex.

Organizations must consider their own biases to make the most of machine learning technology. If the builders of a system unconsciously develop machine learning tools that disregard certain populations, the system is broken: unintentionally, the tool limits rather than expands possibilities.

Ethics come into question here, even in well-thought-out design.

Imagine an accident suddenly takes place in front of an autonomous vehicle. It must quickly analyze its options: 1. Swerve left and hit a mother and child; 2. Swerve right and hit a larger group waiting on the corner; or 3. Crash into the car in front of it, likely killing the driver. How does ML-driven technology make that choice?

This may seem extreme if your own role is in marketing and your challenge is recognizing potential zip code biases, but it’s all part of the same conversation. The tools will do what we design them to do with the data we provide, making choices previously made by a human, so we need to be extremely thoughtful about their creation.

What do organizations need to consider to move beyond human biases when working with machine learning technologies?

Become aware of what biases may exist.

Sometimes organizations have predispositions and don’t realize it. When their team members build algorithms, the output can easily mirror these same perspectives. This can create a machine-led cycle of problems, such as a segment of the market being passed over or, worse, an increased liability risk for your company. The recent Apple credit card gender bias exposure is just one example of what this can look like.

Biases can lead to recruiting only a certain subset of talent into the company. Your talent algorithms could unintentionally be influenced by gender and ethnicity, as was recently discovered at Amazon. Biases could also be more nuanced: for example, ML recruiting tools that favor a specific type of degree from a certain set of universities. Maybe that’s okay and exactly what your organization needs, but you might be ignoring a population that could be even more successful and should, at the very least, still be considered.

If machine learning assists with your procurement process and the negotiation of contracts with vendors, the goal might be to optimize the value you get from business partners. However, unconscious biases within your algorithms might routinely steer you toward a specific set of vendors and away from others. This could cause problems like less-than-optimal pricing or, worse, potential lawsuits.

Still, these concerns shouldn’t make us close the door on machine learning. Instead, we must consciously and humbly challenge ourselves to look at things from different angles and build awareness first. Then we must lead with this new way of thinking so that the rest of the organization can learn to strategize and deliver in the same way.

Create a diverse operational culture that respects all voices.

As leaders, we need to recognize the great business value of machine learning, but also the potential for these tools to get us into trouble if they are not properly built.

Building a cross-functional team enables people with diverse skill sets and viewpoints to sit down at the same table and discuss solutions holistically. By embracing a culture where every voice is valued, you improve the odds of successful machine learning.

A diversity of perspectives from across your organization, spanning age, gender, ethnicity, department, and a mix of long-time and newer employees, is essential for building powerful ML systems. It is another reason a diverse workforce matters: you need one to have multiple perspectives at the table.

Keep testing for biases beyond the initial use case.

If you build a machine learning tool and it’s successful, that doesn’t mean it no longer needs to be examined. Biases aren’t always obvious, nor do they always appear quickly. And shifting a successful ML tool from one department to another might bring along biases that weren’t apparent in its original application.

Organizations must keep challenging themselves to build more ethical tools. This process will not only make your systems stronger, but it will also lead to more accurate, comprehensive results.
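
To make this ongoing testing concrete, here is a minimal sketch in Python of what a recurring bias check might look like: it compares a model’s selection rates across a protected attribute and flags any group whose rate falls below the commonly cited four-fifths ratio. The DataFrame, column names, and threshold are illustrative assumptions, not a prescription for any particular system.

```python
import pandas as pd

def disparate_impact_check(scored: pd.DataFrame,
                           group_col: str = "gender",
                           decision_col: str = "selected",
                           threshold: float = 0.8) -> pd.DataFrame:
    """Compare selection rates across groups against the most-favored group.

    `scored` is assumed to hold one row per candidate, with a binary
    `decision_col` (1 = selected) and a categorical `group_col`.
    """
    rates = scored.groupby(group_col)[decision_col].mean()
    ratios = rates / rates.max()  # ratio relative to the most-favored group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "flagged": ratios < threshold,  # below the four-fifths rule of thumb
    })

# Re-run the report whenever the model, its data, or its use case shifts,
# not just at the initial launch:
# print(disparate_impact_check(latest_scored_candidates))
```

A check like this is deliberately simple; the point is less the specific metric than the habit of scheduling it alongside every retraining or redeployment.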

Consistently seek fresh perspectives.

Your team’s understanding of your business and of the outcomes you want from AI and machine learning initiatives is important, but gaining the perspective of someone who isn’t as close to the project is vital. Employees from different departments, team members focused on different areas, or even external partners can approach a project and examine bias, awareness, and process with a fresh point of view.

As an example, your company’s internal data sets about your customers might bias you toward the customer type you already have. If everything you build focuses on expanding your present customer profile, your tools might screen out new potential customers you also want. Fresh perspectives can discover where subtle limitations exist because of limited data, exposing issues your team may be too close to see. External partners can even be resources for purchasing new data from outside your organization to remove these limitations.

From IBM’s AI Fairness 360 to the “Performance + Fairness” tab in Google’s What-If Tool to Workday’s ethical compass, this focus on the ethics of AI and machine learning is gaining attention. But it requires consistent commitment.
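
For illustration, below is a minimal sketch of what a check with IBM’s open-source AI Fairness 360 toolkit can look like. The tiny hypothetical hiring dataset and the group definitions are assumptions made here for brevity, and the library offers many more metrics and mitigation algorithms than the two shown.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical scored applications: 'hired' is the binary label,
# 'gender' the protected attribute (1 = the privileged group here).
df = pd.DataFrame({
    "gender":     [1, 1, 0, 0, 1, 0, 1, 0],
    "experience": [5, 3, 5, 2, 7, 6, 1, 4],
    "hired":      [1, 1, 0, 0, 1, 1, 0, 0],
})

dataset = BinaryLabelDataset(df=df,
                             label_names=["hired"],
                             protected_attribute_names=["gender"])

metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{"gender": 1}],
                                  unprivileged_groups=[{"gender": 0}])

# Values near 1.0 (disparate impact) and 0.0 (parity difference)
# indicate similar outcomes across the two groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```
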

As with any new technology, it sometimes feels like there are a lot of landmines, and the more you think about it, the more landmines you discover. Fear, uncertainty, and doubt contrast with the realization that if you don’t embrace Machine Learning, you’re going to be left behind.

How machine learning models are trained, and the impact of that training on the business, is complex, and ethics are part of it. But don’t be overwhelmed by ML. Emerging technologies like this are changing the face of the business world, and focusing on the ethics of these new tools is just one more aspect we must embrace.