When machine learning goes off the rails

Inaccurate decisions are not the only risks with machine learning

What happens when machine learning — computer programs that absorb new information and accordingly change how they make decisions — leads to investment losses, biased hiring or car accidents? Should businesses allow their smart products to evolve autonomously? And how should companies evaluate and mitigate the inevitable risks? As machine learning permeates more processes, executives and boards must be prepared to answer such questions.

Algorithms don’t always work smoothly, and they don’t always make ethical or accurate choices. There are three fundamental reasons for this:

1. The algorithms typically deal in probabilities (that someone will, say, default on a loan or have a disease) rather than certainties. Because they make so many predictions, it’s likely that some will be wrong. The likelihood of errors depends on many factors, including the quality of the data, the specific machine learning method and the type of algorithms used. (A toy sketch after this list illustrates this point and the next.)

2. The environment in which machine learning operates may itself evolve or differ from what the algorithms were developed to face. Consider a machine learning algorithm for stock trading. If it has been trained using data only from a period of low market volatility and high economic growth, it may not perform well when the economy enters a recession.

3. Machine learning can make inaccurate decisions due to the complexity of the overall systems it’s embedded in. Consider a device used to diagnose a disease on the basis of images that doctors input. The quality of any diagnosis depends on how clear the images are, the algorithm used by the device, the data that algorithm was trained with, whether the doctor providing the images received appropriate instruction, etc. With so many parameters, it’s difficult to assess whether and why such a device may have made a mistake, let alone be certain about its behavior.
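
To make the first two points concrete, here is a toy sketch in Python. The "market regime" data, weights and numbers are all invented for illustration; nothing here comes from a real trading system. A model trained only in a calm regime still gets some calls wrong (point 1), and its accuracy collapses when the relationships in the data change (point 2):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def regime_data(n, weights, noise=1.0):
        """Invented features with a noisy link to an up/down label."""
        X = rng.normal(size=(n, 3))
        y = (X @ weights + rng.normal(0.0, noise, n)) > 0
        return X, y.astype(int)

    w_calm = np.array([1.0, -0.5, 0.25])    # how features relate to outcomes in a calm market
    w_crisis = np.array([-1.0, -0.5, 1.0])  # the same features behave differently in a downturn

    # Train only on the calm regime.
    X_train, y_train = regime_data(5000, w_calm)
    model = LogisticRegression().fit(X_train, y_train)

    # Point 1: even on fresh data from the same regime, some predictions are wrong.
    X_calm, y_calm = regime_data(5000, w_calm)
    print("calm-regime accuracy:", model.score(X_calm, y_calm))

    # Point 2: when the environment changes, accuracy drops much further.
    X_crisis, y_crisis = regime_data(5000, w_crisis)
    print("crisis-regime accuracy:", model.score(X_crisis, y_crisis))

The exact numbers do not matter; what matters is the pattern: reliable-looking in training, unreliable after a regime change.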

Inaccurate decisions are not the only risks with machine learning. Let’s look now at two other categories:

— AGENCY RISK: Ordinarily, when an accident occurs, it’s possible to draw on evidence to reconstruct the circumstances that led to it, so executives can get helpful estimates of the extent of their company’s potential liability. But because machine learning is typically embedded within a complex system, it will often be unclear what led to a breakdown. Environmental change and the probabilistic nature of machine learning make it even harder to attribute responsibility to a particular agent.

— MORAL RISK: Products and services that make decisions autonomously will also need to resolve ethical dilemmas — a requirement that poses additional risks and challenges. Scholars have now begun to frame these issues as problems of responsible algorithm design. They include the puzzle of how to automate moral reasoning. Should Tesla program its cars to think in utilitarian cost-benefit terms or Kantian ones, where certain values cannot be traded off? Moral risks also include biases related to demographic groups: facial-recognition algorithms, for example, have a difficult time identifying people of color, and recidivism-prediction instruments give Black and Hispanic defendants falsely high risk ratings.
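
To see how such bias can be surfaced in practice, here is a hedged sketch of one basic audit: comparing a risk tool’s false positive rates across demographic groups. The data and the tool are entirely hypothetical stand-ins, built so that the disparity appears by construction:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 10_000
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=n),
        "reoffended": rng.random(n) < 0.3,  # hypothetical ground truth
    })
    # A hypothetical tool that flags group B as "high risk" more often,
    # even though both groups reoffend at the same underlying rate.
    flag_rate = np.where(df["group"] == "B", 0.45, 0.30)
    df["flagged_high_risk"] = rng.random(n) < flag_rate

    # False positive rate: flagged as high risk, but did not reoffend.
    fpr = (df[~df["reoffended"]]
           .groupby("group")["flagged_high_risk"]
           .mean())
    print(fpr)

A large gap between groups is exactly the kind of signal a certification or audit process should flag before a tool is used on real people.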

All these considerations, of course, don’t mean that we should avoid machine learning altogether. Instead, executives need to embrace the opportunities it creates while making sure they properly address the risks.

Executives need to think of machine learning as a living entity and demand a full analysis of how employees, customers or other users will apply these systems and react to their decisions. Even when not required to do so by regulators, companies may want to subject new machine-learning-based products to randomized controlled trials to ensure their safety, efficacy and fairness before rollout. They may also want to analyze the products’ decisions in the actual market, where there are various types of users, to see whether the quality of decisions differs across them, and to compare the quality of decisions made by the algorithms with those made in the same situations without them. Before deploying products at scale, companies should consider testing them in limited markets to get a better idea of their accuracy and behavior when various factors are at play.
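
The randomized-trial idea can be sketched in a few lines. In this illustration (all rates and sample sizes are invented), cases are randomly routed to either the new model or the existing process, and a simple two-proportion test asks whether the difference in decision quality is real or noise:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    n = 20_000

    # Randomize: half the cases get the ML system, half the status quo.
    uses_model = rng.random(n) < 0.5

    # Hypothetical "good outcome" rates observed after the fact.
    p_good = np.where(uses_model, 0.82, 0.78)
    good_outcome = rng.random(n) < p_good

    def arm_stats(mask):
        return good_outcome[mask].mean(), mask.sum()

    p1, n1 = arm_stats(uses_model)   # model arm
    p0, n0 = arm_stats(~uses_model)  # status-quo arm

    # Two-proportion z-test: is the model's lift distinguishable from chance?
    p_pool = good_outcome.mean()
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
    z = (p1 - p0) / se
    print(f"model: {p1:.3f}  baseline: {p0:.3f}  z = {z:.1f}  p = {2 * norm.sf(abs(z)):.2g}")

The same comparison, broken out by user segment, answers the follow-on question of whether decision quality differs across types of users.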

Businesses should also develop plans for certifying machine-learning offerings before they go to market. Many startups provide services to certify that products and processes don’t suffer from bias, prejudice and other pitfalls. Professional organizations like the International Organization for Standardization are also developing standards for such certification, while companies like Google offer artificial intelligence ethics services.

As machine learning and the environments it operates in evolve, companies may find that their technologies don’t perform as initially intended. It is therefore important that they set up ways to check that the technology behaves within appropriate limits. In many ways companies’ monitoring programs may be similar to the preventive maintenance processes used in cybersecurity. For example, firms might conduct so-called adversarial attacks on AI like those used to routinely test the strength of information technology systems’ defenses.
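
As a flavor of what such an adversarial probe might look like, here is a small Python sketch in the spirit of the fast-gradient-sign method. The scoring model is a stand-in; a real exercise would target the production system:

    import numpy as np

    rng = np.random.default_rng(3)
    w, b = rng.normal(size=8), 0.1  # a stand-in linear scoring model

    def predict(x):
        return 1 / (1 + np.exp(-(x @ w + b)))  # probability-like score

    x = rng.normal(size=8)
    p = predict(x)

    # Nudge each input in the direction that most changes the score,
    # within a small perturbation budget eps.
    eps = 0.05
    grad = p * (1 - p) * w  # gradient of the sigmoid score with respect to x
    x_adv = x + eps * np.sign(grad)

    print(f"score before: {p:.3f}  after small perturbation: {predict(x_adv):.3f}")
    # Monitoring rule of thumb: if tiny, plausible input changes flip
    # decisions, the system is too brittle to leave unattended.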

Finally, executives and regulators should learn to ask the right questions, delving into the following:

— ACCURACY AND COMPETITIVENESS: How much is the performance of the machine-learning-based system likely to improve with the volume of new data from its use if we don’t lock the algorithm (that is, if we allow it to keep learning after deployment)? What will such improvements mean for the business?

— BIASES: What data was used to train the algorithm? How representative is it of the population on which the algorithm will ultimately operate (see the sketch after this list for one simple check)? Can we predict whether an unlocked algorithm will produce less-biased results than a locked one if we allow it to learn over time?

— THE ENVIRONMENT: How will the environment in which the offering is used change over time? Are there conditions under which machine learning should not be allowed to make decisions, and if so, what are they? How can we ensure that the offering’s behavior evolves appropriately given how the environment itself is changing?

— AGENCY: On which third-party components, including data sources, does the behavior of our machine-learning algorithms depend? How much does it vary when they’re used by different types of people? What products or services of other organizations use our data or machine-learning algorithms?
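
Picking up the representativeness question above, here is one simple check a team might run: compare the demographic mix of the training data with the mix of the population the system will actually serve. The groups and counts below are invented for illustration:

    import numpy as np

    groups = ["18-30", "31-50", "51-70", "70+"]
    train_counts = np.array([4200, 3800, 1600, 400])    # who the model learned from
    deploy_counts = np.array([1500, 2500, 3000, 3000])  # who it will actually score

    train_share = train_counts / train_counts.sum()
    deploy_share = deploy_counts / deploy_counts.sum()

    # Total variation distance: 0 = identical mix, 1 = completely disjoint.
    tv = 0.5 * np.abs(train_share - deploy_share).sum()

    for g, t, d in zip(groups, train_share, deploy_share):
        print(f"{g:>6}: trained on {t:.0%}, will serve {d:.0%}")
    print(f"total variation distance: {tv:.2f}  (large gaps deserve scrutiny)")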

Machine learning has tremendous potential. But as the technology is woven into our economic and social fabric, the risks it poses will increase. For businesses, mitigating them may prove as important as managing the adoption of machine learning itself.