Tuesday, November 28, 2023


A Better Way to Onboard AI


In a 2018 Workforce Institute survey of 3,000 managers across eight industrialized nations, the majority of respondents described artificial intelligence as a valuable productivity tool. It’s easy to see why: AI brings tangible benefits in processing speed, accuracy and consistency, which is why many professionals now rely on it.

But respondents to that survey also expressed fears that AI would take their jobs. These fears are echoed by academics and executives we meet at conferences and seminars. AI’s advantages can be cast in a much darker light: Why would humans be needed when machines can do a better job?

The prevalence of such fears suggests that organizations looking to reap the benefits of AI need to be careful when introducing it to the people expected to work with it. Just as when new talent joins a team, AI must be set up to succeed rather than to fail. A smart employer trains new hires by giving them simple tasks that build hands-on experience, and assigns them mentors. This allows the newcomers to learn while others focus on higher-value tasks. As they gain experience and demonstrate that they can do the job, their mentors increasingly entrust them with more substantive decisions. Over time an apprentice becomes a partner, contributing skills and insight.

We believe this approach can work for artificial intelligence as well. In this article we draw on our own and others’ research and consulting to present a four-phase approach to implementing AI. It allows enterprises to cultivate people’s trust — a key condition for adoption — and to work toward a distributed human-AI cognitive system in which people and AI both continually improve.

Phase 1: The Assistant

This first phase of onboarding artificial intelligence is rather like the process of training an assistant. You teach the new employee a few fundamental rules and hand over some basic but time-consuming tasks you normally do. The trainee learns by watching you, performing the tasks and asking questions.

One common task for AI assistants is sorting data. An example is the recommendation systems companies use to help customers filter thousands of products and find the ones most relevant to them. More and more business decisions now require this type of data sorting. When, for instance, portfolio managers are choosing stocks in which to invest, the information available is far more than a human can feasibly process. Software can make the task more manageable by immediately filtering stocks to meet predefined investment criteria.
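The stock-screening task above can be sketched in a few lines. This is a minimal illustration, not the article's system: the criteria names (`pe_ratio`, `dividend_yield`), thresholds and tickers are all hypothetical stand-ins for whatever predefined investment criteria a firm actually uses.

```python
# Hypothetical sketch: an AI assistant filtering a stock universe
# against predefined investment criteria. Field names and thresholds
# are illustrative assumptions, not real investment advice.

def screen_stocks(stocks, max_pe=20.0, min_dividend_yield=0.02):
    """Return only the stocks that satisfy every predefined criterion."""
    return [
        s for s in stocks
        if s["pe_ratio"] <= max_pe and s["dividend_yield"] >= min_dividend_yield
    ]

universe = [
    {"ticker": "AAA", "pe_ratio": 15.2, "dividend_yield": 0.031},
    {"ticker": "BBB", "pe_ratio": 34.8, "dividend_yield": 0.012},
    {"ticker": "CCC", "pe_ratio": 18.9, "dividend_yield": 0.024},
]

shortlist = screen_stocks(universe)
print([s["ticker"] for s in shortlist])  # AAA and CCC pass both filters
```

The point of the sketch is the division of labor: the software applies the mechanical filters instantly, and the portfolio manager spends judgment only on the shortlist.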

Another way AI can lend assistance is to model what a human might do. As anyone who uses Google has noticed, suggested completions appear as a search phrase is typed in. This kind of user modeling can easily be applied to decision-making: AI can identify the choice an employee is most likely to make, given that employee's past choices, and suggest it as a starting point when the employee faces multiple decisions, speeding up the job rather than actually doing it.

Phase 2: The Monitor

The next step is to set up the AI system to provide real-time feedback. Thanks to machine-learning programs, AI can be trained to forecast what a user’s decision would be in a given situation. If a user is about to make a choice that is inconsistent with his or her choice history, the system can flag the discrepancy. This is especially helpful during high-volume decision-making, when human employees may be tired or distracted.
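One way to picture this monitoring step is a simple consistency check. The sketch below is an assumed design, not the article's system: a production version would use a trained machine-learning model to forecast the user's decision, while here a per-situation majority vote over the user's choice history stands in for it.

```python
# Minimal sketch (assumption, not the article's implementation): flag a
# decision that diverges from a strong pattern in the user's own history.
from collections import Counter, defaultdict

class ConsistencyMonitor:
    def __init__(self):
        self.history = defaultdict(Counter)  # situation -> choice counts

    def record(self, situation, choice):
        self.history[situation][choice] += 1

    def flag(self, situation, choice, threshold=0.8):
        """True if the user chose differently in >= threshold of past cases."""
        counts = self.history[situation]
        total = sum(counts.values())
        if total == 0:
            return False  # no history yet, nothing to compare against
        usual, n = counts.most_common(1)[0]
        return usual != choice and n / total >= threshold

monitor = ConsistencyMonitor()
for _ in range(9):
    monitor.record("loan_application", "approve")
monitor.record("loan_application", "reject")

print(monitor.flag("loan_application", "reject"))   # True: breaks a 90% pattern
print(monitor.flag("loan_application", "approve"))  # False: matches the pattern
```

A flag here is a prompt for a second look, not a veto; as the article notes next, the AI is not always right, and the rules for acting on a flag belong to the organization.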

Of course, AI is not always “right.” Companies should set rules about designing and interacting with AI to ensure organizational consistency. These rules might specify the conditions under which an employee should either follow the AI’s instruction or refer it to a superior.

To help employees retain their sense of control in phase 2, we advise managers and systems designers to involve them in design: Engage them as experts to define the data that will be used; familiarize them with models during development; and provide training as those models are deployed. In the process, employees will see how the models are built, how the data is managed and why the machines make the recommendations they do.

Phase 3: The Coach

In a recent PwC survey nearly 60% of respondents said they would like to get performance feedback on a daily or weekly basis. The trouble is that the only way to discover opportunities for improvement is through careful analysis of key decisions and actions, which requires documenting expectations about outcomes and then comparing those expectations with what actually happened. As a result, the feedback employees get usually comes from hierarchical superiors during a review, not at a time or in a format of the recipient's choosing. That is unfortunate because, as a recent study found, the more people feel they are in control of the conversation, the better they respond to it.

AI could address this problem. The capabilities we've already mentioned could easily generate feedback for employees: a monthly summary analyzing data drawn from their past behavior might help them better understand their decision patterns. The data can reveal varying biases: some people may be more loss-averse than others, while others may be overconfident. The analysis identifies these behaviors and, like a coach, provides personalized feedback that suggests how to improve decisions.
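To make the coaching idea concrete, here is a hypothetical sketch of one such monthly summary. It surfaces overconfidence by comparing an employee's stated confidence with their actual hit rate; the decision-log format, the 10-point gap threshold and the feedback wording are all assumptions for illustration.

```python
# Hypothetical sketch of a monthly decision summary that detects
# overconfidence. Log format and threshold are illustrative assumptions.

def monthly_summary(decisions):
    """decisions: list of (stated_confidence in [0, 1], was_correct bool)."""
    n = len(decisions)
    avg_confidence = sum(conf for conf, _ in decisions) / n
    hit_rate = sum(1 for _, ok in decisions if ok) / n
    gap = avg_confidence - hit_rate
    feedback = []
    if gap > 0.10:  # confidence exceeds accuracy by more than 10 points
        feedback.append(
            f"You were {gap:.0%} more confident than accurate this month; "
            "consider seeking disconfirming evidence before deciding."
        )
    return {"avg_confidence": avg_confidence, "hit_rate": hit_rate,
            "feedback": feedback}

log = [(0.9, True), (0.95, False), (0.85, False), (0.9, True), (0.8, False)]
report = monthly_summary(log)
print(report["feedback"])
```

A loss-aversion check would follow the same shape: compare the realized payoffs of the risks the employee took against those they declined, and comment only when the asymmetry is large. Either way, the summary is delivered privately to the employee, in keeping with the coaching framing above.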

Phase 4: The Teammate

Edwin Hutchins, a cognitive anthropologist, developed what is known as the theory of distributed cognition. It is based on his study of ship navigation, which, he showed, involved a combination of sailors, charts, rulers, compasses and a plotting tool. The theory posits that cognitive processing is not necessarily limited to the brain. External tools and instruments can, under the right conditions, play a role in cognitive processing and create what is known as a “coupled system.”

In line with this thinking, in the final phase of the AI implementation journey (which to our knowledge no organization has yet adopted) companies would develop a coupled network of humans and machines in which both contribute expertise. We believe that as AI improves through its interactions with individual users, a community of experts (humans and machines) will naturally emerge in those organizations.

Because of its potential impact, artificial intelligence may be perceived as particularly difficult to implement. Yet if introduced mindfully, adoption can be fairly smooth. To get there, companies must ensure that AI's design and development engage the people who will be working with it; otherwise those workers will quite reasonably fear being constrained, or even replaced, by machines.

Boris Babic is an assistant professor of decision sciences at INSEAD. Daniel L. Chen is a professor at the Institute for Advanced Study at the Toulouse School of Economics and lead investigator at the World Bank’s Data and Evidence for Justice Reform program. Theodoros Evgeniou is a professor of decision sciences and technology management at INSEAD. Anne-Laure Fayard is an associate professor of innovation, design and organization studies at NYU’s Tandon School of Engineering.