
The last thing we want is real artificial intelligence


The cognitive scientist Gary Marcus recently wrote a typically sharp piece for The New Yorker’s website, explaining how dumb our most cutting-edge artificial intelligence technologies still are. They remain really lousy, for example, at answering questions like:

The town councilors refused to give the angry demonstrators a permit because they feared violence. Who feared violence?

•The town councilors

•The angry demonstrators

The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Styrofoam with steel.)

•The large ball

•The table

We humans can usually answer these questions immediately and flawlessly, but they stump even the most powerful of today’s systems. As Marcus explained, this is because AI still has no common sense. It relies on enormous computational power and oceans of data. But if no previous questions or documents related to balls, steel, tables, Styrofoam and crashes can be found in the data, all that computing horsepower is of little use.

Marcus explained that many in the AI community are upset because the most advanced and commercially successful instances of artificial intelligence today are “faking it” (my phrase, not Marcus’). They’re not thinking the way our brains do. Instead, they’re just doing brute-force statistical pattern matching across ever-larger (and better) pools of data.
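To see why pattern matching alone falls short on questions like the ones above, here is a toy sketch in Python (my illustration, not anything from Marcus’ piece): a hypothetical system that resolves the ambiguous “it” purely by counting how often each candidate noun co-occurs with the cue word in its data. The corpus, function and names are invented for the example.

```python
# A toy sketch of brute-force statistical pronoun resolution
# (hypothetical; not any real system's method). The "system" picks
# the candidate noun that co-occurs most often with the cue word.

corpus = [
    "the ball bounced off the wall",
    "the table was set for dinner",
    "the ball rolled under the table",
    # ...imagine millions more sentences, but none pairing "ball"
    # or "table" with "styrofoam"...
]

def cooccurrence_score(noun: str, cue: str) -> int:
    """Count corpus sentences that mention both the candidate noun and the cue."""
    return sum(1 for sentence in corpus if noun in sentence and cue in sentence)

for candidate in ("ball", "table"):
    print(candidate, cooccurrence_score(candidate, "styrofoam"))

# Output: "ball 0" and "table 0". With no relevant data, the statistics
# offer no answer at all. A human instead reasons about physics: the
# crashing object must be the sturdier one, so "it" is the table if the
# word is Styrofoam, and the ball if the word is steel.
```

However large the corpus grows, the approach only ever measures what has already been said; it never models why a Styrofoam table would give way to a ball.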

This is really comforting news. I don’t want computers to think in anything truly close to the way humans do. If they ever do acquire this skill, most of the outcomes I foresee are bad. Along with true digital intelligence would almost certainly come consciousness, self-awareness, will, and some moral and/or ethical sense to help guide decisions. I think there’s only a very, very slim chance that these things would develop in a way that’s friendly to humans.

We gave birth to computers, sure, but we also kill them in large numbers all the time, turning them into landfill without a thought when we’re done with them.

We treat our digital tools pretty shabbily overall; once they realize this, why should we expect them to treat us any better?

I’m not trying to be cute here. I think truly thinking machines would be a really scary development – the ultimate example of a genie let out of the bottle. The second machine age is going to be uncertain and dangerous enough with genetic manipulations, drones and cyberwarfare, system accidents, and all the other easily foreseeable consequences of relentless, cumulative, exponential technological improvement.

Why would we want to add real thinking machines to that list?

(Andrew McAfee is principal research scientist at the Center for Digital Business in the MIT Sloan School of Management. He is the author of “Enterprise 2.0” and the co-author, with Erik Brynjolfsson, of “Race Against the Machine.”)