Is the Road to AI a Roundabout?

Recently, Andrew Moore, the dean of computer science at Carnegie Mellon University, announced that researchers are giving up on the prospect of human-like Artificial Intelligence. What? In the middle of all this progress we’ve been making? Well, it turns out that the progress the AI field has seen recently has been more about refining techniques we’ve had for years than about discovering anything new.

This even applies to our most ambitious AI technologies, as we will explain here. Self-driving cars have been all the buzz in the automotive industry: MIT researchers continue to make improvements, and Japan optimistically promises a self-driving car system by 2020. As for knowledge systems, they’re becoming more robust in every industry from medicine to human resources, but there are limits to what they can do.

The Two Definitions Of AI

There’s a huge gap between the popular public understanding of AI and what’s actually going on in the field. Popular culture, after all, is riddled with improbable fantasies about human-like machines, be it HAL 9000 from 2001: A Space Odyssey, Ava from the more recent Ex Machina, or GLaDOS, the sarcastic mechanical mentor from the video game franchise Portal. Not only are we led to believe that Artificial Intelligence leads to rebellious, super-intelligent machines with wills and desires of their own, but a whole movement out there, including world-class entrepreneurs, insists that it’s right around the corner. But it isn’t.

To date, not only do we not know how to make a computer reason like a human, or even like an earthworm, but we have no idea where to start. Instead, we’ve gotten better at simulating pseudo-thinking behavior thanks to Moore’s Law (no relation to the above-mentioned Andrew Moore). Gordon Moore, co-founder of Intel Corporation, famously predicted that the number of transistors on a chip, and thus, roughly, the power of computer hardware, doubles about every two years. And this is how we have gotten “Black Box AI,” the hottest new trend in “smart” machines.
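To put that doubling into perspective, here’s a back-of-the-envelope sketch in Python. The forty-year horizon is just an illustrative choice, not part of Moore’s prediction:

```python
# Rough illustration of Moore's Law: capacity doubling every two years.
def moores_law_factor(years, doubling_period=2):
    """Multiplication factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Forty years of doubling gives roughly a million-fold increase:
print(moores_law_factor(40))  # 2**20 = 1048576.0
```

Compounding like that, not any new theory of thinking, is what makes brute-force approaches viable today that were hopeless a few decades ago.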

This is the practical, ad-hoc approach to AI, in which we stop worrying about creating an electronic brain that will laugh at jokes and cry at soap operas, and instead throw all our processing power at solving real day-to-day problems. The distinction is old enough to appear in the Jargon File, that grimoire of ancient hacker wisdom: human-like AI is the “neat” approach, which has eluded us so far; the “scruffy” approach cares only about results, no matter how the computer gets them.

In Black Box AI, we throw raw processing speed at a problem so that the computer can find its own solution by trial and error. It is like dropping the world’s fastest mouse into a maze and letting it crash around until it finds a way out. Sometimes we give the machine millions of patterns to learn from and let it draw its own conclusions when recognizing new patterns, which is pretty much how voice and face recognition work. Or we use pure discovery, as when Google’s DeepMind taught an AI to “walk”: its creators simply gave it a virtual arena and told it to try moving in every pattern it could find until it hit on how to get from point A to point B.

In the case of self-driving cars, a union of both approaches is needed: a combination of heuristic rules and learning in a simulated environment. The process is known as “emergent learning.”

That video animation of DeepMind’s spastic flailing is the perfect illustration of why Black Box AI can only get us so far.
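The mouse-in-a-maze metaphor can be made concrete in a few lines of Python. This is not DeepMind’s actual method, just a toy illustration of the “scruffy” principle: a blind agent wanders at random until it stumbles on the exit, with no understanding of the maze at all. The maze layout, agent, and step budget are all invented for illustration:

```python
import random

# A tiny grid maze: S = start, E = exit, # = wall, . = open floor.
MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#E#",
    "#########",
]

def solve_by_trial_and_error(maze, max_steps=100_000, seed=0):
    """Random-walk the maze until the exit is found; return the step count."""
    rng = random.Random(seed)
    # Locate the start cell.
    r, c = next((r, c) for r, row in enumerate(maze)
                for c, ch in enumerate(row) if ch == "S")
    for step in range(max_steps):
        if maze[r][c] == "E":
            return step  # found the exit; report how many moves it took
        # Pick a random direction; walls simply bounce the agent back.
        dr, dc = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        if maze[r + dr][c + dc] != "#":
            r, c = r + dr, c + dc
    return None  # step budget exhausted

steps = solve_by_trial_and_error(MAZE)
print(f"Exit found after {steps} random moves")
```

Nothing in this code “understands” the maze; sheer speed and repetition do all the work, which is exactly the strength and the limitation of the black-box approach.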

You Can’t Get To “Neat” From “Scruffy”

Back to Andrew Moore at Carnegie Mellon University: he’s saying we’re not making any progress on the “neat” side of AI research, even though things are going great for the “scruffy” DeepMinds of the world right now. That’s a difficult concept for the layman, or even some experts, to grasp, because it requires a deep understanding of both computer science and neurology. For instance, some proponents of “neat” AI insist that it’s a simple matter of hooking up enough computers in a neural net and allowing the system to explore endlessly. The problem is, humans don’t just think with electrical impulses traveling down neurons; we also have a chemical element involving neurotransmitters, as well as several appended organs, such as the hippocampus and the amygdala, whose functions we barely understand at all.

That’s one inherent limitation of human-like AI: before you can program a computer to do something, you have to understand how it’s done yourself. And the human mind is still a big mystery to its owners. We still don’t fully understand the processes behind most of the brain’s diseases, how emotions drive us, which parts of our personality come from nature and which from nurture, why placebos work, and a dozen other mysteries of the brain besides.

Turning an AI loose in a simulation and letting it discover everything from quantum physics to Italian cooking by itself isn’t the answer either. Emergent learning techniques work only for narrowly prescribed sets of problems.

AI’s Likely Impact On Industries

Thus we might ask: Will AI replace doctors? AI will help with diagnosis by acting as a fast pattern-matching search engine, but humans will still have to oversee it. Will AI replace human resources recruiters? Probably not: our best search algorithms are already deployed to scour resumes, and beyond that there isn’t much you can do without a human pilot. Now take a negative example: Could AI turn psychopathic? Sensationalist headlines aside, AI is almost psychopathic by definition, in that computers lack empathy unless we figure out how to teach it to them. In one experiment, an image-recognition AI was exposed only to morbid images during training, and it then interpreted every new image with a morbid guess. No surprise there.

Most assuredly, self-driving cars will be a thing, even though the media is quick to alarm us about every one-off accident they have. Emergent driving AI learns from each mistake, while in the United States human drivers still cause over 30,000 fatalities per year and keep making the same mistakes. And just as with empathy, there is one measure where AI already improves on humans: it never drives drunk or distracted. Demand will increase as more people get comfortable with a robot driver, and even a sloppy AI that has an occasional accident beats the average human driver.

In Conclusion

Yes, there is AI in your future - not only that, it’s very much in your present, every time you have a “conversation” with a virtual assistant like Amazon's Alexa or Apple's Siri. But the human-like AI of science fiction will have to remain, alas or fortunately, a fantasy.

Don't wait around for AI Human Resource recruiters to become a reality. Give us a call and we promise there will be a human on the other end.