Commentary: We’ve been overhyping deep learning for too long. It’s time to start embracing it as a complement to, not replacement for, human ingenuity.
A few years back I jumped on the “machine learning will eliminate the need for radiologists” bandwagon. It wasn’t my smartest prediction. In my failure, however, I’m joined by the biggest experts in deep learning, like Geoffrey Hinton, who in 2016 proclaimed it was “just completely obvious [that] within five years deep learning is going to do better” than trained radiologists.
He was wrong. I was wrong. And as an industry, we all keep being wrong about how fast deep learning, a branch of machine learning, will progress.
Or rather, it’s not “progress” we keep getting wrong: deep learning is progressing, and quickly. What it’s not doing, however, is progressing to the point that it displaces people. The key to appreciating deep learning, wrote Gary Marcus, a scientist and founder of Geometric Intelligence, a machine-learning company acquired by Uber in 2016, is to recognize that this pattern-recognition tool is “at its best when all we need are rough-ready results, where stakes are low and perfect results optional.”
In other words, when machines can be used to complement, not replace, people.
Playing to deep learning’s strengths
Deep learning is essentially a way to do pattern matching at scale. No human can comb through gargantuan piles of data to uncover the patterns within; machines can. By contrast, machines struggle when presented with an outlier that a human might spot easily but that contradicts the data the machines have been trained on. Machines can’t reason; people can. (Well, most people can…most of the time!)
OpenAI’s Jared Kaplan has argued that the problem isn’t reasoning but scale: the more data you feed the machines, the closer they get to replicating human reason. This view is wrong.
You don’t have to take my word for it. Just look around. Pick any AI/ML system you want. None of them has come close to replicating even simple human intelligence, because all of them fall down on real comprehension of what the data means. This isn’t to suggest deep learning is useless. Far from it. No, it’s rather to argue that we should let people be people, and machines be machines, and find ways to marry our respective strengths.
Getting real about real machine learning
We also should stop trying to make ML/deep learning the solution to problems that might be more easily resolved by simple math, following the reasoning of Noah Lorang (“data scientists mostly just do arithmetic”). Or as articulated by Amazon applied scientist Eugene Yan, “The first rule of machine learning [is to] start without machine learning.”
If we’re striving to understand data, rather than merely crunch numbers, we need to be more deliberate about how we employ the machines (i.e., the ML/AI) at our disposal. Lorang’s insight into data science is as true today as when he offered it a few years back: “There is a very small subset of business problems that are best solved by machine learning; most of them just need good data and an understanding of what it means.” As such, he said, instead of overloading deep learning/ML models with expectations, we should turn to “SQL queries to get data, … basic arithmetic on that data (computing differences, percentiles, etc.), graphing the results, and [writing] paragraphs of explanation or recommendation.”
You know: the sort of thing we’ve done for decades, long before deep learning became de rigueur.
Back to Yan. For a successful ML project, “You need data. You need a robust pipeline to support your data flows. And most of all, you need high-quality labels.” This last point highlights the need to get to know your data: to label data well, you have to understand it to some degree. All of this needs to happen before you start throwing random data into a deep learning algorithm and praying for results.
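Label quality can be sanity-checked before any model is trained. One common first look is raw agreement between two human annotators; a minimal sketch (the labels below are invented for illustration):

```python
# Hypothetical labels from two human annotators on the same ten items.
# These values are invented for illustration.
annotator_a = ["spam", "ham", "spam", "ham", "ham", "spam", "ham", "spam", "ham", "ham"]
annotator_b = ["spam", "ham", "ham", "ham", "ham", "spam", "ham", "spam", "spam", "ham"]

# Raw agreement rate: a crude but useful first measure of label quality.
agreements = sum(a == b for a, b in zip(annotator_a, annotator_b))
rate = agreements / len(annotator_a)

# Flag the items the annotators disagree on for human review before training.
disputed = [i for i, (a, b) in enumerate(zip(annotator_a, annotator_b)) if a != b]

print(f"agreement={rate:.0%}, disputed items={disputed}")
```

If the humans can’t agree on the labels, no amount of deep learning will extract a reliable signal from them.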
Which, again, calls out the need for more symbiosis between humans and machines. Neither replaces the other. As TechRepublic’s Mary Shacklett recently wrote, “Great AI doesn’t work in a vacuum. It coordinates with human decision-makers and operates in a symbiotic mode with humans so an optimum decision or operation can be arrived at or performed.” As such, it would help if we’d stop overselling the future of deep learning, machine learning and artificial intelligence and instead focus on the present need to better integrate human ingenuity with brute-force, machine-driven pattern matching.
Disclosure: I work for MongoDB, but the views expressed herein are mine alone.