Why AI isn’t Intelligence
Classifying the wonders of current machine learning algorithms as artificial intelligence does a disservice to the idea of intelligence and creates a vague market that offers little differentiation or definition of the solutions it contains. AI isn’t intelligence. Marketers need to stop cluttering “AI” and instead establish classification, logic, and pattern-recognition markets that make real solution claims.
This will reduce the hype around AI, and perhaps lead to a precipitous fall in mentions of AI and to reporting about its demise. Avoiding the use of AI will, however, not only prove more intellectually honest but also offer a better platform for creating markets. Amid the hype around AI, the adage from the 1990s round of AI enthusiasm remains true: anything that works is no longer AI.
As examples, we challenge the AI claims associated with IBM’s Project Debater and the University of California, Irvine Rubik’s Cube solution, cases where AI hype misses the opportunity for a more salient discussion of the solutions.
Solving the Rubik’s Cube
Stephen McAleer and team at the University of California, Irvine (where I hold a certificate in Intelligent Systems Engineering) applied autodidactic iteration, which allows their algorithm to solve the Rubik’s Cube. In the paper titled “Solving the Rubik’s Cube Without Human Knowledge,” they detail an approach that steps back from a finished cube to determine whether a particular move advances the solution better than another. Once trained, the approach uses search trees to narrow the solution space. Heuristic computing solutions, including those that use robots, apply expert logic, and they have been around for years. The uniqueness of this solution is the discovery of that logic independent of instruction.
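The core loop of autodidactic iteration, generating training labels by scrambling backward from the solved state and then descending the learned values, can be illustrated on a toy puzzle: sorting a short tuple with adjacent swaps. Everything below is an illustrative assumption, not the authors’ code; a lookup table stands in for the paper’s value network.

```python
import random

random.seed(0)
SOLVED = (0, 1, 2, 3)  # toy "cube": sort the tuple using adjacent swaps

def moves(state):
    # Each move swaps one adjacent pair, a stand-in for cube face turns.
    out = []
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        out.append(tuple(s))
    return out

def train(episodes=5000, depth=6):
    # Scramble backward from the solved state; label each visited state
    # with the fewest scramble steps observed, approximating its distance
    # to solved. No human solving knowledge enters the process.
    value = {SOLVED: 0}
    for _ in range(episodes):
        state = SOLVED
        for k in range(1, depth + 1):
            state = random.choice(moves(state))
            value[state] = min(value.get(state, k), k)
    return value

def solve(state, value, max_steps=20):
    # Greedily step to the neighbor the learned values rate closest to
    # solved (the paper narrows the search with tree search instead).
    path = [state]
    while state != SOLVED and len(path) <= max_steps:
        state = min(moves(state), key=lambda s: value.get(s, float("inf")))
        path.append(state)
    return path

value = train()
path = solve((3, 2, 1, 0), value)
print(path[-1])  # greedy descent on the learned values ends at SOLVED
```

The point of the sketch is the training signal: the algorithm never sees a human solution, only its own walks backward from the goal, which is what makes the discovered heuristic “independent of instruction.”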
In passing, MIT’s Technology Review quotes McAleer as saying, “We are working on extending this method to find approximate solutions to other combinatorial optimization problems such as prediction of protein tertiary structure.” A worthy goal with real-world implications. That the researchers arrive at this solution through game mastery displays reasoning by analogy, a leap beyond logic that remains the purview of humans.
Broad areas of human intelligence, like humor, intuition, and aesthetics, some might argue the fundamental parts of human intelligence, remain beyond “AI’s” grasp. For expert human Rubik’s Cube solvers, their brains’ pattern recognition centers and spatial recall may simply be more finely tuned than others’ to solve that kind of problem. The Rubik’s Cube solution would then reflect a computer analog to that point solution, which the human may have as much difficulty applying to other problems as the computer does (though generalized good memory in humans is a central resource used by a variety of problem-solving approaches). Memory in “AI” tends to be application specific, neither shared by other programs nor stored and represented in a way that could be used by other programs if they did share access.
One question not answered by the paper: do humans use knowledge to solve the Rubik’s Cube? As with other “AI” work, the role of tacit knowledge, the knowledge not easily captured, may mean the human brain applies approaches similar to those McAleer and team discovered for the Cube problem. Simon Baron-Cohen and team, in their paper “Talent in autism: hyper-systemizing, hyper-attention to detail and sensory hypersensitivity,” detail that Rubik’s Cube solutions require good local processing, which results in faster solutions, and “if p, then q” reasoning. What McAleer and his team may have discovered is how to produce hyper-systemizing analogs which, while intelligent, do not require higher-order social or verbal skills to master.
Learning to debate
IBM’s Project Debater takes cues from its early Jeopardy-winning Watson demonstration, using data to make counterarguments during a discussion or debate. Because the demonstrations put natural language processing at the forefront, the demonstrations of Debater appear more intelligent than their underlying routines.
IBM outlines its research on its website, which can be found here. Much of it focuses on component-level classification solutions for detecting claims, evidence, and argument quality, along with the classification of phrases and idioms. IBM often seems more concerned with appearing intelligent by simulating casual responses than with being intelligent. As an augmentation to human-computer interaction, this may be a worthy path, but IBM needs to focus much more on the solution goals, not the demonstration systems.
Even Debater’s more sophisticated approaches, such as “Claim Synthesis,” which applies logic to develop a new claim from a previous claim, fall into the classification category of solutions. In the paper, researchers Yonatan Bilu and Noam Slonim of IBM’s Haifa Research Lab offer the following example:
If we are familiar with the claim “banning violent video games is a violation of free speech” in the context of the topic “banning violent video games”, we could synthesize the claim “Internet censorship is a violation of free speech” when presented with the topic “Internet Censorship”.
They then apply statistical methods to Wikipedia to determine whether the synthesized claim matches that data set. The worldview chosen as the core for determining the rightness or wrongness of a topic becomes Debater’s prison. The mention of Aristotle in the introduction asserts a Western-leaning model for arguments. Debater reflects any biases in its data as it encapsulates the boundaries of classical logic.
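The substitution at the heart of that example can be sketched as a template operation followed by a corpus check. The function names and the word-overlap test below are illustrative assumptions, a toy stand-in for IBM’s statistical matching against Wikipedia, not its implementation.

```python
def make_template(claim, topic):
    # "banning violent video games is a violation of free speech"
    # becomes "{topic} is a violation of free speech"
    return claim.replace(topic, "{topic}")

def synthesize(template, new_topic):
    # Instantiate the template with a new topic to form a candidate claim.
    return template.format(topic=new_topic.lower())

def supported(claim, corpus):
    # Toy stand-in for the statistical check against a data set: accept
    # the synthesized claim only if all its words co-occur in a sentence.
    words = set(claim.lower().split())
    return any(words <= set(sentence.lower().split()) for sentence in corpus)

template = make_template(
    "banning violent video games is a violation of free speech",
    "banning violent video games",
)
claim = synthesize(template, "Internet censorship")
corpus = [
    "critics argue that internet censorship is a violation of free speech online",
]
print(claim, supported(claim, corpus))
```

Even in this caricature, the limits of the approach show through: the synthesized claim is only as good as the template extraction and the corpus chosen to validate it, which is the classification boundary the article describes.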
As soon as topics move into areas governed by philosophical logic, where facts become ambiguous (including areas like quantum mechanics, a “non-classical probability calculus resting upon a non-classical propositional logic”), traditional approaches to logic fail. In the debate over AI and ethics (see the Serious Insights position on AI and Ethics here), for instance, Debater may well construct arguments on either side, but even with the most extensive database and flawless logic, Debater’s arguments would prove inconclusive because the subject has no right or wrong answer.
Debater is perhaps more about its natural language processing than its internal logic, which IBM admits remains curated.
A Debater instance programmed for political correctness becomes an unreliable source as it incorporates the bias of those paying for its “contrarian” advice. For Debater to provide value, it must not only find counterarguments but also understand the intent and purpose of the debate, and its implications, including the stakes in play. Unearthing counterarguments in safe situations does not reach the bar required to provide meaningful advice where human lives and economic livelihoods hang in the balance.
Classification and machine learning, not intelligence
Because of the massive amounts of data available to IBM, Debater appears intelligent, but it continues to reflect the hard edges of AI solutions. As a decision-making tool for humans, technology like Debater may play a role in providing alternative arguments to human dialog distorted by cognitive bias. The system may, however, also present arguments that ideologically, politically, or ethically conflict with the debate’s intent.
The Rubik’s Cube solution makes claims of intelligence easiest to deny. Like other game solutions, including those designed for Chess and Go, these algorithms do one thing, and they do it exceedingly well. Even the argument that game solutions can learn to apply their reasoning to other games, such as an “AI” learning how to play Super Mario Bros., keeps the approaches within the realm of classification, reasoning through known heuristics and other search trees. Computers remain well-suited to trial-and-error in risk-free environments. By design, computers precisely memorize input and output datasets, which makes them ideally suited for brute-force attacks on any solution. Over time, algorithms designed to master one thing acquire digital savant syndrome. Expertise in one area may display an aspect of intelligence, but because current solutions prove frail in practice, they meet only the narrowest definitions of intelligence.
Debater and the Rubik’s Cube solution fall short of intelligence because they don’t know what they know or how they know it. This lack of metacognition means the algorithms remain trapped in their own solution set. While scientists can speculate on other applications for the technology, the algorithms themselves remain outside of that discussion. And that is the key point. Debater would prove woefully underprepared, for instance, to argue with cognitive scientists over the approach or merits of the Rubik’s Cube solution.
Researchers may not be to blame as much for the AI marketing assault as those who wish to capitalize on their findings. The marketing use of AI conflates several ideas, from simple pattern recognition derived from machine learning, such as Facebook’s ability to identify you in a picture, to the aspiration for strong AI emulation of the human brain and the ultimate transfer of human intelligence into a post-Singularity device.
For real-world applications, companies seeking customers, and not just venture capital funding, need to focus on solutions and name their approaches with precision. Aspirations to AI remain worthy academic and commercial pursuits, but there is no AI today. AI vendors need to name their solutions in a meaningful way that reflects their value and stop filling the AI hype barrel with more chaff. For most business buyers, AI already offers little more than meaningless abstractions. AI will become less attractive (again) to venture capitalists as their investments fail to transform hype into revenue.