Artificial Intelligence and the Rumsfeld Test

 

October 10, 2018

 


“…as we know, there are known knowns; there are things we know we know.  We also know there are known unknowns; that is to say we know there are some things we do not know.  But there are also unknown unknowns – the ones we don’t know we don’t know.” – United States Secretary of Defense Donald Rumsfeld, February 12, 2002

An artificial intelligence strategy is the corporate equivalent of your spleen: everyone has one, but not everyone understands quite what it accomplishes.  There are bold plans afoot everywhere in the world of AI, to be sure, but its present reality remains distant from the popular vision of artificial general intelligence (AGI) – i.e., machines displaying intelligence equivalent to the natural intelligence of humans.

Welcome, Robot Overlords?

Investors in particular need a sober and realistic view of what’s achievable in the field of machine-learning-driven AI today, versus what promises to be nothing more than a waste of time and money.  Successful models in machine learning share a common set of characteristics which should inform our assessment of investment opportunities:

  1. The business model comes first. Machine learning projects should not be undertaken simply because they’re cool ideas or because there’s a large base of training data that can be leveraged.  Without a meaningful business problem and an efficient AI-driven solution to that problem, machine learning projects are doomed to fail for simple reasons of economics.
  2. With the business model in place, the next priority is identifying problems with lots (lots!) of available data, especially as problem dimensions become increasingly complex. Machine learning is driven by statistical pattern matching, putting a premium on problems with lots of training data and lots of operational data.  Proprietary data sources are best, and the data needs to be “clean” enough to be put to good use.  The data should also promise relevance: for example, it’s not prudent to attempt predictive healthcare based on knowing patients’ preferred brand of shampoo.
  3. Machine learning loves highly stable, closed-ended environments and struggles in dynamic, open-ended ones. Well-defined, closed-ended systems like games, dermatological diagnoses, warehouse operations and Google’s Duplex appointment-booking chatbot readily lend themselves to artificial intelligence solutions.  Open-ended systems like gardening, playing soccer, or predicting the success of a start-up are non-deterministic and beyond the scope of today’s AI.
    In theory, machine learning works best when it can ingest unbounded data and consume unbounded computation.  In practice, neither data nor computation (not to mention storage and bandwidth) is unlimited.  Thus, machine learning is best applied to problems in finite, constrained domains.
  4. The technical solution must be scalable. If data is good, more data is even better.  (It goes without saying that if the underlying problem is too simple, then more data will yield only marginal returns.)  In this way, machine learning exhibits “anti-commodity” behavior: rather than being cheapened into commoditization by ever higher volumes of data, it becomes ever more powerful.  But this means that any viable machine learning solution must be capable of scaling as the volume and dimensionality of the data increase.  Data is produced in real time, and decisions must be produced in real time too.  (The sketch following this list illustrates the scaling behavior.)
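To make that scaling behavior concrete, here is a minimal sketch – assuming Python with NumPy and scikit-learn, illustrative choices rather than anything prescribed above – that measures held-out accuracy on a synthetic classification task as the training set grows:

```python
# Minimal sketch: held-out accuracy as a function of training-set size.
# The task, features, and model below are all illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# A synthetic, moderately hard classification problem.
X, y = make_classification(n_samples=20_000, n_features=40,
                           n_informative=25, random_state=0)
X_test, y_test = X[:5_000], y[:5_000]          # fixed held-out set
X_pool, y_pool = X[5_000:], y[5_000:]          # training pool

for n in (100, 500, 2_000, 10_000):
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_pool[:n], y_pool[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"train size {n:>6}: held-out accuracy {acc:.3f}")
```

On a hard, high-dimensional problem the accuracy keeps climbing as data is added; on a trivially simple one it plateaus almost immediately – exactly the “marginal returns” caveat in point 4.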

There are many business problems that map to the attributes above.  The key to success in AI is to focus on these classes of practical problems and solutions.  Falling victim to excessive hype and expanding beyond machine learning’s current capabilities is a recipe for failure.

AI’s Future: Understanding Wile E. Coyote

But what lies in AI’s future?  When will we know if AGI has been achieved?

Picture a common scene from any Warner Brothers “Road Runner” cartoon.  Typically (and frequently) we see the Road Runner’s nemesis, the hapless Wile E. Coyote, defying gravity, suspended in mid-air (perhaps while clutching an anvil), ten feet past the edge of a cliff.  Any child will be able to tell you on his or her first viewing exactly what happens next: Wile E. Coyote will plummet earthward, succumbing to the inexorable force of gravity.  The child will also know that the situation is drawn from comedy, not tragedy.  Machine learning, faced with the same scene, can reach neither of these two conclusions.

Machine learning is brittle.  It’s data-hungry and does not lend itself to situations that are not data-rich.  It can only attack tasks that can be solved through vast amounts of trial and error.  It remains subservient to its training data and cannot generalize outside this training space: imagine Google’s haircut-booking Duplex chatbot having to spontaneously engage you in a conversation about lacrosse or barbecue.  Furthermore, machine learning is vulnerable to small perturbations in its data, which opens the door to data-hacking and mis-training, as the sketch below illustrates.
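The perturbation problem can be shown in a few lines.  The sketch below – plain NumPy, with a synthetic dataset and a logistic-regression model chosen purely for illustration – trains a classifier and then computes the smallest uniform nudge along the gradient-sign direction (the idea behind “fast gradient sign” adversarial examples) needed to flip a correctly classified point:

```python
# Minimal sketch of adversarial brittleness; data and model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 100
# Two weakly separated Gaussian classes (labels 0 and 1).
X = np.vstack([rng.normal(-0.15, 1.0, (500, d)),
               rng.normal(+0.15, 1.0, (500, d))])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2_000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

scores = X @ w + b
i = int(np.argmax(scores[:500] < 0))            # a correctly classified class-0 point
x = X[i]

# Smallest per-feature step along sign(w) that crosses the decision boundary.
eps = -(x @ w + b) / np.abs(w).sum()
x_adv = x + (eps + 1e-6) * np.sign(w)
print(f"per-feature nudge of {eps:.3f} (features have std 1.0) flips the label:",
      int(x @ w + b > 0), "->", int(x_adv @ w + b > 0))
```

Because the nudge accumulates across all 100 features, a change that is tiny relative to each feature’s natural variation is enough to flip the model’s answer.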

The strength of machine learning – that it’s able to find correlations and associations within vast seas of data – is also its weakness: it’s only able to find correlations and associations within vast seas of data. Given enough training data – a nontrivial “if” – machine learning is great at closed-ended mappings of inputs to outputs (“curve fitting” in computer scientists’ parlance).  But novel situations (i.e., little or no training data) defeat machine learning, and pure curve fitting doesn’t solve a very broad class of real-world problems anyway.
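A toy example of curve fitting – and of what happens outside the training space – takes only a few lines.  The function, sample range, and polynomial model below are all illustrative assumptions:

```python
# Minimal sketch: a model fit on [0, 3] interpolates well there and fails
# badly at inputs it has never seen. Nothing here is specific to
# polynomials; any pure input-output mapping extrapolates just as poorly.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, 200)   # noisy labels

coeffs = np.polyfit(x_train, y_train, deg=5)             # fit a degree-5 polynomial

for x in (1.5, 2.9, 6.0, 10.0):                          # the last two are unseen
    pred, truth = np.polyval(coeffs, x), np.sin(x)
    print(f"x = {x:>4}: predicted {pred:+9.2f}, actual {truth:+5.2f}")
```

Inside the training range the fit is excellent; at x = 6 and x = 10 the predictions are wildly wrong, because the model has no notion of the underlying function – only of the data it was shown.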

Machine learning is a brute-force pattern-matching engine, not an intuition-driven common-sense engine.  There’s a broad range of tasks that machine learning is incapable of addressing today, and until it can address them – until it can correctly interpret Wile E. Coyote’s predicament – AGI will remain solely in the realm of science fiction.

To finally reach AGI, machine learning will have to progress past mere curve fitting to understand real-world physics, psychology and emotion, and causal models, just as any human child does.  That goal lies far in technology’s future, even assuming the advent of some breakthrough new computer architecture.  While machine learning excels at classification problems – often far outperforming human capabilities – humans are uniquely able to draw conclusions from widely scattered inferences, frequently from data received long before, and even from data only faintly perceived.  In the real world, needless to say, data doesn’t come neatly packaged and labeled, nor does it arrive in convenient deluges.

The Turing Test Redux

In the end, we’ll know AGI has been achieved when machines are able to autonomously populate (assimilate, understand) abstract models of the real world.  It will have been achieved when machines acquire the ability to autonomously learn from experience and past evidence.  It will have been achieved when machines show the same flexibility as humans in surmounting novel problems, and it is this last capability which brings us to what might be the final test for AGI.

Historically, the Turing Test has been viewed as the acid test for whether machines can think as humans do.  Hans Moravec’s paradox suggests another yardstick for artificial general intelligence, as does Hector Levesque’s Winograd Schema Challenge.  But perhaps a new measure might now be applied – the Rumsfeld Test: specifically, the ability of a machine to deal with the fuzzy uncertainties of “unknown unknowns”, just as humans are so adept at doing.  Passing the Rumsfeld Test might require that these next-generation systems be partially rules-driven, partially data-driven, and perhaps – in the interests of keeping humans gainfully employed – even partially human-driven, as sketched below.
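What such a hybrid might look like in code is easy to gesture at.  The sketch below is entirely hypothetical – every name in it is invented – but it captures the division of labor: hard rules handle the known knowns, a learned model handles the known unknowns it was trained for, and anything the model is not confident about is escalated to a human:

```python
# Hypothetical sketch of a rules + model + human decision pipeline.
# All names, thresholds, and components here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Decision:
    label: str
    source: str          # "rules", "model", or "human"

def decide(case: dict,
           rules: Callable[[dict], Optional[str]],
           model: Callable[[dict], Tuple[str, float]],
           ask_human: Callable[[dict], str],
           threshold: float = 0.9) -> Decision:
    # 1. Rules first: cheap, auditable, and cover the known knowns.
    verdict = rules(case)
    if verdict is not None:
        return Decision(verdict, "rules")
    # 2. The learned model next, but only when it is confident.
    label, confidence = model(case)
    if confidence >= threshold:
        return Decision(label, "model")
    # 3. Low confidence: treat it as an unknown unknown and escalate.
    return Decision(ask_human(case), "human")

# Toy wiring, purely for illustration:
result = decide(
    {"amount": 50},
    rules=lambda c: "reject" if c["amount"] > 10_000 else None,
    model=lambda c: ("approve", 0.97),
    ask_human=lambda c: "approve",
)
print(result)   # Decision(label='approve', source='model')
```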

To ensure that investors (and entrepreneurs) don’t fall victim to the abundant hype within AI, we need a clear-eyed understanding of what it can do today and what it might be able to do tomorrow.  Innovation in AI, including one day passing the Rumsfeld Test, will create new business models and will continue to drive our future opportunities for entrepreneurship.  The 21st Century will forever be known as the AI Century.