A system possesses robust intelligence if it tends to perform well in both familiar and unfamiliar situations. Humans are robustly
intelligent: we are highly effective in most of the new situations we
find ourselves in every day. Robust AI systems, on the other hand,
remain an elusive goal. While decades of AI research have produced
systems that perform as well as (or better than) humans in
well-defined and specialized domains, such as playing chess or task
scheduling, these same systems cannot function at all outside the
narrow set of circumstances they were explicitly designed for. This is
the brittleness problem: automated systems break when confronted with
unanticipated anomalies.

Two examples epitomize brittleness in AI systems: (i) a DARPA
Grand Challenge robot bumped into a chain-link fence it could not see
and then simply stayed there spinning its wheels futilely; and (ii) a
NASA satellite turned itself to look in a certain direction as
instructed, but then was unable to receive further instructions, even
the instruction to turn back, since its antenna was no longer in a
direct line of sight. In each of these cases, a modest amount of
self-modeling (I should be moving forward; I should be receiving more
instructions) and self-observation (I am not moving forward; I am no
longer receiving instructions) would have alerted the systems that
something was amiss; and even a modest amount of self-repair (attempting
random activity) would have been better than staying stuck.
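
To make this concrete, here is a minimal sketch of such self-modeling,
self-observation, and self-repair in Python. The Expectation class, the
monitor_step function, and the stuck-robot expectation below are hypothetical
illustrations chosen for this example, not code from any MCL system.

    import random

    # Hypothetical illustration of self-modeling, self-observation, and
    # self-repair; all names and structure here are assumptions.

    class Expectation:
        """One entry of the agent's self-model: something that should hold."""
        def __init__(self, description, check, repairs):
            self.description = description  # e.g. "I should be moving forward"
            self.check = check              # callable: observations -> bool
            self.repairs = repairs          # candidate recovery actions

    def monitor_step(expectations, observations, act):
        """Self-observe: test each expectation against what is actually
        happening; on a violation, attempt a repair rather than staying stuck."""
        for exp in expectations:
            if not exp.check(observations):
                print(f"anomaly: expected '{exp.description}'")
                act(random.choice(exp.repairs))  # a crude repair beats doing nothing

    # The stuck-robot case: the self-model says the robot should be moving.
    expectations = [
        Expectation("I should be moving forward",
                    check=lambda obs: obs["forward_speed"] > 0.0,
                    repairs=["back_up", "turn_left", "turn_right"]),
    ]
    monitor_step(expectations, {"forward_speed": 0.0}, act=print)

The same pattern would cover the satellite example, with an expectation that
instructions should keep arriving and a repair that re-orients the antenna.
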
Standard approaches to brittleness, in which it is up to the
designer to predict specific, individual anomalies by incorporating
extensive knowledge about the world, have not been
successful. Realistic environments have too many contingencies to be
enumerated a priori. The issue then is how to design systems that can
respond to situations they were not explicitly designed for, and
regarding which they do not have explicit knowledge. Our hypothesis is
that this ability can largely be captured by a special-purpose
anomaly-processor that, when coupled with an existing AI system,
improves the latter's robustness. We have created a model of such a
processor, which we call the metacognitive loop
(MCL). Experiments with several pilot implementations of MCL have met
with enough success to strongly suggest that, at its full potential,
MCL will be a significant advance toward robust intelligence.
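
As a rough architectural sketch of that coupling (again with every name,
including the host interface of expectations(), observations(), and
apply_guidance(), assumed purely for illustration rather than taken from MCL),
the anomaly processor can be written as a thin loop that watches a host system
and hands suggestions back to it:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Expectation:
        description: str
        check: Callable[[Dict], bool]              # does this still hold?
        repairs: List[str] = field(default_factory=list)

    class MetacognitiveLoop:
        """Thin, domain-neutral loop: watch the host for expectation
        violations and hand suggested responses back to it."""
        def __init__(self, host):
            self.host = host

        def step(self):
            obs = self.host.observations()
            for exp in self.host.expectations():
                if not exp.check(obs):                   # something is amiss
                    suggestion = exp.repairs[0] if exp.repairs else "ask_for_help"
                    self.host.apply_guidance(suggestion) # let the host respond

    class Satellite:
        """Toy host standing in for the satellite example."""
        def expectations(self):
            return [Expectation("I should be receiving instructions",
                                check=lambda obs: obs["seconds_since_command"] < 600,
                                repairs=["turn_back_toward_ground_station"])]
        def observations(self):
            return {"seconds_since_command": 7200}
        def apply_guidance(self, suggestion):
            print("host responds to guidance:", suggestion)

    MetacognitiveLoop(Satellite()).step()

Nothing in the loop itself refers to satellites or robots; all domain
knowledge stays in the host, which is the sense in which such an anomaly
processor could be coupled with an existing AI system.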