Next: Discussion
Up: The control of understanding
Previous: The interest restriction
The main criterion constraining the understanding process performed by
reasoning is the ontology. As understanding progresses, a concept being
manipulated will often need to shift out of the ontological grid cell it
initially occupies. A vertical or horizontal transition requires more
effort than remaining in the same cell; transitioning both vertically
and horizontally is more difficult still.
In addition, a small set of heuristics related to the ontology
serves to bound understanding (see [Moorman]):
- Physical types can be transitioned into other domains more easily
than types from other domains can be transitioned into the physical
domain. Since humans are physical entities with a great deal of
experience with other physical entities, it is ``easier'' to believe in
the existence of a novel, non-physical entity formed from a physical
analogue than to accept the creation of a new type of physical entity.
Consider ``John saw the days fly by.'' Is this a novel use of saw
and fly, created by shifting physical concepts into the temporal
domain, or is it a novel use of days, created by treating a temporal
object as a physical one?
- An object may transition to an action by creating an action which
captures a function of that object, and vice versa. English, in
particular, has many lexical examples of this: a fax is the thing
you send when you fax someone, and a (Star Trek) transporter is the
device used to transport material from one location to another.
- An object may transition to a state by creating a state which
captures a primary attribute of that object, and vice versa. This
transition yields many common similes and metaphors, such
as ``hungry as a bear.''
- Agents and objects transition easily in either direction. This
follows from two observations. First, agents exist as embodied
entities in the world ([Johnson]), which explains the agent-to-object
transition; for example, one may treat John as a physical object.
Second, it is possible to view objects as though they possess
intention ([Newell]), which enables the object-to-agent transition;
for instance, a thermostat may be thought of in terms of agency,
i.e., it ``wants'' to keep the house at a constant temperature.
- Concepts from the three so-called ``psychological'' domains
(emotional, mental, and social) can transition among those three
domains more easily than into or out of the other domains.
- Make the minimal changes necessary. This is simply a general rule,
akin to Occam's Razor. It follows from the earlier-discussed idea that
satisfaction ultimately drives the understanding process: stop once
you have a ``good enough'' understanding to allow the higher cognitive
task to continue.
By combining the three basic movement types with the high-level
heuristics, I have produced an ordering of the amount of
cognitive effort required to manipulate concepts (from easiest
to most difficult):
1. Concepts may transition within a single cell.
2. Agents may be treated as objects and objects may be treated as
agents.
3. Concepts may vertically transition according to the modification
heuristics.
4. Mental, emotional, and social concepts may transition horizontally
among those domains more easily than into the other domains.
5. Physical domain concepts may transition to other domains
(horizontal motion).
6. Other domain types may transition to the physical domain
(horizontal motion).
7. Combinations of 2-5 may occur.
Within this ordering, however, operations which result in
the minimal changes are preferred over those which are more
complex.
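The ordering above can be read as a cost function over candidate
concept manipulations. The following is a minimal sketch in Python;
the `Move` names, the numeric costs, and the idea of summing costs for
combinations are my own illustrative assumptions, not values taken
from ISAAC:

```python
from enum import Enum

class Move(Enum):
    """Kinds of ontological movement, ordered from easiest to hardest.
    The numeric values are illustrative effort costs, not ISAAC's."""
    INTRACELLULAR = 1      # transition within a single grid cell
    AGENT_OBJECT = 2       # agent treated as object, or vice versa
    VERTICAL = 3           # vertical transition via modification heuristics
    PSYCH_HORIZONTAL = 4   # among the mental, emotional, and social domains
    PHYSICAL_OUT = 5       # physical domain concept moves to another domain
    PHYSICAL_IN = 6        # another domain's concept moves into the physical

def effort(moves):
    """Cognitive effort of an interpretation; a combination of moves
    (case 7 in the ordering) simply accumulates its individual costs."""
    return sum(m.value for m in moves)

def prefer(candidates):
    """Minimal-change heuristic: pick the least-effort interpretation."""
    return min(candidates, key=lambda c: effort(c[1]))

# 'John was a bear': reading John as the name of a bear needs no movement.
interpretations = [
    ("John is the name of a bear", []),
    ("John (human) acts like a bear", [Move.AGENT_OBJECT]),
]
print(prefer(interpretations)[0])  # the zero-cost reading wins
```

The tie-breaking rule in the text (prefer minimal changes within the
ordering) falls out naturally here, since an empty move list costs
nothing.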
By making use of this set of ontological rules, the creative
understanding process can execute without appealing to other
higher-level heuristics which might have to be tailored to fit a
particular situation. Thus,
THE ONTOLOGICAL REQUIREMENTS AND THE READING DOMAIN PROVIDE SUFFICIENT CONSTRAINTS ON THE UNDERSTANDING PROCESS.
The ontological bounding process also allows me to operationalize the
earlier idea of suspension of disbelief. The ``level'' of disbelief
suspension required can be viewed in terms of how much ontological
transitioning is needed to make a story ``fit'' the background
knowledge of the reader. More complex transitions mean that more
suspension of disbelief is required. Since some transitions are too
severe to be allowed, this also captures the fact that there is a
limit to how much belief a reasoner is willing to sacrifice.
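This operationalization can be sketched as a total transition cost
with a cutoff beyond which the reader refuses to suspend disbelief.
The cost values and the threshold below are assumptions for
illustration only:

```python
# Illustrative costs for the basic movement types; the values and the
# rejection threshold are assumptions of this sketch, not ISAAC's.
COSTS = {"intracellular": 1, "vertical": 2, "horizontal": 2, "both": 4}
MAX_SUSPENSION = 10  # beyond this, a transition is "too severe"

def suspension_required(transitions):
    """Total disbelief the reader must suspend to fit the story."""
    return sum(COSTS[t] for t in transitions)

def acceptable(transitions):
    """There is a limit to how much belief a reasoner will sacrifice."""
    return suspension_required(transitions) <= MAX_SUSPENSION

# A single vertical shift demands little suspension...
assert acceptable(["vertical"])
# ...while a long chain of cross-domain shifts exceeds the limit.
assert not acceptable(["both"] * 3)
```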
These ideas can be illustrated with some examples.
First, consider the example sentence ``John was a bear.'' The
sentence has a number of possible interpretations, as described in
Chapter 4:
- John (a human) acts like a bear.
- John (a human) has been transformed into a bear.
- John (an agent) is a were-bear (a magical creature
which can transform from human form to bear form and back
to human).
- John (a bear) is a bear.
Some of these are more probable than others. From an ontological
perspective, if John is simply the name of a particular bear, then no
movement in the ontological grid is required. This is the reading
ISAAC prefers when the story provides no additional information.
A second example comes from the Meta-AQUA system ([Cox]), which reads
a story involving a drug-sniffing dog. Meta-AQUA initially knows only
that dogs bark at agents which threaten them. In the story, however,
a dog is barking at a suitcase. In ISAAC, the system is presented with
two possibilities: its knowledge of dogs is wrong, or its knowledge of
suitcases is. The first involves altering an existing physical agent
to create a variant of it, an intracellular movement. The second
involves shifting a physical object into the physical agent cell, a
vertical movement. The intracellular movement is preferred; therefore,
my system prefers the understanding that dogs also bark at drugs.
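The choice between the two repairs can be sketched as a comparison of
movement costs, with an intracellular change assumed cheaper than a
vertical one; the labels and numbers here are illustrative:

```python
# Two candidate knowledge repairs for "a dog barks at a suitcase",
# each tagged with the kind of ontological movement it requires.
# The numeric costs are assumptions made for this illustration.
MOVE_COST = {"intracellular": 1, "vertical": 2}

repairs = {
    "dogs also bark at drugs (variant physical agent)": "intracellular",
    "suitcases can threaten (object to agent)": "vertical",
}

# The minimal-change heuristic selects the cheaper repair.
best = min(repairs, key=lambda r: MOVE_COST[repairs[r]])
print(best)  # the intracellular repair is preferred
```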
In the story Men Are Different, a robotic archaeologist studies the
destroyed civilization of mankind; the story is presented as a
first-person narrative. ISAAC knows that narrators, archaeologists,
and protagonists are human and that robots are industrial tools, yet
the narrator, archaeologist, and protagonist of this story is a robot.
ISAAC can choose either to create a new type of robot which embodies
agent-like aspects, or to change the definitions of narrators,
archaeologists, protagonists, and the actions in which they may
participate. Creating a single new robot concept represents a smaller
change than altering the definitions of all the other concepts. As a
result of the minimal change heuristic, then, using base-constructive
analogy to create the intelligent robot concept is the preferred
option.
The final example involves the story Zoo, in which the reader is
presented with an intergalactic zoo which travels from planet to
planet, giving the inhabitants of those planets a chance to view
exotic creatures. At the end of the story, however, the reader is
shown the true nature of the intergalactic ship: it is an opportunity
for the ``creatures'' on the ship to visit exotic planets, protected
from the dangerous inhabitants by the cages they are in. To understand
the new zoo, the system draws an analogy between the known zoo and the
novel one. The result is simply a shift from one physical object to
another.
Kenneth Moorman
11/4/1997