...Now we know. And knowing is half the battle.
-G.I. Joe animated series
Every researcher working in artificial intelligence has faced the issue of representing knowledge in their theories and models. The choices made concerning knowledge representation and organization can have a powerful impact on the outcome of the research; unfortunately, these decisions are often never explicitly presented to the audience unless the work directly concerns knowledge representation. My commitment to a functional theory, as described in Chapter 2, requires that I elaborate on the underlying representations in order to see precisely where the accomplishments of the theory arise. This chapter addresses the issues, assumptions, and claims my work puts forth concerning knowledge representation. Although knowledge representation is not the focus of my research, it is a crucial aspect of the overall theory. In particular, I will address three major questions:
I begin by discussing why an artificial intelligence theory should even consider using an explicit representation of the world. This aspect of the work remains at the level of assumption, as there is not enough empirical evidence to decide the issue one way or the other. The next section, then, explains why my research assumes the need for an explicit representation.