Artificial Intelligence/Knowledge Representation

Knowledge representation (KR) is the name we give to how we encode knowledge, beliefs, actions, feelings, goals, desires, preferences, and other mental states in artificial systems. Since nearly all AI research is done by programming computers, these representations usually end up as symbols and data structures in software. However, we can discuss them at a higher level than that of code: the information level.

Suppose, for example, we have a list of the countries in North America in the year 2000. The knowledge we encode would be the three countries: the United States, Canada, and Mexico. This is an unordered collection. We can then represent this knowledge in different ways in different programming languages. For example:

Python:
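One possible encoding, as a simple list (a sketch; the variable name is illustrative):

```python
# A Python list holding the three country names.
north_america = ["United States", "Canada", "Mexico"]
```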

LISP:
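One possible encoding, as a quoted list (a sketch in Common Lisp; the variable name is illustrative):

```lisp
; A LISP list holding the three country names.
(defparameter *north-america* '("United States" "Canada" "Mexico"))
```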

...and so on. The world is a complex place, and part of the AI programmer's job is to decide which details to encode in the KR and which to ignore. Sometimes this is simple: if you were programming a computer to decide on chess moves, there's no need to encode exactly where on its square each piece sits, because that detail is irrelevant to which move will be made. Other decisions are more difficult.

In general, though, you want as much complexity as you need and no more, because added representational complexity results in more complicated programming and more expensive computation when you run the AI program.

Having "as much complexity as you need" is a call for "representational adequacy," which basically means that your representation has the information it needs to do what it needs to do. Three ways to evaluate a KR are in terms of clarity, precision, and naturalness.

Clarity

A KR that has clarity is one that is not ambiguous. One of the reasons we use programming languages at all is that the languages we use naturally, such as English, are ambiguous. Programming languages are designed to have no ambiguity.

Precision

A KR with precision gets to the level of detail necessary. In chess, it's not enough to know that a piece is on the board--the agent needs to know which square it's on, what kind of piece it is, and what color it is. Whether or not a piece has a scratch on it is not necessary for playing chess, so details like this can be left out of the representation.
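As a sketch (in Python, with hypothetical field names), a chess-piece representation at this level of precision might record only the details the agent needs:

```python
# Keep only what precision requires: square, kind, and color.
# Surface details such as scratches are deliberately left out.
piece = {"square": "e4", "kind": "knight", "color": "white"}
```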

Naturalness

A third criterion is naturalness, which is how easily the KR can be interpreted by people, or, perhaps, how easily it can be translated into something a person can understand. This is important because in many instances people don't want to simply trust what an AI tells them--they want an explanation. Being able to see what reasons the AI used to generate its output is only useful if the KR has a degree of naturalness.

Naturalness is also important for the programmers. It's good to have a natural KR so that future programmers (perhaps even you!) will be able to make sense of what was written at a later date. For this reason, it's important that your KR have explicit semantics and syntax.

For example, look at this representation:

 red(car)

What does this mean? red and car are just symbols; they can mean whatever we want them to mean. Let's assume that the KR was designed to be natural, and they mean what they appear to mean in English. There's still ambiguity. Does it mean that all cars are red? That a particular car is red? Is it a function that returns true if the car is red? Or is it a command to paint a car red?

What looks obvious at programming time might be confusing later.

Let's return to our example above, representing North American countries. There's no reason why the three countries in the list need to be ordered. However, in LISP and in Python the list data structures are ordered. In the language Smalltalk, for example, there is a data structure called "bag" which can hold an unordered collection of items. The programming language we choose often puts constraints on the KR.
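Python has no built-in "bag", but a set gives an unordered collection, and collections.Counter can play the role of a bag (an unordered multiset); a sketch:

```python
from collections import Counter

# An unordered collection: iteration order carries no meaning,
# which matches the knowledge being encoded.
north_america = {"United States", "Canada", "Mexico"}

# A bag (multiset): unordered, but duplicates are counted.
letters = Counter("banana")
```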

How many countries are in North America? There are three, but is this explicitly represented in the code above? No. Lists are represented. If we wanted to encode the list lengths explicitly, we could do something like this:
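Assuming the Python list from earlier, one hedged sketch is to store the count explicitly alongside the list:

```python
# Hypothetical structure pairing the list with an explicit count.
north_america = {
    "count": 3,
    "countries": ["United States", "Canada", "Mexico"],
}
```

In practice, len(north_america["countries"]) recomputes the count on demand, which is one reason programmers usually leave it implicit.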

However, it's so easy to count the things in a list that most programmers would just recalculate the list length with a function when needed. This also keeps the system from needing to change the explicitly represented number every time the list length changes.

This gets at the inferential adequacy of the KR, which is how well the system can use the KR to infer what it needs to. Representational efficiency refers to how efficiently the AI can use the KR to do its reasoning--in terms of time or computational resources, as appropriate.

Although any programmer can make up any KR he or she wants to, there are many already created, and their properties are well understood. It's often advisable to use a well-known KR rather than to make up your own. In this book we'll show you many different KRs. In the next parts, we'll talk about two: semantic networks and frames.

Important Vocabulary:
 * knowledge representation
 * KR
 * representational adequacy
 * clarity
 * precision
 * naturalness
 * inferential adequacy
 * representational efficiency

Next: Semantic Networks