Metaphor and Common-Sense Reasoning

Inferences based on metaphors appear to play a major role in human common-sense reasoning. This paper identifies and analyzes general inference patterns based upon underlying metaphors, in particular the pervasive balance principle. Strategies for metaphor comprehension are explored, and analogical mapping structures are proposed as a means of representing metaphorical relationships between domains. In addition, a framework for a computational model embodying principles of metaphorical common-sense reasoning is discussed.


Introduction
The theory that metaphor dominates large aspects of human thinking, as well as playing a significant role in linguistic communication, has been argued with considerable force [26, 24, 8, 5]. However, the validity of such a theory is a matter of continuing debate that appears neither to dissuade its proponents nor convince its detractors. Being among the proponents, we propose to develop a computational reasoning system for performing metaphorical inferences. If such a system exhibits cognitively plausible common-sense reasoning capabilities, it will demonstrate, at the very least, the utility of metaphorical inference in modeling significant aspects of naive human reasoning. This paper reviews our initial steps towards the development of a computational model of metaphor-based reasoning.

Experiential Reasoning vs Formal Systems
Humans reason and learn from experience to a degree that no formal system, AI model, or philosophical theory has yet been able to explain. The statement that the human mind is (or contains) the sum total of its experiences is in itself rather vacuous. A more precise formulation of experience-based reasoning must be structured in terms of coordinated answers to the following questions: How are experiences brought to bear in understanding new situations? How is long-term memory modified and indexed? How are inference patterns acquired in a particular domain and adapted to apply in novel situations? How does a person "see the light" when a previously incomprehensible problem is viewed from a new perspective? How are the vast majority of irrelevant or inappropriate experiences and inference patterns filtered out in the understanding process? Answering all these "how" questions requires a process model capable of organizing large amounts of knowledge and mapping relevant aspects of past experience to new situations. Some meaningful starts have been made towards large-scale episodic-based memory organization [32, 33, 34, 28, 25] and towards episodic-based analogical reasoning [9, 12, 7]. Bearing these questions in mind, we examine the issue of common sense reasoning in knowledge-rich mundane domains.

Our central hypothesis is:
Experiential reasoning hypothesis: Reasoning in mundane, experience-rich recurrent situations is qualitatively different from formal, deductive reasoning evident in more abstract, experimentally contrived, or otherwise non-recurrent situations (such as some mathematical or puzzle-solving domains).

Carbonell and Minton
In the statement of our hypothesis we do not mean to exclude experience-rich metaphorical inference from scientific or mathematical thought. Rather, we claim that formal deductive inference is definitely not the dominant process in mundane reasoning. In essence, the experiential reasoning hypothesis states that structuring new information according to relevant past experience is an important aspect of human comprehension --perhaps more important than other aspects studied thus far in much greater depth.
Common-sense experience-rich reasoning consists of recalling appropriate past experiences and inference patterns, whereas solving abstract problems divorced from real-world experience requires knowledge-poor search processes more typical of past and present AI problem-solving systems.
Since computer programs perform much better in simple, elegant, abstract domains than in "scruffy" experience-rich human domains, it is evident that a fundamental reasoning mechanism is lacking from the AI repertoire. The issue is not merely that AI systems lack experience in mundane human scenarios -- they would be unable to benefit from such experience if it were encoded in their knowledge base. We postulate that the missing reasoning method is based on the transfer of proven inference patterns and experiential knowledge across domains. This is not to say that humans are incapable of more formal reasoning, but rather that such reasoning is seldom necessary, and when applied it requires a more concerted cognitive effort than mundane metaphorical inference.
There is evidence that human expertise, far beyond what we would label common-sense reasoning, draws upon past experience and underlying analogies. For instance, the master chess player is not a better deductive engine than his novice counterpart. Rather, as Chase and Simon have shown, his advantage rests largely on recognizing board configurations accumulated through extensive past experience.

People's well-developed ability to perform analogical reasoning is at least partly responsible for what we call "common-sense" reasoning. Roughly speaking, analogical reasoning is the process by which one recognizes that a new situation is similar to some previously encountered situation, and uses the relevant prior knowledge to structure and enrich one's understanding of the new situation.
We refer to metaphorical reasoning as that subset of analogical reasoning in which the analogy is explicitly stated or otherwise made evident to the understander. For instance, comprehending "John is an encyclopedia" entails metaphorical reasoning, since the analogy between John and an encyclopedia is explicitly suggested. However, constructing a novel analogy in order to explain some new situation is a different task, which requires searching memory for a previously encountered similar situation. Both of these forms of inference may be labeled common-sense reasoning insofar as they require access to large amounts of past knowledge and reach conclusions without benefit of formal deduction.

Patterns of Metaphorical Inference
A metaphor, simile or analogy can be said to consist of three parts: a target, a source and an analogical mapping. For example: "John was embarrassed. His face looked like a beet." Here the target is "John's face" and the source is "a beet". The analogical mapping transmits information from the source to the target domain. In this case, the mapping relates the color of John's face to the color of a beet. Our use of the same terminology to describe metaphors, similes and analogies reflects our opinion that they are all merely different linguistic manifestations of the same underlying cognitive process: analogical reasoning. That is, they differ primarily in their form of presentation rather than in their internal structure. Consequently, although our choice of terminology may indicate that we are centrally concerned with the phenomenon of metaphor, we mean to include simile and analogy as well. Consider, for instance, the many guises of the weight metaphor:

Arms control is a weighty issue.
The worries of a nation weigh heavily upon his shoulders.
The Argentine air force launched a massive attack on the British fleet. One frigate was heavily damaged, but only light casualties were suffered by British sailors. The Argentines paid a heavy toll in downed aircraft.

Not being in the mood for heavy drama, John went to a light comedy, which turned out to be a piece of meaningless fluff.
Pendergast was a real heavyweight in the 1920s Saint Louis political scene.
The crime weighed heavily upon his conscience.
The weight of the evidence was overwhelming.

The Physical Metaphor Hypothesis
Weight clearly represents different things in the various metaphors: the severity of a nation's problems, the number of attacking aircraft, the extent of physical damage, the emotional effect on audiences of theatrical productions, the amount of political muscle (to use another metaphor), the reaction to violated moral principles, and the degree to which evidence is found to be convincing. In general, more is heavier; less is lighter. One may argue that since language is heavily endowed with words that describe weight, mass and other physical attributes (such as height and orientation) ...

Both conservative and liberal arguments appeared to carry equal weight with the president, and his decision hung in the balance. However, his long-standing opposition to abortion tipped the scale in favor of the conservatives.
The Steelers were the heavy pregame favorites, but the Browns started piling up points and accumulated a massive half-time lead. In spite of a late rally, the Steelers did not score heavily enough to pull the game out.
The job applicant's shyness weighed against her, but her excellent recommendations tipped the scales in her favor.
In each example above the same basic underlying inference pattern recurs, whether representing the outcome of a trial, statements of relative military power, decision-making processes, or the outcome of a sporting event. The inference pattern itself is quite simple: it takes as input signed quantities whose magnitudes are analogous to the stated "weight" and whose signs depend on which side of a binary issue those weights correspond to, and selects the side with the maximal weight, computing some qualitative estimate of how far out of balance the system is. Moreover, the inference pattern can also serve to infer the rough weight of one side if the weight of the other side and the resultant balance are known.
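The balance pattern just described is easy to state procedurally. The following sketch is our own illustration, not code from any implemented system: it selects the heavier side of a binary issue, estimates how far out of balance the system is, and runs the pattern backwards to infer a missing weight.

```python
def balance(pro, con):
    """Select the heavier side of a binary issue.

    pro, con: lists of non-negative magnitudes (the 'weights' attached
    to each side).  Returns the winning side and a qualitative degree
    of imbalance between 0 (perfectly balanced) and 1 (one-sided).
    """
    total_pro, total_con = sum(pro), sum(con)
    total = total_pro + total_con
    if total == 0 or total_pro == total_con:
        return "balanced", 0.0
    tilt = abs(total_pro - total_con) / total
    return ("pro" if total_pro > total_con else "con"), tilt

def infer_other_side(known_weight, resultant):
    """Run the pattern backwards: given one side's weight and the
    observed imbalance in its favor, infer the other side's rough weight."""
    return known_weight - resultant

# The Steelers/Browns example, with points as weights:
side, tilt = balance(pro=[21], con=[10, 7])   # the 'pro' side wins
```

The qualitative tilt value stands in for the informal judgments ("heavily favored", "narrowly tipped") that the linguistic examples express.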

Knowledge Acquisition via Analogical Mappings
The following example, found in a children's book, illustrates an explanation in which the reader (presumably a child) is expected to create an analogical mapping and transfer information across domains.
A motorcycle is a vehicle. Like a car it has a motor. But it looks more like a bicycle.
The author attempts to explain the concept of a motorcycle by referring to other, presumably more familiar, objects. But his statement implies much more than is explicitly stated. For instance, it suggests not only that a motorcycle has a motor, but that it has a motor in the same way that a car has a motor: that the motor is an internal combustion engine, that it uses gasoline, that it causes the machine to move, etc. The reference to a car is essential; consider the effect of substituting "electric shaver" for "car" in the example. (After all, electric shavers have motors too, but their motors are not a means of propulsion.) Certainly, drawing an analogy to electric shavers would not be nearly as helpful in communicating what a motorcycle is.
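The point about choosing the right source can be made concrete. In this toy sketch, which is entirely our own and whose attribute lists are invented for illustration, transferring the "has a motor" relation from a car carries along the car's elaborations of that relation, while an electric shaver would carry along the wrong ones:

```python
# Invented, simplified source descriptions for illustration only.
CAR = {"motor": {"type": "internal combustion", "fuel": "gasoline",
                 "role": "propulsion"}}
SHAVER = {"motor": {"type": "electric", "fuel": "electricity",
                    "role": "moves the blades"}}

def explain_motorcycle(source):
    """Build a (partial) motorcycle concept by inheriting the source's
    elaboration of the 'motor' attribute through the analogy."""
    return {"motorcycle": {"motor": dict(source["motor"])}}

# "Like a car it has a motor" licenses the propulsion inference;
# "like an electric shaver" would not.
assert explain_motorcycle(CAR)["motorcycle"]["motor"]["role"] == "propulsion"
```

The choice of source determines which elaborations ride along with the explicitly stated relation.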
Implementing domain comparisons in a computer is typically accomplished by attempting to find matches between the representations of the target and source domains. As we shall see in the next section, these representations are typically graphs or equivalent structures. Although the details of the matching process vary considerably depending on the representation system used, the computation can be quite expensive if performed upon arbitrary domains. Indeed, the related subgraph-isomorphism problem is NP-complete [19]. Given a precise formulation of the matching problem, it is easy to demonstrate that it too is intractable unless it is bounded in some principled way.
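To see concretely why unconstrained matching is expensive, consider this brute-force sketch of ours: it scores every possible pairing of source nodes with target nodes, which grows factorially with domain size and is exactly the blow-up that principled bounds are meant to avoid.

```python
from itertools import permutations

def match_score(src_rels, tgt_rels, pairing):
    """Count how many source relations are preserved by a node pairing."""
    mapped = {(pairing[a], rel, pairing[b]) for a, rel, b in src_rels}
    return len(mapped & set(tgt_rels))

def best_mapping(src_nodes, tgt_nodes, src_rels, tgt_rels):
    """Exhaustive search over all node pairings -- O(n!) of them."""
    best, best_score = None, -1
    for perm in permutations(tgt_nodes, len(src_nodes)):
        pairing = dict(zip(src_nodes, perm))
        score = match_score(src_rels, tgt_rels, pairing)
        if score > best_score:
            best, best_score = pairing, score
    return best, best_score

# "His face looked like a beet": relations as (node, relation, node) triples.
src = [("beet", "color", "red")]
tgt = [("face", "color", "red")]
mapping, score = best_mapping(["beet", "red"], ["face", "red"], src, tgt)
```

With two nodes per domain the search is trivial; with realistically sized domains the permutation space makes this approach hopeless, motivating the pragmatic strategies discussed in the text.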

Although analogies, such as the one above, can obviously be used to great advantage in transmitting new information, the reader is often left in the position of not knowing how far to carry the analogy. To a child who has never seen a motorcycle, the previous description of a motorcycle, though informative, is still quite ambiguous. "Does a motorcycle have pedals?" he may ask. In order to gauge the extent of the analogy, to verify which of his inference patterns relevant to cars and bicycles are valid for motorcycles, the child must either read further or find a picture of a motorcycle.
A priori, there is no way for him to be sure which inferences to make. But at least the set of sensible questions he may ask will be focused by the analogy. Thus, it is reasonable to ask about handlebars or pedals, but not about whiskers or wings. In most mundane situations, knowing which inferences are correct seldom poses a significant problem for people, largely due to the fact that there are characteristic ways of expressing metaphors so that the mapping problem is easier to solve. For instance, consider that the truly novel metaphor is rarely encountered. Through frequent use, many metaphors acquire idiomatic meanings, to a greater or lesser degree. We refer to these metaphors as frozen. "John is a hog" and "Sheila is a dog" both exemplify frozen metaphors. The latter would probably be interpreted as a rude comment concerning Sheila's looks, rather than a compliment on her loyalty, which seems an equally reasonable interpretation given only one's knowledge about dogs. Frozen metaphors are easy to understand because the analogical mapping has been (to some degree) precomputed, and so does not have to be reconstructed, only remembered and reapplied. Hence, neither a complex matching process nor prior knowledge about the target is necessary in order to find the mapping. There is little question of which are the right inferences and which are the wrong ones.

Salience and Novel Metaphors
If a metaphor is novel, other strategies are available for coping with the complexity of the mapping problem. One way is to focus on salient features of the source [30, 35]. Consider the example "Billboards are like warts", in which both the target and source are familiar objects. Most people interpret this as meaning that billboards stick out and are ugly. Their mapping relates attributes that are common to both source and target, but particularly emphasizes those, such as "ugliness", that are high-salience attributes of warts, the source. It is our contention that by focusing on prominent features and ignoring unimportant ones, the computational complexity of the mapping problem is reduced.
Concentrating the initial mapping on salient features of the source is an effective strategy even when one's knowledge of the target domain is limited. In fact, it is likely that the salient features are the very ones that should be mapped into the target domain, as is the case in the following metaphor: "... is the Freddie Laker of consumer electronics." Although the target is an unknown company, and the metaphor is novel, it is understandable simply because the source, Freddie Laker of Laker Airlines, has certain outstandingly salient features. Of course, the creator of the metaphor expects that his audience will all have the same opinion as to which of Laker's features are salient. Why certain features are considered universally salient whereas others are not is a difficult problem in its own right, one which we will not pause to consider here.
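The salience strategy can be sketched minimally as follows; this is our own construction, and the salience scores are invented for illustration. Instead of matching every attribute of the source, only its high-salience features are offered to the mapping.

```python
# Hypothetical salience scores for the attributes of "wart".
WART_ATTRIBUTES = {"ugly": 0.9, "sticks_out": 0.8,
                   "skin_growth": 0.3, "benign": 0.1}

def salient_features(source_attrs, threshold=0.5):
    """Keep only attributes prominent enough to be worth mapping."""
    return {attr for attr, s in source_attrs.items() if s >= threshold}

# "Billboards are like warts": only the prominent features transfer.
transferred = salient_features(WART_ATTRIBUTES)   # ugliness, sticking out
```

Filtering by salience shrinks the space the matcher must explore, which is the computational point made in the text.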
We have examined two types of metaphors which can be understood in spite of incomplete knowledge of the target: frozen metaphors and metaphors based on the source's salient features.
These illustrate just two of the many ways pragmatic considerations enable one to bypass much of the complexity of the mapping problem. Occasionally, however, one cannot avoid more complex versions of the mapping problem. For us, this is the most interesting case. It occurs frequently during explanations involving extended analogies, such as when a grade-school mathematics teacher begins his algebra class by proclaiming: "An equation is like a balance. You must keep the same amount of weight on each side of the equals sign...."
Certainly there will be students in the class for whom this is a novel idea, and who spend the next ten minutes desperately trying to find the intended analogical mapping. Or consider a secondary-school biology text which begins a chapter on the human nervous system by comparing it to a telephone network, or a treatise on "Hamlet" whose thesis is that the protagonist's life is a metaphor for adolescence. When confronted with one of these analogies in context, one may need to search for appropriate hypotheses; one's analogical mapping will be elaborated and changed as one's understanding of the target domain grows. Consider, for example, the following passage from Milton Friedman:

Is it really conceivable that the Fed produced these gyrations on purpose, given its repeated protestations that it was committed to a steady and moderate rate of monetary growth?
Why the gyrations? A better explanation is that the Fed is, as it were, driving a car with a highly defective steering gear. It is driving down a road with walls on both sides. It can go down the middle of the road on the average only by first bouncing off one wall and then off the opposite wall. Not very good for the car or its passengers or bystanders, but one way to get down the road. This interpretation raises two key questions: first, why doesn't the Fed replace the defective steering gear? Second, what course will this defective steering gear lead to?

Friedman communicates his belief that this situation is bad -- but not totally disastrous -- without ever having to explain the underlying monetary and fiscal reasons. In fact, when we informally questioned people about what exactly Friedman is referring to when he speaks of "walls", most admitted that they weren't really sure. A typical response was that the "walls" represented some sort of "limits".
And yet, these people felt that they had understood, or gotten the gist of, the metaphor. Apparently one's analogical mapping does not have to be particularly detailed, as long as key inferences can be made. It seems that once certain connections or beachheads have been established between the target and source domains, people are content to incrementally elaborate the mapping as they find it necessary during further reading or problem solving.

A Classification Based on Processing Requirements
In our discussion thus far, we have identified various analogical mapping strategies whose applicability depends upon the properties of the metaphor under consideration. We therefore offer the following pragmatic classification, based on what we believe are meaningful distinctions in the type of processing employed during comprehension. We caution that these categories should not be viewed as distinct; it seems more reasonable to view metaphors as occurring along a continuum with respect to the criteria presented below.

Representing Metaphors: The LIKE Relation
In the previous paragraphs we have discussed the problems involved in finding an analogical mapping and making metaphorical inferences. We now turn our attention from the process of comprehension to issues of representation. How do we represent an analogical mapping in a computational model? We know that our representation must satisfy two requirements: 1. it must facilitate the transfer of information from the source domain to the target domain, and 2. it must be dynamic, enabling the analogy to be elaborated over time.
In this section we discuss how analogies (and metaphors) can be represented in semantic networks so as to satisfy these requirements. Although our work was motivated by representation languages such as KL-ONE [4], SRL [39], NETL [17] and KRL [2], we intend the ideas presented below to be applicable on a broad basis, and therefore make no commitment to any particular representational scheme. The notation presented in our diagrams is meant to be purely illustrative. Exactly how this is managed is of importance to the domain matching process, but it need not concern us greatly for the purposes of this discussion. Because the mapping structure can be modified dynamically, at any particular time it represents the current conception of what the metaphor means. Of course, this implies that the mapping structure must be retained for some unspecified duration. We assume that the mapping structure will be "forgotten" (i.e., discarded by some autonomous process supporting the representation system) if it does not continue to be accessed as a source of inferences when target domain information is retrieved from or added to memory. Certain nodes, such as the FUNNY attribute describing puppets, are included for illustrative purposes. Note that in both the target and the source there is a node signifying the "relative size differences" of the objects (admittedly a gross representational simplification). Although this node is not part of the mapping, it might very well be included later if the mapping is extended.
As we pointed out in the previous section, a metaphor may become frozen through frequent use.
An advantage gained by the use of mapping structures is that we can model this phenomenon computationally. The statement "John eats like a pig" is a typical example of a frozen metaphor. Notice that it is understandable even though we are using "John" as a generic person. In our model, the mapping structure corresponding to "...eats like a pig" is associated with the section of the knowledge network where information about pigs' eating habits is stored. Parsing "John eats like a pig" requires retrieving this mapping structure, noticing the exact correspondence between the source in the structure and the source in the new metaphor, and then instantiating the structure with "John" as the target domain. Instantiation is relatively easy to do, because the mapping structure specifies which nodes map from the source [8]. Obviously we have glossed over many important problems in this description, such as how mapping structures can be retrieved given a source description, and whether a new physical copy of the mapping structure must be generated for each instantiation of a frozen metaphor. These questions are being studied at the present time.
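The retrieval-and-instantiation step can be sketched as follows. This is a toy model of ours, and the stored correspondences are invented: because the mapping structure for "...eats like a pig" is precomputed, comprehension reduces to looking it up and binding the new target.

```python
# Precomputed mapping structures, indexed by their source description.
# The stored correspondences are invented for illustration.
FROZEN_MAPPINGS = {
    ("eats-like", "pig"): {"manner": "sloppy", "quantity": "excessive"},
}

def instantiate(source_key, target):
    """Retrieve a frozen mapping structure and bind it to a new target.
    Returns None for a novel metaphor, whose mapping would have to be
    constructed by the (far costlier) matching process."""
    structure = FROZEN_MAPPINGS.get(source_key)
    if structure is None:
        return None
    return {f"{target}.{attr}": val for attr, val in structure.items()}

john = instantiate(("eats-like", "pig"), "John")
# {'John.manner': 'sloppy', 'John.quantity': 'excessive'}
```

The lookup-then-bind shape is what makes frozen metaphors cheap: no matching is performed at comprehension time.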

Generalizing Mapping Structures
In the previous section, mapping structures were proposed as a means for representing arbitrary inter-domain correspondences. It is our intention that mapping structures be viewed as data structures which implement LIKE relations. That is, a LIKE relation still exists between the source and target domains of an analogy, but it is too complex to be implemented with a simple link. Instead, a more elaborate mechanism is required to represent the internal structure of the analogical relationship. The indirect implementation of an analogical relationship as a data structure declaratively specifying the mapping process provides a necessary extra level of abstraction along its functional dimension. Thus, one can refer to the entire analogy as a unit, or one can access and elaborate the constituent parts of the mapping structure.
At the present time, we are considering other relations that may be better implemented by mapping structures rather than by simple links. Perhaps the most obvious candidate is the IS-A relation, which provides a way to structure a knowledge network into a type hierarchy so that properties of a class representative can be mapped automatically to members of that class. We refer to this as vertical inheritance, because each concept inherits from those above it in the type hierarchy. Historically, vertical inheritance has been used in knowledge representation systems to implement certain types of default reasoning. For example, knowing that Clyde is an elephant, and that elephants have trunks, a system might use inheritance to infer that Clyde has a trunk.

To see why the IS-A relation cannot support all-or-nothing inheritance, consider the fact that the average mammal may be 3 feet tall, or may range from 1/2 inch to 21 feet tall. Whereas we want our concept of "giraffe" to inherit most of our knowledge of mammals, we clearly do not want to say that the average giraffe is 3 feet tall, nor that giraffes range in height from 1/2 inch to 21 feet tall. Hence, the IS-A relation inherits only certain classes of attributes and excludes others; typically, intrinsic properties of individual members are inherited, whereas aggregate set properties are not. A mapping structure can be used to make explicit statements, such as the one above, regarding the information that may be transmitted from one concept to another via any particular inheritance link.

Few AI systems to date have used analogical reasoning as a primary inference method. Analogical reasoning has been viewed as a difficult problem in its own right, which must be solved before it can be incorporated into application systems (such as parsers and medical diagnosis systems). However, a robust system must be able to operate analogically, especially if intended for naive users, who would otherwise find its lack of "common sense" intolerable. For example, a parser which could not understand metaphors, analogies, or similes would be useful only in the most limited of situations. (Skeptics who dispute this claim are invited to examine any source of common everyday text, such as a copy of Time magazine or even the New York Times financial section, and count the number of metaphors occurring on a single page.) With these thoughts in mind, we have begun initial work towards a parser which can reason metaphorically; below we present the conceptual steps in the metaphor-recognition parsing process:

1. Identification of the source and target concepts. This is done during the parser's normal, non-metaphorical operation.

2. Recognition that the input currently being parsed cannot be handled literally, and is in fact an instance of a metaphor. This is actually a non-trivial task requiring considerable sophistication. For example, the parser must realize that the input is not simply erroneous. This judgment depends to a large degree on pragmatic considerations.
3. Creation of an analogical mapping from the source domain onto the target domain, so that corresponding subconcepts in the two domains map to each other. This phase may be broken down further as follows:

a. Search for a pre-existing mapping structure associated with the source domain.
b. If any such structure is found, check whether it is appropriate with respect to the new target domain. This is done by incrementally building a new mapping structure containing the same nodes as the old structure. As each new node is created, a corresponding node in the target domain must be identified.
c. If no pre-existing mapping structure is found for the source, or those that are found prove to be inappropriate for the new target, then a new mapping must be constructed from scratch. A matching algorithm must search the two domains in order to find similarities. In this case, one should use as many heuristics as possible for reducing the amount of domain comparison that must be done. Possible heuristics include focusing on salient concepts in the source, and focusing on certain categories of knowledge which tend to be mapped invariantly in meaningful metaphors [8].

4. Once corresponding nodes in the two domains have been identified (by constructing a mapping structure), knowledge from the source can be added to the mapping, thereby generating corresponding inferences within the target domain. In an abstract sense, this mechanism accomplishes an implicit transfer of information from the source to the target. Verification that the metaphorical inferences are compatible with the target domain is an integral part of this process.
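The steps above can be compressed into a toy, runnable control loop. Everything here -- the two-entry knowledge base, the literality test, the stored mapping -- is our own stand-in, intended only to show how the phases fit together, not a description of any implemented parser.

```python
# Toy knowledge base and precomputed mapping (both invented).
KB = {
    "John": {"kind": "person"},
    "encyclopedia": {"kind": "book", "knows": "many facts"},
}
MAPPINGS = {"encyclopedia": {"knows": "knows"}}   # step 3a: stored structure

def comprehend(target, source):
    # Step 1: the parser has identified the target and source concepts.
    # Step 2: detect that the literal reading fails (a person is not a book).
    if KB[target]["kind"] == KB[source]["kind"]:
        return {}                       # literal reading suffices; no metaphor
    # Step 3: retrieve a pre-existing mapping structure for the source
    # (constructing one from scratch, step 3c, is not shown).
    mapping = MAPPINGS.get(source, {})
    # Step 4: transfer source knowledge through the mapping into the target,
    # generating the corresponding inferences about the target.
    return {mapping[attr]: val for attr, val in KB[source].items()
            if attr in mapping}

print(comprehend("John", "encyclopedia"))   # {'knows': 'many facts'}
```

The verification step described in the text would filter the returned inferences against what is already known about the target.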
Whether or not it is possible to develop a robust metaphor comprehension system with today's technology is a matter of debate. Metaphorical understanding requires a potentially vast amount of world knowledge, as well as an efficient way of comparing large domains for similarities. However, we feel that even a fragile, partial model built along these lines is a worthwhile endeavor, since eventually these problems must be solved in order to create a truly intelligent parser and inference system.
In cooperation with work towards a model of metaphor understanding, we are also studying the role that metaphorical inference plays in scientific reasoning. As discussed earlier, metaphorically based general patterns of inference do not appear confined to naive reasoning in mundane situations. Gentner [20] and Johnson [24] have argued the significant role that metaphor plays in formulating scientific theories. In preliminary investigations, Larkin and Carbonell [27, 10] have isolated general inference patterns in scientific reasoning that transcend the traditional boundaries of a science. For instance, the notion of equilibrium (of forces on a rigid object, or of ion transfer in aqueous solutions, etc.) is, in essence, a more precise and general formulation of the balance metaphor. Reasoning based on recurring general inference patterns seems common to all aspects of human cognition. These patterns encapsulate sets of rules to be used in unison, and thereby bypass some of the combinatorial search problems that plague more traditional rule-based deductive inference systems. The inference patterns are frozen from experience and generalized to apply in many relevant domains.

(We acknowledge that the framework as specified does not account for the way people understand metaphors such as "Mary is a block of ice", in which the properties transferred from the source are themselves metaphorical. The metaphor transfers the property "cold" from ice to Mary, but this is a metaphor within a metaphor, because we are referring to Mary's personality rather than her temperature. Metaphors occur in all shapes and sizes, and we have not addressed many of the subtler nuances of the phenomenon in this paper. We do believe, however, that the model can be elaborated to handle more sophisticated metaphors without revising the general framework we have presented.)

At the present stage in the investigation, we are searching for general inference patterns and the metaphors that give rise to them, both in mundane and in scientific scenarios. As these patterns are discovered, they are cataloged according to the situational features that indicate their presence. The basic metaphor underlying each inference pattern is recorded along with exemplary linguistic manifestations. The internal structure of the inference patterns themselves is relatively simple to encode in an AI system. The difficulty arises in connecting them to the external world (i.e., establishing appropriate mappings) and in determining their conditions of applicability (which are more accurately represented as partial matches of the situations where they apply, rather than as simple binary tests). For instance, it is difficult to formulate a general process capable of drawing the mapping between the "weight" of a hypothetical object and the corresponding aspect of the nonphysical entity under consideration, so that the balance inference pattern may apply. It is equally difficult to determine the degree to which this or any other inference pattern can make a useful contribution to novel situations that bear sufficient similarity to past experience [12].

Conclusion
In this paper we have analyzed the role of metaphors in common-sense reasoning. In particular, we showed how the balance metaphor exemplifies metaphorical inference, suggested that inference patterns valid for physical domains might provide the foundation upon which much of human common-sense reasoning rests, and provided the first steps toward a computationally effective method for representing analogical mappings. However, since the current study is only in its initial stages, the hypothesis that metaphorical inference dominates human cognition retains the status of a conjecture, pending additional investigation. We would say that the weight of the evidence is as yet insufficient to tip the academic scales.

Our investigations to date suggest that intensified efforts to resolve the questions raised in this paper may prove fruitful, in addition to pursuing the following related research objectives:

* Develop an augmented representation language that handles analogical mappings as a natural operation. We intend to start from a fairly flexible, operational language such as SRL [39]. Using this language we intend to build and test a system that acquires new information from external metaphorical explanations.
* Continue to develop the MULTIPAR multi-strategy parsing system [21, 22] and incorporate within its evolving flexible parsing strategies a means of recognizing and processing metaphors along the lines mentioned in this paper.
* Examine the extent to which linguistic metaphors reflect underlying inference patterns. The existence of a number of generally useful inference patterns based on underlying metaphors provides evidence against, but does not refute, the possibility that the vast majority of metaphors remain mere linguistic devices, as previously thought. In essence, the existence of a phenomenon does not necessarily imply its universal presence. This is a matter to be resolved by more comprehensive future investigation.
* Investigate the close connection between models of experiential learning and metaphorical inference. In fact, our earlier investigation of analogical reasoning patterns in learning problem-solving strategies first suggested that the inference patterns that could be acquired from experience coincide with those underlying many common metaphors [12, 8].
* Exploit the human ability for experientially based metaphorical reasoning in order to enhance the educational process. In fact, Sleeman and others have independently used the balance metaphor to help teach algebra to young or learning-disabled children. Briefly, a scale is viewed as an equation, where the quantities on the right- and left-hand sides must balance. Algebraic manipulations correspond to adding or deleting equal amounts of weight from both sides of the scale, hence preserving balance. First, the child is taught to use the scale with color-coded boxes of different (integral) weights. Then the transfer to numbers in simple algebraic equations is performed. Preliminary results indicate that children learn faster and better when they are able to use this general inference pattern explicitly. We foresee other applications of this and other metaphorical inference patterns in facilitating instruction of more abstract concepts. The teacher must make the mapping explicit to the student in domains alien to his or her past experience. As discussed earlier, establishing and instantiating the appropriate mapping is also the most problematic phase from a computational standpoint, and therefore should correspond to the most difficult step in the learning process.
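The scale model of equations described in the last item lends itself to a direct sketch (our own illustration): the two pans hold the two sides of the equation, and the only legal algebraic moves are the ones that preserve balance.

```python
def balanced(left_pan, right_pan):
    """An equation 'holds' exactly when the two pans weigh the same."""
    return sum(left_pan) == sum(right_pan)

def remove_from_both(left_pan, right_pan, weight):
    """Subtracting the same quantity from both sides of an equation is
    modeled as lifting equal weights off both pans."""
    left_pan, right_pan = list(left_pan), list(right_pan)
    left_pan.remove(weight)
    right_pan.remove(weight)
    return left_pan, right_pan

# x + 3 = 8, with the unknown modeled as a box of (hidden) weight 5.
x = 5
left, right = [x, 3], [3, 5]
assert balanced(left, right)
left, right = remove_from_both(left, right, 3)   # subtract 3 from both sides
assert balanced(left, right) and left == [x]     # the box alone: x = 5
```

The balance-preserving move is exactly the "subtract 3 from both sides" step a child would perform on the physical scale before transferring to symbols.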
Clearly, the possible research directions suggested by our initial investigations far outstrip our resources to pursue them in parallel. Hence, we will focus first on the basic representation and parsing issues central to a computational model of metaphorical reasoning.