Calculus of Meanings

By Meir Kozarinsky.
Israel.
The problem facing a human being is to encode an infinite number of situations by means of a finite number of words. He needs this encoding for economical thinking and for communication, and he achieves it by two means: generalization and combination. Words are generalizations and phrases are combinations. Yet the question arises (Katz and Fodor, 1963): given a combination of words, how are the word meanings integrated into a single phrase meaning? As it accumulates, the meaning gives more and more information to the receiver, eventually allowing him to reconstruct the underlying situation, and this is very important for cooperative activity between people. This transition of understanding from meanings to referents is not always achieved through words alone; sometimes the process requires common knowledge and awareness of the concrete pragmatic context.
Let us now describe the basic operation for syntactic-semantic integration of meanings (Kozarinsky, 1973). In every such operation there are at least two components with syntactic and semantic information attached to them. In every combination of syntactic categories in a phrase, there is one which is dominant according to syntactic rules, and this category defines the resulting syntactic category of the phrase. Furthermore, the integrated phrase meaning is the meaning of the dominant component, enriched through semantic relations with the other phrase components. We believe that every word has a semantic field of words which are associated with it through known semantic relations. For example, we know that the words “man” and “car” might be connected through such possible semantic relations as “a man drives a car” or “a man’s car”. We also know that “a man has a hand” and that “a car has a handle”, and so on. We also suppose that all semantic fields are connected in a semantic network, as introduced by M. Quillian (Quillian, 1968). Additionally, it is possible to include syntactic links in that network.
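To make this structure concrete, here is a minimal sketch in Python (the paper gives no implementation) of how such a semantic-syntactic network might be stored; the helper add_link and the concrete relation names are illustrative assumptions drawn from the examples above:

from collections import defaultdict

# Minimal sketch of a semantic-syntactic network: nodes are words or concepts,
# edges are labelled semantic ("semr") or syntactic ("synr") links.
# network[node] -> list of (rel_type, rel_name, target) triples
network = defaultdict(list)

def add_link(source, rel_type, rel_name, target):
    """Add a directed labelled link to the network."""
    network[source].append((rel_type, rel_name, target))

# Syntactic links (categories of the words themselves).
add_link("green", "synr", "category", "ADJ")
add_link("park", "synr", "category", "NOUN")

# Semantic field links, as in the examples above.
add_link("green", "semr", "included in", "colour")
add_link("park", "semr", "includes", "colour")
add_link("man", "semr", "drives", "car")
add_link("man", "semr", "has", "hand")
add_link("car", "semr", "has", "handle")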
Let us introduce some symbols. Let “#” symbolize the integration of meanings; let “/” symbolize a link in the semantic-syntactic network; and let “/.” symbolize the stop of a semantic-syntactic branch. Also let “->” mean “leads to”, let “semr” mean a semantic relation and “synr” a syntactic relation. Using these symbols we can write:

green # park -> green/synr/ADJ/. green/semr/included in/colour/. # park/synr/NOUN/. park/semr/includes/colour/. ->
(green park)/synr/NOUN/. (green park)/semr/park/includes/colour/includes/green/. (green park)/name/NP1/.,

where “NP1” is the name of the phrase “green park”. Here the semantic links of the two phrase words are bridged by the common element “colour”; the resulting syntactic category of the phrase is NOUN and the resulting meaning is “park whose colour is green”.
In principle, once we find a path between two elements in the semantic-syntactic network we can build (calculate) the integrated meaning, and finding different paths gives us different meanings.
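Continuing the sketch above, one possible way to calculate an integrated meaning is to search the network for a common element bridging the two semantic fields; the functions semantic_chains and integrate below are assumptions rather than the author's procedure, and the small network is restated inline so the fragment runs on its own:

from collections import deque

# Semantic-syntactic links for the "green park" example, stored as in the
# previous sketch; the relation names are illustrative assumptions.
network = {
    "green": [("synr", "category", "ADJ"), ("semr", "included in", "colour")],
    "park":  [("synr", "category", "NOUN"), ("semr", "includes", "colour")],
}

def semantic_chains(word):
    """Breadth-first search over semantic ("semr") links, recording for every
    reachable element the shortest chain of labelled links leading to it."""
    chains = {word: []}
    queue = deque([word])
    while queue:
        node = queue.popleft()
        for rel_type, rel_name, target in network.get(node, []):
            if rel_type == "semr" and target not in chains:
                chains[target] = chains[node] + [(node, rel_name, target)]
                queue.append(target)
    return chains

def integrate(dominant, other):
    """Sketch of the "#" operation: bridge the two semantic fields through a
    common element and return the connecting chains as the phrase meaning."""
    d, o = semantic_chains(dominant), semantic_chains(other)
    common = [n for n in d if n in o and n not in (dominant, other)]
    if not common:
        return None                      # "no possible combinations"
    bridge = min(common, key=lambda n: len(d[n]) + len(o[n]))  # prefer the shorter link
    return {"bridge": bridge, dominant: d[bridge], other: o[bridge]}

# park # green is bridged through "colour", echoing the worked example above.
print(integrate("park", "green"))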
Now suppose we have a set of dominance rules for every possible combination of syntactic categories. Let us describe the process of sentence parsing needed in machine translation. In this process we use the meaning-integration operation to find possible short paths between the elements of the phrase in the semantic-syntactic network and to cut off impossible branches of parsing.
This procedure uses a three-part window. The Central part is the Current attention part; it consists of a unit which we try to combine with the other parts of the window. The Right-hand part contains the next word of the sentence. The Left-hand part contains a stack of units that have not yet been combined. For any Current unit, it must be decided which other part it should preferentially be combined with. These are the decision rules (a code sketch follows the list):
PRR) If the Current unit may be combined with the Right-hand unit, the Current unit is updated accordingly and the next word of the sentence is moved to the Right-hand part;
PRL) If the Current unit may be combined with the top unit of the Left-hand part, the Current unit is updated accordingly and the Left-hand part is reduced;
PRB) If combining is possible in both directions, preference is given to the direction which conserves the syntactic category of the Current unit. If both directions conserve the same category, the shorter link is chosen;
PRN) If it is impossible to combine the Current unit with either the Right-hand or the Left-hand unit, the Current unit is pushed onto the Left-hand stack, the former Right-hand unit is shifted to the place of the Current unit, and the next word of the sentence is moved to the Right-hand part. If there is no further word, the other elements of the Left-hand stack may be tried for possible combinations; otherwise backtracking to the nearest preceding decision point is necessary;
PRS) Parsing ends successfully when all sentence parts are united semantically and syntactically in one unit.
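One possible reading of these rules as a shift-and-combine loop over the three-part window is sketched below (the paper gives no implementation). It covers PRR, PRL, PRN and PRS only; PRB and the backtracking branch of PRN are omitted for brevity, and the combine parameter is an assumed stand-in for the “#” integration operation and the dominance rules:

def parse(words, combine):
    """Sketch of the three-part-window procedure (rules PRR, PRL, PRN, PRS).
    combine(a, b) must return the integrated unit of a and b, or None when
    they cannot be combined; it stands in for the '#' operation and the
    dominance rules, which are not implemented here."""
    words = list(words)
    left = []                                   # Left-hand part: stack of postponed units
    current = words.pop(0)                      # Current attention part
    right = words.pop(0) if words else None     # Right-hand part: next word

    while True:
        with_right = combine(current, right) if right is not None else None
        with_left = combine(left[-1], current) if left else None

        if with_right is not None:              # PRR: combine to the right
            current = with_right
            right = words.pop(0) if words else None
        elif with_left is not None:             # PRL: combine with the top of the stack
            left.pop()
            current = with_left
        elif right is not None:                 # PRN: postpone the Current unit
            left.append(current)
            current = right
            right = words.pop(0) if words else None
        else:                                   # PRS: success only if one unit remains
            return current if not left else None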
Now let us take an example sentence to parse: “the trouble the rumor you started caused”. While parsing we mark the possible syntactic-semantic combinations with the sign “#”.
Left unit | Current unit             | Right unit | Comments
          | the                      | trouble    |
          | (the # trouble)/name/NP1 | the        | “no possible combinations”
NP1       | the                      | rumor      |
NP1       | (the # rumor)/name/NP2   | you        | “no possible combinations”
NP1, NP2  | you                      | started    |
NP1, NP2  | (you # started)/name/VP1 | caused     |
NP1       | (NP2 # VP1)/name/NP3     | caused     |
NP1       | (NP3 # caused)/name/VP2  |            |
          | (NP1 # VP2)/name/NP5     |            | End of parsing.
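For illustration, the trace above can be reproduced with the parse sketch given after the decision rules, if the integration operation is replaced by a toy table listing only the combinations that occur in this example (an assumption; the real procedure would consult the semantic-syntactic network):

# Toy stand-in for the '#' operation: only the combinations occurring in the
# example sentence are allowed; unit names follow the trace above.
ALLOWED = {
    ("the", "trouble"): "NP1",
    ("the", "rumor"): "NP2",
    ("you", "started"): "VP1",
    ("NP2", "VP1"): "NP3",
    ("NP3", "caused"): "VP2",
    ("NP1", "VP2"): "NP5",
}

def toy_combine(a, b):
    return ALLOWED.get((a, b))

sentence = "the trouble the rumor you started caused".split()
print(parse(sentence, toy_combine))   # -> NP5, as in the last row of the trace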
Now let us try to apply the ideas of meaning integration to communication with smart robots. A robot usually knows how to perform some basic operations on certain objects. Suppose we would like to ask it to do something more complex than it knows. In this case the smart robot would use some common knowledge store (through the Internet, for example) and find there how the requested complex operation might be constructed from the basic operations known to it. If we would like the robot to be able to understand people in every corner of the world, it would be easier for it to communicate through some international language, for example Esperanto. In such a case we might ask it: “Robotico, faros cafon por me” (“Robotico, make coffee for me”).
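As a purely hypothetical illustration of this idea (the paper specifies no robot interface, and all operation names below are invented), a requested operation could be expanded recursively against a common knowledge store until only basic operations remain:

# Basic operations the robot is assumed to know already.
BASIC_OPERATIONS = {"grasp", "pour", "heat water", "move to"}

# Assumed common knowledge store: how complex operations decompose.
KNOWLEDGE_STORE = {
    "make coffee": ["move to kettle", "heat water", "add coffee", "pour"],
    "add coffee": ["move to jar", "grasp", "pour"],
    "move to kettle": ["move to"],
    "move to jar": ["move to"],
}

def expand(operation):
    """Recursively rewrite an operation into basic operations, if possible."""
    if operation in BASIC_OPERATIONS:
        return [operation]
    steps = KNOWLEDGE_STORE.get(operation)
    if steps is None:
        raise ValueError(f"Don't know how to perform: {operation}")
    plan = []
    for step in steps:
        plan.extend(expand(step))
    return plan

print(expand("make coffee"))
# ['move to', 'heat water', 'move to', 'grasp', 'pour', 'pour']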
References.
Katz, J. and Fodor, J. (1963). The structure of a semantic theory. Language, vol. 39, pp. 170-210.
Kozarinsky, M. (1973). Motivation of compound mathematical terms. In: Linguistic problems of functional modeling of speech activity, Leningrad University, Leningrad, pp. 52-58.
Quillian, M. R. (1968). Semantic memory. In: Minsky (ed.), Semantic Information Processing. Cambridge, Mass., MIT Press, pp. 216-270.