Patterns bit by bit. An Entropy Model for Linguistic Generalization

Silvia Rădulescu
Children face the difficult task of acquiring the rules of the language they are exposed to. Inferring generalized rules from a limited set of examples, and applying those rules to strings of words never heard before, is the induction problem for language acquisition. Yet children solve the induction problem and acquire the rules of their language in a remarkably short period of time. Despite extensive research, we still know very little about how infants so brilliantly manage to do so.
This presentation addresses the puzzle of how humans make linguistic generalizations and proposes an innovative entropy model of generalization. The model is designed to bridge the gap between previous findings and to unify them under one consistent account, based on an information-theoretic approach to rule induction. The model predicts that generalization is a cognitive mechanism arising from the interaction of input complexity (entropy) and the processing limitations of the human brain, namely its limited channel capacity.
In a pilot experiment with adults, a miniature artificial grammar was designed to probe the effect of input complexity on the process of generalization. Entropy was used as the measure of input complexity, since entropy varies as a function of the number of items in the input and their probability of occurrence (which in turn depends on their frequency). The number and frequency of linguistic items were therefore manipulated to obtain different degrees of input complexity. Results showed that the more complex the linguistic environment, the stronger the tendency to form generalized rules in response to that complexity.
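For concreteness, the standard measure this description points to is Shannon entropy over item frequencies, H = -Σᵢ pᵢ log₂ pᵢ. The minimal Python sketch below (with made-up nonce words as placeholder stimuli, not the experiment's actual materials) illustrates how entropy rises with the number of items and with more uniform frequencies, the two factors manipulated in the experiment.

from math import log2
from collections import Counter

def shannon_entropy(items):
    # Estimate Shannon entropy (in bits) from relative item frequencies.
    counts = Counter(items)
    total = len(items)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Few items with skewed frequencies -> low entropy (simpler input): ~0.81 bits
print(shannon_entropy(["bim"] * 6 + ["tog"] * 2))

# More items with uniform frequencies -> high entropy (more complex input): 2.0 bits
print(shannon_entropy(["bim", "tog", "dax", "fep"] * 2))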