False reputation in social control

Mario Paolucci

IP-CNR

Italian National Research Council

Viale Marx, 15.

I-00137 Rome, Italy

paolucci@istat.it

ABSTRACT: A simulation study is conducted on the spread of reputation in a mixed respectful-cheater population. The effects of various mechanisms giving rise to false reputation are shown, together with a discussion of the expected results. The actual results of the simulation are then presented; informational accuracy is found to be an essential condition for maintaining an advantage for the norm-followers.


KEY WORDS: simulation, norms, information exchange

"La calunnia è un venticello,/un'auretta assai gentile/ che insensibile, sottile,/leggermente, dolcemente,/ incomincia a sussurrar./ Piano piano, terra terra,/ sotto voce, sibilando,/va scorrendo, va ronzando;/ nelle orecchie della gente, /s'introduce destramente,/ e le teste ed i cervelli,/ fa stordire e fa gonfiar./

(...) Alla fin trabocca e scoppia,/ si propaga, si raddoppia/ e produce un'esplosion/ come un colpo di cannone,/ un tremuoto, un temporale,/ un tumulto generale,/ che fa l'aria rimbombar./ "

(Cesare Sterbini, from Il Barbiere di Siviglia , Act I)

1. Introduction

This paper presents the results of a simulation study of the spread of reputation in intra-group social control. This study is part of a research project on social norms in artificial societies. In particular, the norm under study is a norm of precedence in access to food. Previous studies ([CAS 98], [PAO 99]) brought out the role of a mechanism of reputation, and of the exchange of information about reputation, in re-distributing the costs of normative compliance. In fact, in mixed populations including agents respecting a given norm and agents violating it, the latter (the cheaters) are found to be better off than the respectful, unless the respectful exchange information about the reputation of cheaters [CAS 98] and punish them. If agents exercise what sociologists call social control and punish norm transgressors, their outcomes will be competitive with those obtained by cheaters. Results were found to be highly stable (cf. [PAO 97]) in conditions of incomplete but true information.

The present study aims to check the effects of incorrect information: what happens if false beliefs propagate among the respectful? Is the advantage which they obtain from communication affected by the truth value of their beliefs? These questions arise from the necessity to check the efficiency of social control under more realistic conditions. The ease and speed of the mechanism of propagation are in sharp contrast with its accuracy. How, then, can it be relied upon? Is it really possible that the success of social norms depends upon such a superficial mechanism as gossip, when calumny and other types of false reputation spread so easily and quickly?

The paper is organized as follows. In the next section, relevant work and theories will be examined, with a special emphasis on social control. In the third section, the previous studies on social control will be described and our hypotheses about the effects of false reputation will be discussed at some length. In the fourth section, the design of the present experiment and the expected results will be presented. In the fifth section, the findings from simulation experiments with different types of informational errors in the propagation of reputation will be discussed. In the final section, some conclusions will be drawn and the relevance of these experiments will be discussed.

2. Related Work

Social control (cf. [HEC 90]; [HOM 74]; [MAC 95]; [FLA 96]) is usually meant as a distributed, non-institutional sanctioning (such as disapproval) of transgressions. By sanctioning the transgressions of their in-groups, agents cooperate with the norms in force in their group. Social control is then a norm-cooperative process. The question is why agents cooperate with the norms. In exchange theory ([HOM 74]), social control is explained as an exchange of a special type of resource, i.e. social approval. Agents approve of others in order to obtain their approval: the more agents I approve of, the more approval I will obtain in return.

The exchange of social approval is a fragile form of cooperation with the norms, because agents will approve of others only when they have an interest in doing so.

Indeed, the theory accounts for the formation of sub-groups which might interfere with the norms in force at the larger group level. This effect has been shown by simulation studies ([MAC 95], [FLA 96]). These studies point out a threshold effect, a critical point in the group's density, beyond which density negatively interferes with the efficiency of social control. When the exchange frequency exceeds the threshold, it may give rise to behavioural regularities which contrast with the norms in force.

This phenomenon, of great scientific interest, is the effect of local implicit agreements, which give rise to local conventions. Rather than a theory of social control, the exchange of approval is one possible explanation for the emergence of conventions.

Heckathorn poses the question of social control in rational, game-theoretic terms. In his view, agents must not only decide whether to obey a norm, but also make a corresponding decision at a higher level. Social control is a second-order cooperation, and it is therefore given the same explanation as first-order cooperation, namely, a rational decision. Agents will cooperate both at the first and at the second level if it is rational to do so, that is, when the magnitude of the sanction multiplied by its probability exceeds the benefit of transgression. Heckathorn seems to take for granted that agents perform social control in order to avoid sanctions. If the exchange of approval is a weak form of social control, the theory of second-order cooperation is too strong. Or, better, it does not provide a convincing account of the individual reasons for agents to perform social control. For example, which sanctions are applied, how are they applied and by whom? Heckathorn does not address these questions explicitly.

3. The Present Work

This study is a continuation of a series of studies ([CAS 98]; [PAO 99]) on the effects, in social control, of respectful agents exchanging information about the reputation of cheaters. Social control is meant here as punishing norm transgressors.

Information exchange allows the good guys to know who the bad guys are without directly bearing the costs of acquiring that information. The previous results will be summarized in the following subsection. The present study intends to clarify and control what we call "reciprocal altruism of beliefs", and to explore both the socially desirable effects of this phenomenon (e.g. social control) and the socially undesirable ones (e.g. the diffusion of social prejudice).

3.1. Previous Work

Castelfranchi and associates ([CAS 98]) carried out a simulation study of two forms of social control. The frame of the experiment is a two-dimensional, discrete, toroidal world in which agents seek (scarce) resources scattered on the grid. At the onset of each simulation, an equal value of strength is assigned to all agents; this value varies as they move around, eat, and attack one another. Physical aggression is aimed at acquiring food, and consists of the agents' attempts to snatch food from one another's hands.

The norm implemented in both cases was a norm of precedence over food resources. Agents are in fact designed to attack "eaters" to get hold of their food, when no cheaper alternative is feasible (attacks are costly). The norm was meant to reduce the number of attacks, and was implemented in the following way. Food appearing in the vicinity of any given agent is marked as its own, and the norm prevents agents eating their own food from being attacked. The entire population of agents is divided into two main sets: those which respect the norm (called Respectful) and those which violate it (the Cheaters).

The latter observe a simpler and selfish rule: they attack eaters which are not stronger than themselves (since the success of an attack is determined by the physical strength of the competitors: the stronger always wins), whether these are owners or not.
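To make the two behavioural rules concrete, here is a minimal sketch in Python; the class and function names (Agent, wants_to_attack) are illustrative assumptions, not the original implementation, and the cost-based reluctance of the respectful to attack is omitted.

from dataclasses import dataclass, field

@dataclass
class Agent:
    strength: float
    respectful: bool            # True for Respectful, False for Cheater
    owned_food: set = field(default_factory=set)

def wants_to_attack(agent: Agent, eater: Agent, food) -> bool:
    # The stronger competitor always wins, so an attack against a
    # stronger eater cannot succeed and is never attempted.
    if eater.strength > agent.strength:
        return False
    # Respectful agents honour the precedence norm: an agent eating
    # its own food is never attacked.
    if agent.respectful and food in eater.owned_food:
        return False
    # Cheaters attack any eater not stronger than themselves,
    # owner or not; respectful agents attack only non-owners.
    return True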

In the first form of social control, the respectful keep a record of cheaters (reputation), and retaliate against them in future encounters. Retaliation was meant to affect the final redistribution of costs and benefits: agents identified as cheaters were treated by the respectful as cheaters, and were therefore expected to share (some of) the costs of cheating. However, this form of social control did not produce the expected results, because the respectful acquired information about reputation through their direct experience with cheaters, and therefore only after having been exploited at least once by them.

In the second set of experiments on social control, we checked the effect of the propagation of reputation through the population: when meeting (i.e., if in adjacent grid cells), respectful agents exchange information about the reputation of others. In this condition (spread of reputation), the respectful obtain far better results than in the previous one. Their outcomes are generally competitive with those obtained by cheaters, and in some circumstances exceed them. In the results from the previous experiment, reported in Table 1, the agent population is composed of two subpopulations of 40 respectful agents and 10 cheaters. Their results are compared on the Number of living agents at the end (in the reference conditions, nearly all agents manage to survive the 2000 moves), and on the average Strength and internal standard deviation (StdDev) of the subpopulations. In this case, there is a definite advantage for the respectful, both in terms of average strength and in fairness of the distribution.

Apparently, the spread of reputation proves essential for efficacious social control. However, the results obtained so far refer to an ideal condition: the information exchanged is incomplete but necessarily correct. Neither bluff nor errors are allowed. What happens in sub-ideal conditions, in which the constraint of correct information is relaxed and errors start to spread? What are the effects of false reputation in social control? This is the focus of the present study.

            Respectful    Cheaters
Number         39.98         9.92
Strength     5353.36      4600.24
StdDev        600.00      2465.84

Table 1. Previous Experiment.

3.2. Why Bother With False Reputation

There are several reasons why these questions are of interest. First, answering them contributes to the plausibility and realism of the results, since in real matters often only partial and incorrect information is available. This is even more crucial with information about reputation, which is known to be often wrong. Secondly, they raise the issue of the utility of such information. From a rational-theory viewpoint, one might doubt the benefit of incorrect information: in this approach, partial and false information is a limit, an obstacle for rational agents. That false beliefs are not always disadvantageous has already been shown (cf. [DOR 98]). Here, we investigate a special type of belief, i.e. social, and more specifically group beliefs, that is, beliefs about a given sub-population which are shared by another part of the population. We hypothesize that these may be advantageous for the agents holding them, provided they achieve them through one another, even at the risk of incorrectness. This type of belief has a special feature: it does not need to be "accepted" before being transmitted. Agents spread gossip which they do not trust completely. This makes transmission easy and fast.

The easier and faster the transmission, the wider the propagation of beliefs, the larger the number of agents which can profit from them, and the greater the advantage. But the speed and ease of the mechanism of transmission facilitate errors, since agents do not need to check the validity of the information received.

Indeed, agents accept information about reputation out of prudence; therefore, they will behave according to the information received even when they do not trust it completely. Consequently, the transmission of beliefs about cheaters leaves room for the spread of false information.

3.3. Expected Results

Information may be incorrect in two senses:

- inclusive error: agents which are believed to be good guys are hidden cheaters. This error may be seen as naïveté, an excess of credulity on the side of the respectful.

- exclusive error: some respectful agents are the target of calumny, and are believed to be cheaters.

Which results could we expect from these two different errors? In principle, when they fall prey to an exclusive error, respectful agents are expected to punish some members of their own group. The population of respectful should attack a subset of their weaker members, i.e. those which are erroneously believed to be cheaters. These are expected to grow poorer to the benefit of the stronger respectful, whether the latter are believed to be cheaters or not. Results should be closer to those of effective cheaters.

With an inclusive error, a subset of cheaters, i.e. the hidden cheaters, will not be punished even if they are weaker. Consequently, the effects of reputation will be reduced. In this condition, the outcomes of the respectful should be lowered. In sum, incorrect information is expected to be advantageous when it reduces the set of respectful and enlarges that of cheaters (exclusive belief), and disadvantageous when it reduces the set of cheaters and enlarges that of respectful (inclusive belief). One could then draw the conclusion that the two errors neutralize each other.

4. The Design of the Experiment

In the previous studies, agents kept a partial record of their social experience: they registered into their knowledge base only the identity of cheaters. If an agent eating its own food is attacked by a neighbour, the attacker is recorded by the attacked agent as a cheater. Given the design of the simulation, direct experience of a cheater is always correct. Beliefs are then exchanged among the respectful, according to a simple mechanism: each agent updates its own list by incorporating into it the entries comprised in the list of the other.
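In that setting the exchange amounts to a simple union of cheater lists. A one-function sketch, assuming each agent holds a set known_cheaters of agent identifiers (the attribute name is hypothetical):

def merge_cheater_lists(a, b):
    # Previous mechanism: after meeting, both respectful agents
    # hold the union of their cheater lists. Information remains
    # incomplete but, in that setting, always correct.
    union = a.known_cheaters | b.known_cheaters
    a.known_cheaters = set(union)
    b.known_cheaters = set(union)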

In the present study, the algorithm was slightly modified so as to allow the respectful to keep a record of both social categories, cheaters and respectful. When an agent is not attacked by a stronger neighbour, that neighbour is recorded as a respectful. This leaves a large space for incorrect information (since a stronger neighbour might be about to attack someone else), and acts as a bias in favour of the respectful, a strong presumption of innocence. Furthermore, it generates a rather self-centered view of reputation: agents record as respectful those agents which respect the norm to their own benefit.
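A sketch of the modified direct-experience recording, under the assumption that each agent now keeps a dictionary reputation mapping agent identifiers to a marker, 'R' (respectful) or 'C' (cheater); the function names are illustrative, not the original code:

def record_attack(victim, attacker):
    # An agent attacked while eating its own food records the
    # attacker as a cheater; by design this is always correct.
    victim.reputation[attacker.id] = 'C'

def record_tolerance(observer, neighbour):
    # The presumption of innocence: a stronger neighbour that does
    # not attack is recorded as respectful, even though it might
    # simply be about to attack someone else. This is the source
    # of inclusive errors and of the self-centered view of
    # reputation described above.
    if neighbour.strength > observer.strength:
        observer.reputation[neighbour.id] = 'R'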

Information about others' reputation may also be acquired or updated thanks to inputs from others. Exchange of information always occurs among the respectful. More precisely, agents will accept information only from those which they know to be respectful, and reject it when it comes from cheaters. Besides, agents accept any new piece of information from others: whenever they are informed about the reputation of someone who is not mentioned in either of their lists, they update their knowledge to converge with the information received. When two records are incompatible, they are cancelled from the lists of both agents. Finally, when an agent receives contradictory information from two or more neighbours, one of them is chosen randomly.

In order to mitigate the bias in the direct acquisition of information, a variable threshold controls the acceptance of indirectly acquired information (information received from others). This threshold can be set to a high or low value. A low threshold of acceptance confirms the presumption of innocence; a high threshold instead contrasts with it.
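The exchange and threshold rules just described could be rendered as follows. This is a sketch under stated assumptions: the acceptance threshold is modelled as the probability of rejecting an indirect 'R' report (the text only says the threshold is set high or low), and the random choice among contradictory reports from several neighbours is left out for brevity.

import random

def exchange_reputation(a, b, threshold):
    for receiver, sender in ((a, b), (b, a)):
        # Information coming from a known cheater is rejected.
        if receiver.reputation.get(sender.id) == 'C':
            continue
        # Iterate over a copy, since incompatible records are
        # deleted from the sender's list as well.
        for target, marker in list(sender.reputation.items()):
            known = receiver.reputation.get(target)
            if known is None:
                # New information: bad reputation is accepted out
                # of prudence; good reputation must pass the
                # acceptance threshold.
                if marker == 'C' or random.random() >= threshold:
                    receiver.reputation[target] = marker
            elif known != marker:
                # Incompatible records are cancelled from the
                # lists of both agents.
                del receiver.reputation[target]
                del sender.reputation[target]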

There are two sources of error:

- direct experience: this concerns only the respectful; agents have a bias to accept as respectful anyone who respects the norm to their own advantage. The resulting list of Rs is usually longer than the initial lists of Cs, and may contain errors (inclusive errors).

- communication: this concerns both respectful and cheaters. A variable noise produces mutations in the lists which agents communicate to one another. At every turn of the simulation, a randomly generated number of entries is modified into its opposite: a C will be "read" as R, and vice versa. To simplify matters, the noise parameter can be set to a high or low value (see the sketch after this list).
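A sketch of the communication noise; treating the noise parameter as a per-entry flip probability is an assumption made for illustration:

import random

def transmit_with_noise(records, noise):
    # Return the noisy copy of a reputation list as it is
    # communicated: each marker is flipped into its opposite
    # ('C' read as 'R' and vice versa) with probability `noise`.
    flipped = {'R': 'C', 'C': 'R'}
    return {target: flipped[marker] if random.random() < noise else marker
            for target, marker in records.items()}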

The noise is expected to magnify the error and to increase the speed of its propagation. The threshold is expected to determine the prevailing error: a high threshold is expected to favour exclusive errors, a low threshold to favour inclusive errors.

5. Findings

Our expectations concerned two different but strictly intertwined aspects of the propagation of false reputation in social control: (a) its effects on the respective outcomes of cheaters and honest agents, and (b) the features of its transmissibility ([DAW 76]), i.e. the relative speed and pervasiveness of inclusive vs. exclusive errors.

5.1. Transmissibility of False Reputation

We expected calumny and credulity to spread as an effect of the initial conditions of the simulations. In order to observe the effect of opposite types of errors in the transmission of reputation, we implemented two mechanisms: a variable noise, which turns a reputation marker (R, for respectful, or C, for cheater) into its opposite, and a variable threshold of acceptance of beliefs about good reputation.

- A low threshold of acceptance was expected to favour the spread of good reputation (since agents are more easily recorded as R);

- a high threshold of acceptance was expected to obstruct the spread of good reputation and, proportionately, to favour the spread of the opposite reputation;

- the noise, on the other hand, was expected to favour both errors equally.

Findings confirmed these expectations, and show that the mechanism of propagation is extremely powerful. False reputation spreads easily. The prevailing error (whether inclusive or exclusive) essentially depends upon the threshold of acceptance: if the threshold of acceptance of good reputation is low, good reputation will prevail over bad reputation (whether deserved or not); the number of hidden cheaters out-competes the number of calumnied, if any. On the other hand, if the threshold of acceptance is high, the number of calumnied exceeds the number of hidden cheaters, if any (exclusive error). However, when the threshold of acceptance is high and the noise is low, errors tend to be eliminated during transmission. In these conditions, propagation is allowed to reduce and eventually eliminate false reputation (accuracy).

              Low Threshold                  High Threshold

Low Noise     Prevalence of Cheaters,        Prevalence of Respectful,
              Credulity (inclusive error)    no error

High Noise    Prevalence of Cheaters,        Prevalence of Cheaters,
              Credulity (inclusive error)    Calumny (exclusive error)

Table 2. Overview of results.

Details of the results are shown in Tables 3 to 6 below, accompanied by a graphical representation. The tables again show the Number of living agents at the end, plus the average Strength and internal standard deviation (StdDev) of the subpopulations.

            Respectful    Cheaters
Number         39.92         9.94
Strength     5252.42      5053.80
StdDev        605.64      2533.38

Table 3. Reference Values: high threshold, low noise.

For each setting except the previous one, detailed results are shown for the subpopulations whose diffused reputation differs from their effective behaviour. In the following two tables, hidden cheaters are those which managed to obtain a respectful reputation, while open cheaters are recognized as such by the agents following the norm. Hidden and open cheaters together give the total value for cheaters.

            Respectful    Cheaters    Hidden Cheaters    Open Cheaters
Number         39.93         9.93          4.67               5.26
Strength     4912.82      6367.68       6966.21            5836.28
StdDev        580.70      1710.51        604.59            2692.38

Table 4. Credulity I. Low threshold, low noise.

            Respectful    Cheaters    Hidden Cheaters    Open Cheaters
Number         39.93         9.98          4.60               5.39
Strength     4821.40      6672.23       6913.50            6466.35
StdDev        613.85      1301.66        628.39            1876.15

Table 5. Credulity II. Low threshold, high noise.

In the calumny setting, detailed values are shown for the respectful agents that are recognized as such, and for the calumnied ones. The strength results for all settings are then compared in Figure 1.

            Respectful    Calumnied R.    Recognized R.    Cheaters
Number         39.91         10.50            29.41           9.99
Strength     4923.24       3969.25          5263.89         5818.53
StdDev        917.64       1100.72           852.26         2085.87

Table 6. Calumny. High threshold, high noise.

[Figure 1. Strength comparison. Bar chart of average Strength (scale 3500 to 6500) for the Respectful, Calumnied R., Recognized R., Cheaters, Hidden Cheaters and Open Cheaters subpopulations, across the Reference, Credulity I, Credulity II and Calumny settings.]

5.2. Effects of False Reputation

The situation which we have called inclusive error, or credulity, was expected to be convenient for cheaters, a subset of which (the hidden cheaters) would be treated as honest, and continue to practice their anti-social activity without bearing the cost of their reputation.

In the opposite case, which we have called exclusive error or calumny, we expected a subset of the honest group (i.e., those which enjoy a good and deserved reputation) to obtain an advantage at the expense of the complementary subset, which instead suffers from an undeserved bad reputation (the calumnied). Therefore, calumny was not expected to differ substantially from a no-error situation.

Findings only partly confirmed these expectations. Contrary to what was expected, false reputation, whether at the expense of a subset of the honest (calumny) or to the benefit of hidden cheaters (credulity), is always convenient for the whole population of cheaters, and disadvantageous for the honest considered as a whole (cf. Table 3 as compared to Tables 4 to 6). False beliefs about reputation always penalize honesty and norm-obedience, and reinforce cheating.

However, as expected, the two errors show different effects. In particular, credulity is more convenient for cheaters than calumny (cf. Tables 4 and 5 vs. Table 6): cheaters do not have much to gain from calumny. But contrary to what was expected, neither do the honest and respected guys (those who enjoy a deserved good reputation)! The interpretation of this unexpected finding points to some sort of self-fulfilling prophecy ([SYN 78]). As a consequence of the propagation of false reputation at their expense, the honest guys suffer frequent and "unexpected" attacks also on the part of those whom they "believe" to be their fellows (other respectful).

Consequently, the calumnied will revise their lists, update the records corresponding to these agents (turning them into Cheaters), and behave accordingly. Rather than perishing under undeserved attacks, the calumnied start to retaliate and behave according to their false reputation, which in our algorithm determines a sudden improvement of their outcomes.

In sum, whilst informational accuracy is an essential condition for maintaining the advantage of the honest over cheaters, false reputation may produce different effects. False good reputation (inclusive error) is mostly convenient for cheaters and mostly inconvenient for the honest. False bad reputation (calumny) is still more convenient for cheaters, but not as inconvenient for the honest as one could expect. In particular, the respected honest will suffer from the retaliations of their calumnied fellows, while the latter will enjoy the consequences of such retaliations.

6. Concluding Remarks and Discussion

Which lessons can we draw from the results presented above? That false good reputation penalizes honest citizens is not surprising. Perhaps it is less obvious that credulity is more detrimental to the honest than calumny is, however socially unattractive or unfair this effect may appear, since gullibility is certainly more acceptable and less reproachable than calumny. What appears decisively far from our intuition is the fate of the calumnied, who are usually (and probably correctly) perceived as the weakest and most defenceless part of the population. This finding deserves some careful consideration.

On one hand, our algorithm is not yet fully apt to deal with such complex and subtle social phenomena as those underlying the transmission of reputation. In real matters, calumnied agents who receive unfair attacks do not proceed immediately to revise their social beliefs. They will most probably face a delicate decision: whether to update their beliefs about the reputation of their presumed fellows, or else to acquire another important social belief, concerning their own reputation: they learn that they are ill-reputed by others. Consequently, they will have to take another important decision: whether to stick to the norm, or else to accept their undeserved fate and join the crew of cheaters. As we know from experience and from some good social-psychological evidence, the latter option is rather frequent: people do not easily get rid of a bad reputation (whether deserved or not) once it starts to spread, and this bad reputation will act as a self-fulfilling prophecy, forcing them into the corresponding behaviour. However, this effect is not as immediate and pervasive as it appears in our simulations. Plausibly, implementing the agents' beliefs about their own reputation is necessary to account properly for the effect of calumny.

A second important lesson which can be drawn from these findings concerns the efficacy and speed of false reputation. Our simulation model does justice to the intuition that a low degree of noise is insufficient to prevent inaccuracy of beliefs, since false reputation spreads even with low noise. Rather, accuracy is a combined effect of low noise and a high threshold of acceptance of information about (good) reputation. In addition, the initial conditions strongly affect the propagation process: if there is an initial bias towards good reputation, this is bound to spread until all agents are believed to be good guys, and the same is true for the opposite bias.

Finally, the two mechanisms which we have implemented give rise to four possible combinations (high threshold and high noise, low threshold and low noise, high threshold and low noise, low threshold and high noise). Of these four conditions, only one produces eventual accuracy. In the three remaining conditions, therefore, noise produces error, and this was found to penalize the good guys and reinforce the cheaters.

But then, what is the real advantage of the spread of reputation for the honest? How can they trust a mechanism so fragile and so often detrimental to them?

One answer could depend upon a comparison between the outcomes which the honest obtain from the propagation of (false) reputation and those which they obtain in the absence of this mechanism. This comparison shows that in one case out of four (information accuracy), the honest gain from the spread of reputation. And in one of the three remaining cases (calumny), they gain much more, or lose much less, than is the case when no propagation mechanism is activated. Their utility in the case of propagation is therefore higher than it is in the absence of such a mechanism. Moreover, the higher the threshold of acceptance of good reputation, the higher the utility of the propagation mechanism for the honest. With a high threshold, either propagation will end up eliminating errors and re-establishing informational accuracy (which gives a superiority to the honest), or it will end up with a smaller difference between the honest and the cheaters than is the case without propagation. Especially with a high threshold, propagation is worth the costs of errors!

Acknowledgements

Many ideas in this paper come from discussions with Cristiano Castelfranchi. The preparation of this work would have been impossible were it not for the support of Rosaria Conte. The author is glad to have a chance to thank them both.

Bibliography

[CAS 98] CASTELFRANCHI C., CONTE R., PAOLUCCI M., "Normative reputation and the costs of compliance", Journal of Artificial Societies and Social Simulation, vol. 1 no. 3, 1998, http://www.soc.surrey.ac.uk/JASSS/1/3/3.html.

[DAW 76] DAWKINS R., The Selfish Gene, New York, Oxford University Press, 1976.

[DOR 98] DORAN J., "Simulating Collective Misbelief", Journal of Artificial Societies and Social Simulation, vol. 1 no. 1, 1998, http://www.soc.surrey.ac.uk/JASSS/1/1/3.html.

[FLA 96] FLACHE A., The double edge of networks: An analysis of the effect of informal networks on co-operation in social dilemmas, Amsterdam, Thesis Publishers, 1996.

[HEC 90] HECKATHORN D.D., "Collective sanctions and compliance norms: a formal theory of group-mediated social control", American Sociological Review, 55, 1990, p. 366-383.

[HOM 74] HOMANS G.C., Social Behaviour: Its elementary forms, N.Y., Harcourt, 1974.

[MAC 95] MACY M., FLACHE A., "Beyond rationality in models of choice", Annual Review of Sociology, 21, 1995, p. 73-91.

[PAO 97] PAOLUCCI M., MARSERO M., CONTE R., "What's the use of gossip? A sensitivity analysis of the spreading of respectful reputation", presented at the Schloss Dagstuhl Seminar on Social Science Microsimulation: Tools for Modeling, Parameter Optimization, and Sensitivity Analysis, 1997.

[PAO 99] PAOLUCCI M., CONTE R., "Reproduction of normative agents: A simulation study", Adaptive Behavior, forthcoming, 1999.

[SYN 78] SNYDER M., SWANN W.B. Jr., "Hypothesis testing processes in social interaction", Journal of Experimental Social Psychology, 14, 1978, p. 148-162.
