A Model of Stage and Value to Predict Behavior

Michael Lamport Commons
Harvard Medical School
Commons@tiac.net

Presented to the Department of Psychology, University of Minho
Thursday, July 4th, 2012, Braga, Portugal
© 2012 Dare Association, Inc., Cambridge, MA

1. Measuring the A Priori Difficulty of a Task Contingency Using Order of Hierarchical Complexity

• The Model of Hierarchical Complexity is a quantitative behavioral-developmental theory that suggests an objective way of determining the a priori difficulty of a task's contingencies. The model explains why stage-like performances are observed: it proposes that stages result from the hierarchical structure of tasks. A task is defined as hierarchically more complex when it organizes, in a non-arbitrary fashion, two or more less complex tasks. Using this model, sixteen orders of hierarchical complexity have been generated. In this study, the model is used to generate stimuli in the form of either problems or stories. The stimuli within a domain consist of an ordered series of tasks, from order 1, reflexes and conditioned reflexes, up to order 12, the discrimination of efficient markets. Tasks were generated in several domains, including reinforcement contingencies (economic), mathematical, scientific, moral, political, and social domains. 280 people were recruited through online groups. A Rasch analysis of the responses showed that, within each domain, items were well scaled on a single dimension reflecting the predicted difficulty of the item. Participants' performances conformed to the predictions of the model, with very high amounts of variance accounted for (r² of .73 and up).
The Model of Hierarchical Complexity

• The Model of Hierarchical Complexity is a quantitative behavioral-developmental theory
• It suggests an objective way of determining the a priori difficulty of a task's contingencies
• The model explains why stage-like performances are observed

The Model of Hierarchical Complexity

• Behavior can be analyzed by the difficulty of the tasks that an individual successfully addresses
• We divide the task properties that influence item difficulty into three parts:
– Order of hierarchical complexity of the items in a task
– Task content
– Language, and the associated country of the participants
• It is hypothesized that the most important predictor of difficulty is the order of hierarchical complexity
– Commons' model identifies 16 orders of hierarchical complexity
– It deconstructs tasks into the actions that must be done at each order to build the behavior needed to successfully complete the task
– It classifies each task by its order of hierarchical complexity

A task is at a higher order if:
• It is defined in terms of two or more lower order task actions
• It organizes those lower order task actions
• This organization is non-arbitrary

16 Orders of Hierarchical Complexity

0 Calculatory
1 Sensory & Motor
2 Circular Sensory-motor
3 Sensory-motor
4 Nominal
5 Sentential
6 Preoperational
7 Primary
8 Concrete
9 Abstract
10 Formal
11 Systematic
12 Metasystematic
13 Paradigmatic
14 Crossparadigmatic
15 Meta Crossparadigmatic

The Model of Hierarchical Complexity

• Task sequences form a hierarchy from simpler to more complex
• Task performance should always follow that (developmental) order
• Under these conditions, development occurs in stages reflecting the necessity to coordinate lower level actions

Empirical Studies

• The Model of Hierarchical Complexity is used to generate stimuli in the form of either problems or stories
• The stimuli within a domain consist of an ordered series of tasks, usually from Preoperational order 6 up to Metasystematic order 12
• Tasks were generated in several domains
– Reinforcement contingencies (economic)
– Mathematical, scientific
– Moral, political, and social domains

The Laundry Instrument

The Laundry Instrument consists of isolation-of-variables problems
– Inhelder and Piaget (1958) developed the pendulum and chemicals tasks
• Participants had to perform an experiment by manipulating a single variable while holding all other variables constant
• They had to figure out which variable controlled the rate at which a pendulum weight would cross the low point

Participants were instructed to answer a series of multiple choice questions
– The answers were coded right or wrong
• The questions were constructed so that the higher order questions coordinated the lower order questions

Laundry Instrument Example: Abstract Order 9
• This is the first order at which participants are required to see what the operative variable is

Laundry Instrument Example: Formal Order 10
• Higher order problems added more ingredients to the mixture
• They also made deduction more difficult by including many more possible operative variables
• They also included multiple possible relationships among the variables, such as And/Or scenarios

The Laundry Instrument Participants

• The overall study is made up of data collected from 5 separate convenience samples
– Each was tested on one of five similarly constructed isolation-of-variables problems
• There were a total of 1263 participants across all studies
• Here, we briefly describe
– The characteristics of each separate sample
– The characteristics of the instrument used with each sample.
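The coordination rule behind these instruments (a task is one order higher when it non-arbitrarily organizes two or more lower-order task actions) can be sketched as a small recursive computation. This is only an illustration; the item structures below are hypothetical, not actual laundry items.

```python
def order(task):
    """Order of hierarchical complexity of a task.

    A task is either a primitive with a known order (an int), or a
    coordination of two or more lower-order subtasks (a list), in which
    case its order is one more than the highest subtask order.
    """
    if isinstance(task, int):  # primitive: order given directly
        return task
    assert len(task) >= 2, "coordination requires two or more subtasks"
    return 1 + max(order(sub) for sub in task)

# Hypothetical formal-order item: coordinates two abstract-order (9)
# variable-identification actions.
formal_item = [9, 9]
print(order(formal_item))  # 10

# Hypothetical systematic-order item: coordinates two formal-order tasks.
systematic_item = [[9, 9], [9, 9]]
print(order(systematic_item))  # 11
```

This mirrors how higher order questions in the instruments coordinate the lower order questions beneath them.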
• There were two quasi-independent variables that were not independent of one another
– Content
– Language

The Laundry Instrument Subsamples: Language, Version, and Content

1) Iraqi, Arabic speaking, Laundry Instrument: 450 participants completed paper and pencil versions of the Arabic laundry instrument
2) English, First Revised Laundry Instrument: 215 participants from various Listservs
3) English speaking, Short Laundry Instrument: 78 participants from various Listservs
4) German speaking, Combustion Instrument: 459 students completed paper and pencil versions
5) English speaking, Atheism and Belief Instrument: 61 participants from various Listservs

The Laundry Instrument Procedure

• Participants filled out the questionnaires either on paper or online
• The subtasks were presented in a sequence from easy to hard
– Low to high order of hierarchical complexity
• Having successfully completed the easy problems, which most of the participants did, served as support for the harder problems
• One gets better performance in going from easy to hard (e.g. Aamodt & McShane, 1992; Hodson, 2006)
• Rasch (1960/1980) analysis was used to analyze the data

Rasch Model

• Under the Rasch model these linear measures are
– Item-free (item-distribution-free)
– Person-free (person-distribution-free)
• This means that the measures are statistically equivalent
– For the items, regardless of which persons (from the same population) are analyzed, and
– For the people, regardless of which items (from the same set) are analyzed
• The person and item total raw scores were also used as dependent variables
– The item raw scores were summed separately for each order of complexity and then correlated with hierarchical complexity
• The Rasch person reliability of the combined data was .94
• The Rasch item reliability was 1.0
– In the context of a Rasch analysis, this means that there is a high probability that items estimated with higher measures do in fact have higher measures than those estimated with lower measures
– There is no equivalent traditional measure

Rasch Analysis of Data from All Studies

• How well did the order of hierarchical complexity predict performance stage on that item?
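The Rasch model described above can be sketched directly: the probability of a correct response is a logistic function of person ability minus item difficulty, both on the same logit scale. The ability and difficulty values below are illustrative only, not the study's estimates.

```python
import math

def p_correct(ability, difficulty):
    """Rasch model: probability of a correct response is a logistic
    function of ability minus difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person located directly across from an item on the variable map
# (ability == difficulty) has a 50% chance of answering it correctly.
assert abs(p_correct(1.5, 1.5) - 0.5) < 1e-12

# Higher-order items are harder: same person, higher item difficulty,
# lower probability of success (illustrative values).
p_abstract = p_correct(1.5, 0.5)  # easier, lower-order-style item
p_formal = p_correct(1.5, 2.5)    # harder, higher-order-style item
print(round(p_abstract, 3), round(p_formal, 3))  # 0.731 0.269
```

Because only the ability-difficulty difference matters, item estimates are person-distribution-free and person estimates are item-distribution-free, which is what makes the variable map comparisons above meaningful.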
– The results are illustrated using a Rasch variable map
• If performance on the items were in perfect order, there would be no item reversals
– No cases in which a higher order item appears below a lower order item
• The closer to the top the items are, the more difficult they were to answer
• Participants had a 50% chance of correctly answering items located directly across from them
• Person Rasch scores are on the left hand side; Rasch scaled item scores are on the right hand side
• The scaling showed
– The primary items at the bottom
– The metasystematic items at the top
• All of the other stages are seen to be in the correct order
– The exception is the abstract and concrete orders, which are intermixed
– One can see gaps between the stages
– The mixing of concrete and abstract was due to the concrete tasks having too many ingredients (variables)
– These were removed in subsequent versions of the instruments

Item Stage Score Versus Item Order of Hierarchical Complexity (r² = .804)

• The item stage score plot shows how the items performed
– Items with a hierarchical complexity of 10 would be expected to have a stage score of 10
• The plot shows that this trend is followed

Social and Behavioral Tasks

A series of studies were conducted using content related to social and behavioral problems
– Political development (Sonnert & Commons, 1994)
– Therapists' decisions to report patients' prior crimes (Commons, Lee, Gutheil, Goldman, Rubin, & Appelbaum, 1995)
– The relationships between more and less powerful persons, such as doctors and patients (Commons & Rodriguez, 1990, 1993) and counselors and patients (Commons et al., 2006)

In each study, participants received 5 vignettes about interactions between two persons
– Each vignette represented one order of Hierarchical Complexity

Helper-Person Problem

The vignettes were about the interaction between a helper and a person. Participants were asked three questions:
− Rate the method of offering guidance and assistance of each Helper
− Rate the degree to which each Helper informed their Person
− Rate how likely you would be to accept the guidance and assistance offered by the Helper

Figure 2a. Method; Figure 2b. Inform; Figure 2c. Guide

Politician-Voter Problem

Participants rated three issues about the interaction between politician and voter:
‒ Rate each of the politicians' methods
‒ Rate the degree to which the Politicians informed their Voters
‒ Rate how likely you would be to vote for the Politicians

Figure 3a. Method; Figure 3b. Inform; Figure 3c. Vote (each data point represents an average of all 3 items at each stage)

Item Order of Hierarchical Complexity Predicted Item Rasch Scaled Scores

• Helper-Person: r(3) = .967, r(3) = .978, r(3) = .973
• Politician-Voter: r(3) = .920, r(3) = .895, r(3) = .900
• Anti-Incest Reporting: r(3) = .898, r(3) = .850, r(3) = .834
• Anti Death Penalty: r(3) = .854, r(3) = .801
• Pro Death Penalty: r(3) = .849, r(3) = .758, r(3) = .875
• Jesus Stoning Dilemma: r(3) = .539, r(3) = .564

• The measured difficulty reflects the theoretical difficulty (order of hierarchical complexity)

Figure 9a. Overall; Figure 9b. Best: Helper-Person Inform; Figure 9c. Next Best: Politician-Voter Inform

Summary

• The Model of Hierarchical Complexity measures the a priori difficulty of tasks, a new approach to behavioral-developmental analysis
• Empirical studies show that the order of hierarchical complexity of tasks predicts the difficulty in performing those tasks, as measured by Rasch analysis (r = .80 to .984)
• Tasks have been constructed in a variety of domains, showing the generality of these results
• The MHC behaviorally explains why there are stage-like behaviors in development

2. Can Perceived Value Be Explained by Schedules of Reinforcement?

• Can a schedule's perceived value be explained by the perceived value of just a few reinforcers?
The additive noise model states that discounted reinforcer values simply add together linearly, but as time passes noise is added: VO = overall value = Σvi, where vi = the effective or perceived value of a reinforcer at time i. Our unified theory integrates initial value of outcomes, delay, and risk. Results from samples suffice to characterize entire schedules. Three difference equations of immediate reinforcer value with respect to time summarize many properties of discounting accounts of reinforcement schedules. A trial consisted of a two-link chain schedule. The first link consisted of the presentation of one of a large number of samples from a t schedule (Schoenfeld & Cole, 1972). The second link was a choice between a left key indicating the sample was lean and a right key indicating it was rich. The overall value was Am = the total value of all the reinforcers delivered until total satiation has occurred, Am = ΣΔAm. The value of an instantaneous reinforcer is ΔAm. The perceived sample value was a hyperbolic function of how soon before choice a single reinforcer occurred (the first difference equation), as in the Commons et al. (1982)/Mazur (1987) equation for delay: vi(delay) = ΔAi/(1 + k2di), where di = Δti − 1 (delay equals change in time minus 1) and k2 = sensitivity to delay. Risk is the second difference equation of the Commons et al. (1982)/Mazur (1987) equation for delay: vi(risk) = Δ(ΔAi/Δd)/Δd = Δ(ΔAi/(1 + k3d))/Δd. Sensitivity to change in delay was well fit by a negative power function, as proposed.

2. Can Perceived Value Be Explained by Schedules of Reinforcement?

Nicholas Hewlett Keen Commons-Miller, Tufts University
Michael Lamport Commons, Harvard Medical School
Robin Francis Gane-McCalla, Dare Institute
Alex Pekker, University of Texas
Michael Woodford, Columbia University

The pigeons were run in the laboratories of:
John Anthony Nevin at Columbia University
Michael Lamport Commons at the University of Manitoba and Northern Michigan University
Richard J. Herrnstein at Harvard University
Robert Cook at Tufts University

Motivations For This Research

• There have been a number of proposals for how value is determined with delay using schedules of reinforcement
– Bickel, Miller, et al. (2010); Lawyer, William, et al. (2010); McKerchar, Green, et al. (2008, 2010)
• There should be a unified explanation that relates responding to
– Immediate reinforcement
– Delayed reinforcement or time between possible reinforcements
– Changes in delays
• An account should integrate over micro, molecular, and molar levels
– A micro view looks at the contribution of each occurrence or nonoccurrence of a reinforcer or other event
– A molecular view looks at a sample, or local rates of reinforcement
– A molar view looks at the overall rate of reinforcement

Issues This Research Resolves

This presentation addresses five unresolved issues whose resolution should allow for an integration:
1. To relate the micro to the molecular, what needs to be known is how well samples from schedules approximate the overall schedules
2. The integration should also address which events from the micro analysis are accumulated, and how
3. Do reinforcers, after being discounted, simply add together?
4. Is a pigeon's perception of the discounting of value based on relative rates of reinforcement and delay of reinforcement?
5. Can the equations explaining immediate value, and its difference equations, represent not only the simplest account of value, delay, and risk, but also fit the data well?
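Issue 1 (how well samples from schedules approximate the overall schedules) can be illustrated with a small simulation. This is only a sketch: the sample lengths and the 2,000-sample Monte Carlo count are hypothetical, though the per-cycle reinforcement probability of .75 matches the rich schedule described in the procedure.

```python
import random

random.seed(0)

def sample_rate(p, n_cycles, n_samples=2000):
    """Mean reinforcement rate across many samples of n_cycles cycles,
    drawn from a schedule that reinforces each cycle with probability p."""
    total = sum(
        sum(random.random() < p for _ in range(n_cycles))
        for _ in range(n_samples)
    )
    return total / (n_cycles * n_samples)

# Even short samples from a rich (p = .75) schedule estimate the
# schedule's overall rate; longer samples tighten the estimate.
for n in (1, 4, 16):
    print(n, round(sample_rate(0.75, n), 3))
```

The simulated sample rates cluster around the programmed schedule rate, which is why results from samples may suffice to characterize entire schedules.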
Evolutionary Explanations For Organisms' Systems of Reward Processing

• Organisms are always faced with the decision to stay with or move on to a new source (patch) of reinforcement
• Energy expenditure over long periods of time is regular
– Obtaining energy is critical, and is dependent on time between reinforcers
– To model this decision, researchers have been interested in the reinforcement value of schedules and schedule samples

How Do We Model Organisms' Discounting of Value Over Time?

• There have been a number of simple models in which discounted reinforcing events are aggregated
• There has been a great deal of interest in how reinforcing events are discounted in
– Behavioral economics
– Quantitative analysis of behavior
– Comparative animal cognition
• There has also been interest in how the discounted values combine
– Do they add?
– Do the values interact in some fashion?

The Additive Noise Model

• Reinforcers add together linearly
• But as time passes, reinforcer noise added earlier interferes with control by reinforcers further from choice
• The expected value of a reinforcer, SR+, at instant i is the product of the probability that the reinforcer occurs and the value of the reinforcer (economic model). The expected value of a sample of equally probable possible reinforcer events at instances i is:

EV = Σi p(SR+) · Vi(SR+)    (1)

where i = the instance being examined; Vi(SR+) = the value of reinforcer SR+ at instance i; p(SR+) = the probability of obtaining a reinforcer immediately after instance i

Quantitative Analysis of Behavior View

• The first equation simply states that the expected value, EV,
– Is the sum of the probability of an instance of reinforcement, p, times
– The effective or perceived value of that reinforcer, Vi, at instance i.
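Equation (1) can be sketched directly. The four-cycle sample, the .75 per-instance probability, and the unit perceived values below are illustrative choices, not fitted quantities.

```python
def expected_value(probs, values):
    """Equation (1): expected value of a sample is the sum over
    instances i of p(SR+) times the perceived value Vi(SR+)."""
    assert len(probs) == len(values)
    return sum(p * v for p, v in zip(probs, values))

# Hypothetical four-instance sample on a rich schedule: equal
# reinforcement probability of .75 at every instance, unit value.
ev = expected_value([0.75] * 4, [1.0] * 4)
print(ev)  # 3.0
```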
• This has been a general tenet of both older and more modern learning theories
• Our model goes beyond this simple formulation to show that how an organism assigns value is also a function of other parameters

Advantages and Purpose

• Most economic models of utility use value and probability
• In contrast, this paper tests the idea that animals are sensitive to relative time, but not directly sensitive to probability
• The rates of change of value with respect to time mimic the regularities in the evolutionary environment
• Those regularities involve
– Immediate value of reinforcement
– Temporal delay of reinforcement
– Changes in temporal delay of reinforcement
• Accordingly, we use rates of change of value to test whether animals are sensitive to relative time

• Commons-Miller et al. (2010) proposed a model for choice that includes three variables and their associated parameters
• The first variable is reinforcement
– Its associated parameter is sensitivity to reinforcement
• The second variable is delay
– Its associated parameter is sensitivity to delay
• The third variable is risk
– Its associated parameter is sensitivity to risk

Total Reinforcement Value

• Consider that a reinforcement is a single accession from a long sequence of reinforcements. There is a possibility that all reinforcers satiate. Food, water, and tastes do. Does money do so? Gates and Buffett and others seem to think so
• Am = the total value of all the reinforcers delivered until total satiation has occurred
• Each instance of a reinforcer, m, occurs in what may be a very long sequence of events. For creative scientists, it may be an event every few years
• ΔAm = the change in overall value of reinforcers delivered with no delay, when the position in a sequence of reinforcers is ignored, until satiation occurs
• In equation 1, this is the perceived reinforcing value of event m
• Am = ΣΔAm    (1)

• The term "diminishing returns" is the way economists describe the fact that the value of reinforcement decreases as the number of delivered reinforcers increases
• Each time a reinforcer is delivered, as m increases, it reduces the value of ΔAm by a discrete amount
• Mathematically:
– m = the mth delivered reinforcer in a sequence of reinforcing events
– ΔA1 > ΔA2 > ΔA3 > … > 0

Total Value

• The total value, Am, is the total value of all the reinforcers delivered with no delay
– When total satiation has occurred, ΔAi decreases in value to 0
• The strength of ΔAm varies not only with where in a sequence of reinforcing events it occurs, but with a number of other factors:
– The animal under consideration
– Its preferences for food, water, mates, prey, companions, tastes, etc.
– In humans, ΔAi also varies with personal interests, culture, and genes

Sensitivity to Delay

• The effect of delay on reinforcement value as reflected in performance was modeled early on by Chung and Herrnstein (1967), Ainslie (1974), and Fantino, Abarca, and Dunn (1987)
• The value of a reinforcement instance, ΔAi, is measured with respect to changes in time
• The change in time, measured from the instance of the reinforcer to the choice, is Δti
• Taking the ratio of the differences of value, ΔAi, with respect to time, Δdi, gives the Commons/Mazur additive noise model (Commons, Woodford & Ducheny, 1982; Mazur, 1987), shown immediately below
• This is a slightly revised version of Commons/Mazur: V is replaced by ΔV because Ai has been replaced by ΔAi in equation 2

Sensitivity to Delay: Equation 2

• ΔVi = ΔAi/(1 + k2d) = the discounted value of a reinforcer i    (2)
• d = Δt − 1: delay equals change in time minus 1
• Δt = change in time
– Note that for Δt = 1, reinforcement is not delayed, i.e., d = 0
• j is an index of which difference equation it is:
– j = 1 is value, j = 2 is delay, j = 3 is risk
• k2 is the sensitivity-to-delay parameter
• Consider the case of ΔAi/(1 + k2d) with d = 0, no delay
• This makes 1 + k2d = 1, so ΔVi = ΔAi

• In contrast, taking the long view means being relatively insensitive to delay
• It should be reflected in a small value for k2, the delay parameter
• To successfully address scientific tasks of high order of complexity, one has to have long term goals that allow for a large delay of reinforcement
• Most discoveries take multiple years to achieve

Sensitivity to Change in Delay

• Here, risk is represented by how sensitive an individual is to a change in delay, usually an increase in delay
• This is a quantification of Vaughan's melioration concept (Vaughan, 1976, 1981; Herrnstein & Vaughan, 1980; also see Herrnstein & Prelec, 1991)
• It is represented by taking differences with respect to changes in time in the second difference equation, from Commons/Mazur equation 2:
• Δ(ΔAi/Δd)/Δd = Δ(ΔAi/(1 + k3d))/Δd    (3)
• k3 is the sensitivity to risk: the change in value with respect to change in delay

• There is no theory describing discounting of reinforcement that integrates
– Different length samples from a schedule
– Schedules themselves
– First and second difference equations of value with respect to time
• Our unified theory integrates
– Initial value of outcomes
– Delayed values
– Risk
• Results from samples may suffice to characterize entire schedules

Samples and Schedules

• Schedules are a sequence of samples
– When the number of instances per sample is low, the values of the instances in a sample simply add
– But when the number of instances per sample gets large, the process of addition breaks down and memory starts to fail, causing value to be underestimated
• Within a sample, the values of the instances differ
– The further an instance is from choice, the smaller its contribution
– Samples with increasing numbers of instances reach an asymptotic representation of the entire schedule

Procedure

• Many studies have measured discrimination between reinforcement densities (e.g., Commons, 1979; Commons, 1982)
• A sample presents five possibilities for reinforcement for pecking: four on the center key and one on the left or right keys
• The center key pecks are reinforced on either a rich or a lean schedule
– Each center key peck during a cycle had a certain probability of reinforcement
– Rich schedules had a 0.75 probability, and lean schedules a 0.25 probability
• After the cycles in a sample, pigeons pecked either a left or a right key
– A right key peck was reinforced if the sample was a rich schedule, and a left key peck was reinforced if the sample was a lean schedule
• A small number of probe trials consisted of randomly assigned instances in which the length of time of an instance was increased by certain factors
– The multiples were 2 or 3 for pigeons 30, 84, 102, and 995

Results

• The first three figures show how the pigeons dealt with delay
– Figure 2 shows that the perceived values of reinforcers add
• It also shows pigeons are mostly sensitive to relative rates of reinforcement
– Figure 3 also supports relative rate of reinforcement as an explanation for what controls choice
• Pigeons estimated a sample as being less rich when there were temporary increases in time between possible reinforcers
– Figure 4 shows reinforcers were discounted hyperbolically
– Figure 5 shows relatively little discounting over absolute time

Reinforcer Values Simply Add

• Figure 2 shows the effects of delay
– The first difference equation of value
• The perceived richness of the sample is on the y-axis
• The number of reinforcers in the sample is on the x-axis
• With the probe multiple at one, the number of reinforcers predicts choice
– The perceived values of the reinforcers simply add together

Figure 2: The perceived value of a sample is plotted against the number of reinforcers in the sample.
Circles represent cycle lengths of two seconds, triangles three, and squares four. As one can see, cycle length has very little effect. (Pigeons from Commons, Woodford and Trudeau, 1991.)

• As the probe multiple increases, the slopes decrease
• When the cycle length is doubled, reinforcers are valued at around half as much
• This is because pigeons are mostly sensitive to relative rates of reinforcement
– Pigeons are effectively insensitive to absolute time

Second Difference Equation of Value: Momentarily Increasing Cycle Time Had a Large Effect on Perceived Risk

• Figure 3 shows risk
• Risk is the second difference equation of value with respect to time
• Slopes from the linear regression of perceived value versus number of reinforcers in Figure 2 were plotted against the multiple of cycle time
• The red lines represent the expected slope of the perceived values as a function of the multiple values
• High r-values and the close fit to the predicted slope support relative rate of reinforcement as an explanation for what controls choice
• Pigeons estimated a sample as being less rich when there were temporary increases in time between possible reinforcers
• This is because the pigeons have learned that previous samples' reinforcers were packed more densely
– Closer together in time

Figure 3: The slopes from Figure 2 are plotted against the probe multiples. Bird 102 is excluded because its slopes were negative and thus incompatible with hyperbolic regression.

The First Difference Equation of Value

• The data are from pigeons 27, 29, 31, and 85 (Commons, 1979)
– There was just one reinforcer in the sample to be discriminated
• On the x-axis is the number of cycles between the single reinforcer in the sample and choice: this is the delay
• The y-axis shows the perceived value of the sample as rich (probit transformed)
– This result provides empirical evidence for the Commons/Mazur equation
• Commons and Mazur predict hyperbolic discounting over time
– Reinforcers closer to choice will be valued significantly more than those far from choice (Commons, Woodford & Ducheny, 1982; Mazur, 1987)
– This decreasing hyperbolic function is a negative power function

Figure 4: The perceived value of a sample is plotted against the distance, in number of cycles, that the reinforcer is away from the choice period.

Testing the Second Difference Equation of Value: Possible Contributions to Perceived Risk

• Perceived risk is represented by the slopes from the linear regression of perceived value versus number of reinforcers
• Possible contributions to perceived risk were examined for
– Probe multiple
– Cycle length
– Total time = probe multiple × cycle length in seconds
• For probes, r(178) = .864, p < .001
• Cycle length had no systematic effect on perceived risk, r(178) = −.033, p = .660
• Total time was highly collinear with both probe multiple and cycle length, as would be expected
• Therefore, with probe, cycle length, and total time, r(176) = .868, p < .001, an insignificant change

Conclusion

• A unified theory of value, its time difference equations, and their effects on discounting was found to describe the data
• The theory also accounts for the difference equations of value with respect to delay, risk, and change in risk
• The assumptions were rather simple
– Individual events were shown to be processed with respect to a background rate of reinforcement
– The values of these events were shown to be simply summed after discounting
• A set of experiments tested the theory
– The experiments supported additivity, hyperbolic discounting for the first difference equation, control by relative rates, and risk as a second difference equation.
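The hyperbolic discounting supported by these experiments (equation 2: ΔVi = ΔAi/(1 + k2d), with d = Δt − 1) can be sketched numerically. The parameter values below are illustrative, not fits to the pigeon data.

```python
def discounted_value(dA, dt, k):
    """Commons/Mazur hyperbolic discounting (equation 2):
    dV = dA / (1 + k * d), with delay d = dt - 1, so a reinforcer
    delivered at dt = 1 (d = 0) keeps its full value."""
    d = dt - 1
    return dA / (1.0 + k * d)

# No delay: value is undiscounted.
assert discounted_value(1.0, dt=1, k=0.5) == 1.0

# Taking the long view (small k) discounts a 10-step delay far less
# than a short view (large k); k values here are illustrative.
print(round(discounted_value(1.0, dt=11, k=0.05), 3))  # 0.667
print(round(discounted_value(1.0, dt=11, k=1.0), 3))   # 0.091
```

The hyperbola falls steeply near choice and flattens at long delays, which is the shape seen in Figure 4.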
• A time-based model, rather than a probability-based utility model, may reflect how organisms perceive reinforcers delivered over time

• There are other possible formulations for discounting both simple delay and risk
• Studies with more cycles need to be run
• We have to formulate why the ki are different

3. An Integrative Account of Stage and Value as Determinants of Action

MICHAEL LAMPORT COMMONS (Harvard Medical School)

Abstract: Accounts of stage and moral action have not integrated behavioral, developmental, and quantitative paradigms. This presentation integrates the three by using a mathematical model of value obtained from developmental action and from stage, as in the Model of Hierarchical Complexity. The result is a behavioral-developmental account of stage and action, rather than a mentalistic one. Both value and stage are necessary for determining actions. Each consists of a matrix. The value matrix has a number of vectors. For humans, there are 6 Holland Code variables in the value vector. The second vector is the discounting-difference ratio between change in the overall value vector and change in time. The third vector is the change in differences in value over time, or risk. The second matrix is stage, which measures performance in meeting the difficulties produced by the order of hierarchical complexity of particular tasks, as discussed in the earlier talks. A mathematical account of the value and stage matrices and their interaction terms is used to predict moral behavior. Examples from predicting forensic expert bias and from peddlers' income based on the order of hierarchical complexity will be given.

3. How Stage and Value Can Be Incorporated to Predict Behavior

Michael Lamport Commons
Harvard Medical School
Commons@tiac.net

Timothy Barry-Heffernan
Harvard University
timothy.barryheffernan@gmail.com

Introduction

• This presentation gives a behavioral-developmental account of stage and action that integrates three paradigms
– The behavioral paradigm
– The developmental paradigm
– The quantitative paradigm
• A mathematical technique for predicting an organism's behavior would be extremely valuable and widely applicable to a range of organisms and behaviors
– Such predictions may make diagnosis of mental illness variable-based, replacing symptom-based diagnoses
• Prediction by the technique that follows relies only on proper weighting of various scores
– Difficulty of tasks accomplished
– Preference for outcomes of tasks accomplished, in terms of
• Overall value in a domain
• Discounted value
• Risk

Implications of a Stage and Value Model to Predict Behavior: Personality Disorders

• A theory of personality disorders:
– Personality disorders may be rooted in low social perspective-taking skill and an inappropriate estimation of discounted value and of risk
– The two categories of people with personality disorders (i.e.
SAD, and BAD) fit this rule – MAD should be on the spectrum of the psychotic disorders • A psychopath, from the BAD category, may imitate a high stage of behavior and may – Overestimate the loss of value due to delay of reinforcement – Underestimate risk in their own behavior – Psychopaths may consequently appear unemotional and apathetic Implications of a Stage and Value Model to Predict Behavior Personality Disorders • People with autism spectrum disorders do not read emotions or gestures from other people – Consequently do not value these things • People with borderline personality disorders – Feel abandoned – Fear more abandonment, thus overvaluing “companionship” • People with SAD disorders have been beaten down over a period – They have developed methods of coping like compulsions • Dependent personalities are megalomaniacs who value abuse as attention Implications of a Stage and Value Model to Predict Behavior Selections for Employment • A model that predicts successful task completion would be applicable to the workplace – One could determine who is fit to work at what position by means of a test or by means of observation and mathematic evaluation • Since this model takes into account ability, motivation, difficulty and reinforcement, it is applicable to numerous scenarios – Particularly, careers Purpose and Outline of Paper • The purpose of this paper is to examine the interplay of Model Hierarchical Complexity derived stage scores and valuation of reinforcers in predicting behavior Theoretical mathematics will be used acquire a “Behavior Matrix” − This matrix will predict the likelihood of successful task accomplishment in an experiment where • j types of reinforcers are administered i times To predict behavior reasonably, one must consider: − − − − Delay Risk An organism’s predisposition to different forms of reinforcement A reinforcer’s value in different domains First Step: Acquiring a Preference Column Matrix Any prediction must begin with valuation 
− Valuation is rooted in value
• Suppose all n categories in which reinforcement is valuable were indexed
− Categories would include:
• Basic biological ones, such as food and sex, and other areas in which an organism's behavior may be reinforced
• Ones related to interest, as measured by the Holland scales:
− Realistic: practical, physical, hands-on, tool-oriented
− Investigative: analytical, intellectual, scientific, explorative
− Artistic: creative, original, independent, chaotic
− Social: cooperative, supporting, helping, healing/nurturing
− Enterprising: competitive environments, leadership, persuading
− Conventional: detail-oriented, organizing, clerical

First Step (continued): Acquiring a Preference Column Matrix
• A scalar Qi expresses an organism's valuation of a specific form of reinforcer
− e.g., a food-deprived bear would have a high value in the food category
• We assign to an organism a column matrix, or vector, [Q], which contains the scalar numbers {Q1, Q2, …, Qn}
– Each scalar corresponds to the organism's preference for the corresponding form of reinforcement

Second Step: Acquiring a Reinforcer Value Matrix
• Take a reinforcer and associate with it a row matrix, or dual space map, [P]
– The matrix expresses the reinforcer's value in different domains, with the same n domains as ordered in the column matrix [Q]
– [P] contains real numbers {P1, P2, …, Pn}, where each number expresses the reinforcer's value in a particular domain
– Pi and Qi refer to the value and the valuation, respectively, in the same domain i
• e.g.,
A piece of meat would have a high food value to a food-deprived bear, and very low values in other domains

Fifth Step: Acquiring a Proxy for Likelihood of Successful Completion
• Difficulty must be accounted for if one is to successfully predict whether an organism will complete a task
• One can weight the likelihood of completion of a reinforcer's corresponding task by that task's degree of difficulty, Dij
− The weighting technique depends on the sensitivity of the behavior prediction one intends to obtain
− An example difficulty measure is the task's order of hierarchical complexity: Dij = OHCij, where OHCij is the task's order of hierarchical complexity
• One can estimate the likelihood of successful completion by dividing an organism's weighted stage score, u, by the weighted degree of difficulty of the task

Conclusion and Thoughts on Testing
• To test the model, data need to be obtained from people varying in personal interests
− Personal interests may be represented by factor scores obtained from the Holland factors
• In a test of the model, discounting would be measured by a perceived-value experiment in which the length of delay would be varied on probe trials
• In cases where behavior may be related to task completion for reinforcement, it should be possible to predict behavior mathematically with the methods just outlined
• The variables mentioned in the presentation are defined loosely so that the technique for predicting behavior can be applied in a broad context

Works Cited
• Commons, M. L. (1979). Decision rules and signal detectability in a reinforcement-density discrimination. Journal of the Experimental Analysis of Behavior, 32, 101-120.
• Commons, M. L., & Pekker, A. (in preparation). A new discounting model of reinforcement.
• Commons, M. L., Woodford, M., & Ducheny, J. R. (1982). How reinforcers are aggregated in reinforcement-density discrimination and preference experiments.
• Commons, M.
L., Woodford, M., & Trudeau, E. J. (1991). How each reinforcer contributes to value: "Noise" must reduce reinforcer value hyperbolically.

4. How Stage and Value Explain the Morally Questionable Basis of Expert Witnesses
• This paper is an empirical study showing that the stage of behavior required by a situation is related to the perceived biasing of that situation. When experts are hired to give an opinion in court, for example, the ideal is that they be unbiased when issuing their opinion. In the current study, forensic experts were asked to what extent various situations that experts might find themselves in could cause the resulting opinions of the expert to be biased. This paper is an empirical test of part of the stage and value model, because it uses the hierarchical complexity of the items to empirically predict moral action. In addition, the degree of bias that was perceived and the stage requirements of the items that reflected perceived bias are related. The moral question is: how do expert witnesses perceive the possible biases of their fellow expert witnesses? Participants, who were attendees at a workshop of the American Academy of Psychiatry and the Law, were asked to rate a number of situations that might affect the behavior of an opposing expert for their biasing potential. A Rasch analysis produced a linear scale of the perceived biasing potential of these different kinds of situations, from the most biasing to the least biasing. Working for only one side in both civil and criminal cases had large scaled values, which means those situations were seen as highly biasing; they were also the first factor in a factor analysis. In interesting contrast, a) an opposing expert also serving as the litigant's treater, and b) an opposing expert being viewed as a "hired gun" (supplying an opinion only for money), were two situations viewed as not very biasing.
In a regression analysis, the order of hierarchical complexity of an item predicted the perceived bias of the items from the 1st, 2nd, and 3rd factors.

How Do Forensic Expert Witnesses Perceive the Possible Biases of Their Fellow Expert Witnesses?
Patrice Marie Miller, Ed.D.; Thomas Gordon Gutheil, M.D.; Michael Lamport Commons
Harvard Medical School

• An expert witness is a person with education or training in his or her area of expertise whose opinion is regarded as worthy of reliance in a legal case
• The objectivity that an expert witness brings to the legal system is the most valued quality of an expert
• One of the most challenging but necessary ideals for expert witnesses to uphold, therefore, is dealing with "expert bias"
• Expert bias is seen as a deviation from the "ideal" of neutral, balanced assessments, judgments, and the like

• In our previous work on expert bias, we showed that the expert witnesses in our survey perceived the existence of a good deal of such bias
• Perceived bias here is operationally defined as how strongly biasing the study participants found certain situations to be
• This allowed the conclusion that some situations were perceived by experts to be more biasing than others, but there was no way to ascertain specifically how much more or less biasing each situation was perceived to be

• The purpose of the current study is to find out how potentially biasing each of the situations is perceived to be on a ruler-like scale that jurors may readily understand, using a technique called Rasch analysis
• We hoped that forensic experts might benefit from being informed of the perceived degree of seriousness of various biasing situations
• With such information, forensic experts can consider altering their own behavior and/or informing the jury of the seriousness of biases that the other side may hold

• 46 participants
‒ 81.4% (35 out of 43) were M.D.'s
‒ The average number of years in forensic practice was 11.34 (SD = 9.32)
‒ The annual number of
forensic cases was 48.82 (SD = 79.07)
• Data were analyzed using Rasch analysis

The Model of Hierarchical Complexity classifies tasks as to their complexity
• A task action is defined as more hierarchically complex when the higher-order action:
– Is defined in terms of two or more actions from the next lower order
– Organizes these lower-order actions
– Does so in a non-arbitrary way

Instrument
• In a questionnaire, subjects were asked to think of recent cases in which they had served as expert witnesses as they answered the questions
• These snippets from the cases were rated on six-point scales as to how biasing they were
‒ The first series of queries was on the issue of an expert's influence on case outcomes and the subjects' emotional reactions to those outcomes
‒ The next series of queries asked subjects to identify potentially biasing factors, from least biasing to most biasing, such as money, prestige, and high-profile cases
‒ The final series of queries focused on expert attitudes toward bias and biasing factors, such as money, prestige, and high-profile cases
‒ The majority of questions were asked with regard to "opposing experts"

Rasch Analysis Ranked Factors That Participants Rated as Potentially Biasing
• The most biasing situations:
‒ The frequency with which a respondent will turn down cases that evoke personal discomfort (Rasch score -1.55)
‒ Certain expert witnesses testified consistently for only one side (e.g.
plaintiff-only in civil cases, -.89; prosecution-only in criminal cases, -.87; defense-only in criminal cases, -.83; defense-only in civil cases, -.79)

• The least biasing situations:
‒ The opposing expert had been the examinee's treater (1.99)
‒ The respondent had assessed the expert witness on the opposite side to be a hired gun (1.6)
‒ Respondents' assessment of their own degree of happiness in a given case in which they had testified "appropriately," but that side of the case lost with a possibly unjust outcome (.97)
• Note that "possibly unjust outcome," as used in the instrument, may itself be subject to interpretation bias

• Situations that did not fit:
‒ The respondent was asked whether they had ever decided to take action after concluding that an expert witness on the side opposite from the respondent's had acted unprofessionally during the course of a case

Stage of Pricing Strategy Predicts Earnings: A Study of Informal Economics
Lucas A. H. Commons-Miller, Dare Institute; Hudson Fernandes Golino, Universidade Federal de Minas Gerais; Michael Lamport Commons, Harvard Medical School

Abstract
• This is the first cross-cultural study of stages of development on economic tasks
• Studying informal economies across cultures allows us to test the stage of pricing strategies used by people of varying levels of education, ranging from no schooling to completion of college, and at different stages of development, ranging from primary operations to metasystematic
• We found that the hierarchical complexity of their pricing strategies correlated with how much they earned and the assets they accumulated

Method
• Interviews of 51 peddlers and carters were conducted:
– 33 (64.7%) were from Rio de Janeiro and Belo Horizonte, Brazil
– 18 (35.3%) were in Richmond, California, and Dorchester, Massachusetts
• 35 (68.6%) were male
• 14 (27.5%) were female
• The gender of two subjects was not recorded (3.9%)
• Participants' ages ranged from 17 to 85, with M = 47.1 (SD = 14.13)

Procedure
• Participants were asked questions such as:
– "How do you set your prices?"
– "Do you earn more now than you did in the past?"
– "Do you know what others charge?"
• Participants were also asked questions about their health and habits, and some data were inferred by the experimenter through observation

Pricing Strategy Scoring Manual
• At the Primary stage, peddlers do not control their prices
– Either someone sets the price for them
– Or they take whatever someone offers
• At the Concrete stage, they set their price by adding an amount to the price they paid, or by negotiating with the buyer
• At the Abstract stage, they set their prices based on norms
– They know what others are charging, and they match or beat that price
• At the Formal stage, they set prices based on a proportional markup
– They add a proportion of what they paid for the goods to the price
• Most people who employ this strategy use numerical percentages
• One participant had a formal concept of proportionality based on magnitude and estimation

Pricing Strategy Scoring 2
• At the Systematic stage, people use multiple factors to set prices
– They usually have a marketing strategy
– They understand how the markets they sell in work
• They especially understand the markets in which they trade
• At the Metasystematic stage, pricing strategies involve being able to compare and choose between different business models
– The one metasystematic-performing participant did not sell his wares on the street; he traveled from city to city
– He had a complex business model
• He understood what niche of the jewelry industry his business was filling

Results
Correlations among the variables (each cell shows r, two-tailed p, N):

                     Stage               Country             Education
Country              .196, .182, 48
Education            .258, .091, 44      .575**, .000, 45
Earnings Category    .506**, .000, 45    .581**, .000, 45    .568**, .000, 41

** Correlation is significant at the 0.01 level (2-tailed)

Results
• In the correlation analysis, Stage alone predicted earnings significantly, r = .506 (Fig. 1: earnings category by stage)
• Stage, however, did not perform much differently than the other two predictors on their own, and the differences were not significant:
– Country: r = .581
– Education: r = .568

[Fig. 1: Earnings category by stage. The y-axis shows earnings category (1 = $0-4 per day; 2 = $4-16; 3 = $16-64; 4 = $64-256; 5 = $256-1,024; 6 = $1,024-4,096); the x-axis shows stage, from 7 to 12.]

Results
• A multiple linear regression of the 3 main predicting variables (Stage, Country, and Education) showed that all three together predicted earnings, r(37) = .738, p < .001
• Stage was a little better than the other two factors; the betas were:
– β = .367 (Stage)
– β = .353 (Country)
– β = .270 (Education)
• The multiple regression is shown in the table below (Fig. 6):

Model 1      B         Std. Error   Beta    t        Sig.   Tolerance   VIF
Constant     -2.407    1.105                -2.178   .036
Stage        .388      .122         .367    3.192    .003   .930        1.075
Country      .881      .339         .353    2.601    .013   .667        1.499
Education    .254      .130         .270    1.961    .057   .647        1.544

Discussion
• The stage of pricing strategy predicted earnings
– It did so better than the other two main variables:
• Country
• Education
• The r = .575 between country and education probably indicates that it is easier to obtain an education in the United States than in Brazil
– This shows some collinearity
• Stage, country, and education together predicted earnings slightly better than stage alone
– Country predicted education
– Country and education did not predict stage
– This further supports that stage was the main predicting factor in earnings
• This is the first study showing a stage effect in behavioral economics
• This is further evidence that stage is applicable in a number of fields and across cultures
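As a closing illustration, the stage-and-value machinery described in the earlier slides (the preference column matrix [Q], the reinforcer value row matrix [P], and the likelihood proxy obtained by dividing a weighted stage score u by a task's difficulty D) can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the specific numbers, the three-domain indexing, and the use of a plain inner product to apply the dual space map [P] to [Q] are illustrative assumptions.

```python
# Hypothetical preference vector [Q]: the organism's valuation of each
# reinforcement domain, indexed here as (food, social, money).
Q = [0.9, 0.1, 0.3]   # e.g., a food-deprived organism values food highly

# Hypothetical reinforcer value row matrix [P] for a piece of meat:
# high value in the food domain, near zero elsewhere.
P = [0.8, 0.0, 0.05]

# Applying the dual space map [P] to [Q] (an inner product) yields a
# scalar value of this reinforcer to this organism.
value = sum(p * q for p, q in zip(P, Q))

# Likelihood proxy: weighted stage score u divided by task difficulty D,
# with D = OHC, the task's order of hierarchical complexity.
u = 10.0   # hypothetical weighted stage score (a Formal-stage performer)
D = 11.0   # hypothetical task difficulty (a Systematic-order task)
likelihood_proxy = u / D

print(round(value, 3))             # 0.735
print(round(likelihood_proxy, 3))  # 0.909
```

In the full model, subscripts i and j would index administrations and reinforcer types, so a table of such likelihoods over all i and j would form the "Behavior Matrix" mentioned in the outline.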