EVALUATING A FINANCIAL LITERACY PROGRAM: THE CASE OF THE
AUSTRALIAN MONEYMINDED PROGRAM
Roslyn Russell *, Robert Brooks ** and Aruna Nair *
* Research Development Unit, RMIT Business
** Department of Econometrics and Business Statistics, Monash University
ABSTRACT
The promotion of higher levels of financial inclusion has become a major issue of policy concern and action
across a number of countries, including the US, UK and Australia. Typically programs to improve financial
inclusion involve either a matched savings program and/or financial literacy education. In the context of
financial literacy education, Fox, Bartholomae and Lee (2005) have recently proposed a five tiered approach
to program evaluation. This paper provides an illustration of the application of this approach in the context of
a particular Australian financial literacy program, specifically the MoneyMinded program. The results
demonstrate the general applicability of the framework. Further illustrations are provided on how statistical
techniques might be used, and how longer-term impact might be assessed.
Correspondence to: Robert Brooks, Department of Econometrics and Business Statistics, Monash
University, PO Box 1071, Narre Warren Victoria 3805, Australia Phone: 61-3-9904 7076, Fax: 61-3-9904
7225,
Email: robert.brooks@buseco.monash.edu.au
Acknowledgements: We wish to acknowledge the financial support of the ANZ Banking Corporation in the
conduct of this research. We also wish to thank the participants and facilitators in the MoneyMinded program
for their willing provision of evaluation data on their experiences. In addition we also wish to thank two
anonymous reviewers for their helpful comments on an earlier version of this paper.
Introduction
The promotion of greater levels of financial inclusion is now seen to be a major public policy and corporate
initiative across a variety of countries. The core elements of programs to promote greater financial inclusion
appear to include matched savings programs and financial literacy education. In the United States such
programs include the American Dream Demonstration and its emphasis on the role of Individual
Development Accounts in asset accumulation (see Sherraden (1991, 2000); Beverley and Sherraden
(2001); Schreiner, Clancy and Sherraden (2002)). In the United Kingdom such programs include the Saving
Gateway (see Kempson, McKay and Collard (2003)), while in Australia such programs include Saver Plus
(see Russell and Fredline (2004); Russell and Wakeford (2005); Russell, Brooks, Nair and Fredline (2005,
2006)). With a range of programs on offer across a variety of countries, effective evaluation of their role in
improving financial inclusion is essential.
In the context of financial literacy education, Fox, Bartholomae and Lee (2005) have recently provided an
overview of the wide range of financial literacy education programs available in the United States and
discussed their impacts. Fox, Bartholomae and Lee (2005) also discuss the evaluation of such programs,
noting that to date no single framework appears to guide the evaluation process. They then extend the
framework of Jacobs (1988) to develop a five tiered evaluation framework against which evaluations of
financial literacy education programs can be conducted. The framework proposed by Fox, Bartholomae and
Lee (2005) is general in its potential application. As such, the purpose of the present paper is to explore this
evaluation framework in the context of a financial literacy program designed and developed in a different
country, specifically the MoneyMinded financial literacy program that is offered in Australia. This provides an
extension to the work of Fox, Bartholomae and Lee (2005) that developed and discussed their framework in
the context of United States programs. The consideration of a program from outside the United States thus
provides information on the generalisability of the framework. Further, the results are likely to be of interest
across a range of countries given the comparable initiatives relating to financial inclusion across a variety of
countries.
The plan of this paper is as follows. Section two provides an overview of the evaluation framework that is
developed and discussed in Fox, Bartholomae and Lee (2005). Section three discusses the evaluation of the
MoneyMinded program undertaken by Russell, Brooks and Nair (2005) in the context of that framework.
Section four then contains some concluding remarks, which can be drawn from this wider application of the
framework.
The Evaluation Framework
A key component in an effective financial education program is an evaluation of the effectiveness of such a
program. To aid such a process, Fox, Bartholomae and Lee (2005) propose a five-tiered evaluation
framework based on the work of Jacobs (1988). The five tiers in this framework are: (1) pre-implementation,
(2) accountability, (3) program clarification, (4) progress towards objectives and (5) program impact.
The pre-implementation stage involves a needs assessment on the financial literacy training required by
participants; this might be in terms of needs identified from more general financial literacy testing and/or a
specific needs assessment of program participants. For a US example targeted at low income groups
see the evaluation of the FLLIP program reported in Anderson, Zhan and Scott (2004). The accountability
stage involves the collection of data on the education and services provided as part of the program, the costs
involved and demographic information on participants. The program clarification stage involves ongoing
assessment of the program’s strengths and weaknesses, and plans to review the program in the light of such
assessments. For US examples of such evaluations see Anderson, Zhan and Scott (2004) and Lyons and
Scherpf (2004). The progress towards the objectives stage involves the collection of data on the impacts of
the program on participants. For a US example of the need to clearly define program impact goals in the
context of the FDIC Money Smart program see Lyons and Scherpf (2004). Finally, the program impact stage
builds on the previous stage in terms of long and short-term impacts, typically involving comparison of
participants and non-participants.
The Fox, Bartholomae and Lee (2005) framework is potentially very general in terms of its ability to evaluate
financial literacy programs. In particular, there is nothing about the framework that is particularly specific to
the US context in which it was initially proposed. Therefore the use of the framework to discuss the
evaluation of an Australian financial literacy program is valuable in informing as to the general applicability of
the framework.
Evaluating the MoneyMinded program
Having outlined the five tiered approach to program evaluation advocated in Fox, Bartholomae and Lee
(2005), this approach is now applied to considering evaluation of the Australian MoneyMinded program. The
MoneyMinded program has a broad objective of assisting people to make informed financial decisions and
take control of their finances for their future. The MoneyMinded program consists of 17 workshops which are
grouped into six key topic areas, specifically: Planning and saving, Easy payments, Understanding
paperwork, Living with debt, Everyday banking and financial products, and Rights and responsibilities. The
program allows individual participants to choose which workshops they wish to complete. The program was
developed, in part, as a response to the results from the first-ever major survey of consumer financial literacy
in Australia (see ANZ (2003)). The results of this survey identified both those groups within the community
that could benefit from improved financial literacy education and also the issues that could be addressed in
such an education program. Further details on the program and its objectives are available on the
MoneyMinded website (http://www.moneyminded.com.au/). The MoneyMinded program is currently at the
initial roll-out stage, with training delivery facilitated by five major community groups.
Previous research has found that delivery of a matched savings program through the involvement of
community groups has been successful (for details see Russell and Fredline (2004); Russell and Wakeford
(2005); Russell, Brooks, Nair and Fredline (2005, 2006)), and as such that approach of involving community
groups in the delivery of the program has also been extended to financial literacy education. The long-term
plan of ANZ is to partner with around 100 community organisations during the next five years to deliver the
MoneyMinded program to 100,000 people nationally. Further details on the current status of MoneyMinded
and the future plans can be found on the ANZ website
(http://www.anz.com/aus/aboutanz/Community/Programs/MoneyMinded.asp). Given these long-term plans
for such a comprehensive and extensive delivery of the program, an evaluation of the initial roll-out would
seem critical. In fact, such an evaluation is consistent with Fox, Bartholomae and Lee's (2005) observation,
following Jacobs (1988), of the importance of evaluation data being collected and analysed in a systematic
manner.
Having briefly outlined the MoneyMinded program, attention now turns to considering its evaluation in terms
of the five tiered approach. The first stage in this approach is the pre-implementation needs assessment.
This needs assessment did not in the first instance involve a formal needs analysis at the individual
participant level but was instead based more generally on the results of the financial literacy survey reported in
ANZ (2003). The ANZ (2003) financial literacy survey involved asking people questions about financial
literacy via a telephone survey customised to their particular needs and circumstances, as distinct from
asking general questions about financial literacy that may not have been relevant to particular individuals.
The ANZ (2003) survey identified that the following groups were strongly over-represented in the lowest
financial literacy quintile: lowest education levels, not working or in unskilled work, lower income levels, lower
savings levels, single, and those at the extreme ends of the age profile. There was also a modest over-representation of females. A selection of these quintile results is summarised in table 1. These results
suggest a targeting of the program at these over-represented groups. In the context of the MoneyMinded
program this has been achieved through the involvement of community groups in facilitating delivery to
participants. In the context of the matched savings program Saver Plus, this involvement of community
groups has also proven to be successful in the recruitment of participants (for details see Russell and
Fredline (2004); Russell, Brooks, Nair and Fredline (2005, 2006)).
There are other elements of the MoneyMinded program that have also addressed needs assessment at the
pre-implementation stage. The program required participants to complete a pre-training questionnaire. While
this questionnaire captured a range of demographic information (which is relevant for the accountability
stage in the Fox, Bartholomae and Lee (2005) framework), it also captured data on the areas that
participants identified as where they most needed financial literacy education in terms of the modules offered
as part of the MoneyMinded program. This information is summarised in table 2 and further qualitative
information is available in Russell, Brooks and Nair (2005). This, together with the fact that the MoneyMinded
program is composed of individual workshops that participants choose to attend at the individual level,
suggests that the design performs well on the criteria relevant to the pre-implementation stage.
The accountability stage of the evaluation framework requires that details be collected on a range of matters
including the education and services provided and base participant information. The details on the education
and services provided as part of the program are available on the MoneyMinded website
(http://www.moneyminded.com.au/workshops/default.asp). In terms of basic participant information, the pre-training questionnaire collected data on demographics (including gender, age, education levels, income,
employment), mathematical and computer literacy (the results in ANZ (2003) suggest that these are
important factors in financial literacy), internet usage, current savings and spending behaviour, savings
intentions, and current usage of a range of financial products. Further, the pre-training questionnaire also
captured data on the financial literacy of participants by asking questions on their understanding of credit
card liability, their understanding of financial products, their understanding and approaches to bill payment,
how they would handle unexpected charges by their banks and their attitude to debt. A summary of the
socio-demographic data collected is reported in table 3 and more details on this data and a summary of
results are available in Russell, Brooks and Nair (2005). The collection of such data provides the potential for
a rich analysis to inform the program clarification, progress towards objectives and program outcome stages
of any evaluation by allowing an exploration of the influences of these demographic and financial factors.
This type of data collection is also broadly consistent with the data collected in the US studies reviewed by
Fox, Bartholomae and Lee (2005).
The program clarification stage requires an assessment of the program strengths and weaknesses, and the
use of such information in any re-assessment of program goals and objectives. This element is also present
in the MoneyMinded evaluation. While the long term plan for MoneyMinded is for a large scale delivery, the
initial roll out to a small number of community organisations and participants allows for the type of
assessment and re-thinking that is at the core of the program clarification stage. In addition to collecting pre-training data from participants, the MoneyMinded evaluation also involved the administration of post-training
questionnaires to participants. This questionnaire required participants to provide an assessment of each
workshop completed in terms of how useful they found the workshop to be, and how satisfied they were with
the topics and material covered in the workshop. This data took the form of an ordinal rating on a seven point
scale. The post-training questionnaire also required an assessment of overall program satisfaction, again in
the form of an ordinal rating on a seven point scale. This post-training evaluation data when coupled with the
pre-training data provides the scope for a rich analysis at the program clarification stage. The results
reported in Russell, Brooks and Nair (2005) show high overall ratings in terms of satisfaction and usefulness.
More specifically, in terms of the workshop level data the results showed high usefulness ratings (40.6% of
participants found the workshops to be extremely useful, while another 46.7% of participants found the
workshops to be useful), and high satisfaction ratings (42.8% of participants were very satisfied with the
topics and material covered in the workshop, while another 47.2% of participants were satisfied with the
topics and material covered). These high ratings at the workshop level also transferred to high levels of
satisfaction with the overall program (45.3% of participants reported being very satisfied with the overall
program, while another 38.7% of participants reported being satisfied with the overall program).
While these high ratings in terms of usefulness and satisfaction provide some comfort in terms of the quality
of the overall program and the constituent workshops, a richer analysis for the program clarification is
potentially provided by considering whether there are differences in the satisfaction and usefulness ratings
across categories in the demographic and financial data collected in the pre-training questionnaire. In this
context, Russell, Brooks and Nair (2005) found statistically significant differences in terms of usefulness and
satisfaction ratings across participant categories. More specifically, Russell, Brooks and Nair (2005) found
that higher ratings were provided by those in the lower categories of income, education, employment status,
mathematical and computer literacy, savings levels and intentions, and base financial literacy. A selection of
these results is reported in table 4, with the full set of detailed results
available in Russell, Brooks and Nair (2005). Linking this result back to the needs assessment at the pre-implementation stage provides richer information on the program, over and above that provided by the
satisfaction and usefulness ratings at the aggregate level.
The bulk of the data collected in both the pre-training and post-training questionnaires used in the
MoneyMinded evaluation conducted by Russell, Brooks and Nair (2005) is categorical in nature. Therefore
the statistical testing was conducted using the χ2 testing framework appropriate to such data. An obvious
issue in any formal statistical testing is whether the data collected satisfies the assumptions required of the
statistical procedure being used. In terms of χ2 testing of categorical data the key assumption concerns the
expected cell counts, where a conservative rule of thumb is an expected count of at least 5 in each cell for the
asymptotic χ2 approximation to work. With an evaluation instrument allowing for ratings on a seven point
scale, the evaluation being focused on the initial roll-out, and the heavy concentration of ratings at the high
end of the spectrum, this requirement was not satisfied in the initial data. There are two approaches available to
those doing an evaluation in this context. The first approach is to aggregate cells (in this case rating
categories) together and then conduct the tests. Given the MoneyMinded ratings this involved aggregating
together the lower usefulness and satisfaction ratings categories. The second approach is to conduct a
simulation analysis of the exact distribution of the test statistic typically involving some Monte Carlo
technique, a feature that is now available in many standard software packages, including SPSS. Overall, this
suggests that appropriate statistical analysis can be beneficial at the program clarification stage of a
structured evaluation process.
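To make these two approaches concrete, the following sketch (in Python, using numpy and scipy, with an entirely invented rating-by-group contingency table; the original analysis was carried out in SPSS) illustrates first the aggregation of sparse rating categories before applying the asymptotic χ2 test, and then a Monte Carlo approximation to the null distribution of the Pearson statistic:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows are two participant groups, columns
    # are the seven rating categories (1 = lowest ... 7 = highest).  The counts
    # are invented purely for illustration; they are not the MoneyMinded data.
    table = np.array([
        [0, 1, 0, 2, 4, 18, 25],
        [1, 0, 2, 3, 9, 30, 20],
    ])

    # Approach 1: aggregate the sparse lower rating categories (1-5) into one
    # cell so expected counts meet the rule of thumb of at least 5 per cell,
    # then apply the usual asymptotic chi-square test of independence.
    aggregated = np.column_stack([table[:, :5].sum(axis=1), table[:, 5], table[:, 6]])
    chi2_agg, p_agg, dof_agg, _ = chi2_contingency(aggregated)
    print(f"Aggregated categories: chi2 = {chi2_agg:.3f}, df = {dof_agg}, p = {p_agg:.3f}")

    # Approach 2: approximate the null distribution of the Pearson statistic by
    # Monte Carlo simulation (a parametric bootstrap under independence, with
    # margins estimated from the observed table), rather than relying on the
    # asymptotic chi-square approximation.
    row_tot, col_tot, n = table.sum(axis=1), table.sum(axis=0), table.sum()
    expected = np.outer(row_tot, col_tot) / n

    def pearson(t):
        return ((t - expected) ** 2 / expected).sum()

    rng = np.random.default_rng(0)
    probs = (expected / n).ravel()
    sims = rng.multinomial(n, probs, size=10_000).reshape(-1, *table.shape)
    sim_stats = np.array([pearson(s) for s in sims])
    p_mc = (sim_stats >= pearson(table)).mean()
    print(f"Monte Carlo p-value for the full seven-category table: {p_mc:.3f}")

In practice the simulated p-value and the aggregated-category test will often point to the same qualitative conclusion; the value of reporting both is in showing that a finding is not an artefact of small expected cell counts.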
The post-training questionnaire also provided useful data on some dimensions of program delivery. A
website of resources was made available to participants, however the results suggested that participants
made little use of the website. Russell, Brooks and Nair (2005) found that 92% of participants did not access
the website. This is a striking result, as it suggests that the use of technology in delivery should be
assessed, and not just assumed, as a design feature. The post-training questionnaire also captured data
on the number of individual workshops that participants attended. The median number of workshops
attended by participants was 2. The majority of participants attended only a single workshop, while a small
number of participants attended 12 workshops. Interestingly the results in Russell, Brooks and Nair (2005)
show that participants who attended multiple workshops had higher usefulness and satisfaction ratings than
those participants who attended a single workshop. Further, these differences in the ratings are statistically
significant. This suggests that consideration be given in the program design to encouraging attendance at
multiple workshops via greater bundling, while recognising that workshops are already bundled by broad
topic area. This is useful and constructive information to obtain at the program clarification stage of an
evaluation. There is however an issue in the statistical analysis of the ratings, where individuals provide
multiple ratings across a number of workshops, in that such responses are likely to be correlated across the
workshops attended by that individual. This issue is explored in the context of a cluster sampling framework
by Fry, Mihajilo, Russell and Brooks (2006).
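As a minimal sketch of how that within-participant correlation might be accommodated (assuming, purely for illustration, simulated data, a rating treated as approximately continuous, and cluster-robust standard errors computed with Python's statsmodels; the cited study instead works within a cluster sampling framework, so this is only one possible treatment):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data: each participant rates several workshops, so ratings are
    # correlated within participant.  All variable names and effect sizes are
    # assumptions made for illustration only.
    rng = np.random.default_rng(1)
    rows = []
    for pid in range(120):
        person_effect = rng.normal(0, 0.8)     # shared within-person component
        n_workshops = rng.integers(1, 6)       # workshops attended (1 to 5)
        for _ in range(n_workshops):
            rating = 5 + 0.3 * (n_workshops > 1) + person_effect + rng.normal(0, 1)
            rows.append({"participant": pid,
                         "multiple": int(n_workshops > 1),
                         "rating": rating})
    df = pd.DataFrame(rows)

    # Naive OLS ignores the within-participant correlation; clustering the
    # standard errors on participant accounts for it.
    naive = smf.ols("rating ~ multiple", data=df).fit()
    clustered = smf.ols("rating ~ multiple", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["participant"]})
    print("naive SE:    ", naive.bse["multiple"])
    print("clustered SE:", clustered.bse["multiple"])

The clustered standard error will typically be larger than the naive one, which is the practical consequence of treating each workshop-level rating as if it were an independent observation.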
The assessment of MoneyMinded also involved the completion of questionnaires by the facilitators who
actually delivered the financial literacy training, in terms of their expected usefulness of the workshops prior
to delivery, and then their perceptions of participant usefulness post-delivery. In both of these assessments
Russell, Brooks and Nair (2005) found that the ratings were high, and were broadly consistent with the
ratings provided by participants. This again is useful data at the program clarification stage in informing how
the program goes forward.
The final data collected of relevance to the program clarification stage was questionnaire data from both
participants and facilitators as regards improvements that could be made to the program in terms of
expanding the topic coverage. In this regard, a number of responses suggested the need for workshops on
investment, superannuation and retirement planning. To some extent this is to be expected in the Australian
context where a major emphasis of government policy in recent years has been to move the funding of
retirement incomes from the pension system out of government funds to private funding via long-term
superannuation investments. For some discussion of this policy focus see McKeown (2001).
The fourth stage of the Fox, Bartholomae and Lee (2005) approach to evaluation is in terms of progress
towards objectives. Clearly, the objective of a financial literacy education program is in terms of improved
financial literacy, and as a consequence better financial decision making by participants. However in the
context of an initial evaluation of the roll-out of what is to become a wider program, such a stage is difficult to
action effectively, except through interpretation of the ratings of satisfaction and usefulness that have already
been utilised at the program clarification stage.
Following on from progress towards objectives is the final stage of program impact, both in a short-term and
long-term sense. Clearly, here one is looking for wider demonstrable evidence in terms of improved financial
literacy and better financial decision making by those who have participated in financial literacy programs.
The recommendation in Fox, Bartholomae and Lee (2005) is that this measurement of impact be done
relative to a control group that have not participated in such training. This stage is clearly not relevant in the
current assessment of MoneyMinded, given that the program is only at an initial roll-out stage. In fact, Fox,
Bartholomae and Lee (2005) note that this is characteristic of most programs even in the US, thus making it
premature to make any definitive statements as regards the impacts of financial literacy education. However,
in that context, it is worthwhile speculating as to how such an evaluation might be built for use in the future.
The needs assessment for MoneyMinded derives in part from the ANZ (2003) survey on adult financial
literacy. One mechanism to analyse longer-term program impacts would be to make sure that future broad
surveys of financial literacy include questions for the respondents on whether they have previously
participated in a financial literacy education program. One could then compare measures of financial literacy
across groups that have/have not undertaken financial literacy education, after making appropriate
adjustments for other socio-economic and demographic factors that are known to influence financial literacy.
This would have the potential to provide for a longer term assessment. A further mechanism to measure
program impacts would be in terms of the impacts on savings behaviour of participants who have been
through financial literacy education programs. The current Saver Plus program includes some base financial
literacy education (for details see Russell and Fredline (2004); Russell, Brooks, Nair and Fredline (2005,
2006)). Thus a longer-term analysis of the savings behaviour of participants in a matched savings program
would provide evidence of the joint impacts on financial decision making of participation in both a matched
savings program and financial literacy education.
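As a hedged sketch of how the survey-based comparison might be operationalised (all variable names, data and effect sizes below are invented for illustration), one could regress a survey-based financial literacy score on an indicator for prior participation in financial literacy education together with socio-economic and demographic controls:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey extract: a financial literacy score, an indicator for
    # prior participation in financial literacy education, and a few of the
    # socio-economic and demographic controls known to influence financial
    # literacy.  Everything here is simulated.
    rng = np.random.default_rng(2)
    n = 1000
    df = pd.DataFrame({
        "participated": rng.integers(0, 2, n),
        "age": rng.integers(18, 75, n),
        "education_yrs": rng.integers(8, 18, n),
        "log_income": rng.normal(10.5, 0.5, n),
    })
    df["literacy_score"] = (
        40
        + 3.0 * df["participated"]      # hypothesised education effect
        + 0.1 * df["age"]
        + 0.8 * df["education_yrs"]
        + 2.0 * df["log_income"]
        + rng.normal(0, 8, n)
    )

    # Compare literacy across those who have and have not undertaken financial
    # literacy education, adjusting for the observed controls.
    model = smf.ols(
        "literacy_score ~ participated + age + education_yrs + log_income",
        data=df).fit()
    print(model.summary().tables[1])

Such a regression adjusts only for observed differences; selection into financial literacy education on unobserved characteristics would remain a concern in any non-experimental comparison, which is one reason Fox, Bartholomae and Lee (2005) emphasise comparison against a control group of non-participants.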
Conclusion
This paper has conducted an exploration of the Fox, Bartholomae and Lee (2005) framework for the
evaluation of financial literacy education programs in the context of a specific program, the Australian
MoneyMinded program. In general, the framework developed in the context of US programs appears to
generalise well to the Australian context, thus providing support to the general applicability of the framework.
In applying the framework the paper also illustrates how formal statistical evaluation might be used at certain
stages. Further, the paper speculates on how measurement of the program's longer-term impact might be
conducted.
References
• Anderson, S., Zhan, M. and Scott, J. (2004), Targeting Financial Management Training at Low Income Audiences, Journal of Consumer Affairs 38, 167-177.
• ANZ Banking Group (2003), ANZ Survey of Adult Financial Literacy in Australia, Roy Morgan Research.
• ANZ Banking Group (2005), ANZ Survey of Adult Financial Literacy in Australia, AC Nielsen.
• Beverley, S. and Sherraden, M. (2001), How People Save and the Role of IDAs, in R. Boshara (ed.), Building Assets: A Report on the Asset Development and IDA Field, Corporation for Enterprise Development.
• Fox, J., Bartholomae, S. and Lee, J. (2005), Building the Case for Financial Education, Journal of Consumer Affairs 39, 195-214.
• Jacobs, F. (1988), The Five Tiered Approach to Evaluation: Context and Implementation, in H. Weiss and F. Jacobs (eds.), Evaluating Family Programs, New York: Aldine de Gruyter.
• Kempson, E., McKay, S. and Collard, S. (2003), Evaluation of the CFLI and Saving Gateway Pilot Projects, Interim Report on the Saving Gateway Pilot Project, Personal Finance Research Centre, University of Bristol.
• Lyons, A. and Scherpf, E. (2004), Moving from Unbanked to Banked: Evidence from the Money Smart Program, Financial Services Review 13, 215-231.
• McKeown, W. (2001), Superannuation, Chapter 12 in D. Beal and W. McKeown (eds.), Personal Finance, John Wiley and Sons.
• Russell, R., Brooks, R. and Nair, A. (2005), Evaluation of MoneyMinded: An Adult Financial Education Program, Melbourne: RMIT University.
• Russell, R., Brooks, R., Nair, A. and Fredline, L. (2005), Saver Plus Improving Financial Literacy Through Encouraging Savings: Evaluation of the Saver Plus Pilot Phase 1 - Final Report, Melbourne: RMIT University.
• Russell, R., Brooks, R., Nair, A. and Fredline, L. (2006), The Initial Impacts of a Matched Savings Program: The Saver Plus Program, Economic Papers 25, 32-40.
• Russell, R. and Fredline, L. (2004), Saver Plus Progress and Perspectives: Evaluation of the Saver Plus Pilot Project Interim Report, Melbourne: RMIT University.
• Russell, R. and Wakeford, M. (2005), Saver Plus: Saving for the Future, Proceedings of the Transition and Risk: New Directions for Social Policy Conference, Centre for Public Policy, University of Melbourne.
• Schreiner, M., Clancy, M. and Sherraden, M. (2002), Saving Performance in the American Dream Demonstration: A National Demonstration of Individual Development Accounts, Final Report, Center for Social Development, Washington University.
• Sherraden, M. (1991), Assets and the Poor: A New American Welfare Policy, Armonk, NY: M.E. Sharpe Inc.
• Sherraden, M. (2000), From Research to Policy: Lessons From Individual Development Accounts, Journal of Consumer Affairs 34, 159-181.
Table 1: Representation of certain groups in the lowest financial literacy quintile
This table reports details of a selection of the groups that are over-represented in the lowest financial literacy quintiles of the ANZ financial literacy surveys in 2003 and 2005. The table reports the percentage of respondents in the lowest financial literacy quintile for a series of socio-demographic groups.

Group                        2003 Survey (%)   2005 Survey (%)
Females                      24                25
Education Less than Yr 10    42                43
Looking for Work             32                31
Semi-skilled                 28                34
Unskilled                    40                38
Single Living Alone          26                26
Aged 18-24                   31                33
Aged 70+                     31                37
Renting                      29                27

Data Sources: ANZ Surveys of Adult Financial Literacy in Australia 2003 (p.5) & 2005 (p.22).
Table 2: Areas of interest for greater financial knowledge amongst participants in terms of MoneyMinded modules
This table reports the MoneyMinded modules identified by participants as being areas where they had an interest in acquiring greater financial knowledge.

Module Area                                    Number of Interested Participants   % of Total Participants
Planning and saving                            101                                 71.1
Living with and managing your debt             79                                  55.6
Your consumer rights and responsibilities      68                                  47.9
Understanding financial paperwork              68                                  47.9
Your everyday banking and financial products   61                                  43.0
Your payment options                           51                                  35.9

Source: Russell, Brooks and Nair (2005), Table 4 – p.12.
Table 3: Selected characteristics of the MoneyMinded participants
This table reports the characteristics of the MoneyMinded participants across a range of socio-demographic characteristics.

Characteristic                Number of Participants   % of Total
Gender
  Male                        19                       13.4
  Female                      123                      86.6
Age
  15-34                       34                       24.3
  35-44                       63                       45.0
  45-54                       29                       20.7
  55+                         14                       10.0
Education
  Primary/Some Secondary      30                       22.8
  All Secondary               24                       18.2
  TAFE/Workplace Training     32                       24.2
  University                  46                       32.3
Employment Status
  Full time                   50                       35.4
  Part time                   45                       31.9
  Casual                      10                       7.1
  Unemployed/Not looking      10                       7.1
  Student                     1                        0.7
  Home duties                 15                       10.6
  Retired                     10                       7.1

Source: Russell, Brooks and Nair (2005), Table 1 – p.8; Table 2 – p.10.
Table 4: Selected comparison of satisfaction and usefulness of the overall program and workshops
This table presents a selection of the results on whether the ratings of satisfaction and usefulness provided by participants vary across the socio-demographic, financial product usage and financial literacy groupings.

Group                    χ2 d.f.   χ2 value   p-value
Program Satisfaction
  Income Source          2         5.282      0.071
  Employment Status      4         11.692     0.020
  Income Level           6         12.823     0.046
  Savings Intentions     6         12.634     0.049
  Savings Level          6         4.756      0.576
  Spending Behaviour     6         7.699      0.261
  Understanding Bills    2         1.641      0.440
  Credit card usage      2         5.700      0.058
Workshop Usefulness
  Income Source          2         14.345     0.001
  Employment Status      4         29.169     0.000
  Income Level           12        51.544     0.000
  Savings Intentions     6         30.733     0.000
  Savings Level          6         29.656     0.000
  Spending Behaviour     6         74.844     0.000
  Understanding Bills    6         48.953     0.000
  Credit card usage      6         23.757     0.001
Workshop Satisfaction
  Income Source          2         6.210      0.045
  Employment Status      4         26.590     0.000
  Income Level           12        22.480     0.032
  Savings Intentions     6         30.453     0.000
  Savings Level          6         29.337     0.000
  Spending Behaviour     6         37.084     0.000
  Understanding Bills    6         44.261     0.000
  Credit card usage      6         32.603     0.000

Source: Russell, Brooks and Nair (2005), Table 17 – p.26, Table 18 – p.28, Table 19 – p.29.