Oxford University Press Southern Africa (Pty) Ltd
Vasco Boulevard, Goodwood, Cape Town, Republic of South Africa
P O Box 12119, N1 City, 7463, Cape Town, Republic of South Africa
Oxford University Press Southern Africa (Pty) Ltd is a subsidiary of
Oxford University Press, Great Clarendon Street, Oxford OX2 6DP.
The Press, a department of the University of Oxford, furthers the University’s objective of
excellence in research, scholarship, and education by publishing worldwide in
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore South Korea
Switzerland Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and in certain other countries
Published in South Africa
by Oxford University Press Southern Africa (Pty) Ltd, Cape Town
Personnel Psychology: An Applied Perspective
Print ISBN: 978-0-195988-38-3
ePUB ISBN: 978-0-190404-70-3
© Oxford University Press Southern Africa (Pty) Ltd 2010
The moral rights of the authors have been asserted.
Database right Oxford University Press Southern Africa (Pty) Ltd (maker)
First published 2010
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press Southern Africa (Pty) Ltd,
or as expressly permitted by law, or under terms agreed with the appropriate
designated reprographics rights organization. Enquiries concerning reproduction
outside the scope of the above should be sent to the Rights Department,
Oxford University Press Southern Africa (Pty) Ltd, at the address above.
You must not circulate this book in any other binding or cover
and you must impose this same condition on any acquirer.
Commissioning editor: Astrid Meyer
Project manager: Nicola van Rhyn
Editor: Adrienne Pretorius
Designer: Samantha Rowles
Researcher: Pat Rademeyer
Indexer: Adrienne Pretorius
Set in Minion Pro 9.5 pt on 12.5 pt by Orchard Publishing
Reproduction by
Cover reproduction by
Printed and bound by
Acknowledgements
The authors and publisher gratefully acknowledge permission to reproduce copyright material
in this book. Every effort has been made to trace copyright holders, but if any copyright
infringements have been made, the publisher would be grateful for information that would
enable any omissions or errors to be corrected in subsequent impressions.
TABLE OF CONTENTS
Preface
About the authors
Editors
Chapter contributors
Acknowledgements
Authors’ acknowledgements
Acknowledgements to copyright holders
PART 1 Introduction to personnel psychology
Chapter 1 Introduction: Personnel psychology in context
Dries Schreuder & Melinde Coetzee
Chapter overview
Learning outcomes
Chapter introduction
The science and practice of industrial and organisational psychology
Definitions of I/O psychology
The industrial psychologist versus the human resource practitioner
Major fields of I/O psychology
Licensing and certification of psychologists
The history of I/O psychology
Contributions of I/O psychology
Historical overview of research in I/O psychology
The scope of this book
Chapter summary
Questions
Chapter 2 Research methods in personnel psychology
Sanet Coetzee
Chapter overview
Learning outcomes
Chapter introduction
The role and use of research in personnel psychology
The research process
Step 1: Formulating the research question
Step 2: Choosing an appropriate design for the study
Step 3: Collecting the data
Step 4: Analysing the data
Step 5: Drawing conclusions from research
Ethical problems in research
Chapter summary
Questions
PART 2 Personnel employment
Chapter 3 Introduction: The employment context and human resource planning
Melinde Coetzee
Chapter overview
Learning outcomes
Chapter introduction
The changing context of employment
Globalisation and competition
Technological advances
The nature of work
Human capital
Diversity and equity in employment
Demographic and workforce trends
The national Occupational Learning System (OLS) and Employment Services South Africa (ESSA)
Other demographic and workforce trends: Generation Y and retirees
Human resource planning
Organisational strategy
The strategic planning process
Business strategy and strategic human resource management
The rationale for human resource planning
What is human resource planning?
Obtaining the correct number of people
The right skills
The appropriate place
The right time
Keeping the people competent
Keeping the people motivated
Becoming involved with strategic-level corporate planning
Succession planning and talent retention
The human resource planning process
Investigative phase
Forecasts and estimations
Planning phase
Implementation phase
Evaluation of human resource planning
The employment process
Job analysis and criterion development
Human resource planning
Recruitment and selection
Reward and remuneration
Performance evaluation
Training and development
Career development
Employment relations
Organisational exit
Chapter summary
Questions
Chapter 4 Job analysis and criterion development
Melinde Coetzee & Herman Roythorne-Jacobs
Chapter overview
Learning outcomes
Chapter introduction
Job analysis
Defining a job
Products of job analysis information
Employment equity legislation and job descriptions
The job analysis process
Types of job analysis
Collecting job data
General data collection methods
Specific job analysis techniques
Computer-based job analysis
The South African Organising Framework for Occupations (OFO) and job analysis
Competency modelling
Defining competencies
Drawbacks and benefits of competency modelling
Phases in developing a competency model
Steps in developing a competency model
Criterion development
Steps in criterion development
Predictors and criteria
Composite criterion versus multiple criteria
Considerations in criterion development
Chapter summary
Questions
Chapter 5 Psychological assessment: predictors of human behaviour
Antoni Barnard
Chapter overview
Learning outcomes
Chapter introduction
Origins and history of psychological testing
The experimental era
The testing movement
The development of psychological testing in South Africa
The quality of predictors: basic psychometric requirements of psychological tests
Reliability
Test–retest reliability
Alternate-form reliability
Internal consistency reliability
Inter-rater reliability
Reliability and measurement error
Validity
Content validity
Construct validity
Criterion-related validity
Validity coefficient and the standard error of estimate
Validity generalisation and meta-analysis
The quality of psychological assessment: ethical and professional practice
Classification of tests
Training and registration of test professionals
Codes of conduct: standards for conducting assessment practices
Types of predictors
Cognitive assessment
The structural approach
The information-processing approach
The developmental approach
Personality inventories
Projective techniques
Structured personality assessment
Measures of specific personality constructs
Emotional intelligence
Integrity
Test-taker attitudes and response biases
Behavioural assessment
Work samples
Situational judgement tests
Assessment centres
Interviews
Biodata
Recent developments: online testing
Overview and evaluation of predictors
Chapter summary
Questions
Chapter 6 Recruitment and selection
Melinde Coetzee & Hennie Kriek
Chapter overview
Learning outcomes
Chapter introduction
Recruitment
Sources of recruitment
Applicant attraction
Recruitment methods
Recruitment planning
Candidate screening
Importance of careful screening
Short-listing candidates
Evaluating written materials
Managing applicant reactions and perceptions
Considerations in employee selection
Reliability
Validity
Utility
Fairness and legal considerations
Making selection decisions
Strategies for combining job applicant information
Methods for combining scores
Placement
Selection and affirmative action
Fairness in personnel selection
Defining fairness
Fairness and bias
Adverse impact
Fairness and culture
Measures of test bias
The quest for culture-fair tests
Legal framework
Models of test fairness
How to ensure fairness
Chapter summary
Questions
PART 3 Personnel retention
Chapter 7 Introduction: Psychology of personnel retention
Melinde Coetzee
Chapter overview
Learning outcomes
Chapter introduction
Turnover intentions
Forms of turnover
Measuring employee turnover
Outside factors
Pull and push factors
Factors influencing job and occupational embeddedness
Unmet expectations
The psychological contract
Balancing individual and organisational needs
The psychological contract and career mobility
Job satisfaction and organisational commitment
Job satisfaction
Facets of job satisfaction
Individual differences and job satisfaction
Organisational commitment
Forms of commitment
Individual variables and commitment
Measuring job satisfaction and organisational commitment
Employee engagement
Dimensions of work engagement
Causes of work engagement
Measuring work engagement
Retention factors in the South African context
Remuneration
Job characteristics
Training and development opportunities
Supervisor support
Career mobility opportunities
Work/life policies
Retention strategies
Assessment and evaluation of employees
Job re-design and work changes
Leadership
Training and development
Career development and mobility
Managing work/home inter-role conflict
Reward and remuneration
Promoting organisational citizenship behaviours
Chapter summary
Questions
Chapter 8 Reward and remuneration
Mark Bussin
Chapter overview
Learning outcomes
Chapter introduction
Purpose of reward and remuneration
Factors influencing remuneration
External environment
Internal environment
The reward system
Reward strategy
Organisational strategy
Organisation product life cycle
Remuneration policy
Employee reward preferences and needs
Establishing pay rates
Determining the worth of jobs: job evaluation
Methods of job evaluation
Conducting a wage and salary survey
Determining the pay structure
Determining pay grades
Determining pay ranges
Determining progression through pay scale ranges
Wage and salary curves
Maintaining the pay structure
Participation policies
Pay secrecy
Strategic job value
Elements of remuneration
Incentives
Total guaranteed remuneration package
Basic rate
Benefits
Reward and remuneration trends
Global remuneration
Retention of employees
Media scrutiny
Setting non-executive director (NED) pay
New long-term incentives
Specialist career tracks
More flexibility
Governance
Branding
Broad-banding
Pay for performance
Skills-based pay
Market pricing
Competence-based evaluation
Team-based pay
Chapter summary
Questions
Chapter 9 Performance evaluation
Jo-Anne Botha & Mark Bussin
Chapter overview
Learning outcomes
Chapter introduction
Characteristics of performance management
Performance planning phase
Implementation phase
Results assessment phase
Developing a performance appraisal system
Phase 1: Determining the purpose of the performance appraisal process
Phase 2: Determining performance criteria and dimensions
Phase 3: Determining who will be involved in assessing performance
Phase 4: Selecting the appropriate appraisal method(s)
Phase 5: Getting senior management buy-in
Phase 6: Involving employees
Phase 7: Designing the appraisal system
Phase 8: Training raters
Phase 9: Implementing the system/conducting performance appraisals
Phase 10: Evaluating and adapting the system
Performance appraisal methods
Objective methods
Subjective methods
Comparative methods
Rating methods
Goal-based methods
Computerised performance monitoring
Raters of performance
Managers/supervisors
Upward feedback
Multi-source feedback
Peer assessment
Interactive panel for performance appraisal
Self-appraisal
Factors that influence performance appraisals
Organisational environment
Goals of senior management
Group norms
Prevailing attitudes
Perception and attributions
Psychological capital
Performance criteria
Rating errors
Reliability and validity of ratings
Performance appraisal interviews
Approaches to performance appraisal interviews
Sequence of the interview
Content of the interview
Frequency of performance appraisals
Performance appraisal and legislation
Code of Good Practice: Dismissal
Employees on probation
Attitudes and reactions to appraisals
Managing poor performance
Chapter summary
Questions
Chapter 10 Training and development
Jerome Kiley
Chapter overview
Learning outcomes
Chapter introduction
Training and development purposes and challenges
Challenges and priorities in the South African skills development environment
South African skills development legislation
The national Occupational Learning System (OLS)
Workplace learning and skills development within the OLS
Adult learning
Why people learn
Adult learners are different
How adults learn
A systematic approach to training
Analysing training needs
Analysis techniques
Data collection methods and tools
Designing the training intervention
Setting training goals, objectives and outcomes
Writing an instructional plan
The transfer environment
Target audience analysis
Training methods
Off-the-job training
On-the-job training
Distance education, training and learning
Blended methods
Management/leadership training methods
Delivering training
Training versus facilitation
Learning support materials
Managing training
Assessment
Evaluation
Training evaluation designs
Quality assurance through impact assessment and evaluation
Specialised training programmes
Chapter summary
Questions
Chapter 11 Career development
Melinde Coetzee & Dries Schreuder
Chapter overview
Learning outcomes
Chapter introduction
The meaning of work
Career choice
Super’s theory
Holland’s theory
Jung’s theory
Accident theory
Postmodern perspectives
Life/career stages
The early life/career stage
The mid-life/career stage
The late life/career stage
Career issues
Career anchors
Career patterns
Working couples
Career plateauing
Gender issues in career development
Career development support
Role of managers
Role of employees
Chapter summary
Questions
Chapter 12 Employment relations
Ben Swanepoel
Chapter overview
Learning outcomes
Chapter introduction
Historical, conceptual and theoretical perspectives: a brief overview
From ‘labour/industrial relations’ to ‘employment relations’?
Different theoretical perspectives about conflicting and converging interests in employment relations
Key ‘role-players’ or ‘actors’, and other ‘stakeholders’, in employment relations
Why workers join trade unions
Citizenship behaviour and dual allegiance
Organisational justice perceptions
Distributive justice
Procedural justice
Interactional justice
How trade unions operate
South African legal context
Freedom of association
The formation and registration of trade unions
Organisational rights
Trade unions as organisations
Structures, roles, and participation processes
Collective bargaining and dispute resolution
Statutory structures
Bargaining councils
Statutory councils
The Commission for Conciliation, Mediation and Arbitration (CCMA)
Private dispute resolution: accredited agencies
Private dispute resolution: non-accredited agencies
The Labour Court and Labour Appeal Court
Processes and behavioural dynamics
Negotiation
Collective agreements
Industrial action
Dispute resolution
Union–management co-operation
Workplace forums
Handling employee grievances
Dealing with deviant employee behaviour
Discipline and dismissal for misconduct and poor performance
Dismissal based on ‘no fault’ grounds
Chapter summary
Questions
Appendix A
Multiple-choice questions: Solutions
Appendix B
Taylor-Russell tables
References
Glossary of terms
Index
PREFACE
Personnel Psychology: An Applied Perspective provides an inviting and comprehensive introduction to the field of personnel
psychology. The field of personnel psychology is concerned with all aspects of psychological theory applied to understanding differences between individuals and their job performance in work settings, including the application of scientific decision-making methods to measure and predict such differences and performance. Because personnel psychology, as a sub-field of industrial and
organisational psychology, is a field with both a strong scientific base and an applied orientation, the book demonstrates the connection
between psychological theory, human resource management activities, and their application in South African work settings. Although
the book was designed and written with the student in mind, managers, human resource specialists and industrial psychologists will also
find the concepts, principles and scientific techniques outlined in the various chapters useful in enhancing the quality of decisions related
to the employment and retention of people.
Within the rapidly changing employment context, the function of personnel psychology in attracting, employing and
retaining valuable, scarce and critical skills has increased in importance. The book covers the main areas of human resource
management activity utilised to achieve organisational objectives and analyses how these activities might be carried out by managers,
human resource specialists and industrial psychologists in order to help organisations make quality personnel-related decisions that lead
to the achievement of their objectives.
Personnel Psychology: An Applied Perspective is an introductory textbook written for the undergraduate student studying industrial
and organisational psychology, personnel psychology, and human resource management. Although the book is written at a level that
makes the material accessible to students who are relatively new to the field of industrial and organisational psychology, the coverage of
topics is comprehensive. The text combines ‘classic’ theories and research with the latest developments that mirror the dynamics of the
field and provide a challenging and insightful overview of the application of the field in the South African organisational context.
The chapters are designed to facilitate learning. Each chapter begins with a chapter outline, a chapter overview and learning
outcomes, and ends with a chapter summary and review questions. Review questions and reflection activities challenge students to
analyse material from the chapters to broaden their insight and understanding of the various themes. These questions and activities are
also suitable for class discussion and written assignments. A glossary of terms and a subject index are included for easy reference to key
concepts and themes addressed in the various chapters. Some of the highlights of this first edition include:
• South African research trends in the field of industrial and organisational psychology
• Research methods in personnel psychology
• The contemporary employment context and human resource planning
• Psychological assessment and predictors of human performance
• Job analysis and criterion development
• Personnel selection decision-making techniques and methods
• Fairness and equity in personnel decisions
• The psychology of personnel retention
• Reward and remuneration
• Performance evaluation
• New national developments regarding workplace learning or skills development in the South African workplace
• Contemporary career development in organisational context
• Employment relations.
Personnel Psychology: An Applied Perspective is divided into three parts. Part One provides an introduction to the field and an
overview of research methods used by industrial psychologists specialising in personnel psychology. Part Two covers issues related to
the employment of people, including separate chapters on the employment context and human resource planning, job analysis and
criterion development, psychological assessment and predictors of human behaviour, and recruitment and selection. Part Three deals
with issues related to the retention of people, including chapters on the psychology of retention, reward and remuneration, performance
evaluation, training and development, career development, and employment relations.
ABOUT THE AUTHORS
Editors
Melinde Coetzee (Editor)
(Specific chapter contributions: Chapters 1, 3, 4, 6, 7 and 11)
Melinde Coetzee (DLitt et Phil) is currently a professor in the Department of Industrial and Organisational Psychology
at the University of South Africa. She has 14 years’ experience in organisational development, skills development and human resource
management in the corporate environment and has been lecturing subjects such as Personnel, Career, Organisational and Managerial
Psychology since 2000 on undergraduate, honours and masters levels. She also presents short learning programmes in skills
development facilitation and organisational career guidance through UNISA’s Centre for Industrial and Organisational Psychology.
Melinde is a professionally registered Industrial Psychologist with the Health Professions Council of South Africa and a master human
resource practitioner with the South African Board for Personnel Practices. She is section editor of the SA Journal of Human Resource
Management and also the author, co-author and editor of a number of academic books. She has published in numerous accredited
academic journals and has also co-authored and contributed chapters to books nationally and internationally. She has presented
numerous academic papers and posters at national and international conferences.
Dries Schreuder (Editor)
(Specific chapter contributions: Chapters 1 and 11)
Dries Schreuder (DAdmin) has been registered as an Industrial Psychologist since 1992. He obtained his doctorate in Industrial
Psychology in 1989. He is currently Professor in the Department of Industrial and Organisational Psychology at UNISA and lectures in
Forensic Industrial Psychology and Career Psychology. He is a member of the Education Committee of the South African Board for
Personnel Practice and also an appointed mentor of the Board. He has presented papers at various national and international conferences
and has published extensively in accredited journals. He is also the author, co-author and editor of a number of academic books.
Chapter contributors
Antoni Barnard (Chapter 5)
Antoni Barnard (DLitt et Phil) registered as an Industrial Psychologist in 1995 and in the additional category of Counselling
Psychologist in 1998 (PS0050164). She obtained her Doctorate in Industrial and Organisational Psychology in 2008. She is currently a
Senior Lecturer in the Department of Industrial and Organisational Psychology at UNISA and lectures in Psychological Assessment on
undergraduate, honours and masters levels. She has presented several papers at conferences and has published numerous articles in
accredited journals in South Africa.
Jo-Anne Botha (Chapter 9)
Jo-Anne Botha (BCom Hons) is a lecturer in the Department of Human Resource Management at the University of South Africa. She
has 19 years’ experience in the ETD field, designing and presenting courses on various topics in the business world, such as training
supervisors, middle and senior managers, team building, communication skills, time management, strategic planning, training
management, and human resource development. Jo-Anne is co-author of Practising Education, Training and Development in South
African Organisations, and has also contributed to various study guides relating to her fields of expertise.
Mark Bussin (Chapters 8 and 9)
Mark Bussin (DCom) is the Chairman of 21st Century Pay Solutions Group and has over 20 years’ remuneration experience across all
industry sectors. He is viewed as a thought leader in the remuneration arena. He serves on and advises numerous boards and
Remuneration Committees on executive remuneration. Mark holds a Doctorate in Commerce and has published or presented over 100
articles and papers. He has received awards for his outstanding articles in this field. He has appeared in the media for his expert views on
remuneration. Mark is a guest lecturer at various academic institutions around the country including Wits, University of Johannesburg,
GIBS and UCT; and supervises Masters and Doctoral theses.
Sanet Coetzee (Chapter 2)
Sanet Coetzee (DLitt et Phil) has been registered as an Industrial Psychologist since 2000 (PS0068012). She obtained her Doctorate in
Industrial and Organisational Psychology in 2004. She is currently an Associate Professor in the Department of Industrial and
Organisational Psychology at UNISA and lectures in Psychological Assessment and Research Methodology on undergraduate, honours
and masters levels. She has presented several papers at various national and international conferences and has published a number of
articles in accredited journals in South Africa and internationally.
Jerome Kiley (Chapter 10)
Jerome Kiley (MA, BA Hons HRD) is registered as a Masters Personnel Practitioner (Human Resource Development) with the South
African Board for Personnel Practice. He is currently a lecturer in the Department of Human Resource Development at the Cape
Peninsula University of Technology. Jerome runs the first year Industrial Psychology Programme at the University of the Western Cape
and lectures in the Honours Programme in Human Resource Development at the University of Johannesburg. He has extensive
experience in the field of skills development and human resource development management, both in the public and private sectors.
Jerome is a registered assessor and moderator and serves in this capacity for a number of institutions. He is co-author of Practising
Education, Training and Development in South African Organisations.
Hennie J Kriek (Chapter 6)
Hennie J Kriek (DLitt et Phil) is currently President of SHL Americas. He was the founding member and Managing Director of SHL
South Africa in 1994 and served as Professor of Industrial and Organisational Psychology and faculty member of the University of
South Africa for more than 12 years. He received his Doctorate at the University of South Africa in 1988 and was a visiting scholar at
Colorado State University from 1989 to 1990. He is an honorary life member of SIOPSA (the Society for Industrial Psychology of South
Africa) and the Assessment Centre Study Group of South Africa. He has also acted as Chair of the Association of Test Publishers (ATP)
of South Africa and PAI (People Assessment in Industry), an interest group of SIOPSA. He serves on the editorial board of a number of
prestigious journals internationally and locally, including the Southern African Business Review and the Journal of Industrial Psychology.
Herman Roythorne-Jacobs (Chapter 4)
Herman Roythorne-Jacobs (MA) is an experienced Industrial Psychologist who consults for major organisations on strategic human
resource and career management, skills development, and competency profiling matters. He is co-author of Career Counselling and
Guidance in the Workplace: A Manual for Career Practitioners.
Ben Swanepoel (Chapter 12)
Ben Swanepoel (DCom), a past President of IRASA (Industrial Relations Association of South Africa) and former editor of the South
African Journal of Labour Relations, is presently on unpaid leave from the University of Southern Queensland. He has extensive
international experience – as an academic, a practising manager, and a consultant and advisor in the field of employment relations,
including human resource management and management and leadership development. He has published extensively.
ACKNOWLEDGEMENTS
Authors’ acknowledgements
Our understanding of personnel psychology has been shaped by many friends, colleagues, clients and students, past and present, in the
South African and international multicultural workplace contexts. We are truly grateful to these wonderful people, who have shared
their practices, wisdom and insights with us in person and through professional literature. We would also like to thank the team of
authors we worked with on this book for their quality contributions, hard work and their forbearance.
Prof Melinde Coetzee
Prof Dries Schreuder
June 2010
Acknowledgements to copyright holders
Reflection 3.1 Mining industry skills: another R9 billion needed for artisan training. 24 July 2008. (<www.skillsportal.co.za>)
Fig 3.2 Indices of labour turnover. Marchington & Wilkinson. 2008:230. Reprinted with the permission of the publisher, the Chartered Institute of Personnel
and Development, London (<www.cipd.co.uk>).
Fig 4.3 Example of structural elements of the South African OFO. Reproduced with the permission of the author, M Stuart, RainbowSA. 2010. (<www.RainbowSA.co.za>)
Table 4.4 Example of job-related items in a job analysis questionnaire. Adapted from Riggio. 2008. Reproduced with the permission of Pearson Education,
Inc.
Table 4.5 Levels of data, people and things (FJA). Aamodt. © 2010. 57. Reproduced with permission. (www.cengage.com/permissions)
Table 4.6 Excerpts from abilities in Fleishman’s taxonomy. Adapted from Landy & Conte. © 2004. 83. Reproduced with permission of The McGraw-Hill
Companies.
Table 4.7 Twelve personality dimensions covered by the PPRF. Adapted from Landy & Conte. © 2004. 193. Reproduced with permission of The McGraw-Hill
Companies.
Table 5.1 Classification of psychological tests according to the APA. Gregory. © 2007. 28. Reproduced with the permission of Pearson Education, Inc.
Table 6.2 Audit questions for addressing applicant perceptions and reactions to selection procedures. Ryan & Tippins. 2004. 43(4):314. Reprinted with the
permission of CCC/Rightslink on behalf of Wiley Interscience.
Fig 6.3 Effect of a predictor with a high validity (r = 0,80) on test utility. Muchinsky. 2009. Reproduced with the permission of PM Muchinsky.
Fig 6.4 Effect of a predictor test with no validity (r = 0,00) on test utility. Muchinsky. 2009. Reproduced with the permission of PM Muchinsky.
Fig 6.5 Effect of a large selection ratio (SR = 0,75) on test utility. Muchinsky. 2009. Reproduced with the permission of PM Muchinsky.
Fig 6.6 Effect of a small selection ratio (SR = 0,25) on test utility. Muchinsky. 2009. Reproduced with the permission of PM Muchinsky.
Fig 6.7 Effect on selection errors of moving the cut-off score. Landy & Conte. © 2004. 264. Reproduced with the permission of The McGraw-Hill Companies.
Fig 6.8 Determination of cut-off score through criterion-related validity of test. Muchinsky. 2009. Reproduced with the permission of PM Muchinsky.
Fig 6.9 Effect of varying base rates on a predictor with a given validity. Cascio, Wayne & Boudreau. © 2008. 178. Reproduced with the permission of
Pearson Education, Inc.
Fig 6.10 Effect of predictor and criterion cut-offs on selection decisions. Cascio, Wayne & Boudreau. © 2008. 179. Reproduced with the permission of
Pearson Education, Inc.
Fig 6.13 Venn diagrams depicting multiple predictors. Muchinsky. 2009. Reproduced with the permission of PM Muchinsky.
Fig 6.15 Valid predictor with adverse impact. Cascio & Aguinis. © 2005. 185. Reproduced with the permission of Pearson Education, Inc.
Fig 6.16 Equal validity, unequal predictor means. Cascio & Aguinis. © 2005. 186. Reproduced with the permission of Pearson Education, Inc.
Fig 7.1 Human resource percentage financial investment areas in 2009. Lander. 2008. (<www.workinfo.com/Articles/hr_service_delivery_survey_2008.htm>)
Reflection 7.1 Hottest sectors for jobs right now. A Hammond. May 2009. (<www.skillsportal.co.za>)
Reflection 7.2 The psychological contract. Adapted from Baruch. 2004. 9(1):58–73. Reprinted with the permission of CCC/Rightslink on behalf of The Emerald
Group.
Table 7.1 The value-percept theory of job satisfaction. Adapted from Colquitt et al. 2009. 108. Reproduced with the permission of The McGraw-Hill
Companies.
Table 7.9 Uses of engagement information. SHL. 2009. 12.
Reflection 7.5 Talent retention. By Kim Kemp, CEO, Pure Innovations. 14 April 2009. (<www.skillsportal.co.za>)
Reflection 7.6 Talent management: change management as a basis for accelerating skills development. By Robert Sussman. May 2009. (<www.skillsportal.co.za>)
Reflection 7.7 Talent retention (continued). By Kim Kemp, CEO, Pure Innovations. 14 April 2009. (<www.skillsportal.co.za>)
Fig 8.2 WorldatWork‘s total rewards model. WorldatWork. 2007. 7 (Fig 1-3). Reprinted with the permission of John Wiley & Sons.
Table 10.4 Major perspectives on adult learning. Adapted from Swanson & Holton. 2008. 195.
Fig 10.3 Kolb’s experiential learning cycle. Kolb. 1984. 21. Reprinted with the permission of Prof DA Kolb.
Table 10.4 Kolb and Fry’s learning styles. Adapted from Tennant. 1997. 90; and Swanson & Holton. 2001. 168.
Fig 10.5 Swanson’s taxonomy of performance. Adapted from Swanson. © 2007. 24.
Table 10.7 Classroom training methods and their use. Adapted from Molenda & Russell. 2006.
Fig 10.6 Kirkpatrick’s Hierarchy. Based on Kirkpatrick. 1994. 19–24.
Fig 10.7 Nadler’s model of evaluation. Based on Nadler. 1982. 12.
Reflection 10.1 Skills revolution gets a shot in the arm. By Jim Freeman. 30 June 2009. (<www.skillsportal.co.za>)
Reflection 12.1 State of the Nation address: national re-skilling plan to avoid job losses. By Jim Freeman. 4 June 2009. (<www.skillsportal.co.za>)
part 1
Introduction to personnel psychology
CHAPTER 1
INTRODUCTION: PERSONNEL PSYCHOLOGY IN CONTEXT
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
THE SCIENCE AND PRACTICE OF INDUSTRIAL AND ORGANISATIONAL PSYCHOLOGY
• Definitions of I/O psychology
• The industrial psychologist versus the human resource practitioner
• Major fields of I/O psychology
LICENSING AND CERTIFICATION OF PSYCHOLOGISTS
THE HISTORY OF I/O PSYCHOLOGY
• Contributions of I/O psychology
• Historical overview of research in I/O psychology
THE SCOPE OF THIS BOOK
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
This first chapter is intended to give a broad perspective of personnel psychology as a sub-field of industrial and organisational (I/O)
psychology. In this chapter we will therefore explore the early history of I/O psychology, the profession of industrial psychologists, and
current research trends in the field. Personnel psychology in the context of industrial and organisational psychology is a broad field and to
comprehend its scope fully, you need to work through this entire textbook. Each chapter from 2 to 12 presents a general theme and several
areas of practice studied by industrial psychologists who specialise in personnel psychology. As you work through each chapter, step back
and reflect on how the various themes fit together. Figure 3.3 in chapter 3 also provides a brief overview of how the various areas of
specialisation are co-dependent.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Explain how personnel psychology relates to the profession of industrial and organisational psychology and psychology as a whole.
2. Give a broad outline of the historical roots of industrial and organisational psychology.
3. Describe the major fields of industrial and organisational psychology.
4. Describe personnel psychology as a sub-field of industrial and organisational psychology.
5. Explain the rationale and procedure for the licensing and professional certification of industrial psychologists in South Africa.
6. Describe the various areas of specialisation in industrial and organisational psychology.
7. Evaluate current and future trends in industrial and organisational psychology.
CHAPTER INTRODUCTION
The word ‘personnel’ means people. Personnel psychology is therefore concerned with all aspects of the theory of psychology applied
to understanding differences between individuals in work settings. Psychology is the scientific study of behaviour and the mind. The
term behaviour refers to actions and responses that we can directly observe, whereas the term mind refers to internal states and
processes – such as thoughts and feelings – that cannot be seen directly and that must be inferred from observable, measurable
behavioural responses (Passer et al, 2009). For example, we cannot see a person’s feeling of discontent with her job. Instead, we must
infer, based on her verbal statements and non-verbal behavioural responses, that she is unhappy with her job.
The discipline of psychology is considered a science. Science involves two types of research: basic research, which reflects the
quest for knowledge purely for its own sake, and applied research, which is designed to solve specific, practical problems. For
psychologists (who include industrial psychologists specialising in personnel psychology), most basic research examines how and why
people behave, think, and feel the way they do. In applied research, psychologists often use basic scientific knowledge to design,
assess and implement intervention programmes.
As a science, psychology has five central goals: (1) to describe how people and other species behave; (2) to understand the causes of
these behaviours; (3) to predict how people and other species will behave under certain conditions; (4) to influence behaviour through
the control of its causes; and (5) to apply psychological knowledge in ways that enhance human welfare. Psychologists (such as
industrial psychologists) who study human behaviour usually attempt to discover principles that ultimately will shed light on human
behaviour and improve human wellbeing (Passer et al, 2009). The South African Organising Framework for Occupations (OFO)
describes the skills specialisation of psychologists as the investigation, assessment and provision of treatment and counselling to foster
optimal personal, social, educational and occupational adjustment and development (OFO code 2723).
Psychology is both an academic and an applied field. As an academic discipline, psychology studies a range of topics, while applied psychology uses psychological principles and knowledge to solve practical problems. Sub-fields in psychology are:
• Clinical psychology (the diagnosis and treatment of mental disorders)
• Abnormal psychology (the study of abnormal behaviour and psychopathology)
• Cognitive psychology (the study of thought processes and thinking)
• Comparative psychology (the study of animal behaviour, which can lead to a better understanding of human psychology)
• Developmental psychology (the study of human growth and development)
• Forensic psychology (the application of psychological principles in legal contexts)
• Biological psychology (the study of how biological processes influence the mind and behaviour)
• Personality psychology (the study of individual differences and similarities between people)
• Educational psychology (assisting children with academic, social and emotional issues)
• Social psychology (the study of social interaction, social perception, and social influences), and
• Industrial and organisational psychology (the scientific study of human behaviour in work settings, including the
application of psychological theories and scientific research methods to explain and enhance the effectiveness of human
behaviour, cognition and wellbeing in the workplace).
Many psychologists are united professionally through membership of the Psychological Society of South Africa (PsySSA), founded in
January 1994. In 2009 there were about 2 500 members. PsySSA represents the views of psychologists and speaks on their behalf.
Originally, there were two professional associations in South Africa: the South African Psychological Association and the
Psychological Institute of the Republic of South Africa. In 1982, the Psychological Association of South Africa (PASA) was
established. As a result of a process of transformation, PASA was replaced by the present PsySSA.
PsySSA is the only representative professional body of psychology nationally. It is committed to the development of the profession,
ensuring quality of service to the community, safeguarding ethical standards, building professional relationships in South Africa and
abroad, collective marketing and bargaining for new work opportunities, better remuneration and conditions of service for
psychologists (Psy Talk 2001:12). Most psychologists who specialise in basic areas (for example, experimental, social, or
developmental) are employed at tertiary institutions. Applied psychologists (those with training in clinical and industrial psychology
and counselling) mostly work in non-academic or work settings.
PsySSA publishes a journal, the South African Journal for Psychology. It is a vehicle through which psychologists can communicate
their research findings to other scholars. PsySSA also holds a national convention each year, sets standards for graduate training in
certain areas of psychology, and develops and enforces a code of professional ethics.
THE SCIENCE AND PRACTICE OF INDUSTRIAL AND
ORGANISATIONAL PSYCHOLOGY
Industrial and organisational psychology (I/O psychology) is a field of specialisation in psychology. Industrial psychology, also
historically called personnel psychology, is the study of how individuals behave within work settings. Industrial psychologists who
specialise in personnel psychology practise psychology within the workplace and engage in a variety of activities such as:
• Job analysis and criterion development
• Psychological assessment, employee selection and placement
• Employee reward and remuneration
• Employee training and development
• Employee career development
• Employee performance evaluation
• The attraction and retention of people, and
• The promotion of adherence to employment-related legislation.
All of these areas are extensively explored in this textbook.
Because organisational psychology is closely related to industrial or personnel psychology, the two areas are referred to jointly as
industrial and organisational (I/O) psychology. Industrial psychologists who specialise in organisational psychology work at the
organisational level to understand how workers function in an organisation and how the organisation functions as a whole. Typical
activities include: the promotion of job satisfaction, commitment and employee engagement (also the topic of chapter 7), quality of work
life, leadership development and training, and organisational development.
According to the Health Professions Council of South Africa (HPCSA), the co-ordinating body for all registered health professions
(including psychologists) in South Africa, industrial or organisational psychologists practise in business or industrial settings with the general aim
of directly benefiting the economic wellbeing of the employing organisation. They are concerned with people functioning effectively in
relation to their working environments. Their areas of expertise include:
• Recruitment and selection
• Training, appraisal and review
• Vocational guidance and career development
• Industrial relations
• Occupational health and safety
• Planning technological and organisational change
• Organisational behaviour
• Ergonomics
• Consumer behaviour
• Job redesign, and
• Marketing.
(<www.hpcsa.co.za>)
It is evident from the above descriptions that industrial psychology and organisational psychology are overlapping fields whose
distinctions are fuzzy because in practice, practitioners and industrial psychologists often share job descriptions and duties. Different
terminology is therefore used in different countries to describe the field of I/O psychology. In the United States, I/O psychology is
described as ‘industrial and organisational psychology’, in the United Kingdom it is referred to as ‘occupational psychology’, and some
European countries refer to the field as ‘work and organisational psychology’.
In South Africa the term ‘industrial psychology’ is still used to describe the total field of I/O psychology. However, there is an
increasing tendency to include the word ‘organisational’, and there are efforts to do away with the word ‘industrial’. For example, the new
South African Organising Framework for Occupations (OFO), which is discussed in chapter 4, uses the term organisational
psychologist (OFO code 272303) as the occupational unit, with the alternative titles industrial psychologist or occupational psychologist
. The OFO describes the skills specialisation of the organisational psychologist as ‘the application of psychological principles and
techniques to study occupational behaviour, working conditions and organisational structure, and solve problems of work performance
and organisational design’. The HPCSA uses the term industrial psychologist for the professional registration of qualified practitioners
who have specialised in the field of I/O psychology.
The Society for Industrial and Organisational Psychology (SIOP) in the United States is considering changing its name to ‘The
Society for Organisational Psychology’ or something similar in order to eliminate the word ‘Industrial’ and retain the term
‘Organisational’. In 2001, Veldsman also questioned the appropriateness of using the terms ‘industrial psychology’ and ‘industrial
psychologist’ in South Africa. He argued that industrial psychology has moved beyond the turn of the previous century, when the
Industrial Society was at its zenith. He stated that the Information and Knowledge Society is in control with a new game which has
different rules, and industrial psychology has to make its contribution within this new setting. He suggested that terminology such as
‘the psychology of work’, ‘work psychology’ and ‘workology’ (in other words, the science of work) be considered for the discipline,
and names such as ‘work psychologist’, ‘consulting psychologist’ and ‘organisational psychologist’ for the profession.
Definitions of I/O psychology
I/O psychology can be defined as the scientific study of people within their work environment which includes the application of
psychological principles, theory, and research to the work setting (Landy & Conte, 2004; Riggio, 2009). I/O psychology has two
objectives: first, to conduct research in an effort to increase knowledge and understanding of human work behaviour; and second, to
apply that knowledge to improve work behaviour, the work environment, and the psychological conditions of workers. Industrial or
organisational psychologists are therefore trained to be both scientists and practitioners, in what is referred to as the scientist–practitioner
model (Riggio, 2009). According to Raubenheimer (1987, cited in Louw & Edwards, 1997), this implies that the industrial or organisational
psychologist engages in:
• Scientific observation (investigation, research)
• Evaluation (assessment, evaluation or appraisal, measurement, problem-identification)
• Optimal utilisation (selection, placement, management, development, retention), and
• Influencing (changing, training, developing, motivating) of normal and, to a lesser degree, deviant behaviour in interaction
with the environment (physical, psychological, social and organisational), as manifested in the world of work.
The South African Professional Board for Psychology describes the scope of practice of the industrial psychologist as follows (<www.hpcsa.co.za>; HPCSA 2009/06/16:1):
‘Industrial psychologists apply the principles of psychology to issues related to the work situation of relatively well-adjusted adults in order to
optimise individual, group and organisational well-being and effectiveness.’
The industrial psychologist versus the human resource practitioner
There is a distinct difference between the work of the industrial psychologist and that of the human resource practitioner. Industrial
psychologists fulfil a professional role and usually operate in one of the fields of application of their science. They act as internal or
external consultants for management and the human resource manager. Their role is primarily to diagnose and intervene. Their anchor is
mainly theoretical knowledge and research expertise, and their knowledge base is fundamentally industrial or personnel psychology,
organisational psychology, general psychology, personality psychology, social psychology, sociology, anthropology, and the economic
sciences.
Human resource practitioners are predominantly responsible for the organisation’s effective daily utilisation and management of
human resources through the implementation of behavioural science knowledge. They design and implement systems, practices and
policies to improve the general effectiveness of the organisation within the strategy of the business. Their knowledge base is mainly
industrial or personnel psychology, management sciences and labour law.
The OFO also differentiates between the occupational profiles of psychologists and human resource professionals (OFO code 2231),
and describes the skills specialisation of the human resource professional as the planning, development, implementation and evaluation
of staff recruitment, assisting in resolving disputes by advising on workplace matters, and representing industrial, commercial, union,
employer and other parties in negotiations on issues such as enterprise bargaining, rates of pay, and conditions of employment. They are
involved in recruitment, training and development, induction of new employees, maintaining personnel records, compiling workplace
skills plans and annual training reports, conducting needs analyses, advising management and employees on policies and procedures,
and studying and interpreting legislation impinging on the employment relationship.
Reflection 1.1
Study chapter 4 and review the tasks or skills of the psychologist as described in the OFO and shown in Table 4.8 (including those of
the industrial psychologist, shown in Figure 4.3). How does the job of an industrial psychologist differ from that of a human resource
professional?
Review also the two case studies provided in Reflection activity 1.2. Can you differentiate between the roles of the industrial
psychologist and the human resource professional?
Now review the objectives of I/O psychology. How do the tasks or skills of the industrial psychologist complement those of the
human resource practitioner?
I/O psychology is represented in South Africa by the Society for Industrial and Organisational Psychology (SIOPSA). SIOPSA aims to
encourage the existence of a fair and humane work situation in South Africa, to which all have an equal opportunity of access and within
which all can perform according to their abilities (<www.SIOPSA.org.za>). In 2009, SIOPSA had about 380 registered members.
Many industrial psychologists, especially those who work in the human resources field, are also members of the Institute of People
Management (IPM). This organisation is dedicated to the human resource profession and strives for the development of human potential.
This is accomplished through ensuring access to appropriate knowledge and information and providing the opportunity to network.
Major fields of I/O psychology
As a field of psychology, I/O psychology can be seen as the scientific hub for various sub-fields. As shown in Figure 1.1, the six major
sub-fields are personnel psychology, organisational psychology, career psychology, psychometrics, ergonomics, and consumer
psychology. For the purposes of this book, employment relations is also discussed as a sub-field. Since organised labour has become
subject to legislation in South Africa, the psychology of employment relations has become an area of interest in the domain of personnel
psychology (Van Vuuren, 2006).
Personnel psychology
Personnel psychology is one of the oldest and most traditional fields of I/O psychology. This field is concerned with the scientific study
of individual differences in work settings. Industrial psychologists and human resource professionals who specialise in the field of
personnel psychology apply a scientific decision-making framework (discussed in more detail in chapter 6) to enhance the quality of
decisions in the employment and retention of employees. Note that personnel psychology is not synonymous with human resource
management. As a sub-field of I/O psychology, personnel psychology represents the overlap between psychology and human resource
management (HRM).
HRM is a distinctive approach to employment management which seeks to achieve competitive advantage through the strategic
development of a highly committed and capable workforce, using an integrated array of cultural, structural and sophisticated personnel
techniques (Storey, 2001). HRM is essentially embedded at workplace level in interactions between members of staff (individually or
collectively) and their supervisors. Because of this, HRM is fundamentally concerned with the attraction, selection, retention, utilisation,
motivation, training and development, evaluation or appraisal, rewarding, and disciplining of managers and employees in the
organisation. In short, HRM is concerned with the management of the human and social capital in the context of the employment
relationship (Lewin, 2007; Marchington & Wilkinson, 2008).
Figure 1.1 I/O psychology as a scientific hub
On the other hand, personnel psychology is an applied discipline that focuses on individual differences in behaviour and job
performance and on methods of scientifically measuring and predicting such differences to enhance the quality of personnel decisions
(Cascio & Aguinis, 2005). The field of personnel psychology informs HRM practices by its focus on the scientific study of individual
differences and behaviour and their consequences for the organisation. Personnel psychology is therefore also about providing a
comprehensive understanding of a broad range of factors which influence the attraction, selection, placement, retention, performance,
development, satisfaction, commitment and engagement of individuals in the workplace (Cartwright & Cooper, 2008).
Personnel psychology is further concerned with continuous research on topical and emergent issues and constructs in the field of
HRM in order to enhance managers’, practitioners’ and employees’ understanding of the human factor in the employment and retention
process. Table 1.2 provides a few examples of the typical research focus areas in South African work settings. Since achieving optimal
performance from individual employees is of paramount importance to the sustained growth, development and financial performance of
any organisation, HRM draws on the scientific techniques and insights provided by the field of personnel psychology to manage the
organisation’s human and social capital effectively in the context of the employment relationship. As also outlined in the OFO, the
professional distinction between the human resource practitioner and the industrial psychologist lies in the various roles they fulfil in the
organisation. The various chapters in this textbook explore the focus areas of personnel psychology and the role of the industrial
psychologist in personnel employment and retention.
Organisational psychology
Organisational psychology had its origin in the post-World War II human relations movement, when the need arose to reflect the
growing influence of social psychology and other relevant sciences (Van Vuuren, 2006). Organisational psychology focuses on the
influence organisations may have on the attitudes and behaviour of their employees. Organisations are complex social entities and the
point of departure of organisational psychology is therefore social and group influences on behaviour. While personnel psychology is
more concerned with individual-level issues, organisational psychology is more concerned with the influence of social and group factors, culture and climate, and leadership behaviour on the overall effectiveness or performance of the organisation.
• Organisational development and structure.
In the area of organisational development, industrial psychologists are concerned with improving or changing (that is, ‘developing’) organisations to make them more efficient. The industrial psychologist must be able to diagnose an organisation’s problems, recommend or enact changes, then assess the effectiveness of the changes. Organisational development involves the planned, deliberate change in an organisation to resolve a particular problem. The change may involve people, work procedures or technology. Organisational development provides exciting opportunities for some industrial psychologists to help organisations resolve or adapt to their problems.
Career psychology
Career psychology is concerned with counselling employees and assisting them in making career choices. Its core focus is the
psychological contract between organisations and employees. Career psychology is about optimising the respective expectations of
organisation and employee and what both are prepared to give to ensure the integrity of the psychological contract (Slay & Taylor,
2007). When most people think about a career, they think about advancing in an organisation and moving up the career ladder. The 21st
century world of work has moved away from an era of the one-life-one-career perspective, where one series of career stages covered the
whole of a person’s work life. In its place is a new form of a career, which consists of a series of learning cycles across multi-directional
career pathways (Baruch, 2004; Coetzee, 2006; Weiss, 2001).
Career psychology focuses on areas such as the career development of employees (the topic of chapter 11), the meaning of work in
people’s lives, individual vocational behaviour across the lifespan, career counselling and guidance, career issues that influence
individuals’ career development, and organisational career development support initiatives.
Psychometrics
Although psychometrics is not classified as a sub-field of I/O psychology, industrial psychologists use psychological testing in their
fields of application, especially in personnel psychology – also the topic of chapter 5. Psychological assessment in general entails any
method or psychological activity to obtain samples of human behaviour and decision-making with regard to people and their context.
Assessments provide a way of determining an objective and standardised measure of a specific sample of behaviour.
As also discussed in chapter 5, in South Africa, psychological testing is prohibited unless the test being used has been scientifically
proved to be valid and reliable, can be applied fairly to all employees, and is not biased towards any employee or group. This means that
the use of psychological tests has to be validated for every culture and situation, and care should be taken to ensure that the tests used in
organisations adhere to the requirements of the Employment Equity Act 55 of 1998.
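As a purely illustrative sketch (the data, scores and group labels below are invented, not drawn from this textbook or from any real validation study), the following Python snippet shows how a criterion-related validity coefficient might be estimated for a selection test, together with a very rough subgroup comparison. A fairness analysis for Employment Equity Act purposes would in practice be far more thorough.

```python
# Hypothetical illustration only: invented test scores and performance ratings.
# A real validation study would use larger samples and formal bias analyses.
import numpy as np
from scipy import stats

test_scores = np.array([55, 62, 48, 70, 66, 59, 73, 51, 64, 68])
performance = np.array([3.1, 3.6, 2.8, 4.2, 3.9, 3.3, 4.4, 2.9, 3.7, 4.0])

# Criterion-related validity: correlation between test scores and job performance
validity, p_value = stats.pearsonr(test_scores, performance)
print(f"Validity coefficient r = {validity:.2f} (p = {p_value:.3f})")

# Very rough bias check: compare the mean test scores of two hypothetical subgroups
group_a, group_b = test_scores[:5], test_scores[5:]
t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"Subgroup comparison: t = {t_stat:.2f}, p = {t_p:.3f}")
```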
Tests can be classified according to their content. Examples of tests that are used in industry are as follows:
• Intelligence tests
• Mechanical aptitude tests
• Personality tests
• Integrity tests
• Interest tests, and
• Computerised adaptive testing.
Chapter 5 explores the use of tests in personnel assessment in more detail.
Ergonomics
The objective of the field of ergonomics is to modify the work environment to be compatible with the characteristics of human beings.
Macleod (1995:9) describes ergonomics as follows:
‘Ergonomics provides a set of conceptual guideposts for adapting workplaces, products, and services to fit human beings. The field provides a
strategy for engineering design and a philosophy for good management, all with the underlying goal of improving the fit between humans and their
activities.’
As can be seen from the definition, people are the focus of ergonomics. However, ergonomics is not only about the physical
environment and tools that are used in the execution of tasks, but also includes aspects such as interesting jobs, which could contribute
to motivation in the workplace. The term human–machine interface is often used in the literature to describe ergonomics. This concept
refers to a person working with a complex piece of equipment, for example, a pilot working in the cockpit of an aeroplane. A broader
and more recent description would be the interactions between humans and systems, such as production systems, communication
networks, and decision-making processes (Macleod, 1995).
Consumer psychology
Consumer psychology is defined as the study of the behaviour that consumers display in searching for, purchasing, using, evaluating and disposing
of products and services, behaviour that they expect will satisfy their needs. As such, consumer behaviour looks at the way consumers
make decisions to spend their resources on products or services (Shiffman & Kanuk, 2000).
Consumer psychologists devote their attention to understanding the behaviour of individuals whose actions are directed at obtaining,
consuming and disposing of consumer-related items. An understanding of the decision-making processes that precede and follow
consumption behaviour is therefore important.
Employment relations
This sub-field was once better known as labour or industrial relations. Although the area of industrial relations has over many years also
developed as a separate (but interdisciplinary) field of study, the focus of employment relations from the perspective of I/O psychology
is primarily the behavioural dynamics related to the juxtaposition of conflict and common ground in any employment relationship. The
emphasis falls on the collective relationship, and trade union-related dynamics are therefore relevant. Industrial psychologists involved in
the employment relations sub-field seek, for instance, to understand why people (employees) join and support trade unions, and what
behavioural dynamics are involved in active union membership and industrial action such as strikes and in negotiation and
dispute-resolution processes such as mediation.
Industrial psychologists are also particularly interested in the behavioural dimensions involved in union-management cooperation
processes. They investigate issues such as organisational justice perceptions and dual commitment (to the union and the employing
organisation). Some individual-based issues revolve around the psychosocial dynamics involved in discipline and dismissal, grievance
handling, and retrenchment or lay-offs. Chapter 12 explores these issues in more detail.
Cross-cultural industrial psychology
Although both the I/O psychology and HRM fields are beginning to recognise the importance of a multicultural foundation for
understanding work behaviour, cross-cultural (or multicultural) industrial psychology is not currently viewed as a recognised area of I/O
psychology. However, some South African universities have introduced this field of study as a part of their I/O psychology programmes.
Cross-cultural or multicultural psychology looks at similarities and differences in individual psychological and social functioning in
various cultures and ethnic groups (Kagiticibasi & Berry, 1989; Landy & Conte, 2004). Work plays an important role in our lives, and it
is important that industrial psychologists investigate cross-cultural or multicultural factors in work behaviour. Changes in world
conditions which are affecting our work lives are explored in chapter 3. Erez (1994) summarises these changes as follows:
• Cultural diversity of the labour force: The South African labour force is characterised by its diversity. The Bureau of Market
Research at the University of South Africa (Unisa) predicts that by the year 2011, black workers will make up 77,3 per cent of
the labour force, whites 11,8 per cent, coloureds 8,5 per cent, and Asians 2,4 per cent. It is also expected that the female
labour force will grow more quickly than the male group. The average growth rate per year to 2011 will be 3,2 per cent for
women but only 2,4 per cent for men (Sadie & Martins, 1994).
• Mergers and acquisitions: In recent years a substantial number of companies in South Africa went through a merging
process and a reduction of their workforce. South African organisations have cut their personnel strengths substantially over
the past decade. Currently the world is in a recession; millions of people lost their jobs in 2009 and this trend will
probably continue in 2011. Well-established companies such as General Motors in the USA have experienced financial
difficulty, and even financial institutions have gone bankrupt. The US government has had to intervene to rescue these
institutions from insolvency. Mergers are often not well managed, with a lack of sensitivity to the differences in
organisational cultures.
• Emergence of high technology and telecommunication systems: ‘The revolution in telecommunications introduced electronic
mail, fax machines, cellular phones and teleconferences. New technology has facilitated communication across geographic
areas and reduced the time needed for communication. Such changes have accelerated cross-cultural communication and
exposure to different systems of values, norms, and behaviours’ (Muchinsky, 2003).
According to Muchinsky (2003), Erez indicated why industrial psychologists must be aware of cultural differences when proposing
solutions to problems concerning work behaviour. Management techniques can be implemented successfully only if cultural values are
respected. Cultural values serve as criteria for evaluating the contributions of various managerial practices to employee well-being (Erez,
1994:601).
Values and customs prevalent in our society are not necessarily true of other cultures. Additionally, the concept of ‘satisfying’ work
differs across cultures. Jobs with a sense of challenge often appeal to people from Western cultures, while those from other cultures
might prefer jobs which provide opportunities for affiliation (Muchinsky, 2003).
The modern work population is growing increasingly diverse and blended in terms of race, ethnicity, nationality, religion,
gender and sexual orientation. The mixing of races and ethnic groups in the workplace requires greater awareness of differences, such as
differences with regard to work goals and work values. Differences per se can provide meaning to work and, although they generate
conflict, they can also foster sensitivity to different identities. Diversity implies variety and richness, and appreciation of diversity carries
with it the potential of mutual cross-fertilisation (Hankin, 2005; Kinicki & Kreitner, 2008). Industrial psychologists are increasingly
playing a facilitating, reconciling role in order to keep organisational effectiveness and individual needs in equilibrium. They are
therefore striving to maintain equity in the workplace.
LICENSING AND CERTIFICATION OF PSYCHOLOGISTS
In South Africa, as in many other countries, the practices of psychologists are controlled by legislation and controlling bodies such as the
Health Professions Council of South Africa (HPCSA). The HPCSA is a co-ordinating body for all the health professions registered with it.
The respective boards that are established for a specific profession deal with matters relating to that specific profession. These
boards consist of members appointed by the Minister of Health, educational institutions, and elected members. The aim of establishing
the HPCSA was to provide better control over the training, registration and practices of
practitioners of health professions, and to provide for matters incidental thereto. The boards are currently considering implementing
mandatory insurance cover for all health care practitioners. The Professional Board for Psychology has in fact finalised this issue, and
the relevant regulations have been promulgated. These regulations will form the basis of a generic set of regulations applicable to all
boards.
The objective of these boards is to promote the health of the South African population. They also play a key role in determining and
upholding the standards of education and training, and in keeping registers of each profession for which provision is made in terms of
the Act. The HPCSA and Professional Boards also determine and maintain standards of professional practice and conduct, and advise
the Minister of Health on matters pertaining to the Act.
All professional and practising psychologists must therefore be registered with the Health Professions Council of South Africa,
which, through the Professional Board for Psychology, controls and applies the laws regarding psychological training and professional
actions. The Psychological Society of South Africa (PsySSA) and its various institutes and interest groups, to which psychologists can
voluntarily subscribe, promote the interests of clients and psychologists through conferences, training, publications, newsletters and
marketing actions. PsySSA manages a publication, Ethical Codes for Psychologists, which stipulates values and norms for psychologists
with respect to professional actions such as testing, therapy, and research, personal actions, and behaviour towards colleagues and,
especially, clients.
The following professions are registered under the auspices of the Professional Board for Psychology:
• Psychologists
• Intern psychologists
• Student psychologists
• Registered counsellors (for example, human resources, career, employee wellbeing)
• Psychometrists, and
• Psychotechnicians.
There are five categories of registration in psychology, namely clinical, counselling, educational, industrial, and research psychology. In
June 2009, a total of 6 607 psychologists were registered with the Board, of whom 1 219 (18 per cent) were industrial psychologists.
Table 1.1 provides an overview of the competency requirements for human resource professionals who want to register as professional
counsellors with the Board for Psychology. The requirement for registration as a psychometrist, human resource counsellor, career
counsellor, or employee wellbeing counsellor is a four-year or honours degree in I/O psychology and a completed approved 6-month
practicum. The competency requirements examples provided in Table 1.1 are especially relevant to the field of personnel psychology.
In order to register as an industrial psychologist with the Professional Board, a master’s degree and a formal internship are required.
The duration of the internship is one year. Internship training is provided by accredited institutions, but in the case of industrial
psychologists (because of the limited number of accredited institutions), the Board allows individuals to complete an internship while
performing their normal work with their employer. In such a case, the internship programme must be approved in advance by the Board.
Many industrial psychologists also register as human resource practitioners with the South African Board for People Practice
(SABPP). This is not a statutory body and is not regulated by legislation as in the case of the Professional Board. However, it is in the
process of seeking statutory recognition. The aim of the SABPP is to establish and maintain a high standard of professionalism and
ethical behaviour in personnel practice. This aim is achieved through the development and promotion of the personnel profession: by
setting standards for training and for the competence of those practising the profession and by evaluating and advising on skills
acquisition and development. Individuals can be registered at the SABPP in the following categories: Master HR Practitioner (MHRP);
Chartered HR Practitioner (CHRP); HR Practitioner (HRP); HR Associate (HRA); HR Technician (HRT). Apart from the qualification
requirement at each level, appropriate experience, proven competence and continued professional development are also
required.
Table 1.1 Examples of competency requirements for professionally registered counsellors (<www.hpcsa.co.za>)
THE HISTORY OF I/O PSYCHOLOGY
This section provides a broad overview of the historical evolution of the field of I/O psychology. Although there are indications in the
literature that as early as 1527, contemporary industrial psychology terms were used in business, it is generally accepted that industrial
psychology as a subject originated at the beginning of the 20th century in the USA. This means that the discipline has a development
history of approximately 100 years. If one examines the history of industrial psychology, two names, Walter Dill Scott and Hugo
Munsterberg, stand out. Walter Dill Scott was the first person to apply psychological principles in advertisements, personnel selection
and management issues. His famous works on inter alia the psychology underlying advertising and human efficiency in business had a
profound influence on the public’s awareness of industrial psychology. Hugo Munsterberg, who was regarded by many authors as the
‘father’ of industrial psychology, was particularly interested in applying traditional psychology methods to practical industrial problems.
In his work on, inter alia, the selection of workers and the application of psychology in selling, there were even indications of the present
core areas in I/O psychology, namely organisational psychology and personnel psychology.
In the literature, the names of Marion Bills, Elsie Bregman, Lilian Gilbreth and Mary Hayes stand out as the four women
psychologists who contributed the most to the development of I/O psychology from 1917 to approximately 1947. These four individuals
worked in different areas of industrial psychology, but concentrated primarily on personnel matters, which was not unusual for that
period. These four pioneers’ involvement in the field entailed scientific practice, the application of psychology in industry, and
professional services (Koppes, 1997).
Reflection 1.2
Read carefully through the case studies below and answer the questions that follow.
Case study 1: My career as industrial psychologist
I am an industrial psychologist in a large manufacturing company. After completing my master’s degree and internship in I/O
psychology, I was fortunate to be employed by the company as a senior Human Resource Development (HRD) officer. Organisations
spend billions of rands each year on training and development. As HRD officer and industrial psychologist, I play a part in the
spending of those billions of rands. However, I like to think of it as ‘investing’ rather than spending. As with anything we invest in,
we want to make sure we get a ‘pretty good’ return on our investment. Training is no different. We do not want to be in a situation
where the training rands we spend are costing us more than the returns we receive. In this case, the returns are enhanced employee
performance that can lead to increased company profits. As HRD officer, I am responsible for ensuring that our organisation gets a
‘pretty good’ return from their training rands. Just as many variables can affect our investments, many variables can also affect
training.
Training usually begins with a need. Whether it is introducing a new product or providing a more effective way to communicate,
the training department is often called upon to support these issues. I work on developing partnerships with managers from other
departments for the purpose of solving issues jointly, rather than having the training department viewed as a ‘fix-all’. I also provide
managers with a checklist of questions they should be asking themselves before they call the training department to ‘fix’ their
problems. These questions will help the manager determine if training really is the answer to their problems.
Creating these partnerships also helps with the transfer of learning back to the job. For example, before a team member attends a
training programme, it is necessary for that team member and his or her manager to have a discussion regarding the expectations and
results of the training programme he or she is about to attend. This enables the team member to become more focused on the
knowledge and skills to be learned in the learning environment.
Once the team member enters the learning environment, I am responsible for facilitating the training programme. Using such adult
learning techniques as setting expectations, creating and maintaining interest, encouraging participation, and providing an opportunity
for skills practice helps me create an environment that is conducive to learning. The majority of training programmes I facilitate fall
into the area of professional/management development as opposed to technical development. Some examples include supervision,
communication, presentation, interviewing, and team-building skills.
I often work with a team of consultants from various backgrounds to develop new training programmes. Depending on the type of
programme and the method of delivery, it can take many months to complete a new training programme. Then, of course, we need to
pilot the programme and make adjustments accordingly. Sometimes an outside vendor may be used to assist us with a project. A
training programme is never really complete. There is always fine tuning that can be done to make the programme more effective,
especially if we are to customise a training programme for a specific department.
Training does not end once the team member leaves the classroom. I follow up with team members and managers to determine if
the knowledge and skills learned in the classroom are being applied in the work area. I hold focus groups with managers and team
members for the purpose of making the training programme more effective. The manager also reinforces the learning by coaching and
providing feedback to the team member. In some cases, especially for new employees, a mentor will be assigned to the team member
for continuous on-the-job training. A mentor checklist will be provided to ensure standardisation.
To increase performance, it is important for me to understand our business goals and objectives and to align my consulting and
training skills with them. At times, this may mean talking with a subject matter expert (SME) in an area I am working on for improved
performance.
Measuring the improved performance is a necessary but often difficult task to accomplish. We use surveys, pre- and
post-assessments, focus groups, and a wealth of reports from management to assist us with the measured results from our training
programme. Let’s face it, some things are just much easier to measure than others! However, this should not deter us from
determining the impact our training programme has on performance. The more accurately we can assess the impact our training
programmes have on improved performance, the more accurately we can link improved performance to profitability. After all, we
want to be sure that we get a good return on our investment.
I have realised the value of my studies in I/O psychology, because the design and evaluation of training programmes require a good
grasp of psychology and scientific measuring techniques. I find my career rewarding and very stimulating because I know I can use
my knowledge and skills in practical ways that contribute to the overall performance of the company.
Case study 2: My career as human resource practitioner
I am a senior Human Resource Officer in a large financial institution. As a human resource practitioner, I find my career rewarding
and stimulating. I am responsible for employee relations, employee training and development, compensation, benefits administration,
employee recruitment, administration of selection tests, interviewing, employee orientation, and safety. Part of my time is spent
recruiting and interviewing applicants for vacant positions. Recruiting and hiring highly-qualified applicants is a critical component in
the success of our company, as well as the reputation of the human resources department.
Whether a position is a new or existing one, the first step is ensuring that an accurate job description exists for the position.
Information from this job description is then used for recruitment. In advertising the position, it is important to target the appropriate
market, which depends, in large part, on the type of position to be filled and the availability of qualified local applicants.
Because our company is located in a rural area, we sometimes have difficulty attracting qualified applicants for the upper-level
positions. Therefore, it is essential to use a variety of recruitment methods to reach more potential applicants. All open positions are
posted on the company’s web site, which includes a searchable database for all open positions. The job posting web site allows
internal and external candidates to search by location, facility, or job category. Candidates can complete an online application and may
also attach a resumé file to their online application. The company also offers incentives to colleagues who refer candidates who are
subsequently hired for positions with the company. In addition to the company web site, a variety of other advertising methods are
used. For entry-level and mid-level positions, an advertisement is typically placed in a local newspaper. An employment
advertisement is placed with the state employment office as well as with college placement offices. Recruiting for upper-level
positions requires a broader focus. These positions are typically advertised regionally as well as nationally. Particularly for
higher-level positions, we often utilise web-based job posting sites.
As is the case in most organisations, an important part of the selection process is the employment interview. To increase the
effectiveness of the interview as a selection tool, we use a structured interview for all open positions. For each position, a set of
essential competencies is identified. We also, from time to time, call in the services of a qualified industrial psychologist to assist us
with this important task. Questions for each competency are established, and may be either technical or situational in nature.
Applicants are interviewed by a member of the HR staff, then by the position supervisor, and lastly by another supervisor. Each
interviewer takes notes during the interview and completes a standard rating sheet with ratings for each of the identified competencies.
Using a structured interview and having several independent interviewers increases the amount of pertinent information received and
reduces the bias involved in typical interviews. Depending upon the position, applicants also may be required to pass typing,
data-entry or PC skills assessments.
Once an offer of employment is made, the candidate is required to complete a physical and substance abuse-screening test. In
addition, criminal background checks are conducted for all new employees. This is to ensure that there are no criminal record issues
that may preclude a colleague from working in certain positions, such as handling prescription drugs.
(Adapted from Kuther, 2005:18–22)
Questions
1. How would you describe the role of the industrial psychologist and that of the human resource practitioner? Do these roles overlap?
In what way?
2. Differentiate between the two sides of I/O psychology: science and practice. What are the objectives of each side? Can you identify
these aspects in case study 1? And in case study 2?
3. Would you describe the profession of the industrial psychologist and human resource practitioner as interesting, useful or
stimulating? Give reasons for your answer.
Although the above-mentioned works evoked a fair amount of interest in I/O psychology, it was the utilisation of psychologists in
the area of personnel selection in particular, during the two World Wars, which focused attention on the practical application of the subject.
Psychometrics as a field of application in I/O psychology was established in this way. Thereafter the utilisation of psychological
principles and methods in industry rapidly spread to other parts of the world. It was also during World War II that the development
of complex weaponry found expression in engineering psychology. Psychologists and engineers worked closely together to develop
advanced equipment adapted to the limitations of human capacities.
In 1924, industrial psychology expanded considerably with the commencement of the Hawthorne studies. The finding of this
research, namely that the social and psychological environment has potentially greater importance than physical working conditions,
enabled I/O psychology to advance beyond selection and placement to the more complex problems of interpersonal relationships,
motivation and organisational issues. During the early period, the Hawthorne studies probably had the greatest influence on I/O
psychology.
Consumer psychology, whose origin can be traced back to the works of Scott in 1903, developed its own identity after World War II,
drawing on fields such as marketing, economics, sociology, and personality and social psychology (Jacoby, 1976). After the 1950s, industrial
psychology developed rapidly. This period was characterised mainly by the establishment of Carl Rogers’s person-centred approach and
Abraham Maslow’s theories of motivation, the initiation of Skinner’s research and the application of behaviourism in organisations, the
propagation of Peter Drucker’s approach to management by objectives, and an unprecedented interest among industrial psychologists in
labour relations.
In the 1960s and 1970s, a subject such as work motivation sparked much interest. Vroom’s (1964) discussion of this subject in his
work entitled Work and Motivation, especially the part in which he expanded on the expectancy model, sparked a great deal of interest
among industrial psychologists. Well-known theories, such as McGregor’s X and Y theories, Porter and McClelland’s performance
theory, Herzberg’s two factor theory, and Locke’s goal approach to motivation, were important developments in the field of motivation.
Other subjects such as measurement of job satisfaction and quality of work life, the influence of work on people, and research on the
validity and fairness of selection tests also came under the spotlight in I/O psychology.
In contrast to the growing influence of neo-behaviourism on management and industrial psychology during this period, there was
increasing application of the cognitive approach in certain topics in I/O psychology such as problem solving, decision-making,
performance evaluation, leadership, job design, motivation and consumer behaviour (Katzell & Austin, 1992).
In the 1960s to the 1980s, the focus also began to shift from the individual worker and his or her work and work groups to
organisational behaviour. Theories and research that dealt with matters such as communication in organisations, conflict management,
socialisation, career in organisations, organisational influence on individual work behaviour, and organisational climate in particular,
became more prominent in the literature. Together with this interest in organisational psychology, techniques were also developed to
facilitate organisational change and development. Examples are laboratory training, diagnostic interviewing, team development and
integrated techniques, such as Blake and Mouton’s Managerial Grid (Katzell & Austin, 1992).
During the mid-1980s to the early 1990s, the above topics were studied further and continual attention was paid to validation
strategies, validity generalisation, assessment centres, performance criteria, job analysis, training and development, employment equity,
remuneration, and promotion. At organisational level, researchers continued to study matters such as organisational design, change
management, motivation, attitudes, leadership, and job design.
Positive psychology officially emerged in 1998 with Martin Seligman’s (1999) ‘Presidential Address’ to the American Psychological
Association (APA). Positive psychology is the scientific study of optimal human functioning. Researchers in South Africa focused on
self-actualisation in the late 1970s and 1980s. The first prominent acknowledgement of positive psychology as a new field of study in
South Africa was made by Strümpfer in his article ‘Salutogenesis: A New Paradigm’ (1990:45–52), based on the work of Antonovsky
(1987), regarding the construct sense of coherence. Wissing and Van Eeden (1997) went a step further by emphasising the study of the
nature, manifestations and ways of enhancing psychological wellbeing and developing human capacities. They suggested the emergence
of a new sub-discipline which they called psycho-fortology. Their research stimulated the emergence of numerous conferences and
research on wellness in South Africa (Coetzee & Viviers, 2007). Research on topics such as work stress, employee wellness, and the
need to maintain a balance between work and family life continues to receive attention in I/O psychology.
Contributions of I/O psychology
The contributions of I/O psychology can be summarised as follows (Katzell & Austin, 1992; Koppes, 1997):
• Industrial psychology developed into a viable scientific discipline which made a significant contribution to society’s
knowledge of work behaviour.
• The subject made a noticeable contribution to the development of the management profession. Large numbers of industrial
psychologists work in the private and public sector. Some are consultants, while others work as academics involved in the
training of managers.
• Industrial psychologists have played a vital role in the establishment of human resource management practices, policies and
systems.
• The subject has contributed to the general improvement of the South African community. This is evident in the following
areas: people are selected for jobs to which they are suited; they are trained and developed to be more efficient in their
careers; prejudice towards the disadvantaged is limited; improvements are evident in the safety and convenience of the
workplace; and the quality of work life has been improved.
• Industrial psychologists have undertaken research that is of both professional interest and practical value (see also Table 1.2).
• A body of knowledge, supported by approximately 108 years of research, has been developed and is applied daily in
organisations to find solutions to problems.
• I/O psychology has played a prominent role in improving the effectiveness of organisations as a whole by continually
endeavouring to improve the job performance of individual employees and groups.
• Over the years, industrial psychologists have expressed concern about the welfare of workers. Comprehensive research in
areas such as job satisfaction, career development, job/family matters, fairness, work stress, employee wellness, safety,
training, reward and recognition, and ethics bears testimony to this.
The Journal of Industrial Psychology (published as South African Perspectives in Industrial Psychology from 1975 to 1985) was also a
major development. This journal serves as an independent publication for scientific contributions to the field of industrial psychology,
i.e. organisational psychology, personnel psychology, employment relations, career psychology, psychometrics, ergonomics, and
consumer behaviour. The Journal has since evolved into an on-line journal which can be accessed at <www.sajip.co.za>. More
recently, the South African Journal of Human Resource Management has been established to provide a forum for human resource and
personnel psychology-related scientific contributions. The journal is also published on-line, and can be accessed at <www.sajhrm.co.za>.
Historical overview of research in I/O psychology
Over decades, research in South Africa has been done in the different sub-fields of I/O psychology. The purpose of this research was to
increase knowledge and understanding of human work behaviour and to apply that knowledge to improving work behaviour, the work
environment, and the psychological conditions of the worker. Research in I/O psychology in the United States started to evolve at the
beginning of the 20th century. Studies in personnel selection, work methods, and job design were done as early as 1913 (Katzell &
Austin, 1992). In 1946, research in I/O psychology came into its own in South Africa with the establishment of the National Institute for
Personnel Research (NIPR). Studies into a wide range of subjects were undertaken over the years. Simon Biesheuvel, who is regarded as
the father of industrial psychology in South Africa, was the Director of the NIPR. His research on the selection of flight crews and his
presentation of a number of scientific papers made him one of the most respected psychological researchers in the country (Biesheuvel,
1984).
Table 1.2 Examples of titles of research articles in the South African Journal of Industrial Psychology and the South African Journal of
Human Resource Management
South African Journal of Industrial Psychology
• The development of industrial psychology at South African universities: a historical overview and future perspective
• Lest we forget that industrial and organisational psychology is psychology
• Black middle managers’ experience of affirmative action in a media company
• The predictive validity of the selection battery used for junior leader training within the South African National Defence
Force
• Explaining union participation: the effects of union commitment and demographic factors
• A psychometric approach to supervisory competency assessment
• The postmodern consumer: implications of changing customer expectations for organisational development in service
organisations
• Reliability of competency-based, multi-dimensional, multi-rater performance ratings
• Assessment in multi-cultural groups
• The relationship between emotional intelligence and job performance in a call centre
• Psychological career resources of working adults: an exploratory study
• Theory and practice in industrial psychology. Quo vadis?
• Work-related concerns of South Africans living with Aids
• The job-demand control model and job strain across gender
• Occupational stress, sense of coherence, coping, burnout and work engagement of registered nurses in South Africa
South African Journal of Human Resource Management
• Applying the Burke-Litwin model as a diagnostic framework in assessing organisational effectiveness
• Shift-share analysis of manufacturing as a measuring instrument for human resource management
• Perceived fairness of disciplinary procedures: an exploratory study
• The adequacy of the current social plan to address retrenchment challenges in South Africa
• Work-readiness skills in the FASSET Sector
• Career and life balance of professional women: a South African study
• Implementing and sustaining mentoring programmes: a review of the application of best practices in the South African
organisational context
• The interface between knowledge management and human resources: a qualitative study
• Evaluating a methodology for assessing the strategic alignment of a recruitment function
• Applying the nominal group technique in an employment relations conflict: a case study of a university maintenance
section in South Africa
• The development of a hassle-based diagnostic scale for predicting burnout in call centres
• Macro and micro challenges for talent retention in South Africa
• Employee health and wellness in South Africa: the role of legislation and management standards
• The validation of the Minnesota Satisfaction Questionnaire in selected organisations in South Africa
The establishment of the Human Sciences Research Council (HSRC) in 1969 made a significant contribution to the development of
industrial psychology in South Africa. The contributions of the HSRC’s Institute of Manpower Research and Institute of Statistical
Research were of special significance to the subject (Raubenheimer, 1974b).
Table 1.3 provides an overview of general (South African) research frequency trends in the sub-fields of I/O psychology from 1950
to 2008 (Schreuder & Coetzee, 2009). The sub-fields are ranked (1 = high) in terms of the frequency trends of research projects which
were carried out in a specific field.
From the research trends shown in Table 1.3 it can be concluded that:
• There has been a significant decrease in personnel psychology-related research since 1990.
• There has been a significant increase in organisational psychology- (since 1980) and employee wellness- (since 1990) related
research.
• There has been a constant increase in psychological assessment-related research since 1990.
• Topics in career psychology were consistently researched with low frequency.
• Very little research was done in consumer psychology, ergonomics, and employment relations.
THE SCOPE OF THIS BOOK
In this book we shall cover areas traditionally included in a study of personnel psychology. In chapter 2 we explore the research methods
employed by industrial psychologists who work in the field of personnel psychology. Chapter 3 gives a broad overview of the
employment context and how it influences the scope and practice of personnel psychology in the contemporary world of work, and in
particular the human resource planning process. Chapter 4 discusses job analysis and criterion development as important activities in
personnel attraction, recruitment, selection, reward and remuneration, performance evaluation, and training and development. In chapter
5 we explore the use of psychological tests in the assessment and prediction of human behaviour in the workplace. Chapter 6 builds on
chapter 5 by discussing the recruitment and selection of employees from a scientific decision-making framework. Chapter 7 introduces
the principles underpinning the retention of valuable, scarce and critical human capital. Chapters 7, 8, 9, 10 and 11 discuss the core
practices that influence the retention of employees. These include themes such as reward and remuneration, performance evaluation,
training and development, and career development. Finally, in chapter 12 we take a look at employment relations, traditionally better
known as labour or industrial relations.
Table 1.3 I/O psychology research frequency trends in South Africa (Adapted from Schreuder & Coetzee, 2009)
*1=highest frequency
CHAPTER SUMMARY
I/O psychology is the sub-field of psychology that deals with the study of human behaviour in work settings. While the scientific goal of
I/O psychology is to increase our knowledge and understanding of work behaviour, the practical goal is to use or apply that knowledge
to improve the psychological wellbeing of workers. In South Africa the profession of industrial psychologists is a nationally-recognised
occupation, and industrial psychologists are required to be registered with the Professional Board for Psychology. Important historical
contributions by various researchers in the field have led to the evolution of I/O psychology. In South Africa today, I/O psychology is a
rapidly growing field. Personnel psychology is one of the oldest and most traditional activities of industrial psychologists and focuses on
individual differences in behaviour and performance. A core focus of personnel psychology is the application of decision-making theory
(discussed in chapter 6) in the attraction, recruitment, selection, placement, and retention of employees.
Several important trends present challenges to I/O psychology and, in the context of this book, the field and profession of personnel
psychology. These trends include the changing employment context which is discussed in chapter 3.
Review questions
You may wish to attempt the following as practice examination-style questions.
1. Why can I/O psychology be described as the scientific hub for personnel psychology?
2. Differentiate between the two sides of I/O psychology: science and practice. What are the objectives of each side?
3. Compare the definitions of psychology and I/O psychology. What principles of the field of psychology do industrial psychologists
apply in their profession?
4. How does personnel psychology overlap with and complement human resource management? What differentiates personnel
psychology from human resources management?
5. Differentiate between the roles of the industrial psychologist who specialises in personnel psychology and that of the human
resource professional.
6. Review the section on the history of I/O psychology. What were the important events that shaped the field of I/O psychology?
Describe the major fields of industrial and organisational psychology.
7. Why has cross-cultural or multicultural industrial psychology become an important consideration in workplaces?
8. What new contribution does positive psychology bring to the field of I/O psychology?
9. If you wanted to pursue a career in I/O psychology and register as a professional human resource counsellor or industrial
psychologist, what would you need to do to make this possible?
10. Why is it important to license and certify industrial psychologists and counsellors?
11. What are the main research trends in I/O psychology?
12. Name the three ‘founding fathers’ of I/O psychology and describe their major contributions.
13. Draw a diagram using the dates below and indicate the main trends and events that helped to shape the evolution of I/O psychology.
Years: 1910–1930/1930–1940/1950–1960/1960–1970/1980–1990/2000–2008
Multiple-choice questions
1. Which of the following subspecialities of I/O psychology is concerned with determining the human skills and talents needed for
certain jobs, grading employee job performance, and training workers?
a. Organisational behaviour
b. Organisational development
c. Personnel psychology
d. Human resource management
2. Many prominent industrial psychologists throughout the course of the 20th century can trace their professional roots back to:
a. Simon Biesheuvel
b. Hugo Munsterberg
c. Lilian Gilbreth
d. Martin Seligman
3. The most representative journal in the field of I/O psychology in South Africa is the:
a. Personnel Journal
b. Journal of Applied Psychology
c. Journal of Psychology
d. South African Journal of Industrial Psychology
4. I/O psychology is represented by which division of the PsySSA?
a. The Society for Industrial Psychology
b. The Professional Board for Psychology
c. The Institute of Personnel Management
d. The South African Psychological Association
5. Which one of the following statements is FALSE?
a. A master’s degree is necessary to qualify as an industrial psychologist.
b. All industrial psychologists must belong to the SABPP.
c. An internship must be completed to register as an industrial psychologist.
d. There are about 1 219 industrial psychologists in South Africa.
CHAPTER 2
RESEARCH METHODS IN PERSONNEL PSYCHOLOGY
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
THE ROLE AND USE OF RESEARCH IN PERSONNEL PSYCHOLOGY
THE RESEARCH PROCESS
• Step 1: Formulating the research question
– Quantitative and qualitative research
• Step 2: Choosing an appropriate design for a study
– Types of designs
• Step 3: Collecting the data
– Data-gathering techniques
• Step 4: Analysing the data
• Step 5: Drawing conclusions from research
ETHICAL PROBLEMS IN RESEARCH
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
This chapter introduces the basic research process and the various decisions that need to be made throughout the process, taking into account
and adhering to the ethical principles of conducting research in the context of personnel psychology. Many of the concepts discussed in this
chapter will be used throughout the book when presenting and discussing theories, interpreting research results, evaluating the effectiveness
of interventions by industrial psychologists or human resource practitioners, and evaluating the quality and utility of personnel decisions
throughout the employment process. Because this chapter introduces a number of important themes, you should plan to spend some time
studying their definitions and understanding how they are used in the employment and personnel retention context.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Motivate the role and use of research in personnel psychology.
2. Describe the process of conducting research.
3. Participate in the process of conducting research for the purpose of improving organisational effectiveness.
4. Apply the concepts and terms relevant to research in the employment process.
CHAPTER INTRODUCTION
We all have ideas or beliefs about the nature of human behaviour. Some of us believe that red-haired people are temperamental,
dynamic leaders are big and tall, blue-collar workers prefer beer to wine, the only reason people work is to make money, and the like.
The list is endless. Which of these beliefs are true? One way to find out is to conduct research, or in other words, to do a systematic
study or investigation of phenomena according to scientific principles.
Consider the following case study: A leading manufacturing organisation in South Africa is launching a new division and needs to
recruit and select a number of new employees. Before they embark on this, they decide to re-evaluate their selection practice. The
organisation has been using a selection interview as the main selection device for the past few years. They decide to evaluate the
quality and effectiveness of this selection technique. Some of the aspects that they are concerned about are the following:
• How useful is the interview as a selection technique?
• Is the training programme that is currently presented to their interviewers equipping them with the necessary skills?
• What type of interview is the best to use?
• What kind of influencing techniques used by candidates should interviewers be aware of?
• Should feedback be provided to the candidates if they do not get appointed?
The purpose of this chapter is to illustrate that research is not an activity that happens in isolation, but that it forms part of everyday
practices and experiences in organisations and personnel psychology, just as in the case of the organisation mentioned above.
Reflection 2.1
The organisation has tasked its personnel officers with conducting a research study to answer these questions. Where would
you advise them to begin?
THE ROLE AND USE OF RESEARCH IN PERSONNEL PSYCHOLOGY
Personnel psychology is the sub-discipline of industrial and organisational psychology that mainly focuses on the employee as an
individual in the organisation. The purpose of the profession of personnel psychology (as an area of specialisation in the field of
industrial and organisational psychology) is to appoint an individual in the correct position in an organisation using valid and fair
selection practices, and then to manage and take care of the individual as an asset of the organisation in such a manner that the wellbeing
and performance of the individual are enhanced. Consequently, industrial psychologists are continually faced with a host of practical
problems in this regard, similar to the one the manufacturing organisation mentioned above is currently facing in using a selection
interview. A few other examples of practical problems are:
• What are the distinguishing features of a healthy and productive employee?
• Does the organisation’s selection practice impact on the bottom line profit of the organisation?
• Are the organisation’s recruitment and selection practices perceived to be fair?
• How effective is the current performance appraisal system that is being used in the organisation?
• Are the compensation practices of the organisation causing the job dissatisfaction of the employees?
All of these questions can be answered by conducting research.
Reflection 2.2
What are some practical problems in your organisation that may be addressed through research?
Knowledge of research methods makes us better able to find useful solutions to these problems rather than merely stumbling across
them by chance.
Apart from answering some of the questions that arise in an organisation or solving some of the practical problems that occur,
research can also be used to develop new practices, such as a new selection procedure. Kehoe (2000) notes in this regard that it is also
important for researchers to create information, raise possibilities and expand the horizon of questions. In this sense, the role of research
is continuous and does not necessarily end when the question has been answered or the problem solved. Although you will not always
conduct the research yourself, knowledge of research will also enable you to evaluate the work of others, or new practices or procedures
introduced by others, before the organisation implements these at considerable cost without being convinced of their effectiveness.
In conclusion, research can be used to answer questions or solve practical problems, develop new practices and procedures and also
to evaluate the research conducted or the new practices suggested by others in the field.
THE RESEARCH PROCESS
Consider the question asked earlier in the case study presented at the beginning of the chapter. Where would you advise the personnel
officers of the organisation to start in answering the questions posed to them? Although there may be some variations to this, most
research studies are conducted following a basic process. Figure 2.1 shows the steps that researchers take in conducting research. The
research process is basically a five-step procedure with an important feedback factor; that is, the results of the fifth step influence the
first step in future research studies.
• First, the research process begins with the identification of the problem: What question or problem needs to be answered?
• Second, how do you design a study to answer the question?
• Third, how do you measure the information that you need and collect the necessary data in order to answer the research
question?
• Fourth, how do you analyse the data? (In other words, how do you make some sense out of all the information collected?)
• Finally, how do you draw conclusions from analysing the data?
It is important to remember that each of these steps influences the next step in the research process. The researcher takes a sequence of
carefully planned and reasoned decisions throughout the research process. Each decision is followed by certain consequences. You will
understand this better after we’ve gone through the process and explained what each step entails.
Figure 2.1 The research process (Coetzee, 2005:21).
Let’s look at each of these steps in more detail.
The first step in any research would be to refine and specify the research question.
Step 1: Formulating the research question
In the case study presented at the beginning of the chapter, various questions are asked that could be addressed in a research study. Each
question requires a very specific answer. Based on the kind of answer that is required, we distinguish between various types of
questions, namely:
• Exploratory questions
• Descriptive questions
• Predictive questions
• Evaluative questions, and
• Causal questions.
The exploratory question is often asked when a relatively new field or area is investigated. The results of this research can often be used
to generate more specific research questions that should be addressed in consecutive studies. The descriptive question is like taking a
photograph – it provides a picture of a state of events. Researchers may describe levels of productivity, numbers of employees who left
during the year, average levels of job satisfaction, and so on. By answering the predictive question, researchers try to predict which
employees will be productive, which ones are likely to leave, and which ones will be dissatisfied. This information is then used to select
applicants who will be better employees. The evaluative question is set to determine the quality or effectiveness of a programme,
practice or procedure, for instance, whether a new training or learning programme is effective in producing better performance in
employees (the evaluation of training or learning programmes is discussed in more detail in chapter 10). The causal question is perhaps
the most difficult to unravel. It is a question asking why events occur as they do. It tries to find causes: why production is at a certain
level, why employees leave, why they are dissatisfied, and so forth.
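To make the predictive question more concrete, the short sketch below (hypothetical data and variable names, not taken from the case study) fits a simple linear regression that predicts job performance from structured-interview ratings. This is only one of many ways such a question could be examined statistically.

```python
# Hypothetical sketch of a predictive question: can interview ratings predict
# later job performance? All values below are invented for illustration.
import numpy as np
from scipy import stats

interview_rating = np.array([2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 3.2, 4.1])
job_performance = np.array([2.9, 3.1, 3.4, 3.9, 4.3, 4.6, 3.3, 4.0])

# Fit a simple linear regression: performance = intercept + slope * rating
model = stats.linregress(interview_rating, job_performance)
print(f"performance = {model.intercept:.2f} + {model.slope:.2f} * rating "
      f"(r = {model.rvalue:.2f})")

# The fitted equation could then be used to estimate the likely performance
# of a new applicant, for example one with an interview rating of 3.8
print(f"Estimated performance for a rating of 3.8: "
      f"{model.intercept + model.slope * 3.8:.2f}")
```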
Table 2.1 shows how the questions from the case study (after some of them have been refined and made more specific) can be
classified according to the various types of questions.
It is important to determine the type of research question, because the type of question defines the goal or objective of the study, as
well as the variables that you want to investigate. In a research study we refer to the aspects being investigated as variables. The term
variable is often used in conjunction with other terms in industrial psychological research. Four such terms that will be used throughout
this book are independent, dependent, predictor, and criterion. Independent variables or predictor variables (as they are also referred to)
are those variables that are manipulated or controlled by the researcher. They are chosen by the researcher, set or manipulated to occur at
a certain level, and then examined to assess their effect on some other variable. In the causal question stated in Table 2.1, the
independent variable is the feedback provided to the participant. The dependent variable or criterion variable is most often the object of
the researcher’s interest. It is usually some aspect of behaviour (or, in some cases, attitudes). In the causal question in Table 2.1, the
dependent variable is the participant’s experience of a negative selection decision.
The same variable can be selected as the dependent or the independent variable, depending on the goals of the study. Figure 2.2
shows how a variable (employee performance) can be either dependent or independent. In the former case, the researcher wants to study
the effect of various leadership styles (independent variable) on employee performance (dependent variable). The researcher may select
two types of leadership styles (a stern taskmaster approach versus a relaxed, easygoing one) and then assess their effects on job
performance. In the latter case, the researcher wants to know what effect employee performance (independent variable) has on the ability
to be trained (dependent variable). The employees are divided into ‘high-performer’ and ‘low-performer’ groups. Both groups then
attend a training programme to assess whether the high performers learn faster than the low performers. Note that variables are never
inherently independent or dependent. Whether they are one or the other is up to the researcher’s discretion.
Table 2.1 Types of research questions
Exploratory question: What are the kinds of influencing techniques that candidates use in a selection interview?
Descriptive question: Is there a relationship between the type of interview conducted and the interviewer’s success in rating an applicant’s personality characteristics?
Predictive question: Can the results of a selection interview be used to successfully predict the performance of an applicant?
Evaluative question: How effective is the current interviewer training programme used in the organisation?
Causal question: Does feedback after a negative selection decision reduce the negative effect thereof on job applicants?
Figure 2.2 Employee performance used as either a dependent or an independent variable (Coetzee, 2005:24)
Once the research question has been formulated, the researcher needs to determine what type of study will best answer the specific
research question.
Quantitative and qualitative research
We can distinguish between two broad types of research, namely quantitative research and qualitative research. Qualitative research
aims to provide in-depth information and a deeper understanding of, for instance, behaviour at work. It is the best kind of research
method for discovering underlying motivations, feelings, values, attitudes, and perceptions. Quantitative research aims to describe or
explain a variable or situation. This type of research collects some type of numerical data and uses statistical analysis to answer a given
research question.
The same research steps will be used throughout the research process, irrespective of whether a qualitative or a quantitative study is
conducted. But quantitative and qualitative research studies each have specific types of designs, data collection methods, and data
analysis techniques. The research question will determine the type of study that will best answer the question. If we refer to the type of
questions in Table 2.1, the exploratory type of question regarding the kind of influencing techniques will best be answered by a
qualitative study. We would like to discover the influencing techniques that underlie the applicants’ communication.
The descriptive question regarding the relationship between the type of interview conducted and the interviewer’s success in rating
can best be answered by a quantitative research study. In order to determine a relationship between variables, one needs to gather
numerical data and use statistical analysis. The quantitative type of study would therefore be best suited for this question.
It is also possible to use a combination of a qualitative and a quantitative study to answer a specific question. This type of mixed-method design is becoming increasingly prevalent as researchers investigate the complex and interrelated world of work and organisations.
Reflection 2.3
Can you think of an example of a question that could be investigated by a mixed-method design?
An example of a question that could be investigated by a mixed-method design would be the evaluative type of question regarding
the effectiveness of the current training programme. Qualitative information can be gathered regarding the trainees’ perceptions of the
training programme, and quantitative information can be gathered regarding their performance improvement after the training
programme.
Quantitative research has dominated industrial and organisational psychology research for a long time. Today, more scientists are
starting to acknowledge the value of qualitative research. Aguinis, Pierce, Bosco and Muslin (2009) have shown that qualitative topics in organisational research methods have become increasingly popular over the last decade. However, these topics still receive less attention than quantitative topics, and Aguinis et al (2009) have expressed the need for further work to be conducted in the qualitative arena.
Once a researcher has chosen a type of study to answer the question, the next step would be to select an appropriate research design.
Step 2: Choosing an appropriate design for the study
A research design is a plan or blueprint of how one intends to conduct the research. Research designs can be distinguished from one
another in terms of two aspects, namely the naturalness of the research setting, and the degree of control that the researcher has.
Naturalness of the research setting
The naturalness of the research setting refers to the environment in which the study is conducted. Most research studies are conducted in
the natural environment of the organisation (this is frequently referred to as a field study). This is desirable, because we would like to
investigate the variable exactly as it occurs in the natural organisational setting. For instance, when investigating the value of an
interview to predict job performance (see Table 2.1), we would like to investigate this aspect in a normal selection situation. We would
like the interviewer and selected applicants to behave exactly as they would in a real selection situation.
Degree of control that the researcher has
In a natural organisational setting there are a number of other aspects or variables present that do not necessarily form a part of the study.
These other aspects may influence the results of your study. For instance, in determining if feedback reduces the negative effect of not
being appointed after a selection process (see Table 2.1), the perception of the applicant regarding the fairness of the selection process
can also influence the effect of the negative decision on them. The perception of fairness is therefore another possible variable, apart
from the feedback provided, that influences the results of this study. These variables are called extraneous variables.
Therefore, the degree of control that the researcher has over these other variables is the second aspect that is important in a research
design. When one conducts research in an environment where one can control all the aspects that influence the study, there is a high
degree of control. However, this is likely to be an unnatural environment like a laboratory, outside the natural setting of the organisation.
From this one can deduce that there will always be a trade-off between the naturalness of the environment and the degree of control that
the researcher has.
Types of designs
As mentioned earlier, various research studies each have specific types of designs. Within quantitative research, there are basically three
types of research, namely non-experimental, experimental, and quasi-experimental.
• Non-experimental research is a descriptive type of research where the goal is to attempt to provide an accurate description or
picture of a particular situation or variable. It attempts to identify variables that exist in a given situation and tries to describe
the relationship that exists between the variables. Descriptive and predictive questions can be answered by non-experimental
research.
• Experimental research tries to determine cause-and-effect relationships, in other words, what causes a variable to change. Experimental research is the only type of research that can determine causality (the degree to which one variable causes another variable). From this one can deduce that the causal type of question (as set out in Table 2.1) would be answered by choosing an experimental design.
• Quasi-experimental research occurs when experimental procedures are applied, but not all other influencing variables are
controlled in the study. This is the best type of study if one wants to investigate causality in the natural environment of the
organisation.
Within qualitative research, the most common designs used in industrial psychology research are the following:
• A case study aims to provide an in-depth description of an object to be studied. In studies of an organisation, the object of the
study is usually one of the following: a single organisation or several; an organisational sub-unit or department; a work team;
a particular organisational practice such as selection; or one or more industries (Rogelberg, 2002).
• Fetterman (1998) described ethnography as the art and science of describing a group or culture. The description may be of
any group, such as a work group or an organisation. An ethnographer details the routine daily lives of people in the group,
focusing on the more predictable patterns of behaviour.
• Grounded theory is a qualitative research strategy in which the researcher attempts to construct a new theory from the
available data. It involves multiple stages of data collection and looking for connections between categories of information
(Creswell, 2003). It is an inductive approach used to develop a theoretical concept about the life world of some selected group
of people. It is also useful for exploring processes, activities and events. An example of grounded theory research in industrial
psychology is the work of Burden (Burden & Roodt, 2007), who used grounded theory to create a model for
organisational re-design.
• Phenomenological research is a type of research in which the researcher tries to understand the lived experience of
participants. This involves studying a small number of participants over a period of time and through extensive engagement in
order to gain a thorough understanding of their experience, developing patterns and relationships of meaning in this regard
(Creswell, 2003). One can, for example, investigate the experience of a newly-appointed manager over a period of time.
• Narrative research is a form of inquiry in which a researcher studies the life of an individual and asks one or more
individuals to provide stories about their lives (Creswell, 2003). Weeks and Benade (2008) used a narrative enquiry
conducted by means of discussions with 24 South African executives to determine the impact of the dual economy on South
African organisations and the influence thereof from a management perspective.
The various types of designs are illustrated in Figure 2.3.
The next step in the research process would be to collect the information, or data as it is called in research, that is needed to answer
the question.
Step 3: Collecting the data
In the process of data collection, the researcher needs to make decisions regarding a few aspects. Firstly, the people from whom the data
will be collected should be identified. This group of people is called a sample. Secondly, the instruments or tools that are going to be
used should be identified or developed, and thirdly, the instruments or tools should be administered to the identified group of people (the
sample of participants) in order to gather the data. These steps would again be different for qualitative and quantitative research.
Sampling
In the first step of data collection, a relevant sample must be identified and drawn. In a quantitative research study, the researcher
normally wants to answer the research question in such a manner that the answer is true or relevant for the whole group of employees
that is investigated or a whole organisation that is studied. This is called generalisability. However, it is in most cases impossible, as a
result of time, practical and cost constraints, to gather data from the whole group or organisation. Therefore quantitative research studies
require the researcher to draw only a subset (sample) from the whole group or organisation that is to be investigated.
Various methods of drawing samples exist, of which random sampling is the most common. A random sample implies that each and
every member of the group or organisation had an equal chance of being included in the sample. If a random sample is not drawn and
only an available sample is used, then the results of the research cannot necessarily be generalised to the whole group or organisation. In
quantitative research, the samples that are drawn usually contain a large number of participants.
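To make the idea of an equal chance of inclusion concrete, the short Python sketch below draws a simple random sample from a hypothetical list of 500 employee identifiers; the list, the identifiers and the sample size are invented purely for illustration.

```python
import random

# Hypothetical list of 500 employee identifiers (invented for illustration).
employees = [f"EMP{i:03d}" for i in range(1, 501)]

# Draw 50 participants without replacement; every employee has an equal
# chance of being included, which is what makes the sample 'random'.
sample = random.sample(employees, k=50)
print(sample[:5])
```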
In qualitative research studies, the aim is not necessarily to generalise the research findings, but rather to gain a deeper understanding
of a certain variable or situation. Therefore random sampling is not necessary, and much smaller sample sizes are also used. The
researcher normally identifies the employees that are most knowledgeable or have the most experience in terms of the research topic, in
other words, the researcher should identify employees who would be able to provide the most information regarding the topic and ask
them to participate in the study.
Data-gathering techniques
The most frequently used data-gathering techniques are defined in Table 2.2. A distinction is also made in terms of their application to
qualitative or quantitative research. A short description of each follows.
Surveys
Surveys rely on individuals’ self-reports as the basis for obtaining information. Again, the purpose of the research would determine the
type of survey to be used. In some instances, standardised survey measures may exist, while in other situations an existing survey tool
will need to be modified or a new measure created. They can be constructed to match the reading ability level of the individuals being
surveyed. In a study conducted by Schinkel, Van Dierendonck and Anderson (2004) on the impact of feedback effects after a negative
selection decision (the causal question in Table 2.1), they used standardised existing surveys, namely the Core Self-Evaluations
Questionnaire of Judge, Erez, Bono and Thoresen (2002) and the Affective Wellbeing Scale of Warr (1990), to assess the impact of the
feedback on the applicants. In using existing measures, one needs to make sure that one has legal and ethical access to the instruments.
Some scales and questionnaires are protected by copyright. Many psychological tests are licensed and require approval and the payment
of a fee in order to use the instrument. The instrument must be proved to be valid and reliable as well as being relevant for South African
purposes. Most existing questionnaires and scales were probably developed in Europe and North America. Such instruments cannot be
applied to the South African context without some adaptation (Mouton, 2003).
Figure 2.3 Types of designs
Surveys can also be conducted using computers, face-to-face interviews, telephone interviews, or the Internet, via e-mail or the World Wide Web (Spector, 2008).
In recent years, the web-based approach has been gaining in popularity as a means for organisations to gather survey data from their
employees (Aguinis et al, 2009). Advantages of Internet surveys include reduced research costs, larger sample sizes, improved access to populations that would otherwise be hard to reach, enhanced interactivity of research materials (even video or audio clips can be included), and the fact that the survey can be customised for a particular respondent (Rogelberg, Church, Waclawski & Stanton, 2002). However, Internet surveys also present some problems.
Even with careful and extensive security measures by the researchers, there is no way to guarantee that a participant’s responses cannot
be accessed by a third party. As a communication medium, the Internet is too difficult to control to be able to make perfect assurances of
anonymity or confidentiality.
Other practical limitations of survey research are that the return rate of mailed questionnaires is often less than 50 per cent. There is
also some debate about how accurate self-report measures are. The tendency to slant responses in socially desirable directions is an issue
for some topics, especially if one needs to rate one’s own performance. Despite these limitations, questionnaires are used extensively in
industrial psychology to address a broad range of research questions.
Observations
Observation is a method that can be used when the researcher is examining overt behaviour. In natural field settings, behaviour may be
observed over extended periods and then recorded and categorised. By observing the types of behaviour necessary for the target job,
researchers have identified the critical skills and abilities of employees. Apart from job analysis, observational approaches can also be
used to study, for example, the effectiveness of coaching styles, the use of specific safety procedures, or many other behaviour-based
phenomena. As a research method, observation requires a substantial amount of time and energy. Observation is often a fruitful method
for generating ideas that can be further tested with other research methods.
Observation can be done in an obtrusive manner, with the employees’ knowledge, or in an unobtrusive manner, without the
employees’ knowledge. One disadvantage of the obtrusive manner is that the researcher can affect the variable or situation being
studied. The first time this phenomenon was observed was in the Hawthorne study. The intention of this study was to observe the effect
of lighting on the job performance of employees. However, job performance kept improving, no matter what lighting level was chosen.
It was found that the fact that employees knew that they were being observed changed their behaviour. Unobtrusive methods can
therefore be valuable, as they avoid such effects. It is, however, not always possible to use unobtrusive methods because of the ethical
and legal requirements to respect people’s privacy. It has been suggested that acceptance and trust of the observers by the participants
being observed are critical to the success of this research method.
Table 2.2 Data-gathering techniques

Surveys (Questionnaires)
Definition: A survey is a set of questions that requires an individual to express an opinion or answer, or provide a rating regarding a specific topic.
Quantitative or qualitative application: Closed-ended questions can be asked in a structured questionnaire for a quantitative study. Open-ended questions can be asked in a semi-structured or unstructured questionnaire for a qualitative study.

Observation
Definition: The researcher observes (which entails watching and listening) employees in their organisational setting.
Quantitative or qualitative application: Use a pre-developed checklist to rate the existence or frequency of certain behaviours and events in a quantitative study. When the research questions are more exploratory (as in a qualitative study), the researcher can take detailed field notes.

Interviews
Definition: Interviews are one-on-one sessions between an interviewer and an interviewee, typically for the purpose of answering a specific research question.
Quantitative or qualitative application: Although a structured interview format can be used in a quantitative study, interviews are used most often in qualitative studies, where a semi-structured or unstructured interview can be used to gather information.

Focus groups
Definition: This is a method of data collection in which pre-selected groups of people have a facilitated discussion with the purpose of answering specific research questions.
Quantitative or qualitative application: Usually used in a qualitative study.

Archival data
Definition: Archival data, also called documentary sources of information, is material that is readily available, where the data has already been captured in one form or another.
Quantitative or qualitative application: In a quantitative study, the archival data would consist of numerical information such as questionnaire responses, test scores, performance ratings, financial statistics, or turnover rates. In a qualitative study, the archival data would include textual information such as documents, transcripts of interviews, letters, annual reports, mission statements, or other official documentation.
Interviews
Interviews are a more natural way of interacting with people than making them fill out a questionnaire, do a test, or perform some
experimental task. Apart from the degree to which they are structured or not, interviews can also vary in terms of several other
dimensions, depending on the purpose of the interview. For example, in a qualitative exploratory study, a researcher may prefer a single
open-ended question right at the start of an interview (such as ‘Please tell me about your experience when you went through our
selection process’), providing very little further structuring during the rest of the interview. If you want to find out about something quite
specific, such as how people feel about a specific recruitment advertisement published in a local newspaper, you would conduct short,
structured interviews with a fairly large number of people. If you want to know about something weightier, such as trying to understand
how people experience and deal with a negative selection decision, you could opt for a series of in-depth interviews with a single person
in which trust is established over an extended period of time (Terre Blanche & Durrheim, 2006).
The most popular kind of interview is the semi-structured interview, where one develops an interview schedule (or list of key topics
and perhaps sub-topics) in advance. The personal interview is an expensive and time-consuming technique, as it requires a face-to-face
meeting with the respondents. There is also a high cost involved in training capable interviewers. Training is vital because the
appearance, manner and behaviour of the interviewers influence the way people co-operate with them and answer their questions
(Schultz & Schultz, 2002). However, in-depth information can also be derived by using the interview technique of data-gathering, and
respondents can ask for clarification if they do not understand any of the questions.
The information from an interview can be captured by taking notes during the interview, or recording the interview on a tape or
video-recorder and transcribing it afterwards. Privacy laws vary with regard to the recording of sessions. Audio- or video-recording is
not usually appropriate for very sensitive topics. Local laws and company policies must be followed, and permission to tape the session
must be obtained from the participant prior to the interview. Exactly how the tapes will be handled after the session should also be
clearly explained (Bachiochi & Weiner, 2002).
Focus group
‘Focus group’ is a general term given to a research interview conducted with a group. A focus group is typically a group of people who
share a similar type of experience, but who do not necessarily know each other in the normal course of their lives (Terre Blanche &
Durrheim, 2006). A focus group should be planned to include 8 to 10 participants to facilitate the best level of group interaction. Key
subgroups should be identified, and it should be determined whether to conduct sessions with them separately. For example, for a topic
such as a manager’s role in career development and its impact on retention, managers and subordinates should not be combined in one
group. However, across focus groups, a wide variety of participants should generally be sampled (Bachiochi & Weiner, 2002).
A focus group will usually start with an introductory statement from the researcher, explaining the purpose of the research and how
the results will be used. The participants are asked to maintain the confidentiality of the session and not to reveal any sensitive
information that was discussed during the group session. The group is asked open-ended questions, and the researcher may probe the
group to ensure that the session stays focused. At the end, the participants are thanked for their co-operation, and the group is again told
how the results will be used. Excellent facilitation skills are critical for conducting successful focus groups. The facilitator should be
objective, avoid engaging in a dialogue with the participants, and maintain the flow of the discussion. A live note-taker can be used to
record the information during the session or it can be audio- or videotaped and the comments transcribed later.
There are advantages and disadvantages to using recording devices during focus groups. One advantage is that the facilitator can
focus exclusively on the flow and content of the discussion. It is quite difficult to take comprehensive notes and effectively facilitate a
focus group at the same time. However, recording devices may make some participants uncomfortable or self-conscious and therefore
inhibit them from participating spontaneously, resulting in behaviour different from what would otherwise have been observed (Bachiochi & Weiner, 2002).
Archival data or secondary data sources
The most frequently used form of archival data in personnel psychology research is probably some form of company or personnel
record. Storing archival data presents challenges for organisations as they need to address the need for safe, secure, accessible, and
cost-effective digital storage of the data. These records can, however, serve as long-term knowledge assets to drive better performance in
an organisation. In a study conducted by Papadakis and Barwise (2002) on how much CEOs and top management matter in the strategic
decision-making of an organisation, they used internal documents, reports and minutes of meetings in addition to other sources of data in
order to answer the research question. Another example is a study conducted by Libet, Frueh, Pellegrin, Gold, Santos, and Arana (2001)
that used archival data to determine if a relationship existed between absenteeism figures and productivity of employees in a medical
centre.
If one decides to use these sources of information, it is, however, necessary to answer a few questions. Will I be able to get access to
the information? Who needs to grant permission to access and use the records? What is the quality of the existing information? In
summary, archival data presents opportunities to test important research questions in an organisational setting, but there are problems
surrounding the security of and access to the archived records.
After the data have been collected, the next step in the research process would be to analyse the data.
Step 4: Analysing the data
After the data have been collected, the researcher has to make some sense out of them. Again, the research question and the type of
study that one is conducting will determine the kind of analysis techniques that one will choose.
Although there are generic steps involved in analysing qualitative data, the data analysis needs to be tailored for the specific type of
qualitative research design or strategy used. The generic steps involved could be to:
• Organise and prepare the data by scanning, transcribing or typing the data and arranging it into different types of
information.
• Obtain a general sense of the data by reading through all of it and reflecting on its overall meaning. General notes or
thoughts could be recorded at this stage.
• Do a detailed analysis by coding the data. Coding is about organising the data into meaningful categories and labelling each
category.
• Generate a description of the setting or people as well as categories and identify a small number of themes. These themes
can then be interconnected to form a storyline (as in narrative research) or developed into a theoretical model (as in grounded
theory), or shaped into a general description (as in phenomenology). Themes can also be analysed for individual cases and
across different cases (as in case study research).
• Convey the findings of the analysis. This is often done by means of a narrative passage, but many qualitative researchers also
include figures, visuals, tables or even process models as part of this discussion.
• Make an interpretation of the meaning of the data. This is almost like summarising the lessons learned. In this section,
findings can be compared with the literature or extant theories, but the interpretation can also suggest new questions that need to be asked
(Creswell, 2003).
In a quantitative study, statistical analysis is normally used. Many students get anxious over the topic of statistics. Although some
statistical analytic methods are quite complex, most are reasonably straightforward. Think of statistical methods as golf clubs – tools for
helping do a job better. Just as some golf shots call for different clubs, different research problems require different statistical analyses.
Knowing a full range of statistical methods will help you understand the research problem better. It is impossible to understand the
research process without some knowledge of statistics.
Statistics are a tool to help us summarise and describe masses of data and to enable us to draw inferences or conclusions about their
meaning. The field of statistics is typically divided into two components, descriptive and inferential statistics. Descriptive statistics are
used to summarise our data in such a manner that they allow us to describe the data in a meaningful fashion. Let us look at an example to
explain this further. In Table 2.1 we posed the exploratory question that set out to identify the influencing techniques that candidates use
in a selection interview. McFarland, Ryan and Kriska (2002) investigated this research question. In their study, they asked the
interviewers to rate each of the interviewees on their oral ability, interpersonal skills, information analysis, and problem-solving ability.
A total of 146 interviewees were interviewed and rated on each of the above-mentioned skills. A sample of what the dataset that contains
these ratings may look like is presented in Table 2.3. Please note that this is not a sample of the actual dataset but a fictitious
representation of what it may look like. (Interview ratings were made on a 9-point scale.)
Table 2.3 Rating of interviewees
In Table 2.3 only 10 of the 146 interviewees’ ratings are displayed. It is difficult to make sense of the data just by looking at the
dataset as it is displayed in Table 2.3. That is precisely the purpose of descriptive statistics: to summarise and describe
our data in such a way that we can make sense of it. There are two basic ways in which our dataset can be described: in terms of central
tendency and variability. Each of these will be discussed briefly.
Measures of central tendency identify the centre of a group of scores; they indicate the typical or average score in a distribution.
The mean is the most common measure of central tendency. The mean is the arithmetic average score in the distribution. It is computed
by adding all of the individual scores and then dividing the sum by the total number of scores in the distribution. McFarland et al (2002)
calculated the average rating for all interviewees on the four skills. Their results are displayed in Table 2.4.
Table 2.4 Means of interview ratings
Interview rating
Mean
Oral ability
6,62
Interpersonal skills
6,04
Information analysis
6,64
Problem-solving
5,91
From Table 2.4 we can already deduce that the interviewees were rated, on average, highest on their information analysis and lowest in terms of their problem-solving. This is a deduction that we were not able to make before calculating descriptive statistics.
The measure of central tendency may indicate the middle score of a dataset, but it does not give any indication about how much the
observations differ from one another in value. For example, the following five observations: 48, 49, 50, 51, 52, have the same mean of
50 as 0, 1, 50, 99, 100, even though there is a larger difference among observations in the second case.
Measures of variability indicate the degree to which the observations differ from one another, or to which degree our scores are
distributed around the mean. In other words, our variability is then also an indication of how representative the mean is as a measure of
central tendency. The standard deviation is frequently reported in research papers as the measure of variability.
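As a rough illustration of these two ideas, the short Python sketch below computes the mean and the standard deviation for the two small sets of observations mentioned above; it is a minimal sketch using Python's standard statistics module, not data from any study discussed in this chapter.

```python
from statistics import mean, pstdev

narrow = [48, 49, 50, 51, 52]   # the first set of observations from the text
wide = [0, 1, 50, 99, 100]      # the second set: same mean, much larger spread

print(mean(narrow), pstdev(narrow))  # 50 and about 1.4
print(mean(wide), pstdev(wide))      # 50 and about 44.3
```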
So far we have been concerned with the statistical analysis of only one variable: its typical score and dispersion. But most industrial
psychological research deals with the relationship between two (or more) variables. In particular, we are usually interested in the extent
to which we can understand one variable (the criterion or dependent variable) on the basis of our knowledge about another (the predictor
or independent variable). A statistical procedure useful in determining this relationship is called the correlation coefficient. A correlation
coefficient reflects the degree of linear relationship between two variables, which we shall refer to as X and Y. The symbol for a
correlation coefficient is r, and its range is from –1,00 to +1,00. A correlation coefficient tells two things about the relationship between
two variables: the direction of the relationship and its magnitude.
The direction of a relationship is either positive or negative. A positive relationship means that as one variable increases in
magnitude, so does the other. An example of a positive correlation is between interview ratings and job performance once selected. As a
rule, the higher the ratings an applicant received during the selection interview, the higher his or her job performance is likely to be once
he or she is selected and appointed.
A negative relationship means that as one variable increases in magnitude, the other gets smaller. An example of a negative
correlation is, for example, the more an employee experiences stress, the worse his or her health becomes. The magnitude of the
correlation is an index of the strength of the relationship. Large correlations indicate greater strength than small correlations. A
correlation of 0,80 indicates a very strong relationship between the variables, whereas a correlation of 0,10 indicates a very weak
relationship. Magnitude and direction are independent; a correlation of –0,80 is just as strong as one of +0,80. With research being
conducted in organisations, you are most unlikely to obtain perfect correlations between any variables. It is much more likely that you
will obtain correlations between 0,1 and 0,6 or –0,1 and –0,6.
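For readers who want to see how such a coefficient is obtained, the minimal Python sketch below computes r from its standard definition (the covariance of X and Y divided by the product of their standard deviations). The interview ratings and performance scores in the sketch are hypothetical values invented only to illustrate the calculation.

```python
from statistics import mean, pstdev

ratings = [4, 5, 6, 6, 7, 8, 8, 9]              # hypothetical interview ratings (X)
performance = [55, 58, 62, 60, 70, 72, 75, 80]  # hypothetical job performance scores (Y)

mx, my = mean(ratings), mean(performance)
# Covariance of X and Y divided by the product of their standard deviations.
cov = mean((x - mx) * (y - my) for x, y in zip(ratings, performance))
r = cov / (pstdev(ratings) * pstdev(performance))
print(round(r, 2))  # falls between -1.00 and +1.00; positive and strong for these values
```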
The correlation between two variables can be illustrated by means of a scatter plot diagram. Figure 2.4 is an example of a scatter
plot which illustrates a correlation of 0,52 between interview ratings and job performance scores. This shows that as interview ratings
increase, job performance ratings are also likely to be higher.
The correlation coefficient, however, does not permit any inferences to be made about causality – that is, whether one variable
caused the other to occur. Even though a causal relationship may exist between two variables, just computing a correlation will not
reveal this fact.
The use of regression and multiple regression as statistical techniques in personnel decisions is discussed in more detail in chapter 6. Regression is a statistical analysis that can be used after a significant relationship has been established between two variables.
Regression will allow us to predict one variable based on another variable, if the two variables are strongly related. This calculation is
done using a regression equation computed from a set of data. The regression equation provides a mathematical formula that allows for
the prediction of one variable from another. Entering the value of one variable (the predictor) into the equation will give the value for
the other variable (the criterion).
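The sketch below illustrates, with the same hypothetical ratings and performance scores used above, how such a regression equation (Y' = a + bX) can be computed with the standard least-squares formulas and then used for prediction; it is an illustrative sketch, not the procedure from any study cited here.

```python
from statistics import mean, pstdev

ratings = [4, 5, 6, 6, 7, 8, 8, 9]              # hypothetical predictor (X)
performance = [55, 58, 62, 60, 70, 72, 75, 80]  # hypothetical criterion (Y)

mx, my = mean(ratings), mean(performance)
cov = mean((x - mx) * (y - my) for x, y in zip(ratings, performance))

b = cov / pstdev(ratings) ** 2  # slope: expected change in Y for one unit of X
a = my - b * mx                 # intercept
print(a + b * 7)                # predicted performance for an interview rating of 7
```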
Figure 2.4 Scatter plot of two variables that have a positive correlation
Reflection 2.4
Suppose you wish to compute the correlation between the applicants’ perception of fairness of the selection process and their
likelihood of being successful in the selection process. The correlation coefficient turns out to be 0,61. On the basis of this, you
conclude that because candidates are successful in the selection process, they will view it as a fair process. However, your friend takes
an opposite view. She says that because the applicants felt that the selection process is fair, they gave their best during the selection
process and therefore were selected. Who is correct?
On the basis of the existing data, no one is correct, because causality cannot be inferred from a single correlation coefficient. It
cannot be said that the perception of fairness caused the selection success of applicants or that selection success caused the perception
of fairness of the applicants. Proof of causality must await experimental research.
The descriptive question posed in Table 2.1 is an example of a study that used correlation to determine the relationship between the
type of interview conducted and the success of the interviewer’s rating of an applicant’s personality characteristics. Because
correlation is a common analytical technique in industrial psychological research, many of the empirical findings in this book will be
expressed in those terms. Chapter 4 deals in more detail with the use of correlations in personnel selection decisions.
The predictive question in Table 2.1 is an example of a study that could use regression to determine if a selection interview
(predictor) can be used to predict job performance (criterion). It is also possible to combine data from two or more predictor variables in
order to predict a criterion variable.
Multiple regression is a technique that enables the researcher to combine the predictive power of several variables to improve
prediction of a criterion variable. For example, it can be investigated if the results of an interview (predictor 1), the results of an ability
test (predictor 2), and the results of practical exercise (predictor 3) in combination with each other predict the job performance (criterion)
of an applicant. You will learn more about this in the chapter on selection.
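As a rough illustration of combining predictors, the following sketch fits one prediction equation to three hypothetical predictors using ordinary least squares; it assumes the NumPy library is available, and all scores shown are invented for illustration.

```python
import numpy as np

# Hypothetical scores for six applicants (invented for illustration).
interview = [6, 7, 5, 8, 9, 6]          # predictor 1: interview result
ability = [60, 72, 55, 80, 85, 65]      # predictor 2: ability test result
exercise = [3, 4, 2, 5, 5, 3]           # predictor 3: practical exercise result
performance = [62, 74, 58, 83, 88, 66]  # criterion: job performance

# The column of ones adds an intercept term to the prediction equation.
X = np.column_stack([np.ones(len(interview)), interview, ability, exercise])
coef, *_ = np.linalg.lstsq(X, np.array(performance, dtype=float), rcond=None)
print(coef)  # intercept followed by one regression weight per predictor
```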
Descriptive statistics therefore allows us to describe a dataset and determine the relationship between variables. However, as
explained earlier as part of the discussion on sampling, researchers most often want to obtain results in a study that can be generalised to
the rest of the group or organisation as well.
Inferential statistics enable us to draw conclusions that generalise from the subjects we have studied to all the people of interest by
allowing us to make inferences based on probabilities. Let us look at an example to clarify this further. One way in which the evaluative
question in Table 2.1 about the effectiveness of the training programme (the topic of chapter 10) can be addressed is to present the
training course to a group of interviewers and compare their interviewing skills before and after the training programme, as well as
compare their interviewing skills with a group of interviewers who did not undergo the training. Naturally, the interviewers have
different levels of skills, experience and learning ability, and would therefore also differ in terms of their interviewing skills. This is
called error variance. No two samples would perform exactly the same, based on the fact that individuals differ from one another. If we
find a difference between the groups and between the before and after measurements of the interviewing skills, how would we know that
the difference is because of our training programme and not simply because of error or, in other words, because of the particular sample
that we used in our study?
Inferential statistics function as a control of error variance and help us to determine if the difference between the groups is large
enough for us to be able to conclude that the difference is caused by the training programme. Inferential statistics allow us to calculate the probability of finding the observed results. If the probability of finding the difference by chance is less than 1 in 20 (that is, less than 0,05), we can conclude that the difference is more likely due to the training than to error or chance. There are a number of different statistical tests that are used
to test this probability. Some are used for experimental designs, others are used for non-experimental designs, and others can be used for
both. The most frequently cited test statistics in industrial psychology would probably be the t-test (used to test for the difference
between two groups) and the ANOVA or F-test (used to test for the difference between three or more groups).
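As an illustration of how such a test might be run in practice, the sketch below applies an independent-samples t-test to two small, hypothetical groups of interviewing-skill scores; it assumes the SciPy library is available and is not drawn from any study discussed here.

```python
from scipy import stats

# Hypothetical interviewing-skill scores for trained and untrained interviewers.
trained = [7.2, 6.8, 7.9, 8.1, 7.5, 6.9]
untrained = [6.1, 6.4, 5.9, 6.8, 6.2, 6.5]

t, p = stats.ttest_ind(trained, untrained)
print(t, p)  # if p < 0.05, the difference is unlikely to be due to chance alone
```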
Meta-analysis is a secondary research analysis method that re-analyses findings or conclusions reached from previously conducted
studies (Hunter & Schmidt, 2004). Meta-analysis is being used with increasing frequency in industrial psychology. It is a
statistical procedure designed to combine the results of many individual, independently-conducted empirical studies into a single result
or outcome. The logic behind meta-analysis is that we can arrive at a more accurate conclusion regarding a particular research topic if
we combine or aggregate the results of many studies that address the topic, instead of relying on the findings of a single study. The result
of a meta-analysis study is often referred to as an ‘estimate of the true relationship’ among the variables examined, because we believe
such a result is a better approximation of the ‘truth’ than would be found in any one study.
A typical meta-analysis study might combine the results from perhaps 25 or more individual empirical studies. As such, a
meta-analysis investigation is sometimes referred to as ‘a study of studies’. Although the nature of the statistical equations performed in
meta-analysis is beyond the scope of this book, they often entail adjusting for characteristics of a research study (for example, the quality
of the measurements used in the study and the sample size) known to influence the study’s results.
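As a rough illustration of the aggregation idea, the sketch below computes a sample-size-weighted mean correlation, a common first step in Hunter and Schmidt's approach; the study results shown are hypothetical, and real meta-analyses apply further corrections beyond this simple weighted average.

```python
# Each tuple holds a study's sample size N and observed correlation r (hypothetical values).
studies = [(120, 0.25), (300, 0.31), (80, 0.18), (250, 0.28)]

total_n = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / total_n
print(round(r_bar, 2))  # sample-size-weighted estimate of the underlying relationship
```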
Despite the apparent objectivity of this method, the researcher must make a number of subjective decisions in conducting a
meta-analysis. For example, one decision involves determining which empirical studies to include. The researcher could include every known study ever conducted on the topic, or only those studies that meet some criteria of empirical quality or rigour. The
latter approach can be justified on the grounds that the results of a meta-analysis are only as good as the quality of the original studies
used. The indiscriminate inclusion of low-quality empirical studies lowers the quality of the conclusion reached.
Another issue is referred to as the ‘file drawer effect’. Research studies that yield negative or non-supportive results are not
published (and are therefore not made widely available to other researchers) as often as studies that have supportive findings. The
unpublished studies are ‘filed away’ by researchers, resulting in published studies being biased in the direction of positive outcomes. A
meta-analysis of published studies could therefore lead to a distorted conclusion because of the relative absence of (unpublished) studies
reporting negative results. These are two examples of the issues that must be addressed in conducting a meta-analysis (Wanous, Sullivan
& Malinak, 1989).
Despite the difficulty in making some of these decisions, meta-analysis is a popular research procedure in industrial psychology.
Hausknecht, Day and Thomas (2004) conducted a meta-analysis on applicant reactions to selection procedures (the topic of chapter 4).
Some of their findings revealed that interviews and work samples were perceived more favourably by applicants than cognitive ability tests, which were perceived more favourably than personality inventories, honesty tests, biodata, and graphology. Their results also
indicated that applicants who hold positive perceptions about selection are more likely to view the organisation favourably and report
stronger intentions to accept job offers and recommend the employer to others.
Hunter and Schmidt (1996) are very optimistic about the scientific value of meta-analysis. They believe it has the power to change
how we conduct our research and to provide guidance on major social policy issues. Shadish (1996) contended that meta-analysis can
also be used to infer causality through selected statistical and research design procedures. It is clear that meta-analysis has become a
prominent data-analysis method for researchers and will undoubtedly continue to be so in the future.
Step 4 of the research process is summarised in Figure 2.5.
Step 5: Drawing conclusions from research
After analysing the data, the researcher draws conclusions. Generally, it is unwise to implement any major changes based on the results
of only one study. As a rule, we prefer to know the results from several studies. We want to be as certain as possible that any
organisational changes are grounded in repeatable, generalisable results. Sometimes the conclusions drawn from a study modify beliefs
about a problem. Note in Figure 2.1 that a feedback loop extends from ‘conclusion and interpretation’ to ‘question’.
The findings from one study influence research problems in future studies. Research is a cumulative process. Researchers build on
one another’s work in formulating new research questions. They communicate their results by publishing articles in journals. A
competent researcher must keep up to date in his or her area of expertise to avoid repeating someone else’s study. The conclusions
drawn from research can affect many aspects of our lives.
ETHICAL PROBLEMS IN RESEARCH
The ethical standards for conducting research in organisations have at least four basic requirements. Firstly, no harm should come to an
individual as a result of his or her participation in a research study. Secondly, the participant must be fully informed of any potential
consequences of his or her participation (informed consent). Thirdly, interviewees must understand that their participation is voluntary.
Fourthly, all reasonable measures should be taken to ensure that the anonymity and confidentiality of the data collected are maintained
(Lowman, 1998). Researchers who violate these rights, particularly in studies that involve physical or psychological risk, can be subject
to professional censure and possible litigation.
The Psychological Society of South Africa (PsySSA) (1992) has a code of ethics that must be honoured by all PsySSA members
who conduct research. Among the researcher responsibilities covered by the code of ethics are the accurate advertising of psychological
services, the confidentiality of information collected during research, and the rights of human participants. The code of ethics was
created to protect the rights of subjects and to avoid the possibility of having research conducted by unqualified people.
The researcher is faced with additional problems with regard to employees of companies. Even when managers authorise research, it
can cause problems in an industrial context. Employees who are naive about the purpose of research are often suspicious when asked to
participate. They wonder how they were ‘picked’ for inclusion in the study and whether they will be asked difficult questions. Some
people even think a psychologist can read their minds and thereby discover all sorts of private thoughts. Research projects that arouse
emotional responses may place managers in an uncomfortable interpersonal situation.
Figure 2.5 Step 4 of the research process (Coetzee, 2005:39)
As a practical example of ethical problems experienced while conducting research in an organisation, consider the following
role-conflict problem. The researcher used a questionnaire to assess the employees’ opinions and morale. Management had
commissioned the study. As part of the research design, all employees were told that their responses would be confidential. One survey
response revealed the existence of employee theft. Although the researcher did not know the identity of the employee, with the
information given and a little help from management, that person could have been identified.
Lowman (1998) presented a series of cases on ethical problems for industrial psychologists. Taken from real-life experiences, the
multitude of ethical dilemmas cover such issues as conflict of interest, plagiarising, and ‘overselling’ research results. The pressures to
conduct high-quality research, the need to be ethical, and the reality of organisational life sometimes place the researcher in a difficult
situation. These demands place constraints on the industrial psychologist that researchers in other areas do not always face.
In conclusion, research is a vital part of industry; it is the basis for changes in products and services. Research can be a truly exciting
activity, although it is seemingly tedious if you approach it only from the perspective of testing stuffy theories, using sterile statistics,
and inevitably reaching dry conclusions. Daft (1983) suggested that research is a craft, and a researcher, like an artist or craftsperson, has
to pull together a wide variety of human experiences to produce a superior product. Being a researcher is more like unravelling a
mystery than following a cookbook. However, research is not flash-in-the-pan thrill seeking; it involves perseverance, mental discipline,
and patience. There is also no substitute for hard work. Researchers can recall many times when they anxiously anticipated seeing
computer analyses that would foretell the results of a lengthy research study. This sense of anticipation is the fun of doing research – and
research, in the spirit of Daft’s view of researchers being craftspeople, is a craft we try to pass on to our students.
Reflection 2.5
Should the researcher violate the promise of confidentiality and turn the information over to management? Should the researcher tell
management that some theft had occurred but they had no way to find out who had done it (which would not have been true)? Or was
the researcher to ignore what he knew and fail to tell management about a serious problem in the company?
In this case, the researcher informed the company of the theft, but refused to supply any information about the personal identity of
the thief. This was an uneasy compromise between serving the needs of the client and maintaining the confidentiality of the
information source.
Gonzo and Binge (2003) regard a researcher as an investigator who can make the most of the practical situation in which he or she
has to conduct his or her study. McCall and Bobko (1990) also noted the importance of serendipity in scientific research. Serendipity
refers to a happy chance occurrence or happening that influences the research process. The history of science is filled with chance
discoveries. For example, a contaminated culture eventually led Alexander Fleming to learn about and recognise the properties of
penicillin. Rather than discarding the culture because it was contaminated, Fleming sought to understand how it had become so. The
lesson is that we should allow room for lucky accidents and unexpected observations to occur and be prepared to pursue them when we
do.
Rogelberg and Brooks-Laber (2002) propose that industrial psychologists should increasingly demonstrate the value of their
research. If we can influence the extent to which others see our research as credible, stakeholders will be more likely to use our findings
for organisational and individual improvement. Rogelberg and Brooks-Laber (2002) quote a few prominent authors on their views about
challenges facing those designing and doing research in industrial psychology. According to Church and Waclawski (cited in Rogelberg
& Brooks-Laber, 2002), one way in which the value of our research can be demonstrated is by focusing more on designing, analysing
and connecting data from existing organisational situations and contexts so that linkages between industrial psychology theory and
practice and definite, quantifiable outcomes can be demonstrated. In addition to this, they state that technology will affect the way
people work, communicate, interact, and relate to others, and will therefore have the greatest impact on industrial psychology research.
Hofmann (cited in Rogelberg & Brooks-Laber, 2002) is of the opinion that we should be sensitive to the changing nature of work,
particularly in the areas of diversity, technology, and globalisation, and attempt to address questions of organisational and societal
interest. Bachiochi (cited in Rogelberg & Brooks-Laber, 2002) stated that it is becoming more incumbent upon industrial psychology
researchers to reflect more accurately the feelings of previously under-represented minority groups. Aguinis (cited in Rogelberg &
Brooks-Laber, 2002) is of the opinion that because organisations are becoming global, they need research-based solutions that will
provide answers applicable to the organisation not only in their home country, but also in other countries.
All these aspects not only impact on how we do research, but should also influence the type of questions we ask. An understanding
of research methods is vital for industrial psychologists to resolve the problems that confront humankind in an increasingly complex
world.
CHAPTER SUMMARY
The goals of industrial and organisational psychology are to describe, explain, predict, and then modify work behaviour. Research
methods are important tools for industrial psychologists who specialise in personnel psychology, because they provide a systematic
means for investigating and changing work behaviour and for evaluating the validity, reliability, utility, fairness and quality of personnel
decisions throughout the employment process. Rational decision-making and objectivity are the overriding themes of the socio-scientific
method used to study work behaviour in the employment and personnel retention context.
Reflection 2.6
Let us now consider once again the case study presented at the beginning of this chapter.
How should I investigate this?
You are the industrial psychologist for a large financial company. The company has recently decided to implement a graduate
recruitment and selection programme. They have been recruiting and selecting 100 graduates annually for the past two years.
Some of the questions management has asked you to investigate scientifically are:
• Which factors influence the performance of a graduate recruit in the first six months of their employment?
• Can academic performance be used to predict which graduates would perform better?
• Is the current two-week induction programme effective for preparing the graduates for the company?
These aspects can be investigated qualitatively, quantitatively, or by using a combination of qualitative and quantitative measures.
Describe in detail how you would plan and execute your study to investigate these aspects. Explain clearly what aspects you would
investigate in a qualitative, quantitative, or combined quantitative and qualitative manner, and explain your reasons for using a
particular method for a specific aspect of the research project. Also describe the sample, the method of data collection, and the method
of data analysis you would use to investigate the problem.
The research process comprises various steps, including formulating the research question, choosing an appropriate design for the
study, collecting the data, analysing the data, and drawing conclusions from the research. Quantitative research yields results in terms of
numbers and qualitative research tends to produce flow diagrams and descriptions. The two are not mutually exclusive, however. The
process of triangulation involves combining results from different sources, which may include both kinds of research. The results of
research can be generalised only to the populations and settings included in the study; therefore, the more of these a study includes, the greater its generalisability.
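To make this concrete, the following short Python sketch (not part of the original case material; the scores are entirely hypothetical) shows how the predictive question in Reflection 2.6 – whether academic performance predicts early job performance – could be examined quantitatively by computing a Pearson correlation coefficient:

from math import sqrt

# Hypothetical data for ten graduate recruits (illustrative only, not survey results)
academic_avg = [62, 71, 58, 80, 75, 66, 90, 69, 73, 84]           # final-year average (%)
job_perf = [3.1, 3.8, 2.9, 4.2, 3.6, 3.4, 4.5, 3.2, 3.9, 4.4]     # supervisor rating after six months (1 to 5)

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

print(f"r = {pearson_r(academic_avg, job_perf):.2f}")

In practice such a coefficient would be interpreted together with its statistical significance and the size and representativeness of the sample before any selection decision is based on it.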
A critical concern to industrial psychologists is adhering to ethical principles and guidelines set by the South African Board for
Psychology, which governs research and practice in industrial and organisational psychology, including personnel psychology. The
overriding ethical principle is ‘do no harm’.
Review questions
You may wish to attempt the following as practice examination-style questions.
1. Consider the steps in the research process. What are some of the major problems that are likely to be encountered at each step of the
research process?
2. What are the strengths and weaknesses of the experimental and the correlation methods? Under what circumstances would you use
each?
3. Review chapter 6. How can correlation, regression and multiple regression be used to enhance the reliability, validity, quality and
utility of personnel selection decisions?
4. Review chapter 10. How can evaluative research questions be used in training and development?
5. Explain the concept of research design. What are the aspects that help researchers to decide on the most appropriate design for a
research study?
6. Why is it important for industrial psychologists to adhere to ethical principles and standards?
7. Discuss the various aspects and techniques that can be considered in collecting data for research purposes.
Multiple-choice questions
1. Management is interested in knowing the degree to which employees are currently experiencing burnout in the organisation. This is
an example of a ________ question.
a. causal
b. predictive
c. descriptive
d. evaluative
2. A correlation coefficient of –0,24 is a ______ relationship.
a. moderate negative
b. weak negative
c. moderate positive
d. strong positive
3. In order to determine if one variable causes another variable, the best type of design to be used is a:
a. non-experimental design
b. mixed method design
c. qualitative design
d. experimental design
4. In order to collect data in a quantitative study, the following methods are most often used: surveys, observations, focus groups and
archival data.
a. True
b. False
5. The Human Resources manager wants to know whether the degree of job satisfaction differs across genders and age groups. In this study, the independent and dependent variables are as follows:
a. The independent variables are gender and age, and the dependent variable is job satisfaction.
b. The independent variable is job satisfaction, and the dependent variables are gender and age.
c. The independent variable is gender, and the dependent variables are age and job satisfaction.
d. The independent variables are job satisfaction and gender, and the dependent variable is age.
PART 2
PERSONNEL EMPLOYMENT
CHAPTER 3
INTRODUCTION: THE EMPLOYMENT CONTEXT AND HUMAN RESOURCE PLANNING
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
THE CHANGING CONTEXT OF EMPLOYMENT
• Globalisation and competition
• Technological advances
• The nature of work
• Human capital
• Diversity and equity in employment
• Demographic and workforce trends
• The national Occupational Learning System (OLS) and Employment Services South Africa (ESSA)
• Other demographic and workforce trends: Generation Y and retirees
HUMAN RESOURCE PLANNING
• Organisational strategy
• The strategic planning process
• Business strategy and strategic human resource management
THE RATIONALE FOR HUMAN RESOURCE PLANNING
• What is human resource planning?
• Obtaining the correct number of people
• The right skills
• The appropriate place
• The right time
• Keeping the people competent
• Keeping the people motivated
• Becoming involved with strategic-level corporate planning
• Succession planning and talent retention
THE HUMAN RESOURCE PLANNING PROCESS
• Investigative phase
• Forecasts and estimations
• Planning phase
• Implementation phase
EVALUATION OF HUMAN RESOURCE PLANNING
THE EMPLOYMENT PROCESS
• Job analysis and criterion development
• Human resource planning
• Recruitment and selection
• Reward and remuneration
• Performance evaluation
• Training and development
• Career development
• Employment relations
• Organisational exit
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
As outlined in chapter 1, personnel psychology addresses issues that concern the employment and retention of personnel. Industrial
psychologists who specialise in personnel psychology are involved in activities that form the basis of the employment process. These
activities include human resource planning, job analysis and the development of performance criteria, employee recruitment and selection, the
measurement of employee performance and the establishment of good performance review procedures, the development of employee training
and development programmes, the career development and guidance of employees, the formulation of criteria for promotion and succession
planning, and dismissal and disciplinary action. They may also establish effective programmes for employee retention, compensation and
benefits, create incentive programmes, and design and implement programmes to enhance employee health and wellbeing.
This chapter discusses the employment context within which the employment process occurs. It further outlines the human resource
planning (HRP) process and its function in the employment process. The chapter concludes with an overview of the employment process.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Describe the global and national factors and trends that influence the employment and human resource planning process.
2. Evaluate the function of human resource planning in the employment process.
3. Assess the link between strategic planning and human resource planning.
4. Give an outline of the human resource planning process.
5. Describe the various phases of the employment process.
CHAPTER INTRODUCTION
Over the past decade, dramatic changes in markets, technology, organisational designs, and the respective roles of managers and
employees have inspired renewed emphasis on and interest in personnel psychology (Cascio & Aguinis, 2005). Competitive,
demographic, and labour market trends are forcing businesses to change how they do things. For example, more competition means
more pressure to lower costs and to make employees more productive. Most employers are therefore cutting staff, outsourcing tasks,
and instilling new productivity-boosting technologies. And as part of these efforts, they also expect their human resource managers,
human resource practitioners and industrial psychologists to ‘add value’, in other words, to improve organisational performance in
measurable ways (Dessler, 2009). As such, human resource management and personnel psychology have evolved in response to these changing trends and issues.
As companies expanded in the early 1900s, employers needed more assistance with ‘personnel’. The earliest personnel departments
took over hiring and firing from supervisors, ran the payroll department, and administered benefits. As know-how in things like testing
and interviewing emerged from the field of personnel psychology, the personnel department began playing a bigger role in employee
selection, training and promotion. The appearance of union legislation in the 1930s added ‘protecting the company in its interaction
with unions’ to the personnel department’s responsibilities. Then, as new equal employment legislation created the potential for
discrimination-related lawsuits, employers turned to personnel managers and industrial psychologists to provide the requisite advice
(Dessler, 2009).
Today, competitive, technological, and workforce trends mean business is much tougher than in the past. Companies are increasingly realising that they need to consider the demands and pressures of the external environment. They are functioning in a far more ‘hostile’ or demanding environment, which has resulted in tougher work targets and work intensification. These erode employment stability, high pay and career mobility opportunities within the organisation, often leading to employees becoming disillusioned and leaving the organisation, or staying and becoming disengaged (Boxall & Purcell, 2003; Truss, 2001). In this regard,
human resource managers and industrial psychologists face new issues.
The role of the personnel or human resource management department has shifted to improving organisational performance by
tapping the potential of the company’s human capital while considering the impact of the external environment and wider competitive
pressures on the performance and behaviour of people (Dessler, 2009; Robinson, 2006). The concept of ‘human capital’ is predicated
on the notion of people as an organisational resource or asset. It is fundamentally about measurement and has led to the development
of a range of metrics and reporting measures designed to quantify the contribution of people to organisational success (Robinson,
2006). Highly-trained and committed employees, not machines, have now become a company’s main competitive advantage.
Competitive advantage stems from an organisation’s ability to build distinctive ‘core’ competencies which are superior to those of
rivals. In other words, it is the skills, abilities and capabilities of people that exist within an organisation at any given time (the stock of
human capital) along with its systems and processes used to manage this stock of human capital that constitute a source of competitive
advantage. Human capital advantage (one critical aspect of the company’s competitive advantage) is therefore concerned with recruiting and retaining outstanding people through capturing a stock of exceptional human talent, latent with productive possibilities (Boxall, 1996). Human process advantage, on the other hand, puts the spotlight on the actual
link between processes that connect inputs (HR practices) and outputs (financial performance). Both human capital and organisational
processes can generate exceptional value and competitive advantage when an organisation’s ability to recruit people with talent and
scarce and/or critical skills is supported by processes, systems and practices that ensure people are motivated to work, engage in
organisational citizenship behaviours, and perform effectively (Boxall, 1996; Robinson, 2006).
The emphasis on competitive advantage means that employers have to rethink how their human resource management departments
do things. We’ll see in this book that the trend today is for the human resource practitioner and industrial psychologist to spend less
time on administrative, transactional services such as benefits administration, and more time doing three main things:
• Supporting senior management’s strategic planning efforts
• Acting as the company’s ‘internal consultant’ for identifying and institutionalising changes that will help the company’s employees contribute better to the company’s success and engage in organisational citizenship behaviour, and
• Assisting management with and advising them on critical decisions they need to make concerning the employment and
retention of valuable, talented, skilled and high-performing people.
The shift that has occurred therefore emphasises much more input in the form of knowledge, skills and expertise to strategic,
people-related decisions that ‘inform and support’, rather than merely providing people management services at an operational level.
Human resource managers, practitioners and industrial psychologists who specialise in personnel psychology therefore add valuable
knowledge and skills to companies today by their focus on empirically studying individual differences and behaviour in job
performance (within the changing context of employment) and their consequences for the organisation, and providing scientific
techniques of measuring and predicting such differences and consequences. Scientific decision-making techniques ultimately lead to
high-quality decisions that improve overall organisational effectiveness and bottom-line performance (Cartwright & Cooper, 2008;
Cascio & Aguinis, 2005).
THE CHANGING CONTEXT OF EMPLOYMENT
As mentioned previously, attracting and retaining high performing human capital are not what they were just a few years ago.
Competitive, labour market, technological, demographic, and workforce trends are driving businesses to change how they do things.
These trends are briefly discussed below, since they play a major role in the employment and retention of personnel.
Globalisation and competition
Globalisation refers to the tendency of companies to extend their sales, ownership, and/or manufacturing to new markets abroad. More
and more, the markets for the goods and products that companies sell are international, which means, of course, that competition for
those markets as well as nationally-established ones is also increasingly becoming international (Dessler, 2009; Torrington, Hall, Taylor
& Atkinson, 2009). Large organisations that were able to dominate national markets a decade or two ago (many owned and operated by
governments) are now mainly privately owned and faced with vastly more competition from similar organisations based all over the
world. This has led to consolidation through the construction of global corporations and strategic alliances whose focus in terms of
employment processes and people management is also international (Torrington et al, 2009). More globalisation also means more
competition, and more competition means more pressure to be ‘world class’ – to lower costs, to make employees more productive, and
to do things better and less expensively (Dessler, 2009).
Technological advances
Developments in information technology, energy production, chemical engineering, laser technology, transportation and biotechnology
are in the process of revolutionising the way that many industries operate. It is partly the sheer pace of change and the need for
organisations to stay ahead of this very fast-moving game which drive increased competition. Technology education, as a school
discipline, is an international trend that has resulted from the need for all citizens to be technologically literate in order to understand the
basic elements of modern technology and its impact on modern life and in the quest to acquire technological skills. This trend is
increasingly supported by organised industry and labour because of its potential impact on the quality, level and preparedness of future
entrants to the world of work (merSETA, 2008). In practical terms, the accelerating pace of technological advances means that human
resource management techniques and practices continually have to be reviewed and developed to enhance an organisation’s competitive
position (Torrington et al, 2009). For example, managers have to learn how to attract, select, retain and manage an international and
culturally diverse workforce effectively and how best to attract, retain, develop and motivate people with those relatively scarce skills
that are essential if an organisation is to harness and deploy evolving technologies effectively.
The nature of work
Technology is also changing the nature of work, even factory work. In plants throughout the world, knowledge-intensive high-tech
manufacturing jobs are replacing traditional factory jobs (Dessler, 2009). Moreover, today more people are employed in producing and delivering services than in making products. This trend has also led to a significant shift in people’s work values and career
orientations. Whereas twenty years ago, people were more attracted to jobs that provided security and stability and opportunities to
develop their expertise in a specific area of specialisation, the new generation workforce is increasingly found to place higher value on
jobs that provide for a balanced work-lifestyle, challenging work, autonomy, and opportunities for further growth and development,
which includes intra- and inter-organisational mobility and further education and training (Coetzee & Schreuder, 2008).
Human capital
As mentioned previously, human capital refers to the knowledge, education, training, skills and expertise of a company’s workers
(Dessler, 2009). Today, employment is fast moving from manual and clerical workers to knowledge workers who resist the
command-and-control model of management, and who demand a more values-centred leadership style which emphasises human rights
in the workplace, respect, and dignity. In practical terms, this means a stronger focus on the psychological factors that influence the
motivation, performance, satisfaction and engagement of employees. These factors are discussed in more detail in chapter 7. In addition,
managers need new world-class personnel or human resource management techniques and human resource management systems and
skills to select, train, motivate and engage their contemporary workforce and to get them to work more like committed partners (Dessler,
2009).
Diversity and equity in employment
In today’s world of globalisation and demographic change, the workforce is becoming increasingly diverse, and it is more important
than ever for organisations to develop and manage equal opportunity and diversity strategies that attract and retain talent to improve
workforce performance and so promote their competitive position (Torrington et al, 2009). The basic concept of diversity in the
workplace accepts that the workforce consists of a diverse population of people. The diversity consists of visible and non-visible
differences which include factors such as gender, age, socio-economic background, race, ethnicity, disability, personality and
work-style. In this regard, the concept of diversity is focused on the business benefits of making the most effective use of the skills,
abilities and potential of all employees. Whereas employment equity initiatives in the South African work context are mostly focused on
compliance with legislation and the needs of historically disadvantaged groups, diversity is a much broader, more inclusive concept
focused on the challenge of meeting the needs of a culturally diverse workforce and of sensitising workers and managers to differences
associated with gender, race, ethnicity, age, and nationality as a way of maximising the potential productivity of all employees
(Robinson, 2006).
The business case for diversity has been reinforced by global demographic trends and skills shortages. On a simple level, if an
employer invests in ensuring that all employees are valued and given the opportunity to develop their potential, they are enabled to make
their maximum contribution to the company’s performance. For example, a company that discriminates, directly or indirectly, against
older or disabled people, women, historically disadvantaged groups, minorities, people with different sexual orientations, or other
minority groups will be curtailing the potential of available talent, particularly in times of skills shortages (Torrington et al, 2009).
Moreover, Hicks-Clarke and Iles (2000) have found that employees’ job satisfaction and commitment are related to a positive diversity
climate.
The South African employment equity legislation emphasises historically disadvantaged groups, and the need, for example, to set
targets for those groups to ensure that their representation in the workplace reflects their representation in wider society in occupations
where they are under-represented. In principle the Employment Equity Act 55 of 1998 (EEA) aims to achieve equity in the South
African workplace by promoting equal opportunities and fair treatment in employment through the elimination of unfair discrimination
and implementing affirmative action measures to redress the disadvantages in employment experienced by designated groups (Africans,
coloureds, Asians, women, and people with disabilities) to ensure their equitable representation in all occupational categories and levels
in the workplace.
Affirmative action measures need to be taken to address those employment policies, practices and working conditions that have an
adverse effect on the employment and advancement of members of the designated groups. Adverse impact refers to the total employment
process which results in a significantly higher percentage of a designated group in the candidate population being rejected for
employment, placement or promotion. The importance of fairness and equity in personnel decisions is addressed in more detail in
chapter 6. Employers may not institute an employment practice that causes a disparate impact on a particular class of people unless they
can show that the practice is job related and necessary (Dessler, 2009).
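As a rough illustration of how possible adverse impact might be flagged, the Python sketch below compares the selection rate of a designated group with that of the rest of the candidate pool, using entirely hypothetical figures. The 80% (‘four-fifths’) threshold applied here is a common rule of thumb from the selection literature, not a requirement of the EEA:

# Hypothetical selection figures (illustrative only)
applicants = {
    "designated": {"applied": 120, "appointed": 18},
    "non_designated": {"applied": 80, "appointed": 24},
}

# Selection rate per group: appointed / applied
rates = {group: d["appointed"] / d["applied"] for group, d in applicants.items()}
impact_ratio = rates["designated"] / rates["non_designated"]

print(f"Selection rates: {rates}")            # 0.15 versus 0.30 in this example
print(f"Impact ratio: {impact_ratio:.2f}")    # 0.50 here

if impact_ratio < 0.8:  # rule-of-thumb threshold, not a statutory test
    print("Possible adverse impact - examine whether the practice is job related and necessary.")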
In the South African context, the notions of employment equity and adverse impact encompass employment practices such as the following (Grobler, Wärnich, Carrell, Elbert & Hatfield, 2006):
• Appointment of members from designated groups. This would include transparent recruitment strategies such as appropriate
and unbiased selection criteria by selection panels and targeted advertising.
• Increasing the pool of available candidates. Community investment and bridging programmes to increase the number of
potential candidates.
• Training and development of people from designated groups. These include access to training by members of designated
groups; structured training and development programmes such as learnerships and internships; on-the-job mentoring and
coaching; and accelerated training for new recruits.
• Promotion of people from designated groups. This could form part of structured succession and experience planning and
appropriate and accelerated training.
• Retention of people from designated groups. Retention strategies could include promotion of a more diverse organisational culture, an interactive communication and feedback strategy, and ongoing monitoring of labour turnover.
• Reasonable accommodation for people from designated groups. These include providing an enabling environment for
disabled workers and workers with family responsibilities so that they may participate fully and, in so doing, improve
productivity. Examples of reasonable accommodation are accessible working areas, modifications to buildings and facilities,
and flexible working hours.
Demographic and workforce trends
Workforce demographic trends are making finding and hiring high-performing, talented and skilled employees more challenging. Many
industries worldwide, for example, have found themselves facing skills shortages in recent years. This impact varies from country to
country depending on relative economic prosperity, but most organisations in South Africa have seen a tightening of their key labour
markets in recent years. Competition among South African industries has also led to increased labour costs as more and more companies
are reporting high costs of skilled labour (merSETA, 2008).
The key South African labour market indicators reported in 2009 by the Department of Labour show that:
• Unemployment remains the key challenge for transformation of the South African labour market. Providing training,
especially to the historically disadvantaged unemployed youth, could enhance their prospects of accessing the labour market.
Such an intervention can also promote social cohesion and build the skills base from which accelerated growth and
development can be launched. Furthermore, a significantly higher burden of unemployment is borne by women and youth
(particularly the 20- to 24-year age group) in the labour market.
• The proportion of African workers with relatively low educational levels (up to and including Grade 12 (‘matric’)) remains
large and should form a focal point in attempts to link skills development and equity. However, an analysis of Labour Force
Survey data for 2001 to 2007 shows an upward trend of around 20 000 per annum in the numbers of people holding a
qualification in the manufacturing, engineering and technology areas.
• African workers remain under-represented in certain high-skill occupations, and this should also form a focal point in linking skills and equity, where education and skills development can improve chances for promotion and mobility in the workforce. As
a result of the apartheid legacy of unequal educational and employment opportunities, the racial profile of employment in
South Africa remains skewed. There is a much greater representation of blacks in the informal sector of the economy and a
very low percentage of whites and Asians in elementary non-skilled occupations. Whites, and to some degree Asians, are still
over-represented in highly-skilled and high-salaried jobs. However, current trends indicate that whites are not as prevalent
among young professionals, implying that one can expect that over time there will be a move towards an overall profile that is
more representative of the country’s population (Department of Labour, 2009).
• Employment of professionals increased slightly in the early part of 2002. Given the evidence of shortages of many categories of professionals in the labour market, this trend may be affected by the number of people emigrating and retiring, or it may be due to survey or statistical changes or errors. Labour market indicators also show that skilled and semi-skilled occupations
have come to dominate the make-up of the South African workforce, as has been the case in many other countries.
• A critical requirement is the provision of adequate numbers of graduates in tertiary technical fields such as science and
engineering, which in turn requires a much larger pool of school leavers with university-entrance passes in mathematics and science.
This implies purposeful efforts to improve the cadre of school leavers necessary for an increasingly knowledge-driven
economy.
Higher Education and Training (HET) and Further Education and Training (FET) institutions are critical in developing the
skills necessary for the sector to be productive and competitive. Available data provides an insight into the limited current
ability of these institutions to produce the skills needed, as well as the challenges that the manufacturing, engineering and
related services sector (merSETA) is likely to face. When the engineering fields – a good proxy for the core skill sets required by merSETA chambers – are examined, the data indicate a pattern of poor graduation rates.
A Human Sciences Research Council (HSRC) study (2006, cited in merSETA, 2008) infers that one of the principal reasons
for such a low graduation rate is the poor quality of mathematics and science teaching in South African schools. Just 5% of
the 471 080 high school students who took their final exams in 2004 passed mathematics, needed for entrance into
engineering studies at university. Another problem is the almost total absence of structured career education or development
in respect of careers linked to science and technology. The need for interventions to increase the number of female learners
who embark on careers in science, engineering and technology is also universally recognised. The HSRC study points to a
crisis in dropout rates, signalling that there was an almost 50% dropout rate at universities in the period from 2000 to 2003.
• Workers with disabilities, by and large, remain excluded from the South African labour market.
• South Africa’s human development index declined from 1992 to 2005, and is the lowest when compared with other
developing countries (such as Botswana, Brazil, Mexico, Malaysia and Thailand). The human development index (HDI),
developed in 1990, is used extensively in debates on the level of human development of a country. The downward trend in
South Africa’s HDI is largely as a result of the fall in the life expectancy index which is highly sensitive to the impact of
HIV/Aids. According to the Bureau of Economic Research (2006, cited in merSETA, 2008) report, the HIV/Aids prevalence in the metal products, machinery and equipment manufacturing sector (merSETA) is among the highest, at about 16,1% in 2005. It is projected that by 2010 the sector will have about 2,4% of its workforce sick with
HIV/Aids-related illnesses and with mortality rates of about 1,2%. About 48% of companies in the sector complain that HIV
has affected output levels; 42% contend that they have lost highly-skilled personnel owing to HIV/Aids-related illnesses and
deaths; while 58% hold that HIV/Aids has contributed to lower productivity through sick leave and frequent visits to the
hospital.
Change in employment by occupation is also an important labour market indicator. Researchers often use occupations as a proxy for
measuring the demand for skills. By tracing shifts over time, it is possible to identify which occupations are growing in their share of
employment and, therefore, where demand exists for certain occupational groupings. Overall, in South Africa there has been a
significant shift in occupational patterns, especially with regard to the growth in employment in middle-level occupations, such as
technical and associated professionals, clerical workers, and craft workers.
In summary, the South African labour market is characterised by an over-supply of unskilled workers and a shortage of skilled ones.
High population growth constantly exceeds the growth in employment demands. This is compounded by the consistent loss of jobs in
the formal sector as the country’s economy moves away from labour-intensive towards capital-intensive operations in a
globally-orientated economy which increases the need or demand for highly-skilled human resources (Department of Labour, 2009). In
South Africa, unemployment levels have remained high, with the availability of people with scarce and critical skills and experience in
high-technology jobs (for example, engineering) remaining alarmingly low. Demographic trends in South Africa and across the globe
have also created a situation in which the number of older, more experienced people retiring is greater than the number of younger,
skilled people entering the job market.
In practical terms, the unavailability of experienced people with critical and scarce skills in the South African labour market means
that employers will have to make themselves more attractive to employees than has been necessary in recent years. No longer can they
simply assume that people will seek work with them or seek to remain employed with them. In a tight labour market people have more
choice about where and when they work, resulting in their not having to put up with a working environment in which they feel unhappy
or dissatisfied. If they do not like their jobs, there are more opportunities for them to look elsewhere. So organisations are increasingly
required to compete with one another in labour markets as well as in product markets (Torrington et al, 2009).
Furthermore, South Africa is faced with skills gaps, an ageing highly-skilled workforce, increasingly complex technology, and rising
consumer expectations from service providers. As people need to be assisted to acquire the skills needed for employment, the demand
for quality education and skills-based development has become a national priority (Department of Labour, 2009). South African policies
such as Black Economic Empowerment (BEE), Industrial/Sectoral Charters, employment equity legislation, human resources
development strategy, the National Skills Development Strategy (including the scarce skills approach), and skills development
legislation offer opportunities for upskilling, thereby also ensuring that different industries enter value-added markets to stimulate
demand, employers act in their long term interest, and there are incentives for institutions to secure both high and basic skills. However,
the scale of the challenge is still daunting to most employers, and the task of appropriate skilling that lies ahead confronts all South African workplaces (Telela, 2004).
The national Occupational Learning System (OLS) and Employment Services South Africa
(ESSA)
In 2004 the national skills crisis and the drive for equity in the workplace put the Department of Labour in a position where it needed to respond urgently. In line with international trends, the labour market was put at the centre of the solution to the skills crisis, and an
approach to learning which served occupations was adopted. The thinking was that if the needs of the labour market were understood
and the solutions were provided in a form usable by the labour market (occupations), then workplace learning (education, training and
development) could be provided that would by definition be occupationally relevant and lead to employment opportunities. The
Organising Framework for Occupations (OFO) was established as a national framework with which to report on skills demand and
skills supply, and select occupations for skills development interventions. The OFO, an internationally comparable framework already in
international use with several variations, classifies about 1 300 occupations into groups and related clusters. By mapping the OFO to the
National Qualifications Framework (NQF) levels (the topic of chapter 10), a career pathway framework, namely the National
Occupational Pathways Framework (NOPF) could be established to clarify horizontal and vertical career pathway progression options
(RainbowSA, 2010).
Against the backdrop of the OFO and NOPF, the national Occupational Learning System (OLS) was established as a new set of
systems, structures and processes designed by the Department of Labour to improve work-related (occupational) learning. The OLS
includes an Occupational Qualifications Framework (OQF) as a new sub-framework of the NQF, and a Quality Council of Trades and
Occupations (QCTO) as a new standards-setting and quality assurance body, as well as several innovations relating to the planning,
implementation and impact assessment of learning in business and industry. The OLS brings a much tighter link between skills demand
and skills supply, involves professional bodies and practising experts in the design of learning solutions, maps out clear career pathway
options by way of the NOPF, and matches one qualification with each occupation registered on the OFO (RainbowSA, 2010).
Employment Services South Africa (ESSA) is a government employment agency and database established to capture data on the
supply and demand of skills in the country so that the OLS can respond with relevant learning (education, training and development)
solutions. Sector Education and Training Authorities (SETAs) gather detailed information on the supply and demand of skills in each of
the 21 socio-economic sectors. They also analyse critical and scarce skills requirements. This information is gathered via Workplace
Skills Plans and Annual Training Reports. Research and surveys are also undertaken by SETAs to understand the skills needs in the
specific sector more clearly. A five-year Sector Skills Plan (SSP), updated annually, is the result of the foregoing work, and is captured
in the ESSA database. Workplaces report to SETAs annually using Workplace Skills Plans and Annual Training Reports. They also
register vacancies on ESSA and receive accreditation by the QCTO to deliver the workplace components of occupational learning. The
NOPF further allows for the simple translation of the data from ESSA into appropriate skills development strategies and interventions in
the workplace.
ESSA will not only inform companies and educational institutions what skills are in demand, and how well those skills are being supplied, but will also provide information regarding which occupations are dying out or changing shape – information equally important
for human resource planning (HRP) purposes. Since ESSA is based on real-time feedback from the labour market, it will also be able to
inform companies and educational institutions on a daily basis how successful learning interventions in workplaces are. This will enable
the OLS to adapt learning solutions more rapidly based on objective feedback from the labour market, and to justify investment and
expenditure in those areas which are effective. ESSA is expected to be operational from 2010 (RainbowSA, 2010).
As an integrated data management system, some of the specific benefits of ESSA will include the following:
• A skills profile of all work-seekers and an accurate register of who they are, where they live, and so forth
• An accurate record of scarce and critical skills based on day-to-day labour market demand, which will be used to develop the
national Scarce and Critical Skills List
• An effective career information and guidance service
• An accurate register of placement opportunities
• Matching of individuals to placement opportunities (supply and demand), and
• Records of placements and skills development providers and programmes.
Although information on scarce and critical skills is currently compiled by the Department of Labour via Sector Skills Plans, ESSA will
have the additional advantage of details of all registered placement opportunities and will therefore be even more accurate in terms of
current (day-to-day) labour market demand (RainbowSA, 2010).
The ESSA regulations define the work of private employment service agencies with regard to fees they charge, daily record-keeping,
and reporting to ESSA of vacancies and work-seekers. Besides capturing data from the labour market, ESSA will offer free public
employment services. Table 3.1 provides an overview of the type of services that will be offered by ESSA.
Other demographic and workforce trends: Generation Y and retirees
Some experts note that the incoming young workers (the so-called Generation Y) may have different work-related values from those of
their predecessors. Based on one study, current (older) employees are more likely to be work-centric (focus more on work than on
family with respect to career decisions). Younger workers tend to be more family-centric or dual-centric (balancing family and work
life) (Dessler, 2009).
Fortune Magazine (cited in Dessler, 2009:34) says that today’s new Generation Y employees will bring challenges and strengths to
the workplace. It says they may be the ‘most high maintenance workforce in the history of the world’. On the other hand, as the first generation raised on pagers and e-mail, they bring information technology skills that will also make them the highest performers.
One survey (Dessler, 2009) calls the ‘ageing workforce’ the biggest demographic trend impacting employers. For example, how will
employers replace these retiring employees in the face of a diminishing supply of young workers? Employers are dealing with this in
various ways. Dessler (2009) reports findings of a survey which observed that 41% of surveyed employers are bringing retirees back into
the workforce, and 31% are offering employment options designed to attract and retain semi-retired workers. This retiree trend helps
explain why ‘talent retention’ (also the topic of chapter 7), or getting and keeping good employees, ranks as companies’ top concern.
Table 3.1 Services offered by ESSA (RainbowSA, 2010:253)
• Registration services: These include the registration of individuals, employers, employment opportunities, and skills development providers. The registration of individuals includes the development of a skills profile where the person’s qualifications and experience are recorded according to the OFO.
• Career information and guidance services: Career guidance, or employment counselling, includes providing career, labour market, and scarce and critical skills information and guidance on accessing placement opportunities.
• Recruitment and selection services: These include the proactive identification of opportunities through networking with stakeholders, the matching of individuals on the database to opportunities, recruitment and selection for a particular opportunity, and placement.
• Skills development services: These include identifying scarce and critical skills, registering training courses with the National Skills Fund, allocating funding for skills development, selecting skills development providers, contracting providers, monitoring training, processing provider claims, and scheduling assessments at INDLELA. INDLELA is a national institution that focuses on addressing the shortage of suitably qualified artisans, developing a career path for artisans as well as enhancing the quality of artisan development and trade assessments or tests.
• Information services: These include producing information such as brochures, pamphlets, career packages, and advocacy for accessing employment and skills development services.
• Special services: Special services include services provided for vulnerable groups such as people with disabilities, youth, retrenched employees, and ex-offenders.
HUMAN RESOURCE PLANNING
In the early 1970s many companies were planning significant expansion. During this period such companies were quick to realise that
the key to success was an adequate supply of appropriately skilled people. This led to the emergence of human resource planning (HRP)
as a personnel tool. Companies attempted to forecast their human resource requirements in the medium to long term and then to analyse
their ability to achieve the forecast levels. The recessions of the 1980s and 1990s, together with the readily available pool of skilled
labour, resulted in much less emphasis on the need for HRP (Parker & Caine, 1996). However, the current global and national skills
shortages and changing demographic trends have caused a resurgence in the importance of HRP.
Reflection 3.1
Read through the excerpt below and answer the questions that follow.
Mining industry skills: another R9 billion needed for artisan training
24 July 2008
(Source: <www.skillsportal.co.za>)
The mining industry has met many of the employment equity and skills development challenges it has faced over the past few years
and far exceeds other industries in its commitment to training. This is the finding of mining research carried out by executive search
firm Landelahni Business Leaders. Says CEO Sandra Burmeister: ‘The mining industry has been sorely challenged, and it has
responded positively.’
In 2006, revenue of the top 40 global mining industry players increased by 37%. Globally the industry is struggling to meet
demand, as the number of mining development projects soars. Constraints include regulatory controls, procurement delays, increased
risk, and a shortage of skills, with engineers and artisans in particular in short supply. Such pressures, says Burmeister, have made this
country a poaching ground for Australian and Canadian companies. South Africa is renowned for its mining expertise, and the
industry is competing for skills in the global resourcing market. It is also competing for scarce skills with infrastructure, construction,
manufacturing, and other industry sectors, while facing pressure for upping the pace of transformation.
The 2008 Landelahni Mining Survey researched 12 of the 18 main participants in the gold, coal, diamond, platinum, and uranium
mining sectors in South Africa, representing a sample of 177 491 permanent employees (56,1% of total permanent employees).
Employment and gender equity
The survey shows that Black representation in the mining industry at top management level more than doubled from 12,5% in 2001 to
30,6% in 2006, ahead of the all industry average of 22,2%. The single biggest shift occurred at non-executive director level – from
0,5% to 37%. However, in management, professional, skilled, and semi-skilled categories, mining lags behind the industry average.
The mining industry has made some strides in terms of gender equity, with women in top management increasing from 0,01% to
9,3% between 2001 and 2006, women in senior management increasing from 0,03% to 10,1%, and those in mid-management and
professional positions increasing from 5,4% to 18,3%. However, across all levels, mining lags behind the all industry average.
‘These findings must be seen in context,’ says Burmeister. ‘Mining by its nature is a male-orientated industry with a large
proportion of operational teams on remote sites. This influences the number of women in the sector. For similar reasons, there are still
significantly fewer women in technical and engineering across all industries.
‘What is encouraging, however, is a fundamental shift of blacks and women into core operational positions in the mining sector.
This demonstrates a massive commitment to change on the part of mining houses. Whereas past surveys indicated that as many as
80% of blacks and females were employed in human resources, finance, marketing, and other support roles, the 2006 data shows the
proportion has now dropped to 65%, with a substantial increase in black and female employees in core business and mining
operations.’
The question has been raised in the media and other forums as to whether the industry should be concerned about EE numbers
when the skills shortage is so dire. In Burmeister’s view, employment equity is integral to ongoing transformation, and most sectors,
including mining, have addressed employment equity in some form or another. However, she believes that the focus should be on
training and development from graduate level through to middle and professional levels, and by virtue of numbers, equity concerns
will automatically be addressed. ‘It’s not about the importance of one aspect versus the other,’ she says. ‘Skills development and
employment equity are both fundamental to corporate success.’
Skills development: Graduates
Because mining includes a large component of technical and professional staffing, ‘It is critical,’ says Burmeister, ‘that, apart from
developing leadership and management as most organisations would do, mining should place additional emphasis on graduates,
professionals and skilled workers.
‘University and technikon graduation across engineering disciplines was flat between 1998 and 2004, with a significant increase of
close to 1 000 engineering graduates year-on-year in 2005 and 2006. This is a step in the right direction,’ says Burmeister. ‘However,
low actual graduation numbers as a proportion of enrolments are a cause for concern.’ Only 11,5% of enrolments actually graduated
between 1998 and 2006. So while total enrolments over the period numbered 304 240, graduates numbered only 35 511.
To compound the matter, the numbers entering mining-specific disciplines such as mining engineering and metallurgical
engineering are low. Some 337 mining engineers graduated in 2003, rising to 428 in 2006. Metallurgical engineering boasted 165
graduates in 2003, rising to 263 in 2006. ‘Clearly,’ says Burmeister, ‘mining is starting to be considered as an attractive industry in
which to work. Efforts by the sector to increase numbers of bursaries and attract students to the sector have borne fruit.
‘However, one must question whether the industry can support its growth objectives on 428 mining graduates a year. The answer
clearly is: “No”. While some progress has been made, an increase in effort is required.’
Skills development: Artisans
According to the Joint Initiative for Priority Skills Acquisition (JIPSA), at least 12 500 artisans should be produced each year over the
next four years to meet demand. However, South Africa continues to suffer a severe shortage of qualified, competent and experienced
artisans. The number of artisans tested across all trades increased from 15 000 in 1970 to 26 500 in 1986, while those who passed
trade tests increased from 6 000 to 13 500. From 1986, however, the numbers tested dropped to 9 041, and those who passed dropped
to a low 3 222, or 42%.
The Mining Qualifications Authority (MQA) target registration for artisans in 2008 is 1 766, against 1 034 actual registrations.
Assuming an average pass rate of 42%, some 434 artisans are trained each year under MQA auspices with specific qualifications in
mining. ‘Artisan training requires a significantly increased investment by both government and private sector,’ says Burmeister. ‘The
current artisan population is aging, with an average age of 50-55 years. So we should not merely be training for current needs, but also
to replace the ageing workforce.’
Skills development: Professionals
There are currently 14 234 professional engineers registered across all disciplines – 1 100 fewer than 10 years ago. Following a sharp
fall-off in registrations in 1998, Engineering Council of SA records show a dramatic increase in registration across all engineering
categories in 2007, with 2 438 registrations, including 1 188 blacks. This bodes well in terms of a pipeline of engineers coming
through the ranks.
Registration of professional engineers, on the other hand, has not recovered from its steep drop in 1998, with only 342 registering
with the council in 2007, and a preponderance of 221 whites. ‘Only a small proportion of these registered engineers work actively in
the mining sector,’ Burmeister points out. ‘This has serious implications for all aspects of mining operations.’
Training
It is in the area of training that mining has shown true grit. The mining industry has increased training spend significantly and is way
ahead of other industries in respect of the number of people being trained at almost every level. In 2006, the proportion of top
management receiving training was 34,0% compared to the industry average of 20,9%. Some 99% of unskilled workers in the mining
industry received training compared to the 36,7% industry average.
‘While accepting that training is coming off a low base, since skills development received scant attention from the mining industry
prior to 2005, the industry deserves due recognition for its recent efforts in this area,’ says Burmeister. ‘However, it has no grounds
for resting on its laurels.’ Certainly, mining certification gives cause for concern. Mine managers’ certificates issued climbed to a high
of 123 in 1997, dropping to zero in 2003, and increasing to 80 in 2006. ‘The renewed upward trend is encouraging,’ says Burmeister.
‘Mining certification needs to be accelerated to meet the growth of the sector.’
Redoubling of efforts
Globally, according to Burmeister, employers have been forced to recognise that the war for talent is over, and talent has won. ‘The
demand for skills and the transferability of skills across industries is only going to increase,’ she says. So what can the industry do to
accelerate skills development and employment equity to the required pace?
‘The shortage of skills – particularly of engineers – in the mining industry calls for a more innovative approach when recruiting at
all levels in the organisation. We must encourage the return of retired mining professionals to run key projects or act as coaches and
mentors to those coming up the ranks. This should be part of a formal company-wide coaching and mentoring programme.
‘Internal skills should be identified and assessed to determine competency levels and potential for ongoing training and
development. Such investment in skills development is more than a scorecard measure. It is an economic imperative.’
On the reward side, according to Burmeister, mining remuneration packages have tripled and quadrupled over the past two years.
‘Scarce skills incentives, shares, long-term incentives, performance bonuses, and retention incentives may well continue to be the
norm in the mining sector, and skills premiums for specific core business activities are likely to continue to rise.
‘However, a significant increase in investment in the development of graduates and young and mid-tier professionals will help to
balance supply and demand, and in the long run will be effective in driving down the remuneration spiral. Executive incentives
should be aligned to increasing skills across the business, not just to advancing the bottom line.’
While, traditionally, mining houses have tended to be conservative, with slow career acceleration, there has, according to
Burmeister, been a shift towards faster upward career progression, with younger CEOs and shorter tenure in key positions. Moreover,
mining has moved from a cost centre to a profit centre business model, which demands general management and business skills, as
well as technical skills, on the part of mine managers.
Such changes call for the introduction of formal career planning for employees and a clear career path. ‘This applies to long-term
employees in the industry as much as it does to new recruits,’ says Burmeister. ‘Executives must manage expectations of all
employees and understand that, in this era of shorter tenure, a clear career path is essential to retaining key talent. General
management and business experience at head office level is important as part of a broader retention strategy, as is international
exposure for up-and-coming young executives.
‘Global resourcing is critical to the success of the local mining industry, given the spread of exploration and mining activities
across the globe. Many mining houses have international operations, and can make use of management exchanges and offshore
projects to accelerate skills development. Smart strategies for resourcing include pooling scarce resources across continents and
sharing skills across mines.’
Questions
• Discuss the trends and factors that appear to influence the supply and demand of human resources in the mining industry.
• Identify the strategies that were initiated to address the human resource challenges faced by the mining industry.
• What role can the OLS and ESSA play in future in addressing the supply and demand of human resources in the mining industry?
Sound human resource planning is linked to the larger business planning or company strategic planning process. Human resource
planning is not an end in itself, just as human resource management is not an end in itself. The human resource management function,
including human resource planning, is meant to support and enable the company to attain its business goals, and as such it needs to be
linked to and driven by those business or strategic goals. In this regard, then, human resource planning is viewed as the process by which
management ensures that it has the right mix of human capital or people, who are capable of completing those tasks that help the
organisation reach its objectives. Rigorous human resource planning links people management to the organisation’s mission, vision,
goals and objectives, as well as its strategic plan and budgetary resources. A key goal of human resource planning is to get the right
number of people with the right skills (qualifications, competencies, knowledge and experience) in the right jobs at the right time at the
right cost (Bacal, 2009).
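The Python sketch below illustrates, with purely hypothetical figures, the kind of simple demand-versus-supply gap calculation that underlies this goal; actual HRP forecasting would draw on the investigative-phase data and estimation techniques discussed later in this chapter:

# Hypothetical figures for three skill categories (illustrative only)
current_staff = {"engineers": 40, "artisans": 120, "clerical": 60}
expected_turnover = {"engineers": 0.15, "artisans": 0.10, "clerical": 0.08}   # assumed annual turnover rates
forecast_demand = {"engineers": 55, "artisans": 130, "clerical": 58}          # headcount needed next year

for category in forecast_demand:
    # Projected internal supply after expected turnover
    projected_supply = current_staff[category] * (1 - expected_turnover[category])
    gap = forecast_demand[category] - projected_supply
    action = "recruit or develop" if gap > 0 else "redeploy or rely on natural attrition"
    print(f"{category}: demand {forecast_demand[category]}, "
          f"projected supply {projected_supply:.0f}, gap {gap:+.0f} ({action})")

A positive gap signals recruitment or development needs, while a negative gap points to redeployment or reliance on natural attrition.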
Organisational strategy
Every company needs a strategy. A strategy is a course of action included in the company’s plan outlining how it will match its internal
strengths and weaknesses with external opportunities and threats in order to maintain a competitive advantage (Dessler, 2009). It is often
those organisations best able to identify trends and developments within their environments that gain the competitive advantage,
allowing them to survive and prosper within their industries. In a turbulent global business world greatly affected by rapid change, opportunities and threats arise and disappear swiftly. Organisations need to develop strategies that will allow them to take advantage of opportunities and minimise the effect of the threats that occur, placing the organisation in a position to ‘manage’ its environment (Bews,
2005).
Organisational strategy deals with fundamental issues that affect the future of the organisation, seeking to answer fundamental
organisational questions such as: Where are we now? Where do we want to be? How do we get there? Ideally, strategy matches internal
resources with the external environment, involving the whole organisation and covering the range and depth of its activities.
Organisational strategy means that an organisation will seek to organise both its tangible resources in the form of the core skills of its
staff, physical assets, and finance, and also its intangible resources such as brand, image, reputation, and knowledge, in order to deliver
long-term added value to the organisation (Porter, Bingham & Simmonds, 2008).
Typical terms associated with an organisation’s strategy include the following (Johnson, Scholes & Whittington, 2005; Porter et al,
2008):
• Mission – the overriding purpose and direction of the organisation in line with the values or expectations of the stakeholders
• Vision – a desired future state; the aspirations of the organisation
• Objective – a statement of what is to be achieved and when (quantifiable if possible)
• Policy – a statement of what the organisation will and will not do
• Strategic capability – resources, activities and processes. Some will be unique and provide competitive advantage.
The strategic planning process
The strategic planning process consists of a number of key elements that relate primarily to an organisation’s ability to add value and
compete in the market place. As such it should be:
1. Sustainable (to ensure the long-term survival of the organisation)
2. Deliverable, by helping the company evolve towards its mission, vision and objectives
3. Competitive, by delivering competitive advantages for the organisation over its actual or potential competitors
4. Exploitative, taking advantage of links between the organisation and its external environment (suppliers, customers, competitors and
government) that cannot easily be duplicated and which contribute to superior performance
5. Visionary, by moving the company forward beyond the current environment. This may involve the development and implementation
of innovative strategies that could focus on growth or competition or a combination of both (Porter et al, 2008), and
6. Flexible, by having the capability to meet a variety of needs in a dynamic environment. This environment is both internal and external
to the organisation.
Organisations embrace flexibility as a strategic option to gain competitiveness. Baruch (2004) distinguishes between resource
flexibility (the extent to which a resource can be applied to a wide range of alternative uses), and co-ordination flexibility (the extent to
which the organisation can rethink and redeploy resources). In terms of the human factor, the resources reside within people, in their
competencies and skills, knowledge and abilities, and in their commitment and loyalty (Baruch, 2004).
There are three aspects to the strategic planning process: strategic assessment, strategy formulation, and strategy implementation
(Dessler, 2009; Porter et al, 2008):
• Strategic assessment: Strategic planning starts with a strategic assessment made by management of the current position of the
organisation – where are we now and what business do we want to be in, given the threats and opportunities we face and our
company’s strengths and weaknesses? This includes evaluating the impact of the external environment on strategy – the
political, economic, social, technological, legal and environmental issues; the identification of resources available to the
organisation – physical, human, financial and intangible assets such as knowledge, brand and image; expectations of
stakeholders; and structures of power and influence. Based on the strategic assessment, managers often formulate new
business vision and mission statements. Managers then choose strategies – courses of action such as buying competitors or
expanding overseas – to get the company from where it is today to where it wants to be tomorrow.
• Formulating strategies to achieve the strategic goals comprises an assessment of where the company wants to be. This is
achieved by the generation of different strategic options, an evaluation of those options, and the selection of the most
appropriate strategy (or courses of action) that will enable the company to achieve its strategic goals.
• Strategy implementation is an assessment of how the company plans to achieve its strategic goals. It entails translating the
strategic plan into actions and results. This involves in practical terms employing or firing people, building or closing plants,
and adding or eliminating products and product lines. For the implementation stage to be successful, the organisation requires
a structure designed to deliver the required performance, deployment of the necessary resources, and a constant awareness of
the changing circumstances of the external environment.
Managing strategy is an ongoing process. Competitors introduce new products, technological innovations make production processes
obsolete, and social trends reduce demand for some products or services while boosting demand for others. Strategic control or
management keeps the company’s strategy up to date. It is the process of assessing progress toward strategic goals and taking corrective
action as needed (Dessler, 2009).
Business strategy and strategic human resource management
Strategy execution is traditionally the heart of the human resource manager’s strategic job. Top management formulates the company’s
corporate and competitive strategies. The human resource manager then considers the human resource implications of the proposed
business strategies, and further formulates a human resource strategy that parallels and facilitates implementation of the strategic
business plan (Cascio & Aguinis, 2005). The integration of the human resource strategy takes two forms:
• Vertical integration, which refers to the links between HRM and both wider business strategies and the political, economic,
social and legal forces that shape (and to some extent are shaped by) organisations, and
• Horizontal integration, which refers to the ‘fit’ between different HR policies, practices and activities, and the degree to
which they support or contradict one another.
In HRM (including the function of human resource planning), perhaps more than in any other area of management, the choices that are
made can have significant implications for the future and lead an organisation down a path that is difficult to alter without considerable effort.
Therefore, helping companies to acquire and retain the right resources, at the right time and in the right place, cannot be left to chance.
Detailed, strategic human resource planning and forecasting is needed to match supply and demand, optimise staff mix, maximise
employee productivity and engagement, understand the cost base, and generate satisfactory returns for shareholders. HR planning cannot
be considered in a vacuum. The implications of a detailed HR budget and forecast need to be reflected in the financial and other
operational plans of the business. Similarly, other expense plans and capital expenditure and revenue forecasts need to be integrated with
the financial plans, allowing the organisation to develop complete visibility of planned performance across the enterprise.
However, because the employment relationship is incomplete, ambiguous and contested, HRM can never be a simple
technical exercise whereby answers are read off according to some scientific formula and implemented without a problem. HR
professionals and industrial psychologists have to become accustomed to the fact that although there are increasing pressures to
demonstrate added value to organisations, their work is going to be fraught with tensions and contradictions, and with situations that are
characterised by uncertainty, indeterminacy, and competing perspectives (Marchington & Wilkinson, 2008).
THE RATIONALE FOR HUMAN RESOURCE PLANNING
Human resource planning provides a systematic framework for organisations to consciously plan their human resource requirements in
terms of external and internal supplies. These plans must take the needs of both the organisation and the individual into account, while
remaining constantly in tune with the external environment through strategic planning. In general, the primary reasons for organisations
to undertake human resource planning are to (Beardwell & Holden, 2001):
• Ensure that a strategic plan is achieved
• Cope with future staff needs
• Cope with change
• Ensure an adequate supply and mix of highly qualified staff
• Provide human resource information to other organisational functions
• Ensure a fair representation of the population mix throughout the hierarchy of the organisation, and
• Determine human resource policies and planning practices that will attract and retain the appropriate people.
The recruitment, selection and employment of personnel are informed by the organisation’s approach to workforce or human resource
planning. An organisation not only needs to know that it has people with the right skills (including qualifications, knowledge,
competencies and experience) to achieve its existing goals and strategies but also that it has human capital resources for future growth
(Robinson, 2006). An initial workforce analysis is needed to establish whether a particular post needs to be filled internally, on a
permanent, open-ended contract or on a temporary basis, or externally through an agency. If it is decided that the post should be filled
internally, job descriptions, person specifications or competency profiles should guide the process (Marchington & Wilkinson, 2008).
Human resource planning is generally conducted once the behavioural requirements of jobs have been determined by means of job
analysis (the topic of chapter 4), in order to identify the numbers of employees and the skills required to do these jobs (Cascio, 2003).
What is human resource planning?
All organisations must provide either products or services to their customers under the constraint of limited resources.
Appropriately-skilled people are among those limited resources; they must be managed effectively, and it is the function of the human
resource department to assist line managers in achieving this. Planning is about deciding what needs to be achieved and then setting
about achieving it. Human resource planning is one of the tools that organisations use to attain their overall goals by managing their
resources.
Managing resources involves consistently providing the right product or service at the right place, at the right time and at the right
price. With this in mind, and while emphasising the strategic role of human resources, human resource planning can be seen as a means of
(Bews, 2005):
• Obtaining the correct number of people, with the right skills, at the appropriate place, at the right time, while remaining in
line with corporate strategy
• Keeping these people competent and motivated so that they add value to the organisation, assisting it to achieve its overall
strategic objectives
• Becoming involved with corporate planning at a strategic level
• Ensuring a pipeline of potential successors, and
• Retaining high-potential, talented staff.
With the need for organisations to become more flexible and forward-looking, in a world where employees and their needs are changing,
the emphasis in human resource planning is moving away from statistical forecasting and succession planning towards a more pro-active
approach to skills development, quality, and culture changes (O’Doherty in Beardwell & Holden, 2001; Bews, 2005). Human resource
planning therefore aims to add value to organisational attempts to be aligned with the rapidly changing environment.
Obtaining the correct number of people
Organisations develop short-, medium-, and long-term plans, including projections of resources needed to achieve these plans. The
company will estimate, in its strategic plan, that it requires a certain amount of money by a certain time to be able to embark on a certain
project. It will also estimate the type and quantity of materials needed for the project and the number of people required to run the
project. It is a human resource function to provide the necessary number of people for the project, and this, in many cases, will require
planning. Perhaps the organisation already has enough staff to deal with the project, but needs to move them from one worksite to
another, such as is common in the construction industry; or it may be necessary to recruit additional people owing to expansion.
Whatever the case, a degree of planning will be necessary, particularly when the required skills are in short supply. In this regard, a
company’s analysis of its skills shortages informs the particular Sector Education and Training Authority (SETA) within which it
resides.
A Sector Skills Plan (SSP) provides an analysis of the labour market on a national level and sector-specific level. This is compiled
once every five years, and is submitted to the Department of Higher Education and Training. The SSP is annually updated based on the
human resource planning information it obtains from the various companies that reside within the particular sector. An SSP provides the
profile of the labour force within the sector by province, race, age, gender, qualifications, and occupational category. The national
Organising Framework for Occupations (OFO) – the topic of chapter 4 – provides an integrated framework for storing, organising and
reporting occupation-related information not only for statistical but also for client-orientated applications, such as identifying and listing
scarce and critical skills, matching job seekers to job vacancies, providing career information, and registering learnerships. The
mentioned information is generated by underpinning each occupation with a comprehensive occupational profile which is generated by
Committees of Expert Practice (CEPs). Occupational profiles are also used to inform organisations’ job and post profiles, which
simplifies, inter alia, conducting skills audits and performance reviews. The structure of the OFO also guides the creation of Career Path
Frameworks and related learning paths.
The OFO is a skills-based coded classification system, which encompasses all occupations in the South African context. The
classification of occupations is based on a combination of skill level and skill specialisation which makes it easy to locate a specific
occupation within the framework (Department of Labour, 2008a; 2008b). As discussed earlier, apart from Sector Skills Plans (SSPs), the
newly established national data management system, Employment Services South Africa (ESSA) will in future play a key role in
informing companies about the supply and demand of scarce and critical skills in their particular industry.
The SSP:
• Monitors the supply of and demand for labour within the sector (with emphasis on critical and scarce skills)
• Identifies employment profiles (current skills in the sector per occupational category registered on the OFO)
• Tracks the absorption of new labour market entrants into the sector
• Tracks the flow and retention of scarce, critical and priority skills (based on sub-sectors, occupations within sub-sectors, and
levels of employment)
• Identifies the critical factors influencing skills development and retention
• Identifies areas of skills growth and skills needs (e.g. needs related to recruitment, labour shortage, skills gaps, professional
ethics, values, and objective skills, which are skills for maintaining the status quo, innovation or transformation in the sector)
• Identifies existing skills provisioning (training) and skills training needed in future, and
• Identifies opportunities and constraints on employment growth in the sector.
The SSP forms the key strategic analysis guiding the implementation of training and skills development within the sector. An SSP is
generally complemented and extended by a human capital or resource development strategy (HCD/HRD) that addresses the
implementation aspects of the SSP within a framework of five years and 10 to 20 years. In terms of the Skills Development Levies Act 9
of 1999, employers are obliged to register with the South African Revenue Service and pay 1% of the monthly payroll as a skills levy. On
submission to the SETA, and subsequent approval, of the Workplace Skills Plan and the Annual Training Report – the topic of chapter 10 –
the company becomes eligible for both the mandatory training grant and the discretionary training grant from the SETA.
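As a simple illustration of the levy calculation described above, the following minimal sketch (in Python) shows how the 1% monthly skills levy would be computed; the payroll figure is a hypothetical assumption used purely for illustration.

# Minimal sketch of the 1% skills levy calculation described above.
# The payroll figure is hypothetical and used purely for illustration.
monthly_payroll = 2_450_000.00          # total monthly payroll in rand (assumed)
skills_levy = 0.01 * monthly_payroll    # 1% of the monthly payroll
print(f"Skills levy for the month: R{skills_levy:,.2f}")   # R24,500.00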
Since human resource development (including skills development in the workplaces) is a national strategy aimed at economic
growth by addressing the challenges South Africa faces regarding its skills shortages, the purpose of the SSP is to ensure that the
particular SETA has relevant, up-to-date information and analysis. This allows the SETA to perform its strategic skills planning function
for the sector and to maximise participation by employers in the National Skills Development Strategy through the efficient use of
resources available for training and up-skilling within the sector. Along with the new national integrated labour market management data
system, ESSA, the national strategic skills development approach by Government is a key driver for the human resource planning efforts
of South African companies.
The right skills
Matching the availability of skilled and unskilled resources to market demands lies at the heart of human resource planning. When
planning, the ‘shape’ of a business’s people, that is, the proportion of individuals in each band of seniority or grade, is critical to the
long-term health of the business. The appropriate staff mix, that is, the blend of skills, experience and seniority, not only drives profitability
in a company but is also fundamental to the delivery of customer satisfaction (Simon, 2009).
Rapid changes in technology may mean that, even though an organisation has the right number of employees, these people lack the
required skills – their skills have become redundant because of the introduction of advanced technologies. Human resource departments
will need to plan carefully for this type of development, as it involves the replacement of redundant skills with appropriate skills,
possibly involving the retrenchment of some employees, which in itself will introduce certain complications.
Section 189(2) of the Labour Relations Act 66 of 1995 (LRA) requires the employer to consult with employees in an attempt ‘to …
reach consensus on:
• appropriate measures–
to avoid the dismissals;
to minimise the number of dismissals;
to change the timing of the dismissals; and
to mitigate the adverse effects of the dismissals;
• the method for selecting the employees to be dismissed; and
• the severance pay for dismissed employees’.
No longer can an employer proceed with a retrenchment exercise without referring to its employees. This means that careful attention
must be paid to human resource planning, and an effort must be made to integrate it with strategic planning to avoid the loss of jobs.
Just as companies compete for market share, so too do they compete for available skills in the market place. The more scarce the
skills, the more aggressive the efforts need to be to attract those skills. Attracting the right skills takes well-planned recruitment efforts
over a relatively long time span and may involve visits to schools, universities of technology, and higher education institutions to ensure
that when they are needed, the right skills are available. In a competitive environment, the development of a well-planned yet flexible
marketing strategy in line with overall company strategy and aimed at attracting the best skills available could mean the difference
between success and failure. In South Africa there is the added complication of having to attract skilled people from previously
disadvantaged population groups. Once again, ESSA and the Sector Skills Plan (SSP) enable companies to gain a broader perspective of
the availability and demand for scarce and critical skills (including priority skills) in a particular sector. The OFO (discussed in chapter 4)
also provides a useful framework for companies to identify and specify the particular skills that are needed.
Table 3.2 Understanding scarce and critical skills in the South African context (LGSETA, 2007/2008)
Scarce skills
Scarce skills refer to those occupations in which there is a scarcity or shortage of qualified and experienced people. This scarcity can be current or anticipated in the future, and usually occurs because people with these skills are simply not available, or they are available but they do not meet the company’s employment criteria. The scarcity can arise from one or a combination of the following, grouped as absolute or relative:
Absolute scarcity: Suitably skilled people are not available at all, for example:
• In a new or emerging occupation, there are few, if any, people in the country with the requisite skills (qualification, competence and experience), and education and skills development (training) providers have yet to develop learning programmes to meet the skills requirements.
• People have chosen not to pursue training or careers in the occupation, for a variety of reasons.
Relative scarcity: Suitably skilled people are available but do not meet other employment criteria, for example:
• Geographical location may be a factor – people are unwilling to work outside of urban areas.
• Equity considerations may mean that there are few if any candidates with the requisite skills (qualification, competence and experience) from the designated groups available to meet the skills requirements of the company or job.
• Long skills development and learning (training) lead time leads to short-term unavailability – there are people in education and training (formal and workplace learning) who are in the process of acquiring the necessary skills (qualification, competence and experience), but the lead time will mean that they are not available in the short term to meet replacement demand.
Critical skills
Critical skills refer to ‘top-up’ skills within an occupation:
• Generic ‘top-up’ skills, including (in National Qualifications Framework (NQF) terminology) critical cross-field outcomes. These would include cognitive skills (problem-solving, learning to learn), language and literacy skills, mathematical skills, computer literacy skills, team work, and self-management skills.
• Technical ‘top-up’ skills are skills which are required over and above the generally-accepted skills associated with an occupation. These skills may have emerged as a result of changing technology, new forms of work organisation, or even the operational context in which the occupation is being applied.
Priority skills refer to those skills required by a sector for resolution of immediate skills shortages.
Examples:
1. If a municipality cannot recruit any town planners because there are simply none available – no-one responds to advertisements or
the company has used a recruitment agency which has been unsuccessful – then town planning is an absolute scarce skill.
2. If the company does have people responding to the recruitment advert, but none of the potential applicants wants to relocate to the
small rural town in which the municipality is located, then town planning is a relatively scarce skill, by reasons of geographical
location.
3. If the company has determined, in the Employment Equity Plan, that they require a black woman in the position of town planner,
and only white people or men respond to the recruitment advertisements, then town planning is a relatively scarce skill, by reasons
of employment equity.
4. If the company cannot recruit anyone to the position, but they have two young women doing work experience in the town planning
department who will finish their degrees only in two years’ time, then town planning is a relatively scarce skill, by reasons of long
training (learning) lead time.
5. If the company can recruit town planners, but find that they have difficulty in working in teams and supervising others, then team
work and supervisory skills are generic ‘top-up’ skills attached to the occupation of town planner.
6. If the municipality can recruit town planners, but finds that they have difficulty in developing plans for labour-intensive
developments in a rural environment, then the ability to develop plans for labour-intensive developments in a rural environment is a
technical ‘top-up’ skill attached to the occupation of town planner.
The appropriate place
It is all very well to acquire the right skills, but these skills are needed at a particular place. Some companies face great difficulties in
obtaining the right skills at the appropriate place. Mossgas, for instance, needed skills on an oil rig some kilometres off the South
African south coast, and the company LTA needed skills in the highlands of Lesotho to complete the Highlands Water Project. To recruit
people with the required skills, prepared to spend up to two weeks at a time on an oil rig, or months on a construction site in a remote
area, takes some effort and might entail the development of an attractive remuneration (salary along with all other benefits) package.
This must be planned, together with the finance department, well in advance.
Certain urban areas tend to attract certain skilled people owing to the availability of opportunities, and this may drain other areas of
these skills. For instance, there was a time when artisans were attracted to the developing Richards Bay area, draining other developing
areas at the time, such as Secunda. Richards Bay had the added attraction of being on the coast. When the Mossgas project was started at
Mossel Bay, again certain skills were attracted by the opportunities created, not only through the project, but through the secondary
development that arose to support that project. Many of these workers, particularly skilled artisans, came from Richards Bay (Bews,
2005).
Of course, the reverse can also happen, as in Port Elizabeth, when the Ford manufacturing plant was taken over by SAMCOR and
moved to the Pretoria area. This left a glut of certain skills in Port Elizabeth for a period, making it relatively easy and inexpensive for
other motor manufacturers to acquire people with the necessary skills. Occurrences like this provide the ideal opportunity for other
motor manufacturers to launch projects that will give them the edge over the competition (Bews, 2005).
To be in a position to take advantage of such opportunities, strategic plans need to be in place and the efforts of all departments well
integrated. Human resource practitioners can play their part by helping to gather information on which to base strategic plans, and
communicating this information to the rest of the management team. Often an increase in the number of applicants from a competitor, or
an increase in trade union activity, may serve as a cue that leads a company to identify the strategic moves of its competitors.
Information gathered in this manner, when analysed together with information gathered by finance, marketing, or sales (such as changes
in delivery patterns of competitors or decreases in buying trends), can be used to predict the strategic moves of competitors. By
identifying these strategic moves at an early stage, companies may be able to counter with strategic moves of their own and thereby gain
the competitive edge (Bews, 2005).
The right time
Both labour and idle time are expensive; therefore timing is crucial when it comes to providing labour. If labour is provided too soon, it
has to be paid for; if it is not provided on time, expensive plant and machinery may stand idle and penalties may have to be paid for late
completion of a project. An electrical contractor who must lay electrical pipes and cables prior to the construction company throwing a
concrete floor will agree on a date by which the job must be completed. If the electrical company is unable to complete the job on time
(and this could be due to not being able to recruit sufficient electricians), then the construction team will be held up and consequently the
electrical company may face financial penalties (Bews, 2005).
If the production department orders new machinery to be delivered on a certain day and trained operators are needed by that day,
each day that passes without these operators being on the job will result in production losses and will be a financial burden to the
company. While machines stand idle, leases for machinery and rent for floor space must still be paid, and if products cannot be
manufactured, there is likely to be a ripple effect on other departments. To prevent these types of occurrences, human resources planning
must be co-ordinated with other organisational functions (Bews, 2005).
Many industries such as the hotel, leisure, and agricultural industries are seasonal, experiencing peaks and troughs in the demand for
products or services. Human resource practitioners working in these areas need to plan the supply of labour carefully to meet consumer
demands during the busy peak periods and to reduce excess labour during off-seasons. The management of this type of sensitive activity
needs close co-ordination between the human resources and the production departments for any effort to be successful. Other possible
disruptions to production schedules which can be dealt with by human resources are absentee levels and leave schedules (Bews, 2005).
Keeping the people competent
More than ever before, human resources, just like other resources, need investment. As buildings and equipment constantly need
upgrading, so people need to have their level of knowledge, skills, and experience constantly enhanced. One of the strategies that
businesses seeking to succeed well into the new century can employ to achieve this is to expand training and development budgets
(Sommers, 1995). Economic competitiveness is measured not only by the aggregate skills of a country’s or company’s workforce, but –
perhaps most importantly – by the flexibility and capacity of the workforce to adjust speedily to the rapid changes in technology,
production, trade, and work organisation. Consequently, the ability to respond to these changes with speed and effectiveness has now
become the area where many countries and companies seek competitive advantage. According to Ziderman (1997, cited in the National
Human Resource Development Strategy, 2009:10):
‘There has been a move from primary reliance on policies that emphasised capital investment in plant, machinery and infrastructure, or export-led
growth strategies, to a broader approach that assigns a central role to investments in human capital. Expenditures on improved education, training
(skills development) and health are now no longer regarded solely as benefits stemming from economic growth and rising incomes; increasingly,
they are also seen as investments in human capital that make sustained economic growth possible.’
The United Nations (HRD-SA, 2009) also makes an emphatic case for human resource development:
‘It is generally agreed that if overall human conditions are to improve, there must be increasing emphasis on human resources development.
Appropriately, such development provides for increases in productivity, enhances competitiveness and supports economic growth.’
Training and developing the workforce to meet future challenges and demands is clearly a human resource function that needs serious
attention and careful planning. Currently, training and developing the workforce in South Africa has the added dimension of correcting
past racial imbalances. New national systems, processes and frameworks such as the OLS and NOPF aim to address the current skills
shortages that South Africa faces by focusing on occupationally relevant and occupationally-specific skills learning (education, training
and development) solutions in South African workplaces.
Occupationally relevant, skills-based education and skills development efforts in this regard are vital for the future success of South
African companies and the country as a whole, and will achieve success only if explicitly written into corporate strategic plans that
enjoy the full and unequivocal support of the CEO and executive management team. To deal with a rapidly changing and highly
technical and competitive environment, clear yet flexible planning that is revised on a regular basis, and that constantly remains aligned
with corporate strategy and national goals, is essential.
Keeping the people motivated
Not only do organisations require competent people, but these people must be motivated through a system of rewards and challenges
that liberates the creativity within them so that they consistently strive to accomplish new heights of achievement. All too often the
control systems employed by organisations stifle rather than challenge employees. In planning motivational strategies, attention must be
paid to the changing needs of employees. What may be important to one employee at a particular stage in a career may not be
motivational to the next employee or at a later stage in the same employee’s career. Pension benefits to younger employees are less
motivational than they are to older employees; crèche facilities may elicit a sense of loyalty from young parents, particularly working
mothers, but are unlikely to have any effect on employees without young children (Bews, 2005).
Employee values are changing. More women and black people are entering the job market, and the dual-career family has become
common (with, in some cases, men’s careers taking second place to the careers of their wives). Acceptance of diversity among people is
growing, and this introduces special issues as well as new talents to the workplace. These and other changes on either side of the
employment contract require careful consideration and planning by human resource practitioners who need to recognise these changing
values, focus on them, and assist employers to retain employee loyalty (Bews, 2005; Redman & Wilkinson, 2009).
The Calvert Group based in Maryland, USA, structured its staff benefits and human resources programme around the needs of its
employees and reduced staff turnover from 30 per cent to 5 per cent. Some of the benefits introduced, such as flexitime and a
dress-as-you-like policy, were inexpensive, but the return was ‘... increased loyalty, high production rates and low turnover ... all of
which translates into increased profitability’ (Anfuso, 1995:71). It is this type of initiative that needs to be introduced into corporate
strategic plans, allowing employers to match the needs and values of their employees with company needs and goals (Bews, 2005).
Becoming involved with strategic-level corporate planning
Few businesses in today’s economy can afford to stand still. Product offerings have to change in sympathy with market demands and
new opportunities, but skill sets can quickly become out of kilter or irrelevant in the marketplace. Anticipating market trends and
aligning human resource skills (qualifications, competencies and experience) is a cornerstone of human resource planning. This could
involve creating new cost centres, shifting or loaning resources to another cost centre, and formulating completely new reward and
remuneration and training schemes, all of which need to be anticipated within human resource planning (Simon, 2009).
In a high-tech service-based economy at a time when employee values are changing (Cascio, 2003), the emphasis is shifting towards
the human side of organisations. There is a growing realisation that it is people who make the machinery and systems in organisations
work. The human resource function is consequently beginning to play more of a strategic role than it had previously. Where, in the past,
human resource planning was more concerned with obtaining the most suitable employees and formed ‘... the first step in the human
resources provision process’ (Nel, van Dyk, Haasbroek, Schultz & Sono, 2003:82) it is now having to deal with much broader strategic
issues which are ‘... future-orientated and pro-active’ (Rothwell & Kazanas, 1994).
Succession planning and talent retention
Human resource planning is often a finely-tuned balancing act. Planning staff resources in the long term can be especially challenging,
but it is also crucial to the sustainability of a business. An over-abundance of senior personnel can affect profitability in the short term
and create long-term staff retention problems as younger staff find their careers blocked, with little headroom for promotion. On the other
hand, not planning for future leadership could leave the business exposed as senior managers retire with nobody trained to take their
place (Simon, 2009). Succession planning refers to the plans a company makes to fill its most important executive positions. Moreover,
succession planning entails a process of ensuring a suitable supply of successors for current and future key jobs arising from business
strategy, so that the careers of individuals can be planned and managed to optimise the organisation’s needs and the individual’s
aspirations (Dessler, 2009).
In terms of process, succession planning involves identifying positions and roles where vacancies are anticipated, and identifying
how the company will fill those positions. When it is determined that succession planning will rely on internal promotions, some
companies will begin a process of identifying one (or more than one) potential candidate, and begin the development process with them,
so that when it is time for a person to step up, they have ample experience and the necessary skills to do so. For example, a person
targeted to fill an anticipated vacancy from within may be encouraged to take relevant university courses, attend seminars for skill
building, shadow the current incumbent to learn the ropes, receive coaching and mentoring from the incumbent, participate in job
rotations, and be party to other developmental activities. While it may seem that succession planning applies only to internal staff
(preparing an existing employee to move up), it can also be used with a new employee who may be hired before the incumbent leaves,
and is prepared for the full position while the incumbent is still in place. The purpose is to ensure continuity of operations.
However, for most employers today, succession planning cannot ensure that the company will have all the talent that it needs.
Whereas succession planning generally tends to focus just on the company’s top positions, talent management focuses on attracting,
developing and retaining key talent (Loftus, 2007). Talent management involves instituting, in a planned and thoughtful way, a
co-ordinated process for identifying, recruiting, hiring, and developing high-potential employees, often those with scarce and critical
skills. A survey by Laff (2006) reports that CEOs of the largest companies typically spend between 20 per cent and 40 per cent of their
time on talent management. The growing popularity of talent management and retention can be attributed to employers now competing
globally for scarce and critical, high-potential talent. Further, the availability of new talent management information systems makes it
possible for managers to integrate talent management systems components such as succession planning, recruitment, performance
evaluation, learning (skills development), and employee reward and remuneration, enabling seamless updating of data among them
(Dessler, 2009).
THE HUMAN RESOURCE PLANNING PROCESS
Human resource or workforce plans must flow from, and be consistent with, the overall business and human resource strategies (Cascio
& Aguinis, 2005). The heart of human resource planning involves predicting the skills and competencies the employer will need to
execute strategy (Dessler, 2009).
As shown in Figure 3.1, the human resource planning process can be divided into the following four main phases:
1. Investigation, where information is gathered by means of research, taking into account the various influences on the organisation.
2. Forecasts and estimations, where predictions about the future are made.
3. The planning process, in which action plans are developed and agreed upon to meet future needs.
4. Implementation, where plans agreed to for the management of the process are implemented.
Investigative phase
During the investigative phase of the process, the information that is collected from the operational environment focuses on four broad
fronts:
• External
• Internal
• Organisational culture, and
• Organisational objectives.
These areas are constantly being subjected to the dynamics of corporate strategy as this strategy reacts to the tensions of environmental
change. External influences are usually subdivided into political, economic, social, technical, legal, and ecological factors, referred to as
the PESTLE factors (Bews, 2005). Internal influences are occurrences within the organisation, such as the changing attitudes of
employees, the introduction of new technologies, or the introduction of an early retirement option. In order to accelerate the rate of
change, many South African organisations such as the SABC, SANDF, SAPS, Transnet and the Greater Johannesburg Transitional
Council, among numerous others, introduced early retirement options which, in many cases, resulted in the loss of employee loyalty and
trust (Bews, 2005).
Organisational culture is a consequence of organisational norms and values, as well as leadership and management styles. Any
change in these factors will have a direct effect on the organisation and the human resource planning process. If a charismatic leader
such as Raymond Ackerman or Tokyo Sexwale were to leave an organisation, there would certainly be a culture change which would, in
turn, affect the human resource planning process.
The human resource planning process must be driven by organisational objectives such as production and profit targets. If these
objectives change, the human resource planning process must also change to promote the achievement of the changed organisational
objectives. If new markets are entered and new technology is introduced to survive these markets, human resources may have to shift
focus onto recruitment, training and development to ensure that there are enough skilled people available to achieve these new
objectives. To do this, the human resource plan must be flexible and constantly attuned to providing a service to the organisation as a
whole (Bews, 2005).
Furthermore, human resource planning involves the collection and use of personnel data, so it can be used as input into the strategic
human resource function. Poor or incomplete data means poor conclusions and decisions. The organisation must devise an inventory of
available talent and skills (knowledge, skills, abilities, experiences and qualifications) of present employees (Cascio & Aguinis, 2005).
To assist during the investigative stage, sophisticated computer software has been developed specifically to cater for human resource
management needs. The human resource information system (HRIS) can provide organisations with accurate information such as
employees’ names, qualifications, work locations, salary data, performance evaluation, training, and development needs. This allows for
a quick analysis of the state of the organisation’s human resources, which is vital to the planning process (Bews, 2005).
Figure 3.1 The human resource planning (HRP) process
HRISs are rapidly developing to become more user-friendly and powerful. Some can be linked to the organisation’s management
information systems, providing a useful tool with which to manage human resources. Access to information retained on the HRIS can be
restricted, depending on the level of management and the category of the information needed by managers at that particular management
level. Software is also available that can be used to assist with the formulation of strategic plans and to link these plans with human
resource planning.
Forecasts and estimations
The next phase of the planning process is based on information gathered during the first phase. It is concerned with forecasts and
estimations of the supply and demand for human resources. Knowledge of the internal state of the organisation, in terms of the state of
its human resources (for example, staff turnover, number of employees, skills and talents available, and retirements), together with
information about short- and long-term business objectives, will allow organisations to predict future needs. This will allow them to
adjust recruitment, training and development, and succession plans to meet these needs. It is also important for human resource planners
to bear in mind the influence of external factors and the organisational climate when making forecasts and estimations. Currently,
external influences demand that attention be given to affirmative action processes and skills shortages when planning. To ignore these
influences may have a negative effect on future organisational successes (Bews, 2005).
The human resource planning process generally starts with a proper workforce analysis. Planning employment requirements entails
forecasting for future personnel or human capital needs or demands, and the supply of internal and outside candidates. In this regard
ESSA will be a valuable source of information on the supply and demand of scarce and critical skills in South Africa.
Workforce analysis
Key components of human resource planning are understanding the company’s workforce and planning for projected shortages and
surpluses in specific occupations and skill sets. As shown in Table 3.3, workforce analysis involves identifying the current and anticipated
future supply of labour and skills (a supply analysis), together with what the company needs and will need in the future in terms of labour
and skills (qualifications, competence, and experience), which is also referred to as a demand analysis. The next step is to identify the gaps between the current and
future supply and current and future demands (a gap analysis) to determine future shortages and excesses in the number of employees
needed, types of occupations, and skills (qualifications, competencies, and experience). Then an action plan is drafted for reducing the
identified gaps (Bacal, 2009).
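A minimal sketch of the supply–demand gap analysis described above is given below (in Python); the occupations and headcounts are hypothetical and the logic is illustrative only.

# Minimal sketch of a workforce gap analysis: compare projected demand
# with current supply per occupation. All figures are hypothetical.
current_supply = {"electrician": 42, "town_planner": 3, "artisan": 120}
projected_demand = {"electrician": 55, "town_planner": 5, "artisan": 110}

for occupation in sorted(set(current_supply) | set(projected_demand)):
    gap = projected_demand.get(occupation, 0) - current_supply.get(occupation, 0)
    if gap > 0:
        print(f"{occupation}: shortage of {gap} employees (plan recruitment/training)")
    elif gap < 0:
        print(f"{occupation}: surplus of {-gap} employees (plan redeployment)")
    else:
        print(f"{occupation}: supply matches demand")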
The workforce analysis is informed by an external and internal environmental scan, the company strategic business plan, the Sector
Skills Plan, and ESSA. As shown in Table 3.4, an external scan determines the most important environmental factors expected to affect
workforce capacity, given known operational and human resource priorities and emerging issues. The internal scan identifies factors
internal to the organisation that may affect human resource capacity to meet organisational goals.
Forecasting future demand
Through the application of various models designed to assist with forecasting and predictions, a human resource service is provided to
the management of the organisation on a day-to-day basis. Information gathered through these models can also prove useful during the
corporate strategic planning and the strategic human resource planning processes. At the operational level, the function of human
resource planning is also to ensure that the organisation’s human resources will be capable of meeting short- and long-term company
objectives (Bews, 2005).
The type of information necessary for needs forecasting is:
• Current number of employees
• Dates of commencement or finalisation of proposed projects
• Skills needed or redundant (scarce and critical skill shortages)
• Skills available and location of these skills
• An estimate of training needed and time needed to provide this training, and
• Turnover rates, by department and occupation.
Two measures are typically used to calculate rates of labour turnover. First is the wastage rate, which divides the number of staff
leaving in a given period by the number of staff employed overall. The formula is presented in Figure 3.2. Both the numerator and the
divisor can include different elements and be applied to different departments in the organisation. For example, ‘leavers’ may refer
solely to those who quit voluntarily, or it can include those made redundant, those at the end of fixed-term contracts, or those dismissed,
each of which inflates the numerator. The divisor can be calculated on the basis of the number employed at the beginning of the year, at
the end, or the average of the two figures. Because comparisons of raw data are inevitably complicated by this, it is essential to know the
basis on which the statistics were derived before making comparisons. Exit interviews may shed some light on the problem, but people
are often unwilling to provide an honest answer to explain their resignation. A more serious problem with these indices, however, is that
they do not differentiate between leavers in terms of their length of service, grade, or gender (Marchington & Wilkinson, 2008).
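The wastage rate described above can be illustrated with a minimal sketch (the figures are hypothetical, and the definitions of ‘leavers’ and of the divisor must follow whatever basis the organisation has chosen, as the text notes):

# Minimal sketch of the wastage (turnover) rate described above.
# Figures are hypothetical; how 'leavers' and the divisor are defined
# should follow the basis noted in the text (voluntary quits only,
# redundancies included, average headcount, and so on).
def wastage_rate(leavers: int, average_headcount: float) -> float:
    """Leavers in the period divided by staff employed, as a percentage."""
    return 100.0 * leavers / average_headcount

leavers = 18                        # assumed leavers during the year
headcount = (110 + 130) / 2         # average of start- and end-of-year headcount
print(f"Wastage rate: {wastage_rate(leavers, headcount):.1f}%")   # 15.0%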
The second measure is the stability index. Here, the number of staff with a certain minimum period of service (say, one year) at a
certain date is divided by overall numbers employed at that date (Figure 3.2 also shows this calculation). Stability indices provide a good
indicator of the proportion of long-term staff or, conversely, the extent to which the turnover problem is specific to new recruits. This
latter phenomenon is referred to as the ‘induction crisis’ as it occurs within the first few months of employment; perhaps people find out
the job is rather different from what they expected, or that their previous post may not have been that bad after all (Marchington &
Wilkinson, 2008). Taplin, Winterton and Winterton (2003) found a turnover rate of 26.5 per cent in their study of the clothing industry,
with 45 per cent leaving during the first three months of their employment and only one-third lasting beyond a year. Overall, they
concluded that unless companies in this sector could find ways of incorporating workers into new routines and remunerating them
appropriately, they would continue to be plagued by issues such as turnover as workers seek alternative employment.
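A similar sketch illustrates the stability index discussed above (again with hypothetical figures):

# Minimal sketch of the stability index described above (hypothetical figures).
def stability_index(staff_with_min_service: int, total_staff: int) -> float:
    """Staff with at least the minimum service period, as a percentage of all staff."""
    return 100.0 * staff_with_min_service / total_staff

# 96 of 120 employees have at least one year of service at the chosen date
print(f"Stability index: {stability_index(96, 120):.1f}%")   # 80.0%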
The actual forecasting of future human resource needs is a difficult process, with the accuracy of predictions varying between two per
cent and 20 per cent, depending on the industry or company (Cascio, 2003). Predictions can either be based on intuition and experience, as
with the expert or managerial estimate technique and the Delphi technique, or they can be mathematically based, as with trend
projection, computer simulation, and the Markov model (Byars & Rue, 2004).
The ‘expert’ or managerial estimates technique uses the knowledge of an expert or manager as the basis on which to predict human resource
needs. This technique, probably the most common in use, particularly in small and medium-sized organisations, can be used in
conjunction with the Delphi technique (Bews, 2005).
Figure 3.2 Indices of labour turnover (Marchington & Wilkinson, 2008:230)
[Reprinted with the permission of the publisher, the Chartered Institute of Personnel and Development, London (<www.cipd.co.uk>)]
Table 3.3 Elements of workforce analysis (Adapted from Bacal, 2009)
Supply analysis
• Internal supply
• Current workforce demographics
• Talent/workforce inventory
• Workforce trends – eligibility for retirement, separation rate, turnover rate
• External supply
• Sector Skills Plan
Demand analysis
• Critical occupations and skills (qualifications, competence, experience) required to meet projected needs
• Anticipated changes of programmes and services (volume, delivery channel, location, duration)
• Separation/turnover rates
• Vacancy rates
• Sector Skills Plan
Table 3.4 Aspects to consider when conducting an environmental scan (Adapted from Bacal, 2009)
Internal environmental scan
• Are there any key forces affecting the organisation’s operations (collective agreements, staffing issues, cultural issues, work/life balance, demographics, technology requirements, budget issues, expectations of clients, skill shortages, scarce and critical skills)?
• What knowledge, skills (qualifications, competencies and experience) and capabilities does the organisation have (talent/human resource inventory)?
• What is the company’s current internal environment? What elements support the company’s strategic direction? What elements deter the organisation from reaching its goals?
• How has the organisation changed its organisational structure? How is it likely to change in the future?
• How has the organisation changed with respect to the type and amount of work it does, and how is it likely to change in the future?
• How has the organisation changed regarding the use of technology, and how will it change in the future?
• How has the company changed with respect to the way people are recruited?
• What are the public’s (or customers’) perceptions of the quality of the organisation’s products, programmes, and/or services? What is being done well? What can be done better?
• Are current programmes, processes or services contributing to the achievement of specific organisational goals?
External environmental scan
• What is the current external environment? What elements of the current environment are relevant to the company? Which are likely to inhibit the company from reaching its goals?
• What are the company’s specific issues and the implications of these issues? What key forces in this environment need to be addressed, and which ones are less critical?
• What is the impact of local (sectoral and national) trends on the company (demographic, economic, political, intergovernmental, cultural, technological, and educational)?
• Are there comparable operations that provide a similar service? How might that change? How would that affect the company?
• Where does the work of the company come from? How might that change, and how would it affect the organisation?
• How might the external environment differ in the future? What forces at work might change the external environment? What implications will this have for the organisation?
• What kinds of trends or forces affect similar work in other jurisdictions?
• What kinds of trends or forces affect the company’s partners or stakeholders and customers?
The Delphi technique uses the opinions of a group of experts to make predictions. During the Delphi process, a group of experts is
independently and anonymously questioned and their responses are summarised, indicating the average and most extreme
responses. These summarised responses are then fed back to the group for a further round of evaluation, a process that continues until
the responses become narrow enough to be used as a forecast (Bews, 2005).
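The narrowing of estimates over successive Delphi rounds can be illustrated with a minimal sketch; the expert estimates below are hypothetical, and in practice each round’s summary would be fed back to the experts for revision:

# Minimal sketch of Delphi-style narrowing of expert headcount estimates.
# The estimates are hypothetical illustrations of three rounds.
from statistics import median

rounds = [
    [40, 55, 70, 90, 120],   # round 1: independent, anonymous estimates
    [55, 60, 70, 80, 95],    # round 2: revised after seeing the round-1 summary
    [62, 65, 70, 72, 78],    # round 3: narrow enough to use as a forecast
]
for i, estimates in enumerate(rounds, start=1):
    spread = max(estimates) - min(estimates)
    print(f"Round {i}: median = {median(estimates)}, spread = {spread}")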
Trend projection relates a single factor, such as room occupancy, to employment (Bews, 2005; Dessler, 2009). Trend analysis
involves studying the company’s employment levels over the past five years or so to predict future needs. One might therefore calculate
the number of employees in one’s company at the end of each of the past five years, or perhaps the number in each sub-group (such as
sales people, production people, secretarial and administrative) at the end of that period. The aim is to identify employment trends one
thinks might continue into the future (Dessler, 2009). The leisure industry often applies this technique, as it relates to the type of
seasonal peaks and troughs that occur in the industry. For instance, based on past statistical records, room occupancy at a hotel indicates
the number of employees that will be needed to provide an efficient service. In this way a fairly accurate statistics-based estimation can
be made of the staff needs of the hotel at any particular time (Bews, 2005).
Cascio and Aguinis (2005) provide the following example of trend analysis for forecasting workforce demand. The first step is to find the appropriate predictor, that is, the business factor to which workforce needs will be related. For a company specialising in the manufacture of small electrical appliances, it may be the number of appliances it turns out per annum. The next step is to plot the historical record of that factor in relation to the number of people employed. The past relationship of the business factor to staffing levels must be determined accurately and the future levels of the business factor estimated. The organisation needs to know, for example, that it takes 237 employees to turn out 372 529 small electrical appliances per year, or approximately 1 572 appliances per individual. This output per individual ratio is known as labour productivity.
The productivity ratio for the previous five or, preferably, 10 years is now calculated in order to determine the average rate of productivity change. By determining the rate at which productivity is changing, workforce requirements can be projected into the future. In this regard, the ratio projected for the target year must reflect the productivity anticipated at that time. The projected level of the business factor is then divided by the projected productivity ratio to arrive at the effective number of employees required. Adjustments to the projections for the influence of special factors (such as the amount of contract labour) will yield a net figure for the workforce demand at that time (Cascio & Aguinis, 2005).
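The arithmetic of this example can be sketched as follows. Apart from the 237 employees and 372 529 appliances mentioned above, the figures are hypothetical, and the simple averaging of the productivity trend is only one of several ways the projection could be made.

# Illustrative trend analysis for workforce demand (historical figures other than the final year are invented).
history = [
    (310_000, 205),   # (appliances produced, employees) in earlier years
    (335_000, 216),
    (352_000, 226),
    (372_529, 237),   # the figure cited in the text: roughly 1 572 appliances per individual
]

ratios = [units / staff for units, staff in history]          # labour productivity per year
avg_change = (ratios[-1] - ratios[0]) / (len(ratios) - 1)     # average yearly change in productivity
projected_ratio = ratios[-1] + avg_change                     # productivity anticipated in the target year

projected_units = 400_000                                     # estimated future level of the business factor
gross_demand = projected_units / projected_ratio              # effective number of employees required
net_demand = gross_demand - 10                                # adjust for special factors, e.g. contract labour
print(round(gross_demand), round(net_demand))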
Computer modelling (Bews, 2005) allows for the manipulation of variables to answer a series of ‘what if?’ questions. A computer forecasting model requires the identification of key variables such as:
• Current distribution of human resources in terms of position
• Anticipated growth rate
• Annual losses due to any cause, and
• Number of persons recruited from external sources.
A computer fed with this information can simulate the future. Numerous projections are possible using a computer model, making the computer a useful human resource planning tool. There are software products on the market that have been specifically developed for this process, but it is usually only larger organisations that engage in computer modelling (Cascio, 2003).
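A minimal ‘what if?’ sketch along these lines is shown below. It is not based on any particular software product, and the staffing figures, growth rate, loss rate and recruitment numbers are all hypothetical.

# Minimal 'what if?' headcount simulation using the key variables listed above (all figures hypothetical).
current_staff = {'sales': 120, 'production': 480, 'administrative': 60}   # current distribution by position
growth_rate = 0.05                                                        # anticipated annual growth in demand
annual_loss_rate = 0.12                                                   # annual losses due to any cause
external_recruits = {'sales': 12, 'production': 45, 'administrative': 6}  # recruited externally per year

def simulate(years):
    supply = dict(current_staff)
    for _ in range(years):
        for group in supply:
            supply[group] = round(supply[group] * (1 - annual_loss_rate) + external_recruits[group])
    demand = {g: round(n * (1 + growth_rate) ** years) for g, n in current_staff.items()}
    return supply, demand    # compare projected internal supply with projected demand

supply, demand = simulate(3)   # 'What if current loss and recruitment rates continue for three years?'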
The Markov model can be used to forecast future human resource requirements by reflecting system progression from one state to another over time. A Markov analysis focuses on internal factors, showing the pattern of movement of employees through the organisation. By simple arithmetical calculations of the movement of employees from one job level to the next, including staff losses, it is possible to project future human resource needs on the basis of stability patterns. The Markov model is stochastic – that is, probabilistic: data is gathered over a period and averaged in order to determine probabilities to be used for forecasting. A critical assumption therefore is that the transition probabilities remain stable (Bews, 2005).
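The arithmetic of a Markov projection can be sketched as follows. The two job levels and the transition probabilities are hypothetical; in practice the probabilities would be averaged from several periods of actual movement data, as described above.

# Minimal Markov-model sketch (hypothetical transition probabilities; each row sums to 1).
transition = {
    'junior': {'junior': 0.70, 'senior': 0.15, 'exit': 0.15},
    'senior': {'junior': 0.00, 'senior': 0.85, 'exit': 0.15},
}
headcount = {'junior': 200, 'senior': 80}

def project(counts, periods):
    for _ in range(periods):
        nxt = {'junior': 0.0, 'senior': 0.0}
        for level, n in counts.items():
            for target, p in transition[level].items():
                if target != 'exit':              # staff losses leave the system
                    nxt[target] += n * p
        counts = nxt
    return {level: round(n) for level, n in counts.items()}   # projected internal supply

print(project(headcount, 2))   # expected numbers at each level after two periods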
Forecasting the supply of inside candidates
Forecasting the supply of a company’s inside candidates requires analysing demographics, turnover and other data. The manager asks
questions such as ‘How many current employees are due to retire?’ and ‘What is our annual turnover?’ A qualifications inventory can
facilitate forecasting internal candidates. Qualifications inventories contain summary data such as each current employee’s performance
record, educational background, age, and promotability, compiled either manually or in a computerised system. Personnel replacement
charts show present performance and promotability for each potential replacement for important positions. Alternatively, position
replacement cards are developed for each position, showing possible replacements as well as present performance, promotion potential,
and training required by each possible candidate (Dessler, 2009).
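One commonly used way of answering the turnover question is to express the number of leavers over a year as a percentage of the average number employed. The sketch below uses hypothetical figures and shows only one common formulation; other definitions of turnover are also used in practice.

# Annual labour turnover as a percentage of average headcount (hypothetical figures).
def annual_turnover_rate(leavers, average_headcount):
    return 100 * leavers / average_headcount

rate = annual_turnover_rate(leavers=45, average_headcount=400)   # 11,25 per cent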
Forecasting the availability of inside candidates is particularly important in succession planning and the retention of key talent,
scarce and critical skills. Table 3.5 provides an overview of the aspects to be considered when conducting succession planning and
analysing staff retention trends.
Forecasting the supply of outside candidates
Forecasting the supply of external candidates (those not currently employed by the organisation) is especially important in the current
context of global and national skills shortages. This may require forecasting general economic conditions, local market and sectoral
conditions, and occupational market conditions. Generally the first step is to forecast general economic conditions and the expected
prevailing rate of unemployment. Usually, the lower the rate of unemployment, the lower the labour supply and the harder it is to recruit
personnel (Marchington & Wilkinson, 2008).
Local labour market conditions are also important. Further, forecasting the availability of potential job candidates in specific
occupations or with suitable qualifications for which the organisation will be recruiting is essential. Tables 3.6 to 3.9 provide examples
of supply and demand analyses conducted at a sectoral level. The major factors influencing labour supply at a national level, and by
implication locally, include:
• Levels of unemployment in general, and in particular occupations
• Number of graduates in general and in specific fields
• Employment equity legislation
• National Skills Development Strategy and Human Resource Development Strategy.
Table 3.5 Elements to consider in forecasting the supply of inside candidates (Adapted from Bacal, 2009)
Succession planning
Supply and demand analysis
• Talent inventory of those currently occupying senior positions
• Organisation’s future supply of talent (compare with future requirements, in number, occupation and skill type)
• Internal structural changes and external business or political changes
Actions
• Recruitment strategies to meet a shortage of those with senior management potential
• Allowing faster promotion to fill immediate gaps
• Developing cross-functional transfers for high flyers
• Hiring on fixed-term contracts to meet short-term skills/experience deficits
• Reducing staff numbers to remove blockages or forthcoming surpluses
Staff retention analysis
Trend analysis
• Monitor extent of resignations
• Discover the reasons for resignations (exit interviews/climate survey)
• Establish what it is costing the organisation
• Compare loss rates with other similar organisations
Table 3.6 Example of engineering-related pipeline supply of skills analysis (enrolments and graduates) (merSETA, 2008:55)
Table 3.7 Example of priority skills demand analysis in metal chamber (merSETA, 2008:70–71)
The merSETA Sector Skills Plan, 2005–2010 (merSETA, 2008) indicated, for example, that of the 35 299 occupations in demand in the sector, the ‘most in demand’ is the category Fitter (General), comprising 31% of the total (11 000). This skill category includes all ranges of fitters, which are located within OFO code 323201 (Fitter General, at Skill Level 3, as defined in the OFO categorisation). ‘Sheet Metal Trades Worker’ (7 000), with OFO Code 322201, is ranked second in the hierarchy and makes up 20% of the total skills needs of the sector. The third most important skill needed in this sector is OFO Code 321201, which incorporates Automotive vehicle mechanics of various types, comprising 12% of total demand and approximately 4 328 personnel required in the sector.
Table 3.8 Example of priority non-specific skills demand analysis in Auto sector (merSETA, 2008:74)
Table 3.9 Example of priority skills demand analysis in motor chamber (numbers to be trained) (merSETA, 2008:75)
Reconciling demand and supply
Comparing the demand for labour needed to meet organisational goals and strategies with the available sources of supply is likely to result in different scenarios and different human resource policies and plans. Equality between demand and supply is likely only in very stable
conditions. Where demand for labour exceeds supply, the organisation is likely to be involved in recruitment and selection activities in
order to acquire the necessary skills and competencies. Where internal supply exceeds demand, human resource plans will need to be
focused on eliminating the surplus through redeployment, freezing recruitment, and redundancy (Robinson, 2006).
Planning phase
During the investigative and forecasting phases, certain discrepancies between the existing state of human resources within the
organisation and the organisation’s future requirements will be highlighted. This is referred to as the estimate of human resource
imbalance. It is during the planning phase that actions to resolve problems caused by this imbalance are decided on (Bews, 2005). These
plans, which could be of a short- or long-term nature (or a combination of the two), and of an operational or strategic nature, may
include:
• A recruitment plan with either external or internal focus (see chapter 6)
• A training and development plan (see chapter 10)
• An education programme
• A coaching and mentorship programme
• An accelerated, ‘fast track’ development programme
• An early retirement programme
• An affirmative action initiative
• Career development (see chapter 11)
• Performance management (see chapter 9)
• Organisational development, and
• New technology.
During this phase, strategies are developed that will move towards closing the gap between the current position, as identified during the investigative phase, and the ‘ideal’ situation, identified through estimation. A simple example is a hotel with six waiters on its staff, which uses the trend projection method and estimates the need for three additional waiters in the holiday season. Based on these estimations, the hotel then implements a training plan to upgrade two porters to the position of waiter and a recruitment strategy to replace the porters and recruit an additional waiter for the holiday season. The development of plans for large multi-skilled organisations is naturally a great deal more complicated than in the example of the hotel, and needs expert input from both operations and human resources to be successful (Bews, 2005).
Implementation phase
The implementation of plans is often considered the most difficult of all phases of any change process. Plans are usually formulated at one level and implemented at the next. This often causes complications and a lack of commitment from those charged with implementation, who may have had little input during the formulation process and who do not fully understand or even necessarily agree with the plan. Human resource planning activities span the functional boundaries of the organisation, placing the responsibility for human resource planning with line management, and the facilitation and service responsibility with the human resource function (Bews, 2005; Cascio, 2003).
Line managers are responsible for developing their own operational plans, taking into account human resource factors, the number of employees needed and at what levels, training and development needs, and the like. The human resource department must co-ordinate these needs and is responsible for planning a corporate strategy to satisfy them. A certain line function may need two people trained in project management skills; the organisation as a whole may need 30 people trained in the same skills. It is a line responsibility to plan the time allocated to train the two employees and to inform human resources of this need; it is a human resource function to plan the co-ordination, delivery, and evaluation of the training for the 30 employees. Often training is planned with no reference to line, and line responds by refusing to allow time off for training. Joint planning should reduce this type of occurrence. A link must be established between the organisational strategic plan and human resources operational and strategic planning. This link, discussed earlier, is vital to the survival of human resources and flows in a two-way direction (Bews, 2005).
EVALUATION OF HUMAN RESOURCE PLANNING
Control and evaluation procedures must be established in order to provide feedback on the adequacy of the human resource planning effort. Monitoring, evaluating and reporting (internally and publicly) performance results advances a company’s capacity to measure performance, set targets and, most importantly, to integrate results information into decision-making processes and determine future priorities (Cascio & Aguinis, 2005).
The planning process must be evaluated regularly to ensure delivery of an effective service. Regular evaluation provides an opportunity to highlight weaknesses in the plan and to make the necessary adjustments. Evaluation should be both qualitative and quantitative in order to achieve objectivity (Cascio, 2003): qualitative, in obtaining direct feedback from line management and employees about the service being provided; and quantitative in respect of reduction in turnover rates, absenteeism, and grievances lodged (Bews, 2005).
THE EMPLOYMENT PROCESS
This book discusses the various phases of the employment process and the techniques and methods used by industrial psychologists to
inform and support decisions that need to be made during each phase of the process. As shown in Figure 3.3, the employment process
consists of various phases which are conducted in a particular sequence. The various phases are concerned with job analysis and
evaluation, human resource planning, recruitment and selection, reward and remuneration, performance evaluation, training and
development, career development, employment relations, and organisational exit.
Cascio and Aguinis (2005) view the various phases of the employment process as a network or system of sequential, interdependent
decisions, with feedback loops interconnecting all phases in the employment process. Each decision during a particular phase of the
employment process is an attempt to discover what should be done with one or more individuals. The costs, consequences, and
anticipated payoffs of alternative decision strategies can then be assessed in terms of their ramifications for the organisation as a whole.
In more practical terms, suppose, for example, both supervisors and job incumbents determine that the task and personal
requirements of a particular job have changed considerably from those originally determined in job analysis. This would imply that the
original job analysis must be updated to reflect the newer requirements, but this may also affect the salary paid on that job. In addition,
human resource planning strategies may have to be modified in order to ensure a continuous flow of qualified persons for the changed
job, different recruiting strategies may be called for in order to attract new candidates for the job, new kinds of information may be
needed in order to select or promote qualified individuals, and the content of training programmes for the job may have to be altered
(Cascio & Aguinis, 2005).
Job analysis and criterion development
Job analysis (the topic of chapter 4) is one of the most basic human resource management functions and forms the foundation of nearly
all personnel activities. It entails the systematic study of the tasks, duties, and responsibilities of a job and the knowledge, skills, and
abilities needed to perform it. The analysis of jobs draws heavily on the research methods and measurement issues discussed in chapters
2 and 5. Measurement methods and techniques of observing and recording data are critical issues in analysing jobs. Job analysis is also
critically important for developing the criteria required for assessing personnel. Criterion development for selection purposes is discussed in more detail in chapter 4. Before a worker can be hired or trained and before an employee’s performance can be evaluated, it
is critical to understand exactly what the employee’s job entails. Such analyses should also be conducted on a periodic basis to ensure
that the information on jobs is up to date. In other words, it needs to reflect the work actually being performed (Riggio, 2009).
Human resource planning
Human resource planning (also the topic of this chapter) is generally conducted once the behavioural and skills requirements of jobs (by
means of job analysis) have been identified in order to identify the number of employees and the skills required to do those jobs (Cascio,
2003). HRP further involves determining the available competencies to allow the organisation to plan for the changes to new jobs
required by corporate goals and to allocate scarce or abundant human capital resources where they can do the most good for the
individual and the organisation (Cascio, 2003). HRP also includes strategic decisions related to the size of the workforce and how it is
deployed across different organisational jobs and occupations, for example, by reducing the size of the workforce by 10 per cent within
three months; or reducing the number of employees by 1 000 in planning staff jobs and re-deploying them into sales jobs within the
year.
Figure 3.3 Overview of the employment process
Recruitment and selection
Recruiting is the process of identifying and attracting a pool of candidates who will be selected to receive employment offers. An
organisation needs to know not only that it has people with the right skills and competencies to achieve its existing goals and strategies
but also that it has human capital resources for future growth and development (Robinson, 2006).
The recruitment and selection of personnel (the topic of chapter 6) is informed by the organisation’s approach to HR planning. Based
on HRP information, appropriate recruiting methods and an implementation strategy are selected. For example, management may focus
more on learnerships and internships than on university recruiting for a particular year, or may find that television is a better advertising medium than newspapers.
Because recruitment and selection is a critical feature of HRM in all organisations irrespective of their size, structure or sector,
industrial psychologists have sought ways to improve the quality, including the reliability, validity, fairness, and utility of selection
decisions (the topics of chapters 5 and 6). Appropriate recruitment, selection and placement of employees promote their overall job
satisfaction, morale and productivity. Industrial psychologists therefore employ a scientific approach in developing and applying
employee selection and placement methods that ensure that the right applicants are hired and employees’ talents match their jobs. They
create, validate and choose tests and interviews that are administered to job applicants to determine whether they should be hired and, if
so, where they should be placed (Kuther, 2005).
This is important because new recruits provide managers with an opportunity to acquire new skills as well as amend organisational
cultures. However, too often vacancies are filled in an ad hoc and reactive manner without a systematic analysis of whether specific jobs
are needed. Moreover, other key areas of staffing – for example, HR planning, labour turnover, retention and recruitment – are often
downplayed because attention is focused on how selection decisions can be improved by using ‘new’ or ‘sophisticated’ techniques. Yet,
without proper understanding of job analysis and HR planning, or of the best channels to use for attracting candidates, selection
decisions might be worthless (Marchington & Wilkinson, 2008).
Reward and remuneration
Reward and remuneration (the topic of chapter 8) are central to the employment relationship. While there are many people who enjoy
working and who would continue working even if they won a large sum in a lottery, most people work mainly because it is their only
means of earning the money they need to sustain themselves and their families in the style to which they are accustomed. How much
they are paid and in what form therefore matters hugely to people. It helps determine which jobs they choose to do, with which
employers they seek work and, to a significant extent, how much effort they put into their work. For these reasons, effective reward and
remuneration systems and methods are also very important for employers. Getting it wrong makes it much more difficult to recruit and
retain high-calibre human capital and much easier to demotivate those already in the organisation (Torrington et al, 2009).
Performance evaluation
The evaluation of employees’ job performance is a vital personnel function and of critical importance to the organisation. In chapter 9,
we will consider the very important variable of job performance in the context of assessments and evaluations. The measurement of job
performance serves as the criterion measure to determine if employee screening and selection procedures are working. In other words,
by assessing new employees’ performance at some point after they are hired, organisations can determine if the predictors of job
performance do indeed predict success on the job (Riggio, 2009). Industrial psychologists determine criteria or standards by which
employees will be evaluated. In other words, they determine how to define employee competence and success. Chapter 4 discusses the
issue of criterion development in more detail.
Measurement of job performance is also important in determining the effectiveness of employee training and career development
programmes and to make decisions that affect employees’ reward and remuneration. Performance appraisals generally function as the
foundation for pay increases, promotions and inter-organisational career mobility (the topic of chapter 9), provide feedback to help
improve performance and recognise weaknesses, and offer information about the attainment of work goals (Riggio, 2009).
Training and development
Employee training and development (the topic of chapter 10) is a planned effort by an organisation to facilitate employees’ learning,
retention, and transfer of job-related behaviour (Riggio, 2009). Industrial psychologists are also involved in training employees and
guiding them in their professional development. By conducting needs analyses, or large surveys and interview-based assessments,
industrial psychologists determine what skills and technical needs employees have or desire and develop training programmes to impart
those skills. Industrial psychologists also devise methods for evaluating the effectiveness of training programmes and improve them
based on those evaluations (Kuther, 2005).
In South Africa, the training and development of employees is regarded as a national and strategic imperative to address the skills
shortages faced by the country. For this purpose, workplace learning (training and development) will in future essentially be guided by
the OLS, OFO and NOPF within workplaces. Job analysis and performance evaluations assist in determining employees’ training and
development needs. Industrial psychologists can increase significantly the effectiveness of the employees and managers of an
organisation by employing a wide range of training and development techniques (Cascio & Aguinis, 2005).
Career development
Career development is the topic of chapter 11. Performance evaluation, training and development and career development are key
elements in high commitment human resource management, both in defining performance expectations and in providing employees with
the targets at which to aim and the incentives for future employment opportunities. As we will see in more detail in chapter 9, this can
start with induction into the organisation, followed by regular meetings between managers and their staff, and formal evaluation and
feedback to staff via performance appraisals. Industrial psychologists conduct career guidance workshops to help employees plan and
manage their careers. They assist organisations in establishing career development support systems and policies. They also train
managers in conducting career discussions with their employees. Industrial psychologists also provide individual career counselling and
guidance services to employees. In addition they use questionnaires and inventories to help employees identify their career interests,
motivations and needs.
Although one part of performance review is concerned with ensuring that current levels of performance are acceptable, the exercise
is also critical in helping employees to plan their future within the organisation and in determining any future learning and training
needs. Furthermore, industrial psychologists can significantly contribute to the establishment of effective career development systems
that support the retention and productive engagement of employees. Organisational support for employees’ career development – for
example, through personal development opportunities and performance evaluation – is considered to be important in helping employees
acquire the career competencies they need to sustain their employability in an uncertain and turbulent world of work.
Employment relations
Employment relations and psychology interrelate, and industrial psychologists bring personnel psychological knowledge and research expertise to the establishment of sound employment relations in the workplace. Business and personnel decisions are subject to external
regulation in terms of legislation. Employment law and the legislation underpinning the employment relationship impinge on the ways
in which organisations select, recruit, deploy, reward and dismiss their employees. Fundamental to the ways in which managers and
those they manage interact is the employment contract, which places legal responsibilities on the organisation and also provides the
person entering into the employment relationship with varying degrees of legal protection. Industrial psychologists and human resource
practitioners need a knowledge and understanding of developments in employment law in order to both devise appropriate policies and
procedures and provide correct advice to line managers and employees. They themselves also need to know when and where to seek
more specialist advice. Chapter 12 explores the legislative nature of the employment relationship in more depth.
Organisational exit
As shown in Figure 3.3, organisational exit influences, and is influenced by, prior phases in the employment process. For example,
large-scale layoffs may affect the content, design, and remuneration of remaining jobs; the recruitment, selection, training and career
mobility of new employees with strategically relevant skills; and changes in performance evaluation processes to reflect work
reorganisation and new skills requirements (Cascio & Aguinis, 2005).
CHAPTER SUMMARY
This chapter has reviewed the employment context within which the employment process occurs. The South African legislative context,
labour market trends, and skills shortages present unique challenges which must be considered in the employment process. The
resourcing processes of human resource planning, recruitment, selection and retention are critical in ensuring that an organisation has the
human capital or skills, experience and talent to meet the demands of its environment and adapt to change. HRP is a systematic process
of analysing the organisation’s human resource needs and developing specific mutually supportive policies and plans to meet those
needs. HRP further represents a tool for translating strategic intent into practical action.
Finally, this book discusses the various phases of the employment process illustrated in Figure 3.3, which are generally regarded as
the topical areas of personnel psychology. The various phases of the employment process are concerned with decisions that need to be
made by industrial psychologists and managers with regard to one or more individuals. The costs, consequences, and anticipated payoffs
of alternative decision strategies can then be assessed in terms of their ramifications for the organisation as a whole.
Review questions
You may wish to attempt the following as practice examination-style questions.
1. Discuss the function of human resource planning in helping companies to sustain their competitive edge.
2. Give an outline of the major factors and trends that influence the human resource planning process.
3. Consider an organisation with which you are familiar and analyse whether it has an organisation strategy. To what extent does the
organisation strategy that you have identified concern itself with human resource or personnel issues?
4. How would you explain the need for HR planning to be integrated into the overall organisational strategy?
5. Differentiate between scarce and critical skills. How do skills shortages influence the human resource planning process?
6. Describe the South African government’s solution for addressing the skills crisis in South Africa’s workplaces.
7. How will the newly established OLS and ESSA influence human resource planning in the South African organisational context?
8. Discuss the various phases of the human resource planning process.
9. What do you see as the benefits for the organisation in investing in the process of predicting future labour and skill requirements?
10. What are some of the difficulties associated with HRP in practice?
11. Give an overview of the topical areas of personnel psychology that form the foundation of the employment process.
Multiple-choice questions
1. The changing nature of work has resulted in:
a. More jobs becoming available in the job market
b. Guarantees of lifelong employment
c. The arrival of the information age
d. A reduction in job security and a greater emphasis on family and leisure roles
2. Human resource planning is a means of:
a. Obtaining the correct number of people with the right skills
b. Obtaining the correct number of people at the appropriate pace, at the right time, while remaining in line with corporate strategy
c. Keeping people competent and motivated and becoming involved with corporate planning at a strategic level
d. All of the above
3. Strategic planning means the development and maintenance of a competitive advantage, which could mean:
a. Remaining focused on the original vision at all costs
b. Ensuring that all organisational functions are focused on a common direction
c. Moving outside of recognised ‘comfort zones’
d. Remaining focused on survival
4. A company employs 1 433 wage earners and 150 salaried employees, and loses a total of 287 employees a year, 12 of whom are
salaried employees. By using a formula, calculate the overall labour turnover at this company:
a. 20,02
b. 19,19
c. 18,13
d. 5,51
5. In order to make an accurate prediction, trend projection:
a. Cannot be relied on
b. Is used exclusively in the leisure industry
c. Must be used together with other techniques
d. Relates a single factor to employment
6. The Markov model:
a. Focuses on internal factors
b. Is a better alternative to computer modelling
c. Uses complicated mathematical calculations
d. Uses the knowledge of experts
7. The primary reasons for organisations to undertake human resource planning are:
a. To ensure that a strategic plan is achieved and to cope with change and future staff needs
b. To provide human resource information to other organisational functions and to ensure that a supply of highly qualified staff is
available
c. To ensure a fair representation of the population mix throughout the hierarchy of the organisation and to determine human resource
policies and planning practices that will attract and retain the appropriate people
d. All of the above
Reflection 3.2
Read through the excerpt below and answer the questions that follow.
The challenge to integrate a critical focus at all levels
Source: <www.skillsportal.co.za> (2009)
The number of black employee placements in corporate South Africa has more than doubled over the past two years, according to
research by leading South African headhunting company Jack Hammer Executive Headhunters, and figures show the greatest area of
transformation at senior management levels are among black females. Company statistics indicate that black African placements
(male and female combined) increased from 16% in 2006 to 28% in 2008, while black female placements increased 10% to 33%
during the same period.
Madge Gibson, Partner at Jack Hammer Executive Headhunters, says most of her company’s placements are the result of mandated
search assignments where there is often a specific preference – and in some cases a requirement – to appoint an ‘employment equity’
(EE) candidate. ‘But despite this, the total employment equity placement figures, when one takes into account all black, Asian and
coloured placements, has only increased from 32% of placements in 2006 to 38% in 2008.
‘On one hand this indicates a focus on black African appointments, but also reveals a continuing shortage of suitably skilled EE
candidates at senior management levels.
‘In our experience, employers continue to prioritise transformation. It seems it’s only when they have no choice – owing to a lack
of specialist technical knowledge or particular expertise – that a prospective employer considers other candidates.’
And yet despite this, it should be noted that the placement of white candidates at a senior management level remains at a much
higher percentage than EE placements.
‘There has been some reduction in the number of white placements from 68% in 2006 to 62% in 2008, but considering the amount
of effort, energy, resources and money placed on transformation it is concerning to see over 60% of all senior management placements
remaining with white men and women.’
In addition, Gibson says that experienced EE candidates are not necessarily receiving financial packages significantly more
generous than those offered to their white counterparts.
‘Some myths abound about packages up to 50% higher for EE candidates. But our direct experience proves employers are
unwilling to offer packages that deviate drastically from their company’s predetermined salary bands. And EE candidates are not
exempt from these rules,’ says Gibson.
Questions
• How do diversity and equity in employment influence the human resource planning process?
• Why is it important for employers and industrial psychologists to consider South African labour market trends and the information and services provided by ESSA in human resource planning?
CHAPTER 4
JOB ANALYSIS AND CRITERION DEVELOPMENT
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
JOB ANALYSIS
• Defining a job
• Products of job analysis information
• Employment equity legislation and job descriptions
• The job analysis process
• Types of job analysis
• Collecting job data
• General data collection methods
• Specific job analysis techniques
• Computer-based job analysis
• The South African Organising Framework for Occupations (OFO) and job analysis
COMPETENCY MODELLING
• Defining competencies
• Drawbacks and benefits of competency modelling
• Phases in developing a competency model
• Steps in developing a competency model
CRITERION DEVELOPMENT
• Steps in criterion development
• Predictors and criteria
• Composite criterion versus multiple criteria
• Considerations in criterion development
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
The topics of this chapter, job analysis and criterion development, form the foundation of nearly all personnel activities. Job descriptions, job
or person specifications, job evaluations and performance criteria are all products of job analysis. They are invaluable tools in the recruitment,
selection, performance evaluation, training and development, reward and remuneration and retention of employees. They also provide the
answers to many questions posed by legislation, such as whether a certain requirement stated on a vacancy advertisement is really an inherent
job requirement. They are further helpful in deciding if a psychological test should be used and which one should be used in the prediction of
job performance (Schreuder, Botha & Coetzee, 2006).
In general, industrial psychologists aim to demonstrate the utility of their procedures such as job analysis in enhancing managers’, workers’
and their own understanding of the determinants of job success. This chapter firstly explores the function and uses of job analysis in
understanding performance expectations (properties of the job in the context of the organisation’s expectations) as well as the required
knowledge, skills, experience, abilities and personal characteristics necessary to meet those expectations. Secondly, since the behaviour that
constitutes or defines successful performance of a given task is regarded as a criterion that needs to be measured in a valid and reliable
manner, the chapter also addresses the matter of criterion development, for which job analysis provides the raw material.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Describe the purpose and products of job analysis.
2. Differentiate between task performance and contextual performance.
3. Discuss the job analysis process, the sources of information, and the types of information to be collected.
4. Differentiate between the uses of the various job analysis methods and techniques.
5. Outline the information to be contained in a job description.
6. Discuss employment equity considerations in job analysis and job descriptions.
7. Evaluate the usefulness of the Organising Framework for Occupations (OFO) in the context of job analysis.
8. Identify factors that influence job analysis reliability and validity.
9. Differentiate between job analysis and competency modelling and evaluate the advantages and disadvantages of each approach.
10. Discuss the concept of criterion distortion.
11. Evaluate the various aspects to be considered in criterion development.
CHAPTER INTRODUCTION
As discussed in chapter 1, personnel psychology deals with the behaviour of workers or worker performance. Time to complete a
training course; total number of days absent; promotion rate within an organisation; and turnover rate are all examples of variables that
industrial psychologists have used as measures of performance. However, measuring employee performance is often regarded as quite a challenge since, in nearly all jobs, performance is multi-dimensional; that is, the prediction (the topic of chapter 5) and measurement of behaviour require a consideration of many different aspects of performance. Workers call on many attributes to
perform their jobs, and each of these human attributes is associated with unique aspects of performance. Furthermore, measures of
individuals’ performance may often include factors beyond the control of individuals. For example, time to complete a training course
may be constrained by how much time the employee can be away from the workplace, or the promotion rate within an organisation
may be affected by the turnover rate in that organisation (Landy & Conte, 2004).
Furthermore, industrial psychologists have devoted a great deal of their research and practice to understanding and improving the
performance of workers. Research has shown that individuals differ regarding their level of performance. Campbell, Gasser and
Oswald (1996) found, for example, that the ratio of the productivity of the highest performer to the lowest performer in jobs of low
difficulty ranges from 2 : 1 to 4 : 1; while in jobs of high difficulty, this ratio can be as much as 10 : 1. This represents a striking
degree of variation, which has important implications for companies that strive to sustain their competitive edge in the contemporary
business environment. Imagine a sales representative who closes on average five contracts a month versus one who brings in 50. From
this it is clear why industrial psychologists and employers are interested in worker performance and why measuring employee
performance can be quite a challenge (Landy & Conte, 2004).
Performance in the context of personnel psychology is behaviour that can actually be observed. Employee performance is therefore
discussed in the context of one or more tasks that define a job. These tasks can be found in, for example, job descriptions and work
orders. Of course, in many jobs behaviour relates to thinking, planning or problem-solving and cannot actually be observed; instead, it
can be described only with the help of the individual worker. In the work setting, performance includes only those actions or
behaviours that are relevant to the organisation’s goals and can be measured in terms of each individual’s proficiency on specific tasks. Employers and industrial psychologists concern themselves with the performance that the organisation employs an employee to achieve, and to achieve well. Performance in this regard is not the consequence or result of action; it is the action itself, over which the worker has a measure of control (Landy & Conte, 2004).
Since human performance is variable (and potentially unreliable) owing to various situational factors that influence individuals’
performance, industrial psychologists are often confronted by the challenge of developing performance criteria that are relevant,
reliable, practical, adequate, and appropriate for measuring worker behaviour. Industrial psychologists generally refer to the ‘criterion
problem’ when pointing out the difficulties involved in the process of conceptualising and measuring performance constructs that are
multi-dimensional, dynamic, and appropriate for different purposes (Cascio & Aguinis, 2005). Since job analysis provides the raw
material for criterion development, this chapter therefore also explores the development and use of criteria in measuring the task
performance of employees.
The task performance of workers (the topic of chapter 9) is generally contrasted with their contextual performance. Whereas task
performance is defined as the proficiency with which job incumbents perform activities that are formally recognised as part of their job
(Borman & Motowidlo, 1993:73), contextual performance, in contrast, is more informal and refers to activities not typically part of job
descriptions but which support the organisational, social and psychological environment in which the job tasks are performed. These
include extra-role behaviours (also called organisational citizenship behaviours) such as endorsing, supporting and defending
organisational objectives or volunteering to carry out task activities that are not formally part of one’s own job description (Landy &
Conte, 2004). Job analysis and criterion development concern themselves with the task performance of workers, while the retention of
employees (the topic of chapter 7) is usually concerned with both the task performance and contextual performance of workers.
JOB ANALYSIS
Job analysis refers to various methodologies for analysing the requirements of a job and is regarded as one of the most basic personnel
functions. As jobs become more and more complex, the need for effective and comprehensive job analyses becomes increasingly
important (Riggio, 2009).
One of the first industrial psychologists to introduce standardised job analysis was Morris Viteles, who used job analysis as early as 1922 to select employees for a trolley car company (Viteles, 1922). In the more than 80 years since then, the purpose of job analysis has not changed; it remains one of understanding the behavioural requirements of work. The job analyst wants to understand what the important
tasks of the job are, how they are carried out, and what human attributes are necessary to carry them out successfully (Landy & Conte,
2004). In short, job analysis is the systematic study of the tasks, duties and responsibilities of a job and the knowledge, skills and
abilities needed to perform it. Since job analysis reflects an up-to-date description of the work actually being performed by a worker, it
provides managers and industrial psychologists with a greater understanding of what a particular job entails (Riggio, 2009).
In addition, job analysis is an essential instrument in human resource applications that relate to the employment and retention of
personnel. It forms the basis for personnel decisions in focusing on answers to questions such as: Selection for what? Reward and
remuneration for what? Appraisal for what? (Voskuijl & Evers, 2008). Before a worker can be hired or trained and before a worker’s
performance can be evaluated, it is critical to understand exactly what the worker’s job entails. Such analyses should also be conducted
on a periodic basis by using precise and comprehensive methods to ensure that the information on jobs is up to date. Moreover, in
today’s complex and ever-changing, ever-evolving jobs, job analysis should not be a limiting process, that is, analyses of jobs should
allow for flexibility and creativity in many jobs, rather than being used to tell people how to do their work (Riggio, 2009).
Defining a job
In the South African occupational learning context (the topic of chapter 10), a job is seen as a set of roles and tasks designed to be
performed by one individual for an employer (including self-employment) in return for payment or profit (Department of Labour, 2009).
However, in the context of job analysis, a job is generally defined as a group of positions that are similar enough in their job elements,
tasks and duties to be covered by the same job analysis, for example, human resource manager (Schreuder et al, 2006:32). In practical
terms, this means that a job is not a single entity, but rather a group of positions. A position is a set of tasks performed by a single
employee. For example, the job of a secretary may exist at an organisation, and ten employees may be employed in the position of
secretary. Any secretary appointed to this position has to perform various tasks. Tasks are regarded as the basic units of work that are
directed towards specific job objectives (Muchinsky, Kriek & Schreuder, 2005). In more practical terms, tasks make up a position, while
a group of positions make up a job. Jobs can also be grouped into larger units, which are called job families. For example, the jobs of a
secretary, receptionist, clerk, and bookkeeper can all be grouped into the job family of clerical positions (Schreuder et al, 2006).
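The hierarchy described above – tasks make up a position, positions make up a job, and jobs group into a job family – can be represented as a simple data structure. The entries below are illustrative only.

# Illustrative data structure for the job family > job > position > task hierarchy.
job_family = {
    'name': 'Clerical positions',
    'jobs': {
        'Secretary': {                                        # a job: a group of similar positions
            'positions': ['Secretary 1', 'Secretary 2'],      # each position is held by a single employee
            'tasks': ['Type correspondence', 'Manage diaries', 'Answer queries'],
        },
        'Receptionist': {
            'positions': ['Receptionist 1'],
            'tasks': ['Greet visitors', 'Route incoming calls'],
        },
    },
}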
Most jobs are quite complex and require workers to possess certain types of knowledge and skills to perform a variety of different
tasks. Workers may need to operate complex machinery to perform their job, or they may need to possess a great deal of information
about a particular product or service. Jobs may also require workers to interact effectively with different types of people, or a single job
may require a worker to possess all these important skills and knowledge. Because job analysis typically involves the objective
measurement of work behaviour performed by actual workers, the job analyst must be a professional who is well trained in the basic
research methods discussed in chapter 2 to perform a good or accurate job analysis (Riggio, 2009). However, Aamodt (2010) advises
that employees should be involved in the job analysis. In organisations in which many people perform the same job (e.g. educators at a
university, assemblers in a factory), not every person need participate. For organisations with relatively few people in each job, it is
advisable to have all people participate in the job analysis.
Deciding how many people should be involved in the job analysis depends on whether the job analysis will be committee-based or
field-based. In a committee-based job analysis, a group of subject matter experts (SMEs) – for example, employees and supervisors –
meet to generate the tasks performed, the conditions under which they are performed, and the knowledge, skills, abilities, and other
characteristics (KSAOs – see Table 4.1) needed to perform them. Rouleau and Krain (1975) recommend that a committee-based
approach should have one session of four to six incumbents for jobs having fewer than 30 incumbents and two to three sessions for jobs
with higher numbers of incumbents.
In a field-based job analysis, the job analyst individually interviews or observes a number of incumbents out in the field. Taken
together, the results of research studies suggest that committee-based job analyses yield similar results to field-based job analyses.
Employee participants should be selected in as random a way as practical, yet still be representative. The reason for this, according to
research, is that employee differences in gender, race, job performance level, experience, job enjoyment, and personality can at times
result in slightly different job analysis outcomes (Aamodt, 2010).
Furthermore, it is important for companies and industrial psychologists to adopt a future-orientated approach to job analysis, where
the traditional job analysis approach is supplemented by group discussion to identify how existing KSAOs (see Table 4.1) are likely to
change in the future. A key problem with job analysis is often that it assumes that the job being reviewed will remain unchanged. The
nature of the environment within which many organisations are required to operate means that such assumptions can be inaccurate
(Robinson, 2006).
Table 4.1 Description of KSAOs
Knowledge: A collection of discrete but related facts and information about a particular domain, acquired through formal education or
training, or accumulated through specific experiences.
Skill: A behavioural capability – the observed behaviour or practised act (for example, playing the violin) performed at a specified level of proficiency, reflecting a cultivated, innate ability or aptitude to perform the act.
Ability: The end product of learning is ability. Abilities are behavioural attributes that have attained some stability or invariance
through a typically lengthy learning process. In other words, ability refers to the stable capacity to engage in a specific behaviour such
as playing the violin. Abilities are indicators of aptitude and can be used to predict socially significant future behaviour, such as job
performance. Abilities help to predict the proficiency level on a particular skill that a person can attain if given the opportunity to learn
the skill. Such a predicted skill level is referred to as aptitude.
Other characteristics: Personality variables, interests, training and experience.
Reflection 4.1
Study the job attributes described below, and sort the KSAOs they describe under the appropriate headings in the table.
Advertised job attributes
Position: Facilities Manager
Requirements: B Degree in Management. Able to work with MS Office (Excel®, Access®, Word®, PowerPoint® and e-mail). Good
English communication skills. Knowledge of project management.
Relevant experience: 2 to 3 years in facility management. Must have knowledge and experience in implementation of all the relevant
legislation, such as the Occupational Health and Safety Act, and be able to plan, lead, organise and control a team. Experience in
people management will be an added advantage.
Person profile: The ideal candidate should be conversant with the relevant legislation and Public Service Regulations and GIAMA. A
proactive, innovative, cost-conscious, resourceful, diversity- and customer-focused individual with integrity and good interpersonal
skills, planning and organising abilities, time management, and problem-solving skills is necessary. Must be quality-orientated, and a
strong administrator. Must be a team player, performance driven, and able to work in a very pressurised environment.
Key performance areas (KPAs):
• To ensure maintenance of the workplace facilities (offices, boardrooms, and other facilities).
• To ensure maintenance of telecommunications infrastructure and telecommunications services in the Department.
• To ensure that the workplace facility is designed in line with the Occupational Health and Safety as well as GIAMA
standards.
• To ensure supervision of Facilities Management Services.
• To compile all statistics to contribute to the monthly supervision of logistical support staff service and annual reports.
• To compile the Logistical Support monthly, quarterly and annual reports.
Attribute    K, S, A or O    Justification
Products of job analysis information
As shown in Figure 4.1, a job analysis leads directly to the development of several other important personnel ‘products’: a job
description (what the job entails), a job or person specification (what kind of people to hire for the job), a job evaluation, and
performance criteria.
Job description
One of the written products of a job analysis is a job description – a brief, two- to five-page summary of the tasks and job requirements
found in the job analysis. Job descriptions refer to the result of defining the job in terms of its task and person requirements (KSAOs),
and include characteristics of the job such as the procedures, methods and standards of performance (Voskuijl & Evers, 2008). In other
words, the job analysis is the process of determining the work activities and requirements, and the job description is the written result of
the job analysis. Job analyses and job descriptions serve as the basis for many human resource activities related to the employment
process discussed in chapter 3, including human resource planning, employee selection, performance evaluation, reward and
remuneration, training and work design (Aamodt, 2010). Job descriptions are especially useful in recruitment and selection in clarifying
the nature and scope of responsibilities attached to specific jobs. They are also useful in training and development and performance
appraisal or evaluation (Robinson, 2006).
Figure 4.1 Links between job analysis products and the employment and retention processes
Job descriptions should be updated if a job changes significantly. With high-tech jobs, this is probably fairly often. With jobs such as
package handling, the job might not change substantially. Job crafting – the informal changes that employees make in their jobs – is one
of the reasons that job descriptions change across time (Aamodt, 2010; Wrzesniewski & Dutton, 2001). It is common for employees to
expand the scope of their jobs quietly to add tasks they want to perform and to remove tasks that they do not want to perform. In a study
of sales representatives, 75 per cent engaged in job crafting in just one year (Lyons, 2008)!
Job descriptions vary in form and content but generally specify the job title, location, reporting relationships, job purpose, key
responsibilities, and any limits on authority or responsibility (Robinson, 2006). A job description typically includes the following
(Aamodt, 2010; Marchington & Wilkinson, 2008):
• Job title: A clear statement is all that is required, such as ‘transport manager’ or ‘customer service assistant’. An accurate title
describes the nature of the job and aids in employee selection and recruitment. If the job title indicates the true nature of the
job, potential applicants for a position will be better able to determine whether their skills and experience match those
required for the job. A job title can also affect perceptions of the status and worth of a job. For example, job descriptions containing gender-neutral titles such as ‘administrative assistant’ are evaluated as being worth more money than ones containing titles with a female-sex linkage, such as ‘executive secretary’ (Aamodt, 2010).
• Location: Department, establishment, the name of the organisation.
• Responsible to: The job title of the supervisor to whom the member of staff reports.
• Responsible for: Job titles of the members of staff who report directly to the job-holder, if any.
• Main purpose of the job: A short and unambiguous summary statement indicating precisely the overall objective and purpose of the job, such as ‘assist and advise customers in a specific area’ or ‘drill metals in accordance with manufacturing policy’. This summary can be used in help-wanted advertisements, internal job postings, and company brochures.
• Responsibilities/duties/work activities: A list of the main and subsidiary elements in the job, specifying in more or less detail what is required, such as maintaining records held on a computer system, or answering queries. Tasks and activities should be organised into meaningful categories to make the job description easy to read and understand. The work activities performed by a bookkeeper can, for example, be divided into seven main areas: accounting, clerical, teller, share draft, collections, payroll, and financial operations.
• Work performance: The job description should outline standards of performance and other issues such as geographical mobility. This section contains a relatively brief description of how an employee’s performance is evaluated and what work standards are expected of the employee.
• Job competencies: This section contains what are commonly called job or person specifications or competencies. These are the knowledge, skills, abilities and other characteristics (KSAOs), such as interest, personality, and training, that are necessary to be successful on the job. The competencies section should be divided into two subsections: the first contains KSAOs that an employee must have at the time of hiring; the second contains KSAOs that are an important part of the job but can be obtained after being hired. The first set of KSAOs is used for employee selection and the second for training purposes.
• Working conditions: A list of the major contractual agreements relating to the job, such as pay scales and fringe benefits, hours of work, holiday entitlement, and union membership, if appropriate. The employee’s actual salary or salary range should not be listed on the job description.
• Job context: This section should describe the environment in which the employee works and should mention stress level, work schedule, physical demands, level of responsibility, temperature, number of co-workers, degree of danger, and any other relevant information. This information is especially important in providing applicants with disabilities with information they can use to determine their ability to perform a job under a particular set of circumstances.
• Tools and equipment used: A section should be included that lists all the tools and equipment used to perform the work activities, responsibilities or duties listed. Even though tools and equipment may have been mentioned in the activities section, placing them in a separate section makes their identification simpler. Information in this section is used primarily for employee selection and training. That is, an applicant can be asked if he or she can operate an adding machine, a computer, and a credit history machine.
• Other matters: Information such as career path or career mobility opportunities.
• Any other duties that may be assigned by the organisation.
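For organisations that maintain job descriptions electronically, the sections listed above can be captured as a simple structured record. The sketch below is purely illustrative and assumes Python is available; the field names are this sketch’s own labels for the headings above and are not a format prescribed by Aamodt (2010) or Marchington and Wilkinson (2008).

# Illustrative only: a minimal structured record mirroring the job description
# sections discussed above. Field names are the sketch's own, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class JobDescription:
    job_title: str
    location: str
    responsible_to: str                                              # supervisor's job title
    responsible_for: List[str] = field(default_factory=list)         # direct reports, if any
    main_purpose: str = ""
    responsibilities: List[str] = field(default_factory=list)
    performance_standards: List[str] = field(default_factory=list)
    competencies_at_hire: List[str] = field(default_factory=list)    # KSAOs required at hiring
    competencies_trainable: List[str] = field(default_factory=list)  # KSAOs learned after hiring
    working_conditions: List[str] = field(default_factory=list)      # no actual salary figures listed
    job_context: List[str] = field(default_factory=list)
    tools_and_equipment: List[str] = field(default_factory=list)
    other_matters: List[str] = field(default_factory=list)

# Example record (values invented for illustration)
jd = JobDescription(job_title="customer service assistant",
                    location="Cape Town branch",
                    responsible_to="customer service manager")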
It is important to note that job descriptions can quickly become out of date and may lack the flexibility to reflect changes in the job role
which may constrain opportunities to reorganise responsibilities and reporting lines (Robinson, 2006). A study by Vincent, Rainey,
Faulkner, Mascio and Zinda (2007) compared the stability of job descriptions at intervals of 1, 6, 10, 12 and 20 years. After one year, 92
per cent of the tasks listed in the old and updated job descriptions were the same, dropping to 54 per cent after ten years.
Moreover, despite being widely used, job descriptions have been heavily criticised for being outmoded and increasingly irrelevant to
modern conditions, reflecting an inflexible and rules-orientated culture. It is argued that workers should not be concerned with the
precise definition of ‘standard’ behaviour but rather with how ‘value’ can be added through personal initiative and organisational
citizenship behaviours. Consequently, highly-specific job descriptions have been replaced by more generic and concise job profiles or
accountability statements that are short – sometimes less than one page – and focused on the outcomes of the job rather than its process
components. Another alternative is to use role definitions and ‘key result area’ statements (KRAs) that relate to the critical performance
measures for the job (Marchington & Wilkinson, 2008). An example of a key result area (KRA) or key performance area (KPA) is
preparation of marketing plans that support the achievement of corporate targets for profit and sales revenue. A focus on accountability
and/or KRAs (or KPAs) also sends a clear message about performance expectations in the organisation (Robinson, 2006).
Reflection 4.2
Review the previous Reflection activity about KSAOs. Read through the KPAs listed in the advertisement. See if you can identify the
critical performance measures of the job.
Notwithstanding the criticisms, job descriptions provide recruits with essential information about the organisation and their potential
role, and without them people would apply for jobs without any form of realistic job preview. Furthermore, having to outline critical
result areas or accountability profiles can help managers decide whether or not it is necessary to fill a post, and if so in what form and at
what level (Marchington & Wilkinson, 2008).
Job or person specifications
A job analysis also leads to a job specification, which provides information about the human characteristics required to perform the job,
such as physical and personal traits, work experience, and education. Usually job specifications give the minimum acceptable
qualifications that an employee needs to perform a given job. Job specifications are determined by deciding what types of KSAOs are
needed to perform the tasks identified in the job analysis. These KSAOs can be determined through a combination of logic, research,
and use of specific job analysis techniques discussed later in the chapter.
The traditional job or person specification is giving way to competency frameworks, the most significant advantage of these being
that the focus is – or should be – on the behaviours of job applicants. This can decrease the degree of subjectivity inherent in the
recruitment and selection process, and reduce the tendency to make inferences about the personal qualities that might underpin
behaviour (Marchington & Wilkinson, 2008). However, job descriptions and job/person specifications often exist alongside
competency-based approaches (Taylor, 2005), because they set a framework within which subsequent human resource practices – such
as performance evaluation, training and development, and pay and grading – can be placed. In addition, the competencies can be related
to specific performance outcomes rather than being concerned with potentially vague processes, such as disposition or interests outside
of work. Moreover, these approaches eschew the use of criteria just because they are easy to measure – for example, educational
qualifications or length of service – but may not relate closely to job effectiveness, which is the evaluation of the results of performance
(Marchington & Wilkinson, 2008).
Job evaluation
A third personnel ‘product’, job evaluation, is the formal assessment and systematic comparison of the relative value or worth of a job to
an organisation with the aim of determining appropriate compensation (the topic of chapter 8), or reward and remuneration. The wages
paid for a particular job should be related to the KSAOs it requires. However, a number of other variables, such as the supply of
potential workers, the perceived value of the job to the company, and the job’s history, can also influence its rate of compensation
(Riggio, 2009).
Reflection 4.3
Use the following headings to determine your position, tasks, job and job family and other specifications:
My position is:
Tasks associated with this position are:
The job is called:
The job family is:
The job/person specifications are:
The required KSAOs are:
The KRAs (KPAs) are:
Once a job analysis has been completed and a thorough job description written, it is important to determine how much employees in
a position should be paid (Aamodt, 2010). The basic job evaluation procedure is to compare the content of jobs in relation to one
another. This may be considered in terms of their effort, responsibility, and skills. A company that knows (based on their salary survey
and compensation policies) how to price key benchmark jobs, and can use job evaluation to determine the relative worth of all other jobs
in the company relative to these key jobs, is well on the way to being able to price all the jobs in their organisation equitably (Dessler,
2009).
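To make such comparisons concrete, one widely used family of methods is the point-factor approach: each job is scored on compensable factors such as effort, responsibility and skill, and non-benchmark jobs are priced relative to benchmark jobs whose pay is known from a salary survey. The sketch below is hypothetical; the factors, weights, ratings and pay figures are invented for illustration and are not taken from Dessler (2009) or any actual survey.

# A hypothetical point-factor sketch: jobs are scored on compensable factors
# (effort, responsibility, skill) and non-benchmark jobs are priced by comparing
# their total points with benchmark jobs whose pay is known from a salary survey.
# All numbers and ratings below are invented for illustration.

FACTOR_WEIGHTS = {"effort": 1.0, "responsibility": 1.5, "skill": 2.0}

def total_points(ratings: dict) -> float:
    """Weighted sum of factor ratings (each rated, say, 1-5 by an evaluation committee)."""
    return sum(FACTOR_WEIGHTS[f] * r for f, r in ratings.items())

# Benchmark jobs priced from a salary survey: (points, monthly pay)
benchmarks = [(total_points({"effort": 2, "responsibility": 2, "skill": 2}), 15_000),
              (total_points({"effort": 4, "responsibility": 4, "skill": 5}), 38_000)]

def estimate_pay(points: float) -> float:
    """Linear interpolation between the two benchmark jobs."""
    (p1, pay1), (p2, pay2) = benchmarks
    return pay1 + (pay2 - pay1) * (points - p1) / (p2 - p1)

clerk = total_points({"effort": 3, "responsibility": 2, "skill": 3})
print(round(estimate_pay(clerk)))  # a rough internal pay estimate for the clerk job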
The concept of comparable worth in job evaluation practices relates to the notion that people who are performing jobs of comparable
worth to the organisation should receive comparable pay. Comparable worth is a phrase that contains practical, philosophical, social,
emotional, and legal implications. In more practical terms, it means that people who are performing comparable work should receive
comparable pay, so that their worth to the organisation in terms of compensation is ‘comparable’. Various experts have suggested the
use of internal controls (such as job evaluation) and external controls (for example, salary surveys) to assure this comparability, or that
job evaluation techniques be used to calibrate the pay levels of various jobs in an organisation and thereby assure at least some internal
comparability (Landy & Conte, 2004). Job evaluation and the issue of comparable worth are discussed in much more depth in chapter 8.
Performance criteria
Finally, a job analysis helps outline performance criteria, which are the means for evaluating a worker’s success in performing a job.
Once the specific elements of a job are known, it is easier to develop the means to assess levels of successful or unsuccessful
performance (Riggio, 2009). In this regard, the development of performance assessment and evaluation systems is an extension of the
use of job analysis for criterion development. Once the job analyst identifies critical performance components of a job, it is possible to
develop a system for evaluating the extent to which an individual worker has fallen short of, met or exceeded the standards set by the
organisation for performance on those components (Landy & Conte, 2004). Performance criteria and performance evaluation are the
topics of chapter 9.
Other uses of job analysis information
The results of a job analysis can further be used for other purposes such as recruiting and selecting. If one knows what the job requires
and which attributes are necessary to fulfil those requirements, one can target a company’s recruiting efforts to specific groups or
potential candidates. For technical jobs, these groups may be defined by credentials (such as a bachelor’s degree in engineering) or
experience (for example, five years of programming in a specific area). When one knows the attributes most likely to predict
success, one can identify and choose (or develop) the actual assessment tools (the topic of chapter 5). Based on the job analysis, tests
that measure specific attributes such as personality, general mental ability, or reasoning may be chosen. Similarly, an interview format
intended to get at some subtle aspects of technical knowledge or experience can also be developed by using the information that
stemmed from the job analysis (Landy & Conte, 2004).
A job analysis also helps to identify the areas of performance that create the greatest challenges for incumbents, which can help
managers, human resource professionals, or industrial psychologists to identify training and learning opportunities for individual
workers. Furthermore, detailed job analyses provide a template for making decisions in mergers, acquisitions and downsizing and
rightsizing scenarios which lead to workforce reduction or restructuring. Mergers and acquisitions call for identifying duplicated
positions and centralising functions. The challenge is to identify which positions are truly redundant and which provide a unique added
value. In downsizing and rightsizing interventions, positions with somewhat related tasks are often consolidated into a single position.
The job descriptions of those who stay with the organisation are enlarged, with the result that more responsibilities are assumed by fewer
people. Job analysis provides the information needed by management to decide which tasks to fold into which positions (Landy &
Conte, 2004).
Job analysis further allows the organisation to identify logical career paths as well as the possibility of transfer from one career
ladder to another by means of identifying job families or job ladders. A job ladder or job family is a cluster of positions that are similar
in terms of the human attributes needed to be successful in those positions or in terms of the tasks that are carried out. Accounting jobs,
for example, are closer to budgeting and invoicing positions than they are to engineering or production positions (Landy & Conte,
2004).
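The idea of grouping positions into job families by the similarity of the attributes they require can be illustrated with a small calculation. The sketch below is hypothetical: the attribute profiles, the cosine similarity measure and the 0,8 cut-off are all choices made for illustration, not a procedure prescribed by Landy and Conte (2004).

# A rough illustration of grouping jobs into families by the similarity of the
# human attributes they require. The attribute profiles (0-2 importance ratings
# per KSAO) and the 0.8 similarity cut-off are invented for this sketch.
from math import sqrt

profiles = {
    "secretary":    [2, 2, 1, 0],   # e.g. ratings on clerical, typing, numeracy, engineering KSAOs
    "receptionist": [2, 1, 0, 0],
    "bookkeeper":   [2, 1, 2, 0],
    "engineer":     [0, 0, 1, 2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Greedy grouping: a job joins an existing family if it is similar enough
# to that family's first member, otherwise it starts a new family.
families = []
for job, profile in profiles.items():
    for family in families:
        if cosine(profile, profiles[family[0]]) >= 0.8:
            family.append(job)
            break
    else:
        families.append([job])

print(families)  # the clerical jobs cluster together; 'engineer' stands apart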
Job analyses and their products are also valuable because of legal decisions that make organisations more responsible for personnel
actions as part of the movement towards greater legal rights for the worker. Foremost among these laws are those concerned with
employment equity matters related to employees from the so-called designated groups. Employers cannot make hasty or arbitrary
decisions regarding the hiring, firing or promotion of workers and often need to defend their personnel decisions in court. Certain
personnel actions, such as decisions to hire or promote, must therefore be made on the basis of a thorough job analysis. Sometimes a job
analysis and a job description are not enough. Courts have also questioned the quality of job descriptions, the methods used in job
analysis by many companies, and whether they reflect the requirements stipulated in the Employment Equity Act (Riggio, 2009).
Employment equity legislation and job descriptions
In terms of the South African Employment Equity Act 55 of 1998, fair discrimination must be based on the inherent requirements of the
job. The Act prohibits unfair discrimination against an employee or discrimination in any employment practice, unless the
discrimination is for purposes of affirmative action or based on the inherent requirements of the job. This is especially important when
considering the physical requirements of a job and ensuring that reasonable accommodation is made to make jobs accessible to
previously disadvantaged people.
Inherent requirements refer to the minimum requirements that cannot be eliminated because they are essential to the successful
completion of the job, and often by their nature exclude some people. For example, a fire-fighter must be able-bodied and willing and
capable of doing hard physical work. Someone with a physical disability, such as someone who has lost a leg or an arm, is automatically
excluded from being a fire-fighter. One example that is often found in job descriptions but is not always valid is that the incumbent must
have a driver’s licence and own transport. These requirements can be inherent requirements only if the job is that of a driver, delivery
person, salesperson, or someone who has to travel as part of their job. If the job is an office job, there is no valid reason for requiring a
driver’s licence and own transport (Schreuder et al, 2006).
To ensure that any unfair discrimination is eliminated, employers must ensure that the inherent requirements of a job are clearly
spelled out in the job description. In this regard, job analysis is the process of drawing up a job description and specification and
essentially describing what should be done in a job and what the minimum requirements are. In practical terms, this means that
managers should continuously revise existing job descriptions to ensure that what is required of the incumbent really reflects the inherent
requirements of the job (Schreuder et al, 2006).
The job analysis process
The job analysis process is essentially an information-gathering and data-recording or documentation process. Depending on the type of
job analysis and therefore the type of information to be collected, various job analysis techniques are used. Aamodt (2010) differentiates
between five steps (shown in Figure 4.2) commonly followed in conducting a job analysis. These include:
1. Identifying tasks performed
2. Writing task statements
3. Rating task statements
4. Determining essential KSAOs, and
5. Selecting tests to tap KSAOs.
Reflection 4.4
Study your own job description. If you are not employed or do not have a job description, study the job description of a friend, family
member or relative. Compare the job description with the requirements stipulated by the Employment Equity Act and the general
content guidelines outlined for job descriptions. Does the chosen job description contain all the necessary information? What would
you add or change so that the job description does contain all the necessary information? Would it be better to do a job analysis and
draft a new job description? Does this job description comply with the requirements of employment equity?
Each of these steps will be briefly discussed.
Step 1: Identifying tasks performed
The first step in conducting a job analysis is to identify the major job dimensions and the tasks performed for each dimension, the tools
and equipment used to perform the tasks, and the conditions under which the tasks are performed. This step typically involves gathering
existing information, interviewing subject matter experts (SMEs) (people knowledgeable about the job such as job incumbents,
supervisors, customers, and upper-level management), incumbent observations, and job participation where the job analyst actually
performs an unfamiliar job.
Figure 4.2 The job analysis process
Step 2: Writing task statements
Writing task statements involves using the identified tasks in developing a task inventory that will be reflected in the job description.
Written task statements must include at minimum an action (what is done) and an object (to which the action is done). For example, the
task ‘sends purchase requests’ is too vague, while the statement ‘sends purchase requests to the purchasing department using the
company mail facilities’ is much clearer and more specific. Often, task statements will also include such components as where the task is
done, how it is done, why it is done, and when it is done.
It has also been suggested that a few tasks not part of a job (generally referred to as ‘bogus tasks’) be placed into the task inventory. Incumbents who, through carelessness, rate these bogus or irrelevant tasks as being part of their job can then be identified and their ratings removed from the job analysis. Pine (1995) included five such items in a 68-item task inventory for corrections officers and found that 45 per cent of respondents reported performing at least one of the bogus tasks.
Step 3: Rating task statements
A task analysis is conducted once task statements have been written. Generally, these may include some 200 tasks. A group of SMEs is
used to rate each task statement on the frequency and the importance or critical nature of the task being performed (see Table 4.2). After
a representative sample of SMEs rates each task, the ratings are organised into a format similar to that shown in Table 4.3. Tasks will
generally not be included in the job description if their average frequency rating is 0,5 or below. Tasks will not be included in the final inventory if they have an average rating of 0,5 or less on either the frequency (F) or importance (I) scale, or an average combined rating (CR) of less than 2. Using these criteria, tasks 1 and 2 in Table 4.3 would be included in the job description, and task 2 would be included in the final task inventory used in the next step of the job analysis (Aamodt, 2010).
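A minimal sketch of this screening rule, assuming Python, is shown below. The ratings are invented, and the combined rating is calculated here simply as F + I; that formula is an assumption made for the sketch rather than a rule stated by Aamodt (2010).

# A minimal sketch of the rating screen described above, with invented data.
# Each task carries SME-average ratings: F (frequency) and I (importance);
# the combined rating CR is taken here as F + I (an assumption of this sketch).
tasks = [
    {"task": "Answers customer queries by telephone", "F": 3.1, "I": 2.4},
    {"task": "Sends purchase requests to the purchasing department", "F": 0.4, "I": 1.8},
    {"task": "Orders staff birthday cakes (bogus item)", "F": 0.1, "I": 0.2},
]

for t in tasks:
    t["CR"] = t["F"] + t["I"]

# Keep in the job description only tasks performed with some regularity.
job_description_tasks = [t for t in tasks if t["F"] > 0.5]

# Keep in the final task inventory only tasks that clear all three hurdles.
final_inventory = [t for t in tasks
                   if t["F"] > 0.5 and t["I"] > 0.5 and t["CR"] >= 2]

print([t["task"] for t in job_description_tasks])
print([t["task"] for t in final_inventory])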
Table 4.2 Example of a scale used to rate importance of KSAO for a fire-fighter
Rating   Importance of KSAO
0        KSAO is not needed for satisfactory job performance.
1        KSAO is helpful for satisfactory job performance.
2        KSAO is important/essential for satisfactory job performance.
Step 4: Determining essential KSAOs
KSAOs are identified once the task analysis is completed and a job analyst has an inventory or list of tasks essential for the proper
performance of a job. To link KSAOs logically to tasks, a group of SMEs brainstorm the KSAOs needed to perform each task. Once the
list of essential KSAOs has been developed, another group of SMEs is given the list and asked to rate the extent to which each of the
KSAOs is essential for performing the job, including when the KSAO is needed. However, rather than using this process, KSAOs can
also be identified using such structured methods as, for example, the Fleishman Job Analysis Survey (F-JAS), critical incident technique
(CIT), and the Personality-Related Position Requirements Form (PPRF). Each of these techniques will be discussed later in the chapter.
Table 4.3 Example of task analysis ratings
F = frequency; I = importance; CR = combined rating
A draft job description and specification is compiled once the important KSAOs have been identified. The drafts are reviewed with
the manager and SME committee. Recommendations are integrated into the job description and specification and these documents are
finalised.
Step 5: Selecting tests to measure KSAOs
Once the important KSAOs have been identified, the next step is to determine the best methods (such as, for example, interviews, ability
tests, personality tests, assessment centres) to measure the KSAOs needed at the time of hire. These methods, and how to choose them,
are discussed in more detail in chapter 5.
Types of job analysis
As mentioned previously, the purpose of a job analysis is to combine the task demands of a job with one’s knowledge of human
attributes (KSAOs) and produce a theory of behaviour for the job in question. According to Landy and Conte (2004), there are two ways
to approach building that theory. One is called the task-orientated job analysis and the other the worker-orientated job analysis.
Task-orientated job analysis begins with a statement of the actual tasks as well as what is accomplished by those tasks. The worker-orientated approach, in contrast, takes as its starting point the attributes of the worker necessary to accomplish those tasks. Since worker-orientated
job analyses tend to be more generalised descriptions of human behaviour and behaviour patterns, and are less tightly tied to the
technological aspects of a particular job, they produce data which is more useful for structuring training programmes and giving
feedback to employees in the form of performance evaluation information. Given the volatility that exists in today’s typical workplace
that can make specific task statements less valuable in isolation or even obsolete through technology changes, employers are
significantly more likely to use worker-orientated approaches to job analysis than they did in the past (Landy & Conte, 2004). However,
task-orientated job analysis is regarded as being less vulnerable to potential distorting influences in job analysis data collection than is
the worker-orientated approach (Morgeson & Campion, 1997). The potential distorting influences include such factors as a need on the
part of the employee doing reporting, commonly referred to as a subject matter expert (SME), to conform to what others report, the
desire to make one’s own job look more difficult, attempts to provide the answers that the SME thinks the job analyst wants, and mere
carelessness.
Regardless of which approach is taken, the next step in the job analysis is to identify the attributes (KSAOs) that an incumbent needs
for either performing the tasks or executing the human behaviours described by the job analysis. Tests and other assessment techniques
can then be chosen to measure the KSAOs that have been identified. Traditional task-orientated job analysis has often been accepted as a
necessary, although not sufficient, condition for establishing the validity of selection tests (the topic of chapter 5). In other words, while
a sound job analysis will not guarantee that a test would be found valid, the absence of a credible job analysis may be enough to doom
any claim of job relatedness. How can a test be job related if the testing agent does not know what the critical tasks of the job are? In this
regard, it is likely that there will always be a role for some traditional form of job analysis such as a task- or human attributes-based
analysis. However, the newer and more dynamic extensions such as competency modelling, cognitive task analysis, and performance
components to facilitate strategic planning will enhance the results of a job analysis even more (Landy & Conte, 2004).
Collecting job data
In relation to the purpose of job analysis, the type of job data to collect is one of the choices to be made when applying job analysis
(Voskuijl & Evers, 2008). Regardless of the approach the job analyst decides to use, information about the job is the backbone of the
analysis. The more information and the more ways in which the analyst can collect that information, the better the understanding of the
job.
Types of job data
McCormick (1976, cited in Cartwright & Cooper, 2008:140), distinguished the following types of information to be collected:
• Work activities
• Work performance (for example, time taken and error analysis)
• Job context (for example, social context and physical working conditions)
• Machines, tools, equipment, and work aids used
• Job-related tangibles and intangibles such as materials processed and services rendered, and
• Personnel requirements.
Work activities are divided into job-orientated and worker-orientated activities. Job-orientated activities are usually expressed in job
terms and indicate what is accomplished. Worker-orientated activities refer to behaviours performed in work (for example,
decision-making). Personnel requirements include the KSAOs described in Table 4.1. The process of deriving human attributes (such as
the KSAOs) is described as the psychological part of doing job analysis (Sanchez & Levine, 2001). In this regard, Sackett and Laczo
(2003) observe a growing trend toward the incorporation of personality variables in job analysis. The justification for using
personality-based job analysis is found in the fact that several personality dimensions proved to be valid predictors of work performance.
Furthermore, it is suggested that the use of personality-based job analysis increases the likelihood that the most important personality
traits required for a job are identified (Voskuijl & Evers, 2008; Salgado & Fruyt, 2005).
Agent or source of the information
Job incumbents, supervisors and professional job analysts were traditionally the most important sources of job information. However, a
broader range of information agents is required as the boundaries of jobs become less clear-cut. These include, for example, customers
or training experts. Apart from people, devices such as the use of videotapes and other electronic information provided by cameras, tape
recorders and computers can also be used as a source of job information (Voskuijl & Evers, 2008). Sanchez and Levine (2001)
emphasise the importance of electronic records of performance (for example, in call centres, the number of calls handled) as reliable
sources of work information. Naturally, each type of resource has its particular strengths and weaknesses; for example, incumbents may
have the most information about the content of a job, but professional job analysts may be more familiar with job analysis methods.
Job analysts generally prefer incumbent ratings because these ratings have high face validity. However, the use of incumbent job
analysis data has several disadvantages, such as the following (Voskuijl & Evers, 2008; Sanchez & Levine, 2000):
• It takes up valuable time of large numbers of incumbents.
• Incumbents are not always motivated to rate their jobs conscientiously.
• Rating instructions and the survey format are not always well understood.
• There is no empirical evidence that incumbents are most qualified to ensure valid job information.
Validity and reliability of job analysis data
Voskuijl and Evers (2008) point out that several studies show variability in incumbent ratings that may be unrelated to job content, such
as work attitudes (for example, job satisfaction, organisational commitment, and job involvement – discussed in chapter 7). Differences
within incumbents and between incumbents and others might reflect real differences, for example, employees with longer job tenure
may have more freedom to develop unique patterns or profiles of activities which are all correct. The meta-analysis of Dierdorff and
Wilson (2003) revealed that the rater source did affect the reliability coefficients.
Job analysis outcomes are often the result of subjective (human) judgments which lead to inaccuracy. Two broad categories of
inaccuracy exist, namely social (for example, conformity pressures and social desirability) and cognitive (for example, information
overload and extraneous information) (Morgeson & Campion, 1997). These sources of inaccuracy affect the reliability (the consistency
or stability of a job analysis method or technique) and validity (accuracy) of job analysis data. In more practical terms, reliability refers
to the extent to which the data collected by means of a specific job analysis procedure or technique would consistently be the same if it
was collected again, at a different time, or if different raters were used as sources of information. Validity refers to the accuracy of
inferences made based on the data yielded from the specific job analysis method or technique. It also addresses whether the job analysis
data accurately and completely represents what was intended to be measured (Landy & Conte, 2004). The issue of reliability and
validity is discussed in more depth in chapter 5.
According to Voskuijl and Evers (2008), almost every study on job analysis presents some measure of inter-rater reliability or
intra-rater reliability. Inter-rater reliability refers to consistency across raters, often expressed in intra-class correlations and by means of
pair-wise correlations. Intra-rater reliability refers to a type of test–retest measurement. A meta-analysis conducted by Dierdorff and
Wilson (2003) shows that tasks generally have a higher inter-rater reliability than generalised worker activities. However, task data
showed lower estimates of intra-rater reliability. Professional analysts display higher inter-rater reliabilities than other sources (for
example, incumbents, supervisors, and trained students).
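As an illustration of the pair-wise approach, the sketch below (assuming Python 3.10 or later) computes the average correlation between pairs of raters over a small set of invented task ratings. A full intra-class correlation would require a dedicated statistics package and is not shown here.

# A small sketch of one common way to express inter-rater consistency:
# average pair-wise Pearson correlations between SMEs' task ratings.
# The three raters' ratings over five task statements are invented data.
from itertools import combinations
from statistics import correlation, mean  # statistics.correlation requires Python 3.10+

ratings = {
    "rater_A": [2, 1, 0, 2, 1],
    "rater_B": [2, 1, 1, 2, 0],
    "rater_C": [1, 2, 0, 2, 1],
}

pairwise = [correlation(ratings[a], ratings[b])
            for a, b in combinations(ratings, 2)]
print(round(mean(pairwise), 2))  # higher values indicate more consistent raters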
In terms of validity, Sanchez and Levine (2000) question the meaning of accuracy in terms of the correspondence between job
analysis data and the ‘true’ job characteristics. They argue that it is not possible to assess the true job content and therefore accuracy
cannot be expressed in terms of deviation of an objective and absolute standard. Harvey and Wilson (2000) argue that this view holds
only for highly questionable data collection methods (for example, using inexperienced raters and poorly-anchored scales, and drawing
unverifiable inferences from abstract job dimensions). In their opinion, it is possible to assess jobs correctly and accurately if the analyst
uses the ‘right’ combination of the work or job descriptors and rating scales.
General data collection methods
Choosing a particular method for collecting job data is dependent on the level of specificity required. Specificity of information refers to
the degree of behavioural and technological detail that needs to be provided by the job description items, ranging from specific,
observable, and verifiable to holistic, abstract, and multi-dimensional (Voskuijl & Evers, 2008). That is, should the job analysis break a
job down into very minute, specific behaviours (for example, ‘tilts arm at a 90-degree angle’ or ‘moves foot forward 10 centimetres’), or
should the job be analysed at a more general level (for example, ‘makes financial decisions’, ‘speaks to clients’)? Although most jobs
are analysed at levels somewhere between these two extremes, there are times when the level of analysis will be closer to one end of the
spectrum than the other (Aamodt, 2010). The job- and worker-orientated activities described earlier are respectively highly and
moderately specific. The degree of specificity has an effect on the possibility of cross-job comparisons (Voskuijl & Evers, 2008).
A related decision addresses the issue of formal versus informal requirements. Formal requirements for a secretary may include
typing letters or filing memos. Informal requirements may involve making coffee or collecting the boss’s dry cleaning. Including
informal requirements has the advantages of identifying and eliminating duties that may be illegal or unnecessary. Suppose a job
analysis reveals that a secretary in one department collects the boss’s dry cleaning and takes it to his house. This is an important finding,
because the company may not want this to occur (Aamodt, 2010).
There are various methods for collecting job information. Sackett and Laczo (2003) distinguish between qualitative versus
quantitative methods and taxonomy-based versus blank slate approaches. Taxonomy-based approaches make use of existing taxonomies
of characteristics of jobs, while in blank slate approaches lists of job activities or attributes are generated.
Quantitative and taxonomy-based methods mostly involve standardised questionnaires. In the qualitative and/or blank slate
approaches, interviews with (groups of) incumbents, direct observation of job incumbents, and diaries kept by incumbents are more
appropriate. The more contemporary methods include electronic performance monitoring such as recording of job activities by means of
videotapes or electronic records and cognitive task analysis. Combinations of several methods are also possible. Multi-method
approaches result in more complete pictures of the job (Landy & Conte, 2004). The choice of a specific job analysis method (or
combination of methods) is guided by the purpose and concerns regarding time and cost involved when using a particular method
(Voskuijl & Evers, 2008).
As previously mentioned, an important concern in all methods of job analysis is potential errors and inaccuracies that occur and
which influence the reliability and validity of the information obtained, simply because job analysts, job incumbents, and SMEs are all
human beings (Riggio, 2009). Potential sources of inaccuracy in job analysis range from mere carelessness and poor job analyst training,
to biases such as overestimating or underestimating the importance of certain tasks and jobs, to information overload stemming from the
complexity of some jobs (Morgeson & Campion, 1997). As you will recall from our discussion of research methods in chapter 2, an
important theme for industrial psychologists is to take steps to ensure the reliability and validity of the methods they use in all sorts of
organisational analyses. Nowhere is this more important than in conducting job analyses (Riggio, 2009).
Observation
The observation of jobs helps one to understand not only the jobs in question, but work in general. Observation was perhaps the first
method of job analysis industrial psychologists used. They simply watched incumbents perform their jobs and took notes. Sometimes
they asked questions while watching, and not infrequently they even performed job tasks themselves. Near the end of World War II,
Morris Viteles studied the job of navigators on a submarine. Viteles (1953) attempted to steer the submarine toward the island of
Bermuda. After five not-so-near-misses of 100 miles (160 kilometres) in one direction or another, one frustrated officer suggested that
Viteles raise the periscope, look for clouds, and steer toward them (since clouds tend to form above or near land masses). The vessel
‘found’ Bermuda shortly thereafter (Landy & Conte, 2004).
Observational techniques usually work best with jobs involving manual operations, repetitive tasks, or other easily seen activities.
For example, describing the tasks and duties of a sewing machine operator is much simpler than describing the job of a computer
technician, because much of the computer technician’s job involves the cognitive processes required to troubleshoot computer problems
(Riggio, 2009). In observational analysis, the observer typically takes detailed notes on the exact tasks and duties performed. However,
to make accurate observations, the job analyst must know what to look for.
It is important that the times selected for observation are representative of the worker’s routine, especially if the job requires that the
worker be engaged in different tasks during different times of the day, week, or year. An accounting clerk may deal with payroll
vouchers on Thursdays, spend most of Fridays updating sales figures, and be almost completely occupied with preparing the company’s tax
records during the month of January. One concern regarding observational methods is whether the presence of the observer will in some
way influence workers’ performance. The question is whether workers will perform the task differently because they know that they are
being watched (Riggio, 2009). It is therefore important to supplement observation by interviews which involve talking with incumbents
either at the worksite or in a separate location (Landy & Conte, 2004).
Interviews
Interviews can be open-ended (‘tell me all about what you do on the job’), or they can involve structured or standardised questions
(Riggio, 2009). However, interviews are most effective when structured with a specific set of questions based on observations, other
analyses of the types of jobs in question, or prior discussions with human resource professionals, trainers, or managers knowledgeable
about the jobs (Landy & Conte, 2004). One concern regarding interviews as a technique of job analysis is the fact that any one source of
information can be biased. Therefore, to ensure the reliability of the information obtained from the interviews, the job analyst may want
to get more than one perspective by interviewing the job incumbent, the incumbent’s supervisor, and, if the job is a supervisory one, the
incumbent’s subordinates. The job analyst may also interview several job incumbents or SMEs within a single organisation to get more
reliable representation of the job and to see whether various people holding the same job title in a company actually perform similar
tasks (Riggio, 2009). Group interviews are also referred to as SME conferences (Aamodt, 2010).
Critical incidents and job or work diaries
The critical incident technique (CIT) asks SMEs (subject matter experts who have detailed knowledge about a particular job) to identify
critical aspects of behaviour or performance in a particular job – either through interviews or questionnaires – that have led to success or
failure. The supervisor of a computer programmer may report that in a very time-urgent project, the programmer decided to install a
subroutine without taking time to ‘debug’ it; eventually the entire system crashed because of a flaw in the logic of that one sub-routine
(Landy & Conte, 2004). The real value of the CIT is in helping to determine the particular KSAOs that a worker needs to perform a job
successfully. The CIT is also useful in developing appraisal systems for certain jobs, by helping to identify the critical
components of successful performance (Riggio, 2009). However, the CIT’s greatest drawback is that its emphasis on the difference
between excellent and poor performance ignores routine duties. The CIT therefore cannot be used as the sole method of job analysis
(Aamodt, 2010).
Job or work diaries require workers and/or supervisors to keep a log of their activities over a prescribed period of time. An
advantage of the job or work diary is that it provides a detailed, hour-by-hour, day-by-day account of the worker’s job. However, one
difficulty of diary methods is that they are quite time consuming, both for the worker who is keeping the diary and for the job analyst
who has the task of analysing the large amount of information contained in the diary (Riggio, 2009).
Questionnaires/surveys
Survey methods of job analysis usually involve the administration of a pencil-and-paper or computerised questionnaire that incumbents
or supervisors (SMEs) often respond to as part of a job analysis. As shown in Table 4.4, these questionnaires include task statements in
the form of worker behaviours or categories of job-related aspects. SMEs are asked to rate each statement from their experience on a
number of dimensions such as frequency of performance, importance to overall job success, and whether the task or behaviour must be
performed on the first day of work or can be learned gradually on the job. Questionnaires also ask SMEs to rate the importance of
various KSAOs for performing tasks or task groups, and may ask the SME to rate work context (Landy & Conte, 2004). Examples of
these questionnaires are discussed as specific job analysis techniques in this chapter.
The survey method has three advantages over the interview method. First, the survey is generally more cost effective since it allows
the collection of information from a number of workers simultaneously, providing various perspectives of the job. Second, survey
methods are considered to be more reliable since they are anonymous and there may be less distortion or withholding of information
than in a face-to-face interview (Riggio, 2009). Third, unlike the results of observations or interviews, the questionnaire responses can
be statistically analysed to provide a more objective record of the components of the job (Landy & Conte, 2004).
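As a simple illustration of such statistical analysis, the sketch below summarises invented SME responses to two statements rated on an extent-of-use scale similar to that shown in Table 4.4. It assumes Python and is not part of any standardised instrument.

# A minimal sketch of the kind of summary a questionnaire permits: each SME
# rates each statement and the analyst summarises the ratings. Data invented.
from statistics import mean, stdev

responses = {  # statement -> one rating per SME on a 0-5 extent-of-use scale
    "Uses written material as a source of information": [4, 5, 4, 3],
    "Uses visual displays (dials, gauges, signal lights)": [1, 0, 1, 2],
}

for statement, scores in responses.items():
    print(f"{statement}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}")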
Existing data
Information or records such as a previous job analysis for the position or an analysis of a related job are usually available in most large,
established organisations. Such data may also be borrowed from another organisation that has conducted analyses of similar jobs.
However, Riggio (2009) cautions that existing data should always be checked to make sure it conforms to the job as it is currently being
performed and also to determine if the existing data accounts for the inclusion of new technology in the job.
Specific job analysis techniques
In addition to the general methods for conducting job analyses, there are also a number of widely-used specific, standardised analysis
techniques. These methods tend to provide information on one of four specific factors that are commonly included in a job description:
worker activities, tools and equipment used, work environment, and KSAOs. We will consider a number of these specific techniques.
Table 4.4 Example of job-related items in a job analysis questionnaire (Adapted from Riggio, 2009:70)
POSITION ANALYSIS QUESTIONNAIRE (PAQ)
1. INFORMATION INPUT
1.1 Sources of job information
Rate each of the following items in terms of the extent to which it is used by the worker as a source of information in performing his or
her job.
Code   Extent of use
N      Does not apply
1      Nominal/very infrequent
2      Occasional
3      Moderate
4      Considerable
5      Very substantial
1 ________ Written material (books, reports, office notes, articles, job instructions, signs, etc)
2 ________ Quantitative materials (materials which deal with quantities or amounts, such as graphs, accounts, specifications, tables of
numbers, etc)
3 ________ Pictorial materials (pictures or picture-like materials used as sources of information, for example, drawings, blueprints,
diagrams, maps, tracings, photographic films, x-ray films, TV pictures, etc)
4 ________ Patterns/related devices (templates, stencils, patterns, etc., used as sources of information when observed during use; do
not include here materials described in item 3 above)
5 ________ Visual displays (dials, gauges, signal lights, radarscopes, speedometers, clocks, etc)
Methods providing general information about worker activities
Several questionnaires have been developed to analyse jobs at a more general level. This general analysis saves time and money, and
allows jobs to be more easily compared with one another than is the case if interviews, observations, job participation, or task analysis is
used (Aamodt, 2010). Some of the questionnaires include the Position Analysis Questionnaire (PAQ), the Job Structure Profile (JSP),
the Job Elements Inventory (JEI), and Functional Job Analysis (FJA).
The Position Analysis Questionnaire (PAQ), developed at Purdue University by McCormick, Jeanneret and Mecham (1972), is
based on the worker-orientated approach. This means that generalised worker behaviours (behaviours that are involved in work
activities, for example, advising) are rated and that the instrument has a moderate level of behaviour specificity. The term ‘generalised’
refers to the fact that the elements are not job specific, in order to make it possible to compare different jobs (Voskuijl & Evers, 2008).
In a study conducted by Arvey and Begalla (1975), the PAQ was used to analyse the job of a homemaker. It was found that a
homemaker’s job is most similar to the jobs of a police officer, a fire-fighter, and an airport maintenance chief. However, although
research speaks favourably about the reliability of the PAQ, it also provides cause for concern, because the PAQ appears to yield the
same results regardless of how familiar the analyst is with the job (Aamodt, 2010).
The PAQ contains 194 items organised into six main dimensions:
• Information input
• Mental processes
• Work output
• Relationships with other persons
• Job context, and
• Other job-related variables such as work schedule, pay, and responsibility (Aamodt, 2010).
Table 4.4 provides examples of the typical items included in the information input dimension of the PAQ. The PAQ is regarded as one
of the most widely used and thoroughly researched methods of job analysis (Riggio, 2009). It is also inexpensive and takes relatively
little time to use (Aamodt, 2010).
The Job Structure Profile (JSP) is a revised version of the PAQ developed by Patrick and Moore (1985), which includes changes to
CHAPTER 4
JOB ANALYSIS AND CRITERION DEVELOPMENT
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
JOB ANALYSIS
• Defining a job
• Products of job analysis information
• Employment equity legislation and job descriptions
• The job analysis process
• Types of job analysis
• Collecting job data
• General data collection methods
• Specific job analysis techniques
• Computer-based job analysis
• The South African Organising Framework for Occupations (OFO) and job analysis
COMPETENCY MODELLING
• Defining competencies
• Drawbacks and benefits of competency modelling
• Phases in developing a competency model
• Steps in developing a competency model
CRITERION DEVELOPMENT
• Steps in criterion development
• Predictors and criteria
• Composite criterion versus multiple criteria
• Considerations in criterion development
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
The topics of this chapter, job analysis and criterion development, form the foundation of nearly all personnel activities. Job descriptions, job
or person specifications, job evaluations and performance criteria are all products of job analysis. They are invaluable tools in the recruitment,
selection, performance evaluation, training and development, reward and remuneration and retention of employees. They also provide the
answers to many questions posed by legislation, such as whether a certain requirement stated on a vacancy advertisement is really an inherent
job requirement. They are further helpful in deciding if a psychological test should be used and which one should be used in the prediction of
job performance (Schreuder, Botha & Coetzee, 2006).
In general, industrial psychologists aim to demonstrate the utility of their procedures such as job analysis in enhancing managers’, workers’
and their own understanding of the determinants of job success. This chapter firstly explores the function and uses of job analysis in
understanding performance expectations (properties of the job in the context of the organisation’s expectations) as well as the required
knowledge, skills, experience, abilities and personal characteristics necessary to meet those expectations. Secondly, since the behaviour that
constitutes or defines successful performance of a given task is regarded as a criterion that needs to be measured in a valid and reliable
manner, the chapter also addresses the matter of criterion development, for which job analysis provides the raw material.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Describe the purpose and products of job analysis.
2. Differentiate between task performance and contextual performance.
3. Discuss the job analysis process, the sources of information, and the types of information to be collected.
4. Differentiate between the uses of the various job analysis methods and techniques.
5. Outline the information to be contained in a job description.
6. Discuss employment equity considerations in job analysis and job descriptions.
7. Evaluate the usefulness of the Organising Framework for Occupations (OFO) in the context of job analysis.
8. Identify factors that influence job analysis reliability and validity.
9. Differentiate between job analysis and competency modelling and evaluate the advantages and disadvantages of each approach.
10. Discuss the concept of criterion distortion.
11. Evaluate the various aspects to be considered in criterion development.
CHAPTER INTRODUCTION
As discussed in chapter 1, personnel psychology deals with the behaviour of workers or worker performance. Time to complete a
training course; total number of days absent; promotion rate within an organisation; and turnover rate are all examples of variables that
industrial psychologists have used as measures of performance. However, measuring employee performance is often regarded as being
quite a challenge since in nearly all jobs, performance is multi-dimensional; that is, the prediction (the topic of chapter 5) and
measurement of behaviour requires a consideration of many different aspects of performance. Workers call on many attributes to
perform their jobs, and each of these human attributes is associated with unique aspects of performance. Furthermore, measures of
individuals’ performance may often include factors beyond the control of individuals. For example, time to complete a training course
may be constrained by how much time the employee can be away from the workplace, or the promotion rate within an organisation
may be affected by the turnover rate in that organisation (Landy & Conte, 2004).
Furthermore, industrial psychologists have devoted a great deal of their research and practice to understanding and improving the
performance of workers. Research has shown that individuals differ regarding their level of performance. Campbell, Gasser and
Oswald (1996) found, for example, that the ratio of the productivity of the highest performer to the lowest performer in jobs of low
difficulty ranges from 2 : 1 to 4 : 1; while in jobs of high difficulty, this ratio can be as much as 10 : 1. This represents a striking
degree of variation which has important implications for companies who strive to sustain their competitive edge in the contemporary
business environment. Imagine a sales representative who closes on average five contracts a month versus one who brings in 50. From
this it is clear why industrial psychologists and employers are interested in worker performance and why measuring employee
performance can be quite a challenge (Landy & Conte, 2004).
Performance in the context of personnel psychology is behaviour that can actually be observed. Employee performance is therefore
discussed in the context of one or more tasks that define a job. These tasks can be found in, for example, job descriptions and work
orders. Of course, in many jobs behaviour relates to thinking, planning or problem-solving and cannot actually be observed; instead, it
can be described only with the help of the individual worker. In the work setting, performance includes only those actions or
behaviours that are relevant to the organisation’s goals and can be measured in terms of each individual’s proficiency in terms of
specific tasks. Employers and industrial psychologists concern themselves with the performance that the organisation employs an
employee to achieve and to achieve well. Performance in this regard is not the consequence or result of action; it is the action itself
over which the worker has to an extent a measure of control (Landy & Conte, 2004).
Since human performance is variable (and potentially unreliable) owing to various situational factors that influence individuals’
performance, industrial psychologists are often confronted by the challenge of developing performance criteria that are relevant,
reliable, practical, adequate, and appropriate for measuring worker behaviour. Industrial psychologists generally refer to the ‘criterion
problem’ when pointing out the difficulties involved in the process of conceptualising and measuring performance constructs that are
multi-dimensional, dynamic, and appropriate for different purposes (Cascio & Aguinis, 2005). Since job analysis provides the raw
material for criterion development, this chapter therefore also explores the development and use of criteria in measuring the task
performance of employees.
The task performance of workers (the topic of chapter 9) is generally contrasted with their contextual performance. Whereas task
performance is defined as the proficiency with which job incumbents perform activities that are formally recognised as part of their job
(Borman & Motowidlo, 1993:73), contextual performance, in contrast, is more informal and refers to activities not typically part of job
descriptions but which support the organisational, social and psychological environment in which the job tasks are performed. These
include extra-role behaviours (also called organisational citizenship behaviours) such as endorsing, supporting and defending
organisational objectives or volunteering to carry out task activities that are not formally part of one’s own job description (Landy &
Conte, 2004). Job analysis and criterion development concern themselves with the task performance of workers, while the retention of
employees (the topic of chapter 7) is usually concerned with both the task performance and contextual performance of workers.
JOB ANALYSIS
Job analysis refers to various methodologies for analysing the requirements of a job and is regarded as one of the most basic personnel
functions. As jobs become more and more complex, the need for effective and comprehensive job analyses becomes increasingly
important (Riggio, 2009).
One of the first industrial psychologists to introduce standardised job analysis was Morris Viteles who used job analysis as early as
1922 to select employees for a trolley car company (Viteles, 1922). In the more than 80 years since then, the purpose of job analysis has not
changed; it remains one of understanding the behavioural requirements of work. The job analyst wants to understand what the important
tasks of the job are, how they are carried out, and what human attributes are necessary to carry them out successfully (Landy & Conte,
2004). In short, job analysis is the systematic study of the tasks, duties and responsibilities of a job and the knowledge, skills and
abilities needed to perform it. Since job analysis reflects an up-to-date description of the work actually being performed by a worker, it
provides managers and industrial psychologists with a greater understanding of what a particular job entails (Riggio, 2009).
In addition, job analysis is an essential instrument in human resource applications that relate to the employment and retention of
personnel. It forms the basis for personnel decisions in focusing on answers to questions such as: Selection for what? Reward and
remuneration for what? Appraisal for what? (Voskuijl & Evers, 2008). Before a worker can be hired or trained and before a worker’s
performance can be evaluated, it is critical to understand exactly what the worker’s job entails. Such analyses should also be conducted
on a periodic basis by using precise and comprehensive methods to ensure that the information on jobs is up to date. Moreover, in
today’s complex and ever-changing, ever-evolving jobs, job analysis should not be a limiting process, that is, analyses of jobs should
allow for flexibility and creativity in many jobs, rather than being used to tell people how to do their work (Riggio, 2009).
Defining a job
In the South African occupational learning context (the topic of chapter 10), a job is seen as a set of roles and tasks designed to be
performed by one individual for an employer (including self-employment) in return for payment or profit (Department of Labour, 2009).
However, in the context of job analysis, a job is generally defined as a group of positions that are similar enough in their job elements,
tasks and duties to be covered by the same job analysis, for example, human resource manager (Schreuder et al, 2006:32). In practical
terms, this means that a job is not a single entity, but rather a group of positions. A position is a set of tasks performed by a single
employee. For example, the job of secretary may exist in an organisation, with ten employees each holding a position as a secretary. Any secretary appointed to such a position has to perform various tasks. Tasks are regarded as the basic units of work that are
directed towards specific job objectives (Muchinsky, Kriek & Schreuder, 2005). In more practical terms, tasks make up a position, while
a group of positions make up a job. Jobs can also be grouped into larger units, which are called job families. For example, the jobs of a
secretary, receptionist, clerk, and bookkeeper can all be grouped into the job family of clerical positions (Schreuder et al, 2006).
Most jobs are quite complex and require workers to possess certain types of knowledge and skills to perform a variety of different
tasks. Workers may need to operate complex machinery to perform their job, or they may need to possess a great deal of information
about a particular product or service. Jobs may also require workers to interact effectively with different types of people, or a single job
may require a worker to possess all these important skills and knowledge. Because job analysis typically involves the objective
measurement of work behaviour performed by actual workers, the job analyst must be a professional who is well trained in the basic
research methods discussed in chapter 2 to perform a good or accurate job analysis (Riggio, 2009). However, Aamodt (2010) advises
that employees should be involved in the job analysis. In organisations in which many people perform the same job (e.g. educators at a
university, assemblers in a factory), not every person need participate. For organisations with relatively few people in each job, it is
advisable to have all people participate in the job analysis.
Deciding how many people should be involved in the job analysis depends on whether the job analysis will be committee-based or
field-based. In a committee-based job analysis, a group of subject matter experts (SMEs) – for example, employees and supervisors –
meet to generate the tasks performed, the conditions under which they are performed, and the knowledge, skills, abilities, and other
characteristics (KSAOs – see Table 4.1) needed to perform them. Rouleau and Krain (1975) recommend that a committee-based
approach should have one session of four to six incumbents for jobs having fewer than 30 incumbents and two to three sessions for jobs
with higher numbers of incumbents.
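This is only a rule of thumb, but it can be expressed as a short sketch in Python (the function name and the decision to return the upper end of the two-to-three-session range are purely illustrative and not part of Rouleau and Krain's recommendation):

def recommended_sme_sessions(num_incumbents: int) -> int:
    """Illustrative helper for the Rouleau and Krain (1975) rule of thumb:
    one committee session of four to six incumbents for jobs with fewer
    than 30 incumbents, and two to three sessions for larger jobs."""
    return 1 if num_incumbents < 30 else 3  # 3 = upper end of the 2-3 range

print(recommended_sme_sessions(12))   # 1 session
print(recommended_sme_sessions(120))  # 3 sessions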
In a field-based job analysis, the job analyst individually interviews or observes a number of incumbents out in the field. Taken
together, the results of research studies suggest that committee-based job analyses yield similar results to field-based job analyses.
Employee participants should be selected in as random a way as practical, yet still be representative. The reason for this, according to
research, is that employee differences in gender, race, job performance level, experience, job enjoyment, and personality can at times
result in slightly different job analysis outcomes (Aamodt, 2010).
Furthermore, it is important for companies and industrial psychologists to adopt a future-orientated approach to job analysis, where
the traditional job analysis approach is supplemented by group discussion to identify how existing KSAOs (see Table 4.1) are likely to
change in the future. A key problem with job analysis is often that it assumes that the job being reviewed will remain unchanged. The
nature of the environment within which many organisations are required to operate means that such assumptions can be inaccurate
(Robinson, 2006).
Table 4.1 Description of KSAOs
Knowledge: A collection of discrete but related facts and information about a particular domain, acquired through formal education or
training, or accumulated through specific experiences.
Skill: A behavioural capability, that is, the observed behaviour or practised act that reflects a cultivated ability or aptitude to perform the act (for example, playing the violin) at a specified level of proficiency.
Ability: The end product of learning is ability. Abilities are behavioural attributes that have attained some stability or invariance
through a typically lengthy learning process. In other words, ability refers to the stable capacity to engage in a specific behaviour such
as playing the violin. Abilities are indicators of aptitude and can be used to predict socially significant future behaviour, such as job
performance. Abilities help to predict the proficiency level on a particular skill that a person can attain if given the opportunity to learn
the skill. Such a predicted skill level is referred to as aptitude.
Other characteristics: Personality variables, interests, training and experience.
Reflection 4.1
Study the job attributes described below, and sort the KSAOs described below under the appropriate headings in the table.
Advertised job attributes
Position: Facilities Manager
Requirements: B Degree in Management. Able to work with MS Office (Excel®, Access®, Word®, PowerPoint® and e-mail). Good
English communication skills. Knowledge of project management.
Relevant experience: 2 to 3 years in facility management. Must have knowledge and experience in implementation of all the relevant
legislation, such as the Occupational Health and Safety Act, and be able to plan, lead, organise and control a team. Experience in
people management will be an added advantage.
Person profile: The ideal candidate should be conversant with the relevant legislation and Public Service Regulations and GIAMA. A
proactive, innovative, cost-conscious, resourceful, diversity- and customer-focused individual with integrity and good interpersonal
skills, planning and organising abilities, time management, and problem-solving skills is necessary. Must be quality-orientated, and a
strong administrator. Must be a team player, performance driven, and able to work in a very pressurised environment.
Key performance areas (KPAs):
• To ensure maintenance of the workplace facilities (offices, boardrooms, and other facilities).
• To ensure maintenance of telecommunications infrastructure and telecommunications services in the Department.
• To ensure that the workplace facility is designed in line with the Occupational Health and Safety as well as GIAMA
standards.
• To ensure supervision of Facilities Management Services.
• To compile all statistics to contribute to the monthly supervision of logistical support staff service and annual reports.
• To compile the Logistical Support monthly, quarterly and annual reports.
Table headings: Attribute; K, S, A or O; Justification
Products of job analysis information
As shown in Figure 4.1, a job analysis leads directly to the development of several other important personnel ‘products’: a job
description (what the job entails), a job or person specification (what kind of people to hire for the job), a job evaluation, and
performance criteria.
Job description
One of the written products of a job analysis is a job description – a brief, two- to five-page summary of the tasks and job requirements
found in the job analysis. Job descriptions refer to the result of defining the job in terms of its task and person requirements (KSAOs),
and include characteristics of the job such as the procedures, methods and standards of performance (Voskuijl & Evers, 2008). In other
words, the job analysis is the process of determining the work activities and requirements, and the job description is the written result of
the job analysis. Job analyses and job descriptions serve as the basis for many human resource activities related to the employment
process discussed in chapter 3, including human resource planning, employee selection, performance evaluation, reward and
remuneration, training and work design (Aamodt, 2010). Job descriptions are especially useful in recruitment and selection in clarifying
the nature and scope of responsibilities attached to specific jobs. They are also useful in training and development and performance
appraisal or evaluation (Robinson, 2006).
Figure 4.1 Links between job analysis products and the employment and retention processes
Job descriptions should be updated if a job changes significantly. With high-tech jobs, this is probably fairly often. With jobs such as
package handling, the job might not change substantially. Job crafting – the informal changes that employees make in their jobs – is one
of the reasons that job descriptions change across time (Aamodt, 2010; Wrzesniewski & Dutton, 2001). It is common for employees to
expand the scope of their jobs quietly to add tasks they want to perform and to remove tasks that they do not want to perform. In a study
of sales representatives, 75 per cent engaged in job crafting in just one year (Lyons, 2008)!
Job descriptions vary in form and content but generally specify the job title, location, reporting relationships, job purpose, key
responsibilities, and any limits on authority or responsibility (Robinson, 2006). A job description typically includes the following
(Aamodt, 2010; Marchington & Wilkinson, 2008):
• Job title: A clear statement is all that is required, such as ‘transport manager’ or ‘customer service assistant’. An accurate title
describes the nature of the job and aids in employee selection and recruitment. If the job title indicates the true nature of the
job, potential applicants for a position will be better able to determine whether their skills and experience match those
required for the job.
A job title can also affect perceptions of the status and worth of a job. For example, job descriptions containing gender-neutral
titles such as ‘administrative assistant’ are evaluated as being worth more money than ones containing titles with a female-sex
linkage, such as executive secretary (Aamodt, 2010).
•	Location: Department, establishment, the name of the organisation.
•	Responsible to: The job title of the supervisor to whom the member of staff reports.
•	Responsible for: Job titles of the members of staff who report directly into the job-holder, if any.
•	Main purpose of the job: A short and unambiguous summary statement indicating precisely the overall objective and purpose of the job, such as ‘assist and advise customers in a specific area’, ‘drill metals in accordance with manufacturing policy’. This summary can be used in help-wanted advertisements, internal job postings, and company brochures.
•	Responsibilities/duties/work activities: A list of the main and subsidiary elements in the job, specifying in more or less detail what is required, such as maintaining records held on a computer system, or answering queries. Tasks and activities should be organised into meaningful categories to make the job description easy to read and understand. The work activities performed by a bookkeeper can, for example, be divided into seven main areas: accounting, clerical, teller, share draft, collections, payroll, and financial operations.
•	Work performance: The job description should outline standards of performance and other issues such as geographical mobility. This section contains a relatively brief description of how an employee’s performance is evaluated and what work standards are expected of the employee.
•	Job competencies: This section contains what are commonly called job or person specifications or competencies. These are the knowledge, skills, abilities and other characteristics (KSAOs), such as interest, personality, and training, that are necessary to be successful on the job. The competencies section should be divided into two subsections: The first contains KSAOs that an employee must have at the time of hiring. The second subsection contains the KSAOs that are an important part of the job but can be obtained after being hired. The first set of KSAOs is used for employee selection and the second for training purposes.
•	Working conditions: A list of the major contractual agreements relating to the job, such as pay scales and fringe benefits, hours of work, holiday entitlement, and union membership, if appropriate. The employee’s actual salary or salary range should not be listed on the job description.
•	Job context: This section should describe the environment in which the employee works and should mention stress level, work schedule, physical demands, level of responsibility, temperature, number of co-workers, degree of danger, and any other relevant information. This information is especially important in providing applicants with disabilities with information they can use to determine their ability to perform a job under a particular set of circumstances.
•	Tools and equipment used: A section should be included that lists all the tools and equipment used to perform the work activities, responsibilities or duties listed. Even though tools and equipment may have been mentioned in the activities section, placing them in a separate section makes their identification simpler. Information in this section is used primarily for employee selection and training. That is, an applicant can be asked if he or she can operate an adding machine, a computer, and a credit history machine.
•	Other matters: Information such as career path or career mobility opportunities.
•	Any other duties that may be assigned by the organisation.
It is important to note that job descriptions can quickly become out of date and may lack the flexibility to reflect changes in the job role,
which may constrain opportunities to reorganise responsibilities and reporting lines (Robinson, 2006). A study by Vincent, Rainey,
Faulkner, Mascio and Zinda (2007) compared the stability of job descriptions at intervals of 1, 6, 10, 12 and 20 years. After one year, 92
per cent of the tasks listed in the old and updated job descriptions were the same, dropping to 54 per cent after ten years.
Moreover, despite being widely used, job descriptions have been heavily criticised for being outmoded and increasingly irrelevant to
modern conditions, reflecting an inflexible and rules-orientated culture. It is argued that workers should not be concerned with the
precise definition of ‘standard’ behaviour but rather with how ‘value’ can be added through personal initiative and organisational
citizenship behaviours. Consequently, highly-specific job descriptions have been replaced by more generic and concise job profiles or
accountability statements that are short – sometimes less than one page – and focused on the outcomes of the job rather than its process
components. Another alternative is to use role definitions and ‘key result area’ statements (KRAs) that relate to the critical performance
measures for the job (Marchington & Wilkinson, 2008). An example of a key result area (KRA) or key performance area (KPA) is
preparation of marketing plans that support the achievement of corporate targets for profit and sales revenue. A focus on accountability
and/or KRAs (or KPAs) also sends a clear message about performance expectations in the organisation (Robinson, 2006).
Reflection 4.2
Review the previous Reflection activity about KSAOs. Read through the KPAs listed in the advertisement. See if you can identify the
critical performance measures of the job.
Notwithstanding the criticisms, job descriptions provide recruits with essential information about the organisation and their potential
role, and without them people would apply for jobs without any form of realistic job preview. Furthermore, having to outline critical
result areas or accountability profiles can help managers decide whether or not it is necessary to fill a post, and if so in what form and at
what level (Marchington & Wilkinson, 2008).
Job or person specifications
A job analysis also leads to a job specification, which provides information about the human characteristics required to perform the job,
such as physical and personal traits, work experience, and education. Usually job specifications give the minimum acceptable
qualifications that an employee needs to perform a given job. Job specifications are determined by deciding what types of KSAOs are
needed to perform the tasks identified in the job analysis. These KSAOs can be determined through a combination of logic, research,
and use of specific job analysis techniques discussed later in the chapter.
The traditional job or person specification is giving way to competency frameworks, the most significant advantage of these being
that the focus is – or should be – on the behaviours of job applicants. This can decrease the degree of subjectivity inherent in the
recruitment and selection process, and reduce the tendency to make subjective inferences about the personal qualities that might underpin
behaviour (Marchington & Wilkinson, 2008). However, job descriptions and job/person specifications often exist alongside
competency-based approaches (Taylor, 2005), because they set a framework within which subsequent human resource practices – such
as performance evaluation, training and development, and pay and grading – can be placed. In addition, the competencies can be related
to specific performance outcomes rather than being concerned with potentially vague processes, such as disposition or interests outside
of work. Moreover, these approaches eschew criteria that are used simply because they are easy to measure – for example, educational qualifications or length of service – but that may not relate closely to job effectiveness, which is the evaluation of the results of performance (Marchington & Wilkinson, 2008).
Job evaluation
A third personnel ‘product’, job evaluation, is the formal assessment and systematic comparison of the relative value or worth of a job to
an organisation with the aim of determining appropriate compensation (the topic of chapter 8), or reward and remuneration. The wages
paid for a particular job should be related to the KSAOs it requires. However, a number of other variables, such as the supply of
potential workers, the perceived value of the job to the company, and the job’s history, can also influence its rate of compensation
(Riggio, 2009).
Reflection 4.3
Use the following headings to determine your position, tasks, job and job family and other specifications:
My position is:
Tasks associated with this position are:
The job is called:
The job family is:
The job/person specifications are:
The required KSAOs are:
The KRAs (KPAs) are:
Once a job analysis has been completed and a thorough job description written, it is important to determine how much employees in
a position should be paid (Aamodt, 2010). The basic job evaluation procedure is to compare the content of jobs in relation to one
another. This may be considered in terms of their effort, responsibility, and skills. A company that knows (based on its salary survey and compensation policies) how to price key benchmark jobs, and can use job evaluation to determine the relative worth of all other jobs in the company relative to these key jobs, is well on the way to being able to price all the jobs in its organisation equitably (Dessler, 2009).
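The chapter does not prescribe a particular calculation, but a minimal point-factor sketch in Python (with entirely hypothetical factor weights, ratings, benchmark jobs and pay figures) can illustrate the underlying idea of rating jobs on effort, responsibility and skill and pricing them relative to benchmark jobs:

# Hypothetical point-factor illustration: each job is rated (1-5) on effort,
# responsibility and skill, the ratings are weighted and summed, and pay is
# interpolated between benchmark jobs with known (assumed) market rates.
FACTOR_WEIGHTS = {"effort": 20, "responsibility": 40, "skill": 40}  # assumed weights

def job_points(ratings: dict) -> int:
    """Total evaluation points for a job given its 1-5 factor ratings."""
    return sum(FACTOR_WEIGHTS[f] * ratings[f] for f in FACTOR_WEIGHTS)

# Benchmark jobs priced from a (hypothetical) salary survey.
benchmarks = {
    "clerk":   {"points": job_points({"effort": 2, "responsibility": 2, "skill": 2}), "pay": 120_000},
    "manager": {"points": job_points({"effort": 3, "responsibility": 5, "skill": 4}), "pay": 420_000},
}

def estimate_pay(ratings: dict) -> float:
    """Linearly interpolate pay between the two benchmark jobs."""
    low, high = benchmarks["clerk"], benchmarks["manager"]
    pts = job_points(ratings)
    frac = (pts - low["points"]) / (high["points"] - low["points"])
    return low["pay"] + frac * (high["pay"] - low["pay"])

print(round(estimate_pay({"effort": 3, "responsibility": 3, "skill": 3})))

In practice, job evaluation systems use far more elaborate factor plans together with salary-survey data; these are discussed in depth in chapter 8.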
The concept of comparable worth in job evaluation practices relates to the notion that people who are performing jobs of comparable
worth to the organisation should receive comparable pay. Comparable worth is a phrase that contains practical, philosophical, social,
emotional, and legal implications. In more practical terms, it means that people who are performing comparable work should receive
comparable pay, so that their worth to the organisation in terms of compensation is ‘comparable’. Various experts have suggested the
use of internal controls (such as job evaluation) and external controls (for example, salary surveys) to assure this comparability, or that
job evaluation techniques be used to calibrate the pay levels of various jobs in an organisation and thereby assure at least some internal
comparability (Landy & Conte, 2004). Job evaluation and the issue of comparable worth are discussed in much more depth in chapter 8.
Performance criteria
Finally, a job analysis helps outline performance criteria, which are the means for evaluating a worker’s success in performing a job.
Once the specific elements of a job are known, it is easier to develop the means to assess levels of successful or unsuccessful
performance (Riggio, 2009). In this regard, the development of performance assessment and evaluation systems is an extension of the
use of job analysis for criterion development. Once the job analyst identifies critical performance components of a job, it is possible to
develop a system for evaluating the extent to which an individual worker has fallen short of, met or exceeded the standards set by the
organisation for performance on those components (Landy & Conte, 2004). Performance criteria and performance evaluation are the
topics of chapter 9.
Other uses of job analysis information
The results of a job analysis can further be used for other purposes such as recruiting and selecting. If one knows what the job requires
and which attributes are necessary to fulfil those requirements, one can target a company’s recruiting efforts to specific groups or
potential candidates. For technical jobs, these groups may be defined by credentials (such as a bachelor’s degree in engineering) or
experience (for example, five years’ programming in a specific area of the job). When one knows the attributes most likely to predict
success, one can identify and choose (or develop) the actual assessment tools (the topic of chapter 5). Based on the job analysis, tests
that measure specific attributes such as personality, general mental ability, or reasoning may be chosen. Similarly, an interview format
intended to get at some subtle aspects of technical knowledge or experience can also be developed by using the information that
stemmed from the job analysis (Landy & Conte, 2004).
A job analysis also helps to identify the areas of performance that create the greatest challenges for incumbents, which can help managers, human resource professionals, or industrial psychologists to identify training and learning opportunities for individual workers. Furthermore, detailed job analyses provide a template for making decisions in mergers, acquisitions, and downsizing and rightsizing scenarios that lead to workforce reduction or restructuring. Mergers and acquisitions call for identifying duplicated
positions and centralising functions. The challenge is to identify which positions are truly redundant and which provide a unique added
value. In downsizing and rightsizing interventions, positions with somewhat related tasks are often consolidated into a single position.
The job descriptions of those who stay with the organisation are enlarged, with the result that more responsibilities are assumed by fewer
people. Job analysis provides the information needed by management to decide which tasks to fold into which positions (Landy &
Conte, 2004).
Job analysis further allows the organisation to identify logical career paths as well as the possibility of transfer from one career
ladder to another by means of identifying job families or job ladders. A job ladder or job family is a cluster of positions that are similar
in terms of the human attributes needed to be successful in those positions or in terms of the tasks that are carried out. Accounting jobs,
for example, are closer to budgeting and invoicing positions than they are to engineering or production positions (Landy & Conte,
2004).
Job analyses and their products are also valuable because of legal decisions that make organisations more responsible for personnel
actions as part of the movement towards greater legal rights for the worker. Foremost among these laws are those concerned with
employment equity matters related to employees from the so-called designated groups. Employers cannot make hasty or arbitrary
decisions regarding the hiring, firing or promotion of workers and often need to defend their personnel decisions in court. Certain
personnel actions, such as decisions to hire or promote, must therefore be made on the basis of a thorough job analysis. Sometimes a job
analysis and a job description are not enough. Courts have also questioned the quality of job descriptions, the methods used in job
analysis by many companies, and whether they reflect the requirements stipulated in the Employment Equity Act (Riggio, 2009).
Employment equity legislation and job descriptions
In terms of the South African Employment Equity Act 55 of 1998, fair discrimination must be based on the inherent requirements of the
job. The Act prohibits unfair discrimination against an employee or discrimination in any employment practice, unless the
discrimination is for purposes of affirmative action or based on the inherent requirements of the job. This is especially important when
considering the physical requirements of a job and ensuring that reasonable accommodation is made to make jobs accessible to
previously disadvantaged people.
Inherent requirements refer to the minimum requirements that cannot be eliminated because they are essential to the successful
completion of the job, and often by their nature exclude some people. For example, a fire-fighter must be able-bodied and willing and
capable of doing hard physical work. Someone with a physical disability, such as someone who has lost a leg or an arm, is automatically
excluded from being a fire-fighter. One example that is often found in job descriptions but is not always valid is that the incumbent must
have a driver’s licence and own transport. These requirements can be inherent requirements only if the job is that of a driver, delivery
person, salesperson, or someone who has to travel as part of their job. If the job is an office job, there is no valid reason for requiring a
driver’s licence and own transport (Schreuder et al, 2006).
To ensure that any unfair discrimination is eliminated, employers must ensure that the inherent requirements of a job are clearly
spelled out in the job description. In this regard, job analysis is the process of drawing up a job description and specification and
essentially describing what should be done in a job and what the minimum requirements are. In practical terms, this means that
managers should continuously revise existing job descriptions to ensure that what is required of the incumbent really reflects the inherent requirements of the job (Schreuder et al, 2006).
The job analysis process
The job analysis process is essentially an information-gathering and data-recording or documentation process. Depending on the type of
job analysis and therefore the type of information to be collected, various job analysis techniques are used. Aamodt (2010) distinguishes five steps (shown in Figure 4.2) commonly followed in conducting a job analysis. These include:
1. Identifying tasks performed
2. Writing task statements
3. Rating task statements
4. Determining essential KSAOs, and
5. Selecting tests to tap KSAOs.
Reflection 4.4
Study your own job description. If you are not employed or do not have a job description, study the job description of a friend, family
member or relative. Compare the job description with the requirements stipulated by the Employment Equity Act and the general
content guidelines outlined for job descriptions. Does the chosen job description contain all the necessary information? What would
you add or change so that the job description does contain all the necessary information? Would it be better to do a job analysis and
draft a new job description? Does this job description comply with the requirements of employment equity?
Each of these steps will be briefly discussed.
Step 1: Identifying tasks performed
The first step in conducting a job analysis is to identify the major job dimensions and the tasks performed for each dimension, the tools
and equipment used to perform the tasks, and the conditions under which the tasks are performed. This step typically involves gathering
existing information, interviewing subject matter experts (SMEs) (people knowledgeable about the job such as job incumbents,
supervisors, customers, and upper-level management), incumbent observations, and job participation where the job analyst actually
performs an unfamiliar job.
Figure 4.2 The job analysis process
Step 2: Writing task statements
Writing task statements involves using the identified tasks in developing a task inventory that will be reflected in the job description.
Written task statements must include at minimum an action (what is done) and an object (to which the action is done). For example, the
task ‘sends purchase requests’ is too vague, while the statement ‘sends purchase requests to the purchasing department using the
company mail facilities’ is much clearer and more specific. Often, task statements will also include such components as where the task is
done, how it is done, why it is done, and when it is done.
It has also been suggested that a few tasks not part of a job (generally referred to as ‘bogus tasks’) be placed into the task inventory.
Because nobody actually performs these tasks, incumbents who rate bogus tasks as being part of their job are probably completing the inventory carelessly, and their ratings can be removed from the job analysis. Pine (1995) included five such items in a 68-item task inventory for corrections officers and found that 45 per cent of respondents reported performing at least one of the bogus tasks.
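As an illustration only (the record layout and field names below are not drawn from the chapter), the components of a task statement can be captured in a simple structure such as the following Python sketch:

from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskStatement:
    """Illustrative record of a task statement: an action and an object are
    required; where, how, why and when the task is done are optional."""
    action: str                 # what is done, e.g. "sends"
    obj: str                    # to which the action is done, e.g. "purchase requests"
    where: Optional[str] = None
    how: Optional[str] = None
    why: Optional[str] = None
    when: Optional[str] = None

    def text(self) -> str:
        parts = [f"{self.action} {self.obj}"]
        if self.where:
            parts.append(self.where)
        if self.how:
            parts.append(self.how)
        return " ".join(parts)

# The vague statement versus the fuller statement from the chapter's example:
vague = TaskStatement(action="sends", obj="purchase requests")
clear = TaskStatement(action="sends", obj="purchase requests",
                      where="to the purchasing department",
                      how="using the company mail facilities")
print(vague.text())
print(clear.text())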
Step 3: Rating task statements
A task analysis is conducted once task statements have been written. Generally, these may include some 200 tasks. A group of SMEs is
used to rate each task statement on the frequency and the importance or critical nature of the task being performed (see Table 4.2). After
a representative sample of SMEs rates each task, the ratings are organised into a format similar to that shown in Table 4.3. Tasks will
generally not be included in the job description if their average frequency rating is 0,5 or below. Tasks will not be included in the final inventory if they have an average rating of 0,5 or less on either the frequency (F) or importance (I) scale, or an average combined rating (CR) of less than 2. Using these criteria, tasks 1 and 2 in Table 4.3 would be included in the job description, and task 2 would be included in the final task inventory used in the next step of the job analysis (Aamodt, 2010).
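A short Python sketch illustrates how these cutoffs might be applied; the ratings below are invented stand-ins (the body of Table 4.3 is not reproduced here), and decimal points replace the decimal commas used in the text:

# Hypothetical average SME ratings for three tasks: frequency (F),
# importance (I) and combined rating (CR).
tasks = [
    {"task": "Task 1", "F": 1.7, "I": 0.4, "CR": 2.1},
    {"task": "Task 2", "F": 2.3, "I": 1.6, "CR": 3.9},
    {"task": "Task 3", "F": 0.3, "I": 0.2, "CR": 0.5},
]

def keep_in_final_inventory(t: dict) -> bool:
    """Apply the cutoffs described in the text: drop a task if its average
    F or I is 0.5 or less, or its average combined rating is below 2."""
    return t["F"] > 0.5 and t["I"] > 0.5 and t["CR"] >= 2

final_inventory = [t["task"] for t in tasks if keep_in_final_inventory(t)]
print(final_inventory)  # only Task 2 survives with these made-up ratings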
Table 4.2 Example of a scale used to rate importance of KSAO for a fire-fighter
Importance of KSAO
0 KSAO is not needed for satisfactory job performance.
1 KSAO is helpful for satisfactory job performance.
2 KSAO is important/essential for satisfactory job performance.
Step 4: Determining essential KSAOs
KSAOs are identified once the task analysis is completed and a job analyst has an inventory or list of tasks essential for the proper
performance of a job. To link KSAOs logically to tasks, a group of SMEs brainstorm the KSAOs needed to perform each task. Once the
list of essential KSAOs has been developed, another group of SMEs is given the list and asked to rate the extent to which each of the
KSAOs is essential for performing the job, including when the KSAO is needed. However, rather than using this process, KSAOs can
also be identified using such structured methods as, for example, the Fleishman Job Analysis Survey (F-JAS), critical incident technique
(CIT), and the Personality-Related Position Requirements Form (PPRF). Each of these techniques will be discussed later in the chapter.
Table 4.3 Example of task analysis ratings
F = frequency; I = importance; CR = combined rating
A draft job description and specification is compiled once the important KSAOs have been identified. The drafts are reviewed with
the manager and SME committee. Recommendations are integrated into the job description and specification and these documents are
finalised.
Step 5: Selecting tests to measure KSAOs
Once the important KSAOs have been identified, the next step is to determine the best methods (such as, for example, interviews, ability
tests, personality tests, assessment centres) to measure the KSAOs needed at the time of hire. These methods, and how to choose them,
are discussed in more detail in chapter 5.
Types of job analysis
As mentioned previously, the purpose of a job analysis is to combine the task demands of a job with one’s knowledge of human
attributes (KSAOs) and produce a theory of behaviour for the job in question. According to Landy and Conte (2004), there are two ways
to approach building that theory. One is called the task-orientated job analysis and the other the worker-orientated job analysis.
Task-orientated job analysis begins with a statement of the actual tasks as well as what is accomplished by those tasks. As a starting
point, the worker-orientated approach focuses on the attributes of the worker necessary to accomplish the tasks. Since worker-orientated
job analyses tend to be more generalised descriptions of human behaviour and behaviour patterns, and are less tightly tied to the
technological aspects of a particular job, they produce data which is more useful for structuring training programmes and giving
feedback to employees in the form of performance evaluation information. Given the volatility that exists in today’s typical workplace
that can make specific task statements less valuable in isolation or even obsolete through technology changes, employers are
significantly more likely to use worker-orientated approaches to job analysis than they did in the past (Landy & Conte, 2004). However,
task-orientated job analysis is regarded as being less vulnerable to potential distorting influences in job analysis data collection than is
the worker-orientated approach (Morgeson & Campion, 1997). The potential distorting influences include such factors as a need on the
part of the employee doing the reporting, commonly referred to as a subject matter expert (SME), to conform to what others report, the desire to make one’s own job look more difficult, attempts to provide the answers that the SME thinks the job analyst wants, and mere carelessness.
Regardless of which approach is taken, the next step in the job analysis is to identify the attributes (KSAOs) that an incumbent needs
for either performing the tasks or executing the human behaviours described by the job analysis. Tests and other assessment techniques
can then be chosen to measure the KSAOs that have been identified. Traditional task-orientated job analysis has often been accepted as a
necessary, although not sufficient, condition for establishing the validity of selection tests (the topic of chapter 5). In other words, while
a sound job analysis will not guarantee that a test would be found valid, the absence of a credible job analysis may be enough to doom
any claim of job relatedness. How can a test be job related if the testing agent does not know what the critical tasks of the job are? In this
regard, it is likely that there will always be a role for some traditional form of job analysis such as a task- or human attributes-based
analysis. However, the newer and more dynamic extensions such as competency modelling, cognitive task analysis, and performance
components to facilitate strategic planning will enhance the results of a job analysis even more (Landy & Conte, 2004).
Collecting job data
In relation to the purpose of job analysis, the type of job data to collect is one of the choices to be made when applying job analysis
(Voskuijl & Evers, 2008). Regardless of the approach the job analyst decides to use, information about the job is the backbone of the
analysis. The more information and the more ways in which the analyst can collect that information, the better the understanding of the
job.
Types of job data
McCormick (1976, cited in Cartwright & Cooper, 2008:140), distinguished the following types of information to be collected:
• Work activities
• Work performance (for example, time taken and error analysis)
• Job context (for example, social context and physical working conditions)
• Machines, tools, equipment, and work aids used
• Job-related tangibles and intangibles such as materials processed and services rendered, and
• Personnel requirements.
Work activities are divided into job-orientated and worker-orientated activities. Job-orientated activities are usually expressed in job
terms and indicate what is accomplished. Worker-orientated activities refer to behaviours performed in work (for example,
decision-making). Personnel requirements include the KSAOs described in Table 4.1. The process of deriving human attributes (such as
the KSAOs) is described as the psychological part of doing job analysis (Sanchez & Levine, 2001). In this regard, Sackett and Laczo
(2003) observe a growing trend toward the incorporation of personality variables in job analysis. The justification for using
personality-based job analysis is found in the fact that several personality dimensions proved to be valid predictors of work performance.
Furthermore, it is suggested that the use of personality-based job analysis increases the likelihood that the most important personality
traits required for a job are identified (Voskuijl & Evers, 2008; Salgado & Fruyt, 2005).
Agent or source of the information
Job incumbents, supervisors and professional job analysts were traditionally the most important sources of job information. However, a
broader range of information agents is required as the boundaries of jobs become less clear-cut. These include, for example, customers
or training experts. Apart from people, devices such as videotapes and other electronic records provided by cameras, tape recorders and computers can also be used as sources of job information (Voskuijl & Evers, 2008). Sanchez and Levine (2001)
emphasise the importance of electronic records of performance (for example, in call centres, the number of calls handled) as reliable
sources of work information. Naturally, each type of resource has its particular strengths and weaknesses; for example, incumbents may
have the most information about the content of a job, but professional job analysts may be more familiar with job analysis methods.
Job analysts generally prefer incumbent ratings because these ratings have high face validity. However, the use of incumbent job
analysis data has several disadvantages, such as the following (Voskuijl & Evers, 2008; Sanchez & Levine, 2000):
•	It takes up the valuable time of large numbers of incumbents.
• Incumbents are not always motivated to rate their jobs conscientiously.
• Rating instructions and the survey format are not always well understood.
• There is no empirical evidence that incumbents are most qualified to ensure valid job information.
Validity and reliability of job analysis data
Voskuijl and Evers (2008) point out that several studies show variability in incumbent ratings that may be unrelated to job content, such
as work attitudes (for example, job satisfaction, organisational commitment, and job involvement – discussed in chapter 7). Differences
within incumbents and between incumbents and others might reflect real differences, for example, employees with longer job tenure
may have more freedom to develop unique patterns or profiles of activities which are all correct. The meta-analysis of Dierdorff and
Wilson (2003) revealed that the rater source did affect the reliability coefficients.
Job analysis outcomes are often the result of subjective (human) judgments which lead to inaccuracy. Two broad categories of
inaccuracy exist, namely social (for example, conformity pressures and social desirability) and cognitive (for example, information
overload and extraneous information) (Morgeson & Campion, 1997). These sources of inaccuracy affect the reliability (the consistency
or stability of a job analysis method or technique) and validity (accuracy) of job analysis data. In more practical terms, reliability refers
to the extent to which the data collected by means of a specific job analysis procedure or technique would consistently be the same if it
was collected again, at a different time, or if different raters were used as sources of information. Validity refers to the accuracy of
inferences made based on the data yielded from the specific job analysis method or technique. It also addresses whether the job analysis
data accurately and completely represents what was intended to be measured (Landy & Conte, 2004). The issue of reliability and
validity is discussed in more depth in chapter 5.
According to Voskuijl and Evers (2008), almost every study on job analysis presents some measure of inter-rater reliability or
intra-rater reliability. Inter-rater reliability refers to consistency across raters, often expressed in intra-class correlations and by means of
pair-wise correlations. Intra-rater reliability refers to a type of test–retest measurement. A meta-analysis conducted by Dierdorff and
Wilson (2003) shows that tasks generally have a higher inter-rater reliability than generalised worker activities. However, task data
showed lower estimates of intra-rater reliability. Professional analysts display higher inter-rater reliabilities than other sources (for
example, incumbents, supervisors, and trained students).
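As a rough illustration of pair-wise inter-rater correlations (the ratings below are invented, and this simple average of Pearson correlations is not the intra-class correlation approach reported in the studies cited), consider the following Python sketch:

from itertools import combinations
from statistics import correlation, mean  # statistics.correlation requires Python 3.10+

# Invented importance ratings (0-2 scale) given by three SMEs to five tasks.
ratings = {
    "sme_1": [2, 1, 2, 0, 1],
    "sme_2": [2, 1, 1, 0, 1],
    "sme_3": [1, 2, 2, 0, 1],
}

# Pairwise Pearson correlations as a rough indication of inter-rater agreement.
pairwise = [correlation(ratings[a], ratings[b])
            for a, b in combinations(ratings, 2)]
print(round(mean(pairwise), 2))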
In terms of validity, Sanchez and Levine (2000) question the meaning of accuracy in terms of the correspondence between job
analysis data and the ‘true’ job characteristics. They argue that it is not possible to assess the true job content and therefore accuracy
cannot be expressed in terms of deviation from an objective and absolute standard. Harvey and Wilson (2000) argue that this view holds
only for highly questionable data collection methods (for example, using inexperienced raters and poorly-anchored scales, and drawing
unverifiable inferences from abstract job dimensions). In their opinion, it is possible to assess jobs correctly and accurately if the analyst
uses the ‘right’ combination of the work or job descriptors and rating scales.
General data collection methods
Choosing a particular method for collecting job data is dependent on the level of specificity required. Specificity of information refers to
the degree of behavioural and technological detail that needs to be provided by the job description items, ranging from specific,
observable, and verifiable to holistic, abstract, and multi-dimensional (Voskuijl & Evers, 2008). That is, should the job analysis break a
job down into very minute, specific behaviours (for example, ‘tilts arm at a 90-degree angle’ or ‘moves foot forward 10 centimetres’), or
should the job be analysed at a more general level (for example, ‘makes financial decisions’, ‘speaks to clients’)? Although most jobs
are analysed at levels somewhere between these two extremes, there are times when the level of analysis will be closer to one end of the
spectrum than the other (Aamodt, 2010). The job- and worker-orientated activities described earlier are respectively highly and
moderately specific. The degree of specificity has an effect on the possibility of cross-job comparisons (Voskuijl & Evers, 2008).
A related decision addresses the issue of formal versus informal requirements. Formal requirements for a secretary may include
typing letters or filing memos. Informal requirements may involve making coffee or collecting the boss’s dry cleaning. Including
informal requirements has the advantage of identifying and eliminating duties that may be illegal or unnecessary. Suppose a job
analysis reveals that a secretary in one department collects the boss’s dry cleaning and takes it to his house. This is an important finding,
because the company may not want this to occur (Aamodt, 2010).
There are various methods for collecting job information. Sackett and Laczo (2003) distinguish between qualitative versus
quantitative methods and taxonomy-based versus blank slate approaches. Taxonomy-based approaches make use of existing taxonomies
of characteristics of jobs, while in blank slate approaches lists of job activities or attributes are generated.
Quantitative and taxonomy-based methods mostly involve standardised questionnaires. In the qualitative and/or blank slate
approaches, interviews with (groups of) incumbents, direct observation of job incumbents, and diaries kept by incumbents are more
appropriate. The more contemporary methods include electronic performance monitoring such as recording of job activities by means of
videotapes or electronic records and cognitive task analysis. Combinations of several methods are also possible. Multi-method
approaches result in more complete pictures of the job (Landy & Conte, 2004). The choice of a specific job analysis method (or
combination of methods) is guided by the purpose and concerns regarding time and cost involved when using a particular method
(Voskuijl & Evers, 2008).
As previously mentioned, an important concern in all methods of job analysis is the potential errors and inaccuracies that can occur and influence the reliability and validity of the information obtained, simply because job analysts, job incumbents, and SMEs are all
human beings (Riggio, 2009). Potential sources of inaccuracy in job analysis range from mere carelessness and poor job analyst training,
to biases such as overestimating or underestimating the importance of certain tasks and jobs, to information overload stemming from the
complexity of some jobs (Morgeson & Campion, 1997). As you will recall from our discussion of research methods in chapter 2, an
important theme for industrial psychologists is to take steps to ensure the reliability and validity of the methods they use in all sorts of
organisational analyses. Nowhere is this more important than in conducting job analyses (Riggio, 2009).
Observation
The observation of jobs helps one to understand not only the jobs in question, but work in general. Observation was perhaps the first
method of job analysis industrial psychologists used. They simply watched incumbents perform their jobs and took notes. Sometimes
they asked questions while watching, and not infrequently they even performed job tasks themselves. Near the end of World War II,
Morris Viteles studied the job of navigators on a submarine. Viteles (1953) attempted to steer the submarine toward the island of
Bermuda. After five not-so-near-misses of 100 miles (160 kilometres) in one direction or another, one frustrated officer suggested that
Viteles raise the periscope, look for clouds, and steer toward them (since clouds tend to form above or near land masses). The vessel
‘found’ Bermuda shortly thereafter (Landy & Conte, 2004).
Observational techniques usually work best with jobs involving manual operations, repetitive tasks, or other easily seen activities.
For example, describing the tasks and duties of a sewing machine operator is much simpler than describing the job of a computer
technician, because much of the computer technician’s job involves cognitive processes involved in troubleshooting computer problems
(Riggio, 2009). In observational analysis, the observer typically takes detailed notes on the exact tasks and duties performed. However,
to make accurate observations, the job analyst must know what to look for.
It is important that the times selected for observation are representative of the worker’s routine, especially if the job requires that the
worker be engaged in different tasks during different times of the day, week, or year. An accounting clerk may deal with payroll
vouchers on Thursdays, spend most of Fridays updating sales figures, and be almost completely occupied with preparing the company’s tax
records during the month of January. One concern regarding observational methods is whether the presence of the observer will in some
way influence workers’ performance. The question is whether workers will perform the task differently because they know that they are
being watched (Riggio, 2009). It is therefore important to supplement observation by interviews which involve talking with incumbents
either at the worksite or in a separate location (Landy & Conte, 2004).
Interviews
Interviews can be open-ended (‘tell me all about what you do on the job’), or they can involve structured or standardised questions
(Riggio, 2009). However, interviews are most effective when structured with a specific set of questions based on observations, other
analyses of the types of jobs in question, or prior discussions with human resource professionals, trainers, or managers knowledgeable
about the jobs (Landy & Conte, 2004). One concern regarding interviews as a technique of job analysis is the fact that any one source of
information can be biased. Therefore, to ensure the reliability of the information obtained from the interviews, the job analyst may want
to get more than one perspective by interviewing the job incumbent, the incumbent’s supervisor, and, if the job is a supervisory one, the
incumbent’s subordinates. The job analyst may also interview several job incumbents or SMEs within a single organisation to get more
reliable representation of the job and to see whether various people holding the same job title in a company actually perform similar
tasks (Riggio, 2009). Group interviews are also referred to as SME conferences (Aamodt, 2010).
Critical incidents and job or work diaries
The critical incident technique (CIT) asks SMEs (subject matter experts who have detailed knowledge about a particular job) to identify
critical aspects of behaviour or performance in a particular job – either through interviews or questionnaires – that have led to success or
failure. The supervisor of a computer programmer may report that in a very time-urgent project, the programmer decided to install a
subroutine without taking time to ‘debug’ it; eventually the entire system crashed because of a flaw in the logic of that one subroutine
(Landy & Conte, 2004). The real value of the CIT is in helping to determine the particular KSAOs that a worker needs to perform a job
successfully. The CIT technique is also useful in developing appraisal systems for certain jobs, by helping to identify the critical
components of successful performance (Riggio, 2009). However, the CIT’s greatest drawback is that its emphasis on the difference
between excellent and poor performance ignores routine duties. The CIT therefore cannot be used as the sole method of job analysis
(Aamodt, 2010).
Job or work diaries require workers and/or supervisors to keep a log of their activities over a prescribed period of time. An
advantage of the job or work diary is that it provides a detailed, hour-by-hour, day-by-day account of the worker’s job. However, one
difficulty of diary methods is that they are quite time consuming, both for the worker who is keeping the diary and for the job analyst
who has the task of analysing the large amount of information contained in the diary (Riggio, 2009).
Questionnaires/surveys
Survey methods of job analysis usually involve the administration of a pencil-and-paper or computerised questionnaire that incumbents
or supervisors (SMEs) often respond to as part of a job analysis. As shown in Table 4.4, these questionnaires include task statements in
the form of worker behaviours or categories of job-related aspects. SMEs are asked to rate each statement from their experience on a
number of dimensions such as frequency of performance, importance to overall job success, and whether the task or behaviour must be
performed on the first day of work or can be learned gradually on the job. Questionnaires also ask SMEs to rate the importance of
various KSAOs for performing tasks or task groups, and may ask the SME to rate work context (Landy & Conte, 2004). Examples of
these questionnaires are discussed as specific job analysis techniques in this chapter.
The survey method has three advantages over the interview method. First, the survey is generally more cost effective since it allows the collection of information from a number of workers simultaneously, which provides various perspectives on a job. Second, survey
methods are considered to be more reliable since they are anonymous and there may be less distortion or withholding of information
than in a face-to-face interview (Riggio, 2009). Third, unlike the results of observations or interviews, the questionnaire responses can
be statistically analysed to provide a more objective record of the components of the job (Landy & Conte, 2004).
Existing data
Information or records such as a previous job analysis for the position or an analysis of a related job are usually available in most large,
established organisations. Such data may also be borrowed from another organisation that has conducted analyses of similar jobs.
However, Riggio (2009) cautions that existing data should always be checked to make sure it conforms to the job as it is currently being
performed and also to determine if the existing data accounts for the inclusion of new technology in the job.
Specific job analysis techniques
In addition to the general methods for conducting job analyses, there are also a number of widely-used specific, standardised analysis
techniques. These methods tend to provide information on one of four specific factors that are commonly included in a job description:
worker activities, tools and equipment used, work environment, and KSAOs. We will consider a number of these specific techniques.
Table 4.4 Example of job-related items in a job analysis questionnaire (Adapted from Riggio, 2009:70)
POSITION ANALYSIS QUESTIONNAIRE (PAQ)
1. INFORMATION INPUT
1.1 Sources of job information
Rate each of the following items in terms of the extent to which it is used by the worker as a source of information in performing his or
her job.
Code	Extent of use
N	Does not apply
1	Nominal/very infrequent
2	Occasional
3	Moderate
4	Considerable
5	Very substantial
1 ________ Written material (books, reports, office notes, articles, job instructions, signs, etc)
2 ________ Quantitative materials (materials which deal with quantities or amounts, such as graphs, accounts, specifications, tables of
numbers, etc)
3 ________ Pictorial materials (pictures or picture-like materials used as sources of information, for example, drawings, blueprints,
diagrams, maps, tracings, photographic films, x-ray films, TV pictures, etc)
4 ________ Patterns/related devices (templates, stencils, patterns, etc., used as sources of information when observed during use; do
not include here materials described in item 3 above)
5 ________ Visual displays (dials, gauges, signal lights, radarscopes, speedometers, clocks, etc)
Methods providing general information about worker activities
Several questionnaires have been developed to analyse jobs at a more general level. This general analysis saves time and money, and
allows jobs to be more easily compared with one another than is the case if interviews, observations, job participation, or task analysis is
used (Aamodt, 2010). Some of the questionnaires include the Position Analysis Questionnaire (PAQ), the Job Structure Profile (JSP),
the Job Elements Inventory (JEI), and Functional Job Analysis (FJA).
The Position Analysis Questionnaire (PAQ), developed at Purdue University by McCormick, Jeanneret and Mecham (1972), is
based on the worker-orientated approach. This means that generalised worker behaviours (behaviours that are involved in work
activities, for example, advising) are rated and that the instrument has a moderate level of behaviour specificity. The term ‘generalised’
refers to the fact that the elements are not job specific, in order to make it possible to compare different jobs (Voskuijl & Evers, 2008).
In a study conducted by Arvey and Begalla (1975), the PAQ was used to analyse the job of a homemaker. It was found that a
homemaker’s job is most similar to the jobs of a police officer, a fire-fighter, and an airport maintenance chief. However, although
research speaks favourably about the reliability of the PAQ, it also provides cause for concern, because the PAQ appears to yield the
same results regardless of how familiar the analyst is with the job (Aamodt, 2010).
The PAQ contains 194 items organised into six main dimensions:
• Information input
• Mental processes
• Work output
• Relationships with other persons
• Job context, and
• Other job-related variables such as work schedule, pay, and responsibility (Aamodt, 2010).
Table 4.4 provides examples of the typical items included in the information input dimension of the PAQ. The PAQ is regarded as one
of the most widely used and thoroughly researched methods of job analysis (Riggio, 2009). It is also inexpensive and takes relatively
little time to use (Aamodt, 2010).
The Job Structure Profile (JSP) is a revised version of the PAQ developed by Patrick and Moore (1985), which includes changes to
item content and style, and new items to increase the discriminatory power of the intellectual and decision-making dimensions. In addition, it emphasises its use by a job analyst, rather than the incumbent. However, although shown to be reliable, further research is
needed before it is known whether the JSP is a legitimate improvement on the PAQ (Aamodt, 2010).
The Job Elements Inventory (JEI), developed by Cornelius and Hakel (1978), was designed as an alternative to the PAQ. It is also
regarded as a better replacement for the difficult-to-read PAQ. The instrument contains 153 items and has a readability level
appropriate for an employee with only a tenth-grade education (Aamodt, 2010).
Although the job element approach to job analysis is person-orientated, some of the elements can be considered to be
work-orientated (for example, an element for the job police officer is ‘ability to enforce laws’) because of their job specificity. The Job
Element Method (JEM) looks at the basic KSAOs that are required to perform a particular job. These KSAOs constitute the basic job
elements that show a moderate or low level of behavioural specificity. The elements for each job are gathered in a rather unstructured
way in sessions of SME panels and therefore cross-job comparisons are very limited (Voskuijl & Evers, 2008).
Functional Job Analysis (FJA) resulted from the development of the United States of America’s Dictionary of Occupational Titles
(DOT) (Voskuijl & Evers, 2008). FJA was designed by Fine (1955) as a quick method that could be used by the United States
Department of Labor to analyse and compare the characteristics, methods, work requirements, and activities required to perform almost all jobs
in the United States. Jobs analysed by FJA are broken down into the percentage of time the incumbent spends on three functions: data
(information and ideas), people (clients, customers, and co-workers), and things (machines, tools, and equipment). An analyst is given
100 points to allot to the three functions. The points are usually assigned in multiples of 5, with each function receiving a minimum of 5
points. Once the points have been assigned, the highest level at which the job incumbent functions is then chosen from the chart shown
in Table 4.5.
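To make the allocation rule concrete, the short Python sketch below checks a set of data/people/things ratings against the constraints described above (100 points in total, assigned in multiples of 5, with at least 5 points per function). The claims-processing job and its percentages are hypothetical examples, not drawn from the text.

# Illustrative sketch of the FJA point-allocation rule described above.
# The example ratings are hypothetical, not taken from the textbook.

def validate_fja_allocation(points: dict[str, int]) -> None:
    """Check an FJA rating against the allocation rules:
    100 points in total, multiples of 5, minimum of 5 per function."""
    required = {"data", "people", "things"}
    if set(points) != required:
        raise ValueError(f"Ratings must cover exactly {required}")
    if sum(points.values()) != 100:
        raise ValueError("Points must sum to 100")
    for function, value in points.items():
        if value < 5 or value % 5 != 0:
            raise ValueError(f"'{function}' must be a multiple of 5 and at least 5")

# Hypothetical analysis of a claims-processing job:
# mostly data work, some people contact, little use of machines.
validate_fja_allocation({"data": 70, "people": 25, "things": 5})
print("Allocation is valid")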
Methods providing information about tools and equipment
The Job Components Inventory (JCI) was developed by Banks, Jackson, Stafford & Warr (1983) for use in England in an attempt to take
advantage of the PAQ’s strengths while avoiding some of its problems. The JCI consists of more than 400 questions covering five major
categories: tools and equipment, perceptual and physical requirements, mathematical requirements, communication requirements, and
decision-making and responsibility. It is the only job analysis method containing a detailed section on tools and equipment (Aamodt,
2010). Research further indicates that it is reliable (Banks & Miller, 1984), can differentiate between jobs (Banks et al, 1983), can
cluster jobs based on their similarity to one another (Stafford, Jackson & Banks, 1984), and unlike the PAQ, is affected by the amount of
information available to the analyst (Aamodt, 2010; Surrette, Aamodt & Johnson, 1990).
Methods providing information about KSAOs/competencies
In the 1980s the task-based DOT was judged to be no longer able to reflect changes in the nature and conditions of work. The
Occupational Information Network (O*NET) (Peterson et al, 1999) is an automated job classification system that replaces the DOT.
O*NET focuses in particular on cross-job descriptors. It is based on a content model that consists of five categories of job descriptors
needed for success in an occupation (Reiter-Palmon et al, 2006):
• Worker requirements (for example, basic skills, cross-functional skills)
• Worker characteristics (for example, abilities, values)
• Experience requirements (for example, training)
• Occupational requirements (for example, generalised work activities, work context), and
• Occupation-specific requirements (for example, tasks, duties).
The O*NET also includes information about such economic factors as labour demand, labour supply, salaries and occupational trends.
This information can be used by employers to select new employees and by applicants who are searching for careers that match their
skills, interests, and economic needs (Aamodt, 2010).
O*NET is a major advancement in understanding the nature of work, in large part because its developers understood that jobs can be
viewed at four levels: economic, organisational, occupational, and individual. As a result, O*NET has incorporated the types of
information obtained in many job analysis techniques (Reiter-Palmon et al, 2006).
In addition to information about tools and equipment, the Job Components Inventory (JCI) which was discussed earlier also provides
information about the perceptual, physical, mathematical, communication, decision-making, and responsibility skills needed to perform
the job (Aamodt, 2010).
Table 4.5 Levels of data, people and things (FJA) (Aamodt, 2010:52)
The Fleishman Job Analysis Survey (F-JAS) is a taxonomy-based job analysis method based on more than 30 years of research
(Fleishman & Reilly, 1992). Through a combination of field and laboratory research, Fleishman and his associates developed a
comprehensive list, or taxonomy (an orderly, scientific system of classification), of 52 abilities. These can be divided into the broad
categories of verbal, cognitive, physical, and sensory or perceptual-motor abilities. Table 4.6 provides some examples of these abilities.
Fleishman’s list of abilities can be used for many different applied purposes. It is an effective way to analyse the most important
abilities in various occupations. It can also be used to determine training needs, recruiting needs, and even work design. Once an analyst
knows the basic abilities that can be brought to the job, it is much easier to identify which of those abilities are truly important (Landy &
Conte, 2004).
Table 4.6 Excerpts from abilities in Fleishman’s taxonomy (Adapted from Landy & Conte, 2004:83)
[Reproduced with permission of The McGraw-Hill Companies.]
The F-JAS requires incumbents or job analysts to view a series of abilities and to rate the level of ability needed to perform the job.
These ratings are performed for each of the 52 abilities and areas of knowledge in the taxonomy. The F-JAS is commercially available and easy to use by
incumbents or trained analysts. It is furthermore supported by years of research (Aamodt, 2010).
The Job Adaptability Inventory (JAI) is a 132-item inventory developed by Pulakos, Arad, Donovan and Plamondon (2000) which
taps the extent to which a job incumbent needs to adapt to situations on the job. The JAI is relatively new and it has excellent reliability.
It has also been shown to distinguish among jobs (Pulakos et al, 2000). The JAI has eight dimensions (Aamodt, 2010):
• Handling emergencies or crisis situations
• Handling work stress
• Solving problems creatively
• Dealing with uncertain and unpredictable work situations
• Learning work tasks, technologies and procedures
• Demonstrating interpersonal adaptability
• Demonstrating cultural adaptability
• Demonstrating physically-orientated adaptability.
Historically, job analysis instruments ignored personality attributes and concentrated on abilities, skills, and less frequently, knowledge.
Guion and his colleagues (Guion, 1998; Raymark, Schmit & Guion, 1997) developed a commercially available job analysis instrument,
the Personality-Related Position Requirements Form (PPRF). The PPRF is devoted to identifying personality predictors of job
performance. The instrument is not intended to replace other job analysis devices that identify knowledge, skills or abilities, but rather to
supplement job analysis by examining important personality attributes in jobs (Landy & Conte, 2004).
The PPRF consists of 107 items tapping 12 personality dimensions that fall under the ‘Big 5’ personality dimensions (openness to
experience, conscientiousness, extroversion, agreeableness, and emotional stability). However, Raymark et al (1997) departed from the
‘Big 5’ concept since they found that the five factors were too broad to describe work-related employee characteristics. As shown in
Table 4.7, they defined twelve dimensions (for example, conscientiousness was covered by the sub-scales: general trustworthiness,
adherence to a work ethic, and thoroughness and attentiveness) (Voskuijl & Evers, 2008). Though more research is needed, the PPRF is
reliable and shows promise as a useful job analysis instrument for identifying the personality traits necessary to perform a job (Aamodt,
2010). The PPRF is low on behaviour specificity and high on cross-job comparability (Voskuijl & Evers, 2008).
Table 4.7 Twelve personality dimensions covered by the PPRF (Adapted from Landy & Conte, 2004:193)
[Reproduced with permission of The McGraw-Hill Companies.]
Computer-based job analysis
Computer-based job analysis systems use the same taxonomies and processes across jobs, which makes it easier to understand job
similarities, job families and career paths. In addition, these systems provide a number of advantages, including:
• Time and convenience to the employer since SMEs need not be assembled in one spot at one time but work from their desks
at their own pace and submit reports electronically
• Efficiency with which the SME system can create reports that serve a wide range of purposes, from individual goal-setting
and performance feedback to elaborate person–job matches to support selection and placement strategies, and
• Facilitating vocational counselling and long-term strategic human resource planning in the form of replacement charts for
key positions (Landy & Conte, 2004).
The Work Profiling System (WPS) (Saville & Holdsworth, 2001) is an example of a computerised job analysis instrument by means of
which the data collection and interpretation process of job analysis can be streamlined, reducing costs to the organisation, minimising
distractions to the SMEs and increasing the speed and accuracy of the job analysis process.
The WPS consists of three different questionnaires for the following groups of employees: managerial and professional; service and
administrative; manual and technical. Each questionnaire consists of a job content part, where the main tasks are established, and a job
context part, where, for example, the physical environment, responsibility for resources, and remuneration are explored. To use the
WPS, each SME from whom information is sought fills out an onscreen job analysis questionnaire. SMEs respond using scales to
indicate the typical percentage of their time spent on a task as well as the relative importance of the task. A separate work context
section covers various aspects of the work situation. The WPS human attribute section covers physical and perceptual abilities, cognitive
abilities, and personality and behavioural style attributes (Landy & Conte, 2004).
The WPS can provide job descriptions, people specifications, assessment methods that can be used to assess candidates for a
vacancy, an individual development planner for a job incumbent, a tailored appraisal document for the job, and a person–job match,
which can match candidates against the key requirements of the job (Schreuder et al, 2006).
Reflection 4.5
Consider the advantages and shortcomings of each of the following methods of collecting job analysis information: observation,
performing actual job tasks (job participation), interviews, critical incident identification, work diaries, job analysis questionnaires,
and computer-based job analysis techniques.
Assuming that you would use three methods in a single project to collect job analysis information, which three of the methods
listed above would you choose? In what order would you arrange these methods (that is, which type of data would you gather first,
which second, and which last)? Why would you use this order?
The South African Organising Framework for Occupations (OFO) and job analysis
The Organising Framework for Occupations (OFO) is an example of a skills-based approach to job analysis. As discussed in chapter 3,
the focus on skills has become more important in today’s business environment as a result of rapid technological change and skills
shortages nationally and globally. Since national and global imperatives have indicated that identifying occupational skills requirements
is critical to identifying and addressing the most pressing skills shortages, traditional job analysis approaches that focus purely on static tasks
and work behaviours are regarded as being no longer appropriate. In contrast, focusing on occupation-specific skills in addition to the
traditional job analysis approach is seen to provide a more dynamic and flexible approach to job analysis in the contemporary
employment context. Occupation-specific skills involve the application of a broader skill in a specific performance domain.
Furthermore, these skills are generally limited to one occupation or a set of occupations (such as a job family), but are not designed to
cut across all jobs. However, these more specific skills can be utilised across jobs when jobs include similar occupation-specific skills
(Reiter-Palmon et al, 2006). To allow for comparisons between jobs, and therefore flexibility and efficiency in training design, the
occupation-specific information is usually anchored in a broader, more general and theoretical context. The South African OFO was
specifically designed to provide this broader context but not job-specific information.
The Department of Labour, with the assistance of German Technical Co-operation (GTZ), introduced the OFO in February 2005 to
align all skills development (education, training and development or learning) activities in South Africa. The framework is based on
similar international development work done by the Australian Bureau of Statistics (ABS) and Statistics New Zealand, which led to an
updated classification system, the Australian and New Zealand Standard Classification of Occupations (ANZSCO). Inputs from
stakeholders in South Africa were used to adapt the framework and its content to the South African context (RainbowSA, 2010).
Although not developed as a job analysis method but rather as a broader framework for the South African Occupational Learning
System (OLS), which is discussed in chapter 10, the OFO is regarded as a valuable source of information about occupations and jobs
that are linked to scarce and critical skills and jobs in the South African context. The OFO became operational in April 2010.
Application of the OFO
The OFO provides an integrated framework for storing, organising and reporting occupation-specific information not only for statistical
but also for client-orientated applications, such as identifying and listing scarce and critical skills, matching job seekers to job vacancies,
providing career information, and registering learnerships. This information is generated by underpinning each occupation
with a comprehensive competence or occupational profile, which is drawn up by a Committee (or Community) of Expert Practice
(CEP). A Committee or Community of Expert Practice is a group of practitioners currently active in a specific occupation. Where a
professional body exists for the occupation, its members constitute the CEP.
The occupational profiles are also used to inform organisations’ job and post profiles, which simplifies, inter alia, conducting job
analyses, skills audits, and performance evaluations. The structure of the OFO also guides the creation of Career Path Frameworks and
related learning paths. Personal profiles can be linked to job profiles and job profiles to occupational profiles so that workforce or human
resource planning is streamlined and unpacked down to the level of personal development plans that address the development or
up-skilling of scarce and critical skills in the workplace (RainbowSA, 2010).
OFO structural elements and underlying concepts
The OFO is an occupation-specific skills-based coded classification system, which encompasses all occupations in the South African
context. The classification of occupations is based on a combination of skill level and skill specialisation which makes it easy to locate a
specific occupation within the framework. It is important to note that a job and an occupation are not the same. A job is seen as a set of
roles and tasks designed to be performed by one individual for an employer (including self-employment) in return for payment or profit.
An occupation is seen as a set of jobs or specialisations whose main tasks are characterised by such a high degree of similarity that
they can be grouped together for the purposes of the classification. The occupations identified in the OFO therefore represent a category
that could encompass a number of jobs or specialisations. For example, the occupation ‘General Accountant’ would also cover the
specialisations ‘Financial Analyst’ and ‘Insolvency Practitioner’.
Identified occupations are classified according to two main criteria: skill level and skill specialisation, where skill is used in the
context of competency rather than a description of tasks or functions. The skill level of an occupation is related to competent
performance of tasks associated with an occupation. Skill level is an attribute of an occupation, not of individuals in the workforce, and
can operationally be measured by the level or amount of formal education and/or training (learning), the amount of previous experience
in a related occupation, and the amount of on-the-job training usually required to perform the set of tasks required for that occupation
competently. It is therefore possible to make a comparison between the skill level of an occupation and the normally required
educational level on the National Qualifications Framework (NQF).
The skill specialisation of an occupation is a function of the field of knowledge required, tools and equipment used, materials
worked on, and goods or services provided in relation to the tasks performed. Under some occupations, a list of alternative titles has
been added. The purpose of this is to guide users of the OFO to identify the relevant occupation under which to capture data
(RainbowSA, 2010). As shown in Figure 4.3, within the current OFO there are 8 major groups. These major groups comprise 43
sub-major groups, 108 minor groups, 408 unit groups, and 1 171 occupations. We have included in the example the occupation of
industrial psychologist as outlined in the OFO. Table 4.8 shows how the role of a psychologist (including the occupation of an industrial
psychologist) differs from that of the human resource professional, as described in the OFO.
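To illustrate how a coded, nested classification of this kind can be worked with, the brief sketch below unpacks a four-digit occupation code into its successive groups. It assumes, purely for illustration, that each additional digit of the code identifies the next level of the hierarchy (as in ISCO-type classifications); the OFO documentation itself should be consulted for the actual coding rules.

# Minimal sketch: unpacking a digit-nested occupation code into its groups.
# Assumption (not from the source): each successive digit identifies the next
# level of the hierarchy, as in ISCO-type classifications.

def unpack_occupation_code(code: str) -> dict[str, str]:
    """Split a 4-digit occupation code into its nested group codes."""
    if len(code) != 4 or not code.isdigit():
        raise ValueError("Expected a 4-digit occupation code, e.g. '2723'")
    return {
        "major group": code[:1],
        "sub-major group": code[:2],
        "minor group": code[:3],
        "unit group": code,
    }

# The OFO code given for psychologists in Table 4.8 is 2723 (skill level 5)
print(unpack_occupation_code("2723"))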
An example of how useful the OFO is comes from a Human Sciences Research Council (HSRC) analysis of skills demand as
reflected in newspaper advertisements over a three-year period. Three national newspapers were analysed and 125 000 job
advertisements found, advertising 28 000 unique job titles. Using the OFO, the HSRC could isolate 1 200 unique occupations from the
28 000 job titles and 125 000 advertisements (RainbowSA, 2010).
COMPETENCY MODELLING
Competency modelling is viewed as an extension rather than a replacement of job analysis. Just as job analysis seeks to define jobs and
work in terms of the match between required tasks and human attributes, competency modelling seeks to define organisational units
(that is, larger entities than simply jobs or even job families) in terms of the match between the goals and missions of those units and the
competencies required to meet those goals and accomplish those missions. From the perspective of advocates of competency modelling,
the notion of competencies is seen to be rooted in a context of organisational goals rather than in an abstract taxonomy of human
attributes (Landy & Conte, 2004).
Figure 4.3 Example of structural elements of the South African OFO (Stuart (RainbowSA), 2010)
[Reproduced with the permission of the author.]
Reflection 4.6
• Review chapter 1 and study the objectives of I/O psychology. Now study the tasks or skills of the psychologist shown in
Table 4.8 (including those of the industrial psychologist shown in Figure 4.3). How does the job of an industrial
psychologist differ from that of a human resource professional?
• Review the objectives of personnel psychology as a sub-field of I/O psychology described in chapter 1 and the
employment process in chapter 3. How do the tasks or skills of the industrial psychologist complement those of the human
resource professional in the employment process discussed in chapter 3?
• Obtain a job description of a human resource professional from any organisation. Does the job description reflect the tasks
or skills outlined on the OFO? Which of the tasks overlap with those of the industrial psychologist?
• Do you think that the OFO will be useful in developing a job description? If your answer is in the affirmative, in what
manner? Provide reasons for your answer.
Table 4.8 Example of tasks and skills descriptions on the OFO (Stuart (RainbowSA), 2010)
[Reproduced with the permission of the author.]
Psychologists
(Skill level 5) – OFO code: 2723
Skill specialisation: Psychologists investigate, assess and provide treatment and counselling to foster optimal personal, social, educational and occupational adjustment and development.
Tasks or skills:
• Administering and interpreting diagnostic tests and formulating plans for treatment
• Collecting data about clients and assessing their cognitive, behavioural and emotional disorders
• Collecting data and analysing characteristics of students and recommending educational programmes
• Conducting research studies of motivation in learning, group performance and individual differences in mental abilities and educational performance
• Conducting surveys and research studies on job design, work groups, morale, motivation, supervision and management
• Consulting with other professionals on details of cases and treatment plans
• Developing interview techniques, psychological tests and other aids in workplace selection, placement, appraisal and promotion
• Developing, administering and evaluating individual and group treatment programmes
• Formulating achievement, diagnostic and predictive tests for use by teachers in planning methods and content of instruction
• Performing job analyses and establishing job requirements by observing and interviewing employees and managers

Human resource professionals
(Skill level 5) – OFO code: 2231
Skill specialisation: Human resource professionals plan, develop, implement and evaluate staff recruitment, assist in resolving disputes by advising on workplace matters, and represent industrial, commercial, union, employer and other parties in negotiations on issues such as enterprise bargaining, rates of pay, and conditions of employment.
Tasks or skills:
• Arranging for advertising of job vacancies, interviewing, and testing of applicants, and selection of staff
• Arranging the induction of staff and providing information on conditions of service, salaries and promotional opportunities
• Maintaining personnel records and associated human resource information systems
• Overseeing the formation and conduct of workplace consultative committees and employee participation initiatives
• Providing information on occupational needs in the organisation, compiling workplace skills plans and training reports, and liaising with Sector Education and Training Authorities (SETAs) regarding Learning Agreements and Learning Contracts in accordance with skills development legislation
• Providing advice and information to management on workplace relations policies and procedures, staff performance and disciplinary matters
• Providing information on current job vacancies in the organisation to employers and job seekers
• Receiving and recording job vacancy information from employers, such as details about job description and wages and conditions of employment
• Studying and interpreting legislation, awards, collective agreements and employment contracts, wage payment systems and dispute settlement procedures
• Undertaking and assisting in the development, planning and formulation of enterprise agreements or collective contracts, such as productivity-based wage adjustment procedures, workplace relations policies and programmes, and procedures for their implementation
• Undertaking negotiations on terms and conditions of employment, and examining and resolving disputes and grievances
However, it is still a problem that definitions of the term ‘competencies’ or ‘competency’ are not unequivocal and are sometimes
even contradictory (Voskuijl & Evers, 2008). McClelland (1973) introduced the term as a predictor of job performance because he
doubted the predictive validity of cognitive ability tests. He proposed to replace intelligence testing with competency testing. Though
McClelland (1973) did not define the term competency, he made it explicitly clear that the term did not include intelligence. Today, we
know that general mental ability is the most valid predictor of job performance (Voskuijl & Evers, 2008; Schmidt & Hunter, 1998).
However, in spite of the paucity of empirical evidence that competencies add something to the traditional concepts in the prediction or
explanation of job success (KSAOs), competencies, competency modelling (the term mostly used in the USA), and competency
frameworks (the term mostly used in the UK) are very popular.
In general, competency frameworks or competency models describe the competencies required for effective or excellent
performance on the job. As shown in Table 4.10, the models mostly consist of lists of competencies (typically 10 to 20), each with a
definition and examples of specific behaviours. Competency models are often based on content analysis of existing performance models
and managerial performance taxonomies (Voskuijl & Evers, 2008). In addition, organisation-, industry-, and job-specific competencies
are identified. As in job analysis, there are several methods for identifying competencies (Voskuijl & Evers, 2008; Garavan & McGuire,
2001; Robinson et al, 2007):
• Direct observation
• Critical incident technique
• SME panels or teams or focus groups
• Questionnaires
• Repertory grid technique, or
• Conventional job analysis.
In South Africa, the terms competency modelling and competency framework are both adopted by some companies. However,
considering new legislative developments in the South African organisational context such as the Occupational Learning System (OLS),
the National Qualifications Framework (NQF), and the new Organising Framework for Occupations (OFO) (the topics of chapter 10) –
all of which will have a major influence on the education, training and development of people in South African workplaces – it can be
foreseen that companies will in future endeavour to align their competency modelling approaches with the national occupation-specific
skills-based approach.
Defining competencies
Boyatzis’ (1982) definition of competency is regarded as being the most frequently cited. Boyatzis describes competency as an
underlying characteristic of a person which results in effective and/or superior performance of a job. This definition is based on the data
of McClelland (1973) and refers to key success areas (KSAs), motives, traits and aspects of one’s self-image or social roles of
individuals related to job performance. Garavan and McGuire (2001) define competencies in narrower terms by regarding them as
combinations of KSAOs that are needed to perform a group of related tasks or bundles of demonstrated KSAOs.
Other authors refer more explicitly to behaviours. As illustrated in Figure 4.4, Zacarias and Togonon (2007:3) define a competency
as ‘the sum total of observable and demonstrated skills, knowledge and behaviours that lead to superior performance’, while Tett et al
(2000:215) regard a competency as future-evaluated work behaviour by describing it as an ‘identifiable aspect of prospective work
behaviour attributable to the individual that is expected to contribute positively and/or negatively to organisational effectiveness’.
However, other authors such as Kurz and Bartram (2002) are of the firm opinion that a competency is not behaviour. Kurz and Bartram
(2002:230), for example, state:
‘A competency is not the behaviour or performance itself, but the repertoire of capabilities, activities, processes and responses available that enable
a range of work demands to be met more effectively by some people than by others. Competence is what enables the performance to occur.’
Voskuijl and Evers (2008) point out that the common thread underlying the above-mentioned definitions of competencies is the focus on
the characteristics of the individual job holder. In this regard, the descriptions of competencies are person-based – an approach that
found its origin in the USA. Within this approach, competencies are conceived as individual characteristics that are related to excellent
or superior performance. This perspective is worker-orientated and is concerned with the input of individuals in terms of behaviour,
skills, or underlying personal characteristics required by job holders.
However, the UK approach to competency frameworks tends to integrate the person-based approach with the job-based approach.
The job-based approach is task centred and focuses on the purpose of the job or occupation, rather than the job holder. Further, the job
competency approach is directed at job output and it generally assumes that the characteristics required by job holders exist when the
standards are met. The UK approach further differentiates between the terms ‘competence’ and ‘competency’. The differences between
the terms parallel the differences between the job-based versus the person-based perspectives. In the UK (as in South Africa), in
particular, the term ‘competence’ is generally associated with the job-based approach, especially in the industry and educational sectors.
Person-based competency frameworks appear to be more relevant in the area of management and professionals, especially in selection
and assessment (Voskuijl & Evers, 2008).
Similarly, in South Africa, the concept of competence refers to meaningful skills related to specific occupations, as described in the
OFO. As previously mentioned, the term skill is used in the context of competency rather than as a description of tasks or functions. The
skill level of an occupation is related to the competent performance of tasks associated with an occupation. The OFO identifies
competency and professional requirements through six dimensions: skills, knowledge, qualification, experience, professional
registration, and disability. Competence in occupational learning is demonstrated against the three learning components: a knowledge
and theory standard, a practical skills standard, and a work experience standard.
Figure 4.4 Definition of a competency (Zacarias & Togonon, 2007:3)
It is important to note that the OFO is aimed at creating an OLS linked with a National Career Path Framework (NCPF) to address
the skills shortages in South Africa rather than addressing specific strategic business goals of companies. Considering that the education,
training and development of a workforce have an effect on companies’ capability to sustain their competitive edge in a global business
market, the OFO provides a valuable framework of occupation-specific skills that can be used as an input to job analysis and
competency modelling practices.
Drawbacks and benefits of competency modelling
Most of the criticism against the use of competencies concerns the validity and theoretical background of the concept (Voskuijl &
Evers, 2008; Garavan & McGuire, 2001). Practical disadvantages include aspects related to time, costs and effort. For example,
Mansfield (1996) mentions that the oldest competency models were developed for single jobs. These models included extensive
data-gathering (for example, interviews, focus groups, surveys, and observation) with SMEs (for example, managers, job holders, and
customers). The competencies were distilled from the collected data into lists of only ten to twenty skills or traits. Mansfield (1996) refers
further to the ‘one-size-fits-all’ model which defines one set of competencies for a broad category of related jobs, for example, all
managerial jobs. The most important drawback of this model is the impossibility of describing the typical requirements of specific jobs;
therefore it is of limited use in several human resource practices, such as selection and promotion procedures. Another example is the
‘multiple job model’, which presupposes, for example, prior experience in building competency models, and the existence of many single-job
models and of consultants specialised in competency work. If such conditions are met, it may be possible to develop a competency model
for a specific job in a quick, low-cost way (Voskuijl & Evers, 2008).
However, the strength of the behavioural competency approach seems to be the use of criterion samples. Sparrow (1997) argues that
the approach focuses on those behaviours that have proved to result in successful performance in a sample of job holders who have had
success in their jobs. Although behavioural competencies have been reported to be useful in the areas of recruitment and selection,
career development, performance management and other human resource processes, these claims still need to be empirically verified.
Furthermore, although several weaknesses of the use of competences in the context of management development have been pointed out,
the competence approach concentrates on what managers and workers actually do, rather than on beliefs about what they do (Landy &
Conte, 2004).
A study conducted by members of the Job Analysis and Competency Modelling Task Force (Schippman et al, 2000) showed a
superior overall evaluation of job analysis in comparison to competency modelling. The Task Force conducted a literature search and
interviewed thirty-seven subject matter experts, such as human resource consultants, former presidents of the US Society for Industrial
and Organizational Psychology, leaders in the area of competency modelling, and industrial psychologists who represent a traditional
job analysis perspective. The sample of experts represented advocates as well as opponents of either job analysis or competency
modelling. The study revealed that, on the evaluative criteria used, job analysis demonstrates medium/high to high rigour while
competency modelling demonstrates low/medium to medium rigour; the exception is competency modelling’s focus on the link
between people’s competencies and business goals and strategies. The evaluative criteria included the method of investigation, the
procedures for developing descriptor content, and the detail and content of the descriptors, each rated according to the level of rigour
with which it was practised (Voskuijl & Evers, 2008).
Although job analysis is considered to be more psychometrically sound than competency modelling, the conclusion when contrasting
the two approaches is that integrating the strengths of both could enhance the quality of the
results (Cartwright & Cooper, 2008; Lievens, Sanchez & De Corte, 2004). A study by Siddique (2004) indicated that a company-wide
policy of job analysis is an important source of competitive advantage. He found a relationship between frequency or regularity of
conducting job analysis and organisational performance (for example, administrative efficiency, quality of organisational climate,
financial performance, and overall sales growth). The relationship was stronger when the job analysis approach was competency
focused. By that Siddique meant approaches that place greater emphasis on characteristics of employees considered essential for
successful job performance (for example, motivation, adaptability, teamwork orientation, interpersonal skills, innovative thinking, and
self-motivation).
Phases in developing a competency model
Designing a competency model generally involves the following broad phases:
• Competency identification
• Competency definition, and
• Competency profiling.
Competency identification involves the identification of key competencies that an institution needs in order to deliver on its strategy
effectively. This phase involves a thorough assessment of the organisation’s directions and the internal processes necessary to support its
strategic initiatives. The output of this phase is a complete list of required competencies.
Once the key competencies have been identified, definitions are crafted and specific behavioural manifestations of the competencies
are determined. The behavioural requirements can be developed after a series of interviews with management and key staff. In this
phase, competencies may also be differentiated into proficiency levels. A competency directory should be produced at the end of this
phase.
Competency profiling occurs when competencies are identified for specific jobs within an institution. The mix of competencies will
depend on the level of each position. The resulting job profiles are then tested and validated for accuracy.
Steps in developing a competency model
Typically, the process steps are as follows:
• Assemble the Competency Process Team.
• Identify the key business processes.
• Identify competencies.
• Define proficiency levels.
• Develop job and competency profiles.
• Weight competencies.
• Apply the competency weighting.
• Validate and calibrate results.
Assemble the Competency Process Team
In general, the team should consist of an outside consultant or consultants, and representatives of the organisation (both human capital
and business) undertaking the competency process (Competency Task Team or Steering Committee). It is important that the team
comprises people who have deep knowledge of the institution – in particular, key institutional processes – and people who have the
authority to make decisions. The participation of the CEO is critical in sending the right message to the rest of the team regarding the
significance of the undertaking.
Identify the key business processes
In order to guide the competency model towards strategic success, all key processes, or those critical to the success of the business,
should be identified and unpacked through process mapping. In this regard, the exercise would also be influenced by the
organisation’s balanced scorecard requirements (see Table 4.9).
Identify competencies
It is important to allocate competencies (as shown in Table 4.10) in terms of the various categories in order to prioritise the focus of
competency modelling.
Table 4.9 The balanced scorecard
What exactly is a balanced scorecard?
The balanced scorecard is a managerial accounting technique developed by Robert Kaplan and David Norton that seeks to reshape
strategic management. It is a process of developing goals, measures, targets and initiatives from four perspectives: financial, customer,
internal business process, and learning and growth. According to Kaplan and Norton (1996), the measures of the scorecard are derived
from the company’s strategy and vision. More cynically, and in some cases realistically, a balanced scorecard attempts to translate the
sometimes vague, pious hopes of a company’s vision and mission statement into the practicalities of managing the business better at
every level.
To embark on the balanced scorecard path, an organisation must know (and understand) the following:
• The company’s mission statement
• The company’s strategic plan/vision
• The financial status of the organisation
• How the organisation is currently structured and operating
• The level of expertise of employees
• The customer satisfaction level.
Table 4.10 Categories of competencies
Key competence: The vital few competencies required for business success from a short- and long-term perspective.
Strategic competence: Competence related to business success from a long-term perspective.
Critical competence: Competence related to business success from a short-term perspective.
Declining competence: Competence that will be phased out or shifted according to market-driven changes.
Obsolete competence: Competence that will be phased out or shifted in accordance with market-driven changes.
Core competence:
• A special skill or technology that creates customer value which can be differentiated.
• It is difficult for competitors to imitate or procure.
• It enables a company to access a wide variety of seemingly unrelated markets by combining skills and technologies across traditional business units.
In defining the various categories of competencies, the KSAs, emerging competencies, and job inputs and outputs need to be
considered (see Table 4.11).
Define proficiency levels
In defining and allocating proficiency levels (shown in Tables 4.13 to 4.15), various scales could serve as a guiding framework,
depending on the most suitable, valid and fair criteria to promote the strategic impact of this application.
Proficiency levels refer to the competence (KSAs) level required to do the job (Basic, Intermediate, and Advanced) and the
descriptors that are provided for each defined level. Variations in proficiency levels could include scales based on the level of work
criteria application (1, 2 and 3, or Basic, Intermediate, and Advanced) and correlation with existing performance management scales of
proficiency.
Table 4.11 Defining competencies
Knowledge: What the job requires the incumbent to know (bodies of knowledge) to perform a given task successfully, or knowledge that an individual brings to the organisation through previous job or work experiences, formal qualifications, training, and/or self-study
Skills: The practical know-how (the ‘ability to do’) required for a job, or the abilities individuals bring into the organisation gained through previous job experiences or training
Attributes: Underlying characteristics of an individual’s personality and traits expressed in behaviour, including physical traits
Emerging competencies: Key success areas (KSAs) required to face new organisational challenges
Inputs: Knowledge, skills and attributes required to achieve job outputs
Job outputs: Job outputs as contained in job profiles
Table 4.12 Excerpts of elements in a competency framework
Table 4.13 Proficiency levels: knowledge
Basic: Can explain day-to-day practices regarding bodies of knowledge, ideas or concepts and why they are needed
Intermediate:
• Past or present knowledge of how to administer a process or enable productivity
• Shares knowledge by assisting others with work-related problems or issues
Advanced:
• In-depth knowledge of the area; considered a Subject Matter Expert (SME)
• Expert knowledge must be backed up with qualifications or significant experience
• Can advise others
• Incorporates new learning into work plans and activities
Table 4.14 Proficiency levels: skills and attributes
Basic: This proficiency level requires functioning in an environment of routine or repetitive situations with detailed rules or instructions, under supervision, with an impact limited to own area of operation.
Intermediate: This proficiency level requires functioning in an environment (context) of similar situations with well-defined procedures, under general supervision. Impact beyond immediate area.
Advanced: This proficiency level requires functioning in an environment of diverse situations with diversified procedures and standards, with an impact wider than a specific section. Can work independently. Is capable of coaching others on the competency, and can apply the competency to a range of new or different situations.
Table 4.15 Example of proficiency level descriptors
Proficiency level A – Learner/Contributor: Has basic knowledge and understanding of the policies, systems and processes required in the competency area. Implements and communicates the programme/system associated with the competency.
Proficiency level B – Specialist: Has greater responsibilities and performs more complex duties in the competency area, requiring analysis and possibly research. Interacts with and influences people beyond own team.
Proficiency level C – Seasoned professional: Recommends and pursues improvements and/or changes in the programme/system. Has a broader range of influence, possibly beyond the institution. Provides coaching and mentoring to others in the area of competency.
Proficiency level D – Authority/Expert: Formulates policies in the competency area. Sets directions and builds culture around this competency. Provides leadership in this area, within both the organisation and the larger industry.
Develop job and competency profiles
This stage involves setting performance standards, profiling the job, identifying core job categories and compiling core job
competencies. Elements of these processes are shown in Table 4.16.
Weight competencies
Another part of the job profiling process is the weighting of competencies. Weights reflect the degree to which a specific competency is
important to a job and by extension to an employee holding the position. This allows for more important skills to be treated as more
significant in gap analysis. Using a range of one to three, the following weights may be given to each of the competencies in a job
profile:
1 Occasionally helpful, of modest value
2 High importance, regularly used to advantage
3 Critical requirement.
The weight of a competency in the job profile is not equivalent to the depth of expertise or breadth of knowledge that is required for the
job, which will already have been captured in the proficiency level. The weighting of competencies should reflect the extent to which a
particular level of competency can spell either success or failure in a given job.
Apply the competency weighting
The weighting is then applied in terms of the critical or core competencies identified; for example, financial control will be of high
importance to a finance executive and would therefore receive a weight of 3.
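As a purely hypothetical illustration of how proficiency levels and the 1 to 3 weights described above might be combined in a gap analysis, the sketch below scores an incumbent against a job profile. The numeric mapping of Basic/Intermediate/Advanced to 1 to 3 and the weighted-gap calculation are simplifying assumptions rather than a prescribed formula from the text.

# Hypothetical sketch of a weighted competency gap analysis.
# Weights follow the 1-3 scale described above; the numeric mapping of
# proficiency levels and the gap formula are simplifying assumptions.

PROFICIENCY = {"Basic": 1, "Intermediate": 2, "Advanced": 3}

# Job profile: competency -> (required proficiency, weight 1-3)
job_profile = {
    "Financial control": ("Advanced", 3),      # critical requirement
    "Coaching others": ("Intermediate", 2),    # high importance
    "Report writing": ("Basic", 1),            # occasionally helpful
}

# Assessed proficiency of a (hypothetical) incumbent
employee = {
    "Financial control": "Intermediate",
    "Coaching others": "Intermediate",
    "Report writing": "Advanced",
}

def weighted_gaps(profile, person):
    """Return the weighted shortfall per competency (0 = requirement met)."""
    gaps = {}
    for competency, (required, weight) in profile.items():
        shortfall = PROFICIENCY[required] - PROFICIENCY[person.get(competency, "Basic")]
        gaps[competency] = max(shortfall, 0) * weight
    return gaps

print(weighted_gaps(job_profile, employee))
# {'Financial control': 3, 'Coaching others': 0, 'Report writing': 0}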
Validate and calibrate results
The final stage of the competency modelling exercise is the validation of the model and the calibration of job profiles. The validation of
the model is done through interviews with key people in the organisation. On-the-job observations of incumbents are also useful in
validating that behaviours captured in the competency model are typical of the job context.
Calibration of job profiles is the process of comparing the competencies across positions or within departments to ensure that the
required proficiency levels reflect the different requirements of the job. To facilitate this process, a benchmark position is usually
chosen; for non-managerial positions this could be the position of credit controller. Once the competency levels have been identified for
the benchmark position, all other positions are calibrated relative to it. For supervisory and managerial positions, a similar approach can
be employed, using the investment fund manager position as the point of reference.
Table 4.16 The job and competency profiling process
CRITERION DEVELOPMENT
As noted earlier in this chapter, performance criteria are among the products that arise from a detailed job analysis, for once the specific
elements of a job are known, it is easier to develop the means to assess levels of successful or unsuccessful performance. Criteria within
the context of personnel selection, placement, performance evaluation, and training are defined as evaluative standards that can be used
as yardsticks for measuring employees’ degree of success on the job (Cascio & Aguinis, 2005; Guion, 1965). As discussed in chapter 1,
the profession of personnel psychology is interested in studying human behaviour in work settings by applying scientific procedures
such as those discussed in chapters 2 and 5.
Furthermore, industrial psychologists generally aim to demonstrate the utility of their procedures in practice, such as, for example,
job analysis, to enhance managers’, workers’ and their own understanding of the determinants of job success. For this purpose, criteria
are often used to measure performance constructs that relate to worker attributes and behaviour that constitute successful performance.
The behaviour that constitutes or defines the successful or unsuccessful performance of a given task is regarded as a criterion that needs
to be measured in a reliable and valid manner (Riggio, 2009).
Steps in criterion development
Guion (1961) outlines a five-step procedure for criterion development, depicted in Figure 4.5:
1. Conducting an analysis of the job and/or organisational needs.
2. Developing measures of actual behaviour relative to expected behaviour as identified in the job and need analysis. These measures
should supplement objective measures of organisational outcomes such as quality, turnover, absenteeism, and production.
3. Identifying criteria dimensions underlying such measures by factor analysis, cluster analysis or pattern analysis.
4. Developing reliable measures, each with high construct validity, of the elements so identified.
5. Determining the predictive validity of each independent variable (predictor) for each one of the criterion measures, taking them one at
a time.
Figure 4.5 Steps in criterion development
Step one (job analysis) has been discussed in detail in this chapter. In step two, the starting point for the industrial psychologist is to have
a general conceptual or theoretical idea of the set of factors that constitute successful performance. Conceptual criteria are ideas or
theoretical constructs that cannot be measured. We can take the selection of a candidate for a vacancy as an example. Conceptual criteria
would be what we see in our mind’s eye when we think of a successful employee (one who can do the job successfully and thereby
contribute to the profitability of the organisation). To make this ideal measurable, the criteria have to be turned into actual criteria.
Actual criteria can serve as real measures of the conceptual criteria (Schreuder et al, 2006).
For example, quality of work is an objective measure of a valued organisational outcome. However, because quality of work is only
a conceptual criterion or a theoretical abstraction of an evaluative standard, the industrial psychologist needs to find some way to turn
this criterion into an operationally measurable factor. That is, the actual criteria that will serve as measures (or evaluative standards) of
the conceptual criterion (quality of work) need to be identified. Quality is usually operationally measured in terms of errors, which are
defined as deviations from a standard. Therefore, to obtain an actual or operational measure of quality, there must be a standard against
which to compare or evaluate an employee’s work. A secretary’s work quality would be judged by the number of typographical errors
(the standard being correctly spelt words); and a cook’s quality might be judged by how closely her food resembles a standard as measured by
size, temperature and ingredient amounts (Aamodt, 2010).
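The following minimal sketch illustrates how such an actual criterion could be computed in practice: a typed passage is compared word by word with the standard (correctly spelt) text, and quality is operationalised as the number of deviations. The passages are invented for illustration.

# Hypothetical sketch: operationalising 'quality of work' as deviations
# from a standard, as in the secretary example above. The texts are invented.

standard_text = "the committee approved the personnel selection procedure".split()
typed_text = "the comittee approved the personel selection procedure".split()

# Count word-level deviations from the standard (a simple error tally)
errors = sum(1 for standard, typed in zip(standard_text, typed_text)
             if standard != typed)

error_rate = errors / len(standard_text)
print(f"Typographical errors: {errors} ({error_rate:.0%} of words)")
# Typographical errors: 2 (29% of words)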
Similarly, attendance as a valued organisational outcome is one aspect for objectively measuring employees’ performance.
Attendance can be separated into three distinct conceptual criteria: absenteeism, tardiness, and tenure. In terms of tenure, for example,
employees may be considered ‘successful’ if they stay with the company for at least four months and ‘unsuccessful’ if they leave before
that time (Aamodt, 2010). On the other hand, productivity can be operationally measured by actual criteria such as the number of
products assembled by an assembly-line worker, or the average amount of time it takes to process a claim in the case of an insurance
claims investigator (Riggio, 2009).
The development of criteria is often a difficult undertaking. This is further complicated by the dimension of time. A short-term
definition of quality, for example, may not be the same as the long-term definition. Therefore the criteria used for short-term
descriptions of the ideal may differ from the criteria used for a long-term description. Proximal criteria are used when short-term
decisions about quality must be made, while distal criteria are used to make long-term decisions about quality (Schreuder et al, 2006).
In step three, industrial psychologists can use statistical methods such as a factor analysis, which shows how variables cluster to
form meaningful ‘factors’. Factor analysis (also discussed in chapter 5) is useful when an industrial psychologist has measured many
variables and wants to examine the underlying structure of the variables or combine related variables to reduce their number for later
analysis. Using this technique, a researcher measuring workers’ satisfaction with their supervisors, salary, benefits and working
conditions may find that two of these variables, satisfaction with salary and benefits, cluster to form a single factor that the researcher
calls ‘satisfaction with compensation’. The other two variables, supervisors and working conditions, form a single factor that the
researcher labels ‘satisfaction with the work environment’ (Riggio, 2009).
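The sketch below illustrates this use of factor analysis on synthetic data: four satisfaction ratings are generated from two underlying factors, and the factor loadings show how the observed variables cluster. The data and loadings are illustrative only; they are not taken from any study cited in this chapter.

# Illustrative sketch: factor-analysing four satisfaction ratings to see
# whether they cluster into two underlying factors. The data are synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 300

# Simulate two latent factors: 'compensation' and 'work environment'
compensation = rng.normal(size=n)
environment = rng.normal(size=n)

# Four observed variables, each driven mainly by one latent factor plus noise
X = np.column_stack([
    0.9 * compensation + 0.3 * rng.normal(size=n),   # satisfaction with salary
    0.8 * compensation + 0.3 * rng.normal(size=n),   # satisfaction with benefits
    0.9 * environment + 0.3 * rng.normal(size=n),    # satisfaction with supervisors
    0.8 * environment + 0.3 * rng.normal(size=n),    # satisfaction with working conditions
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)

# Loadings show which observed variables cluster on which factor
labels = ["salary", "benefits", "supervisors", "working conditions"]
for var, loadings in zip(labels, fa.components_.T):
    print(f"{var:20s} factor loadings: {loadings.round(2)}")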
Steps four and five consider important topics such as predictor and criterion reliability and validity which are addressed in detail in
chapter 5. The aspects of relevance to criterion development are addressed in the discussion that follows.
Predictors and criteria
Managers involved in employment decisions are most concerned about the extent to which performance assessment information will
allow accurate predictions about subsequent job performance. Industrial psychologists are therefore always interested in the predictor
construct they are measuring and in making inferences about the degree to which that construct allows them to predict a theoretical
(conceptual) job performance construct. Criterion-related, construct-related and content-related validity studies (also the topics of
chapter 5) are regarded as three general criterion-related research designs for generating direct empirical evidence that
assessment (predictor) scores relate to valid measures of job performance (criterion measures).
Criterion-related validity refers to the extent to which the test scores (predictor scores) used to make selection decisions are
correlated with measures of job performance (criterion scores) (Schmitt & Fandre, 2008). Criterion-related validity requires that a
predictor be related to an operational or actual criterion measure, and the operational criterion measure should be related to the
performance domain it represents. Performance domains are composed of behaviour-outcome units that are valued by an organisation.
Job analysis data provides the evidence and justification that all major behavioural dimensions that result in valued job or organisational
outcomes (such as the number of products sold, or the number of candidates attracted) have been identified and are represented in the
operational criterion measure (Cascio & Aguinis, 2005).
Construct-related validity is an approach in which industrial psychologists gather evidence to support decisions or inferences about
psychological constructs; this often starts with the industrial psychologist demonstrating that a test (the predictor) designed to measure a
particular construct (criterion) correlates with other tests in the predicted manner. Constructs refer to behavioural patterns that underlie
behaviour sampled by the predictor, and in the performance domain, by the criterion. Construct validity represents the integration of
evidence that bears on the interpretation or meaning of test scores – including content and criterion-related evidence – which are
subsumed as part of construct validity (Landy & Conte, 2004). That is, if it can be shown that a test (for example, reading
comprehension) measures a specific construct, such as reading comprehension, that has been determined by a job analysis to be critical
for job performance, then inferences about job performance from test scores are, by logical implication, justified (Cascio & Aguinis,
2005).
As shown in Figure 4.6, job analysis provides the raw material for criterion development since it allows for the identification of the
important demands (for example, tasks and duties) of a job and the human attributes (KSAOs) necessary to carry out those demands
successfully. Once the attributes (for example, abilities) are identified, the test that is chosen or developed to assess those abilities is
called a predictor (the topic of chapter 5), which is used to forecast another variable. Similarly, when the demands of the job are
identified, the definition of an individual worker’s performance in meeting those demands is called a criterion, which is the variable that
the industrial psychologist wants to predict (Landy & Conte, 2004).
Predictors are also regarded as evaluative standards. An example would be performance tests administered before an employment
decision (for example, to hire or to promote) is made. On the other hand, criteria are regarded as the evaluative standards that are
administered after an employment decision has been made (for example, evaluation of performance effectiveness, or the effectiveness of
a training programme or a recruitment campaign) (Cascio & Aguinis, 2005). The line in Figure 4.6 that connects predictors and criteria
represents the hypothesis that people who do better on the predictor will also do better on the criterion, that is, people who score higher
on the test will be more successful on the job. Validity evidence is then gathered to test that hypothesis (Landy & Conte, 2004). In
criterion-related validity studies (discussed in chapter 5) the criterion is the behaviour (the dependent variable) that constitutes or defines
successful performance of a given task. Independent variables such as scores on a test of mental ability are correlated with criterion
measures to demonstrate that those scores are valid predictors of probable job success.
In content-related validity studies (discussed in chapter 5), the industrial psychologist establishes logical links between important
task-based characteristics of the job and the assessment used to choose among candidates. In a criterion-related validity study of a
problem-solving test for software consultants, a job analysis may indicate that one of the most common and important tasks of the
consultant is to identify a flaw in a software programme. As a result, the industrial psychologist may then develop a measure of the
extent to which the consultant consistently identifies the flaw without asking for assistance. This measure may be in the form of a rating
scale of ‘troubleshooting’ that would be completed by the consultant’s supervisor. The industrial psychologist would then have both the
predictor score and a criterion score for the calculation of a validity coefficient (Landy & Conte, 2004).
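To make the last step concrete, the short Python sketch below computes such a validity coefficient as the Pearson correlation between hypothetical problem-solving test scores (the predictor) and hypothetical supervisor ‘troubleshooting’ ratings (the criterion). The numbers are invented for illustration only; the sketch shows the calculation, not any particular study.

```python
# Hedged illustration only: a validity coefficient computed as the Pearson
# correlation between predictor scores and criterion scores (hypothetical data).
import numpy as np

test_scores = np.array([55, 62, 48, 70, 66, 59, 73, 50, 68, 61])    # predictor: problem-solving test
troubleshooting_ratings = np.array([3, 4, 2, 5, 4, 3, 5, 2, 4, 4])  # criterion: supervisor ratings

validity_coefficient = np.corrcoef(test_scores, troubleshooting_ratings)[0, 1]
print(f"Validity coefficient r = {validity_coefficient:.2f}")
```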
Criterion-related, construct-related and content-related validity studies generally treat validity as situational to the particular
organisation owing to the problem of obtaining large enough samples to allow validity generalisations to some larger population. In
practical terms, this means that validity needs to be established in every new setting. Therefore, apart from these three validation
research designs, researchers also use synthetic validation as a methodology that allows validities to be used across organisations
and jobs.
Synthetic validity is an approach whereby validity of tests for a given job is estimated by examining the validity of these tests in
predicting different aspects (components) of the particular job present in other occupations or organisations, instead of making global
assessments of job performance. These estimates are combined to calculate the validity of a new battery of instruments (tests) for the
target job (Schmitt & Fandre, 2008). Because it uses job components and not jobs, synthetic validity allows the researcher to build up
the sample sizes to the levels necessary to conduct validity studies. For example, a particular job with component ‘A’ may have only
five incumbents, but component ‘A’ may be a part of the job for 90 incumbents in an organisation. In most cases, the sample sizes will
be dramatically larger when using job components. The synthetic validity approach is often used when criterion data are not available. In
this case, synthetic validity can be very useful for developing predictor batteries for recently-created jobs or for jobs that have not yet
been created (Scherbaum, 2005).
Synthetic validity is a flexible approach that helps to meet the changing prediction needs of organisations, especially in situations
when no jobs exist, when a job is redesigned, or when jobs are rapidly changing. Its techniques rest on two primary assumptions. Firstly,
when jobs have a component in common, the human attribute(s) (such as cognitive, perceptual and psychomotor abilities) required for
performing that component will be the same across jobs. That is, the attributes needed to perform the job component do not vary
between jobs. Therefore, a test for a particular attribute can be used with any job containing the component that requires the particular
attribute. Secondly, the validity of a predictor for a particular job component is fairly constant across jobs. The important components of
a particular job or job family are determined by means of a structured job analysis. The job analysis should allow across-job
comparisons and accurately describe the job components at a level of detail that facilitates the identification of predictors to assess the
job components. Predictors are typically selected using the judgements of SMEs or previous research that demonstrates that a predictor test is
related to the job component. However, new predictors can also be developed (Scherbaum, 2005).
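To make the pooling logic concrete, the hedged Python sketch below uses invented data to estimate component-level validities from incumbents pooled across jobs, and then combines them, weighted by the components’ job-analysis importance for a target job, into a rough estimate for that job’s test battery. The data, component labels, and the simple importance-weighted average are illustrative assumptions, not the formal synthetic validity (J-coefficient) procedure.

```python
# Illustrative sketch of the pooling idea behind synthetic validity
# (hypothetical data; not the formal J-coefficient procedure).
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length score arrays."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

# Every incumbent whose job contains a given component contributes a test score
# and a component-level rating, regardless of which job they hold, so the pooled
# sample is far larger than the incumbent group of any single job.
pooled = {
    "A": {"test": [12, 15, 9, 14, 11, 16, 10, 13], "rating": [3, 4, 2, 4, 3, 5, 2, 4]},
    "B": {"test": [8, 11, 13, 10, 12, 9],          "rating": [2, 3, 4, 3, 4, 2]},
}

# Component validities estimated from the pooled samples.
component_validity = {c: pearson_r(d["test"], d["rating"]) for c, d in pooled.items()}

# Importance weights for the components of the *target* job, taken from a
# structured job analysis (hypothetical values).
target_job_weights = {"A": 0.7, "B": 0.3}

# One simple way to 'synthesise' an estimate for the new job: an importance-weighted
# average of the component validities (a rough illustration only).
estimate = sum(target_job_weights[c] * component_validity[c] for c in target_job_weights)
print({c: round(v, 2) for c, v in component_validity.items()}, round(estimate, 2))
```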
Figure 4.6 The link between predictors and criteria
Composite criterion versus multiple criteria
Since job performance is generally regarded as being multi-dimensional in nature, industrial psychologists often have to consider
whether to combine various criterion measures into a composite score, or whether to treat each criterion measure separately (Cascio &
Aguinis, 2005).
Using a composite criterion rests on the assumption that the criterion should represent an economic rather than a behavioural
construct. In practical terms, this means that the criterion should measure the overall contribution of the individual to the organisation in
terms of a rand value. This orientation is generally labelled as the ‘dollar criterion’, since the criterion measures overall efficiency
(quantity, quality, and cost of the finished product) in ‘dollars’ (rands) rather than behavioural or psychological terms by applying cost
accounting concepts and procedures to the individual job behaviours of the employee (Brogden & Taylor, 1950).
In forming a composite criterion, the multiple criterion dimensions are generally treated separately during validation; when a
decision is required, however, these separate dimensions are combined into a single composite index. A quantitative weighting scheme is applied to
determine, in an objective manner, the importance placed on each of the criteria used to form the composite. An organisation may decide
to combine two measures of customer service: one collected from external customers that purchase the products offered by the
organisation, and the other from internal customers, that is, the individuals employed in other units within the same organisation. Giving
these measures equal weights implies that the organisation values both external and internal customer quality. On the other hand, the
organisation may decide to form the composite by giving 70 per cent weight to external customer service and 30 per cent weight to
internal customer service. This decision is likely to affect the validity coefficients between predictors and criteria (Cascio & Aguinis,
2005).
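As a minimal arithmetic illustration of such a weighting scheme, the hedged sketch below standardises two hypothetical customer-service measures and combines them using the 70/30 weights from the example above; the scores are invented, and the weights simply express the relative value the organisation places on each dimension.

```python
# Hedged sketch: forming a weighted composite criterion from two criterion
# dimensions. Scores are hypothetical; the 70/30 weights follow the text's example.
import numpy as np

external_service = np.array([3.8, 4.2, 2.9, 4.6, 3.5])   # ratings from external customers
internal_service = np.array([4.1, 3.2, 3.9, 4.4, 2.8])   # ratings from internal customers

def standardise(x):
    """Convert raw scores to z-scores so the two dimensions are on a comparable scale."""
    return (x - x.mean()) / x.std(ddof=1)

weights = {"external": 0.7, "internal": 0.3}
composite = (weights["external"] * standardise(external_service)
             + weights["internal"] * standardise(internal_service))
print(np.round(composite, 2))   # one composite criterion score per employee
```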
Reflection 4.7
Scenario A
Division X of Company ABC creates a new job associated with a new manufacturing technology. In order to develop a strategy for
hiring applicants for that job, they ask Division Y of the company, a division that has been using the new technology for several
months, to provide a sample of workers who hold the job title in question to complete a potential screening examination for the job.
Division X then correlates the test scores of these workers with performance ratings. What type of validity design has Division X
chosen? Name one alternative design they might have chosen and describe how it would have satisfied their need.
Scenario B
Owing to the rapidly-changing manufacturing technology in the highly competitive industry, a restructuring exercise has led to the
redesign of jobs in Company ABC. Division X decides to develop new predictors of job performance. What type of validity design
would you recommend to the company? Give reasons for your answer.
Advocates of multiple criteria view the increased understanding of work behaviour in terms of psychological and behavioural rather
than economic constructs as an important goal of the criterion validation process. In the multiple criteria approach, different criterion
variables are not combined since it is assumed that combining measures that are in essence unrelated will result in a composite that is
not only ambiguous, but also psychologically nonsensical (Cascio & Aguinis, 2005). In measuring the effectiveness of recruiters,
Pulakos, Borman and Hough (1988) found that selling skills, human relations skills, and organising skills were all important and related
to success. It was further found that these three behavioural dimensions were unrelated to each other – that is, the recruiter with the best
selling skills did not necessarily have the best human relations skills or the best organising skills (Cascio & Aguinis, 2005).
Cascio and Aguinis (2005) posit that the resolution of the composite criterion versus multiple criteria dilemma essentially depends
on the objectives of the investigator. Both methods are legitimate for their own purposes. If the goal is increased psychological
understanding of predictor-criterion relationships, then the criterion elements are best kept separate. If managerial decision-making is the
objective, then the criterion elements should be weighted, regardless of their inter-correlations, into a composite representing an
economic construct of overall worth to the organisation.
Considerations in criterion development
Since human performance is variable (and potentially unreliable) as a result of various situational factors that influence individuals’
performance, industrial psychologists are often confronted by the challenge of developing performance criteria that are relevant, reliable,
practical, adequate and appropriate for measuring worker behaviour. Industrial psychologists generally refer to the ‘criterion problem’
when pointing out the difficulties involved in the process of conceptualising and measuring performance constructs that are
multi-dimensional, dynamic, and appropriate for different purposes (Cascio & Aguinis, 2005).
As already mentioned, the conceptual criterion is an idea or concept that is not measurable. This creates the situation where the
actual criteria are never exactly the same as the conceptual criteria. In practical terms, this means that there is a certain amount of
criterion distortion, which is described as criterion irrelevance, deficiency, and contamination (Muchinsky et al, 2005). In an ideal
world, industrial psychologists would be able to measure all relevant aspects of job and worker performance perfectly. A collective
measure of all these aspects would be called the ultimate criterion (that is, the ideal measure of all the relevant aspects of performance).
However, since industrial psychologists can never reliably measure all aspects of performance, they generally settle for an actual
criterion (that is, the actual measure of job performance obtained) (Landy & Conte, 2004).
Criteria should be chosen on the basis of validity or work or job relevance (as identified by a job analysis), freedom from
contamination, and reliability, rather than availability. In general, if criteria are chosen to represent work-related activities, behaviours or
outcomes, the results of a job analysis are helpful in criteria construction. If the goal of a given study is the prediction of organisational
criteria such as tenure, absenteeism, or other types of organisation-wide criteria such as, for example, employee satisfaction, an in-depth
job or work analysis is usually not necessary, although an understanding of the job or work and its context is beneficial (SIOPSA, 2005).
Criteria relevance
The usefulness of criteria is evaluated in terms of their judged relevance (that is, whether the criteria are logically related to the
performance domain being measured) (Cascio & Aguinis, 2005). Relevant criteria represent important organisational, team, and
individual outcomes such as work-related behaviours, outputs, attitudes, or performance in training, as indicated by a review of
information about the job or work. Criteria can be measures of overall or task-specific work performance, work behaviours, or work
outcomes. These may, for example, include criteria such as behavioural and performance ratings, success in work-relevant training,
turnover, contextual performance (organisational citizenship), or rate of advancement. Regardless of the measures used as criteria,
industrial psychologists must ensure their relevance to work or the job (SIOPSA, 2005).
Criterion deficiency and contamination
Criterion deficiency and contamination reduce the usefulness and relevance of criteria. Criterion deficiency occurs when an actual
criterion is missing information that is part of the behaviour one is trying to measure, that is, the criterion falls short of measuring job
performance or behaviour perfectly. In practical terms, criterion deficiency refers to the extent to which the actual criterion fails to
overlap the conceptual criterion (Landy & Conte, 2004; Riggio, 2009). For example, if an industrial psychologist considers the
performance of a police officer to be defined exclusively by the number of criminals apprehended, ignoring many other important
aspects of the police officer’s job, then that statistic would be considered a deficient criterion (Landy & Conte, 2004). An important goal
of performance measures is to choose criteria that optimise the assessment of job success, thereby keeping criterion deficiency to a
minimum.
Figure 4.7 Criterion distortion described in terms of criteria irrelevance, deficiency and contamination
Similarly, criterion contamination occurs when an actual or operational criterion includes information (variance) unrelated to the
behaviour (the ultimate criterion) one is trying to measure. Criterion contamination can result from extraneous factors that contribute to a
worker’s apparent success or failure in a job. For instance, a sales manager may receive a poor performance appraisal because of low
sales levels, even though the poor sales actually result from the fact that the manager supervises a group of young, inexperienced sales
people (Riggio, 2009).
Gathering criteria measures with no checks on their relevance or worth before use often leads to contamination (Cascio & Aguinis,
2005). For example, if a production figure for an individual worker is affected by the technology or the condition of the particular
machine which that worker is using, then that production figure (the criterion) is considered contaminated. Similarly, a classic validity
study may test cognitive ability (the predictor) by correlating it with a measure of job performance (the actual criterion) to see if higher
scores on the test are associated with higher levels of performance. The differences between the ultimate criterion and the actual
criterion represent imperfections in measurement – contamination and deficiency (Landy & Conte, 2004).
Criterion bias
A common source of criterion contamination stems from rater or appraiser biases. Criterion bias is a systematic error resulting from
criterion contamination or deficiency that differentially affects the criterion performance of different subgroups (SIOPSA, 2005). For
example, a supervisor may give an employee an overly-positive performance appraisal because the employee has a reputation of past
work success or because the employee was a graduate of a prestigious university (Riggio, 2009). Similarly, a difference in criterion
scores of older and younger workers or day- and night-shift workers could reflect bias in raters or differences in equipment or
conditions, or the difference may reflect genuine differences in performance. Industrial psychologists must at all times consider the
possibility of criterion bias and attempt to protect against bias insofar as is feasible, and use professional judgement when evaluating
data (SIOPSA, 2005).
Prior knowledge of or exposure to predictor scores are often among the most serious contaminants of criterion data. For example, if
an employee’s supervisor has access to the prediction of the individual’s future potential by the industrial psychologist conducting the
assessment, and if at a later date the supervisor is asked to rate the individual’s performance, the supervisor’s prior exposure to the
assessment prediction is likely to bias this rating. If the supervisor views the employee as a rival, dislikes him or her for that reason, and
wants to impede his or her progress, the prior knowledge of the assessment report could serve as a stimulus for a lower rating than is
deserved. On the other hand, if the supervisor is especially fond of the employee and the assessment report identifies the individual as
a high-potential candidate, the supervisor may rate the employee more favourably than his or her actual performance warrants (Cascio & Aguinis, 2005).
The rule of thumb is to keep predictor information away from those who must provide criterion data. Cascio and Aguinis (2005)
suggest that the best way to guard against this type of bias is to obtain all criterion data before any predictor data are released. By
shielding predictor information from those who have responsibility for making, for example, promotion or hiring decisions, much
‘purer’ validity estimates can be obtained.
Ratings as criteria
Although ratings are regarded as the most commonly used and generally appropriate measures of performance, rating errors often reduce
the reliability and validity or accuracy of criterion outcomes. The development of rating factors is ordinarily guided by job analysis
when raters (supervisors, peers, individual self, clients or others) are expected to evaluate several different aspects of a worker’s
performance (SIOPSA, 2005). Since the intent of a rating system is to collect accurate estimations of an individual’s performance,
industrial psychologists build in appropriate structural characteristics (that is, dimension definitions, behavioural anchors, and scoring
schemes) that are based on a job or work analysis in the measurement scales they use to assist in the gathering of those accurate
estimations. However, irrespective of these measures, raters do not always provide accurate estimates, leading to rating errors.
Rating errors are regarded as the inaccuracies in ratings that may be actual errors or intentional or systematic distortions. In practical
terms, this means that the ratings do not represent completely ‘true’ estimates of performance. Some of the most common inaccuracies
or errors identified by industrial psychologists include central tendency error, leniency–severity error, and halo error (Landy & Conte,
2004).
Central tendency error occurs when raters choose a middle point on the scale as a way of describing performance, even though a
more extreme point may better describe the employee.
Leniency–severity error is a distortion which is the result of raters who are unusually easy (leniency error) or unusually harsh
(severity error) in their assignment of ratings. The easy rater gives ratings higher than the employee deserves, while the harsh rater gives
ratings lower than the employee deserves. In part, these errors result from vaguely defined anchors that permit the rater to impose
idiosyncratic meanings on words such as ‘average’, ‘outstanding’, and ‘below average’; the rater then feels free to apply a personal
standard rather than one shared with other raters. One safeguard against this type of distortion is to use
well-defined behavioural anchors for the rating scales.
Halo error occurs when a rater assigns the same rating to an employee on a series of dimensions, creating a halo or aura that
surrounds all of the ratings, causing them to be similar (Landy & Conte, 2004).
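As a hedged, illustrative sketch of how such errors might be screened for in practice, the code below flags a rater whose overall mean lies far from the group mean as possibly lenient or severe, and a rater whose ratings barely vary across dimensions as possibly showing halo. The ratings and cut-off values are invented assumptions, not a standard published procedure.

```python
# Hedged sketch: simple screens for leniency/severity and halo in a ratings array
# (raters x ratees x dimensions); data and thresholds are hypothetical.
import numpy as np

# ratings[rater, ratee, dimension] on a 1-5 scale
ratings = np.array([
    [[4, 4, 5], [5, 5, 5], [4, 5, 4]],   # rater 0: consistently high (possible leniency)
    [[3, 2, 4], [2, 3, 2], [3, 3, 2]],   # rater 1
    [[3, 3, 3], [3, 3, 3], [3, 3, 3]],   # rater 2: identical ratings (possible halo)
])

grand_mean = ratings.mean()
for r in range(ratings.shape[0]):
    rater_mean = ratings[r].mean()
    if abs(rater_mean - grand_mean) > 0.75:      # arbitrary illustrative cut-off
        print(f"Rater {r}: possible leniency/severity (mean {rater_mean:.2f} vs {grand_mean:.2f})")
    # Halo screen: low spread across dimensions, averaged over the ratees this rater rated
    dim_spread = ratings[r].std(axis=1).mean()
    if dim_spread < 0.3:                         # arbitrary illustrative cut-off
        print(f"Rater {r}: possible halo (average within-ratee spread {dim_spread:.2f})")
```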
Since it is generally assumed that rater distortions are unintentional, and that raters are unaware of the influences that distort their
ratings, it is essential that raters be sufficiently familiar with both the relevant demands of the job and the individual being rated in
order to evaluate performance effectively. Since valid inferences are generally supported by knowledgeable raters, rater training in the
observation and evaluation of performance is recommended (SIOPSA, 2005).
Finally, it is important that management be informed thoroughly of the real benefits of using carefully-developed criteria when
making employment decisions. Criterion measurement should be kept practical and should ultimately contribute to the organisation’s
primary goals of profit, growth and service while furthering the goal of building a theory of work behaviour. Management may or may
not have the expertise to appraise the soundness of a criterion measure or a series of criterion measures. However, objections will almost
certainly arise if record-keeping and data collection for criterion measures become impractical and interfere significantly with ongoing
operations (Cascio & Aguinis, 2005).
CHAPTER SUMMARY
This chapter reviewed job analysis and criterion development as important aspects of the employment process. Job descriptions, job and
person specifications, job evaluation, and performance criteria are all products of job analysis that form the cornerstone in measuring
and evaluating employee performance and job success. Job analysis products are invaluable tools in the recruitment, selection, job and
performance evaluation, training and development, reward and remuneration, and retention of employees. They also provide the answer
to many questions posed by employment equity legislation, such as whether a certain requirement stated in a vacancy advertisement is
really an inherent requirement of the job, and whether a psychological test should be used, and if so which test, in making fair
staffing decisions. Various methods and techniques are used in the job analysis process. The South African OFO provides additional
valuable information that can be used as an input to the job analysis process.
Job analysis further provides the raw material for criterion development. Since criteria are used to make decisions on employee
behaviour and performance, industrial psychologists should be well trained in how to determine measures of actual criteria and how to
eliminate criterion distortion.
Review questions
You may wish to attempt the following as practice examination-style questions.
1. What is the purpose of job analysis in the employment process?
2. How does job analysis relate to task performance? How does task performance differ from the contextual performance of workers?
3. What is job analysis? How can managers and industrial psychologists make use of the information it provides?
4. How does the OFO inform the job analysis process?
5. What items are typically included in a job description?
6. Discuss the various methods and techniques for collecting job analysis data. Compare these methods and techniques, explain what
each is useful for, and list the advantages and disadvantages.
7. Discuss the issue of reliability and validity in job analysis.
8. Explain the steps you would follow in conducting a job analysis.
9. Why are traditional job analysis approaches shifting to an occupation-specific skills-based approach?
10. Differentiate between job analysis and competency modelling. Which approach would be more appropriate in analysing managerial
jobs? Give reasons for your answer.
11. Explain the concept of criterion development and the aspects that need to be considered by industrial psychologists in developing
criteria.
12. How do synthetic validity studies differ from criterion-related validity research designs? Which of these designs would be more
appropriate for an occupation-specific skills-based job analysis approach? Give reasons for your answer.
13. Explain the concept of criterion distortion.
14. What is the link between predictors and criteria? Draw a diagram to illustrate the link.
Multiple-choice questions
1. Which type of job analysis questionnaire is geared towards the work that is performed on the job?
a. Worker-orientated
b. Job-orientated
c. Performance-orientated
d. Activity-orientated
2. The abbreviation SME refers to a:
a. Structured questionnaire method of job analysis
b. Logbook method of job analysis
c. Job position being evaluated for compensation purposes
d. Job expert
3. Job analysis includes collecting data describing all of the following EXCEPT:
a. What is accomplished on the job
b. Technology used by the employees
c. Each employee’s level of job performance
d. The physical job environment
4. ‘Friendliness’ is one dimension on which Misha’s Café evaluates its waitresses and waiters. Customers’ rating forms are used to
assess levels of friendliness. In this case ‘friendliness’ is a(n):
a. Conceptual criterion
b. Structured criterion
c. Multiple criterion
d. Actual criterion
5. An industrial psychologist decides to use a ‘multiple uses test’ to assess creativity. In this test, individuals who identify the most uses
for a paper clip, for example, are considered to be more creative than their colleagues. Creativity represents the … criterion, while the
multiple uses test represents the … criterion.
a. Conceptual; actual
b. Actual; conceptual
c. Structured; multiple
d. Structured; actual
6. Criterion deficiency refers to the extent to which the actual criterion:
a. And the conceptual criterion coincide
b. Fails to overlap with the conceptual criterion
c. Measures something other than the conceptual criterion
d. Is influenced by random error
7. To assess a job applicant’s citizenship, the Modimele Corporation counts the number of community activities in which the individual
is involved. However, it fails to assess the quality of participation, which may be even more important. This is an example of:
a. Criterion contamination
b. Criterion irrelevance
c. Criterion deficiency
d. Criterion bias
Reflection 4.8
A large engineering company decided to expand its housing project design division. In the process, several new jobs were created for
which new job descriptions had to be compiled. However, owing to its concern about addressing the skills shortages in the industry,
the company decided to follow a skills-based approach in its job analysis.
Draw a flow diagram that shows each step you would take in conducting the job analysis. The flow diagram should include the
types of data you would gather, the methods you would use in collecting job analysis information, the order in which you would use
these methods, from whom you would gather the information, and what you would do with those data in making your recommendations.
Explain also why competency modelling would not be appropriate in this scenario.
Reflection 4.9
A private tourism company has decided to advertise a vacancy for a chief operating officer in a national newspaper. Some of the
aspects they decided to include in the advertisement are provided below. Study these requirements, and answer the questions that
follow.
Excerpt from advertisement
We are looking for an experienced individual to take up the position of Chief Operating Officer (COO). The successful candidate will
report to the Chief Executive Officer (CEO).
Duties and responsibilities
The successful candidate will be responsible for overseeing the day-to-day operations of the organisation and working with the CEO
to keep head office focused on its objectives. The incumbent will also be responsible for positioning the organisation as the source of
useful tourism business intelligence.
Key Performance Areas (KPAs)
• Set up and manage the organisation’s tourism knowledge facility programme
• Produce demand-driven and supply-side tourism business and economic intelligence
• Oversee the company’s focus on strategic projects and ability to leverage funds and programmes
• Oversee the day-to-day operation and implementation of programmes of the organisation and relationships that contribute
to organisational priorities
• Manage industry alignment and co-operation agreements and processes
• Act as Company Secretary and assist the CEO in overall management and delivery of the company mandate to its
constituencies.
Required skills and attributes
• At least 8 years’ senior management experience
• Must have at least a degree and a suitable post-degree qualification
• Experience in the travel and tourism sector preferable but not essential
• Experience in research and analysis essential, as well as good writing skills
• Must have strong interpersonal skills, leadership qualities, be energetic, be able to work in a team, and meet targets and
deadlines.
Questions
1. Review the skills and attributes outlined in the advertisement. Draw up a list with the headings: Knowledge, Skills, Abilities, and
Other characteristics (KSAOs). Now list the KSAOs under the appropriate KSAO headings. Do you think the advertisement covers
sufficient information regarding the job? If not, what type of information should have been added? Give reasons for your answer.
2. Do you think the advertisement complies with employment equity requirements? Give reasons for your answer.
3. Review the KPAs outlined in the advertisement. See if you can identify the KSAOs required to perform successfully in the key
performance areas. Add these to your KSAO list.
4. Explain the use of job analysis information in compiling an advertisement such as the one provided here.
CHAPTER 6
RECRUITMENT AND SELECTION
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
RECRUITMENT
• Sources of recruitment
• Applicant attraction
• Recruitment methods
• Recruitment planning
CANDIDATE SCREENING
• Importance of careful screening
• Short-listing candidates
• Evaluating written materials
• Managing applicant reactions and perceptions
CONSIDERATIONS IN EMPLOYEE SELECTION
• Reliability
• Validity
• Utility
• Fairness and legal considerations
MAKING SELECTION DECISIONS
• Strategies for combining job applicant information
• Methods for combining scores
• Placement
• Selection and affirmative action
FAIRNESS IN PERSONNEL SELECTION
• Defining fairness
• Fairness and bias
• Adverse impact
• Fairness and culture
• Measures of test bias
• The quest for culture-fair tests
• Legal framework
• Models of test fairness
• How to ensure fairness
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
This chapter builds on the theory and concepts discussed in the previous chapters. A study hint for understanding the principles and concepts
that are discussed in this chapter is to review the previous chapters before working through this chapter.
The recruitment and selection of competent people are crucial elements of the employment process. Recruitment and selection provide the
means to resource, or staff, the organisation with the human capital it needs to sustain its competitive edge in the broader business
environment. This chapter reviews the theory and practice of recruitment and selection from the perspective of personnel psychology. The
application of psychometric standards in the decision-making process is discussed, including the aspects of employment equity and fairness in
the recruitment and selection process.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Differentiate between the concepts of recruitment, screening and selection.
2. Discuss the sources and methods of recruitment.
3. Explain the importance of managing applicant reactions and perceptions in the recruitment, screening and selection process, and
suggest strategies for dealing with applicant reactions and perceptions.
4. Explain the role of job analysis in recruitment planning and selection.
5. Discuss the recruitment planning process and the techniques that can be applied to enhance the quality of recruitment strategies.
6. Discuss the screening process.
7. Discuss the decision framework for making selection decisions and the various techniques that can be applied to ensure reliable, valid,
fair, and useful (quality) decisions.
8. Explain and illustrate the relationship between predictor and criterion scores in determining the validity and fairness of selection
decisions.
9. Explain the concept of prediction errors and how they influence selection decisions.
10. Determine the utility of a selection device.
11. Discuss strategies for combining job applicant information.
12. Differentiate between selection and placement.
13. Discuss employment equity and affirmative action considerations in recruitment, screening and selection.
14. Discuss the issues of fairness, adverse impact and bias in selection decision-making.
CHAPTER INTRODUCTION
Recruitment and selection provide the conduit for staffing and resourcing the organisation. As discussed in chapter 4, job analysis is
the basic foundation of personnel psychology and a cornerstone in the recruitment and selection of people, since it enables industrial
psychologists, human resource professionals, and managers to obtain a complete and accurate picture of a job, including the important
tasks and duties, and the knowledge, skills, abilities and other desired characteristics (KSAOs) needed to perform the job. Recruitment
is about the optimal fit between the person and the organisation as well as finding the best fit between the job requirements (as
determined by the job analysis process) and the applicants available. If both of these aspects are achieved, it is believed to lead to
increased job satisfaction, enhanced job performance, and potentially the retention of talent (the topic of chapter 7). Selection involves
the steps, devices and methods by which the sourced candidates are screened for choosing the most suitable person for vacant
positions in the organisation. This includes using selection devices and methods that tie in directly with the results of a job analysis.
The importance of recruiting and selecting the right people, including scarce and critical skills, and of being seen as an ‘employer of
choice’, has been enhanced by the increasingly competitive and globalised business environment and the requirement for quality and
customer service (Porter et al, 2008). Therefore, recruitment and selection are increasingly regarded as critical human resource
functions for organisational success and survival. Moreover, the systematic attraction, selection and retention of competent and
experienced scarce and critical skills have become core elements of competitive strategy, and an essential part of an organisation’s
strategic capability for adapting to competition (McCormack & Scholarios, 2009).
In view of the foregoing, a company’s recruitment and selection processes are seen as enablers of important human resource
outcomes, including the competency of employees, their commitment, employee-organisation fit (that is, congruence between the
employee and the organisation’s goals), and the cost-effectiveness of human resource policies and practices (Porter et al, 2008).
Recruitment and selection processes are also increasingly recognised as critical components in successful change management and
organisational transformation, since they provide a means of obtaining employees with a new attitude, as well as new skills and
abilities (Iles & Salaman, 1995). Moreover, companies that are able to employ better people with better human resource processes
(such as job analysis, recruitment and selection processes) have been found to have an increased capacity to sustain their
competitive advantage (Boxall & Purcell, 2003).
Generally, recruitment provides the candidates for the selector to judge. Although selection techniques cannot overcome failures in
recruitment, they make them evident (Marchington & Wilkinson, 2008). Therefore, industrial psychologists apply the methodological
principles and techniques discussed in chapters 2, 3, 4 and 5 in enabling managers to make high-quality decisions in the recruitment
and selection of people. Effective employee screening, testing, and selection require grounding in research and measurement issues,
particularly reliability and validity (the topic of chapters 2 and 5). Further, much of the strength or weakness of any particular
employment method or process is determined by its ability to predict important work outcomes such as job performance. As discussed
in chapters 4 and 5, the ability to predict future employee performance accurately and in a scientific, valid, fair and legally defensible
manner from the results of employment tests or from other employee screening procedures is regarded as critical to the profession of
personnel psychology (Riggio, 2009).
The function of industrial psychologists in the recruitment and selection of people concerns their involvement in staffing practices,
which include decisions associated with recruiting, selecting, promoting, retaining, and separating employees (Landy & Conte, 2004).
Industrial psychologists generally follow the so-called psychometric approach to recruitment and selection, which focuses on the
measurement of individual differences. The psychometric approach is based on scientific rationality and its systematic application by
industrial psychologists to candidate selection in particular. To enable this to be put into practice, psychometric standards such as
reliability, validity, and fairness are applied in the decision-making process, which generally follows a systematic, logical or sequential
order (Porter et al, 2008; McCormack & Scholarios, 2009). In addition, when evaluating the outcomes of selection decisions, the
practical utility and cost-effectiveness of recruitment and selection methods and procedures are also of major concern for industrial
psychologists and managers alike.
RECRUITMENT
Recruitment is concerned with identifying and attracting suitable candidates. Barber (1998:5) describes recruitment as practices and
activities carried out by the organisation with the primary purpose of identifying and attracting potential employees. The recruitment
process focuses on attracting a large number of people with the right qualifications, knowledge, skills, abilities, and other desired
characteristics (KSAOs) or competencies (as determined by the job analysis process), to apply for a vacancy (Schreuder et al, 2006). On
the other hand, selection represents the final stage of decision-making by applying scientific procedures in a systematic manner in
choosing the most suitable candidate from a pool of candidates.
Sources of recruitment
As shown in Table 6.1, sources of recruitment are broadly categorised as internal and external. Internal sources of recruitment are
focused on the organisation’s internal labour market (that is, the people already in the employ of the organisation) as a means of filling
vacancies. On the other hand, external sources are people outside the organisation, who may include full-time, subcontracted,
outsourced, or temporary staff. In South Africa, temporary staffing has increased in popularity, even at executive levels. Temporary
assignments appear to appeal to a fair proportion of the younger generation (generation Y or new millennium), who seem to prefer a
more diverse and flexible work environment. It is expected that temporary staffing will continue to increase. In addition, temporary
appointments allow the company to evaluate a potential candidate and position before it is made permanent. Temporary workers are also
useful in organisations where there are seasonal fluctuations and workload changes. They also allow companies to save on the costs of
permanent employees (O’Callaghan, 2008).
Table 6.1 Sources of internal and external recruitment (Schreuder et al, 2006:53, 54)

Internal sources of labour

Skills inventories and career development systems: A skills inventory is a record system listing employees with specific skills. Career development systems develop the skills and knowledge of employees to prepare them for a career path. Skills inventories and career development systems are quick ways of identifying candidates.

Job posting: Information about vacancies is placed on notice boards or in information bulletins. Details of the job are provided and employees may apply. This source enhances the possibility that the best candidate will apply, but it can also cause employees to hop from job to job. In addition, because of a lack of applications, the position may be vacant for a long time.

Inside ‘moonlighting’ or contracting: In the case of a short-term need or a small job project that does not involve a great deal of additional work, the organisation can offer to pay bonuses for employees to do the work. Employees who perform well could be identified, and this could promote multi-skilling.

Supervisor recommendations: Supervisors know their staff and can nominate employees for a specific job. Supervisors are normally in a good position to know the strengths and weaknesses of their employees, but their opinions could be subjective and liable to bias and discrimination.

External sources of labour

Employment agencies: Agencies recruit suitable candidates on the instructions of the organisation. Once candidates have been identified, either the organisation or the agency could do the selection.

Walk-ins: This occurs when a prospective employee applies directly to the organisation in the hope that a vacancy exists, without responding to an advertisement or recommendation by someone.

Referrals: This is an inexpensive and quick resource. Current employees refer or recommend candidates from outside the organisation for a specific vacancy. Current employees are not likely to recommend someone who is unsuitable, because this may reflect negatively on themselves.

Professional bodies: People who attend conventions have opportunities to network. Furthermore, advertisements for vacancies can be placed in the publications of professional bodies.

Head-hunting: This source refers to the recruitment of top professional people through specialised agencies. These candidates are approached personally with a job offer, or an advertisement can be drawn up specifically with that candidate’s qualifications, experience, and skills in mind.

Educational institutions: This source can provide an opportunity to approach or invite the best candidates to apply for vacancies or entry-level positions.

ESSA: Employment Services SA is a national integrated labour market data management system that keeps accurate records of skills profiles, scarce and critical skills, and placement opportunities. (See chapter 3 for more detail.)
Deciding on the source of recruitment depends to a large extent on the expected employment relationship. Managers often apply a
rational decision choice here with respect to the level of human capital required to perform the job. For example, jobs which require high
skills and knowledge unique to the organisation (hence implying greater investment in training) are better managed as internal
promotions or transfers, as these have implications for building a committed workforce. On the other hand, jobs which do not require
costly training and which can be performed at a lower skill level can be externalised with a view to a shorter-term employment
relationship (Redman & Wilkinson, 2009).
According to Schreuder et al (2006), using internal sources may increase morale when employees know that promotion or other
forms of intra-organisational career mobility opportunities are available. Research by Zottoli and Wanous (2000) indicates, for example,
that employees recruited through inside sources stayed with the organisation longer (higher tenure) and performed better than employees
recruited through outside sources. While the use of internal sources is regarded as a faster and less expensive way of filling a vacant
position, external sources provide a larger pool of potential candidates, and new employees may bring fresh ideas and approaches into
the organisation. However, it is also important to recognise that there are alternatives to recruitment. For example, it may be possible to
reorganise the job, reallocate tasks, and even eliminate the job through automation, rather than replace an individual who has left the
organisation (Robinson, 2006).
Reflection 6.1
Review Table 6.1, Sources of internal and external recruitment. Study the situations below and indicate which recruitment sources
would be the most appropriate to use. Give reasons for your answer.
1. A private college for higher education programmes experiences severe staff shortages during the registration period.
2. A doctor’s receptionist is going on leave and the doctor has to find a replacement for five months.
3. An engineering technician has to take early retirement for medical reasons and the section where he works cannot function without
someone with his skills and knowledge.
4. The CEO of a company is retiring in three years’ time and his successor has to be appointed.
5. The financial manager of a large multinational has given three months’ notice of her resignation.
6. The till supervisor in a supermarket has to be replaced.
7. The organisation is looking for a replacement for the messenger, who has resigned.
8. A bank which has traditionally filled positions by using employee referrals, because it has found that the most trustworthy
employees can be recruited this way, has decided that from now on all vacancies must be filled with candidates from previously
disadvantaged groups.
9. An IT company is looking for a professionally qualified industrial psychologist to assist with the selection of a large pool of IT
technicians who applied for an advertised vacancy.
10. A large manufacturing company wants to recruit young, talented graduates as part of their learnership strategy.
(Schreuder et al, 2006:54)
Applicant attraction
An important recent development is the greater attention devoted to the so-called ‘attraction element’ (Searle, 2003) or ‘applicant
perspective’ (Billsberry, 2007), which focuses on how people become applicants, including how potential applicants perceive and act on
the opportunities offered. Applicant attraction to organisations implies getting applicants to view the organisation as a positive place to
work. It further acknowledges a two-way relationship between organisations and applicants, where the applicant’s perception of
decision-making becomes an important factor shaping whether the recruitment process is successful or not (Redman & Wilkinson,
2009).
In view of the foregoing, it is important that recruitment activities and methods enhance potential applicants’ interest in and
attraction to the organisation as an employer; and increase the probability that they will accept a job offer (Saks, 2005). The global
challenge of attracting, developing, and retaining scarce and critical skills in times of skill shortages and the search for more highly
engaged and committed employees both place greater importance on the perceptions of potential and actual applicants. In some
situations, companies will have to become smarter in terms of the methods they use to attract qualified and competent applicants,
maintain their interest in the company, and convince them that they should accept an offer of employment (Redman & Wilkinson, 2009).
Recruitment sources generally have only a slight effect on tenure of future employees. Moreover, in their efforts to attract applicants,
companies may at times ‘oversell’ a particular job or their organisation. Advertisements may state that ‘this is a great place to work’, or
that the position is ‘challenging’ and offers ‘tremendous potential for advancement’. This is not a problem if such statements are true,
but if the job and the organisation are presented in a misleading, overly positive manner, the strategy will eventually backfire. New
employees will quickly discover that they were fooled and may look for work elsewhere or become dissatisfied and demotivated
(Riggio, 2009). One method of counteracting potential misperceptions and to attract potential candidates is the realistic job preview
(RJP).
RJPs can take the form of an oral presentation from a recruiter, supervisor, or job incumbent, a visit to the job site, or a discussion in
a brochure, manual, video, or company web site. However, research indicates that face-to-face RJPs may be more effective than written
ones (Saks & Cronshaw, 1990). RJPs provide an accurate description of the duties and responsibilities of a particular job and give an
applicant an honest assessment of a job (Aamodt, 2010). For example, instead of telling the applicant how much fun she will have
working in a call centre environment, the recruiter honestly tells her that although the pay is well above average, the work is often
boring and strenuous, with long working hours and limited career advancement opportunity. Although telling the truth may scare away
many applicants, especially the better qualified ones, those who stay will not be surprised about the job, since they know what to
expect. Research has shown that informed applicants tend to stay on the job longer than applicants who did not understand the
nature of the job. Moreover, RJPs are regarded as important in increasing job commitment and satisfaction and in decreasing the initial
turnover of new employees (Hom, Griffeth, Palich & Bracker, 1998). Taylor (1994) found, for example, that an RJP at a long-distance
trucking company increased job satisfaction and reduced annual turnover from 207 per cent to 150 per cent.
Another important aspect that needs to be considered in attracting potential candidates is to avoid intentional or unintentional
discrimination. In terms of the Employment Equity Act 55 of 1998 (EEA) (discussed in chapter 3), employment discrimination against
the designated groups specified in the Act, intentional or unintentional, is illegal. In order to avoid unintentional discrimination,
employers should take steps to attract applicants from the specified designated groups in proportion to their numbers in the population
from which the company’s workforce is drawn. Furthermore, in terms of affirmative action, preference should be given to candidates
from the historically disadvantaged groups, such as blacks, Asians, and coloureds.
Reflection 6.2
Study the two advertisements below and then answer the questions that follow.
Advertisement 1

Telesales Executive
Classified publishing group is expanding and therefore has a unique and exciting opportunity for a dynamic Telesales Executive.

The successful candidate must have:
• Excellent communication, verbal and written, coupled with strong people skills
• Strong sales and negotiation skills with a ‘can do’ attitude
• Target- and results-driven, hardworking professional
• Must enjoy working under pressure; be entrepreneurial, with no need for supervision
• Identify sales opportunities while monitoring relations with customers to ensure world class customer satisfaction
• His or her challenge will be to create a positive sales environment and develop and grow their area
• Publishing experience an advantage but not necessary.

In return we offer:
• The opportunity to become part of a progressive publishing company
• Full training and support
• An opportunity to earn a good basic salary, commission and performance bonuses
• Benefits – normal fringe benefits in line with a well-established company.

If you meet our criteria, we look forward to receiving your 2-page CV with a motivating covering letter to …
Only short-listed candidates and candidates who meet the minimum requirements will be contacted.

Advertisement 2

Company ABC is looking for a Field Sales Executive.
A fantastic opportunity exists with the leading classified publication in the province for a Field Sales Executive. The ideal candidate must be a target-driven, dynamic go-getter and be able to handle the pressures of deadlines. This person will inherit a client base that offers a current earning potential of approximately R20 000 per month and will be required to grow this business.

The successful applicant must meet the following criteria:
• Minimum 3 years’ sales experience
• A proven sales track record
• Traceable references
• Own reliable car
• Cellphone
• Previous advertising sales experience would be an advantage.

If you fit the profile and feel that you have what it takes, please send your 2-page CV by e-mail to …
Only short-listed candidates and candidates who meet the minimum requirements will be contacted.
Questions
• Consider the needs of the older generation workforce compared with a younger generation applicant. Review the two
advertisements and decide which advertisement will attract a generation Y candidate. Give reasons for your answer.
• Which advertisement shows elements of ‘overselling’, if any?
• Would you recommend a Realistic Job Preview for both companies? Give reasons for your answer.
Recruitment methods
Recruitment methods are the ways organisations use to make it known that they have vacancies. According to Redman and Wilkinson
(2009), there is no ‘best practice’ recruitment approach, although methods which comply with employment equity legislation generally
are a requirement. The type of recruitment method chosen by an organisation will be dependent on the type of vacancy and the
organisation concerned. For example, the methods used by a large well-known organisation seeking to recruit a managing director are
likely to be different from those of a small business seeking an assistant to help out on a specific day of the week. Moreover, the
challenges posed by the employment context discussed in chapter 3 have led to companies using more creative solutions, targeting
diverse applicant groups and using Internet channels to communicate with potential applicants alongside traditional methods, such as
advertising, employment agencies, and personal contact. The challenges faced by South African organisations (as discussed in chapter 3)
will require of them flexible and innovative recruitment strategies and methods to find and retain scarce and critical skills and contribute
to building skills capacity in South Africa.
Furthermore, it is suggested that recruitment methods that work for the older generation workforce (the so-called baby boomers) will
have no appeal for the younger generation (the so-called generation Y or new millennium person). For example, a generation Y or new
millennium candidate will probably find a job through a friend of a friend on Facebook, whereas the baby boomer will diligently look
through a newspaper such as The Sunday Times or the Mail & Guardian to see what is available. Younger generations will also be
attracted to advertisements and job information that sell diversity, individual growth, and career mobility opportunity, as opposed to
older generations, who will be attracted to job content, titles and security (O’Callaghan, 2008).
Most recruitment methods can be classified as open search techniques. Two examples of open searches are advertisements and
e-recruitment.
Advertisements
The method most commonly used is the advertisement. When using advertisements as a method of attracting potential candidates, two
issues need to be addressed: the media and the design of the advertisement. The selection of the best medium (be it a local paper – for
example, the Pretoria News – or a national one, such as The Sunday Times, Mail & Guardian, JobMail or The Sowetan, the Internet, or a
technical journal) depends on the type of positions for which the company is recruiting. Local newspapers are usually a good source of
blue-collar candidates, clerical employees, and lower-level administrative people, since the company will be drawing from a local
market. Trade and professional journals are generally more appropriate to attract specialised employees and professionals. One
drawback to print advertising is that there may be a week or more between insertion of the advertisement and the publication of the
journal. Employers tend to turn to the Internet for faster turnaround (Dessler, 2009).
Experienced advertisers use a four-point guide labelled AIDA for the effective design of an advertisement (Dessler, 2009; Schreuder
et al, 2006):
• A = attention. The advertisement should attract the attention of job seekers. Employers usually advertise key positions in
separate display advertisements to attract attention.
• I = interest. The advertisement should be interesting and stimulate the job seeker to read it. Interest can be created by the
nature of the job itself, with lines such as ‘Are you looking to make an impact?’ One can also use other aspects of the job,
such as its location, to create interest.
• D = desire. The advertisement should create the desire to work for the company. This can be done by spotlighting the job’s
interest factors with words such as travel or challenge. For example, having a university nearby may appeal to engineers and
professional people.
• A = action. The advertisement should inspire the right applicants to apply for the advertised position. Action can be
prompted with statements like ‘Call today’ or ‘Please forward your resumé’.
Advertisements should further comply with the EEA (discussed in chapter 3), avoiding discriminatory wording such as ‘man wanted’. In
terms of the EEA, an advertisement should reach a broad spectrum of diverse people and especially members of designated groups. To
ensure fairness and equity, employers need to consider where the advertisement should be placed in order to reach a large pool of
candidates that are as diverse as possible. However, this can be quite expensive, especially when the goal is to advertise nationally. In
addition to the principles of fairness and equity, the decision to advertise nationally or only locally should be determined by the
employer’s size, financial resources, geographic spread, and the seniority of the vacant position. For example, a small organisation that
sells its product only in the immediate geographical area does not need to advertise nationally (Schreuder et al, 2006).
E-recruitment
E-recruitment (or electronic recruitment) refers to online recruitment, which uses technology or web-based tools to assist the recruitment
process by posting advertisements of job vacancies on relevant job sites on the Internet (O’Callaghan, 2008). The tool can be a job web
site, the organisation’s corporate web site, or its own intranet. In South Africa, CareerJunction (<www.careerjunction.co.za>) is an
example of a commercial job board available on the Internet. Commercial boards are growing in popularity since they consist of large
databanks of vacancies. These may be based on advertising in newspapers and trade magazines, employment agencies, specific
organisation vacancies, social networking web sites, and many other sources. Commercial job boards often have questionnaires or tests
for applicants to improve their job-hunting skills or to act as an incentive for them to return to the web site (O’Callaghan, 2008).
E-recruitment has become a much more significant tool in the last few years, with its use almost doubling since the turn of the
century. By 2007, 75 per cent of employers made use of electronic media – such as the company’s web site – to recruit staff, making
e-recruitment one of the most widely-used methods (Marchington & Wilkinson, 2008). In South Africa the following was found in a
survey conducted by CareerJunction with Human Resource directors and managers from 60 of the top 200 companies (as defined by the
Financial Mail) (O’Callaghan, 2008):
• Approximately two-thirds (69 per cent) believe the Internet is an effective recruitment channel.
• Almost half (47 per cent) are using it as part of their overall recruitment strategy.
• The results show an increase of 23 per cent in the use of e-recruitment since 2003.
The survey further indicated that South African companies make more use of traditional methods than overseas companies. South
African companies that use e-recruitment also use the following methods: printed media (25 per cent); recruitment agencies (37 per
cent); word-of-mouth (19 per cent); and other means (19 per cent). Most companies in South Africa have a careers page on their web
site (71 per cent). Just over 6 per cent advertise their job pages in print, and 13 per cent make use of job boards. In line with international
trends, most South African companies opt to rent recruitment application technology and services (28 per cent), compared to 6 per cent
who opted to develop their own technology (O’Callaghan, 2008).
An interesting statistic revealed by the survey is that over 84 per cent of South African companies store resumés in a talent-pool
database. The survey further indicated that the key drivers for e-recruitment appear to be:
• Reducing recruitment costs
• Broadening the selection pool
• Increasing the speed of time to hire
• Greater flexibility and ease for candidates, and
• Strengthening of the employer brand.
However, e-recruitment also has some disadvantages. It can limit the applicant audience, as the Internet is not the first choice for all job seekers – a large portion of South Africa’s talent does not necessarily have access to a computer. E-recruitment may also cause application overload or attract inappropriate applications, and in terms of equity considerations it may limit the attraction of those unable to utilise technology fully, for example, certain disabled groups. E-recruitment may also give rise to allegations of discrimination, in particular through the use of limited keywords in CV search tools. Potential candidates may also be ‘turned off’, particularly if the web site is badly designed or technical difficulties are encountered (O’Callaghan, 2008).
Although e-recruitment appears to have become the method of choice for many organisations, agency recruitment has also become
quite popular with the rise of outsourced recruitment and the use of flexible labour (Redman & Wilkinson, 2009). South African
companies appear to take a ‘partnership’ approach, working closely with recruitment consultancies and specialised web agencies who
manage the online process for them as they do not have the necessary skills in-house (O’Callaghan, 2008).
Recruitment planning
Filling a vacancy begins with the job analysis process. The job analysis process and its two products, namely job descriptions (including
competency profiles) and person specifications (discussed in detail in chapter 4), are regarded as the most fundamental pre-recruitment
activities. Person specifications, which detail the personal qualities that workers require to perform the job, are derived from the job
descriptions and/or competency profiles that result from the job analysis process (McCormack & Scholarios, 2009). These activities
inform the human resource planning process discussed in chapter 3. The actual process of recruitment planning begins with a
specification of human resource needs as determined by the human resource planning process. In practical terms, this means a clear
specification of the numbers, skills mix, levels, and the timeframe within which such requirements must be met (Cascio & Aguinis,
2005). In addition, the company’s affirmative action targets as set out in the employment equity plan must be examined and considered
in the recruitment or resourcing plan. The next step is to project a timetable for reaching the set targets and goals, based on the expected job vacancies set out in the recruitment plan. In addition, the most appropriate and efficient methods – those that will yield the best results in the shortest time at the lowest cost – must also be determined.
Projecting a timetable involves the estimation of three key parameters (Cascio & Aguinis, 2005): the time, the money, and the
potential candidates necessary to achieve a given hiring rate. The basic statistic needed to estimate these parameters is the yield ratio or
number of leads needed to generate a given number of hires in a given time. Yield ratios include the ratios of leads to invitations,
invitations to interviews, interviews (and other selection instruments) to offers, and offers to hires obtained over a specified time period
(for example, three months, or six months). Time-lapse data provide the average intervals between events, such as between the extension
of an offer to a candidate and acceptance or between acceptance and addition to the payroll. Based on previous recruitment experience
accurately recorded and maintained by the company, yield ratios and time-lapse data can be used to determine trends and generate
reliable predictions (assuming labour market conditions are comparable).
If no prior experience data exist, hypotheses or ‘best guesses’ can be used, and the outcome (or yield) of the recruitment strategy can
be monitored as it unfolds (Cascio & Aguinis, 2005). In this regard, the concept of a recruitment yield pyramid (shown in Figure 6.1) is
useful when deciding on a recruitment strategy. Let us say that the goal is to hire five managers. The company has learned from past
experience that for every two managers who are offered jobs, only one will accept. Therefore, the company will need to make ten offers.
Furthermore, the company has learned that to find ten managers who are good enough to receive an offer, 40 candidates must be
interviewed; that is, only one manager out of four is usually judged acceptable. However, to get 40 managers to travel to the company
for an interview, the company has to invite 60 people, which indicates that typically only two out of three candidates are interested
enough in the job to agree to be interviewed. Finally, to find 60 potentially interested managers, the company needs to get four times as
many contacts or leads. Some people will not want to change jobs, others will not want to move, and still others will simply be
unsuitable for further consideration. Therefore, the company has to make initial contacts with about 240 managerial candidates. Note the
mushrooming effect in recruiting applicants. Stated in reverse order, 240 people are contacted to find 60 who are interested, to find 40
who agree to be interviewed, to find ten who are acceptable, to get the five people who will accept the offer. These yield ratios (that is,
240 : 5) differ depending on the organisation and the job in question (Muchinsky et al, 2005:126).
Figure 6.1 Recruitment yield pyramid (Adapted from Hawk (1967), cited in Muchinsky et al, 2005:126)
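To make the arithmetic of the pyramid concrete, the short Python sketch below (an illustration added here, not part of the cited sources) works backwards from the number of hires needed through each stage’s yield ratio. The function name and stage labels are hypothetical; the ratios reproduce the managerial example above, and the same function can be reused for the engineering example in Reflection 6.3 below.

# A minimal sketch of the recruitment yield pyramid arithmetic: work
# backwards from the hires needed through each stage's yield ratio.

def contacts_needed(hires, stage_ratios):
    """stage_ratios is an ordered list of (stage_name, ratio) pairs, where a
    ratio of 2.0 means two candidates are needed at this stage for every one
    candidate who survives to the next stage."""
    counts = [("hires", hires)]
    required = hires
    for name, ratio in stage_ratios:
        required = required * ratio
        counts.append((name, required))
    return counts

# The managerial example from the text: 5 hires; offer-to-acceptance 2 : 1,
# interview-to-offer 4 : 1, invitation-to-interview 3 : 2, lead-to-invitation 4 : 1.
pyramid = contacts_needed(5, [
    ("offers", 2),         # 2 offers per acceptance        -> 10
    ("interviews", 4),     # 4 interviews per offer         -> 40
    ("invitations", 1.5),  # 3 invitations per 2 interviews -> 60
    ("leads", 4),          # 4 leads per invitation         -> 240
])

for stage, number in pyramid:
    print(f"{stage:12s}{number:8.0f}")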
Additional information can be derived from time-lapse data. For example, past experience may show that the interval from receipt of
a resumé to invitation averages five days. If the candidate is still available, he or she will be interviewed five days later. Offers are
extended on average three days after the interview, and within a week after that, the candidate either accepts or rejects the offer. If the
candidate accepts, he or she reports to work, on average, four weeks from the date of acceptance. Therefore, if the company begins
today, the best estimate is that it will be more or less 40 days before the first new employee is added to the payroll. This information
enables the company to determine the ‘length’ of the recruitment pipeline and adjust the recruitment plan accordingly (Cascio &
Aguinis, 2005).
Although yield ratios and time-lapse data are important for estimating recruiting candidates and time requirements, these parameters
do not take into account the cost of recruitment campaigns, the cost-per-applicant or qualified applicant, and the cost of hiring. Cost
estimates such as the following are essential (Aamodt, 2010; Cascio & Aguinis, 2005):
• Cost-per-applicant and/or cost-per-qualified applicant (determined by dividing the amount spent on each recruitment strategy by the number of applicants, or qualified applicants, that it generates – see the sketch after this list)
• Cost-per-hire (salaries, benefits and overtime premiums)
• Operational costs (for example, telephone, candidate and recruiting staff travel and accommodation expenses; agency fees,
advertising expenses, medical expenses for pre-employment physical examinations, and any other expenses), and
• Overhead expenses such as rental expenses for facilities and equipment.
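As a simple illustration of how such cost estimates are calculated, the Python sketch below computes cost-per-applicant, cost-per-qualified-applicant, and cost-per-hire for each recruitment source; the source names, spend figures, and applicant counts are entirely hypothetical, and each metric is taken as the source’s total spend divided by the relevant count.

# A minimal sketch, with made-up figures, of per-source recruitment cost metrics.

recruitment_sources = {
    "newspaper_ad": {"spend": 45000.0, "applicants": 300, "qualified": 60, "hires": 5},
    "job_board":    {"spend": 20000.0, "applicants": 500, "qualified": 50, "hires": 4},
}

for source, s in recruitment_sources.items():
    cost_per_applicant = s["spend"] / s["applicants"]
    cost_per_qualified = s["spend"] / s["qualified"]
    cost_per_hire = s["spend"] / s["hires"]
    print(f"{source}: R{cost_per_applicant:.2f} per applicant, "
          f"R{cost_per_qualified:.2f} per qualified applicant, "
          f"R{cost_per_hire:.2f} per hire")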
Reflection 6.3
A large manufacturing company has drawn up a recruitment strategy to recruit 100 additional engineers. In your role as industrial
psychologist in the recruitment section, management has requested you to assist in the planning of the recruitment of potential
candidates. They also requested an estimate of the yield ratio to be able to establish the costs for budgeting purpose. You have decided
to establish the yield ratios based on previous recruitment experience in the company and to present the data in the form of a
recruitment yield pyramid.
Based on existing past data, you are able to make the following predictions:
With technical candidates, you have learned from past experience that for every two candidates who are offered jobs, only one will
accept (an offer-to-acceptance ratio of 2 : 1). Based on this ratio, you determine that the company will have to extend 200 offers.
Furthermore, if the interview-to-offer ratio in the past has been 3 : 2 (that is, only two engineers out of three are usually judged
acceptable), then the company needs to conduct 300 interviews.
Since the invitations-to-interview ratio is 4 : 3 (that is, typically only three out of four candidates are interested enough in the job to
agree to travel to be interviewed), the company needs to invite as many as 400 candidates. Finally, past experience has shown that
contacts or leads are required in a 6 : 1 proportion to find suitable candidates to invite, so the company needs to make 2 400 contacts to find the 400 potentially interested engineers to invite.
Using this experience data, draw a recruitment yield pyramid that will assist you in presenting the yield ratio estimates to
management.
In addition to an analysis of cost-per-hire, yield ratios, and time-lapse from candidate identification to hire, an analysis of the source
yield adds to the effectiveness of a recruitment strategy. Source yield refers to the ratio of the number of candidates generated from a
particular source to hires from that source. For example, a survey of 281 corporate recruiters found that newspapers were the top source
of applicants, followed by e-recruiting and employee referrals (Cascio & Aguinis, 2005). Looking at the number of successful
employees or hires generated by each recruitment source is an effective method, because generally not every candidate will be qualified,
nor will every qualified applicant become a successful employee (Aamodt, 2010).
CANDIDATE SCREENING
Personnel selection occurs whenever the number of applicants exceeds the number of job openings. Having attracted a pool of
applicants, the next step is to select the best person for the job. Screening refers to the earlier stages of the selection process, with the
term ‘selection’ being used when referring to the final decision-making stages (Cascio & Aguinis, 2005). In this regard, screening
involves reviewing information about job applicants by making use of various screening tools or devices (such as those discussed in
chapter 5) to reduce the number of applicants to those candidates with the highest potential for being successful in the advertised
position.
A wide variety of data sources can be used in screening and selecting potential employees to fill a vacancy. We will consider some
of the most widely-used initial screening methods, including resumés and job applications, reference checks and letters of
recommendation, and personal history data. Job applicant testing and selection interviews are also important predictor elements of the
screening and selection decision-making process. However, because of the variety and complexity of tests used in applicant screening
and selection, the use of psychological testing and interviewing as predictors of employee performance has been discussed in more
detail in chapter 5.
Importance of careful screening
The careful testing and screening of job candidates are important because they lead to quality decisions that ultimately contribute to
improved employee and organisational performance. The screening and selection process and in particular the tools and techniques
applied by industrial psychologists enable a company to identify and employ employees with the right skills and attribute mix who will
perform successfully in their jobs. Careful screening of job applicants can also help to reduce dysfunctional behaviour at work by screening out applicants prone to undesirable behaviours such as theft, vandalism, drug abuse, and voluntary absenteeism, which influence the climate and performance of the organisation negatively.
Incompetent selection or negligent hiring also has legal implications for a company. Negligent hiring refers to the hiring of workers
with criminal records or other such problems without proper safeguards. Courts will find employers liable when employees with
criminal records or other problems use their access to customers’ homes or similar opportunities to commit crimes. Avoiding negligent
hiring claims requires taking ‘reasonable’ action to investigate the candidates’ background. Among other things, employers must make a
systematic effort to gain relevant information about the applicant, verify documentation, follow up on missing records or gaps in
employment, and keep a detailed log of all attempts to obtain information, including the names and dates of phone calls and other
requests (Dessler, 2009).
Most importantly, particularly in the South African legal context, effective screening and selection can help prevent decisions related
to adverse impact and fairness, which are discussed in more detail towards the end of this chapter. Equal employment opportunity
legislation mandates fair employment practices in the screening and selection of job candidates. Screening and selection techniques (for
example, application forms, psychological tests, and interviews) should be job related (or ‘valid’, in personnel psychology terms) and
minimise adverse impact on disadvantaged and minority groups. Questions asked in the screening and selection process that are not job
related, and especially those that may lead to job discrimination, such as inquiries about age, ethnic background, religious affiliation,
marital status, or finances, should not be included in screening and selection devices. Companies must also try to prevent reverse
discrimination against qualified members of advantaged groups. Targets of discrimination may include historically disadvantaged
groups, older workers, women, disabled persons, gay people, and people who are less physically attractive (Riggio, 2009; Schultz &
Schultz, 2010).
Finally, as previously discussed, recruiting and hiring or employing people can be a costly exercise. Effective screening procedures
can also help contribute to the organisation’s bottom line by ensuring that the tools and techniques applied are cost-effective.
Short-listing candidates
Short-listing is the initial step in the screening process. It is done by comparing the information provided in the application form or
curriculum vitae (CV or resumé) and information obtained from testing (or measures of performance) with the selection criteria, as
identified by the job analysis process and stipulated in the job or person specification. In theory, those who match the selection criteria
will go on to the next stage of the selection process. The profession of personnel psychology follows a psychometric approach which
aims to measure individual characteristics and match these to the requirements of the job in order to predict subsequent performance. To
achieve this, candidates who are successfully short-listed face a number of subsequent selection devices for the purpose of predicting
whether they will be successful in the job. These can be viewed as a series of hurdles to jump, with the ‘winner’ being the candidate or
candidates who receive the job offer(s). The techniques that industrial psychologists apply for short-listing are based on the scientific
principles that underlie the science of decision theory. These scientific selection procedures and techniques are discussed in the section
dealing with selection decision-making.
Evaluating written materials
The first step in the screening process involves the collection of biographical information or biodata on the backgrounds of job
applicants. This involves evaluating written materials, such as applications and resumés (CVs), references and letters of
recommendation, application blanks, and biodata inventories. The rationale for these screening devices is the belief that past
experiences or personal attributes can be used to predict work behaviour and potential for success. Because many behaviours, values,
and attitudes remain consistent throughout life, it is not unreasonable to assume that behaviour in the future will be based on behaviour
in the past (Aamodt, 2010; Schultz & Schultz, 2010).
The initial determination about the suitability for employment is likely to be based on the information that applicants supply on a
company’s application blank or questionnaire. The application blank is a formal record of the individual’s application. It contains
biographical data and other information that can be compared with the job specification to determine whether the applicant matches the
minimum job requirements. With the increasing popularity of e-recruiting, fewer organisations today use the standard paper forms as
they tend to rely instead on applications completed online. Although many companies require all applicants to complete an application
form, standard application forms are usually used for screening lower-level positions in the organisation, with resumés used to provide
biographical data and other background information for higher-level jobs.
The main purpose of the application blank and resumé is to collect biographical information such as education, work experience,
skills training, and outstanding work and school accomplishments. Such data are believed to be among the best predictors of future job
performance. Moreover, written materials are usually the first contact a potential employer has with a job candidate, and therefore the
impressions of an applicant’s credentials received from a resumé or application are very important. Research has also shown that
impressions of qualifications from written applications have influenced impressions of applicants in their subsequent interviews (Macan
& Dipboye, 1994; Riggio, 2009). In addition, work samples (also discussed in chapter 5) are often used in the screening process as a
measure to predict future job performance. Work samples usually contain a written sample in the form of a report or document or
portfolio of an applicant’s work products as an indication of their work-related skills (Riggio, 2009).
Biographical information blanks or questionnaires (BIBs) are used by organisations to quantify the biographical information
obtained from application forms. Weighted application forms assign different weights to each piece of information on the form. The
weights are determined through detailed research, conducted by the organisation, to determine the relationship between specific bits of
biographical data, often referred to as biodata, and criteria of success on the job. Although research has also shown that biodata
instruments can predict work behaviour in a wide spectrum of jobs, their ability to predict employee behaviour has been shown to
decrease with time. Furthermore, although biodata instruments are valid and no more prone to adverse impact than other selection
methods, applicants tend to view them and personality tests as being the least job-related selection methods. Therefore, their use may
increase the chance of a lawsuit being filed, but not the chance of losing a lawsuit (Aamodt, 2010).
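The general idea of a weighted application blank can be illustrated as follows: each biodata item carries an empirically derived weight, and an applicant’s score is the weighted sum of his or her responses. The items, weights, and responses in the Python sketch below are hypothetical and are not drawn from any validated instrument.

# A minimal, illustrative weighted application blank score: the weighted sum
# of an applicant's biodata item responses. All items and weights are invented.

item_weights = {
    "years_relevant_experience": 1.5,
    "completed_matric": 2.0,
    "tenure_in_last_job_years": 1.0,
    "relevant_certification": 2.5,
}

applicant_responses = {
    "years_relevant_experience": 4,
    "completed_matric": 1,        # 1 = yes, 0 = no
    "tenure_in_last_job_years": 3,
    "relevant_certification": 0,  # 1 = yes, 0 = no
}

biodata_score = sum(item_weights[item] * value
                    for item, value in applicant_responses.items())
print(f"Weighted biodata score: {biodata_score}")  # 4*1.5 + 1*2.0 + 3*1.0 + 0*2.5 = 11.0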
Most organisations attempt to confirm the accuracy of biographical information by contacting former employers and the persons
named as references. However, research has suggested that references and letters of recommendation may have limited importance in
employee selection because these sources of information tend to be distorted in an overly positive direction so that they may be useless
in distinguishing among applicants. People also usually tend to supply the names of persons who will give positive recommendations. In
addition, because of increased litigation or lawsuits (for example, for slander) against individuals and former employers who provide
negative recommendations, many companies are refusing to provide any kind of reference for former employees except for job titles and
dates of employment. Few give evaluative information such as performance ratings, or answer questions about why the employee left
or whether the company would re-employ the employee, making it difficult for an organisation to verify certain kinds of information
provided on an application form or application blank. Some companies simply forgo the use of reference checks and letters of
recommendation (Riggio, 2009; Schultz & Schultz, 2010).
As previously mentioned, psychological testing and selection interviews are important aspects of the screening and selection process.
You are advised to review chapter 5 to deepen your understanding of the role that these selection devices play in predicting human
performance when screening and selecting candidates.
Managing applicant reactions and perceptions
Recently researchers have developed an interest in examining selection from the applicant’s perspective, recognising that not only do
companies select employees, but applicants also select the organisation to which they will apply and where they are willing to work
(Hausknecht, Day & Thomas, 2004). Managing applicants’ perceptions of and reactions to the recruitment, screening and selection
process and procedures is especially important in a recruitment context that is characterised by skills shortages and an inadequate supply
of experienced and high-performing labour. The ‘war for talent’ and the search for more engaged and committed employees has led to
companies realising that following a purely selective approach to hiring based on matching job and person characteristics has become
inadequate for finding the ‘right’ employees. The overall resourcing strategy and in particular the screening and selection process is
therefore viewed as an interactive social process in which the applicant has as much power to decide whether to engage in the application
process as the organisation. This places greater importance on the perceptions of potential and actual applicants (McCormack &
Scholarios, 2009). Negative impressions may be caused by uninformative web sites; uninterested recruiters; long, complicated
application and screening processes; or any message which communicates undesirable images of the employer brand.
Applicants who find particular aspects of the screening and selection system invasive may view the company as a less attractive
option in the job search process. Maintaining a positive company image during the screening and selection process is of significant
importance because of the high costs associated with losing top candidates, especially those with scarce and critical skills. Moreover,
candidates with negative reactions to a screening and selection experience may dissuade other potential applicants from seeking
employment with the organisation. Candidates may also be less likely to accept an offer from a company with screening and selection
practices that are perceived as unfavourable, unfair, or biased. More importantly, as previously mentioned, applicant reactions may be
related to the filing of legal complaints and court challenges, particularly in situations where a specific selection technique is perceived
as invasive, unfair, biased, or inappropriate. In addition, although there is little empirical data on these issues, it is also possible that
applicants may be less likely to reapply with an organisation or buy the company’s products if they feel mistreated during the screening
and selection process (Hausknecht et al, 2004).
The term ‘applicant reactions’ has been used to refer to the growing body of literature that examines ‘attitudes, affect, or cognitions
an individual might have about the hiring process’ (Ryan & Ployhart, 2000:566). Applicant perceptions take into account applicant views
concerning issues related to organisational justice, such as the following (Greenberg, 1993):
• Perceptions regarding the perceived fairness of the outcome of the selection process, including the test scores or rating earned
by applicants on a given selection device (distributive justice)
• Rules and procedures used to make those decisions, including the perceived predictive validity of selection devices and the
length of the selection process (procedural justice)
• Sensitivity and respect shown to individuals (interpersonal justice), and
• Explanations and accounts given to individuals (informational justice).
The basic premise of organisational justice theory in selection contexts is that applicants view selection procedures in terms of the
aforementioned facets of justice, and that these perceptions influence candidates’ thoughts and feelings about testing, their performance
in the screening and selection process, and broader attitudes about tests and selection in general (Hausknecht et al, 2004). Research by
Wiechmann and Ryan (2003) indicates, for example, that applicants perceive selection more favourably when procedures are not
excessively long and when applicants receive positive outcomes. Providing applicants with an adequate explanation for the use of
selection tools and decisions may also foster positive perceptions among applicants. In addition, researchers have proposed that
applicant perceptions may be positively related to perceived test ease (Wiechmann & Ryan, 2003, cited by Hausknecht et al, 2004) and
the transparency of selection procedures (Madigan, 2000). Overall, research suggests that applicants will not react negatively to tools
that are well-developed, job-relevant, and used in screening and selection processes in which the procedures are appropriately and
consistently applied with time for two-way communication, decisions are explained, and applicants are treated respectfully and
sensitively (Ryan & Tippins, 2004). Table 6.2 provides an overview of a list of selection audit questions that can be used to manage
applicant perceptions and reactions during the recruitment, screening and selection process.
Reflection 6.4
Review chapter 1. Study case study 2 in Reflection activity 1.2. Can you identify the various aspects of importance in recruitment
discussed in the case study? How do quality decisions during the planning and execution of recruitment activities influence the
performance of the organisation?
CONSIDERATIONS IN EMPLOYEE SELECTION
Employee selection is concerned with making decisions about people. Employee selection therefore entails the actual process of
choosing people for employment from a pool of applicants by making inferences (predictions) about the match between a person and a
job. By applying specific scientific techniques and procedures in a systematic or step-by-step manner, the candidate pool is narrowed
through sequential rejection decisions until a selection is made. Since the outcomes of staffing decisions are expected to serve a
business-related purpose, staffing decisions are aimed at populating an organisation with workers who possess the KSAOs that will
enable the organisation to sustain its competitive edge and succeed in its business strategy. However, decision-makers cannot know with
absolute certainty the outcomes of rejecting or accepting a candidate. In this regard, they generally prefer to infer or predict in advance
the future performance of various candidates on the basis of available information and choose those candidates with the highest
probability of succeeding in the position for which they applied (Cascio & Aguinis, 2005; Landy & Conte, 2004; Riggio, 2009).
Table 6.2 Audit questions for addressing applicant perceptions and reactions to selection procedures (Ryan & Tippins, 2004:314)
[Reprinted with the permission of CCC/Rightslink on behalf of Wiley Interscience.]
Selection audit questions
• Have we determined which applicant groups to target?
• Are efforts being made to recruit a diverse applicant pool?
• Are efforts being made to have a low selection ratio (that is, a low number of people selected relative to the total number of applicants)?
• Are we considering combinations of tools to achieve the highest validity and lowest adverse impact?
• Have we considered how our ordering of tools affects validity and adverse impact?
• Are we considering all aspects of job performance in choosing tools?
• Have we determined which recruiting sources provide the best yield?
• Are we providing applicants with the specific information they desire?
• Have we selected recruiters who are warm and friendly?
• Is appropriate attention being given to early or pre-recruitment activities?
• Are applicants being processed quickly?
• Do we solicit feedback from applicants on satisfaction with the staffing process?
• Are applicants provided with accurate information about the job-relatedness of the selection process?
• Are applicants provided with accurate information on which to judge their fit with the position?
• Do we have evidence that selection procedures are job-related (that is, valid)?
• Are applicants treated with respect?
• Is the selection process consistently administered?
• Does the process allow for some two-way communication?
• Is feedback provided to applicants in an informative and timely manner?
‘Best-practice’ employee selection is usually associated with the psychometric model. As discussed in chapter 5, the psychometric
approach recommends rigorously-developed psychometric tests, performance-based or work-simulation methods, and the use of
multiple methods of assessment, all designed to measure accurately and objectively and predict candidates’ knowledge, skills,
competencies, abilities, personality and attitudes (KSAOs). Answering questions such as, ‘How do we identify people with knowledge,
skill, competency, ability and the personality to perform well at a set of tasks we call a job?’ and, ‘How do we do this before we have
ever seen that person perform a job?’ has given prominence to the psychometric model in terms of its utility and value in predicting the
potential job performance ability of applicants in an objective manner (Scholarios, 2009).
Various psychometric standards are applied to evaluate the quality of staffing decision outcomes that are based on the psychometric
model. These include: reliability, validity, utility, fairness and legal considerations. Each of these standards will be considered
separately.
Reliability
The aspect of reliability has been discussed in detail in chapter 5. Reliability is the extent to which a score from a selection measure is
stable and free from error. If a score from a measure is not stable or error free, it is not useful (Aamodt, 2010). Reliable methods are
accurate and free from contamination. They have high physical fidelity with job performance itself, are standardised across applicants,
have some degree of imposed structure, and show consistency across multiple assessors (Redman & Wilkinson, 2009).
Validity
As discussed in chapters 4 and 5, validity refers to the degree to which inferences from scores on tests or assessments are justified by the
evidence. Reliability and validity are related. As with reliability, a test must be valid to be useful. The potential validity of a test is
limited by its reliability. If a test has poor reliability, it cannot have high validity. However, a test’s reliability does not necessarily imply
validity (Aamodt, 2010).
Selection methods must be valid, that is, relevant for the work behaviours they are meant to predict. As discussed in chapter 4, to be
valid, assessment must be designed around a systematic job analysis, job description, and job or person specification for the job. Since
the linkage between predictors and criteria is the essence of validity, a valid method should also show an association between scores on
the assessment tool (predictor) and desired job behaviours (criterion). As shown in the scatter plots depicted in Figure 6.2, this linkage is
often expressed as a correlation coefficient – known as a criterion-related validity coefficient (r) – representing the strength and
direction (positive or negative) of the relationship between scores on the predictor (or proposed selection method, for example, test or
test battery) and scores on a criterion (or proxy measure) of job performance (for example, level of performance). As discussed in
chapter 2, this correlation coefficient (r) can range from 0,00 (chance prediction or no relationship) to 1,0 (perfect prediction). For
example, a researcher measuring the relationship between the age of factory workers and their job performance finds a correlation
coefficient of approximately 0,00, which shows that there is no relationship between age and performance (Riggio, 2009).
As you will recall from studying chapter 2, a positive correlation coefficient means that there is a positive linear relationship
between the predictor and criterion, where an increase in one variable (either the predictor or criterion) is associated with an increase in
the other variable. A negative correlation coefficient indicates a negative linear relationship: an increase in one variable (either the
predictor or criterion) is associated with a decrease in the other.
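By way of illustration, the Python sketch below estimates a criterion-related validity coefficient as the Pearson correlation between a set of predictor (test) scores and the corresponding criterion (job-performance) scores; the data are hypothetical and serve only to show the calculation.

# A minimal sketch: the criterion-related validity coefficient (r) is the
# Pearson correlation between predictor scores and criterion scores.

import numpy as np

test_scores = np.array([42, 55, 61, 48, 70, 66, 52, 59, 75, 44])   # predictor
performance = np.array([65, 72, 78, 70, 88, 80, 69, 74, 90, 66])   # criterion

r = np.corrcoef(test_scores, performance)[0, 1]
print(f"Criterion-related validity coefficient r = {r:.2f}")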
Predictor validity
Scatter plot (a), shown in Figure 6.2, illustrates a case of perfect prediction (which is, in practice, rare), that is, a perfect positive linear
relationship as indicated by the validity coefficient, r = +1,00. In practical terms this means that each and every test score has one and
only one performance score associated with it. If, for example, a candidate scores a 50 on the test, we can predict with great precision
that the candidate will achieve a performance score of 80. Scatter plot (b) depicts a case where the validity coefficient, r = 0,00. In
practical terms this means that no matter what predictor (test) score a candidate obtains, there is no useful predictive information in that
test score. Regardless of whether the candidate obtains a test score of 10, 50 or 60, there is no way to predict what that candidate’s
performance will be. We know only it will be somewhere between 60 and 100 (Landy & Conte, 2004).
Figure 6.2 Scatter plots depicting various levels of relationships between a predictor (test score) and a criterion (measure of
performance) (Based on Landy & Conte, 2004:263)
Figure 6.3 shows a predictor-criterion correlation of r = 0,80, which is reflected in the oval shape of the scatter plot of the predictor
and criterion scores. Apart from representing the relationship between the predictor and criterion, the oval shape also represents the
clustering of the data points of the individual scores on the scatter plot. Along the predictor axis is a vertical line – the predictor cut-off –
that separates passing from failing applicants. People above the cut-off are accepted for hire; those below it are rejected. Also, observe
the three horizontal lines. The solid line, representing the criterion performance of the entire group, cuts the entire distribution of scores
in half. The dotted line, representing the criterion performance of the rejected group, is below the performance of the total group.
Finally, the dashed line, which is the average criterion performance of the accepted group, is above the performance of the total group.
The people who would be expected to perform the best on the job fall above the predictor cut-off. In a simple and straightforward sense,
that is what a valid predictor does in personnel selection: it identifies the more capable people from the total pool (Muchinsky et al,
2005:138).
A different picture emerges for a predictor that has no correlation with the criterion, as shown in Figure 6.4. As previously
mentioned, the oval shape represents the clustering of the data points of the scores on the scatter plot. Again, the predictor cut-off
separates those accepted from those rejected. This time, however, the three horizontal lines are all superimposed, that is, the criterion
performance of the accepted group is no better than that of the rejected group, and both are the same as the performance of the total
group. The value of the predictor is measured by the difference between the average performance of the accepted group and the average
performance of the total group. As can be seen, these two values are the same, so their difference equals zero. In other words, predictors
that have no validity also have no value. On the basis of these examples, we see a direct relationship between a predictor’s value and its
validity: the greater the validity of the predictor, the greater its value, as measured by the increase in average criterion performance for
the accepted group over that for the total group (Muchinsky et al, 2005:138).
Figure 6.3 Effect of a predictor with a high validity (r = 0,80) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
Selection ratios
The selection ratio (SR) indicates the relationship between the number of individuals assessed and the number actually hired (SR = n/N,
where n represents the number of jobs available, or number of hires to be made, and N represents the number of people assessed). For
example, if a company has short-listed 100 applicants for 10 vacant positions, the SR would be SR = 0,10 (10/100). If there were 200
applicants instead of 100, the selection ratio would be SR = 0,05 (10/200). Paradoxically, low selection ratios are actually better than high selection ratios, because the more people assessed, the greater the likelihood that individuals who score
high on the test will be found. By assessing 200 applicants instead of 100, the company is more likely to find high scorers (and better
performers), which, of course, will also be more expensive. In addition, a valid test will translate into a higher likelihood that a good
performer will be hired (Landy & Conte, 2004).
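The calculation itself is straightforward, as the brief Python sketch below shows for the two examples just given (ten hires from 100 and from 200 assessed applicants).

# SR = n/N: number of hires to be made divided by the number of people assessed.

def selection_ratio(hires_to_make, people_assessed):
    return hires_to_make / people_assessed

print(selection_ratio(10, 100))  # 0.10
print(selection_ratio(10, 200))  # 0.05 - the lower (more favourable) ratio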
The effect of the SR on a predictor’s value can be seen in Figures 6.5 and 6.6. Let us assume that we have a validity coefficient of r
= 0,80 and the selection ratio is SR = 0,75, meaning we will hire three out of every four applicants. Figure 6.5 shows the
predictor-criterion relationship, the predictor cut-off that results in accepting the top 75 per cent of all applicants, and the respective
average criterion performances of the total group and the accepted group. If a company hires the top 75 per cent, the average criterion
performance of that group is greater than that of the total group (which is weighted down by the bottom 25 per cent of the applicants).
Again, value is measured by this difference between average criterion scores. Furthermore, when the bottom 25 per cent is lopped off
(the one applicant out of four who is not hired), the average criterion performance of the accepted group is greater than that of the total
group (Muchinsky et al, 2005:139, 140).
Figure 6.4 Effect of a predictor test with no validity (r = 0,00) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
Figure 6.5 Effect of a large selection ratio (SR = 0,75) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
In Figure 6.6 we have the same validity coefficient (r = 0,80), but this time the SR = 0,25; that is, out of every four applicants, we
will hire only one. The figure shows the location of the predictor cut-off that results in hiring only the top 25 per cent of all applicants
and the respective average criterion performances of the total and accepted groups. The average criterion performance of the accepted
group is not only above that of the total group, as before, but the difference is also much greater. In other words, when only the top 25
per cent are hired, their average criterion performance is greater than the performance of the top 75 per cent of the applicants, and both
of these values are greater than the average performance of the total group (Muchinsky et al, 2005:140).
The relationship between the SR and the predictor’s value should be clear: the smaller the SR, the greater the predictor’s value. This
should also make sense intuitively. The more particular decision-makers are in admitting people (that is, the smaller the selection ratio),
the more likely it is that the people admitted (or hired) will be of the quality the company desires (Muchinsky et al, 2005:140).
Prediction errors and cut scores
The level of validity is associated with prediction errors. Prediction errors can be costly for an organisation because they lead to errors in
staffing decisions. Prediction errors are common whenever validity coefficients are less than 1,00. In the case of a validity
coefficient, for example, r = 0,00 (as depicted in Figure 6.2 (b) and Figure 6.4), the ‘best’ prediction about the eventual performance
level of any applicant, regardless of the test score, would be average performance. In contrast, when the validity coefficient is r = +1,00
(perfect prediction), there will be no error in prediction of eventual performance (and therefore no error in the staffing decision) (Landy
& Conte, 2004).
Two types of prediction errors may be committed by decision-makers: false positive errors or false negative errors. A false positive
error occurs when a decision-maker falsely predicted that a positive outcome (for example, that an applicant who was hired would be
successful) would occur, and it did not; the person failed. The decision is false because of the incorrect prediction that the applicant
would have performed successfully and positive because the applicant was hired (Landy & Conte, 2004).
Figure 6.6 Effect of a small selection ratio (SR = 0,25) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
A false negative error occurs when an applicant who would have performed adequately or successfully is
rejected. The decision is false because of the incorrect prediction that the applicant would not have performed successfully and negative
because the applicant was not hired. Whereas a true positive refers to a case when an applicant is hired based on a prediction that he or
she will be a good performer and this turns out to be true, a true negative refers to a case when an applicant is rejected based on an
accurate prediction that he or she will be a poor performer (Landy & Conte, 2004). Figure 6.7 graphically presents the two types of true
and false decisions.
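To show how the four outcomes arise in practice, the Python sketch below classifies a handful of applicants using a predictor cut-off and a criterion cut-off; the scores and cut-offs are invented purely for illustration.

# A minimal sketch classifying hypothetical selection decisions into true/false
# positives and negatives, given a predictor cut-off and a criterion cut-off.

from collections import Counter

applicants = [  # (predictor/test score, later criterion/performance score)
    (72, 81), (65, 55), (48, 70), (55, 62), (40, 45), (80, 90), (52, 48), (45, 66),
]
predictor_cutoff = 50   # hire if the test score is 50 or above
criterion_cutoff = 60   # 'successful' if the performance score is 60 or above

def classify(test, performance):
    hired = test >= predictor_cutoff
    successful = performance >= criterion_cutoff
    if hired and successful:
        return "true positive"
    if hired and not successful:
        return "false positive"   # hired, but failed on the job
    if not hired and successful:
        return "false negative"   # rejected, but would have succeeded
    return "true negative"

print(Counter(classify(test, perf) for test, perf in applicants))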
Cut scores are generally used to minimise both types of errors by moving the score that is used to hire individuals up or down. A cut
score (also referred to as a cut-off score) is a specified point in a distribution of scores below which candidates are rejected. As can be
seen in Figure 6.7, the decision about where to set the cut score in conjunction with a given selection ratio can detract from the
predictive value of that device. In Figure 6.7, the predictor cut-off, X’, equates false positive (lower right) and false negative (upper left)
errors, resulting in a minimum of decision errors. Raising the cut-off to W’ results in a decrease of false positives (A), that is, applicants
erroneously accepted, but an even greater increase in false negatives (B), that is, applicants erroneously rejected. Similarly, lowering the
cut-off to Z’ yields a decrease in false negatives (C), but a larger increase in false positives (D), or applicants erroneously hired. In
practical terms, Figure 6.7 illustrates that by lowering the cut score from 50 to 25, the number of candidates that would be incorrectly
rejected will be reduced, but the percentage of poor performers among the candidates that are hired will also substantially increase
(Landy & Conte, 2004:264). However, in some situations where the cost of a performance mistake can be catastrophic (for example, a
nuclear power plant operator hired by Koeberg Power Station, or an airline pilot hired by South African Airways), a better strategy may
be to be very selective and accept a higher false negative error rate to reduce the frequency of false positive errors.
Figure 6.7 Effect on selection errors of moving the cut-off score (Landy & Conte, 2004:264)
[Reproduced with permission of The McGraw-Hill Companies.]
Establishing cut scores
Criterion-referenced cut scores and norm-referenced cut scores are two methods of establishing cut scores. As illustrated in Figure 6.8,
criterion-referenced cut scores (also often called domain-referenced cut scores) are established by considering the desired level of
performance for a new hire and finding the test score (predictor cut-off score) that corresponds to the desired level of performance (
criterion cut-off score). Often the cut score is determined by having a sample of employees take the chosen test, measuring their job performance (for example, through supervisory ratings), and then seeing what test score corresponds to acceptable performance as rated by
the supervisor. Asking a group of subject matter experts (SMEs) to examine the test in question, consider the performance demands of
the job, then pick a test score that they think a candidate would need to attain to be a successful performer, is an alternative method of
setting criterion-referenced cut scores. However, these techniques tend to be complex for SMEs to accomplish and have been the subject
of a great deal of debate with respect to their accuracy (Landy & Conte, 2004).
Norm-referenced cut scores are based on some index (generally the average) of the test-takers’ scores rather than any notion of job
performance (the term ‘norm’ is a shortened version of the word ‘normal’, meaning average). In educational settings such as the
University of South Africa, for example, passing scores are typically pegged at 50 per cent. Any score below 50 is assigned a letter
grade of F (for ‘failing’). In the selection of Masters’ student candidates, a university may, for example, decide to peg the selection of
candidates with a graduate pass average (GPA) of 60 per cent for an honours level qualification for the short-list to be invited to a
selection interview. There is no connection between the chosen cut-off score (60 per cent) and any aspect of anticipated performance
(other than the simple notion that future performance may be predicted by past performance and that people with scores below the
cut-off score are likely to do less well in the Masters’ programme than applicants with scores higher than the cut-off score). Frequently,
cut-off scores are set because there are many candidates for a few openings. For example, the university may have openings for a group
of only 25 Masters’ students per annum, and 200 candidates may apply for selection to the Masters’ Programme in a particular year.
Figure 6.8 Determination of cut-off score through test’s criterion-related validity (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
A cut-off score that will reduce the applicant population by a specific percentage is generally determined so that only candidates who
are in the top 25 per cent of the score distribution, for example, are selected. Naturally, if the applicant population turned out to be very
talented, then the staffing strategy would commit many false negative errors, because many who were rejected might have been good
performers. On the other hand, the utility of the staffing strategy might still be high, because the substantial expense of processing those additional candidates in later assessment steps is avoided (Landy & Conte, 2004).
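A norm-referenced cut score of this kind can be computed directly from the applicants’ score distribution, as the Python sketch below illustrates with hypothetical scores and a top-25-per-cent rule.

# A minimal sketch: peg the cut score to the 75th percentile of the applicants'
# own score distribution, so only the top 25 per cent pass. Scores are simulated.

import numpy as np

rng = np.random.default_rng(0)
applicant_scores = rng.normal(loc=60, scale=10, size=200)  # 200 hypothetical applicants

cut_score = np.percentile(applicant_scores, 75)
short_list = applicant_scores[applicant_scores >= cut_score]
print(f"Norm-referenced cut score: {cut_score:.1f}; short-listed: {len(short_list)}")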
Cascio, Alexander and Barrett (1988) addressed the legal, psychometric, and professional issues associated with setting cut-off
scores. As they reported, there is wide variation regarding the appropriate standards to use in evaluating the suitability of established
cut-off scores. In general, a cut-off score should normally be set to be reasonable and consistent with the expectations of acceptable job
proficiency in the workplace. Also, the cut-off should be set at a point that selects employees who are capable of learning a job and
performing it in a safe and efficient manner. As previously shown in Figure 6.7, undesirable selection consequences are associated with
setting the cut-off score ‘too low’ (an increase in false positive selection decisions) or ‘too high’ (an increase in false negative selection
decisions) (Muchinsky et al, 2005:143).
When there is criterion-related evidence of a test’s validity, it is possible to demonstrate a direct correspondence between
performance on the test and performance on the criterion, which aids in selecting a reasonable cut-off score. Take, for example, the case
of predicting academic success at university. The criterion of academic success is the university exam marks average, and the criterion
cut-off is 50 per cent. That is, students who attain an average mark of 50 per cent or higher graduate, whereas those with an average less
than 50 per cent do not.
Furthermore, assume that the selection (admission) test for entrance into the university is a 100-point test of cognitive ability. In a
criterion-related validation paradigm, it is established that there is an empirical linkage between scores on the test and the university
examination marks average. The statistical analysis of the scores reveals the relationship shown in Figure 6.8. Because the minimum
average of 50 per cent is needed to graduate and there is a relationship between test scores and the university examination marks
average, we can determine (through regression analysis) the exact test score associated with a predicted average mark of 50 per cent. In
this example, a test score of 50 predicts an examination average mark of 50 per cent. Therefore, a score of 50 becomes the cut-off score
on the intellectual ability test (Muchinsky et al, 2005:144).
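The regression step described above can be illustrated as follows. The validation data in the Python sketch below are hypothetical; the sketch simply fits a straight line relating test scores to examination averages and then solves for the test score whose predicted criterion value equals the criterion cut-off of 50 per cent.

# A minimal sketch of a criterion-referenced cut score: regress the criterion on
# the predictor, then solve for the predictor score that predicts the criterion
# cut-off. The validation data are invented.

import numpy as np

test_scores = np.array([30, 35, 40, 45, 50, 55, 60, 65, 70, 75])   # admission test
exam_average = np.array([38, 41, 44, 46, 50, 53, 57, 61, 64, 68])  # university average (%)

slope, intercept = np.polyfit(test_scores, exam_average, 1)
criterion_cutoff = 50.0
predictor_cutoff = (criterion_cutoff - intercept) / slope
print(f"Predicted cut-off score on the admission test: {predictor_cutoff:.1f}")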
The task of determining a cut-off score is more difficult when only content-related evidence of the validity of a given test is
available. In such cases it is important to consider the level of ability associated with a certain test score that is judged suitable or
relevant to job performance. However, obvious subjectivity is associated with such decisions. In general, there is no such thing as a
single, uniform, correct cut-off score. Nor is there a single best method of setting cut-off scores for all situations. Cascio and Aguinis
(2005) made several suggestions regarding the setting of cut-off scores. Here are four:
• The process of setting a cut-off score should begin with a job analysis that identifies relative levels of proficiency on critical
knowledge, skills, and abilities (KSAs).
• When possible, data on the actual relationship of test scores (predictor scores) to criterion measures of job performance
should be considered carefully.
• Cut-off scores should be set high enough to ensure that minimum standards of job performance are met.
• Adverse impact must be considered when setting cut-off scores.
Utility
The concept of utility addresses the cost–benefit ratio of one staffing strategy versus another. The term ‘utility gain’ refers to the expected increase in the percentage of successful workers that results from a given strategy (Landy & Conte, 2004). Utility analysis considers
three important parameters: quantity, quality, and cost. At each stage of the recruitment, screening, and selection process, the candidate
pool can be thought of in terms of the quantity (number) of candidates, the average or dispersion of the quality of the candidates, and the
cost of employing the candidates. For example, the applicant pool may have a quantity of 100 candidates, with an average quality of
R500 000 per year, and a variability in quality value that ranges from a low of R100 000 to a high of R700 000. This group of candidates
may have an anticipated cost (salary, benefits and training) of 70 per cent of their value. After screening and selection, the candidates
who are accepted might have a quantity of 50 who receive job offers, with an average quality value of R650 000 per year, ranging from
a low of R500 000 to a high of R650 000. Candidates who receive offers may require employment costs of 80 per cent of their value,
because the decision-makers have identified highly-qualified and sought-after individuals with scarce and critical skills. Eventually the
organisation ends up with a group of new hires (or promoted candidates in the case of internal staffing) who can also be characterised by
quantity, quality and cost (Cascio & Boudreau, 2008).
Similarly, the application or use of various screening and selection techniques and devices can be thought of in terms of the quantity
of tests and procedures used, the quality of the tests and procedures as reflected in their ability to improve the value or quality of the
pool of individuals that are accepted, and the cost of the tests and procedures in each step of the screening and selection process. For
example, as we have discussed in this chapter and previous chapters, the quality of selection tests and procedures is generally expressed
in terms of their validity, or accuracy in forecasting future job performance. As previously discussed, validity is typically expressed in
terms of the correlation between the predictor scores (the scores on a selection procedure) and some measure of job performance (the
criterion scores), such as the rand value of sales, for example.
Validity may be increased by including a greater quantity (such as a battery of selection procedures or tests), each of which focuses
on an aspect of knowledge, skill or ability, or other characteristic that has been demonstrated to be important to successful performance
on a job. Higher levels of validity imply higher levels of future job performance among those selected or promoted, thereby improving the overall payoff or utility gain to the organisation. As a result, those candidates who are predicted to perform poorly are never hired or
promoted in the first place (Cascio & Boudreau, 2008).
The utility of a selection device is regarded as the degree to which its use improves the quality of the individuals selected beyond
what would have occurred had that device not been used (Cascio & Boudreau, 2008). Consider, for example, that although a test may be
both reliable and valid, it is not necessarily useful or cost effective when applied in predicting applicants’ job performance. In the case of
a company that already has a test that performs quite well in predicting performance and has evidence that the current employees chosen
on the basis of the test are all successful, it will not be cost effective to consider a new test that may be valid, because the old test might
have worked just as well. The organisation may have such a good training programme that current employees are all successful. A new
test, although it is valid, may not provide any improvement or utility gain for the company (Aamodt, 2010).
Several formulae and models have been designed to determine how useful a test would be in any given situation. Each formula and
table provides slightly different information to an employer. Two examples of these formulae and models will be discussed: the
Taylor-Russell tables and proportion of correct decisions charts.
Taylor-Russell tables
The Taylor-Russell tables (Taylor & Russell, 1939) provide an estimate of the percentage of total new hires who will be successful
employees if an organisation uses a particular test. Taylor and Russell (1939) defined the value of the selection device as the success
ratio, which is the ratio of the number of hired candidates who are judged successful on the job divided by the total number of
candidates who were hired. The tables (see Appendix B) illustrate the interactive effect of different validity coefficients, selection ratios,
and base rates on the success ratio (Cascio & Boudreau, 2008).
The usefulness of a selection measure or device (test) can therefore be assessed in terms of the success ratio that will be obtained if the
selection measure is used. The gain in utility to be expected from using the instrument (the expected increase in the percentage of
successful workers) then can be derived by subtracting the base rate from the success ratio (equation 6-3 minus equation 6-1). For
example, given an SR of 0,10, a validity of 0,30, and a base rate (BR) of 0,50, the success ratio jumps to 0,71 (a 21 per cent gain in
utility over the base rate) (Cascio & Boudreau, 2008). (To verify this figure, see Appendix B.)
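The Taylor-Russell values themselves are read from the published tables, but the figure quoted above can also be approximated from the bivariate-normal model on which the tables rest. The sketch below, which assumes standard-normal predictor and criterion scores correlated at the validity coefficient, is only an illustration of that underlying logic, not a substitute for the tables in Appendix B.

```python
from scipy.stats import norm, multivariate_normal

def success_ratio(validity, selection_ratio, base_rate):
    """Approximate P(success | selected) under the bivariate-normal model."""
    xc = norm.ppf(1 - selection_ratio)   # predictor cut-off
    yc = norm.ppf(1 - base_rate)         # criterion cut-off
    bvn = multivariate_normal(mean=[0, 0], cov=[[1, validity], [validity, 1]])
    # P(X > xc and Y > yc) by inclusion-exclusion on the joint CDF
    p_both = 1 - norm.cdf(xc) - norm.cdf(yc) + bvn.cdf([xc, yc])
    return p_both / selection_ratio      # divide by P(selected) = SR

sr = success_ratio(validity=0.30, selection_ratio=0.10, base_rate=0.50)
print(round(sr, 2))  # roughly 0.71, i.e. a gain of about 0.21 over the base rate
```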
To use the Taylor-Russell tables, the first information needed is the test’s criterion validity coefficient (r). Conducting a criterion
validity study with test scores correlated with some measure of job performance is generally the best way to obtain the criterion validity
coefficient. However, an organisation often wants to know whether testing is useful before investing time and money in a criterion
validity study. Typical validity coefficients can in this case be obtained from previous research studies that indicate the typical validity
coefficients that will result from various methods of selection (Aamodt, 2010; Schmidt & Hunter, 1998). The validity coefficient
referred to by Taylor and Russell (1939) in their tables is, in theory, based on present employees who have already been screened using
methods other than the new selection procedure. It is assumed that the new procedure or measure will simply be added to a group of
selection procedures or measures used previously, and it is the incremental gain in validity from the use of the new procedure that is
most relevant.
As discussed above, the selection ratio (SR) is obtained by determining the relationship between the number of individuals assessed
and the number actually hired (SR = n/N, where n represents the number of jobs available, or number of hires to be made, and N
represents the number of applicants assessed). The lower the selection ratio is, the greater the potential usefulness of the test.
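Expressed as a minimal code sketch (the numbers are invented, anticipating the fire-fighter example later in the chapter):

```python
def selection_ratio(n_hires, n_applicants):
    """SR = n/N: openings (or hires to be made) divided by applicants assessed."""
    return n_hires / n_applicants

print(selection_ratio(10, 40))  # 0.25: only the top quarter of applicants can be hired
```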
According to the Taylor-Russell tables, utility is affected by the base rate. The base rate (BR) refers to the proportion of candidates
who would be successful without the selection measure. A meaningful way of determining the base rate is to choose a criterion measure
score above which all employees are considered successful. To be of any use in selection, the selection measure or device must
demonstrate incremental validity by improving on the BR. That is, the selection measure (device or test) must result in more correct
decisions than could be made without using it (Cascio & Boudreau, 2008). Figure 6.9 illustrates the effect of the base rate on a predictor
(selection measure) with a given validity.
Figure 6.9 illustrates that with a BR of 0,80, it would be difficult for any selection measure to improve on the base rate. In fact, when
the BR is 0,80 and half of the applicants are selected, a validity of 0,45 is required to produce an improvement of even 10 per cent over
base rate prediction. This is also true at very low BRs. Selection measures are most useful when BRs are about 0,50. As the BR departs
radically in either direction from this value, the benefit of an additional predictor becomes questionable. In practical terms, this means
that applications of selection measures to situations with markedly different SRs or BRs can result in quite different predictive outcomes.
If it is not possible to demonstrate significant incremental utility by adding a predictor, the predictor should not be used, because it
cannot improve on current selection procedures. As illustrated in Figure 6.9, when the BR is either very high or very low, it is difficult
for a selection measure to improve on it (Cascio & Boudreau, 2008, p. 179).
Figure 6.10 presents all of the elements of the Taylor-Russell model together. In this figure, the criterion cut-off (Yc) separates the
present employee group into satisfactory and unsatisfactory workers. The predictor cut-off (Xc) defines the relative proportion of
workers who would be hired at a given level of selectivity. Areas A and C represent correct decisions, that is, if the selection measure
were used to select applicants, those in area A would be hired and become employees who perform satisfactorily. Those in area C would
be rejected correctly, because they scored below the predictor cut-off and would have performed unsatisfactorily on the job. Areas B and
D represent erroneous decisions; those in area B would be hired because they scored above the predictor cut-off, but they would perform
unsatisfactorily on the job, and those in area D would be rejected because they scored below the predictor cut-off, but they would have
been successful if hired (Cascio & Boudreau, 2008:179).
A major shortcoming of the Taylor-Russell utility model is that it underestimates the actual amount of value from the selection
procedure because it reflects the quality of the resulting hires only in terms of success or failure. In other words, it views the value of
hired employees as a dichotomous classification, successful and unsuccessful; and as demonstrated by the tables in Appendix B, when
validity is fixed, the success ratio increases as the selection ratio decreases. Under those circumstances, the success ratio tells
decision-makers that more people are successful, but not how much more successful. For many jobs one would expect to see
improvements in the average level of employee value from increased selectivity. In most jobs, for example, a very high-quality
employee is more valuable than one who just meets the minimum standard of acceptability (Cascio & Boudreau, 2008).
Proportion of correct decisions charts
Determining the proportion of correct decisions is easier to do but less accurate than the Taylor-Russell tables (Aamodt, 2010). The
proportion of correct decisions is determined by obtaining applicant test scores (predictor scores) and the scores on the criterion. The
two scores from each applicant are graphed on a chart similar to that of Figure 6.10. Lines are drawn from the point on the y-axis
(criterion score) that represents a successful applicant, and from the point on the x-axis (predictor score) that represents the lowest test
score of a hired applicant. As illustrated in Figure 6.11, these lines divide the scores into four quadrants. The points located in quadrant
D represent applicants who scored poorly on the test but performed well on the job. Points located in quadrant A represent employees
who scored well on the test and were successful on the job. Points in quadrant B represent employees who scored high on the test, yet
did poorly on the job, and points in quadrant C represent applicants who scored low on the test and did poorly on the job.
Figure 6.9 Effect of varying base rates on a predictor with a given validity (Cascio & Boudreau, 2008:178)
[Reproduced with the permission of Pearson Education, Inc.]
Figure 6.10 Effect of predictor and criterion cut-offs on selection decisions (Adapted from Cascio & Boudreau, 2008:179)
[Reproduced with the permission of Pearson Education, Inc.]
Note: The oval is the shape of the scatter plot that shows the overall relationship between predictor and criterion score.
More points in quadrants A and C indicate that the test is a good predictor of performance because the points in the other two
quadrants (D and B) represent ‘predictive failures’. That is, in quadrants D and B no correspondence is seen between test scores and
criterion scores. To estimate the test’s effectiveness, the number of points in each quadrant is totalled, and the following formula is used (Aamodt, 2010:198):

Proportion of correct decisions = (A + C) / (A + B + C + D)

The resulting number represents the percentage of time that the decision-maker expects to be accurate in making a selection decision in future. To determine whether this is an improvement, the following formula for the satisfactory performance baseline is used:

Satisfactory performance baseline = (A + D) / (A + B + C + D)
If the percentage from the first formula is higher than that of the second, the proposed test should increase selection accuracy. If not, it
may be better to use the selection method currently in use.
Figure 6.11 illustrates, for example, that there are 4 data points in quadrant D, 7 in quadrant A, 3 in quadrant B, and 8 in quadrant C. The percentage of time a decision-maker expects to be accurate in the future would be:

(7 + 8) / (7 + 3 + 8 + 4) = 15/22 = 0,68

If we compare this result with, for example, another test that the company was previously using to select employees, we calculate the satisfactory performance baseline:

(7 + 4) / (7 + 3 + 8 + 4) = 11/22 = 0,50
Reflection 6.5
Use the Taylor-Russell tables in Appendix B to determine the utility of a particular selection measure by completing the following
table:
Which selection measure will be the most useful? Give reasons for your answer.
Figure 6.11 Determining the proportion of correct decisions
Using the proposed test would result in a 36 per cent increase in selection accuracy over the selection method previously used (0,68 – 0,50 = 0,18; 0,18/0,50 = 0,36, or 36%).
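A small sketch of this quadrant bookkeeping, using the counts read off Figure 6.11, is given below; the quadrant labels follow the convention used in the text.

```python
def proportion_correct(a, b, c, d):
    """Quadrant counts: a = high test/successful, b = high test/unsuccessful,
    c = low test/unsuccessful, d = low test/successful."""
    total = a + b + c + d
    hit_rate = (a + c) / total   # test and job outcome agree
    baseline = (a + d) / total   # proportion successful regardless of test
    return hit_rate, baseline

hits, base = proportion_correct(a=7, b=3, c=8, d=4)
print(round(hits, 2), round(base, 2))   # 0.68 and 0.50
print(round((hits - base) / base, 2))   # 0.36, i.e. a 36 per cent gain in accuracy
```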
Paradoxically, research (Cronshaw, 1997; Latham & Whyte, 1994; Whyte & Latham, 1997) on the effect of utility results on the decisions of experienced managers to adopt or reject a selection strategy showed that presenting the positive utility (that is, the utility to be gained) of a selection strategy actually lowered the likelihood that a manager would adopt it, because managers often regard utility calculations as a ‘hard sell’ of the selection device. However, although the results of Latham
and Whyte’s research suggest that utility analyses may be of little value to managers in deciding whether or not to adopt a selection
strategy, this may be misleading. Cronshaw (1997) and Landy and Conte (2004) argue that although there appears to be little value in
presenting utility calculations to decision-makers (for example, line managers, senior management, or human resource department),
industrial psychologists should take cognisance of the value added by utility calculations.
Based on utility calculations, industrial psychologists can make more informed decisions with respect to which alternatives to
present to managers. Industrial psychologists can present relevant data (for example, the cost of implementing a new selection system or
procedure) to decision-makers in a more familiar cost–benefit framework. Virtually every line manager and human resource
representative will eventually ask cost–benefit questions. However, Cronshaw (1997) cautions that industrial psychologists should
refrain from using utility calculations to ‘sell’ their products but should rather present utility analysis information simply as information
to evaluate the effectiveness of a staffing strategy and selection device or procedure, and to determine the benefits for the organisation.
Reflection 6.6
Compare Figure 6.11 with the results of the following new proposed test for selecting employees, depicted in Figure 6.12 below.
Calculate the satisfactory performance baseline. Compare the baseline results of the two tests and determine whether any utility gains
were made by using the new test. Which test would you recommend in future for selection purposes? Give reasons for your answer.
Figure 6.12 Determining the proportion of correct decisions – new proposed test
Fairness and legal considerations
Although there are multiple perspectives on the concept of fairness in selection, there seems to be general agreement that issues of
equitable treatment, predictive bias, and scrutiny for possible bias when subgroup differences are observed are important concerns in
personnel selection (Society for Industrial and Organisational Psychology of South Africa, 2005). Most industrial psychologists agree
that a test is fair if it can predict performance equally well for all races, genders and national origins.
In terms of legal considerations, industrial psychologists commonly serve as expert witnesses in employment discrimination cases
filed in courts. These cases are most often filed by groups of individuals claiming violations of the Employment Equity Act 55 of 1998
and the Labour Relations Act 66 of 1995 (LRA). The law and the courts recognise two different theories of discrimination. The adverse
treatment theory charges an employer with intentional discrimination. The adverse impact theory acknowledges that an employer may
not have intended to discriminate against a plaintiff, but a practice implemented by the employer had the effect of disadvantaging the
group to which the plaintiff belongs. In an adverse impact case, the burden is on the plaintiff to show that (1) he or she belongs to a
protected group, and (2) members of the protected group were statistically disadvantaged compared to the majority employees or
applicants (Landy & Conte, 2004).
Because of the importance of issues such as fairness, bias and adverse impact in employee selection, particularly in the South
African national and organisational context, these aspects are discussed in more detail towards the end of this chapter.
MAKING SELECTION DECISIONS
Employee recruitment, screening and selection procedures are regarded as valuable to the extent that they improve vital decisions about
hiring people. These decisions include how to invest scarce resources (such as money, time and materials) in staffing techniques and
activities, such as alternative recruiting resources, different selection and screening technologies, recruiter training or incentives,
alternative mixes of pay and benefits to offer desirable candidates, and decisions by candidates about whether to accept offers. Effective
staffing therefore requires measurement procedures that diagnose the quality of the decisions of managers, industrial psychologists,
human resource professionals and applicants in a systematic or step-by-step manner (Cascio & Boudreau, 2008). The emphasis on the
quality of selection decisions has led to systematic selection being increasingly regarded as one of the critical functions of human
resource management, essential for achieving key organisational outcomes (Storey, 2007), and a core component of what has been
called a high-commitment or high-performance management approach to human resource management (Redman & Wilkinson, 2009).
Since decisions about hiring people or human capital are increasingly central to the strategic success and effectiveness of virtually all
organisations, industrial psychologists apply scientifically rigorous ways to measure and predict the potential of candidates to be
successful in the jobs for which they have applied. However, as previously mentioned, the decision-science approach to employee selection
requires that selection procedures should do more than evaluate and predict the performance of candidates; they should extend the value
or utility of measurements and predictions by providing logical frameworks that drive sound strategic decisions about hiring people.
A decision framework provides the logical connections between decisions about a resource (for example, potential candidates to be
hired) and the strategic success of the organisation (Cascio & Boudreau, 2008). A scientific approach to decision-making reveals how
decisions and decision-based measures can bring the insights of the field of industrial and organisational psychology to bear on the
practical issues confronting organisation leaders and employees. Because the profession of personnel psychology relies on a decision
system that follows scientific principles in a systematic manner, industrial psychologists are able not only to demonstrate the validity
and utility of their procedures, but also to incorporate new scientific knowledge quickly into practical applications that add value to the
strategic success of the organisation.
As discussed in previous chapters, the measurement and decision frameworks provided by the profession of personnel psychology
are grounded in a set of general principles for data collection, data analysis, and research design that support measurement systems in all
areas of human resource-related decision-making in the organisation. High-quality staffing decisions are therefore made by following a
comprehensive approach in gathering high-quality information about candidates to predict the likelihood of their success on the varied
demands of the job (Landy & Conte, 2004). The employee selection process involves two important procedures: measurement and
prediction (Cascio & Aguinis, 2005; Wiggins, 1973). Measurement entails the systematic collection of multiple pieces of data or
information using tests or other assessment procedures that are relevant to job performance (such as those discussed in chapter 5).
Prediction involves the systematic process of combining the collected data in such a way as to enable the decision-maker to minimise
predictive error in forecasting job performance.
Strategies for combining job applicant information
Job applicant information can be combined in various ways to make good staffing decisions. Clinical (or intuitive) and statistical (also
referred to as mechanical or actuarial) decision-making strategies are generally regarded as the two most basic ways to combine
information in making staffing decisions. In clinical decision-making, data are collected and combined judgementally. Similarly,
predictions about the likely future performance of a candidate or job applicant are also judgemental in the sense that a set of test scores
or impressions are combined subjectively in order to forecast criterion status. The decision-maker examines multiple pieces of
information by using, for example, techniques such as assessment interviews and observations of behaviour. This information is then
weighted in his or her head and, based on experience and beliefs about which types of information are more or less important, a
subjective decision about the relative value of one candidate over another is made – or simply a select/reject decision about an individual
candidate is made. Although some good selection decisions may be made by experienced decision-makers, clinical or intuitive decisions
tend to be regarded as unreliable and idiosyncratic because they are mostly subjective or judgemental in nature, and are therefore
error-prone and often inaccurate (Cascio & Aguinis, 2005; Landy & Conte, 2004; Riggio, 2009).
In statistical decision-making, information is combined according to a mathematical or actuarial formula in an objective,
predetermined fashion. For example, the statistical model used by insurance companies is based on actuarial data and is intended to
predict the likelihood of events (for example, premature death) across a population, given the likelihood that a given person will engage
in certain behaviours or display certain characteristics (for example, smoking, or over-eating). Personnel selection also uses the
statistical model of prediction to select the candidates with the greatest likelihood of job success (Van Ours & Ridder, 1992).
In personnel selection, short listed job applicants are assigned scores based on a psychometric instrument or battery of instruments
used to assess them. The test scores are then correlated with a criterion measure. By applying specific statistical techniques, each piece
of information about job applicants is given some optimal weight that indicates the strength of the specific data component in predicting
future job performance. Most ability tests, objective personality inventories, biographical data forms (biodata), and certain types of
interviews (such as structured interviews) permit the assignment of scores for predictive purposes. However, even if data are collected
mechanically, they may still be combined judgementally. For example, the decision-maker can apply a clinical composite strategy by
means of which data are collected both judgementally (for example, through interviews and observations) and mechanically (for
example, through tests and BIBs), but combined judgementally. The decision-maker may subjectively interpret the personality profile of
a candidate that derived from the scores of a test without ever having interviewed or observed him or her. On the other hand, the
decision-maker can follow a mechanical composite strategy whereby job applicant data are collected judgementally and mechanically,
but combined in a mechanical fashion (that is, according to pre-specified rules such as a multiple regression equation) to derive
behavioural predictions from all available data.
The statistical or mechanical decision-making strategy is regarded as being superior to clinical decisions because selection decisions
are based on an objective decision-making statistical model. Whereas human beings, in most cases, are incapable of accurately
processing all the information gathered from a number of job applicants, statistical models are able to process all of this information
without human limitations. Mechanical models allow for the objective and appropriate weighting of predictors, which is important to
improve the accuracy of predictions. In addition, because statistical methods allow the decision-maker to incorporate additional evidence
on candidates, the predictive accuracy of future performance of candidates can be further improved. However, it is important to note that
judgemental or clinical methods can be used to complement mechanical methods, because they provide rich samples of behavioural
information. Mechanical procedures are applied to formulate optimal ways of combining data to improve the accuracy of job applicant
performance predictions (Cascio & Aguinis, 2005; Riggio, 2009).
Methods for combining scores
Various prediction models are used by decision-makers to improve the accuracy of predictions concerning the future performance of job
candidates on which basis candidates are either rejected or selected. Prediction models are generally classified as being compensatory
and non-compensatory. Compensatory models take cognisance of the fact that people are able to compensate for a relative weakness in one attribute through a strength in another, assuming that both attributes are required by the job. In practical terms, this means that a
good score on one test can compensate for a lower score on another test. For example, a good score in an interview or work sample test
may compensate for a slightly lower score on a cognitive ability test. If one attribute (for example, communication skill) turns out to be
much more important than another (for example, reasoning skill), compensatory prediction models such as multiple regression provide
ways to weight the individual scores to give one score greater influence on the final total score (Landy & Conte, 2004). On the other
hand, non-compensatory prediction models such as the multiple-cut-off approach assume curvilinearity (or a non-linear relationship) in
the predictor-criterion relationship. For example, minimal visual acuity may be required for the successful performance of a pilot’s job.
However, increasing levels of visual acuity do not necessarily mean that the candidate will be a correspondingly better pilot (Cascio &
Aguinis, 2005).
Compensatory and non-compensatory approaches are often used in combination to improve prediction accuracy in order to make
high-quality selection decisions. The most basic prediction models that will be discussed in more detail are: the multiple regression
approach, the multiple cut-off model, and the multiple hurdle system.
The compensatory approach to combining scores
As you recall, up to now, we have referred only to situations where we had to examine the bivariate relationship between a single
predictor such as a test score, and a criterion, such as a measure of job performance. However, in practice, industrial psychologists have
to deal with situations in which more than one variable is associated with a particular aspect of an employee’s behaviour. Furthermore,
in real-world job-success prediction, decisions generally are made on the basis of multiple sources of information. Although cognitive
ability is an important predictor of job performance, other variables such as personality, experience, and motivation (the various KSAOs
determined by means of a job analysis and described in a job description) may also play an important role in predicting an applicant’s
future performance on the job. However, examining relationships involving multiple predictor variables (also called multivariate
relationships) can be complex and generally requires the use of advanced statistical techniques.
One such statistical technique, which is a very popular prediction strategy used extensively in personnel decision-making, is the
so-called multiple regression model. We discussed the concept of regression in chapter 2. Since the multiple regression technique
requires both predictor data (test or non-test data) and criterion data (performance levels), it can be used only if some measures of
performance are available (Landy & Conte, 2004). In the case of multiple regression, we have one criterion variable, but more than one
predictor variable. How well two or more predictors, when combined, improve the predictability of the criterion depends on their
individual relationships to the criterion and their relationship to each other. This relationship is illustrated by the two Venn diagrams in
Figure 6.13. The overlapping areas in each of the two Venn diagrams show how much the predictors overlap the criterion. The
overlapping areas are the validities of the predictors, symbolised by the notation r1c and r2c, where the subscript 1 or 2 stands for the first or second predictor; r12 stands for the inter-correlation coefficient (overlap) between the two predictors. The notation c stands
for the criterion. Note in Figure 6.13 (a) that the two predictors are unrelated to each other, meaning that they predict different aspects of
the criterion. Figure 6.13 (b) indicates that the inter-correlation between the two predictors (r12) is not zero, that is, they share some
variance with one another. Figure 6.13 (b) further indicates that each predictor correlates substantially with the criterion (r1c).
Figure 6.13 Venn diagrams depicting multiple predictors (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
The combined relationship between two or more predictors and the criterion is referred to as a multiple correlation (R). The only
conceptual difference between r and R is that the range of R is from 0 to 1,0, while r ranges from –1,0 to +1,0. When R is squared, the
resulting R² value represents the total amount of variance in the criterion that can be explained by two or more predictors. When predictors 1 and 2 are not correlated with each other, the squared multiple correlation (R²) is equal to the sum of the squared individual validity coefficients, or:

R²c.12 = r²1c + r²2c

The notation R²c.12 is read ‘the squared multiple correlation between the criterion and two predictors (predictor 1 and predictor 2)’. In this condition (when the two predictors are unrelated to each other), 61 per cent of the variance in the criterion can be explained by the two predictors (Muchinsky et al, 2005:133).
In most cases, however, it is rare that two predictors related to the same criterion are unrelated to each other. Usually all three
variables share some variance with one another; that is, the inter-correlation between the two predictors (r12) is not zero. Such a
relationship is presented graphically in Figure 6.13 (b). In the figure each predictor correlates substantially with the criterion (r1c), and
the two predictors also overlap each other (r12) (Muchinsky et al, 2005:133).
The addition of the second predictor adds more criterion variance than can be accounted for by one predictor alone. Yet not all of
the criterion variance accounted for by the second predictor is new variance; part of it was explained by the first predictor. When there is
a correlation between the two predictors (r12), the equation for calculating the squared multiple correlation must be expanded to:

R²c.12 = (r²1c + r²2c – 2 r1c r2c r12) / (1 – r²12)
For example, given the validity coefficients from the previous example and an inter-correlation between the two predictors of r12 = 0,30, we have:

R²c.12 = 0,47

As can be seen, the explanatory power of two inter-correlated predictor variables is diminished compared to the explanatory power when
they are uncorrelated (0,47 versus 0,61). This example provides a rule about multiple predictors: it is generally advisable to seek
predictors that are related to the criterion but are uncorrelated with each other. However, in practice, it is very difficult to find multiple
variables that are statistically related to another variable (the criterion) but at the same time statistically unrelated to each other. Usually
variables that are both predictive of a criterion are also predictive of each other. Also note that the abbreviated version of the equation
used to calculate the squared multiple correlation with independent predictors is just a special case of the expanded equation caused by r12 being equal to zero (Muchinsky et al, 2005:133, 134).
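The squared multiple correlation for two predictors can be computed directly from the three correlations. In the sketch below the individual validity coefficients are assumed, for illustration, to be 0,60 and 0,50; the text does not show these values, but they reproduce the 0,61 and 0,47 figures quoted above.

```python
def r_squared_two_predictors(r1c, r2c, r12):
    """Squared multiple correlation of one criterion on two predictors."""
    return (r1c**2 + r2c**2 - 2 * r1c * r2c * r12) / (1 - r12**2)

print(round(r_squared_two_predictors(0.60, 0.50, 0.00), 2))  # 0.61: independent predictors
print(round(r_squared_two_predictors(0.60, 0.50, 0.30), 2))  # 0.47: overlapping predictors
```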
As mentioned previously, multiple regression is a compensatory type of statistical model, which means that high scores on one
predictor can compensate for low scores on another predictor. For example, an applicant’s lack of previous job-related experience can be
compensated for or substituted by test scores that show great potential for mastering the job (Riggio, 2009). The statistical assumptions
and calculations on which the multiple regression model is based (beyond the scope of this text) involve a complex mathematical
process that combines and weights several individual predictor scores in terms of their individual correlations with the criterion and their
inter-correlations with each other in an additive, linear fashion (Tredoux, 2007). In employee selection, this means that the ability of
each of the predictors to predict job performance can be added together and there is a linear relationship between the predictors and the
criterion: higher scores on the predictors will lead to higher scores on the criterion. The result is an equation that uses the various types
of screening information in combination (Riggio, 2009).
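The compensatory character of the regression approach is easy to see in a toy example. With the purely hypothetical weights below, two applicants with very different predictor profiles receive the same predicted criterion score, because a strength on one predictor offsets a weakness on the other.

```python
# Hypothetical regression weights from an (invented) validation study
intercept, w_cognitive, w_experience = 20.0, 0.5, 0.3

def predicted_performance(cognitive, experience):
    """Compensatory prediction: an additive, weighted combination of predictors."""
    return intercept + w_cognitive * cognitive + w_experience * experience

# A high cognitive score can offset limited experience, and vice versa
print(predicted_performance(cognitive=80, experience=20))  # 66.0
print(predicted_performance(cognitive=50, experience=70))  # 66.0
```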
However, apart from its obvious strengths, the compensatory nature of the multiple regression approach can also be problematic in
selection. Take, for example, the case where an applicant with an uncorrectable visual problem has applied for a job as an inspector of
micro-circuitry (a position that requires the visual inspection of very tiny computer circuits under a microscope). Although the
applicant’s scores on a test of cognitive ability may show great potential for performing the job, he or she may score poorly on a test of
visual acuity as a result of the visual problem. Here, the compensatory regression model would not lead to a good prediction, for the
visual problem would mean that the applicant would fail, regardless of his or her potential for handling the cognitive aspects of the job
(Riggio, 2009).
Other linear approaches to combining scores
Other linear approaches to selection include: unadjusted top-down selection, rule of 3, passing scores, and banding. In a top-down
selection approach, applicants are rank-ordered on the basis of their test scores, and selection is then made by starting with the highest
score and moving down until all vacancies have been filled. Although the advantage of this approach is that by hiring the top scorers on
a valid test, an organisation will gain the most utility, the top-down approach can result in high levels of adverse impact. Similarly to the
unadjusted top-down selection approach, the rule of 3 (or rule of 5) technique involves giving the names of the top three or five scorers
to the person making the hiring decision. The decision-maker can then choose any of the three (or five), based on the immediate needs of
the employer (Aamodt, 2010).
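As a minimal sketch of unadjusted top-down selection (applicant names and scores invented):

```python
def top_down_select(scores, n_openings):
    """Rank applicants by score and take the top n (unadjusted top-down selection)."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:n_openings]]

print(top_down_select({'A': 82, 'B': 91, 'C': 77, 'D': 88}, n_openings=2))  # ['B', 'D']
```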
The passing scores system is a means of reducing adverse impact and increasing flexibility in selection. With this system, an
organisation determines the lowest score on a test that is associated with acceptable performance on the job. For example, suppose a
company determines that any applicant scoring 70 or above will be able to perform adequately the duties of the particular job for which
he or she has applied. If the company set 70 as the passing score, they could fill their vacancies with any of the applicants scoring 70 or
better. Because of affirmative action, they would like some of the openings to be filled by black females, for example. Although the use
of passing scores generally helps companies to reach their affirmative action goals, determining these scores can be a complicated
process, full of legal pitfalls. Legal problems can occur when unsuccessful applicants challenge the validity of the passing score
(Aamodt, 2010).
Score banding creates categories of scores, with the categories arranged from high to low. For example, the ‘A band’ may range
from 100 to 90, the ‘B band’ from 89 to 80, and so on. Banding takes into consideration the degree of error associated with any test
score. Industrial psychologists, psychometrists, and statisticians universally accept that every observed score (whether a test score or
performance score) contains a certain amount of error (which is associated with the reliability of the test). The less reliable a test, the
greater the error. Even though one applicant may score two points higher than another, the two-point difference may be the result of
chance (error) rather than actual differences in ability (Aamodt, 2010; Landy & Conte, 2004).
A statistic called the standard error of measurement (SEM) is generally used to determine how many points apart two applicants’ scores have to be before they are considered significantly different. Using the SEM, all candidate scores within a band are
considered ‘equal’ with respect to the attribute being measured if they fall within some specified number of SEMs of each other (usually
two SEMs). It is assumed that any within-band differences are really just differences owing to the unreliability of the measure. Using the
banding approach, all candidates in the highest band would be considered before any candidates in the next lower band (Aamodt, 2010;
Landy & Conte, 2004).
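The band width is usually expressed in standard errors of measurement. The fragment below applies the standard SEM formula (SD × √(1 − reliability)) and a two-SEM equivalence check; the score values, reliability and standard deviation are illustrative only.

```python
import math

def same_band(score_a, score_b, reliability, sd, n_sem=2):
    """Treat two scores as equivalent when they differ by less than
    n_sem standard errors of measurement (SEM = sd * sqrt(1 - reliability))."""
    sem = sd * math.sqrt(1 - reliability)
    return abs(score_a - score_b) < n_sem * sem

print(same_band(88, 86, reliability=0.90, sd=10))  # True: 2 points < 2 x 3.16
```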
The use of banding has been hotly debated. Research indicates that banding will result in lower utility than top-down hiring
(Schmidt, 1991) and may also contribute to issues associated with adverse impact and fairness (Truxillo & Bauer, 1999).
Non-compensatory methods of combining scores
The multiple cut-off and multiple hurdle approaches are two examples of non-compensatory strategies which are particularly useful
when screening large pools of applicants. The multiple cut-off strategy can be used when the passing scores of more than one test are
available. The multiple cut-off approach is generally used when one score on a test cannot compensate for another or when the
relationship between the selection test (predictor) and performance (criterion) is not linear. Because the multiple-cut-off approach
assumes curvilinearity (that is, a non-linear relationship) in the predictor-criterion relationship, its outcomes frequently lead to
different decisions from those of a multiple regression analysis, even when approximately equal proportions of applicants are selected by
each method (Cascio & Aguinis, 2005). When used in combination, applicants would, for example, be eligible for hire only if their
regression scores are high and if they are above the cut-off score of the predictor dimensions.
The multiple cut-off model uses a minimum cut-off score on each of the selected predictors. Applicants will be administered all of
the tests at one time. If they fail any of the tests (that is, they fall below the passing score), they will not be considered further for
employment. An applicant must obtain a score above the cut-off on each of the predictors to be hired. Scoring below the cut-off on any
one predictor automatically disqualifies the applicant, regardless of the scores on the other tests or screening variables (Aamodt, 2010;
Riggio, 2009). For example, suppose that a job analysis finds that a good fire-fighter is intelligent and confident, has the stamina and aerobic endurance to fight a fire using a self-contained breathing apparatus, and has a national senior certificate. A validity study indicates that the relationships of both intelligence and confidence with job performance are linear: the smarter and more confident the fire-fighter, the better he or she performs. Because the relationships of stamina and of the national senior certificate with job performance are not linear, a multiple-cut-off approach in which applicants would need to pass the endurance test and have a national senior certificate is
followed. The fire department might also want to set a minimum or cut-off score on the endurance or stamina measure and the grade
point average (GPA) for the national senior certificate. If the applicants meet both of the minimum requirements, their confidence levels
and cognitive ability test scores are used to determine who will be hired.
The main advantage of the multiple cut-off strategy is that it ensures that all eligible applicants have some minimal amount of ability
in all dimensions to be predictive of job success (Riggio, 2009). However, because of legal issues such as fairness and adverse impact,
particular care needs to be taken by industrial psychologists when setting cut-off scores that the cut-off scores do not unfairly
discriminate against members of certain designated groups stipulated in the EEA. Furthermore, the multiple-cut-off approach can be
quite costly. If an applicant passes only three out of four tests, he or she will not be hired, but the organisation has paid for the applicant
to take all four tests (Aamodt, 2010).
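As a sketch, with invented test names and cut-offs, the non-compensatory logic reduces to a conjunction of per-predictor checks: one low score disqualifies the applicant regardless of the other scores.

```python
def passes_multiple_cutoff(scores, cutoffs):
    """Non-compensatory screening: every predictor must meet its own cut-off."""
    return all(scores[test] >= cut for test, cut in cutoffs.items())

cutoffs = {'endurance': 60, 'senior_certificate_gpa': 50}
applicant = {'endurance': 75, 'senior_certificate_gpa': 45, 'cognitive': 88}
print(passes_multiple_cutoff(applicant, cutoffs))  # False: the GPA falls below its cut-off
```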
The multiple hurdle system is a non-compensatory selection strategy that is quite effective in reducing large applicant pools to a more manageable size. The multiple hurdle selection approach uses an ordered sequence of screening
devices. At each stage in the sequence, a decision is made either to reject an applicant or to allow the applicant to proceed to the next
stage. In practical terms, this means that the applicants are administered one test at a time, usually beginning with the least expensive.
Applicants who fail a test are eliminated from further consideration and take no more tests. Applicants who pass all of the tests are then
administered the linearly-related tests, and the applicants with the top scores on these tests are hired (Aamodt, 2010). For example, in the
screening of candidates, the first hurdle may be a test of cognitive ability. If the individual exceeds the minimum score, the next hurdle
may be a work sample test. If the candidate exceeds the cut score for the work sample test, he or she is scheduled for an interview.
Typically, all applicants who pass all the hurdles are then selected for the jobs (Landy & Conte, 2004; Riggio, 2009). Although an
effective selection strategy in screening large pools of candidates, it can also be quite expensive and time consuming and is therefore
usually used only for jobs that are central to the operation of the organisation (Riggio, 2009).
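A minimal sketch of the sequential logic, with hypothetical hurdles ordered from least to most expensive, follows; applicants who fail an earlier hurdle never take the later, costlier ones.

```python
def multiple_hurdle(applicants, hurdles):
    """Sequential screening: hurdles are (test_name, cut_score) pairs applied in order;
    failing any hurdle removes the applicant from further consideration."""
    survivors = applicants
    for test, cut in hurdles:
        survivors = [a for a in survivors if a[test] >= cut]
    return survivors

hurdles = [('cognitive', 60), ('work_sample', 70), ('interview', 65)]
pool = [
    {'name': 'A', 'cognitive': 72, 'work_sample': 80, 'interview': 70},
    {'name': 'B', 'cognitive': 55, 'work_sample': 90, 'interview': 85},
]
print([a['name'] for a in multiple_hurdle(pool, hurdles)])  # ['A']
```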
Figure 6.14 summarises the various selection decisions that we have discussed up to now. Note that the diagram builds on the
concepts we have discussed in chapters 4 and 5. The Reflection that follows explains the various steps in selection decision-making by
means of an example.
Placement
Employee placement is the process of assigning workers to appropriate jobs after they have been hired (Riggio, 2009). Appropriate
placement of employees entails identifying what jobs are most compatible with an employee’s skills and interests, as assessed through
questionnaires and interviews (Muchinsky, 2003). After placing employees, industrial psychologists conduct follow-up assessments to
determine how well their selection and placement methods predict employee performance. They refine their methods when needed
(Kuther, 2005).
Usually people are placed in the specific job for which they have applied. However, at times an organisation (and especially large
organisations) may have two or more jobs that an applicant could fill. In this case, it must be decided which job best matches that
person’s talents and abilities. Whereas selection generally involves choosing one individual from among many applicants to fill a given
opening, placement, in contrast, is often a process of matching multiple applicants and multiple job openings on the basis of a single
predictor score (Landy & Conte, 2004). Placement decisions are easier when the jobs are very different. It is easier to decide whether a
person should be assigned as a manager or a clerk, for example, than to decide between a secretary and a clerk. The manager and clerk
jobs require very different types of skills, whereas the secretary and clerk jobs have many similar job requirements (Muchinsky et al,
2005).
Landy and Conte (2004:278) suggest three general strategies for matching multiple candidates to multiple openings:
• Provide vocational guidance by placing candidates according to their best talents.
• Make a pure selection decision by filling each job with the most qualified person.
• Use a cut-and-fit approach, that is, place workers so that all jobs are filled with adequate talent.
In today’s global environment, many organisations are multinational, with offices around the world. This trend requires organisations to
become more flexible in their selection and, in particular, their placement strategies. In addition, the increasing use of work teams also
suggests that the rigid, ‘pure’ selection model may be less effective than a placement model that can also take into account the KSAOs
of existing team members in deciding where to place a new hire or promotion (Landy & Conte, 2004; Riggio, 2009).
Figure 6.14 Overview of the selection decision-making process
Reflection 6.7
Review Figure 6.14. Study the following example carefully and see if you can identify each step outlined in the selection
decision-making framework as illustrated in the diagram.
Decision-making in selection: a systematic process
The local municipal fire department decided to recruit 10 new fire-fighters. Since management had experienced some difficulty with
the selection devices they had applied in the past, they decided to obtain the services of a professionally-qualified industrial
psychologist to assist them with this task.
The industrial psychologist discovered that no current job information was available and decided to conduct a formal job analysis
in order to develop criteria for recruitment and selection purposes. In analysing the job demands of a fire-fighter, the following valued
performance aspects were identified as broad conceptual constructs. Successful or high performing fire-fighters are able to rescue
people and property from all types of accident and disaster. They also make an area safer by minimising the risks, including the social
and economic costs, caused by fire and other hazards. They further promote fire safety and enforce fire safety standards in public and
commercial premises by acting and advising on all matters relating to the protection of life and property from fire and other risks.
Their job duties are linked with the service, health and safety vision and mission of the municipality.
Some of the processes and procedures to be included in the job description of a fire-fighter that were revealed by the job analysis
included:
• Attending emergency incidents, including fires, road accidents, floods, spillages of dangerous substances, rail and air
crashes and bomb incidents
• Rescuing trapped people and animals
• Minimising distress and suffering, including giving first aid before ambulance crews arrive
• Inspecting and maintaining the appliance (fire engine) and its equipment, assisting in the testing of fire hydrants and
checking emergency water supplies
• Maintaining the level of physical fitness necessary to carry out all the duties of a fire-fighter
• Responding quickly to unforeseen circumstances as they arise
• Writing incident reports
• Educating and informing the public to help promote safety.
The following knowledge, skills, abilities and other characteristics (KSAOs) were identified for the job description, including a person
specification:
• Physical fitness and strength
• Good, unaided vision and hearing
• Willingness to adapt to shift work
• Ability to operate effectively in a close team
• Initiative
• Flexibility
• Honesty
• Ability to take orders
• A reassuring manner and good communication skills to deal with people who are injured, in shock or under stress
• Sound judgement, courage, decisiveness, quick reactions and the ability to stay calm in difficult circumstances (ability to
deal with accidents and emergencies and dealing with fatalities)
• Willingness and ability to learn on a continuing basis
• An interest in promoting community safety, education and risk prevention.
With the job analysis completed, the industrial psychologist was now ready to make various decisions, including deciding on the
predictors that could be used for selection. In identifying the predictors that would be used as measures of successful performance, the
following psychometric standards were considered: reliability, validity, utility, fairness and cost. The following predictors were
chosen:
Test predictors
• Statutory physical fitness test
• A stringent medical and eye examination
• Occupational personality questionnaire to obtain a personality profile required for a successful fire-fighter.
Non-test predictors (behavioural predictors)
• Structured interview to determine the applicant’s professional conduct and attitude
• Situational exercises to determine how the applicant will respond in emergency and high-pressure situations. A work simulation was therefore developed, with higher scores indicating better performance. This decision was based on previous research that showed work simulations or situational exercises to be a valid and reliable measure of performance ability for jobs that demand physical strength and fitness.
The municipality outsourced the recruitment function to an independent recruitment agency. Based on the job analysis, the industrial
psychologist assisted in drawing up the core elements of the advertisement. Because of the high yield ratio, the recruitment agency did
the initial screening of applicants according to the requirements and employment equity plan of the municipality.
The next step for the industrial psychologist was to consider the aspects that would influence the rejection and hiring of candidates
on the short-list provided by the recruitment agency. These include considerations such as:
• Predictor validity
• Selection ratio
• Base rate
• Success rate
• Fairness, bias and equity.
In this particular case, to simplify our example, multiple predictors were chosen to predict the potential of candidates to be successful
in the job, including physical strength (measured by arm and grip strength) and the ability to handle emergency situations calmly. A
multiple regression analysis in previous research revealed that the chosen predictors are unrelated to each other, are significant predictors of performance and, when combined, explain 70 per cent of the total variance in the criterion. Based on this information,
the industrial psychologist felt satisfied that the predictors would indeed predict the actual criterion score (acceptable level of
performance).
Since the municipality wanted to hire ten fire-fighters, the recruitment agency short-listed 40 applicants. The selection ratio is
therefore 0,25, implying that the municipality will hire only the top 25 per cent of all applicants. Because past recruitment experience
showed that competition is fierce with an average of about 150 applicants for each post, and on the whole there is low staff turnover,
management decided to set the predictor cut-off quite high, at 6, since a 7-point Likert-type scale was used for the situational
exercises. In practical terms, this means that only applicants who scored 6 or 7 in the exercises would be hired.
The criterion-cut-off score was also determined by management and was based on the expected acceptable level of proficiency set for
work performance of fire-fighters currently employed. Because of the importance for the company of being renowned for its
compliance with high ethical and social responsibility standards, the industrial psychologist took great care in adhering to principles of
fairness, bias, and equity by applying various measures of test bias, differential validity, and fairness models to make adjustments to
cut-off scores for designated groups within the applicant pool.
Finally, the industrial psychologist decided to determine the proportion of correct decisions in order to evaluate the effectiveness of
the situational exercises in comparison with the previous selection test battery used by the municipality, and whether any utility gains
had been made. When the predictor scores and criterion scores were plotted on a graph, similar to the graph shown in Figure 6.11, the
industrial psychologist was able to determine that the situational exercises were good predictors of performance and that the tool
resulted in a 70 per cent increase in selection accuracy over the selection method previously used.
Reflecting on the various standards and scientific procedures applied within the selection decision-making framework, the
industrial psychologist felt satisfied that quality decisions were made and that they ultimately would contribute to the municipality’s
vision and mission.
(Adapted from Von der Ohe, 2009)
Selection and affirmative action
Affirmative action is a social policy aimed at reducing the effects of prior discrimination. It is a requirement under section 15(1) of the
Employment Equity Act (EEA) to enforce affirmative action:
‘15. (1) Every designated employer must, in order to achieve equity, implement affirmative action measures for people from designated groups in
terms of this Act.’
Affirmative action was originally aimed primarily at the recruitment of new employees – namely, that organisations would
take positive (or affirmative) action to bring members of previously disadvantaged groups into the workforce that had previously
excluded them.
Campbell (1996) described four goals of affirmative action:
1. Correct present inequities. If one group has ‘more than its fair share’ of jobs or educational opportunities because of current
discriminatory practices, then the goal is to remedy the inequity and eliminate the discriminating practices.
2. Compensate past inequities. Even if current practices are not discriminatory, a long history of past discrimination may serve to keep
members of a previously disadvantaged group at a disadvantage.
3. Provide role models. Increasing the frequency of previously disadvantaged group members acting as role models can potentially
change the career expectations, education planning, and job-seeking behaviour of younger, previously disadvantaged group members.
4. Promote diversity. Increasing the representation of previously disadvantaged groups in a student body or workforce may act to
increase the range of ideas, skills, or values that can be brought to bear on organisational problems and goals.
As straightforward as these goals may appear, there is great variety in the operational procedures used to pursue the goals. The most
passive interpretation is to follow procedures that strictly pertain to recruitment, such as extensive advertising, in sources most likely to
reach previously disadvantaged group members.
A stronger interpretation of the goals is preferential selection, where organisations will select previously disadvantaged group
members from the applicant pool if they are judged to have substantially the same qualifications as white applicants. Affirmative action
needs to address the removal of all forms of discrimination and all obstacles to equality of opportunity (Nkuhlu, 1993). Nkuhlu stresses,
however, that under no circumstances should individuals be selected solely because of their skin colour: selection should be based only
on performance. The most extreme interpretation would be to set aside a specific number of job groups or promotions for members of
protected groups. This is referred to as the ‘quota interpretation’ of affirmative action: organisations staff themselves with explicit
percentages of employees representing the various protected groups, based on local or national norms, within a specific time frame.
Affirmative action has been hotly debated by proponents and critics. In particular, over the past ten years the subject of affirmative
action has been a major political issue. Criticism of the quota interpretation has been strident, claiming the strategy ignores merit or
ability. Under a quota strategy it is alleged that the goal is merely ‘to get the numbers right’. Proponents of affirmative action believe it
is needed to offset the effects of many years of discrimination against specific groups (Muchinsky et al, 2005). More recently, research
by Linton and Christiansen (2006) confirmed that perceptions of discrimination play an important role in predicting whether an
affirmative action policy is viewed as fair.
So, has affirmative action been effective in meeting national goals of prosperity in employment for all people? The perceived
beneficiaries of affirmative action are often stigmatised as being incompetent, as non-beneficiaries attribute their employment in large
part to group membership (Heilman, McCullough & Gilbert 1996). While there appears to be consensus that affirmative action has not
produced its intended goals (Murrell & Jones 1996), there is considerable reluctance to discard it altogether. It is feared that its absence
may produce outcomes more socially undesirable than have occurred with its presence, however flawed it may be. Dovidio and Gaertner
(1996) asserted that affirmative action policies are beneficial in that they emphasise outcomes rather than intentions, and they establish
monitoring systems that ensure accountability.
Kravits et al (1997) documented a large quantity of psychological and behavioural research conducted on affirmative action. Among
the issues studied are reactions to individuals and by individuals hired because of affirmative action policies. Heilman and Herlihy
(1984) investigated the reactions of individuals to women who got their jobs on the basis of merit or because of preferential treatment
based on gender. Neither men nor women were attracted to jobs when they heard that the women in those jobs had been hired because of
their gender. Heilman, Simon and Repper (1987) examined the effects of gender-based selection. In a laboratory study, they led women
to believe they were selected for a group leader position because of either their gender or their ability.
When selected on the basis of gender, women devalued their leadership performance, took less credit for successful outcomes, and
reported less interest in persisting as leaders. The findings suggest that when individuals have doubts about their competence to perform
a job effectively, gender-based preferential selection is likely to have an adverse effect on how they view themselves and their
performance. Heilman, Block and Lucas (1992) reported that individuals who were viewed as having been hired on an affirmative action
basis did not believe their qualifications had been given much weight in the hiring process.
The stigma of incompetence was found to be fairly robust, and the authors asked whether the stigma would dissipate if the
individuals’ presumed incompetence was proved incorrect. Kleiman and Faley (1988) concluded that although granting preferential
treatment as part of an affirmative action programme may help remedy discrimination inequities in the workplace, it may also affect
other important social and organisational outcomes.
In summary, recruitment deals with the process of making applicants available for selection. The ease or difficulty of recruiting
depends on such factors as economic conditions, the job in question, the reputation of the organisation, and the urgency for filling the
job opening. Organisations that have the luxury of a leisurely, deliberate recruiting and selection process will most likely be more
successful than those that are rushed into filling a position. Thorough recruiting does not guarantee that the best candidates will be
selected. However, haphazard or casual recruiting frequently results in too few good candidates to choose from.
FAIRNESS IN PERSONNEL SELECTION
Any personnel-related decision is based on one or another model of decision-making or selection. The question that must be asked is
whether the model itself is ‘fair’, as well as whether or not the assessment instruments used allow for equal treatment of all candidates.
Fairness is a social rather than a psychometric or statistical concept. In a dynamic society such as South Africa, where the emphasis is on
democracy and equality, the focus falls on fair and equal treatment for all. This is particularly true of decisions made in the workplace,
where serious attempts are being made to rectify past practices which have had a negative and discriminatory effect on a large portion of
the population. The spotlight therefore falls on the fairness of all personnel decisions made that affect people’s work-related
opportunities, as well as their quality of life. In the investigation of decision-making policies and practices, the concept of ‘fairness’ is
widely used and heavily relied upon, despite the fact that consensus as to exactly what constitutes fairness has yet to be achieved.
Gender, race and age will be the primary factors against which fairness will be measured (Society for Industrial and Organisational
Psychology, 2005).
Defining fairness
Fairness has no single meaning, and therefore no single statistical definition. Each definition of fairness is based on a different set of
values, and has different implications for how selection and other personnel decisions are to be made (Taylor, 1992). In this regard, we
may expect management and unions to have vastly differing views of what constitutes fairness in personnel decision-making.
Fairness, or the lack thereof, is not the result of the selection instrument or predictor, nor is it the property of the selection procedure
used. Fairness is the total of all the variables that play a role or influence the final personnel decision. These can include the test,
predictor, integration of data, recommendations based on these data, or the final decision made by line management (Society for
Industrial Psychology, 1998). Based on the original definition of fairness formulated by Guion in 1965, Arvey and Faley (1988:7)
indicate that:
‘[u]nfair discrimination or bias is said to exist when members of a minority group (or previously disadvantaged individuals) have lower probabilities
of being selected for a job when in fact, if they had been selected, their probabilities of performing successfully in a job would have been equal to
those of non-minority group members’.
In defining fairness, Arvey and Renz (1992) refer specifically to procedural fairness in the selection process, which is related to the
following principles:
• Objectivity
• Consistency in the treatment of all applicants
• Freedom of the selection procedures from any form of manipulation, including adherence to deadlines of applications, and so
forth
• Selection process must be developed and conducted by professionals
• Confidentiality of data to be maintained
• Final decisions to be based on a review of the candidate’s data, and the decision made by more than one individual
• Protection of the candidate’s right to privacy
• Personal information may be sought only in so far as it pertains to the characteristics of the job, and
• Selection procedures in which faking may take place should be considered unfair, as results can be manipulated to meet the
criteria.
Taylor (1992) adds to these the following criteria:
• Information must be quantifiable, thereby ensuring objectivity and allowing for comparisons between individuals on the basis
of ‘scores’ achieved.
• All information used must be directly related and unquestionably relevant to the decision being made.
These principles of fairness are relevant to all personnel decisions, beginning with the selection of job applicants. Fairness must also be
applied when making any decision which is likely to affect the existing workforce, including decisions regarding training and
promotional opportunities, disciplinary steps and measures, and substantive and procedural decisions regarding dismissal. Substantive
and procedural fairness may even extend beyond the duration of the actual employment relationship, particularly when the
re-employment of dismissed workers is under consideration (Kriek, 2005).
Wheeler (1993) emphasises the ethical and moral obligation of psychologists and human resource practitioners to ensure the fair and
equal treatment of people. This is entrenched in the Code of Professional Conduct of the Health Professions Council of South Africa
(HPCSA) and the Professional Board for Psychology, which states:
‘As employees or employers, psychologists must not engage in or condone practices that are inhumane or that result in illegal or unjustifiable
actions. Such practices include, but are not limited to, those based on consideration of sex, race, religion or national origin in hiring, promoting or
training.’
If professionals are seen to be adhering to all of the above-mentioned principles, the use of assessment instruments is more likely to be
accepted, and employees are more likely to be satisfied with the decisions made. This, together with the person–job fit which can be
obtained by correct use of selection methods, should ensure the existence of the satisfied and productive workforce which South Africa
desperately needs to be competitive in the international labour market.
Fairness and bias
Some researchers differentiate between ‘fairness’ and ‘bias’. SHL (2001), for example, states that the two terms have distinctly different
meanings although they are generally used interchangeably in everyday language. It is proposed that ‘bias’ pertains to the impact of the
psychometric properties of the test which have an impact on the test result, while ‘fairness’ pertains to the way in which the results are
interpreted and applied. It is, therefore, a function of how personnel decisions are affected in the social context. SHL (2001:14) provides
an example that clearly illustrates the nuance in the meaning of the two concepts:
‘If the results on a questionnaire indicate that females are more caring than males and in real life it is found that males are more caring than females,
the difference found on the caring scores could be the result of errors in measurement, and this may be evidence of bias. Fairness, on the other hand,
is associated with the value judgements applied in decisions or actions that are based on test or assessment scores.’
A test may therefore be unbiased and valid, but the results used unfairly, resulting in discrimination and possibly even unfair labour
practices.
Adverse impact
Adverse impact is also a related concept that is becoming more common in the South African context. Adverse impact refers to the
possible impact of assessment results on personnel decisions regarding different previously disadvantaged groups. The problem arises
when a significant difference is found between the average test performances of different cultural or gender groups. In the absence of
validation evidence, there is likely to be a presumption that the group with the lower average performance was being indirectly
discriminated against. The implication is that, if the same entry standard were demanded of all applicants, the lower scoring group would
find it more difficult to comply with the requirements (Kriek, 2005).
Adverse impact has garnered great attention among psychologists. The term ‘adverse impact’ was defined by what is known as the
‘80 per cent’ (or ‘4/5 (four-fifths)’) rule. The rule states that adverse impact occurs if the selection ratio (that is, the number of people
hired, divided by the number of people who apply) for any group of applicants (such as blacks) is less than 80 per cent of the selection
ratio for another group. Suppose 100 whites apply for a job and 20 are selected. The selection ratio is 20/100 or 0,20. By multiplying
0,20 by 80 per cent, we get 0,16. This means that if fewer than 16 per cent of the black applicants are hired, the selection test produces
adverse impact. So if 50 blacks apply for the job and fewer than 8 (50 x 0,16) of them are hired, then the test produces adverse impact (Kriek, 2005).
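To make the four-fifths rule concrete, the short Python sketch below (not part of the original text; the function name and applicant counts are purely illustrative) computes the two selection ratios and flags adverse impact when the lower ratio falls below 80 per cent of the higher one.

```python
def adverse_impact_check(hired_a, applied_a, hired_b, applied_b):
    """Apply the '80 per cent' (four-fifths) rule.

    Group A is the comparison group; group B is the group being
    checked for adverse impact.
    """
    ratio_a = hired_a / applied_a        # e.g. 20 / 100 = 0.20
    ratio_b = hired_b / applied_b
    threshold = 0.8 * ratio_a            # 80 per cent of group A's ratio
    return ratio_b < threshold           # True means adverse impact is indicated

# Worked example from the text: 20 of 100 white applicants are hired,
# so at least 16 per cent of black applicants must be hired.
# If only 7 of 50 black applicants are hired (ratio 0.14 < 0.16),
# adverse impact is flagged.
print(adverse_impact_check(hired_a=20, applied_a=100, hired_b=7, applied_b=50))  # True
```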
Positive validation evidence for the test generally justifies its use and rules out the possibility of unfair discrimination. By showing
that those who perform poorly on the test also perform poorly on the job, a positive validation result confirms that it is reasonable to
reject low scorers. This means that differences between the groups will not lead to unfair discrimination but that the differences can be
justified on the basis of an inherent requirement of the job. This, however, means that there will still be adverse impact, but that the
resulting impact can be justified. For example, more females than males may end up as nurses because they scored higher in respect of
caring, and females do indeed tend to be more caring nurses than males (Kriek, 2005).
In general, however, the greater the degree of adverse impact resulting from the use of a test or assessment instrument, the higher the
validity of the test should be to justify this. The alternative would be to search for an alternative test with the same validity or accuracy
of prediction, but less adverse impact. Personality questionnaires, for example, demonstrate less adverse impact than ability tests, and
some organisations give more weight to personality measures in decision-making models to overcome the adverse impact problems
associated with ability tests (Kriek, 2005).
Fairness and culture
Some researchers identify yet another concept related to bias and fairness, namely culture (Landy & Conte, 2004; Murphy &
Davidshofer, 2001). Landy and Conte (2004:129) state that ‘[c]ulture addresses the extent to which the test-taker has had an opportunity
to become familiar with the subject matter or processes required by a test item’. Many assessment instruments make use of, for example,
idiomatic expressions such as ‘Let your hair down’. In the South African environment, with our rich cultural diversity, many previously
disadvantaged people do poorly on these instruments because they have not been exposed to this knowledge through home and school
environments.
To demonstrate this, Williams developed a test in 1972, the Black Intelligence Test of Cultural Homogeneity (BITCH). He
attempted to highlight the influence of culture and subculture on language, and therefore test scores (Landy & Conte, 2004). The test
consisted of items that used the black ghetto slang at that time. One item asked for the meaning of the phrase ‘running game’ and gave
four options to choose from:
a. Writing a bad cheque
b. Looking at something
c. Directing a contest
d. Getting what one wants.
The correct answer in 1972 was ‘d’. Not many white test-takers would have known that. Therefore, to use an assessment instrument in
more than one culture, there must be agreement that the items mean the same thing in the different cultures – there must be construct
equivalence between the different groups. In the development of their Occupational Personality Questionnaire 32, SHL took great care
to ensure that the various cultures and languages were taken into consideration. To test this, SHL performed a study in 2003 in which
the construct equivalence of the OPQ32n between black and white subgroups was tested. The conclusion drawn at the end of the study
was ‘that the OPQ scales have structural equivalence between the different ethnic groups in that the inter-correlations between the scales
for the different groups are very high’ (SHL 2004:10).
Measures of test bias
Cole (1973) quotes Anastasi (1968:238), defining test bias as referring to ‘the over-prediction or under-prediction of criterion measures’.
When selecting the appropriate test to be used in assessing personnel, be it for selection, promotion or training purposes, it is essential to
take into consideration the following aspects (SHL, 2001):
• Criteria measured by the specific test
• Characteristics of groups on which the test norms are based
• Test reliability
• Test validity
• Context in which the attributes being tested for are to be used
• Level of difficulty of the test in relation to the levels at which the job requires the attribute being measured, and
• Report on steps taken to guard against test bias.
In order to ensure that the assessment instruments being used are fair and unbiased, it is essential for them to be directly related to the
requirements of the job. To achieve this, a job description becomes the key tool from which a person specification (the personal
attributes required to perform the job successfully) can be established. These attributes can then be measured using the appropriate
assessment tools. It is also important to identify the level of each attribute required in the job, so as to ensure that the test is not too
difficult or that cut-off points are not set too high.
Messick (1975) refers to two types of bias, namely intrinsic and predictive or correlational bias. Intrinsic bias refers to the intrinsic
or psychometric properties of the test which may have an effect on the test result. This includes item bias, where an item has different
levels of difficulty, or different meanings, for different population groups. In other words, the probability of obtaining the desired
response differs from group to group, and the total scores achieved have been influenced by factors other than the attribute being
assessed. Other sources of intrinsic bias include the level of language used, administrative procedures, testing environment, and
instructions given. Predictive or correlational bias refers to the usefulness, validity, and fairness of the test for the purpose for which it
was designed (Kriek, 2005).
Jensen (1980), cited in Arvey & Faley (1988:144), realised that, in addition to using external criteria to evaluate a test’s fairness, one
can also rely on the ‘internal psychometric properties’ of a test as an indicator of bias. In this regard, the following may be considered:
• Differential reliability. A test may show greater reliability for one group than for another.
• Test item for race interaction. If a test is biased, one could expect to find some pattern of differential achievement among
different items.
• Differential factor structures. Factor analysis should identify a test which measures different factors for different racial
subgroups.
• Differential item difficulties. A test may show different levels of item difficulty among the items for different subgroups.
The quest for culture-fair tests
Arvey and Renz (1992) refer to empirical data which indicate that certain previously disadvantaged groups may be disadvantaged in
selection because, as a group, they tend to score lower in certain selection processes, particularly cognitive tests. Schmidt (1988)
criticises this belief, stating that ‘cognitive employment tests are equally valid and predictively fair’ for previously disadvantaged
individuals, and that ‘they are valid for virtually all jobs and … failure to use them in selection will typically result in substantial
economic loss to individual organisations and the economy as a whole’. This view is also supported by other researchers.
Group differences may also be due to the fact that certain candidates may never have been exposed to selection procedures, and this
unfamiliarity is likely to have an adverse effect on their performance. It may therefore be useful to introduce all candidates to the testing
session by means of practice examples, to allow them to gain confidence and familiarity in the testing procedures. This should ensure
that test results are determined only by the individual’s characteristics or attributes being assessed.
SHL (2001) propose in their Guidelines for Best Practice in Occupational Assessment in South Africa that, if possible, candidates
should be notified in advance that they are to be assessed. This would provide an opportunity for samples of the instruments (practice
leaflets) to be sent to the candidates so that they can familiarise themselves with the type of tasks they will encounter in the actual
assessment. This would increase the effectiveness of the actual instruments in rendering accurate assessments of candidates’ abilities.
Poor test results may also be the result of environmental factors, such as lack of opportunity and socio-economic differences in
education, language difficulties, and any number of other issues not necessarily related to the validity of the assessment instrument itself.
In testing people of different cultures, attempts have been made to develop a test which may be seen as being ‘culture-fair’. To a
large extent, this has meant eliminating culturally loaded items or items which may refer to situations or experiences unique to Western
society. These attempts may, however, cause members of a previously disadvantaged group to perform poorly if they have been exposed
to ‘white’ culture. Furthermore, the previously disadvantaged candidate may perform better on the test but, when faced with work
situations which represent ‘white’ culture, the predicted performance may not match the level of performance demonstrated or attained
(Kriek, 2005).
Another criterion for developing ‘culture-fair’ tests has been to eliminate, as far as possible, verbal aspects of the test, based on the
premise that previously disadvantaged cultures may not have achieved the level of verbal development required to interpret written
instructions successfully or understand the actual test items. Researchers have, however, found that previously disadvantaged groups
have performed worse on these non-verbal assessments than on the traditional tests (Higgins & Silvers 1958; Moore, MacNaughton &
Osborn 1969; Tenopyr, 1967, as cited in Arvey & Faley, 1988:193).
The quest for ‘culture-fair’ tests therefore seems to have failed dismally, indicating (correctly or not) greater differentials between
cultural and racial groups, without increasing the predictive value of the instrument.
Legal framework
Arvey and Renz (1992:331) note that, although fairness differs from legality, the two concepts may be related in as far as the law
‘reflects societal notions of fairness’. Furthermore, with the changes to the Labour Relations Act (LRA), the Labour Court now has to
consider issues of law as well as issues of fairness. In this regard, the Court is likely to rely on social opinion, or what society would
consider to be fair. This social perception is likely to change over time, however, as situations and values change.
With this in mind, according to Pierre Marais of Labour Law Consultants, ‘[t]he intention of anti-discrimination legislation
pertaining to recruitment, selection, placement and promotion is that every person who can do a job should have a fair chance to get the
job’ (IBC Conference, 1996). The impact of this on the use of occupational assessments in personnel decision-making processes is that
equal opportunity must be afforded to all by means of subjecting them to essentially equal assessment methods and procedures.
The LRA does not refer specifically to psychological assessment but provides guidelines for the fairness of all employment and
personnel related decisions. Section 8(2) of the Constitution of the Republic of South Africa, 1996 and section 187 of the LRA extend
such protection to all individuals, as all persons are equal before the law. In the labour arena, this includes employees as well as job
applicants, who are now protected against unfair discrimination on any ‘arbitrary grounds, including, but not limited to, gender, sex,
ethnic or social origin, sexual orientation, age, disability, religion, conscience, belief, political opinion, culture, language, marital status,
or family responsibility including pregnancy, intended pregnancy or any reasons related to pregnancy’, as these grounds are not related
to the inherent requirements of the job.
The EEA was introduced to promote fairness and equality in the workplace. The Act proposes two ways of achieving this: The first
is to promote equal opportunity and fair treatment in employment through the elimination of unfair discrimination. The second is to
implement affirmative action measures to redress disadvantages in employment experienced by previously disadvantaged groups. The
Act does not focus only on differences in ethnic and gender groupings, but also makes provision for people with disabilities. It provides
redress for people with disabilities who are unfairly discriminated against, and employers are required to consider making structural
changes or introducing technical aids to facilitate the employment of people with disabilities. The Code of Good Practice on the
Employment of People with Disabilities was published in 2002 to serve as a guide for employers and employees on the key aspects of
promoting equal opportunities and fair treatment for people with disabilities, as required by the EEA. The Code assists employers and
employees in understanding their rights and obligations in this regard (Kriek, 2005).
The EEA makes specific provision for psychological testing in the workplace. Section 8 (in Chapter II) reads as follows:
‘8 Psychological testing and other similar assessments
Psychological testing and other similar assessments of an employee are prohibited unless the test or assessment being used–
(a) has been scientifically shown to be valid and reliable;
(b) can be applied fairly to all employees; and
(c) is not biased against any employee or group.’
This statement has increased the awareness of employers with regard to the employment practices and procedures they implement. The
Act makes it imperative for selection decisions to be based on good and solid evidence relating the assessment processes to the job
requirements. It should be noted that it is further stated in section 6(2) (in Chapter II) that:
‘(2) It is not unfair discrimination to–
(a) take affirmative action measures consistent with the purpose of this Act; or
(b) distinguish, exclude or prefer any person on the basis of an inherent job requirement of a job.’
If an employer can prove that a decision has been taken on the basis of operational requirements or the inherent requirements of the job,
the decision cannot be said to be unfair or discriminatory. This refers specifically to the minimum requirements of the job, as opposed to
the ideal, and indicates that the above-mentioned rights may be limited under certain circumstances, as long as said limitations are fair
and justifiable, acceptable in an open and democratic society, and do not negate the essence of the right being limited. Furthermore, the
limitation may extend only so far as is required by the purpose of such a limitation (LRA, 1995).
A debate has arisen over the proposed amendment to the EEA with regard to access to information. The amendment proposes that all
information, including selection decisions, be disclosed at the request of the parties involved. This has the implication that, although
sound selection decisions based on the inherent requirements of the job may have been made, confidential information must be released
to unqualified individuals. This could lead to incorrect interpretations of specialised information.
Should a selection decision be challenged in the Labour Court, the burden of proof lies with the employer to prove that the selection
criteria, or personnel-related decision taken, were fair and non-discriminatory. In this regard, the LRA and the EEA refer to direct and
indirect discrimination. Direct discrimination may also be referred to as ‘intentional discrimination’ where the decisions taken (or
occupational assessment methods used) are not related to any inherent requirements of the job. Indirect or ‘disparate impact’ (Seymour,
1988) discrimination refers to decisions which may not be directly or intentionally discriminatory, but which have the effect of unfairly
discriminating against a certain group.
The EEA therefore requires tests used in the selection procedure to be relevant to the abilities and attributes required in the job in
question. The essence, then, is to rely on substantive and procedural fairness in every regard, and to involve all interested parties in
drawing up organisational policies and in the decision-making processes. The emphasis should therefore be on what procedures and
decisions can be considered fair and justifiable, as opposed to what would constitute the minimum legal requirements.
In an attempt to assist employers in this regard, various experts and organisations in the field of industrial psychology have drawn up
guidelines for the development and evaluation of selection procedures. This is an effort to ensure effective and appropriate use of
selection procedures in the employment context. From a legal perspective, then, it would seem that a selection procedure would be
considered fair as long as the procedures followed were fair, the assessment tools valid for the purpose for which they were being used,
and all candidates treated equally, even if the proportions of previously disadvantaged and ‘white’ groups selected were not
representative.
De Jong and Visser (2000) performed a study to investigate the perceptions of members of black and white population groups on the
fairness of different selection techniques. The sample consisted of mature university students and was divided into informed and
uninformed groups. Informed students were seen as those who had had exposure to university subjects that would provide them with
theoretical knowledge on selection techniques, for example, strategic personnel management and/or undergraduate industrial
psychology. It was found that significant differences existed between the fairness perceptions of black (uninformed) and white
(uninformed) students. The white (uninformed) group’s perception was more positive towards the selection techniques used (interviews,
written ability tests, personal references, and personality tests). Both population groups viewed the interview as the fairest selection
technique. When the black (uninformed) and black (informed) groups were compared, no significant differences were found between
their perceptions of fairness. De Jong and Visser (2000) therefore conclude that being informed on the nature and value of selection
techniques did not affect a student’s perception of fairness.
Models of test fairness
An assessment tool may be considered fair in as far as it measures the criteria which it proposes to measure, and does so accurately and
reliably. This refers specifically to the validity and reliability of the assessment instrument. Arvey and Faley (1988:121) identify various
models of test fairness which aim to identify ways of determining the fairness of a given test. These models include: systematic mean
differences between subgroups on tests model; differences in validity model; differences in regression lines model; Thorndike’s quota
model; conditional probability model; and equal risk model.
Systematic mean differences between subgroups on tests model
Unfair discrimination resulting from the use of occupational assessment instruments may be identified when systematic mean
differences exist between previously disadvantaged and non-minority group scores, even when the non-minority group mean is lower
than the mean achieved by the previously disadvantaged group. It is, however, imperative to consider the causes of these differences
before determining whether the test is biased or unfair. The background and characteristics of the samples, for example, may be the main
cause of differences, and not culture (Kriek, 2005).
It has, however, been suggested that certain differences do exist between different culture groups with regard to attributes such as
intellect. If this is true, the reasons for these differences remain unclear, but it does indicate that differences need not be due to a biased
test or unfair test items, but can be explained by differences in socio-economic background (Kriek, 2005).
Differences in validity model
This model holds that tests are predictive for all groups, but to differing degrees. This refers to significant differences in the coefficients
calculated for each of the two groups as being an indicator of test bias (‘differential validity’). In conducting such research, however, it
is important to take note of the sizes of the relevant sample groups, as significant differences in sample size are likely to give rise to
unrealistic results or skewed coefficients. The purpose of such a study would be to determine whether the correlation between test results
(predictor scores) and job performances (criterion scores) differs significantly for each of the groups. If this is the case, the test may be
said to be biased, and this bias must be corrected for or taken into consideration when interpreting test results, otherwise discrimination
can be said to have taken place (Kriek, 2005).
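As a hedged illustration of how differential validity might be examined in practice (the data, group labels, and the choice of Fisher's r-to-z test are assumptions for this sketch and are not prescribed by the text), one could compare the predictor–criterion correlations of two groups as follows:

```python
import numpy as np
from scipy import stats

def compare_validities(pred_a, crit_a, pred_b, crit_b):
    """Compare the validity coefficients (predictor-criterion correlations)
    of two independent groups using Fisher's r-to-z transformation."""
    r_a = np.corrcoef(pred_a, crit_a)[0, 1]
    r_b = np.corrcoef(pred_b, crit_b)[0, 1]
    n_a, n_b = len(pred_a), len(pred_b)
    z_a, z_b = np.arctanh(r_a), np.arctanh(r_b)      # Fisher transform
    se = np.sqrt(1 / (n_a - 3) + 1 / (n_b - 3))      # small samples inflate this
    z = (z_a - z_b) / se
    p = 2 * stats.norm.sf(abs(z))                    # two-tailed p-value
    return r_a, r_b, p

# Illustrative data only: simulated test and job-performance scores per group.
rng = np.random.default_rng(1)
pred_a = rng.normal(size=150); crit_a = 0.5 * pred_a + rng.normal(size=150)
pred_b = rng.normal(size=150); crit_b = 0.3 * pred_b + rng.normal(size=150)
print(compare_validities(pred_a, crit_a, pred_b, crit_b))
```

A very small p-value would suggest differential validity; as the text cautions, markedly unequal sample sizes make such comparisons unstable.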
Differences in regression lines model
Cleary (1968) identifies the fact that different groups may score significantly differently on certain selection tests, but perform equally
well in the job. In other words, the regression lines and intercept values for blacks and whites, for example, tend to differ. The criterion
score achieved on the assessment tool may therefore under- or over-predict levels of actual performance, yielding different predicted
criterion scores (using regression equations). Cleary therefore stresses the importance of using tests in which the regression lines do not
differ, or using different regression equations for different subgroups so as to identify individuals with the highest predicted criterion
scores, regardless of which equation was used in making the predictions.
Where a common regression line is used, the performance of some individuals is over-predicted, while the performance of others
will be under-predicted. This is of particular importance when selection is based on cut-off points, and Cleary suggests that the use of
one common regression line has a negative effect on the white group, whose scores are usually lowered, while the previously
disadvantaged group’s scores are raised. The negative effect, however, would become apparent when the members of the previously
disadvantaged group were found not to perform as well as expected on the job, based on the (predictive) scores of the assessment
measure (Kriek, 2005).
Cole (1973) states that, if the regression lines are identical for both groups, then the use of a single regression equation can be
considered fair. If the regression lines differ, then different equations must be used. This differs from the use of a single combined
regression equation, where both groups are reduced to an ‘average’ regression line, and the same regression equation is then used on
both groups.
Cole recommends the use of different regression equations for groups whose regression lines differ, so as to ensure the selection of
those applicants with the highest predicted criterion scores, based on the available predictor variables. Cleary therefore suggests that a
test is fair if there are no differences between the regression lines estimated in predicting between groups. This model has been endorsed
by the Standards for Educational and Psychological Testing (1995), and has been accepted in court cases (for example, Cormier v PPG
Industries, Inc. 1981, as cited in Arvey & Faley, 1988).
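A minimal sketch of this logic, assuming illustrative data, is shown below: a separate regression line is fitted for each subgroup, and each applicant's criterion score is then predicted from his or her own group's equation, so that selection can be based on the highest predicted criterion scores, as Cleary and Cole suggest.

```python
import numpy as np

def fit_group_lines(data_by_group):
    """Fit a separate predictor-to-criterion regression line per subgroup.

    data_by_group maps a group label to (predictor_scores, criterion_scores).
    Returns {group: (slope, intercept)}.
    """
    return {group: tuple(np.polyfit(x, y, deg=1))
            for group, (x, y) in data_by_group.items()}

def predicted_criterion(lines, group, predictor_score):
    """Predict an applicant's criterion score from his or her own group's line."""
    slope, intercept = lines[group]
    return slope * predictor_score + intercept

# Applicants from all groups are then ranked on their predicted criterion
# scores, regardless of which group's equation produced the prediction.
```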
Figure 6.10 and Figure 6.4, as previously discussed, illustrate the principles outlined by the differences in validity and differences in
regression lines models. Figure 6.10 shows that the linear regression relationship between the predictor and criterion is positive and that
people with high predictor scores also tend to have high criterion scores. Similarly, people with low predictor scores also tend to have
low criterion scores. The scores tend to cluster in the shape of an ellipse, and since most of the data points fall in the quadrants 1 and 3,
with relatively few ‘dots’ in quadrants 2 and 4, positive validity exists. In investigating differential validity for groups (blacks and
whites), the figure further shows that the joint distribution of predictor and criterion scores is similar throughout the scatter plot in each
group, and the use of the predictor can be continued. However, as illustrated in Figure 6.4, if the joint distribution of predictor and
criterion scores is similar for each group (blacks and whites), but circular, there is no differential validity, and the predictor in this case is
useless, because it supplies no information of a predictive nature. Investigating differential validity and adverse impact in the absence of
an overall pattern of predictor-criterion scores that allows for the prediction of relevant criteria will be a waste of time and effort (Cascio
& Aguinis, 2005).
Figure 6.15 is an example of a differential predictor-criterion relationship that is regarded as legal and appropriate. Using the
example of race groups, the figure illustrates that validity for the black and white groups is equivalent, but in our example, the black
group scores lower on the predictor and does worse on the job (of course, the situation could be reversed). In this example, the very
same factors that depress test scores may also serve to depress job performance scores. Adverse impact is therefore defensible in this
case, since the blacks in our example do more poorly on what the organisation considers a relevant and important measure of job
success. On the other hand, in legal terms, a company would be required to provide evidence that the criterion was relevant, important,
and not itself subject to bias. In addition, alternative criteria that result in less adverse impact would have to be considered along with the
possibility that some third factor (for example, length of service) did not cause the observed difference in job performance (Cascio &
Aguinis, 2005).
Figure 6.15 Valid predictor with adverse impact (Cascio & Aguinis, 2005:185)
[Reproduced with the permission of Pearson Education, Inc.]
Using the example of blacks and whites again, Figure 6.16 illustrates that members of the black group would not be as likely to be
selected, even though the probability of success on the job for both blacks and whites is essentially equal. Under these conditions,
Cascio and Aguinis (2005) recommend that separate cut scores be used in each group based on predictor performance, while the expectancy of job performance success remains equal. This strategy is justified since the expectancy of success on the job is equal for both the black and white groups and the predictor (for example, a selection interview) is being used simply as a vehicle to forecast the likelihood of successful performance. The primary focus is therefore on job performance rather than on predictor performance. A black candidate with a score of 65 on an interview may have a 75 per cent chance of success on the job. A white candidate with a score of 75 may have the same 75 per cent probability of success on the job (Cascio & Aguinis, 2005).
Thorndike’s ‘quota’ model (1971)
In response to Cleary’s model, Thorndike (1971) points out that the use of regression lines to set cut-offs for selection is likely to be
disadvantageous to the previously disadvantaged group. He therefore suggests that different cut-off points be identified so as to ensure
that a representative proportion of the previously disadvantaged group is selected. To achieve this, the qualifying scores on a test should
be set at levels that will qualify applicants in the two groups in proportion to the numbers of the two groups that reach a specified level
of performance.
In other words, selection procedures are fair when the selection ratio is proportional to the success ratio between blacks and whites
(Steffy & Ledvinka, 1989). This generally ensures that a greater percentage of the previously disadvantaged group is selected than is
likely to be the case under the other models of test fairness. This coincides with current affirmative action practices in South Africa, but
it could have the effect of reducing levels of productivity, as the best people for the job may not always be selected (Kriek, 2005).
Thorndike therefore proposes that ‘fairness would be achieved only when the ratio between the selection ratio and base rate were equal
across groups’ (Arvey & Renz, 1992:337).
Figure 6.16 Equal validity, unequal predictor means (Cascio & Aguinis, 2005:186)
[Reproduced with the permission of Pearson Education, Inc.]
Cole (1973) defines the quota model as being based on the premise that a certain quota or percentage of applicants from each
identified group (e.g. male and female), as would be representative of the general population, must be selected for such selections to be
fair.
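The following sketch (illustrative only; the function and data names are assumptions) shows one way of operationalising Thorndike's idea: each group's share of the positions is set in proportion to the number of its applicants expected to reach the specified performance standard, and the top scorers within each group fill that share.

```python
def quota_selection(scores_by_group, success_rate_by_group, n_to_hire):
    """Allocate hires so that each group's share is proportional to the
    number of its applicants expected to reach the success standard
    (a Thorndike-style quota interpretation)."""
    expected_success = {g: len(scores) * success_rate_by_group[g]
                        for g, scores in scores_by_group.items()}
    total_expected = sum(expected_success.values())
    selected = {}
    for group, scores in scores_by_group.items():
        share = round(n_to_hire * expected_success[group] / total_expected)
        # within each group, the top scorers fill that group's share
        selected[group] = sorted(scores, reverse=True)[:share]
    return selected

# Example: 10 posts; group A has 100 applicants with a 40 per cent success
# rate, group B has 50 applicants with a 30 per cent success rate, so the
# shares work out to roughly 10 * 40/55 = 7 and 10 * 15/55 = 3.
```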
Conditional probability model
Cole (1973) refers to the conditional probability model as one which identifies, for each subgroup, predictor cut-off points above which
an applicant is expected to score before being considered for selection. In other words, candidates are separated into different groups (for
example, on the basis of race), and ranked according to their test scores. Each group then has a set cut-off point, and candidates scoring
above that cut-off are selected; or the top candidates of each group may be selected. According to Cole (1973:240): ‘Selection is
[therefore] fair when blacks and whites with the same probability of success have the same probability of selection.’
Arvey and Renz (1992:338) quote Cole as saying:
‘The basic principle of the Conditional Probability Selection Model is that for both previously disadvantaged and white groups whose members can
achieve a satisfactory score, there should be the same probability of selection regardless of group membership.’
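A rough sketch of this principle follows (the names, the 0,5 success threshold, and the success-probability function are all assumptions for illustration): among applicants judged capable of satisfactory performance, every group is given the same selection rate.

```python
def conditional_probability_selection(applicants_by_group, success_prob,
                                      p_select_given_success=0.5):
    """Give every group the same probability of selection among applicants
    judged capable of satisfactory performance (Cole's principle,
    illustratively implemented)."""
    selected = {}
    for group, applicants in applicants_by_group.items():
        # applicants are (identifier, predictor_score) pairs
        capable = [a for a in applicants if success_prob(a[1]) >= 0.5]
        capable.sort(key=lambda a: a[1], reverse=True)
        n_select = round(p_select_given_success * len(capable))
        selected[group] = capable[:n_select]
    return selected
```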
Equal risk model
The equal risk model proposed by Einhorn and Bass (1971) suggests the following:
‘By considering the distribution of criterion scores about the regression line, prescribed predictor cut-off points for each subgroup could be
determined above which applicants have a specific minimal chance of being successful (or scoring above some specified criterion).’
For example, suppose that a selector is willing to hire all applicants with at least a 70 per cent chance of success (or 30 per cent risk), as
gauged by the predictor variables used. Then the predictor cut-off is chosen at the point at which the criterion pass point is
approximately one half standard error of estimation below the predicted criterion, since about 70 per cent of the cases fall above minus
one half standard deviation in a normal distribution (Kriek, 2005).
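Under the stated normality assumption, the cut-off can be computed directly. The sketch below (the regression parameters and pass mark are illustrative assumptions) finds the predictor score at which the predicted criterion lies far enough above the pass point to give the required chance of success.

```python
from scipy.stats import norm

def equal_risk_cutoff(slope, intercept, see, pass_point, p_success=0.70):
    """Predictor cut-off at which an applicant has at least `p_success`
    probability of reaching `pass_point` on the criterion, given that
    criterion scores are normally distributed about the subgroup's
    regression line with standard error of estimate `see`."""
    z = norm.ppf(p_success)                       # about 0.52 for a 70 per cent chance
    required_prediction = pass_point + z * see    # predicted criterion needed
    return (required_prediction - intercept) / slope   # assumes a positive slope

# Example: criterion = 0.5 * predictor + 10, SEE = 4, pass mark = 50.
# The predicted criterion must be about 52, so the cut-off is roughly 84.
print(equal_risk_cutoff(slope=0.5, intercept=10, see=4, pass_point=50))
```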
Summary of models of test fairness
Although the above models deal sufficiently with unfair discrimination or test bias, any number of other models have been developed,
many of which are associated with, or are variations of, these models. Furthermore, as Arvey and Faley (1988) point out, no agreement
has been or can be reached with regard to which is the correct or best model. Each researcher and test user will identify the model that
best suits their objectives, and falls in with their definition of fairness in selection and the use of psychometric assessments.
Some researchers go on to suggest that the combination of two or more models is the fairest means of making selection decisions. In this regard, Cole (1973) refers to Darlington’s (1971) suggestion of first deciding, according to the quota model, whether it is viable to
select members of a specific group. If so, the regression lines model is applied, in which one accepts that some differences between
criterion scores will yield equally desirable candidates from different groups.
Both the quota and the regression models use explicit values associated with the selection of members of one group over another.
This means that group membership becomes a more important criterion than the job-related criterion being tested for. This may,
however, be an acceptable situation in certain circumstances, such as the implementation of affirmative action programmes.
In the regression lines and equal risk models, the job-related criterion (as illustrated in Figure 6.15), takes precedence over all else,
increasing the likelihood of later career or job success, provided that the criterion being tested for is directly related to job success. This
approach may be seen as unfair, particularly in societies such as South Africa, where the aim is to address past inequalities in the
selection of previously disadvantaged groups.
Cole (1973) states that the regression lines model is the most often used, particularly because of the large amount of time and money
being put into selection, and the organisation’s consequent desire to ensure the success of this process. Steffy and Ledvinka (1989:297)
investigated research which suggested that the Cleary model ‘has the worst impact on black employment but the most favourable impact
on employee productivity, compared with alternative definitions of fairness’. They go on to state that this adverse impact tends to
worsen over time, while the positive impact on productivity tends to grow over time. The conditional probability model ensures equal
chances of selection regardless of group membership. Cole (1973) therefore proposes that this is likely to result in more previously
disadvantaged group members being selected than is the case under the regression model.
In contrast to the above models and perspectives, Novick and Ellis (1977) argue that all of these ‘group-parity’ approaches are
‘inadequate and inappropriate’ means of achieving equal opportunities. They propose that, instead of using group membership to
determine whether and indeed how much compensation an individual requires, one should rather consider the degree of disadvantage
suffered by that individual in the past. This allows the focus to fall on truly disadvantaged individuals, resulting in what is generally
considered to be socially just and fair. It may also ensure that compensatory programmes are perceived as fair, and are therefore more
readily accepted, as long as the areas in which the individual may be perceived as having been disadvantaged are related to his work
environment and conditions of employment.
Furthermore, obstacles are likely to occur with regard to establishing policies identifying levels of ‘disadvantagement’, proving such,
and linking these levels to appropriate levels of compensation. Monitoring of the system is also likely to present its own set of
difficulties. Whatever the case may be, the need to correct past injustices remains a prevalent issue in South African society, and
particularly in the workplace. Who in fact benefits from these efforts again depends on the principles and values adhered to by the
individuals or companies making the decisions. The South African Society for Industrial Psychology recommended that fairness models
based on the regression lines model be used in studies investigating the fairness of selection procedures (Society for Industrial
Psychology 1998).
Huysamen (1995:6) sums it up as follows:
‘In this country we are in the fortunate position not to have to develop the (above) models or to investigate their usefulness from scratch. At the
same time, it would be foolish to ignore their relevance and usefulness in the South African context. Current practice such as accepting a much
higher proportion of black than white applicant groups neither has any psychometric basis nor agrees with any defined notion of fairness.’
How to ensure fairness
The Commission for Racial Equality (1993) in the UK, cited in Kriek (2005:167), states that ‘the use of Job Analysis to design selection
procedures not only gives users the obvious benefits of using appropriate selection techniques, but also provides evidence of their
relevance should any questions arise’. It is equally important to avoid selection criteria which may require prior knowledge of the job,
position or organisation, as this can be said to be discriminatory. Candidates should, therefore, be required to complete successfully only
the relevant tests, as identified, which may include aptitude or ability tests, personality tests, and interest inventories (Kriek, 2005).
Furthermore, to ensure fairness when using assessment material in selection procedures, it is essential for all aspects of the testing
session to be the same for all candidates. This includes aspects such as the physical environment and the instructions given to
candidates. Candidates also tend to respond more favourably to selection procedures which are clearly job-related (such as assessment
centre exercises) and situations where fair personnel policies are followed, such as providing information on the job-relatedness of the
procedure, providing feedback to the candidate, and establishing a good rapport with the candidate (Kriek, 2005).
This perception of fairness is just as important for successful candidates as for unsuccessful candidates, as the fairness of the
selection procedure is thought to have an impact on an individual’s work performance and commitment to the organisation (Gilliland
1993, 1994). Gilliland summarises this perceived fairness as centring on three specific aspects, namely the ‘equity, equality and special
needs’ of the candidates involved. The purpose of psychological assessments is therefore to differentiate between candidates and
employees rather than to discriminate against them. It is therefore recommended that those professionals involved in personnel decisions
establish clear job-related criteria for selection prior to establishing the selection procedure and tools to be used. It may also be
recommended that a model of fairness be established by means of which selection procedures may be planned and carried out.
The essence of this is that the industrial psychologist or human resource practitioner must establish the job relatedness of all the
criteria that are used in any personnel decision. The only way that this can be achieved is by conducting proper and thorough job
analysis to establish the required KSAOs for the job in question. In their Guidelines for Best Practice in Occupational Assessment in
South Africa (2001), SHL recommends that the use of assessment instruments be continually monitored to ensure continued
appropriateness and effectiveness. Monitoring by ethnic and gender groupings is required to identify any changes in the relative scores
of the different groups. This will make it possible to identify any unfairness or adverse impact. Where substantial adverse impact was
found, SHL (2001:16) recommends that the following issues be considered:
• Has the job changed since the original job analysis?
• Are the skills measured relevant to the job?
• Is the way the skills are measured appropriate to all candidates?
• Would customisation of the context in which the skills are measured help?
• Are the assessment instruments at the right level for the job?
• Is there some other instrument which would provide the same information without adverse impact?
• Can the selection rule be designed to minimise adverse impact despite score differences?
• Can the job or training be designed so that the entry-level required for the relevant skill is lower?
• Can disadvantaged groups be trained to offset differences in test taking behaviour or to enhance relevant skills?
CHAPTER SUMMARY
This chapter reviewed recruitment and selection from a personnel psychology perspective. The systematic attraction, selection and retention of competent, experienced people with scarce and critical skills have become a core element of competitive strategy, and an essential part of an organisation’s strategic capability for adapting to competition. Effective resourcing and staffing require scientific measures
that diagnose the quality of the decisions of managers and applicants. The profession of personnel psychology contributes to the quality
of recruitment and selection decisions by developing assessment instruments and applying the principles and standards underlying
decision science in the recruitment, screening and selection of employees.
The psychometric model of decision-making emphasises the principles of reliability, validity, utility, and fairness, and legal
considerations in selection decision-making. Statistical models are valuable in determining the validity and utility of selection
predictions and provide evidence of the fairness of selection decisions. Feelings of unfairness can lead to negative applicant impressions
and actions, including lawsuits and grievances on the part of an employee or applicant, and can be costly to an organisation.
High-quality staffing decisions are based on combining a number of different pieces of information about candidates. Staffing
strategies vary in their comprehensiveness and in whether they are compensatory or non-compensatory. In making a staffing decision,
false positive and false negative errors may be committed. Both of these types of error can be costly for an organisation. It is valuable
to keep the concept of utility gain in mind when evaluating the effectiveness of a staffing strategy. Considering the increasing global
context of organisations, placement strategies have become important, particularly in large, multinational companies.
This chapter concludes Part II (Personnel Employment) of this textbook. In Part III we will explore those employment practices and
processes that deal with the retention of employees.
Review questions
You may wish to attempt the following as practice examination-style questions.
1. Explain the recruitment, screening and selection process by drawing a diagram to illustrate the various decisions that need to be
made in each step of the process.
2. Differentiate between internal and external sources of recruitment and evaluate the value of each in attracting applicants.
3. Discuss advertisements and e-recruiting as two methods of recruiting applicants.
4. Discuss the recruitment planning process and the techniques that can be utilised in ensuring the design of a high-quality, cost-effective recruitment strategy.
5. Why is it important to manage applicant reactions and perceptions in the screening and selection process? Draw up a checklist of
selection audit questions that can help to ensure that applicants’ perceptions will be managed in the screening and selection process.
6. How does job analysis help to improve the validity of a selection device?
7. Why is it important to consider the predictor-criterion relationship in selection decisions? Illustrate this relationship by drawing a
diagram.
8. Why can prediction errors be costly to an organisation? Explain the various types of prediction errors and draw a diagram to
illustrate your viewpoint.
9. Give an outline of the various decisions that an industrial psychologist must make in recruitment and selection. Draw a diagram to
illustrate the various decisions that need to be considered in the decision-making process.
10. Why is it important for decision-makers to consider aspects such as reliability, validity, fairness, and utility in selection decisions?
Explain each of these aspects and how they relate to the issue of adverse impact.
11. What is the difference between predictive validity and differential validity, and how do these aspects relate to the issue of adverse
impact?
12. Why is it important for industrial psychologists and decision-makers to focus on the quality of selection decisions? How can the
concept of utility gain be applied in enhancing the quality of decisions? Give an example to illustrate your viewpoint.
13. Staffing strategies vary in their comprehensiveness and in whether they are compensatory or non-compensatory. Explain this
statement, and discuss the advantages and disadvantages of the various methods for combining applicant information.
14. How do linear regression prediction models differ from multiple regression prediction models? What is the use of these models in
making selection decisions?
15. How does the ‘pure’ selection model differ from placement strategies? Why is placement more appropriate in a global, multinational
organisational context?
16. How can the concepts of cut-off scores, selection ratio, and base rate help to improve the quality of selection decisions? Give
examples to illustrate your viewpoint.
17. What are the guidelines for setting cut-off scores to ensure that they do not lead to unfair discrimination?
18. How do affirmative action and employment equity influence recruitment and selection practices in the South African organisational
context?
19. How can industrial psychologists or managers ensure fairness in their selection decisions?
Multiple-choice questions
1. An industrial psychologist decides she wants to use two predictors to select grocery store clerks. Ideally, these predictors will:
a. Have a high inter-correlation
b. Not be inter-correlated
c. Not be related to the criterion
d. Have situational specificity
2. Which of the following statements about realistic job previews (RJP) is FALSE?
a. RJP does not seem to be highly effective in reducing turnover
b. Some companies administer RJPs after a job offer has been accepted
c. RJPs seem to be most effective when presented in written fashion
d. RJPs seem to be best when presented early in the recruiting process
3. Which of the following statements is FALSE?
a. Invalid predictors are not useful in the selection process
b. The smaller the selection ratio, the greater a predictor’s usefulness
c. The largest gain in criterion performance will occur when the base rate is 1,0
d. As long as a predictor has less than perfect validity, selection errors will be made
4. The human resource director at the Automobile Company just hired ten individuals to be production managers. There were 75
applicants. Based on this information, you know that the:
a. Selection rate was 0,13
b. Selection ratio was 7,5
c. Base rate was 0,13
d. Base rate was 7,5
5. True negatives are:
a. People who fail the test and would have performed well
b. People who fail the test and would have performed poorly
c. People who pass the test and ended up performing well
d. People who pass the test and ended up performing poorly
6. If an employer can prove that a decision has been taken on the basis of …, the decision cannot be said to be unfair or discriminatory.
a. Political beliefs
b. Minimum requirements of the job
c. Ideal requirements for the job
d. Economic forecasts
7. Miriam applies for a position as a guide at a local museum, which attracts many international tourists. Which of the following requirements would not be regarded as a ground for unfair discrimination?
a. The successful candidate must be female
b. The successful candidate must be fluent in English
c. The successful candidate must be younger than 30 years of age
d. The successful candidate must be black
Reflection 6.8
Read carefully through the excerpt below and then answer the questions that follow.
Do South African recruitment agencies really get the net?
ICT skills – IT company on aggressive recruitment drive
21 April 2009
(<www.skillsportal.co.za>)
When was the last time we all had a good look at how efficient we are at hiring and retaining our management and specialist skills in
South Africa? We all complain about skills shortages, but have you reflected on what you, as the employer, may be doing to
exacerbate this situation? Are skills shortages the problem or are South African employers merely inefficient in their hiring practices?
Leonie Pentz, Director of AIMS International South Africa, an international headhunting organisation, says, ‘We have heard from
many South African executives (that they) are working internationally because of the ineffective hiring decisions they experienced
here at home. This includes previously disadvantaged managers. A senior executive disclosed to us in an email recently that he was
interviewed for a high level financial role in South Africa, and the company’s internal recruiter interviewing him was asking him
about basic accounting principles.
‘Clearly the person was inexperienced in recruiting at this level, as they should have been testing leadership competencies instead.
This is just one example of how ineffective hiring processes can influence an executive candidate’s decisions.’
This presents corporate South Africa with a challenge in understanding what employers are doing wrong when it comes to
attracting and retaining talent.
According to Deven Govender, a South African employment equity mining executive who is working abroad, and Güera Romo, a
Senior Revenue Assurance Consultant, there are substantial reasons for our failure to secure executive talent in corporate South
Africa.
These reasons include:
• Insufficient development prospects for senior management
• HR focusing on IQ versus EQ
• The focus of executives is not merely to secure a job, but to view their career as an extension of their daily lives
• No commitment from employers to succession planning
• The negative image of executive coaching in SA
• Employers are not willing to provide EE (employment equity) executives with sufficient challenges
• Rigid HR policies unsuitable to Generation X and Y executives
• HR/Recruiters do not understand complex roles
• Employers work ineffectively with their headhunters
• Decision-makers are not allocating sufficient time for executive candidates in the hiring process.
Considering these negative opinions from executive talent regarding hiring inefficiencies, is South Africa being given a second
chance?
Martine Schaffer, Managing Director of Homecoming Revolution, says, ‘There is definitely going to be an increase in inflow and a
decrease in outflow of talent, as it is not as easy to work internationally. This presents an opportunity to South Africa to retain our
talent, and it is very important how corporates present themselves.’
Pentz questioned whether South African employers are ready for an influx of homecoming talent. According to Pentz: ‘There are
many hiring inefficiencies that hinder the effectiveness of headhunters in securing and placing executive talent.’
These include:
• Lack of a thorough brief from the client or misunderstanding of the intricacies surrounding the role by HR
• On-boarding (coaching) process often non-existent, even when Management is aware of a ‘problem’ department
• Senior HR at bigger companies piling vital talent attraction functions onto inexperienced HR Officers’ shoulders
• Lack of influence and control from HR to Line Management
• Limited access to decision-makers
• Rigid vendor policies.
Pentz provided some insight into how employers can improve their hiring strategies:
‘Companies can change their focus from being process-driven to talent-driven. You could possibly have a senior HR person doing
executive interviews, but an HR officer may not be sufficiently skilled. There need to be trained HR people who do high-level
interviews for senior management effectively.
‘In the USA there is a term “the people turn up for you”, which means that they roll out the red carpet. They are on time, they offer
you coffee, they make sure the interview venue is ready; they do a presentation on the company and its vision, and ultimately are
properly prepared.
‘Often line managers are much better at this than HR. Choose staff who are true ambassadors and “live” the company values to
meet with headhunted talent.
‘There are appalling examples of headhunted candidates waiting half an hour for the interview, because the employer was not
prepared for them. This is really unacceptable at this level, in fact at all hiring levels.
‘The imperative in hiring senior talent is for decision-makers to be involved in the process. There should not be HR gatekeepers at
this level. However, HR is a vital part of a successful hire and should influence and control the process according to a best-practice
hiring strategy.
‘Decision-makers need to deal directly with the headhunter, making time to take their calls. Where we have interviewed a suitable
person and the decision-maker is hands-on and sufficiently interested to listen to our feedback and manages the process speedily, they
see positive results.
‘Organisations that have too many layers of red tape in screening these types of calls miss the opportunities.
‘Employers must understand that candidates headhunted for opportunities in Africa, for example, all have other offers in the
pipeline … they really do. Yours isn’t the only opportunity they are looking at.
‘Even in the current economic climate, in emerging markets there are key skills that are even more in demand now, for instance,
business development (strong sales and marketing people) to ensure that deals are made and relationships managed, and top finance
(risk and compliance) candidates.’
Pentz concluded:
‘The following factors should really be considered to make the hiring process more efficient:
• Timelines – do not keep high potential talent waiting unnecessarily
• Efficient management of the process by HR
• Sharing crucial information around the role with your headhunter
• Informative pre-interview process selling the opportunity to prospective candidates
• Good offer – there are companies that fail at this critical stage by offering the minimum
• Be open, honest and have integrity – your interviewee today can be your interviewer tomorrow.
‘Ultimately what impact is our complacency regarding these real issues having on our ability to retain and attract executive talent?
Perhaps now is the time that employers should re-evaluate how things are being done to make sure that their organisation shines
sufficiently brightly to attract and retain the brightest executive stars.’
Questions
1. Why are the quality and effectiveness of recruitment practices important when considering the ‘war for talent’ and scarce and
critical skills nationally and globally?
2. How does a company’s recruitment strategy influence applicants’ perceptions of the company? Why is it important to consider
applicants’ reactions and perceptions in recruitment and selection?
3. How can companies improve their recruitment and hiring processes?
4. Review the section in this chapter on factors that influence applicants’ reactions and perceptions. Which of those factors are also
mentioned in this excerpt?
5. What would be the role of job analysis and criterion development in improving recruitment and screening processes in South
African companies? Review chapter 4 to formulate your answer.
6. When considering the problem areas highlighted in the excerpt, what role can the profession of personnel psychology play in
helping to improve recruitment and hiring practices in South Africa? Do you think the scientific decision-making framework will
help to make a difference? Give reasons for your answer.
Reflection 6.9
A language nightmare
Thabile is the senior industrial psychologist at Gold for Ever (Pty) Ltd in Gauteng. Management has assigned her the job of
developing an assessment centre to identify leadership potential for the first-line supervisors, with the aim of ensuring that more
previously disadvantaged individuals are selected into the ranks of first-line supervisors.
Thabile started off with a lot of enthusiasm and did a thorough analysis of the main tasks involved in the job of first line supervisor.
She then used the Work Profiling System (discussed in chapter 4), a structured questionnaire published by SHL, to help her to identify
the most important behavioural competencies required by first line supervisors for success in their jobs.
So far so good! The problem, however, began when she started to develop simulations of the supervisor’s job. What Thabile found
was that the current supervisors would speak either an African language or Fanagalo to the mine workers, the great majority of whom
were immigrant workers from other parts of Africa. The current supervisors were mostly Afrikaans speaking, though they used
Afrikaans only during supervisors’ meetings, and some of them battled to express themselves in English. To complicate matters even
further, Thabile found that the top management of Gold for Ever (which was partly British-owned) was mostly English speaking. All
of the documentation and reports to management were written in English by the supervisors.
Thabile would like the assessment centre to be fair to all applicants for the position of first line supervisor, but at the same time to
make sure that the best candidates are selected.
Questions
1. What would be your advice to Thabile with regard to the language that should be used by participants in the assessment centre?
2. In which language should the assessment centre’s written material be presented to the participants?
3. Would it be possible for Thabile to make some degree of language proficiency a minimum requirement for attending the assessment
centre, and what should this be?
4. Thabile is considering using English as the language medium at the assessment centre, but this will have an adverse impact on
previously disadvantaged individuals. Will Thabile be able to justify this as fair discrimination? Motivate your answer.
(Based on Kriek, 2005:169)
CHAPTER 5
PSYCHOLOGICAL ASSESSMENT: PREDICTORS OF HUMAN BEHAVIOUR
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
ORIGINS AND HISTORY OF PSYCHOLOGICAL TESTING
• The experimental era
• The testing movement
THE DEVELOPMENT OF PSYCHOLOGICAL TESTING IN SOUTH AFRICA
THE QUALITY OF PREDICTORS: BASIC PSYCHOMETRIC REQUIREMENTS OF PSYCHOLOGICAL TESTS
• Reliability
• Test–retest reliability
• Alternate-form reliability
• Internal consistency reliability
• Inter-rater reliability
• Reliability and measurement error
• Validity
• Content validity
• Construct validity
• Criterion-related validity
• Validity coefficient and the standard error of estimate
• Validity generalisation and meta-analysis
THE QUALITY OF PSYCHOLOGICAL ASSESSMENT: ETHICAL AND PROFESSIONAL PRACTICE
• Classification of tests
• Training and registration of test professionals
• Codes of conduct: standards for conducting assessment practices
TYPES OF PREDICTORS
• Cognitive assessment
• The structural approach
• The information-processing approach
• The developmental approach
• Personality inventories
• Projective techniques
• Structured personality assessment
• Measures of specific personality constructs
• Emotional intelligence
• Integrity
• Test-taker attitudes and response bias
• Behavioural assessment
• Work samples
• Situational judgement tests
• Assessment centres
• Interviews
• Biodata
RECENT DEVELOPMENTS: ONLINE TESTING
OVERVIEW AND EVALUATION OF PREDICTORS
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
This chapter deals with the concept of individual differences as predictors of occupational performance and the predictor measures used to
predict people’s ability to perform a job according to required standards.
A predictor is any variable used to predict or forecast another variable. Industrial psychologists are interested in predictors of human
behaviour on the job. Before appointing a new recruit we would like to know whether this person will be able to function effectively in the
organisation and whether he or she will have the capacity to perform in a specific job role. We do not use astrology or palmistry as
fortune-tellers do, but the practice of industrial psychology utilises various measuring devices as potential predictors of job performance
criteria (Muchinsky, 2006). In the context of psychology, we refer to these measuring devices as psychological assessment tools and
techniques, some of which are also known as psychometric instruments. In the context of this chapter we also refer to these measuring devices
as predictors or predictor measures. This chapter aims to explore various predictor constructs and the predictor measures that we use to assess
these constructs.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Understand the nature and value of different predictor constructs.
2. Discuss key issues in the evolution of psychological testing to date.
3. Describe and distinguish psychometric requirements for the development of quality predictor measures.
4. Report on various aspects of ethical and professional practice that are required to ensure fair and ethical psychological assessment,
including test classification, training and registration of assessors and ethical standards.
5. Distinguish between different approaches to cognitive and personality assessment and the cognitive and personality predictor
measures that evolved from these approaches.
6. Discuss work samples, situational tests, assessment centres and interviews as examples of behavioural predictor measures.
7. Critically analyse the value of online testing.
CHAPTER INTRODUCTION
Predictor constructs: what do we assess?
In psychological assessment, we assess different psychological characteristics or attributes of people. These psychological attributes
can be categorised as personality attributes, cognitive attributes and behavioural attributes. We can also call them predictor constructs,
as they present the various aspects of human behaviour that are assessed through different measuring devices in order to predict a
person’s future behaviour on the job. Examples of personality attributes as predictor constructs include, among others, motivation, values, integrity and personality traits. Cognitive attributes that are often assessed include intelligence, learning potential, ability, aptitude,
and general reasoning. When we assess behavioural attributes, we look at skills that manifest as observable behaviour during an
assessment situation.
The purpose of predictors: why do we assess?
When we assess people, we assume that the person’s particular personality characteristics, cognitive ability and skills will tell us
something about how the person is most likely to behave in future. Having an idea of a person’s likely behavioural patterns therefore
enables one to determine whether the person will be able to function effectively within a particular work context. The assessment
results provide us with information about the person that we can relate to the requirements of the specific work context, and in that manner they guide our employment decisions about people. The purpose of predictors is therefore to provide a sample of a person’s
behaviour at a particular point in time based on which predictions about the person’s likely behavioural patterns in general can be
made. Remember that a test can never represent all the possible behaviours in any one person. It provides only a sample of that
person’s behaviour and generalisations about the person’s behaviour are made based on that. These generalisations or predictions
about future behaviour guide decision-making in terms of whether or not to select or promote the person.
When we apply psychological assessment in the work context, the purpose of the assessment determines the level on which
behaviour is assessed. In I/O Psychology, behaviour is always assessed on an individual, group or organisational level. If the purpose
of the assessment is selection, we would assess on an individual level. If it is team-building, we would conduct a group assessment,
and when we want to do an organisational culture audit, our assessment will be on the organisational level. As such, the various
predictor constructs of personality, cognition and behaviour can be assessed individually, or within a group or organisational context.
Predictors of human behaviour are not recent phenomena. There is a long history behind psychological assessment, during which
time predictor measures evolved in nature, scope and application – and they continue to evolve to meet the needs and requirements of
a changing workforce and the changing world of work.
ORIGINS AND HISTORY OF PSYCHOLOGICAL TESTING
The origin of psychological testing used as a predictor of future behaviour can be traced back to very ancient times. Rudimentary forms
of assessment have been noted to exist in China and Greece about 4 000 years ago (Anastasi & Urbina, 1997; Gregory, 2007) as well as
during the Middle Ages (Anastasi & Urbina, 1997) and in biblical times (Foxcroft & Roodt, 2005). Ever since, various schools of
thought have attempted to devise theories and related assessment tools to predict some or other form of human behaviour in psychology.
Assessment practices in astrology, physiognomy, humorology, palmistry and graphology evolved, although with little scientific
evidence of the predictive value thereof in the human sciences.
The experimental era
Around the mid-1800s, scientists in Germany and Great Britain focused on the experimental investigation of generalised descriptions of human behaviour (Anastasi & Urbina, 1997). Reflecting the physiological background of these scientists, the studies investigated behavioural responses to visual, auditory and other sensory stimuli, focusing predominantly on respondents’
reaction time. The experimental psychologists during this era laid the foundation of viewing assessment of human behaviour in the same
light as an experiment, focusing on objective observation methods and measurable quantities of behaviour, as well as requiring rigorous
control to ensure the least possible error in behavioural observations (Foxcroft & Roodt, 2005). The major contribution of the
experimental psychologists to psychological testing was their emphasis on the scientific method of observing human behaviour, and on
ensuring standardised testing, which requires rigorous control of testing conditions under which behavioural observations are made.
The testing movement
Despite its contributions to psychological assessment, the experimental era of psychology was somewhat inhumane in its philosophy and
was flawed in its very narrow and simplistic view of human behaviour. Sir Francis Galton (1822–1911) pioneered the newly-established
experimental psychology in the late nineteenth century and is regarded by many as primarily responsible for officially launching the
testing movement (Anastasi & Urbina, 1997).
He was an English biologist and particularly interested in human heredity. During the course of his research, he started realising the
need to measure the characteristics of related and unrelated people, hoping to discover the degree of similarity between people related to
one another. He started accumulating the first large, systematic body of data on individual differences with regard to keenness of vision
and hearing, muscular strength, reaction time, and other simple sensory motor functions. Some see him as the first scientist to devise a
systematic way of measuring human behaviour (Muchinsky, 2006), although others regard his attempts to gauge the human intellect as
fruitless (Gregory, 2007). Still, Galton’s studies demonstrated that objective tests could be developed and could produce meaningful
results if administered through standardised procedures. He also introduced the use of rating-scale and questionnaire methods in testing, and his work on data analysis provided an impetus for the application of statistical procedures to the analysis of test data.
James Cattell (1860–1944) studied with Galton for some time, but returned to his home country, America, where he was active in
spreading experimental psychology and the testing movement (Anastasi & Urbina, 1997). He developed a test of intelligence based on
sensory discrimination and reaction time, and invented the term mental test. In the mental tests that Cattell proposed, it was clear that he
shared Galton’s view that intellectual functioning could be measured through tests of sensory discrimination and reaction time. Another
scientist, Wissler, found, however, that there was little correspondence between the results of these tests and college students’ academic
grades (Anastasi & Urbina, 1997). Ebbinghaus, a German psychologist, devised mathematics, memory span and sentence completion
tests. The most complex of the three, sentence completion, was the only test in which the results showed a clear relation with children’s
scholastic achievement. Despite the simplicity of Wissler’s analyses (the value of which has been subject to criticism over the years), the importance of determining the predictive value of a test scientifically had been established.
According to Muchinsky (2006), the biggest advances in the early years of testing were made by Alfred Binet, a French
psychologist. In collaboration with Simon, Binet studied procedures for educating retarded children, and they devised a test of
intelligence, the Binet-Simon scale, to assess mental retardation. The scale included aspects of judgement, comprehension and
reasoning, which Binet regarded as essential aspects of intelligence, and provided an indication of a child’s mental age (mental level) in
comparison with normal children of the same age. In 1916 Terman and his colleagues at Stanford University in America developed
the Stanford-Binet test, which was a much more extensive and psychometrically-refined version of Binet’s earlier work. Through this
test, Terman introduced the concept of IQ (intelligence quotient), which is an indication of the ratio between a person’s mental age and
chronological age.
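As a brief aside (the formula itself is not spelled out in the text above), the classical ratio IQ that Terman introduced is conventionally expressed as mental age divided by chronological age, multiplied by 100:

\[
IQ = \frac{\text{mental age}}{\text{chronological age}} \times 100
\]

A ten-year-old child with a mental age of twelve, for example, would obtain a ratio IQ of 120.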
One of the criticisms against the Binet-Simon scale was the fact that it was available only in English and French and therefore relied
heavily on the verbal skills of the test-taker. This sparked the development of a number of non-verbal measures (Foxcroft & Roodt,
2005). World War I required psychologists to classify recruits according to their general intellectual level and this sparked the need for
group testing based mostly on a multiple-choice format. The tests that were developed during World War I by the army psychologists
became known as the Army Alpha and Army Beta tests, the latter being a non-verbal test (Anastasi & Urbina, 1997). During this time,
psychological testing also expanded to include tests of aptitude, ability, interest, and personality. Woodworth, for example, devised a
personality-type questionnaire, the Personal Data Sheet, which was also probably the first self-report personality inventory, assessing, among other things, emotional adjustment.
Soon after the war, the army tests were released for civilian use in, for example, schools, tertiary institutions, and prisons. Soon the
general public became very conscious of the IQ concept. Although the application of these tests boomed, they were used indiscriminately, and their technical improvement was neglected. This resulted in discontent with test results and growing scepticism of and hostility towards testing in general. Criticism of test use grew after World War I, pointing to the fact that cognitive tests relied too
heavily on verbal and language skills and were inappropriate for immigrants or illiterates. To address this criticism, Wechsler developed
a non-verbal performance test, assessing a variety of scores related to cognition, and not just one (Gregory, 2007).
According to Foxcroft and Roodt (2005), World War II reaffirmed confidence in psychological testing, based on the emergence of
new test development technologies such as factor analysis. However, the interest in testing again declined between the 1950s and the
1970s owing to stringent and subjective control of testing practice by the American Psychological Association. During the 1980s and
1990s, the influence of multiculturalism on psychological assessment led to attempts to develop culture-free tests. However, it soon
became clear that this was an impossible task, and researchers focused their attention on designing tests that included only behaviour
common to all cultures. Again, this constituted an impractical task, and led to cross-cultural test adaptation in test development practice.
To address issues of bias and fairness in test use, standards for the professional practice of testing have been devised by Bartram in the
UK. Linked to this, guidelines for competency-based training of assessment practitioners were developed.
In conclusion, globalisation and the information technology era have brought with them advances in testing practices with regard to
computerised testing as well as testing via the Internet (Foxcroft & Roodt, 2005). Currently, the ethical use of such testing practices
constitutes the main focus of test developers and users all over the world.
THE DEVELOPMENT OF PSYCHOLOGICAL TESTING IN SOUTH
AFRICA
The development of psychological testing in South Africa closely resembles the development of political ideologies and practices in the
country. Inevitably, the use of psychological tests was initially strongly impacted on by our colonial heritage, linked to Britain, and later
by the apartheid policies governing the country. Early psychological measures used in South Africa were imported from Europe and
America. For example, the Stanford-Binet Scale was adapted for South African use by Fick and became known as the Fick Scale. Some
tests, such as the South African Group Test, were developed in the country; however, this test was standardised for whites only
(Claassen, 1997).
Foxcroft and Roodt (2005) discuss how measures of intelligence were used in particular to show the superiority of one race group
above the other. In this regard, Fick’s research on the cognitive abilities of black children concluded that their inferior performance in
comparison to white children was due to innate differences. Biesheuvel in particular disputed and criticised Fick’s results, and
highlighted the inappropriateness of western-type intelligence tests for blacks. He highlighted the moderating effect of language, culture,
malnutrition, and education on test results. After World War II, and in response to the need to identify the occupational suitability of
poorly-educated blacks, Biesheuvel devised the General Ability Battery (GAB). The GAB included a practice session which familiarised
test-takers with the test concepts prior to commencing the test (Claassen, 1997). Apartheid meant job reservation for whites, and because
they did not compete against blacks for the same positions, the occupational suitability of blacks was assessed by different measures
from those used for whites. Tests were also still imported, such as the Otis Mental Ability Test. Despite the fact that the Otis, being an
American test, had only American norms, it was widely used to assess white applicants’ general cognitive ability.
According to Claassen (1997), many psychological tests were developed for use in industry between 1960 and 1984 by the National
Institute for Personnel Research (NIPR). The NIPR was later incorporated into the Human Sciences Research Council (HSRC). Separate
measures were developed for different racial groups because they were not competing for the same resources, and although some
measures were developed for blacks, coloureds and Asians, many more were developed for whites (Foxcroft & Roodt, 2005).
Differential testing was questioned when the socio-political situation started to change, discriminatory laws were done away with,
and people from different race groups could apply for the same positions. To address this issue, the HSRC focused on translating
existing tests and on developing different norms based on race and gender, and tests such as the General Scholastic Aptitude Test saw
the light of day. To control the moderating effect of language and education, non-verbal cognitive tests such as the APIL-B (a learning
ability battery) were developed. Some imported tests, and locally-developed tests which were standardised only by being administered to
whites, continued to be used, however, and test results were merely interpreted ‘with caution’ (see Foxcroft & Roodt, 2005). Owen
initiated the first study on test bias in 1986 and found major score differences between blacks and whites, confirming the misuse and
discriminatory testing practice in South Africa up to that point. Dynamic testing, in which test-takers complete a pre-test, receive a training intervention, and then complete a post-training test, added value in terms of addressing possible moderating factors on test results. The TRAM-series of cognitive tests is a good example of such a pre-test training–post-test assessment format that
was developed by Terry Taylor.
According to Foxcroft & Roodt (2005), the strongest stance against the improper use of psychological testing has come from
industry. Since democratisation in 1994, test practices in South Africa have been regulated through legislation. Two specific streams of
legislation control the use of psychological tests in South Africa, one being the formal Acts of Parliament dealing with individual rights
(the Constitution of the Republic of South Africa, 1996, the Labour Relations Act 66 of 1995 and the Employment Equity Act 55 of 1998, all available at <www.parliament.gov.za/acts/index.asp>), and the other being the Health Professions Act 56 of 1974 (also available at <www.parliament.gov.za/acts/index.asp> and at <www.hpcsa.co.za>), which deals with the nature and scope of the psychology profession. In respect of psychological assessment, the Constitution and the Labour Relations Act prohibit unfair
discrimination on grounds such as race, gender, marital status, ethnic origin, religion, age, and disability. The Employment Equity Act
(EEA) clarifies that it is not unfair discrimination to distinguish between people for the purposes of affirmative action or on the basis of
requirements inherent to the job.
Reflection 5.1
Questions
1. In the organisation you or a relative or friend work for, for what purposes are psychological assessments used? If psychological assessment is not applied in the organisation, where do you think it could be employed to add value?
2. When we assess a person, what different aspects of that person would we like to gather information about in order to make
predictions about his or her future behaviour? In industrial and organisational psychology we refer to these different aspects of
human behaviour as _________ _________.
3. Which of the following strategies was employed in the South African context to curb the misuse of predictors and to control factors
that had a potential moderating effect on test results?
Translation of English tests
Adaptation of imported tests
Development of different norms for the same test
Development of different tests for different race groups
Development of non-verbal tests
Application of interactive assessment principles in test development
Legalisation of psychometric qualities
4. What are the three requirements stipulated in the EEA to which psychological assessment measures should adhere?
Section 8 of the EEA deals specifically with psychological assessment and prohibits psychological testing and similar assessments,
unless the measure has been scientifically proved to be valid and reliable, is not biased against any employee or group, and can be fairly
applied to all employees. Therefore, to ensure that psychological tests are no longer misused in any way, basic psychometric
requirements are now enshrined in law (Mauer, 2000).
As you have by now realised, psychological assessment can have an immense impact on a person’s life – in fact it can alter the
direction of one’s life greatly. Psychological assessment results may impact career choice, appointments, promotions, and benefits. They
may even impact the way in which a person regards him- or herself. Because we can change people’s lives and impact on their future
and their general happiness, we ought to use quality predictors fairly and with great sensitivity. To ensure fair and unbiased assessment
practice, the nature of the assessment tools we use needs to be of a high quality, and we need to use these tools in a manner that
demonstrates professional and ethical assessment practices. The quality of the data that we obtain through an assessment is therefore
determined by the quality of the measuring device itself as well as by the manner in which it is used.
THE QUALITY OF PREDICTORS: BASIC PSYCHOMETRIC
REQUIREMENTS OF PSYCHOLOGICAL TESTS
If we are to make life-changing decisions based on the results a psychological test yields, we would like to ensure that a test of good quality has been used as the predictor. For a psychological test to be a good predictor requires the test to measure accurately and
consistently what it was designed to measure. In psychology, the quality of measuring devices is based on the psychometric
characteristics of the device, namely its reliability and its validity. One cannot make any significant or meaningful inferences based on
an unreliable and invalid predictor.
Reliability
Imagine that you weigh yourself on a scale in the morning, and the scale indicates that you weigh 75 kg. If you weigh yourself again
tomorrow, keeping in mind that you have not altered your lifestyle in any way recently, you could expect to weigh about 75 kg again.
Should the scale suddenly indicate that you weigh 95 kg on the second morning, you would definitely not regard your measuring device
as reliable, because the measured property (your weight) could not have changed so much in 24 hours.
Similarly, when we measure psychological attributes, we would like the psychological test (the measuring device or predictor) to
yield the same results at different times, if what is measured (the predictor construct) has not changed. The reliability of a psychological
test refers to the extent to which that measure will repeatedly yield similar results under similar conditions, at various points in time.
Reliability therefore refers to the consistency or stability of that psychological test. A test can yield consistent results over time only if
the test itself and the testing situation are standardised. Standardised testing implies that the conditions of the test situation should be
kept consistent (similar) for everyone being assessed as well as during different assessment situations. Test instructions, test items and
materials, as well as conditions external to the testing situation, should be controlled and consistency maintained.
However, very few physical or psychological measuring devices are always completely consistent. You may, for example, find that
if you step on a scale twice in succession, the scale will register 75,2 kg the first time and 75,8 kg the second time. As such, we always
work with some error component, and the concept of reliability is best viewed as a continuum ranging from minimal consistency of
measurement to near perfect repeatability of results. Physical measures such as measures of weight are usually more consistent than
psychological measures, such as measures of personality or cognitive ability.
Various statistical analyses can be employed to estimate the degree of reliability of a psychological test. Based on these, four major types of reliability can be distinguished, namely test–retest reliability, alternate-form (equivalent-form) reliability, internal consistency reliability and inter-rater reliability.
Test–retest reliability
The simplest and probably most obvious way to determine the reliability of a test is to administer the test to the same group of people at
two different points in time and then compare the scores. The reliability coefficient is the correlation obtained when the two sets of scores are compared. The reliability coefficient is also called the coefficient of stability, because it provides an indication of the
stability of the test over time. A strong correlation would yield a high correlation coefficient and would mean that the scores obtained
during the first administration of the test are very similar to the scores obtained from its second administration. A high correlation
coefficient would then mean that the test can be regarded as reliable. The question remains, how high should the correlation coefficient
be for a test to be regarded as reliable?
According to Muchinsky (2006), a test can never be too reliable, and therefore the higher the correlation coefficient, the better. But
as we have already established, a test is never one hundred per cent reliable. Muchinsky indicates that reliability coefficients of about
0,70 are professionally acceptable, but acknowledges that some frequently-used tests have coefficients of around only 0,50. Test–retest
reliability is prone to a high margin of measurement error. Transfer effects, such as experience, practice and memory may, for example,
impact on the consistency of the scores obtained in a test. The time interval that lapses between the administration of the first and second
tests should also be taken into account – the shorter the time interval, the higher the reliability coefficient will be. Test–retest reliability
is also a function of the purpose of the test – a test used to predict individual brain impairment should have a much higher reliability
coefficient than an attitudinal questionnaire measuring the attitudes of a group of people.
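To make the computation concrete, the short sketch below (an illustration added here, not part of the original text) shows how a test–retest reliability coefficient could be estimated in Python; the scores for the two administrations are invented for the example.

```python
# Illustration: test-retest reliability estimated as the Pearson correlation
# between two administrations of the same test to the same group of people.
from scipy.stats import pearsonr

# Hypothetical scores for ten test-takers, tested twice a few weeks apart
first_administration = [52, 47, 61, 55, 40, 66, 58, 49, 71, 63]
second_administration = [50, 45, 64, 57, 42, 63, 60, 47, 70, 65]

stability, _ = pearsonr(first_administration, second_administration)
print(f"Test-retest reliability (coefficient of stability): {stability:.2f}")
```

Following the guideline quoted above from Muchinsky (2006), a coefficient of about 0,70 or higher would usually be regarded as professionally acceptable.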
Alternate-form reliability
Alternate-form reliability is also called equivalent-form reliability. In this case, two similar but alternative measures that measure the
same construct are presented to the same group of people on two different occasions. The coefficient of equivalence would be high if the scores obtained from the first measure correlated with those of the alternative measure. The tests would then be equivalent and reliable
measures of the same construct. If the coefficient is low, the tests are not reliable. This type of reliability testing is not very popular,
because most tests do not have an equivalent form and it is costly and time consuming for test developers to design two tests measuring
the same construct. However, according to Muchinsky (2006), equivalent forms of tests are sometimes available for intelligence and
achievement tests.
Internal consistency reliability
The coefficient of internal consistency provides an indication of the degree of homogeneity of the items within a particular measure.
Internal consistency is usually determined either through the ‘split-half’ reliability method or by calculating a Cronbach’s alpha or Kuder-Richardson 20 (KR20) coefficient. Split-half reliability is obtained by administering a test to a group of people and then, during scoring, splitting the items in the test into two equivalent halves. Each person therefore obtains two scores for the test, and the correlation coefficient is calculated by comparing each individual’s two scores. If the test is internally consistent and therefore reliable, the correlation between the two sets of scores should be high, indicating a high degree of similarity among the items within the test. The principle underlying the calculation of a Cronbach alpha coefficient is similar to that underlying the KR20. Here, every item of a test is correlated with every other item on the same test, resulting in a matrix of inter-item correlations, the average of which is related to the homogeneity of the test (Gregory, 2007; Muchinsky, 2006). Figure 5.1 below represents an inter-item correlation matrix of the kind that is summarised by coefficient alpha.
Figure 5.1 Inter-item correlation matrix
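As an added illustration (the data and the direct computation are not from the textbook), coefficient alpha can be calculated from a person-by-item score matrix using the standard formula α = k/(k − 1) × (1 − Σ item variances / variance of total scores):

```python
# Illustration: Cronbach's alpha computed from a hypothetical person-by-item matrix.
import numpy as np

# Rows = test-takers, columns = items of a made-up five-item scale
scores = np.array([
    [4, 5, 4, 3, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 5],
])

k = scores.shape[1]                               # number of items
item_variances = scores.var(axis=0, ddof=1)       # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```

A high alpha indicates that the items are homogeneous, in line with the description of internal consistency above.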
Inter-rater reliability
Assessment is in most cases exposed to the subjectivity of the assessor, whether the influence of the assessor on the test-taker or the test
situation is conscious or unconscious. The reliability coefficients discussed up to this point are useful in tests that are based on
highly-standardised test administration and test scoring. With some psychological measures, it is very difficult to control the
standardised procedure of the test, for example, with projective techniques and interviews with open-ended questions. Here, rater
variance in scores is a potential major source of measurement error, negatively affecting the reliability of the measure. As such, it is
important to try to minimise the effect of rater subjectivity during test administration. A way of assessing this kind of reliability is to ascertain the degree of agreement among the assessments of two or more raters. Having all the test-takers’ test protocols scored by two different assessors, and correlating the scores given by one assessor with those of the other for each test-taker, gives an indication of inter-rater reliability.
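As a minimal sketch of the procedure just described (the scores are invented for the example), the scores assigned by two assessors to the same set of test protocols can simply be correlated:

```python
# Illustration: inter-rater reliability as the correlation between the scores
# two assessors assign to the same test-takers' protocols.
from scipy.stats import pearsonr

rater_a = [7, 5, 8, 6, 4, 9, 7, 5]   # hypothetical protocol scores from assessor A
rater_b = [6, 5, 8, 7, 4, 8, 6, 5]   # scores for the same protocols from assessor B

agreement, _ = pearsonr(rater_a, rater_b)
print(f"Inter-rater reliability: {agreement:.2f}")
```

For ratings that are categorical rather than numerical, an agreement index such as Cohen’s kappa is often used instead of a correlation coefficient.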
Reflection 5.2
Factors that impact on the reliability of a predictor
Imagine you have applied for a new position as teller at AB&B Bank Ltd. The requirements of the position have been described in the
job advertisement as:
• Having the ability to do basic to moderately complex calculations accurately and under time pressure
• Effectively communicating with people
• Identifying and resolving customer-related queries efficiently
• Planning and organising daily activities.
You have been instructed to attend a selection process during which you will undergo psychological assessment, including a cognitive
test, a personality test and a panel interview. On the day of your assessment, you get out of bed with a terrible cold, having slept very
little through the night, partly because of your blocked nose and partly because you have been worrying about your terminally ill
friend in hospital.
Upon arrival at the assessment, you find that you are 30 minutes late as a result of hectic traffic. Still, the assessor allows you to
commence the psychological assessment. Next door to the selection venue, offices are being renovated and there is a terrible noise
every now and then. You struggle to concentrate. To make matters worse, the lighting in the room is poor, and the dust and summer
heat outside make it incredibly stuffy. There is no air-conditioning. You decide to do the best you can do, despite your bursting
headache and poor visibility of written instructions on the badly-photocopied tests. The test administrator is clearly not very
experienced as she hesitates while giving instructions and repeats instructions in different ways, more than once. Luckily, with one of
the cognitive tests, you recognise the questions, as you did the same test for another position about a week ago.
Questions
1. In the scenario above, what factors within you and within the testing situation may have an impact on your test results? Which of
these factors does the test administrator have control over and which not?
2. What recommendation(s) can you make to ensure that an assessment situation such as the above yields the most consistent and
reliable test results for each person being assessed?
Reliability and measurement error
Remember that we have said that no single psychological measure yields 100 per cent consistent or reliable results with repeated use,
and that there is always a degree of measurement error present. Knowledge of measurement error is important, because the measure
becomes less reliable as the measurement error increases. It is therefore important to be aware of the factors that may lead to
measurement error, so that we can try to minimise these during testing. We refer to the various factors that may impact on the
consistency of a test as the sources of measurement error.
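Classical test theory, which is not spelled out in this chapter but underlies the point above, expresses an observed score as the sum of a true score and an error component, with reliability defined as the proportion of observed-score variance that is true-score variance:

\[
X = T + E, \qquad \sigma^2_X = \sigma^2_T + \sigma^2_E, \qquad
r_{XX} = \frac{\sigma^2_T}{\sigma^2_X} = 1 - \frac{\sigma^2_E}{\sigma^2_X}
\]

The larger the error variance relative to the total observed variance, the lower the reliability coefficient, which is why minimising the sources of measurement error discussed below is so important.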
During test construction one should carefully select which items to include in the test, because the test is a sample and never the
totality of what the predictor construct entails. We can, for example, include only about 30 questions assessing a person’s general knowledge in a test, but these 30 questions cannot cover the total number of general knowledge questions that could be asked. Because
we select items to include in and exclude from a test, the items we select may not always be equally fair to all persons. There is therefore
always the potential for measurement error as a result of item selection.
The process of test administration is also prone to conditions that affect how consistently the test is administered from one test session to the next. Although one may strive to provide optimal and standardised test conditions to different people during different test
sessions, there may be circumstances over which one has no control, and these may add to the measurement error of the test.
Fluctuations in circumstances which one cannot always anticipate include, for example, room temperature, lighting, and noise.
Characteristics of the person completing the test may also fluctuate and impact on the reliability of the test results. An assessor does
not, for example, have control over the fluctuations in test-takers’ anxiety, motivation, physical health, emotional distress, and level of
fatigue (refer to Reflection activity 5.2).
Despite attempts to ensure standardised administration of tests, an assessor may also unconsciously impact on the test results
through, for example, body language, facial expressions, and style of presentation, which may differ from that of another assessor
administering the same test for the same purposes to a different group of people in the venue next door. Lastly, when test scoring entails
some degree of judgement during the scoring and interpretation process, it may affect the reliability of the measure. Therefore most
psychological tests have very well-defined criteria for answers to different questions.
The sources of measurement error that arise from item selection, test administration and test scoring are collectively referred to as
unsystematic measurement errors (Gregory, 2007). Systematic measurement error occurs when a test consistently measures a construct different from the psychological attribute it was intended to measure, every time that the test is used. In psychology it is very difficult to
measure attributes in isolation from one another, especially in the area of personality assessment. A test may be constructed with the
objective of assessing, for example, honesty as a personality trait, but it may inadvertently also tap into a person’s level of
conscientiousness every time the test is used. Following the steps in test construction is therefore extremely important to minimise the
effect of systematic measurement errors.
Validity
Test validity, the second psychometric property of a measure that is key to the usefulness of that measure, refers to its accuracy. Test
validity tells us what a test score means and denotes the degree of certainty that we are assessing what we think we are assessing. Where
reliability is an inherent characteristic of a measuring device, validity pertains to how appropriate the test is for assessing the predictor construct (what is measured), and to what extent true and meaningful estimates or inferences can be made based on the test results or
scores. According to Gregory (2007), a test score is useless unless one can draw inferences from it. As such, an elevated score on
emotional stability on the 16PF becomes meaningful only if the assessor can link appropriate and relevant behavioural manifestations to
it. The score in itself means nothing. It is useful only if the assessor can deduce, for example, that ‘[a]n elevated score on emotional
stability suggests that the test-taker will be able to withstand stressful work conditions in a calm and resilient way’. The emotional
stability scale on the 16PF will then be valid to the extent that the deduction made about the test-takers’ behaviour is appropriate,
meaningful and useful.
There are various ways in which test validity can be determined, but they all involve assessing the appropriateness of the
measure for drawing inferences from its results. Determining whether a test yields appropriate, meaningful and useful results requires
studies on the relationship between the test’s results and other independently-observed behaviour. If such a relationship can be
established statistically, it can be said that the test results provide a valid indication of the observed behaviour. Because we measure
abstract behavioural concepts in psychology, we can never determine with 100 per cent certainty the link between a test score and
manifested behaviour. Validity is a research-based judgement of how adequately a test assesses the predictor construct that it was
designed to measure (Gregory, 2007). Therefore we can at most say that test validity can range from weak to acceptable and from
acceptable to strong.
Traditionally there are three different ways of establishing the validity of a measure, and these have been referred to as the three
types of validity. Gregory (2007), however, cautions that one should not conclude that there are distinct types of validity, but rather that
one should accumulate various types of evidence to establish the overall validity of a measuring device. The three different ways in
which validity information is gathered will be discussed next and include content validity, criterion-related validity, and construct
validity.
Content validity
Content validity is a non-statistical type of validity that refers to whether the items contained in a measure are representative of the
particular behavioural or psychological aspect (predictor construct) that the test was designed to measure. Usually subject matter experts
are used to evaluate the items in a test in order to determine its content validity. Content validity is therefore a useful concept when the
subject matter experts know a lot about the predictor construct underlying the test. It is more difficult to determine if the test measures
an abstract concept, and becomes near impossible when the predictor construct is not well defined. As such, it is expected that measures
designed to assess abstract behavioural constructs such as personality should be based on sound theory and definition of the relevant
construct.
Content validity is commonly applied to achievement tests, which are designed to measure how well a person has mastered a
specific skill or area of study. In designing an English spelling test for Grade 12 learners, the test designer should compile a list of all the
possible English vocabulary that would reflect Grade 12 English spelling proficiency. The test would then ultimately be regarded as
highly valid if the items in the test were selected randomly from words of varying difficulty from the vocabulary list.
Content validity has specific significance in industrial and organisational psychology. There is an obvious relationship between the
process of job analysis (as also discussed in chapter 3) and the concept of content validity. Employers typically require assessment
measures that assess the knowledge, skills and abilities needed to perform well in a particular job. Industrial psychologists are therefore
concerned with designing measures that will elicit job-relevant behaviour. As such, content validity of these job-related measures is
represented by the degree to which the content of the job is reflected in the content of the test. This can be achieved only if a thorough
job analysis has been conducted to describe all the relevant knowledge, skills and abilities necessary for the position and if the job
analysis information has been used in designing the test.
A concept related to content validity is that of face validity. However, face validity is not based on subject matter experts’ opinion of
the representativeness of the test items, but on people’s general impression of how appropriate the items appear to be in relation to the
behavioural domain that is assessed. Face validity is therefore not really a psychometric term and is much less useful in terms of adding
to the validity of a test than its content validity. It is possible for a test to look valid (have face validity), but it may not necessarily mean
it has content validity. Think, for example, of an arithmetic test for 2nd graders containing words such as ‘multiply’ and ‘divide’. Such items may seem face valid, but a closer exploration of the content validity will reveal that these concepts are not appropriate to 2nd grade-level mathematical ability. It is also possible that a measure does not appear valid but in fact has content validity.
Construct validity
Construct validity refers to the extent to which the assessment device actually measures the construct it was designed to measure. In
psychology a construct is a theoretical, intangible trait or characteristic in which individuals differ from one another. Leadership,
intelligence, depression, numerical ability, and integrity are all examples of predictor constructs. We construct measures to assess the
nature of and the extent to which a particular construct manifests in an individual. Therefore we must be very sure that the assessment
device that we use to assess, for example, a person’s integrity, does not assess that person’s verbal ability, but in actual fact assesses no
other construct than integrity.
Gregory (2007) emphasises that construct validity is the most difficult type of validity to ascertain, the reason being that constructs
are explained by abstract and wide-ranging theoretical components and ‘no criterion or universe of content is entirely adequate to define’
the construct being measured (Cronbach & Meehl, in Gregory, 2007:132). For this reason, a wide variety of research techniques has
been devised to ascertain the construct validity of an assessment measure.
Correlation with other tests
Correlation with other tests measuring the same construct indicates that a predictor measure assesses more or less the same construct as
the other test. But the correlation should be only moderately high, otherwise we are just duplicating the other test.
Test homogeneity
If a test measures a single construct such as integrity, one would expect that the items within the test would all correlate with the overall
score on the test. Then the test items would be internally consistent or homogeneous. Although test homogeneity is an important aspect
during test construction, on its own it does not provide sufficient evidence of the construct validity of a measure.
Factor analysis
Factor analysis relates to test homogeneity and is a stronger way of providing evidence that test items in effect relate to the construct
being measured. Factor analysis aims to determine a smaller set of variables or dimensions around which the items in a test would
cluster. With factor analysis, the factors measured by the test are isolated, and the underlying structure of the test is determined. These
factors are then used to describe the sub-scales of the test. Factor analysis on a test of integrity might reveal, for example, that the test measures
sub-dimensions of integrity such as honesty, reliability, loyalty, and conscientiousness. All these sub-dimensions or factors combine to
describe a person’s integrity when measured with that particular test.
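As a rough illustration of this logic (not a substitute for a proper psychometric analysis), the sketch below simulates item scores driven by two latent factors and then recovers the loading pattern with scikit-learn's FactorAnalysis; in practice a dedicated package with factor rotation would normally be used. All names and numbers are invented.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 300 respondents answering 8 items that depend on two latent
# factors (say, 'honesty' and 'conscientiousness' in an integrity test).
latent = rng.normal(size=(300, 2))
true_loadings = np.array([
    [0.8, 0.0], [0.7, 0.1], [0.9, 0.0], [0.6, 0.2],   # items 1-4 load on factor 1
    [0.1, 0.8], [0.0, 0.7], [0.2, 0.9], [0.0, 0.6],   # items 5-8 load on factor 2
])
items = latent @ true_loadings.T + rng.normal(scale=0.4, size=(300, 8))

fa = FactorAnalysis(n_components=2).fit(items)

# Estimated loadings show which items cluster together on which factor,
# that is, the sub-scales the test appears to measure.
print(np.round(fa.components_.T, 2))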
Convergent and discriminant validity
Another way to ascertain the correlation between what is actually measured by a test and the theoretical construct it purports to measure
is to compare scores on our test with known and reliable measures of the same construct. For example, if we have a newly-developed
test of integrity, we would like to know whether this test correlates with another known test of integrity. If our new test is a valid
assessment of integrity, then the scores on our test should converge with (correlate with, or be similar to) the scores on this other known measure of integrity. In technical terms, there should be a high correlation between the scores from our new test of integrity and
the scores derived from existing measures of integrity. These correlation coefficients are referred to as convergent validity coefficients
because they reflect the degree to which these scores converge (or come together) in assessing a common concept (Muchinsky, 2006).
In a similar vein, we do not want our integrity test to produce scores that are related to the scores of a test which measures a totally
different construct, such as, for example, an anxiety test. Scores on the integrity test should therefore diverge (or be separate) from the
constructs assessed in a measure of anxiety, because anxiety is a construct unrelated to integrity. Low correlations should therefore occur
between integrity test scores and anxiety test scores. These correlation coefficients are referred to as divergent validity coefficients
because they reflect the degree to which these scores diverge from each other in assessing unrelated constructs (Muchinsky, 2006).
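A minimal sketch of how these two coefficients might be computed for hypothetical sets of scores, using NumPy's correlation function; the data are simulated so that the expected pattern (high convergent, low divergent correlations) appears.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scores for 50 test-takers
new_integrity = rng.normal(50, 10, 50)                  # our newly developed integrity test
known_integrity = new_integrity + rng.normal(0, 5, 50)  # an established integrity measure
anxiety = rng.normal(50, 10, 50)                        # an unrelated construct

convergent = np.corrcoef(new_integrity, known_integrity)[0, 1]
divergent = np.corrcoef(new_integrity, anxiety)[0, 1]

print(f"Convergent validity coefficient (expected to be high): {convergent:.2f}")
print(f"Divergent validity coefficient (expected to be low):   {divergent:.2f}")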
Criterion-related validity
Initially we noted that validity allows us to make accurate inferences based on test results about a person’s future behaviour. If a test is
valid, we can with fair accuracy estimate that a person will behave in a particular way based on his or her test results. To be able to say
in this manner that a test is valid implies that the test scores accurately predict a criterion such as on-the-job behaviour or other performance-related behaviour. To determine criterion-related validity, a correlation coefficient is calculated between the predictor (the
test or assessment device or measure) and the criterion (behaviour). Two different types of criterion-related validity are distinguished,
based on the time interval between collecting data of the predictor and of the criterion: concurrent validity and predictive validity.
Concurrent validity
In concurrent validity, the data on the criterion are collected at approximately the same time as the data on the predictor. Concurrent
validity therefore pertains to how accurately the predictor estimates the current behaviour of a person.
Predictive validity
In predictive validity, the data on the predictor are collected first and the data on the criterion are collected at a later date. Predictive
validity therefore provides evidence of how accurately a predictor can predict the future behaviour of a person. Predictive validity is
implicit in the fact that in industrial and organisational psychology we make decisions based on predictor scores because we believe they
accurately predict a person’s future performance-related behaviour. As such, we will decide to select or promote a person based on
assessment results.
Validity coefficient and the standard error of estimate
Criterion or predictive validity denotes the correlation between predictor scores and criterion scores, and this calculated correlation is also called the validity coefficient. Although one would ideally want 100 per cent predictive accuracy from a predictor, this is
impossible. No test can be expected to be a perfect predictor, because there are always variables other than the predictor measure that
may impact on the outcome of a test. Variables that may impact on the test results may include characteristics of the assessor,
environmental influences, and circumstances of the person being assessed. The assessor may be too lenient or too strict, or make
judgements based only on a general impression of the person being assessed (also called the halo effect). The ventilation, room
temperature, and the level of noise during test administration, may impact on the test results, as may the level of motivation and health
status of the person being assessed. When extraneous variables such as these affect the predictor, we refer to criterion contamination. In reality we can expect that there will always be some degree of criterion contamination, although we would like to do everything we can to keep it to a minimum. The standard error of estimate (SEE) refers to the margin of error that can be expected in a predicted score, and it can be statistically calculated from the correlation coefficient and the standard deviation of the scores. The SEE therefore refers to prediction error, whereas the standard error of measurement (SEM) refers to the measurement
error caused by the unreliability of a test (see the discussion regarding reliability, above).
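As a sketch of the conventional textbook formulas (exact notation varies between sources), the SEE is commonly computed from the standard deviation of the criterion scores and the validity coefficient, while the SEM is computed from the test's standard deviation and its reliability coefficient. The numbers below are invented.

import math

def standard_error_of_estimate(sd_criterion: float, validity: float) -> float:
    # Prediction error: SEE = s_y * sqrt(1 - r_xy^2)
    return sd_criterion * math.sqrt(1 - validity ** 2)

def standard_error_of_measurement(sd_test: float, reliability: float) -> float:
    # Measurement error: SEM = s_x * sqrt(1 - r_xx)
    return sd_test * math.sqrt(1 - reliability)

# Hypothetical values: validity coefficient of 0.50 and reliability of 0.85
print(round(standard_error_of_estimate(sd_criterion=8.0, validity=0.50), 2))    # about 6.93
print(round(standard_error_of_measurement(sd_test=10.0, reliability=0.85), 2))  # about 3.87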
Reflection 5.3
Read the following scenario and answer the questions below.
A valid predictor?
In AB&B Bank, prospective employees have been assessed over the past years with, among others, a cognitive test. The test has been
administered particularly to applicants applying for a teller position.
Although the test was originally developed and used in the banking industry to assess the cognitive ability of foreign exchange advisors, feedback from appointed tellers indicates that the test is perceived as unfair because it does not resemble the basic day-to-day calculations of a teller. This feedback has left management very confused, because at the time of development, subject matter experts
were used to design test items, and test results seemed to indicate that those applicants who scored higher on this test also achieved
more accurate end-of-business-day account balances (as was evident in subsequent performance evaluations). Furthermore, factor-analytic studies revealed only one factor, which correlated fairly well with another well-known cognitive test (test ABC) involving speed and accuracy of calculations.
To assess the situation further, it was decided to administer both the ABC test and the bank’s test to all the new applicants at the
same time. Results again indicated that performance on the bank’s test correlated with that on the ABC test.
Questions
1. On the basis of this information about the cognitive test, how would you rate the validity of the test? Use the following table to
indicate a rating of poor or good for the different types of validities and provide a motivation for your rating in the last column.
2. Read the section on the standard error of estimate. Would you advise management to continue to use this cognitive test in the
selection of tellers or not, and why?
In conclusion, we can at most say that the higher the validity coefficient of a test, the more accurate the test is in predicting the
relevant criterion. Muchinsky (2006) uses the analogy of a light switch to explain validity. He compares validity to a light switch
shedding light on the construct we are measuring, but states that validity is evident in varying degrees and therefore more comparable to
a dimmer light switch than an on/off light switch. The question is how much light is needed, or how valid a test should be for us to
predict the criterion adequately. According to Gregory (2007), correlation coefficients rarely exceed 0,80, and even low to mid-range
correlation coefficients may be regarded as acceptable.
Validity generalisation and meta-analysis
Owing to the complexity of the validity concept and the difficulties in establishing the validity of a test, other methods have evolved in
the quest for validity. It is, for example, difficult to determine the predictive validity of a measure in a specific organisation because the samples are often too small to yield stable correlation coefficients. Anastasi and Urbina (1997) refer to research evidence
showing that verbal, numerical and reasoning aptitude measures can be regarded as valid predictors of performance across a wide
variety of occupations. This kind of research is based on the premise that validation studies need not be done in only one organisation
with one sample, but can be done across different organisations and even across industries or across countries, in that way gathering data
incrementally to prove validity. Incremental validity research implies that successful performance in a variety of occupations can
apparently be attributed to cognitive skills (Foxcroft & Roodt, 2005). Related to incremental validity studies is meta-analysis. Meta-analysis refers to a research method in which research literature is reviewed and previous findings on a similar construct are statistically integrated. Meta-analyses of correlational studies between predictor and criterion constructs are often used in industrial psychology to
ascertain the validity of a test across a broad spectrum of occupations.
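The simplest step in such a meta-analysis is a sample-size-weighted average of the validity coefficients reported by individual studies, as sketched below with invented figures; a full validity generalisation analysis (for example in the Hunter and Schmidt tradition) would additionally correct for sampling error, unreliability and range restriction.

# Hypothetical validation studies: (sample size, observed validity coefficient)
studies = [(60, 0.28), (120, 0.35), (45, 0.22), (200, 0.31), (80, 0.40)]

total_n = sum(n for n, _ in studies)
mean_r = sum(n * r for n, r in studies) / total_n

# Spread of the observed coefficients around the weighted mean
var_r = sum(n * (r - mean_r) ** 2 for n, r in studies) / total_n

print(f"Sample-size-weighted mean validity: {mean_r:.3f}")
print(f"Observed variance across studies:   {var_r:.4f}")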
As we noted before, the quality of the data we obtain from a predictor is a reflection of its psychometric characteristics as well as of
the manner in which we have administered and used the predictor. The practice of psychological assessment is governed by professional
standards and guidelines, which we will focus on in the next section.
THE QUALITY OF PSYCHOLOGICAL ASSESSMENT: ETHICAL AND
PROFESSIONAL PRACTICE
In our discussion on the history of psychological assessment, we have noted that psychological assessment can easily be misused. We
have also noted that the decisions that we make based on psychological test results may alter people’s lives to the extent that assessment
and assessment results may cause emotional or psychological trauma for the one being assessed. Responsible test use is therefore
non-negotiable. Yet responsible test use should be regulated in one form or another, because we cannot leave such a sensitive and ambiguous matter to the personal tastes and subjective judgements of assessors. Responsible test use is therefore defined and directed by written guidelines and codes of conduct, which are published by professional bodies such as the American Psychological Association (APA) in the USA, the British Psychological Society in the UK, and the Health Professions Council of South Africa (HPCSA).
Based on the International Test Commission’s International Guidelines on Test Use (version 2000), fair assessment practices revolve
around the following aspects:
• Assessment measures and assessment results should be applied and used in a fair, professional and ethical manner.
• The needs and rights of the people being assessed should be regarded with the utmost concern.
• The predictor used should closely match the purpose for which the assessment results will be used.
• Moderating factors that result from the social, political and cultural context of the assessment should be taken into account,
especially considering how these factors may impact on the assessment results.
By implication, ensuring fair and ethical assessment practice requires regulation of the type of tests used (classification of tests) as well
as the competence of the test administrator (training and registration of professionals). It also requires ethical standards in terms of the
rights of those being assessed (a code of conduct).
Classification of tests
To prevent psychological measures from falling into the hands of people who are not qualified to use and understand them, the APA
proposed guidelines for the classification of tests that are still relevant today. Classification ensures that the tests are accessible only to
appropriately qualified persons, and test distributors can accordingly ensure that their tests are sold only to people with the necessary
credentials. The classification of a test as a psychological test regulates matters such as how and by whom these instruments may be
used in training or in practice. Tests are usually classified based on the complexity of the predictor construct that the measure assesses,
the complexity of its administration, scoring and interpretation, and the extent to which a person could suffer emotional or psychological
trauma from its results. The APA proposed, for example, that tests should fall into three levels of complexity (differentiated as levels A,
B and C), and that each level should require different levels of expertise from the assessment practitioner (Gregory, 2007). The different
levels and the relevant required competence on each level are presented in Table 5.1.
The APA classification was initially followed in South Africa, but it was replaced by a new classification system in 1999. The
Psychometrics Committee was commissioned by the Professional Board for Psychology (which also falls under the ambit of the HPCSA) to provide, among other duties, a classification system for psychological assessments in South Africa. Prior to the institution
of the Psychometrics Committee in 1996, the Test Commission of the Republic of South Africa (TCRSA) was responsible for test
classification and mainly followed the same structure as that of the APA. Assessment measures were classified as A-, B- or C-level tests.
Currently, the test classification system in South Africa is much simpler and contains only two categories. Accordingly, an
assessment measure is regarded simply as either a psychological measure or not. According to Foxcroft and Roodt (2005), the main
criterion given by the Psychometrics Committee for regarding a predictor as a psychological measure is whether or not using the measure will result in a psychological act. The Psychometrics Committee is guided by the Health Professions Act, in terms of which only registered psychologists are permitted to perform psychological acts. As such, the Committee regards the use of any psychometric measure that assesses intellectual functioning or cognitive ability, aptitude, interest, or personality structure or functioning as a psychological act.
Table 5.1 Classification of psychological tests according to the APA (Gregory, 2007:28)
[Reproduced with the permission of Pearson Education, Inc.]
Level A
Complexity: Measures classified as level A are straightforward paper-and-pencil measures, with easy-to-use guidelines for administration, scoring and interpretation contained in a manual. A-level tests include vocational proficiency and group educational achievement tests.
Expertise required: No psychology registration is required to administer A-level tests.

Level B
Complexity: These measures are a little more complex in nature and in the manner in which they were constructed. They tap into aspects of human behaviour that are more complex and require a deeper understanding than can be presented in the test manual. Tests included in the B-level category are aptitude tests and certain personality measures.
Expertise required: Knowledge of test construction and training in statistics and psychology is required to use B-level tests. Advanced training and supervision by a qualified psychologist may also be required.

Level C
Complexity: These measures tap into very complex psychological attributes based on specific theoretical frameworks in psychology and human behaviour. The manner in which the measures have been designed, standardised and validated needs to be well understood. Individual intelligence tests, projective personality tests, and neuropsychological tests are regarded as C-level tests.
Expertise required: Substantial understanding of test construction and moderating and contextual factors is required. Supervised experience should be obtained during training in order to be competent in administering, scoring and interpreting these measures. Typically at least a Master’s degree is required.
The two test categories that are described in the Policy on the Classification of Psychometric Measuring Devices, Instruments,
Methods and Techniques (Form 208) (available at <www.hpcsa.co.za>) are as follows:
1. Psychological tests: tests that may be used by psychometrists under the control (supervision) of a psychologist with regard to:
• The choice of a test
• Administration and scoring
• Interpretation
• Reporting on results.
2. Prescribed tests for use by other professionals, such as speech, occupational and physiotherapists.
Training and registration of test professionals
From the classification standards with regard to psychological assessment, it is clear that different levels of knowledge (training), skills
and expertise are required of people using psychological measures. In South Africa, the Professional Board for Psychology distinguishes
specifically between a psychologist and a psychometrist. As discussed in chapter 1, registration as an industrial psychologist requires a
Master’s qualification with at least 12 months’ practical training (internship) in various industrial and organisational psychological or
human-resources activities. Registered industrial psychologists may use all levels of tests if they have had the required training, and may
control all testing activities carried out by psychometrists.
To find out about the requirements for registration as a psychometrist, you can also refer to the HPCSA web site (<www.hpcsa.co.za>). Form 94, a document entitled ‘Training and examination guidelines for psychometrists’, indicates that in order to register as a psychometrist one should:
• Possess an honours degree in Psychology (or an equivalent 4-year BPsych degree), which included modules in testing and
assessment.
• Complete an appropriately supervised practicum of at least six months in psychological testing and assessment, under the guidance and supervision of a senior psychologist, within a period of 12 months.
• Obtain at least 70% in the examination of the Professional Board for Psychology.
Psychometrists are not permitted to use certain projective personality measures, specialist neuropsychological measures, and measures
that are used for the diagnosis of psychopathology. According to the Professional Board for Psychology, psychometrists are allowed to
administer, score, and provisionally interpret psychological tests under the supervision and mentoring of a psychologist. However,
reporting the results falls under the final responsibility of the psychologist. It is stated in the training guidelines that psychometrists need
to be mentored and supervised by psychologists because the use of a psychological test constitutes a psychological act.
Codes of conduct: standards for conducting assessment practices
Imagine that you apply for a position as a pharmacist in a pharmaceutical company. During the selection interview, you are informed
that the company requires psychological assessment results as part of the company’s selection procedure and that the company has (to
save costs) obtained the results from an assessment you did while with your current employer. You underwent this assessment when you
went on a leadership development course. Do you think that it is fair that the pharmaceutical company should use your previous
assessment results for selection purposes? Do you think that they have the right to access these results? Of course, your answer should
be no! Apart from the fact that this constitutes a serious violation of your rights to privacy and confidentiality, the information derived from a leadership assessment is not applicable to selection in a different position and context.
We have on numerous occasions noted that assessment results may yield potentially sensitive and even psychologically harmful
information, which may impact on the person’s future. Codes of conduct for assessment practices are mainly devised based on ethical
principles protecting the needs and rights of those being assessed. Some of the most important ethical principles include the rights to
privacy, confidentiality, and informed consent.
To protect the test-taker’s right to privacy, it is expected that any report on his or her assessment results should contain only
information that is relevant to the purpose of the assessment. When you as an assessor report on assessment results and reveal more than
the relevant information needed to make an informed decision, you have invaded the person’s fundamental right to privacy. Usually
allowance is made for psychologists to discuss confidential information for appropriate research and professional purposes, but then
only with equally well-qualified people, such as other psychologists or behavioural scientists, who are similarly concerned with such
types of information. It is therefore clear that it is of the utmost importance to determine and communicate the purpose of the assessment
prior to communicating the results. When results are eventually communicated, it should be done in a manner that is in line with the
purpose of the assessment.
Psychological assessors’ primary responsibility lies with protecting the confidentiality of the information they have obtained during
assessment. Personal information may be released to someone else only if the test-taker has given ‘unambiguous consent, usually in
written form’ (Gregory, 2007:30). Written consent is therefore important and should be given on the basis that the test-taker understands
the purpose of the assessment and what the results will be used for. Written informed consent is also based on the premise that the
language in which the test-taker is informed of the purpose of the assessment should be one that is reasonably understandable to the
test-taker. Furthermore, the test-taker should give consent with regard to who will have access to the test results (the person to whom the
results will be communicated). Ultimately, the test-taker should be well-informed about the nature, scope and purpose of the assessment
before being requested to consent to it.
TYPES OF PREDICTORS
Psychological tests are usually classified according to their content or purpose. The content or purpose of a test specifically pertains to the predictor construct that is assessed. In the first section of this chapter we saw that there are three broad predictor constructs,
namely cognitive, personality, and behavioural attributes. We do, however, also have measures that tap into all three of these constructs,
such as the interview and assessment centres. In this section we will discuss various types of cognitive tests, personality tests, and
behavioural measures.
Cognitive assessment
Cognitive ability testing has been one of the major driving forces in the development and history of psychological assessment in general,
but it has also been an area of controversy and debate that continues today. Interest in cognitive assessment dates back more than 100 years, and an extensive body of evidence exists showing cognitive ability tests to have the highest predictive validities with regard to academic
performance and job performance. However, research has also brought to light differences in mean scores across racial and ethnic
groups, leading on the one hand to criticism of adverse impact, but also to the development of more relevant cognitive constructs to be
measured, as well as innovative ways of cognitive measurement. The development of different cognitive ability tests is also closely linked to the evolution of the construct of cognition and researchers’ continuing attempts to explore the nature and relevance of the construct for use in organisations and other settings, such as education. To enhance understanding and knowledge of cognitive ability tests, in this chapter we will focus on the earliest approach to cognitive assessment – the structural approach – as
well as the information-processing approach and the developmental approach, which evolved later. Within each of these approaches
different cognitive assessment tools have been developed. As part of this section on cognitive assessment, we will also address measures
of specific abilities, namely aptitude tests.
The structural approach
The structural approach to assessing cognition was prevalent during the first half of the 20th century. The primary goal of structural
researchers was to determine the structure of the intellect, and various theories evolved on what intelligence is and how it should be
measured. Factor analysis was applied in operationalising the different meanings given to intelligence in various assessment devices.
Controversy and debate arose because researchers could not come to an agreement on the meaning and definition of the construct of
intelligence.
From the discussion on the history of psychological assessment earlier in this chapter, you should remember various attempts at
developing measures with a focus on measuring simple sensory processes. We have also referred to the Binet-Simon scale that was
initially developed to assess mental retardation. Relevant to the field of industrial and organisational psychology, intelligence as a
predictor construct was initially defined and measured as a single factor. Spearman was the first to conclude that intelligence consists of
one single factor g – also referred to as general mental ability. He proposed that any cognitive task requires a certain quantity of g, but
that tasks may also require additional abilities that are specific to the task, which he referred to as specific abilities or s-factors.
Many cognitive tests can be categorised as reflecting either g- or s-factors or both, and Spearman’s model of intelligence has been
used to construct various intelligence (IQ) tests assessing mainly the g-factor, as well as aptitude test batteries assessing mainly s-factors. Raymond B. Cattell later contended that the g-factor consists of two distinct sub-factors, namely fluid (gf) and crystallised (gc)
intelligence. According to him fluid intelligence comprises non-verbal abilities that are to a large extent not dependent on educational or
cultural content, and represents the inherent ability to learn and solve problems through inductive reasoning. He described crystallised
intelligence as that component of IQ that represents what an individual has already learnt and is therefore dependent on past educational
and cultural experience. Crystallised intelligence includes language and numerical abilities.
With regard to intelligence measures, we can distinguish between individual intelligence measures and group intelligence measures.
Individual intelligence measures typically consist of a number of verbal and non-verbal sub-tests which, when scored together, provide
an indication of a person’s general intelligence. Verbal, non-verbal, and total IQ scores are usually obtained. Scores on the individual
verbal and non-verbal sub-tests can be analysed should a more detailed idea of a person’s abilities be needed. Verbal sub-tests could
include, for example, the measurement of vocabulary, comprehension, similarities, number problems, and story memory, as in the
Senior South African Individual Scale (SSAIS-R) (Van Eeden, 1991). Non-verbal sub-tests on the SSAIS-R include (as an example)
pattern completion, block design tests, missing parts, and form board. Another well-known individual intelligence test is the Wechsler
Adult Intelligence Scale (WAIS), which has also been standardised for the South African context – in particular its third edition, the
WAIS-III. Sub-tests on the WAIS-III are grouped to provide verbal IQ, performance IQ, and full-scale IQ scores.
Group intelligence measures are used when large numbers of people need to be assessed simultaneously for the purposes of, for
example, placement in industry or admission to educational institutions. Group tests are particularly designed to assess cognitive
abilities relevant to academic achievement and usually contain verbal or non-verbal sub-tests or both (Van Eeden & De Beer, 2005). An
example of a group intelligence test containing both verbal and non-verbal scales is the General Scholastic Aptitude Test (GSAT), and
an example of a test containing only non-verbal items is Raven’s Progressive Matrices (RPM). Figure 5.2 contains an example of a
test item that closely resembles the type of questions asked in the RPM. Test-takers are instructed to identify from options a, b, c, d, e
and f the piece that is missing and needed to complete the pattern.
Figure 5.2 Example of an item in a non-verbal IQ test
item content and style, and new items to increase the discriminatory power of the intellectual and decision-making dimensions. In
addition, the JSP emphasises use by a job analyst rather than by the incumbent. However, although shown to be reliable, further research is
needed before it is known whether the JSP is a legitimate improvement on the PAQ (Aamodt, 2010).
The Job Elements Inventory (JEI), developed by Cornelius and Hakel (1978), was designed as an alternative to the PAQ. It is also
regarded as a better replacement for the difficult-to-read PAQ. The instrument contains 153 items and has a readability level
appropriate for an employee with only a tenth-grade education (Aamodt, 2010).
Although the job element approach to job analysis is person-orientated, some of the elements can be considered to be
work-orientated (for example, an element for the job of police officer is ‘ability to enforce laws’) because of their job specificity. The Job
Element Method (JEM) looks at the basic KSAOs that are required to perform a particular job. These KSAOs constitute the basic job
elements that show a moderate or low level of behavioural specificity. The elements for each job are gathered in a rather unstructured
way in sessions of SME panels and therefore cross-job comparisons are very limited (Voskuijl & Evers, 2008).
Functional Job Analysis (FJA) resulted from the development of the United States of America’s Dictionary of Occupational Titles
(DOT) (Voskuijl & Evers, 2008). FJA was designed by Fine (1955) as a quick method that could be used by the United States
Department of Labour to analyse and compare the characteristics, methods, work requirements, and activities to perform almost all jobs
in the United States. Jobs analysed by FJA are broken down into the percentage of time the incumbent spends on three functions: data
(information and ideas), people (clients, customers, and co-workers), and things (machines, tools, and equipment). An analyst is given
100 points to allot to the three functions. The points are usually assigned in multiples of 5, with each function receiving a minimum of 5
points. Once the points have been assigned, the highest level at which the job incumbent functions is then chosen from the chart shown
in Table 4.5.
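A small sketch of the allocation rule just described: 100 points spread over data, people and things, assigned in multiples of 5, with at least 5 points per function. The teller figures are hypothetical.

def valid_fja_allocation(data: int, people: int, things: int) -> bool:
    # 100 points in total, assigned in multiples of 5, minimum of 5 per function
    points = (data, people, things)
    return (sum(points) == 100
            and all(p >= 5 for p in points)
            and all(p % 5 == 0 for p in points))

# Hypothetical rating for a bank teller's job
print(valid_fja_allocation(data=50, people=45, things=5))   # True
print(valid_fja_allocation(data=60, people=45, things=5))   # False: sums to 110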
Methods providing information about tools and equipment
The Job Components Inventory (JCI) was developed by Banks, Jackson, Stafford & Warr (1983) for use in England in an attempt to take
advantage of the PAQ’s strengths while avoiding some of its problems. The JCI consists of more than 400 questions covering five major
categories: tools and equipment, perceptual and physical requirements, mathematical requirements, communication requirements, and
decision-making and responsibility. It is the only job analysis method containing a detailed section on tools and equipment (Aamodt,
2010). Research further indicates that it is reliable (Banks & Miller, 1984), can differentiate between jobs (Banks et al, 1983), can
cluster jobs based on their similarity to one another (Stafford, Jackson & Banks, 1984), and, unlike the PAQ, is not affected by the amount of
information available to the analyst (Aamodt, 2010; Surrette, Aamodt & Johnson, 1990).
Methods providing information about KSAOs/competencies
In the 1980s the task-based DOT was judged to be no longer adequate to reflect changes in the nature and conditions of work. The
Occupational Information Network (O*NET) (Peterson et al, 1999) is an automated job classification system that replaces the DOT.
O*NET focuses in particular on cross-job descriptors. It is based on a content model that consists of five categories of job descriptors
needed for success in an occupation (Reiter-Palmon et al, 2006):
• Worker requirements (for example, basic skills, cross-functional skills)
• Worker characteristics (for example, abilities, values)
• Experience requirements (for example, training)
• Occupational requirements (for example, generalised work activities, work context), and
• Occupation-specific requirements (for example, tasks, duties).
The O*NET also includes information about such economic factors as labour demand, labour supply, salaries and occupational trends.
This information can be used by employers to select new employees and by applicants who are searching for careers that match their
skills, interests, and economic needs (Aamodt, 2010).
O*NET is a major advancement in understanding the nature of work, in large part because its developers understood that jobs can be
viewed at four levels: economic, organisational, occupational, and individual. As a result, O*NET has incorporated the types of
information obtained in many job analysis techniques (Reiter-Palmon et al, 2006).
In addition to information about tools and equipment, the Job Components Inventory (JCI) which was discussed earlier also provides
information about the perceptual, physical, mathematical, communication, decision-making, and responsibility skills needed to perform
the job (Aamodt, 2010).
Table 4.5 Levels of data, people and things (FJA) (Aamodt, 2010:52)
The Fleishman Job Analysis Survey (F-JAS) is a taxonomy-based job analysis method based on more than 30 years of research
(Fleishman & Reilly, 1992). Through a combination of field and laboratory research, Fleishman and his associates developed a
comprehensive list, or taxonomy (an orderly, scientific system of classification), of 52 abilities. These can be divided into the broad
categories of verbal, cognitive, physical, and sensory or perceptual-motor abilities. Table 4.6 provides some examples of these abilities.
Fleishman’s list of abilities can be used for many different applied purposes. It is an effective way to analyse the most important
abilities in various occupations. It can also be used to determine training needs, recruiting needs, and even work design. Once an analyst
knows the basic abilities that can be brought to the job, it is much easier to identify which of those abilities are truly important (Landy &
Conte, 2004).
Table 4.6 Excerpts from abilities in Fleishman’s taxonomy (Adapted from Landy & Conte, 2004:83)
[Reproduced with permission of The McGraw-Hill Companies.]
The F-JAS requires incumbents or job analysts to view a series of abilities and to rate the level of ability needed to perform the job.
These ratings are made for each of the 52 abilities and areas of knowledge. The F-JAS is commercially available and easy to use by
incumbents or trained analysts. It is furthermore supported by years of research (Aamodt, 2010).
The Job Adaptability Inventory (JAI) is a 132-item inventory developed by Pulakos, Arad, Donovan and Plamondon (2000) which
taps the extent to which a job incumbent needs to adapt to situations on the job. The JAI is relatively new and it has excellent reliability.
It has also been shown to distinguish among jobs (Pulakos et al, 2000). The JAI has eight dimensions (Aamodt, 2010):
• Handling emergencies or crisis situations
• Handling work stress
• Solving problems creatively
• Dealing with uncertain and unpredictable work situations
• Learning work tasks, technologies and procedures
• Demonstrating interpersonal adaptability
• Demonstrating cultural adaptability
• Demonstrating physically-orientated adaptability.
Historically, job analysis instruments ignored personality attributes and concentrated on abilities, skills, and less frequently, knowledge.
Guion and his colleagues (Guion, 1998; Raymark, Schmit & Guion, 1997) developed a commercially available job analysis instrument,
the Personality-Related Position Requirements Form (PPRF). The PPRF is devoted to identifying personality predictors of job
performance. The instrument is not intended to replace other job analysis devices that identify knowledge, skills or abilities, but rather to
supplement job analysis by examining important personality attributes in jobs (Landy & Conte, 2004).
The PPRF consists of 107 items tapping 12 personality dimensions that fall under the ‘Big 5’ personality dimensions (openness to
experience, conscientiousness, extroversion, agreeableness, and emotional stability). However, Raymark et al (1997) departed from the
‘Big 5’ concept since they found that the five factors were too broad to describe work-related employee characteristics. As shown in
Table 4.7, they defined twelve dimensions (for example, conscientiousness was covered by the sub-scales: general trustworthiness,
adherence to a work ethic, and thoroughness and attentiveness) (Voskuijl & Evers, 2008). Though more research is needed, the PPRF is
reliable and shows promise as a useful job analysis instrument for identifying the personality traits necessary to perform a job (Aamodt,
2010). The PPRF is low on behaviour specificity and high on cross-job comparability (Voskuijl & Evers, 2008).
Table 4.7 Twelve personality dimensions covered by the PPRF (Adapted from Landy & Conte, 2004:193)
[Reproduced with permission of The McGraw-Hill Companies.]
Computer-based job analysis
Computer-based job analysis systems use the same taxonomies and processes across jobs, which makes it easier to understand job similarities, job families and career paths. In addition, they provide a number of advantages, including:
• Time and convenience to the employer since SMEs need not be assembled in one spot at one time but work from their desks
at their own pace and submit reports electronically
• Efficiency with which the SME system can create reports that serve a wide range of purposes, from individual goal-setting
and performance feedback to elaborate person–job matches to support selection and placement strategies, and
• Facilitating vocational counselling and long-term strategic human resource planning in the form of replacement charts for
key positions (Landy & Conte, 2004).
The Work Profiling System (WPS) (Saville & Holdsworth, 2001) is an example of a computerised job analysis instrument by means of
which the data collection and interpretation process of job analysis can be streamlined, reducing costs to the organisation, minimising
distractions to the SMEs and increasing the speed and accuracy of the job analysis process.
The WPS consists of three different questionnaires for the following groups of employees: managerial and professional; service and
administrative; manual and technical. Each questionnaire consists of a job content part, where the main tasks are established, and a job
context part, where, for example, the physical environment, responsibility for resources, and remuneration are explored. To use the
WPS, each SME from whom information is sought fills out an onscreen job analysis questionnaire. SMEs respond using scales to indicate the typical percentage of their time spent on a task as well as the relative importance of the task. A separate work context section covers various aspects of the work situation. The WPS human attribute section covers physical and perceptual abilities, cognitive
abilities, and personality and behavioural style attributes (Landy & Conte, 2004).
The WPS can provide job descriptions, people specifications, assessment methods that can be used to assess candidates for a
vacancy, an individual development planner for a job incumbent, a tailored appraisal document for the job, and a person–job match,
which can match candidates against the key requirements of the job (Schreuder et al, 2006).
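The following is not the WPS's own scoring procedure, but a rough sketch of how time and importance ratings gathered from several SMEs might be summarised per task; the tasks and numbers are invented.

# Hypothetical SME responses per task: (percentage of time, importance on a 1-5 scale)
sme_ratings = {
    "Process customer transactions": [(40, 5), (35, 5), (45, 4)],
    "Balance cash drawer":           [(20, 4), (25, 5), (20, 4)],
    "Refer sales leads":             [(10, 2), (15, 3), (10, 2)],
}

for task, ratings in sme_ratings.items():
    mean_time = sum(t for t, _ in ratings) / len(ratings)
    mean_importance = sum(i for _, i in ratings) / len(ratings)
    print(f"{task}: about {mean_time:.0f}% of time, mean importance {mean_importance:.1f}")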
Reflection 4.5
Consider the advantages and shortcomings of each of the following methods of collecting job analysis information: observation,
performing actual job tasks (job participation), interviews, critical incident identification, work diaries, job analysis questionnaires,
and computer-based job analysis techniques.
Assuming that you would use three methods in a single project to collect job analysis information, which of the three methods
listed above would you choose? In what order would you arrange these methods (that is, which type of data would you gather first,
then second, then last?). Why would you use this order?
The South African Organising Framework for Occupations (OFO) and job analysis
The Organising Framework for Occupations (OFO) is an example of a skills-based approach to job analysis. As discussed in chapter 3,
the focus on skills has become more important in today’s business environment as a result of rapid technological change and skills
shortages nationally and globally. Since national and global imperatives have indicated that identifying occupational skills requirements is essential to identifying and addressing the most critical skills shortages, traditional job analysis approaches which focus purely on static tasks and work behaviours are regarded as no longer appropriate. In contrast, focusing on occupation-specific skills in addition to the
traditional job analysis approach is seen to provide a more dynamic and flexible approach to job analysis in the contemporary
employment context. Occupation-specific skills involve the application of a broader skill in a specific performance domain.
Furthermore, these skills are generally limited to one occupation or a set of occupations (such as a job family), but are not designed to
cut across all jobs. However, these more specific skills can be utilised across jobs when jobs include similar occupation-specific skills
(Reiter-Palmon et al, 2006). To allow for comparisons between jobs, and therefore flexibility and efficiency in training design, the
occupation-specific information is usually anchored in a broader, more general and theoretical context. The South African OFO was
specifically designed to provide this broader context but not job-specific information.
The Department of Labour, with the assistance of German Technical Co-operation (GTZ), introduced the OFO in February 2005 to
align all skills development (education, training and development or learning) activities in South Africa. The framework is based on
similar international development work done by the Australian Bureau of Statistics (ABS) and Statistics New Zealand, which led to an
updated classification system, the Australian and New Zealand Standard Classification of Occupations (ANZSCO). Inputs from
stakeholders in South Africa were used to adapt the framework and its content to the South African context (RainbowSA, 2010).
Although not developed as a job analysis method but rather as a broader framework for the South African Occupational Learning
System (OLS), which is discussed in chapter 10, the OFO is regarded as a valuable source of information about occupations and jobs
that are linked to scarce and critical skills and jobs in the South African context. The OFO became operational in April 2010.
Application of the OFO
The OFO provides an integrated framework for storing, organising and reporting occupation-specific information not only for statistical
but also for client-orientated applications, such as identifying and listing scarce and critical skills, matching job seekers to job vacancies,
providing career information, and registering learnerships. This information is generated by underpinning each occupation with a comprehensive competence or occupational profile, which is compiled by Committees (or Communities) of Expert Practice (CEPs). A Committee or Community of Expert Practice is a group of practitioners currently active in a specific occupation. Where a professional body exists for the occupation, its members constitute the CEP.
The occupational profiles are also used to inform organisations’ job and post profiles, which simplifies, inter alia, conducting job
analyses, skills audits, and performance evaluations. The structure of the OFO also guides the creation of Career Path Frameworks and
related learning paths. Personal profiles can be linked to job profiles and job profiles to occupational profiles so that workforce or human
resource planning is streamlined and unpacked down to the level of personal development plans that address the development or
up-skilling of scarce and critical skills in the workplace (RainbowSA, 2010).
OFO structural elements and underlying concepts
The OFO is an occupation-specific skills-based coded classification system, which encompasses all occupations in the South African
context. The classification of occupations is based on a combination of skill level and skill specialisation which makes it easy to locate a
specific occupation within the framework. It is important to note that a job and an occupation are not the same. A job is seen as a set of
roles and tasks designed to be performed by one individual for an employer (including self-employment) in return for payment or profit.
An occupation is seen as a set of jobs or specialisations whose main tasks are characterised by such a high degree of similarity that
they can be grouped together for the purposes of the classification. The occupations identified in the OFO therefore represent a category
that could encompass a number of jobs or specialisations. For example, the occupation ‘General Accountant’ would also cover the
specialisations ‘Financial Analyst’ and ‘Insolvency Practitioner’.
Identified occupations are classified according to two main criteria: skill level and skill specialisation, where skill is used in the
context of competency rather than a description of tasks or functions. The skill level of an occupation is related to competent
performance of tasks associated with an occupation. Skill level is an attribute of an occupation, not of individuals in the workforce, and
can operationally be measured by the level or amount of formal education and/or training (learning), the amount of previous experience
in a related occupation, and the amount of on-the-job training usually required to perform the set of tasks required for that occupation
competently. It is therefore possible to make a comparison between the skill level of an occupation and the normally required
educational level on the National Qualifications Framework (NQF).
The skill specialisation of an occupation is a function of the field of knowledge required, tools and equipment used, materials
worked on, and goods or services provided in relation to the tasks performed. Under some occupations, a list of alternative titles has
been added. The purpose of this is to guide users of the OFO to identify the relevant occupation under which to capture data
(RainbowSA, 2010). As shown in Figure 4.3, within the current OFO there are 8 major groups. These major groups comprise 43
sub-major groups, 108 minor groups, 408 unit groups, and 1 171 occupations. We have included in the example the occupation of
industrial psychologist as outlined in the OFO. Table 4.8 shows how the role of a psychologist (including the occupation of an industrial psychologist) differs from that of the human resource professional, as described in the OFO.
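To illustrate how the OFO's layered structure might be captured in a simple record, the sketch below defines an illustrative data structure (it is not an official OFO data format). The group labels are invented placeholders, while the code, title and skill level are taken from Table 4.8.

from dataclasses import dataclass

@dataclass
class OFOOccupation:
    # One occupation located within the OFO's layered structure
    major_group: str
    sub_major_group: str
    minor_group: str
    unit_group: str
    code: str            # OFO occupation code
    title: str
    skill_level: int
    specialisations: list

psychologist = OFOOccupation(
    major_group="Professionals",                             # illustrative label
    sub_major_group="Social and welfare professionals",      # illustrative label
    minor_group="Psychologists and related professionals",   # illustrative label
    unit_group="Psychologists",                              # illustrative label
    code="2723",              # from Table 4.8
    title="Psychologist",
    skill_level=5,
    specialisations=["Industrial psychologist"],             # example from the text
)
print(psychologist.title, psychologist.code, psychologist.skill_level)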
An example of how useful the OFO is comes from a Human Sciences Research Council (HSRC) analysis of skills demand as
reflected in newspaper advertisements over a three-year period. Three national newspapers were analysed and 125 000 job
advertisements found, advertising 28 000 unique job titles. Using the OFO, the HSRC could isolate 1 200 unique occupations from the
28 000 job titles and 125 000 advertisements (RainbowSA, 2010).
COMPETENCY MODELLING
Competency modelling is viewed as an extension rather than a replacement of job analysis. Just as job analysis seeks to define jobs and
work in terms of the match between required tasks and human attributes, competency modelling seeks to define organisational units
(that is, larger entities than simply jobs or even job families) in terms of the match between the goals and missions of those units and the
competencies required to meet those goals and accomplish those missions. From the perspective of advocates of competency modelling,
the notion of competencies is seen to be rooted in a context of organisational goals rather than in an abstract taxonomy of human
attributes (Landy & Conte, 2004).
Figure 4.3 Example of structural elements of the South African OFO (Stuart (RainbowSA), 2010)
[Reproduced with the permission of the author.]
Reflection 4.6
• Review chapter 1 and study the objectives of I/O psychology. Now study the tasks or skills of the psychologist shown in Table 4.8 (including those of the industrial psychologist shown in Figure 4.3). How does the job of an industrial psychologist differ from that of a human resource professional?
• Review the objectives of personnel psychology as a sub-field of I/O psychology described in chapter 1 and the
employment process in chapter 3. How do the tasks or skills of the industrial psychologist complement those of the human
resource professional in the employment process discussed in chapter 3?
• Obtain a job description of a human resource professional from any organisation. Does the job description reflect the tasks
or skills outlined on the OFO? Which of the tasks overlap with those of the industrial psychologist?
• Do you think that the OFO will be useful in developing a job description? If your answer is in the affirmative, in what manner? Provide reasons for your answer.
Table 4.8 Example of tasks and skills descriptions on the OFO (Stuart (RainbowSA), 2010)
[Reproduced with the permission of the author.]
Psychologists (Skill level 5) – OFO code: 2723
Skill specialisation: Psychologists investigate, assess and provide treatment and counselling to foster optimal personal, social, educational and occupational adjustment and development.
Tasks or skills:
• Administering and interpreting diagnostic tests and formulating plans for treatment
• Collecting data about clients and assessing their cognitive, behavioural and emotional disorders
• Collecting data and analysing characteristics of students and recommending educational programmes
• Conducting research studies of motivation in learning, group performance and individual differences in mental abilities and educational performance
• Conducting surveys and research studies on job design, work groups, morale, motivation, supervision and management
• Consulting with other professionals on details of cases and treatment plans
• Developing interview techniques, psychological tests and other aids in workplace selection, placement, appraisal and promotion
• Developing, administering and evaluating individual and group treatment programmes
• Formulating achievement, diagnostic and predictive tests for use by teachers in planning methods and content of instruction
• Performing job analyses and establishing job requirements by observing and interviewing employees and managers

Human resource professionals (Skill level 5) – OFO code: 2231
Skill specialisation: Human resource professionals plan, develop, implement and evaluate staff recruitment, assist in resolving disputes by advising on workplace matters, and represent industrial, commercial, union, employer and other parties in negotiations on issues such as enterprise bargaining, rates of pay, and conditions of employment.
Tasks or skills:
• Arranging for advertising of job vacancies, interviewing, and testing of applicants, and selection of staff
• Arranging the induction of staff and providing information on conditions of service, salaries and promotional opportunities
• Maintaining personnel records and associated human resource information systems
• Overseeing the formation and conduct of workplace consultative committees and employee participation initiatives
• Providing information on occupational needs in the organisation, compiling workplace skills plans and training reports, and liaising with Sector Education and Training Authorities (SETAs) regarding Learning Agreements and Learning Contracts in accordance with skills development legislation
• Providing advice and information to management on workplace relations policies and procedures, staff performance and disciplinary matters
• Providing information on current job vacancies in the organisation to employers and job seekers
• Receiving and recording job vacancy information from employers, such as details about job description and wages and conditions of employment
• Studying and interpreting legislation, awards, collective agreements and employment contracts, wage payment systems and dispute settlement procedures
• Undertaking and assisting in the development, planning and formulation of enterprise agreements or collective contracts, such as productivity-based wage adjustment procedures, workplace relations policies and programmes, and procedures for their implementation
• Undertaking negotiations on terms and conditions of employment, and examining and resolving disputes and grievances
However, it is still a problem that definitions of the term ‘competencies’ or ‘competency’ are not unequivocal and are sometimes
even contradictory (Voskuijl & Evers, 2008). McClelland (1973) introduced the term as a predictor of job performance because he
doubted the predictive validity of cognitive ability tests. He proposed to replace intelligence testing with competency testing. Though
McClelland (1973) did not define the term competency, he made it explicitly clear that the term did not include intelligence. Today, we
know that general mental ability is the most valid predictor of job performance (Voskuijl & Evers, 2008; Schmidt & Hunter, 1998).
However, in spite of the paucity of empirical evidence that competencies add something to the traditional concepts in the prediction or
explanation of job success (KSAOs), competencies, competency modelling (the term mostly used in the USA), and competency
frameworks (the term mostly used in the UK) are very popular.
In general, competency frameworks or competency models describe the competencies required for effective or excellent
performance on the job. As shown in Table 4.10, the models mostly consist of lists of competencies (typically 10 to 20), each with a
definition and examples of specific behaviours. Competency models are often based on content analysis of existing performance models
and managerial performance taxonomies (Voskuijl & Evers, 2008). In addition, organisation-, industry-, and job-specific competencies
are identified. As in job analysis, there are several methods for identifying competencies (Voskuijl & Evers, 2008; Garavan & McGuire,
2001; Robinson et al, 2007):
• Direct observation
• Critical incident technique
• SME panels or teams or focus groups
• Questionnaires
• Repertoire grid technique, or
• Conventional job analysis.
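A minimal sketch of how such a competency model might be represented once the competencies have been identified; the competency names, definitions and behaviours below are invented examples, not drawn from any published framework.

from dataclasses import dataclass

@dataclass
class Competency:
    # One entry in a competency model: a label, a definition and
    # examples of the specific behaviours that demonstrate it
    name: str
    definition: str
    behaviours: list

competency_model = [
    Competency(
        name="Customer focus",
        definition="Builds and maintains effective relationships with internal and external customers.",
        behaviours=[
            "Responds to customer queries within agreed time frames",
            "Follows up to confirm that the customer's problem has been resolved",
        ],
    ),
    Competency(
        name="Problem solving",
        definition="Analyses information and applies sound judgement to resolve work problems.",
        behaviours=[
            "Identifies the root cause of recurring errors",
            "Evaluates alternative solutions before acting",
        ],
    ),
]

for c in competency_model:
    print(f"{c.name}: {len(c.behaviours)} example behaviours")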
In South Africa, the terms competency modelling and competency framework are both adopted by some companies. However,
considering new legislative developments in the South African organisational context such as the Occupational Learning System (OLS),
the National Qualifications Framework (NQF), and the new Organising Framework for Occupations (OFO) (the topics of chapter 10) –
all of which will have a major influence on the education, training and development of people in South African workplaces – it can be
foreseen that companies will in future endeavour to align their competency modelling approaches with the national occupation-specific
skills-based approach.
Defining competencies
Boyatzis’ (1982) definition of competency is regarded as being the most frequently cited. Boyatzis describes competency as an
underlying characteristic of a person which results in effective and/or superior performance of a job. This definition is based on the data
of McClelland (1973) and refers to key success areas (KSAs), motives, traits and aspects of one’s self-image or social roles of
individuals related to job performance. Garavan and McGuire (2001) define competencies in narrower terms by regarding them as
combinations of KSAOs that are needed to perform a group of related tasks or bundles of demonstrated KSAOs.
Other authors refer more explicitly to behaviours. As illustrated in Figure 4.4, Zacarias and Togonon (2007:3) define a competency
as ‘the sum total of observable and demonstrated skills, knowledge and behaviours that lead to superior performance’, while Tett et al
(2000:215) regard a competency as future-evaluated work behaviour by describing it as an ‘identifiable aspect of prospective work
behaviour attributable to the individual that is expected to contribute positively and/or negatively to organisational effectiveness’.
However, other authors such as Kurz and Bartram (2002) are of the firm opinion that a competency is not behaviour. Kurz and Bartram
(2002:230), for example, state:
‘A competency is not the behaviour or performance itself, but the repertoire of capabilities, activities, processes and responses available that enable
a range of work demands to be met more effectively by some people than by others. Competence is what enables the performance to occur.’
Voskuijl and Evers (2008) point out that the common thread underlying the above-mentioned definitions of competencies is the focus on
the characteristics of the individual job holder. In this regard, the descriptions of competencies are person-based – an approach that
found its origin in the USA. Within this approach, competencies are conceived as individual characteristics that are related to excellent
or superior performance. This perspective is worker-orientated and is concerned with the input of individuals in terms of behaviour,
skills, or underlying personal characteristics required by job holders.
However, the UK approach to competency frameworks tends to integrate the person-based approach with the job-based approach.
The job-based approach is task centred and focuses on the purpose of the job or occupation, rather than the job holder. Further, the job
competency approach is directed at job output and it generally assumes that the characteristics required by job holders exist when the
standards are met. The UK approach further differentiates between the terms ‘competence’ and ‘competency’. The differences between
the terms parallel the differences between the job-based versus the person-based perspectives. In the UK (as in South Africa), in
particular, the term ‘competence’ is generally associated with the job-based approach, especially in the industry and educational sectors.
Person-based competency frameworks appear to be more relevant in the area of management and professionals, especially in selection
and assessment (Voskuijl & Evers, 2008).
Similarly, in South Africa, the concept of competence refers to meaningful skills related to specific occupations, as described in the OFO. As previously mentioned, the term 'skills' is used in the context of competency rather than as a description of tasks or functions. The skills level of an occupation is related to the competent performance of tasks associated with that occupation. The OFO identifies competency and professional requirements through six dimensions: skills, knowledge, qualification, experience, professional registration, and disability. Competence in occupational learning is demonstrated against three learning components: a knowledge and theory standard, a practical skills standard, and a work experience standard.
Figure 4.4 Definition of a competency (Zacarias & Togonon, 2007:3)
It is important to note that the OFO is aimed at creating an OLS linked with a National Career Path Framework (NCPF) to address
the skills shortages in South Africa rather than addressing specific strategic business goals of companies. Considering that the education,
training and development of a workforce have an effect on companies’ capability to sustain their competitive edge in a global business
market, the OFO provides a valuable framework of occupation-specific skills that can be used as an input to job analysis and
competency modelling practices.
Drawbacks and benefits of competency modelling
Most of the criticism against the use of competencies refers to matters of validity and theoretical background of the concept (Voskuijl &
Evers, 2008; Garavan & McGuire, 2001). Practical disadvantages include aspects related to time, costs and effort. For example,
Mansfield (1996) mentions that the oldest competency models were developed for single jobs. These models included extensive
data-gathering (for example, interviews, focus groups, surveys, and observation) with SMEs (for example, managers, job holders, and
customers). The competencies were distilled from the collected data into lists of only ten to twenty skills or traits. Mansfield (1996) refers
further to the ‘one-size-fits-all’ model which defines one set of competencies for a broad category of related jobs, for example, all
managerial jobs. The most important drawback of this model is the impossibility of describing the typical requirements of specific jobs;
therefore it is of limited use in several human resource practices, such as selection and promotion procedures. Another example is the
‘multiple job model’ that assumes, for example, experience in building competency models, and the existence of many single-job
models and consultants specialised in competency work. If such conditions are met, it may be possible to develop a competency model
for a specific job in a quick, low-cost way (Voskuijl & Evers, 2008).
However, the strength of the behavioural competency approach seems to be the use of criterion samples. Sparrow (1997) argues that
the approach focuses on those behaviours that have proved to result in successful performance in a sample of job holders who have had
success in their jobs. Although behavioural competencies are reported to be useful in the areas of recruitment and selection,
career development, performance management and other human resource processes, these claims still need to be empirically verified.
Furthermore, although several weaknesses of the use of competences in the context of management development have been pointed out,
the competence approach concentrates on what managers and workers actually do, rather than on beliefs about what they do (Landy &
Conte, 2004).
A study conducted by members of the Job Analysis and Competency Modelling Task Force (Schippman et al, 2000) showed a
superior overall evaluation of job analysis in comparison to competency modelling. The Task Force conducted a literature search and
interviewed thirty-seven subject matter experts, such as human resource consultants, former presidents of the US Society for Industrial
and Organizational Psychology, leaders in the area of competency modelling, and industrial psychologists who represent a traditional
job analysis perspective. The sample of experts represented advocates as well as opponents of either job analysis or competency
modelling. The study revealed that, in terms of the evaluative criteria used, job analysis demonstrates medium/high to high rigour, while competency modelling demonstrates only low/medium to medium rigour; the one exception is competency modelling's stronger focus on the link between people's competencies and business goals and strategies. The evaluative criteria included the method of investigation, the procedures for developing descriptor content, and the level of detail of the descriptor content, each assessed according to the level of rigour with which it is practised (Voskuijl & Evers, 2008).
Although job analysis is considered as being more psychometrically sound than competency modelling, in contrasting job analysis
and competency modelling, the conclusion is that the integration of the strengths of both approaches could enhance the quality of the
results (Cartwright & Cooper, 2008; Lievens, Sanchez & De Corte, 2004). A study by Siddique (2004) indicated that a company-wide
policy of job analysis is an important source of competitive advantage. He found a relationship between frequency or regularity of
conducting job analysis and organisational performance (for example, administrative efficiency, quality of organisational climate,
financial performance, and overall sales growth). The relationship was stronger when the job analysis approach was competency
focused. By that Siddique meant approaches that place greater emphasis on characteristics of employees considered essential for
successful job performance (for example, motivation, adaptability, teamwork orientation, interpersonal skills, innovative thinking, and
self-motivation).
Phases in developing a competency model
Designing a competency model generally involves the following broad phases:
• Competency identification
• Competency definition, and
• Competency profiling.
Competency identification involves the identification of key competencies that an institution needs in order to deliver on its strategy
effectively. This phase involves a thorough assessment of the organisation’s directions and the internal processes necessary to support its
strategic initiatives. The output of this phase is a complete list of required competencies.
Once the key competencies have been identified, definitions are crafted and specific behavioural manifestations of the competencies
are determined. The behavioural requirements can be developed after a series of interviews with management and key staff. In this
phase, competencies may also be differentiated into proficiency levels. A competency directory should be produced at the end of this
phase.
Competency profiling occurs when competencies are identified for specific jobs within an institution. The mix of competencies will
depend on the level of each position. The resulting job profiles are then tested and validated for accuracy.
Steps in developing a competency model
Typically, the process steps are as follows:
• Assemble the Competency Process Team.
• Identify the key business processes.
• Identify competencies.
• Define proficiency levels.
• Develop job and competency profiles.
• Weight competencies.
• Apply the competency weighting.
• Validate and calibrate results.
Assemble the Competency Process Team
In general, the team should consist of an outside consultant or consultants, and representatives of the organisation (both human capital
and business) undertaking the competency process (Competency Task Team or Steering Committee). It is important that the team
comprises people who have deep knowledge of the institution – in particular, key institutional processes – and people who have the
authority to make decisions. The participation of the CEO is critical in sending the right message to the rest of the team regarding the
significance of the undertaking.
Identify the key business processes
In order to guide the competency model towards strategic success, all key processes, or those critical to the success of the business, should be identified and unpacked through process mapping. This exercise will also be influenced by the organisation's balanced scorecard requirements (see Table 4.9).
Identify competencies
It is important to allocate competencies to the various categories (as shown in Table 4.10) in order to prioritise the focus of competency modelling.
Table 4.9 The balanced scorecard
What exactly is a balanced scorecard?
The balanced scorecard is a managerial accounting technique developed by Robert Kaplan and David Norton that seeks to reshape
strategic management. It is a process of developing goals, measures, targets and initiatives from four perspectives: financial, customer,
internal business process, and learning and growth. According to Kaplan and Norton (1996), the measures of the scorecard are derived
from the company’s strategy and vision. More cynically, and in some cases realistically, a balanced scorecard attempts to translate the
sometimes vague, pious hopes of a company’s vision and mission statement into the practicalities of managing the business better at
every level.
To embark on the balanced scorecard path, an organisation must know (and understand) the following:
• The company’s mission statement
• The company’s strategic plan/vision
• The financial status of the organisation
• How the organisation is currently structured and operating
• The level of expertise of employees
• The customer satisfaction level.
Table 4.10 Categories of competencies
Key competence: The vital few competencies required for business success from both a short- and a long-term perspective.
Strategic competence: Competence related to business success from a long-term perspective.
Critical competence: Competence related to business success from a short-term perspective.
Declining competence: Competence that will be phased out or shifted according to market-driven changes.
Obsolete competence: Competence that will be phased out or shifted in accordance with market-driven changes.
Core competence:
• A special skill or technology that creates customer value which can be differentiated.
• It is difficult for competitors to imitate or procure.
• It enables a company to access a wide variety of seemingly unrelated markets by combining skills and technologies across traditional business units.
In defining the various categories of competencies, the KSAs, emerging competencies, and job inputs and outputs need to be considered (see Table 4.11).
Define proficiency levels
In defining and allocating proficiency levels (shown in Tables 4.13 to 4.15), various scale variations could serve as a guiding framework, depending on which criteria are most suitable, valid and fair for promoting the strategic impact of this application.
Proficiency levels refer to the level of competence (KSAs) required to do the job (Basic, Intermediate, and Advanced), with descriptors provided for each defined level. Variations in proficiency levels could include scales based on level-of-work criteria (1, 2 and 3, or Basic, Intermediate, and Advanced) and correlation with existing performance management proficiency scales.
Table 4.11 Defining competencies
Knowledge: What the job requires the incumbent to know (bodies of knowledge) to perform a given task successfully, or knowledge that an individual brings to the organisation through previous job or work experiences, formal qualifications, training, and/or self-study
Skills: Implies the 'ability to do'; the practical know-how required for a job, or the abilities individuals bring into the organisation, gained through previous job experiences or training
Attributes: Underlying characteristics of an individual's personality and traits expressed in behaviour, including physical traits
Emerging competencies: Key success areas (KSAs) required for facing new organisational challenges
Inputs: Knowledge, skills and attributes required to achieve job outputs
Job outputs: Job outputs as contained in job profiles
Table 4.12 Excerpts of elements in a competency framework
Table 4.13 Proficiency levels: knowledge
Basic: Can explain day-to-day practices regarding bodies of knowledge, ideas or concepts and why they are needed
Intermediate:
• Past or present knowledge of how to administer processes or enable productivity
• Shares knowledge by assisting others with work-related problems or issues
Advanced:
• In-depth knowledge of the area; considered a Subject Matter Expert (SME)
• Expert knowledge must be backed up with qualifications or significant experience
• Can advise others
• Incorporates new learning into work plans and activities
Table 4.14 Proficiency levels: skills and attributes
Basic: This proficiency level requires functioning in an environment of routine or repetitive situations with detailed rules or instructions, under supervision, with an impact limited to own area of operation.
Intermediate: This proficiency level requires functioning in an environment (context) of similar situations with well-defined procedures, under general supervision, with an impact beyond the immediate area.
Advanced: This proficiency level requires functioning in an environment of diverse situations with diversified procedures and standards, with an impact wider than a specific section. Can work independently. Is capable of coaching others on the competency, and can apply the competency to a range of new or different situations.
Table 4.15 Example of proficiency level descriptors (proficiency level, career stage and description)
A (Learner/Contributor): Has basic knowledge and understanding of the policies, systems and processes required in the competency area. Implements and communicates the programme/system associated with the competency.
B (Specialist): Has greater responsibilities and performs more complex duties in the competency area, requiring analysis and possibly research. Interacts with and influences people beyond own team.
C (Seasoned professional): Recommends and pursues improvements and/or changes in the programme/system. Has a broader range of influence, possibly beyond the institution. Provides coaching and mentoring to others in the area of competency.
D (Authority/Expert): Formulates policies in the competency area. Sets direction and builds culture around this competency. Provides leadership in this area, within both the organisation and the larger industry.
Develop job and competency profiles
This stage involves setting performance standards, profiling the job, identifying core job categories and compiling core job
competencies. Elements of these processes are shown in Table 4.16.
Weight competencies
Another part of the job profiling process is the weighting of competencies. Weights reflect the degree to which a specific competency is
important to a job and by extension to an employee holding the position. This allows for more important skills to be treated as more
significant in gap analysis. Using a range of one to three, the following weights may be given to each of the competencies in a job
profile:
1 Occasionally helpful, of modest value
2 High importance, regularly used to advantage
3 Critical requirement.
The weight of a competency in the job profile is not equivalent to the depth of expertise or breadth of knowledge that is required for the job, which will already have been captured in the proficiency level. The weighting of competencies should reflect the extent to which a particular level of competency can spell either success or failure in a given job.
Apply the competency weighting
The weighting is then applied in terms of the critical or core competencies identified; for example, financial control will be of high importance to a finance executive and will therefore receive a weight of 3.
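To make the weighting concrete, the following minimal Python sketch combines the 1 to 3 weights with the proficiency levels described above in a simple gap analysis. It is illustrative only: the competency names, the numerical coding of proficiency levels (Basic = 1, Intermediate = 2, Advanced = 3) and the scores are invented, not taken from the text.

# Minimal sketch of a weighted competency gap analysis (illustrative data only).
# Proficiency levels are coded numerically: Basic = 1, Intermediate = 2, Advanced = 3.

job_profile = {
    # competency: (required proficiency level, weight 1-3)
    "Financial control":    (3, 3),  # critical requirement for a finance executive
    "Interpersonal skills": (2, 2),  # high importance, regularly used
    "Report writing":       (2, 1),  # occasionally helpful
}

employee_assessment = {
    "Financial control":    2,
    "Interpersonal skills": 2,
    "Report writing":       1,
}

def weighted_gap(profile, assessment):
    """Sum of weighted shortfalls between required and assessed proficiency."""
    gap = 0
    for competency, (required, weight) in profile.items():
        shortfall = max(required - assessment.get(competency, 0), 0)
        gap += weight * shortfall
    return gap

print(weighted_gap(job_profile, employee_assessment))  # 4 = (3 x 1) + (2 x 0) + (1 x 1)

In this hypothetical profile the overall gap is driven mainly by the shortfall on the heavily weighted critical competency, which is precisely the effect the weighting is intended to achieve.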
Validate and calibrate results
The final stage of the competency modelling exercise is the validation of the model and the calibration of job profiles. The validation of
the model is done through interviews with key people in the organisation. On-the-job observations of incumbents are also useful in
validating that behaviours captured in the competency model are typical of the job context.
Calibration of job profiles is the process of comparing the competencies across positions or within departments to ensure that the
required proficiency levels reflect the different requirements of the job. To facilitate this process, a benchmark position is usually
chosen; for non-managerial positions this could be the position of credit controller. Once the competency levels have been identified for
the benchmark position, all other positions are calibrated relative to it. For supervisory and managerial positions, a similar approach can
be employed, using the investment fund manager position as the point of reference.
Table 4.16 The job and competency profiling process
CRITERION DEVELOPMENT
As noted earlier in this chapter, performance criteria are among the products that arise from a detailed job analysis, for once the specific
elements of a job are known, it is easier to develop the means to assess levels of successful or unsuccessful performance. Criteria within
the context of personnel selection, placement, performance evaluation, and training are defined as evaluative standards that can be used
as yardsticks for measuring employees’ degree of success on the job (Cascio & Aguinis, 2005; Guion, 1965). As discussed in chapter 1,
the profession of personnel psychology is interested in studying human behaviour in work settings by applying scientific procedures
such as those discussed in chapters 2 and 5.
Furthermore, industrial psychologists generally aim to demonstrate the utility of their procedures in practice, such as, for example,
job analysis, to enhance managers’, workers’ and their own understanding of the determinants of job success. For this purpose, criteria
are often used to measure performance constructs that relate to worker attributes and behaviour that constitute successful performance.
The behaviour that constitutes or defines the successful or unsuccessful performance of a given task is regarded as a criterion that needs
to be measured in a reliable and valid manner (Riggio, 2009).
Steps in criterion development
Guion (1961) outlines a five-step procedure for criterion development, depicted in Figure 4.5:
1. Conducting an analysis of the job and/or organisational needs.
2. Developing measures of actual behaviour relative to expected behaviour as identified in the job and need analysis. These measures
should supplement objective measures of organisational outcomes such as quality, turnover, absenteeism, and production.
3. Identifying criterion dimensions underlying such measures by factor analysis, cluster analysis or pattern analysis.
4. Developing reliable measures, each with high construct validity, of the elements so identified.
5. Determining the predictive validity of each independent variable (predictor) for each one of the criterion measures, taking them one at
a time.
Figure 4.5 Steps in criterion development
Step one (job analysis) has been discussed in detail in this chapter. In step two, the starting point for the industrial psychologist is to have
a general conceptual or theoretical idea of the set of factors that constitute successful performance. Conceptual criteria are ideas or
theoretical constructs that cannot be measured. We can take the selection of a candidate for a vacancy as an example. Conceptual criteria
would be what we see in our mind’s eye when we think of a successful employee (one who can do the job successfully and thereby
contribute to the profitability of the organisation). To make this ideal measurable, the criteria have to be turned into actual criteria.
Actual criteria can serve as real measures of the conceptual criteria (Schreuder et al, 2006).
For example, quality of work is an objective measure of a valued organisational outcome. However, because quality of work is only
a conceptual criterion or a theoretical abstraction of an evaluative standard, the industrial psychologist needs to find some way to turn
this criterion into an operationally measurable factor. That is, the actual criteria that will serve as measures (or evaluative standards) of
the conceptual criterion (quality of work) need to be identified. Quality is usually operationally measured in terms of errors, which are
defined as deviations from a standard. Therefore, to obtain an actual or operational measure of quality, there must be a standard against
which to compare or evaluate an employee’s work. A secretary’s work quality would be judged by the number of typographical errors
(the standard being correctly spelt words); and a cook's quality might be judged by how closely her food resembles a standard as measured by
size, temperature and ingredient amounts (Aamodt, 2010).
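As a small illustration of turning the conceptual criterion 'quality of work' into an actual, operational criterion, the Python sketch below counts typographical errors against a standard, as in the secretary example; the word lists are invented for the purpose of the example.

# Illustrative sketch: operationalising 'quality of work' as deviations from a standard.
typed    = ["personel", "psychology", "criteron", "development"]    # invented sample of typed words
standard = ["personnel", "psychology", "criterion", "development"]  # correctly spelt standard

errors = sum(1 for typed_word, correct in zip(typed, standard) if typed_word != correct)
error_rate = errors / len(standard)   # actual criterion: proportion of errors
print(errors, error_rate)             # 2 errors, an error rate of 0.5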
Similarly, attendance as a valued organisational outcome is one aspect for objectively measuring employees’ performance.
Attendance can be separated into three distinct conceptual criteria: absenteeism, tardiness, and tenure. In terms of tenure, for example,
employees may be considered ‘successful’ if they stay with the company for at least four months and ‘unsuccessful’ if they leave before
that time (Aamodt, 2010). On the other hand, productivity can be operationally measured by actual criteria such as the number of
products assembled by an assembly-line worker, or the average amount of time it takes to process a claim in the case of an insurance
claims investigator (Riggio, 2009).
The development of criteria is often a difficult undertaking. This is further complicated by the dimension of time. A short-term
definition of quality, for example, may not be the same as the long-term definition. Therefore the criteria used for short-term
descriptions of the ideal may differ from the criteria used for a long-term description. Proximal criteria are used when short-term
decisions about quality must be made, while distal criteria are used to make long-term decisions about quality (Schreuder et al, 2006).
In step three, industrial psychologists can use statistical methods such as a factor analysis, which shows how variables cluster to
form meaningful ‘factors’. Factor analysis (also discussed in chapter 5) is useful when an industrial psychologist has measured many
variables and wants to examine the underlying structure of the variables or combine related variables to reduce their number for later
analysis. Using this technique, a researcher measuring workers’ satisfaction with their supervisors, salary, benefits and working
conditions may find that two of these variables, satisfaction with salary and benefits, cluster to form a single factor that the researcher
calls ‘satisfaction with compensation’. The other two variables, supervisors and working conditions, form a single factor that the
researcher labels ‘satisfaction with the work environment’ (Riggio, 2009).
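The Python sketch below illustrates this idea with simulated data; the satisfaction scores and the use of scikit-learn's FactorAnalysis are illustrative assumptions and not part of the example above. Salary and benefits ratings are generated from one latent factor and supervisor and working-conditions ratings from another, so the estimated loadings recover the two clusters.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate 300 workers' satisfaction ratings on four items (invented data).
rng = np.random.default_rng(0)
n = 300
compensation = rng.normal(size=n)   # latent 'satisfaction with compensation'
environment  = rng.normal(size=n)   # latent 'satisfaction with the work environment'

ratings = np.column_stack([
    compensation + rng.normal(scale=0.4, size=n),   # salary
    compensation + rng.normal(scale=0.4, size=n),   # benefits
    environment  + rng.normal(scale=0.4, size=n),   # supervisor
    environment  + rng.normal(scale=0.4, size=n),   # working conditions
])

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(ratings)

# Loadings: salary and benefits load on one factor, supervisor and
# working conditions on the other.
print(np.round(fa.components_, 2))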
Steps four and five consider important topics such as predictor and criterion reliability and validity which are addressed in detail in
chapter 5. The aspects of relevance to criterion development are addressed in the discussion that follows.
Predictors and criteria
Managers involved in employment decisions are most concerned about the extent to which performance assessment information will
allow accurate predictions about subsequent job performance. Industrial psychologists are therefore always interested in the predictor
construct they are measuring and in making inferences about the degree to which that construct allows them to predict a theoretical
(conceptual) job performance construct. Criterion-related, construct-related and content-related validity studies (also the topics of
chapter 5) are regarded as three general criterion-related research designs in generating direct empirical evidence to justify that
assessment (predictor) scores relate to valid measures of job performance (criterion measures).
Criterion-related validity refers to the extent to which the test scores (predictor scores) used to make selection decisions are
correlated with measures of job performance (criterion scores) (Schmitt & Fandre, 2008). Criterion-related validity requires that a
predictor be related to an operational or actual criterion measure, and the operational criterion measure should be related to the
performance domain it represents. Performance domains comprise behaviour-outcome units that are valued by an organisation.
Job analysis data provides the evidence and justification that all major behavioural dimensions that result in valued job or organisational
outcomes (such as the number of products sold, or the number of candidates attracted) have been identified and are represented in the
operational criterion measure (Cascio & Aguinis, 2005).
Construct-related validity is an approach in which industrial psychologists gather evidence to support decisions or inferences about
psychological constructs; often starting off with an industrial psychologist demonstrating that a test (the predictor) designed to measure a
particular construct (criterion) correlates with other tests in the predicted manner. Constructs refer to behavioural patterns that underlie
behaviour sampled by the predictor, and in the performance domain, by the criterion. Construct validity represents the integration of
evidence that bears on the interpretation or meaning of test scores – including content and criterion-related evidence – which are
subsumed as part of construct validity (Landy & Conte, 2004). That is, if it can be shown that a test (for example, reading
comprehension) measures a specific construct, such as reading comprehension, that has been determined by a job analysis to be critical
for job performance, then inferences about job performance from test scores are, by logical implication, justified (Cascio & Aguinis,
2005).
As shown in Figure 4.6, job analysis provides the raw material for criterion development since it allows for the identification of the
important demands (for example, tasks and duties) of a job and the human attributes (KSAOs) necessary to carry out those demands
successfully. Once the attributes (for example, abilities) are identified, the test that is chosen or developed to assess those abilities is
called a predictor (the topic of chapter 5), which is used to forecast another variable. Similarly, when the demands of the job are
identified, the definition of an individual worker’s performance in meeting those demands is called a criterion, which is the variable that
the industrial psychologist wants to predict (Landy & Conte, 2004).
Predictors are also regarded as evaluative standards. An example would be performance tests administered before an employment
decision (for example, to hire or to promote) is made. On the other hand, criteria are regarded as the evaluative standards that are
administered after an employment decision has been made (for example, evaluation of performance effectiveness, or the effectiveness of
a training programme or a recruitment campaign) (Cascio & Aguinis, 2005). The line in Figure 4.6 that connects predictors and criteria
represents the hypothesis that people who do better on the predictor will also do better on the criterion, that is, people who score higher
on the test will be more successful on the job. Validity evidence is then gathered to test that hypothesis (Landy & Conte, 2004). In
criterion-related validity studies (discussed in chapter 5) the criterion is the behaviour (the dependent variable) that constitutes or defines
successful performance of a given task. Independent variables such as scores on a test of mental ability are correlated with criterion
measures to demonstrate that those scores are valid predictors of probable job success.
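A minimal sketch of such a criterion-related validity check is given below; the test scores and performance ratings are simulated, and the Pearson correlation between predictor and criterion serves as the validity coefficient.

import numpy as np
from scipy.stats import pearsonr

# Simulated predictor (mental ability test) and criterion (job performance) scores.
rng = np.random.default_rng(1)
n = 120
test_scores = rng.normal(loc=100, scale=15, size=n)                # predictor
performance = 0.03 * test_scores + rng.normal(scale=0.7, size=n)   # criterion

validity_coefficient, p_value = pearsonr(test_scores, performance)
print(round(validity_coefficient, 2), round(p_value, 4))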
In content-related validity studies (discussed in chapter 5), the industrial psychologist establishes logical links between important
task-based characteristics of the job and the assessment used to choose among candidates. In a criterion-related validity study of a
problem-solving test for software consultants, a job analysis may indicate that one of the most common and important tasks of the
consultant is to identify a flaw in a software programme. As a result, the industrial psychologist may then develop a measure of the
extent to which the consultant consistently identifies the flaw without asking for assistance. This measure may be in the form of a rating
scale of ‘troubleshooting’ that would be completed by the consultant’s supervisor. The industrial psychologist would then have both the
predictor score and a criterion score for the calculation of a validity coefficient (Landy & Conte, 2004).
Criterion-related, construct-related and content-related validity studies generally treat validity as situational to the particular
organisation owing to the problem of obtaining large enough samples to allow validity generalisations to some larger population. In
practical terms, this means that validity needs to be established in every new setting. Therefore, apart from these three criterion-related
research designs, researchers are also using synthetic validation as a methodology that allows validities to be used across organisations
and jobs.
Synthetic validity is an approach whereby validity of tests for a given job is estimated by examining the validity of these tests in
predicting different aspects (components) of the particular job present in other occupations or organisations, instead of making global
assessments of job performance. These estimates are combined to calculate the validity of a new battery of instruments (tests) for the
target job (Schmitt & Fandre, 2008). Because it uses job components and not jobs, synthetic validity allows the researcher to build up
the sample sizes to the levels necessary to conduct validity studies. For example, a particular job with component ‘A’ may have only
five incumbents, but component ‘A’ may be a part of the job for 90 incumbents in an organisation. In most cases, the sample sizes will
be dramatically larger when using job components. The synthetic validity approach is often used when criterion data are not available. In
this case, synthetic validity can be very useful for developing predictor batteries for recently-created jobs or for jobs that have not yet
been created (Scherbaum, 2005).
Synthetic validity is a flexible approach that helps to meet the changing prediction needs of organisations, especially in situations
when no jobs exist, when a job is redesigned, or when jobs are rapidly changing. Its techniques rest on two primary assumptions. Firstly,
when jobs have a component in common, the human attribute(s) (such as cognitive, perceptual and psychomotor abilities) required for
performing that component will be the same across jobs. That is, the attributes needed to perform the job component do not vary
between jobs. Therefore, a test for a particular attribute can be used with any job containing the component that requires the particular
attribute. Secondly, the validity of a predictor for a particular job component is fairly constant across jobs. The important components of
a particular job or job family are determined by means of a structured job analysis. The job analysis should allow across-job
comparisons and accurately describe the job components at a level of detail that facilitates the identification of predictors to assess the job
components. Predictors are typically selected using the judgements of SMEs or previous research that demonstrates a predictor test is
related to the job component. However, new predictors can also be developed (Scherbaum, 2005).
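The deliberately simplified Python sketch below captures only the basic idea: validities estimated for job components in other jobs are reused for a new job that contains those components. The component names, validity values and the simple averaging rule are illustrative assumptions; formal synthetic validation uses more elaborate combination procedures.

# Simplified illustration of the synthetic validity idea (invented values).
component_validities = {          # validities estimated elsewhere, per job component
    "numerical_work":   0.40,
    "customer_contact": 0.25,
    "troubleshooting":  0.35,
}

new_job_components = ["numerical_work", "troubleshooting"]

# Combine component validities (here by a simple average) to estimate the
# validity of a test battery for the new job.
estimated_validity = sum(component_validities[c] for c in new_job_components) / len(new_job_components)
print(round(estimated_validity, 3))   # 0.375 for this hypothetical battery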
Figure 4.6 The link between predictors and criteria
Composite criterion versus multiple criteria
Since job performance is generally regarded as being multi-dimensional in nature, industrial psychologists often have to consider
whether to combine various criterion measures into a composite score, or whether to treat each criterion measure separately (Cascio &
Aguinis, 2005).
Using a composite criterion relates to the assumption that the criterion should represent an economic rather than a behavioural
construct. In practical terms, this means that the criterion should measure the overall contribution of the individual to the organisation in
terms of a rand value. This orientation is generally labelled as the ‘dollar criterion’, since the criterion measures overall efficiency
(quantity, quality, and cost of the finished product) in ‘dollars’ (rands) rather than behavioural or psychological terms by applying cost
accounting concepts and procedures to the individual job behaviours of the employee (Brogden & Taylor, 1950).
A composite criterion is a single index formed from multiple criterion dimensions. Even where the dimensions are treated separately during validation, they are combined into a composite when a decision is required. A quantitative weighting scheme is applied to determine in an objective manner the importance placed on each of the criteria used to form the composite. An organisation may decide
to combine two measures of customer service: one collected from external customers that purchase the products offered by the
organisation, and the other from internal customers, that is, the individuals employed in other units within the same organisation. Giving
these measures equal weights implies that the organisation values both external and internal customer quality. On the other hand, the
organisation may decide to form the composite by giving 70 per cent weight to external customer service and 30 per cent weight to
internal customer service. This decision is likely to affect the validity coefficients between predictors and criteria (Cascio & Aguinis,
2005).
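The arithmetic of the customer-service example can be made explicit with the short sketch below; the scores and the five-point scale are invented.

# Forming a weighted composite criterion from two customer-service measures.
external_customer_service = 4.2   # mean rating from external customers (5-point scale)
internal_customer_service = 3.6   # mean rating from internal customers

equal_composite    = 0.5 * external_customer_service + 0.5 * internal_customer_service
weighted_composite = 0.7 * external_customer_service + 0.3 * internal_customer_service

print(equal_composite, round(weighted_composite, 2))   # 3.9 with equal weights, 4.02 with a 70/30 split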
Reflection 4.7
Scenario A
Division X of Company ABC creates a new job associated with a new manufacturing technology. In order to develop a strategy for
hiring applicants for that job, they ask Division Y of the company, a division that has been using the new technology for several
months, to provide a sample of workers who hold the job title in question to complete a potential screening examination for the job.
Division X then correlates the test scores of these workers with performance ratings. What type of validity design has Division X
chosen? Name one alternative design they might have chosen and describe how it would have satisfied their need.
Scenario B
Owing to the rapidly-changing manufacturing technology in the highly competitive industry, a restructuring exercise has led to the
redesign of jobs in Company ABC. Division X decides to develop new predictors of job performance. What type of validity design
would you recommend to the company? Give reasons for your answer.
Advocates of multiple criteria view the increased understanding of work behaviour in terms of psychological and behavioural rather
than economic constructs as an important goal of the criterion validation process. In the multiple criteria approach, different criterion
variables are not combined since it is assumed that combining measures that are in essence unrelated will result in a composite that is
not only ambiguous, but also psychologically nonsensical (Cascio & Aguinis, 2005). In measuring the effectiveness of recruiters,
Pulakos, Borman and Hough (1988) found that selling skills, human relations skills, and organising skills were all important and related
to success. It was further found that these three behavioural dimensions were unrelated to each other – that is, the recruiter with the best
selling skills did not necessarily have the best human relation skills or the best organising skills (Cascio & Aguinis, 2005).
Cascio and Aguinis (2005) posit that the resolution of the composite criterion versus multiple criteria dilemma essentially depends
on the objectives of the investigator. Both methods are legitimate for their own purposes. If the goal is increased psychological
understanding of predictor-criterion relationships, then the criterion elements are best kept separate. If managerial decision-making is the
objective, then the criterion elements should be weighted, regardless of their inter-correlations, into a composite representing an
economic construct of overall worth to the organisation.
Considerations in criterion development
Since human performance is variable (and potentially unreliable) as a result of various situational factors that influence individuals’
performance, industrial psychologists are often confronted by the challenge of developing performance criteria that are relevant, reliable,
practical, adequate and appropriate for measuring worker behaviour. Industrial psychologists generally refer to the ‘criterion problem’
when pointing out the difficulties involved in the process of conceptualising and measuring performance constructs that are
multi-dimensional, dynamic, and appropriate for different purposes (Cascio & Aguinis, 2005).
As already mentioned, the conceptual criterion is an idea or concept that is not measurable. This creates the situation where the
actual criteria are never exactly the same as the conceptual criteria. In practical terms, this means that there is a certain amount of
criterion distortion, which is described as criterion irrelevance, deficiency, and contamination (Muchinsky et al, 2005). In an ideal
world, industrial psychologists would be able to measure all relevant aspects of job and worker performance perfectly. A collective
measure of all these aspects would be called the ultimate criterion (that is, the ideal measure of all the relevant aspects of performance).
However, since industrial psychologists can never reliably measure all aspects of performance, they generally settle for an actual
criterion (that is, the actual measure of job performance obtained) (Landy & Conte, 2004).
Criteria should be chosen on the basis of validity or work or job relevance (as identified by a job analysis), freedom from
contamination, and reliability, rather than availability. In general, if criteria are chosen to represent work-related activities, behaviours or
outcomes, the results of a job analysis are helpful in criteria construction. If the goal of a given study is the prediction of organisational
criteria such as tenure, absenteeism, or other types of organisation-wide criteria such as, for example, employee satisfaction, an in-depth
job or work analysis is usually not necessary, although an understanding of the job or work and its context is beneficial (SIOPSA, 2005).
Criteria relevance
The usefulness of criteria is evaluated in terms of their judged relevance (that is, whether the criteria are logically related to the
performance domain being measured) (Cascio & Aguinis, 2005). Relevant criteria represent important organisational, team, and
individual outcomes such as work-related behaviours, outputs, attitudes, or performance in training, as indicated by a review of
information about the job or work. Criteria can be measures of overall or task-specific work performance, work behaviours, or work
outcomes. These may, for example, include criteria such as behavioural and performance ratings, success in work-relevant training,
turnover, contextual performance (organisational citizenship), or rate of advancement. Regardless of the measures used as criteria,
industrial psychologists must ensure their relevance to work or the job (SIOPSA, 2005).
Criterion deficiency and contamination
Criterion deficiency and contamination reduce the usefulness and relevance of criteria. Criterion deficiency occurs when an actual
criterion is missing information that is part of the behaviour one is trying to measure, that is, the criterion falls short of measuring job
performance or behaviour perfectly. In practical terms, criterion deficiency refers to the extent to which the actual criterion fails to
overlap the conceptual criterion (Landy & Conte, 2004; Riggio, 2009). For example, if an industrial psychologist considers the
performance of a police officer to be defined exclusively by the number of criminals apprehended, ignoring many other important
aspects of the police officer’s job, then that statistic would be considered a deficient criterion (Landy & Conte, 2004). An important goal
of performance measures is to choose criteria that optimise the assessment of job success, thereby keeping criterion deficiency to a
minimum.
Figure 4.7 Criterion distortion described in terms of criteria irrelevance, deficiency and contamination
Similarly, criterion contamination occurs when an actual or operational criterion includes information (variance) unrelated to the
behaviour (the ultimate criterion) one is trying to measure. Criterion contamination can result from extraneous factors that contribute to a
worker’s apparent success or failure in a job. For instance, a sales manager may receive a poor performance appraisal because of low
sales levels, even though the poor sales actually result from the fact that the manager supervises a group of young, inexperienced sales
people (Riggio, 2009).
Gathering criteria measures with no checks on their relevance or worth before use often leads to contamination (Cascio & Aguinis,
2005). For example, if a production figure for an individual worker is affected by the technology or the condition of the particular
machine which that worker is using, then that production figure (the criterion) is considered contaminated. Similarly, a classic validity
study may test cognitive ability (the predictor) by correlating it with a measure of job performance (the actual criterion) to see if higher
scores on the test are associated with higher levels of performance. The differences between the ultimate criterion and the actual
criterion represent imperfections in measurement – contamination and deficiency (Landy & Conte, 2004).
Criterion bias
A common source of criterion contamination stems from rater or appraiser biases. Criterion bias is a systematic error resulting from
criterion contamination or deficiency that differentially affects the criterion performance of different subgroups (SIOPSA, 2005). For
example, a supervisor may give an employee an overly-positive performance appraisal because the employee has a reputation of past
work success or because the employee was a graduate of a prestigious university (Riggio, 2009). Similarly, a difference in criterion
scores of older and younger workers or day- and night-shift workers could reflect bias in raters or differences in equipment or
conditions, or the difference may reflect genuine differences in performance. Industrial psychologists must at all times consider the
possibility of criterion bias and attempt to protect against bias insofar as is feasible, and use professional judgement when evaluating
data (SIOPSA, 2005).
Prior knowledge of or exposure to predictor scores are often among the most serious contaminants of criterion data. For example, if
an employee’s supervisor has access to the prediction of the individual’s future potential by the industrial psychologist conducting the
assessment, and if at a later date the supervisor is asked to rate the individual’s performance, the supervisor’s prior exposure to the
assessment prediction is likely to bias this rating. If the supervisor views the employee as a rival, dislikes him or her for that reason, and
wants to impede his or her progress, the prior knowledge of the assessment report could serve as a stimulus for a lower rating than is
deserved. On the other hand, if the supervisor is especially fond of the employee and the assessment report identifies the individual to be
a high-potential candidate, the supervisor may rate the employee as a high-potential individual (Cascio & Aguinis, 2005).
The rule of thumb is to keep predictor information away from those who must provide criterion data. Cascio and Aguinis (2005)
suggest that the best way to guard against predictor bias is to obtain all criterion data before any predictor data are released. By
shielding predictor information from those who have responsibility for making, for example, promotion or hiring decisions, much
‘purer’ validity estimates can be obtained.
Ratings as criteria
Although ratings are regarded as the most commonly used and generally appropriate measures of performance, rating errors often reduce
the reliability and validity or accuracy of criterion outcomes. The development of rating factors is ordinarily guided by job analysis
when raters (supervisors, peers, individual self, clients or others) are expected to evaluate several different aspects of a worker’s
performance (SIOPSA, 2005). Since the intent of a rating system is to collect accurate estimations of an individual’s performance,
industrial psychologists build in appropriate structural characteristics (that is, dimension definitions, behavioural anchors, and scoring
schemes) that are based on a job or work analysis in the measurement scales they use to assist in the gathering of those accurate
estimations. However, irrespective of these measures, raters do not always provide such accurate estimates, which leads to rating errors.
Rating errors are regarded as the inaccuracies in ratings that may be actual errors or intentional or systematic distortions. In practical
terms, this means that the ratings do not represent completely ‘true’ estimates of performance. Some of the most common inaccuracies
or errors identified by industrial psychologists include central tendency error, leniency–severity error, and halo error (Landy & Conte,
2004).
Central tendency error occurs when raters choose a middle point on the scale as a way of describing performance, even though a
more extreme point may better describe the employee.
Leniency–severity error is a distortion which is the result of raters who are unusually easy (leniency error) or unusually harsh
(severity error) in their assignment of ratings. The easy rater gives ratings higher than the employee deserves, while the harsh rater gives
ratings lower than the employee deserves. In part, these errors are usually the result of behavioural anchors that permit the rater to
impose idiosyncratic meanings on words like ‘average’, ‘outstanding’, and ‘below average’. The problem is that the rater can feel free to
use a personal average rather than one that would be shared with other raters. One safeguard against this type of distortion is to use
well-defined behavioural anchors for the rating scales.
Halo error occurs when a rater assigns the same rating to an employee on a series of dimensions, creating a halo or aura that
surrounds all of the ratings, causing them to be similar (Landy & Conte, 2004).
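As a rough illustration (not a standard diagnostic procedure), the Python sketch below uses invented ratings to screen for two of these errors: a rater whose mean rating lies well above the other raters' means suggests leniency, and a rater whose ratings show almost no spread across dimensions within each employee suggests halo.

import numpy as np

# ratings[rater, employee, dimension] on a 1-5 scale (invented data).
rng = np.random.default_rng(2)
ratings = rng.integers(1, 6, size=(4, 10, 5)).astype(float)
ratings[0] = np.clip(ratings[0] + 1, 1, 5)                     # rater 0 made artificially lenient
ratings[1] = ratings[1].mean(axis=1, keepdims=True).round()    # rater 1 gives halo-like ratings

rater_means = ratings.mean(axis=(1, 2))
leniency    = rater_means - rater_means.mean()   # large positive value suggests a lenient rater
halo_spread = ratings.std(axis=2).mean(axis=1)   # value near zero suggests halo

print(np.round(leniency, 2))
print(np.round(halo_spread, 2))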
Since it is generally assumed that rater distortions are unintentional, and that raters are unaware of influences that distort their
ratings, it is essential that raters should be sufficiently familiar with the relevant demands of the work and job as well as the individual to
be rated to effectively evaluate performance. Since valid inferences are generally supported by knowledgeable raters, rater training in the
observation and evaluation of performance is recommended (SIOPSA, 2005).
Finally, it is important that management be informed thoroughly of the real benefits of using carefully-developed criteria when
making employment decisions. Criterion measurement should be kept practical and should ultimately contribute to the organisation’s
primary goals of profit, growth and service while furthering the goal of building a theory of work behaviour. Management may or may
not have the expertise to appraise the soundness of a criterion measure or a series of criteria measures. However, objections will almost
certainly arise if record-keeping and data collection for criteria measures become impractical and interfere significantly with ongoing
operations (Cascio & Aguinis, 2005).
CHAPTER SUMMARY
This chapter reviewed job analysis and criterion development as important aspects of the employment process. Job descriptions, job and
person specifications, job evaluation, and performance criteria are all products of job analysis that form the cornerstone in measuring
and evaluating employee performance and job success. Job analysis products are invaluable tools in the recruitment, selection, job and
performance evaluation, training and development, reward and remuneration and retention of employees. They also provide the answer
to many questions posed by employment equity legislation, such as whether a certain requirement stated in a vacancy advertisement is
really an inherent requirement of a job, and whether a psychological test should be used (and, if so, which test) in making fair staffing decisions. Various methods and techniques are used in the job analysis process. The South African OFO provides additional
valuable information that can be used as an input to the job analysis process.
Job analysis further provides the raw material for criterion development. Since criteria are used to make decisions on employee
behaviour and performance, industrial psychologists should be well trained in how to determine measures of actual criteria and how to
eliminate criterion distortion.
Review questions
You may wish to attempt the following as practice examination-style questions.
1. What is the purpose of job analysis in the employment process?
2. How does job analysis relate to task performance? How does task performance differ from the contextual performance of workers?
3. What is job analysis? How can managers and industrial psychologists make use of the information it provides?
4. How does the OFO inform the job analysis process?
5. What items are typically included in a job description?
6. Discuss the various methods and techniques for collecting job analysis data. Compare these methods and techniques, explain what
each is useful for, and list the advantages and disadvantages.
7. Discuss the issue of reliability and validity in job analysis.
8. Explain the steps you would follow in conducting a job analysis.
9. Why are traditional job analysis approaches shifting to an occupation-specific skills-based approach?
10. Differentiate between job analysis and competency modelling. Which approach would be more appropriate in analysing managerial
jobs? Give reasons for your answer.
11. Explain the concept of criterion development and the aspects that need to be considered by industrial psychologists in developing
criteria.
12. How do synthetic validity studies differ from criterion-related validity research designs? Which of these designs would be more
appropriate for an occupation-specific skills-based job analysis approach? Give reasons for your answer.
13. Explain the concept of criterion distortion.
14. What is the link between predictors and criteria? Draw a diagram to illustrate the link.
Multiple-choice questions
1. Which type of job analysis questionnaire is geared towards the work that is performed on the job?
a. Worker-orientated
b. Job-orientated
c. Performance-orientated
d. Activity-orientated
2. The abbreviation SME refers to a:
a. Structured questionnaire method of job analysis
b. Logbook method of job analysis
c. Job position being evaluated for compensation purposes
d. Job expert
3. Job analysis includes collecting data describing all of the following EXCEPT:
a. What is accomplished on the job
b. Technology used by the employees
c. Each employee’s level of job performance
d. The physical job environment
4. ‘Friendliness’ is one dimension on which Misha’s Café evaluates its waitresses and waiters. Customers’ rating forms are used to
assess levels of friendliness. In this case ‘friendliness’ is a(n):
a. Conceptual criterion
b. Structured criterion
c. Multiple criterion
d. Actual criterion
5. An industrial psychologist decides to use a ‘multiple uses test’ to assess creativity. In this test, individuals who identify the most uses
for a paper clip, for example, are considered to be more creative than their colleagues. Creativity represents the … criterion, while the
multiple uses test represents the … criterion.
a. Conceptual; actual
b. Actual; conceptual
c. Structured; multiple
d. Structured; actual
6. Criterion deficiency refers to the extent to which the actual criterion:
a. And the conceptual criterion coincide
b. Fails to overlap with the conceptual criterion
c. Measures something other than the conceptual criterion
d. Is influenced by random error
7. To assess a job applicant’s citizenship, the Modimele Corporation counts the number of community activities in which the individual
is involved. However, it fails to assess the quality of participation, which may be even more important. This is an example of:
a. Criterion contamination
b. Criterion irrelevance
c. Criterion deficiency
d. Criterion bias
Reflection 4.8
A large engineering company decided to expand its housing project design division. In the process, several new jobs were created for
which new job descriptions had to be compiled. However, owing to their concerns in addressing the skills shortages in the industry,
they decided to follow a skills-based approach in their job analysis.
Draw a flow diagram that shows each step you would take in conducting the job analysis. The flow diagram should include the
types of data you will gather, the methods you would use in collecting job analysis information, the order in which you would use
these methods, from whom you will gather the information, and what you will do with those data in making your recommendations.
Explain also why competency modelling would not be appropriate in this scenario.
Reflection 4.9
A private tourism company has decided to advertise a vacancy for a chief operating officer in a national newspaper. Some of the
aspects they decided to include in the advertisement are provided below. Study these requirements, and answer the questions that
follow.
Excerpt from advertisement
We are looking for an experienced individual to take up the position of Chief Operating Officer (COO). The successful candidate will
report to the Chief Executive Officer (CEO).
Duties and responsibilities
The successful candidate will be responsible for overseeing the day-to-day operations of the organisation and working with the CEO
to keep head office focused on its objectives. The incumbent will also be responsible for positioning the organisation as the source of
useful tourism business intelligence.
Key Performance Areas (KPAs)
• Set up and manage the organisation’s tourism knowledge facility programme
• Produce demand-driven and supply-side tourism business and economic intelligence
• Oversee the company’s focus on strategic projects and ability to leverage funds and programmes
• Oversee the day-to-day operation and implementation of programmes of the organisation and relationships that contribute
to organisational priorities
• Manage industry alignment and co-operation agreements and processes
• Act as Company Secretary and assist the CEO in overall management and delivery of the company mandate to its
constituencies.
Required skills and attributes
• At least 8 years’ senior management experience
• Must have at least a degree and a suitable post-degree qualification
• Experience in the travel and tourism sector preferable but not essential
• Experience in research and analysis essential, together with good writing skills
• Must have strong interpersonal skills and leadership qualities, be energetic, be able to work in a team, and meet targets and deadlines.
Questions
1. Review the skills and attributes outlined in the advertisement. Draw up a list with the headings: Knowledge, Skills, Abilities, and
Other characteristics (KSAOs). Now list the KSAOs under the appropriate KSAO headings. Do you think the advertisement covers
sufficient information regarding the job? If not, what type of information should have been added? Give reasons for your answer.
2. Do you think the advertisement complies with employment equity requirements? Give reasons for your answer.
3. Review the KPAs outlined in the advertisement. See if you can identify the KSAOs required to perform successfully in the key
performance areas. Add these to your KSAO list.
4. Explain the use of job analysis information in compiling an advertisement such as the one provided here.
CHAPTER 6
RECRUITMENT AND SELECTION
CHAPTER OUTLINE
CHAPTER OVERVIEW
• Learning outcomes
CHAPTER INTRODUCTION
RECRUITMENT
• Sources of recruitment
• Applicant attraction
• Recruitment methods
• Recruitment planning
CANDIDATE SCREENING
• Importance of careful screening
• Short-listing candidates
• Evaluating written materials
• Managing applicant reactions and perceptions
CONSIDERATIONS IN EMPLOYEE SELECTION
• Reliability
• Validity
• Utility
• Fairness and legal considerations
MAKING SELECTION DECISIONS
• Strategies for combining job applicant information
• Methods for combining scores
• Placement
• Selection and affirmative action
FAIRNESS IN PERSONNEL SELECTION
• Defining fairness
• Fairness and bias
• Adverse impact
• Fairness and culture
• Measures of test bias
• The quest for culture-fair tests
• Legal framework
• Models of test fairness
• How to ensure fairness
CHAPTER SUMMARY
REVIEW QUESTIONS
MULTIPLE-CHOICE QUESTIONS
CHAPTER OVERVIEW
This chapter builds on the theory and concepts discussed in the previous chapters. A useful study hint is to review those chapters before working through this one.
The recruitment and selection of competent people are crucial elements of the employment process. Recruitment and selection provide the
means to resource, or staff, the organisation with the human capital it needs to sustain its competitive edge in the broader business
environment. This chapter reviews the theory and practice of recruitment and selection from the perspective of personnel psychology. The
application of psychometric standards in the decision-making process is discussed, including the aspects of employment equity and fairness in
the recruitment and selection process.
Learning outcomes
When you have finished studying this chapter, you should be able to:
1. Differentiate between the concepts of recruitment, screening and selection.
2. Discuss the sources and methods of recruitment.
3. Explain the importance of managing applicant reactions and perceptions in the recruitment, screening and selection process, and
suggest strategies for dealing with applicant reactions and perceptions.
4. Explain the role of job analysis in recruitment planning and selection.
5. Discuss the recruitment planning process and the techniques that can be applied to enhance the quality of recruitment strategies.
6. Discuss the screening process.
7. Discuss the decision framework for making selection decisions and the various techniques that can be applied to ensure reliable, valid,
fair, and useful (quality) decisions.
8. Explain and illustrate the relationship between predictor and criterion scores in determining the validity and fairness of selection
decisions.
9. Explain the concept of prediction errors and how they influence selection decisions.
10. Determine the utility of a selection device.
11. Discuss strategies for combining job applicant information.
12. Differentiate between selection and placement.
13. Discuss employment equity and affirmative considerations in recruitment, screening and selection.
14. Discuss the issues of fairness, adverse impact and bias in selection decision-making.
CHAPTER INTRODUCTION
Recruitment and selection provide the conduit for staffing and resourcing the organisation. As discussed in chapter 4, job analysis is
the basic foundation of personnel psychology and a cornerstone in the recruitment and selection of people, since it enables industrial
psychologists, human resource professionals, and managers to obtain a complete and accurate picture of a job, including the important
tasks and duties, and the knowledge, skills, abilities and other desired characteristics (KSAOs) needed to perform the job. Recruitment
is about the optimal fit between the person and the organisation as well as finding the best fit between the job requirements (as
determined by the job analysis process) and the applicants available. If both of these aspects are achieved, it is believed to lead to
increased job satisfaction, enhanced job performance, and potentially the retention of talent (the topic of chapter 7). Selection involves
the steps, devices and methods by which the sourced candidates are screened for choosing the most suitable person for vacant
positions in the organisation. This includes using selection devices and methods that tie in directly with the results of a job analysis.
The importance of recruiting and selecting the right people, including people with scarce and critical skills, and of being seen as an ‘employer of choice’, has been heightened by the increasingly competitive and globalised business environment and the demand for quality and customer service (Porter et al, 2008). Therefore, recruitment and selection are increasingly regarded as critical human resource
functions for organisational success and survival. Moreover, the systematic attraction, selection and retention of competent and
experienced scarce and critical skills have become core elements of competitive strategy, and an essential part of an organisation’s
strategic capability for adapting to competition (McCormack & Scholarios, 2009).
In view of the foregoing, a company’s recruitment and selection processes are seen as enablers of important human resource
outcomes, including the competency of employees, their commitment, employee-organisation fit (that is, congruence between the
employee and the organisation’s goals), and the cost-effectiveness of human resource policies and practices (Porter et al, 2008).
Recruitment and selection processes are also increasingly recognised as critical components in successful change management and
organisational transformation, since they provide a means of obtaining employees with a new attitude, as well as new skills and
abilities (Iles & Salaman, 1995). Moreover, companies that are able to employ better people with better human resource processes
(such as job analysis, recruitment and selection processes) have been found to have an increased capacity in sustaining their
competitive advantage (Boxall & Purcell, 2003).
Generally, recruitment provides the candidates for the selector to judge. Although selection techniques cannot overcome failures in
recruitment, they make them evident (Marchington & Wilkinson, 2008). Therefore, industrial psychologists apply the methodological
principles and techniques discussed in chapters 2, 3, 4 and 5 in enabling managers to make high-quality decisions in the recruitment
and selection of people. Effective employee screening, testing, and selection require grounding in research and measurement issues,
particularly reliability and validity (the topic of chapters 2 and 5). Further, much of the strength or weakness of any particular
employment method or process is determined by its ability to predict important work outcomes such as job performance. As discussed
in chapters 4 and 5, the ability to predict future employee performance accurately and in a scientific, valid, fair and legally defensible
manner from the results of employment tests or from other employee screening procedures is regarded as critical to the profession of
personnel psychology (Riggio, 2009).
The function of industrial psychologists in the recruitment and selection of people concerns their involvement in staffing practices,
which include decisions associated with recruiting, selecting, promoting, retaining, and separating employees (Landy & Conte, 2004).
Industrial psychologists generally follow the so-called psychometric approach to recruitment and selection, which focuses on the
measurement of individual differences. The psychometric approach is based on scientific rationality and its systematic application by
industrial psychologists to candidate selection in particular. To enable this to be put into practice, psychometric standards such as
reliability, validity, and fairness are applied in the decision-making process, which generally follows a systematic, logical or sequential
order (Porter et al, 2008; McCormack & Scholarios, 2009). In addition, when evaluating the outcomes of selection decisions, the
practical utility and cost-effectiveness of recruitment and selection methods and procedures are also of major concern for industrial
psychologists and managers alike.
RECRUITMENT
Recruitment is concerned with identifying and attracting suitable candidates. Barber (1998:5) describes recruitment as practices and
activities carried out by the organisation with the primary purpose of identifying and attracting potential employees. The recruitment
process focuses on attracting a large number of people with the right qualifications, knowledge, skills, abilities, and other desired
characteristics (KSAOs) or competencies (as determined by the job analysis process), to apply for a vacancy (Schreuder et al, 2006). On
the other hand, selection represents the final stage of decision-making by applying scientific procedures in a systematic manner in
choosing the most suitable candidate from a pool of candidates.
Sources of recruitment
As shown in Table 6.1, sources of recruitment are broadly categorised as internal and external. Internal sources of recruitment are
focused on the organisation’s internal labour market (that is, the people already in the employ of the organisation) as a means of filling
vacancies. On the other hand, external sources are people outside the organisation, who may include full-time, subcontracted,
outsourced, or temporary staff. In South Africa, temporary staffing has increased in popularity, even at executive levels. Temporary
assignments appear to appeal to a fair proportion of the younger generation (generation Y or new millennium). Furthermore, they seem
to prefer a more diverse and flexible work environment. It is expected that temporary staffing will continue to increase. In addition, temporary appointments allow the company to evaluate a potential candidate and the position itself before the appointment is made permanent. Temporary workers are also
useful in organisations where there are seasonal fluctuations and workload changes. They also allow companies to save on the costs of
permanent employees (O’Callaghan, 2008).
Table 6.1 Sources of internal and external recruitment (Schreuder et al, 2006:53, 54)
Internal sources of labour
• Skills inventories and career development systems: A skills inventory is a record system listing employees with specific skills. Career development systems develop the skills and knowledge of employees to prepare them for a career path. Skills inventories and career development systems are quick ways of identifying candidates.
• Job posting: Information about vacancies is placed on notice boards or in information bulletins. Details of the job are provided and employees may apply. This source enhances the possibility that the best candidate will apply, but it can also cause employees to hop from job to job. In addition, because of a lack of applications, the position may be vacant for a long time.
• Inside ‘moonlighting’ or contracting: In the case of a short-term need or a small job project that does not involve a great deal of additional work, the organisation can offer to pay bonuses for employees to do the work. Employees who perform well could be identified, and this could promote multi-skilling.
• Supervisor recommendations: Supervisors know their staff and can nominate employees for a specific job. Supervisors are normally in a good position to know the strengths and weaknesses of their employees, but their opinions could be subjective and liable to bias and discrimination.
External sources of labour
• Employment agencies: Agencies recruit suitable candidates on the instructions of the organisation. Once candidates have been identified, either the organisation or the agency could do the selection.
• Walk-ins: This occurs when a prospective employee applies directly to the organisation in the hope that a vacancy exists, without responding to an advertisement or recommendation by someone.
• Referrals: This is an inexpensive and quick resource. Current employees refer or recommend candidates from outside the organisation for a specific vacancy. Current employees are not likely to recommend someone who is unsuitable, because this may reflect negatively on themselves.
• Professional bodies: People who attend conventions have opportunities to network. Furthermore, advertisements for vacancies can be placed in the publications of professional bodies.
• Head-hunting: This source refers to the recruitment of top professional people through specialised agencies. These candidates are approached personally with a job offer, or an advertisement can be drawn up specifically with that candidate’s qualifications, experience, and skills in mind.
• Educational institutions: This source can provide an opportunity to approach or invite the best candidates to apply for vacancies or entry-level positions.
• ESSA: Employment Services SA is a national integrated labour market data management system that keeps accurate records of skills profiles, scarce and critical skills, and placement opportunities. (See chapter 3 for more detail.)
Deciding on the source of recruitment depends to a large extent on the expected employment relationship. Managers often apply a
rational decision choice here with respect to the level of human capital required to perform the job. For example, jobs which require high
skills and knowledge unique to the organisation (hence implying greater investment in training) are better managed as internal
promotions or transfers, as these have implications for building a committed workforce. On the other hand, jobs which do not require
costly training and which can be performed at a lower skill level can be externalised with a view to a shorter-term employment relationship (Redman & Wilkinson, 2009).
According to Schreuder et al (2006), using internal sources may increase morale when employees know that promotion or other
forms of intra-organisational career mobility opportunities are available. Research by Zottoli and Wanous (2000) indicates, for example,
that employees recruited through inside sources stayed with the organisation longer (higher tenure) and performed better than employees
recruited through outside sources. While the use of internal sources is regarded as a faster and less expensive way of filling a vacant
position, external sources provide a larger pool of potential candidates, and new employees may bring fresh ideas and approaches into
the organisation. However, it is also important to recognise that there are alternatives to recruitment. For example, it may be possible to
reorganise the job, reallocate tasks, and even eliminate the job through automation, rather than replace an individual who has left the
organisation (Robinson, 2006).
Reflection 6.1
Review Table 6.1, Sources of internal and external recruitment. Study the situations below and indicate which recruitment sources
would be the most appropriate to use. Give reasons for your answer.
1. A private college for higher education programmes experiences severe staff shortages during the registration period.
2. A doctor’s receptionist is going on leave and the doctor has to find a replacement for five months.
3. An engineering technician has to take early retirement for medical reasons and the section where he works cannot function without
someone with his skills and knowledge.
4. The CEO of a company is retiring in three years’ time and his successor has to be appointed.
5. The financial manager of a large multinational has given three months’ notice of her resignation.
6. The till supervisor in a supermarket has to be replaced.
7. The organisation is looking for a replacement for the messenger, who has resigned.
8. A bank which has traditionally filled positions by using employee referrals, because they have found that the most trustworthy
employees can be recruited this way has decided that from now on all vacancies must be filled with candidates from previously
disadvantaged groups.
9. An IT company is looking for a professionally qualified industrial psychologist to assist with the selection of a large pool of IT technicians who applied for an advertised vacancy.
10. A large manufacturing company wants to recruit young, talented graduates as part of their learnership strategy.
(Schreuder et al, 2006:54)
Applicant attraction
An important recent development is the greater attention devoted to the so-called ‘attraction element’ (Searle, 2003) or ‘applicant
perspective’ (Billsberry, 2007), which focuses on how people become applicants, including how potential applicants perceive and act on
the opportunities offered. Applicant attraction to organisations implies getting applicants to view the organisation as a positive place to
work. It further acknowledges a two-way relationship between organisations and applicants, where the applicant’s perception of
decision-making becomes an important factor shaping whether the recruitment process is successful or not (Redman & Wilkinson,
2009).
In view of the foregoing, it is important that recruitment activities and methods enhance potential applicants’ interest in and
attraction to the organisation as an employer; and increase the probability that they will accept a job offer (Saks, 2005). The global
challenge of attracting, developing, and retaining scarce and critical skills in times of skill shortages and the search for more highly
engaged and committed employees both place greater importance on the perceptions of potential and actual applicants. In some
situations, companies will have to become smarter in terms of the methods they use to attract qualified and competent applicants,
maintain their interest in the company, and convince them that they should accept an offer of employment (Redman & Wilkinson, 2009).
Recruitment sources generally have only a slight effect on tenure of future employees. Moreover, in their efforts to attract applicants,
companies may at times ‘oversell’ a particular job or their organisation. Advertisements may state that ‘this is a great place to work’, or
that the position is ‘challenging’ and offers ‘tremendous potential for advancement’. This is not a problem if such statements are true,
but if the job and the organisation are presented in a misleading, overly positive manner, the strategy will eventually backfire. New
employees will quickly discover that they were fooled and may look for work elsewhere or become dissatisfied and demotivated
(Riggio, 2009). One method of counteracting potential misperceptions and to attract potential candidates is the realistic job preview
(RJP).
RJPs can take the form of an oral presentation from a recruiter, supervisor, or job incumbent, a visit to the job site, or a discussion in
a brochure, manual, video, or company web site. However, research indicates that face-to-face RJPs may be more effective than written
ones (Saks & Cronshaw, 1990). RJPs provide an accurate description of the duties and responsibilities of a particular job and give an
applicant an honest assessment of a job (Aamodt, 2010). For example, instead of telling the applicant how much fun she will have
working in a call centre environment, the recruiter honestly tells her that although the pay is well above average, the work is often
boring and strenuous, with long working hours and limited career advancement opportunity. Although telling the truth may scare away
many applicants, especially the better qualified ones, the ones who stay will not be surprised about the job, since they know what to
expect. Research has shown that informed applicants will tend to stay on the job longer than applicants who did not understand the
nature of the job. Moreover, RJPs are regarded as important in increasing job commitment and satisfaction and in decreasing initial turnover of new employees (Hom, Griffeth, Palich & Bracker, 1998). Taylor (1994) found, for example, that an RJP at a long-distance
trucking company increased job satisfaction and reduced annual turnover from 207 per cent to 150 per cent.
Another important aspect that needs to be considered in attracting potential candidates is to avoid intentional or unintentional
discrimination. In terms of the Employment Equity Act 55 of 1998 (EEA) (discussed in chapter 3), employment discrimination against
the designated groups specified in the Act, intentional or unintentional, is illegal. In order to avoid unintentional discrimination,
employers should take steps to attract applicants from the specified designated groups in proportion to their numbers in the population
from which the company’s workforce is drawn. Furthermore, in terms of affirmative action, preference should be given to candidates
from the historically disadvantaged groups, such as blacks, Asians, and coloureds.
Reflection 6.2
Study the two advertisements below and then answer the questions that follow.
Advertisement 1
Telesales Executive
Classified publishing group is expanding and therefore has a unique and exciting opportunity for a dynamic Telesales Executive.
The successful candidate must have:
• Excellent communication, verbal and written, coupled with strong people skills
• Strong sales and negotiation skills with a ‘can do’ attitude
• Target- and results-driven, hardworking professional
• Must enjoy working under pressure; be entrepreneurial, with no need for supervision
• Identify sales opportunities while developing, maintaining, and monitoring relations with customers to ensure world class customer satisfaction
• His or her challenge will be to create a positive sales environment and develop and grow their area
• Publishing experience an advantage but not necessary.
In return we offer:
• The opportunity to become part of a progressive publishing company
• Full training and support
• An opportunity to earn a good basic salary, commission and performance bonuses
• Benefits – normal fringe benefits in line with a well-established company.
If you meet our criteria, we look forward to receiving your 2-page CV with a motivating covering letter to …
Only short-listed candidates and candidates who meet the minimum requirements will be contacted.
Advertisement 2
Company ABC is looking for a Field Sales Executive.
A fantastic opportunity exists with the leading classified publication in the province for a Field Sales Executive. The ideal candidate must be a target-driven, dynamic go-getter and be able to handle the pressures of deadlines. This person will inherit a client base that offers a current earning potential of approximately R20 000 per month and will be required to grow this business.
The successful applicant must meet the following criteria:
• Minimum 3 years’ sales experience
• A proven sales track record
• Traceable references
• Own reliable car
• Cellphone
• Previous advertising sales experience would be an advantage.
If you fit the profile and feel that you have what it takes, please send your 2-page CV by e-mail to …
Only short-listed candidates and candidates who meet the minimum requirements will be contacted.
Questions
• Consider the needs of the older generation workforce compared with a younger generation applicant. Review the two advertisements and decide which advertisement will attract a generation Y candidate. Give reasons for your answer.
• Which advertisement shows elements of ‘overselling’, if any?
• Would you recommend a Realistic Job Preview for both companies? Give reasons for your answer.
Recruitment methods
Recruitment methods are the ways in which organisations make it known that they have vacancies. According to Redman and Wilkinson (2009), there is no ‘best practice’ recruitment approach, although compliance with employment equity legislation is generally a requirement. The type of recruitment method chosen by an organisation will depend on the type of vacancy and the
organisation concerned. For example, the methods used by a large well-known organisation seeking to recruit a managing director are
likely to be different from those of a small business seeking an assistant to help out on a specific day of the week. Moreover, the
challenges posed by the employment context discussed in chapter 3 have led to companies using more creative solutions, targeting
diverse applicant groups and using Internet channels to communicate with potential applicants alongside traditional methods, such as
advertising, employment agencies, and personal contact. The challenges faced by South African organisations (as discussed in chapter 3)
will require of them flexible and innovative recruitment strategies and methods to find and retain scarce and critical skills and contribute
to building skills capacity in South Africa.
Furthermore, it is suggested that recruitment methods that work for the older generation workforce (the so-called baby boomers) will
have no appeal for the younger generation (the so-called generation Y or new millennium person). For example, a generation Y or new
millennium candidate will probably find a job through a friend of a friend on Facebook, whereas the baby boomer will diligently look
through a newspaper such as The Sunday Times or the Mail & Guardian to see what is available. Younger generations will also be
attracted to advertisements and job information that sell diversity, individual growth, and career mobility opportunity, as opposed to
older generations, who will be attracted to job content, titles and security (O’Callaghan, 2008).
Most recruitment methods can be classified as open search techniques. Two examples of open searches are advertisements and
e-recruitment.
Advertisements
The method most commonly used is the advertisement. When using advertisements as a method of attracting potential candidates, two
issues need to be addressed: the media and the design of the advertisement. The selection of the best medium (be it a local paper – for
example, the Pretoria News – or a national one, such as The Sunday Times, Mail & Guardian, JobMail or The Sowetan, the Internet, or a
technical journal) depends on the type of positions for which the company is recruiting. Local newspapers are usually a good source of
blue-collar candidates, clerical employees, and lower-level administrative people, since the company will be drawing from a local
market. Trade and professional journals are generally more appropriate to attract specialised employees and professionals. One
drawback to print advertising is that there may be a week or more between insertion of the advertisement and the publication of the
journal. Employers tend to turn to the Internet for faster turnaround (Dessler, 2009).
Experienced advertisers use a four-point guide labelled AIDA for the effective design of an advertisement (Dessler, 2009; Schreuder
et al, 2006):
• A = attention. The advertisement should attract the attention of job seekers. Employers usually advertise key positions in
separate display advertisements to attract attention.
• I = interest. The advertisement should be interesting and stimulate the job seeker to read it. Interest can be created by the
nature of the job itself, with lines such as ‘Are you looking to make an impact?’ One can also use other aspects of the job,
such as its location, to create interest.
• D = desire: The advertisement should create the desire to work for the company. This can be done by spotlighting the job’s
interest factors with words such as travel or challenge. For example, having a university nearby may appeal to engineers and
professional people.
• A = action: The advertisement should inspire the right applicants to apply for the advertised position. Action can be
prompted with statements like ‘Call today’ or ‘Please forward your resumé’.
Advertisements should further comply with the EEA (discussed in chapter 3), avoiding features such as ‘man wanted’. In
terms of the EEA, an advertisement should reach a broad spectrum of diverse people and especially members of designated groups. To
ensure fairness and equity, employers need to consider where the advertisement should be placed in order to reach a large pool of
candidates that are as diverse as possible. However, this can be quite expensive, especially when the goal is to advertise nationally. In
addition to the principles of fairness and equity, the decision to advertise nationally or only locally should be determined by the
employer’s size, financial resources, geographic spread, and the seniority of the vacant position. For example, a small organisation that
sells its product only in the immediate geographical area does not need to advertise nationally (Schreuder et al, 2006).
E-recruitment
E-recruitment (or electronic recruitment) refers to online recruitment, which uses technology or web-based tools to assist the recruitment
process by posting advertisements of job vacancies on relevant job sites on the Internet (O’Callaghan, 2008). The tool can be a job web
site, the organisation’s corporate web site, or its own intranet. In South Africa, CareerJunction (<www.careerjunction.co.za>) is an
example of a commercial job board available on the Internet. Commercial boards are growing in popularity since they consist of large
databanks of vacancies. These may be based on advertising in newspapers and trade magazines, employment agencies, specific
organisation vacancies, social networking web sites, and many other sources. Commercial job boards often have questionnaires or tests
for applicants to improve their job-hunting skills or to act as an incentive for them to return to the web site (O’Callaghan, 2008).
E-recruitment has become a much more significant tool in the last few years, with its use almost doubling since the turn of the
century. By 2007, 75 per cent of employers made use of electronic media – such as the company’s web site – to recruit staff, making
e-recruitment one of the most widely-used methods (Marchington & Wilkinson, 2008). In South Africa the following was found in a
survey conducted by CareerJunction with Human Resource directors and managers from 60 of the top 200 companies (as defined by the
Financial Mail) (O’Callaghan, 2008):
• Approximately two-thirds (69 per cent) believe the Internet is an effective recruitment channel.
• Almost half (47 per cent) are using it as part of their overall recruitment strategy.
• The results show an increase of 23 per cent in the use of e-recruitment since 2003.
The survey further indicated that South African companies make more use of traditional methods than overseas companies. South
African companies that use e-recruitment also use the following methods: printed media (25 per cent); recruitment agencies (37 per
cent); word-of-mouth (19 per cent); and other means (19 per cent). Most companies in South Africa have a careers page on their web
site (71 per cent). Just over 6 per cent advertise their job pages in print, and 13 per cent make use of job boards. In line with international
trends, most South African companies opt to rent recruitment application technology and services (28 per cent), compared to 6 per cent
who opted to develop their own technology (O’Callaghan, 2008).
An interesting statistic revealed by the survey is that over 84 per cent of South African companies store resumés in a talent-pool
database. The survey further indicated that the key drivers for e-recruitment appear to be:
• Reducing recruitment costs
• Broadening the selection pool
• Increasing the speed of time to hire
• Greater flexibility and ease for candidates, and
• Strengthening of the employer brand.
However, e-recruitment also has disadvantages. It may limit the applicant audience, as the Internet is not the first choice for all job seekers – a large portion of South Africa’s talent does not necessarily have access to a computer. E-recruitment may also cause application overload or inappropriate applications, and in terms of equity considerations, it may limit the attraction of those unable to utilise technology fully, for example, certain disabled groups. E-recruitment may also give rise to allegations of discrimination, in particular through the use of limited keywords in CV search tools. Potential candidates may also be ‘turned off’, particularly if the web site is badly designed or technical difficulties are encountered (O’Callaghan, 2008).
Although e-recruitment appears to have become the method of choice for many organisations, agency recruitment has also become
quite popular with the rise of outsourced recruitment and the use of flexible labour (Redman & Wilkinson, 2009). South African
companies appear to take a ‘partnership’ approach, working closely with recruitment consultancies and specialised web agencies that manage the online process for companies that do not have the necessary skills in-house (O’Callaghan, 2008).
Recruitment planning
Filling a vacancy begins with the job analysis process. The job analysis process and its two products, namely job descriptions (including
competency profiles) and person specifications (discussed in detail in chapter 4), are regarded as the most fundamental pre-recruitment
activities. Person specifications which detail the personal qualities that workers require to perform the job are derived from the job
descriptions and/or competency profiles that result from the job analysis process (McCormack & Scholarios, 2009). These activities
inform the human resource planning process discussed in chapter 3. The actual process of recruitment planning begins with a
specification of human resource needs as determined by the human resource planning process. In practical terms, this means a clear
specification of the numbers, skills mix, levels, and the timeframe within which such requirements must be met (Cascio & Aguinis,
2005). In addition, the company’s affirmative action targets as set out in the employment equity plan must be examined and considered
in the recruitment or resourcing plan. The next step is to project a timetable for reaching the set targets and goals based on expected job
vacancies as set out in the recruitment plan. In addition, the most appropriate and efficient methods, namely those that will yield the best results in the shortest time and in the most cost-effective way, must also be determined.
Projecting a timetable involves the estimation of three key parameters (Cascio & Aguinis, 2005): the time, the money, and the
potential candidates necessary to achieve a given hiring rate. The basic statistic needed to estimate these parameters is the yield ratio or
number of leads needed to generate a given number of hires in a given time. Yield ratios include the ratios of leads to invitations,
invitations to interviews, interviews (and other selection instruments) to offers, and offers to hires obtained over a specified time period
(for example, three months, or six months). Time-lapse data provide the average intervals between events, such as between the extension
of an offer to a candidate and acceptance or between acceptance and addition to the payroll. Based on previous recruitment experience
accurately recorded and maintained by the company, yield ratios and time-lapse data can be used to determine trends and generate
reliable predictions (assuming labour market conditions are comparable).
If no prior experience data exist, hypotheses or ‘best guesses’ can be used, and the outcome (or yield) of the recruitment strategy can
be monitored as it unfolds (Cascio & Aguinis, 2005). In this regard, the concept of a recruitment yield pyramid (shown in Figure 6.1) is
useful when deciding on a recruitment strategy. Let us say that the goal is to hire five managers. The company has learned from past
experience that for every two managers who are offered jobs, only one will accept. Therefore, the company will need to make ten offers.
Furthermore, the company has learned that to find ten managers who are good enough to receive an offer, 40 candidates must be
interviewed; that is, only one manager out of four is usually judged acceptable. However, to get 40 managers to travel to the company
for an interview, the company has to invite 60 people, which indicates that typically only two out of three candidates are interested
enough in the job to agree to be interviewed. Finally, to find 60 potentially interested managers, the company needs to get four times as
many contacts or leads. Some people will not want to change jobs, others will not want to move, and still others will simply be
unsuitable for further consideration. Therefore, the company has to make initial contacts with about 240 managerial candidates. Note the
mushrooming effect in recruiting applicants. Stated in reverse order, 240 people are contacted to find 60 who are interested, to find 40
who agree to be interviewed, to find ten who are acceptable, to get the five people who will accept the offer. These yield ratios (that is,
240 : 5) differ depending on the organisation and the job in question (Muchinsky et al, 2005:126).
Figure 6.1 Recruitment yield pyramid (Adapted from Hawk (1967), cited in Muchinsky et al, 2005:126)
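The backward arithmetic of the pyramid can also be expressed programmatically. The following Python sketch is illustrative only and is not part of the original text; the function name and the ratio values simply reproduce the worked example of five managerial hires given above.

def recruitment_pyramid(hires_needed, stage_ratios):
    # Work backwards from the hires needed: each stage requires 'ratio'
    # times as many candidates as the stage that follows it.
    counts = {'hires': hires_needed}
    required = hires_needed
    for stage, ratio in stage_ratios:
        required = required * ratio
        counts[stage] = round(required)
    return counts

# Hypothetical ratios taken from the worked example: 2 offers per hire,
# 4 interviews per offer, 1.5 invitations per interview, 4 leads per invitation.
ratios = [('offers', 2), ('interviews', 4), ('invitations', 1.5), ('leads', 4)]
print(recruitment_pyramid(5, ratios))
# {'hires': 5, 'offers': 10, 'interviews': 40, 'invitations': 60, 'leads': 240}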
Additional information can be derived from time-lapse data. For example, past experience may show that the interval from receipt of
a resumé to invitation averages five days. If the candidate is still available, he or she will be interviewed five days later. Offers are
extended on average three days after the interview, and within a week after that, the candidate either accepts or rejects the offer. If the
candidate accepts, he or she reports to work, on average, four weeks from the date of acceptance. Therefore, if the company begins
today, the best estimate is that it will be more or less 40 days before the first new employee is added to the payroll. This information
enables the company to determine the ‘length’ of the recruitment pipeline and adjust the recruitment plan accordingly (Cascio &
Aguinis, 2005).
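A minimal sketch of the time-lapse calculation follows; the interval values are hypothetical placeholders, and the point is only that the estimated length of the recruitment pipeline is the sum of the average intervals between successive events.

# Hypothetical average intervals (in days) between successive recruitment events
pipeline_intervals = {
    'resume received to invitation': 5,
    'invitation to interview': 5,
    'interview to offer': 3,
    'offer to acceptance': 7,
    'acceptance to first day at work': 28,
}

# The estimated pipeline length is simply the sum of the average intervals
pipeline_length = sum(pipeline_intervals.values())
print('Estimated recruitment pipeline: about', pipeline_length, 'days')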
Although yield ratios and time-lapse data are important for estimating recruiting candidates and time requirements, these parameters
do not take into account the cost of recruitment campaigns, the cost-per-applicant or qualified applicant, and the cost of hiring. Cost
estimates such as the following are essential (Aamodt, 2010; Cascio & Aguinis, 2005):
• Cost-per-applicant and/or cost-per-qualified applicant (determined by dividing the amount spent on each strategy by the number of applicants or qualified applicants it generated; see the illustrative calculation after this list)
• Cost-per-hire (salaries, benefits and overtime premiums)
• Operational costs (for example, telephone, candidate and recruiting staff travel and accommodation expenses; agency fees,
advertising expenses, medical expenses for pre-employment physical examinations, and any other expenses), and
• Overhead expenses such as rental expenses for facilities and equipment.
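As a simple illustration of the first two metrics listed above, the following Python sketch uses hypothetical figures; the function names and the definition of cost-per-hire (total recruitment cost divided by the number of hires) are common conventions rather than the textbook’s own formulas.

def cost_per_applicant(amount_spent, n_applicants):
    # Amount spent on a recruitment strategy divided by the applicants it generated
    return amount_spent / n_applicants

def cost_per_hire(total_recruitment_cost, n_hires):
    # One common convention: total recruitment cost divided by the number of hires
    return total_recruitment_cost / n_hires

# Hypothetical example: a campaign costing R60 000 that yields 300 applicants and 6 hires
print(cost_per_applicant(60000, 300))   # 200.0 rand per applicant
print(cost_per_hire(60000, 6))          # 10000.0 rand per hire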
Reflection 6.3
A large manufacturing company has drawn up a recruitment strategy to recruit 100 additional engineers. In your role as industrial
psychologist in the recruitment section, management has requested you to assist in the planning of the recruitment of potential
candidates. They also requested an estimate of the yield ratio to be able to establish the costs for budgeting purpose. You have decided
to establish the yield ratios based on previous recruitment experience in the company and to present the data in the form of a
recruitment yield pyramid.
Based on existing past data, you are able to make the following predictions:
With technical candidates, you have learned from past experience that for every two candidates who are offered jobs, only one will
accept (an offer-to-acceptance ratio of 2 : 1). Based on this ratio, you determine that the company will have to extend 200 offers.
Furthermore, if the interview-to-offer ratio in the past has been 3 : 2 (that is, only two engineers out of three are usually judged
acceptable), then the company needs to conduct 300 interviews.
Since the invitations-to-interview ratio is 4 : 3 (that is, typically only three out of four candidates are interested enough in the job to
agree to travel to be interviewed), the company needs to invite as many as 400 candidates. Finally, past experience has shown that
contacts or leads required to find suitable candidates to invite are in a 6 : 1 proportion, so the company needs to make 2 400 contacts to find 400 potentially interested engineers.
Using this experience data, draw a recruitment yield pyramid that will assist you in presenting the yield ratio estimates to
management.
In addition to an analysis of cost-per-hire, yield ratios, and time-lapse from candidate identification to hire, an analysis of the source
yield adds to the effectiveness of a recruitment strategy. Source yield refers to the ratio of the number of candidates generated from a
particular source to hires from that source. For example, a survey of 281 corporate recruiters found that newspapers were the top source
of applicants, followed by e-recruiting and employee referrals (Cascio & Aguinis, 2005). Looking at the number of successful
employees or hires generated by each recruitment source is an effective method, because generally not every candidate will be qualified,
nor will every qualified applicant become a successful employee (Aamodt, 2010).
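A small, purely illustrative Python sketch of a source-yield analysis is shown below; the sources and figures are hypothetical and are not taken from the survey cited above.

# Hypothetical counts per recruitment source: (applicants generated, successful hires)
source_data = {
    'newspaper advertisements': (400, 8),
    'e-recruiting': (900, 10),
    'employee referrals': (60, 6),
}

for source, (applicants, hires) in source_data.items():
    # Source yield: candidates generated from a source relative to hires from it
    ratio = applicants / hires if hires else float('inf')
    print(source, '-', applicants, 'applicants,', hires, 'hires, yield ratio about', round(ratio), ': 1')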
CANDIDATE SCREENING
Personnel selection occurs whenever the number of applicants exceeds the number of job openings. Having attracted a pool of
applicants, the next step is to select the best person for the job. Screening refers to the earlier stages of the selection process, with the
term ‘selection’ being used when referring to the final decision-making stages (Cascio & Aguinis, 2005). In this regard, screening
involves reviewing information about job applicants by making use of various screening tools or devices (such as those discussed in
chapter 5) to reduce the number of applicants to those candidates with the highest potential for being successful in the advertised
position.
A wide variety of data sources can be used in screening and selecting potential employees to fill a vacancy. We will consider some
of the most widely-used initial screening methods, including resumés and job applications, reference checks and letters of
recommendation, and personal history data. Job applicant testing and selection interviews are also important predictor elements of the
screening and selection decision-making process. However, because of the variety and complexity of tests used in applicant screening
and selection, the use of psychological testing and interviewing as predictors of employee performance have been discussed in more
detail in chapter 5.
Importance of careful screening
The careful testing and screening of job candidates are important because they lead to quality decisions that ultimately contribute to
improved employee and organisational performance. The screening and selection process and in particular the tools and techniques
applied by industrial psychologists enable a company to identify and employ employees with the right skills and attribute mix who will
perform successfully in their jobs. Carefully screening job applicants can also help to reduce dysfunctional behaviours at work by
screening out undesirable behaviours such as theft, vandalism, drug abuse, and voluntary absenteeism, which influence the climate and
performance of the organisation negatively.
Incompetent selection or negligent hiring also have legal implications for a company. Negligent hiring refers to the hiring of workers
with criminal records or other such problems without proper safeguards. Courts will find employers liable when employees with
criminal records or other problems use their access to customers’ homes or similar opportunities to commit crimes. Avoiding negligent
hiring claims requires taking ‘reasonable’ action to investigate the candidates’ background. Among other things, employers must make a
systematic effort to gain relevant information about the applicant, verify documentation, follow up on missing records or gaps in
employment, and keep a detailed log of all attempts to obtain information, including the names and dates of phone calls and other
requests (Dessler, 2009).
Most importantly, particularly in the South African legal context, effective screening and selection can help prevent decisions related
to adverse impact and fairness, which are discussed in more detail towards the end of this chapter. Equal employment opportunity
legislation mandates fair employment practices in the screening and selection of job candidates. Screening and selection techniques (for
example, application forms, psychological tests, and interviews) should be job related (or ‘valid’, in personnel psychology terms) and
minimise adverse impact on disadvantaged and minority groups. Questions asked in the screening and selection process that are not job
related, and especially those that may lead to job discrimination, such as inquiries about age, ethnic background, religious affiliation,
marital status, or finances, should not be included in screening and selection devices. Companies must also try to prevent reverse
discrimination against qualified members of advantaged groups. Targets of discrimination may include historically disadvantaged
groups, older workers, women, disabled persons, gay people, and people who are less physically attractive (Riggio, 2009; Schultz &
Schultz, 2010).
Finally, as previously discussed, recruiting and hiring or employing people can be a costly exercise. Effective screening procedures
can also help contribute to the organisation’s bottom line by ensuring that the tools and techniques applied are cost-effective.
Short-listing candidates
Short-listing is the initial step in the screening process. It is done by comparing the information provided in the application form or
curriculum vitae (CV or resumé) and information obtained from testing (or measures of performance) with the selection criteria, as
identified by the job analysis process and stipulated in the job or person specification. In theory, those who match the selection criteria
will go on to the next stage of the selection process. The profession of personnel psychology follows a psychometric approach which
aims to measure individual characteristics and match these to the requirements of the job in order to predict subsequent performance. To
achieve this, candidates who are successfully short-listed face a number of subsequent selection devices for the purpose of predicting
whether they will be successful in the job. These can be viewed as a series of hurdles to jump, with the ‘winner’ being the candidate or
candidates who receive the job offer(s). The techniques that industrial psychologists apply for short-listing are based on the scientific
principles that underlie the science of decision theory. These scientific selection procedures and techniques are discussed in the section
dealing with selection decision-making.
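The ‘series of hurdles’ idea can be sketched as a simple sequential filter. The Python example below is illustrative only; the criteria, cut-off values and candidate details are hypothetical and stand in for whatever selection criteria the job analysis and person specification produce.

# Hypothetical applicant records
candidates = [
    {'name': 'A', 'years_experience': 6, 'has_degree': True,  'test_score': 72},
    {'name': 'B', 'years_experience': 2, 'has_degree': True,  'test_score': 88},
    {'name': 'C', 'years_experience': 9, 'has_degree': False, 'test_score': 65},
]

# Each hurdle is a minimum requirement derived from the person specification
hurdles = [
    lambda c: c['years_experience'] >= 5,   # minimum experience
    lambda c: c['has_degree'],              # minimum qualification
    lambda c: c['test_score'] >= 60,        # cut-off on a screening measure
]

# Only candidates who clear every hurdle remain on the short-list
short_list = [c for c in candidates if all(h(c) for h in hurdles)]
print([c['name'] for c in short_list])      # ['A']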
Evaluating written materials
The first step in the screening process involves the collection of biographical information or biodata on the backgrounds of job applicants. This entails evaluating written materials such as applications and resumés (CVs), references and letters of recommendation, application blanks, and biodata inventories. The rationale for these screening devices is the belief that past
experiences or personal attributes can be used to predict work behaviour and potential for success. Because many behaviours, values,
and attitudes remain consistent throughout life, it is not unreasonable to assume that behaviour in the future will be based on behaviour
in the past (Aamodt, 2010; Schultz & Schultz, 2010).
The initial determination about the suitability for employment is likely to be based on the information that applicants supply on a
company’s application blank or questionnaire. The application blank is a formal record of the individual’s application. It contains
biographical data and other information that can be compared with the job specification to determine whether the applicant matches the
minimum job requirements. With the increasing popularity of e-recruiting, fewer organisations today use the standard paper forms as
they tend to rely instead on applications completed online. Although many companies require all applicants to complete an application
form, standard application forms are usually used for screening lower-level positions in the organisation, with resumés used to provide
biographical data and other background information for higher-level jobs.
The main purpose of the application blank and resumé is to collect biographical information such as education, work experience,
skills training, and outstanding work and school accomplishments. Such data are believed to be among the best predictors of future job
performance. Moreover, written materials are usually the first contact a potential employer has with a job candidate, and therefore the
impressions of an applicant’s credentials received from a resumé or application are very important. Research has also shown that
impressions of qualifications from written applications have influenced impressions of applicants in their subsequent interviews (Macan
& Dipboye, 1994; Riggio, 2009). In addition, work samples (also discussed in chapter 5) are often used in the screening process as a
measure to predict future job performance. Work samples usually contain a written sample in the form of a report or document or
portfolio of an applicant’s work products as an indication of their work-related skills (Riggio, 2009).
Biographical information blanks or questionnaires (BIBs) are used by organisations to quantify the biographical information
obtained from application forms. Weighted application forms assign different weights to each piece of information on the form. The
weights are determined through detailed research, conducted by the organisation, to determine the relationship between specific bits of
biographical data, often referred to as biodata, and criteria of success on the job. Although research has also shown that biodata
instruments can predict work behaviour in a wide spectrum of jobs, their ability to predict employee behaviour has been shown to
decrease with time. Furthermore, although biodata instruments are valid and no more prone to adverse impact than other selection
methods, applicants tend to view them and personality tests as being the least job-related selection methods. Therefore, their use may
increase the chance of a lawsuit being filed, but not the chance of losing a lawsuit (Aamodt, 2010).
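The scoring logic of a weighted application blank can be sketched as follows. This Python example is hypothetical: the item names and weights are invented for illustration, whereas in practice the weights would come from the organisation’s own research relating each biodata item to job success.

# Hypothetical weights derived (in practice) from research linking biodata to job success
item_weights = {
    'relevant_qualification': 3.0,     # scored 1 if present, 0 if not
    'years_related_experience': 1.5,   # weight applied per year of experience
    'supervisory_experience': 2.0,     # scored 1 if present, 0 if not
}

def biodata_score(responses):
    # Combine the weighted responses into a single screening score
    score = item_weights['relevant_qualification'] * int(responses['relevant_qualification'])
    score += item_weights['years_related_experience'] * responses['years_related_experience']
    score += item_weights['supervisory_experience'] * int(responses['supervisory_experience'])
    return score

print(biodata_score({'relevant_qualification': True,
                     'years_related_experience': 4,
                     'supervisory_experience': False}))   # 9.0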
Most organisations attempt to confirm the accuracy of biographical information by contacting former employers and the persons
named as references. However, research has suggested that references and letters of recommendation may have limited importance in
employee selection because these sources of information tend to be distorted in an overly positive direction so that they may be useless
in distinguishing among applicants. People also usually tend to supply the names of persons who will give positive recommendations. In
addition, because of increased litigation or lawsuits (for example, for slander) against individuals and former employers who provide
negative recommendations, many companies are refusing to provide any kind of reference for former employees except for job titles and
dates of employment. Few provide evaluative information such as performance ratings, or answer questions about why the employee left or whether the company would re-employ the employee, making it difficult for an organisation to verify certain kinds of information provided on an application form or application blank. Some companies simply forgo the use of reference checks and letters of recommendation (Riggio, 2009; Schultz & Schultz, 2010).
As previously mentioned, psychological testing and selection interviews are important aspects of the screening and selection process.
You are advised to review chapter 5 to deepen your understanding of the role that these selection devices play in predicting human
performance when screening and selecting candidates.
Managing applicant reactions and perceptions
Recently researchers have developed an interest in examining selection from the applicant’s perspective, recognising that not only do
companies select employees, but applicants also select the organisation to which they will apply and where they are willing to work
(Hausknecht, Day & Thomas, 2004). Managing applicants’ perceptions of and reactions to the recruitment, screening and selection
process and procedures is especially important in a recruitment context that is characterised by skills shortages and an inadequate supply
of experienced and high-performing labour. The ‘war for talent’ and the search for more engaged and committed employees has led to
companies realising that following a purely selective approach to hiring based on matching job and person characteristics has become
inadequate for finding the ‘right’ employees. The overall resourcing strategy, and in particular the screening and selection process, is therefore viewed as an interactive social process in which the applicant has as much power as the organisation in deciding whether to engage in the application process. This places greater importance on the perceptions of potential and actual applicants (McCormack &
Scholarios, 2009). Negative impressions may be caused by uninformative web sites; uninterested recruiters; long, complicated
application and screening processes; or any message which communicates undesirable images of the employer brand.
Applicants who find particular aspects of the screening and selection system invasive may view the company as a less attractive
option in the job search process. Maintaining a positive company image during the screening and selection process is of significant
importance because of the high costs associated with losing especially top, talented candidates with scarce and critical skills. Moreover,
candidates with negative reactions to a screening and selection experience may dissuade other potential applicants from seeking
employment with the organisation. Candidates may also be less likely to accept an offer from a company with screening and selection
practices that are perceived as unfavourable, unfair, or biased. More importantly, as previously mentioned, applicant reactions may be
related to the filing of legal complaints and court challenges, particularly in situations where a specific selection technique is perceived
as invasive, unfair, biased, or inappropriate. In addition, although there is little empirical data on these issues, it is also possible that
applicants may be less likely to reapply with an organisation or buy the company’s products if they feel mistreated during the screening
and selection process (Hausknecht et al, 2004).
The term ‘applicant reactions’ has been used to refer to the growing body of literature that examines ‘attitudes, affect, or cognitions
an individual might have about the hiring process’ (Ryan & Ployhart, 2000:566). Applicant perceptions take into account applicant views
concerning issues related to organisational justice, such as the following (Greenberg, 1993):
• Perceptions regarding the perceived fairness of the outcome of the selection process, including the test scores or rating earned
by applicants on a given selection device (distributive justice)
• Rules and procedures used to make those decisions, including the perceived predictive validity of selection devices, the
length of the selection process (procedural justice)
• Sensitivity and respect shown to individuals (interpersonal justice), and
• Explanations and accounts given to individuals (informational justice).
The basic premise of organisational justice theory in selection contexts is that applicants view selection procedures in terms of the
aforementioned facets of justice, and that these perceptions influence candidates’ thoughts and feelings about testing, their performance
in the screening and selection process, and broader attitudes about tests and selection in general (Hausknecht et al, 2004). Research by
Wiechmann and Ryan (2003) indicates, for example, that applicants perceive selection more favourably when procedures are not
excessively long and when applicants receive positive outcomes. Providing applicants with an adequate explanation for the use of
selection tools and decisions may also foster positive perceptions among applicants. In addition, researchers have proposed that
applicant perceptions may be positively related to perceived test ease (Wiechmann & Ryan, 2003, cited by Hausknecht et al, 2004) and
the transparency of selection procedures (Madigan, 2000). Overall, research suggests that applicants will not react negatively to tools
that are well-developed, job-relevant, and used in screening and selection processes in which the procedures are appropriately and
consistently applied with time for two-way communication, decisions are explained, and applicants are treated respectfully and
sensitively (Ryan & Tippins, 2004). Table 6.2 provides an overview of a list of selection audit questions that can be used to manage
applicant perceptions and reactions during the recruitment, screening and selection process.
Reflection 6.4
Review chapter 1. Study case study 2 in Reflection activity 1.2. Can you identify the various aspects of importance in recruitment
discussed in the case study? How do quality decisions during the planning and execution of recruitment activities influence the
performance of the organisation?
CONSIDERATIONS IN EMPLOYEE SELECTION
Employee selection is concerned with making decisions about people. Employee selection therefore entails the actual process of
choosing people for employment from a pool of applicants by making inferences (predictions) about the match between a person and a
job. By applying specific scientific techniques and procedures in a systematic or step-by-step manner, the candidate pool is narrowed
through sequential rejection decisions until a selection is made. Since the outcomes of staffing decisions are expected to serve a
business-related purpose, staffing decisions are aimed at populating an organisation with workers who possess the KSAOs that will
enable the organisation to sustain its competitive edge and succeed in its business strategy. However, decision-makers cannot know with
absolute certainty the outcomes of rejecting or accepting a candidate. In this regard, they generally prefer to infer or predict in advance
the future performance of various candidates on the basis of available information and choose those candidates with the highest
probability of succeeding in the position for which they applied (Cascio & Aguinis, 2005; Landy & Conte, 2004; Riggio, 2009).
Table 6.2 Audit questions for addressing applicant perceptions and reactions to selection procedures (Ryan & Tippins, 2004:314)
[Reprinted with the permission of CCC/Rightslink on behalf of Wiley Interscience.]
Selection audit questions
• Have we determined which applicant groups to target?
• Are efforts being made to recruit a diverse applicant pool?
• Are efforts being made to have a low selection ratio (that is, low number of people selected relative to the total number of applicants)?
• Are we considering combinations of tools to achieve the highest validity and lowest adverse impact?
• Have we considered how our ordering of tools affects validity and adverse impact?
• Are we considering all aspects of job performance in choosing tools?
• Have we determined which recruiting sources provide the best yield?
• Are we providing applicants with the specific information they desire?
• Have we selected recruiters who are warm and friendly?
• Is appropriate attention being given to early or pre-recruitment activities?
• Are applicants being processed quickly?
• Do we solicit feedback from applicants on satisfaction with the staffing process?
• Are applicants provided with accurate information about the job-relatedness of the selection process?
• Are applicants provided with accurate information on which to judge their fit with the position?
• Do we have evidence that selection procedures are job-related (that is, valid)?
• Are applicants treated with respect?
• Is the selection process consistently administered?
• Does the process allow for some two-way communication?
• Is feedback provided to applicants in an informative and timely manner?
‘Best-practice’ employee selection is usually associated with the psychometric model. As discussed in chapter 5, the psychometric
approach recommends rigorously-developed psychometric tests, performance-based or work-simulation methods, and the use of
multiple methods of assessment, all designed to measure accurately and objectively and predict candidates’ knowledge, skills,
competencies, abilities, personality and attitudes (KSAOs). Answering questions such as, ‘How do we identify people with knowledge,
skill, competency, ability and the personality to perform well at a set of tasks we call a job?’ and, ‘How do we do this before we have
ever seen that person perform a job?’ has given prominence to the psychometric model in terms of its utility and value in predicting the
potential job performance ability of applicants in an objective manner (Scholarios, 2009).
Various psychometric standards are applied to evaluate the quality of staffing decision outcomes that are based on the psychometric
model. These include: reliability, validity, utility, fairness and legal considerations. Each of these standards will be considered
separately.
Reliability
The aspect of reliability has been discussed in detail in chapter 5. Reliability is the extent to which a score from a selection measure is
stable and free from error. If a score from a measure is not stable or error free, it is not useful (Aamodt, 2010). Reliable methods are
accurate and free from contamination. They have high physical fidelity with job performance itself, are standardised across applicants,
have some degree of imposed structure, and show consistency across multiple assessors (Redman & Wilkinson, 2009).
Validity
As discussed in chapters 4 and 5, validity refers to the degree to which inferences from scores on tests or assessments are justified by the
evidence. Reliability and validity are related. As with reliability, a test must be valid to be useful. The potential validity of a test is
limited by its reliability. If a test has poor reliability, it cannot have high validity. However, a test’s reliability does not necessarily imply
validity (Aamodt, 2010).
Selection methods must be valid, that is, relevant for the work behaviours they are meant to predict. As discussed in chapter 4, to be
valid, assessment must be designed around a systematic job analysis, job description, and job or person specification for the job. Since
the linkage between predictors and criteria is the essence of validity, a valid method should also show an association between scores on
the assessment tool (predictor) and desired job behaviours (criterion). As shown in the scatter plots depicted in Figure 6.2, this linkage is
often expressed as a correlation coefficient – known as a criterion-related validity coefficient (r) – representing the strength and
direction (positive or negative) of the relationship between scores on the predictor (or proposed selection method, for example, test or
test battery) and scores on a criterion (or proxy measure) of job performance (for example, level of performance). As discussed in
chapter 2, this correlation coefficient (r) can range from 0,00 (chance prediction or no relationship) to 1,0 (perfect prediction). For
example, a researcher measuring the relationship between the age of factory workers and their job performance finds a correlation
coefficient of approximately 0,00, which shows that there is no relationship between age and performance (Riggio, 2009).
As you will recall from studying chapter 2, a positive correlation coefficient means that there is a positive linear relationship
between the predictor and criterion, where an increase in one variable (either the predictor or criterion) is associated with an increase in
the other variable. A negative correlation coefficient indicates a negative linear relationship: an increase in one variable (either the
predictor or criterion) is associated with a decrease in the other.
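To make the calculation concrete, the brief sketch below computes a criterion-related validity coefficient as the Pearson correlation between predictor and criterion scores. The data and the use of Python's numpy library are illustrative assumptions, not part of the cited studies.

```python
# A minimal sketch (hypothetical data): the criterion-related validity
# coefficient is simply the Pearson correlation between predictor scores
# (for example, test scores) and criterion scores (for example, ratings).
import numpy as np

test_scores = np.array([42, 55, 61, 48, 70, 65, 58, 52])          # predictor
performance = np.array([3.1, 3.8, 4.2, 3.4, 4.6, 4.3, 3.9, 3.6])  # criterion

r = np.corrcoef(test_scores, performance)[0, 1]
print(round(r, 2))   # close to +1 here, i.e. a strong positive linear relationship
```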
Predictor validity
Scatter plot (a), shown in Figure 6.2, illustrates a case of perfect prediction (which is rare in practice), that is, a perfect positive linear
relationship as indicated by the validity coefficient, r = +1,00. In practical terms this means that each and every test score has one and
only one performance score associated with it. If, for example, a candidate scores a 50 on the test, we can predict with great precision
that the candidate will achieve a performance score of 80. Scatter plot (b) depicts a case where the validity coefficient, r = 0,00. In
practical terms this means that no matter what predictor (test) score a candidate obtains, there is no useful predictive information in that
test score. Regardless of whether the candidate obtains a test score of 10, 50 or of 60, there is no way to predict what that candidate’s
performance will be. We know only it will be somewhere between 60 and 100 (Landy & Conte, 2004).
Figure 6.2 Scatter plots depicting various levels of relationships between a predictor (test score) and a criterion (measure of
performance) (Based on Landy & Conte, 2004:263)
Figure 6.3 shows a predictor-criterion correlation of r = 0,80, which is reflected in the oval shape of the scatter plot of the predictor
and criterion scores. Apart from representing the relationship between the predictor and criterion, the oval shape also represents the
clustering of the data points of the individual scores on the scatter plot. Along the predictor axis is a vertical line – the predictor cut-off –
that separates passing from failing applicants. People above the cut-off are accepted for hire; those below it are rejected. Also, observe
the three horizontal lines. The solid line, representing the criterion performance of the entire group, cuts the entire distribution of scores
in half. The dotted line, representing the criterion performance of the rejected group, is below the performance of the total group.
Finally, the dashed line, which is the average criterion performance of the accepted group, is above the performance of the total group.
The people who would be expected to perform the best on the job fall above the predictor cut-off. In a simple and straightforward sense,
that is what a valid predictor does in personnel selection: it identifies the more capable people from the total pool (Muchinsky et al,
2005:138).
A different picture emerges for a predictor that has no correlation with the criterion, as shown in Figure 6.4. As previously
mentioned, the oval shape represents the clustering of the data points of the scores on the scatter plot. Again, the predictor cut-off
separates those accepted from those rejected. This time, however, the three horizontal lines are all superimposed, that is, the criterion
performance of the accepted group is no better than that of the rejected group, and both are the same as the performance of the total
group. The value of the predictor is measured by the difference between the average performance of the accepted group and the average
performance of the total group. As can be seen, these two values are the same, so their difference equals zero. In other words, predictors
that have no validity also have no value. On the basis of these examples, we see a direct relationship between a predictor’s value and its
validity: the greater the validity of the predictor, the greater its value, as measured by the increase in average criterion performance for
the accepted group over that for the total group (Muchinsky et al, 2005:138).
Figure 6.3 Effect of a predictor with a high validity (r = 0,80) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
Selection ratios
The selection ratio (SR) indicates the relationship between the number of individuals assessed and the number actually hired (SR = n/N,
where n represents the number of jobs available, or number of hires to be made, and N represents the number of people assessed). For
example, if a company has short-listed 100 applicants for 10 vacant positions, the SR would be SR = 0,10 (10/100). If there were 200
applicants instead of 100, the selection ratio would be SR = 0,05 (10/200). Paradoxically, low selection ratios are actually better than high selection ratios, because the more people assessed, the greater the likelihood that individuals who score
high on the test will be found. By assessing 200 applicants instead of 100, the company is more likely to find high scorers (and better
performers), which, of course, will also be more expensive. In addition, a valid test will translate into a higher likelihood that a good
performer will be hired (Landy & Conte, 2004).
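As a quick illustration, the selection ratio can be expressed as a one-line function; the sketch below (Python, using the figures from the example above) simply applies SR = n/N.

```python
# A minimal sketch of the selection ratio, SR = n/N.
def selection_ratio(hires: int, assessed: int) -> float:
    return hires / assessed

print(selection_ratio(10, 100))   # 0.1  -> SR = 0,10
print(selection_ratio(10, 200))   # 0.05 -> SR = 0,05 (smaller, i.e. more selective)
```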
The effect of the SR on a predictor’s value can be seen in Figures 6.5 and 6.6. Let us assume that we have a validity coefficient of r
= 0,80 and the selection ratio is SR = 0,75, meaning we will hire three out of every four applicants. Figure 6.5 shows the
predictor-criterion relationship, the predictor cut-off that results in accepting the top 75 per cent of all applicants, and the respective
average criterion performances of the total group and the accepted group. If a company hires the top 75 per cent, the average criterion
performance of that group is greater than that of the total group (which is weighted down by the bottom 25 per cent of the applicants).
Again, value is measured by this difference between average criterion scores. Furthermore, when the bottom 25 per cent is lopped off
(the one applicant out of four who is not hired), the average criterion performance of the accepted group is greater than that of the total
group (Muchinsky et al, 2005:139, 140).
Figure 6.4 Effect of a predictor test with no validity (r = 0,00) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
Figure 6.5 Effect of a large selection ratio (SR = 0,75) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
In Figure 6.6 we have the same validity coefficient (r = 0,80), but this time the SR = 0,25; that is, out of every four applicants, we
will hire only one. The figure shows the location of the predictor cut-off that results in hiring only the top 25 per cent of all applicants
and the respective average criterion performances of the total and accepted groups. The average criterion performance of the accepted
group is not only above that of the total group, as before, but the difference is also much greater. In other words, when only the top 25
per cent are hired, their average criterion performance is greater than the performance of the top 75 per cent of the applicants, and both
of these values are greater than the average performance of the total group (Muchinsky et al, 2005:140).
The relationship between the SR and the predictor’s value should be clear: the smaller the SR, the greater the predictor’s value. This
should also make sense intuitively. The more particular decision-makers are in admitting people (that is, the smaller the selection ratio),
the more likely it is that the people admitted (or hired) will be of the quality the company desires (Muchinsky et al, 2005:140).
Prediction errors and cut scores
The level of validity is associated with prediction errors. Prediction errors can be costly for an organisation because they lead to errors in
staffing decisions. Prediction errors occur whenever validity coefficients are less than 1,00. In the case of a validity
coefficient, for example, r = 0,00 (as depicted in Figure 6.2 (b) and Figure 6.4), the ‘best’ prediction about the eventual performance
level of any applicant, regardless of the test score, would be average performance. In contrast, when the validity coefficient, r = +1,00
(perfect prediction), there will be no error in prediction of eventual performance (and therefore no error in the staffing decision) (Landy
& Conte, 2004).
Two types of prediction errors may be committed by decision-makers: false positive errors or false negative errors. A false positive
error occurs when a decision-maker falsely predicted that a positive outcome (for example, that an applicant who was hired would be
successful) would occur, and it did not; the person failed. The decision is false because of the incorrect prediction that the applicant
would have performed successfully and positive because the applicant was hired (Landy & Conte, 2004).
Figure 6.6 Effect of a small selection ratio (SR = 0,25) on test utility (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
A false negative error occurs when an applicant who would have performed adequately or successfully is
rejected. The decision is false because of the incorrect prediction that the applicant would not have performed successfully and negative
because the applicant was not hired. Whereas a true positive refers to a case when an applicant is hired based on a prediction that he or
she will be a good performer and this turns out to be true, a true negative refers to a case when an applicant is rejected based on an
accurate prediction that he or she will be a poor performer (Landy & Conte, 2004). Figure 6.7 graphically presents the two types of true
and false decisions.
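The four outcomes can be expressed as a simple decision rule; the sketch below (Python, purely illustrative) maps a hiring decision and the eventual performance outcome onto the categories just described.

```python
# A minimal sketch of the four selection decision outcomes.
def classify(hired: bool, successful: bool) -> str:
    if hired and successful:
        return "true positive"        # hired and performed well
    if hired and not successful:
        return "false positive"       # hired, but the person failed
    if not hired and successful:
        return "false negative"       # rejected, but would have succeeded
    return "true negative"            # rejected, and would have failed

print(classify(hired=True, successful=False))   # 'false positive'
```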
Cut scores are generally used to minimise both types of errors by moving the score that is used to hire individuals up or down. A cut
score (also referred to as a cut-off score) is a specified point in a distribution of scores below which candidates are rejected. As can be
seen in Figure 6.7, the decision about where to set the cut score in conjunction with a given selection ratio can detract from the
predictive value of that device. In Figure 6.7, the predictor cut-off, X’, equates false positive (lower right) and false negative (upper left)
errors, resulting in a minimum of decision errors. Raising the cut-off to W’ results in a decrease of false positives (A), that is, applicants
erroneously accepted, but an even greater increase in false negatives (B), that is, applicants erroneously rejected. Similarly, lowering the
cut-off to Z’ yields a decrease in false negatives (C), but a larger increase in false positives (D), or applicants erroneously hired. In
practical terms, Figure 6.7 illustrates that by lowering the cut score from 50 to 25, the number of candidates that would be incorrectly
rejected will be reduced, but the percentage of poor performers among the candidates that are hired will also substantially increase
(Landy & Conte, 2004:264). However, in some situations where the cost of a performance mistake can be catastrophic (for example, a
nuclear power plant operator hired by Koeberg Power Station, or an airline pilot hired by South African Airways), a better strategy may
be to be very selective and accept a higher false negative error rate to reduce the frequency of false positive errors.
Figure 6.7 Effect on selection errors of moving the cut-off score (Landy & Conte, 2004:264)
[Reproduced with permission of The McGraw-Hill Companies.]
Establishing cut scores
Criterion-referenced cut scores and norm-referenced cut scores are two methods of establishing cut scores. As illustrated in Figure 6.8,
criterion-referenced cut scores (also often called domain-referenced cut scores) are established by considering the desired level of
performance for a new hire and finding the test score (predictor cut-off score) that corresponds to the desired level of performance (
criterion cut-off score). Often the cut score is determined by having a sample of employees take the chosen test, measuring their job performance (for example, through supervisory ratings), and then determining what test score corresponds to acceptable performance as rated by
the supervisor. Asking a group of subject matter experts (SMEs) to examine the test in question, consider the performance demands of
the job, then pick a test score that they think a candidate would need to attain to be a successful performer, is an alternative method of
setting criterion-referenced cut scores. However, these techniques tend to be complex for SMEs to accomplish and have been the subject
of a great deal of debate with respect to their accuracy (Landy & Conte, 2004).
Norm-referenced cut scores are based on some index (generally the average) of the test-takers’ scores rather than any notion of job
performance (the term ‘norm’ is a shortened version of the word ‘normal’, meaning average). In educational settings such as the
University of South Africa, for example, passing scores are typically pegged at 50 per cent. Any score below 50 is assigned a letter
grade of F (for ‘failing’). In the selection of Masters’ candidates, a university may, for example, decide that only candidates with a grade point average (GPA) of at least 60 per cent for an honours-level qualification will be short-listed and invited to a
selection interview. There is no connection between the chosen cut-off score (60 per cent) and any aspect of anticipated performance
(other than the simple notion that future performance may be predicted by past performance and that people with scores below the
cut-off score are likely to do less well in the Masters’ programme than applicants with scores higher than the cut-off score). Frequently,
cut-off scores are set because there are many candidates for a few openings. For example, the university may have openings for a group
of only 25 Masters’ students per annum, and 200 candidates may apply for selection to the Masters’ Programme in a particular year.
Figure 6.8 Determination of cut-off score through test’s criterion-related validity (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
A cut-off score that will reduce the applicant population by a specific percentage is generally determined so that only candidates who
are in the top 25 per cent of the score distribution, for example, are selected. Naturally, if the applicant population turned out to be very
talented, then the staffing strategy would commit many false negative errors, because many who were rejected might have been good
performers. On the other hand, the utility of the staffing strategy might nevertheless be high because the expense of processing those additional candidates in later assessment steps would have been substantial (Landy & Conte, 2004).
Cascio, Alexander and Barrett (1988) addressed the legal, psychometric, and professional issues associated with setting cut-off
scores. As they reported, there is wide variation regarding the appropriate standards to use in evaluating the suitability of established
cut-off scores. In general, a cut-off score should normally be set to be reasonable and consistent with the expectations of acceptable job
proficiency in the workplace. Also, the cut-off should be set at a point that selects employees who are capable of learning a job and
performing it in a safe and efficient manner. As previously shown in Figure 6.7, undesirable selection consequences are associated with
setting the cut-off score ‘too low’ (an increase in false positive selection decisions) or ‘too high’ (an increase in false negative selection
decisions) (Muchinsky et al, 2005:143).
When there is criterion-related evidence of a test’s validity, it is possible to demonstrate a direct correspondence between
performance on the test and performance on the criterion, which aids in selecting a reasonable cut-off score. Take, for example, the case
of predicting academic success at university. The criterion of academic success is the university exam marks average, and the criterion
cut-off is 50 per cent. That is, students who attain an average mark of 50 per cent or higher graduate, whereas those with an average less
than 50 per cent do not.
Furthermore, assume that the selection (admission) test for entrance into the university is a 100-point test of cognitive ability. In a
criterion-related validation paradigm, it is established that there is an empirical linkage between scores on the test and the university
examination marks average. The statistical analysis of the scores reveals the relationship shown in Figure 6.8. Because the minimum
average of 50 per cent is needed to graduate and there is a relationship between test scores and the university examination marks
average, we can determine (through regression analysis) the exact test score associated with a predicted average mark of 50 per cent. In
this example, a test score of 50 predicts an examination average mark of 50 per cent. Therefore, a score of 50 becomes the cut-off score
on the intellectual ability test (Muchinsky et al, 2005:144).
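The sketch below illustrates this regression-based reasoning with made-up validation data (the scores and the use of Python's numpy are assumptions for illustration only): the fitted regression line is inverted to find the admission-test score whose predicted examination average equals the criterion cut-off of 50 per cent.

```python
# A minimal sketch (hypothetical validation data) of deriving a
# criterion-referenced cut-off score through simple linear regression.
import numpy as np

test_scores  = np.array([30, 40, 45, 50, 55, 60, 70, 80], dtype=float)  # predictor
exam_average = np.array([38, 44, 47, 50, 54, 57, 64, 71], dtype=float)  # criterion

# Fit: exam_average = intercept + slope * test_score
slope, intercept = np.polyfit(test_scores, exam_average, 1)

# Invert the regression line to find the test score that predicts a 50% average
criterion_cut = 50.0
predictor_cut = (criterion_cut - intercept) / slope
print(round(predictor_cut, 1))   # about 49 for this made-up data set
```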
The task of determining a cut-off score is more difficult when only content-related evidence of the validity of a given test is
available. In such cases it is important to consider the level of ability associated with a certain test score that is judged suitable or
relevant to job performance. However, obvious subjectivity is associated with such decisions. In general, there is no such thing as a
single, uniform, correct cut-off score. Nor is there a single best method of setting cut-off scores for all situations. Cascio and Aguinis
(2005) made several suggestions regarding the setting of cut-off scores. Here are four:
• The process of setting a cut-off score should begin with a job analysis that identifies relative levels of proficiency on critical
knowledge, skills, and abilities (KSAs).
• When possible, data on the actual relationship of test scores (predictor scores) to criterion measures of job performance
should be considered carefully.
• Cut-off scores should be set high enough to ensure that minimum standards of job performance are met.
• Adverse impact must be considered when setting cut-off scores.
Utility
The concept of utility addresses the cost–benefit ratio of one staffing strategy versus another. The term ‘utility gain’ (the expected
increase in the percentage of successful workers) is therefore synonymous with utility (Landy & Conte, 2004). Utility analysis considers
three important parameters: quantity, quality, and cost. At each stage of the recruitment, screening, and selection process, the candidate
pool can be thought of in terms of the quantity (number) of candidates, the average or dispersion of the quality of the candidates, and the
cost of employing the candidates. For example, the applicant pool may have a quantity of 100 candidates, with an average quality of
R500 000 per year, and a variability in quality value that ranges from a low of R100 000 to a high of R700 000. This group of candidates
may have an anticipated cost (salary, benefits and training) of 70 per cent of their value. After screening and selection, the candidates
who are accepted might have a quantity of 50 who receive job offers, with an average quality value of R650 000 per year, ranging from
a low of R500 000 to a high of R650 000. Candidates who receive offers may require employment costs of 80 per cent of their value,
because the decision-makers have identified highly-qualified and sought-after individuals with scarce and critical skills. Eventually the
organisation ends up with a group of new hires (or promoted candidates in the case of internal staffing) who can also be characterised by
quantity, quality and cost (Cascio & Boudreau, 2008).
Similarly, the application or use of various screening and selection techniques and devices can be thought of in terms of the quantity
of tests and procedures used, the quality of the tests and procedures as reflected in their ability to improve the value or quality of the
pool of individuals that are accepted, and the cost of the tests and procedures in each step of the screening and selection process. For
example, as we have discussed in this chapter and previous chapters, the quality of selection tests and procedures is generally expressed
in terms of their validity, or accuracy in forecasting future job performance. As previously discussed, validity is typically expressed in
terms of the correlation between the predictor scores (the scores on a selection procedure) and some measure of job performance (the
criterion scores), such as the rand value of sales, for example.
Validity may be increased by including a greater quantity of selection procedures (such as a battery of tests), each of which focuses
on an aspect of knowledge, skill or ability, or other characteristic that has been demonstrated to be important to successful performance
on a job. Higher levels of validity imply higher levels of future job performance among those selected or promoted, thereby improving
the overall payoff or utility gain to the organisation. As a result, those candidates that are predicted to perform poorly are never hired or
promoted in the first place (Cascio & Boudreau, 2008).
The utility of a selection device is regarded as the degree to which its use improves the quality of the individuals selected beyond
what would have occurred had that device not been used (Cascio & Boudreau, 2008). Consider, for example, that although a test may be
both reliable and valid, it is not necessarily useful or cost effective when applied in predicting applicants’ job performance. In the case of
a company that already has a test that performs quite well in predicting performance and has evidence that the current employees chosen
on the basis of the test are all successful, it will not be cost effective to consider a new test that may be valid, because the old test might
have worked just as well. The organisation may have such a good training programme that current employees are all successful. A new
test, although it is valid, may not provide any improvement or utility gain for the company (Aamodt, 2010).
Several formulae and models have been designed to determine how useful a test would be in any given situation. Each formula and
table provides slightly different information to an employer. Two examples of these formulae and models will be discussed: the
Taylor-Russell tables and proportion of correct decisions charts.
Taylor-Russell tables
The Taylor-Russell tables (Taylor & Russell, 1939) provide an estimate of the percentage of total new hires who will be successful
employees if an organisation uses a particular test. Taylor and Russell (1939) defined the value of the selection device as the success
ratio, which is the ratio of the number of hired candidates who are judged successful on the job divided by the total number of
candidates who were hired. The tables (see Appendix B) illustrate the interactive effect of different validity coefficients, selection ratios,
and base rates on the success ratio (Cascio & Boudreau, 2008).
The usefulness of a selection measure or device (test) can therefore be assessed in terms of the success ratio that will be obtained if the
selection measure is used. The gain in utility to be expected from using the instrument (the expected increase in the percentage of
successful workers) then can be derived by subtracting the base rate from the success ratio (equation 6-3 minus equation 6-1). For
example, given an SR of 0,10, a validity of 0,30, and a base rate (BR) of 0,50, the success ratio jumps to 0,71 (a 21 per cent gain in
utility over the base rate) (Cascio & Boudreau, 2008). (To verify this figure, see Appendix B.)
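The table value can also be approximated directly if one assumes, as the Taylor-Russell tables do, that predictor and criterion scores follow a bivariate normal distribution. The sketch below (Python with scipy; the implementation is an illustrative approximation, not the published tables themselves) reproduces the worked figure above.

```python
# A minimal sketch: Taylor-Russell success ratio under a bivariate-normal
# predictor-criterion model (the assumption underlying the tables).
from scipy.stats import multivariate_normal, norm

def success_ratio(validity, selection_ratio, base_rate):
    x_cut = norm.ppf(1 - selection_ratio)   # predictor cut-off
    y_cut = norm.ppf(1 - base_rate)         # criterion cut-off
    bvn = multivariate_normal(mean=[0, 0], cov=[[1, validity], [validity, 1]])
    # P(predictor > cut and criterion > cut) via inclusion-exclusion on the CDF
    p_both = 1 - norm.cdf(x_cut) - norm.cdf(y_cut) + bvn.cdf([x_cut, y_cut])
    return p_both / selection_ratio

print(round(success_ratio(0.30, 0.10, 0.50), 2))   # about 0,71, as in the example
```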
To use the Taylor-Russell tables, the first information needed is the test’s criterion validity coefficient (r). Conducting a criterion
validity study with test scores correlated with some measure of job performance is generally the best way to obtain the criterion validity
coefficient. However, an organisation often wants to know whether testing is useful before investing time and money in a criterion
validity study. Typical validity coefficients can in this case be obtained from previous research studies that indicate the typical validity
coefficients that will result from various methods of selection (Aamodt, 2010; Schmidt & Hunter, 1998). The validity coefficient
referred to by Taylor and Russell (1939) in their tables is, in theory, based on present employees who have already been screened using
methods other than the new selection procedure. It is assumed that the new procedure or measure will simply be added to a group of
selection procedures or measures used previously, and it is the incremental gain in validity from the use of the new procedure that is
most relevant.
As discussed above, the selection ratio (SR) is obtained by determining the relationship between the number of individuals assessed
and the number actually hired (SR = n/N, where n represents the number of jobs available, or number of hires to be made, and N
represents the number of applicants assessed). The lower the selection ratio is, the greater the potential usefulness of the test.
According to the Taylor-Russell tables, utility is affected by the base rate. The base rate (BR) refers to the proportion of candidates
who would be successful without the selection measure. A meaningful way of determining the base rate is to choose a criterion measure
score above which all employees are considered successful. To be of any use in selection, the selection measure or device must
demonstrate incremental validity by improving on the BR. That is, the selection measure (device or test) must result in more correct
decisions than could be made without using it (Cascio & Boudreau, 2008). Figure 6.9 illustrates the effect of the base rate on a predictor
(selection measure) with a given validity.
Figure 6.9 illustrates that with a BR of 0,80, it would be difficult for any selection measure to improve on the base rate. In fact, when
the BR is 0,80 and half of the applicants are selected, a validity of 0,45 is required to produce an improvement of even 10 per cent over
base rate prediction. This is also true at very low BRs. Selection measures are most useful when BRs are about 0,50. As the BR departs
radically in either direction from this value, the benefit of an additional predictor becomes questionable. In practical terms, this means
that applications of selection measures to situations with markedly different SRs or BRs can result in quite different predictive outcomes.
If it is not possible to demonstrate significant incremental utility by adding a predictor, the predictor should not be used, because it
cannot improve on current selection procedures. As illustrated in Figure 6.9, when the BR is either very high or very low, it is difficult
for a selection measure to improve on it (Cascio & Boudreau, 2008:179).
Figure 6.10 presents all of the elements of the Taylor-Russell model together. In this figure, the criterion cut-off (Yc) separates the
present employee group into satisfactory and unsatisfactory workers. The predictor cut-off (Xc) defines the relative proportion of
workers who would be hired at a given level of selectivity. Areas A and C represent correct decisions, that is, if the selection measure
were used to select applicants, those in area A would be hired and become employees who perform satisfactorily. Those in area C would
be rejected correctly, because they scored below the predictor cut-off and would have performed unsatisfactorily on the job. Areas B and
D represent erroneous decisions; those in area B would be hired because they scored above the predictor cut-off, but they would perform
unsatisfactorily on the job, and those in area D would be rejected because they scored below the predictor cut-off, but they would have
been successful if hired (Cascio & Boudreau, 2008:179).
A major shortcoming of the Taylor-Russell utility model is that it underestimates the actual amount of value from the selection
procedure because it reflects the quality of the resulting hires only in terms of success or failure. In other words, it views the value of
hired employees as a dichotomous classification, successful and unsuccessful; and as demonstrated by the tables in Appendix B, when
validity is fixed, the success ratio increases as the selection ratio decreases. Under those circumstances, the success ratio tells
decision-makers that more people are successful, but not how much more successful. For many jobs one would expect to see
improvements in the average level of employee value from increased selectivity. In most jobs, for example, a very high-quality
employee is more valuable than one who just meets the minimum standard of acceptability (Cascio & Boudreau, 2008).
Proportion of correct decisions charts
Determining the proportion of correct decisions is easier than using the Taylor-Russell tables, but also less accurate (Aamodt, 2010). The
proportion of correct decisions is determined by obtaining applicant test scores (predictor scores) and the scores on the criterion. The
two scores from each applicant are graphed on a chart similar to that of Figure 6.10. Lines are drawn from the point on the y-axis
(criterion score) that represents a successful applicant, and from the point on the x-axis (predictor score) that represents the lowest test
score of a hired applicant. As illustrated in Figure 6.11, these lines divide the scores into four quadrants. The points located in quadrant
D represent applicants who scored poorly on the test but performed well on the job. Points located in quadrant A represent employees
who scored well on the test and were successful on the job. Points in quadrant B represent employees who scored high on the test, yet
did poorly on the job, and points in quadrant C represent applicants who scored low on the test and did poorly on the job.
Figure 6.9 Effect of varying base rates on a predictor with a given validity (Cascio & Boudreau, 2008:178)
[Reproduced with the permission of Pearson Education, Inc.]
Figure 6.10 Effect of predictor and criterion cut-offs on selection decisions (Adapted from Cascio & Boudreau, 2008:179)
[Reproduced with the permission of Pearson Education, Inc.]
Note: The oval is the shape of the scatter plot that shows the overall relationship between predictor and criterion score.
More points in quadrants A and C indicate that the test is a good predictor of performance because the points in the other two
quadrants (D and B) represent ‘predictive failures’. That is, in quadrants D and B no correspondence is seen between test scores and
criterion scores. To estimate the test’s effectiveness, the number of points in each quadrant is totalled, and the following formula is used
(Aamodt, 2010:198):

Proportion of correct decisions = (points in quadrant A + points in quadrant C) ÷ (total points in all four quadrants)

The resulting number represents the percentage of time that the decision-maker expects to be accurate in making a selection decision in future. To determine whether this is an improvement, the following formula is used:

Satisfactory performance baseline = (points in quadrant A + points in quadrant D) ÷ (total points in all four quadrants)
If the percentage from the first formula is higher than that of the second, the proposed test should increase selection accuracy. If not, it
may be better to use the selection method currently in use.
Figure 6.11 illustrates, for example, that there are 4 data points in quadrant D, 7 in quadrant A, 3 in quadrant B, and 8 in quadrant C.
The percentage of time a decision-maker expects to be accurate in the future would be:

(7 + 8) ÷ (4 + 7 + 3 + 8) = 15 ÷ 22 = 0,68
If we compare this result with, for example, another test that the company was previously using to select employees, we calculate the
satisfactory performance baseline:

(7 + 4) ÷ (4 + 7 + 3 + 8) = 11 ÷ 22 = 0,50
Reflection 6.5
Use the Taylor-Russell tables in Appendix B to determine the utility of a particular selection measure by completing the following
table:
Which selection measure will be the most useful? Give reasons for your answer.
Figure 6.11 Determining the proportion of correct decisions
Using the proposed test would result in a 36 per cent increase in selection accuracy over the selection method previously used ((0,68 − 0,50) ÷ 0,50 = 0,36 = 36%).
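For readers who wish to verify the arithmetic, the short sketch below (Python, using the quadrant counts from Figure 6.11) reproduces the proportion of correct decisions, the satisfactory performance baseline, and the relative gain.

```python
# A minimal sketch of the proportion-of-correct-decisions calculation.
quadrant_a = 7   # high test score, successful on the job
quadrant_b = 3   # high test score, unsuccessful on the job
quadrant_c = 8   # low test score, unsuccessful on the job
quadrant_d = 4   # low test score, successful on the job

total = quadrant_a + quadrant_b + quadrant_c + quadrant_d

proportion_correct = (quadrant_a + quadrant_c) / total        # 15/22, about 0,68
baseline = (quadrant_a + quadrant_d) / total                  # 11/22, i.e. 0,50
relative_gain = (proportion_correct - baseline) / baseline    # about 0,36, i.e. 36%

print(round(proportion_correct, 2), round(baseline, 2), round(relative_gain, 2))
```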
Paradoxically, research (Cronshaw, 1997; Latham & Whyte, 1994; Whyte & Latham, 1997) on the effect of utility results on the
decision by experienced managers to adopt or not adopt a selection strategy showed that a presentation of the positive utility (that is, the
utility to be gained) aspects of a selection strategy actually resulted in a lower likelihood that a manager would adopt the selection
strategy, because managers often regard utility calculations as a ‘hard sell’ of the selection device. However, although the results of Latham
and Whyte’s research suggest that utility analyses may be of little value to managers in deciding whether or not to adopt a selection
strategy, this may be misleading. Cronshaw (1997) and Landy and Conte (2004) argue that although there appears to be little value in
presenting utility calculations to decision-makers (for example, line managers, senior management, or human resource department),
industrial psychologists should take cognisance of the value added by utility calculations.
Based on utility calculations, industrial psychologists can make more informed decisions with respect to which alternatives to
present to managers. Industrial psychologists can present relevant data (for example, the cost of implementing a new selection system or
procedure) to decision-makers in a more familiar cost–benefit framework. Virtually every line manager and human resource
representative will eventually ask cost–benefit questions. However, Cronshaw (1997) cautions that industrial psychologists should
refrain from using utility calculations to ‘sell’ their products but should rather present utility analysis information simply as information
to evaluate the effectiveness of a staffing strategy and selection device or procedure and to determine the benefits for the organisation.
Reflection 6.6
Compare Figure 6.11 with the results of the following new proposed test for selecting employees, depicted in Figure 6.12 below.
Calculate the satisfactory performance baseline. Compare the baseline results of the two tests and determine whether any utility gains
were made by using the new test. Which test would you recommend in future for selection purposes? Give reasons for your answer.
Figure 6.12 Determining the proportion of correct decisions – new proposed test
Fairness and legal considerations
Although there are multiple perspectives on the concept of fairness in selection, there seems to be general agreement that issues of
equitable treatment, predictive bias, and scrutiny for possible bias when subgroup differences are observed are important concerns in
personnel selection (Society for Industrial and Organisational Psychology of South Africa, 2005). Most industrial psychologists agree
that a test is fair if it can predict performance equally well for all races, genders and national origins.
In terms of legal considerations, industrial psychologists commonly serve as expert witnesses in employment discrimination cases
filed in courts. These cases are most often filed by groups of individuals claiming violations of the Employment Equity Act 55 of 1998
and the Labour Relations Act 66 of 1995 (LRA). The law and the courts recognise two different theories of discrimination. The adverse
treatment theory charges an employer with intentional discrimination. The adverse impact theory acknowledges that an employer may
not have intended to discriminate against a plaintiff, but a practice implemented by the employer had the effect of disadvantaging the
group to which the plaintiff belongs. In an adverse impact case, the burden is on the plaintiff to show that (1) he or she belongs to a
protected group, and (2) members of the protected group were statistically disadvantaged compared to the majority employees or
applicants (Landy & Conte, 2004).
Because of the importance of issues such as fairness, bias and adverse impact in employee selection, particularly in the South
African national and organisational context, these aspects are discussed in more detail towards the end of this chapter.
MAKING SELECTION DECISIONS
Employee recruitment, screening and selection procedures are regarded as valuable to the extent that they improve vital decisions about
hiring people. These decisions include how to invest scarce resources (such as money, time and materials) in staffing techniques and
activities, such as alternative recruiting resources, different selection and screening technologies, recruiter training or incentives,
alternative mixes of pay and benefits to offer desirable candidates, and decisions by candidates about whether to accept offers. Effective
staffing therefore requires measurement procedures that diagnose the quality of the decisions of managers, industrial psychologists,
human resource professionals and applicants in a systematic or step-by-step manner (Cascio & Boudreau, 2008). The emphasis on the
quality of selection decisions has led to systematic selection being increasingly regarded as one of the critical functions of human
resource management, essential for achieving key organisational outcomes (Storey, 2007), and a core component of what has been
called a high-commitment or high-performance management approach to human resource management (Redman & Wilkinson, 2009).
Since decisions about hiring people or human capital are increasingly central to the strategic success and effectiveness of virtually all
organisations, industrial psychologists apply scientifically rigorous ways to measure and predict the potential of candidates to be
successful in the jobs for which they applied. However, as previously mentioned, the decision-science approach to employee selection
requires that selection procedures should do more than evaluate and predict the performance of candidates; they should extend the value
or utility of measurements and predictions by providing logical frameworks that drive sound strategic decisions about hiring people.
A decision framework provides the logical connections between decisions about a resource (for example, potential candidates to be
hired) and the strategic success of the organisation (Cascio & Boudreau, 2008). A scientific approach to decision-making reveals how
decisions and decision-based measures can bring the insights of the field of industrial and organisational psychology to bear on the
practical issues confronting organisation leaders and employees. Because the profession of personnel psychology relies on a decision
system that follows scientific principles in a systematic manner, industrial psychologists are able not only to demonstrate the validity
and utility of their procedures, but also to incorporate new scientific knowledge quickly into practical applications that add value to the
strategic success of the organisation.
As discussed in previous chapters, the measurement and decision frameworks provided by the profession of personnel psychology
are grounded in a set of general principles for data collection, data analysis, and research design that support measurement systems in all
areas of human resource-related decision-making in the organisation. High-quality staffing decisions are therefore made by following a
comprehensive approach in gathering high-quality information about candidates to predict the likelihood of their success on the varied
demands of the job (Landy & Conte, 2004). The employee selection process involves two important procedures: measurement and
prediction (Cascio & Aguinis, 2005; Wiggins, 1973). Measurement entails the systematic collection of multiple pieces of data or
information using tests or other assessment procedures that are relevant to job performance (such as those discussed in chapter 5).
Prediction involves the systematic process of combining the collected data in such a way as to enable the decision-maker to minimise
predictive error in forecasting job performance.
Strategies for combining job applicant information
Job applicant information can be combined in various ways to make good staffing decisions. Clinical (or intuitive) and statistical (also
referred to as mechanical or actuarial) decision-making strategies are generally regarded as the two most basic ways to combine
information in making staffing decisions. In clinical decision-making, data are collected and combined judgementally. Similarly,
predictions about the likely future performance of a candidate or job applicant are also judgemental in the sense that a set of test scores
or impressions are combined subjectively in order to forecast criterion status. The decision-maker examines multiple pieces of
information by using, for example, techniques such as assessment interviews and observations of behaviour. This information is then
weighted in his or her head and, based on experience and beliefs about which types of information are more or less important, a
subjective decision about the relative value of one candidate over another is made – or simply a select/reject decision about an individual
candidate is made. Although some good selection decisions may be made by experienced decision-makers, clinical or intuitive decisions
tend to be regarded as unreliable and idiosyncratic because they are mostly subjective or judgemental in nature, and are therefore
error-prone and often inaccurate (Cascio & Aguinis, 2005; Landy & Conte, 2004; Riggio, 2009).
In statistical decision-making, information is combined according to a mathematical or actuarial formula in an objective,
predetermined fashion. For example, the statistical model used by insurance companies is based on actuarial data and is intended to
predict the likelihood of events (for example, premature death) across a population, given the likelihood that a given person will engage
in certain behaviours or display certain characteristics (for example, smoking, or over-eating). Personnel selection also uses the
statistical model of prediction to select the candidates with the greatest likelihood of job success (Van Ours & Ridder, 1992).
In personnel selection, short listed job applicants are assigned scores based on a psychometric instrument or battery of instruments
used to assess them. The test scores are then correlated with a criterion measure. By applying specific statistical techniques, each piece
of information about job applicants is given some optimal weight that indicates the strength of the specific data component in predicting
future job performance. Most ability tests, objective personality inventories, biographical data forms (biodata), and certain types of
interviews (such as structured interviews) permit the assignment of scores for predictive purposes. However, even if data are collected
mechanically, they may still be combined judgementally. For example, the decision-maker can apply a clinical composite strategy by
means of which data are collected both judgementally (for example, through interviews and observations) and mechanically (for
example, through tests and BIBs), but combined judgementally. The decision-maker may subjectively interpret the personality profile of
a candidate that derived from the scores of a test without ever having interviewed or observed him or her. On the other hand, the
decision-maker can follow a mechanical composite strategy whereby job applicant data are collected judgementally and mechanically,
but combined in a mechanical fashion (that is, according to pre-specified rules such as a multiple regression equation) to derive
behavioural predictions from all available data.
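To give a sense of what a pre-specified combination rule might look like, the sketch below (Python, with invented weights and standardised scores) ranks two fictitious applicants by a weighted composite; in a real mechanical composite the weights would come from a statistical analysis such as multiple regression rather than being chosen by hand.

```python
# A minimal sketch of a mechanical composite: scores from different sources
# are combined by a pre-specified rule rather than by subjective judgement.
weights = {"cognitive_test": 0.5, "structured_interview": 0.3, "biodata": 0.2}

applicants = {
    "Applicant A": {"cognitive_test": 0.8, "structured_interview": 0.6, "biodata": 0.7},
    "Applicant B": {"cognitive_test": 0.6, "structured_interview": 0.9, "biodata": 0.5},
}

def composite(scores):
    # Weighted sum of standardised (0-1) scores, applied identically to everyone
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(applicants, key=lambda name: composite(applicants[name]), reverse=True)
print(ranked)   # applicants ordered by the composite prediction
```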
The statistical or mechanical decision-making strategy is regarded as being superior to clinical decisions because selection decisions
are based on an objective decision-making statistical model. Whereas human beings, in most cases, are incapable of accurately
processing all the information gathered from a number of job applicants, statistical models are able to process all of this information
without human limitations. Mechanical models allow for the objective and appropriate weighting of predictors, which is important to
improve the accuracy of predictions. In addition, because statistical methods allow the decision-maker to incorporate additional evidence
on candidates, the predictive accuracy of future performance of candidates can be further improved. However, it is important to note that
judgemental or clinical methods can be used to complement mechanical methods, because they provide rich samples of behavioural
information. Mechanical procedures are applied to formulate optimal ways of combining data to improve the accuracy of job applicant
performance predictions (Cascio & Aguinis, 2005; Riggio, 2009).
Methods for combining scores
Various prediction models are used by decision-makers to improve the accuracy of predictions concerning the future performance of job
candidates on which basis candidates are either rejected or selected. Prediction models are generally classified as being compensatory
and non-compensatory. Compensatory models recognise that people are able to compensate for a relative weakness in one attribute through a strength in another, assuming that both attributes are required by the job. In practical terms, this means that a
good score on one test can compensate for a lower score on another test. For example, a good score in an interview or work sample test
may compensate for a slightly lower score on a cognitive ability test. If one attribute (for example, communication skill) turns out to be
much more important than another (for example, reasoning skill), compensatory prediction models such as multiple regression provide
ways to weight the individual scores to give one score greater influence on the final total score (Landy & Conte, 2004). On the other
hand, non-compensatory prediction models such as the multiple-cut-off approach assume curvilinearity (or a non-linear relationship) in
the predictor-criterion relationship. For example, a minimum level of visual acuity may be required for the successful performance of a pilot’s job.
However, increasing levels of visual acuity do not necessarily mean that the candidate will be a correspondingly better pilot (Cascio &
Aguinis, 2005).
Compensatory and non-compensatory approaches are often used in combination to improve prediction accuracy in order to make
high-quality selection decisions. The most basic prediction models that will be discussed in more detail are: the multiple regression
approach, the multiple cut-off model, and the multiple hurdle system.
The compensatory approach to combining scores
As you recall, up to now, we have referred only to situations where we had to examine the bivariate relationship between a single
predictor such as a test score, and a criterion, such as a measure of job performance. However, in practice, industrial psychologists have
to deal with situations in which more than one variable is associated with a particular aspect of an employee’s behaviour. Furthermore,
in real-world job-success prediction, decisions generally are made on the basis of multiple sources of information. Although cognitive
ability is an important predictor of job performance, other variables such as personality, experience, and motivation (the various KSAOs
determined by means of a job analysis and described in a job description) may also play an important role in predicting an applicant’s
future performance on the job. However, examining relationships involving multiple predictor variables (also called multivariate
relationships) can be complex and generally requires the use of advanced statistical techniques.
One such statistical technique which is a very popular prediction strategy and used extensively in personnel decision-making is the
so-called multiple regression model. We discussed the concept of regression in chapter 2. Since the multiple regression technique
requires both predictor data (test or non-test data) and criterion data (performance levels), it can be used only if some measures of
performance are available (Landy & Conte, 2004). In the case of multiple regression, we have one criterion variable, but more than one
predictor variable. How well two or more predictors, when combined, improve the predictability of the criterion depends on their
individual relationships to the criterion and their relationship to each other. This relationship is illustrated by the two Venn diagrams in
Figure 6.13. The overlapping areas in each of the two Venn diagrams show how much the predictors overlap the criterion. The
overlapping areas are the validities of the predictors, symbolised by the notation r1c or r2c, where the subscript 1 or 2 stands for the first or second predictor and r12 stands for the inter-correlation coefficient (overlap) between the two predictors. The notation c stands for the criterion. Note in Figure 6.13 (a) that the two predictors are unrelated to each other, meaning that they predict different aspects of the criterion. Figure 6.13 (b) indicates that the inter-correlation between the two predictors (r12) is not zero, that is, they share some variance with one another. Figure 6.13 (b) further indicates that each predictor correlates substantially with the criterion (r1c and r2c).
Figure 6.13 Venn diagrams depicting multiple predictors (Muchinsky, 2009)
[Reproduced with the permission of PM Muchinsky.]
The combined relationship between two or more predictors and the criterion is referred to as a multiple correlation (R). The only
conceptual difference between r and R is that the range of R is from 0 to 1,0, while r ranges from –1,0 to +1,0. When R is squared, the
resulting R² value represents the total amount of variance in the criterion that can be explained by two or more predictors. When predictors 1 and 2 are not correlated with each other, the squared multiple correlation (R²) is equal to the sum of the squared individual validity coefficients, or:

R²c.12 = r1c² + r2c² = (0,60)² + (0,50)² = 0,36 + 0,25 = 0,61

The notation R²c.12 is read ‘the squared multiple correlation between the criterion and two predictors (predictor 1 and predictor 2)’. In this condition (when the two predictors are unrelated to each other), 61 per cent of the variance in the criterion can be explained by the two predictors (Muchinsky et al, 2005:133).
In most cases, however, it is rare that two predictors related to the same criterion are unrelated to each other. Usually all three
variables share some variance with one another; that is, the inter-correlation between the two predictors (r12) is not zero. Such a
relationship is presented graphically in Figure 6.13 (b). In the figure each predictor correlates substantially with the criterion (r1c), and
the two predictors also overlap each other (r12) (Muchinsky et al, 2005:133).
The addition of the second predictor adds more criterion variance than can be accounted for by one predictor alone. Yet not all of
the criterion variance accounted for by the second predictor is new variance; part of it was explained by the first predictor. When there is
a correlation between the two predictors (r12), the equation for calculating the squared multiple correlation must be expanded to:

R²c.12 = (r1c² + r2c² − 2 r1c r2c r12) / (1 − r12²)

For example, given the validity coefficients from the previous example and an inter-correlation between the two predictors of r12 = 0,30, we have:

R²c.12 = (0,36 + 0,25 − 2(0,60)(0,50)(0,30)) / (1 − (0,30)²) = 0,43 / 0,91 = 0,47
As can be seen, the explanatory power of two inter-correlated predictor variables is diminished compared to the explanatory power when
they are uncorrelated (0,47 versus 0,61). This example provides a rule about multiple predictors: it is generally advisable to seek
predictors that are related to the criterion but are uncorrelated with each other. However, in practice, it is very difficult to find multiple
variables that are statistically related to another variable (the criterion) but at the same time statistically unrelated to each other. Usually
variables that are each predictive of a criterion are also predictive of each other. Also note that the abbreviated version of the equation used to calculate the squared multiple correlation with independent predictors is simply a special case of the expanded equation in which r12 equals zero (Muchinsky et al, 2005:133, 134).
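To make the arithmetic concrete, the brief Python sketch below computes the squared multiple correlation for both cases, using the illustrative validity coefficients of 0,60 and 0,50 and the inter-correlation of 0,30 from the example above (the function itself is only a minimal sketch, not part of any particular statistical package):

def r_squared_two_predictors(r1c, r2c, r12=0.0):
    """Squared multiple correlation of a criterion on two predictors."""
    # When r12 = 0 (independent predictors), this reduces to r1c**2 + r2c**2.
    return (r1c ** 2 + r2c ** 2 - 2 * r1c * r2c * r12) / (1 - r12 ** 2)

print(round(r_squared_two_predictors(0.60, 0.50), 2))        # 0.61 (uncorrelated predictors)
print(round(r_squared_two_predictors(0.60, 0.50, 0.30), 2))  # 0.47 (inter-correlated predictors)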
As mentioned previously, multiple regression is a compensatory type of statistical model, which means that high scores on one
predictor can compensate for low scores on another predictor. For example, an applicant’s lack of previous job-related experience can be
compensated for or substituted by test scores that show great potential for mastering the job (Riggio, 2009). The statistical assumptions
and calculations on which the multiple regression model is based (beyond the scope of this text) involve a complex mathematical
process that combines and weights several individual predictor scores in terms of their individual correlations with the criterion and their
inter-correlations with each other in an additive, linear fashion (Tredoux, 2007). In employee selection, this means that the ability of
each of the predictors to predict job performance can be added together and there is a linear relationship between the predictors and the
criterion: higher scores on the predictors will lead to higher scores on the criterion. The result is an equation that uses the various types
of screening information in combination (Riggio, 2009).
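As a rough illustration of how such a weighted, additive combination works, the Python sketch below estimates regression weights for two predictors by ordinary least squares and then combines a new applicant's scores into a single predicted criterion score. All test scores and performance ratings in the sketch are hypothetical:

import numpy as np

# Hypothetical predictor scores (cognitive ability test and work sample test)
# and criterion scores (performance ratings) for six current employees.
cognitive = np.array([55.0, 62.0, 48.0, 70.0, 66.0, 51.0])
work_sample = np.array([60.0, 58.0, 65.0, 72.0, 55.0, 49.0])
performance = np.array([3.1, 3.4, 3.0, 4.2, 3.5, 2.8])

# Estimate an intercept and one weight per predictor by ordinary least squares.
X = np.column_stack([np.ones_like(cognitive), cognitive, work_sample])
weights, *_ = np.linalg.lstsq(X, performance, rcond=None)

# Predicted performance for a new applicant: the weighted predictor scores are
# simply added, so a strong work-sample score can offset a weaker cognitive score.
new_applicant = np.array([1.0, 50.0, 75.0])  # [intercept term, cognitive, work sample]
print(round(float(new_applicant @ weights), 2))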
However, apart from its obvious strengths, the compensatory nature of the multiple regression approach can also be problematic in
selection. Take, for example, the case where an applicant with an uncorrectable visual problem has applied for a job as an inspector of
micro-circuitry (a position that requires the visual inspection of very tiny computer circuits under a microscope). Although the
applicant’s scores on a test of cognitive ability may show great potential for performing the job, he or she may score poorly on a test of
visual acuity as a result of the visual problem. Here, the compensatory regression model would not lead to a good prediction, for the
visual problem would mean that the applicant would fail, regardless of his or her potential for handling the cognitive aspects of the job
(Riggio, 2009).
Other linear approaches to combining scores
Other linear approaches to selection include: unadjusted top-down selection, rule of 3, passing scores, and banding. In a top-down
selection approach, applicants are rank-ordered on the basis of their test scores, and selection is then made by starting with the highest
score and moving down until all vacancies have been filled. Although the advantage of this approach is that by hiring the top scorers on
a valid test, an organisation will gain the most utility, the top-down approach can result in high levels of adverse impact. Similarly to the
unadjusted top-down selection approach, the rule of 3 (or rule of 5) technique involves giving the names of the top three or five scorers
to the person making the hiring decision. The decision-maker can then choose any of the three (or five), based on the immediate needs of
the employer (Aamodt, 2010).
The passing scores system is a means of reducing adverse impact and increasing flexibility in selection. With this system, an
organisation determines the lowest score on a test that is associated with acceptable performance on the job. For example, suppose a
company determines that any applicant scoring 70 or above will be able to perform adequately the duties of the particular job for which
he or she has applied. If the company set 70 as the passing score, it could fill its vacancies with any of the applicants scoring 70 or better. This flexibility allows the company, for example, to fill some of the openings with black females in pursuit of its affirmative action goals. Although the use of passing scores generally helps companies to reach their affirmative action goals, determining these scores can be a complicated
process, full of legal pitfalls. Legal problems can occur when unsuccessful applicants challenge the validity of the passing score
(Aamodt, 2010).
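The short Python sketch below contrasts unadjusted top-down selection, the rule of 3 and a passing-score system; the applicant names, scores, number of vacancies and the cut-off of 70 are all hypothetical:

# Hypothetical applicants and their scores on a valid selection test.
scores = {"Dlamini": 88, "Naidoo": 92, "Botha": 75, "Mokoena": 81, "Peters": 69}
ranked = sorted(scores, key=scores.get, reverse=True)  # highest scorer first

vacancies = 2
top_down_hires = ranked[:vacancies]  # unadjusted top-down: hire strictly from the top
rule_of_three = ranked[:3]           # shortlist handed to the decision-maker

passing_score = 70                   # lowest score associated with acceptable performance
eligible = [name for name in ranked if scores[name] >= passing_score]

print(top_down_hires)  # ['Naidoo', 'Dlamini']
print(rule_of_three)   # ['Naidoo', 'Dlamini', 'Mokoena']
print(eligible)        # vacancies may be filled with any of these applicants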
Score banding creates categories of scores, with the categories arranged from high to low. For example, the ‘A band’ may range
from 100 to 90, the ‘B band’ from 89 to 80, and so on. Banding takes into consideration the degree of error associated with any test
score. Industrial psychologists, psychometrists, and statisticians universally accept that every observed score (whether a test score or
performance score) contains a certain amount of error (which is associated with the reliability of the test). The less reliable a test, the
greater the error. Even though one applicant may score two points higher than another, the two-point difference may be the result of
chance (error) rather than actual differences in ability (Aamodt, 2010; Landy & Conte, 2004).
A statistic called the standard error of measurement (SEM) is generally used to determine how far apart two applicants’ scores have to be before the scores can be regarded as significantly different. Using the SEM, all candidate scores within a band are
considered ‘equal’ with respect to the attribute being measured if they fall within some specified number of SEMs of each other (usually
two SEMs). It is assumed that any within-band differences are really just differences owing to the unreliability of the measure. Using the
banding approach, all candidates in the highest band would be considered before any candidates in the next lower band (Aamodt, 2010;
Landy & Conte, 2004).
The use of banding has been hotly debated. Research indicates that banding will result in lower utility than top-down hiring
(Schmidt, 1991) and may also contribute to issues associated with adverse impact and fairness (Truxillo & Bauer, 1999).
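The Python sketch below shows how such a top band might be formed; the applicants, their scores and the test’s reliability are hypothetical, and the band width of two SEMs follows the convention mentioned above:

import statistics

scores = {"Dlamini": 88, "Naidoo": 92, "Botha": 90, "Mokoena": 81, "Peters": 69}
reliability = 0.90  # hypothetical reliability of the test

# SEM = standard deviation * square root of (1 - reliability)
sd = statistics.pstdev(scores.values())
sem = sd * (1 - reliability) ** 0.5

top_score = max(scores.values())
band_floor = top_score - 2 * sem  # all scores within two SEMs of the top score

top_band = sorted(name for name, s in scores.items() if s >= band_floor)
print(round(sem, 2), top_band)  # these candidates are treated as 'equal' and considered first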
Non-compensatory methods of combining scores
The multiple cut-off and multiple hurdle approaches are two examples of non-compensatory strategies which are particularly useful
when screening large pools of applicants. The multiple cut-off strategy can be used when the passing scores of more than one test are
available. The multiple cut-off approach is generally used when one score on a test cannot compensate for another or when the
relationship between the selection test (predictor) and performance (criterion) is not linear. Because the multiple-cut-off approach
assumes curvilinearity (that is, a non-linear relationship) in the predictor-criterion relationship, its outcomes frequently lead to different decisions from those of a multiple regression analysis, even when approximately equal proportions of applicants are selected by each method (Cascio & Aguinis, 2005). When the two methods are used in combination, applicants would, for example, be eligible for hire only if their regression scores are high and they also score above the cut-off on each of the predictor dimensions.
The multiple cut-off model uses a minimum cut-off score on each of the selected predictors. Applicants will be administered all of
the tests at one time. If they fail any of the tests (that is, they fall below the passing score), they will not be considered further for
employment. An applicant must obtain a score above the cut-off on each of the predictors to be hired. Scoring below the cut-off on any
one predictor automatically disqualifies the applicant, regardless of the scores on the other tests or screening variables (Aamodt, 2010;
Riggio, 2009). For example, suppose that a job analysis finds that a good fire-fighter is intelligent and confident, has the stamina and aerobic endurance to fight a fire using a self-contained breathing apparatus, and holds a national senior certificate. A validity study indicates that the relationships of both intelligence and confidence with job performance are linear: the smarter and more confident the fire-fighter, the better he or she performs. Because the relationships of stamina and of the national senior certificate with job performance are not linear, a multiple-cut-off approach is followed in which applicants need to pass the endurance test and hold a national senior certificate. The fire department might also set a minimum or cut-off score on the endurance or stamina measure and on the grade point average (GPA) for the national senior certificate. If applicants meet both of these minimum requirements, their confidence levels and cognitive ability test scores are used to determine who will be hired.
The main advantage of the multiple cut-off strategy is that it ensures that all eligible applicants possess at least a minimum level of ability on every dimension that is predictive of job success (Riggio, 2009). However, because of legal issues such as fairness and adverse impact, industrial psychologists need to take particular care, when setting cut-off scores, that the scores do not unfairly discriminate against members of certain designated groups stipulated in the EEA. Furthermore, the multiple-cut-off approach can be
quite costly. If an applicant passes only three out of four tests, he or she will not be hired, but the organisation has paid for the applicant
to take all four tests (Aamodt, 2010).
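A minimal Python sketch of the multiple cut-off logic follows; the predictor names and cut-off values are hypothetical:

# Hypothetical cut-off scores for the non-linearly related predictors.
cut_offs = {"stamina": 60, "senior_certificate_average": 50}

def passes_all_cut_offs(applicant_scores):
    # The model is non-compensatory: one score below its cut-off disqualifies
    # the applicant, regardless of the other scores.
    return all(applicant_scores[predictor] >= cut_off
               for predictor, cut_off in cut_offs.items())

applicant = {"stamina": 72, "senior_certificate_average": 48,
             "cognitive_ability": 65, "confidence": 80}
print(passes_all_cut_offs(applicant))  # False: below the certificate cut-off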
The multiple hurdle system is a non-compensatory selection strategy that is quite effective in reducing a large applicant pool to a more manageable size. It uses an ordered sequence of screening
devices. At each stage in the sequence, a decision is made either to reject an applicant or to allow the applicant to proceed to the next
stage. In practical terms, this means that the applicants are administered one test at a time, usually beginning with the least expensive.
Applicants who fail a test are eliminated from further consideration and take no more tests. Applicants who pass all of the tests are then
administered the linearly-related tests, and the applicants with the top scores on these tests are hired (Aamodt, 2010). For example, in the
screening of candidates, the first hurdle may be a test of cognitive ability. If the individual exceeds the minimum score, the next hurdle
may be a work sample test. If the candidate exceeds the cut score for the work sample test, he or she is scheduled for an interview.
Typically, all applicants who pass all the hurdles are then selected for the jobs (Landy & Conte, 2004; Riggio, 2009). Although the multiple hurdle approach is an effective strategy for screening large pools of candidates, it can also be quite expensive and time-consuming, and it is therefore usually used only for jobs that are central to the operation of the organisation (Riggio, 2009).
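The Python sketch below illustrates the sequential nature of a multiple hurdle system; the hurdles, their order (cheapest first), the cut scores and the applicant’s scores are all hypothetical:

# Hurdles in the order in which they are administered, each with its cut score.
hurdles = [("cognitive_ability", 60), ("work_sample", 65), ("interview", 70)]

def survives_hurdles(applicant_scores):
    for hurdle, cut_score in hurdles:
        if applicant_scores[hurdle] < cut_score:
            return False  # rejected at this stage and takes no further tests
    return True           # passed every hurdle in the sequence

applicant = {"cognitive_ability": 74, "work_sample": 68, "interview": 62}
print(survives_hurdles(applicant))  # False: eliminated at the interview stage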
Figure 6.14 summarises the various selection decisions that we have discussed up to now. Note that the diagram builds on the
concepts we have discussed in chapters 4 and 5. The Reflection that follows explains the various steps in selection decision-making by
means of an example.
Placement
Employee placement is the process of assigning workers to appropriate jobs after they have been hired (Riggio, 2009). Appropriate
placement of employees entails identifying what jobs are most compatible with an employee’s skills and interests, as assessed through
questionnaires and interviews (Muchinsky, 2003). After placing employees, industrial psychologists conduct follow-up assessments to
determine how well their selection and placement methods predict employee performance. They refine their methods when needed
(Kuther, 2005).
Usually people are placed in the specific job for which they have applied. However, at times an organisation (and especially large
organisations) may have two or more jobs that an applicant could fill. In this case, it must be decided which job best matches that
person’s talents and abilities. Whereas selection generally involves choosing one individual from among many applicants to fill a given
opening, placement, in contrast, is often a process of matching multiple applicants and multiple job openings on the basis of a single
predictor score (Landy & Conte, 2004). Placement decisions are easier when the jobs are very different. It is easier to decide whether a
person should be assigned as a manager or a clerk, for example, than to decide between a secretary and a clerk. The manager and clerk
jobs require very different types of skills, whereas the secretary and clerk jobs have many similar job requirements (Muchinsky et al,
2005).
Landy and Conte (2004:278) suggest three general strategies for matching multiple candidates to multiple openings:
• Provide vocational guidance by placing candidates according to their best talents.
• Make a pure selection decision by filling each job with the most qualified person.
• Use a cut-and-fit approach, that is, place workers so that all jobs are filled with adequate talent.
In today’s global environment, many organisations are multinational, with offices around the world. This trend requires organisations to
become more flexible in their selection and, in particular, their placement strategies. In addition, the increasing use of work teams also
suggests that the rigid, ‘pure’ selection model may be less effective than a placement model that can also take into account the KSAOs
of existing team members in deciding where to place a new hire or promotion (Landy & Conte, 2004; Riggio, 2009).
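As a rough illustration of the ‘cut-and-fit’ idea listed above, the Python sketch below assigns three hypothetical applicants to three openings so that overall predicted fit is as high as possible; the predicted-fit values are invented, and the assignment routine (scipy’s linear_sum_assignment) is simply one convenient way of solving this kind of matching problem:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical predicted fit of each applicant (rows) with each opening (columns A, B, C).
fit = np.array([
    [0.80, 0.55, 0.40],
    [0.60, 0.75, 0.50],
    [0.30, 0.45, 0.70],
])

# Choose the one-to-one assignment that maximises total predicted fit.
rows, cols = linear_sum_assignment(fit, maximize=True)
for applicant, job in zip(rows, cols):
    print(f"Applicant {applicant + 1} -> opening {'ABC'[job]} (predicted fit {fit[applicant, job]:.2f})")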
Figure 6.14 Overview of the selection decision-making process
Reflection 6.7
Review Figure 6.14. Study the following example carefully and see if you can identify each step outlined in the selection
decision-making framework as illustrated in the diagram.
Decision-making in selection: a systematic process
The local municipal fire department decided to recruit 10 new fire-fighters. Since management had experienced some difficulty with
the selection devices they had applied in the past, they decided to obtain the services of a professionally-qualified industrial
psychologist to assist them with this task.
The industrial psychologist discovered that no current job information was available and decided to conduct a formal job analysis
in order to develop criteria for recruitment and selection purposes. In analysing the job demands of a fire-fighter, the following valued
performance aspects were identified as broad conceptual constructs. Successful or high-performing fire-fighters are able to rescue
people and property from all types of accident and disaster. They also make an area safer by minimising the risks, including the social
and economic costs, caused by fire and other hazards. They further promote fire safety and enforce fire safety standards in public and
commercial premises by acting and advising on all matters relating to the protection of life and property from fire and other risks.
Their job duties are linked with the service, health and safety vision and mission of the municipality.
Some of the processes and procedures to be included in the job description of a fire-fighter that were revealed by the job analysis
included:
• Attending emergency incidents, including fires, road accidents, floods, spillages of dangerous substances, rail and air
crashes and bomb incidents
• Rescuing trapped people and animals
• Minimising distress and suffering, including giving first aid before ambulance crews arrive
• Inspecting and maintaining the appliance (fire engine) and its equipment, assisting in the testing of fire hydrants and
checking emergency water supplies
• Maintaining the level of physical fitness necessary to carry out all the duties of a fire-fighter
• Responding quickly to unforeseen circumstances as they arise
• Writing incident reports
• Educating and informing the public to help promote safety.
The following knowledge, skills, abilities and other characteristics (KSAOs) were identified for the job description, including a person
specification:
• Physical fitness and strength
• Good, unaided vision and hearing
• Willingness to adapt to shift work
• Ability to operate effectively in a close team
• Initiative
• Flexibility
• Honesty
• Ability to take orders
• A reassuring manner and good communication skills to deal with people who are injured, in shock or under stress
• Sound judgement, courage, decisiveness, quick reactions and the ability to stay calm in difficult circumstances (ability to
deal with accidents and emergencies and dealing with fatalities)
• Willingness and ability to learn on a continuing basis
• An interest in promoting community safety, education and risk prevention.
With the job analysis completed, the industrial psychologist was now ready to make various decisions, including deciding on the
predictors that could be used for selection. In identifying the predictors that would be used as measures of successful performance, the
following psychometric standards were considered: reliability, validity, utility, fairness and cost. The following predictors were
chosen:
Test predictors
• Statutory physical fitness test
• A stringent medical and eye examination
• Occupational personality questionnaire to obtain a personality profile required for a successful fire-fighter.
Non-test predictors (behavioural predictors)
• Structured interview to determine the applicant’s professional conduct and attitude
• Situational exercises to determine how the applicant will respond in emergency and high-pressure situations. A work simulation was therefore developed, with higher scores indicating better performance. This decision was based on previous research that showed work simulations or situational exercises to be a valid and reliable measure of performance ability for jobs that demand physical strength and fitness.
The municipal outsourced the recru