Futurewise Profile:
Practitioner's Guide
Third Draft
10/17/2012
Table of Contents
About this Guide
Part 1: Futurewise Profile
1.0 Introduction
1.1 Assessment suite
1.2 Administering tests and questionnaires
1.3 Scoring tests and questionnaires
1.4 Futurewise Profile reports
1.5 Generating job suggestions
1.6 Review and feedback process
Part 2: Additional Resources
2.0 Answers to practice questions
2.1 Type summaries
2.2 Learning approaches
2.3 Futurewise job families
2.4 Sample reports
2.5 Supporting manuals
2.6 Conversion Tables between test levels
About this Guide
This guide is principally about the assessments and reports that comprise the
psychometric aspect of the Futurewise Profile system. It also provides advice on the
feedback and review process. It mentions aspects of system set-up and online administration, but these topics are covered in depth in the User Guide.
Part 1: Futurewise Profile
1.0 Introduction
Futurewise Profile is a comprehensive online careers guidance tool. It combines a
range of contemporary reasoning tests and questionnaires with a powerful
management and reporting system. The tests measure verbal, numerical and abstract aptitude, and memory and attention; the questionnaires measure work-based personality and career interests.
Verbal Reasoning Test: the ability to understand written information and determine what follows logically from that information.
Numerical Reasoning Test: the ability to use numerical information to solve everyday problems.
Abstract Reasoning Test: the ability to identify patterns in abstract shapes and generate and test hypotheses.
Memory and Attention Test: the ability to remember and follow complex sets of instructions and to respond quickly and accurately.
Type Dynamics Indicator: the pattern of personality in terms of the four dimensions of Type psychology.
Career Interests Inventory: the level and pattern of interest in Holland's six career themes.
Assessment sessions can be conducted flexibly, in either a supervised or an
unsupervised manner. This means that all the assessments can be supervised, or
unsupervised, or a mixture of both. The two methods of administration use two different sets of aptitude tests: supervised sessions use 'closed' tests and unsupervised sessions use 'open' tests. All the assessments are delivered online and can be completed in formal assessment sessions, as part of careers or other lessons, or during homework periods, etc.
Within supervised sessions four versions of timed verbal, numerical and abstract
tests are available; within unsupervised sessions two (broader) versions of
equivalent timed tests are available. One version of the timed Memory & Attention
Test is suitable for all students.
Two versions of the untimed personality questionnaire are available: a pictorial
version that is appropriate for all ages, and a text-based version which may be more
appropriate for older students, e.g. for those in the Sixth Form or its equivalent. The
text-based version of the personality questionnaire may also be more suitable in
international settings because of the different cultural interpretations of some of the
pictures. One version of the untimed Career Interests Inventory is suitable for all
students.
However, despite the different versions of tests and the personality questionnaire,
the most typical supervised administration would comprise: verbal, numerical and
abstract reasoning tests Version-2 (suitable for those considering A-Level or
equivalent qualifications), the Memory and Attention Test, the pictorial version of the
Type Dynamics Indicator and the Career Interests Inventory.
The average completion time for this combination of assessments is 80-90 minutes, although the total time may be greater because of any introductory sessions by advisors or administrators, and the time taken by the student to read the various sets of instructions, view example items and complete practice material. In practice, however, 120 minutes is more than sufficient to complete the assessments in one session.
The Futurewise Profile system produces two reports: a student's report and an
advisor's report. These are discussed in detail in this Guide.
In addition, once the profiling process and guidance interview are complete, students can access the MyFuturewise web site. This contains a record of their results and
also allows them to explore their career suggestions in greater depth using the
careers database. Other careers-related tools and information are also provided.
1.1 Assessment suite
This section provides more detail on each of the assessments.
1.1.1 Verbal Reasoning Test
Verbal aptitude is measured using a Verbal Reasoning Test (VRT). The
verbal tests consist of passages of information, with each passage
being followed by a number of statements. Students have to judge
whether each of the statements is true or false on the basis of the
information in the passage, or whether there is insufficient information
in the passage to determine whether the statement is true or false. In
the latter case, the correct answer option is ‘can’t tell’.
As students come to the testing situation with different experiences and knowledge, the instructions state that responses to the statements should be based only on the information contained in the passages, not on any prior knowledge they may have.
Ultimately these instructions also reflect the situation faced by many
employees who have to make decisions on the basis of information
presented to them. In these circumstances decision-makers are often
not experts in the particular area and have to assume the information is
correct, even if they do not know this for certain.
The passages in the verbal tests cover a broad range of subjects. As
far as possible, these have been selected so that they do not reflect
particular occupational areas. Passages were also written to cover
both emotionally neutral areas and areas in which students may hold
opinions. Again, this was intended to make the Verbal test a valid analogy
of decision-making processes, where individuals have to reason
logically with both emotionally neutral and personally involving material.
Each statement has three possible answer options – true, false and
can’t tell – giving students a one-in-three or 33% chance of guessing
the answer correctly.
The quite generous time limits and the 'not reached' figures (the number of people who do not attempt all the questions) suggest that guessing is unlikely to be a major factor in the verbal test.
The proportion of true, false and can’t tell answers was balanced in
both the trial and final versions of the verbal tests. The same answer
option is never the correct answer for more than three consecutive
statements.
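For readers who want to see the logic, these balancing rules can be captured in a simple check. The following is a minimal sketch in Python; it is purely illustrative and not part of the Futurewise system (the option labels and helper name are assumptions):

    from collections import Counter

    def check_answer_key(key, max_run=3):
        # Count how often each option is the correct answer, and find
        # the longest run of consecutive identical correct answers.
        counts = Counter(key)
        run = longest = 1
        for prev, curr in zip(key, key[1:]):
            run = run + 1 if curr == prev else 1
            longest = max(longest, run)
        return {
            "proportions": {opt: n / len(key) for opt, n in counts.items()},
            "longest_run": longest,
            "balanced_runs": longest <= max_run,
        }

    # A hypothetical 12-statement key: runs of identical answers never
    # exceed three, so the run check passes.
    key = ["true", "false", "cant_tell", "true", "true", "false",
           "cant_tell", "false", "true", "cant_tell", "false", "true"]
    print(check_answer_key(key))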
The score for the verbal tests is based on the total number of questions
answered correctly. Additional information is also gathered on the
number attempted versus the number correct, and the speed and
accuracy with which the test is completed.
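To make the relationship between these figures concrete, here is a minimal sketch of how such metrics could be derived from a set of responses. The exact formulas used by Futurewise Profile are not published in this guide, so treat the accuracy and speed calculations below as illustrative assumptions:

    def score_test(responses, key, time_taken_secs):
        # Unanswered items are recorded as None; everything else
        # counts as attempted.
        attempted = [i for i, r in enumerate(responses) if r is not None]
        correct = sum(1 for i in attempted if responses[i] == key[i])
        return {
            "correct": correct,  # the reported score
            "attempted": len(attempted),
            "accuracy": correct / len(attempted) if attempted else 0.0,
            "items_per_minute": 60.0 * len(attempted) / time_taken_secs,
        }

    # A student attempts five of six items in four minutes, four correctly.
    key = ["true", "false", "cant_tell", "true", "false", "true"]
    responses = ["true", "false", "cant_tell", "false", None, "true"]
    print(score_test(responses, key, time_taken_secs=240))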
Practically, students are able to use rough paper during the test and are
issued with two sheets and a pencil at the beginning; when this test is
completed remotely, or in an unsupervised manner, students should
also be told that they can use rough paper.
The test is designed to measure verbal reasoning ability (aptitude). This
is important for activities or occupations that require the analysis and
interpretation of written material, or the accurate communication of
verbal ideas. Well developed verbal aptitude is essential for many
professional jobs, such as law, medicine and teaching; administrative
and managerial jobs; sales, marketing and related activities; and those
aspects of science and technology in which accurate communication is
important, for example in engineering and computing.
1.1.2 Numerical Reasoning Test
Numerical Aptitude is measured using a Numerical Reasoning Test
(NRT). The numerical tests present students with numerical information
and ask them to solve problems using that information. Some of the
harder questions introduce additional information which also has to be
used to solve the problem. Students have to select the correct answer
from the list of options given with each question.
Numerical items require only basic mathematical knowledge to solve
them. All mathematical operations used are covered in the GCSE
mathematics syllabus, with problems reflecting how numerical
information may be used in work-based contexts.
Areas covered include: basic mathematical operations (+, −, ×, ÷),
fractions, decimals, ratios, time, powers, area, volume, weight, angles,
money, approximations and basic algebra. The tests also include
information presented in a variety of formats, again to reflect the skills needed to extract appropriate information from a range of sources.
Formats for presentation include: text, tables, bar graphs, pie charts
and plans.
Each question in the numerical test is followed by five possible answer
options, giving students a one-in-five or 20% chance of obtaining a
correct answer through guessing.
The distracters (incorrect answer options) were developed to reflect the
kinds of errors typically made when performing the calculations needed
for each problem. The answer option ‘can’t tell’ is included as the last
option for some problems. This is included to assess students’ ability
to recognise when they have insufficient information to solve a problem.
As with the verbal tests, the same answer option is never the correct
answer for more than three consecutive statements.
The score for the numerical tests is based on the total number of
questions answered correctly. Additional information is also gathered on
the number attempted versus the number correct, and the speed and
accuracy with which the test is completed.
Practically, students are able to use rough paper during the test and are
issued with two sheets and a pencil at the beginning; when this test is
completed remotely, or in an unsupervised manner, students should
also be told that they can use rough paper. Students are not allowed to
use calculators.
The test is designed to measure numerical reasoning ability (aptitude).
This is important for activities or occupations that require the analysis
and interpretation of different forms of numerical information, or the
precise communication of quantitative or numerical ideas, or situations
where accurate measurements are required. Well developed numerical
aptitude is essential in many commercial or related jobs, such as
accounting, banking and finance; numerate administrative work or
project management jobs; and, numerical professional-technical
activities such as the sciences, statistics, IT and all forms of surveying.
1.1.3 Abstract Reasoning Test
Abstract aptitude is measured using an Abstract Reasoning Test
(ART). The abstract tests are based on a categorisation task. Students
are shown two sets of shapes, labelled ‘Set A’ and ‘Set B’. All the
shapes in Set A share a common feature or features, as do the shapes
in Set B. Students have to identify the theme linking the shapes in
each set and then decide whether further shapes belong to Set A, Set
B or neither set.
The abstract test requires a holistic, inductive approach to problem-solving and hypothesis-generation, and does not simply involve the student deciding what the next shape in a linear sequence of shapes might be.
People operating in professional or managerial positions are often
required to focus on different levels of detail, and to switch between
these rapidly (e.g. understanding budget details and how these relate
to longer-term strategy). These skills are assessed through the
Abstract test, as it requires test takers to see patterns at varying levels
of detail and abstraction.
The test can also be a particularly valuable tool for spotting potential in
young people or those with less formal education, as it has minimal
reliance on educational attainment and language. In exceptional circumstances this test can provide a guide to a student's overall capability (general aptitude) - although if it is to be used to produce an 'estimate' of other aptitudes, this must be confirmed with an assessment expert.
Students are required to identify whether each shape belongs to Set A,
Set B or neither set. This gives three possible answer options,
meaning test takers have a one-in-three or 33% chance of guessing
answers correctly.
As with the other tests, the proportion of items to which each option is
the correct answer has been balanced. The same answer option is
never the correct answer for more than four consecutive shapes.
The score for the abstract tests is based on the total number of
questions answered correctly. Additional information is also gathered
on the number attempted versus the number correct, and the speed
and accuracy with which the test is completed.
Practically, students are able to use rough paper during the test and are
issued with two sheets and a pencil at the beginning; when this test is
completed remotely, or in an unsupervised manner, students should
also be told that they can use rough paper.
The test is designed to measure abstract reasoning ability (aptitude).
This is important for activities or occupations that require the generation
of hypotheses, or as has been mentioned, the ability to rapidly switch
between different levels of information. Well developed abstract
aptitude is important in a broad range of science, mathematics,
engineering, IT and design activities; in technical jobs that require
problem solving or fault identification; and in managerial jobs where an
appreciation of the tactical and strategic implications of a course of
action are essential.
1.1.4 Test versions
The verbal, numerical and abstract tests are available in a number of different 'closed' and 'open' versions.
• Closed versions of tests are only for use in supervised assessment sessions.
• Open versions of tests can be used in supervised sessions but are actually designed to be completed 'remotely', e.g. by a student in an unsupervised setting at school or home.
A guide to the versions of the various tests is provided below. In most circumstances closed Version-2 tests will be used. Also note that test versions cannot be mixed for an individual, i.e. a single student cannot complete a Version-2 Verbal test, a Version-3 Numerical test and so on.
The closed Version-3 tests should only be used with high-performing students or groups of students; and whilst closed Version-4 tests are available, it is unlikely that they will be appropriate for the students served by the Futurewise Profile system.
For those who do not have English as a first language, which may
include students at International Schools, the open Version-1 tests
are probably the most appropriate as they cover a greater educational
range than the closed Version-1 tests (see the table below).
Closed Test version | Open Test version | Approximate educational level
Version 1 | Version 11 | Covers the top 95% of the population and is broadly representative of the general population.
Version 2 | Version 11 | Covers the top 60% of the population and is broadly representative of people who study for A/AS Levels (or equivalent), GNVQ Advanced, NVQ Level 3 and professional qualifications below degree level.
Version 3 | Version 12 | Covers the top 40% of the population and is broadly representative of the population who study for a degree at a UK University or for the BTEC Higher National Diploma/Certificate, NVQ Level 4 and other professional qualifications at degree level.
Version 4 | Version 12 | Covers the top 10% of the population and is broadly representative of the population who have a postgraduate qualification, NVQ Level 5 and other professional qualifications above degree level.
Note: The number of items and timings of the tests vary. See Part 2 of this guide for further details.
1.1.5 Memory and Attention Test
The Memory and Attention Test (MAT) is designed to assess a
student's ability to follow and to retain in memory sets of complex
instructions and to respond to these instructions rapidly and accurately.
The test consists of a number of panels (computer screens) containing shapes of different colours; the student is asked to click on specific shapes according to a particular set of instructions.
As the student progresses through the shapes, the complexity of the
instructions increases, so requiring the student to hold a relatively large
amount of information in memory in order to be able to respond
correctly.
The student is able to refer to the current instruction set at any time during the test, but each reference to the instructions counts against them in the assessment of the memory component of the task. The test is timed and respondents are asked to complete it as quickly as they are able.
In addition to the memory score based on the number of times they referred to the instructions, the test is scored in terms of both the accuracy with which the correct shapes were clicked and the time taken to complete the test.
The test simulates one of the most important aspects of the workplace:
the need to quickly memorise and retain information in order to apply
rules or procedures in a timely and accurate manner, and also to multi-task.
The MAT is a test that generates a useful understanding of
performance as students respond to increasingly complex instructions
and screens of information. There are a total of 50 screens to attempt.
The version of the MAT used in Futurewise Profile system produces
scores for memory (remembering sets of instructions), accuracy
(applying instructions precisely) and overall decision making (ability to
make decisions in a quick and accurate manner).
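This guide does not publish the MAT's scoring algorithm, but the three scores can be illustrated with a sketch. Everything below - the penalty for instruction views and the combination used for decision making - is an assumption for illustration only:

    def score_mat(instruction_views, clicks, correct_clicks, total_secs,
                  screens=50):
        # Memory: penalised each time the instructions are re-read.
        memory = max(0.0, 1.0 - instruction_views / screens)
        # Accuracy: proportion of clicks that hit the correct shapes.
        accuracy = correct_clicks / clicks if clicks else 0.0
        # Decision making: accuracy combined with speed of working.
        screens_per_minute = 60.0 * screens / total_secs
        return {"memory": memory,
                "accuracy": accuracy,
                "decision_making": accuracy * screens_per_minute}

    # Six instruction views, 72 of 80 clicks correct, ten minutes overall.
    print(score_mat(instruction_views=6, clicks=80, correct_clicks=72,
                    total_secs=600))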
Practically, students should not use rough paper during this test, or anything else that is likely to assist their memory, e.g. the voice recorder on a mobile phone.
The test is designed to measure memory and attention. This is
important for activities or occupations that require rules or sets of
instructions to be remembered accurately, in order for decisions to be
made in a precise and timely way. More generally, memory helps to
structure thought and is the basis for effective learning. Well developed
memory and attention is essential in jobs that require the rapid
acquisition of job-relevant information, such as financial
trading; and where multi-tasking (simultaneously applying different sets
of rules) is needed for safe or effective performance. This is required in
many time-critical and attention dependent jobs, or those where using
the right set of rules can have profound effects on others, such as air
traffic control, the police and armed forces.
1.1.6 Type Dynamics Indicator
Personality is assessed using the Type Dynamics Indicator (TDI). This
is an untimed self-report questionnaire that is available in pictorial or
word versions. The pictorial version is composed of 56 items (pictures and words), whereas the word version has 64 items (phrases and word pairs).
The TDI is an up-to-date measure of Type psychology based on the theories of Carl Jung. Globally the Type model is the most widely researched and popular theory of personality used in individual development. It is estimated that over 4 million people complete Type-based personality questionnaires every year.
The TDI identifies how someone views the world and makes sense of
it, and the ways in which they prefer to interact with other people - to
identify their most natural style. It does this by exploring the interaction
of four dynamic preferences:
• What attracts and energises us: Extraversion - Introversion
• How we see the world: Sensing - Intuition
• How we make decisions: Thinking - Feeling
• The way we manage the world around us: Judging - Perceiving.
These are then summarised as one of sixteen basic types. Full descriptions of these basic types are provided in a separate publication called 'Understanding Personality Type', which is part of the Team Focus Essential Guide Series. However it is important to realise that people's reported Type can and does change over time. People grow, become more insightful and learn to express their personality in different ways in different circumstances. Thus the descriptions in this guide, and those in the reports, provide a summary of what the person believes about themselves - which is the most important place to start a discussion. A flavour of the four key preferences is given below but, for a more extensive description of the implications of a person's preferences, please refer to the publication mentioned above, 'Understanding Personality Type'.
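Deriving a four-letter Type code from the four preference pairs is mechanically simple; a minimal sketch follows. The input format and the highest-score rule are illustrative assumptions (the TDI's actual scoring, including its handling of near-ties, is more sophisticated):

    # The stronger preference in each pair supplies one letter of the
    # four-letter Type code.
    DIMENSIONS = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

    def type_code(stens):
        # stens maps each preference letter to its STEN result,
        # e.g. {"E": 7, "I": 4, ...} (a hypothetical structure).
        return "".join(a if stens[a] >= stens[b] else b
                       for a, b in DIMENSIONS)

    print(type_code({"E": 7, "I": 4, "S": 3, "N": 8,
                     "T": 6, "F": 5, "J": 4, "P": 7}))  # -> ENTP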
Extraversion and Introversion (E-I) – a different focus for energy and reality
People differ in terms of the kind of environment they enjoy. It is
apparent how some people thrive when there is a lot going on, when
there is plenty of chance for discussion, interaction and activity which
keeps them busy. Others enjoy a much quieter environment where
there is a chance to internalise, reflect and to create their own internal
reality. This sometimes leads to an apparent anomaly whereby introverts function better in a busy environment than do extraverts - usually because introverts are more effective at shutting that world out when they need to concentrate, whereas extraverts may let themselves get too involved or distracted by what is going on. Everyone finds their
own way of balancing these fundamental differences which are
reflected in their basic character.
Sensing and iNtuition[1] (S-N) – different ways of seeing the world
It is apparent that people, when subjected to the same information and
experience, are capable of recounting quite a different version of
events. Jung’s model of Type suggests that this is not just a question of
responding differently, it is also that we actually see differently. We
seem to notice different elements, and remember and extract different
meanings from them. It is as though we all have on a set of spectacles,
which filter and highlight differently. A food analogy would be that there
is a pile of ingredients. One person may see it as a pile of eggs, flour
and sugar. Another person may see that it is a potential cake. Thus
some people are very tuned in to practical details and specific facts (the
individual ingredients of eggs, flour and sugar) - the Sensing
preference. Others seem to make broad and abstract links, to see
patterns and possibilities without a great grasp of the details (the
potential cake) - the iNtuitive preference.
Thinking and Feeling (T-F) – different ways of making decisions
People argue, persuade, form their opinions and make judgements in
quite different ways. Some people are impressed by rational argument.
They like to have reasons which are logical and do not feel comfortable
with any decision until they have a clear rationale. They need to make
connections and build a logical framework which justifies the decision
they make - the Thinking preference. Others are less impressed with
these kinds of reasons. They tend to match their decisions to their
underlying values. They seem to have a more direct way of judging and
valuing. This does not mean that they are illogical, it’s simply that logic
is not as important in the decision-making process - the Feeling
preference.
Judging and Perceiving (J-P) – different ways of managing the world around us
People display fundamental differences when it comes to managing or
responding to the world around them. Some people with a Judging
preference like to know what is coming. They anticipate, plan and
organise the world and may treat surprises as a nuisance to be
managed. Others with a Perceiving preference have a more responsive
approach. They remain open to new ideas and information which they
happily incorporate into their plans and schedules. In fact they often
welcome or await new information and this sometimes means that they
delay decisions until the last minute. Viewing surprises as a welcome change enables them to show flexibility and spontaneity.
In Futurewise Profile personality information is not only presented in terms of descriptions of the basic Types but also in relation to work-based competencies, such as 'working with people', 'persuading people', 'planning style', 'making decisions', 'getting results' and 'being creative'.
[1] Note: the abbreviation for iNtuition is the second letter, N, since I is used for Introversion.
These competencies are universally recognised as important by
employers and form part of many competency profiling systems.
In addition elements of Type are used to identify preferred learning
approaches and the parts of the 'learning cycle' to which someone is
attracted. There is more detail on what this means in the publication
'Understanding Learning Styles'.
1.1.7 Career Interests Inventory
The Career Interests Inventory (CII) is an untimed self-report
questionnaire based on John Holland’s widely used model of vocational
preferences. It explores interests, competencies and work styles to
provide a tool for supporting career choice.
His model, which splits the world of work into six broad themes, is the
most widely accepted theory of work interests. Holland's theory
underpins all the major career interest inventories and career
exploration products.
The inventory is divided into four sections which cover the six RIASEC career themes (see below for details). Section One consists of 36 pictorial normative items. Section Two consists of 15 pictorial ipsative items. Section Three relates to work areas for which a student may express a particularly strong preference or dislike. Section Four relates to a small number of natural abilities or talents (e.g. musical talent) which are required for certain career areas.
The normative and ipsative items ask how interested the student is in an activity or career-related environment, or which, out of a choice of two, would be preferred. The skills items ask how proficient the student is at a very specific skill, e.g. music, art, performance, sport.
The additional items are designed to discover whether using particular skills, or working in a particular context or environment, is very important to a student, or whether they would prefer not to have a job with those features - e.g. a job requiring the use of numerical skills for much of the time, directly caring for others in a health or medical setting, or being part of the armed forces.
The use of normative (compared to other people) and ipsative (compared to self) items provides alternative benchmarks for interpretation. These are discussed later in this guide.
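The difference between the two item types can be sketched in code. The item formats and scoring below are illustrative assumptions, not the CII's actual algorithm:

    from collections import Counter

    def score_interests(normative_ratings, ipsative_choices):
        # Normative items: each theme accumulates rating points that
        # can be compared against other people's totals.
        normative = {theme: sum(ratings)
                     for theme, ratings in normative_ratings.items()}
        # Ipsative items: each forced choice awards a point to one
        # theme, so points are distributed within the student
        # (15 choices in the real inventory).
        ipsative = Counter(ipsative_choices)
        return normative, ipsative

    norm, ips = score_interests(
        {"Realistic": [3, 4, 2], "Social": [5, 5, 4]},
        ["Social", "Realistic", "Social"])
    print(norm, ips)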
The interest themes (scales) covered by the inventory are described below.
Realistic
Jobs which fall into this area are practical occupations that usually
require physical or manual activity. They include skilled and technical
trades, and some of the service occupations. They generally have a
'hands on' element and may involve working outdoors. Realistic work
activities may involve using tools, equipment and machinery; IT; building
and repairing things; and/or work related to nature, agriculture and
animals.
Those with realistic interests are often motivated by the outdoors and by
physical, adventurous and sometimes risky activities. They are interested
in action rather than thought and generally prefer practical problems, as
opposed to those that are ambiguous, theoretical or abstract.
Investigative
Jobs which fall into this area are concerned with finding out about things.
They centre on science, medicine, social concerns, theories, ideas and
data, with the aim of understanding, predicting or controlling these things.
Investigative work activities have a strong 'analytical' element and include
researching, exploring, observing, evaluating, analysing, learning and
solving abstract problems. This may be in a laboratory, medical or
academic establishment, or in the computer industry.
Those with investigative interests are generally motivated by the desire to
probe, question and enquire. They tend to need space and calm to reflect
and think, and often dislike selling and repetitive activities.
Artistic
Jobs which fall into this area have a strong 'expressive' element and are
concerned with creating or appreciating art, drama, music or writing.
Artistic work activities include composing, writing, creating, designing,
cooking, performing and entertaining. This theme is not necessarily about
having an interest in painting or drawing personally, because it includes
occupations where people appreciate some kind of creative expression.
Those with artistic interests enjoy being 'spectators' or 'observers' and
their artistic side is often reflected in leisure and recreational activities.
They also tend to be content in academic environments as artistic
interests are often associated with verbal or linguistic abilities.
Social
Jobs which fall into this area involve working with people in a helpful or
facilitating way. They are concerned with human welfare and community
services. Work activities include caring, teaching and educating, treating,
helping, listening, counselling and discussing.
Those with social interests are motivated by an impetus to help or care
and tend to solve problems through discussing values and feelings, and
by directly interacting with others. In addition they are often particularly
team-minded.
NOTE: 'Teaching' occurs across most of the themes but each one tends
to attract people with an interest in that theme. So 'realistic' teaching
incorporates hands-on or technical type activities; whereas ‘social’
teaching is more concerned with the interpersonal and pastoral elements.
Enterprising
Jobs which fall into this area are concerned with business and
leadership. They seek to attain personal or organisational goals, or
economic gain. Work activities include selling, marketing, managing,
influencing, persuading, directing and manipulating others. Being self-employed (running your own business) falls into this category, as does work in politics.
Those with enterprising interests are frequently motivated by taking
financial or interpersonal risks, and often like to participate in competitive
activities. Whilst they can be systematic in their approach they are
generally unlike those with investigative interests as they tend to dislike
scientific activities, or those which require intellectual application.
Conventional
Jobs which fall into this area are concerned with organisation, data and finance. They involve working with information, numbers or machines, to meet organisational demands and standards. Work activities include setting up procedures, maintaining orderly routines, organising, operating, accounting and processing.
Those with conventional interests often enjoy mathematics and activities
that involve the management of resources. They tend to work well in
large organisations and are often equally as happy dealing with people
as they are with data or ideas.
1.2 Administering tests and questionnaires
1.2.1 Setting up the system
The process of setting up the Futurewise Profiling programme within a school is described in the User Guide. That guide explains in detail how to use the Futurewise Profiling system and covers such areas as programme setup, registering students in the system, adjusting the general settings for a school account, making adjustments to the assessment settings for individual students, preparing for and running assessments, viewing results, generating reports and so on.
It is essential that those users of the system who have a major
responsibility for running the Futurewise Profiling System in a
school are familiar with those parts of the guide that are relevant
to the tasks which they undertake or for which they are
responsible. Users should also consult the online help which is
available from all screens of the system and covers much of the same
content as the User Guide.
Although much of the basic programme setup will normally be
undertaken by Futurewise staff, there are certain essential procedures
which would usually be undertaken by school staff. These include:
• Checking the default assessment settings to be used for each student group
• Setting assessment deadlines for each group
• Inviting students to register in the system
• Making adjustments to the assessment settings for individual students (e.g. entering ability assessments for disabled students who are unable to take certain assessments, adjusting aptitude test versions for particularly bright or less able students, etc)
• Planning classroom sessions for supervised assessments
• Finally, enabling the assessments once all set-up procedures have been finalised.
It is particularly important to ensure that:
• the correct default settings for each assessment group have been made before students are invited to register
• any settings required for individual students have been made before the assessments are enabled.
School-based staff who are new to the Futurewise Profiling system will
be able to call upon the assistance of Futurewise staff for help and
advice during the programme setup process.
1.2.2 Supervised versus unsupervised test administration
The Futurewise Profile assessments can be taken either under
supervised or unsupervised conditions and, in the case of the verbal
reasoning, numerical reasoning and abstract reasoning tests, different
versions of the tests must be used depending on whether the tests are
to be supervised or not.
In the case of supervised assessments, once students have logged in
to their Futurewise Profiling Home Page, they will also need to enter an
additional password whenever they begin an assessment. This password ensures that students cannot take the assessment at home or under other unsupervised conditions. The
passwords required for supervised assessments will always be
provided to the student only at the time of the assessment itself. Details
of how to obtain the assessment passwords can be found in the User
Guide.
Wherever possible, schools should use the supervised versions
of the three reasoning tests. Although this requires additional effort
in setting up and supervising classroom assessment sessions, the
supervised assessments will normally provide a more valid and reliable
means of assessing a student's aptitudes.
With unsupervised
assessments, there is of course no way of being certain whether the
student has completed the assessments entirely by themselves or with
the help of others. Furthermore, with a supervised assessment, in
addition to being able to ensure that the setting and circumstances for
the assessment are optimal, it is also possible for the administrator to
introduce the assessment to the students before they begin and to be
on hand should there be any difficulties or problems encountered by
students. Notwithstanding this, when assessments are used for
development purposes such as careers guidance, authenticity is not
usually a major issue.
If supervised assessments cannot be arranged, then it is good practice
to ensure that students have received an introduction to the
assessments in a class setting, during which the nature of the
assessments can be discussed and any concerns raised by students
can be addressed.
1.2.3 Preparing for a supervised assessment session
The assessment room needs to be suitably heated and ventilated (with
blinds if glaring sunlight is likely to be a problem) for the number of
people taking the assessments and for the length of the session. All
the computer screens need to be clear and easy to read. The room
should be free from noise and interruption, as any disturbances can
affect performance.
It should also be ensured that the computers on which the
assessments are to be taken meet the minimum requirements as set
out in the User Guide.
There should be space between each student's computer screen so that students cannot see others' progress or answers, and the administrator should be able to walk around to keep an eye on progress or difficulties - especially during the examples, where misunderstandings can be dealt with.
If the assessments are to be taken as part of an assessment day,
remember that performance tends to deteriorate towards the end of a
long day. If a number of sessions are being planned, for different
groups of students, those who take the assessments towards the end
of the day may be disadvantaged. If the day includes other mentally demanding activities, remember to organise appropriate breaks.
A notice to the effect of ‘Testing in progress – Do not disturb’ should be
displayed on the door of the assessment room. Ensure that chairs and
desks are correctly positioned and that rough paper and pencils are
available for the verbal, numerical and abstract tests. Also remember
that calculators are not permitted for any of the Futurewise Profile
assessments.
1.2.4 Procedure for a supervised assessment session
Please note that the instructions for test administration which follow
below assume that students have already registered within the system
and have been provided with their logins and instructions on how to
access their home page.
Prior to the first assessment session, whether supervised or unsupervised, students should already have been provided with a general introduction to the Futurewise Profiling system. This introduction should cover at least the following areas:
• the objectives of the Futurewise Profiling system and, briefly, how it all works
• why they are being asked to take the assessments
• what assessments they have to take
• how the assessments will be used in generating their Futurewise Profile
• what feedback they will receive on their results
• who will have access to their results
• the date by which any unsupervised assessments should be completed
• what will happen when the assessments have been completed
• the details of who they should speak to in case of queries or difficulties
The assessment session itself should then proceed as follows:
Invite students into the assessment room and direct them where to sit.
When all students are seated, you should give an informal introduction
to the session. You should prepare the points you wish to cover in
advance, but should nevertheless deliver the introduction informally in
your own words.
The aim of the introduction is to explain clearly to the students what to
expect and to give them some background information about the
assessments and why they are being used. This will help to reduce
anxiety levels and create a calm environment. The administrator
should aim for a relaxed, personable, efficient tone, beginning by
thanking the students for attending.
During your introduction, you should:
• ask students not to touch the computers until they are told to do so.
• advise students which assessment they will be taking in the current session and explain how that assessment fits into the programme of Futurewise assessments which they are taking.
• explain to students that all instructions they will need will be provided on-screen and, if the test is timed, that the timing will only begin once they have read the instructions and completed the practice items. Explain that for each test, there will be practice questions at the start before the test proper begins.
• ask the students if there are any questions and deal with these accordingly. Explain to students that you will be unable to give any specific advice during the assessment on how they should answer specific questions, but that they should let you know if they have any particular difficulties (for example, difficulties with their computer).
• explain to students the policy you wish to adopt as to what they should do when they have finished the assessment. In some cases, you may wish students to remain seated once they have completed an assessment so as not to disturb other students. In other cases, for example when taking non-aptitude tests such as the personality and career interests questionnaires, you may permit them to leave as soon as they have submitted the assessment. If the students are due to take more than one assessment, you may wish to tell them to continue with the second assessment as soon as they have finished the first.
Following the introduction, you should then provide the students
with the passwords they will need for each assessment they are
due to take (see the User Guide for details of how to obtain the passwords). The passwords should be displayed to the students
on the board or overhead projector or similar. You should not
distribute sheets containing the passwords as this could allow a
student to pass on the password to other students who are due to
take the assessment at a later session.
Depending on what is appropriate for the assessments to be taken,
instruct students as to whether they should begin the test
immediately once they have logged in or wait for your instruction
before clicking on the first assessment on the Home Page
dashboard. When all students are ready, tell them to login to their
Home Page in Futurewise Profiling (see detailed instructions in the
User Guide).
If you have told students they can begin
immediately, they will now do so. Otherwise, wait until all students
are at their Home Page and then give the instruction to click on the
assessment to be taken in their Home Page dashboard.
Once the assessment session has begun, your principal task will
be to ensure that students are working quietly, are not disturbing
other students and are not conferring with each other. Should
difficulties occur, advice is provided in the User Guide as to how
these can be dealt with. Note that for all assessments except the
Memory and Attention Test, the students’ responses are saved as
they go through the test. If a student's computer fails for technical
reasons, then the student can immediately login on another
computer, restart the assessment and be taken automatically
straight to the question they were previously working on, with their
previous responses preserved. In the case of the Memory and
Attention Test, if students experience a technical problem, they will
need to login on a different computer and start the test once again
from the beginning.
1.2.5 Unsupervised assessments
Unsupervised assessment is typically used for those assessments such
as the personality and career interest assessments where it is not so
important to ensure that students are not conferring with each other.
Indeed, with these assessments, there is little to be gained by a student
seeking help from another person. Nevertheless, if a school has the
resources to supervise the personality and career assessments, then
this would help to ensure that students complete these assessments
with care and without distraction.
Unsupervised assessments may take place at home or elsewhere, at
the students' convenience, though could also be part of a classroom
session. The latter might be the case for example where you would like
the students to take the assessments in programmed class time but
where you do not have the resources to arrange for a staff member to
supervise the session. In either case, passwords will not be required for
accessing the assessments. Further details of how the programme
setup and the test versions selected determine the requirement for
passwords can be found in the User Guide.
In the case of the four ability tests (Numerical Reasoning, Verbal
Reasoning, Abstract Reasoning and the Memory and Attention Test),
unsupervised administration is less desirable than supervised
administration for the reasons given above.
However, where
unsupervised administration is necessary for resource or logistical
reasons, then it is important to encourage students to undertake these
assessments without seeking help from others. They should be helped
to understand that the results of the assessments will not be 'used
against them' in any way and will be used only to provide them with
accurate information about which jobs their skills and abilities are suited
to. It is therefore in their interests to undertake the assessments
without help from other people.
1.3 Scoring tests and questionnaires
1.3.1 Automatic scoring and manual input of scores
Once a student has completed the online tests and questionnaires the
results are generated automatically. The system then uses the results
to produce the Student's and Advisor's Reports. Thus under most
circumstances there is no requirement for the Advisor or system
administrator to input scores, or deal with any aspect of the scoring
process.
However there may be occasions when a student cannot take the
online versions of the assessments and paper-and-pencil materials are
used. These are available for all the assessments except the Memory
and Attention Test.
If paper-and-pencil materials are used the responses made by a
student can be entered manually into the system - see the User Guide
for further details.
1.3.2 Scaling scores and reporting results
The system automatically converts all test scores so that they are reported at the Version-2 level, i.e. the Student's Report will explain that the results of the aptitude tests are relative to students considering A-level, Scottish Higher, IB or similar qualifications.
It is important to note that a Student's Report always uses a Version-2 benchmark, irrespective of the tests that have been completed, i.e. even if the student has completed Version-1 or Version-3 tests. Note also that the TDI and CII results are always reported by reference to the UK general population.
The Advisor's Report contains information that allows results to be
compared with other benchmark groups (norms): essentially to ask
what the results would look like if a student was compared to a higher
(or lower) standard. See the Comparison Tables in Part 2.
In the reports, the test and questionnaire results are presented on 10-point or 'Standard Ten' (STEN) scales. On a STEN scale a result of 1-3 is described as 'low', 4-7 as 'medium' and 8-10 as 'high'.
If necessary finer distinctions can be made with regard to aptitude test
results by using the percentile results that are presented in the
Advisor's Report. The percentile scale is the 'better than' or 'as good as' scale: for example, a student scoring at the 65th percentile has achieved a higher score than 65% of the population, or they can be described as being in the top 35%.
Note: Statistically a STEN scale has a mean of 5.5 and a standard
deviation of 2. STENs of 1-3 or 8-10 would each be achieved by ~16%
of the norm group; and a STEN of 4-7 by ~68% of the norm group.
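Those proportions follow directly from the normal distribution, and can be verified with a short Python check using the standard library. The half-point band boundaries are the usual STEN convention, stated here as an assumption:

    from statistics import NormalDist

    sten = NormalDist(mu=5.5, sigma=2)  # the STEN distribution

    def band_proportion(lo, hi):
        # STEN n spans (n - 0.5, n + 0.5); the scale is capped at 1 and 10.
        lower = sten.cdf(lo - 0.5) if lo > 1 else 0.0
        upper = sten.cdf(hi + 0.5) if hi < 10 else 1.0
        return upper - lower

    for label, (lo, hi) in [("low 1-3", (1, 3)), ("medium 4-7", (4, 7)),
                            ("high 8-10", (8, 10))]:
        print(label, round(band_proportion(lo, hi), 2))  # ~0.16, 0.68, 0.16

    # The 65th percentile used in the example above sits at roughly STEN 6.3.
    print(round(sten.inv_cdf(0.65), 1))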
1.4 Futurewise Profile reports
1.4.1 Introduction to reports
At the end of the assessment process the Futurewise Profile system produces PDF reports. These are generated in three stages:
• the Student's Pre-interview Report, containing the results and interpretation of the assessments completed by the student. This is sent when all the assessments have been completed by the student, and is used by the student to prepare for the guidance interview.
• the Advisor's Report, containing the same information plus a range of additional results - see the section on the Advisor's Report for further details. This is used by the advisor to prepare for the guidance interview and as a source of information during the interview.
• the Student's Final Report, which is produced after the guidance interview (if appropriate) and contains the advisor's notes. This report is sent to the student, the student's parents/guardians and the school when the advisor's notes have been added after the guidance interview.
1.4.2 Student's Report
This part of the Guide describes the sections in the Student's Report
and the underlying logic that drives the content.
Note 1: Throughout the narrative report, 'tests' are referred to as 'assessments'. However, in this explanatory section they are referred to as tests (or questionnaires).
Note 2: A complete sample report is provided at the end of this Guide for reference purposes. However, as reports may be updated from time to time, make sure you are looking at the latest version.
Introduction
After a title page which includes the student's name, name of school,
report date and IF reference number, there is a short piece of
introductory text and a list of contents.
In addition if a student has self-reported learning difficulties (dyslexia,
dyspraxia), visual impairments (colour blindness, poor sight), health
issues (epilepsy), physical disabilities, or possible language problems
(English as a second language) a note will be included on the
introduction page to the effect that these might have influenced their
performance on the tests.
The report is then composed of seven sections (A-G).
Section-A: The Big Picture
This section contains charts of the results for the six tests and
questionnaires completed by the student. It's designed to give students
a quick visual overview of their results.
Personality style
The first chart illustrates the results from the Type Dynamics Indicator
(TDI). For each of the four dimensions there are two bars. For example,
the initial dimension is concerned with Extraversion and Introversion,
and so the first bar provides the result for Extraversion and the second
for Introversion.
In most Type indicators the result would only indicate a person's
preference in terms of Extraversion or Introversion. However this can
obscure the fact that, although one is always greater, everyone has a
mixture of both Extraverted and Introverted preferences, and using two
bars reinforces this important point.
The results are presented on a standard ten-point (STEN) scale and the measurement error in each result is indicated using a short horizontal line at the end of the bar. Measurement error gives an indication of the accuracy of the results. For example, if the result for a particular personality scale is represented by a STEN of 6, and the line starts at 5 and ends at 7, it means that the 'true' result is likely to lie between 5 and 7.
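A sketch of how such an error band could be computed, assuming (purely for illustration) a standard error of measurement of one STEN point:

    def sten_error_band(result, sem=1, scale=(1, 10)):
        # Draw the band one SEM either side of the result, clipped to
        # the ends of the STEN scale. The real SEM varies per scale.
        lo, hi = scale
        return max(lo, result - sem), min(hi, result + sem)

    print(sten_error_band(6))   # -> (5, 7), as in the example above
    print(sten_error_band(10))  # -> (9, 10), clipped at the top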
In all cases the personality results are produced using a comparison
with the general UK population.
General aptitudes
The second chart shows the results for the four aptitude tests. However
in the case of the Memory and Attention Test, there are three separate
results (Memory; Accuracy; Decision Making) and so the chart contains
a total of six bars.
As with the TDI, each of the results is presented on a STEN scale, with
in each case a short horizontal line at the end of each bar showing the
measurement error.
It is also important to realise that all the aptitude results are
benchmarked (normed) against students considering A-Level, Scottish
Higher, IB or similar qualifications.
There will also be an indication on the chart if one or more of the test results are:
• missing (not attempted by the student), indicated by the text 'Missing result'.
• estimated (by the school or advisor, for example because of a physical problem), indicated by the text 'Estimate'.
• based on extended timings (for example because of dyslexia), indicated by the letters 'ET'.
Career interests
The third chart shows the results from the Career Interests Inventory in
terms of the six Holland career themes.
In the same way as the other two charts the results are presented on a
STEN scale, with a short horizontal line at the end of each bar showing
the measurement error.
In all cases the results are produced using a comparison with the
general population.
Section-B: Overview
This section contains narrative relating to the student's personality,
aptitude and interest results. It is written, as are the other sections, at a
level that is appropriate for the average 14-15 year old reader.
In each part there is an introduction, for example describing the
aptitude tests, and additional text that relates directly to the student's
results.
The Personality Style part contains one of 16 general personality descriptions, depending on the student's reported psychological Type.
The General Aptitudes part has two main components. The first describes the results of the verbal, numerical and abstract tests in terms of whether the result for each is 'low' (STEN 1-3), 'average' (STEN 4-7) or 'high' (STEN 8-10). It also contains suggestions on what the results might mean, in terms of the aptitudes assessed.
As there are three tests and three levels of reporting, the text is based on 27 unique combinations of results (3 × 3 × 3).
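A small sketch makes the combinatorics concrete (the banding function mirrors the STEN ranges above; the key-to-text lookup is hypothetical):

    from itertools import product

    def band(sten):
        return "low" if sten <= 3 else "high" if sten >= 8 else "average"

    TESTS = ("verbal", "numerical", "abstract")
    all_keys = list(product(("low", "average", "high"), repeat=len(TESTS)))
    assert len(all_keys) == 27  # 3 tests x 3 levels = 27 combinations

    # One student's results select exactly one of the 27 narrative texts.
    results = {"verbal": 6, "numerical": 9, "abstract": 2}
    print(tuple(band(results[t]) for t in TESTS))  # ('average', 'high', 'low')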
The second component deals with the Memory and Attention Test in the same way. Again, as there are three results and three levels of reporting, the text is based on a further 27 unique combinations of results.
The Career Interests section describes the two career areas (themes)
that appear to be of the most interest, and the area that is of the least
interest.
In those cases where the results for areas are tied on the basis of the normative results, the career areas are separated, and the top two identified, using the ipsative results and/or raw scores.
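The tie-break logic can be sketched as a simple multi-key sort (the field names and scores are illustrative):

    def top_two_themes(normative, ipsative, raw):
        # Rank by normative score, breaking ties with the ipsative
        # score and then the raw score.
        ranked = sorted(normative,
                        key=lambda t: (normative[t], ipsative[t], raw[t]),
                        reverse=True)
        return ranked[:2]

    normative = {"R": 7, "I": 7, "A": 5, "S": 4, "E": 3, "C": 2}
    ipsative = {"R": 4, "I": 6, "A": 3, "S": 2, "E": 1, "C": 1}
    raw = {"R": 30, "I": 33, "A": 22, "S": 18, "E": 12, "C": 10}
    print(top_two_themes(normative, ipsative, raw))  # ['I', 'R']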
Note: The career interest narrative also comments on the fact that career areas may not appear to go together, and that this may be because the student has a broad range of interests. It also notes that interests and personality can appear to be mismatched, as it is possible to be interested in something that does not immediately seem to complement a person's personality, e.g. to have interests concerned with organising data and information but a personality Type that suggests a preference for working in a broad-brush rather than a detail-conscious way. These two things are not necessarily incompatible but they would probably need exploring during the feedback process.
Section-C: You and work
This section contains more detail on the student's personality and is
concerned with how it relates to the world of work. It is specifically
written to reflect the competency areas that are recognised as
important by employers.
As such, while it reports on six competencies (Working with people; Persuading people; Planning style; Making decisions; Getting results; Being creative), it is based on what are agreed by researchers to be the eight main competencies, the so-called 'Great Eight'.[2]
However, in the context of the report, competencies are concerned with the way in which a student might go about 'working with people', for example, and not with how effective or successful they might be. Thus competencies based on personality indicate style of approach, not 'capability'.
[2] e.g. Bartram, D. (2005). The Great Eight Competencies: A Criterion-Centric Approach to Validation. Journal of Applied Psychology, 90(6), 1185-1203.
At the beginning of the section there is one of 16 Type workplace definitions, based on the student's reported Type. This is identified by analysing the results from the TDI, with the position on each of the four dimensions (E-I; S-N; T-F; J-P) being used to define the overall Type, and thus the appropriate description. This is followed by definitions of the six competencies described above, with three bullet points that relate directly to the relationship between the reported Type and the competency.
As there are six competencies and 16 Types there are 96 possible sets
of three bullet points - a pool of 288 bullet points in total. The
descriptors were written for each of the competencies by experts in
Psychological Type, and by reference to Type research, and reflect
how someone with a particular Type would be predicted to act or
respond.
Finally at the end of the section there is a part that brings together the
personality and interest results. This is based on research on the
relationship between Type and Holland career interest themes. In
particular which Types are most likely to be aligned (in agreement) or
unaligned (not in agreement) with which interest themes3.
The text is based on the top two scoring interest themes and is in one
of three formats. If the top two scoring interest themes are aligned, two
sets of three bullet points appear in the text.
When one is aligned and one is unaligned, one set of three bullet points
appears. This is then followed by two bullet points which combine the
interest theme with how this might be expressed in terms of Type. For
example:
"If your personality style is combined with your top two interests, they agree
that you:



prefer to decide for yourself what to do.
get pleasure from becoming an expert in something.
enjoy questioning how the world works.
And you:


have an interest in helping others, but with practical things rather than
feelings.
are interested in giving advice to other people, but like to have a plan
rather than letting things take their course."
3 e.g. Merriam, J.N., Thompson, R.C., Donnay, D.A.C., Morris, M.L. & Schaubhut, N.A. (2006).
Correlating the Newly Revised Strong Interest Inventory with the MBTI®. CPP Inc.
And finally, if neither of the top two scoring interest themes are aligned,
there are two sets of two bullet points illustrating how the areas might
be expressed in terms of Type.
Care has been taken to ensure that, as far as possible, all the points
are positive in nature. The reason there are only two bullet points for
unaligned comments - those which include a 'but' element - is so that,
for most students, the numerical balance of bullet points will favour the
more positive 'aligned' statements.
As there are 16 Types and 6 interest themes, some with two and some
with three bullet points, there are a total of 242 comments on Type
combined with interest, from which the actual text shown in the report
is selected.
Note: At the end of the section there may be an additional comment
about the Type results, in particular if one or more of the pairs (E-I;
S-N; T-F; J-P) produced results which were very similar to each other,
e.g. a difference of a single STEN between the results for E and I.
When this occurs the result is called a corridor score. The practical
implication is that the first letter of the Type might be an E or an I,
leading to a different overall Type.
When this happens, the report suggests that the Type results should be
considered with extra care, the implication being that it would be useful
to talk them through with an advisor who will be able to discuss the
other Types that would fit the personality profile. This is a level of
complexity that may be beyond most guidance interviews; however, the
Advisor's Report does indicate the 'next best' fit for a student with one
or more corridor scores. It is suggested that each advisor has a copy
of 'Understanding Personality Type', which explains the possible
differences between the 'best fit' and the 'next best fit' Types. This
booklet is published by and available from Team Focus Limited.
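For advisors who want to see the corridor-score rule concretely, the sketch below flags any TDI pair whose two STEN results sit within a single point of each other. This is an illustration only: the function name and threshold are hypothetical and do not come from the Futurewise Profile system itself.

```python
def corridor_pairs(stens, threshold=1):
    """Flag TDI dimension pairs whose STEN results are within
    'threshold' points of each other - a 'corridor' score."""
    pairs = {'E-I': ('E', 'I'), 'S-N': ('S', 'N'),
             'T-F': ('T', 'F'), 'J-P': ('J', 'P')}
    return [name for name, (a, b) in pairs.items()
            if abs(stens[a] - stens[b]) <= threshold]

# A student whose E and I results differ by a single STEN is flagged.
print(corridor_pairs({'E': 6, 'I': 5, 'S': 8, 'N': 3,
                      'T': 7, 'F': 4, 'J': 9, 'P': 2}))  # ['E-I']
```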
Section-D: You and learning
This section is concerned with learning approaches (learning style).
After a general preamble it identifies the learning approach that the
student is most likely to adopt or prefer. This is one of four approaches
based on an analysis of the first two scales of the TDI (E-I; S-N).
See Section 2.2 of this guide for a brief description of each of the four
approaches - Activating, Clarifying, Innovating and Exploring - and also
how these relate back to the 16 Type descriptions.
The description of the preferred approach is reinforced with a diagram
that highlights the student's approach alongside the other three
possible approaches.
The diagram is followed by two bullet points that identify actions
from the less preferred learning approaches that the student might like
to consider.
Note: The way in which someone prefers to learn can influence the
way in which they apply their interests. So this part of the report
suggests that the student looks back at their interest results and thinks
about those that engage them the most, as this may be because they
are a better fit with their learning approach. For example, someone may
have an Activating learning approach, and this practical and hands-on
style might match interests in Realistic job activities.
Section-E: You and careers
This section provides a list of the 15 jobs that are the best matches with
the student's personality, aptitudes and interests. For each job, the 'job
family' to which it belongs is also indicated.
Three lists, of 10 related jobs each, are also provided. These
respectively show the best matches (a) if career interests are given
greater weight than aptitudes or personality, (b) if personality is given
greater weight than aptitudes or interests, and (c) if aptitudes are given
greater weight than interests or personality. In each case the job is
linked to its job family.
The jobs are selected from a database that contains 774 jobs. However
under most circumstances the lists described above are generated
from a sub-set of 437 jobs that have been selected as having the most
relevance for students using the Futurewise Profile system.
Note: See Section 1.5 of this guide for a description of how the job
suggestions are generated.
Section-F: You and subject choice
This section contains three tables. The first shows the academic
subjects that are required (or useful) for each of the 15 primary job
suggestions.
The second lists the Russell Group 'facilitating subjects' that the
student is currently studying, those that are being considered for higher
level study, and the student's interest in each. Facilitating subjects are
highly regarded by Russell Group universities and keep a wider range
of HE options open.
The third table lists other academic subjects, which the student is
currently studying or thinking of studying, that might be essential, or
useful, for the 15 job suggestions.
The second and third tables use information on subjects being studied
or considered, as gathered from the student during registration - see
the User Guide for details. They therefore use information on the
school subjects that are available at a particular school or, on
occasion, a default list of subjects if a school does not enter a list of
available subjects.
Section-G: Advisor's comments
The final section of the report incorporates the Advisor's comments, as
appropriate. These are based on the feedback or review session(s)
with the student, having been input into the Futurewise Profile system
via the Advisor's interface - see User Guide.
The end page is a standard checklist of further sources of information.
1.4.3 Advisor's Report
The Advisor's Report contains the same narrative and notes which are
presented in the Student's Report. The narrative is in the same voice
as the Student's Report.
The Advisor's Report contains the same charts (with error bars) and
tables as the Student's Report but also incorporates additional
information. This information is designed to give the Advisor further
options with regard to the personality, aptitude and interest results.
Specifically:
• Personality style. This comprises a chart and details of each of
the scales, with an additional table indicating the clarity of
preference. For those students who have one or more corridor
scores it will indicate the next best fit with the data. This means
that the Advisor can use the 'alternative' Type description that is
presented in the Advisor's Report if this seems appropriate, for
example if the student feels that their 'reported' Type does not
sound accurate.
A quick reference to all 16 Types is provided at the end of the
Advisor's Report. This has the reported Type and the next best fit
Type highlighted. Other information is provided in this guide.
• General aptitudes. This comprises a chart of the VRT, NRT, ART
and MAT results. The table beneath provides considerable
additional information on each test in terms of:
o Number of questions in the test
o Number of questions attempted
o Number of questions correct
o Comments on speed & accuracy
o Result presented as a STEN
o Result presented as a percentile
o Comparison (IRT) score - allows comparison with other norms.
As with the Student's Report, notes flag missing tests, estimated results
or extended times.
• Career interests. This comprises a chart of the student's career
interests. An additional table is provided which shows the
normative STEN scores and the ipsative scores.
For guidance on how to incorporate this additional information in a
review or feedback session see Section 1.6 of this guide.
1.5 Generating job suggestions
As the list of job suggestions is often seen by the student and his or her parents as
the ultimate 'test' of a careers guidance system, this section explains in detail how
the list of recommended jobs is generated, for those who wish to understand
precisely how it works. What follows is of necessity fairly technical, and there is no
requirement for users of the Futurewise Profile system to understand these
principles in detail.
1.5.1 Introduction
The occupational mapping process works by calculating an overall
match score for each of the jobs in the jobs database. Jobs are then
ranked in terms of their match scores, and those with strong match
scores form the basis of suggestions to the student.
The overall match score is made up of three separate components:
aptitudes, interests and personality. Each of these component match
scores expresses the match between the student's assessment profile
and the requirement scores (also expressed in terms of aptitudes,
interests and personality) of the job in question. The way in which the
component match scores are calculated is different for each
component.
The requirement scores for each job are based on a thorough analysis
of the UK based CASCAiD database using, where appropriate,
additional data from the world's most comprehensive jobs database,
the US Department of Labor's O*NET system.
Every job has also been reviewed for use in the Futurewise Profile
system by occupational psychologists and other career experts. In
particular each job has been analysed, checked and rated in terms of
the levels of performance required across the four aptitudes, Holland's
six occupational themes and the eight Personality Type roles.
The main jobs database contains 774 jobs. However the job
suggestions are based on a sample of 437 of these jobs. These are
the jobs that were considered suitable for inclusion in the matching
system, and they exclude unskilled and many semi-skilled jobs.
1.5.2 Aptitude matching
For aptitude, the match score is found by computing a match score
using each of the four contributing aptitude areas: verbal reasoning,
numerical reasoning, abstract reasoning and the mean of the three
Memory and Attention Test scores. The match score for each area is a
measure of the closeness of the student's score on the relevant test to
the 'requirement score' for that area for the job in question.
The requirement score is an indication of the degree of aptitude,
expressed on a 1-10 scale, which is considered ideal for the job. Thus,
if a student obtains a STEN score of 7 on numerical reasoning and the
requirement score for numerical reasoning for the job is 7, then this is a
perfect match and the student is allocated a match score of 10 for this
component. If the student obtains a STEN score of 5, and the
requirement score is 7, then the student would be allocated a match
score of 8 for this component, and so on.
The highest match score of 10 is obtained when the student's score is
identical to the requirement score, whatever the requirement score
happens to be.
In addition a correction, which is essentially a points 'penalty', is applied
if the student's STEN score is 3 points below the requirement score. This
is to take account of those situations when a student's level of
performance is clearly below that required for a job.
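A minimal sketch of the aptitude match calculation described above, assuming the match score falls by one point for each STEN of difference. The size of the extra penalty is not stated in this guide, so the value used here is an assumption.

```python
def aptitude_match(sten, requirement, penalty=2):
    """Illustrative match score (1-10 scale) for one aptitude area.

    A perfect match scores 10 and each STEN point of difference
    reduces the score by one, as in the worked example above. The
    size of the penalty for falling well below the requirement is an
    assumption - the guide does not state it.
    """
    score = 10 - abs(sten - requirement)
    if requirement - sten >= 3:  # clearly below the level required
        score -= penalty
    return max(score, 0)

# The guide's example: a STEN of 5 against a requirement of 7 gives 8.
print(aptitude_match(5, 7))  # 8
```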
1.5.3 Interest matching
For interest, the match score is found in a similar way, by computing
the individual match scores for each of the six interest scales (Realistic;
Investigative; Artistic; Social; Enterprising; Conventional). The
individual match scores are calculated in just the same way as for the
aptitudes, using requirement scores which indicate the ideal amount of
interest in the area in question for the given job.
A correction is also applied if a student's interest score on any of the
six scales is more than 6 points different from the requirement score.
In addition the ultimate match with the six interest scales is influenced
by any preferences a student has expressed in the third part of the
Career Interests Inventory. If a student has a preference for using a
specific skill (e.g. their musical ability) and/or has a marked preference
for or against some specific feature of work (e.g. dealing with numbers
all day) this is used to promote or demote relevant jobs in the final list
from which the recommended jobs are selected.
1.5.4 Personality matching
For personality matching, a different method is used. Whereas both the
aptitude and interest areas use a range of scores for the matching
process, personality uses the top two 'themes' (out of a possible eight)
that are suggested from the student's Personality Type. This
simplification of Type preferences helps to match core elements of the
person with core elements of the job. It also fits with Type theory, which
suggests that people have a dominant and an auxiliary theme which work
together to manage the world effectively.
To achieve the matching, all the jobs in the CASCAiD database have
been allocated a score between 1 and 5 for the eight possible themes
(e.g. theme one scored a 5, themes two and three scored a 4, etc.). This
allows each job to be 'profiled' and a matching score to be calculated
according to the person's top two themes. This score is then weighted
to suggest the fit between the student's Type and the requirements for
the job.
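The following sketch illustrates how such a theme-based match might be computed. The guide does not publish the actual weighting, so the dominant/auxiliary weights below are placeholders.

```python
def personality_match(job_theme_ratings, dominant, auxiliary,
                      dominant_weight=2.0, auxiliary_weight=1.0):
    """Sketch of matching a student's top two Type themes against a
    job's 1-5 theme ratings; the weights here are assumptions."""
    return (dominant_weight * job_theme_ratings[dominant]
            + auxiliary_weight * job_theme_ratings[auxiliary])

# A job profiled across the eight themes, as described above.
ratings = {'theme1': 5, 'theme2': 4, 'theme3': 4, 'theme4': 3,
           'theme5': 3, 'theme6': 2, 'theme7': 2, 'theme8': 1}
print(personality_match(ratings, 'theme1', 'theme3'))  # 14.0
```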
1.5.5 Scaling and adjustments
The overall match scores are multiplied by constants to equalise the
ranges for each of the different contributory areas. In addition, further
constants are applied to equalise as far as possible the lower and
upper ranges of the match scores from the different areas, these
having been determined by observation of the natural distributions of
the match scores.
The adjustments described have the effect of equalising the scales for
the match scores from the different areas so that each begins on an
equal footing when contributing to the final match score. After these
adjustments, a second set of weightings is applied in order to allow the
three areas to contribute differentially to the matching process. Thus
in the first instance the order of weights is: aptitude, interests,
personality.
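Put together, the combination step might look like the sketch below. The scaling constants and weights are placeholders; the guide states only their ordering (aptitude first, then interests, then personality).

```python
def overall_match(aptitude, interests, personality,
                  scaling=(1.0, 1.0, 1.0), weights=(0.5, 0.3, 0.2)):
    """Sketch of combining the three component match scores.

    The scaling constants equalise the ranges of the components and
    the weights let them contribute differentially. Both sets of
    values here are placeholders, not the published constants.
    """
    components = [s * c for s, c in
                  zip((aptitude, interests, personality), scaling)]
    return sum(w * c for w, c in zip(weights, components))

print(overall_match(aptitude=9.0, interests=7.5, personality=8.0))  # 8.35
```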
The end result of the matching process is that all 437 jobs in the
database are put in rank order for each student.
1.5.6 Selecting the main 15 careers suggestions
The criteria for selecting the main 15 career suggestions are based not
only on the overall ranking of jobs but also on the job family to which
each job belongs.
This works by taking the first job in the overall ranking and adding it to
the list of recommended career suggestions; then looking for the next
two highest-ranking jobs from the same job family as the job just
selected and adding these to the list; and then taking the next highest
job in the overall ranking which has not so far been selected and
repeating the first two steps. This continues until 15 jobs have been
selected.
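A sketch of this family-grouped selection, under the assumption that exhausted families simply fall through to the next-ranked job:

```python
def select_main_suggestions(ranked_jobs, n=15, per_family=3):
    """Sketch of the family-grouped selection described above.

    ranked_jobs: (job, family) tuples in descending order of overall
    match score. Takes the highest-ranking unselected job, then the
    next two highest-ranking jobs from the same family, and repeats
    until n jobs have been chosen.
    """
    selected = []
    for job, family in ranked_jobs:
        if len(selected) >= n:
            break
        if (job, family) in selected:
            continue
        selected.append((job, family))
        # Add the next two highest-ranking jobs from the same family.
        same_family = [(j, f) for j, f in ranked_jobs
                       if f == family and (j, f) not in selected]
        selected.extend(same_family[:per_family - 1])
    return selected[:n]
```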
The objective is to present the student with a coherent list of 15 jobs,
grouped into job families, with the principal job families selected from
the jobs at the very top of the list.
1.5.7 Selecting the alternative job lists
In addition to the 15 main career suggestions, the report also presents
three further recommended job listings, each consisting of 10 jobs.
These listings attempt to answer such questions as: "What jobs would
we recommend to you if we considered only your scores on
Personality?", and similar questions considering only the scores on
Interests and only the scores on Aptitudes.
This works by considering the ranking of jobs when the focus is on a
single area, for example personality. The complete list of jobs ranked
according to personality is used to identify the highest-ranking job, as
long as the job is also a good fit for aptitude. The second highest-ranking
job is then selected, as long as it is in a different family, or as long as no
more than three jobs have been selected from the same family, and so
on until 10 jobs have been selected.
When the focus is on interests, a similar process is used except that
the jobs are considered in terms of their ranking on interests alone.
Finally, the aptitude list is produced by considering the aptitude
ranking, as long as jobs are also a good fit for interests.
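The sketch below illustrates one such alternative listing. The threshold for 'also a good fit' on the secondary component is not defined in the guide, so it is passed in as a caller-supplied predicate.

```python
from collections import Counter

def alternative_list(jobs, rank_key, fit_check, n=10, max_per_family=3):
    """Sketch of building one alternative listing (e.g. the list that
    gives personality the greater weight).

    jobs: dicts holding per-component match scores and a 'family' key.
    rank_key: the component to rank by. fit_check: a predicate for
    'also a good fit' on the secondary component.
    """
    family_counts = Counter()
    chosen = []
    for job in sorted(jobs, key=lambda j: j[rank_key], reverse=True):
        if len(chosen) == n:
            break
        if not fit_check(job):
            continue
        if family_counts[job['family']] >= max_per_family:
            continue  # no more than three jobs from one family
        chosen.append(job)
        family_counts[job['family']] += 1
    return chosen
```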
The purpose of these additional lists is to encourage students to
consider what areas of potential there might be for them which do not
quite fit their current profile. For example, a given job might fit their
personality and their aptitudes very well, though not their interests. If
that job is shown in the 'Personality' list, it might encourage the student
to find out more about it to see whether in fact they might have an
interest in that area.
As another example, another job might come at the top of the 'Interests'
list but not be included in the main list of career recommendations due
to a mismatch between the aptitude requirements of the job and the
student's test scores. This could prompt the student to consider
whether it would be worth putting time and effort into developing their
skills in the required area if it meant that they could pursue a career
that they were particularly interested in.
1.5.8 Scores shown on the student report
The nature of the raw match scores which determine the overall
rankings meant that these would not be suitable for display in the
student's report. This was not only because of the absolute values and
ranges of the raw match scores, but also because of the fact that for
the jobs recommended to the student, the match scores from each
component, by definition, would be relatively high in almost all cases.
The result of this would have been a set of mini-profiles in the student's
report which showed scores mostly of 8s, 9s and 10s and with very little
obvious discrimination between them.
For this reason, a set of transformations was developed which could be
applied to the raw match scores in order to generate mini-profiles for
the main suggestions tables which would be more meaningful for the
students.
These transformations have the effect of both 'normalising' the match
scores from the three different areas (so they would be comparable
with each other) and also increasing the spread of scores shown in the
mini-profile. The transformed scores are displayed in the mini-profile
bar charts on a scale of 1-10.
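As an illustration only (the actual transformations are not published), a simple linear rescale shows how clustered raw match scores can be spread across the 1-10 display scale:

```python
def display_score(raw, raw_min, raw_max):
    """Rescale a raw match score onto the 1-10 mini-profile scale,
    spreading out scores that would otherwise cluster at the top."""
    return round(1 + 9 * (raw - raw_min) / (raw_max - raw_min))

# Raw scores clustered between 8.0 and 10.0 spread out across 1-10.
print([display_score(r, 8.0, 10.0) for r in (8.2, 9.0, 9.8)])  # [2, 6, 9]
```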
1.6 Review and feedback process
1.6.1 Purpose of a review
The purpose of a review and feedback session is to ensure that the
student clearly understands the meaning of their results, is satisfied
with the assessment experience, and has explored the possible
implications of the results. To reach these goals it is important that the session is
seen as a chance for information to be given and received by both the
student and the advisor, not simply for the advisor to provide the
results. For this process to be successful, it is vital that all advisors
have received appropriate training.
1.6.2 General guidance
General guidelines for conducting sessions are given below. These
guidelines should be seen as identifying the main points that need to be
covered and giving suggestions about the structure of the session and
appropriate questioning strategies. They do not set out to provide a set
formula that must be followed.
• As with test administration, good preparation is essential for review
sessions. A suitable room, free from disturbances, should be
identified. Advisors should familiarise themselves with the
individual's results, what the assessments measure and how this
relates to the purpose of guidance.
• Technical language should not be used during the review session,
so it is useful for advisors to be able to use their own (but accurate)
descriptions of what each assessment measures. For example, a
Numerical Reasoning Test may be best described as ‘an
opportunity to show how comfortable you are with using numbers
and numerical information to solve problems’.
• The review session should begin with the advisor introducing
themselves and providing a brief overview of the purpose of the
session. Useful information to provide includes the approximate
length of the session, issues around confidentiality and what will
happen to the results, e.g. who will have access to them.
• Both parties need to agree on what they want to get out of the
session, such as information, consequences of test performance or
a way forward.
• To encourage a balanced discussion from the outset, the student
should be brought into the review session as early as possible.
This can be done by asking the student about their experiences of
the assessments immediately after the brief introduction (e.g. “How
did you find the verbal assessment (test)?” or “Tell me about your
experience of taking the assessments”). Throughout the review
session open questions should be used wherever possible, as this
will encourage the student to provide more information and make
the review more balanced. In a balanced review session there
should be equal contributions from both the advisor and the
student.
• If the assessments were completed some time before, a reminder
of these and how they fit into the process may need to be given at
this stage.
• At this point it is also appropriate to explain that results are
interpreted with reference to a norm group. It is generally best to
avoid the term ‘norm group’ as this may not be understood by all
students and for some may imply ‘normal’ performance. A
preferable phrase is ‘comparison group’, which conveys the
process of comparing individual scores to those from a wider group
of people, and is more readily understood.
• The next stage involves discussion of the actual results. It may be
preferable to let the student take the lead with regard to the order in
which the assessments are reviewed, rather than going through
them in order. The review process can be started through
questions such as “Which assessment did you prefer and why?” or
“Which assessment did you find most challenging?”.
• Once an assessment has been identified, the advisor can talk
about the student's result or, on those occasions when a student
does not know their results, can ask them to estimate their own
performance on the assessment. For example: “In relation to the
comparison group (describe comparison group), how do you feel
you performed on the (appropriate assessment)?” With
questionnaires a similar sort of process can be used.
• It is preferable to describe test scores (VRT, NRT, ART, MAT) in
terms of STENs. If percentiles are used, it needs to be clearly
communicated that percentiles refer to the proportion of the
comparison group whom the student scored as well as or better
than, and not the percentage of questions they answered correctly.
It may also be informative to explore the number of questions that
the student attempted and the number answered correctly as this,
in conjunction with any notes about speed and accuracy, can be
used to explore the way in which the student approached the test.
• The results for Type need to be explained in terms of overall Type
(as it is a categorical system) and, where appropriate, as STENs.
Career Interest results can be explained in terms of STENs and
also (ipsative) rank order.
• Once the student's performance on each assessment has been
established, their reactions to the result and its implications need to
be explored. For example, questions such as “How do you feel
about your result on X?” can be used to assess emotional reaction
and “What implications do you think the results for Y may have on
your career ideas?” can be used to explore the implications of the
results.
• Although advisors often perceive low (test) scores as more
challenging to discuss, it is important that low scores are not
‘glossed over’ or dismissed. Questions such as “How far do you
think the result is a fair reflection of your ability/aptitude in this
area?” can be very valuable. Often students have a reasonable
insight into their abilities, and low scores in some areas may not
necessarily be a great source of concern; likewise students often
find it quite reassuring to know that they have performed at an
‘average’ level.
• The rule is to ensure that the student understands why the
assessments were used, the meaning of the results, how various
results may interact with each other, and how these relate to career
and subject choice. Clearly there are various ways of doing this,
whether by taking the lead from the student or by working
systematically through a report. Obviously, if group feedback
sessions are used, a more systematic approach is usually the only
option.
• The final stage of the review process is to ask the student to
summarise what has been discussed, to ensure clear
understanding. Summaries can take the form of a brief review of
the assessments that highlights any 'strengths' and 'weaknesses'
that have been identified. The implications of the results and any
development plans should also be summarised, if these have been
discussed. And obviously all of this should be reflected in the
advisor's notes.
• To check that the student has understood what has been
discussed, it can be valuable to get them to summarise what they
see as the main points to have emerged from the review session,
rather than this being provided by the advisor. The advisor should
explain the next stage in the process - for example, the expectation
that the student will complete some online research relating to job
suggestions - and inform the student about confidentiality. Finally,
the student should be offered the opportunity to ask any
outstanding questions and then thanked for attending the review
session. Where appropriate a questionnaire may be used to gauge
the impact of the review session on the student.
• After the Guidance interview, notes can be added to the student's
report by visiting the student's individual page in the School Control
Panel. Details of how to do this can be found in the system's User
Guide.
[IF to add to this section to fit with IF's objectives and the
practicalities of how the Guidance Interviews are organised.]
1.6.3 Using the Futurewise Profile Reports
This is a list of points and tips that relate to all forms of Futurewise
Profile feedback and review sessions. It assumes that the usual
introductory conversation (advisor introducing self, scope and purpose
of session, length of session, expected output and indication of
possible next steps, confidentiality of results, etc.) has been concluded.
In addition the assumption is that the student has received and read an
interim report. Obviously if this is not the case then more work is
required with regard to positioning the report: describing in simple
terms what it contains and how the contents might help the student.
• Assessment experience. The student is asked about the
assessments that were completed. Which assessments does the
student particularly remember? Why? Which assessments did the
student 'like'? Which felt 'easy' and which a little 'harder'? Did the
student understand why a range of assessments were completed?
Etc.
Tip: The Advisor needs a good working understanding of all the
assessments and to be able to describe each in a clear and
understandable way. This can best be done by using this guide and
any supporting manuals, but also by having done the tests
themselves!
• Report: Introduction. This will contain a note if the student has
reported SpLDs, English as a second language, etc. It will also refer
to any adjustments that were made to the assessment session(s),
e.g. extra time for a test.
Tip: These need to be acknowledged and the point should be
made that certain conditions might have affected the performance
of the student on the timed tests – but it is important to recognise
that this is not always the case. It is also worth considering the
effect on the timed tests and the untimed questionnaires separately.
• Report: Section-A. The first main section of the Student's Report
contains a set of charts which illustrate the results of the TDI, the
aptitude tests (VRT, NRT, ART, MAT) and the CII. The Advisor's
Report contains the same charts plus additional information.
As this is an overview of the student's results a variety of points
should be introduced and discussed. In particular:
Personality
o Briefly describe the four dimensions of the TDI.
o Explain why there are two bars for each of the dimensions.
o Explain who the student has been compared to in order to
generate the results.
o Point out that all the results are presented on a 10-point scale.
o Explain why each of the bars has 'error' lines at the end
(although be careful using the word 'error') - and say this is the
same for all the assessments.
Tip: You might know from your Advisor's Report that the student
has a corridor Type score (or scores) and that this might ultimately
affect your discussion on personality. Do not introduce this
information at this stage as it is likely to confuse the student.
Aptitudes
o Provide descriptions of the assessments (tests), including the
fact that three of them relate to the MAT (although they are all
described in Section-B of the report).
o Explain who the student has been compared to in order to
generate the results.
o Point out that all results are presented on a 10-point scale and
that 4-7 is the average range (relative to a Version-2
benchmark).
o Explain why certain results are missing (student didn’t complete
test), estimated (student couldn't complete test), or are based on
extended test times (student usually has extra time for
tests/exams etc), as appropriate.
Tip: The results can of course be compared to other norm groups.
However as with any additional information about personality this is
best introduced, if relevant, at a later stage of the review process.
For example, there may be occasions when someone has achieved
extremely high scores on Version-2 tests and a comparison with
undergraduate students seems appropriate.
Career interests
o You may need to provide descriptions of the career areas
(scales), although they are all described in Section-B of the
report.
Tip: In those situations where a student has a set of 'low' or
completely 'flat' results across all the career themes remember that
you have an alternative set of ipsative results in the Advisor's
Report. Ipsative results put the career themes in order of
preference for the individual rather than in comparison to a norm
group. Exploring the ipsative results might help to tease out some
career interests. However, this is also something that is best done
later in the process. You will also need to remember that it's the
normative results, as displayed in this report section, that are
ultimately used to help generate the job suggestions.
• Report: Section-B. This is a narrative explanation of the
personality, aptitude and interest results from Section-A.
Personality
o You do not need to provide any additional interpretation of the
personality results. If the student asks say that this is covered in
depth in the next section, Section-C. However at this stage the
student should be asked if the brief description sounds like a
reasonable summary of their personality.
o If it does not sound correct to the student (maybe because of
corridor scores) you can say that it is one of 16 possible
descriptions and that you will discuss other possibilities later in
the process.
General aptitudes
o Explain that the (VRT, NRT, ART) narrative is based on whether
the student achieved 'low', 'average' or 'high' scores and that the
bullet points summarise what this means.
o Explain that the same mechanism operates for the MAT. Note
also that a low memory score does not necessarily mean that the
student has a poor memory. It may just be in the context of the
MAT task, or the student may be the sort of person who is
determined to get things right and so checks and double-checks
things before moving on.
o If you wish to link individual tests to jobs/occupations, guidance
is given at the front of this guide in Section 1.1.
Tip: The Advisor's Report contains information on the number of
questions attempted for each of the tests, the number correct, and
whether the student appeared quick and inaccurate (S), or slow but
accurate (A). This information can be used to discuss test-taking
'strategy' and might help to explain 'low' results, which amongst
other things may be due to S or A; or indeed guessing, which
might be indicated by someone attempting all the questions and
getting none, or very few, correct.
Practically, look at the results. Do they suggest that the student is
adopting a quick (attempted most or all of the questions) but
inaccurate approach (got most of them wrong)? Does this apply to
all of the tests or only to some of them? Is there a trend? Ask the
student to comment. Were they trying to work out the answers or
did they guess? Perhaps they only guessed on tests they found
hard?
Are they slow and methodical? Does this explain why they have
answered a limited number of questions, but have got most or all of
the answers correct? Did any other factors slow them down?
Perhaps there were distractions from other students? Were they
wearing their glasses?!
Career interests
o Explain that this is based on their 'top' two and bottom one
career areas (themes).
o Remind the student that these are their career interests
compared to those of the general UK population.
o This is the point at which you may wish to introduce the ipsative
results, if they are likely to be of value - possibly in the case of
'low' or 'flat' results, as suggested above. This may give the
student greater clarity with regard to their actual interests.
o You may also have to mention that it is not unusual for career
areas to sometimes look contradictory; it probably means that
they have a broad or diverse set of interests. And also that
career interests do not always 'agree' with personality; it is
perfectly possible to be interested in something that is not a
good match from a personality perspective.
• Report: Section-C. This is a narrative description of the student's
preferred workplace, given their psychological Type, and of their
preferences with respect to six work-related competencies. At the
end of the section it brings together personality and interests.
o Define what you mean by a 'competency' and explain that the six
competency areas concern things that employers think are
particularly important.
o Explain that the bullet points relate to how the student is likely to
express/enjoy these competencies given their personality, and
that they are not connected to success or capability.
o Check to see if the student thinks the bullet points accurately
describe what they do - if they do, ask for some examples. If
they don't, why not? And ask for some counter-examples.
o Explain that personality and interests can fit together well, or
might appear to be contradictory.
o Explain that when personality and interests are 'aligned' (agree
with each other) it is often useful to pay even greater attention to
what they are suggesting. Also, if they appear 'unaligned' (to
disagree), it is valuable to see how interests might be modified
by personality. For example, a person might be interested in
Realistic jobs which involve organising things, but may also want
to make last-minute changes.
Tip: If there are any corridor scores these will be flagged in a note
at the end of this section. In consequence this may be the point at
which to discuss other possible Types, depending on the student's
reaction to the existing narrative. In the Advisor's Report the next
best fit Type is identified and a thumbnail sketch is provided on the
back page. You will also find descriptions of all the Types at the
end of this guide.
• Report: Section-D. This is a narrative description of learning style
based on personality Type.
o Explain that learning approach is suggested by personality and
that there are four main approaches.
o Describe the four approaches with reference to the diagram.
o Check to see if the student recognises the approach suggested.
If they do, ask for some confirmatory evidence. If they don't, ask
for some evidence of a different (dominant) approach.
o Explain that the different learning approaches comprise a
learning cycle.
o Explain that we tend to neglect our least preferred learning
approaches, but if we recognise them, and try to use them, they
can make our learning more effective.
o Draw the student's attention to the bullet points at the end of the
section which suggest spending more time on aspects of their
least preferred approach, or approaches.
Tip: Learning preferences can interact with interests. As a result it's
worth getting the student to look back at the career interests chart
and to think about which interests go best with their apparent
learning approach. Do they match up? For example, Activating
would seem to go with Realistic interests, but is perhaps not as
good a fit with Investigative.
• Report: Section-E. This section includes the career suggestions.
o Explain that the student's personality, aptitudes and interests
have been matched against all the jobs in the jobs database.
And that the jobs presented are the 'top' 15 from a list of 437.
o Explain that the top 15 are all good matches, but the relative
contribution of the student's personality, aptitudes and interests
can be seen by looking at the mini-graphs.
o Explain that the other lists of additional suggestions
(emphasising interests, personality and aptitudes) are ways of
seeing what happens if more weight is given to these other
attributes, i.e. all other things being equal, what would happen to
the suggestions if the most important thing was the student's
interests, or personality, or aptitudes… What sort of ideas does it
throw up? Do any of these feel like a better fit than the list based
on all three attributes at once? Why?
o Emphasise that the lists are not the only jobs the student could
do. Interests and personality change over time, as does
motivation, and the list is a starting point for discussion.
Tip: Do not attempt to explain the occupational mapping algorithm.
This is described for your information in Section 1.5 of this guide,
but is at a level of complexity that is inappropriate for a feedback
session. The advisor also needs to be aware that all the jobs in the
database have been selected for suitability for an independent
school audience, which means that unskilled/semi-skilled jobs will
not appear in the listings.
• Report: Section-F. This section describes the academic subject
requirements, post-16 and post-18, for the top 15 job suggestions.
It also links degrees to jobs.
In a second list it runs through the Russell Group 'facilitating
subjects': whether the student is currently studying any of them,
whether the student has selected any for further study, and their
self-reported level of interest in each. For each subject it indicates
whether it is essential or useful for each of the 15 main job
suggestions.
A final list covers the remaining (Non-Russell) school subjects in
the same way.
o Explain the difference between 'required' and 'desirable'
subjects.
o Explain why certain non-preferred subjects are needed to enter
particular careers.
o Discuss the impact of choices that have been made, or which
are about to be made.
o Introduce the concept of 'facilitating subjects' and why they are
important.
Tip: The job suggestions are based on the jobs database which,
although edited for independent schools, still involves all possible
permutations of school and Higher Education subjects. As a
consequence, advisors need to be prepared to discuss job
suggestions that involve subjects that are not offered by the
student's school. Also remember that the entire un-edited
database, incorporating all levels of jobs, is available via the
Futurewise website.
• Report: Section-G. Your notes can be added to the Student's
Report to produce a final report (see the User Guide for details).
You can also highlight (tick) a selection from a choice of further
resources. This is based on the current notes form with which many
advisors will be familiar.
1.6.4 Checklists
[IF to insert two checklists: Individual Guidance Session v Group
Session.]
Part 2: Additional Resources
2.0 Answers to practice questions
The explanations of the practice questions given below relate only to the closed
versions of the aptitude tests. Full explanations of the practice questions on the
open versions are displayed to test takers by the computer, after they have given an
answer to each practice question.
Verbal Reasoning
Versions 1, 2, 3 and 4
P1:
Modern methods of predicting the weather are not always accurate.
The correct answer to this statement is ‘can’t tell’. Although we know that
weather forecasts are not always accurate, the passage gives no information
about how accurate modern methods of predicting the weather are. As no
information on the accuracy of modern methods is given in the passage, we
do not know whether this statement is true or not.
P2:
Personal observations can be accurate predictors of the weather.
This statement is ‘true’ as the passage states that ‘Before modern weather
forecasts, people relied on their own observations to predict the weather’. It
also says that the ‘red sky’ rhyme that came from these observations ‘is quite
a good indicator of what the weather is going to be.’ Therefore, the weather
can be accurately predicted from personal observations.
P3:
If there is a ‘red sky’ in the morning, there is a good chance that the weather
will be fine.
The answer to this statement is ‘false’. The rhyme ‘red sky in the morning,
shepherd’s warning’ tells us to expect ‘bad weather’ and we are told that ‘‘red
sky’ is quite a good indicator of what the weather is going to be’. Therefore, a
red sky in the morning is likely to indicate bad weather.
P4:
All weather rhymes are poor predictors of the weather.
This statement is ‘false’. The passage says that the ‘red sky’ rhyme is ‘quite a
good indicator of what the weather is going to be’, so not all rhymes are poor
predictors of the weather.
Numerical Reasoning
Versions 1 and 2
P1:
How many employees does the company have altogether?
The correct answer is 50. This is found by adding the numbers in the
‘Number of employees’ column.
P2:
How many employees travel 8 miles or more to work?
The correct answer is 'Can’t tell'. Although you are told that 15 employees
travel between 6 and 10 miles to work, you cannot tell how many of these 15
employees travel 8 or more miles.
P3:
Which is the most common distance that employees travel?
The correct answer is 1 to 5 miles. This is the distance travelled to work by
most employees (17).
P4:
What percentage of employees travel between 1 and 5 miles to work?
The correct answer is 34%. To find this you need to divide the number of
employees who travel between 1 and 5 miles to work (17) by the total number
of employees (50) and multiply this figure by 100 to give you a percentage.
Versions 3 and 4
P1:
In which year did rural houses show their greatest change in value?
The correct answer is Year 3, as the graph shows that rural house prices
increased by 8% in Year 3.
P2:
In which year was the greatest difference between the change in the value of
houses in rural and urban areas?
The correct answer is Year 5. To find the difference in the change of rural
and urban house prices you have to subtract the smaller value for each year
from the larger value. The largest difference (7%) is for Year 5.
P3:
A house in an urban area is worth £110,000 at the beginning of Year 1. What
is it likely to be worth at the end of Year 2?
The correct answer is £106,722. The graph shows that houses in urban
areas lost 1% of their value in Year 1 (£1100 on a house worth £110,000) so
the value of the house at the end of Year 1 is £108,900. In year 2, 2% of the
value is lost (£2178 on a house worth £108,900), making the value £106,722.
P4:
In which year did the combined value of rural and urban houses change the
most?
The correct answer is ‘Can’t tell’. It is not possible to tell in which year the
combined value of rural and urban houses changed the most, without knowing
the proportion of houses that are classified as being ‘rural’ and ‘urban’ and the
average value each year.
Abstract Reasoning
Versions 1 and 2
P1:
The correct answer is ‘Set A’. All the shapes in Set A have three triangles.
Two of the triangles point upwards and are white, the other points downwards
and is black. All the shapes in Set B have three diamonds.
P2:
The correct answer is ‘Neither’, as all the shapes in Set A have two white
triangles pointing upwards and one black triangle pointing downwards, and all
the shapes in Set B have three diamonds.
P3:
The correct answer is ‘Set B’. All the shapes in Set A have three triangles.
Two of the triangles point upwards and are white, the other points downwards
and is black. All the shapes in Set B have three diamonds.
P4:
The correct answer is ‘Neither’, as all the shapes in Set A have two white
triangles pointing upwards and one black triangle pointing downwards, and all
the shapes in Set B have three diamonds.
P5:
The correct answer is ‘Set B’. All the shapes in Set A have three triangles.
Two of the triangles point upwards and are white, the other points downwards
and is black. All the shapes in Set B have three diamonds.
Versions 3 and 4
P1:
The correct answer is ‘Set A’. All the shapes in Set A have at least one white
triangle. As this is the only common feature in Set A, all other features should
be ignored. All the shapes in Set B have at least one black square. Again, as
this is the only common feature in Set B, all other features should be ignored.
P2:
The correct answer is ‘Neither’, as all the shapes in Set A have at least one
white triangle and all the shapes in Set B have at least one black square.
P3:
The correct answer is ‘Set B’. All the shapes in Set A have at least one white
triangle. As this is the only common feature in Set A, all other features should
be ignored. All the shapes in Set B have at least one black square. Again, as
this is the only common feature in Set B, all other features should be ignored.
P4:
The correct answer is ‘Neither’, as all the shapes in Set A have at least one
white triangle and all the shapes in Set B have at least one black square.
P5:
The correct answer is ‘Set B’. All the shapes in Set A have at least one white
triangle. As this is the only common feature in Set A, all other features should
be ignored. All the shapes in Set B have at least one black square. Again, as
this is the only common feature in Set B, all other features should be ignored.
2.1 Type summaries
The 16 Types can be organised using the Type Mapping™ Wheel as shown below:
This wheel shows how the 4 pairs of opposites combine to give the 16 types (using
the 4 Letter Codes such as ESTP) but it also shows the dominant theme for that type
(such as 'Activating' for the ESTP). How this relates to Type Theory will not be
described here but a 'pen-portrait' of the 16 types is shown on the following page:
[Pen-portraits of the 16 Types]
The summary descriptions are used in the Advisor's Report. Fuller descriptions of
the 16 Types can be found in 'Understanding Personality Type'. However, since
those descriptions are written for adults, the structure and the language have been
adjusted in the Student's and Advisor's reports.
Note: Each of the 16 Types described in 'Understanding Personality Type' has a
comprehensive two-page description. However, it also has the 4 letter Type code
and a memorable label (e.g. ESTP labelled as the 'Trouble-shooter'). These letters
and labels help people to remember their type which is important when it is being
used as part of their on-going development. This is NOT the situation for most of
IF's work. It is therefore suggested that such Type letters and labels are NOT used
during the feedback sessions since they can leave an impression of rigidity or
permanence.
2.2 Learning approaches
Below is a summary of the four learning approaches together with the names of the
four Types they include.

CLARIFIER
• Collects facts and details
• Gives practical examples
• Wants own time and own space to reflect
• Needs preparation and research
• Prefers working at own pace
• Prefers details of implementation
• Prefers clear structure and steady progress

ACTIVATOR
• Makes abstract ideas tangible
• Wants practical activities
• Prefers interaction with others
• Moves at a fast pace
• Creates a buzz
• Prefers hands-on - have a go
• Wants action and delivery

INNOVATOR
• Enjoys theories and models
• Wants the ‘big picture’ first
• Thrives on intellectual challenge
• Reflects and has insights
• Enjoys possibilities and ‘what if’
• Seeks to question and innovate
• Driven to create something different

EXPLORER
• Seeks novelty
• Craves variety and options
• Learns by trial and error
• Needs inspiration
• Enjoys discussion and debate
• Moves on to new topics quickly
• Enjoys exploration, flexibility and discovery

Additional information on each of the approaches, and what they mean in terms of
the learning cycle (Action/Implementation; Reflection/Review; Construction/Creation;
Experimentation), can be found in the publication 'Understanding Learning Styles',
published by Team Focus Limited.
2.3 Futurewise job families
[IF to insert Futurewise job family classification.]
2.4 Sample reports
This section includes the latest sample versions of the Student's and Advisor's
Reports.
[IF to insert sample reports.]
2.5 Supporting manuals
This Guide uses content from a number of Team Focus manuals. These manuals
contain detailed information on the development and use of the tests and
questionnaires used in the Futurewise Profile system.
The manuals are:
• Profiling for Success: Reasoning Tests User's Guide
• The Memory and Attention Test Manual
• Type Dynamics Indicator - TDI User's Guide
• Understanding Personality Type (Essential Guide)
• Understanding Learning Styles (Essential Guide)
• Understanding Team Roles (Essential Guide)
• Career Interests Inventory User's Guide
[Sarah B awaiting first 69 pages of this document to be added before this page
(9/10/12)]
Part II: Background Technical Information
2.0 Introduction
This part of the guide provides brief technical information. For example, the
performance times in minutes, and the number of items for each test or
questionnaire, are provided in Table A.
All the tests and questionnaires used in the Futurewise Profile system also have
established technical (psychometric) properties. The issue of reliability is described
in the following section, and the key reliability statistics - internal consistency and
standard error of measurement - are summarised in Table B.
There is also a discussion of the way in which the aptitude tests were validated
against a sample of independent school students, and how the different levels of
each of the tests were equated with each other.
Finally, there is a general discussion of the validity of each assessment.
2.1 Performance times
Table A: Performance times in minutes and number of items

Closed Reasoning Tests
           Verbal           Numerical        Abstract
           Time    Items    Time    Items    Time    Items
Level 1    12      32       12      28       10      50
Level 2    12      32       12      28       10      50
Level 3    15      40       15      36       12      60
Level 4    15      40       15      36       12      60

Open Reasoning Tests
           Verbal           Numerical        Abstract
           Time    Items    Time    Items    Time    Items
Level 1    15      44       15      40       12      70
Level 2    20      60       20      48       15      75

Memory & Attention Test
Version 1           Time allowed: 15       Number of items: 50

Type Dynamics Indicator
Pictorial           Estimated time: 8-12   Number of items: 56
Word (version O)    Estimated time: 10-20  Number of items: 64

Career Interests Inventory
Version 1           Estimated time: 20-25
                    Number of items: Section 1 = 42; Section 2 = 15;
                    Section 3 = 23; Section 4 = 4
Note that there are also 11 additional items which are being trialled.
2.2 Reliability
No test or questionnaire gives a perfect indication of reasoning ability, personality or
occupational interests. Despite rigorous technical development, and appropriate use
and administration, there will always be a degree of error in any result. The concept
of reliability is thus concerned with accuracy and the quantification of error.
In practice, the reliability of a test or questionnaire can be assessed in a number of
ways. One method is to look at how the items (questions) work together to form a
coherent assessment of the construct under consideration. This 'internal
consistency' is found by taking the mean of the correlation between each item and
the total score, excluding that item.
Internal consistency is usually calculated through a formula known as Cronbach’s
Coefficient Alpha and expressed as a statistic that can range from 0 to 1. The closer
to 1, the more reliable the test is said to be. The 'rule of thumb' is that alpha should
be 0.7 or greater for all assessments, although for those that are measuring broader
concepts (e.g. occupational interests) acceptable figures can be lower.
Coefficient Alpha provides an index of reliability, but does not directly indicate the
degree of error in a given test score. To do this the standard error of measurement
(SEM) is calculated. This provides a way of quantifying the error in an actual test
score, helping to indicate the range within which a person’s 'true' score is likely to
fall.
In the Futurewise Profile reports the SEM for each test or questionnaire is
used to generate the short (error) line, relative to the STEN scale, that appears
at the end of each bar in the bar charts.
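The SEM values in Table B are consistent with the standard formula SEM = SD × √(1 − α). A quick check in Python:

```python
import math

def sem(sd, alpha):
    """Standard error of measurement: SEM = SD * sqrt(1 - alpha)."""
    return sd * math.sqrt(1 - alpha)

# Cross-check against Table B: the closed Level 1 Verbal test has
# SD = 5.73 and alpha = 0.90; the tabulated SEM is 1.81.
print(round(sem(5.73, 0.90), 2))  # 1.81
```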
Internal consistency and SEM data are presented in the table that follows. It is also
worth noting that test-retest reliability figures (stability over time) are available for the
verbal, numerical and abstract tests. In a study of 169 individuals, with a mean age
of 21, and a typical gap between the completion of the same verbal, numerical and
abstract reasoning tests of 10-40 weeks, the figures are 0.73, 0.71 and 0.67
respectively. These meet acceptable standards, especially given the long time gap.
Table B: Descriptive statistics and reliability

Closed Reasoning Tests
Test        Level   Mean    SD      Sample size   Items   Reliability   SEM
Verbal      1       16.62   5.73    210           32      0.90          1.81
            2       16.32   5.18    303           32      0.80          2.32
            2*      17.48   5.32    393           32      0.78          2.50
            3       24.10   6.07    1322          40      0.86          2.27
            4       25.45   6.27    1131          40      0.87          2.26
Numerical   1       19.30   4.64    250           28      0.93          1.23
            2       14.95   4.74    337           28      0.84          1.90
            2*      14.59   5.04    393           28      0.81          2.20
            3       18.04   5.69    1609          36      0.87          2.05
            4       16.24   6.50    1510          36      0.89          2.16
Abstract    1       28.51   7.82    156           50      0.93          2.07
            2       20.80   8.24    242           50      0.87          2.97
            2*      24.60   7.10    362           50      0.81          3.09
            3       31.20   11.18   860           60      0.92          3.16
            4       30.35   10.41   881           60      0.91          3.12

Open Reasoning Tests
Verbal      1       14.90   12.37   1010          44      0.92          3.50
            2       29.61   10.32   24072         60      0.91          3.10
Numerical   1       14.45   10.76   1356          40      0.92          3.04
            2       18.31   6.48    37241         48      0.85          2.51
Abstract    1       39.04   13.47   515           70      0.95          3.01
            2       33.69   11.67   13.61         75      0.92          3.30

Memory & Attention Test
Test        Scale             Mean    SD      Sample size   Items   Reliability   SEM
MAT**       Memory            18.25   17.94   344           50      -             -
            Accuracy          25.51   7.35    344           50      0.87          2.65
            Decision Making   3.39    0.95    344           50      -             -

Type Dynamics Indicator – Word version
Word        E-I     54.86   15.31   1260          16      0.88          5.30
(64 items)  S-N     53.05   12.70   1260          16      0.82          5.39
            T-F     55.89   13.61   1260          16      0.86          5.09
            J-P     55.86   14.70   1260          16      0.88          5.09

Type Dynamics Indicator – Picture version
Pictorial   E-I     55.23   14.26   116           14      0.89          4.73
(56 items)  S-N     52.45   13.81   116           14      0.86          5.17
            T-F     57.52   12.93   116           14      0.86          4.84
            J-P     49.67   14.61   116           14      0.89          4.85
Pictorial   E-I     46.84   10.23   304           14      0.76          5.01
(56 items)  S-N     47.85   10.58   304           14      0.76          5.18
            T-F     50.14   10.71   304           14      0.81          4.67
            J-P     43.73   13.98   304           14      0.90          4.42

Career Interests Inventory
Normative   R       9.71    2.56    5843          6       0.63          1.56
(36 items)  I       10.94   2.74    5843          6       0.74          1.40
            A       10.41   2.71    5843          6       0.63          1.65
            S       11.61   2.51    5843          6       0.66          1.46
            E       11.12   2.67    5843          6       0.70          1.46
            C       9.92    2.92    5843          6       0.82          1.24
Ipsative    R       2.66    1.35    5843          5***    N/A           N/A
(15 items)  I       2.41    1.37    5843          5***    N/A           N/A
            A       2.30    1.29    5843          5***    N/A           N/A
            S       2.65    1.15    5843          5***    N/A           N/A
            E       2.62    1.19    5843          5***    N/A           N/A
            C       2.35    1.33    5843          5***    N/A           N/A
Notes
Mean: The mean is the average raw score for a particular test or questionnaire.
SD: The standard deviation (SD) is an indication of the variation from the average. A low
standard deviation, relative to the mean, indicates that the data points tend to be very close
to the mean; a high standard deviation, relative to the mean, indicates that the data points
are spread over a larger range.
Sample size: The number of data points on which the statistics are based.
Number of items: The number of questions in a test or questionnaire.
Reliability: A measure of the accuracy of a test or a scale in a questionnaire.
SEM: The Standard Error of Measurement (SEM) is an indication of the error in a test or
questionnaire score.
* Results for the Level 2 tests from the IF comparability sample are shown in the
rows labelled 2*.
** Figures are for the MAT timed version:
• Memory: a measure of how many times a person checks the instructions. A high
  score (which results from the respondent checking the instructions relatively
  infrequently) indicates good memory (i.e. less reliance on the instructions).
• Accuracy: the total number of correct shapes that have been clicked.
• Decision Making: a measure of the number of items answered correctly per minute.
  High scores identify people who are both fast and accurate: completing the test
  accurately in less than the time available produces high scores on this variable.
*** Paired items. For example, there are 5 items that contain R and one each of the other
themes (RI; RA; RS; RE; RC). However, RI is equivalent to IR, so there are 15 paired
questions in total in the ipsative section of the CII (six themes taken two at a time gives
(6 × 5) / 2 = 15 unique pairs).
2.3 Comparability of aptitude tests
A detailed comparability study was conducted in 2011/12 to compare the existing
tests with the results obtained from a sample of 350+ students drawn from Inspiring
Futures client schools.
This had two main aims. Firstly, to establish that the results from the independent
school students mapped onto the existing test statistics, i.e. that the new sample
was statistically equivalent to the existing Level-2 (students considering A-Level or
equivalent qualifications) sample.
Secondly, to produce a 'common scale' so that the results obtained by a student
taking one level of a particular aptitude test (Level-1, Level-2, Level-3 or Level-4)
could be directly compared to any other level.
Verbal, Numerical and Abstract Tests
Producing a common scale involved administering different tests to the same sample
of people. This has been done over a period of several years with the following test
combinations (sample sizes vary for the verbal, numerical and abstract tests):
Table C: Sample sizes for IRT linking data

                                       Sample sizes
                                       Verbal   Numerical   Abstract
Closed Level 1 with Closed Level 2     1008     1773        768
Closed Level 2 with Closed Level 3      777      887        757
Closed Level 3 with Closed Level 4      498      930        210
Open Level 2 with Closed Level 2       1547     1293        807
All these data were analysed using Item Response Theory (IRT) which, by
estimating the difficulty of each item and the ability of each test taker, can produce a
common scale. This means that any test can generate a score on this common
scale, which can then be used to estimate what a person would have obtained had
they completed a different test. These results are presented in the IF Practitioner's
Manual, which provides the methodology whereby the tests can be used
interchangeably. However, the highest accuracy is obtained when the difficulty of
the test matches the ability of the test taker.
Developing a common scale [5]
The verbal, numerical and abstract tests have been in use for a number of years.
There is also data available for candidates who have taken more than one level of
each of the tests.
[5] There is only one Level for the MAT and so it was not part of the IRT linking study. It
produces sten scores based on the IF comparability sample, which feed directly into the
Occupational Algorithm.
Item Response Theory is used to derive an estimate of an individual's ability in terms
of the known parameters of each individual item in a test. Using this approach, it is
possible to equate scores obtained on one version of a given test with scores
obtained on a different version if it can be assumed that both tests are measuring the
same underlying ability trait. This approach made it possible to develop a set of
'translation tables' by means of which it would be possible to estimate the scores of
Futurewise students who had taken different versions of a given test in terms of one
particular standard or base version of the test. For example, if a student had taken
'Level 3' of the Numerical Reasoning test, such tables would make it possible to
estimate the score that student would have obtained had they actually taken
'Level 2' of the test.
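The guide does not state which IRT model was fitted; as an illustration, under the
widely used two-parameter logistic (2PL) model the probability that a person of
ability \theta answers item i correctly is:

$$P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

where b_i is the item's difficulty and a_i its discrimination. Once the item parameters
of two test versions have been placed on the same \theta scale, a raw score on one
version can be mapped to the expected score on any other, which is the basis of the
translation tables described below.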
For the purpose of this exercise, the Level 2 versions of each of the Numerical,
Verbal and Abstract Reasoning tests were considered as the base versions, these
tests being those considered suitable for students studying A-Level or equivalent.
The primary source of data for the IRT analysis was an existing set of data obtained
over a number of years from a sample of school leavers and college and university
entrants, these samples varying between 1500 and 3000 records depending on test
and versions in question. In each case, the respondent had taken a special version
of the test which contained sets of items from two of the four possible levels (1 - 4) of
the test. In addition to this primary data, data from the Inspiring Futures
'Comparability Study' was also used to make final adjustments in the analysis.
The outcome of the IRT analysis was a set of tables which allowed firstly a student's
score on the version of the test they had actually taken to be translated to a score
common across all versions of that test (the 'IRT score'). Thus, the score of any
student could be translated to this common score, no matter which version of the test
they had actually taken. Subsequently, the translation tables allowed the IRT score,
determined as above, to be translated back to sten and percentile scores on any
other version of the test. Thus, for a student who had taken Level 3 of the Numerical
Reasoning test, one would firstly be able to find their IRT score and then, given this
score, find the sten and percentile scores they would theoretically have obtained had
they actually taken Level 2 of this test.
To summarise, the IRT analysis made it possible to evaluate all students on the
same basis, no matter which version of any given test they had actually taken. This
was of importance since the process of occupational mapping requires that students'
test performance be evaluated on a common scale in order to determine the match
between their assessment profile and the requirements of specific jobs. This does
not mean, however, that the choice of test level is unimportant: the closer the match
between the test level and the person's ability, the more accurate the ability estimate
is likely to be.
The translation tables produced using these methods are provided in Section 2.6
of this Guide.
Practically, this process demonstrated that:
• It was possible to construct a coherent common scale that linked all levels of
  each of the aptitude tests.
• There is a difference between each of the tests in terms of level, i.e. the tests
  genuinely differ in order of difficulty from Level-1 (easiest) to Level-4
  (hardest). The exception is the Level-1 and Level-2 abstract tests.
This last point can be demonstrated visually by looking at the three sets of test
characteristic curves that are reproduced below. In particular it is worth noting that
the curves for each test span the ability range from left to right - Level-1 on the left,
Level-4 on the right - and they do not tend to cross. There is a clean separation
between each test.
Table D: Test Characteristic Curves for Verbal tests 1-4
Table E: Test Characteristic Curves for Numerical tests 1-4
Table F: Test Characteristic Curves for Abstract tests 1-4
N.B. The separation of the 4 abstract tests is less clear than with the verbal and numerical. This is
understandable since the abstract test is less correlated with educational level.
2.4 Validity
Validity is a broad concept, with different forms of validity contributing to an overall
judgement about whether a test or questionnaire is 'fit for purpose'. That said, there
are four main types: face, content, construct and criterion.
Face validity
An assessment has face validity when it looks as though it is measuring what it
claims to measure. If test takers can clearly see links between what they are being
asked to do, and what the test or questionnaire claims to measure, they are more
likely to be motivated to do their best and to treat the questions seriously.
Evidence for the face validity of the assessments used in the Futurewise Profile
system was collected during the development and trialling stage of each
assessment, by observing administration sessions and by obtaining feedback from
test takers. When asked, users found the tests and questionnaires easy to use and
the content to be acceptable.
Content validity
If the items in a test or questionnaire provide adequate coverage of the area being
assessed, and do not relate to areas outside the domain of the assessment, then it is
said to have content validity. For the verbal, numerical and abstract tests, the
process of ensuring content validity was confirmed by having detailed specifications
of what the tests should assess, e.g. the arithmetic/mathematical skills required to
complete the numerical test.
The same approach was used for the Memory and Attention Test, for example in
specifying that it should comprise simple tasks that increase in difficulty as a function
of the load on a person's memory; and in checking the coverage of the items in the
Type Dynamics Indicator (TDI) and Career Interests Inventory (CII). With the latter
two measures, content was also reconciled with the underpinning theories: that of
Psychological Type in the case of the TDI, and Holland's theory of occupational
choice with regard to the CII.
Construct validity
Construct validity refers to what a test actually measures. In the case of the aptitude
tests, the constructs are verbal, numerical and abstract reasoning. Evidence for
construct validity comes from the examination of how scores on each of the tests
relate to each other and to established assessments of the same constructs.
An examination of the correlations between the three tests shows that each is
assessing a distinct ability. For example, amongst the closed tests the degree of
association is small; with the mean correlation indicating that just under 20% of
common variance is shared between tests - or to put it another way, there is limited
'overlap' between the tests.
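The 'common variance' figure follows directly from the correlations: shared variance
is the square of the correlation, and the mean correlation across the closed-test
levels reported in the next paragraph is roughly 0.44, so:

$$\bar{r}^{\,2} \approx 0.44^2 \approx 0.19 \approx 20\%$$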
Additionally, the mean correlations decrease across the higher levels of the closed
tests (0.56, 0.45, 0.45 and 0.32 for Levels-1 to 4 respectively). It is known that as
people get older and specialise in their areas of study, abilities tend to become
more differentiated, meaning that the correlations between assessments of
different abilities are reduced. The pattern of relationships found with the tests
supports this differentiation of abilities.
In terms of the relationship with other tests, a variety of information is available. For
example:
• Level-1 (open) verbal and numerical tests correlate with their equivalent
  Saville & Holdsworth Ltd (SHL) reasoning tests, producing figures of 0.51 and
  0.29 (N=44-70, depending on test; mean age=42).
• Levels 1-4 (closed) verbal, numerical and abstract tests correlate with their
  equivalent Saville & Holdsworth Ltd (SHL) reasoning tests, producing figures
  of 0.48, 0.65 and 0.36 (N=115-122, depending on test; mean age=22).
• Level-3 (closed) abstract test correlates with its equivalent Graduate and
  Managerial Assessment (a high level abstract test), producing a figure of 0.71
  (N=126; mean age=16).
• Level-4 (closed) verbal and numerical tests correlate with the Graduate
  Management Admission Test (a high level test used for admission to
  university business schools), producing figures of 0.34 and 0.23 (N=54-74,
  depending on test; mean age=26).
Evidence for the construct validity of the MAT comes from the correlation of the test
with the verbal, numerical and abstract reasoning tests. In the tables below, the
sample comprised 100 respondents undergoing professional training.
Table G: Correlations between MAT and the other reasoning tests

                       Total Raw    N Swaps        Correct items   Total Screen   Total Responding
                       (Accuracy)   (Indecision)   per minute      Time (Speed)   Time
Verbal Reasoning        0.01         0.08           0.24           -0.24          -0.22
Numerical Reasoning     0.30        -0.10          -0.00            0.14           0.11
Abstract Reasoning      0.32        -0.07           0.34           -0.14          -0.13

                       Total Help   N Help Clicks   First Set   N Attempted
                       Time         (Memory)        Time
Verbal Reasoning       -0.19         0.19           -0.32       -0.14
Numerical Reasoning     0.15        -0.13           -0.25        0.26
Abstract Reasoning     -0.08        -0.25           -0.18        0.08

Shaded cells are significant at p<0.05.
The results suggest that accuracy is related to non-verbal ability, and that speed is
inversely related to verbal and abstract reasoning, i.e. completing the MAT items
does not require significant verbal or abstract processing. They also suggest that
memory is not a function of numerical or abstract thinking. Overall, the results
suggest that the MAT is measuring a distinct aptitude, something that is separate
from verbal, numerical and abstract reasoning.
Criterion validity
A range of criterion data is available for the tests, showing scores are associated
with three stages of educational attainment: GCSE grades, UCAS points and degree
grades.
The association between the Verbal, Numerical and Abstract Open Level-1 tests and
GCSE results is shown in the table below. It shows moderate and quite consistent
associations between ability assessed by the tests and academic attainment at the
age of 16, with a mean correlation across the three tests of 0.41. Note: as this data
was collected from students who had gone on to further study, the correlations may
underestimate the true association due to the restricted range of GCSE grades.
Table H: Correlations between reasoning tests and GCSE

                         GCSE English   GCSE maths   GCSE science
                         grade          grade        grade
Verbal - L1 (n=48)       0.17           0.44         0.41
Numerical - L1 (n=64)    0.48           0.53         0.20
Abstract - L1 (n=66)     0.47           0.66         0.36

Mean sample age: 16.73-16.86 years.
The association between the Verbal, Numerical and Abstract Open Level-2 tests,
UCAS points and degree class is shown in the next table. Overall, test scores
showed only a modest association with UCAS points and very little association with
degree class. It should be noted, however, that UCAS points were collected
retrospectively from test takers and degree class showed considerable restriction in
range, with most respondents indicating their degree class as being '2i'.
Table I: Correlations between reasoning tests, UCAS points and degree class

                 Sample               UCAS points   Sample               Degree class
Verbal - L2      Age 22.68 (5.53),    0.25          Age 26.36 (9.01),    0.08
                 63.43% male          (n=134)       55.30% male          (n=302)
Numerical - L2   Age 21.80 (3.19),    0.11          Age 24.84 (5.53),    -0.01
                 67.06% male          (n=252)       60.83% male          (n=577)
Abstract - L2    Age 22.38 (5.11),    0.15          Age 26.30 (9.01),    0.08
                 61.80% male          (n=102)       57.67% male          (n=222)
Despite methodological and measurement issues in the criterion-related validity
studies, it is possible to draw some conclusions from the data. Importantly, it can be
concluded that the tests are assessing constructs that are quite distinct from those
assessed through established educational assessments. For test users, this means
that the results from the tests provide information that is distinct from their
educational attainments.
The MAT also shows evidence of criterion-related validity. Using the same sample
of 100 respondents mentioned previously, and predictions of training outcomes
provided by supervisors, significant differences were found across the range of
MAT variables.
Validity and the TDI and CII
Type Dynamics Indicator
At the construct validity level the TDI shows good separation between scales. For
example, the inter-correlations between the E-I scale and the S-N, T-F and J-P
scales are -0.03, -0.20 and -0.16, respectively. The figures for S-N and the T-F and
J-P scales are 0.15 and 0.44; and for T-F and J-P are 0.29.
The highest association between scales is for S-N and J-P, and at 0.44 indicates that
the two share just under 20 per cent of common variance. This is consistent with
other research in this area and suggests that those who prefer cognitive flexibility
and abstraction (S-N) also prefer a more flexible and spontaneous environment.
There is a considerable body of data relating the TDI to other measures of Type. For
example the TDI and the MBTI® (the standard measure of psychological Type) show
agreement between letter preferences in the range 77%-91%.
Studies have also shown that responses to the TDI appear to be almost completely
independent of intellectual capability as measured by the verbal, numerical and
abstract tests.
In terms of how Type changes over time, the following table shows how people
report differently at different age levels.
Table J: Distribution of preference by age

Age     12-16    17-19    12-19
N       947      7733     8671
ENFJ    6.86     4.88     5.07
ENFP    19.85    12.79    13.55
ENTJ    3.17     3.70     3.62
ENTP    6.34     4.25     4.51
ESFJ    10.67    11.74    11.62
ESFP    14.04    9.31     9.81
ESTJ    10.77    14.99    14.53
ESTP    8.66     7.75     7.85
INFJ    2.11     2.61     2.56
INFP    4.75     3.65     3.77
INTJ    2.11     2.20     2.18
INTP    1.90     1.94     1.94
ISFJ    3.06     6.65     6.26
ISFP    1.48     2.74     2.61
ISTJ    3.48     8.65     8.10
ISTP    0.74     2.16     2.01
The most notable feature of this table is that the 12-16 age group reports a higher
preference for extraversion. Post-16 this reduces and more closely matches the
distribution found in adult populations. Part of this will be explained by younger
teenagers feeling the need to be 'part of the group' more keenly. It reminds the
user/administrator to be especially careful when explaining how to complete the
questionnaire with this age group – better understanding will allow some to separate
their personal preference from their strong desire not to be left out. However, the
user should also recognise that people who report a preference for extraversion
could change in the next few years, and this should be explored in feedback.
Career Interests Inventory
The Futurewise Profile report provides information on a student's career interests
(Holland themes) and personality preferences. These are linked in the report and
any meaningful interactions are commented upon. The comments are based on the
theories that define the CII and the TDI, and additionally on what is known about the
interaction between the two assessments. For example, the following table shows
the correlations of the six CII Part-A (normative) standard scores with the four
scales of the TDI.
Table K: Correlations between CII normative scales and TDI scales

                 TDI continuous scales
                 EI       SN       TF       JP
Realistic        0.03    -0.08     0.17    -0.01
Investigative    0.21     0.09    -0.40    -0.12
Artistic        -0.01     0.18     0.27     0.15
Social          -0.50    -0.14     0.35     0.08
Enterprising    -0.44     0.08     0.03     0.10
Conventional     0.04    -0.22    -0.13    -0.34

N=5843
Shaded cells are significant at p<0.05.
It can be seen that of the 24 coefficients computed, 10 are significant at p<0.05. Of
these, virtually all are in line with expectation. For example, respondents who score
highly on:
• Investigative tend to show preferences towards Introversion and Thinking
• Artistic tend to show preferences towards Intuition and Feeling
• Social tend to show preferences towards Extraversion and Feeling
• Enterprising tend to show preferences towards Extraversion
• Conventional tend to show preferences towards Sensing and Judgement.
The only statistically significant relationship which is not easily interpretable in terms
of a priori expectations is that between Realistic and Feeling. This will be explored
further in due course.
2.6 Comparison tables
The tables that follow allow you to compare the score a student achieves on the
Verbal, Numerical and Abstract Reasoning tests with other norm groups, using a
common scale. In each case take the relevant comparison (IRT) score from the
Adviser's Report and look across the table to the relevant percentile for a particular
comparison.
For example, if Student A achieves an IRT score of 100 (Verbal {Closed} Level-2),
this puts him/her at the 61st percentile when compared to students considering
A-Level (or equivalent) qualifications. It also suggests that he/she is at the 40th
percentile when compared to undergraduate students.
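For users who want to script this lookup rather than read the tables by hand, a
minimal sketch follows. The three-row excerpt is taken from the Verbal (Closed)
table below; the function and dictionary names are illustrative, not part of the
Futurewise system.

```python
# Excerpt from the Verbal (Closed) comparison table below.
# IRT score -> (Level 1 GCSE, Level 2 A-Level, Level 3 UG, Level 4 PG) percentiles.
VERBAL_CLOSED = {
    99:  (85, 59, 38, 27),
    100: (85, 61, 40, 29),
    101: (87, 63, 44, 31),
}

def percentiles_for(irt_score: int, table: dict) -> dict:
    """Return the percentile for each norm group at a given IRT score."""
    gcse, a_level, ug, pg = table[irt_score]
    return {"GCSE": gcse, "A-Level": a_level, "UG": ug, "PG": pg}

# Student A from the worked example above:
print(percentiles_for(100, VERBAL_CLOSED))
# {'GCSE': 85, 'A-Level': 61, 'UG': 40, 'PG': 29}
```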
VERBAL (CLOSED)

standardised           Percentiles
IRT score    Level 1   Level 2   Level 3   Level 4
             GCSE      A-Level   UG        PG
40           1         1         1         1
41           1         1         1         1
42           1         1         1         1
43           1         1         1         1
44           1         1         1         1
45           2         1         1         1
46           2         1         1         1
47           2         1         1         1
48           2         1         1         1
49           3         1         1         1
50           3         1         1         1
51           4         1         1         1
52           5         1         1         1
53           6         1         1         1
54           7         1         1         1
55           8         1         1         1
56           10        1         1         1
57           11        1         1         1
58           12        1         1         1
59           14        1         1         1
60           16        1         1         1
61           18        1         1         1
62           20        1         1         1
63           22        1         2         1
64           23        1         2         1
65           24        1         2         1
66           26        3         2         1
67           28        3         2         1
68           29        4         2         1
69           31        4         3         1
70           32        5         3         2
71           33        5         3         2
72           35        6         4         2
73           37        7         4         3
74           39        8         4         3
75           40        9         5         3
76           41        10        6         4
77           42        11        7         4
78           44        13        7         4
79           45        15        7         5
80           46        17        8         5
81           47        19        9         6
82           49        21        11        7
83           51        24        12        8
84           55        25        13        9
85           57        27        14        10
86           59        29        15        11
87           61        31        17        12
88           63        33        19        12
89           65        35        20        13
90           67        38        21        14
91           69        40        23        15
92           71        42        25        17
93           74        44        26        18
94           76        47        28        19
95           78        50        30        20
96           80        52        32        22
97           82        54        34        24
98           83        57        36        25
99           85        59        38        27
100          85        61        40        29
101          87        63        44        31
102          88        64        46        34
103          89        65        48        36
104          91        67        50        38
105          91        68        52        39
106          92        69        55        41
107          93        71        58        42
108          94        73        60        44
109          94        75        62        45
110          94        77        65        47
111          95        79        67        49
112          95        81        69        50
113          96        84        71        51
114          96        85        74        53
115          96        86        77        54
116          96        87        79        57
117          96        89        81        60
118          97        89        83        63
119          97        90        85        65
120          97        91        86        67
121          97        92        88        69
122          97        93        89        70
123          97        94        90        73
124          98        95        91        75
125          98        96        92        77
126          98        96        93        79
127          98        96        94        80
128          98        97        95        82
129          98        98        95        84
130          98        98        96        85
131          98        98        96        86
132          98        98        97        87
133          98        98        97        88
134          98        99        97        90
135          98        99        97        90
136          98        99        98        91
137          98        99        99        92
138          98        99        99        93
139          98        99        99        93
140          98        99        99        94
141          99        99        99        94
142          99        99        99        95
143          99        99        99        96
144          99        99        99        96
145          99        99        99        96
146          99        99        99        96
147          99        99        99        97
148          99        99        99        97
149          99        99        99        98
150          99        99        99        98
151          99        99        99        98
152          99        99        99        98
153          99        99        99        98
154          99        99        99        98
155          99        99        99        99

Based on
sample size: 1008      2097*     1275      495
* This figure includes 312 from the IF comparability study (2011/12).
NUMERICAL (CLOSED)

standardised           Percentiles
IRT score    Level 1   Level 2   Level 3   Level 4
             GCSE      A-Level   UG        PG
60           1         1         1         1
61           2         1         1         1
62           2         1         1         1
63           2         1         1         1
64           2         1         1         1
65           2         1         1         1
66           3         1         1         1
67           4         1         1         1
68           5         1         1         1
69           5         1         1         1
70           6         1         1         1
71           7         1         1         1
72           8         1         1         1
73           9         1         1         1
74           10        1         1         1
75           11        1         1         1
76           12        1         1         1
77           13        1         1         1
78           15        3         1         1
79           17        4         1         1
80           19        5         1         1
81           21        6         2         1
82           24        7         2         1
83           27        8         3         1
84           30        10        3         1
85           34        13        4         1
86           37        15        5         1
87           39        16        5         1
88           43        18        6         1
89           46        20        7         1
90           49        23        8         1
91           53        25        10        1
92           56        28        11        1
93           59        33        12        3
94           62        37        13        4
95           64        40        15        5
96           67        43        18        6
97           70        46        20        7
98           72        49        22        9
99           74        52        24        11
100          76        55        28        13
101          78        58        31        14
102          79        60        35        15
103          81        63        39        17
104          82        65        41        20
105          84        67        43        23
106          85        69        46        26
107          86        70        49        29
108          87        71        53        32
109          88        72        55        35
110          89        74        57        37
111          90        75        59        39
112          90        76        62        41
113          90        78        65        44
114          91        79        67        47
115          92        80        69        50
116          92        81        71        53
117          93        83        74        56
118          93        84        77        60
119          93        85        79        63
120          93        86        81        66
121          93        88        82        68
122          94        89        84        71
123          94        88        86        72
124          94        90        86        73
125          94        91        87        74
126          95        91        88        76
127          95        92        88        78
128          95        92        90        79
129          95        93        91        81
130          95        94        91        82
131          95        94        91        83
132          96        94        92        84
133          96        95        92        85
134          96        95        92        86
135          96        96        94        87
136          96        96        94        88
137          97        96        95        89
138          97        96        96        90
139          97        96        96        91
140          97        97        96        91
141          97        97        96        91
142          97        97        97        92
143          97        97        97        92
144          97        98        97        93
145          97        98        98        94
146          98        99        98        94
147          98        99        98        95
148          98        99        98        96
149          98        99        98        96
150          98        99        99        96
151          98        99        99        96
152          98        99        99        97
153          99        99        99        97
154          99        99        99        97
155          99        99        99        97
156          99        99        99        98

Based on
sample size: 1773      3018*     1817      930
* This figure includes 358 from the IF comparability study (2011/12).
ABSTRACT (CLOSED)

standardised           Percentiles
IRT score    Level 1   Level 2   Level 3   Level 4
             GCSE      A-Level   UG        PG
50           2         1         2         1
51           3         1         2         1
52           3         1         2         1
53           3         1         2         1
54           4         1         2         1
55           4         1         2         1
56           5         1         2         1
57           5         1         2         1
58           5         1         2         1
59           6         1         2         1
60           6         1         2         1
61           7         2         3         1
62           7         3         3         1
63           8         4         4         1
64           9         5         5         2
65           10        6         6         2
66           10        7         6         3
67           11        8         6         3
68           12        9         6         4
69           13        10        6         5
70           14        11        7         6
71           15        13        8         7
72           17        15        9         7
73           19        16        10        8
74           21        18        11        9
75           23        19        12        11
76           25        21        13        12
77           26        23        15        13
78           27        25        16        14
79           29        27        17        16
80           30        28        20        17
81           32        29        21        19
82           34        31        22        21
83           36        34        25        21
84           38        37        26        22
85           40        39        28        23
86           42        41        31        24
87           44        43        32        25
88           47        46        34        26
89           51        48        35        28
90           54        51        37        29
91           57        53        40        30
92           58        55        43        32
93           59        57        44        33
94           61        59        46        35
95           62        61        48        37
96           64        64        50        39
97           66        66        52        42
98           68        69        55        43
99           69        70        57        47
100          70        72        59        51
101          72        74        62        53
102          74        76        64        55
103          79        77        66        56
104          81        79        70        58
105          83        80        71        62
106          84        81        72        63
107          85        82        73        65
108          86        83        75        66
109          87        84        77        68
110          88        85        78        70
111          89        86        80        71
112          89        87        80        73
113          90        88        81        75
114          91        90        82        77
115          92        90        84        78
116          93        91        85        80
117          94        92        86        81
118          94        93        88        83
119          95        94        88        84
120          96        94        89        85
121          96        95        90        86
122          96        95        90        88
123          97        95        91        89
124          97        95        92        90
125          97        95        93        91
126          97        96        94        92
127          98        96        95        93
128          98        97        95        94
129          98        97        95        94
130          98        97        96        94
131          99        97        96        95
132          99        97        96        96
133          99        97        96        97
134          99        98        96        97
135          99        98        96        97
136          99        98        97        98
137          99        98        97        98
138          99        98        97        98
139          99        98        98        98
140          99        98        98        98
141          99        98        98        98
142          99        98        98        99
143          99        98        99        99
144          99        99        99        99
145          99        99        99        99

Based on
sample size: 768       1854*     1226      469
* This figure includes 329 from the IF comparability study (2011/12).
Percentile to STEN conversion
To estimate a STEN from a percentile, use the table below. For example, a
percentile score of 30 is equivalent to a STEN of 4; a percentile of 74 is equivalent
to a STEN of 7, and so on.
Percentile   STEN
0+           1
2+           2
7+           3
16+          4
31+          5
50+          6
69+          7
84+          8
93+          9
98+          10
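For anyone automating this conversion, the lookup is a simple threshold search. The
sketch below is a hypothetical Python helper (not part of the Futurewise system) that
reproduces the table above; the cut-offs correspond to half-standard-deviation bands
of the normal curve.

```python
# Percentile thresholds for each STEN band, highest first,
# taken from the conversion table above.
THRESHOLDS = [(98, 10), (93, 9), (84, 8), (69, 7), (50, 6),
              (31, 5), (16, 4), (7, 3), (2, 2), (0, 1)]

def percentile_to_sten(percentile: float) -> int:
    """Return the STEN band (1-10) for a percentile score (0-100)."""
    for lower_bound, sten in THRESHOLDS:
        if percentile >= lower_bound:
            return sten
    return 1

# Worked examples from the guide: 30 -> STEN 4, 74 -> STEN 7.
assert percentile_to_sten(30) == 4
assert percentile_to_sten(74) == 7
```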
Appendix 1 – Start Profiling
Jeremiah Trial
Information Sheet | Verbal Critical Reasoning Test
Basic Information
Test Name: Verbal Critical Reasoning
Test Duration: 12 minutes (timed)
Time required for instructions: 10 minutes suggested
Summary of test's purpose: This test looks at the ability of students to think
logically about written information.
Instructions to give students
For this test, students will see passages of text, followed by statements relating to
the text. They will be required to:
• Read each passage of text carefully
• Read the following statements carefully
• Decide whether each statement follows logically from the information in the
  passage
• For each statement, decide whether it is True, False or Can't Tell based on
  the information given in the passage
• Click SUBMIT at the end of the test – otherwise test results will not be
  recorded.
When deciding whether a statement is True, False or Can't Tell, it is important
that students base their answers ONLY on the information in the passage, and not
on any other information they may have. Their task is simply to judge whether or not
the statement follows logically from the passage.
REMEMBER to inform the students that they will have the opportunity to go through
some practice questions first. These are NOT included in the overall time for the test.
Make sure that students don’t think that they have started the real test when
completing these practice questions – this often occurs and can waste valuable
time.
What students should see on their screens
This is the first screen that students will see for this test.
This screen gives them specific test instructions.
Students are shown a worked example with text and are then given four sample
questions to practice.
This is the final screen that students will see before they start the real test. If
unsure, they can go back and read the instructions again.
Jeremiah Trial
Information Sheet | Abstract Reasoning Test
Basic Information
Test Name: Abstract Reasoning Test
Test Duration: 10 minutes (timed)
Time required for instructions: 10 minutes suggested if first test taken, if not 3 minutes
Summary of test's purpose: This test looks at the ability of students to identify
relationships between shapes. This ability is related to testing out new ideas and
solving problems.
Instructions to give students
For this test, students will see two sets of shapes per question: ‘Set A’ and ‘Set B’.
All of the shapes in Set A are similar in some way, and all of the shapes in Set B are
similar in some way. Set A and Set B are not related to each other. Students will be
required to:
• Work out how the shapes in Set A are related to each other
• Work out how the shapes in Set B are related to each other
• Then work out whether further shapes provided belong to Set A, Set B or
  Neither Set.
• Click SUBMIT at the end of the test – otherwise test results are not recorded
REMEMBER to inform the students that they will have the opportunity to go through
some practice questions first. These are NOT included in the overall time for the test.
Make sure that students don’t think that they have started the real test when
completing these practice questions – this often occurs and can waste valuable
time.
What students should see on their screens
This is the first screen that students will see for this test.
This screen gives them specific test instructions.
Students are shown a worked example with text and are then given five sample
questions to practice.
This is the final screen that students will see before they start the real test. If
they are unsure, they can go back and read the instructions again.
Jeremiah Trial
Information Sheet | Careers Interests Inventory Questionnaire
Basic Information
Test Name: Careers Interests Inventory Questionnaire
Test Duration: 15 minutes (untimed)
Time required for instructions: 5 minutes suggested
Summary of test's purpose: This test helps students to understand more about
their interests and the kind of work that may be suitable for them.
Instructions to give students
For this test, students will see various jobs or activities. Get students to think
about how interested they are in the type of work or activity described. It is
important that they do not think about whether they have the necessary
skills, abilities or qualifications – this test is looking at how interested they are
in it. Students will be required to:
• Read the question, 'How interested are you in...'
• Indicate whether they are 'Not Really', 'A Bit' or 'Very' interested in it
  by clicking on the appropriate option listed below the image
• When students click on one of the options, it will be highlighted to
  show that they have selected this as their answer. They can change
  their mind whilst still in that screen or at a later stage by clicking on
  the 'Back' button.
• When happy with their answer, click on the 'Next' button to proceed to
  the next question.
• Click SUBMIT at the end of the test – otherwise test results are not
  recorded.
At any time during the test, students can see a summary of the instructions
by clicking on the ‘question mark’ button.
REMEMBER to inform the students that they will have the opportunity to go
through some practice questions first. These are NOT included in the overall
time for the test.
Make sure that students don’t think that they have started the real test when
completing these practice questions – this often occurs and can waste
valuable time.
What students should see on their screens
This is the first screen that students will see for this test.
This screen gives them specific test instructions.
Students are shown an example and are then given some sample questions to
practice.
This is the final screen that students will see before they start the real
test. If they are unsure, they can go back and read the instructions again.
Jeremiah Trial
Information Sheet | Overview of Psychometric Tests
General points to be aware of when guiding students through the six tests
Know what each test looks like
However you structure your programme of testing (i.e. number of sessions, etc.), the most
important thing is that you know what each test should look like when the student clicks through to it.
We have included screenshots of all the tests in the individual information sheets to familiarise you
with the initial screens that students will click through before beginning the actual test.
Instructions
Each test is preceded by a set of instruction screens – students can go back through these pages to
read again before they start the actual test if they feel unsure. They will just need to click on the ‘Go
Back’ button at the bottom of the final screen.
Practice Questions
Students will be given the opportunity to complete some practice questions before they start the real
test. It is worth doing these so that they familiarise themselves with the style of questions and the
method of answering them.
Be aware that lots of people think that they have started the real test when they begin the practice
questions, so ensure that your students know when they are starting the real test, particularly if it
is timed.
Submission
Most importantly, please note that results are ONLY recorded when the student clicks on the SUBMIT
button at the very end of the test. If they complete the whole test but fail to do this, all their results will
be lost. So make students aware that they need to click this button at the end.
If an IT issue causes a loss of connectivity during the test, then students will need to start any
interrupted tests again in order for them to be successfully submitted.
Timing
If you really run out of time with the tests, either because students take longer than expected to
understand the instructions or if there is an unforeseen IT issue for example, then our suggestion is
that you allow students to complete the two untimed tests at home. These do not have to be
supervised as they are preference questionnaires rather than ability tests.
However, if you choose to do this, then it is wise to not let students know until after they have
completed the timed tests that do need to be supervised. The reason for this is that they will use the
same code to login at home as in school, and some clever students may realise that they can get a
preview of the supervised tests beforehand and can practice them without submitting their answers!
So please be very cautious if adopting this approach.
Jeremiah Trial
Information Sheet | Memory and Attention Test
Basic Information
Test Name: Memory and Attention Test
Test Duration: 17 minutes (timed)
Time required for instructions: 5 minutes suggested
Summary of test's purpose: This test looks at the ability of students to memorise
and follow instructions.
Instructions to give students
For this test, students will see a number of screens containing a number of shapes of
different colours. Half way through the test, the screens will also contain letters and
numbers as well as coloured shapes. Before each set of screens is shown, students
will see some instructions telling them which shapes they should select. Students will
be required to:
• Read the instructions carefully before proceeding onto the screens with
  shapes
• Select the appropriate shapes as set out by the instructions (these are not
  shown alongside the shapes, although students can click back to them at any
  point during the test)
• Click SUBMIT at the end of the test – otherwise test results are not recorded
The purpose of the test is to follow these instructions as quickly and as accurately as
possible. The instructions get harder as students progress through the test.
Sometimes more than one instruction will apply to a particular shape. Where this is
the case, students only need to click on the shape once to select it.
In the bottom left of the screens, there will be an ‘Instructions’ button. Students can
click on this button at any time to remind them of the instructions, but make sure they
are aware that the test is timed and this will slow them down.
REMEMBER to inform the students that they will have the opportunity to go through
some practice questions first. These are NOT included in the overall time for the test.
Make sure that students don’t think that they have started the real test when
completing these practice questions – this often occurs and can waste valuable
time.
What students should see on their screens
This is the first screen students should see.
This screen gives them specific test instructions.
Students are shown an example question and are then given some sample
questions to practice.
This is the final screen that students will see before they start the real test. If
they are unsure, they can go back and read the instructions again.
Jeremiah Trial
Information Sheet | Numerical Critical Reasoning Test
Basic Information
Test Name: Numerical Critical Reasoning
Test Duration: 12 minutes (timed)
Time required for instructions: 10 minutes suggested if first test taken, if not 5 minutes
Summary of test's purpose: This test looks at the ability of students to solve
numerical problems.
Instructions to give students
For this test, students will see some numerical information followed by questions that
relate to that information. They will be required to:
• Read each piece of numerical information carefully
• Look at the five possible answer options and read them carefully
• Work out the correct answer from the information provided and select from the
  five options given
• Click SUBMIT at the end of the test – otherwise test results are not recorded.
Please note, calculators are not allowed for the numerical test.
REMEMBER to inform the students that they will have the opportunity to go through
some practice questions first. These are NOT included in the overall time for the test.
Make sure that students don’t think that they have started the real test when
completing these practice questions – this often occurs and can waste valuable
time.
What students should see on their screens
This is the first screen that students will see.
This screen gives specific test instructions.
Students are shown a worked example with text and are then given four
sample questions to practice.
This is the final screen that students will see before they start the real test. If
unsure, they can go back and read the instructions again.
Jeremiah Trial
Information Sheet | Type Dynamics Indicator Questionnaire
Basic Information
Test Name: Type Dynamics Indicator Questionnaire
Test Duration: 20 minutes (untimed)
Time required for instructions: 5 minutes suggested
Summary of test's purpose: This test looks at students' preferences and styles
according to the four different dimensions of personality.
Instructions to give students
For this test, students will see a number of different pairs of pictures. Students will be
required to:
• Look at each pair
• Decide which one best represents their natural preference
• Click with their mouse on the 6-point scale to give an answer
• Click SUBMIT at the end of the test – otherwise test results will not be
  recorded.
Students need to think about how they typically behave or feel rather than what they
do to please others. Choosing one picture doesn't mean that the other picture is
irrelevant – it simply means that it is not quite as natural.
REMEMBER to inform the students that they will have the opportunity to go through
some practice questions first. These are NOT included in the overall time for the test.
Make sure that students don’t think that they have started the real test when
completing practice questions – this often occurs and can waste valuable time.
What students should see on their screens
This is the first screen that students will see for this test.
This screen gives them specific test instructions.
Students are shown an example and are then given two sample questions to
practice.
This is the final screen that students will see before they start the real test. If they
are unsure, they can go back and read the instructions again.