Assessment Acumen: Do You Have It? (Jan 07)
January 17, 2007
By Margery Weinstein
After taking years to assemble a well-crafted learning program, and recruiting instructors and
subject matter experts (SMEs) competent enough to deliver it, you suddenly realize you don't
know whether it's working. What you need is an assessment plan. Whether hands-on, online, in
the classroom, or some perfected combination of the three, the experts agree: you need a way to
measure the skills mastery of your learners.
On-the-Job
For nearly five years, the Fort Worth, Texas-based aeronautics division of defense contractor
Lockheed Martin has been making a push when it comes to assessment, says Ron Terry, senior
manager, learning and development. Testing of the company's more than 8,000 aeronautics
engineers is becoming more consistent, he explains. That means a uniformity of standards as
well as a meaningful link between what's in the test and what employees need to know on the
job. "We derive the learning objectives from the job performance requirements," says Terry, "and
then we do a skills assessment commensurate with the knowledge or skills that are being tested."
Sometimes that means just an online or paper-based test, but when it's relevant to what's being
taught, it also means a hands-on demonstration by the learner. When engineers need to learn
how to use software to design an airplane part, they are asked to do the work itself. The outcome
is then evaluated by an SME to gauge competency. "We try to get the assessment as close to the
actual task performance as possible," he says.
Three to six months after training, a follow-up evaluation is sent to employees and their
supervisors to determine whether they've used the skills they were taught, and how effective they
think the training was in teaching what they need to know on the job. Lockheed also has
formalized on-the-job (OTJ) training in which a job assignment itself serves as the assessment. An
SME will go over "the process, the approach, and the templates" of a task with a trainee, who will
then complete the assignment for evaluation. Says Terry, "It's product-based, so go produce that
product [and] an SME is going to evaluate it." Like the hands-on assessment that sometimes
follows classroom or online learning, the OTJ component gives the evaluation what Terry calls
greater "fidelity," meaning a much more direct connection to the learner’s day-to-day work.
Determining whether a bridge has been made between instruction in the classroom and behavior
on the job should be a priority of evaluation, says Roger Chevalier of Rohnert Park, Calif.-based
consultancy Improving Workplace Performance. "The biggest mistake is we follow up with
students based on whether or not they've acquired knowledge, and that's a problem. The
outcome we're looking for is actually a change in behavior," he says. "I think all trainers need to
redefine learning as not just the acquisition of knowledge but as the ability to demonstrate a
desired behavior, and that learning is everything, and anything, that contributes to the change in
behavior we’re looking for."
Closing the gap between the test and employee behavior on the job is something Lockheed's
aeronautics division hopes will occur across its ranks. Terry says he and his colleagues will be
"rolling out a similar approach" in assessment for all of aeronautics' approximately 150
engineering technical specialties.
Chevalier says efforts to align training with workplace behavior require more than an initiative;
they require a shift in thinking. "It's a whole different mindset that needs to take place," he stresses.
"If behavior doesn't change, then there's no return on investment for the knowledge."
Indeed, making rigorous assessment an accepted standard in Lockheed's aeronautics division
requires a culture change, Terry observes. "We're putting teeth into the test one is given in a
course," he says of ensuring the skills are used properly back on the job.
Tests are never fun, but he says employees are much more accepting when the content of the
test has obvious relevance to their work. "I’m not saying they like to be assessed, but if there is
going to be an assessment, if it has high-fidelity to the task being performed, they usually accept
it much better," he stresses. "If they can see the exact benefit that, 'Hey, I was signed off on this,
and now I am qualified to go do this task,' it kind of builds in that buy-in, as opposed to, 'I took a
paper and pencil test, I passed it, and I still don't know how to do the task on my job.'"
"Sprint"ing to Action
Reston, Va.-based Sprint Nextel has the use of Donald L. Kirkpatrick's training evaluation model
for assessment down to a science. The telecommunications provider knows when a course
requires a full four-level evaluation, and when only the first or second level is appropriate. The
evaluation model, which gauges student reaction, whether the learning increased knowledge or
capability, the extent that behavior was changed, and the effect the learning had on the business
or work environment, is now supported with automation at Sprint Nextel. Since August, Sprint
Nextel has had the capability to automatically send level one evaluations to learners, plus the
ability to follow that 60 days later with a level three evaluation to either the learner or their
respective manager, explains Connie Hughes, manager, learning, analytics and reporting.
Kirkpatrick's model, though, has always been important to the company's learning program; it is
sometimes appended with a fifth level, pioneered by Dr. Jack Phillips of Birmingham, Ala.-based
ROI Institute, that measures the return on investment gained from the training.
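For readers who want to picture the automated cadence Hughes describes, here is a minimal sketch in Python of queuing a level one evaluation at course completion and a level three follow-up 60 days later. The level names, the recipients, and the 60-day delay follow the article; everything else (function name, record format) is an illustrative assumption, not a description of Sprint Nextel's actual system.

    from datetime import date, timedelta
    from enum import IntEnum

    # The five evaluation levels discussed in the article: Kirkpatrick's four,
    # plus the ROI level attributed to Jack Phillips.
    class EvalLevel(IntEnum):
        REACTION = 1   # learner satisfaction survey
        LEARNING = 2   # knowledge or skill test
        BEHAVIOR = 3   # on-the-job behavior change
        RESULTS = 4    # business impact
        ROI = 5        # return on investment

    def schedule_followups(completed: date, learner: str, manager: str) -> list:
        """Queue a level one survey immediately and a level three follow-up
        60 days later, mirroring the cadence described in the article."""
        return [
            {"level": EvalLevel.REACTION, "recipient": learner, "send_on": completed},
            {"level": EvalLevel.BEHAVIOR, "recipient": manager,
             "send_on": completed + timedelta(days=60)},
        ]

    for evaluation in schedule_followups(date(2007, 1, 17), "learner@example.com",
                                         "manager@example.com"):
        print(evaluation)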
After training, says Sprint Nextel director of program development Randy Lewis, employees
complete a level one evaluation that asks how satisfied they were with administrative details such
as the enrollment process, the facility, and the learning environment, as well as how easily they
comprehended the instruction, what they thought of the instructor's delivery (if appropriate), and
whether they would apply the learning to their work. For level two, the
employees are formally assessed, either in the classroom or online, with test preparations, such
as role-playing exercises, included to get them ready.
Training new hires in one of the company's roughly 80 call centers usually only warrants a level
one and two evaluation, but when training field sales reps, aspects of level three will be added to
the mix, says acting vice president for Sprint University Carolyn Fornataro. As part of its sales
mentorship program, in which a sales mentor is assigned to a new rep over a 90-day period, role-playing exercises are used for the new worker to demonstrate what he or she has mastered in
each stage of learning, and the exercises also are used at the end of the program in a formal
assessment following a written or online test.
Including more than memory-based exercises is, in fact, important to assessment, says Ruth
Clark, president of Cortez, Colo.-based Clark Training & Consulting. "Many people test at the
regurgitation or remember-level. They're just asking people to recall or recognize content, but
that's not really valid because on the job, people have to perform," she says. "Listing the steps is
not the same as doing it." That’s a point Sprint Nextel is well aware of.
"At the end of the sales mentorship program," says Fornataro, "there is a panel review that is
facilitated by a full demonstration of all the selling skills they learned during their mentorship
time." Call center training often incorporates a 30-, 60-, 90-day follow-up to ensure training has
had impact on on-the-job employee performance.
Deciding which assessment level each training program needs is more than guesswork at Sprint
Nextel. "We have an evaluation strategy that we have written," says Hughes, "and we provide
some guidelines on when it is appropriate to do the level one, and stop there, and when do we do
a level two, three, four, and five."
Levels four and five are typically conducted when a training program affects a large number of
employees, is very costly, has high visibility, or is time-consuming and requires added effort.
Sprint Nextel's business sales division conducted a level four after reps received training on a
new selling methodology. "We actually measured the impact to sales in terms of what the impact
to the organization was," says Fornataro. "We were able to prove a multimillion dollar positive
impact to the corporation."
Sprint Nextel isn't alone in its selective use of level four and five evaluations. "The amount of work
being done at level four, or ROI, is not big," says Michael Nolan, president of Ottawa, Ontario,
Canada-based consultancy Friesen, Kaye and Associates. "There's been so much pressure to
get the content out to the learners, no matter how you do it, that we've been focusing on content
being delivered without looking at how that content fits into the context of the individual's job."
Ironically, level five has most often been used to determine the impact of a performance support
tool that makes training unnecessary, says Lewis. The corporate university was asked, a few
years ago, to train reps how to calculate ROI for sales prospects, to show them what they will
likely get for their investment if they purchase a product from Sprint Nextel. Instead of developing
training, the university created an online tool that does the calculation for the reps. "We looked at
what that did to performance by allowing them to do their own ROI studies versus having finance
do it for them, or trying to train them on how to perform ROI calculations," he explains.
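The calculator Lewis describes is not public, but the arithmetic behind any such tool is the standard ROI formula: net benefit divided by cost. A hedged sketch in Python, with all figures invented for illustration:

    def roi_percent(benefit: float, cost: float) -> float:
        """Standard ROI formula: net benefit divided by cost, as a percentage."""
        return (benefit - cost) / cost * 100

    # Illustrative figures only: a prospect weighing a $120,000 purchase against
    # an estimated $150,000 in first-year savings sees roughly a 25% ROI.
    print(f"{roi_percent(150_000, 120_000):.0f}%")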
Levels three, four, and five, says Lewis, allow for a more rigorous, though more complex,
assessment of the end goal of all assessment: whether the training mattered.
Best Tests
You're developing and delivering questions for your employees, but creating an assessment can
be a test for trainers. Glenbrook, Nev.-based consultant Sharon Bowman offers a solution set to
plug in.
Keep it simple: No need to over-complicate analysis of the results. "Trainers are simplifying the
standard corporate 'return on investment' language," says Bowman, "to the trainer's 'cut-to-the-chase'
observable and measurable learning outcomes: 'Did they learn it? Can they use it? Does it
make a difference to the company?'"
A balanced measurement: Remember that all results are relative, says Bowman, so be sure to
use both objective and subjective measures in your assessment. "Trainers know the ROI
numbers can reflect many things, and that both objective and subjective data are needed to
determine benefits to the company," she notes. "So, trainers are including subjective data such
as on-the-job observations, 'secret shopping,' spot-checking skills, feedback and coaching in the
assessment plans."
Follow-up: Don't assess once and forget about it as if the learning is a done deal. Post-training
follow-up, "including commitments from management that support ongoing learning,
reinforcement, coaching, performance feedback, and experienced employees mentoring new
hires," is becoming increasingly common, Bowman says.
Long-term impact: "Trainers are beginning to assess programs in terms of employees' ongoing
relationships with the learning, with each other, and the company," she says. Assess whether
employees seek out more training after the instruction you just delivered, whether they seem
excited about learning new skills, how enthusiastic they are about their jobs and the company's
products and services, whether they get along better with co-workers, whether they are more
productive, and whether they stay longer with the company.
Software Help
Tracking questions, answers, and final scores can be daunting. But, even for a small training
department, it's not insurmountable. Software that does what you don't have time for can help.
Technology makes tying assessment back to job requirements and core competencies easier,
says Amy Wilson, director of human capital management at Redwood Shores, Calif.-based
enterprise software provider Oracle. Such systems can store questions that are automatically
brought to the screens of test-takers based on their role in the company. You can set the software
to recognize job titles so all individuals in those positions are given a particular test, while those in
other positions are given other assessments.
Moreover, if there are skills that cross positions in the company, the software can recognize the
corresponding questions and re-use them on multiple tests. "Being able to use that same type of
assessment technology in multiple facets of the organization in terms of following the employee
lifecycle," she says, "is really where companies get the most benefit."
Software also can direct employees to the next step after finishing the test, based on the outcome
and their job title. The system might store a list of questions for all IT workers receiving training on
the Java programming language. After mid-level staffers in the department take a test
customized to their position, the system can tell them they've passed and that they will proceed to
another learning module; if they've failed, a new screen will detail the skill areas they need to
review.
The software also can weight certain questions more than others in its scoring, "so perhaps the
first five questions are not so important, and the last 10 are very important, it can weight them that
way, or something more complex," says Wilson.
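A minimal sketch of the weighted scoring Wilson describes, combined with the pass/fail routing from the previous paragraph. The weights, the 80 percent passing threshold, and the messages are assumptions made for illustration.

    def weighted_score(correct: list, weights: list) -> float:
        """Return the percentage of total question weight the learner earned."""
        earned = sum(w for ok, w in zip(correct, weights) if ok)
        return earned / sum(weights) * 100

    def next_step(score: float, passing: float = 80.0) -> str:
        """Route the learner after the test, as the article describes."""
        if score >= passing:
            return "Passed: proceed to the next learning module."
        return "Not passed: review the flagged skill areas."

    # Illustrative run: five low-weight questions followed by ten that count double.
    weights = [1.0] * 5 + [2.0] * 10
    correct = [True] * 5 + [True] * 7 + [False] * 3
    score = weighted_score(correct, weights)
    print(f"{score:.0f}% -> {next_step(score)}")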
Like any technology, assessment software—which Wilson considers a form of talent
management—is an investment, so companies typically only use it to assess training for business
functions they deem most critical, such as sales. But she says that may be changing: "The talent
management now is on the rise in terms of adoption, and people understand the value of really
understanding the capabilities of their people versus what they need them to do. It fits very well
into that whole concept."
Training Magazine, ©2007 VNU Business Media