Assessing Training - Best Practices


What is the best method to assess training?
Well, there's a whole body of discussion on the topic, most of it centering on, or disagreeing with, the Kirkpatrick model.
Ideally, assessing training would measure the amount of change in the desired behavior. However, that's very challenging (if not flat-out unworkable) in many, if not most, situations.

Are you trying to assess mastery, behavior change, return on investment, return on expectations, on-the-job performance, or satisfaction? There are a number of ways to assess training.

There's also Brinkerhoff's Success Case Method, where you find the best and worst performers and use the differences between the two to evaluate and improve your training.


There is also the Model for Affective Process Scoring (MAPS), which in its original form coordinated affective and cognitive development based on both observed behavior and test results. Since MAPS assesses prior learning, it can be used at any time to gauge candidate potential, and with pre- and post-training assessment it can quantify the success of training.
A behaviorist approach measures learning through changes in behavior, which is one way of assessing learning. But there are many. Smith and Ragan have a fantastic book that is worth finding, titled Instructional Design, 3rd ed. It gave me some great ideas about assessment and about transferring knowledge and skills to different areas, and it contains great examples of learning design. Gagné's Nine Events of Instruction can also help you. John Stevenson (I think) of Griffith University has edited a book on vocational expertise. To be innovative you need to read the theories and have experience; it all depends on the industry you focus on.


Provide a questionnaire right at the beginning of the sessions, covering the topics we have discussed with the client: not just a "yes"/"no" questionnaire, but one that asks for a written understanding of what the participants should learn. At the end of the sessions, provide the same questionnaire and review it to assess whether the participants have learned the topics in the objectives.
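
A minimal sketch of how that pre/post comparison can be tallied, assuming each questionnaire is scored per topic on a 0-100 scale; the topics and numbers here are hypothetical:

    # Compare pre- and post-session questionnaire scores per topic.
    # Topics and scores below are hypothetical illustrations.
    pre_scores = {"data entry": 45, "report writing": 30, "call handling": 55}
    post_scores = {"data entry": 80, "report writing": 70, "call handling": 85}

    for topic, pre in pre_scores.items():
        post = post_scores[topic]
        print(f"{topic}: {pre} -> {post} (gain {post - pre:+d})")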


 
It has been my experience that the best way to assess training outcomes is to "close the loop," as it were, and look at WHY you did the training in the first place. I used to get ribbed for always asking, "Why are we doing this training?" Usually, that is answered with something along the lines of "to reduce support costs," "to increase productivity," or something similar.
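
Since those answers are usually framed in money or time, even a rough back-of-the-envelope calculation can close the loop. A minimal sketch of the "reduce support costs" case, in which every figure is hypothetical:

    # Back-of-the-envelope check of a "reduce support costs" goal.
    # All figures are hypothetical illustrations, not real data.
    training_cost = 20_000.0    # delivery plus participant time
    tickets_before = 1_200      # support tickets per quarter, pre-training
    tickets_after = 950         # support tickets per quarter, post-training
    cost_per_ticket = 30.0

    quarterly_saving = (tickets_before - tickets_after) * cost_per_ticket
    annualized_roi = (quarterly_saving * 4 - training_cost) / training_cost
    print(f"Annualized ROI: {annualized_roi:.0%}")  # 50% with these numbers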

 
First and foremost, we must identify whether it's knowledge, skills, or attitude that we're dealing with. If it's knowledge, then use a simple paper-and-pencil pre-test and post-test to gauge learning (one common way to summarize the result is sketched below). Similarly, for skills, the supervisors at the workplace can evaluate any improvement in performance. And finally, for attitude changes (the most difficult of the three), we can look down the line six months or a year later at the individual's behavior and attitudes.
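
One widely used summary statistic for a knowledge pre-test/post-test is the normalized gain (Hake), the fraction of the possible improvement that was actually achieved. A minimal sketch with hypothetical scores:

    # Normalized gain (Hake): the fraction of the achievable improvement
    # between pre-test and post-test that was actually realized.
    def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
        return (post - pre) / (max_score - pre)

    # Hypothetical learner: scores 40% before training and 70% after.
    print(normalized_gain(40, 70))  # 0.5 -> half the possible gain realized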


Trainers and training content developers should never have to ask how their training will be assessed, because this question should be asked and answered before they develop their training. As soon as some business leader starts to think “maybe we need some training on this,” one of the very first things a training manager, content developer or consultant should ask is “how will you know if this training is successful—what differences do you expect to see?”

 
Joyce and Showers have done some research on an effective model of training that includes the very important step of coaching. In working with schools, we tried to show how important it was to couple ongoing coaching with the training sessions. Whether it was peer coaching, in-house coaching, or coaching by a member of my team, it was this step that was crucial in the transfer of learning from the training session to actual practice in the classroom. And coaching is a great way to evaluate the effectiveness of the training itself.
Regardless of the purpose of the training, if it is not designed to reveal the trainees' inner potential in relation to their mental creativity, emotional/spiritual maturity (confidence), psychological balance, and physical fitness, very little (if any) of the desired result will be accomplished.


Training is usually delivered in a staggered manner, and at the end of each stage the trainer must assess how effective the training has been so far.
Not everyone learns the same way or needs the same level of training. If the analysis is accurate, your assessments will become apparent as you build your behavioral objectives and plan your instructional design and delivery options.

Critical from a psychometric perspective is the clear separation between (1) training and (2) any subsequent certification examinations. This can be particularly challenging in organizations that lack leadership in, and understanding of, this science.


If we undertake the training, what do you expect the result to be, in operational terms? For example: reduced complaints, accurate input of data, use of Excel in creating financial reports. Only when we know this can we evaluate whether the training is likely to result in the change; rarely does the training intervention alone give the result unless it is supported by ongoing supervisor support, coaching, and so on. If you introduce Gilbert's Behavioral Engineering Model at this stage, you can check whether all alternatives to training have been considered: does the individual really understand what is expected of them, do they have the resources, do they have the motivation, and so forth.

Once you agree on the training, the expected results define the behaviors (and the underlying skills and knowledge) expected in the workplace. Then you can design the content and methodology that will deliver those specific behaviors. This can be captured in carefully defined learning objectives which specify what needs to be learned, the conditions that apply, and the standard to be reached. For example: the receptionist must respond to a call within 10 seconds, identify the extension number, and forward the call within 12 seconds; where there is no response, a message must be taken using the company standard form and emailed to the manager immediately. This allows us to measure whether the trainer is successful in delivering the change that the organization requires.

Good testing of learning then follows; with the above learning objective it is obvious: test the skills in simulation (a sketch of such a check appears below). Thereafter, agree with the supervisors that they check that their direct reports continue to perform at this level; that is their job, after all. Reaction sheets are the final form, not a standard one but one specific to that training, which draws out the participants' reaction to the learning event, their motivation to use the new skills, and their recognition of the barriers that might restrict application.
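
A sketch of turning that receptionist objective into a pass/fail check for the simulation test; the 10-second and 12-second thresholds come from the objective above, while the call record structure is a hypothetical illustration:

    # Encode the receptionist learning objective as pass/fail criteria
    # for one simulated call. Thresholds come from the objective above;
    # the data structure itself is hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SimulatedCall:
        answered_in_s: float             # time until the call was answered
        forwarded_in_s: Optional[float]  # time until forwarded; None if no response
        message_emailed: bool            # standard-form message emailed to manager

    def meets_objective(call: SimulatedCall) -> bool:
        if call.forwarded_in_s is None:  # no response: a message must be emailed
            return call.message_emailed
        return call.answered_in_s <= 10 and call.forwarded_in_s <= 12

    # Example: answered in 8 s and forwarded in 11 s -> meets the standard.
    print(meets_objective(SimulatedCall(8.0, 11.0, False)))  # True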


Some professionals make it a practice, once they build enough seniority, to walk away from situations where the "why" is not clearly defined. If the sponsor of the training effort cannot articulate the "why," then there is no measure of success. In that case it is a no-win situation: there is no reason to do the "training," it will be seen as a cost, and questions will arise as to the worth of the effort.


For your Level 3 assessment (on-the-job behavior, in Kirkpatrick's terms), I would suggest that you not only discuss with the learners how they are performing on the job, but also have that discussion with their managers to ensure they are meeting stakeholder requirements.


Too often, attendees go to training sessions and then return to their jobs with the same behaviors and no change in productivity. Employers are looking for improved performance and increased productivity in order to justify spending the $$$ on training.
