Instruction Will Not Solve the Performance Problem When…

In a previous post, Training Is Not Always the Solution, we asserted that training alone is rarely the fix for a performance problem. In this post we put forth three questions, derived from Mager's (2012) work, to ask during analysis in order to rule out instruction or training as the solution for deficits in performance. If the answer to all three questions is Yes, then instruction or training is not the solution.

  1. Do your people know what is expected of them?
  2. Do your people have the tools and resources to perform?
  3. Do your people know how to do what is expected of them?

When people know what to do and how to do it, and have the tools they need to perform, instruction is not the solution for a performance problem.
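As an illustrative sketch, the three-question rule above can be expressed as a simple check. The class and field names below are my own shorthand, not Mager's terminology:

```python
# A minimal sketch of the three-question rule derived from Mager (2012);
# the names here are illustrative shorthand, not Mager's terminology.
from dataclasses import dataclass

@dataclass
class PerformanceAnalysis:
    knows_expectations: bool       # 1. Do they know what is expected of them?
    has_tools_and_resources: bool  # 2. Do they have the tools and resources to perform?
    knows_how: bool                # 3. Do they know how to do what is expected?

    def training_ruled_out(self) -> bool:
        """True when all three answers are Yes, so training is not the solution."""
        return (self.knows_expectations
                and self.has_tools_and_resources
                and self.knows_how)

# All three answers are Yes: look beyond training for the solution.
print(PerformanceAnalysis(True, True, True).training_ruled_out())  # True
```

If any single answer is No, training is back on the table as one candidate solution, though the analysis must continue to pinpoint which gap exists.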

Mager, R.F. (2012). Making instruction work. Carefree, AZ: Mager Associates, Inc.

Know Your Audience

Recently I was asked to complete a brief instructional assignment before proceeding to the next phase of consideration for a project. I was given the assignment and asked if I had questions. My first question was, “Who is the audience for this assignment?”

Knowledge about the audience (average age, occupation, education, technology skill level, interests, and so on) informs the design and development of instructional materials, from the writing of learning objectives to the creation of assessment items and the development of content.

Knowledge about the audience should be used to develop instructional materials that promote motivation by being relevant and meaningful to the intended audience. Information about the audience will also help pitch instructional materials at the appropriate level of skill and language.

For more on audience analysis, see Dick, Carey, and Carey's (2008) text The Systematic Design of Instruction.

For more on motivation in learning, see Keller's (1987) article, "Development and Use of the ARCS Model of Instructional Design."


Performance Solutions: It's Not All or Nothing


Performance solutions can be costly and may be viewed as cost-prohibitive by organizational decision makers. Yet businesses and organizations err when targeted, comprehensive performance solutions are not implemented. Mager and Pipe (1970) discussed the "hidden costs" that result from failing to address performance problems. According to Mager and Pipe (1970), the hidden costs of ignoring a performance problem by failing to implement a solution include "inefficient performance. . . lost or angry customers, employee turnover and absenteeism. . ." (p. 94).

When resources are not available to implement a total solution, Mager and Pipe (1970) recommended using a partial solution: address part of the problem or implement part of the solution. A partial solution is better than no solution.

Mager, R., & Pipe, P. (1970). Analyzing performance problems or ‘You really oughta wanna’. Belmont, CA: Fearon Publishers, Inc.


Performance Improvement: Consider the Conditions and the Consequences

I am still rereading Robert F. Mager and Peter Pipe's seminal work Analyzing Performance Problems or 'You Really Oughta Wanna'. Like any good work chock-full of pearls of wisdom, I see something new every time I read the book. Two things that stood out to me on this reading are conditions and consequences.

When analyzing performance problems, the thorough analyst must consider both the conditions of performance and the consequences of performance. Conditions of performance include whether obstacles prevent a performer from carrying out a desired behavior and whether management supports the desired behavior. Consequences of performance concern what follows the behavior: the analyst must determine whether the desired performance is punished, ignored, or rewarded, and whether behavior other than the desired behavior is rewarded.

A thorough analysis sets the stage for appropriate solutions to performance problems.


When Analyzing Performance, Remember to Ask This Question

I am rereading Robert F. Mager and Peter Pipe's seminal work, Analyzing Performance Problems or 'You Really Oughta Wanna'. The book is a must-read for any performance improvement professional, as it provides easy-to-follow instructions for analyzing performance problems. The authors even provide a handy checklist.

I recently analyzed performance issues at a local organization. One question that I forgot to ask during my analysis, a question Mager and Pipe asserted is often neglected, is "Could he or she do it in the past?" As with all the questions Mager and Pipe outlined in their problem analysis process, this one is critical for determining the nature of the problem and therefore for developing an appropriate solution, one that addresses the actual problem.

Mager and Pipe concluded that if a performer could carry out a behavior in the past, then formal training is not needed; what is needed is one of two types of skill maintenance programs: regular practice, or practice with feedback.



Got Rubric?

A rubric is a tool that promotes fairness in assessment and clarity of learner expectations. A basic rubric lists assignment components (bibliography, APA style, grammar and spelling, description, summary, etc.) and point values for levels of completeness or accuracy for each component. The combined points for the components should equal the total point value for the assignment.
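The arithmetic check described above can be sketched in a few lines. The rubric below is hypothetical; the component names and point values are invented for illustration only:

```python
# A hypothetical rubric for an assignment worth 50 points; the component
# names and point values are invented examples, not from any real course.
rubric = {
    "Bibliography": 10,
    "APA style": 10,
    "Grammar and spelling": 10,
    "Description": 10,
    "Summary": 10,
}
assignment_total = 50

# The combined points for the components should equal the assignment total.
combined = sum(rubric.values())
assert combined == assignment_total, "Rubric points do not add up!"
print(f"{combined} rubric points across {len(rubric)} components")
```

A check like this is a quick way to catch a rubric whose component points drift out of sync with the assignment's stated total as components are added or revised.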

Rubrics are employed by instructors to provide objectivity in grading. A rubric should be provided to learners before or when an assignment is given, so that learners may refer to it while completing the assignment and ensure their submissions include all required components at the expected standard.




This Is NOT Jeopardy!

The goal of assessment is to measure the extent to which learning objectives have been mastered. Accordingly, assessment items should be inextricably tied to module- or course-level objectives. Assessments should not include items that require recall of random facts from readings, videos, or other instructional materials. This is NOT Jeopardy! Your course or module is a unique instructional unit whose content and learning activities enable learners to do something they could not do before; assessment items should therefore provide a valid measure of the learner's attainment of the learning objectives, not his or her ability to recall random, even abstruse, material from the course or module.

A good practice for developing assessment items that are tied to learning objectives is to create a table with two columns, learning objective and assessment item, and as many rows as you have learning objectives. Row by row, write a learning objective in the first column and the corresponding assessment item you develop in the second. To test the validity and clarity of your assessment items, give them to a colleague and ask him or her to guess the corresponding objectives.
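The two-column table above can be mocked up quickly. The objectives and assessment items below are invented examples, not drawn from any real course:

```python
# A hypothetical objective-to-assessment alignment table; the objectives
# and items below are invented examples for illustration only.
alignment = [
    ("Identify the components of a basic rubric",
     "List the components of the sample rubric provided"),
    ("Distinguish learning objectives from learning activities",
     "Classify each statement on the handout as an objective or an activity"),
]

# Print the table row by row: one learning objective per row, with its
# corresponding assessment item beside it.
print(f"{'Learning objective':<58}| Assessment item")
for objective, item in alignment:
    print(f"{objective:<58}| {item}")
```

Keeping the pairs together in one structure makes it easy to spot an objective with no assessment item, or an item with no objective, before the course ships.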

5 Considerations Before You Develop Computer-Based Training

Every instructional project is unique. There are differences in learners, contexts, and content, so what works for one instructional project may not be a good fit for another. Computer-based training is a convenient and effective way to deliver training and instruction; however, not all instructional projects are suitable for computer-based delivery. Here are five things to consider when thinking of using a computer-based delivery format for your next instructional project.

  1. Is computer-based training suitable for the learners and the learning context?

Consider whether the learners have the technical skills to participate in computer-based training and whether the learning context would support it: does the context have the bandwidth, hardware, and software to accommodate computer-based training?

  2. Will computer-based training help the learners achieve the learning objectives?

This question should be answered in the affirmative for any instructional media, graphics, or instructional strategies used. If an element does not help learners achieve the learning objectives, it should not be used.

  3. Is computer-based training worth the cost? Is it cost-effective?

Consider whether the cost of the software, licensing, and complementary tools justifies the gains in efficiency and instructional effectiveness that may come with computer-based training.

  4. Do you have the human resources to develop high-quality computer-based training?

Developing high-quality computer-based training requires skill. Consider whether you have the talent on hand, would have to train someone, or would have to outsource the work. Each choice carries its own time and cost tradeoffs.

  5. Is the content being considered for computer-based training stable?

Will the content change in six months, or shortly after being developed? If the content is unstable, the time and cost put into design, development, and maintenance will likely overshadow any delivery efficiencies gained through computer-based training.


Is it a learning objective or a learning activity?

As a certified Quality Matters online course reviewer, I have seen instructors include learning activities in their lists of learning objectives. For neophyte course developers, or those without a background in instructional design, the line between learning objectives and learning activities can be thin. The following explanation should make the distinction clearer. With learning objectives, the idea is that after completing a unit or course, learners will be able to do something they were not able to do before. All the course content, instructional strategies, instructional media, and learning activities should facilitate the attainment of the learning objectives.

Learning activities should scaffold the attainment of the learning objectives by providing opportunities to learn and practice the steps required to achieve an objective. To determine whether something is a learning objective or a learning activity, a course developer might ask, (1) Is this a concept, idea, skill, process, or procedure unique to my course or unit of instruction? and (2) Will my course or unit of instruction include instruction on what this is and how to accomplish it? If the answer to both questions is Yes, the instructor is working with a learning objective.

To recap, a learning objective is what a learner will be able to do upon completion of a unit or course of instruction. A learning activity, on the other hand, is something a learner does while participating in a course. The learning objective is the goal. The learning activity should provide an opportunity to work toward the attainment of that goal.

As an aside, assessments provide an opportunity for the instructor and the learner to gauge the extent to which the learning objectives have been attained. More on assessments in a later post.

Good Instruction Codified: Gagne's Nine Events of Instruction

Robert Gagne was a pioneer in the field of instruction. His Nine Events of Instruction (or G9) are good instruction codified. The nine events are:

  1. Gain attention
  2. Inform learners of the objectives
  3. Stimulate recall of prior learning
  4. Present the content
  5. Provide learning guidance
  6. Elicit performance (practice)
  7. Provide feedback
  8. Assess performance
  9. Enhance retention and transfer to the job

I frequently use G9 as a macro instructional strategy framework for courses or units of instruction I develop or review.

Check out Northern Illinois University's Faculty Development and Instructional Design Center piece on Gagne's Nine Events of Instruction.