Wk2 assignment

Please complete all work in detail, with references.

Week 2


Writing Assignment: Documenting Observations

Observing work and turning what you see into data can be difficult. For this exercise, you will observe via online video, which provides the opportunity to rewind and view segments several times in order to capture the information that you need.

1. Search the internet for “Dirty Jobs,” the television show. The host, Mike Rowe, experiences the end-to-end process of several kinds of dirty jobs during each show, but does not necessarily do them in order from beginning to end.

2. Identify a show with two segments to review.

3. For each of the two segments in that show, identify the items below. To do this, you may wish to use a spreadsheet or set up a table so that your data makes sense (a minimal example layout follows this list).

· The jobs (roles/tasks)

· The flow of work from one role/task to the next (create a process flow)

· The inputs and outputs of each role/task

· The characteristic differences between a novice (Mike) doing this work and an expert doing the work

· The impact of mistakes on the job

· The reasons that people do this “dirty work” every day – what motivates them?

· Barriers to accomplishing the task.
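
A minimal example layout for the observation table (the columns mirror the items above; the sample row is purely a placeholder, not data from any actual episode):

Segment | Job (role/task) | Inputs | Outputs | Novice vs. expert | Impact of mistakes | Motivation | Barriers
1 | e.g., tank cleaning | tools, materials | cleaned tank | expert is faster, with fewer errors | rework, safety risk | pay, pride in the work | access, odor, fatigue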

4. Then analyze Mike’s interview techniques.

· How did he gain acceptance with his experts? What did he do or say to help get them on his side?

· Identify a set of stock questions that he uses repeatedly with small variations.

· How does his technique differ from the techniques recommended in your text?

Submit your analysis here, including your observation of the work and your observation of Mike’s interview techniques. A summary is not required, but be sure that your data is organized and understandable.

Your paper should reflect scholarly writing and current APA standards. Please include citations to support your ideas.

CHAPTER THREE
Unleashing the Positive Power of Measurement in the Workplace

Dean R. Spitzer

Effective management is built on a foundation of effective measurement. Organizations are conglomerations of many systems, and measurement is the most fundamental system of all. The measurement system—for good or ill—triggers virtually everything that happens in an organization, both strategic and tactical, because all the other organizational systems ultimately take their cues from what the measurement system tells them to do. No organization can be any better than its measurement system. If the measurement system works well, management will tend to manage the right things—and the desired results will occur.

THE IMPORTANCE OF MEASUREMENT

So why is measurement so important? Here are some of the most compelling reasons:

· It cuts through the B.S. and gets right to the point. People can (and often do) advance their points of view with incredible vagueness until they are challenged “to measure it.” Suddenly, clarity emerges.

· It makes performance visible. Even if you can’t see performance directly, you can see it indirectly using measurement. This is the concept of “operational definitions” that is such a critical part of effective measuring.

· It tells you what you need to manage in order to get the results you want. Using “measurement maps,” you will be able to identify, understand, and discuss the high-leverage relationships that drive results, and apply them to your benefit—and to the benefit of your organization.

· Measurement makes accountability possible. It’s difficult to hold yourself—or anyone else—accountable for something that is not being measured because there’s no way to determine that whatever it is that you’re supposed to do has actually been accomplished. Measurement tells you whether you (and your employees) are doing the right things at the right times—the essence of accountability.

· Measurement lets people know if they are off-track so that they can do something to correct their performance. Without measurement, feedback is often too vague and too late—and feedback that is too vague and too late is useless.

· Measurement tells employees what is important. If you don’t measure it, people won’t pay attention to it. As one colleague said: “Measure it, or forget it.”

· Measurement makes things happen; it is the antidote to inertia. We have all experienced, for example, how milestones in a project plan get people moving energetically toward a goal, while open-ended timeframes inevitably lead to complacency and low energy. Give people measurable goals—and help them measure their progress—and they will make progress.

· Measurement results in consequences (rewards and punishment) that further reinforce the inherent power of measurement. Any effective system of rewards and recognition, and any system of performance appraisal, must be based on a solid foundation of measurement.

Above all, measurement helps you to understand what is really happening in your organization and to take action based on that understanding. Measurement enables you to make comparisons, study trends, and identify important correlations and causal relationships that will help establish a roadmap for success. And this is just a sampling of what performance measurement—when well used—can contribute to organizational effectiveness.

The good news is that organizations are finally discovering the importance of measurement. The bad news is that most organizations are still using it very poorly.

THE DYSFUNCTIONS OF MEASUREMENT

Unfortunately, when used poorly, not only does performance measurement not live up to its positive promise, but it can be a very negative force in organizations. In The Agenda, Michael Hammer (2001, p. 105) puts the problem this way: “A company’s measurement systems typically deliver a blizzard of nearly meaningless data that quantifies practically everything in sight, no matter how unimportant; that is devoid of any particular rhyme or reason; that is so voluminous as to be unusable; that is delivered so late as to be virtually useless; and that then languishes in printouts and briefing books, without being put to any significant purpose. . . . In short, measurement is a mess.”

What is commonly referred to as “measurement dysfunction” occurs when the measurement process itself contributes to behavior contrary to what is in the best interests of the organization as a whole. When measurement dysfunctions occur, specific numbers might improve, but the performance that is really important will worsen. While some of the most egregious examples of measurement dysfunction in the history of business were at companies like Enron, WorldCom, and Tyco, its more mundane manifestations are being played out virtually every day in almost every organization around the globe.

Most organizations are full of examples of negative, self-serving measurement: measurement used for self-aggrandizement, self-promotion, and self-protection; measurement used to justify pet projects or to maintain the status quo; and measurement used to prove, rather than improve. Although the more routine cases of dysfunctional measurement might not appear to be very serious individually, the collective consequences of small doses of measurement dysfunction can be profound.

Probably the biggest problem with measurement is not the flaws in the system itself, but the consequences, both positive and negative, that so often follow flawed measurement. There are two major types of measures, based on how they are used: informational measurement, which is used for informational purposes, and motivational measurement, which is used for rewards and punishment.

Most of the functionality of measurement, as described in the previous section, is related to the enormous value of measurement as a source of information—information for organizational members to use to improve management and the work that is done. However, when measures are tightly linked with rewards or the threat of punishment, the informational value of the measurement becomes subordinated to its use for inducing people to exert more effort. This is where the major problems begin.

Most organizations have very strong contingencies that tell employees, either explicitly or implicitly, “If you do this (behavior) or achieve this (result), you will get this (reward, punishment).” Because, in most organizations, behavior and results can’t be directly observed, these performance expectations are operationalized by how they are measured. The performance measures become the way to achieve rewards and to avoid punishment. No matter how many other things might be measured, what is rewarded or punished becomes the focal point.

Striving for rewards is one of the most important aspects of life and work. But when rewards are at the end of the line, measurement becomes a means to that end. Furthermore, the greater the rewards that are offered, the less focus there is on the information that measurement can provide. When the focus is on the carrot, it’s difficult to see anything else! And human beings are very adept at doing whatever it takes to get a reward. Because measurement is so powerful, especially when coupled with contingent rewards, measurement dysfunctions are quite prevalent and widespread. Furthermore, when people are being rewarded by the existing measurement system, they will resist any changes that will reduce their rewards.

While linking rewards and measurement does not automatically lead to dysfunction, it very significantly increases the probability of it happening.

HOW PEOPLE EXPERIENCE MEASUREMENT

People tend to refer to those things they perceive as negative and threatening as the enemy. When I ask participants in my workshops about their personal measurement experiences, the negative ones far outnumber—and, more importantly, outweigh—the positive ones. Even more distressing, when I probe deeply, most people cannot think of any positive experiences at all!

Almost everybody has, at one time or another, experienced negative measurement used to expose negative things—errors, defects, accidents, cost overruns, out of stock items, exceptions of all kinds—and to trigger negative emotions—like fear, threat, fault-finding, blame, and punishment. They also know how dangerous measurement can be in the hands of those who don’t use it well or benevolently. Although negative measurement can get results, it is mostly short-term compliance, and it leaves a bad taste in people’s mouths.

For most employees, measurement is viewed, at best, as a “necessary evil.” At worst, it is seen as a menacing force that is greeted with about the same enthusiasm as a root canal! When most people think of performance measurement at work, they tend to think of being watched, being timed, and being appraised. This is why Eliyahu Goldratt (1990, p. 144) says that “the issue of measurement is probably the most sensitive issue in an organization.”

The environment of measurement tends to have a major influence on how measurement is perceived by employees and, therefore, how they respond emotionally to it. Since measurement is such an emotionally laden subject, the environment in which it is being conducted is particularly important. Even if people aren’t directly involved in measurement, almost everyone feels strongly about it. And yet, very few people talk about it—which, as we will see, is one of the primary problems with the way performance measurement is implemented in most organizations.

Measurement is powerful, and—for better or for worse—what is measured tends to be managed. Most employees also seem to intuitively understand that measurement provides data upon which many important decisions are made—most prominently personnel decisions. Although seldom explicitly acknowledged as such, measurement is important to people because they know that their success, their rewards, their budgets, their punishments, and a host of other things ultimately are, directly or indirectly, based on it.

Many of these negative attitudes about measurement at work are due to its association (and confusion) with evaluation. Few people, including corporate executives, know the difference between measurement and evaluation—and there is a very significant difference! The word “evaluation” is really composed of three component parts: “e,” “value,” and “ation.” The central element of the concept of evaluation is value. When you evaluate, you place a value on whatever you are evaluating. Most people don’t mind measuring, or even being measured; they just don’t like being measured upon. And that’s what most evaluation is—having a value placed by an external agent on us and our performance. The outcome of an evaluation is a judgment. Evaluation is essentially about making value judgments. People don’t like being judged—especially when they are suspicious about the fairness of the evaluation process and the motives of those who are doing the judging. As long as measurement is closely associated with judgment, there will be fear. And as long as there is fear, measurement will be viewed as a negative force—rather than a positive one. And, as long as the “measurement experience” is negative, there is little hope that performance measurement will realize its potential as a powerful and transformational force in organizations.

In my book Transforming Performance Measurement (Spitzer, 2007), I talk about four keys to transforming performance measurement and making it a more positive force in organizations. These four keys are summarized in Table 3.1.

In the rest of this chapter, I will discuss those four keys.

CONTEXT

The first key to transforming performance measurement is context. Much of what we have already discussed relates to what I call the “context of measurement.”

Context is everything that surrounds a task, including the social and psychological climate in which it is embedded.

This includes such factors as the perceived value of measurement, communication around measurement, education around measurement, measurement leadership, and the history of how measurement has been used in the organization. To a large extent, the context of measurement tends to reflect how measurement is perceived by employees and, therefore, how they respond emotionally to it. Interestingly, even if measurement is accomplished with great technical skill, it can still carry a negative implication. How people respond to measurement is largely a function of how it is used—that is, what is done with the data that are collected makes a huge difference in how measurement is perceived. For example, as we have seen, measurement is experienced much differently when it is used to inspect, control, report, or manipulate than when it is used to provide feedback, to learn, and to improve.

Table 3.1 Keys to Transforming Performance Measurement

Context
Definition: Context is everything that surrounds a task, including the social and psychological climate in which it is embedded.
Importance: The context of measurement tends to reflect how measurement is perceived by employees and therefore how well it will be used.

Focus
Definition: Focus is what gets measured in an organization, the measures themselves.
Importance: Selecting the right measures can create leverage and focus the organization on what is most important.

Integration
Definition: Integration is how the measures are related to each other, the relationships among the measures.
Importance: Measurement frameworks make sure that measures relate to each other and are not just isolated metrics.

Interactivity
Definition: Interactivity is the social interaction process around measurement data.
Importance: Interactivity is the key to transforming measurement data and information into knowledge and wisdom.

The importance of the context of measurement in an organization cannot be overstated. It can make the difference between people being energized by measurement and people just minimally complying with it, or even using measurement for their own personal benefit (that is, gaming or cheating). No matter how sophisticated the technical aspects of your performance measurement system, how managers and employees experience it on a day-to-day basis will be due more to the “context of measurement” than to anything else.

So what can be done to improve the context of measurement? Here are a few important points: Be aware of the sensitivity of measurement. Be vigilant for dysfunctions. Use it for learning and improvement, so that employees can see the positive side. Avoid using measurement for judgment and, above all, don’t confuse measurement with evaluation. Discuss measurement openly and honestly. Educate employees about measurement and help them use it well. Make measurement less tightly connected with judgment and rewards. And make sure that evaluations are much more data-based.

FOCUS

The second key to transformational performance measurement is focus. The right measures will provide laser focus and clarity to management, while the wrong measures, or too many measures, will likely cause lack of focus.

What gets measured gets managed, and what gets managed gets done. Selecting the right measures can create enormous leverage for any organization. And, of course, the things that are measured command management attention. Because “you get what you measure,” it is vital to select the right measures. If the right things are measured, the right things will happen. Unfortunately, most organizations’ measurement systems lack focus, and most of what organizations measure is routine—the hundreds or thousands of measures that permeate every nook and cranny of the organization. This dilutes performance measurement—like trying to boil the ocean! When everything is important, then nothing is really important. This focus on the wrong things, or the lack of focus, tends to do little more than perpetuate the status quo. In today’s competitive marketplace, however, organizations need very clear focus. Not only must companies do the routine things well, better and better; they must also find new, high-leverage measures. This can be done by focusing on a critical few transformational measures—measures that will differentiate the organization from its competitors and make a real difference to its competitive advantage.

While most organizations realize that measurement is essential for managing, they don’t realize how important the selection of their measures is. Unfortunately, organizations and organizational entities fritter away much of the power of measurement by not distinguishing the critical few measures that will have the greatest impact from the hundreds, or thousands, of other measures—the trivial many—that permeate every area of their organizations. Knowing what to focus on is crucial to success. Organizations must measure the right things, even if it means coming up with new measures.

Probably the most frequent question I am asked by clients and prospective clients is: “What should we be measuring?” This question drives me crazy, because too many executives and other managers think it is sufficient just to track generic or standard “metrics.” These are what I call “routine measures,” and they are satisfactory for maintaining the status quo, but not for taking the organization to the next level. Many organizations are paralyzed by billions of bits and bytes of fragmented raw data that clog their information systems—like the supermarket chain that was collecting 340 million different data points per week but using only 2 percent of that data!

An organization’s routine measures are not differentiators. How can any organization differentiate itself from the competition while measuring exactly the same things as the competition? It is also important to realize that when we choose to measure a particular object of interest or dimension of performance, we are—at least by default—choosing to ignore other things. Just look at your own organization’s measurement system and you will probably find a vast array of measures that keep your business running—but few, if any, that will help get your organization to the next level.

Focused measurement is not just about “getting things done”; it’s about being effective and getting the right things done. In order to thrive—not just survive—and move to a higher level of performance, organizations need to focus their measurement on one, or a critical few, measures that matter most. The key to what I call “transformational measures” is finding the most crucial few measures that provide the organization with the greatest insight into competitive advantage.

Two great examples of transformational measures: (1) the “turnaround time” measure used at Southwest Airlines, which enabled everyone (pilots, flight attendants, maintenance workers, refuelers, cleaning crews) to see something that was formerly invisible—the time from arrival at the gate to departure from the gate—so that it could be managed to create value and achieve competitive advantage; and (2) the “cash conversion cycle time” at Dell Computer, the time from the outlay of cash for parts to the receipt of payment for completed computers, which helped the company conquer its cash flow problems and provided the mechanism to make its innovative business model work in practice, not just in theory. Everybody knows that if aircraft turnaround time increases, Southwest will lose the key to its competitive advantage—and everybody knows what they have to do to keep that number low.
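
The chapter does not spell out how Dell’s measure is calculated; for reference, the standard definition of the cash conversion cycle is:

\[ \text{CCC} = \text{DIO} + \text{DSO} - \text{DPO} \]

where DIO is days inventory outstanding, DSO is days sales outstanding, and DPO is days payables outstanding. Dell’s direct-sales model famously drove this number negative: customers paid for their computers before Dell paid suppliers for the parts.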

Most transformational measures start off as what I call “emergent measures”—measures that emerge through increased understanding of the major drivers of business success. They rarely come from a textbook, off a menu, or are provided by a vendor. Many of the emergent measures will be measures of difficult-to-measure intangibles, because transforming organizations are realizing that many of their key value drivers are intangible. But don’t let anyone tell you that something isn’t measurable. Everything is measurable in some way that is superior to not measuring it at all.

The next great challenge in organizations is to measure and manage intangible assets. While most of the tangible components of businesses are already being accounted for (albeit in rather traditional ways and with rather predictable effects), in today’s world, the most important drivers of value in today’s organizations are mostly intangible. As Herb Kelleher (1998, p. 223), former CEO of Southwest Airlines, put it: “It’s the intangibles that are the hardest things for competitors to imitate. You can get an airplane. You can get ticket-counter space, you can get baggage conveyors. But it is our esprit de corps—the culture, the spirit—that is truly our most valuable competitive asset.” That’s the problem: Most of what is valuable is intangible, but most of what is measured is tangible!

According to James Brian Quinn (2002, p. 342), “With rare exceptions, the economic and producing power of a modern corporation lies more in its intellectual and service capabilities than in its hard assets.” And Michael Malone (2000) insists that the biggest financial question of our time is how to value the intangible assets that account for as much as 90 percent of the market value of today’s companies. Did you ever think that one of those unmeasured and unmanaged assets might be the key to your organization’s next competitive advantage?

True transformational change will not happen until organizations begin to think much more creatively about the value of the assets, how to connect them with strategy, and how to link them to competitive advantage. One of the reasons it is so important to begin to think differently about intangibles such as intellectual capital is that the way you measure them will determine how you treat them. For example, your organization probably has already begun to manage people differently, because it is at least beginning to view them as assets worthy of investment rather than just as costs to be expensed.

The key to transformational measures is to change perspective. In many organizational areas, dramatic shifts in vision have taken place because of relatively minor changes in perspective, such as from “product-line profit” to “customer profit,” or from “on-time delivery” to “perfect orders.” In addition, few realize that one of the key success factors for supply chain management is the ability to measure trust throughout the system, and a key measure is “supply chain trust.” Transformational measures measure many of the same things, but from a different perspective.

The biggest problem of performance measurement is that the world is different, but the measurement of performance is pretty much the same. If you were to compare the workplace of today with the workplace of fifty years ago, the difference is dramatic. But if you were to compare how most performance is measured, it looks like a throwback to yesteryear. Just think how little progress has been made in performance appraisal! And those who “mind the gates” are not particularly encouraging of those who want to change the measures—much less the “metric system”—because, after all, these gatekeepers have benefited enormously, and continue to benefit, from the legacy systems.

That is why most organizational measurement systems are dominated by antiquated, obsolete, and outdated measures. Many existing measures seriously constrain performance and prevent breakthrough performance improvements (especially in services and knowledge work), but most workplace environments still discourage trying anything new. Take for example the following typical scenario: A company sends out a team with instructions to “improve” a specific project. More often than not, the team comes back with a set of incremental improvement recommendations that only end up further entrenching the status quo, while declaring victory because the project came in on time and under budget! Trying to innovate without the freedom, and the mandate, to explore unconventional approaches and to take risks ultimately leads to more of the same old measures and, of course, the same old managing.

To improve the focus of measurement in your organization, make sure that you don’t measure too much. Focus on what is most important. Don’t just measure the financial things, the lagging indicators, and what is easiest to measure. Don’t just measure what has always been measured. Focus on at least some of the things that are most important to drive future success. Adopt some innovative, emergent measures to measure those things that are difficult to measure—but vitally important to organizational success.

Selecting the right measures can create enormous performance-improvement leverage. But even great isolated measures aren’t enough.

INTEGRATION

The third key to transformational performance measurement is integration. Integration can be defined as “the state of combination, or the process of combining into completeness and harmony; the combining and coordinating of separate parts or elements into a unified whole.” Integration is the effort that must take place in order to achieve alignment of the parts. Integration is about getting things into alignment and then keeping them aligned. Much of what passes for management today, by necessity, involves trying to get the isolated pieces of work done—turning the dis-integration of our organizations into something that is reasonably integrated. But it is an uphill struggle. Because measurement is so crucial to management, measurement must be used in an integrative way.

As powerful as individual measures are—even transformational ones—they can be poorly used if they are not integrated into a larger “measurement framework” that shows how each measure is related to other important measures and how the constructs (which the measures represent) combine to create value for the organization.

Focusing on isolated measures has tended to build functional “silos” that focus on their own self-serving measures and disregard the measures of other functions. Most companies are composed of pieces vying for scarce resources—operating more like competitors than cooperators—acting individually, without regard to systemic interdependencies. Managers at one financial services company were tracking 142 different departmental performance measures that were totally uncoordinated. No two managers could agree on which measures were most strategically important. People were simply following the traditional, if flawed, logic, which was: “If every function meets its goals . . . if every function hits its budget . . . if every project is completed on time and on budget . . . then the organization will win.” However, it should be clear that such thinking no longer works, if it ever did. Organizations should be focused on the performance of the whole, not on the independent performance of the parts.

In order to make strategy more readily executable through the use of performance measurement, Robert Kaplan and David Norton (1996) developed the concept of a “balanced scorecard,” an organizational scorecard that would facilitate the integration of functional scorecards and enable better organization-wide strategy execution. The balanced scorecard is not just a four-quadrant template for categorizing existing measures—although that might be beneficial if the right measures are already in place. But a balanced scorecard will not make the wrong measures right.

A key to the integration of measurement is developing measurement frameworks, which visually depict the interdependencies between measures. In any interdependent system, you can’t change one measure without affecting the others. With an overall framework that shows the relationships among measures, it is easier to make the proper tradeoff decisions, so more optimal decisions can be made.

Another key point is that the cause-and-effect logic between measures (especially between drivers and outcomes) must be understood. The payoff of doing this well is that organizations will be much better able to predict with greater confidence what should be done to create optimal value for the organization and its stakeholders—and that’s what outstanding management is all about!

So what can be done to increase the integration of measurement? The key as far as integration is concerned is that measures must be aligned with strategy, and then must be integrated across the entire organization (even the extended enterprise). In addition, measurement frameworks will spotlight the potential of “cross-functional measures”—measures that can help to integrate functions and lead to higher levels of collaboration. Develop measurement frameworks to help you “see” the actual and hypothesized cause-and-effect relationships among measures, such as a strategy map. Make sure that everyone has a scorecard (a set of measures for their individual work), a clear line-of-sight between his or her 

CHAPTER FOUR

Relating Training to Business Performance: The Case for a Business Evaluation Strategy

William J. Tarnacki II

Eileen R. Banchoff

For many years organizations have been professing that the key to a truly sustainable competitive advantage is an engaged and talented workforce. “Managers are fond of the maxim `Employees are our most important asset.’ Yet beneath the rhetoric, too many executives still regard—and manage—employees as costs. That’s dangerous because, for many companies, people are the only source of long-term competitive advantage” (Bassi & McMurrer, 2007, p. 115). This professed realization has pushed organizations to establish training programs (and even corporate universities) that provide opportunities for employees to develop skills and competencies related to their existing (or sometimes future) roles in the organization.

These training programs have evolved tremendously over time, becoming much more sophisticated and oriented toward creating a well-rounded workforce. Unfortunately, these training and development (T+D) efforts have not kept pace with the changing demands of business. In fact, T+D departments have evolved to be separate entities from the operations of the business, basically managing a repository of training options versus partnering with business and operational leaders to customize solutions based on evolving business needs. Recent attempts to broaden T+D efforts to encompass performance improvement (PI) are a much needed, long overdue, uphill climb. Unfortunately again, today’s business leaders are looking to their human resources (HR), PI, and T+D colleagues to operate at a much higher level and to develop a new language around the expectations and the demands of the business.

If our field of practice is changing (albeit slowly), it stands to reason that the traditional evaluation methods (see Table 4.1) we use to measure transfer from our training programs (skills and knowledge) are also too narrow to measure business results. These evaluation methods are taught, and even trained, in the context of another narrow model—the ADDIE instructional design model (analyze, design, develop, implement, and evaluate). Evaluation strategies and tools based on T+D and ADDIE limit our ability to understand the overall business model and associated metrics, and thus to offer robust, impactful, meaningful evaluation results that help manage the business.

Table 4.1 Traditional Evaluation Types

Formative: evaluates the appropriateness of all components of an instructional solution through each stage of the instructional systems design (ISD) process.
Summative: evaluates learning and reaction from an instructional solution during final development and implementation, including all stakeholders, media, environment, and organization as components of that instructional solution.
Confirmative: evaluates the effectiveness of an instructional solution after implementation through analysis of changes in behavior, achievement, and results.
Meta: evaluates the evaluation methods applied to the ISD process.
Level 1: evaluates participant reaction to an instructional intervention.
Level 2: evaluates participant learning as a result of the instruction.
Level 3: evaluates changes in participant behavior on the job due to the instruction.
Level 4: evaluates results to the business as a result of the instruction.
Level 5/ROI: evaluates the financial value of the instruction, in terms of positive cash flow from the investment minus the cost of the program implementation.
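
For reference, the Level 5/ROI calculation is conventionally expressed with the Phillips-style formula below; this formulation comes from the broader evaluation literature rather than from the table itself:

\[ \text{ROI}(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100 \]

For example, a program yielding $150,000 in monetized benefits against $100,000 in total implementation costs would show an ROI of 50 percent.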

To help build the case for a more business-focused evaluation strategy, this chapter will trace the real-life professional journey of Joseph Williams (fictional name) as he matures from a young, academically prepared ADDIE advocate to a sophisticated human resources strategist and business partner. Mapping Joseph’s experiential journey demonstrates how his perspectives changed to align his training results with business results without completely revamping the long-trusted tools and methods he learned in graduate school.

STUDY OF PERSPECTIVES

The Academic Perspective

It was May 1997 and Joseph Williams had just graduated with a master’s degree in instructional technology from a very reputable urban university. Basking in the glory of his new degree, Joseph was excited about applying all the wonderful concepts and practices he had just spent two plus years learning. He left the graduation ceremonies eager to employ ADDIE and its corresponding evaluation techniques to become the hero he knew some organizations were desperately waiting for.

While completing his coursework in the evenings, Joseph spent his days as a foreman and quality specialist on the shop floor of a small manufacturing organization. In this academic preparation period, he worked diligently to incorporate several new skills and techniques into his day-to-day manufacturing activities. But he soon ran into the brick wall of lack of interest and engagement and found that this organization was just too small and had too few resources to “hear his voice.” In order to begin applying his new wealth of knowledge and expertise, he decided he had to move on to a bigger and more strategic enterprise.

The Novice Perspective

Soon after graduation (in 1997), Joseph jumped at an opportunity to work in the training and development department of a large automotive company’s finance division. From his first day in the new white-collar environment, Joseph was sure he could make a difference by analyzing, designing, and delivering training programs that would have an immediate, positive impact on this global finance business.

Alas, it did not take long for Joseph to realize that the full application of the instructional design model was not part of the expectations of his new job. Instead, management directed him to use his ISD background to oversee projects that addressed needs they had identified during an analysis performed five years earlier! He also soon discovered that his deliverables were predetermined (seminar courses in a standard format) and that the implementers (the project trainers) would have minimal involvement in the design, development, and evaluation processes. Joseph held the title of “Instructional Designer,” but he soon acquiesced to being just a project manager.

The “siloed” nature of the firm’s T+D structure was perplexing to Joseph, but several other issues seemed even more counterintuitive to this fledgling ISDer.

· First, Joseph could not understand why design and development activities were focused on “old” information, especially when the organization was in the process of a major restructuring. For some time, many department customers had been indicating that much of the proposed content was out-of-date due to recent technological and organizational improvements.

· Second, he did not know why T+D operated as its own function, was not integrated with the operational areas of the business, and was only minimally assimilated with the broader human resources function.

· Third, Joseph struggled with the reality that there were stand-alone needs analysis, design, development, and delivery functions, but evaluation was not its own entity. And worse yet, evaluation was not conducted with any depth besides basic Levels 1 and 2 or summative evaluation (even formative evaluation was minimally applied).

Table 4.2 illustrates Joseph’s foundational perspectives—how different types of evaluation were taught and aligned during Joseph’s (and most professionals’) graduate preparation.

Table 4.2

Application of Traditional Evaluation Types

A Second Academic Perspective

Joseph knew the business had problems, but his formal ISD preparation was not robust enough to help him fix them, so in 1998 he set off to get a master’s in business administration (MBA). He hoped this additional degree would give him a better understanding of how a business operates and how he could better apply his instructional systems design background to those operations. As he progressed through the finance-focused graduate program, things became much clearer in terms of how identified gaps have a negative effect on organizational results. At the turn of the century, the concept of human performance technology (HPT) was emerging in his ISD world, providing a new perspective on how to address performance gaps with instructional and non-instructional interventions. But, again, even with HPT and a deeper understanding of overall business concepts and functions, Joseph was still struggling. He was discovering that much of what happened was not a result of “pulling” business priorities to create a true linkage to results, but rather a “pushing” of functional priorities based on limited, higher-level evaluation data and a desire to justify turfs and budgets. Was this how all businesses operated? Was HR and, more specifically, T+D, simply a budgeted function of the business or could it be a strategic, value-generating consulting partner? To find out, Joseph decided to move up in his career.

The Generalist Perspective

While continuing to build his business acumen through the MBA program, Joseph ventured into an HR generalist role in 2000 with the same large automotive manufacturer, serving both the finance arm and the parent manufacturing organization. At last, Joseph was positioned to get more exposure to the operational workings of the business, and he was able to work closely with the business leaders to deploy solutions that addressed problems and closed performance gaps. What a difference from his previous position, in which he had been a specialist working for the T+D function, where evaluation strategy and execution were performed in a vacuum.

Joseph was getting used to being a human resources generalist, but he still knew he was not truly operating at the strategic level and really utilizing all the skills and knowledge he had learned through his new MBA and previous ISD programs. As a result, Joseph changed jobs once again in 2003 and accepted a role with a large, global, diversified industrial organization. From his first day on this new job, he felt he was really part of the leadership team and a key decision-maker and influencer in the organization (at least in the plant location in which he led the human resources function). Consequently, he learned a new way of looking at evaluation. Joseph began to understand that evaluation, as he learned it (see Table 4.2), was a very narrow application of the term, especially in relating training or general HPT interventions to business performance.

The Experienced Perspective

Joseph spent more than two years as a plant human resources manager with one diversified industrial organization and then an additional two plus years with another as a division human resources manager. Both experiences proved invaluable. Joseph learned that evaluation started with an established baseline. In other words, having a solid evaluation strategy was contingent on (1) knowing what was important to the business, (2) knowing how the business measured what was important, and (3) knowing how the organization or function was performing against specified metrics.

Once he began aligning current performance to strategic business goals and metrics, Joseph knew he could develop a solid evaluation strategy that would measure the success of the targeted interventions. At long last . . . this was something Joseph did not gain exposure to in his previous assignment as an instructional designer. His previous training and experience in evaluation was narrowly focused on being able to demonstrate that learning occurred, rather than focusing on the more meaningful metric of how the learners’ enhanced job performance affected the business overall. He also was able to recognize that he had been treating training as an intervention focused solely on accomplishing specific department metrics instead of concentrating on something that was devised in partnership with the internal business customer (process owner). He concluded that this was in large part due to the segregated nature of the T+D department from the actual operations of the business.

Joseph soon learned that metrics started at the top of the house and cascaded down through the organization so that each facility and function could be aligned according to what was important. This type of top-down alignment ensured that evaluation was focused on the quantitative and qualitative priorities expressed in the organization’s business model. Evaluation became more than a focus on one particular aspect of a cycle (for example, formative evaluation focused on certain portions of the ISD model); it was really about ensuring that any evaluation eventually tied back to the overall business metrics, no matter what the focus of measurement was. As a member of the operational leadership teams for the two large diversified industrials, Joseph learned a better way to integrate his business and instructional technology educational experiences (2003 through 2007).

Joseph found, however, that the evolution of business toward talent management, employee engagement, and business consultation by the human resources function was much broader than where many HR, T+D, and PI professionals were focused. He also observed that evaluation became an overused tool, overcomplicating the nature of business, because everything became a target for evaluation even when there was no valid business reason or linkage to business results. Evaluation was performed for evaluation’s sake, rather than for the true value it could provide by relating interventions such as training, recognition, succession, and recruiting to overall business performance. This prompted him to consider a new opportunity that would allow him to take everything he had learned, good and bad, and demonstrate the type of human resources focus and alignment that would close all the gaps in the evolution of the function he had observed.

The Business Perspective

Joseph’s frustrations with continuing to perform wasted activities and employ non-value-added measurement systems drove him to accept a new position as director of human resources and organization development at a much smaller, private organization (in 2008). Having spent most of his career in manufacturing, Joseph was excited about getting into a different type of organization—the information publishing and services industry. He was thrilled with the proposition of owning the development of the human performance improvement and development strategy for the organization, aptly called the “talent management” strategy. This was finally his chance to take all the great practices (perspectives) he had been exposed to and “lean out” those things that he considered waste, to produce what was bound to be considered an industry benchmark talent management system—at least in his mind!

Immediately after accepting the new position, he realized that the organization (a business that was growing very quickly) had nothing in place to build on. The non-existent program aspects did not pose a serious issue for him. But he did find the non-existent systems infrastructure—so critical in linking all aspects of a talent management and an employee engagement strategy together—to be particularly stressful.

Business Model. Beginning with this context, Joseph knew the business and its respective leaders were not looking for a narrowly focused T+D program, or a time-consuming HPT gap assessment, to determine immediate, critical solutions. They were looking for a full-scope talent management program that aligned to the business’s philosophy, values, vision, and mission, as well as to their long-term organizational strategy. And they expected this to be built very quickly, with minimal resources and a focus on achieving their high-level business metrics. Joseph knew that training and development was a component of this, as well as some gap assessment and closure, but he also knew that they were small components in the grander scheme of things. He, therefore, set out to understand the business, which started with a grasp of the organization’s business model, as seen in Figure 4.1.

This metrics-based, three-tiered way of thinking was critical to help Joseph understand exactly how corporate leadership defined the culture and essence of the company, how they envisioned the future of the organization, and how they measured success. Joseph knew that this was the basis for each functional strategy (including his HR operations) and the related evaluation strategy of all interventions or solutions that his group would have to deliver. Once Joseph established a foundational understanding of the business, he embarked on getting to know each function and location.

Figure 4.1

Business Model.

HR Strategic Model. Through his extensive U.S. travels, Joseph came to understand that some specific training programs were important to the firm’s overall talent management program. But T+D was still a microcosm of the employee engagement strategy the organization needed in order to continue to prosper and meet its long-term objectives. His vision of a truly strategic HR partnership with the business began to take shape, and he was able to see how the three pieces fit together in his mental HR strategic model (see Figure 4.2) to culminate in a high-performance culture and organization.

Figure 4.2

Strategic HR Model.

· Engagement and satisfaction would have to address the environment in which people work and interact and needed to include communication, culture and change management, reward and recognition, and community, team, and employee relations (see Figure 4.3).

· Talent management comprises integrated strategies to increase workforce productivity by having the right people (capacity and attitude) with the right skills and knowledge to meet current and future business needs. The integrated cycle had to be a continuous flow of finding talent inside and outside the organization and continuously cultivating that talent, and it needed to include assessment, identification, development, acquisition, and alignment (see Figure 4.4).

· Functional excellence would then become the focus on data-driven continuous improvement across the enterprise and needed to include lean/six sigma, business consulting, coaching, and competency modeling.

Figure 4.3

Employee Engagement Model.

Figure 4.4

Talent Management Model.

The Strategic HR Model (Figure 4.2) allowed Joseph to demonstrate to the organization that managing talent effectively was at the core of ensuring an engaged workforce. Following this strategy would allow each function to perform at its very best to drive improved business performance. Together these models helped Joseph visualize and enforce with the organization that training and development are important but are minute in the context of how the organization benefits from its human capital.

Joseph reflected on how he used to “push” training programs based on the internal demands of the department versus the ever-changing needs of the business. He cringed to think how many businesses used to (and still do) create giant repositories of training to try to cover any potential employee need, rather than only producing what was actually critical to achieve true success. He finally found himself in a position to change these outdated practices and satisfy the organization’s hunger for customized solutions. By starting with the business’s overall reason for existing, he could work with individuals and managers on solutions that always linked back to those reasons. Joseph came to understand that evaluation was not just about figuring out whether people were learning, or were happy, or whether an intervention produced a return on investment (justifying financial investments rather than showing whether solutions were working and serving as a means for continuous evolution and improvement). He now knew that he had to first determine what aspects of the model needed to be measured and then focus on how each piece of the puzzle was contributing to a pre-defined desired outcome, all starting with the organization’s business model.

Strategic Business Model. The following systematic breakdown is how Joseph approached evaluation as a tool for his new human resources business strategy (see Figure 4.5):

1 Start with a comprehensive understanding of the overall business model.

2 Align the employee engagement program to the business model.

3 Assess the major components of the employee engagement program to establish a baseline to measure against (performance to metrics and cultural gaps).

4 Build a plan for talent management that capitalizes on the employee engagement program and linkages to the business model.

5 Assess the culture and environment to establish a baseline to measure against (performance to metrics and talent gaps).

6 Develop the gap closure strategy utilizing basic PI principles and practices.

7 Identify the evaluation needs and measurement tools based on business-defined desired outcomes.

8 Evaluate (and improve):

· External sources of support and solutions for appropriateness

· Processes for internally designed and developed solutions

· Employee and manager attitudes toward solutions, culture, organizational strategy, and level of engagement

· Knowledge transfer related to personnel development solutions

· Job performance and individual and team capability and capacity

· Effects of solutions on functional and organizational performance

· Perpetual alignment of solutions to achieving the business model value proposition

Figure 4.5

Evaluation Strategy Model.

By thinking business first, Joseph’s process for designing, developing, and deploying solutions would never grow stale, and he would always be keeping

CHAPTER FIVE

Success Case Methodology in Measurement and Evaluation

Anne M. Apking

Tim Mooney

Fifty years ago, Donald Kirkpatrick, one of the pioneers in the learning and performance improvement fields, developed his taxonomy, the four levels of training evaluation. His seminal work has played a vital role in structuring how our profession thinks about evaluation and in giving us a common language for how to talk about this important topic. Human resource development (HRD) professionals around the world have benefited from his valuable contribution, which identified the following four levels of evaluation:

· Level 1: Did the participants like the training or intervention?

· Level 2: Did the participants learn the new skills or knowledge?

· Level 3: Did the participants apply the skill or knowledge back on the job?

· Level 4: Did this intervention have a positive impact on the results of the organization?

Yet, when we recently went to the Internet and typed “training evaluation process” into a search engine, more than six million entries surfaced on the subject. They included recommended processes, reports, tips, books, articles, and websites. This multitude of resources was provided by universities, vendors, hospitals, state agencies, various military branches, and the federal government.

We believe this extraordinarily large number of entries on this topic strongly suggests two things:

1 The concept of training evaluation is a hot topic that many HRD organizations are interested in, and

2 Our profession is still searching for the approach or formula that will make evaluation practical and the results meaningful.

So why does this search for the evaluation “Holy Grail” continue fifty years after Kirkpatrick first developed his taxonomy and approach? And why do we struggle as a profession to crack the code?

We suspect that many of you reading this chapter are hoping to find this magic formula for evaluation—one that is easy to use, yields compelling Level 3 and 4 results, and will solve the evaluation mystery. It is our belief that our profession does not need a slicker formula for evaluation or a new technique for performing ROI evaluation. Nor do we need more technology to make our current efforts faster and easier. Our profession is awash in formulas, equations, and techniques for evaluation. Therefore, the solution does not lie in inventing yet another formula or technique. The key to unlocking the mystery is developing a fresh perspective around the evaluation of training and performance improvement interventions—developing a whole new strategy that looks at why we do evaluation and how we approach it.

THE REALITIES OF TRAINING

After having conducted numerous evaluation studies during our careers, reviewing the evaluation studies conducted by prestigious organizations around the world, and talking with HRD professionals about the challenges associated with their evaluation efforts, we have seen two factors consistently emerge:

1 All training interventions will yield predictable results, and

2 Training interventions alone never produce business impact.

These factors are the realities operating whenever training is done. In order to perform a meaningful evaluation, we need to use a methodology that acknowledges these two realities and leverages them.

Throughout this chapter we will frequently refer to “training,” “learning,” or “training evaluation.” To clarify our terminology, we will use these terms in the broad sense to refer to any performance improvement intervention in which there is a training component. Our intent is not to ignore or marginalize the importance of other HPT components. In reality, solutions are almost never all training or all non-training. Virtually every intervention aimed at driving performance or business results will have a training component to build employees’ skills and knowledge, just as every training solution will need to be augmented with performance support tools, such as revised incentives, job aids, more explicit supervisory direction, and so forth. Our intent behind shining the bright light on the training component is to make sure that this large and visible expenditure is truly paying off and that the organization is getting full value from its investment, because frequently, organizations do not.

All Training Interventions Will Yield Predictable Results

The first reality is that all training will yield predictable results. No matter whether the training is an executive development program, customer service skills training, technical skills training, or a coaching program, there will be a predictable outcome:

1 Some participants will learn valuable information from the training and utilize it back on the job in ways that will produce concrete results for their organizations.

2 Some participants will not learn anything new or will not apply it back on the job at all.

3 And most participants will learn some new things, try to use the newly acquired knowledge or skills but for some reason (for example, lack of opportunity, lack of reinforcement and coaching, time pressures, lack of initial success) will largely give up and go back to their old ways.

The exact percentage of people in each category will vary depending on the nature of the training, the level being trained, and the organization. For example, participants in technical training typically use their new knowledge or skills back on the job at a higher rate than participants in soft-skills training. But regardless of the specific numbers in any intervention, this pattern will emerge.

Traditional Method of Evaluating Usage and Results. Because of this first reality, relying on traditional statistical methods such as the mean (or average) can be misleading when it comes to capturing or evaluating the impact of training. Let us explain. (We promise this will not digress into an esoteric discussion of mind-numbing statistics.)

The problem with the average is that it tries to describe the entire distribution with a single number. By definition, that number is always going to be “average.” There will be many cases that were much better, and there will be many cases that were much worse than the average. And they all get “smooshed” together into a number that is “average.” So why is a single number a problem? As we described earlier, there are actually three categories of participants who leave training programs, not one. To use a single number to characterize these three groups, which are very different and produced different levels of results, is misleading and not particularly useful. Consider this simple example. If Microsoft founder Bill Gates were in a room with one thousand homeless and destitute people, the average net worth of those individuals would be about $40 million. In reality, that average does not begin to describe the real situation, and to report that, on average, the people in the room are doing well economically would be an egregious misrepresentation, if not an outright deception.

In the same way, it can be misleading or dishonest to report an average impact of training, because those few participants who use their training to accomplish some extraordinary results may mask the fact that the larger proportion of participants received no value at all. Or vice versa: the large proportion of participants who failed to employ the concepts from the training can overshadow the important value that a few people were able to produce for the organization when they actually used the training. The average will always overstate the value of the training for people who did nothing with it, and it will always understate the good the training did for those who actually used it. In short, it obfuscates what really happened and what we as HPT professionals need to do about it. This leads to the second reality of training. It surrounds the issue of why training works or does not work to produce business impact.
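To make the point concrete, here is a minimal sketch, in Python, of how the mean masks the three categories of participants described above. The per-participant dollar figures are hypothetical, invented only to mirror that pattern; nothing here comes from the chapter’s data.

from statistics import mean, median

# Hypothetical per-participant business impact (in dollars) for a
# twenty-person class, mirroring the three predictable categories:
impact = (
    [120_000, 95_000]   # a few clear successes
    + [4_000] * 6       # tried it, largely gave up: modest value
    + [0] * 12          # learned nothing new or never applied it
)

print(f"Mean impact per participant:   ${mean(impact):,.0f}")    # $11,950
print(f"Median impact per participant: ${median(impact):,.0f}")  # $0
print(f"Share with zero impact:        {impact.count(0) / len(impact):.0%}")  # 60%

Reporting only the $11,950 “average” would hide both the 60 percent who produced nothing and the two people who produced almost all of the value.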

Training Alone Never Produces Business Impact

Our profession largely operates with a mythical view that states: “If we are doing the training well, the business results should be good.” This is depicted in Figure 5.1.

Figure 5.1. Mythical View of Training.

Unfortunately, this is not what happens in the real world. Anyone who has been in the HRD business for very long has probably experienced a situation similar to this: two people attend the same training program, taught by the same instructor, using the same materials, demonstrating comparable skills on an end-of-course assessment, even eating the same doughnuts on breaks. Yet, one of them takes what she learned and consistently applies it on the job in a way that helps improve her performance and produces a great benefit for the organization. At the same time, the second person hardly uses the new skills/knowledge at all and has nothing to show for his efforts. How can the same training program produce such radically different levels of results? How would you judge the effectiveness of this training program?

This example dramatizes the fact that there is almost always something operating outside of the training experience that can have a significant impact on whether the trainees will even use the new skills/knowledge and what results they will achieve by doing so. Therefore, the second reality, simply stated, is that the training program alone never accounts for the success or failure of the training to produce results. There is always something else happening before or after the training that has just as much impact (or more) on whether the training is used to produce results for the individual and organization. This is depicted in Figure 5.2.

Figure 5.2. The Reality of Training.

The size of the “learning event” square is relatively small compared to the “before” and “after” rectangles to signify that the training itself is typically a smaller player in the results outcome. Other performance factors usually have a greater influence in determining the level of results. Sometimes those factors are deliberate and desirable, such as job aids, new work processes, or manager coaching; frequently, they are accidental and undesirable, such as peer pressure to stick with the old approach, lack of confidence in using the new skills, or no support or time to try out the new techniques.

Restating the Two Realities

To restate the two realities:

· Reality Number 1: Training yields predictable results. Typical statistical measures used in evaluation studies can be very misleading.

· Reality Number 2: Training alone never accounts for the success or failure of the training to produce results. Therefore, attempting to parcel out the portion of the results produced by the training alone is impossible and terribly counter-productive.

To be useful, an evaluation strategy must acknowledge that these two realities are operating and then leverage them. By leverage, we mean capture and report the kind of data that helps the organization maximize the impact of training and any other performance improvement interventions in the future. An evaluation that is simply “a look in the rear-view mirror,” reporting statistics on what happened in the past, has limited value to the organization. Moreover, it can be perceived as self-serving or defensive: “Look at the wonderful results the L&D organization produced” or “We are producing meaningful results; please approve our budgets.”

The message that runs through this chapter is quite simple: the goal of evaluation is not to prove the value of a training or performance intervention. The goal of evaluation is to improve the value of training. Its primary purpose should be to help the organization produce more business impact from its training and performance improvement investments. This goal cannot be accomplished by creating a new and slicker formula for calculating ROI; it requires a strategy and method that will help L&D departments collect the kind of data, and communicate the kind of information, that will begin to change the paradigm for how their organizations view and implement training and performance improvement interventions. In other words, greater results will not be achieved by using better counting tactics, but only by taking a more strategic approach toward evaluation.

SUCCESS CASE EVALUATION METHOD

The Success Case Evaluation Method, developed by Dr. Robert Brinkerhoff, has provided this strategic perspective and has enabled HRD professionals to begin this change effort in their organizations. This strategic approach answers four basic questions:

1 To what extent did the training or performance intervention help produce valuable and concrete results for the organization?

2 When the intervention worked and produced these valuable results, why did it work?

3 When the intervention did not work, why not?

4 What should be done differently to maximize the impact of this training (or any future performance intervention) so the organization is getting the best return from its investment?

Success Case Method Case Study

Below is an actual case of one of the member companies in our user group that used the success case method (SCM) to proactively improve training, rather than just document the success of a training intervention. This organization was implementing a large and strategically critical business initiative to help managers employ new marketing concepts and tools in their business plans and pricing decisions. Training was an important part of this initiative, building the managers’ capabilities with these new pricing and marketing approaches. A training director discovered from the evaluation study that just one of the several dozen trainees had used the training to directly increase operating income by an impressive $1.87 million. In this case, it would have been very easy (although this training leader did not succumb to the temptation) to calculate an average impact estimate that would have made it look as if the typical participant had produced close to $100,000 of value from the training, well above and beyond what it had cost. And indeed, had this training function employed one of the typical ROI methodologies, this is exactly what they would have discovered and reported.
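The arithmetic behind that temptation is worth seeing. Below is a minimal sketch of the naive calculation, assuming a hypothetical cohort of twenty trainees (the chapter says only “several dozen”); the dollar figure is the one reported in the case.

# Naive "average benefit" arithmetic the authors warn against.
single_success = 1_870_000   # documented operating-income gain from one trainee
cohort_size = 20             # assumption; the chapter says "several dozen"

naive_average = single_success / cohort_size
print(f"Naive per-trainee 'impact': ${naive_average:,.0f}")  # $93,500

# The reality behind that flattering number: one trainee produced the
# entire $1.87 million, and many trainees made no use of the training at all.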

Instead, this training leader happily reported and shared in the recognition for the wonderful success that the training had helped one participant produce. But he also dutifully reported the darker side of the picture: that a large proportion of the trainees came nowhere near this sort of outcome and that, in fact, many trainees made no use of the training at all. It took courage to tell the whole story, but doing so drew attention to the factors that needed to be better managed in order to help more trainees use their training in similarly positive ways.

By bringing critical attention to the low usage of the training and the marketing tools, and to the projected business consequences if the strategic shift could not be made, our user group member was able to stimulate key executive decisions in several of the business’s divisions. These decisions would drive more accountability for employing the new marketing skills and more effective manager involvement. The bold actions of this training leader drew new attention to the many performance factors that drive impact and enabled the entire organization to push strategic execution more deeply through its ranks.

The SCM enables HPT professionals to dig beneath the results headline and investigate the real truth. Why were these great outcomes achieved? Who did what to cause them to happen? What would it take to get more such outcomes in future interventions? What prevented other people from obtaining similar great results? Only when the L&D organization begins reporting the complete story about training and business outcomes in ways that senior managers and line managers can understand and act on will they be able to effectively change the way that training and other performance interventions are perceived and ensure that they lead to business results.

The Success Case Evaluation Method: Five Simple Steps

The primary intent of the SCM is to discover and shine a light on instances in which training and other performance tools have been leveraged by employees in the workplace in remarkable and impactful ways. Conversely, the SCM also allows us to investigate instances of non-success and to better understand why these employees were unable to use what they had learned to make a significant difference in the organization.

The SCM is an elegantly simple approach that can be used to evaluate a multitude of organizational improvement initiatives, with training being just one. SCM employs an initial survey process to identify instances of success, as well as instances of non-success. The successful instances are then investigated through in-depth interviews to determine the magnitude of the impact these employees achieved when they applied their new skills and capabilities on the job. In addition, employees who were unable to successfully leverage the training are also interviewed to determine the likely causes for their lack of success. It is through this collection of “stories,” both positive and negative, that we can gain keen insight into how the organization can get maximum impact from learning interventions.
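The screening logic of that initial survey is simple enough to sketch in a few lines of Python. This is our own illustration, not part of the SCM itself; the respondent names, the 1-to-5 impact scale, and the cutoff of two interviews per group are all assumptions.

# Pick likely success and non-success cases from initial survey scores.
responses = [
    ("Ana", 5), ("Ben", 1), ("Chloe", 4),
    ("Dev", 2), ("Eli", 5), ("Fay", 1),
]  # (respondent, self-reported impact on a 1-5 scale)

ranked = sorted(responses, key=lambda r: r[1], reverse=True)
success_candidates = ranked[:2]       # highest scorers: probe magnitude of impact
non_success_candidates = ranked[-2:]  # lowest scorers: probe barriers to application

print("Interview for success stories:", success_candidates)
print("Interview for non-success stories:", non_success_candidates)

In practice, the in-depth interviews then verify and document each story, as the five steps below describe.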

Specifically, the SCM consists of five essential steps:

Step 1: Focus and plan the evaluation study.

Step 2: Craft an “impact model.”

Step 3: Design and implement a survey.

Step 4: Interview and document both success and non-success cases.

Step 5: Communicate findings, conclusions, and recommendations.

The remainder of this chapter will look more closely at the process for conducting a success case evaluation study.

Step 1: Focus and Plan the Evaluation Study. It does not take a college degree in accounting to conclude that any dollar invested in the evaluation of learning is a dollar that will not be leveraged in the design, development, or deployment of learning. In other words, any effort to evaluate training diverts valuable resources from the training function’s most essential products and services to the organization. In times of dwindling training budgets and downsized training staffs, evaluation efforts must be thoughtfully and strategically expended.

The focal point of this first step is to clearly articulate the business question that needs to be answered. What information would help business leaders accelerate the key results or better execute a business strategy? What information would help the organization understand how the training can support these business goals? Success case evaluation studies that place these questions “front and center” are the studies that yield the greatest value for the organization. In our experience of conducting SCM evaluation studies, we have often found the following types of training initiatives to be good candidates for this type of evaluation:

1 A performance intervention that is an integral part of a critical business initiative, such as a new product launch or a culture change effort. The organization cannot afford to let these initiatives falter. An SCM study can help the organization assess leading indicators of success, or surface barriers that need to be removed, before it is too late and the business initiative fails to deliver the expected results.

2 A training initiative that is a new offering. Typically, business leaders want to be reassured that a large investment in the launch of a new training solution is going to pay off. A new training initiative will benefit from an SCM study, especially if it was launched under tight timeframes. An SCM study can readily identify areas of an implementation that are not working as well as they should and can provide specific recommendations for modification or even redesign. In addition, an SCM study conducted following the pilot of a new initiative will provide invaluable data regarding its initial impact and help to determine whether a broader roll-out is advisable.

3 An existing training solution that is under scrutiny by senior-level management. In good economic times, organizations often offer development opportunities for employees without much thought to the relative value they add. But in times of severe “belt tightening,” these training solutions are usually the first to come under scrutiny, and an SCM study can reveal what they are truly worth, especially if the initiative is expansive, expensive, and visible.

4 Any “soft-skills” learning resource. When looking for opportunities to increase impact and business results, business leaders frequently question the value of learning solutions that teach skills that can be generally applied in many settings, such as communication skills, customer service skills, and leadership, management, and supervisory skills. An SCM evaluation study can clearly pinpoint the ways in which employees are able to leverage these broad skill sets in ways that have a positive impact on business goals.

Regardless of the initiative selected for the SCM study, it is imperative that a relevant and important business question lies at the heart of any evaluation study.

Step 2: Craft an “Impact Model.” In this world of technology-enabled gadgets, where would we be without our onboard navigation systems, our GPS devices, or even mapquest.com? Well, frankly, we’d be lost, which is exactly where we would be without an impact model during an SCM study.

The impact model is the GPS device for the evaluation study. It is a simple illustration of the successful outcome of the training initiative we are evaluating. In other words, the impact model creates the “line of sight” that connects the following:

1 The skills, knowledge, and capabilities learned through our training solution;

2 The performance improvement we expect from employees back on the job in specific and important situations as a result of acquiring those skills, knowledge, and capabilities;

3 The results we expect given this new and improved performance; and

4 The business goals that will be directly impacted.

This visual depiction provides us with a snapshot of success that will drive the entire evaluation study. The model will help us to craft our survey in Step 3, to formulate the interview questions in Step 4, and to derive our findings, conclusions, and recommendations during Step 5.

Table 5.1 illustrates an example of an impact model for a call center supervisor. Note that the statements within columns do not necessarily correspond with one another; an impact model works more like a funnel. For example, in this case the applications in the second column produce the job results in the third column, which in turn help to achieve the business goals in the fourth column.

Table 5.1. Impact Model for Coaching Skills Training for Call Center Supervisors

Key Skills and Knowledge

· Learn a questioning technique for effective diagnosing of development level.

· Understand how to assess team strengths and performance gaps.

· Understand how to adapt leadership style to effectively coach a CSR representative.

· Develop the ability to help teams set and achieve goals.

· Learn techniques to reduce defensiveness in coaching situations.

Critical Applications

· Help integrate new representatives into CSR teams.

· Use behavior observation and targeted questions to determine the skill level of representatives.

· Coach representatives by explaining the call metrics and their relationship to the model and impact on corporate goals.

· Coach by mapping day-to-day tasks to corporate goals, asking questions like: “Why is this task important?”

Key Job Results

· 75 percent of CSR representatives score 90 percent or better on the universal QA form.

· Attrition reduced to 30 percent.

Business Goals

· Increase customer renewal rates by 10 percent.

· Maintain or improve J.D. Power rating of 92.
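For teams that keep planning artifacts in code or configuration, an impact model can be represented as four independent lists, since, as noted above, the columns funnel into one another rather than aligning row by row. The sketch below is purely illustrative; the class name, field names, and Python representation are our own, not part of the SCM.

from dataclasses import dataclass, field

@dataclass
class ImpactModel:
    """The four 'columns' of an impact model such as Table 5.1."""
    # Lists deliberately do not align row by row; the model is a funnel.
    skills_and_knowledge: list[str] = field(default_factory=list)
    critical_applications: list[str] = field(default_factory=list)
    key_job_results: list[str] = field(default_factory=list)
    business_goals: list[str] = field(default_factory=list)

coaching_model = ImpactModel(
    skills_and_knowledge=["Diagnose a representative's development level"],
    critical_applications=["Coach representatives on their call metrics"],
    key_job_results=["75% of representatives score 90% or better on the QA form"],
    business_goals=["Increase customer renewal rates by 10%"],
)
print(coaching_model.business_goals)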

Step 3: Design and Implement a Survey. It is during this step of the process that our efforts with the SCM study move from being strategic to being tactical. At this point, we have selected the performance intervention we will evaluate and have an impact model documenting what success should look like in terms of behavior and results. We now need to craft a survey that will be administered to the target audience of that initiative, so that we can identify employees who successfully applied their learning in significant and meaningful ways. In addition, we also want to uncover employees who were unable to get positive results, as their stories yield valuable insights as well.

Many questions typically arise with regard to the design and implementation of this survey. The questions we are asked most frequently include:

1 What questions should be asked on the survey? If your only goal for the survey is to identify the most successful and least successful training participants, it is possible that the survey consists of a single question: “To what extent have you been able to leverage [the name of the performance intervention] to have a significant positive impact on [some organizational goal]?” If, however, you want to solicit additional input from the survey, such as demographic information or the degree of managerial support, you will include additional questions to collect this data. In general, it is recommended that the survey be brief, not to exceed five to eight multiple-choice questions in total, and follow accepted best practices of survey construction.

2 To whom should the survey be sent? While there is extensive research available on sampling theory, such as Babbie’s (1990) book, Survey Research Methods, here are a few helpful guidelines (captured as a small calculation in the sketch after this list). First, survey your entire target audience if it is fewer than one hundred participants. We anticipate, and usually experience, a 50 to 70 percent response rate, which will yield about fifty to seventy completed surveys in this case. If your target audience exceeds one hundred, then use a sample size that will result in at least fifty completed surveys, assuming a 50 percent response rate.

3 Is the survey anonymous? No, this initial survey cannot be anonymous, as we need to be able to follow up with those survey respondents whom we believe have a success story, or a non-success story, to tell. Even though we do not provide anonymity, we steadfastly guarantee every respondent’s confidentiality throughout the process.

4 How much time should elapse between participants’ attendance at the training and the receipt of the survey? This question is best answered with another question: “How long after exposure to the training is it reasonable to expect that participants would have had the opportunity to apply their new skills and produce meaningful results?”
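As promised in question 2 above, here is a minimal sketch of those sampling rules of thumb in Python. The function name and the default values are our own; this is a convenience, not a substitute for proper sampling theory.

from math import ceil

def survey_invites(population: int,
                   target_completed: int = 50,
                   expected_response_rate: float = 0.50) -> int:
    """Return how many people to invite, per the rules of thumb above."""
    if population < 100:
        return population  # survey the entire target audience
    # Otherwise invite enough people that the expected responses still
    # meet the target number of completed surveys.
    return min(population, ceil(target_completed / expected_response_rate))

print(survey_invites(80))   # 80: survey everyone
print(survey_invites(600))  # 100: fifty completed surveys at a 50% response rate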
