Experience API (Part 4)

O.K., last week we finished up with SCORM, which paved the way for a discussion about Tin Can API and, yes, Experience API.  Let’s get right into it . . .

SCORM had peaked in its level of development and value, and the ADL (Advanced Distributed Learning) Initiative decided that a newer version of SCORM would not meet its continuing needs. As such, in 2011, ADL issued a contract to investigate, research, and basically re-think SCORM in order to advance its mission and goals. The Nashville-based firm Rustici Software won this contract and initiated its work by starting a conversation, a conversation that became Project Tin Can.

Project Tin Can

Rustici termed the research phase of the contract Project Tin Can, embracing the image of tin-can telephony to convey the two-way communication between Rustici and the eLearning community.

Per Rustici, this process included seeking information through five different avenues:

  • Input from hundreds of xAPI stakeholders;
  • Interviews with key industry leaders;
  • LETSI SCORM 2.0 White Papers (this was, in many ways, a precursor of Project Tin Can; for an archive of these papers, see the Rustici site: https://scorm.com/tincanoverview/the-letsi-scorm-2-0-white-papers/);
  • Interactions with then-current Rustici customers; and,
  • The ADL contract specifications.

A Rose By Any Other Name . . . Tin Can API/Experience API/xAPI

The Project Tin Can research produced the Tin Can API, which was a qualitative successor to SCORM and an early version of the continually evolving Experience API. xAPI, then, is simply shorthand for Experience [eXperience] API, neither a successor to it nor a different version of it.

Much of the confusion arose, and still arises, from the period when Tin Can and Experience API were virtually synonymous: the period of, and the years immediately following, Rustici's submission of its deliverables to the ADL. At that time, perhaps understandably, Rustici stated:

ADL will be transferring ownership of the spec to a public standards body after v1.0 is complete this spring. After that transfer, we don't expect the official government name "Experience API" to last much longer [emphasis added].

(https://experienceapi.com/we-call-it-tin-can/)

They had branded their process and deliverable with the “Tin Can” name, and their work was widely known by many in the industry as the Tin Can API.  Yet, the ADL used the name Experience API in their contract specifications and in their continuing usage. Experience API is the pervasive name that is used, and the name “Tin Can” is only formally used in reference to Rustici’s original contract work. Indeed, Rustici later called its response to the ADL contract “Project xAPI.”

(https://experienceapi.com/overview/)

Ownership versus Web Domains

The ADL awarded the contract—the BAA [Broad Agency Announcement, which, in general, is for basic and applied research and development]—to Rustici and, as such, the work derived from that contract was and is the property of the United States Government. The issue of name "ownership" publicly arose in a May 2012 Google Group discussion:

https://groups.google.com/a/adlnet.gov/forum/?hl=en&fromgroups=#!topic/tincanapi-info/q87uy3XJXX8

The concern centered on the Rustici trademark petition for the names "Tin Can" and "Project Tin Can." No less a figure than Rustici President Mike Rustici weighed in to assure writers that the company had no proprietary claim on the use of "Tin Can" and had sought trademark status only to prevent the name from being "pirated" by others who might be less community-minded.

The Google Group discussion continued on the question of whether Rustici would use the "Tin Can" moniker in any of its future commercial enterprises; Rustici replied that it would, but that the company would not prevent others from doing so.

To this end, while Tin Can, Experience API, and, for that matter, SCORM are names under government "contract," as it were, Rustici owns the web domain www.tincanapi.com, which redirects to another of its domains, www.experienceapi.com; the company also owns www.scorm.com. On those sites, Rustici clearly attributes the administration, ownership, and stewardship of the respective names to the ADL, while also offering services for companies seeking to utilize the SCORM and/or Experience API specifications. For a response by Rustici Software on this subject, please see:

https://experienceapi.com/we-call-it-tin-can/ and

https://experienceapi.com/tin-can-experience-api-xapi/

Note: To be clear, the above comments are neither intended to, nor do they, take away from any of Rustici Software's groundbreaking work in the field of eLearning; rather, they are included simply to clarify distinctions amongst terms.

Next step: finally, a focused discussion of Tin Can API/Experience API innovations and evolution.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 3)

In our last blog, we further detailed the foundation for ADL and its areas of research. One of these areas is the Total Learning Architecture (TLA) infrastructure, which provided the basis for interoperability between different systems. One of the results of this work was the Sharable Content Object Reference Model (SCORM). The initial edition of SCORM was released in January 2000, with a couple of further iterations the following year. A new version, SCORM 2004, was introduced in January 2004, and DoD made SCORM use mandatory in 2006. In total, there have been four editions of SCORM 2004. The next generation beyond SCORM arose in 2010 with Project Tin Can, but we're getting a little ahead of ourselves.

SCORM, or What Do You Mean by a Sharable Content Object Reference Model?

To understand SCORM, let’s break it down into its constituent elements.

  • Sharable Content Object (SCO)—an object is the means of relating various pieces of data and their values. For us, this refers to an element within a learning system, for example, a question or an image. Each "object" is a part of the larger educational program. The desire and demand to make objects "sharable" links back to our original quest for interoperability. In one sense, think of it as a specific lesson or module in an on-line course.
  • Reference Model—by "reference," SCORM means a computer term of art, that is, the means of finding specific data located on a computer hard drive or, increasingly, on a cloud-based server. In short, a reference provides the basis for discerning a physical location for information. Yes, there are all of these 0000s and 1111s out there in the digital world, so wouldn't it be nice to be able to keep track of them? By reference model, SCORM is creating rules and protocols for references in the context of sharing that information with other Learning Management Systems (LMS).

To better understand, let's look at how software designers create their programs. I remember writing programs in the defense industry. I already knew BASIC and easily learned FORTRAN, in addition to LOCUS (an early proprietary spreadsheet program). My work was a mess, truly. LOL! I knew how to program, but insisted on writing my programs without a flowchart—breaking rule number 1. Anyway, you can imagine all of the problems I faced.

There are other rules in computer science that make it easier to write, track, and modify code. One of these approaches is object- or class-based programming. Instead of lumping all the data together, in object-based programming the programmer defines a group of fields or attributes, which then provides the basis for relating actual data values and associated operations and/or methods. This type of organization provides the basis for a commonality that can be shared amongst different programs. That is, if a data value, its characterization, and its associated operations can be made uniform, then different programs are capable of utilizing that same digital information. SCORM is about creating the basis for doing just that.
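
To make that organization concrete, here is a toy sketch in TypeScript; it is my own illustration rather than anything defined by SCORM. The class defines the fields and one associated operation a single time, so every question object shares the same uniform shape, which is exactly the kind of commonality that lets different programs consume the same data.

```typescript
// A toy sketch of object/class-based organization: the class declares the fields
// (attributes) and an associated operation once, so every question object shares
// the same, uniform shape.

class QuizQuestion {
  constructor(
    public id: string,            // identifier used to locate ("reference") the item
    public prompt: string,        // the question shown to the learner
    public correctAnswer: string, // the expected response
    public learnerAnswer?: string // the learner's actual response, if any
  ) {}

  // An operation associated with the data: was the learner's response correct?
  isCorrect(): boolean {
    return this.learnerAnswer === this.correctAnswer;
  }
}

const q1 = new QuizQuestion("geo-01", "What is the capital of France?", "Paris", "Paris");
console.log(q1.isCorrect()); // true
```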

Now imagine trying to perform all of these functions while transmitting the information through the Internet in the context of a client-server relationship. Sharing information through a cycle of request and response from the client (you) to the server (the repository of data and, generally, the program) gets complicated enough. Imagine trying to force-feed your information into a different LMS. Whew! You get the picture. Yes, the horror, as it were. So, again, that's the basis for creating SCORM.

SCORM Protocols

Let's be clear. SCORM is neither a software program nor a programming language. Rather, SCORM provides standards for data and programming that make it possible to have data sets that are interchangeable amongst differing LMSs. So, software designers are extremely mindful to utilize SCORM when designing and coding their proprietary LMSs. There have been numerous limitations to SCORM. Why? It's simple: trial and error. Software designers within and without the government have found flaws or limitations in the SCORM protocols, which, of course, gave rise to successive iterations of SCORM. The SCORM protocols are the rules utilized by different Application Programming Interfaces (APIs). An API is a defined set of routines and rules through which different programs and computer systems communicate with one another. API and software developers use SCORM to create the standard of interoperability for eLearning systems. There is a SCORM API, but it is just one of many forms of API.
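
As a rough, hedged illustration of what a run-time API looks like in practice, here is a minimal TypeScript sketch of a content object reporting results to an LMS through the SCORM 2004 run-time interface. The interface methods and the cmi.* element names come from the SCORM 2004 run-time specification; the reporting helper, its name, and the score value are invented for the example, and error handling is omitted.

```typescript
// A minimal sketch of a SCO reporting results to an LMS through the SCORM 2004
// run-time API. The interface below mirrors the run-time functions defined by the
// SCORM 2004 specification; "cmi.score.scaled" and "cmi.completion_status" are
// elements of its data model.

interface Scorm2004API {
  Initialize(param: ""): "true" | "false";
  GetValue(element: string): string;
  SetValue(element: string, value: string): "true" | "false";
  Commit(param: ""): "true" | "false";
  Terminate(param: ""): "true" | "false";
}

// In a browser-delivered SCO, the LMS exposes this object (conventionally named
// API_1484_11) on a parent or opener window; the SCO locates it and passes it in here.
function reportLessonResult(api: Scorm2004API, scaledScore: number): void {
  api.Initialize("");                                        // open the session with the LMS
  api.SetValue("cmi.score.scaled", scaledScore.toString());  // e.g., 0.91 for 91%
  api.SetValue("cmi.completion_status", "completed");        // the learner finished this SCO
  api.Commit("");                                            // ask the LMS to persist the data
  api.Terminate("");                                         // close the session
}
```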

A major SCORM component was adopted with the 2004 version: researchers with the ADL created the notion of "sequencing." The sequencing protocol lets a course specify the order in which learners may experience content objects. Such control can be valuable, but it can also be a limitation.

Next step: The movement away from SCORM toward Tin Can API and Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 2)

In our last blog we discussed the foundation for Experience API, Advanced Distributed Learning (ADL). Let's reiterate the truly pressing need for ADL, because it's so, so easy to get lost in the rapid pace of technological innovation and, moreover, of system transformation.

Prior to ADL, and prior to eLearning, education that utilized computers was, in many ways, a solitary, alienating experience for both the user and the providers of educational content. How's that? Education and training took place at a single computer station, prior to the days of networking. Looking backwards, that form of education has been termed Computer-Based Training (CBT). Think of the painfully sad, Dilbert-esque image of a bureaucrat toiling away in a cubicle.

In CBT, administrators purchased software packages—generally expensive software packages—that could be utilized by single or multiple users based on purchased licensing privileges. Proprietary packages, unlike today, were not cloud-based, but utilized compact discs (CDs) to access programs. I remember an actor on television extolling the value and permanence of CDs, pronouncing, "they can even be dropped in your goldfish bowl and nothing happens!" Information input through the software would be saved on the resident computer (or back in my day, on floppy disks ☺) and coded into a file structure that could only be accessed via the proprietary software. When software was updated to fix glitches and add functions, the educational administrator generally had to purchase the next iteration of the proprietary software in order to access old data and/or use it with the new functions. Software companies might offer mechanisms for translating the files of a competitor, but frequently the results were disappointing. As stated before, this was the problem that ADL set out to address.

ADL and the Need for SCORM Protocols

As noted in last week's blog, the ADL Initiative was developed by the U.S. Department of Defense in the mid-1990s to streamline its technological approach to education and training. As one might suspect, though, other agencies within the federal government simultaneously were engaged in similar projects for their own programs in education and technology. In order to avoid duplication and inevitable conflicts in integration, the array of federal ADL programs was consolidated within the DoD ADL Initiative. It would not be surprising for private industry to fall in line with this program, as a large portion of its revenue is generated from government contracting.

Based on Congressional defense authorization and President William Clinton's Executive Order 13111, DoD created a strategic plan for ADL with the following areas of research:

  • eLearning (web-based learning)—Research technical components and techniques to develop and support electronic-based education and training . . . consistent and interoperable . . . best practices . . . learning management systems, content registries, and Massive Open Online Courses
  • Mobile learning and mobile performance support—Research focused on the use of commercially-available handheld computing devices to provide access to learning content and information systems . . .
  • Learning analytics and performance modeling—Research in collection, measurement, analysis and reporting of data, which may include "big data," about learners and their contexts, for purposes of understanding, optimizing, and predicting learning success . . . competencies, credentialing, learner profiles, data visualization . . . associated privacy and information security concerns.
  • Learning Theory—Research focused on the application, evaluation, and embedding of efficient and effective, current, new, and emerging theories of learning, instructional technology . . .
  • Total Learning Architecture infrastructure (TLA)—Research focused on modernizing the platforms used for education and training, to interoperability of disparate systems so they can be used together as a Service Oriented Architecture (SOA) to securely share relevant learning data including, but not limited to, granular learning experience . . .
  • Web-based Virtual Worlds and simulations (VWs)—Research into the emerging fields of serious games, simulations, and virtual reality (within a distributed learning context) . . . [https://adlnet.gov/research]

Given the research mandates noted above, it became necessary to develop a language that facilitated the goals of accessibility, reusability, and interoperability. The Sharable Content Object Reference Model (SCORM) was the first solution.

Next step: the discussion of references in computer science and their relationship to the development of SCORM and, ultimately, Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API

What is Experience API? There are a bunch of names swirling around that sound similar—Tin Can API, Experience API, and xAPI. What are they, and what do they mean in relation to eLearning? They are a set of names sequentially adopted for the development of software specifications (rules) that govern the communication and relationship between learning content (educational information) and learning systems in order to record and track a wide range of learning activities on a wide range of technological platforms.
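
To give a feel for what such tracking looks like, here is a minimal sketch of the kind of record the Experience API defines: an actor-verb-object statement expressed as JSON, written here as a TypeScript object. The verb URI is one of the verbs published by ADL; the learner and course identifiers are invented for the example.

```typescript
// A minimal sketch of an Experience API "statement": who (actor) did what (verb)
// to what (object). The verb URI below is published by ADL; the learner and the
// course are made-up examples.

const statement = {
  actor: {
    name: "Jane Learner",
    mbox: "mailto:jane.learner@example.com",
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" },
  },
  object: {
    id: "http://example.com/courses/safety-101",
    definition: { name: { "en-US": "Safety 101" } },
  },
};

// In practice, statements like this are sent to a Learning Record Store (LRS),
// which stores them and makes them available for later reporting and analysis.
console.log(JSON.stringify(statement, null, 2));
```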

To best understand Experience API, the reader should appreciate its relationship to any number of interrelated terms and concepts. Here's a short list to lend the reader a heads-up:

  • Advanced Distributed Learning (ADL)
  • API (Application Programming Interface)
  • Learning Management Systems (LMS)
  • Learning Record Store (LRS)
  • SCORM (Sharable Content Object Reference Model)

The Foundation and History of Experience API

It is a truism that with the advent of eLearning, educators increasingly shifted their focus from “hard copies,” that is, printed material, toward information stored in digital format. With the explosion of digital information platforms and the wide range of proprietary software, educators were faced with the herculean task of analyzing and integrating digital information stored on various platforms, in various divergent software programs and formats. That was and still is the challenge.

Let's look at the different actors in this framework. First, there are individual users who access educational content and input responses from a variety of technological platforms—think smartphones, tablets, desktop computers, and online portals. Second, there are educational and training administrators who utilize, third, proprietary software to convey, collect, and organize educational information (input and output). Fourth, and this is the key part, others work toward developing protocols for integrating digital information collected from different software packages or programs. This is the basis for ADL: Advanced Distributed Learning.

The ADL Initiative is a government-based program that, as per its mission:

“bridges across Defense and other Federal agencies, as well as coalition partners and industry and academia, to encourage collaboration, facilitate interoperability, and promote best practices for using distributed learning to provide the highest-quality education, training, informal learning, and just-in-time support, tailored to individual needs and delivered cost-effectively, anytime and anywhere” (http://www.adlnet.gov/about).

As an original program of the U.S. Department of Defense, the initiative was created from early-1990s Congressional funding for electronic classrooms and learning networks. After a few years of work, the Quadrennial Defense Review recommended the creation of a centralized strategy, which ultimately became the original ADL Initiative. The initiative now has three main activities: thought leadership, R&D innovation, and outreach and transition.

All of this sounds vaguely familiar, yes? The government mounts a massive program to streamline defense and national security operations? Sounds a lot like the creation of ARPANET in the 1960s, which, of course, led to the creation of the World Wide Web and the explosion of commercialization and private use on a widespread basis. During that entire time, interested parties in government, academia, and industry collaborated to create operational protocols. Move forward in time . . . The Defense Department created a related program for education and training for its personnel—witness the birth of and need for the ADL Initiative.

Next step: the creation of ADL Initiative SCORM protocols and the rise of Experience API.

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Principles

O.K.! With this blog, we’re finishing our description of the Kirkpatrick Model by detailing its Principles. Before that part, however, we really need to recap the previous blogs in this series. Why? It’s so easy to forget or simply get trapped by details. In short, we need to be able to see the forest for the trees (with the Kirkpatrick Business Partnership Model [KBPM] being the forest). So, quickly . . .

(diagram: the Kirkpatrick Business Partnership Model)

The KBPM obviously has many similarities with the four levels, though the order seems to have been reversed. Why is that? Let's look at the first Kirkpatrick principle.

 

KIRKPATRICK PRINCIPLES

Let's remember that the Kirkpatrick Partners argue that the chain model is the best way to appreciate the interrelated nature of assessing training programs. And, of course, the reason for the training program is a business need that has been identified.

  1. The end is the beginning. This principle reminds us that any training program—really any business decision—should be directly linked to a business need that was established at the onset. Inventor Don Kirkpatrick realized that assessing a training program necessitates understanding the organizational framework. This conditions data collection, surveying learning, and monitoring subsequent work behavior, in other words a chain of understanding and evidence. Administrators will be forced to rely on anecdotal comments and impressions if they don't keep the end (the business need) in mind.
  2. Return on Expectations (ROE) is the ultimate indicator of value. In short, administrators need to understand that the money spent on training and assessment should translate into a positive organizational net gain. This part is quantitative, but it's not necessarily simple. Program managers need to be able to envision what "success" would look like to them. In so doing, those designing training will be understanding business desires/needs while helping administrators and managers refine their business goals and expectations.
  3. Business partnership is essential to bring about positive ROE. When the Kirkpatrick Partners speak of a business partnership, they are redirecting training away from its traditional focus on course content and employee knowledge. Course content is extremely important; however, it's not an end in itself; the end is the ROE. Bringing about a positive ROE will be impossible if employees fail to apply their learning, especially if it is forgotten after a period. That's why the business partnership is key. The partnership is amongst employees, managers, and the administrator. Managers must be able to coach and encourage employees, and the administrator and managers must be able to create and offer incentives for success. This is one of the reasons why it's important to be able to visualize success during the phase of training design.
  4. Value must be created before it can be demonstrated. In the aforementioned Kirkpatrick "A Fresh Look," they call upon an industry study that identifies the sources of training failure. The largest area of failure by far was the application of the training in the work environment (70%). Principle 4 is a direct correlate of Principle 3. What do Principles 3 and 4 mean when taken together? Simply that training professionals need to radically adjust their understanding of their role. Instead of solely being the traditional, knowledgeable, empathic instructor, they need to guide organizations (administrators, managers, and employees) in a plan that includes operational execution and oversight.
  5. A compelling chain of evidence demonstrates your bottom line value. This principle brings us back to the beginning: being able to demonstrate the ROE for the specified business need. The sequential nature of the levels and principles is based on the requirement to document value through the associated causation of the training and its follow-up. With this principle, the results are related to the business need, and organizations can begin the process of refining goals and modifying training practices.

Next week, we'll finish up the series by comparing the KBPM with other models and placing this in the context of the modern business environment.

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Context, Critique & Conclusions

This is the last part of the Kirkpatrick Model series. Let's get started by going back to the beginning, not the beginning of the Model but the beginning of the context surrounding how a model like the Kirkpatrick Model arose in the first place.

 

CONTEXT

You might remember that in the first blog in this series we briefly discussed the background of the Kirkpatrick Model. In that subsection, we provided some details about Donald L. Kirkpatrick, the founding of his Model, and his partnership with his children. The immediate context, though, has its roots in WW II and the rise of a field termed operations research.

The roots of operations research arose during the 19th century, but the field really became operational (pun intended) during WW II. By using analysis, mathematics, and statistics, decision makers were able to optimize their choices. This applies to a range of fields such as military operations (e.g., targeting) or transportation (e.g., queuing theory). This may remind you of the sub-field of game theory, which gained academic legitimacy with the 1944 publication of the book Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern (think John Nash and the movie A Beautiful Mind). It's not a far stretch to imagine that all of these concepts would be useful to executives in a range of fields; witness the birth of management sciences and decision sciences.

Donald W. Miller, a former professor of mine, taught a class on operations research at the Columbia University Business School. Like others, he was involved in the war effort using various quantitative techniques to maximize efficiency. His 1960 book Executive Decisions and Operations Research formalized the already existing trend of management science as a sub-field of operations research. Of course, the Kirkpatrick Model arose from this much larger intellectual and professional context.

 

CRITIQUE

Detailing all of the critiques of the Kirkpatrick Model is far beyond the scope of this blog. In fact, such work might constitute a lengthy article or a book in itself. The fact that the Kirkpatrick Model is a point of reference for training professionals is illustrative. That Kirkpatrick did not get the "original" model "true" or "complete" is not surprising; few ideas emerge fully developed and nuanced in their original form. As colleagues developed competing models, Kirkpatrick continued to refine his original thoughts and, naturally, expanded his articulation of his model. A major area of critique, however, was the issue of scientific accuracy.

Operations research (or management science) attempts to understand why and how a decision maker's choices (inputs) impact a given outcome (outputs). Doing so requires an analysis of the proverbial "black box." Some critics argue that the Kirkpatrick Model was flawed; in this regard, they argue that Kirkpatrick's model (his black box) was inaccurate. When confronted with such comments, Kirkpatrick referred his critics back to his original work, which stipulates that the four levels were not a model (a black box) but simply a framework to guide decisions.

Another critique of the Kirkpatrick "Model" deals with Return on Investment (ROI). ROI focuses on an extrapolated analysis, which compares resources expended for a business goal versus the realized value associated with its output. Was it worth it? While one assumes such a calculation was in Kirkpatrick's mind when crafting his "framework," it was formally added as an element of the "true" and "complete" New World Kirkpatrick Model as Return on Expectations (ROE).

Again, for the best articulation of the Kirkpatrick framework and principles, go to their web site, www.kirkpatrickpartners.com, and/or look at the following white papers:

“The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959-2009,” Jim Kirkpatrick and Wendy Kirkpatrick, April 2009;

“An Introduction to the New World Kirkpatrick Model,” Jim Kirkpatrick and Wendy Kirkpatrick, March 2015.

 

CONCLUSIONS

This six-blog series covered the Kirkpatrick Model and the New World Kirkpatrick Model. Despite critiques too numerous to detail, it remains the standard point of reference in any discussion regarding training analysis. Upon your review and reflection, it may overlook issues germane to your field. This, really, should not be a problem. The "model" or framework is not intended to be a definitive guide to the extent of serving as an oracle; rather, it lends analytical tools to assist administrators and executives in their business decisions.

 

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Level 4

We are beginning to close up our discussion on the Kirkpatrick Model with a description of its last level, Level 4, which deals with the issue of Results.

LEVEL 4: RESULTS

Results . . . in short, yes, this is what administrators ultimately look at in the context of their training plans. Be it a commercial business or a not-for-profit, mindful organizations are extremely careful with costs that some might view as discretionary, such as training. For training to be of value, it ultimately needs to be translated into Results.

The Kirkpatrick Model characterizes Results as: “the degree to which targeted outcomes occur as a result of the training and support and accountability package.”

The NWKM adds another dimension to Level 4: Leading Indicators. This addition focuses on "short-term observations and measurements suggesting that critical behaviors are on track to create a positive impact on desired results" (www.kirkpatrickpartners.com).

The last level should be analyzed and structured even before Level 1 and even before the training begins. Why? First, administrators find it difficult to determine useful metrics for measuring employee behavior. Attempting to create metrics after the completion of the training is problematic, because doing so can lead to accepting poor measures of outcomes or just accepting a “general sense” of the outcome without looking at how the training actually impacted the bottom line. Let’s look at an example of possible consequences:

Imagine that training costs $10,000; imagine that the training only increased production value by $1,000 per year; and imagine that there is a complete turnover of employees every eight years. Eight years of $1,000 gains recoup only $8,000 of the $10,000 cost, so such measurements confirm that the training decreased overall organizational income by at least $2,000.

Second, administrators need to be able to discuss with employees not only the training itself but also the means by which they plan on measuring its value. The web site www.educationaltechnology.net confirms that consulting with employees makes the collection of data for the metric easier; problems identified with the collection process can be fed back into the training program to modify future assessments.

The same site also suggests that training value can be determined by introducing a "control group," as one might in a formal scientific experiment. Creating a control group might seem discriminatory toward those not included in the training. However, such need not be the case, depending upon the structure and timing of the training. For example, if the training takes place over rotating, consecutive phases that last, say, over six months, then it would be possible to assess the performance metrics of the first group versus those of the last group still awaiting training.

As noted at the onset, the Kirkpatrick Model changed its conceptualization from a hierarchical pyramid toward links in a chain. The notion of a chain connotes an interconnected process, but the Kirkpatrick Partners also use the notion of chain to develop the means of determining Results: a Chain of Evidence.

KIRKPATRICK PRINCIPLES

Those at Kirkpatrick Partners argue that the chain model needs to be followed while being mindful of five different principles:

  1. The end is the beginning
  2. Return on Expectations (ROE) is the ultimate indicator of value
  3. Business partnership is essential to bring about positive ROE
  4. Value must be created before it can be demonstrated, and
  5. A compelling chain of evidence demonstrates your bottom line value.

Ultimately, these principles have led to what the Kirkpatrick Partners term the "true" or "complete" model, the Kirkpatrick Business Partnership Model, as depicted in "The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959-2009," Jim Kirkpatrick and Wendy Kirkpatrick.

Next week, we’ll describe the Kirkpatrick Principles in detail and, in a final blog, discuss the critiques of the Kirkpatrick Model while placing it in the context of other models.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Model: Level 3

For many administrators and managers “in the trenches,” the notion of appreciating post-training behavior is a novel concept. They are consumed with responsibilities and tasks in the workplace; some may even believe that extra work was created by the detour from work to attend the training.

LEVEL 3: BEHAVIOR

This level is fairly straightforward, but it is a key link in the original Kirkpatrick Model. Again, to state the obvious, trainings have extremely limited value if their intended purposes are not somehow realized in the workplace. The Kirkpatrick Model utilizes a single element for the third level and adds an additional one in the NWKM: Required Drivers (www.kirkpatrickpartners.com).

A. Behavior

  1. Behavior is defined as "the degree to which participants apply what they have learned during the training when they are back at the job."

B. Required Drivers

  1. Similarly, required drivers are “processes and systems that reinforce, encourage, and reward performance of critical behaviors on the job.”

The web site www.educationaltechnology.net confirms that determining the level of staff application of key principles, mindsets, and skill sets is quite challenging at the onset. They argue that assessment of Level 3 Behavior should take place between three to six months after the training. Much of this assessment includes informal observations; however, discerning whether or not the training has truly taken root is determined through staff counseling and interviews. Using “tests” can be problematic, for as discussed, the ability to “know” information is very different from being able to “apply” training information in real-life job situations.

Required Drivers, the second element of Level 3, are truly significant. Without administrative processes and systems affirming the training, many employees—perhaps most—will simply forget about the training and leave the materials under an ever-increasing pile of training materials, never to be looked at again. So what does it mean to implement Required Drivers?

Required Drivers call for administrators and managers to become actively involved in the process of implementing the training in the workplace. The NWKM identifies three ways this can be accomplished: reinforcement, encouragement, and rewards. Functionally speaking, what does this mean in the workplace?

Reinforcing training material requires administrators and managers to serve as a "coach." Playing the role of the coach is essential; here, instead of being a judge, coaches provide reminders and refreshers of training material as situations arise on the job.

Encouragement requires management to be sympathetic to their employees. Such encouragement is founded on the management insight that knowledge levels and positive dispositions are not the only factors when organizations seek to implement new practices and work models. In short, employees may attempt to cast off old practices in exchange for new ones, but old habits can be hard to break. Equally, the skill of recognizing when to apply the training is frequently developed through trial and error until a given employee reaches a sufficient level of skill.

Rewards make things easier for employees. While having an affirming manager/coach is essential, rewards offer the external incentives that can further motivate staff during periods when the training model has not been fully implemented.

In our wrap up of the Kirkpatrick Model, we’ll look at Level 4: Results. With this level, we’ll include a discussion of the Kirkpatrick Principles, which govern and provide direction for the different links in the model.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Model: Level 2

THE STRUCTURE OF DIFFERENT MODELS

As a prelude to discussing Level 2, let's take a quick step back to look at the structure of different Instructional Design Models (IDM). Any reader will find numerous different models; common among all of them, though, is the differentiation of the evaluation process into different components or levels. Some of the models have a minimal number of components, whereas others have seven or so. What's going on here? It's not difficult to understand when comparing different models. The creators of a given IDM might extrapolate a single component into two or more. The number of components bears notice, as it signifies the structure of the learning process. In other words, the structure conveys a cognitive schematic for appreciating educational design and evaluation, not simply an expanded PowerPoint that assists students in their learning.

The original Kirkpatrick Model utilized four components, or as they put it, levels. The fact that they used the concept of “levels” to depict the evaluation process is significant, as the original schematic depicted the evaluation process as different levels of a pyramid with the final level on the top. Such a schematic is hierarchical in its structure. The New World Kirkpatrick Model (NWKM) still utilizes four levels, however, instead of a pyramid, the NWKM utilizes links in a chain to depict the cognitive schematic. The notion of a “chain” signifies a connected process, with each link, or level, having an impact on the next. One might surmise that the NWKM still terms each component as a “level” to create continuity with the original model.

LEVEL 2: LEARNING

With our review of Kirkpatrick Level 2, let's begin by inspecting how the "links," that is, Level 1 and Level 2, are connected. To recall, Level 1 deals with the concept of Reaction, with the intent of evaluating customer satisfaction. The NWKM identified three dimensions of Level 1: the degrees to which the training was favorable, engaging, and relevant. Level 2 deals with the concept of Learning. Clearly, and this is common sense, the ability—or even desire—to learn is predicated on and "linked" with a student's initial reaction to the training. In short, if a student's reaction to the training is negative, there is no incentive for active listening, participation, and overall motivation for content retention.

There are three dimensions in the original Kirkpatrick Model and two additional ones in the NWKM: knowledge, skill, attitude, confidence, and commitment (www.kirkpatrickpartners.com).

A. Knowledge

  • Knowledge is the foundation of cognitive learning; this part is hierarchical and based on educational content. Knowledge is content based regardless of whether the information is conveyed textually, pictorially, or through a hands-on demonstration. This dimension is familiar to most: "Mom, I received a 91% on the test!" The depth of knowledge is another element, which begins to blend into the matter of skill.

B. Skill

  • A student may know how to perform a task; however, that is quite different from having the skill to perform the task. For example, knowing how to solder and wire a circuit board is very different from doing it oneself; similarly, knowing all of the rules of soccer is different from having the skill to be a referee. In the former case, skill is a tangible manipulation, whereas in the latter case, skill is a matter of cognitive interpretation. In both cases, the learning process includes skill, the operationalization of knowledge.

C. Attitude

  • A learner's "attitude" toward training is predicated upon her/his value judgment about the utility of the new practice and/or process. The trainer and learner may agree that the course material is "relevant" to the work of the learner (Level 1). However, the learner may disagree about its value. First, he/she may find the knowledge to be incomplete or simply incorrect. Second, he/she may find the skill required is too complicated or too simplistic. A positive attitude toward the training requires an appreciation of the knowledge and the required associated skills.

D. Confidence

  • Confidence is linked to attitude. If students are positively disposed toward the knowledge and skill, then they will need confidence to perform the task. This is an essential, perhaps pivotal, dimension of the training. Everything can be completely in place, including our next element, commitment, but if students lack confidence, they will falter in their ability to put the training into practice. To facilitate confidence, course administrators and trainers need to be conscious that the prior elements in learning—knowledge and skill—are clearly and completely detailed and depicted. Given that trainers are the functional experts compared to their students, they must be certain not to skip over elements they take for granted.

E. Commitment

  • This NWKM dimension is a sibling of confidence. Since learning is a process, commitment is essential, because few of us "get it right" in the beginning stages of trying out a new way of doing things. The student must remain committed in the face of failure and not fall back on the old way of doing things, thinking "well, at least that worked."

The NWKM envisions learning (Level 2) as a longer process, separate from the traditional didactic of knowing facts. In this context, learning is far more dynamic and dependent upon the trainer to create a vision for the material and to empower the students to take educational ownership.

Next week we’ll look at Level 3: Behavior.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Levels: Background & Context

When discussing the Flipped Classroom last year, we ended our four-part discussion with an appreciation of how to measure its effectiveness. Several ways of evaluating trainings were identified, including informal feedback from students and formal assessments of content mastery, among others. Our discussion was intended to offer different ways to evaluate trainings from various perspectives.

As you probably guessed, a well-developed field exists to evaluate courses, educational techniques, and training approaches: Instructional Design Models (IDM). There is a range of IDMs, but the best-known model is the Kirkpatrick Model.

BACKGROUND

Donald L. Kirkpatrick developed the Kirkpatrick Model, which was based on his 1954 dissertation and later serialized in the US Training and Development Journal, the organ of the American Society for Training and Development (ASTD). In 1994, he and his son, James D. Kirkpatrick, published Evaluating Training Programs, which provided a complete and formal foundation for his original ideas. Kirkpatrick, with the assistance of his son and daughter, Wendy K. Kirkpatrick, founded the business enterprise Kirkpatrick Partners to offer consulting, products, and various events and training based on the "one and only Kirkpatrick" model. Donald passed away in 2014, but his children continue promoting the Kirkpatrick model, and James Kirkpatrick, who also has a doctorate, created the New World Kirkpatrick Model, which adds additional facets to each of the four levels for evaluating trainings.

MODEL STRUCTURE

The following four elements constitute the basis for the Kirkpatrick Model: Reaction, Learning, Behavior, and Results. Kirkpatrick originally used a pyramid schematic to visualize his concepts, but the Kirkpatrick Partners on their website currently use the image of interconnected links of a chain. Each level progressively leads to the next and is best understood as the straightforward definition of the level’s name.

LEVEL 1: REACTION

The basic understanding for level one is simple: how did training participants react to the training? This is a simple method for appreciating customer satisfaction. Or as described on its website: “the degree to which the participants find the training favorable, engaging, and relevant to their jobs.” The New World Kirkpatrick Model (NWKM) added the latter two elements. They describe engagement as the following: “The degree to which participants are involved in and contributed to the learning experience.” Similarly, relevance is described as the following: “The degree to which training participants will have the opportunity to use or apply what they learned in the training on the job.”

A. Reaction Sheets/Smile Sheets

The basic means of determining reaction is the use of “smile sheets,” otherwise known as a survey of participants’ reactions. Such a survey, generally speaking, is handed out and completed just after the training using paper and pencil or on-line.

B. Survey Questions

Most surveys query participants about a number of the training facets. What is the facility like? Was the facility located in a convenient place? Did the training start on time? Were the training goals clearly outlined? Were the training materials helpful? Was the facilitator knowledgeable? Did you like the facilitator’s style? Were there a sufficient number of breaks during the training? Was the content relevant to your work?

Training participants are prompted to answer each question in either a binary yes/no fashion or rate the response along a scale (Likert Scale). The distinctive element of these questions is that the focus is on the training and the trainer.

C. NWKM Shift of Focus

Jim Kirkpatrick realized that there was something missing in the traditional Kirkpatrick Model. He realized the surveys were centered on the training providers and their environment: their facility, their course, and their trainer.

The NWKM remodeled the smile sheets to be “learner-centered.” As noted in their training material, instead of the training-centered category “The program objectives were clearly defined,” the learning-centered category is “I understood the learning objectives.”

Jim Kirkpatrick believes, most importantly, that this level needs to be tied to the last two levels, behavior and results. That is, the NWKM is structured and functions to reinforce or stimulate positive on-the-job practices, which in turn directly impact organizational goals. He notes that since "smile sheets" generally are training-centered, they create a perception for training participants that stops at the relationship between the participant and her/his job. Rather, the training needs to be learner-centered in a manner that links the participant to organizational goals through her/his job.

Next week we’ll look at Level 2: Learning.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Sources:

http://educationaltechnology.net/

http://www.kirkpatrickpartners.com/