
Trends and Upcoming eLearning Conferences

When looking toward the future and emerging trends, conferences are a great place to investigate. These days most academic and higher education conferences have sessions that touch upon eLearning. For example, a big topic over the years at the American Historical Association and the American Studies Association has been the “Digital Humanities.” This topic is just what it sounds like: how to utilize various computer and web-based technologies in the teaching and promotion of the humanities in academic and public settings. You guessed it—the material covered in these sessions generally is not “cutting edge” and simply conveys the application of mainstream ideas and technology for use in the “trenches.”

The conferences that really focus on new technological trends are those specifically geared toward the professionals tasked with the job of setting up eLearning at their respective institutions, be it within academia, government, or the private sector. While there are general, broad-based conferences in the field of eLearning, there are also more specialized conferences in sub-fields, for example, professional training or for Chief Information Officers.

Let’s list some of the conferences remaining during 2017 and in early 2018, including one we just missed: October 30–November 1: mLearn 2017: 16th World Conference on Mobile and Contextual Learning (Larnaca, Cyprus).

November 2017

  • November 16–18: 10th Annual International Conference on Education, Research & Innovation: 10 Years Building the Future of Learning (Seville, Spain)

December 2017

  • December 6–8: OEB Global 2017: Learning Uncertainty (Berlin, Germany)

January 2018

  • January 24–26: Association for Talent Development TechKnowledge Conference (San Jose, CA)
  • January 31–February 2: Human Capital Management Excellence Conference (Palm Beach Gardens, FL)

February 2018

  • February 11–14: The Instructional Technology Council eLearning Conference (Tucson, AZ); Keynote Speaker: John Landis, Apple Learning

March 2018

  • March 2–3: 11th International Conference on eLearning & Innovative Pedagogies: Digital Pedagogies for Social Justice (New York, NY)
  • March 5–7: 12th Annual International Education, Technology, & Development Conference: Rethinking Learning in a Connected Age (Valencia, Spain)

The following conferences are sponsored by the eLearning Guild, an eLearning organization for information, networking, and community:

March 27–29, 2018: Learning Solutions 2018 Conference & Expo (Orlando, FL)

June 26–28: 2018 Realities360 Conference (San Jose, CA)

October 24–26: DevLearn 2018 Conference & Expo (Las Vegas, NV)

The eLearning Guild conferences are major events, but each one is directed toward a different target population. The Learning Solutions conference focuses on developing knowledge and skill sets for addressing “real life” problems for individuals working in the “trenches.” There are sessions on all of the basic arenas of eLearning, e.g., games/gamification, instructional design, mobile learning, etc. An important part of these sessions is to present “best practices” in various sub-fields.

Per the eLearning Guild, the Realities360 conference focuses on “opportunities presented by virtual reality, augmented reality, and other alternate reality technologies.” This conference is “hands on,” and its Technology Showcase offers participants time to work with the new technologies and engage others as to how they might fit into their own learning needs. An interesting session during the 2017 conference was titled “Wayfinding, Storytelling, and Structuring Interaction in VR.”

Many consider the DevLearn conference to be one of the major events in eLearning each year. DevLearn offers a window into the “cutting-edge” technologies in a wide range of sub-fields and prides itself on showcasing an array of “thought leaders” in the field. Looking back at its 2017 Keynote Speakers:

  • Amy Webb, “Sci-Fi Meets Reality: The Future, Today”
  • LeVar Burton (Actor/Director), “Technology and Storytelling: Making a Difference in a Digital Age”
  • Jane McGonigal, “How to Think Like a Futurist”
  • Glen Keane (Disney Animator/Legend), “Embracing Technology-Based Creativity”

DevLearn has sessions focusing on emerging technology, innovation, and management, among others. It also touched upon the following familiar subject: “Going Beyond SCORM: Using xAPI and WordPress as an LMS.” As you might imagine, a big draw for any of the conferences by the eLearning Guild or any other entity is the vendor showroom, which displays all of the latest strategies and technologies.

Craig Lee Keller, Ph.D., Learning Strategist

Experience API (Part 5)

Tin Can API/Experience API Concept

So, issues of nomenclature notwithstanding, what were some of the key elements Rustici introduced with its Tin Can API? (For a copy of a slightly modified version of the Rustici deliverable to ADL [Tin Can API], copy and paste this link into your web browser: https://www.adlnet.gov/public/uploads/Experience-API-Release-v0.95.pdf )

Rustici addressed the reality and the administrative needs that exist in our increasingly complex, disaggregated, and de-centralized technological world. As noted, yes, there are so many different types of technologies, and, yes, there are so many different types of platforms, and, yes, there are so many different sources of information. Moreover, not all of these learning experiences take place online. So how do we capture the range of these “experiences” for the modern learner . . .

The key concept and innovation for Tin Can API and Experience API is as follows.

Whenever a learning moment has to be recorded, documentation of this experience is sent to a Learning Record Store (LRS) as a simple statement of the form “actor, verb, object.”

The basic notion is “I did this.” This format permits administrators to track when learners begin educational courses/modules, review a given page, answer a question, and/or finish (or fail) a given course of study. While the information might have originated within a proprietary Learning Management System (LMS), the data ultimately is routed to an independent LRS, which then, in theory, could be accessed by other parties and software applications.
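To make “I did this” concrete, here is a minimal sketch of what such a statement can look like and how it might be posted to an LRS. The actor/verb/object structure, the X-Experience-API-Version header, and the /statements endpoint follow the xAPI specification; the LRS URL, credentials, learner, and course identifiers are hypothetical placeholders.

```typescript
// A minimal xAPI-style statement: actor ("I"), verb ("did"), object ("this").
// The LRS URL, credentials, and activity id below are hypothetical placeholders.
const statement = {
  actor: { mbox: "mailto:peter@example.com", name: "Peter" },
  verb: { id: "http://adlnet.gov/expapi/verbs/completed", display: { "en-US": "completed" } },
  object: {
    id: "http://example.com/courses/sommelier-intermediate/module-1",
    definition: { name: { "en-US": "Intermediate Sommelier Course, Module 1" } },
  },
};

// Send the statement to a Learning Record Store over HTTP.
async function recordStatement(): Promise<void> {
  const response = await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3", // version header required by the xAPI spec
      Authorization: "Basic " + Buffer.from("user:secret").toString("base64"),
    },
    body: JSON.stringify(statement),
  });
  console.log("LRS responded with status", response.status);
}

recordStatement();
```

In practice the LMS, mobile app, or simulation generates and sends statements like this automatically; the point is simply that the record ends up in the LRS in a uniform, tool-independent form.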

Since an LRS is the ultimate destination for learning data, individuals can learn off-line and simply upload their learning data once given an Internet connection. Now this does not mean that a learner could be reading a hardback book and an article on a PDF reader and magically that information is transmitted to the LRS. Rather, the learning still needs to take place through a digital format that tracks steps taken by the learner. (The reading of a hardback book, in fact, could be added to the LRS, but this would simply need to be documented and inputted by an administrator.)

Let’s look at some examples:

  1. Peter began the intermediate course for sommeliers
  2. Peter read module 1
  3. Peter scored 50% on module 1 questions
    1. Peter scored 100% on module 1 questions about white wine
    2. Peter scored 0% on module 1 questions about red wine
  4. Peter read a refresher on red wine for module 1
  5. Peter scored 95% on module 1 questions

     .  .  .  .  .  .

     27. Peter achieved competency in Burgundy style wines

This information could have originated from a cell phone, a tablet app, a desktop computer at home, or a school-based workstation. Imagine Peter began the class as an outside student at the U.S. Department of Agriculture and then received a job at the U.S. Food and Drug Administration. While at FDA, he continued his sommelier studies, though using a different LMS. His old learning records are still accessible even though the FDA is using a new LMS, since the records are stored in an LRS that is universally accessible using protocols developed by Tin Can API/Experience API.
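Because the statements live in the LRS rather than in either LMS, any authorized system can pull Peter’s full history with a standard query. The sketch below reuses the hypothetical LRS endpoint and credentials from the earlier example; the agent and since filters on the GET /statements call come from the xAPI specification.

```typescript
// Retrieve all statements about a given learner from the LRS, regardless of
// which LMS (USDA's or FDA's) originally reported them.
// The LRS URL and credentials are hypothetical placeholders.
async function fetchLearnerHistory(email: string): Promise<void> {
  const params = new URLSearchParams({
    // "agent" filters statements by actor; it must be a JSON-encoded agent object.
    agent: JSON.stringify({ mbox: `mailto:${email}` }),
    // "since" limits results to statements stored after this timestamp.
    since: "2017-01-01T00:00:00Z",
  });

  const response = await fetch(`https://lrs.example.com/xapi/statements?${params}`, {
    headers: {
      "X-Experience-API-Version": "1.0.3",
      Authorization: "Basic " + Buffer.from("user:secret").toString("base64"),
    },
  });

  const result = await response.json();
  // The LRS returns a StatementResult object whose "statements" array holds the records.
  for (const s of result.statements) {
    console.log(s.actor.name, s.verb.display["en-US"], s.object.id);
  }
}

fetchLearnerHistory("peter@example.com");
```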

Another important feature of Experience API is that it can record learning data derived from simulations and virtual reality environments. This data, of course, is qualitatively different from other data given its dynamic nature. In this regard, too, Experience API can record data from “groups,” as distinct from individuals, that participate in a learning process. For example, ADL highlights one related element in its portfolio: Hyper-Personalized Intelligent Tutor (HPIT), which “is able to detect non-cognitive factors (e.g., determination, boredom, motivation) in a learner . . .”

(https://www.adlnet.gov/hpit).

Similarly, SAVE (Semantically Automated Assessment in Virtual Environments) “provides a framework for learning procedural skills (e.g., repairing a car, flying an airplane, or shooting/maintaining a weapon system) through simulation.”

(https://www.adlnet.gov/save)

Apart from the sleek, sexy uses of xAPI [note the devolution into an abbreviation], there are basic, fundamental uses of value regardless of whether or not an organization employs novel gaming training or the like. Welcome to the ADL/DOE Learning Registry (LR) Project. (https://www.adlnet.gov/learning-registry) There is a huge need for a tool like this—especially within the government or other large and multi-faceted organizations. Imagine an organization having a simple need, say, developing an emergency building evacuation training. Divisions on the east coast may have completely different missions and operations from divisions on the west coast; however, the character of their building evacuation plans will likely be fairly similar, discounting local elements. A training that one division develops can then be used and, perhaps, improved upon by another division. Maintaining a central LR is valuable for leveraging corporate expertise and intellect and minimizing waste in expenses and time. In fact, many corporations have developed positions specifically for this function: Chief Knowledge Curators.

(http://www.clomedia.com/2017/05/22/organizations-need-chief-knowledge-curators/)

Credentialing increasingly is becoming an important element that is facilitated through xAPI, especially in government service. Witness the birth of MIL-CRED (Military Micro-Credentials), which is designed to create “a fully vetted, fully automated, personally controlled digital resume.” This project was developed to ease the transition from military to “civilian careers and educational opportunities.”

(https://www.adlnet.gov/mil-cred)

Administrators using xAPI can generate aggregate data drawn from different groups of students over periods of time. This can be valuable in terms of fine-tuning elements of educational content and course focus. Ultimately, xAPI was built to document a relationship between training and job performance, which for administrators, managers, and supervisors is a key if not the key element in any program of workplace development.
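As a toy illustration of this kind of aggregate reporting (not a feature of any particular LRS or reporting product), the sketch below tallies “passed” and “failed” statements per month from a batch of statements already pulled from an LRS. The statement shape and ADL verb IDs follow the xAPI conventions used above; the data itself is made up.

```typescript
// Toy aggregation over xAPI-style statements already retrieved from an LRS:
// count "passed" vs. "failed" results per calendar month.
interface SimpleStatement {
  verb: { id: string };
  timestamp: string; // ISO 8601, e.g. "2018-02-11T09:30:00Z"
}

function passRateByMonth(statements: SimpleStatement[]): Map<string, { passed: number; failed: number }> {
  const byMonth = new Map<string, { passed: number; failed: number }>();
  for (const s of statements) {
    const month = s.timestamp.slice(0, 7); // "YYYY-MM"
    const bucket = byMonth.get(month) ?? { passed: 0, failed: 0 };
    if (s.verb.id.endsWith("/passed")) bucket.passed += 1;
    if (s.verb.id.endsWith("/failed")) bucket.failed += 1;
    byMonth.set(month, bucket);
  }
  return byMonth;
}

// Example: statements from two cohorts, reported through different LMSs but pooled in one LRS.
const report = passRateByMonth([
  { verb: { id: "http://adlnet.gov/expapi/verbs/passed" }, timestamp: "2018-02-11T09:30:00Z" },
  { verb: { id: "http://adlnet.gov/expapi/verbs/failed" }, timestamp: "2018-02-20T14:00:00Z" },
  { verb: { id: "http://adlnet.gov/expapi/verbs/passed" }, timestamp: "2018-03-05T10:15:00Z" },
]);
console.log(report);
```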

Next step: Actually, the next and final step is to look at the future of Experience API (xAPI) and the current collaborations and research initiatives of the ADL.

Experience API (Part 4)

O.K., last week we finished up with SCORM, which paved the way for a discussion about Tin Can API and, yes, Experience API.  Let’s get right into it . . .

SCORM had peaked in its level of development and value, and the ADL (Advanced Distributed Learning) Initiative decided a newer version of SCORM would not meet its continuing needs. As such, in 2011, ADL issued a contract to investigate, research, and basically re-think SCORM in order to advance its mission and goals. The Nashville-based business Rustici Software won this contract, and the firm initiated its work by starting a conversation, a conversation that became Project Tin Can.

Project Tin Can

Rustici termed the research phase of the contract Project Tin Can. They embraced the image and notion of tin-can communication to convey the two-way communication between Rustici and the eLearning community.

Per Rustici, this process included seeking information through five different avenues:

  • Input from hundreds of xAPI stakeholders;
  • Interviews with key industry leaders;
  • LETSI SCORM 2.0 White Papers (this was, in many ways, a precursor of Project Tin Can; for an archive of these papers, see the Rustici site: https://scorm.com/tincanoverview/the-letsi-scorm-2-0-white-papers/);
  • Interactions with then-current Rustici customers; and,
  • The ADL contract specifications.

A Rose By Any Other Name . . . Tin Can API/Experience API/xAPI

The Project Tin Can research produced the Tin Can API, which was a qualitative successor to SCORM and an earlier version of the continually evolving Experience API. xAPI, then, is simply an abbreviation for Experience [eXperience] API, neither a successor to nor a different version of Experience API.

It really seems confusion arose, and still arises, from the period when Tin Can and Experience API were virtually synonymous. This was the period of, and the immediate years following, Rustici’s submission of its deliverables to the ADL. At that time, perhaps understandably, Rustici stated:

ADL will be transferring ownership of the spec to a public standards body after v1.0 is complete this spring. After that transfer, we don’t expect the official government name “Experience API” to last much longer [emphasis added].

(https://experienceapi.com/we-call-it-tin-can/)

They had branded their process and deliverable with the “Tin Can” name, and their work was widely known by many in the industry as the Tin Can API. Yet the ADL used the name Experience API in its contract specifications and in its continuing usage. Experience API is now the pervasive name, and the name “Tin Can” is only formally used in reference to Rustici’s original contract work. Indeed, Rustici later called its response to the ADL contract “Project xAPI.”

(https://experienceapi.com/overview/)

Ownership versus Web Domains

The ADL awarded the contract—the BAA [Broad Agency Announcement, which in general, is for basic and applied research and development]—to Rustici and, as such, the work derived from that contract was and is the property of the United States Government. The issue of name “ownership” publicly arose in a May 2012 Google Group discussion:

https://groups.google.com/a/adlnet.gov/forum/?hl=en&fromgroups=#!topic/tincanapi-info/q87uy3XJXX8

The concern centered on the Rustici trademark petition for the names “Tin Can” and “Project Tin Can.” No less a figure than Rustici President Mike Rustici weighed in to assure writers that the company had no proprietary claim on the use of “Tin Can” and sought trademark status only to prevent the name from being “pirated” by others who might be less community minded.

The Google Group discussion continued on the topic of whether or not Rustici would use the “Tin Can” moniker in any of its future commercial enterprises; to wit, Rustici replied that it would, but that the company would not prevent others from doing so.

Toward this end, while Tin Can, Experience API, and, for that matter, SCORM are names under government “contract,” as it were, Rustici owns the web domain www.tincanapi.com, which is redirected to another one of their web domains, www.experienceapi.com; they also own the web domain www.scorm.com. On those domains, they clearly attribute the administration, ownership, and stewardship of the respective names to the ADL, but they also offer services for companies seeking to utilize the SCORM and/or Experience API specifications. For a response by Rustici Software on this subject, please see:

https://experienceapi.com/we-call-it-tin-can/ and

https://experienceapi.com/tin-can-experience-api-xapi/

Note: To be clear, the above comments are not intended to take away from any of Rustici Software’s groundbreaking work in the field of eLearning; rather, they are included simply to clarify distinctions amongst terms and the like.

Next step: Finally, a focused discussion of the Tin Can API/Experience API innovations and their evolution.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 3)

In our last blog, we further detailed the foundation for ADL and its areas of research. One of these areas of research is the Total Learning Architecture (TLA) infrastructure, which provided the basis for interoperability between different systems. One of the results of this work was the Sharable Content Object Reference Model (SCORM). The initial edition of SCORM was released in January 2000, with a couple of SCORM iterations produced the following year. However, a new version of SCORM was introduced in January 2004, and DoD made SCORM use mandatory in 2006. In total, there have been four versions of SCORM 2004. The next generation of SCORM arose in 2010 with Project Tin Can, but we’re getting a little ahead of ourselves.

SCORM or What do you mean by a Sharable Content Object Reference Model?

To understand SCORM, let’s break it down into its constituent elements.

  • Sharable Content Object (SCO)—an object is the means of relating various pieces of data and their values. For us, this refers to an element within a learning system, for example, a question or image. Each “object” is a part of the larger educational program. The desire and demand to make objects “sharable” links back to our original quest for interoperability. In one sense, think of it as a specific lesson or module in an on-line course.
  • Reference Model—by reference, SCORM is referring to a computer term of art, that is, the means of finding specific data or a datum located on a computer hard drive or, increasingly, on a cloud-based server. In short, a reference provides the basis for discerning a physical location for information. Yes, there are all of these 0000s and 1111s out there in the digital world, so wouldn’t it be nice to be able to keep track of them? By reference model, SCORM is creating rules and protocols for references in the context of sharing that information with other Learning Management Systems (LMS).

To better understand, let’s look at how software designers create their programs. I remember writing programs in the defense industry. I already knew BASIC and easily learned FORTRAN in addition to LOCUS (an early proprietary spreadsheet program). My work was a mess, truly. LOL! I knew how to program, but insisted on writing my programs without a flowchart—breaking rule number 1. Anyway, you can imagine all of the problems I faced.

There are other rules in computer science that make it easier to write, track, and modify code. One of these approaches is object- or class-based programming. Instead of lumping all the data together, in object-based programming the programmer defines a group of fields or attributes, which then provides the basis for relating actual data values and associated operations and/or methods. This type of organization, then, provides the basis for generating a commonality that can be shared amongst different programs. That is, if a data value, its characterization, and its associated operations can be made uniform, then different programs are capable of utilizing that same digital information. SCORM is about creating the basis for doing just that.
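As a toy sketch of that idea (deliberately generic, not SCORM-specific), the example below defines one shared “question” structure; any program that agrees on this shape can create, read, or grade the same data.

```typescript
// Toy illustration of class/object-based organization: a shared structure
// ("shape") for a quiz question that different programs could agree on.
class QuizQuestion {
  constructor(
    public id: string,
    public prompt: string,
    public choices: string[],
    public correctChoice: number,
  ) {}

  // An operation bundled with the data it describes.
  isCorrect(answer: number): boolean {
    return answer === this.correctChoice;
  }
}

// Because the structure is uniform, one program can create the object...
const q1 = new QuizQuestion(
  "m1-q3",
  "Which grape is used in white Burgundy?",
  ["Chardonnay", "Pinot Noir", "Riesling"],
  0,
);

// ...and another program that knows the same structure can use it.
console.log(q1.prompt, "correct?", q1.isCorrect(0)); // true
```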

Now imagine trying to perform all of these functions while transmitting the information through the Internet in the context of a client-server relationship. Sharing information through a cycle of request and response from the client (you) to the server (the repository of data and, generally, the program) gets complicated enough. Imagine trying to force-feed your information into a different LMS. Whhhew! You get the picture. Yes, the horror, as it were. So, again, that’s the basis for creating SCORM.

SCORM Protocols

Let’s be clear. SCORM is neither a software program nor a programming language. Rather, SCORM provides standards for data and programming that make it possible to have data sets that are interchangeable amongst differing LMSs. So, software designers are extremely mindful to utilize SCORM when designing and coding their proprietary LMSs. There have been numerous limitations to SCORM. Why? It’s simple: trial and error. Software designers within and without the government have found flaws or limitations in the SCORM protocols, which, of course, gave rise to successive iterations of SCORM. The SCORM protocols are the rules utilized by different Application Programming Interfaces (APIs). An API is the part of a program that facilitates communication between different computer systems. API and software developers use SCORM to create the standard of interoperability for eLearning systems. Now, there exists a SCORM API, but that is just one of many forms of an API.
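To give a feel for what “a SCORM API” looks like in practice, here is a hedged sketch of the calls a piece of SCORM 1.2 content makes against the API object an LMS exposes in the browser. The function names (LMSInitialize, LMSGetValue, LMSSetValue, LMSCommit, LMSFinish) and the cmi.core.* data-model keys come from the SCORM 1.2 run-time specification; the tiny in-memory stand-in for the LMS is only there so the example runs on its own.

```typescript
// Sketch of the SCORM 1.2 run-time exchange between content (a SCO) and an LMS.
interface Scorm12API {
  LMSInitialize(arg: ""): string;
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSFinish(arg: ""): string;
}

// Stand-in LMS: in a real deployment the hosting LMS injects this object into the page.
const API: Scorm12API = (() => {
  const data = new Map<string, string>([["cmi.core.student_name", "Peter"]]);
  return {
    LMSInitialize: () => "true",
    LMSGetValue: (element: string) => data.get(element) ?? "",
    LMSSetValue: (element: string, value: string) => { data.set(element, value); return "true"; },
    LMSCommit: () => "true",
    LMSFinish: () => "true",
  };
})();

// What a content module does at the end of a lesson:
API.LMSInitialize("");
console.log("Learner:", API.LMSGetValue("cmi.core.student_name"));
API.LMSSetValue("cmi.core.score.raw", "95");
API.LMSSetValue("cmi.core.lesson_status", "passed");
API.LMSCommit("");
API.LMSFinish("");
```

Because every conformant LMS exposes this same object and data model, the same content package can report its results into any of them; that shared contract is the interoperability SCORM was built to provide.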

A major SCORM component was adopted with the 2004 version. Researchers with the ADL created the notion of “sequencing.” The sequencing protocol specified that learners could only experience content objects in a specified order. This can be valuable, but it also can be a limitation.

Next step: The movement away from SCORM toward Tin Can API and Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API (Part 2)

In our last blog we discussed the foundation for Experience API, Advanced Distributed Learning (ADL). Let’s reiterate the truly pressing need for ADL, because it’s so, so easy to get lost in the rapid pace of technological innovation and, moreover, system transformation.

Prior to ADL, and prior to eLearning, education that utilized computers was, in many ways, a solitary, alienating experience for both the user and the providers of educational content. How’s that? Education and training took place at a single computer station, prior to the days of networking. Looking backwards, that form of education has been termed Computer Based Training (CBT). Think of a painfully sad image of a bureaucrat in a cubicle toiling away. Let’s look to Dilbert for insight.

In CBT, administrators purchased software packages—generally expensive software packages—that could be utilized by single or multiple users based on purchased licensing privileges. Proprietary packages, unlike today, were not cloud based, but utilized compact discs (CDs) to access programs. I remember an actor on television extolling the value and permanence of CDs, pronouncing, “they can even be dropped in your goldfish bowl and nothing happens!” Information input through the software would be saved on the resident computer (or back in my day, on floppy disks ☺) and coded into a file structure that could only be accessed via the proprietary software. When software was updated to fix glitches and to add additional functions, the educational administrator generally had to purchase the next iteration of the proprietary software in order to access old data and/or use it with the new functions. Software companies might offer mechanisms for translating the files of a competitor, but frequently the results were disappointing. As stated before, all of this was the problem that ADL set out to address.

ADL and the Need for SCORM Protocols

As noted in last week’s blog, the ADL was developed by the U.S. Department of Defense in the mid-1990s to streamline its technological approach to education and training. As one might suspect, though, other agencies within the federal government simultaneously were engaged in similar projects for their own programs in education and technology. In order to avoid duplication and inevitable conflicts in integration, the array of federal ADL programs was consolidated within the DoD ADL Initiative. It would not be surprising for private industry to fall in line with this program, as a large portion of its revenue is generated from government contracting.

Based on Congressional defense authorization and President William Clinton’s Executive Order 13111, DoD created a strategic plan for ADL with the following areas of research:

  • eLearning (web-based learning)—Research technical components and techniques to develop and support electronic-based education and training . . . consistent and interoperable . . . best practices . . . learning management systems, content registries, and Massive Open Online Courses
  • Mobile learning and mobile performance support—Research focused on the use of commercially-available handheld computing devices to provide access to learning content and information systems . . .
  • Learning analytics and performance modeling—Research in collection, measurement, analysis and reporting of data, which may include “big data,” about learners and their contexts, for purposes of understanding, optimizing, and predicting learning success . . . competencies, credentialing, learner profiles, data visualization . . . associated privacy and information security concerns.
  • Learning Theory—Research focused on the application, evaluation, and embedding of efficient and effective, current, new, and emerging theories of learning, instructional technology . . .
  • Total Learning Architecture infrastructure (TLA)—Research focused on modernizing the platforms used for education and training, to enable interoperability of disparate systems so they can be used together as a Service Oriented Architecture (SOA) to securely share relevant learning data including, but not limited to, granular learning experience . . .
  • Web-based Virtual Worlds and simulations (VWs)—Research into the emerging fields of serious games, simulations, and virtual reality (within a distributed learning context) . . . [https://adlnet.gov/research]

Given the research mandates noted above, it became necessary to develop a language that facilitated the goals of accessibility, reusability, and interoperability. The Sharable Content Object Reference Model (SCORM) was the first solution.

Next step: a discussion of references in computer science and their relationship to the development of SCORM and, ultimately, Experience API.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Experience API

What is Experience API? There’s a bunch of names swirling around that sound similar—Tin Can API, Experience API, and xAPI. What are they and what do they mean in relation to eLearning? They are a set of names sequentially adopted regarding the development of software specifications (rules) that govern the communication and relationship between learning content (educational information) and learning systems in order to record and track a wide range of learning activities on a wide range of technological platforms.

To best understand Experience API, the reader should appreciate its relationship to a number of interrelated terms and concepts. Here’s a short list to give the reader a heads-up:

  • Advanced Distributed Learning (ADL)
  • API (Application Programming Interface)
  • Learning Management Systems (LMS)
  • Learning Record Store (LRS)
  • SCORM (Sharable Content Object Reference Model)

The Foundation and History of Experience API

It is a truism that with the advent of eLearning, educators increasingly shifted their focus from “hard copies,” that is, printed material, toward information stored in digital format. With the explosion of digital information platforms and the wide range of proprietary software, educators were faced with the herculean task of analyzing and integrating digital information stored on various platforms, in various divergent software programs and formats. That was and still is the challenge.

Let’s look at the different actors in this framework. First, there are individual users who access educational content and input responses from a variety of technological platforms—think smartphones, tablets, desktop computers, and online portals. Second, there are educational and training administrators who utilize, third, proprietary software to convey, collect, and organize educational information (input and output). Fourth, and this is the key part, others work toward developing protocols for integrating digital information collected from different software packages or programs. This is the basis for ADL: Advanced Distributed Learning.

The ADL Initiative is a government-based program that, as per its mission:

“bridges across Defense and other Federal agencies, as well as coalition partners and industry and academia, to encourage collaboration, facilitate interoperability, and promote best practices for using distributed learning to provide the highest-quality education, training, informal learning, and just-in-time support, tailored to individual needs and delivered cost-effectively, anytime and anywhere” (http://www.adlnet.gov/about).

As an original program of the U.S. Department of Defense, the initiative was created from early-1990s Congressional funding for electronic classrooms and learning networks. After a few years of work, the Quadrennial Defense Review recommended the creation of a centralized strategy, which ultimately became the original ADL Initiative. The initiative now has three main activities: thought leadership, R&D innovation, and outreach and transition.

All of this sounds vaguely familiar, yes? The government mounts a massive program to streamline defense and national security operations? Sounds a lot like the creation of ARPANET in the 1960s, which, of course, led to the creation of the World Wide Web and the explosion of commercialization and private use on a widespread basis. During that entire time, interested parties in government, academia, and industry collaborated to create operational protocols. Move forward in time . . . The Defense Department created a related program for education and training for its personnel—witness the birth of and need for the ADL Initiative.

Next step: the creation of ADL Initiative SCORM protocols and the rise of Experience API.

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Principles

O.K.! With this blog, we’re finishing our description of the Kirkpatrick Model by detailing its Principles. Before that, however, we really need to recap the previous blogs in this series. Why? It’s so easy to forget or simply get trapped by details. In short, we need to be able to see the forest for the trees (with the Kirkpatrick Business Partnership Model [KBPM] being the forest). So, quickly . . .


The KBPM obviously has many similarities with the levels, though the order seems to have been reversed. Why is that? Let’s look at the first Kirkpatrick principle.

 

KIRKPATRICK PRINCIPLES

Let’s remember that the Kirkpatrick Partners argue that the chain model is the best way to appreciate the interrelated nature of assessing training programs. And, of course, the reason for the training program in the first place is a business need that has been identified.

  1. The end is the beginning. This principle reminds us that any training program—really any business decision—should be directly linked to a business need that was established at the onset. Inventor Don Kirkpatrick realized that assessing a training program necessitates understanding the organizational framework. This conditions data collection, surveying learning, and monitoring subsequent work behavior, in other words, a chain of understanding and evidence. Administrators will be forced to rely on anecdotal comments and impressions if they don’t keep the end (the business need) in mind.
  2. Return on Expectations (ROE) is the ultimate indicator of value. In short, administrators need to understand that the money spent on training and assessment should translate into a positive organizational net gain. This part is quantitative, but it’s not necessarily simple. Program managers need to be able to envision what “success” would look like to them. In so doing, those designing training will be understanding business desires/needs while helping administrators and managers refine their business goals and expectations.
  3. Business partnership is essential to bring about positive ROE. When the Kirkpatrick Partners speak of a business partnership, they are redirecting the focus of training away from the traditional focus of course content and employee knowledge. Yes, course content is extremely important; however, it’s not an end in itself; the end is the ROE. Bringing about a positive ROE will be impossible if employees fail to apply their learning, especially if it is forgotten after a period. That’s why the business partnership is key. The partnership is amongst employees, managers, and the administrator. Managers must be able to coach and encourage employees, and the administrator and managers must be able to create and offer incentives for success. This is one of the reasons why it’s important to be able to visualize success during the phase of training design.
  4. Value must be created before it can be demonstrated. In the aforementioned Kirkpatrick “A Fresh Look,” they call upon an industry study that identifies the sources of training failure. The largest area of failure by far was the application of the training in the work environment (70%). Principle 4 is a direct correlate of Principle 3. What do Principles 3 and 4 mean when taken together? Simply that training professionals need to radically adjust their understanding of their role. Instead of solely being the traditional, knowledgeable, empathic instructor, they need to guide organizations (administrators, managers, and employees) in a plan that includes operational execution and oversight.
  5. A compelling chain of evidence demonstrates your bottom line value. Principle 5 brings us back to the beginning: being able to demonstrate the ROE for the specified business need. The sequential nature of the levels and principles is based on the requirement to document value through the associated causation of the training and its follow-up. With this principle, the results are related to the business need, and organizations can begin the process of refining goals and modifying training practices.

Next week, we’ll finish up the series by comparing the KBPM with other models and placing this in the context of the modern business environment.

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Context, Critique & Conclusions

This is the last part of the Kirkpatrick Model series. Let’s get started by going back to the beginning, not the beginning of the Model but the beginning of the context surrounding how a model like the Kirkpatrick Model arose in the first place.

 

CONTEXT

You might remember from the first blog in this series we briefly discussed the background of the Kirkpatrick Model. In that subsection, we provided some details about Donald J. Kirkpatrick, the founding of his Model, and his partnership with his children. The immediate context, though, has its roots in WW II and the rise of a field termed operations research.

The roots of operations research arose during the 19th century, but the field really became operational (pun intended) during WW II. By using analysis, mathematics, and statistics, decision makers were able to optimize their choices. This applies to a range of fields such as military operations (e.g., targeting) or transportation (e.g., queuing theory). This may remind you of the sub-field of game theory, which gained academic legitimacy with the 1944 publication of the book Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern (think John Nash and the movie A Beautiful Mind). It’s not a far stretch to imagine that all of these concepts would be useful to executives in a range of fields; witness the birth of management sciences and decision sciences.

Donald W. Miller, a former professor of mine, taught a class on operations research at the Columbia University Business School. Like others, he was involved in the war effort using various quantitative techniques to maximize efficiency. His 1960 book Executive Decisions and Operations Research formalized the already existing trend of management science as a sub-field of operations research. Of course, the Kirkpatrick Model arose from this much larger intellectual and professional context.

 

CRITIQUE

Detailing all of the critiques of the Kirkpatrick Model is far beyond the scope of this blog. In fact, such work might constitute a lengthy article or a book in itself. The fact that the Kirkpatrick Model is a point of reference for training professionals is illustrative. That Kirkpatrick did not get the “original” model “true” or “complete” is not surprising; few ideas emerge fully developed and nuanced in their original form. As colleagues developed competing models, Kirkpatrick continued to refine his original thoughts and, naturally, expanded his articulation of his model. A major area of critique, however, was the issue of scientific accuracy.

Operations research (or management science) attempts to understand why and how a decision maker’s choices (inputs) impact a given outcome (outputs). Doing so requires an analysis of the proverbial “black box.” Some critics argue that the Kirkpatrick Model was flawed; in this regard, they argue that Kirkpatrick’s model (his black box) was inaccurate. When confronted with such comments, Kirkpatrick referred his critics back to his original work, which stipulates that the four levels were not a model (a black box) but simply a framework to guide decisions.

Another critique of the Kirkpatrick “Model” deals with Return on Investment (ROI). ROI focuses on an extrapolated analysis, which compares resources expended for a business goal versus the realized value associated with its output. Was it worth it? While one assumes such a calculation was in Kirkpatrick’s mind when crafting his “framework,” it was formally added as an element of the “true” and “complete” New World Kirkpatrick Model as Return on Expectations (ROE).

Again, for the best articulation of the Kirkpatrick framework and principles, go to their web site, www.kirkpatrickpartners.com, and/or look at the following white papers:

“The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959-2009,” Jim Kirkpatrick and Wendy Kirkpatrick, April 2009;

“An Introduction to the New World Kirkpatrick Model,” Jim Kirkpatrick and Wendy Kirkpatrick, March 2015.

 

CONCLUSIONS

This six-blog series covered the Kirkpatrick Model and the New World Kirkpatrick Model. Despite critiques too numerous to detail, it remains the standard point of reference in any discussion regarding training analysis. Upon your review and reflection, it may overlook issues germane to your field. This, really, should not be a problem. The “model” or framework is not intended to be a definitive guide to the extent of serving as an oracle; rather, it lends analytical tools to assist administrators and executives in their business decisions.

 

Craig Lee Keller, Ph.D., Learning Strategist

The Kirkpatrick Model: Level 4

We are beginning to close out our discussion of the Kirkpatrick Model with a description of its last level, Level 4, which deals with the issue of Results.

LEVEL 4: RESULTS

Results . . . in short, yes, this is what administrators ultimately look at in the context of their training plans. Be it a commercial business or a not-for-profit, mindful organizations are extremely careful with costs that some might view as discretionary, such as training. For training to be of value, it ultimately needs to be translated into Results.

The Kirkpatrick Model characterizes Results as: “the degree to which targeted outcomes occur as a result of the training and support and accountability package.”

The NWKM adds another dimension to Level 4: Leading Indicators. This addition focuses on “short-term observations and measurements suggesting that critical behaviors are on track to create a positive impact on desired results” (www.kirkpatrickpartners.com).

The last level should be analyzed and structured even before Level 1 and even before the training begins. Why? First, administrators find it difficult to determine useful metrics for measuring employee behavior. Attempting to create metrics after the completion of the training is problematic, because doing so can lead to accepting poor measures of outcomes or just accepting a “general sense” of the outcome without looking at how the training actually impacted the bottom line. Let’s look at an example of possible consequences:

Imagine that training costs $10,000; imagine that the training only increased production value by $1,000 per year; and imagine that there is a complete turnover of employees every eight years. Over an employee’s eight-year tenure, the training returns at most 8 × $1,000 = $8,000 against the $10,000 cost, so such measurements confirm that the training decreased overall organization income by at least $2,000.

Second, administrators need to be able to discuss not only the training with employees but also the means by which they plan on measuring its value. Educational Technologies confirm that consulting with employees makes the collection of data for the metric easier; problems identified with the collection process can be fed back into the training program to modify future assessments (www.educationaltechnology.net).

Educational Technologies also suggests training value can be determined by introducing a “control group,” as one might in a formal scientific experiment. Creating a control group might seem to be discriminatory toward those not included in the training. However, such need not be the case, depending upon the structure and timing of the training. For example, if the training takes place over rotating, consecutive phases that last, say, six months, then it would be possible to compare the performance metrics of the first group against those of the last group still awaiting training.

As noted at the onset, the Kirkpatrick Model changed its conceptualization from a hierarchical pyramid toward links in a chain. The notion of a chain connotes an interconnected process, but the Kirkpatrick Partners also use the notion of chain to develop the means of determining Results: a Chain of Evidence.

KIRKPATRICK PRINCIPLES

Those at Kirkpatrick Partners argue that the chain model needs to be followed while being mindful of five different principles:

  1. The end is the beginning
  2. Return on Expectations (ROE) is the ultimate indicator of value
  3. Business partnership is essential to bring about positive ROE
  4. Value must be created before it can be demonstrated, and
  5. A compelling chain of evidence demonstrates your bottom line value.

Ultimately, these principles have led to what the Kirkpatrick Partners term as the “true” model or “complete model,” the Kirkpatrick Business Partnership Model as depicted below (“The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959-2009,” Jim Kirkpatrick and Wendy Kirkpatrick).

Next week, we’ll describe the Kirkpatrick Principles in detail and, in a final blog, discuss the critiques of the Kirkpatrick Model while placing it in the context of other models.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Model: Level 3

For many administrators and managers “in the trenches,” the notion of appreciating post-training behavior is a novel concept. They are consumed with responsibilities and tasks in the workplace; some may even believe that extra work was created by the detour from work to attend the training.

LEVEL 3: BEHAVIOR

This level is fairly straightforward, but it is a key link in the original Kirkpatrick Model. Again, to state the obvious, trainings have extremely limited value if their intended purposes are not somehow realized in the workplace. The Kirkpatrick model utilizes a single element for the third level and adds an additional one for the NWKM: Required Drivers (www.kirkpatrickpartners.com).

A. Behavior

  1. Behavior is defined as “the degree to which participants apply what they have learned during the training when they are back at the job.”

B. Required Drivers

  1. Similarly, required drivers are “processes and systems that reinforce, encourage, and reward performance of critical behaviors on the job.”

The web site www.educationaltechnology.net confirms that determining the level of staff application of key principles, mindsets, and skill sets is quite challenging at the outset. They argue that assessment of Level 3 Behavior should take place three to six months after the training. Much of this assessment includes informal observations; however, discerning whether or not the training has truly taken root is determined through staff counseling and interviews. Using “tests” can be problematic, for, as discussed, the ability to “know” information is very different from being able to “apply” training information in real-life job situations.

Required Drivers, the second element of Level 3, are truly significant. Without administrative processes and systems affirming the training, many employees—perhaps most—will simply forget about the training and leave the materials under an ever-increasing pile of training materials, never to be looked at again. So what does it mean to implement Required Drivers?

Required Drivers necessitate that administrators and managers become actively involved in the process of implementing the training in the workplace. The NWKM identifies three ways this can be accomplished: reinforcement, encouragement, and rewards. Functionally speaking, what does this mean in the workplace?

Reinforcing training material requires administrators and managers to serve as a “coach.” Playing the role of the coach is essential; here, instead of being a judge, coaches provide reminders and refreshers of training material as situations arise.

Encouragement requires management to be sympathetic to their employees. Such encouragement is founded on the management insight that knowledge levels and positive dispositions are not the only factors when organizations seek to implement new practices and work models. In short, employees may attempt to cast off old practices in exchange for new ones, but old habits can be hard to break. Equally, the skill of recognizing when to apply the training is frequently developed through trial and error until a given employee reaches a sufficient level of skill.

Rewards make things easier for employees. While having an affirming manager/coach is essential, rewards offer the external incentives that can further motivate staff during periods when the training model has not been fully implemented.

In our wrap up of the Kirkpatrick Model, we’ll look at Level 4: Results. With this level, we’ll include a discussion of the Kirkpatrick Principles, which govern and provide direction for the different links in the model.

Craig Lee Keller, Ph.D., JAG Learning Strategist