The Kirkpatrick Model: Level 4

We are beginning to wrap up our discussion of the Kirkpatrick Model with a description of its last level, Level 4, which deals with Results.

LEVEL 4: RESULTS

Results . . . in short, yes, this is what administrators ultimately look at in the context of their training plans. Whether in a commercial business or a not-for-profit, mindful organizations are extremely careful with costs that some might view as discretionary, such as training. For training to be of value, it ultimately needs to be translated into Results.

The Kirkpatrick Model characterizes Results as: “the degree to which targeted outcomes occur as a result of the training and support and accountability package.”

The NWKM adds another dimension to Level 4: Leading Indicators. This addition focuses on “short-term observations and measurements suggesting that critical behaviors are on track to create a positive impact on desired results” (www.kirkpatrickpartners.com).

The last level should be analyzed and structured before Level 1, and even before the training begins. Why? First, administrators find it difficult to determine useful metrics for measuring employee behavior. Attempting to create metrics after the completion of the training is problematic, because doing so can lead to accepting poor measures of outcomes, or to settling for a “general sense” of the outcome without looking at how the training actually impacted the bottom line. Let’s look at an example of possible consequences:

Imagine that training costs $10,000; imagine that the training only increased production value by $1,000 per year; and imagine that there is a complete turnover of employees every eight years. Over those eight years the training returns at most $8,000 in added value against its $10,000 cost, which means it decreased overall organizational income by at least $2,000.
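
To make the arithmetic explicit, here is a minimal sketch of that cost-benefit calculation in Python; the dollar figures are simply the hypothetical numbers from the example above.

    # Hypothetical figures from the example above
    training_cost = 10_000     # one-time cost of the training, in dollars
    annual_benefit = 1_000     # added production value per year
    retention_years = 8        # complete employee turnover every eight years

    # Value the training returns before the trained employees leave
    total_benefit = annual_benefit * retention_years   # $8,000
    net_result = total_benefit - training_cost          # -$2,000

    print(f"Net impact of training: ${net_result:,}")

A negative net result like this is exactly what defining Level 4 metrics before the training is designed to catch early.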

Second, administrators need to be able to discuss with employees not only the training but also the means by which they plan to measure its value. Educational Technology notes that consulting with employees makes collecting data for the metric easier; problems identified in the collection process can be fed back into the training program to modify future assessments (www.educationaltechnology.net).

Educational Technology also suggests that training value can be determined by introducing a “control group,” as one might in a formal scientific experiment. Creating a control group might seem discriminatory toward those not included in the training. However, it need not be, depending upon the structure and timing of the training. For example, if the training takes place over rotating, consecutive phases lasting, say, six months, then it is possible to compare the performance metrics of the first group against those of the last group still awaiting training.
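
As a rough illustration of that phased approach, the sketch below compares a single performance metric for the first cohort (already trained) against the last cohort (still awaiting training). The cohorts, the metric, and the numbers are invented for illustration; a real evaluation would also check that the two groups are otherwise comparable.

    # Hypothetical performance metric (e.g., units processed per week) for two cohorts
    # in a rotating, phased rollout: cohort A was trained first, cohort D is still waiting.
    trained_cohort = [52, 48, 55, 50, 53]      # first group, post-training
    untrained_cohort = [44, 47, 43, 46, 45]    # last group, acting as the control

    def mean(values):
        return sum(values) / len(values)

    difference = mean(trained_cohort) - mean(untrained_cohort)
    print(f"Average lift attributable to training: {difference:.1f} units/week")

Even this simple difference of means gives administrators a concrete number to weigh against the cost of the training.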

As noted at the outset, the Kirkpatrick Model changed its conceptualization from a hierarchical pyramid to links in a chain. The notion of a chain connotes an interconnected process, but the Kirkpatrick Partners also use it to develop the means of determining Results: a Chain of Evidence.

KIRKPATRICK PRINCIPLES

Those at Kirkpatrick Partners argue that the chain model needs to be followed while being mindful of five different principles:

  1. The end is the beginning
  2. Return on Expectations (ROE) is the ultimate indicator of value
  3. Business partnership is essential to bring about positive ROE
  4. Value must be created before it can be demonstrated, and
  5. A compelling chain of evidence demonstrates your bottom line value.

Ultimately, these principles have led to what the Kirkpatrick Partners term the “true” or “complete” model, the Kirkpatrick Business Partnership Model, as depicted below (“The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959-2009,” Jim Kirkpatrick and Wendy Kirkpatrick).

Next week, we’ll describe the Kirkpatrick Principles in detail and, in a final blog, discuss the critiques of the Kirkpatrick Model while placing it in the context of other models.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Model: Level 3

For many administrators and managers “in the trenches,” the notion of appreciating post-training behavior is a novel concept. They are consumed with responsibilities and tasks in the workplace; some may even believe that extra work was created by the detour from work to attend the training.

LEVEL 3: BEHAVIOR

This level is fairly straightforward, but it is a key link in the original Kirkpatrick Model. Again, to state the obvious, trainings have extremely limited value if their intended purposes are not somehow realized in the workplace. The original Kirkpatrick Model utilizes a single element for the third level; the NWKM adds an additional one, Required Drivers (www.kirkpatrickpartners.com).

A. Behavior

  1. Behavior is defined as “the degree to which participants apply what they have learned during the training when they are back at the job.”

B. Required Drivers

  1. Similarly, required drivers are “processes and systems that reinforce, encourage, and reward performance of critical behaviors on the job.”

The website www.educationaltechnology.net notes that determining the degree to which staff apply key principles, mindsets, and skill sets is quite challenging at the outset. It argues that assessment of Level 3 Behavior should take place between three and six months after the training. Much of this assessment consists of informal observations; however, discerning whether or not the training has truly taken root comes through staff counseling and interviews. Using “tests” can be problematic, for, as discussed, the ability to “know” information is very different from the ability to “apply” it in real-life job situations.
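
One lightweight way to operationalize that three-to-six-month window is to schedule and log follow-up checks per participant. The sketch below is a hypothetical example of such a tracking structure; the field names and dates are illustrative, not part of the Kirkpatrick material.

    from datetime import date, timedelta

    # Hypothetical follow-up schedule for Level 3 (Behavior): informal observations
    # plus an interview, roughly three and six months after the training ends.
    def schedule_behavior_checks(training_end: date) -> dict:
        return {
            "3-month observation": training_end + timedelta(days=90),
            "6-month interview": training_end + timedelta(days=180),
        }

    # A simple log entry pairing an observed critical behavior with interview notes.
    checks = schedule_behavior_checks(date(2017, 3, 1))
    observation_log = {
        "participant": "J. Smith",
        "critical_behavior": "uses the new client-intake checklist",
        "observed_on_the_job": True,
        "interview_notes": "applies the checklist, still unsure when to escalate",
    }
    print(checks)
    print(observation_log)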

Required Drivers, the second element of Level 3, are truly significant. Without administrative processes and systems affirming the training, many employees, perhaps most, will simply forget about the training and leave the materials at the bottom of an ever-growing pile, never to be looked at again. So what does it mean to implement Required Drivers?

Required Drivers require administrators and managers to become actively involved in implementing the training in the workplace. The NWKM identifies three ways this can be accomplished: reinforcement, encouragement, and rewards. Functionally speaking, what does this mean in the workplace?

Reinforcing training material requires administrators and managers to serve as a “coach.” Playing the role of the coach is essential; instead of acting as a judge, the coach provides reminders and refreshers of training material as situations arise on the job.

Encouragement requires management to be sympathetic to their employees. Such encouragement is founded on the insight that knowledge levels and positive dispositions are not the only factors when organizations seek to implement new practices and work models. In short, employees may attempt to cast off old practices in exchange for new ones, but old habits can be hard to break. Equally, recognizing when to apply the training frequently comes through trial and error until a given employee develops a sufficient level of skill.

Rewards make things easier for employees. While having an affirming manager/coach is essential, rewards offer the external incentives that can further motivate staff during periods when the training model has not been fully implemented.

In our wrap up of the Kirkpatrick Model, we’ll look at Level 4: Results. With this level, we’ll include a discussion of the Kirkpatrick Principles, which govern and provide direction for the different links in the model.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Model: Level 2

THE STRUCTURE OF DIFFERENT MODELS

As a prelude to discussing Level 2, let’s take a quick step back to look at the structure of different Instructional Design Models (IDMs). Any reader will find numerous different models; common among all of them, though, is the differentiation of the evaluation process into distinct components or levels. Some models have a minimal number of components, whereas others have seven or so. What’s going on here? It’s not difficult to understand when comparing different models: the creators of a given IDM might extrapolate a single component into two or more. The number of components bears notice because it signifies the structure of the learning process. In other words, the structure conveys a cognitive schematic for appreciating educational design and evaluation, not simply an expanded PowerPoint that assists students in their learning.

The original Kirkpatrick Model utilized four components, or as they put it, levels. The fact that they used the concept of “levels” to depict the evaluation process is significant, as the original schematic depicted the evaluation process as levels of a pyramid, with the final level on top. Such a schematic is hierarchical in its structure. The New World Kirkpatrick Model (NWKM) still utilizes four levels; however, instead of a pyramid, it depicts them as links in a chain. The notion of a “chain” signifies a connected process, with each link, or level, having an impact on the next. One might surmise that the NWKM still terms each component a “level” to create continuity with the original model.

LEVEL 2: LEARNING

With our review of Kirkpatrick Level 2, let’s begin by inspecting how the “links,” that is, Level 1 and Level 2, are connected. To recall, Level 1 deals with the concept of Reaction, with the intent of evaluating customer satisfaction. The NWKM identifies three dimensions of Level 1: the degrees to which the training was favorable, engaging, and relevant. Level 2 deals with the concept of Learning. Clearly, and this is common sense, the ability, or even the desire, to learn is predicated on and “linked” with a student’s initial reaction to the training. In short, if a student’s reaction to the training is negative, there is little incentive for active listening or participation, and little motivation to retain content.

Level 2 has three dimensions in the original Kirkpatrick Model and two additional ones in the NWKM: knowledge, skill, attitude, confidence, and commitment (www.kirkpatrickpartners.com). A brief sketch of how these five dimensions might be recorded follows the list below.

A. Knowledge

  • Knowledge is the foundation of cognitive learning; this part is hierarchical and based on educational content. Knowledge is content based regardless of whether the information is conveyed textually, pictorially, or through a hands-on demonstration. This dimension is familiar to most: “Mom, I received a 91% on the test!” The depth of knowledge is another element, which begins to blend into the matter of skill.

B. Skill

  • A student may know how to perform a task; however, that is quite different from having the skill to perform it. For example, knowing how to solder and wire a circuit board is very different from doing it oneself; similarly, knowing all of the rules of soccer is very different from having the skill to referee a match. In the former case, skill is a tangible manipulation, whereas in the latter case, skill is a matter of cognitive interpretation. In both cases, the learning process includes skill, the operationalization of knowledge.

C. Attitude

  • A learner’s “attitude” toward training is predicated upon her/his value judgment about the utility of the new practice and/or process. The trainer and learner may agree that the course material is “relevant” to the work of the learner (Level 1). However, the learner may still question its value. For example, he/she may find the knowledge to be incomplete or simply incorrect. Second, he/she may find the required skill too complicated or too simplistic. A positive attitude toward the training requires an appreciation of both the knowledge and the associated skills.

D. Confidence

  • Confidence is linked to attitude. If students are positively disposed toward the knowledge and skill, they still need confidence to perform the task. This is an essential, perhaps pivotal, dimension of the training. Everything else can be completely in place, including our next element, commitment, but if students lack confidence, they falter in their ability to put the training into practice. To facilitate confidence, course administrators and trainers need to make sure that the prior elements in learning, knowledge and skill, are clearly and completely detailed and depicted. Given that trainers are the functional experts compared to their students, they must be certain not to skip over elements they take for granted.

E. Commitment

  • This NWKM dimension is a sibling of confidence. Since learning is a process, commitment is essential, because few of us “get it right” in the beginning stages of trying out a new way of doing things. The student must remain committed in the face of failure and not fall back on the old way of doing things, thinking, “well, at least that worked.”
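
To keep the five dimensions straight, here is a minimal, hypothetical sketch of how a Level 2 assessment record might capture them for a single learner; the class, field names, and 1-5 ratings are assumptions for illustration, not part of the Kirkpatrick material.

    from dataclasses import dataclass

    # One learner's Level 2 (Learning) record across the five NWKM dimensions.
    # A simple 1-5 rating is assumed here; real instruments vary widely.
    @dataclass
    class Level2Assessment:
        knowledge: int   # content mastery, e.g., a test score mapped to 1-5
        skill: int       # observed ability to perform the task
        attitude: int    # perceived value of the new practice
        confidence: int  # "I can do this back on the job"
        commitment: int  # "I will keep at it even if early attempts fail"

    learner = Level2Assessment(knowledge=4, skill=3, attitude=4, confidence=2, commitment=3)
    print(learner)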

The NWKM envisions learning (Level 2) as a longer process, separate from the traditional didactic of knowing facts. In this context, learning is far more dynamic and dependent upon the trainer to create a vision for the material and empower the students to take educational ownership.

Next week we’ll look at Level 3: Behavior.

Craig Lee Keller, Ph.D., JAG Learning Strategist

The Kirkpatrick Levels: Background & Context

When discussing the Flipped Classroom last year, we ended our four-part discussion with an appreciation of how to measure its effectiveness. Several ways of evaluating trainings were identified, including informal feedback from students and formal assessments of content mastery, among others. Our discussion was intended to offer different ways to evaluate trainings from various perspectives.

As you probably guessed, a well-developed field exists to evaluate courses, educational techniques, and training approaches: Instructional Design Models (IDM). There is a range of IDMs, but the best-known model is the Kirkpatrick Model.

BACKGROUND

Donald L. Kirkpatrick developed the Kirkpatrick Model, which was based on his 1954 dissertation and later serialized in the Training and Development Journal, the organ of the American Society for Training and Development (ASTD). In 1994, he and his son, James D. Kirkpatrick, published Evaluating Training Programs, which provided a complete and formal foundation for his original ideas. Kirkpatrick, with the assistance of James and James’s wife, Wendy K. Kirkpatrick, founded the business enterprise Kirkpatrick Partners to offer consulting, products, and various events and training based on the “one and only Kirkpatrick” model. Donald passed away in 2014, but the family continues promoting the Kirkpatrick Model, and James Kirkpatrick, who also has a doctorate, created the New World Kirkpatrick Model, which adds additional facets to each of the four levels for evaluating trainings.

MODEL STRUCTURE

The following four elements constitute the basis for the Kirkpatrick Model: Reaction, Learning, Behavior, and Results. Kirkpatrick originally used a pyramid schematic to visualize his concepts, but the Kirkpatrick Partners currently use the image of interconnected links of a chain on their website. Each level progressively leads to the next, and each is best understood through the plain meaning of its name.

LEVEL 1: REACTION

The basic question for Level 1 is simple: how did participants react to the training? This is a straightforward method for appreciating customer satisfaction, or, as described on the Kirkpatrick Partners website, “the degree to which the participants find the training favorable, engaging, and relevant to their jobs.” The New World Kirkpatrick Model (NWKM) added the latter two elements. They describe engagement as follows: “The degree to which participants are involved in and contributed to the learning experience.” Similarly, relevance is described as follows: “The degree to which training participants will have the opportunity to use or apply what they learned in the training on the job.”

A. Reaction Sheets/Smile Sheets

The basic means of determining reaction is the use of “smile sheets,” otherwise known as a survey of participants’ reactions. Such a survey, generally speaking, is handed out and completed just after the training using paper and pencil or on-line.

B. Survey Questions

Most surveys query participants about a number of training facets. What is the facility like? Was the facility located in a convenient place? Did the training start on time? Were the training goals clearly outlined? Were the training materials helpful? Was the facilitator knowledgeable? Did you like the facilitator’s style? Were there a sufficient number of breaks during the training? Was the content relevant to your work?

Training participants are prompted either to answer each question in a binary yes/no fashion or to rate their response along a scale (a Likert scale). The distinctive element of these questions is that the focus is on the training and the trainer.
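
As a concrete illustration, the sketch below scores a handful of Likert-style smile sheet items; the wording is paraphrased from the questions above, and the 1-5 scale and the flagging threshold are assumptions, not part of the Kirkpatrick material.

    # Hypothetical smile sheet responses on a Likert scale
    # (1 = strongly disagree ... 5 = strongly agree).
    responses = {
        "The training started on time": 5,
        "The training goals were clearly outlined": 4,
        "The training materials were helpful": 4,
        "The facilitator was knowledgeable": 5,
        "The content was relevant to my work": 3,  # the Level 1 'relevance' dimension
    }

    average = sum(responses.values()) / len(responses)
    flagged = [item for item, score in responses.items() if score <= 3]

    print(f"Average reaction score: {average:.1f} / 5")
    print("Items to revisit:", flagged)

The binary yes/no variant works the same way, with booleans in place of the 1-5 scores.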

C. NWKM Shift of Focus

Jim Kirkpatrick realized that something was missing in the traditional Kirkpatrick Model: the surveys were centered on the training providers and their environment, namely their facility, their course, and their trainer.

The NWKM remodeled the smile sheets to be “learner-centered.” As noted in their training material, instead of the training-centered item “The program objectives were clearly defined,” the learner-centered item reads “I understood the learning objectives.”

Most importantly, Jim Kirkpatrick believes that this level needs to be tied to the last two levels, behavior and results. That is, the NWKM is structured and functions to reinforce or stimulate positive on-the-job practices, which in turn directly impact organizational goals. He notes that because “smile sheets” are generally training-centered, they anchor participants’ perceptions in the relationship between the participant and her/his job alone. Instead, the training needs to be learner-centered in a manner that links the participant to organizational goals through her/his job.

Next week we’ll look at Level 2: Learning.

Craig Lee Keller, Ph.D., JAG Learning Strategist

Sources:

http://educationaltechnology.net/

http://www.kirkpatrickpartners.com/


Trends 2017: mLearning

One of the big trends of the past few years is mLearning: Mobile Learning. As a subset of eLearning, mLearning is defined by the utilization of mobile devices: laptops, tablets, smart phones, and smart watches. Isn’t this just the same thing we’ve been talking about? As is often the case, yes and no. The platform for eLearning is simply the vehicle for delivering content; in that context the platform is irrelevant, though one could immediately rank the ease of accessing an Internet portal on, say, a desktop computer versus a smart watch. The power behind mLearning is not the mobile platform per se; the power of the various platforms lies in how each is integrated into our increasingly mobile personal and professional lives.

BACKGROUND: THE MOBILIZATION OF MEDIUMS

For our film aficionados, many will recall the scene from Annie Hall in which the characters played by Woody Allen and Diane Keaton are waiting in line for movie tickets, only to be verbally assaulted by a man loudly expounding to his date the ideas of media guru Marshall McLuhan. After a short vitriolic argument, Allen draws Marshall McLuhan himself from behind a cardboard sign, and McLuhan pronounces that the loud pedant knows nothing of his ideas. In addition to a lesson in personal civility and avoiding humiliation, this vignette prompts one to investigate McLuhan’s ideas.

McLuhan is best known for the dictum “the medium is the message,” the argument that opens his book Understanding Media. Many intuitively translate the phrase as follows: a given medium, for example television, is superficial, and therefore messages from that medium are superficial. On a crude level such an interpretation is interesting, but it is entirely separate from McLuhan’s main argument. He argued that the medium for transmitting information does not per se impact the interpretation of the information; rather, “the ‘message’ of any medium or technology is the change of scale or pace or pattern it introduces into human affairs.” McLuhan employs the example of the railway to argue his point. The railway did not create “transportation,” though “it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure.” The same, for that matter, can be argued about the uniform standardization of time, which facilitated commerce.

THE IMPACT OF MOBILE TECHNOLOGY

  • Oh how we love our technology! Let me (and us) count the ways!
    • Remote Access
    • Centralization and Integration of Tasks
    • Expansion of Possibilities
    • Technological Independence versus Interdependence
    • Remote Monitoring and Supervision
    • ???????? . . . ????????
    • What are your ideas?


Instead of going into great detail about any of these issues, let’s just pause for a moment and think about a single change mobile technology has generated in our lives.

If you’re old enough, you remember a time when children, and indeed the vast majority of adults, did not have cell phones. When I was working as a contractor in the defense industry, a colleague shared with me his disgust for the very select few who used their bulky cell phones while sitting in rush hour traffic. At that time, virtually nobody had a cell phone, given the cost and the lack of perceived need. To mock those individuals, he tied a plastic child’s phone to his rear view mirror and used it whenever he pulled up alongside one of Washington’s movers and shakers, as if deep in earnest conversation.

The introduction of mobile technology has vastly revolutionized our collective personal and professional lives (and psyches). Before, one might have had to track down a payphone to inform a colleague that he or she was going to be late to a meeting due to traffic; now one simply calls from a cell phone, hands free, mind you! One could spend pages, if not books, discussing the implications of the transformations of our lives due to mobile technology.

Now, truly, it’s hard to imagine life without a smart phone, let alone a cell phone in general. If someone does not have one, the most common response is to reflexively judge that the non-user is either old or that something is wrong with her or him. Given this overwhelming system of belief, it’s not surprising that a backlash has arisen, sponsoring an array of products favoring “slow” living.

IMPACT ON EDUCATION: mLearning

O.K.! We’re back to mLearning!! So how has the advent of mobile technology changed the “scale or pace or pattern” of education? Like eLearning, mobile technology has further de-centered the classroom from a fixed site. Instead of coming to the “one-room classroom” governed in style and content by an instructor, learners now have a variety of different sources for content and platforms for accessing it. Moreover, the variety of sources has empowered learners to question and challenge the value of content received from their instructors. In short, the ease of accessing information has impacted the student-teacher dynamic and, more importantly for our purposes, the style of learning, which is no longer classroom-centered.

To begin to appreciate the impact of mLearning, one only needs to recall the dynamics of the “Flipped Classroom.” Yet, mLearning has a number of facets not present in the “standard” flipped classroom.

A. The Daily Use of Mobile/Personal Technology

Given the omnipresence of personal technology, let’s enumerate the ways it is used in our daily lives:

    • Waking up in the morning
    • Scheduling our days
    • Reminding users of time-based tasks
    • Capacity for constant communication
    • Problem-solving
    • Multitasking
    • Bio-metric monitoring
    • Others?


By definition, mobile technology is used when one is “on the move” and not tethered to a desk-based computer station. One can be driving in her or his car, one can be sitting in a bus or metro, one can be at lunch, one can be, one can be, one can be . . . In short, mobile technology has been completely integrated into virtually every single aspect of our waking life, which, as noted, has become increasingly mobile. So, what are the opportunities for eLearning in this context?

B. Opportunities for eLearning: mLearning

  1. Alternative Remote-Based Sites. Given the portability of laptops or even tablets, a learner can use a web-based portal to access information anywhere there is an Internet connection. In fact, a learner can almost always connect to the Internet via a Wi-Fi or Bluetooth connection from a smart phone. In this context, opportunities for sustained mLearning are created in areas other than the classroom or the home.
  2. Transient Access. Think about accessing audio and/or textual content while driving a vehicle or sitting in public transportation. These are the obvious opportunities and can be engaged in a sustained fashion, albeit often for shorter periods of time than at a fixed learning site. What are some others?
    • When the learner is driving, a colleague or friend can serve as a veritable educational co-pilot, even transforming transient access into an alternative remote-based site during longer trips.
    • mLearning can take place in transit when the content can be accessed as audio. The value here is creating a backdrop of content that can engage the consciousness and cognitive functions through simple though repeated exposures.
  3. Anecdotal Access. How many times during the day do we simply look at our mobile technology for reasons other than those associated with eLearning? Many, many, and, for some, too many! In short, most individuals are tethered to their devices; in fact, a colleague named Kori told me her cell phone is her life. (Don’t judge; it’s a generational thing LOL! In fact, I joked that I “saved her life” when I once found her cell phone.) Regardless, an opportunity exists for mLearning each time we access our mobile technology.

C. Styles of mLearning

  1. Standard Web-Based Portal Access. This is the conventional style, which by its nature does not require discussion.
  2. Micro-Learning. We already have discussed micro-learning in the past. However, with the advent and increasing impact of mobile technology, the task for eLearning administrators will be developing content packages that are appropriate for different lengths of time, et cetera.
  3. Audio Content. Audio content currently exists in the form of recorded lectures and podcasts. However, as the technology advances, text-to-speech converters will become commonplace and will be utilized to access traditional text during transient moments.
  4. Flash Cards. An old friend of mine, David Margulious, was co-founder of the Quizlet start-up, which uses the “flash card” concept in eLearning. Flash cards are a great idea! In fact, I recently used old-fashioned paper flash cards when learning menu descriptions at a restaurant (long story LOL!). Anyway, using flash cards in a site-based setting is great, and the opportunities for personalizing and sharing information sets are huge.

Imagine this, though: an app built on the flash card concept that pops up whenever you access your mobile technology, say, your smart phone or smart watch. Before you can check your e-mail or read your texts, you are prompted to answer a single, quick flash card query. In this context, mLearning is completely integrated into the use of a variety of mobile technologies.
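
Here is a minimal sketch of that idea, assuming a hypothetical flash_card_gate hook that a launcher or lock screen app would call before opening e-mail or messages; the deck and the exact-match check are purely illustrative.

    import random

    # Hypothetical micro deck; in practice this might sync from a tool like Quizlet.
    DECK = [
        ("What does ROE stand for?", "return on expectations"),
        ("Which Kirkpatrick level measures on-the-job behavior?", "level 3"),
        ("Name one Required Driver.", "reinforcement"),
    ]

    def flash_card_gate() -> bool:
        """Ask one quick card before unlocking the requested app."""
        question, answer = random.choice(DECK)
        reply = input(f"{question} ")
        return reply.strip().lower() == answer

    # On a real device this would run when the user taps e-mail or messages;
    # here we simply simulate the gate on the command line.
    if flash_card_gate():
        print("Correct! Opening your e-mail . . .")
    else:
        print("Not quite. Opening anyway; the card will come back later.")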

Next time we’ll discuss a variety of eLearning issues. Stay safe and warm!

Craig Lee Keller, Ph.D., Learning Strategist

Useful Articles in Gamification (Part 3)

When looking for “seminal” articles in the field of gamification, one is confronted with works published in formal academic journals but, more frequently, in the proceedings of a variety of conferences. The field is fairly nascent and has been directed toward a wide variety of areas of focus. The best way to appreciate it, perhaps, is to provide a sampling of a couple of articles that offer an overview. First, though, a quick discussion about the “father” of gamification . . .

NICK PELLING AND GAMIFICATION

Nick Pelling is credited with coining the term “gamification” in 2002 (though at one conference he tells his audience it was 2003). He divined the concept through his experiences in business school. Being told that success was based on defining what “you’re good at,” he quickly concluded that he was good at games, given his years of work in the gaming industry. Business school, he stated, was centered on determining the “present worth” of an asset. But upon reflecting on games and business enterprises, Pelling had an epiphany: business schools were asking the wrong question. Economic value should be less associated with present worth and more associated with potential worth in the future.

The gaming industry was impacting all areas of culture, economy, and society in such a massive way that thoughtful entrepreneurs should be able to monetize that insight. Pelling attempted to do so with an admittedly ill-fated startup called Conundra, which ran from 2003 to 2006. Regardless of his failure, he identified two key elements in then-contemporary games: immersive interface design and digital content platforms; in the next wave, social media became an integral factor. Pelling is critical of the last element and views it as a crass form of advertising and persuasion. His favored incarnations of the second stage focus on platforms that join people together and empower them; his examples include Kickstarter, AngelList, and even Match.com.

ARTICLES ON GAMIFICATION

A. Juha Hamari, Jonna Koivisto, and Harri Sarsa, “Does Gamification Work? A Literature Review of Empirical Studies on Gamification,” Hawaii International Conference on System Sciences (HICSS), 2014.

Gamification, like any big idea, is great in theory. Articles such as Hamari’s are essential when investigating whether or not the theory actually works in practice. First, though, they demonstrate the almost exponential growth of literature in the field. Hamari and company look at a range of independent and dependent variables.

Hamari and company break down gamification into a simple linear progression:

Motivational Affordance → Psychological Outcomes → Behavioral Outcomes

Their literature review drew on databases such as Google Scholar and ProQuest, which yielded a large number of results; however, upon inspection, the subset that had been subjected to traditional peer review was substantially smaller. Nevertheless, for those peer-reviewed articles with empirical studies and analysis, they did find positive associations, though these were tempered by factors associated with the study participants and other limitations. They also pointed to other areas for possible research.

B. Benedikt Morschheuser, Karl Werder, et al., “How to Gamify? A Method for Designing Gamification,” Hawaii International Conference on System Sciences (HICSS), 2017.

This article is interesting for a couple of reasons. First, it touches upon the major question facing entrepreneurs who seek to apply game dynamics in existing or new fields. Second, and strangely, this concept of “designing gamification” runs contrary to Pelling’s original assessment. For Pelling, gamification was not something one applies to a given business sector, but what games culture had already been doing for years to all other industries. While Pelling was prescient, with passing years the operative issue became recognizing the mutual dynamics between the formal game industry and other industries. At this stage, of course, most of the influence flows from the game industry to other fields.

Morschheuser’s article looks at the future of gamification given a prediction by the Connecticut-based Gartner Group. Gartner predicted in 2011 that by 2015 half of all organizations would be incorporating some type of gamification into their operations; however, others have also predicted that ventures into gamification are doomed to failure due to flawed concepts about game design. In this context, Morschheuser and company sought to develop a methodology.

For the methodology, they developed a set of best practices, performed a literature review of gamification design, and then received feedback on their model from a battery of recognized experts. Their product is an extremely detailed, if not tedious, flow chart focused on thirteen straightforward requirements, such as understanding goals and characteristics, engaging in an iterative design process, obtaining input from stakeholders, and involving users. Many of these requirements make eminent sense, most importantly obtaining input from stakeholders and users.

Next week we’ll take a look at something completely new in the field of eLearning. Any ideas about areas you’d like us to pursue? Send us an e-mail!

Craig Lee Keller, Ph.D., Learning Strategist