Active Learning Through the Ages (Literally)

Aristotle (philosopher), Thomas Edison (inventor) and Jean Piaget (cognitive psychologist) all had similar thoughts about learning. Can you see the common thread?

Aristotle:  Exercise in repeatedly recalling a thing strengthens the memory.

Edison:   A man will resort to almost any expedient to avoid the real labor of thinking.

Piaget:  Thinking is interiorized action.

Yes, they all believed that the key to learning and learning retention is what we today would call Active Learning.

I have spent many years following the learning research literature. What I have found is that just about every learning strategy that can be shown to be effective through evidence-based studies falls within the domain of Active Learning. It comes under different names depending on the researcher (e.g. Robert A. Bjork and his colleagues call it Desirable Difficulties) but it is always the case that:

To learn, you must cognitively act upon the learning materials; to retain what you have learned, you must actively re-engage with it repeatedly over a period of time.

So what does work? I think all of the techniques that research has shown to be effective can be categorized into one of the following domains:

Review and Reinforcement – using Successive Relearning (a combination of the Testing Effect and the Spacing Effect) to boost retention

Gamification – for motivation and reward, and at least in some studies, improved learning

Subscription Learning – using microlearning so as not to overload working memory, and to leverage the Interleaving Effect

Active Assessments – because for learning and learning retention, testing and retesting always outperform studying and restudying

These strategies are not mutually exclusive. In fact they are mutually reinforcing. A valid learning process will incorporate them all.

Coming to the Life Science Trainers and Educators Network (LTEN) Annual Conference next week? Be sure to attend my workshop on Active Learning:

Workshop Title: Learning is Not a Spectator Sport: The Science of Active Learning
Wednesday, June 15th
2:00 – 3:30 PM
Room: Magnolia 3

And be sure to stop by the Intela Learning booth (Booth 431) to see the Intela™ Active Learning and Assessment System, the first fully mobile, cloud-based learning system to incorporate all four of these key instructional strategies.



How Adding One Question to an Exam Can Make it Easier to Pass

While working on test validation projects I often get asked: How long should my test be? The answer is pretty simple: the test needs to be long enough to cover all the important content of the learning program. And how do you do that? You do it by writing the questions to the learning objectives. Depending on the importance and complexity of each learning objective, that could mean one question or as many as five. As a consequence of this process, your test might end up with a nice round number of questions that divides easily into 100%, like 20 or 25, giving a whole number of points per question, or it could be a number that doesn't divide easily into 100%, like 17 or 32. We have a bias for a whole number of points per question, but that is really an artifact of hand scoring (though test takers like it as well). Today it doesn't matter so much; computers are very good at doing the math for you.

I was recently working on a test validation project that got me thinking about test length, points per question and passing score. Most of my clients have set their passing scores at numbers divisible by five: 80%, 85% or 90%. This isn’t exactly kosher since the passing score should really be determined by something like the Angoff process, which could result in a non-round passing score, like 87%. But trainers and test takers alike don’t like “odd” passing scores, so we often round them to the nearest multiple of five.

So what relationship does passing score have to test length? A 90% passing score is pretty common for my clients, so let's work with that as an example. On a 10-question test a student can get one question wrong and still pass, but two questions wrong fails the test. What about an 11-question test? Same result. A 15-question test? Same. All the way up to 19 questions. But let's add one more question. At 20 questions the student can now get two questions incorrect and still pass the test.

So think about that: a 19-question test is much harder to pass than a 10-question test, which kind of makes sense, but by adding just one more question (going from 19 to 20) you have made passing significantly easier! Counterintuitive but true. Another break occurs after 29 questions (at 29 questions a student with three wrong fails; at 30 questions a student with three wrong passes), and so on.
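This break arithmetic is easy to compute directly. Here is a minimal Python sketch (the function name is my own, for illustration): the number of questions a test taker may miss and still pass is the integer part of total questions × (100 − passing score) / 100.

```python
def permitted_incorrect(num_questions: int, passing_pct: int) -> int:
    """Largest number of wrong answers that still earns a passing score.

    A test taker passes when (correct / total) * 100 >= passing_pct,
    so the allowance is floor(total * (100 - passing_pct) / 100).
    Integer arithmetic avoids floating-point rounding surprises.
    """
    return (num_questions * (100 - passing_pct)) // 100


# With a 90% passing score, the allowance jumps at 20 and 30 questions:
# permitted_incorrect(19, 90) == 1, permitted_incorrect(20, 90) == 2,
# permitted_incorrect(29, 90) == 2, permitted_incorrect(30, 90) == 3
```

Running the function across a range of test lengths reproduces the break points in the table below.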

Here’s a handy table showing some of the breaks for three different passing scores:

80% Passing Score              85% Passing Score              90% Passing Score
Questions   Permitted          Questions   Permitted          Questions   Permitted
in Exam     Incorrect          in Exam     Incorrect          in Exam     Incorrect
1-4         0                  1-6         0                  1-9         0
5-9         1                  7-13        1                  10-19       1
10-14       2                  14-19       2                  20-29       2
15-19       3                  20-26       3                  30-39       3
20-24       4                  27-33       4                  40-49       4
25-29       5                  34-39       5                  50-59       5
30-34       6                  40-46       6                  60-69       6
Etc.                           Etc.                           Etc.

At any of these break points the number of incorrect answers permitted jumps by one question. So, the key takeaway here is: Think about exam length and its consequences the next time you create a test.

In Defense of Forgetting

In our quest to teach and to learn, who is the good guy? Remembering, of course. And who is the bad guy? Forgetting, of course. Well, not so fast. While you may not think so, forgetting is an important part of the learning process.

Forgetting benefits us in multiple ways:

  1. It frees our brains up to remember what is really important. Do you really need to remember what you ate for lunch three weeks ago Tuesday? Unless you got food poisoning, probably not. Similarly, in a course, not everything you learn is important to remember. Some information, if needed, can be looked up, and some information is just not that important. Our brains are constantly bombarded by sights, sounds, and smells. Imagine if you could remember everything. There have been a few documented cases of people with total recall of everything that has ever happened to them. It's not a positive attribute; in fact it's debilitating and causes mental exhaustion. And the same is true in the learning process. You don't need to remember everything, just what's important.
  2. It helps us avoid a phenomenon called Proactive Interference. During proactive interference something you already know actually interferes with learning something new. For example, you are trying to remember your new office phone number but you keep remembering your old office number instead. Or, you already know some French and this knowledge interferes with your ability to learn Spanish.
  3. A little forgetting helps in remembering. Deep learning occurs when memories are stored in long term memory and stabilized. This is called consolidation. An effective method for consolidating a memory is to retrieve the memory from long-term memory, bring it into working memory and then re-store (re-encode) it in long-term memory. The well-known spacing effect (spacing learning over a period of time) uses this process. But one outstanding question about spaced learning is: What is the optimal spacing period? Many studies have shown that optimal retrieval and reconsolidation occurs when the learner is just at the point of forgetting. In 1989 Banaji and Crowder wrote: “As an empirical rule, the generalization seems to be that a repetition will help most if the material had been in storage long enough to be just on the verge of being forgotten.”
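As a toy illustration of spacing, here is a sketch of an expanding-interval review schedule in Python. The doubling rule and the numbers are assumptions for illustration only, not the "optimal spacing" the research question above asks about:

```python
def review_schedule(first_gap_days, growth, n_reviews):
    """Days on which to review, with each gap `growth` times the last.

    Toy model: real schedules try to time each review to the learner's
    verge of forgetting, which varies by person and material.
    """
    day, gap, days = 0, first_gap_days, []
    for _ in range(n_reviews):
        day += gap
        days.append(day)
        gap *= growth
    return days

# review_schedule(1, 2, 4) -> reviews on days [1, 3, 7, 15]
```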

So, remember (pun intended) not all forgetting is bad for you or your learners.


The Importance of Associating Metadata with Questions

Traditionally a multiple-choice question consists of a stem, the choices, and, among the choices, the correct answer. But a valid question should have something else stored along with it: metadata (information about the question). Why? For at least two reasons:

  1. As important information for any other exam author who might be using the question. In a typical assessment system the item pool and the exams themselves are separate, so more than one exam author may be using the items. Even if you anticipate that you will be the only person using the question, what happens if you leave your current position and someone else becomes responsible for maintaining and administering these questions and exams? He or she will find this information valuable.
  2. For defensibility. Exams need to be fair, valid and reliable. A key step in the process of building valid exams is beginning with valid questions. You must be able to justify that the questions adequately cover the training content (content validity) and are important for the test taker to know in order to perform his or her job.

So, what metadata should you store? Here are some suggestions:

Rationale for question. Why is this question important to the job?

Estimated difficulty. This can be either quantitative (what percentage of test takers do you anticipate will get this question correct?) or categorical (e.g. easy, medium, difficult). Note: Once you have real exam data for this question, this estimate can be replaced by actual difficulty data.

Reference. Where in the training material does this question come from (e.g. module, lesson, screen/page) — or if it is for a pharmaceutical company PI exam, the section of the PI.

Cognitive Level. Typically Bloom’s Taxonomy (or revised Taxonomy).
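As a concrete (and purely illustrative) sketch, the metadata fields above could be stored alongside each item like this. The class and field names are my own, not those of any particular assessment system:

```python
from dataclasses import dataclass


@dataclass
class QuestionMetadata:
    rationale: str        # why this question is important to the job
    est_difficulty: str   # e.g. "easy", "medium", "difficult" (or a % estimate)
    reference: str        # module/lesson/screen, or PI section
    cognitive_level: str  # Bloom's taxonomy level, e.g. "Remember", "Apply"


meta = QuestionMetadata(
    rationale="Reps must recall this safety fact to counsel prescribers.",
    est_difficulty="medium",
    reference="Module 3, Lesson 2, Screen 14",
    cognitive_level="Remember",
)
```

Keeping these fields with the item (rather than in a separate document) means they travel with the question wherever it is reused.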

Are Your Learners Cognitively Passive or Cognitively Active?

There was a guy who lived in my dorm in college who spent a lot of time talking about how much he studied. Since his grades weren't very high, we used to joke that he spent more time talking about studying than actually studying. But giving him the benefit of the doubt and assuming he really did study for long hours, why didn't he get better grades? It could be that he just wasn't very bright, but I don't think that was the case. I think he was just studying "wrong."

What does "studying wrong" mean? Many research papers have shown that the traditional strategies many, if not all, of us used in school – rereading material, highlighting, underlining, and the like – are ineffective, no matter how much time we spend on them. As a group we can classify these strategies, and the learners who use them, as "cognitively passive."

In contrast are "cognitively active" strategies such as discovering hidden similarities, self-testing, creating new categories, assembling higher-level "big picture" models, and relating the current material to previously learned material.

Many studies have shown that “active” learners outperform “passive” learners.

Most of the readers of this blog train adult learners in corporate settings. By the time our learners appear in our training programs they are probably already either cognitively passive learners or cognitively active learners based upon many years’ experience and prior schooling. So do we just throw up our hands and say “they are what they are” or can we use training strategies that require our passive learners to become active learners? It turns out we can do the latter.

To promote active learning in your training programs try these techniques:

  • Use pre-testing
  • Use pre-reading
  • Organize your learning materials to promote contextual learning
  • Use cumulative testing
  • Provide review and reinforcement exercises after the training event
  • Interleave subjects
  • Space the learning over time
  • Use a "spiral" approach to the content

All of these techniques require your learners to become cognitively active even if they are by nature cognitively passive.

Can Testing Contribute to Higher Grades?

I spent 20 years as CEO of a testing company so I had lots of opportunity to get a very good sense of how our corporate clients were using testing within their training programs.  Of the millions of tests that were delivered on our platform virtually 100% were Summative Assessments. For those not familiar with the term, summative assessments are assessments that are given at the completion of a training program to determine mastery of the material. This is understandable. It’s what training departments do: train, test and certify.

But there is another type of assessment that is just as important, especially in its impact on learning: Formative Assessments. Formative assessments are assessments that help students learn, and lots of research evidence over the past few decades points to the significant role testing can have as a learning tool.

Formative assessments can be diagnostic tests, self-assessments, module-level tests with feedback, pre-tests, etc. – basically any test for learning rather than of learning.

How effective can formative assessments be? How much can they improve learning outcomes?

In this recent study, Daily Online Testing, the authors gave daily online quizzes, with feedback, to a psychology class that was traditionally taught by lecture only. The quizzes took 10 minutes at the beginning of each lecture; there were eight questions per quiz.

The authors then compared the outcomes (lecture only vs. lecture plus daily tests) on final grades at the end of the course. Grades were half a letter grade higher for the lecture-plus-testing group than for the lecture-only group. Interestingly, the gains were most significant for students of lower socioeconomic status (note: the TOWER group was the group that had test-enhanced learning):

[Figure: Psychology grades]

And what was really fascinating was that the effect extended to courses the students took that were not part of the study:

[Figure: Non-psychology grades]

For most of us testing has a negative connotation because we associate it only with summative assessments, but repeated studies, including this one, demonstrate that formative assessments can enhance learning significantly.