Ced Chin wrote a synopsis of Accelerated Expertise. He opens:
This is a summary of a remarkable 🌳 tree book, which presents a theory for and methods to accelerate expertise in real-world contexts. This summary is not comprehensive; it only covers some of the actionable theories and recommendations in the book and leaves out the considerable lit review and the book’s recommendations for future research directions. I’ll note that Accelerated Expertise is not written for the lay person — it is a book primarily written for organisational psychologists, training program designers and researchers employed in the US military. If you must read it — say because you want to put the ideas in the book to practice — my recommendation is to read Chapters 9-13 and skim everything else.
The following are my takeaways from Ced’s summary, repurposed for my own future reference. As a dues-paying member of the “learn in public” congregation, I’m posting it for anyone else who might care.
Context
In the current era of frequent deployments to a variety of locations worldwide to fight the War on Terror, there are far fewer opportunities to have systematic training and practice. These are highly dynamic tasks that require considerable cognitive flexibility. Speed in acquiring the knowledge and skills to perform the tasks is crucial, as the training must often be updated and provided shortly before the personnel must deploy to the theatres where the wars are being fought.
The ideas and recommendations in the book deviate from certain mainstream ideas about pedagogy and training.
(I notice that military applications, like trading, are adversarial environments: skill domains involving an adversary who is constantly evolving their tactics.)
What is the book about?
Accelerated Expertise is about ‘taking the concept of skill development to the limit’. This is not a book about pure theory; nor is this a book about deliberate practice in well-developed skill domains. No: this is a book that pushes the limits of two lesser-known learning theories, and in so doing has created successful accelerated training programs in messy, real-world military and industrial contexts.
The report that came out of those meetings became the precursor to Accelerated Expertise, which was prepared by Robert R. Hoffman, Paul Ward, Paul J. Feltovich, Lia DiBello, Stephen M. Fiore and Dee H. Andrews for the Department of Defense and published in 2016.
Accelerated Expertise is divided into three parts. Part 1 presents a literature review of the entire expertise research landscape circa 2016. Part 2 presents several demonstrations of successful accelerated training programs, and then an underlying theory for why those training programs work so well. Part 2 also contains a generalized structure for creating these accelerated expertise training programs. Part 3 presents a research agenda for the future, and unifies Parts 1 and 2 by pointing out all the holes in the empirical base on which existing accelerated training programs have been built. This summary will focus on Part 2.
Goals
- Accelerate proficiency
- Increase retention
This necessitated four sub-goals:
- Rapid training
- Dynamic adjustments: rapidly incorporate changes in the metagame
- Higher levels of proficiency
- Facilitate retention
The classic tension: mastery vs time
Mastery, or expertise, takes time; it is a higher bar than “accelerated” proficiency.
But what we do know is this: the successful accelerated training programs that currently exist enable accelerated proficiency, not accelerated mastery.
- Conventional approach to training: figure out a set of atomised skills and lay them out from the most basic to the most advanced, then design a training syllabus to teach each skill in the right order, making sure to teach the prerequisites first, and then incrementally complexify the taught concepts, skills and training programs. We would probably also design exercises for the lower levels of skills, and attempt to create intermediate assessment tasks or ‘tests’.
We would, in short, attempt to replicate how we are taught in school.
Problems with this approach:
- It takes too long
- Breaking a skill domain down into atomised skills is risky — it is likely that you will accidentally cause the construction of subtly wrong mental models, owing to the incomplete nature of a skill hierarchy. This then slows expertise development, since trainers now have to do additional work to help students unlearn.
- Experts are able to see connections and draw links between concepts or cues that novices cannot. Teaching atomised concepts will prevent novices from learning these connections, and may in fact result in either subpar performance or a training plateau later on.
- Assessments for atomised skills do not translate to assessments of real-world performance.
- Conventional methods try to lower the cognitive load of extraneous details to isolate skill acquisition, but this risks oversimplification.
- It is not easy to update the training program if the skill domain changes…a hierarchical syllabus resists updating. Which lesson do you update? At what level of complexity? What prerequisites must change? (I’m less concerned about this)
- A counterintuitive reason: external assessments often degrade the learner’s ability to sensemake in the field. In other words, extremely clear feedback sometimes prevents students from learning effectively from experience, which may slow their learning when they arrive in a real-world domain.
The NDM Approach
The NDM field uses cognitive task analysis (CTA) to extract tacit mental models of expertise…This allows you to sidestep the problem of good hierarchical skill tree design. Once you have an explicated mental model of the expertise you desire, you may ask a simpler question: what kind of simulations may I design to provoke the construction of those mental models in the heads of my students?…This core insight underpins many of the successful accelerated expertise training programs in use today.
General structure of an accelerated expertise training program
- Identify who the domain experts are.
- In depth career interviews about education, training and job experiences
- professional standards or licensing
- measures of actual performance at familiar tasks
- social interaction analysis (asking groups of practitioners who is a master at what)
- Perform cognitive task analysis on these identified experts to extract their expertise. Depending on the exact CTA method you use, this step will initially take a few months, and require multiple interviews with multiple experts (and also with some novices) in order to perform good extraction.
- Build a case library of difficult cases
Store these cases, and code them according to measures of difficulty.
- Turn the case library into a set of training simulations
This step is a bit of an art — the researchers say that ‘no set of generalised principles currently exist for designing a good simulation’. They know that cognitive fidelity to the real world is key — but how good must the fidelity be? Training programs here span from full virtual simulations (using VR headsets) to pen-and-paper decision making exercises (called Tactical Decision-making Games) employed by the Marines.
- Feedback in simulation training is sometimes qualitative and multi-factorial.
Some exercises, like Gary Klein’s ShadowBox method, ask multiple-choice questions at critical decision points during a presented scenario (e.g., ‘at this point of the cardiac arrest (freeze-frame the video), what cues do you consider important?’). Learners then compare their answers to an expert’s and reflect on what they missed.
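The ShadowBox-style feedback loop above can be sketched in code. This is a minimal illustration, not the actual method: the representation (an expert panel’s ranked cue list per freeze-frame, a learner’s chosen subset, and 1/rank weighting) is my own simplifying assumption, and the cue names are invented.

```python
# Sketch of ShadowBox-style scoring at one freeze-frame decision point.
# Assumption: experts rank the cues they consider important; earlier
# ranks carry more weight (1/rank is an arbitrary illustrative choice).

def score_decision_point(expert_ranked_cues, learner_cues):
    """Return (share of expert-weighted cues the learner captured,
    list of expert cues the learner missed)."""
    weights = {cue: 1 / rank
               for rank, cue in enumerate(expert_ranked_cues, start=1)}
    captured = sum(w for cue, w in weights.items() if cue in learner_cues)
    missed = [cue for cue in expert_ranked_cues if cue not in learner_cues]
    return captured / sum(weights.values()), missed

# Hypothetical cardiac-arrest freeze-frame; cues are illustrative only.
expert = ["no pulse", "agonal breathing", "downtime unknown",
          "bystander CPR quality"]
learner = {"no pulse", "bystander CPR quality"}

score, missed = score_decision_point(expert, learner)
print(round(score, 2))   # → 0.6
print(missed)            # the cues the learner should reflect on
```

The point of the exercise is the `missed` list, not the score: comparing one’s own cue selection against an expert’s, and reflecting on the gap, is what drives the learning.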
A common objection
A common reaction to this training approach is to say “wait, but novices will feel lost and overwhelmed if they have no basic conceptual training and are instead thrown into real world tasks!” — and this is certainly a valid concern. To be fair, the book’s approach may be combined with some form of atomised skill training up front. But it’s worth asking if a novice’s feeling of artificial progression is actually helpful, if the progression comes at the expense of real world performance. The authors basically shrug this off and say (I’m paraphrasing): “well, do you want accelerated expertise or not?” In more formal learning science terms, this ‘overwhelming’ feeling is probably best seen as a ‘desirable difficulty’, and may be an acceptable price to pay for acceleration. (When Zak had to figure out what was going on at the first club basketball practice I think the coach had premeditated this desirable difficulty and this was confirmed by another parent as the coach’s “style”. It’s intentional)
The importance of a case library
*Case experience is so important to the achievement of proficiency that it can be assumed that organisations would need very large case repositories for use in training (and also to preserve organisational memory). Instruction using cases is greatly enhanced when “just the right case” or set of cases can be engaged at a prime learning moment for a learner (Kolodner, 1993). This also argues for a need for large numbers of cases, to cover many contingencies. Creating and maintaining case libraries is a matter of organisation among cases, good retrieval schemes, and smart indexing—all so that “lessons learned” do not become “lessons forgotten.”
The US Marines, for instance, own a large and growing library of ‘Tactical Decision-Making Games’, or ‘TDGs’, built from various real or virtual battlefield scenarios; these represent a corpus of the collective operational expertise of the Marine Corps.*
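The quoted passage calls for “organisation among cases, good retrieval schemes, and smart indexing”. A minimal sketch of what that might look like, under assumed conventions (tags as indexing dimensions, a 1–5 difficulty code, retrieval by topic at or below the learner’s current band); the schema and example cases are illustrative, not from the book:

```python
# Sketch of a case library: cases coded by difficulty, indexed by tag,
# retrieved to match a learner's "prime learning moment".
from dataclasses import dataclass, field

@dataclass
class Case:
    title: str
    difficulty: int              # coded difficulty, 1 (easy) to 5 (hard)
    tags: frozenset = field(default_factory=frozenset)

class CaseLibrary:
    def __init__(self):
        self.cases = []
        self.by_tag = {}         # inverted index: tag -> list of cases

    def add(self, case):
        self.cases.append(case)
        for tag in case.tags:
            self.by_tag.setdefault(tag, []).append(case)

    def retrieve(self, tag, max_difficulty):
        """'Just the right case': on-topic, and at or below the
        learner's current difficulty band."""
        return [c for c in self.by_tag.get(tag, [])
                if c.difficulty <= max_difficulty]

lib = CaseLibrary()
lib.add(Case("Ambush at a checkpoint", 4, frozenset({"urban", "ambush"})))
lib.add(Case("Convoy rerouting under fire", 2, frozenset({"convoy", "urban"})))
for c in lib.retrieve("urban", max_difficulty=3):
    print(c.title)   # prints "Convoy rerouting under fire"
```

The inverted index is the “smart indexing” piece: it lets an instructor pull every case touching a given dimension without scanning the whole repository, which matters once the library grows large enough to preserve organisational memory.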
The underlying theory behind this training approach
Cognitive Flexibility Theory (CFT)
Core syllogism
- Learning is the active construction of conceptual understanding.
- Training must support the learner in overcoming reductive explanation.
- Reductive explanation reinforces and preserves itself through misconception networks and through knowledge shields.
- Advanced learning is the ability to flexibly apply knowledge to cases within the domain. [This is what I mean when I use the word “learning” — effective behavior change]
Therefore, instruction by incremental complexification will not be conducive to advanced learning.
Therefore, advanced learning is promoted by emphasizing the interconnectedness of multiple cases and concepts along multiple dimensions, and the use of multiple, highly organized representations.
Empirical ground
- Studies of learning of topics that have conceptual complexity (medical students).
- Demonstrations of knowledge shields and dimensions of difficulty.
- Demonstrations that learners tend to oversimplify (reductive bias) by the spurious reduction of complexity.
- Studies of the value of using multiple analogies.
- Demonstrations that learners tend to regularise that which is irregular, which leads to failure to transfer knowledge to new cases.
- Demonstrations that learners tend to de-contextualize concepts, which leads to failure to transfer knowledge to new cases.
- Demonstrations that learners tend to take the role of passive recipients versus active participants.
- Hypothesis that learners tend to rely too much on generic abstractions, which can be too far removed from the specific instances experienced to be apparently applicable to new cases, i.e., failure to transfer knowledge to new cases.
- Conceptual complexity and case-to-case irregularity pose problems for traditional theories and modes of instruction.
- Instruction that simplifies and then complicates incrementally can detract from advanced knowledge acquisition by facilitating the formation of reductive understanding and knowledge shields.
- Instruction that emphasizes recall memory will not contribute to inferential understanding and advanced knowledge acquisition (transfer).
Cognitive Transformation Theory (CTT)
Core syllogism
- Learning consists of the elaboration and replacement of mental models.
- Mental models are limited and include knowledge shields.
- Knowledge shields lead to wrong diagnoses and enable the discounting of evidence.
Therefore learning must also involve unlearning.
Empirical ground and claims
- Studies of the reasoning of scientists
- Flawed “storehouse” memory metaphor and the teaching philosophy it entailed (memorization of facts; practice plus immediate feedback, outcome feedback).
- Studies of science learning showing how misconceptions lead to error.
- Studies of scientist and student reactions to anomalous data.
- Success of “cognitive conflict” methods at producing conceptual change.
Additional propositions in the theory
- Mental models are reductive and fragmented, and therefore incomplete and flawed.
- Learning is the refinement of mental models. Mental models provide causal explanations.
- Experts have more detailed and more sophisticated mental models than novices. Experts have more accurate causal mental models.
- Flawed mental models are barriers to learning (knowledge shields).
- Learning is by sensemaking (discovery, reflection) as well as by teaching.
- Refinement of mental models entails at least some un-learning (accommodation; restructuring, changes to core concepts).
- Refinement of mental models can take the form of increased sophistication of a flawed model, making it easier for the learner to explain away inconsistencies or anomalous data.
- Learning is discontinuous. (Learning advances when flawed mental models are replaced, and is stable when a model is refined and gets harder to disconfirm.)
- People have a variety of fragmented mental models. “Central” mental models are causal stories.
The emphasis of CFT is on overcoming simplifying mental models. Hence it advises against applying instructional methods that involve progressive complexity.
CTT, on the other hand, focuses on strategies, and the learning and unlearning of strategies.
CFT and CTT each try to achieve increases in proficiency, but in different ways: for CFT, it is flexibility; for CTT, it is a better mental model, albeit one that will have to be thrown out later on. CFT does not say what the sweet spot is for flexibility. A learner who over-complexifies may not get any traction and might become paralysed. It thus might be considered a “lopsided” theory, or at least an incomplete one. CFT emphasises the achievement of flexibility whereas CTT emphasises the need for unlearning and relearning. Both theories regard advanced learning as a form of sensemaking (discovery, reflection) and both regard learning as discontinuous; advancing when flawed mental models are replaced, stable when a model is refined and gets harder to disconfirm.
The core syllogism of the CFT-CTT merger
- Learning is the active construction of knowledge; the elaboration and replacement of mental models, causal stories, or conceptual understandings.
- All mental models are limited. People have a variety of fragmentary and often reductive mental models.
- Training must support the learner in overcoming reductive explanations.
- Knowledge shields lead to wrong diagnoses and enable the discounting of evidence.
- Reductive explanation reinforces and preserves itself through misconception networks and through knowledge shields. Flexible learning involves the interplay of concepts and contextual particulars as they play out within and are influenced by cases of application within a domain.
Therefore learning must also involve unlearning and relearning.
Therefore advanced learning is promoted by emphasizing the interconnectedness of multiple cases and concepts along multiple conceptual dimensions, and the use of multiple, highly organized representations.
Two frustrating realities
- First, everything in the expertise literature is difficult to generalise. Some methods work well in some domains but not in others. The ultimate test is in application: if you attempt to put something into practice and it doesn’t work out, that doesn’t necessarily mean the technique is bad; it just means it doesn’t work in your particular context. The sooner you learn to embrace this, the better.
- Second, the authors take care to point out that a great many things about training can probably never be known. For instance, it is nearly impossible to isolate the factors that result in successful training in real-world contexts — and yet real-world contexts are ultimately where we want training to occur. There are simply too many confounding variables.
Ced’s final point
The overall picture that I got from the book goes something like this: “We know very little about expertise. There are large gaps in our empirical base. (Please, DoD, fund us so we can plug them!) What we do know is messy, because there are a ton of confounding variables. And yet, given that we’ve mostly worked in applied domains, our training programs seem to deliver results for businesses and soldiers, even if we don’t perfectly understand how they do so. Perhaps this is simply the nature of things in expertise research. We have discovered several things that work — the biggest of which is Cognitive Task Analysis, which enables us to extract actual mental models of expertise. We also have a usable macrocognitive theory of learning. But beyond that — phooey. Perhaps we just have to keep trying things, and check that our learners get better faster, and we can only speculate at why our programs work; we can never know for sure.”
This appears to be the price of research in real-world environments. And I have to say: if the price of progress in expertise research is that we don’t really know what works for sure, then on balance this isn’t too bad. But I am a practitioner, not a scientist; I want things that work, and I don’t necessarily need to get at the truth.