Scenario Planning: How to Test Scenarios
As an organization converges on a set of long-range scenarios, it typically wants to validate them. Since scenarios reflect logical speculations about the future, there is no right or wrong about them, but that does not mean that they cannot be validated for their ability to serve as strategic insight and foresight tools. Rather than right or wrong, think of testing scenarios as an exercise in logical consistency. If the scenarios cannot stand up to the scrutiny outlined below, then they will probably not serve as a useful tool of strategy or innovation.
Getting to the Matrix
Serious Insights outlines the process of getting to the matrix in this post. Getting to a matrix is part of the testing process. As various uncertainties overlay each other in potential matrices, scenario planners develop prototype stories for each quadrant of a matrix. If the emergent stories start to sound similar from quadrant to quadrant, then the matrix will likely be a throwaway. Capture the axis values, however. If those values re-emerge on other uncertainties, they may be proxies for other uncertainties. This way you can avoid spending time on any matrix that incorporates proxies (see Purposeful divergence for more on proxies).
For more on this topic see our post on Getting to the Matrix.
The chosen matrix requires an additional assessment to ensure that the emergent stories offer purposefully divergent versions of the future. Scenario stories should not overlap. A list of uncertainties associated with the scenario matrix should have a unique value in each quadrant.
It is often the case that two uncertainties can act as proxies for one another, despite belonging to different conceptual categories. For example, in our University System of Georgia work, uncertainties (parentheticals indicate extremes for each uncertainty) around the source of learning (closed vs. open source), the approach to learning (traditional scheduled vs. just-in-time), credentials (degrees vs. portfolios), paths to learning (linear vs. open-ended), and measurement (standardized testing vs. demonstration of skill) acted as proxies for each other in a scenario matrix. Using any one of them as an axis would result in the same general story emerging from the exercise. Using two of them together as the pair of uncertainties driving the primary axes would end with similar stories emerging from the quadrants.
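The proxy problem can be made explicit by recording, for each uncertainty, which underlying driver it stands in for. The following is a minimal sketch, not part of the original method: the uncertainty names and extremes come from the University System of Georgia example above, while the proxy-group labels (and the extra "funding model" uncertainty) are illustrative assumptions.

```python
# Each uncertainty maps to (extreme values, proxy group). Uncertainties
# sharing a proxy group would tell the same general story if paired as axes.
# The group labels and the "funding model" entry are hypothetical.
UNCERTAINTIES = {
    "source of learning":   (("closed source", "open source"), "openness"),
    "approach to learning": (("traditional scheduled", "just-in-time"), "openness"),
    "credentials":          (("degrees", "portfolios"), "openness"),
    "paths to learning":    (("linear", "open-ended"), "openness"),
    "measurement":          (("standardized testing", "demonstration of skill"), "openness"),
    "funding model":        (("public", "private"), "economics"),  # hypothetical contrast
}

def axes_are_proxies(axis_a, axis_b, uncertainties=UNCERTAINTIES):
    """Flag a candidate matrix whose two axes belong to the same proxy
    group and would therefore yield similar stories in every quadrant."""
    return uncertainties[axis_a][1] == uncertainties[axis_b][1]
```

A matrix builder could run this check over every candidate pairing before prototyping quadrant stories, discarding pairings where the check returns True.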
The deep causes exercise seeks to discover the underlying factors that drive the quadrants in the matrix. The process involves capturing, first, the primary reasons the logic of the scenarios will likely drive events toward the top or bottom of the matrix, and then toward the left or the right. (Some practitioners call these combinations of quadrants “North” and “South”/“East” and “West”.)
There will be many ideas in both clusters that appear opposite to those in the opposing quadrants, and that is OK.
And now the hard part.
Teams and analysts seek to synthesize a set of unique causes for each quadrant. It is not permissible to use the same causes in more than one quadrant.
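The uniqueness rule amounts to a simple check: no cause may appear under more than one quadrant. A small sketch of that check follows; the quadrant labels and cause names used in the usage note are hypothetical, chosen only to illustrate the shape of the data.

```python
from collections import Counter

def duplicated_causes(quadrant_causes):
    """Given a mapping of quadrant name -> collection of deep causes,
    return the causes that appear in more than one quadrant. A non-empty
    result means the uniqueness rule is violated and the offending causes
    need to be re-synthesized."""
    counts = Counter(
        cause
        for causes in quadrant_causes.values()
        for cause in set(causes)  # de-duplicate within a quadrant first
    )
    return {cause for cause, n in counts.items() if n > 1}
```

For example, if both the "NE" and "NW" quadrants list "open credential standards" as a deep cause, `duplicated_causes` returns that cause, signaling that the quadrants are not yet divergent enough.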
The deep causes exercise does not represent just a “test” of the scenarios because the exercise also provides for a deeper development of the scenario narratives using new information not initially modeled by the uncertainties. The deep causes exercise does act as a test because if the work to get to unique descriptors of the causes results in too much similarity, then the scenarios are not divergent enough and should be reconsidered.
Another test of internal logic, divergence, and plausibility is to explore the conditions under which one scenario might transform into another. The specific question to ask: under what circumstances would scenario A transform into scenario B? This means thinking of an event that would change the values for the majority of the underlying uncertainties, and definitely change the values of the scenario axes into those of the other scenario.
This test isn’t definitive in its affirmation of the scenario values, but it does help confirm that the logic underlying the scenarios supports transformative pathways for the uncertainties and that the uncertainties transform in a relative lock-step manner. Working through this process helps validate the cohesion of the narratives and their associated uncertainties.
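The transformation test can be sketched as a comparison of uncertainty values between two scenarios: the set of values that must flip for one scenario to become the other. The scenario values below are illustrative assumptions, not taken from an actual project.

```python
def transformation_gap(scenario_a, scenario_b):
    """Return the uncertainties whose values must flip for scenario A
    to transform into scenario B. A coherent transformation should flip
    the axis uncertainties and most of the related ones in lock-step."""
    return {u for u in scenario_a if scenario_a[u] != scenario_b.get(u)}

# Hypothetical scenarios expressed as uncertainty -> assumed value.
SCENARIO_A = {
    "credentials": "degrees",
    "measurement": "standardized testing",
    "paths to learning": "linear",
}
SCENARIO_B = {
    "credentials": "portfolios",
    "measurement": "demonstration of skill",
    "paths to learning": "linear",
}
```

Here the gap between the two scenarios is the pair of uncertainties "credentials" and "measurement"; walking through the event that would flip both at once is the actual test of narrative cohesion.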
Organizations build scenarios around a focal question, a particular strategic variable for which the scenarios will offer a range of plausible answers under uncertainty. Examples include: What will work look like in 2030? How will people learn in 2030? What will be the nature of money in 2025?
While the scenarios are built for a specific purpose, asking them other questions, and finding meaningful answers, also tests the value of the scenarios. We have conducted exercises in the future of work and the future of learning; take either of those scenario sets and add uncertainties from the other, and divergent values for those uncertainties will emerge.
This happens because the uncertainties driving the matrix act not only as proxies for other critical uncertainties identified during a scenario project but as proxies for ideas outside of the domain of exploration.
Scenario sets do not arrive with a “best before” or “expiration date.” Scenario logic, driven by the underlying forces and uncertainties, depends on the continued uncertainty of the values associated with those uncertainties. If a major uncertainty collapses into a certainty, then the entire scenario set becomes invalid.
However, if underlying uncertainties begin to exhibit significant deviations in the real world from their proposed values across the scenario set, then the organization should examine the scenarios for two purposes. The first purpose is to refine the logic in light of the emergent potential values.
The revalidation exercise may also result in the revelation that the pace of the scenarios unfolding has changed from the original assertions. If the pace picks up, and the scenarios remain valid, then making relevant business decisions within the scenario framework becomes even more imperative.
Revalidation may also result in the exposure of an emergent uncertainty that was not previously considered—an uncertainty that does not subsume or act as a proxy for the primary axis drivers but manifests as a new character to be integrated into the strategic narratives.
Assessing the assessments
Any scenario set with divergent logics will provide a good tool for producing strategic insight, for creating an early warning system framework for tracking the evolving values associated with the critical uncertainties, and for challenging assumptions that may fuel agility or drive innovation.
The real test of the scenarios is their ability to provide value to those who create them. The list of “tests” here offers procedural guidance that refines the scenarios, making them richer and deeper. Some scenario projects suffice with basic scenarios sketched out over a few hours to inform a particular decision or help make a point. Others become strategic planning tools, refined over weeks and months as those engaged with them frame their strategic dialog in terms of the futures that emerge from the scenarios. How much time an organization spends “testing” its scenarios depends on how much time it plans on spending with the scenarios.
For those who embrace scenarios with the intent to use them as a core strategic tool, spending time on all of the steps outlined here will provide value. For those who choose a lighter approach, make sure that the scenarios, at a minimum, result in purposefully divergent narratives.