HIGHLIGHTING COMMON FORECASTING ERRORS TO AVOID
The top 10 errors that forecasters make fall into three broad categories: model design issues, where the tool itself is not created in an optimal manner; model maintenance issues, where poor planning leads to forecasting issues down the road; and model abuse issues, where clients ask for more out of the forecast model than it is capable of delivering.
Focusing on these modeling issues, an analysis from Trinity Partners breaks down the common missteps in forecasting and highlights how to design and use the best models for the most accurate outcomes.
Trinity has seen the good, the bad, and the ugly when it comes to forecasting models. These frequent errors almost always result in confusion, inaccuracies, and extra work, and avoiding these hazards can substantially improve the forecasting process. With the tips below, a successful forecasting process is achievable.
Model Design Issues
1. Too many segments
Why does it happen? It sounds great in theory to have a model that allows you to see everything you think about as a marketer. When we talk about our product’s strategy, it is alluring to talk about our differential share in each segment of the market we have spent so long defining and shaping. And it isn’t difficult to build a model with all of that complexity – the challenge comes in the usage.
Why does it make my forecast worse? Nine different segments means nine different penetration rates, nine different competitor impacts, and nine different dosages. Having that granularity, in practice, becomes an exercise in repetition – taking assumptions from one segment and copying them over to the next.
What should I do instead? When considering model design, Trinity recommends simplicity. The more granular the forecast is, the harder the model is to use. The harder it is to use, the less valuable the model becomes. When considering whether including an additional segment is valuable, we recommend asking yourself a few questions:
- Do the segments behave differently?
- Are they identifiable?
- Are they trackable?
If the answer is not “Yes” to all of the above, collapse the segment.
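As a minimal sketch, the three-question test above can be expressed as a simple filter. The segment names and flags here are hypothetical, purely for illustration:

```python
# Hypothetical sketch: keep a segment only if the answer to all three
# questions is "Yes"; otherwise collapse it. Segment names are illustrative.

segments = {
    "first_line":  {"behaves_differently": True,  "identifiable": True,  "trackable": True},
    "second_line": {"behaves_differently": True,  "identifiable": True,  "trackable": False},
    "elderly_sub": {"behaves_differently": False, "identifiable": True,  "trackable": True},
}

def keep(flags):
    """A segment earns its complexity only if all three answers are Yes."""
    return all(flags.values())

kept = [name for name, flags in segments.items() if keep(flags)]
collapsed = [name for name, flags in segments.items() if not keep(flags)]
print("keep:", kept)          # → keep: ['first_line']
print("collapse:", collapsed) # → collapse: ['second_line', 'elderly_sub']
```

In this hypothetical example, only one of three segments survives the test – a common outcome in practice, and a reminder of how much complexity the checklist can remove.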
2. A messy, confusing model
Why does it happen? Especially in consulting, forecasters often “inherit” models that others have built. Models are often scotch-taped together, the end result of many authors and forecasters. Some models are designed with only the end output in mind, with no thought given to the “user interface.” Others have proprietary processes that the designer does not want customers to see. Some clients have argued that vendors purposefully obfuscate their models so they become necessary for ongoing forecast management. Any or all of the above can be the culprit.
Why does it make my forecast worse? The forecasting process is difficult enough as is – forecasters are integrating a variety of disparate data sources, gaining alignment with stakeholders, and attempting to tell a story about value – without having to fight the tool they are using. Worse, a model that is hard to use can result in errors during the process. A model without a clear flow can be difficult to follow, leaving the user without the ability to explain why input X led to output Y.
What should I do instead? Models should follow a simple, regular structure. Assumptions should be centralized in a single place, so all the inputs can be toggled at once. The flow of the model should be clearly laid out, so each key assumption can be seen flowing intuitively to the bottom line. A clean, user-friendly model reduces time spent fighting Excel, or similar programs, and allows the user to instead focus on the strategic implications of the numbers they are generating.
3. Wrong model for the wrong time
Why does it happen? No one wants to pay multiple times for the same thing. We often receive requests to build forecasts for early-stage assets that include modules for tracking, manufacturing and stock-outs, or requests for epidemiology-based models for products that are well into their lifecycle. While both requests could be appropriate in certain situations, in the vast majority of cases, clients are hoping to have one tool that will support them throughout the entire lifespan of a product, when often it simply is not appropriate.
Why does it make my forecast worse? Functionality that is not necessary results in bloat and complexity that clouds the reality of what you know about your product at a given point in time. For products late in their lifecycle, a full-blown epidemiology model will typically be less accurate and far more work to maintain than a trend-based model.
What should I do instead? Remember – models should serve your needs and display your knowledge at a given point in time. We think of models like your home – your home right out of college will be different from your starter house which will be different from where you live in retirement, even though they are all homes. Think of forecasts the same way. Speak with your vendor to agree on what you actually need at a given point in time.
Model Maintenance Issues
4. Poor data governance
Why does it happen? We have so many sources of data in the life sciences industry: distributor data, hub data, audit data, claims data, EMR data, ATUs, demand studies, chart audits. All of it can be valid and useful. Different teams in your company may have purchased different datasets for different purposes. Often, you are asked to take all of it into account for the forecast. But they do not totally agree, they have not all been pressure-tested, and you do not know how to integrate all of them into the forecast.
Why does it make my forecast worse? If you are constantly switching or changing data to inform your assumptions, your forecast output will keep changing, even if the market remains the same. This leaves you answering questions about why the forecast is different from prior outputs.
What should I do instead? The forecasting process should start with a clear data governance exercise. The best forecasters know all the data available to them, know which source they will use to populate each assumption, and have a system for evaluating and changing data sources that is defined well in advance of creating outputs.
5. Garbage in, garbage out
Why does it happen? At some point, we have knowledge about almost everything that can inform the forecast. Access by age and gender? Sure. Sales force coverage and its impact on dosing? Why not? And we ask the forecasters to take all of this into account. If we have the knowledge, why not use it to make the forecast more accurate?
Why does it make my forecast worse? A forecast is only as good as the assumptions that drive it. If you do not have an ongoing process to continually update, track, and validate your assumptions, they only lead to confusion. It is a lot easier to explain why you missed your forecast if there are only five assumptions to discuss instead of 18.
What should I do instead? Only incorporate assumptions that you have a clearly defined, repeatable, and trackable data process to maintain. Each assumption adds complexity to an already difficult process, and additional assumptions, if not well vetted, make your forecast worse, not better.
6. Top-Down and bottom-up outputs are not aligned
Why does it happen? When available, a “hybrid” approach which combines top-down epidemiology-based data with bottom-up unit-based data can help create a comprehensive picture of a given market. Unfortunately, it can be difficult to properly segment top-down and bottom-up data and ensure that all sources align in a coherent manner.
Why does it make my forecast worse? If you create a hybrid model without taking the time to ensure that your top-down and bottom-up data are in alignment, you lose the advantages of a hybrid model. Misalignment from the top-down can lead to a forecast with unrealistic unit outputs, while misalignment from the bottom-up can lead to future product performance that is not grounded. Misalignment in either direction robs a forecast of much of its predictive power.
What should I do instead? A clear plan needs to be put in place for combining top-down and bottom-up data. Both top-down and bottom-up data should be segmented appropriately, then compared historically to ensure that the outputs make sense from both directions. If the combination of top-down and bottom-up data does not make sense, the structure, flow and layout of the forecast will need to be examined and refined to ensure accuracy.
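The historical comparison described above can be sketched as a simple reconciliation check. All numbers, tolerances, and data sources here are hypothetical, intended only to show the shape of the exercise:

```python
# Hypothetical sketch: reconcile a top-down (epidemiology) view of units
# with bottom-up (shipment) data over history, flagging years that diverge.

# Top-down: patients on therapy x units per patient per year (illustrative)
patients_on_therapy = {2021: 10_000, 2022: 11_500, 2023: 13_000}
units_per_patient_year = 12  # e.g., one unit per month of therapy

# Bottom-up: actual units shipped, e.g., from distributor data (illustrative)
units_shipped = {2021: 118_000, 2022: 139_500, 2023: 175_000}

TOLERANCE = 0.05  # flag any year where the two views diverge by more than 5%

flags = {}
for year in patients_on_therapy:
    top_down = patients_on_therapy[year] * units_per_patient_year
    bottom_up = units_shipped[year]
    gap = abs(top_down - bottom_up) / bottom_up
    flags[year] = "OK" if gap <= TOLERANCE else "INVESTIGATE"
    print(f"{year}: top-down={top_down:,} bottom-up={bottom_up:,} gap={gap:.1%} {flags[year]}")
```

In this illustration, 2021 and 2022 reconcile within tolerance, but 2023 does not – the cue to revisit segmentation, dosing, or data sources before trusting the hybrid forecast going forward.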
Model Abuse Issues
7. Too many scenarios
Why does it happen? We start with a base case, an upside and a downside. But we might easily come in with three different prices; at that point, shouldn’t we see what each case looks like at each price? And our competitor may be launching a drug; shouldn’t we see what each case looks like with that competitor? Before you know it, you have an upside, downside, and a base case for your original upside, downside and base case. And it only spirals from there.
Why does it make my forecast worse? Forecasts give you a good understanding of your business. A well-thought out base, upside, and downside describe the bounds of your business and where the risks are. As you start to add more scenarios, though, each individual scenario becomes less valuable, leaving you with a confusing mess of revenue lines and an incoherent story.
What should I do instead? Practice restraint. Each scenario should reflect a real, known risk to your business that you need to describe to all stakeholders. Most stakeholders can realistically keep only about three scenarios in mind at once. So, choose your biggest unknowns wisely, with the full alignment of your team.
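The spiral described above is multiplicative, not additive, which is why it gets out of hand so quickly. A one-line sketch (the scenario dimensions are hypothetical) makes the arithmetic explicit:

```python
# Illustrative sketch: layering scenario dimensions multiplies the number of
# revenue lines to explain. Dimensions below are hypothetical.
from itertools import product

cases = ["downside", "base", "upside"]
prices = ["low", "mid", "high"]                # three candidate prices
competitor = ["launches", "does not launch"]   # competitor uncertainty

scenarios = list(product(cases, prices, competitor))
print(len(scenarios))  # 3 x 3 x 2 = 18 scenarios from just three questions
```

Three innocuous-looking dimensions already produce 18 revenue lines – six times more than most audiences can follow.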
8. Sensitivity analysis poorly conducted
Why does it happen? Sensitivity analysis is often an afterthought. After investing a great deal of time and energy into designing a forecast and populating it with well-researched data and assumptions, it can be tempting to put together a rushed sensitivity analysis with no more than a token amount of thought put into the sensitivity assumptions. In many cases, forecasters are encouraged to focus on determining one specific input for each assumption, without giving any thought to a realistic range of possible inputs. The unfortunate reality is that sensitivity analysis is often no more sophisticated than taking a given set of base assumptions and varying them by an arbitrary number to create an “upside” and a “downside.”
Why does it make my forecast worse? A sensitivity analysis is only as good as the thinking behind the inputs supporting it. The principle “garbage in, garbage out” applies. If you conduct your sensitivity analysis by arbitrarily varying assumptions by a certain amount, your sensitivity analysis will tell you that your forecast will vary by a similarly arbitrary amount. At best, an inaccurate sensitivity analysis is a waste of time. At worst, it can give an organization’s stakeholders a false sense of security about the range of possible outcomes of the forecast.
What should I do instead? Sensitivity analysis should be treated as an integral part of the forecasting process. Stakeholders in the forecast need to come to a consensus about why the key assumptions might deviate from the assumptions captured in the base forecast. A careful consideration of potential scenarios and their interactions with each other will allow your sensitivity analysis to provide valuable insight to your organization.
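One common, disciplined alternative to arbitrary plus-or-minus swings is a one-way (tornado-style) sensitivity analysis, where each assumption is moved across a researched range and the assumptions are ranked by how much they move the forecast. The model and ranges below are hypothetical, purely to show the mechanics:

```python
# Hypothetical sketch of a one-way (tornado) sensitivity analysis.
# In practice each low/high bound should come from a researched,
# stakeholder-aligned rationale, not an arbitrary percentage.

base = {"patients": 50_000, "share": 0.20, "price": 4_000, "compliance": 0.80}

def revenue(a):
    # Toy revenue model: patients x share x annual price x compliance
    return a["patients"] * a["share"] * a["price"] * a["compliance"]

# Researched (low, high) ranges for each assumption (illustrative)
ranges = {
    "patients":   (45_000, 55_000),
    "share":      (0.12, 0.30),      # the biggest genuine unknown here
    "price":      (3_600, 4_200),
    "compliance": (0.75, 0.85),
}

swings = {}
for key, (lo, hi) in ranges.items():
    low = revenue({**base, key: lo})
    high = revenue({**base, key: hi})
    swings[key] = high - low

# Rank assumptions by how much they move the forecast
for key, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{key:<10} swing = ${swing:,.0f}")
```

In this toy example, share dwarfs every other driver, which tells the team exactly where debate and further research will pay off – the opposite of varying everything by the same arbitrary percentage.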
9. False precision
Why does it happen? It is the year before launch, and you have many stakeholders asking for information. Your board needs a forecast. Your sales team needs targets. Your chief commercial officer needs to know how to budget. Your investors need guidance. So, you look at your forecast, and the spreadsheet has a number at the end. $46,246,550.25. And that is the number you send to your team.
Why does it make my forecast worse? Yes, the math of the forecast will give you a real number, down to the penny at the end. Does that reflect your certainty about each assumption? Each assumption likely had an error bar associated with it – does the number you provided incorporate this error bar? If not, you are confusing your stakeholders and convincing them you know more than you do about the product.
What should I do instead? Everyone should be aware of where the gaps are in your forecast. The implication, then, is that there is fundamental uncertainty about your forecast – and when your stakeholders ask you for more precision, you should challenge them about what is known about the market. Do they feel comfortable promising that level of precision? Then they should not expect the same out of the forecast.
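One simple way to put this into practice is to report the forecast at a precision that matches its uncertainty, as a rounded midpoint with a range, rather than the penny-level figure the spreadsheet emits. The uncertainty band below is hypothetical:

```python
# Illustrative sketch: communicate a forecast at a precision matching its
# uncertainty. The +/-15% band is hypothetical; in practice it should come
# from the error bars on your assumptions or your sensitivity analysis.

raw_forecast = 46_246_550.25   # what the model spits out
uncertainty = 0.15             # assumed relative uncertainty of +/-15%

def report(value, rel_uncertainty, sig_millions=1):
    low = round(value * (1 - rel_uncertainty) / 1e6, sig_millions)
    high = round(value * (1 + rel_uncertainty) / 1e6, sig_millions)
    mid = round(value / 1e6, sig_millions)
    return f"~${mid}M (range ${low}M-${high}M)"

print(report(raw_forecast, uncertainty))  # → ~$46.2M (range $39.3M-$53.2M)
```

The rounded range communicates the same midpoint while honestly signaling what the team does and does not know.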
10. Assuming the model is a “crystal ball”
Why does it happen? By definition, a forecast is supposed to provide insight into the future. At the same time, we know that forecasts are really a function of what we know now (the mix of assumptions) and the model that we put them in. They reflect our best understanding of the market and our product. However, stakeholders routinely expect some additional “magic” out of the forecast, as though it could tell the future (this has not been helped by a Monte Carlo forecasting add-in actually named “Crystal Ball”).
Why does it make my forecast worse? Once a forecast moves from being about assumptions and uncertainty to a discussion about why an output looks like X versus Y, it has already begun to lose its power. We move away from thoughtful debate over what we know and what risks there are to the business towards a less fruitful argument over whose prior assumptions about the value of the product are correct.
What should I do instead? The question forecasters are always asked: “Is this what will happen?” Instead, the expectation should be shifted to “What do we know today, and does this output accurately reflect that knowledge?” While not as immediately gratifying, such an answer helps all stakeholders better understand their business.
Alex Chiang is a principal at Trinity and head of the New York City office. He advises clients from a variety of different groups within life science organizations, including marketing, market planning, market access, sales operations, data analytics, medical affairs, and business development. Adrian Watson is senior manager of modeling and analytics at Trinity. He has helped to establish best practices in forecast modeling in over 25 therapeutic areas, with specific experience leading multiple engagements in oncology.